Watch Live: Taibbi Discusses ‘Hamilton 68’ Link To Government’s ‘Cyber Threat League’

Yesterday, journalist Alex Gutentag covered the “CTIL Files,” a trove of documents about the Cyber Threat Intelligence League (CTIL) – a Department of Homeland Security partner which sought to implement something called “AMITT,” which stands for “Adversarial Misinformation and Influence Tactics and Techniques.”

Far from simply protecting the public from falsehoods, both government and non-profit actors within the Censorship Industrial Complex have followed CTIL’s exact playbook and have waged a full-fledged influence operation against Americans. -Public

Today, Matt Taibbi drops another “CTIL Files” report. In conjunction, he’s hosting a livestream to discuss “an odd little detail” which involves “connections between the group and Hamilton 68.”

Watch Live (and scroll down for more info):

More via Gutentag’s report in Public regarding CTIL:

But the CTIL Files, a trove of documents that a whistleblower provided to Public and Racket, reveal that US and UK military contractors developed and used advanced tactics — including demanding that social media platforms change their Terms of Service — to shape public opinion about Covid-19, and that getting content removed was just one strategy used by the Censorship Industrial Complex.

The CTI League, which partnered with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), aimed to implement something called “AMITT,” which stood for “Adversarial Misinformation and Influence Tactics and Techniques.”

AMITT was a disinformation framework that included many offensive actions, including working to influence government policy, discrediting alternative media, using bots and sock puppets, pre-bunking, and pushing counter-messaging.

The specific “counters” to “disinformation” in AMITT and its successor framework, DISARM, include many we have observed in our study of the Censorship Industrial Complex: 

  • “Create policy that makes social media police disinformation”

  • “Strong dialogue between the federal government and private sector to encourage better reporting”

  • “Marginalize and discredit extremists”

  • “Name and Shame influencers”

  • “Simulate misinformation and disinformation campaigns, and responses to them, before campaigns happen”

  • Use banking to cut off access

  • “Inoculate populations through media literacy training”

On issues ranging from the Russiagate hoax to the Hunter Biden laptop to Covid-19, organizations within the Censorship Industrial Complex have used many of DISARM’s offensive methods, like tabletop exercises, psychological inoculation, propaganda messaging, and punishment of dissent. Even its extreme proposal of debanking was used against Canada’s Freedom Convoy.

Far from simply protecting the public from falsehoods, both government and non-profit actors within the Censorship Industrial Complex have followed CTIL’s exact playbook and have waged a full-fledged influence operation against Americans.

This influence operation has deep ties to security and intelligence agencies, as many examples of collaboration show. In one such instance, supposedly independent “disinformation researchers” like Renée DiResta coordinated a 2020 election tabletop exercise with military officials.

Defense and intelligence funding supports much of the Censorship Industrial Complex. For instance, Graphika, which was involved in both the Election Integrity Partnership (EIP) and the Virality Project (VP), receives grants from the Department of Defense, DARPA, and the Navy.

Pentagon-affiliated entities are heavily involved in “anti-disinformation” work. Mitre, a major defense contractor, received funding to tackle “disinformation” about elections and Covid. The US government paid Mitre, an organization staffed by former intelligence and military personnel, to monitor and report what Americans said about the virus online, and to develop vaccine confidence messaging. This government-backed military research group, Public discovered, was present in the EIP and VP misinformation reporting system, and in election disinformation report emails to CISA.

The AMITT framework also includes many counters we have yet to find concrete evidence for, but which we suspect may have been attempted:

  • “Infiltrate the in-group to discredit leaders”

  • “Honeypot with coordinated inauthentics”

  • “Co-opt a hashtag and drown it out (hijack it back)”

  • “Dilute the core narrative – create multiple permutations, target/amplify”

  • “Newsroom/Journalist training to counter influence moves”

  • “Educate high profile influencers on best practices”               

  • “Create fake website to issue counter narrative”

Subscribers to Public can read the rest here…

*  *  *

Meanwhile, Taibbi wrote this earlier today: “Information Warfare” Comes Home

In March of 2022, shortly after Russia invaded Ukraine, the New York Times published a curious story titled “Fact and Mythmaking Blend in Ukraine’s Information War.” It seemed much-hyped episodes celebrating Ukrainian mettle on the battlefield, like the exploits of the “Ghost of Kiev” ace pilot, “may be a myth,” as the Times put it euphemistically. The paper noted with seeming approval that platforms like Twitter chose not to remove that and other tales that turned out to be not-exactly-true, like the famed “Go Fuck Yourself” send-off of Ukrainian soldiers who reportedly chose to die rather than surrender to Russians on Snake Island.

Who cared if that story sounded just a tad too much like an R-rated version of General Anthony McAuliffe’s “Nuts” reply to Nazis demanding American surrender at Bastogne? What if that was the point, the paper wondered?

“Why can’t we just let people believe some things?” the Times quoted one “Twitter user” as saying. “If the Russians believe it, it brings fear. If the Ukrainians believe it, it gives them hope.” The sentiment was expressed in plainer terms later in the article by former Facebook executive Alex Stamos, head of the Stanford Internet Observatory, which piloted the controversial Election Integrity Partnership social-media-monitoring project:

In exercising discretion over how unverified or false content is moderated, social media companies have decided to “pick a side,” said Alex Stamos, the director of the Stanford Internet Observatory and a former head of security at Facebook.

The theme of the U.S. and its allies not only engaging in informational fakery but boasting about deceptions in public has been a constant since Russia’s invasion. NBC for instance did a story — before you check, yes, it was written by Ken Dilanian, lol — celebrating the Biden administration’s decision to “break with the past” and release “classified” intelligence even if it “wasn’t rock solid.” An example was an announcement that the Russians were considering the use of chemical weapons.

That American officials engage in public deception is no surprise to anyone who remembers the runup to the Iraq War. Still, the eagerness of officials to admit this on TV, or in papers like the Times, and even embrace goofball terms like “false flag,” is a new development.

It’s becoming clear that deploying fake news themes as “information warfare” is a tactic American government agencies are bringing home. Last week, in a story that first broke on Public, Michael Shellenberger, Alexandra Gutentag, and myself began publishing documents provided by a whistleblower about a group called the Cyber Threat Intelligence (CTI) League, CTIL for short. CTIL, a supposed volunteer organization named as a partner in April of 2020 by Cybersecurity and Infrastructure Security Agency chief Chris Krebs, ostensibly had a narrow focus on Covid-19 “misinformation.” But the whistleblower’s documents revealed something far more ambitious, and unnerving.

It was obvious right away that the #CTIFiles Michael and I testified about before Congress last week were newsworthy, quickly filling gaps in the public’s understanding of the mechanics of state-aided censorship programs. However, as was the case with the Twitter Files, more troubling themes have emerged as we’ve had more time to read through the material. In a piece published on Public yesterday, for instance, Alex detailed the myriad guidelines in the #CTIFiles for “offensive” information operations.

These include discrediting techniques, use of sock-puppet accounts for trolling and surveillance purposes, strategies to divide groups via infiltration, and a long list of tradecraft lunacies called “counter” actions described taxonomically in the AMITT framework pushed by CTI figures like British data scientist Sarah-Jayne Terp and Special Operations Command “technologist” Pablo Breuer.

The punch line of the upcoming #CTIFiles #4 thread is that these documents don’t merely offer instructions in the use of sockpuppets and small-scale trolling operations. They show a through-line to the much larger frauds that spread like wildfire in the legacy news landscape between 2016 and the present, chief among them the Hamilton 68 scam exposed in the Twitter Files.

Subscribers to Racket News can read the rest here…

Tyler Durden
Tue, 12/05/2023 – 18:00

via ZeroHedge News https://ift.tt/U2pDZNR
