DHS Still Policing Disinformation Despite Dissolving Disinformation Governance Board


The Department of Homeland Security is still exerting pressure on tech platforms to censor false information.

Earlier this year, the Department of Homeland Security (DHS) created an internal board dedicated to combating misinformation and disinformation. Despite scrapping it after facing criticism, new reporting indicates that the agency is still pursuing the constitutionally dubious project.

The DHS announced the Disinformation Governance Board in April specifically to address Russian disinformation and false information spread by border traffickers. Nina Jankowicz, a Wilson Center fellow and “disinformation expert,” was put in charge.

The board immediately drew controversy, both for its “Orwellian” overtones and for Jankowicz’s then-recent statements decrying “free speech absolutists” and advocating that more countries criminalize “awful but lawful content.”

Within days, DHS Secretary Alejandro Mayorkas clarified that the board had no regulatory power and was simply meant to determine “best practices” for dealing with misinformation and disinformation. Less than three weeks after the initial announcement, the DHS “paused” the board’s rollout, and Jankowicz resigned. After DHS advisers concluded there had been no need for such a board in the first place, Mayorkas dissolved it in August.

After resigning, Jankowicz told NPR that “everything you may have heard about the Disinformation Governance Board is wrong or is just a flat out lie” and that “we weren’t going to be doing anything related to policing speech.”

But reporting from The Intercept this week indicates that the government is indeed actively involved in policing disinformation, often pressuring private companies to do so on its behalf.

The outlet cites a number of public and leaked documents showing internal DHS deliberations over how to influence websites and social media platforms. Beginning in the lead-up to the 2020 election, representatives from the DHS and the FBI began holding monthly meetings with tech companies, including social media platforms, to discuss what the companies should do about election misinformation.

The government reported nearly 4,800 social media posts to the respective platforms during the election. More than a third were then either removed or labeled as potential misinformation. Facebook even developed an online portal, accessible only with a government or law enforcement email, for reporting content directly.

Hostile foreign actors spreading disinformation is certainly a real thing (if not always particularly effective). But the vast majority of online misinformation is likely to be much more prosaic. In one amusing example cited by The Intercept, the DHS forwarded accounts to Twitter that could be mistaken for official government entities; one, with fewer than 60 followers, featured the Twitter bio, “dm us your weed store locations (hoes be mad, but this is a parody account).”

Policing misinformation also poses numerous risks to free speech. This was one of the justifications initially given for shutting down the Disinformation Governance Board. With narrow exceptions, false statements are protected by the First Amendment, and any broad efforts to restrict misinformation would have a chilling effect on other speech.

For example, the New York Post reported in 2020 that a laptop belonging to then-candidate Joe Biden’s son Hunter turned up at a Delaware repair shop, full of salacious and potentially damaging information. The story was widely panned, including by Jankowicz, as likely Russian disinformation. Twitter banned users from sharing the article, and Facebook limited its spread as well. But a year and a half later, The New York Times largely confirmed the veracity of the original report.

The laptop story provides a useful template for how DHS influence over social media moderation could look. In fact, according to The Intercept, an FBI agent and an FBI section chief were directly involved in talks that “led to Facebook’s suppression” of the Post story. And a draft copy of the DHS’s Quadrennial Homeland Security Review includes, among the topics it hopes to police for misinformation, “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”

Reasonable people can, and do, disagree on any or all of those topics. What counts as accepted fact can also change over time; the origins of the COVID-19 pandemic are a case in point, with the “lab leak” theory shifting from fringe conspiracy to plausible alternative.

The government has a terrible track record for deciding which speech is appropriate and which is not, and yet government agents do not seem dissuaded. As one Microsoft executive texted a DHS director*, “Platforms have got to get comfortable with gov’t. It’s really interesting how hesitant they remain.” Interesting, indeed.

*CORRECTION: An earlier version of The Intercept’s article stated that “a DHS official texted a representative from Microsoft.” It was later stealth-edited to say that “Microsoft executive Matt Masterson, a former DHS official, texted Jen Easterly, a DHS director.”
