Twitter is purging accounts at a rapid pace – suspending over 70 million accounts in two months as it ramps up its battle against “fake and suspicious” accounts, reports the Washington Post.
The social media giant has more than doubled its rate of suspensions since October, when the company suggested that Russia used fake accounts to manipulate the 2016 U.S. presidential election.
While the 70 million accounts suspended in May and June amount to roughly 20% of Twitter’s 336 million monthly active users, the company says the purge mostly hits inactive or bot accounts rather than the revenue-generating accounts of real people.
[T]he crackdown has not had “a ton of impact” on the numbers of active users – which stood at 336 million at the end of the first quarter – because many of the problematic accounts were not tweeting regularly.
Legitimate human users — the only ones capable of responding to the advertising that is the main source of revenue for the company — are central to Twitter’s stock price and broader perceptions of a company that has struggled to generate profits. –WaPo
Another aspect of the crackdown is the policing of free speech in a world where the 1st Amendment seems to have sprouted a “safe space” clause.
“One of the biggest shifts is in how we think about balancing free expression versus the potential for free expression to chill someone else’s speech,” said Twitter VP for Trust and Safety, Del Harvey. “Free expression doesn’t really mean much if people don’t feel safe.”
Many on the left have criticized Twitter for allowing bots and trolls to amplify disinformation. “Though some go dormant for years at a time, the most active of these accounts tweet hundreds of times a day with the help of automation software,” writes WaPo.
“When you have an account tweeting over a thousand times a day, there’s no question that it’s a bot,” said Samuel C. Woolley, research director of the Palo Alto-based Digital Intelligence Lab at the Institute for the Future. “Twitter has to be doing more to prevent the amplification and suppression of political ideas.”
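Woolley’s rule of thumb is simple enough to sketch: count each account’s tweets per day and flag anything above the threshold. The snippet below is a minimal illustration of that heuristic in Python – the handles, timestamps, and data layout are invented for the example, and this is not Twitter’s actual detection system:

```python
from datetime import datetime, timedelta

# Hypothetical per-account activity: handle -> list of tweet timestamps.
# Only the 1,000-tweets-per-day threshold comes from the quote above;
# everything else here is made up for illustration.
accounts = {
    "@news_amplifier_77": [datetime(2018, 6, 1) + timedelta(minutes=i) for i in range(1440)],
    "@ordinary_user":     [datetime(2018, 6, 1, 9) + timedelta(hours=i) for i in range(5)],
}

DAILY_THRESHOLD = 1000  # tweets per day treated as a clear bot signal

def flag_likely_bots(activity, threshold=DAILY_THRESHOLD):
    """Return handles whose tweet count on any single day exceeds the threshold."""
    flagged = []
    for handle, timestamps in activity.items():
        per_day = {}
        for ts in timestamps:
            per_day[ts.date()] = per_day.get(ts.date(), 0) + 1
        if any(count > threshold for count in per_day.values()):
            flagged.append(handle)
    return flagged

print(flag_likely_bots(accounts))  # ['@news_amplifier_77'] – 1,440 tweets in one day
```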
That said, Facebook VP of advertising, Rob Goldman, said in February after the indictment of 13 Russian nationals running a “bot farm” that the majority of advertising purchased by Russians on Facebook occurred after the election – and was in fact designed to sow discord and divide Americans.
The majority of the Russian ad spend happened AFTER the election. We shared that fact, but very few outlets have covered it because it doesn’t align with the main media narrative of Tump and the election. https://t.co/2dL8Kh0hof
— Rob Goldman (@robjective) February 17, 2018
The main goal of the Russian propaganda and misinformation effort is to divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well. We are quite divided as a nation.
— Rob Goldman (@robjective) February 17, 2018
In January, Twitter emailed hundreds of thousands of accounts to warn them that they may have engaged with Russian accounts; two weeks later, that number had more than doubled to almost 1.4 million – roughly 0.4% of its monthly active users. In February, Twitter deleted 200,000 tweets it says came from “Russian troll” accounts.
“I wish Twitter had been more proactive, sooner,” said Sen. Mark R. Warner (D-Va.), the top-ranking Democrat on the Senate Intelligence Committee. “I’m glad that – after months of focus on this issue – Twitter appears to be cracking down on the use of bots and other fake accounts, though there is still much work to do.”
Twitter’s decision to forcefully target suspicious accounts came on the heels of an intense internal debate over whether to implement new detection tools.
One previously undisclosed effort called “Operation Megaphone” involved quietly buying fake accounts and seeking to detect connections among them, said two people familiar with internal deliberations. They spoke on the condition of anonymity to share details of private conversations.
The name of the operation referred to the virtual megaphones — such as fake accounts and automation — that abusers of Twitter’s platforms use to drown out other voices. The program, also known as a white hat operation, was part of a broader plan to get the company to treat disinformation campaigns by governments differently than it did more traditional problems such as spam, which is aimed at tricking individual users as opposed to shaping the political climate in an entire country, according to these people. –WaPo
While some Twitter executives were reluctant to purge suspected fake accounts – even questioning the legality of doing so – others pointed out that the platform is very easily manipulated. One engineer illustrated the problem by buying thousands of fake followers for a Twitter manager, according to WaPo, citing two people familiar with the account.
A person with access to one of Twitter’s “Firehose” products, which organizations buy to track tweets and social media metrics, provided the data to the Post. The Firehose reports what accounts have been suspended and unsuspended, along with data on individual tweets. –WaPo
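For a rough sense of the kind of analysis the Post describes – tallying suspension and unsuspension events from a feed – here is a minimal sketch over an invented newline-delimited JSON format; the actual Firehose schema and access method are not detailed in the article:

```python
import json
from collections import Counter

# Invented sample feed: each line is one event record. The field names and
# event types are assumptions for illustration, not Twitter's real schema.
sample_feed = """
{"event": "user_suspend", "user_id": "1001", "timestamp": "2018-05-03T12:00:00Z"}
{"event": "user_unsuspend", "user_id": "1001", "timestamp": "2018-05-04T08:30:00Z"}
{"event": "user_suspend", "user_id": "1002", "timestamp": "2018-06-11T22:15:00Z"}
"""

def tally_events(feed_text):
    """Count each event type in a newline-delimited JSON feed."""
    counts = Counter()
    for line in feed_text.strip().splitlines():
        record = json.loads(line)
        counts[record["event"]] += 1
    return counts

print(tally_events(sample_feed))  # Counter({'user_suspend': 2, 'user_unsuspend': 1})
```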
In March, Twitter CEO Jack Dorsey announced a company-wide program to promote “healthy conversations” on the platform. Two months later, the company made major changes to the algorithms used to police bad behavior.
Next week, another announcement is expected along these lines.
In order to identify suspect accounts, Twitter has built on the technical expertise of AI startup Magic Pony, which the company acquired in 2016. The acquisition “laid the groundwork that allowed us to get more aggressive,” Harvey said. “Before that, we had this blunt hammer of your account is suspended, or it wasn’t.”
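Harvey’s “blunt hammer” remark points at a shift from a single suspend-or-don’t decision toward graduated responses. The toy sketch below illustrates that general idea with invented thresholds and action names – none of it reflects Twitter’s actual enforcement pipeline:

```python
# Illustrative only: map a model's confidence that an account is abusive to a
# graduated response instead of a binary suspend/don't-suspend decision.
# The tiers, thresholds, and action names are assumptions for this example.
def choose_action(abuse_score: float) -> str:
    if abuse_score >= 0.95:
        return "suspend"            # high-confidence abuse: remove the account
    if abuse_score >= 0.80:
        return "challenge"          # e.g. require phone or CAPTCHA verification
    if abuse_score >= 0.60:
        return "limit_visibility"   # downrank the account's tweets pending review
    return "no_action"

for score in (0.99, 0.85, 0.65, 0.10):
    print(score, "->", choose_action(score))
```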
Harvey says that Twitter will take further measures down the road, telling WaPo “We have to keep observing what the newest vectors are, and changing our ways to counter those,” adding “This doesn’t mean we’re going to sit on our laurels.”