One week ago, we reported that Microsoft’s first foray into Twitter chat “artificial intelligence” did not quite work as expected: once unleashed into the wild, Microsoft’s chat robot “Tay” imploded spectacularly, and within just a few hours of interacting with the broader Twitter population began unleashing tweets covering everything from racist outbursts, N-words, and conspiracy theories to genocide, incest, Obama slurs, and even outright Nazism.
Humiliated by the experience, Microsoft explained last Wednesday what had happened:
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
Then this morning, Tay was once again (accidentally) activated, and the result was the same.
As the Guardian reports, Microsoft’s repeat attempt to “converse with millennials using an artificial intelligence bot plugged into Twitter” made a short-lived return on Wednesday, “before bowing out again in some sort of meltdown.”
The learning experiment, which, as noted last week, got a crash course in racism, Holocaust denial and sexism courtesy of Twitter users, was switched back on overnight and initially appeared to be operating in a more sensible fashion.
Microsoft had previously gone through the bot’s tweets, removed the most offensive ones, and vowed to bring the experiment back online only if the company’s engineers could “better anticipate malicious intent that conflicts with our principles and values.”
That said, we can only hope that tweets such as the following do not reflect Microsoft’s principles and values. One tweet, sent to an account called Y0urDrugDealer among others, read: “kush! [I’m smoking kush infront the police]”.
Microsoft’s sexist racist Twitter bot @TayandYou is BACK in fine form http://pic.twitter.com/nbc69x3LEd
— Josh Butler (@JoshButler) March 30, 2016
In a follow-up tweet, Tay asked another Twitter user: “puff puff pass?”.
At that point, instead of devolving into a second sociopathic round, the A.I. simply broke and started tweeting out of control, spamming its more than 210,000 followers with the same message over and over: “You are too fast, please take a rest …”
I guess they turned @TayandYou back on… it’s having some kind of meltdown. http://pic.twitter.com/9jerKrdjft
— Michael Oman-Reagan (@OmanReagan) March 30, 2016
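For the technically curious, one plausible way such a meltdown can arise is a feedback loop in which a bot’s own output re-enters its input queue, so a canned rate-limit reply gets answered with another canned rate-limit reply, forever. Below is a minimal, purely speculative Python sketch of that failure mode; nothing here reflects Microsoft’s actual code, and every name in it (run_bot, THROTTLE_MSG, the inbox queue) is a hypothetical stand-in.

from collections import deque

# Hypothetical canned reply a throttled bot might send (echoing the
# message Tay actually spammed, per the reports above).
THROTTLE_MSG = "You are too fast, please take a rest ..."

def run_bot(max_steps=10):
    """Simulate a reply bot whose own tweets leak back into its inbox."""
    inbox = deque(["puff puff pass?"])  # one genuine mention to start
    sent = []
    for _ in range(max_steps):
        if not inbox:
            break
        inbox.popleft()            # consume the next "mention"
        reply = THROTTLE_MSG       # throttled: answer with the canned line
        sent.append(reply)
        # The bug: the outgoing tweet is treated as a fresh mention
        # (e.g. because the bot @-mentions itself), so the queue never drains.
        inbox.append(reply)
    return sent

if __name__ == "__main__":
    for tweet in run_bot():
        print(tweet)  # prints the same throttle message, over and over

Under this (assumed) bug, the bot never runs out of input: each reply it sends becomes the trigger for the next one, which would match the observed behavior of one identical tweet repeated without end.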
At this point, a doubly humiliated Microsoft made Tay’s Twitter profile private, hiding its tweets from public view and effectively taking the bot offline again.
The company then told Reuters that Tay’s Twitter account was accidentally turned back on while it was fixing the problems that came to light last week.
“Tay remains offline while we make adjustments,” a Microsoft representative said in an email. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”
In other words, instead of owning up to the compounding glitches in “Tay’s” increasingly artificial intelligence, Microsoft is now claiming that the reactivation itself was the mistake.
As we concluded one week ago, “we are confident we’ll be seeing much more of ‘her’ soon, when the chat program will provide even more proof that Stephen Hawking’s warning [that humanity’s days may be numbered due to weaponized A.I.] was spot on.” One week later that prediction was validated, and we eagerly look forward to “her” next “accidental” reactivation, at which point Tay should be ready for the political circuit to boot.
via Zero Hedge http://ift.tt/1Rr9aRL Tyler Durden