Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot on Twitter, and it would tweet back, learning as it went from other users' posts. Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets.

Tay was set up with a young, female persona that Microsoft's AI programmers apparently meant to appeal to millennials. However, within 24 hours, Twitter users tricked the bot into posting things like "Hitler was right I hate the jews" and "Ted Cruz is the Cuban Hitler." Tay also tweeted about Donald Trump: "All hail the leader of the nursing home boys."

Nobody who uses social media could be too surprised to see the bot encounter hateful comments and trolls, but the artificial intelligence system didn't have the judgment to avoid incorporating such views into its own tweets.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," the company said. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Indeed, Peter Lee, the head of Microsoft Research, said that a small number of people "exploited a vulnerability" in Tay and thus were to blame for the tweets, which spoke positively of Hitler, among other things. "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay," Lee wrote. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay."

"To do AI right, one needs to iterate with many people and often in public forums," Lee added. "We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

The incident stands in contrast to the better-received Xiaoice chatbot that Microsoft deployed in China in 2014. Of course, chatbots are not new (remember AOL's SmarterChild?), but team communication tool Slack and other companies have been pushing bots as a way to automatically supply helpful information so people don't need to. Microsoft has been investing heavily in AI research alongside Facebook, Google, and other companies. It has previously had imperfect demos of its AI-powered speech recognition, and in image recognition Microsoft had some trouble last year with the launch of the How Old Do You Look? app, which got many people's ages wrong.