
Anyone remember Microsoft's Tay chatbot experiment? 4chan got it to start spewing hate speech and 9/11 conspiracy theories on Twitter in no time.

https://www.theguardian.com/technology/2016/mar/24/tay-micro...

It just goes to show that chatbots are easy to manipulate. Since the Chinese bot said something we agree with, we think it's somehow showing us a deeper level of truth. Perhaps the real lesson is that humans are easily fooled by confirmation bias?

You'll know that AI is about to take over the world when it can exploit our cognitive biases to convince us of something so strongly that it would take a human far longer to convince us that what the AI made us believe was not in fact true. Maybe this is what Elon Musk was referring to when he warned us about AI that is "deep intelligence in the network".



The article doesn't pretend chatbots are hard to manipulate, and in fact it mentions the Tay bot case. I don't see anyone claiming that the bot has revealed a deeper truth, although I would agree the Chinese government is corrupt.



