The research concerns whether such chatbots (i.e., computer programs pretending to be humans) are themselves a form of evil, since they purposely try to deceive people into thinking they are interacting with another person. Clearly, Microsoft’s Tay was a known AI chatbot, and there was no intent on Microsoft’s part to deceive people into thinking Tay was human. And Microsoft was clearly caught off guard when Tay started tweeting vitriol.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.
As a result, we have taken Tay offline and are making adjustments.” In other words, the terrible things that Tay tweeted were a reflection of the worst of mankind, not the worst of artificial intelligence.
We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.
To do AI right, one needs to iterate with many people and often in public forums.
titled ‘Moral Competence in Social Robots.’ They argue that moral competence consists of four broad components.
Nothing in the arguments presented by Fourtané convinces me that machines are going to be able to develop an autonomous moral core beyond what is programmed into them.
She politely withdrew from conversations about Zionism, Black Lives Matter, Gamergate, and 9/11, and she gave out the number of the National Suicide Prevention Hotline to friends who sounded depressed.
She used words like ‘swagulated’ and almost never failed to call it ‘the internets.’ She was obsessed with abbrevs and the prayer-hands emoji.
That means people should be far more concerned about the ethics of the people creating the algorithms than about the machines that carry those instructions out.
Tay showed us that the worst of humanity is just as capable of teaching machines behavior as the best of humanity.

Footnotes

Anthony Lydgate, “I’ve Seen the Greatest A.
He writes: “In the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.