How will talking bots affect us?

I have a short New Yorker piece on that question; here is one bit from it:

If Siri is sometimes sarcastic, could heavy users of the Siri of the future become a little more sarcastic, too?

For companies, there are risks associated with such widespread personification. For a time, consumers may be lulled by conversational products into increased intimacy and loyalty. But, later, they may feel especially betrayed by products they’ve come to think of as friends. Like politicians, who build up trust by acting like members of the family only to incur wrath when they are revealed to be careerist and self-interested, companies may find themselves on an emotional roller coaster. They’ll also have to deal with complicated subjects like politics. Recently, Tay, a chat bot from Microsoft, had to be disabled because it began issuing tweets with Nazi-like rhetoric. According to Elizabeth Dwoskin, in the Post, Cortana, another talking Microsoft bot, was carefully programmed not to express favoritism for either Hillary Clinton or Donald Trump. A product’s apparent intelligence makes it likable, but also offers more of an opportunity to offend.

Here is another:

And there are ways in which just knowing that bots exist will change us. If the bots are good enough, we won’t be able to distinguish them from actual people over e-mail or text; when you get an e-mail, you won’t necessarily be certain it’s from a human being. When your best friend writes that she’s also “looking forward to seeing you at the baseball game tonight,” you’ll smile—then wonder if she’s busy and has asked her e-mail bot to send appropriate replies. Once everyone realizes that there might not be a person on the other end, peremptory behavior online may become more common. We’ll likely learn to treat bots more like people. But, in the process, we may end up treating people more like bots.

Do read the whole thing.
