Will poets program your bots?

Here is the latest from Silicon Valley:

As tech behemoths and a wave of start-ups double down on virtual assistants that can chat with human beings, writing for AI is becoming a hot job in Silicon Valley. Behind Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana are not just software engineers. Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools.

…a new crop of virtual assistant start-ups, whose products will soon flood the market, have in mind more ambitious bots that can interact seamlessly with human beings.

Because this wave of technology is distinguished by the ability to chat, writers for AI must focus on making the conversation feel natural. Designers for Amazon’s Alexa have built humanizing “hmms” and “ums” into her responses to questions. Apple’s Siri assistant is known for her wry jokes, as well as her ability to beatbox upon request.

As in fiction, the AI writers for virtual assistants dream up a life story for their bots. Writers for medical and productivity apps make character decisions such as whether bots should be workaholics, eager beavers or self-effacing. “You have to develop an entire backstory — even if you never use it,” Ewing said.

Even mundane tasks demand creative effort, as writers try to build personality quirks into the most rote activities. At the start-up x.ai, a Harvard theater graduate is tasked with deciding whether its scheduling bots, Amy and Andrew, should use emojis or address people by first names. “We don’t want people saying, ‘Your assistant is too casual — or too much,’ ” said Anna Kelsey, whose title is AI interaction designer. “We don’t want her to be one of those crazy people who uses 15 million exclamation points.”

Here is the full story.


The link is blank


here is the article

1) yes. 2) neat. 3) death (metaphorically) to stagnationists. 4) hard work ahead separating progress and productivity.

BTW, the entry price to play with such AIs is $10


That's the price for a computer that can connect to an enormous cloud of computers running such an AI. AFAIK, there hasn't been any public discussion of just how many computers or how much software is actually involved in implementing Alexa (or Siri or Cortana, for that matter).

Acknowledged, but then again if the tech giants offer them all for free, that might imply a low marginal cost for a new end-user/experimenter. When 10 TFlops in a box only costs $50k ... who can keep a $50k box busy?

One of the longstanding debates is whether personal characteristics are properties of an individual --- a combination of innate genetics and individual choice --- or whether they are actually the result of social forces. Do people choose their own behavior or does "society" determine it? The latter seems to be more PC, as it diminishes individual responsibility and allows blame to be shifted toward politically disfavored groups, and Microsoft's experience with the Tay AI chatbot would seem to lend some credence to this view.

It's interesting, though, that on balance when firms place bets with their own capital, they seem to be betting that they can engineer AI personality exogenously: when there is no explicit political context, their natural intuition about reality seems to be that, if they want to, they can choose their AI's personality, and such choices won't just be wiped out by the AI's social environment. Essentially, when firms forget to worry about being PC, they default to assuming personality is exogenous rather than endogenously determined by society.

In the very last paragraph of the Post article, the Cortana team also forgets to be PC: "The team insists that Cortana isn’t a woman. 'She knows that gender is a biological construct...'" Oops.

At this point AIs are very much faking it. That might lead to a slightly different philosophical question: How often are we faking it?

Definitely. It is a completely new, modern and original thing that someone tries to push off responsibility for problems they cause onto politically disfavoured groups. Never happened before. Definitely not to the people who PC-ists want more respect for. Gays, blacks, hispanics, and other groups have never been blamed for anything.

PC is the new totalitarianism. Respecting people is THAT hard.

There may be some truth to that - most people's default position is not PC - but it may also be the result of the engineers. They do not study this sort of philosophical issue deeply, I would guess. They just assume that, since they are unique individuals not affected by society at all (and don't we all?), the same must be true for AIs. Saying otherwise would strike at our own sense of self.

Also, of course, the sort of people who try to create an AI are likely to be the sort of people who think it is possible. A distinct individual is a lot easier to create than a society.

It is first very disempowering, but then later possibly very empowering, to realize the extent to which you are formed by the society/culture you grow up in.

If you are deluded into believing that you came up with all that stuff on your very own, and purely as an act of independent will, well, you're not gonna engage in the sort of introspection to think about which aspects are worth retaining.

BC's comment is profoundly misguided about how AI works, and no one seems to have caught on, so preoccupied are they with bashing / praising PC culture.

The companies making personal assistants are NOT trying to make them broadly learn anything. Just stuff like what kind of hotel and flight times you prefer.

Worrying about them learning "personal characteristics" is like worrying that an airplane might learn to move by walking, since it is around people so much.

Just as you can "exogenously" make a plane fly rather than walk, you can make an AI "exogenously" have this or that "personal characteristic".

There is NO ANALOGY to human acculturation except in AI systems explicitly designed to take in culture broadly.

The never-ending effort of the Silicon Valley Madmen to show they are doing something more than developing creative ways to sell stuff. Of course, anybody can believe whatever they wish to believe.

I want AI that does stuff for me. I'm not looking for a friend, a companion, or something that is sensitive to my personal quirks, humour, or any such thing. A well-enunciated and "not toooo robotic" sounding voice is about enough.

In fact, if it was learning so much about me that it could guess at what makes me laugh or predict what I might want for dinner, I would probably crush the thing into little bits or delete any remnants of its software trace, out of fear that it might be repurposed towards programming ME via sub-audible means to love AI or jump on a eugenics band wagon or some such thing.

It is a tool. Let's not make it anything more than that. AI with personality just seems unappealing to me, but AI which knows you better than you know yourself could be catastrophically dangerous.

This is a common confusion, flipping between current "AI" and the long-time (old-time) hope for an artificial brain. Perhaps because we see "dramatic progress" people think we are far along. Not really. Cortana is to an artificial brain as 1960s Disney animatronics is to a Terminator robot.

Now simple AI, far lower on the evolutionary tree than human intelligence, is useful, but it is not nearly as smart as a parrot. Even if they both can say "hello."

(That there are human operators behind these animatronics means we should be wary, but about them, not the trained parrot.)

I'm thinking of an AI that might pre-order my dinner at the statistically assumed preferred restaurant, book holidays for me, pre-order my books, select my preferences for the cable package, arrange a date with a suitable candidate who is available at the right time, or whatever other stuff you might imagine - conditioned on its observation of my previous choices and whatever other information it might hoover up about me from all communications, etc. All the stuff you might expect of a billionaire's personal assistant (or butler) and more, say. Consider Pandora for music, which basically learns about your preferences and adds a little bit of variety to keep things interesting, but applied across many or most aspects of your life.

Yes, I think you are right to draw more attention to the human operators than the AI itself. However, what I am more thinking of is the data systems that would be required for such an AI to do such things, and the nefarious ends to which others (e.g., the human operators you draw attention to, but basically anyone who can access a full copy of your personal data which drives the AI) may apply the same data system.

Anyways, when I do translations I not uncommonly refer to a number of online machine translations to get a few quick ideas of likely renderings when I'm not too sure, so I'm extremely aware of the existing very poor abilities of AI, especially in the realm of language. Far better are outlets which pair short extracts of the original phrase with diverse translated outputs: a human can evaluate which translation fits the context, whereas a computer generally cannot come remotely close.

We are a very long way off, but those who work with AI, even the very well intentioned ones, will want to build expansive data systems which will pose societal risks. I think the principle applied in economics research, where research is deemed essentially unethical if data collection may compromise the privacy/identity of individuals, would be a good minimum starting point.

They already hacked our computers, "the world of the electron and the switch, the beauty of the baud" and all that.

Futurama predicted this!
