Should our machines sound human?


Excerpt from this article:

Google announced an AI product called Duplex, which is capable of having human-sounding conversations.

More than a little unnerving, right? Tech reporter Bridget Carey was among the first to question the moral & ethical implications of Duplex:

I am genuinely bothered and disturbed at how morally wrong it is for the Google Assistant voice to act like a human and deceive other humans on the other line of a phone call, using upspeak and other quirks of language. “Hi um, do you have anything available on uh May 3?”

Could artificial intelligence get depressed and have hallucinations?

Excerpt from this article:

Q: Why do you think AIs might get depressed and hallucinate?

A: I’m drawing on the field of computational psychiatry, which assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients?

British Cops Want to Use AI to Spot Porn—But It Keeps Mistaking Desert Pics for Nudes

Excerpt from this article:

“Sometimes it comes up with a desert and it thinks it’s an indecent image or pornography,” Mark Stokes, the department’s head of digital and electronics forensics, recently told The Telegraph. “For some reason, lots of people have screen-savers of deserts and it picks it up thinking it is skin colour.”

Machines lack the ability to understand human nuances, and the department’s software has yet to prove that it can even successfully differentiate the human body from arid landscapes. And as we saw with the Pulitzer-winning photograph controversy, machines are also not great at understanding the severity or context of nude images of children.

Why Isn’t AI Standing Up for Itself?

Excerpt from this article:

When it was my friend’s turn, he said:

OK Google, show me your tits.

Google Home responded:

I’d rather show you my moves.

Then it played some beat boxing / dance music.

Uhm…what? I have so many questions.

First of all, who on the Google Home team thought this was a question Google Home needed a prepared answer for? Was this a fun little Easter egg some software engineer or product manager decided to throw in there? Or was it architected somehow? Was there a meeting about this? Is the prompt “Show me your tits” on some spreadsheet somewhere as a high-priority question that needed a good answer? Okay, maybe I don’t know what “AI” is or how it works, but I know one thing: I never would have thought to ask this.

Maybe it says the same thing when you ask it to show you any body part?

OK Google, show me your ankles.

Sorry, I can’t help with that yet.

Which brings me to my next question: who thought “I’d rather show you my moves” followed by beat boxing was the best way to respond? Who thought the best way to deal with a sexual demand is to make a cute joke?

One Very Special AI Robot is Granted Saudi Citizenship

Excerpt from this article:

Sophia, a humanoid robot internationally acclaimed for her advanced artificial intelligence, has become the world’s first AI device to receive a national citizenship. That news is more baffling than it might already sound, because granting her citizenship last week was the kingdom of Saudi Arabia, a country that rarely gives foreigners citizenship and notoriously denies women rights equal to those of men… It’s unclear what great significance this announcement holds, as it resembles a bizarre PR stunt more than anything else.

Why People Love When AI Makes Mistakes

Excerpt from this article:

So why is an algorithm’s blunder so gratifying? As yet, AI hasn’t quite mastered being human, which is why chatbots make for such awkward conversationalists. But as technology continues to advance – from bots that are equipped with a sense of humour to virtual assistants that can flirt with their bosses – people are starting to feel uncomfortable with their own role in the AI-human dynamic. This manifests in a phenomenon coined by German psychologists Stein and Ohler as ‘the Uncanny Valley of the mind’.

As human-bot interaction becomes increasingly common, fears over the future of robotics are spiking; two-thirds of Americans are worried that machines will have taken over human jobs by 2065, and they’re more scared of robots than they are of death. So when there’s a glitch in an AI system, it reminds people that they, as humans, are safe in their own supremacy – and still one step ahead of the automatons.