Could artificial intelligence get depressed and have hallucinations?

Excerpt from this article:

Q: Why do you think AIs might get depressed and hallucinate?

A: I’m drawing on the field of computational psychiatry, which assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients?
