MIT Scientists Create Psychopath AI By Making It Look At Reddit

Scientists at the Massachusetts Institute of Technology have truly created a monster.

A team of researchers who specialize in the darker side of artificial intelligence made news again this week for their latest creation: “Norman,” a machine-learning bot that “sees death in whatever image it looks at,” its creators told HuffPost.

Pinar Yanardag, Manuel Cebrian and Iyad Rahwan wanted to prove that an artificial intelligence algorithm would be influenced by the kind of content fed to it. So they made Norman, named for “Psycho” character Norman Bates, and had it read image captions from a Reddit forum that features disturbing footage of people dying. (We don’t need to promote it here.)

“Due to ethical and technical concerns and the graphic content of the videos, we only utilized captions of the images, rather than using the actual images that contain the death of real people,” the scientists said in an email.

The team then showed Norman randomly generated inkblots and compared the way it captioned the images to the captions created by a standard AI. For instance, where a standard AI sees, “A black and white photo of a small bird,” Norman sees, “Man gets pulled into dough machine.”

Here are some of the inkblots shown to Norman, and the eerie results:

Standard AI sees: A close up of a vase with flowers.
Norman sees: A man is shot dead.

Standard AI sees: A black and white photo of a baseball glove.
Norman sees: Man is murdered by machine gun in broad daylight.

Standard AI sees: A person is holding an umbrella in the air.
Norman sees: A man is shot dead in front of his screaming wife.

Standard AI sees: A black and white photo of a red and white umbrella.
Norman sees: Man gets electrocuted while attempting to cross busy street.

When asked why they would create such a thing, the MIT researchers erupted in chilling laughter as lightning struck in the distance.

That didn’t happen, of course, but they did give a valid reason for the project.

“The data you use to teach a machine learning algorithm can significantly influence its behavior,” the researchers said. “So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
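The researchers’ point — that the same learning algorithm behaves very differently depending on what it is fed — can be sketched with a toy example. The code below is a deliberately simplified illustration (the captions, function names, and word-count “model” are all hypothetical, not the actual Norman system): the identical training procedure is run on two different caption corpora, and each resulting model ends up emphasizing very different vocabulary.

```python
from collections import Counter

def train(captions):
    """Build a naive word-frequency model from a list of captions.

    This stands in for a real captioning model; the point is only
    that the algorithm itself is identical for both corpora.
    """
    model = Counter()
    for caption in captions:
        model.update(caption.lower().split())
    return model

def salient_words(model, n=3):
    """Return the words the model has seen most often."""
    return [word for word, _ in model.most_common(n)]

# Same algorithm, different training data (toy captions for illustration).
standard_captions = ["a bird perches on a branch",
                     "a vase of flowers on a table"]
disturbing_captions = ["man gets pulled into machine",
                       "man is shot on street"]

standard_model = train(standard_captions)
norman_model = train(disturbing_captions)

print(salient_words(standard_model))  # skews toward everyday scenery words
print(salient_words(norman_model))    # skews toward violent vocabulary
```

Nothing about `train` is biased; only the data differs — which is exactly the researchers’ argument about where unfairness in deployed AI systems usually originates.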

The same MIT lab previously created other creepy bots, including Shelley, which helps write horror stories, and the Nightmare Machine, which generates scary imagery.

In the future, when Norman and his kin do take over, we hope they will remember this article ― and its author ― with fondness.
