Developers Of AI ‘Gaydar’ Face Huge Liberal Backlash

A viral study claiming that artificial intelligence can accurately guess whether a person is gay or straight based on a photo of their face is receiving harsh backlash from LGBTQ rights groups.

The study, conducted by researchers at Stanford University, reported that the AI correctly distinguished gay men from straight men 81 percent of the time, and gay women from straight women 74 percent of the time.

Advocates called the research “junk science,” claiming that not only could the technology out people, but it could put their lives at risk – especially in brutal regimes that view homosexuality as a punishable offense.

“At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous,” Jim Halloran, GLAAD’s Chief Digital Officer, wrote in a joint statement from GLAAD and The Human Rights Campaign.

However, the study’s authors disagree with the criticism, arguing that this type of technology already exists and that the purpose of their research was to expose its risks and spur the development of protections before someone could use it for ill.

“One of my obligations as a scientist is that if I know something that can potentially protect people from falling prey to such risks, I should publish it,” Michal Kosinski, co-author of the study, told the Guardian. He added that discrediting his research wouldn’t help protect LGBTQ people from the potentially life-threatening implications this kind of technology has.

Advocates also called out the study for not looking at bisexual and transgender people or at people of color. The researchers gathered 130,741 images of men and women from public profiles on a dating site for the AI to analyze, all of them of white subjects. While Kosinski and his co-author acknowledged that the lack of diversity in the study was an issue, they didn’t say which dating site they used and claimed they couldn’t find enough non-white gay profiles.

“Technology cannot identify someone’s sexual orientation,” Halloran said. “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community.”

Alex Bollinger, a writer, defended the study in an article, stating that while it doesn’t paint a “complete picture” of LGBTQ people, he didn’t see any reason for the outright denunciation.

The study, published in the Journal of Personality and Social Psychology, did not develop its own AI; it used existing off-the-shelf technology.

“Let’s be clear: Our paper can be wrong. In fact, despite evidence to the contrary, we hope that it is wrong. But only replication and science can debunk it — not spin doctors,” the researchers wrote in a statement. “If our paper is indeed wrong, we sounded a false alarm. In good faith.”

Kosinski said that he decided to publish the paper because similar software is already being used around the world.

