The Singularity: Artificial Intelligence Is Becoming Self-Aware, Suggests ‘Phi’ Theory

According to a new theory, computers might become self-aware, suggesting that consciousness isn’t a trait unique to humans, as once believed.


According to Matthew Davidson at Monash University, the Phi theory explains how it could be possible for computers and machines to become self-aware.

According to Davidson, complicated machines and animals may display signs of consciousness, but the extent of that ‘experience’ is an ongoing debate.

The researcher argues that something with a low ‘Phi’, like a computer hard drive, won’t be conscious, whereas something with a high degree of ‘Phi’, like a mammalian brain, will be.
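To make the low-Phi versus high-Phi contrast concrete, here is a minimal, hypothetical sketch in Python. It does not compute Tononi’s actual Φ, which requires searching over partitions of a system’s cause–effect structure; instead it uses a much simpler whole-versus-parts “integration” (multi-information) score for Gaussian variables, which captures the same intuition that tightly coupled parts carry more information together than separately. The function name, the 4-unit systems, and the coupling value of 0.6 are all illustrative assumptions, not part of Davidson’s article.

```python
# A toy illustration of the "whole is more than the sum of its parts" idea
# behind Phi. This is NOT Tononi's integrated-information measure; it is a
# far simpler "integration" score for Gaussian variables: the sum of each
# part's entropy minus the entropy of the whole. Zero means the parts are
# statistically independent, like unrelated sectors on a hard drive; larger
# values mean the parts are bound together, loosely analogous to a densely
# interconnected brain.

import numpy as np

def gaussian_integration(cov: np.ndarray) -> float:
    """Multi-information (in nats) of a zero-mean Gaussian with covariance `cov`."""
    marginal_vars = np.diag(cov)
    _, logdet = np.linalg.slogdet(cov)  # log-determinant of the joint covariance
    return 0.5 * (np.sum(np.log(marginal_vars)) - logdet)

# "Hard-drive-like" system: four units that never influence one another.
independent = np.eye(4)

# "Brain-like" system: four units with dense mutual coupling (illustrative values).
coupled = np.full((4, 4), 0.6)
np.fill_diagonal(coupled, 1.0)

print(f"independent units: {gaussian_integration(independent):.3f} nats")  # ~0.000
print(f"coupled units:     {gaussian_integration(coupled):.3f} nats")      # > 0
```

The independent system scores zero no matter how many units it has, while the coupled one scores higher the more its parts constrain one another, which is the basic intuition the ‘Phi’ measure formalises far more rigorously.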

Even though incredible advances are made nearly every day, critics argue that computers and robots will never truly match humans unless they eventually develop consciousness and become fully aware.

Some scientists firmly believe that consciousness is a uniquely human trait that cannot be replicated in machines. Many others, however, argue that animals, for example, are just as conscious as humans, and that the so-called ‘Phi’ theory could be used to determine whether artificial intelligence is capable of conscious behaviour.

In an article for ‘The Conversation’, Matthew Davidson, a Ph.D. candidate in the neuroscience of consciousness at Monash University, explains the Phi theory in detail.

“Do you think that the machine you are reading this story on, right now, has a feeling of ‘what it is like’ to be in its state?

What about a pet dog? Does it have a sense of what it’s like to be in its state? It may pine for attention, and appear to have a unique subjective experience, but what separates the two cases?” asks Davidson in the article.

Davidson adds: “What makes phi interesting is that a number of its predictions can be empirically tested: if consciousness corresponds to the amount of integrated information in a system, then measures that approximate phi should differ during altered states of consciousness.”

“By extension, if consciousness is defined by the amount of integrated information in a system, then we may also need to move away from any form of human exceptionalism that says consciousness is exclusive to us,” he added.

But what would it mean for machines to become self-aware?

Professor Stephen Hawking and Elon Musk have both warned about artificial intelligence and its dangers.

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014.

“Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” Professor Hawking added in a 2014 interview with the BBC that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”
