Machines That Attack On Their Own: Autonomous Artificial Intelligence Needs To Be Thought Through

  • Artificial intelligence has clear positive uses, but it could be used to teach machines to attack people and their computer networks on their own.
  • Drones and autonomous vehicles could be hacked using AI and turned into weapons.
  • Traditional cybersecurity methods won’t know how to cope with new attacks carried out by smart machines.

The idea of a computer program learning by itself, growing in knowledge and becoming increasingly sophisticated may be a scary one. It’s even scarier when it’s learning to attack things.

It’s easy to dismiss artificial intelligence as yet another tech buzzword, but it’s already being used in everyday applications via algorithmic processes known as machine learning.

Far from the killer robots of “Blade Runner,” machine learning applications are designed to train a computer to fulfill a certain task on its own. Machines are essentially “taught” to complete that task by doing it over and over, learning the many obstacles that could inhibit them.
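To make the idea of "teaching by repetition" concrete, here is a minimal, illustrative sketch (not drawn from the article): a single-neuron perceptron that learns the logical AND function by seeing the same examples over and over, nudging its weights after each mistake. The function name, learning rate, and epoch count are all arbitrary choices for this toy example.

```python
# Minimal sketch of machine learning by repetition: a perceptron is
# "taught" logical AND by repeatedly seeing examples and adjusting
# its weights after each wrong answer. Illustrative only.

def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):              # do the task over and over
        for (x0, x1), target in examples:
            prediction = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - prediction  # learn from each mistake
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Truth table for AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
print([1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data])
# → [0, 0, 0, 1]
```

Real-world systems use vastly larger models and datasets, but the loop is the same in spirit: repeat the task, measure the error, adjust.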

“Such attacks, which seem like science fiction today, might become reality in the next few years,” Guy Caspi, CEO of cybersecurity start-up Deep Instinct, told CNBC’s new podcast “Beyond the Valley.”

Such technology promises many benefits, such as smoother computing and the automation of tasks that, in years to come, may no longer require human intervention. But it also has experts worried.

Hacking, then weaponizing, drones and cars

Technicians and researchers are warning about the threat such technology poses to cybersecurity, that fundamentally important practice that keeps our computers and data — and governments’ and corporations’ computers and data — safe from hackers.

In February, a study from teams at the University of Oxford and University of Cambridge warned that AI could be used as a tool to hack into drones and autonomous vehicles, and turn them into potential weapons.

“Autonomous cars like Google’s (Waymo) are already using deep learning, can already [evade] obstacles in the real world,” Caspi said, “so [evading] traditional anti-malware system in cyber domain is possible.”

Another study, by U.S. cybersecurity software giant Symantec, said that 978 million people across 20 countries were affected by cybercrime last year. Victims of cybercrime lost a total of $172 billion — an average of $142 per person — as a result, researchers said.

The fear for many is that AI will bring with it a dawn of new forms of cyber breaches that bypass traditional means of countering attacks.

“We’re still in the early days of the attackers using artificial intelligence themselves, but that day is going to come,” warns Nicole Eagan, CEO of cybersecurity firm Darktrace. “And I think once that switch is flipped on, there’s going to be no turning back, so we are very concerned about the use of AI by the attackers in many ways because they could try to use AI to blend into the background of these networks.”

Original Article: https://www.cnbc.com/2018/07/20/ai-cyberattacks-artificial-intelligence-threatens-cybersecurity.html

Read More: The Singularity: Google’s AI Now Creating Its Own Artificial Intelligence, And Better Than Engineers Can
