What is likely to be the biggest threat to humans over the next few years? Supervolcanoes, artificial intelligence, or global pandemics? Well, according to the new Global Catastrophic Risks report published this week, it seems we may have to be prepared for all of these, with asteroid impacts, nuclear war, and climate change thrown in for good measure.
Compiled by researchers at Oxford University, the report details threats we might face over the next five years that could wipe out at least 10 percent of the global human population. They break the threats down into two categories: those that are ongoing and could happen at any point, and those that may be unlikely today but could become more significant in the future.
“There are some things that are on the horizon, things that probably won’t happen in any one year but could happen, which could completely reshape our world and do so in a really devastating and disastrous way,” Sebastian Farquhar, director of the Global Priorities Project that helped write the report, told the Press Association. “History teaches us that many of these things are more likely than we intuitively think. Many of these risks are changing and growing as technologies change and grow and reshape our world. But there are also things we can do about the risks.”
The table drawn up by the group outlining the major risks they judge humanity faces. Global Challenges Foundation
The researchers point to recent history to show how catastrophes can and do occur. In 1918, for example, the world was gripped by the Spanish flu pandemic, which is estimated to have infected over 500 million people globally and killed up to 100 million, equating to around 5 percent of the world’s population at the time. The researchers note that a similar pandemic, whether natural or man-made, has a “higher likelihood” of occurring over the next five years than scenarios such as asteroid impacts.
The researchers have also drawn attention to the threat from artificial intelligence (AI). They’re not the first to do so (Elon Musk and Stephen Hawking have already gone there), but the team warns that AI might well be able to out-compete humans, especially “if its goals don’t match with what humanity’s values are.” Such talk of a robot uprising destroying humanity has, however, been criticized before and blamed for hampering development in the field.
While pandemics, nuclear war, climate change, AI, and geoengineering have all been given attention for the threats they pose, the researchers warn that asteroid impacts, supervolcano eruptions, and the far more elusive “unknown risks” have not. They argue that while in the past the major threats faced by humanity were all natural, in the modern era we face more and more anthropogenic risks, and considering that we have more control over these, perhaps we should pay them more attention.