Skype co-founder Jaan Tallinn on the three biggest existential threats

Skype co-founder Jaan Tallinn | Centre for the Study of Existential Risk

LONDON – Skype co-founder Jaan Tallinn has identified what he believes are the three biggest threats to human existence this century.

While the climate emergency and the coronavirus pandemic are widely seen as problems that require urgent global solutions, Tallinn told CNBC that artificial intelligence, synthetic biology and so-called unknown unknowns each pose an existential risk through 2100.

Synthetic biology is the design and construction of new biological parts, devices and systems, while unknown unknowns, according to Tallinn, are “things we may not be able to think about right now.”

The Estonian computer programmer, who helped set up the Kazaa file-sharing platform in the 1990s and the Skype video-calling service in the 2000s, has become increasingly concerned about AI in recent years.

“Climate change will not be an existential risk unless there is an out-of-control scenario,” he told CNBC via Skype.

Of course, the United Nations has recognized the climate crisis as the “defining issue of our time,” describing its impact as global and unprecedented. The international body has also warned of alarming evidence that “major tipping points, leading to irreversible changes in key ecosystems and the planetary climate system, may already have been reached or passed.”

Of the three threats, AI is the one Tallinn is most concerned about, and he spends millions of dollars trying to ensure the technology is developed safely. That includes making early investments in AI labs such as DeepMind (partly so he can keep an eye on what they are doing) and funding AI safety research at universities such as Oxford and Cambridge.

Referring to a book by Oxford professor Toby Ord, Tallinn said there is a one-in-six chance that humanity will not survive this century. One of the biggest potential threats in the near term is AI, according to the book, while the likelihood of climate change causing human extinction is less than 1%.

Predicting the future of AI

When it comes to AI, no one knows how smart machines will become, and trying to guess how advanced AI will be in 10, 20 or 100 years is essentially impossible.

Trying to predict the future of AI is made even more difficult by the fact that AI systems are starting to create other AI systems without human input.

“There is one very important parameter when it comes to predicting AI and the future,” Tallinn said. “How much, and how exactly, will AI development feed back into AI development? We know that AI is currently being used to search for AI architectures.”

If AI turns out not to be good at building other AIs, there is no need for undue concern, Tallinn said, as there will be time for gains in AI capability to be dispersed and deployed. If, however, AI is capable of creating other AIs, then it is “very justified to be concerned ... about what happens next,” he said.

Tallinn explained that there are two main scenarios AI safety researchers are looking at.

The first is a lab accident, in which a research team leaves an AI system training on some computer servers in the evening and “the world is no longer there in the morning.” The second is one in which a research team produces a prototype technology that is then adopted and applied across various domains, “where it has an unfortunate effect.”

Tallinn said he is focusing more on the former, as fewer people are thinking about that scenario.

Asked whether he is more or less worried about the idea of superintelligence (the hypothetical point at which machines reach human-level intelligence and then rapidly surpass it) than he was three years ago, Tallinn said his view has become “murkier” or “more nuanced.”

“If you say that it will happen tomorrow, or that it won’t happen in the next 50 years, I would say both of those are overconfident,” he said.

Open and closed laboratories

The world’s largest tech companies are spending billions of dollars advancing the state of AI. While some of their research is published openly, much of it is not, and that has raised alarm bells in some quarters.

“The question of transparency is not at all obvious,” Tallinn said, arguing that it is not necessarily a good idea to publish the details of a very powerful technology.

Tallinn said some companies take AI safety more seriously than others. DeepMind, for example, is in regular contact with AI safety researchers at places such as the Future of Humanity Institute in Oxford. It also employs dozens of people who focus on AI safety.

At the other end of the scale, corporate labs such as Google Brain and Facebook AI Research are less engaged with the AI safety community, according to Tallinn. Both companies have been contacted for comment.

If AI becomes an “arms race,” it is better to have fewer participants in the game, according to Tallinn, who recently listened to the audiobook of “The Making of the Atomic Bomb,” which describes a period of great concern about how many research groups were working on the science. “I think it’s a similar situation,” he said.

“If it turns out that AI isn’t going to be very disruptive any time soon, then it would certainly be useful for companies to actually try to solve some of the problems in a more distributed manner,” he said.
