
Dangers of superintelligence | Separating sci-fi from plausible speculation

Just after filming this video, Sam Altman, CEO of OpenAI, published a blog post about the governance of superintelligence in which he, along with Greg Brockman and Ilya Sutskever, outlines their thinking on how the world should prepare for superintelligences. And just before filming, Geoffrey Hinton quit his job at Google so that he could express more openly his concerns about the imminent arrival of an artificial general intelligence, an AGI that could soon get beyond our control if it became superintelligent. So, the basic idea is moving from sci-fi speculation to plausible scenario, but how powerful will such systems be, and which of the concerns about superintelligent AI are reasonably founded? In this video I explore the ideas around superintelligence with Nick Bostrom’s 2014 book, Superintelligence, as one of our guides and Geoffrey Hinton’s interviews as another, to try to unpick which aspects are plausible and which are more like speculative sci-fi. I explore the dangers, such as Eliezer Yudkowsky’s notion of a rapid ‘foom’ takeover of humanity, and also look briefly at the control problem and the alignment problem. At the end of the video I suggest how we could maybe delay the arrival of superintelligence by withholding the algorithms’ ability to self-improve, withholding what you could call meta-level agency.

▬▬ Chapters ▬▬

00:00 — Questing for an Infinity Gauntlet
01:38 — Just human-level AGI
02:27 — Intelligence explosion
04:10 — Sparks of AGI
04:55 — Geoffrey Hinton is concerned
06:14 — What are the dangers?
10:07 — Is ‘foom’ just sci-fi?
13:07 — Implausible capabilities
14:35 — Plausible reasons for concern
15:31 — What can we do?
16:44 — Control and alignment problems
18:32 — Currently no convincing solutions
19:16 — Delay intelligence explosion
19:56 — Regulating meta-level agency

▬▬ Other videos about AI and Society ▬▬

AI wants your job | Which jobs will AI automate? | Reports by OpenAI and Goldman Sachs.
• Which jobs will AI automate? | Report…

How ChatGPT Works (a non-technical explainer):

A robot that can play video games with humans

In recent years, engineers have developed a wide range of robotic systems that could soon assist humans with various everyday tasks. Rather than helping with chores or other manual jobs, some of these robots could act primarily as companions, helping older adults or individuals with disabilities to practice skills that typically entail interacting with another human.

Researchers at Nara Institute of Science and Technology in Japan recently developed a new robot that can play video games with a human user. This robot, introduced in a paper presented at the 11th International Conference on Human-Agent Interaction, can play games with users while communicating with them.

“We have been developing robots that can chat while watching TV together, and interaction technology that creates empathy, in order to realize a partner robot that can live together with people in their daily life,” Masayuki Kanbara, one of the researchers who carried out the study, told Tech Xplore. “In this paper, we developed a robot that plays TV games together to provide opportunities for people to interact with the robot in their daily lives.”

Clever Apes in the Modern Workplace

“Rather than seeing the organization as a machine, we need to see it as a collection of clever apes.” Psychologist Robin Dunbar’s latest book argues companies are social groups that can’t be perfected like a machine.


What is it about working life that can make us feel so alienated and isolated, and what can we do to prevent it? In The Social Brain: The Psychology of Successful Groups, the evolutionary psychologist Robin Dunbar joins forces with Tracey Camilleri and Samantha Rockey, associate fellows at Oxford’s Saïd Business School, to apply Dunbar’s own scientific discoveries about human cooperation to our work lives. The idea is that, in order to perform our jobs more effectively, we need to go with, and at times against, the grain of human nature. The authors home in on what makes us work together best, given the central importance of groups throughout our evolutionary history.

Dunbar spent the better part of two decades, starting in the 1970s, studying wild monkeys in Africa to understand why some species develop their own societies. His close contact with our primate cousins gave him a new perspective from which to approach questions about human nature, and that led him, in 1998, to propose the “social brain hypothesis”—the idea that keeping track of who’s who, and cooperating effectively, takes considerable brain power.
