
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.

Timestamps:
00:00:00 Preview and intro.
00:01:59 Thinking about AGI in the 1970s.
00:07:28 What’s different about this AI boom?
00:16:10 Former taboos about AGI.
00:19:53 AI research worth revisiting.
00:35:53 Will the first AGI be simple?
00:48:49 Is alignment achievable?
01:02:40 Benchmarks and economic impact.
01:15:23 Bottlenecks to superintelligence.
01:23:09 What should we do?

Jakub Pachocki, OpenAI’s chief scientist since 2024, believes artificial intelligence models will soon be capable of producing original research and making measurable economic impacts. In a conversation with Nature, Pachocki outlined how he sees the field evolving — and how OpenAI plans to balance innovation with safety concerns.

Pachocki, who joined OpenAI in 2017 after a career in theoretical computer science and competitive programming, now leads the firm’s development of its most advanced AI systems. These systems are designed to tackle complex tasks across science, mathematics, and engineering, moving far beyond the chatbot functions that made ChatGPT a household name in 2022.

Using global land-use and carbon-storage data from the past 175 years, researchers at The University of Texas at Austin and Cognizant AI Labs have trained an artificial intelligence system to develop optimal environmental policy solutions that can advance the United Nations' global sustainability initiatives.

The AI tool effectively balances various complex trade-offs to recommend ways of maximizing carbon storage, minimizing economic disruptions and helping improve the environment and people’s everyday lives, according to a paper published today in the journal Environmental Data Science.

The project is among the first applications of the UN-backed Project Resilience, a team of scientists and experts working to tackle global decision-augmentation problems, including ambitious sustainable development goals this decade, as part of a broader effort called AI for Good.

“The backing of these global financial institutions is a testament to the strength of our business and the resonance of our mission,” Krishna Rao, Anthropic’s finance chief, said in a statement.

Countries in the Global South risk being left out of the quantum revolution — along with its economic, technological and security benefits — due to growing export controls, siloed research initiatives and national security concerns, a new policy analysis argues.

In the first of a series of articles on quantum technologies published by the policy journal Just Security, researchers Michael Karanicolas, of Dalhousie University, and Alessia Zornetta, of UCLA Law, examine how the geopolitics of emerging quantum technologies are replicating long-standing patterns of technological exclusion. The authors argue that absent meaningful interventions, quantum could become another engine of global inequality, one that threatens to lock poorer nations out of the next era of technological and economic development.

The authors trace the roots of this divide to export control regimes that are quickly expanding in response to the strategic potential of quantum systems. Since 2020, governments in the U.S., EU and China have implemented targeted restrictions on quantum-enabling hardware, software, and communications systems.