Archive for the ‘existential risks’ category: Page 21
Sep 3, 2023
The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI
Posted by Shubham Ghosh Roy in categories: biotech/medical, existential risks, robotics/AI
“It’s a time of huge uncertainty,” says Geoffrey Hinton from the living room of his home in London. “Nobody really knows what’s going to happen … I’m just sounding the alarm.”
In The Godfather in Conversation, the cognitive psychologist and computer scientist known as the “Godfather of AI” explains why, after a lifetime spent developing a type of artificial intelligence known as deep learning, he is suddenly warning about existential threats to humanity.
Aug 26, 2023
This new technology could change AI (and us)
Posted by Dan Breeden in categories: existential risks, robotics/AI
Organoid intelligence is an emerging field in computing and artificial intelligence.
Earlier this year, the Australian startup Cortical Labs developed a cybernetic system made from human brain cells. They called it DishBrain and taught it to play Pong.
Aug 26, 2023
Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less
Posted by Dan Breeden in categories: business, existential risks, robotics/AI
The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website: https://80000hours.org/podcast/episodes/jan-leike-superalignment.
In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.
Aug 25, 2023
ChatGPT Still Needs Humans
Posted by Shubham Ghosh Roy in categories: employment, existential risks, robotics/AI
The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.
But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.
Aug 17, 2023
Elon Musk on Neuralink: Solving Brain Diseases & Reducing the Risk of AI
Posted by Dan Breeden in categories: biotech/medical, Elon Musk, existential risks, genetics, robotics/AI, singularity
Elon Musk delves into the groundbreaking potential of Neuralink, a revolutionary venture aimed at interfacing with the human brain to tackle an array of brain-related disorders. Musk envisions a future where Neuralink’s advancements lead to the resolution of conditions like autism, schizophrenia, memory loss, and even spinal cord injuries.
Elon Musk discusses the transformative power of Neuralink, highlighting its role in restoring motor control after spinal cord injuries, revitalizing brain function post-stroke, and combating genetically or trauma-induced brain diseases. Musk’s compelling insights reveal how interfacing with neurons at an intricate level can pave the way for repairing and enhancing brain circuits using cutting-edge technology.
Aug 15, 2023
🔴 The Fermi Paradox, Cyborgs, And Artificial Intelligence — My Interview With Isaac Arthur
Posted by Dan Breeden in categories: cyborgs, existential risks, robotics/AI
In this week’s live stream, I’m going to share clips of my interview with Isaac Arthur, the full version of which you can find on the Answers With Joe Podcast: h…
Aug 14, 2023
How to Survive a Nuclear War: Study Reveals the Safest Places to Wait Out the Conflict
Posted by Shubham Ghosh Roy in categories: existential risks, food, military
New research indicates that Australia and New Zealand are the two best places on Earth to survive a nuclear war. The recently published set of calculations doesn’t focus on blast-related deaths or even deaths caused by radiation fallout, which most estimates say would number in the hundreds of millions, but instead looks at how a nuclear winter caused by nuclear bomb explosions would affect food supplies, potentially leading to the starvation of billions.
Nuclear War Simulations Performed For Decades
Since the first atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki in 1945, effectively spelling the end of World War II, war game theorists have run myriad simulations to determine the potential effects of a full-blown nuclear war. Many simulations look at the hundreds of millions who would likely die in the initial blasts, while others have tried to model the slower but equally deadly toll from radiation sickness.
Aug 13, 2023
Daniel Schmachtenberger: “Artificial Intelligence and The Superorganism” | The Great Simplification
Posted by Dan Breeden in categories: existential risks, health, robotics/AI
On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks but accelerate the entire dynamic of the metacrisis? What is the role of intelligence vs. wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?
About Daniel Schmachtenberger:
Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.
Aug 12, 2023
Artificial intelligence could lead to extinction, experts warn
Posted by Nicholas Play in categories: biotech/medical, existential risks, robotics/AI
Heads of OpenAI, Google DeepMind and Anthropic say the threat is as great as pandemics and nuclear war.