BLOG

Archive for the ‘existential risks’ category: Page 14

Sep 8, 2023

The Fermi Paradox: Digital Empires & Miniaturization

Posted in categories: computing, existential risks, food

Many believe the future of humanity is to go digital: uploading our minds to computers and living in virtual worlds that are vastly more efficient and compact. If we do this, might distant alien empires do the same? And if so, could that be the reason we don't see them?



Sep 6, 2023

The Berserker Hypothesis: The Darkest Explanation Of The Fermi Paradox

Posted in categories: alien life, existential risks

Look, we write rather a lot about the Fermi Paradox, so trust us when we say that the Berserker Hypothesis may be the darkest explanation out there. Not only would it mean that the universe is a dead, lifeless husk, but it would also imply that our own destruction is imminent.

The Fermi Paradox, at its most basic, asks: given the high probability that alien life exists out there (bearing in mind the vastness of space and that we keep finding planets within habitable zones), why has nobody gotten in touch yet?

Sep 5, 2023

North Korean hackers have allegedly stolen hundreds of millions in crypto to fund nuclear programs

Posted in categories: blockchains, business, cryptocurrencies, cybercrime/malcode, existential risks, military

North Korea-linked hackers have stolen hundreds of millions of dollars' worth of cryptocurrency to fund the regime's nuclear weapons programs, research shows.

From January to Aug. 18 of this year, North Korea-affiliated hackers stole $200 million worth of crypto, accounting for over 20% of all crypto stolen in that period, according to blockchain intelligence firm TRM Labs.

“In recent years, there has been a marked rise in the size and scale of cyber attacks against cryptocurrency-related businesses by North Korea. This has coincided with an apparent acceleration in the country’s nuclear and ballistic missile programs,” said TRM Labs in a June discussion with North Korea experts.

Sep 5, 2023

North Korea stages tactical nuclear attack drill

Posted in categories: existential risks, military, nuclear weapons

SEOUL, Sept 3 (Reuters) — North Korea conducted a simulated tactical nuclear attack drill that included two long-range cruise missiles in an exercise to “warn enemies” the country would be prepared in case of nuclear war, the KCNA state news agency said on Sunday.

KCNA said the drill was successfully carried out on Saturday and two cruise missiles carrying mock nuclear warheads were fired towards the West Sea of the Korean peninsula and flew 1,500 km (930 miles) at a preset altitude of 150 meters.

Pyongyang also said it would bolster its military deterrence against the United States and South Korea.

Sep 5, 2023

Asteroid the size of 81 bulldogs to pass Earth on Wednesday

Posted in categories: asteroid/comet impacts, existential risks

Asteroid 2021 JA5 is around the size of 81 bulldogs, the mascot of the University of Georgia's college football team. But it won't hit us; hopefully the Bulldogs will have better luck.

Sep 4, 2023

OpenAI’s Moonshot: Solving the AI Alignment Problem

Posted in categories: existential risks, robotics/AI

Jan Leike explains OpenAI’s effort to protect humanity from superintelligent AI.

Sep 3, 2023

The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI

Posted in categories: biotech/medical, existential risks, robotics/AI

“It’s a time of huge uncertainty,” says Geoffrey Hinton from the living room of his home in London. “Nobody really knows what’s going to happen … I’m just sounding the alarm.”

In The Godfather in Conversation, the cognitive psychologist and computer scientist known as the "Godfather of AI" explains why, after a lifetime spent developing a type of artificial intelligence known as deep learning, he is suddenly warning about existential threats to humanity.


Aug 26, 2023

This new technology could change AI (and us)

Posted in categories: existential risks, robotics/AI

Organoid intelligence is an emerging field in computing and artificial intelligence.

Earlier this year, the Australian startup Cortical Labs developed a cybernetic system made from human brain cells. They called it DishBrain and taught it to play Pong.


Aug 26, 2023

Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less

Posted in categories: business, existential risks, robotics/AI

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website: https://80000hours.org/podcast/episodes/jan-leike-superalignment.

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.


Aug 25, 2023

ChatGPT Still Needs Humans

Posted in categories: employment, existential risks, robotics/AI

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic (large language models could replace conventional web search) to the concerning (AI will eliminate many jobs) to the overwrought (AI poses an extinction-level threat to humanity). All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually quite dumb. And despite the name "artificial intelligence," they are completely dependent on human knowledge and labor. They can't reliably generate new knowledge, of course, but there's more to it than that.


Page 14 of 143