BLOG

Archive for the ‘existential risks’ category: Page 94

Mar 29, 2016

Something Just Slammed Into Jupiter

Posted by in categories: asteroid/comet impacts, existential risks

Astronomers have captured video evidence of a collision between Jupiter and a small celestial object, likely a comet or asteroid. Though it looks like a small blip of light, the resulting explosion was unusually powerful.

As Phil Plait of Bad Astronomy reports, the collision occurred on March 17, but confirmation of the event only emerged this week. An Austrian amateur astronomer used a 20-centimeter telescope to chronicle the unexpected event, though a lone observation like this could have been dismissed as a visual artifact.

Continue reading “Something Just Slammed Into Jupiter” »

Mar 29, 2016

Flyby Comet Was WAY Bigger Than Thought

Posted by in categories: asteroid/comet impacts, existential risks

Oh, joy. I hope it doesn’t take an actual catastrophe before the world comes together to get all of our eggs out of this one basket.


Comet P/2016 BA14 was initially thought to be a cosmic lightweight, but as it flew past Earth on March 22, NASA pinged it with radar to reveal just what a heavyweight it really is.

Read more

Mar 18, 2016

Who’s Afraid of Existential Risk? Or, Why It’s Time to Bring the Cold War out of the Cold

Posted by in categories: defense, disruptive technology, economics, existential risks, governance, innovation, military, philosophy, policy, robotics/AI, strategy, theory, transhumanism

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent if not predict any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation — rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.

However, this line of reasoning overestimates the strength of human intelligence in one respect and underestimates it in another. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

Continue reading “Who's Afraid of Existential Risk? Or, Why It's Time to Bring the Cold War out of the Cold” »

Mar 3, 2016

Dr. Sarif, Or How I Learned To Stop Worrying And Love The Human Revolution

Posted by in categories: computing, existential risks, government

I am not in fact talking about the delightful Deus Ex game, but rather about the actual revolution in society and technology we are witnessing today. Pretty much any day I look at any news source, be it a cable news network or a Facebook feed or what have you, I see fear mongering. “Implantable chips will let the government track you!” or “Hackers will soon be able to steal your thoughts!” (Seriously, I’ve seen both of these and much more and much crazier.) …But I’m here to tell you two things. First, calm the hell down. Nearly every doomsday scenario painted by the fear-mongers is either impossible or so utterly unlikely as to be effectively impossible. And second, you should psych the hell up, because what’s happening is genuinely worth getting excited about — for good reasons, not bad.

Read more

Mar 2, 2016

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1

Posted by in categories: existential risks, innovation, robotics/AI

We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species.

Read more

Feb 25, 2016

WW3 Could Be Thermonuclear, With ‘Human-machine’ Teams

Posted by in category: existential risks

The United States, Russia, and China remain the greatest threats to one another in what could become World War 3.

Read more

Feb 21, 2016

100-foot asteroid to zoom past Earth in two weeks; no chance of collision, scientists say

Posted by in categories: asteroid/comet impacts, existential risks

An article for the “Doomsday” fans.


An asteroid roughly 100 feet long and moving at more than 34,000 mph is scheduled to make a close pass by Earth in two weeks.

But don’t worry, scientists say. It has no chance of hitting us, and may instead help draw public attention to growing efforts at tracking the thousands of asteroids zooming around space that could one day wipe out a city — or worse — if they ever hit our planet.

Continue reading “100-foot asteroid to zoom past Earth in two weeks; no chance of collision, scientists say” »

Feb 5, 2016

Strategies for Growing the Transhumanism Movement

Posted by in categories: existential risks, geopolitics, life extension, Ray Kurzweil, transhumanism

https://youtube.com/watch?v=MGbGVGgoSPo

An article on transhumanism in the Huff Post:


Future Transhumanist City — Image by Sam Howzit

Transhumanism — the international movement that aims to use science and technology to improve the human being — has been growing quickly in the last few years. Everywhere one looks, there seem to be more and more people embracing radical technologies that are already dramatically changing lives. Ideas that seemed like science fiction just a decade ago are now here.

Continue reading “Strategies for Growing the Transhumanism Movement” »

Jan 21, 2016

Martin Rees: Can we prevent the end of the world?

Posted by in category: existential risks

Very well-thought-out, quite intelligent points.


A post-apocalyptic Earth, emptied of humans, seems like the stuff of science fiction TV and movies. But in this short, surprising talk, Lord Martin Rees asks us to think about our real existential risks — natural and human-made threats that could wipe out humanity. As a concerned member of the human race, he asks: What’s the worst thing that could possibly happen?

Continue reading “Martin Rees: Can we prevent the end of the world?” »

Jan 21, 2016

Scientist dismisses Stephen Hawking’s doomsday predictions

Posted by in categories: existential risks, robotics/AI

Yuste v. Hawking — battle of the brains.


Renowned neuroscientist Rafael Yuste on Wednesday dismissed the latest doomsday predictions of Stephen Hawking, saying the British astrophysicist “doesn’t know what he’s talking about.”

In a recent lecture in London, Hawking indicated that advances in science and technology will lead to “new ways things can go wrong,” especially in the field of artificial intelligence.

Yuste, a Columbia University neuroscience professor, was less pessimistic. “We don’t have enough knowledge to be able to say such things,” he told Radio Cooperativa in Santiago, Chile.

Read more

Page 94 of 141