BLOG

Archive for the ‘existential risks’ category: Page 96

Apr 12, 2017

‘Doomsday’ Library Joins Seed Vault in Arctic Norway

Posted by in category: existential risks

Major Ed Dames predicted that “a series of powerful, deadly solar flares,” which he termed “the killshot,” would impact the Earth and wipe out civilization (and that this event would be preceded by an event in North Korea).


A second “doomsday” vault will join the seed vault on Svalbard, with the new one offering an offline archive for important literature, data and other cultural relics.

Read more

Apr 11, 2017

Limits to the Nonparametric Intuition: Superintelligence and Ecology

Posted by in categories: environmental, existential risks, machine learning

In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with modest confidence after working to discover goals, developing enough understanding of our discovery processes to assert an equilibrium between the risk of doing something wrong and the cost of the work needed to uncover more stakeholders and their goals. This approach promotes moderation, since undiscovered goals may contradict any particular action. In short, we’d like a superintelligence that applies the non-parametric intuition: the intuition that we can’t know all the factors, but can partially discover them through well-motivated trade-offs.
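
That equilibrium can be pictured as a simple stopping rule. The sketch below is only a toy illustration under assumed numbers (not anything proposed in the essay itself): keep running rounds of stakeholder discovery while the estimated expected harm from still-undiscovered goals exceeds the cost of one more round, and act once the balance tips the other way.

```python
import random

def undiscovered_fraction(new_goals_per_round):
    """Crude estimate of how much remains unknown: if recent rounds are still
    turning up new goals, assume a larger share is still undiscovered."""
    recent = new_goals_per_round[-3:]
    return sum(recent) / (sum(new_goals_per_round) + 1)

def discover_then_act(harm_if_wrong=100.0, cost_per_round=2.0, max_rounds=50, seed=0):
    """Toy stopping rule: keep discovering stakeholder goals while the expected
    harm of acting with undiscovered goals exceeds the cost of another round.
    All numbers here are illustrative assumptions, not empirical values."""
    rng = random.Random(seed)
    new_goals_per_round = []
    for round_no in range(1, max_rounds + 1):
        # Diminishing returns: later rounds tend to surface fewer new goals.
        new_goals = sum(rng.random() < 1.0 / round_no for _ in range(5))
        new_goals_per_round.append(new_goals)
        expected_harm = undiscovered_fraction(new_goals_per_round) * harm_if_wrong
        if expected_harm <= cost_per_round:
            break
    return round_no, sum(new_goals_per_round)

if __name__ == "__main__":
    rounds, goals_found = discover_then_act()
    print(f"Acted after {rounds} discovery rounds, having found {goals_found} goals.")
```

In this toy, the estimate of what remains undiscovered is just “recent rounds are still turning up new goals”; a serious treatment would need a far more careful model of that uncertainty.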

However, I’ve come to the perspective that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn’t guarantee an appropriate outcome: it is possible for every apparently relevant source to fail to reflect significant consequences.

How could one possibly do better than accepting this limitation, namely that relevant information is sometimes absent from every apparently relevant source? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to notice that important design information is missing and to seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” is one example of a common failure to live in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Continue reading “Limits to the Nonparametric Intuition: Superintelligence and Ecology” »

Apr 10, 2017

Old generations should step down in favour of the new ones

Posted by in categories: existential risks, life extension

Dismantling the idea that older generations should ‘step down’ for younger ones.


Humans are real pros at sugarcoating. If you say that old people should step down for the sake of new generations, it sounds so noble and righteous, doesn’t it? What it actually means, though, is ‘we value old people less than new ones,’ and that doesn’t sound very noble or righteous at all. It is plain, brutal survival-of-the-species reasoning.

Continue reading “Old generations should step down in favour of the new ones” »

Apr 9, 2017

The Cybernetic Messiah: Transhumanism and Artificial Intelligence

Posted by in categories: biotech/medical, business, Elon Musk, ethics, existential risks, robotics/AI, space travel, transhumanism

Some weird religious stories involving transhumanism. Expect the conflict between religion and transhumanism to get worse as closed-minded conservative viewpoints are challenged by radical science and a future with no need for an afterlife: http://barbwire.com/2017/04/06/cybernetic-messiah-transhuman…elligence/ & http://www.livebytheword.blog/google-directors-push-for-comp…s-explain/ & http://ctktexas.com/pastoral-backstory-march-30th-2017/


By J. Davila Ashcroft

The recent film Ghost in the Shell is a science fiction tale about a young girl (known as Major) who is used as the subject of a Transhumanist/Artificial Intelligence experiment that turns her into a weapon. At first she complies, thinking the company behind the experiment saved her life after her family died. The truth, however, is that the company took her by force while she was a runaway. Major finds out that the company has done the same to others as well, and this knowledge causes her to turn on it. Throughout the story the viewer is confronted with the existential questions behind such an experiment, as Major struggles with the trauma of not feeling things like the warmth of human skin or the sensations of touch and taste, and feels less than human even though she is told many times that she is better than human. While this is obviously a science fiction story, what may come as a surprise to some is that the film’s subject matter is not just fiction. Transhumanism and Artificial Intelligence on the level explored in this film are all too real, and seem to be only a few years away.

Continue reading “The Cybernetic Messiah: Transhumanism and Artificial Intelligence” »

Apr 2, 2017

Norway Gets a New Doomsday Vault That Stores Data

Posted by in category: existential risks

Just in time for doomsday, Norway’s “Doomsday Vault” is getting an expansion. Officially known as the World Arctic Archive, the vault opened this week and has already taken submissions from two countries. This time, instead of storing seeds that will survive the apocalypse, the vault is archiving data using specially developed film.

Read more

Apr 1, 2017

10 High-Tech Ways Billionaires Plan to Survive Doomsday

Posted by in category: existential risks

Stay safe when society is unraveling above you.

Read more

Mar 13, 2017

The future looks too grim to wish for a longer life

Posted by in categories: existential risks, life extension

Is the future going to be so bad that longer, healthier lives will be undesirable? No, probably not.


The future looks grim? That’s quite an interesting claim, and I wonder whether there is any evidence to support it. In fact, I think there’s plenty of evidence to believe the opposite, i.e. that the future will be bright indeed. However, I can’t promise the future will certainly be bright. I am no clairvoyant, but neither are the doomsday prophets. We can all only speculate, no matter how ‘sure’ pessimists may say they are about the horrible dystopian future that allegedly awaits us. I will soon present the evidence for the bright future I believe in, but before I do, I would like to point out a few problems in the reasoning of the professional catastrophists who say that life won’t be worth living and that there is thus no point in extending it anyway.

First, we need to take into account that the quality of human life has been improving, not worsening, throughout history. Granted, there are still things that are not optimal, but there used to be many more. Sure, it sucks that your pet-peeve politician has been appointed president of your country (any reference to recent historical events is entirely coincidental), and it sucks that poverty and famine haven’t yet been entirely eradicated, but none of this implies that things will get worse. There’s a limit to how long a president can stay in office, and poverty and famine are disappearing all over the world. It takes time for changes to take place, and the fact that the world isn’t perfect yet doesn’t mean it never will be. People who are still chronologically young should especially appreciate that by the time they’re 80 or 90, a long time will have passed, and the world will certainly have changed in the meanwhile.

Continue reading “The future looks too grim to wish for a longer life” »

Mar 5, 2017

‘Who’s in control?’ Scientists gather to discuss AI doomsday scenarios

Posted by in categories: existential risks, robotics/AI

Artificial intelligence has the capability to transform the world — but not necessarily for the better. A group of scientists gathered to discuss doomsday scenarios, addressing the possibility that AI could become a serious threat.

The event, ‘Great Debate: The Future of Artificial Intelligence — Who’s in Control?’, took place at Arizona State University (ASU) over the weekend.

“Like any new technology, artificial intelligence holds great promise to help humans shape their future, and it also holds great danger in that it could eventually lead to the rise of machines over humanity, according to some futurists. So which course will it be for AI and what can be done now to help shape its trajectory?” ASU wrote in a press release.

Continue reading “‘Who’s in control?’ Scientists gather to discuss AI doomsday scenarios” »

Mar 5, 2017

AI Scientists Gather to Plot Doomsday Scenarios (and Solutions)

Posted by in categories: biotech/medical, cybercrime/malcode, Elon Musk, existential risks, military, policy, robotics/AI

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen — and how to stop it.

Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed “Envisioning and Addressing Adverse AI Outcomes,” it was a kind of AI doomsday game that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers (the red team) and defenders (the blue team), playing out AI-gone-very-wrong scenarios ranging from stock-market manipulation to global warfare.

Horvitz is optimistic — a good thing because machine intelligence is his life’s work — but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU’s Origins Project, the program running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed.

Continue reading “AI Scientists Gather to Plot Doomsday Scenarios (and Solutions)” »

Jan 20, 2017

The UN Okays Synthetic Biology

Posted by in categories: bioengineering, biological, ethics, existential risks, genetics

That’s a relief.


Of all the potentially apocalyptic technologies scientists have come up with in recent years, the gene drive is easily one of the most terrifying. A gene drive is a tool that allows scientists to use genetic engineering to override natural selection during reproduction. In theory, scientists could use it to alter the genetic makeup of an entire species—or even wipe that species out. It’s not hard to imagine how a slip-up in the lab could lead to things going very, very wrong.
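
The article doesn’t spell out the mechanics, but a minimal sketch may help show why a gene drive “overrides” natural selection: under ordinary Mendelian inheritance a rare allele stays rare, whereas a homing drive that biases its own transmission from carriers sweeps toward the whole population within a handful of generations. The model and numbers below are illustrative assumptions only, not anything from the article.

```python
def allele_frequency_over_time(transmission, start_freq=0.01, generations=12):
    """Deterministic toy model of an allele's frequency in a randomly mating
    population. `transmission` is the chance a heterozygous parent passes the
    allele on: 0.5 for ordinary Mendelian inheritance, close to 1.0 for a
    homing gene drive that copies itself onto the partner chromosome."""
    p = start_freq
    freqs = [p]
    for _ in range(generations):
        # Hardy-Weinberg genotype mix, then biased transmission from heterozygotes.
        p = p * p + 2 * p * (1 - p) * transmission
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    mendelian = allele_frequency_over_time(transmission=0.5)
    drive = allele_frequency_over_time(transmission=0.95)
    for gen, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"gen {gen:2d}: Mendelian {m:.3f}  drive {d:.3f}")
```

Setting transmission to 0.5 reproduces the neutral Mendelian case (the frequency stays flat at 1%), while values near 1.0 show the drive spreading through essentially the whole population in roughly a dozen generations.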

But like most great risks, the gene drive also offers incredible reward. Scientists are, for example, exploring how gene drive might be used to wipe out malaria and kill off Hawaii’s invasive species to save endangered native birds. Its perils may be horrifying, but its promise is limitless. And environmental groups have been campaigning hard to prevent that promise from ever being realized.

Continue reading “The UN Okays Synthetic Biology” »

Page 96 of 148