BLOG

Archive for the ‘existential risks’ category: Page 76

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted by in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems that can engage theoretical and real-world problems with the flexibility of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily used on today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the upcoming technology for organizing knowledge on the net, the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may begin inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to real-world concepts and their coherences more autonomously. Actually getting from expert systems to AGI will require approaches to bootstrap self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past we have faced new kinds of security challenges: DoS attacks, e-mail and PDF worms, and a plethora of other malware that sometimes made it even into military and other sensitive networks and stole credit cards and private data en masse. These were, and remain, among the first serious security incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the implications of strong AI first means realizing that, if AGI takes off hard enough, there probably won’t be any human-predictable hardware, software, or interfaces around for long.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is itself riddled with biases rooted in its biological evolution. Even the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata are a case in point: simple rules for generating new dots, based on which dots, generated by the same rule, were observed in the previous step. Many of these rules can be encoded in as little as 4 letters (32 bits), yet generate astounding complexity.
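As a concrete illustration of how little information such a rule needs, here is a minimal sketch of a one-dimensional elementary cellular automaton in Python (the function names `step` and `run` are illustrative, not from the post; an elementary rule fits in a single byte, since each new cell depends only on the 3 cells above it, giving 2^3 = 8 rule bits):

```python
def step(cells, rule):
    """Apply one elementary-CA update to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood of cell i, wrapping around at the edges.
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> index) & 1)              # look up the rule bit
    return out

def run(rule, width=31, steps=15):
    """Evolve a single seed cell and return each generation as a string."""
    row = [0] * width
    row[width // 2] = 1                              # one cell 'on' in the middle
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if c else "." for c in row))
        row = step(row, rule)
    return rows

# Rule 30 is a much-studied chaotic rule; any rule number 0-255 works.
for line in run(30):
    print(line)
```

Running this prints a triangular, seemingly random pattern from a rule that is nothing more than the 8 bits of the number 30, which is exactly the kind of cheap-to-specify, hard-to-predict behaviour the paragraph above describes.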

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Feb 10, 2011

New Implication of Einstein’s Happiest Thought Is Last Hope for Planet

Posted by in categories: existential risks, particle physics

Einstein saw that clocks located “more downstairs” in an accelerating rocket predictably tick slower. This was his “happiest thought” as he often said.

However, as everything looks normal on the lower floor, the normal-appearing photons generated there actually have less mass-energy. So do all local masses there, by general covariance, and hence also all associated charges down there.

The last two implications were overlooked for a century. “This cannot be,” declared more than 30 renowned scientists, in order to make a prestigious experiment with which they have ties appear innocuous.

This would make an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein’s table has a potentially unbounded impact. Only if it is appreciated within a few days’ time can all human beings — including the Egyptians — breathe freely again.

Continue reading “New Implication of Einstein's Happiest Thought Is Last Hope for Planet” »

Jan 30, 2011

Summary of My Scientific Results on the LHC-Induced Danger to the Planet

Posted by in categories: existential risks, particle physics

- submitted to the District Attorney of Tubingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tubingen, Auf der Morgenstelle A, 72076 Tubingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger — almost as if Nature had set a trap for humankind should it fail to watch out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results

Continue reading “Summary of My Scientific Results on the LHC-Induced Danger to the Planet” »

Jan 17, 2011

Stories We Tell

Posted by in categories: complex systems, existential risks, futurism, lifeboat, policy


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

Continue reading “Stories We Tell” »

Nov 26, 2010

“Rogue states” as a source of global risk

Posted by in categories: existential risks, geopolitics

Some countries are a threat as possible sources of global risk. First of all, we are talking about countries that have developed but poorly controlled military programs, as well as the specific motivation that drives them to create a Doomsday weapon. Usually such a country is under threat of attack and total conquest, and its system of control rests on a kind of irrational ideology.

The most striking example of such a global risk is North Korea’s effort to weaponize avian influenza (North Korea trying to weaponize bird flu: http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=50093), which could lead to the creation of a virus capable of destroying most of Earth’s population.

It does not really matter which of these is primary: the irrational ideology, the increased secrecy, the excess of military research, or the real threat of external aggression. Usually all these causes go hand in hand.

The result is the appearance of conditions for creating the most exotic defenses. In addition, an excess of military scientists and equipment allows individual scientists to become, for example, bioterrorists. The high level of secrecy means that the state as a whole does not know what is being done in some of its labs.

Continue reading “"Rogue states" as a source of global risk” »

Nov 21, 2010

TSA and the Coming Great Filter

Posted by in categories: existential risks, policy

Many people think that the issues Lifeboat Foundation is discussing will not be relevant for many decades to come. But recently a major US Governmental Agency, the TSA, decided to make life hell for 310 million Americans (and anyone who dares visit the USA) as it reacts to the coming Great Filter.

What is the Great Filter? Basically it is whatever has caused our universe to be dead with no advanced civilizations in it. (An advanced civilization is defined as a civilization advanced enough to be self-sustaining outside its home planet.)

The most likely explanation for this Great Filter is that civilizations eventually develop technologies so powerful that they give individuals the means to destroy all life on the planet. Technology has now become powerful enough that the TSA even sees a 3-year-old girl as a threat who might take down a plane, so it takes away her teddy bear and gropes her.
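The logic of the Great Filter can be made concrete with a toy Drake-style calculation: the expected number of advanced civilizations is a product of factors, so a single tiny factor (the filter) drives the whole product toward zero. The factor names and values below are hypothetical, chosen purely for illustration and not taken from the post:

```python
# Toy Drake-style estimate. All values are illustrative placeholders.
factors = {
    "star_formation_rate":           10.0,  # new stars per year in the galaxy
    "fraction_with_planets":          0.5,
    "habitable_planets_per_star":     0.2,
    "fraction_developing_life":       0.1,
    "fraction_becoming_intelligent":  0.1,
    "fraction_surviving_own_tech":   1e-6,  # the hypothetical Great Filter
    "civilization_lifetime_years":   1e4,
}

expected = 1.0
for value in factors.values():
    expected *= value  # one tiny survival factor dominates the product

print(f"Expected advanced civilizations: {expected:g}")
```

However generous the other factors are made, a filter factor near zero leaves the expected count near zero, which is why a dead-looking universe suggests such a filter exists somewhere in the chain.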

Continue reading “TSA and the Coming Great Filter” »

Nov 11, 2010

What’s Your Dream for the Future of California?

Posted by in categories: education, events, existential risks, futurism, habitats, human trajectories, open access, policy, sustainability


California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote; anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

Continue reading “What's Your Dream for the Future of California?” »

Nov 9, 2010

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Posted by in categories: cybercrime/malcode, ethics, existential risks, futurism, robotics/AI

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

To promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Continue reading “The Singularity Hypothesis: A Scientific and Philosophical Assessment” »

Sep 2, 2010

Self Transcendence

Posted by in categories: ethics, existential risks, futurism

Will our lumbering, industrial-age-driven information age segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or take quantum leaps. It could. Will information technology take off exponentially? It’s accelerating in that direction. The way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem: the egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It is as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama or comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure and the occasional least-expected outcome? Just as Shakespeare so elegantly illustrated. Good drama illustrates aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we of this civilization encounter an existential crisis. Or crunch into a bottleneck out of which … will emerge what? Or extinguish civilization with our egoistic conduct, acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate changes, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water and values colliding against each other in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

Continue reading “Self Transcendence” »

Jul 22, 2010

My book in Lulu

Posted by in category: existential risks

My book “STRUCTURE OF THE GLOBAL CATASTROPHE Risks of human extinction in the XXI century” is now available through Lulu (http://www.lulu.com/product/paperback/structure-of-the-globa…y/11727068). It is also available free on Scribd (http://www.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CA…I-century–). This book is intended to be a complete, up-to-date sourcebook of information about existential risks.

Page 76 of 83