
Why Pessimistic Predictions for the Future of AI May Be More Hype than High Tech

The growth of human and computer intelligence has triggered a barrage of dire predictions about the rise of superintelligence and the singularity. But some remain skeptical, including Dr. Michael Shermer, a science historian and founding publisher of Skeptic Magazine.

The reason so many rational people put forward hypotheses that are more hype than high tech, Shermer says, is that being smart and educated doesn’t protect anyone from believing in “weird things.” In fact, sometimes smart and educated people are better at rationalizing beliefs that they hold for not-so-rational reasons. The smarter and more educated you are, the better able you are to find evidence to support what you want to be true, suggests Shermer.

“This explains why Nobel Prize winners speak about areas they know nothing about with great confidence and are sure that they’re right. Just because they have this great confidence of being able to do that (is) a reminder that they’re more like lawyers than scientists in trying to marshal a case for their client,” Shermer said. “(Lawyers) just put together the evidence, as much as you can, in support of your client and get rid of the negative evidence. In science you’re not allowed to do that, you’re supposed to look at all the evidence, including the counter evidence to your theory.”

The root of many of these false hypotheses, Shermer believes, lies in religion. Using immortality as an example, Shermer said the desire to live forever has strong parallels to religious belief; while many now prophesy that technology will ensure we live forever, groups throughout history have made similar promises that went unfulfilled.

“What we’d like to be true is not necessarily what is true, so the burden of proof is on them to go ahead and make the case. Like the cryonics people…they make certain claims that this or that technology is going to revive people that are frozen later…I hope they do it, but you’ve got to prove otherwise. You have to show that you can actually do that.”

Even if we do find a way to live forever, Shermer notes the negatives may outweigh the positives. It’s not just living longer that we want to achieve, but living longer at a high quality of life. There’s not much benefit in living to age 150, he adds, if one is bedridden for 20 or 30 years.

Instead, Shermer compares the process to the evolution of the automobile. While the flying cars promised by 1950s-era futurists haven’t come to pass, today’s automobile is vastly smarter and safer than those made 50 or 60 years ago. Forward thinkers have had moments of lucid foresight, but humans also have a long history of technology predictions that never materialize. Often, as with the automobile, we don’t notice technological change because it happens incrementally, year by year.

“That’s what’s really happening with health and longevity. We’re just creeping up the ladder slowly but surely. We’ve seen hip replacements, organ transplants, better nutrition, exercise, and getting a better feel for what it takes to be healthy,” Shermer says. “The idea that we’re gonna have one big giant discovery that’s going to change everything? I think that’s less likely than just small incremental things. A Utopian (society) where everybody gets to live forever and they’re infinitely happy and prosperous and so on? I think it’s unrealistic to think along those lines.”

Looking at the future of technology, Shermer is equally reluctant to buy into predictions of artificial intelligence taking over the world. “I think the concern about AI turning evil (and) this dystopian, science fiction perspective is again, not really grounded in reality. I’m an AI optimist, but I don’t think the AI pessimists have any good arguments,” Shermer said.

While we know, for the most part, which types of government work well, we have no similar precedent for complex AI systems. Humans will remain in control, Shermer believes, and before we start passing laws and restrictions to curb AI out of fear, we should keep improving our computers and artificial intelligence to make life better, evaluating and acting as these systems continue to evolve.

Newly discovered planet could destroy Earth any day now

Look up the definition of irresponsible journalism and you’ll probably find a link to THIS article.

A mysterious planet that wiped out life on Earth millions of years ago could do it again, according to a top space scientist.

And some believe the apocalyptic event could happen as early as this month.

Planet Nine — a new planet discovered at the edge of the solar system in January — has triggered comet showers that bomb the Earth’s surface, killing all life, says Daniel Whitmire, of the University of Louisiana.

Could ‘Planet X’ Cause Comet Catastrophes on Earth?

As astronomers track down more clues as to the existence of a large world orbiting the sun in the outer fringes of the solar system, a classic planetary purveyor of doom has been resurrected as a possible trigger behind mass extinctions on Earth.

Yes, I’m talking about “Planet X.” And yes, there’s going to be hype.


Something Just Slammed Into Jupiter

Astronomers have captured video evidence of a collision between Jupiter and a small celestial object, likely a comet or asteroid. Though it looks like a small blip of light, the resulting explosion was unusually powerful.

As Phil Plait of Bad Astronomy reports, the collision occurred on March 17, but confirmation of the event only emerged this week. An Austrian amateur astronomer captured the unexpected event with a 20-centimeter telescope, though the flash could still turn out to be some kind of visual artifact.

Who’s Afraid of Existential Risk? Or, Why It’s Time to Bring the Cold War out of the Cold

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan, in the guise of an ongoing US presidential bid, to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focuses on how we might prevent, if not predict, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to halt artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high magnitude of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequence of not believing in God, to wit, eternal damnation, rationally compels you to believe in God, despite your instinctive doubts about the deity’s existence.
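The arithmetic behind this move is ordinary expected-value reasoning. As a minimal sketch (the probability and the stakes below are purely illustrative assumptions, not estimates from Bostrom or anyone else discussed here):

```latex
% Expected-value sketch: a tiny probability p can still dominate a
% decision if the magnitude of the loss U is large enough.
% The numbers are illustrative assumptions only.
\[
  \mathbb{E}[\text{loss}] = p \cdot U
\]
\[
  \underbrace{10^{-6}}_{p \text{ (one in a million)}}
  \times
  \underbrace{10^{10}\ \text{lives}}_{U \text{ (lives at stake)}}
  = 10^{4}\ \text{expected lives lost}
\]
```

On this logic, even a one-in-a-million catastrophe outweighs many certain but smaller harms, and it is exactly this move that the argument below calls into question.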

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from the worst-case scenarios proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need to develop now the technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios, including the likely negative ones, that it could not cope were a very unlikely, very negative scenario to come to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which, while benign in themselves, may produce malign consequences; call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: what would be lost in each scenario that is vital to sustaining the ‘human condition’, however defined? The answers would provide the basis for future innovation policy, namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary because the Doomsday scenarios never come to pass, they will nevertheless make our normal lives better, as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge, MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.

Dr. Sarif, Or How I Learned To Stop Worrying And Love The Human Revolution

I am not in fact talking about the delightful Deus Ex game, but rather about the actual revolution in society and technology we are witnessing today. Pretty much every day, in whatever news source I look at, be it cable news networks or Facebook feeds or what have you, I see fear mongering: “Implantable chips will let the government track you!” or “Hackers will soon be able to steal your thoughts!” (Seriously, I’ve seen both of these, and much more, and much crazier.) But I’m here to tell you two things. First, calm the hell down. Nearly every doomsday scenario painted by fear-mongering assholes is either impossible or so utterly unlikely as to be effectively impossible. And second, psych the hell up, because this revolution is actually extremely exciting and worth getting excited about. But for good reasons, not bad ones.