BLOG

Archive for the ‘robotics/AI’ category: Page 2412

Aug 13, 2012

The Electric Septic Spintronic Artilect

Posted in categories: biological, biotech/medical, business, chemistry, climatology, complex systems, counterterrorism, defense, economics, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, homo sapiens, human trajectories, information science, military, neuroscience, nuclear weapons, policy, robotics/AI, scientific freedom, singularity, space, supercomputing, sustainability, transparency

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega-mind coming into existence within the next few decades. I am honestly not trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

Jun 1, 2012

Response to the Global Futures 2045 Video

Posted in categories: futurism, human trajectories, nanotechnology, robotics/AI, scientific freedom, singularity, space

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades, so the financial crisis was no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises, like the Black Plague, no longer happen. We have big problems, but we have also built up many resources over the centuries to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Age. The biggest problem is that we are not advancing as fast as we could and that many are still starving, sick, etc. However, it has always been this way; the 20th century was very brutal! But we are advancing, and it is mostly known threats like WMDs that could cause a disaster. In the main, the world is getting safer every day as we better understand it.

Continue reading “Response to the Global Futures 2045 Video” »

Jan 10, 2012

Verne, Wells, and the Obvious Future Part 1

Posted in categories: asteroid/comet impacts, business, education, engineering, ethics, events, existential risks, finance, fun, futurism, media & arts, military, nuclear weapons, philosophy, physics, policy, robotics/AI, space, transparency

Steamships, locomotives, electricity: these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, the aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply conjuring such wonders from fanciful thought or children’s rhymes. As science advanced in the mid-19th and early 20th centuries, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected: hydrogen balloons and parachutes had been around for over a century, and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions came about in ways far removed from, yet still connected to, their fictional counterparts. The U.S. Navy-flagged, steam-driven Nautilus swam the ocean blue under nuclear power not long before rockets took men to the moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space; however, there was a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The Invisible Man of Wells took the form of invisible aircraft playing a less-than-human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein: simply going fast enough through space will take a human being millions of years into the future. However, traveling back in time remains as much an impossibility as the anti-gravity Cavorite of The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs, except that we are the aliens invading the natural world’s ecosystem with our genetically modified creations, and we could very well soon meet our end as a result.
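
To put numbers on Einstein’s one-way time machine (a back-of-the-envelope aside of mine, not part of the original essay), the effect is ordinary special-relativistic time dilation:

\[ \Delta t_{\text{traveler}} = \Delta t_{\text{Earth}} \sqrt{1 - v^2/c^2} \]

For a crew to age 50 years (an assumed figure for illustration) while a million years pass on Earth, the dilation factor must be 1,000,000 / 50 = 20,000, which corresponds to cruising within about one part in a billion of the speed of light: fantastically difficult, but forbidden by no known law of physics. No comparable solution exists for the return trip.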

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb: an agent so destructive that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of Earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distraction of whimsical technological fairy tales. “Beam me up” was simply a way to cut the cost of special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble: these have watered down what our future could be into childish wish fulfillment and escapism.

Continue reading “Verne, Wells, and the Obvious Future Part 1” »

Nov 13, 2011

D’Nile ain’t just a river in Egypt…

Posted in categories: business, complex systems, cosmology, economics, education, ethics, existential risks, finance, futurism, geopolitics, human trajectories, humor, life extension, lifeboat, media & arts, neuroscience, open access, open source, philosophy, policy, rants, robotics/AI, space, sustainability

Greetings, fellow travelers. Please allow me to introduce myself: I’m Mike ‘Cyber Shaman’ Kawitzky, independent filmmaker and writer from Cape Town, South Africa, one of your media/art contributors/co-conspirators.

It’s a bit daunting posting to such an illustrious board, so let me try to imagine, with you, how to regard the present with nostalgia while looking forward to the past, knowing that a millisecond away in the future exist thoughts to think; it’s the mode of neural text, reverse causality, non-locality and quantum entanglement, where the traveller is the journey into a world in transition; after 9/11, after the economic meltdown, after the oil spill, after the tsunami, after Fukushima, after 21st-century melancholia upholstered by anti-psychotic drugs that help us forget ‘the good old days’; because it’s business as usual for the 1%, the rest continue downhill with no brakes. Can’t wait to see how it all works out.

Please excuse me, my time machine is waiting…
Post-cyberpunk and into Transhumanism

Aug 20, 2011

The Nature of Identity Part 3

Posted in categories: neuroscience, robotics/AI

(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory experience – how neural information is translated into the language we understand, that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons emit neither color nor scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Continue reading “The Nature of Identity Part 3” »

Aug 20, 2011

More on Problems of Uploading an Identity

Posted in categories: neuroscience, robotics/AI

The vulnerability of the bio body is the source of most threats to its existence.

We have looked at the question of uploading the identity by uploading the memory contents, on the assumption that the identity is contained in the memories. I believe this assumption has been shown to be almost certainly wrong.

What we are concentrating on is the identity as the viewer of its perceptions, the centroid or locus of perception.

It is the fixed reference point. And the locus of perception is always Here, and it is always Now. This is abbreviated here to 0,0.

Continue reading “More on Problems of Uploading an Identity” »

Aug 20, 2011

The Nature of the Identity, with Reference to Androids

Posted in categories: neuroscience, robotics/AI

I have been asked to mention the following.
The Nature of The Identity — with Reference to Androids

The nature of the identity is intimately related to information and information processing.

The importance and the real nature of information are only now being gradually realised.

But the history of the subject goes back a long way.

Continue reading “The Nature of the Identity, with Reference to Androids” »

Aug 4, 2011

The Basic Problem

Posted in category: robotics/AI

Most of the threats to human survival come down to one factor – the vulnerability of the human biological body.

If a tiny fraction of the sums being spent on researching or countering these threats were used to address the question of a non-biological alternative, a good team could research and develop a working prototype in a matter of years.

The fundamental question does not lie in the perhaps inappropriately named “Singularity” (of the AI kind), but rather in how neural impulses are translated into sensory experience – sounds, colors, tastes, odours, tactile sensations.

By what means is the TRANSLATION effected?

Continue reading “The Basic Problem” »

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with the flexibility of an intelligent living being but the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI techniques, such as machine learning algorithms and expert systems, are already heavily utilized on today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the upcoming technology for organizing knowledge on the net known as the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may be inventing early parts of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and they promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to those concepts and their interconnections more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].
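
To give a feel for what machine-interpretable statements look like in practice (a minimal sketch of the general idea using the Python rdflib library; the vocabulary and URIs here are invented for illustration):

# A tiny semantic-web example: facts encoded as subject-predicate-object
# triples that software can merge and query across sites.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary for this sketch

g = Graph()
# Encode "customer 42 purchased a book titled 'Neuromancer'" as triples.
g.add((EX.customer42, RDF.type, EX.Customer))
g.add((EX.customer42, EX.purchased, EX.item17))
g.add((EX.item17, RDF.type, EX.Book))
g.add((EX.item17, EX.title, Literal("Neuromancer")))

print(g.serialize(format="turtle"))  # emit the graph in Turtle syntax

Because the graph is made of uniform triples, a program that has never seen this particular vocabulary can still merge it with other data and answer queries over it, which is the sense in which the semantic web lets machines handle real-world concepts.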

In the recent past we saw new kinds of security challenges: DoS attacks, e-mail and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks, and which stole credit cards and private data en masse. These were, and are, among the first serious security incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the implications of strong AI means first realizing that, if AGI takes off hard enough, there will probably not be any human-predictable hardware, software, or interfaces around for long.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is itself riddled with biases rooted in its biological evolution. Even the application of the simplest mathematical equations can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, generated by the same rule, were observed in the previous step. Many of these rules can be encoded in as little as 4 letters (32 bits), and generate astounding complexity.
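
As a concrete illustration (my sketch, not from the original post), Wolfram’s elementary cellular automaton Rule 30 fits its entire update table into just eight bits, yet from a single live cell it generates a famously irregular pattern, one good enough to have served as a pseudo-random number source:

# Elementary cellular automaton, Rule 30: the whole rule is the 8-bit
# number 30; bit i of it gives the next cell state for neighborhood i.
RULE = 30

def step(cells):
    """Compute the next row from the current one; edges wrap around."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)

Sixteen generations of output already show the aperiodic triangular structure that makes such automata a textbook case of complexity emerging from trivial rules.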

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Nov 9, 2010

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Posted in categories: cybercrime/malcode, ethics, existential risks, futurism, robotics/AI

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will offer a scientific and philosophical analysis of the conjectures surrounding a technological singularity. We solicit scholarly essays that analyze this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Continue reading “The Singularity Hypothesis: A Scientific and Philosophical Assessment” »