BLOG

Archive for the ‘robotics/AI’ category: Page 2032

Feb 24, 2009

I Don’t Want To Live in a Post-Apocalyptic World

Posted in categories: asteroid/comet impacts, defense, existential risks, futurism, habitats, robotics/AI, space

Image from The Road film, based on Cormac McCarthy's book

How About You?
I’ve just finished reading Cormac McCarthy’s The Road at the recommendation of my cousin Marie-Eve. The setting is a post-apocalyptic world, and the protagonists — a father and son — spend essentially all their time looking for food and shelter while trying to avoid being robbed or killed by other starving survivors.

It very much makes me not want to live in such a world. Everybody would probably agree. Yet few people actually do much to reduce the chances of such a scenario happening. In fact, it’s worse than that; few people even seriously entertain the possibility that such a scenario could happen.

People don’t think about such things because they are unpleasant and because they feel there is nothing they can do about them. But if more people actually did think about them, we could do something. We might never be completely safe, but we could significantly improve our odds over the status quo.


Oct 8, 2008

Global Catastrophic Risks: Building a Resilient Civilization

Posted in categories: biological, biotech/medical, chemistry, cybercrime/malcode, defense, events, futurism, geopolitics, lifeboat, military, nanotechnology, nuclear weapons, robotics/AI

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the proactive steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume, edited by Nick Bostrom and Milan Cirkovic, with contributions from some of the faculty for this seminar.


Aug 30, 2008

The Singularity Summit 2008

Posted in categories: futurism, robotics/AI

The Singularity Institute for Artificial Intelligence has announced the details of The Singularity Summit 2008. The event will be held on October 25, 2008, at the Montgomery Theater in San Jose, California. Previous summits have featured Nick Bostrom, Eric Drexler, Douglas Hofstadter, Ray Kurzweil, and Peter Thiel.

Keynote speakers include Ray Kurzweil, author of The Singularity is Near, and Justin Rattner, CTO of Intel. At the Intel Developer Forum on August 21, 2008, Rattner explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”

Other featured speakers include:

  • Dr. Ben Goertzel, CEO of Novamente, director of research at SIAI
  • Dr. Marvin Minsky
  • Nova Spivack, CEO of Radar Networks, creator of Twine.com
  • Dr. Vernor Vinge
  • Eliezer Yudkowsky

You can find a comprehensive list of other upcoming Singularity and Artificial Intelligence events here.

Jan 13, 2008

Carnegie Mellon study achieves significant results in decoding human thought

Posted in categories: neuroscience, robotics/AI

Newsweek is reporting the results of a study by researchers at Carnegie Mellon who used MRI technology to scan the brains of human subjects. The subjects were shown a series of images of various tools (hammer, drill, pliers, etc.) and were then asked to think about the properties of those tools, while a computer was tasked with determining which item each subject was thinking about. To make the task even more challenging, the researchers excluded information from the brain’s visual cortex, which would have reduced the problem to a simpler pattern-recognition exercise for which decoding techniques are already known. Instead, they focused the scanning on higher-level cognitive areas.

The computer was able to determine with 78 percent accuracy when a subject was thinking about a hammer, say, instead of a pair of pliers. With one particular subject, the accuracy reached 94 percent.
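
The article doesn’t say how the decoding itself was done, but as a rough illustration, a minimal multi-class decoding sketch might look like the following. It assumes per-trial voxel activations have already been extracted (with visual-cortex voxels excluded, as described above); the data here are random stand-ins, and the scikit-learn classifier choice, along with the names X and y, is purely illustrative rather than the study’s actual method.

```python
# Minimal sketch of the kind of decoding task described above,
# assuming per-trial voxel activations are already extracted.
# Data and classifier choice are illustrative stand-ins, not the
# study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 60 trials x 500 voxel activations, 3 tool classes.
# Real inputs would be fMRI features with visual-cortex voxels dropped.
X = rng.normal(size=(60, 500))
y = np.repeat([0, 1, 2], 20)  # 0 = hammer, 1 = drill, 2 = pliers

# Standardize voxel features, then fit a linear multi-class classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy; on random data this sits near the ~33%
# chance level, whereas the study reports 78% (94% for one subject).
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

The gap between chance-level performance on random stand-in data and the reported 78 to 94 percent accuracy is what makes the result notable: the higher-level cognitive areas evidently carry enough tool-specific structure for a classifier to pick up.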

Nov 29, 2007

Planning for First Lifeboat Foundation Conference Underway

Posted in categories: biological, biotech/medical, cybercrime/malcode, defense, existential risks, futurism, geopolitics, lifeboat, nanotechnology, robotics/AI, space

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting lineup of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or just want to learn more, please contact me at [email protected].

Mar 15, 2007

2007 DARPA Military Technology Plan: Future Medical Promise or Danger?

Posted in categories: biological, biotech/medical, defense, lifeboat, robotics/AI


DARPA (the Defense Advanced Research Projects Agency) is the R&D arm of the US military for far-reaching future technology. What most people do not realize is how much revolutionary medical technology comes out of this agency’s military R&D programs. For those in need of background, you can read about the Army and DARPA’s future-soldier Land Warrior program and its medtech offshoots, as well as why DARPA does medical research and development that industry won’t. Fear of these future military technologies runs high, with a push towards neural activation as a weapon, direct brain-computer interfaces, and drones. However, the new program has enormous potential for revolutionary medical progress as well.

It has been said that technology is neutral; it is the application that is either good or evil. (It is worth a side track to read a discussion of this concept.)

DARPA’s areas of focus for 2007 and beyond are:

  1. Chip-Scale Atomic Clock
  2. Global War on Terrorism
  3. Unmanned Air Vehicles
  4. Militarization of Space
  5. Supercomputer Systems
  6. Biological Warfare Defense
  7. Prosthetics
  8. Quantum Information Science
  9. Newton’s Laws for Biology
  10. Low-Cost Titanium
  11. Alternative Energy
  12. High Energy Liquid Laser Area Defense System

The potential for the destructive use of these technologies is obvious. For a complete review of these projects and the beneficial medical applications of each, visit docinthemachine.com.

Dec 22, 2006

UK Government Report Talks Robot Rights

Posted in categories: robotics/AI, supercomputing

In an important step forward for acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government suggests that robots could one day have the same rights and responsibilities as human citizens. The Financial Times reports:

The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future. Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist.

The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.

It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.
