BLOG

Archive for the ‘singularity’ category: Page 83

Dec 9, 2014

Sam Chaltain: The Singularity is Coming. What Should Schools Be Doing About It?

Posted by in categories: education, singularity

Sam Chaltain — NEPC

Whenever I want to get a feel for the national mood, I look to Hollywood – and the films it thinks we’ll pay to see. In the post-9/11 malaise, there was the dystopian world of The Dark Knight. In the era of extended male adolescence, there’s just about anything from Judd Apatow. And now, in the shadow of the Technological Singularity, there is a slew of movies about humankind’s desire to transcend the biological limits of body and brain.

Continue reading “Sam Chaltain: The Singularity is Coming. What Should Schools Be Doing About It?” »

Oct 3, 2014

What if your memories could live past your mortal shelf life?

Posted by in categories: cyborgs, futurism, innovation, life extension, posthumanism, singularity, transhumanism

Would you have your brain preserved? Do you believe your brain is the essence of you?

To the noted American neuroscientist and futurist Dr. Ken Hayworth, the answer is an emphatic “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key to encoding our individual identities.

A self-described transhumanist and president of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, and to explore and push emerging approaches that could change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved under today’s experimental preservation methods. Such methods include vitrification, the procedure used in cryonics to try to keep human organs from being destroyed by ice formation when tissue is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

Continue reading “What if your memories could live past your mortal shelf life?” »

Sep 30, 2014

Dr. Ken Hayworth: What is the Future of your Mind?

Posted by in categories: biotech/medical, futurism, neuroscience, singularity

We live in a world where technological advances continually allow new and provocative opportunities to deeply explore every aspect of our existence. Understanding the human brain remains one of our most important challenges – but with 100 billion neurons to contend with, the painstakingly slow progress can give the impression that we may never succeed. Brain mapping research unlocks secrets to our mental, social and physical wellness.

In our upcoming releases for the Galactic Public Archives, the noted American neuroscientist and futurist Dr. Ken Hayworth outlines why he feels that mapping the brain will not be a quixotic task. Through this, he reveals his unconventional plan to ensure humanity’s place in the universe—forever.

We admit to teasing you with the link below in preparation for the main events.

Sep 22, 2014

VICTORY!

Posted by in categories: business, entertainment, finance, futurism, science, singularity

VICTORY: Getting Fortune-500 Prospective Client’s Cash, Continually and Successfully! By Mr. Andres Agostini at www.linkedin.com/in/AndresAgostini

HOW TO SUCCEED IN BUSINESS ACCORDING TO THESE COMPANIES:

Mitsubishi Motors, Honda, Daimler-Chrysler’s Mercedes-Benz, Toyota, Royal Dutch Shell Oil Company, Google, Xerox, Exxon-Mobil, Boeing, Amazon, Procter & Gamble, NASA and DARPA, Lockheed Martin, RAND Corporation and HUDSON Institute, Northrop Grumman Corporation, etc.


Continue reading “VICTORY!” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted by in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
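
Bostrom’s case here is, at bottom, an expected-value calculation: a very small probability multiplied by an astronomically large loss can still outweigh a likelier but bounded harm. A minimal sketch in Python, using purely hypothetical numbers rather than anything from the post, makes that arithmetic explicit:

# Toy expected-value comparison illustrating the logic described above.
# The probabilities and magnitudes are hypothetical placeholders, not estimates.
def expected_loss(probability, magnitude):
    """Expected loss = probability of the event times the harm if it occurs."""
    return probability * magnitude

ordinary = expected_loss(probability=0.1, magnitude=1e6)         # frequent, bounded harm
existential = expected_loss(probability=0.0001, magnitude=1e12)  # rare, civilisation-ending harm

print(f"Ordinary risk expected loss:    {ordinary:,.0f}")    # 100,000
print(f"Existential risk expected loss: {existential:,.0f}")  # 100,000,000
# Even at one-thousandth the probability, the existential risk dominates,
# which is the sense in which Bostrom claims it merits significant resources.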

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove — routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »

Jul 6, 2014

By 2045, Physicist Says ‘The Top Species Will No Longer Be Humans’

Posted by in categories: human trajectories, posthumanism, singularity

Dylan Love — Business Insider

“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.”

These are the words of Louis Del Monte, physicist, entrepreneur, and author of “The Artificial Intelligence Revolution.” Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world’s combined human intelligence too.

Read more

Jun 30, 2014

New book: The Beginning and the End by Clément Vidal

Posted by in categories: alien life, complex systems, ethics, philosophy, physics, posthumanism, singularity

By Clément Vidal — Vrije Universiteit Brussel, Belgium.

I am happy to inform you that I just published a book which deals at length with our cosmological future. I made a short book trailer introducing it, and the book has been mentioned in the Huffington Post and H+ Magazine.

About the book:

In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end, or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe’s deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could operate on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution, and intelligence from a novel cosmological framework that should stir debate for years to come.

About the author:

Dr. Clément Vidal is a philosopher with a background in logic and cognitive sciences. He is co-director of the ‘Evo Devo Universe’ community and founder of the ‘High Energy Astrobiology’ prize. To satisfy his intellectual curiosity when facing the big questions, he brings together many areas of knowledge such as cosmology, physics, astrobiology, complexity science, evolutionary theory and philosophy of science.
http://clement.vidal.philosophons.com

You can get 20% off with the discount code ‘Vidal2014’ (valid until 31st July)!

Jun 19, 2014

‘We’re Living in Science Fiction Right Now’ Diamandis Tells GSP 2014

Posted by in category: singularity

— Singularity Hub


“It’s that time again.” These were the words on more than one pair of lips at Singularity University’s 2014 Graduate Studies Program (GSP) opening ceremony.

The 10-week summer program was Singularity University’s first offering six years ago, and it remains at the heart of Singularity University’s mission today—to use technology to positively impact a billion people in the next ten years.

Read More

Jun 19, 2014

Mind uploading won’t lead to immortality

Posted by in categories: bionic, biotech/medical, evolution, futurism, human trajectories, life extension, neuroscience, philosophy, posthumanism, robotics/AI, singularity, transhumanism

Uploading the content of one’s mind, including one’s personality, memories and emotions, into a computer may one day be possible, but it won’t transfer our biological consciousness and won’t make us immortal.

Uploading one’s mind into a computer, a concept popularized by the 2014 movie Transcendence starring Johnny Depp, is likely to become at least partially possible, but won’t lead to immortality. Major objections have been raised regarding the feasibility of mind uploading. Even if we could overcome every technical obstacle and successfully copy the totality of one’s mind, emotions, memories, personality and intellect into a machine, that would be just that: a copy, which itself can be copied again and again on various computers.

THE DILEMMA OF SPLIT CONSCIOUSNESS

Neuroscientists have not yet been able to explain what consciousness is, or how it works at a neurological level. Once they do, it might be possible to reproduce consciousness in artificial intelligence. If that proves feasible, then it should in theory be possible to replicate our consciousness on computers too. Or is that jumping to conclusions?

Continue reading “Mind uploading won't lead to immortality” »

Jun 12, 2014

Could a machine or an AI ever feel human-like emotions?

Posted by in categories: bionic, cyborgs, ethics, existential risks, futurism, neuroscience, philosophy, posthumanism, robotics/AI, singularity, transhumanism

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day will be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% genetically identical, even across racial groups, how could we possibly expect sentient robots to feel the exact same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

Continue reading “Could a machine or an AI ever feel human-like emotions?” »

Page 83 of 91