BLOG

Apr 21, 2010

Software and the Singularity

Posted by Keith Curtis in categories: futurism, robotics/AI

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. According to its proponents, the world will be amazing then.3 The flaw with such a date estimate, beyond the fact that such estimates are always prone to extreme error, is that continuous learning is not yet part of our software’s foundation. AI code today lives at the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the Singularity will happen as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do one such calculation in your head every second, it would take you about 30 years to do the billion that your computer does in that one second.
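
The arithmetic is easy to check; here is a quick sketch (nothing assumed beyond the numbers above):

```python
# How long would a billion one-second mental calculations take a person?
operations = 1_000_000_000                # what a computer does in ~1 second
seconds_per_year = 60 * 60 * 24 * 365     # 31,536,000
print(operations / seconds_per_year)      # ~31.7, i.e. roughly 30 years
```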

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image-recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, values that are trivial to change. No one has shown robust vision-recognition software running at any speed, on any size of image!
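
A sketch of that reduction, with hypothetical stage names and byte counts chosen only to illustrate the drop-off (the 3-megabyte starting point matches the figure above):

```python
# Illustrative data volumes through a hypothetical recognition pipeline.
# The intermediate stages and their sizes are invented; the steep
# shrinkage from raw pixels to a final concept is the point.
stages = [
    ("raw 1-megapixel RGB image", 3_000_000),  # 3 bytes per pixel
    ("edge / gradient map",       1_000_000),
    ("detected features",            10_000),
    ("candidate objects",               200),
    ("final concept: 'my house'",        20),  # tens of bytes
]
for name, nbytes in stages:
    print(f"{name:30} ~{nbytes:>9,} bytes")
```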

While a brain is different from a computer in that it works in parallel, such parallelization only makes things happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on today’s computers, which can do only one thing at a time, but at a rate of billions of operations per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4
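
A minimal sketch of that claim, with NumPy’s vectorized operations standing in for parallel hardware: the all-at-once path and the one-at-a-time loop produce the same answer; only the speed differs.

```python
import numpy as np

data = np.random.rand(1_000_000)

# "Parallel": one vectorized operation over all million values at once.
parallel = np.sqrt(data) * 2.0 + 1.0

# "Serial": the identical operation applied one value at a time.
serial = np.empty_like(data)
for i, x in enumerate(data):
    serial[i] = (x ** 0.5) * 2.0 + 1.0

# Same result either way; parallelism changes the speed, not the answer.
assert np.allclose(parallel, serial)
```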

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built into today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the Singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, so this feedback loop is another reason that makes 2045 a meaningless moment in time.

4 Most computers today contain a dual-core CPU, and chipmakers promise that 10 and more cores are coming. Intel’s processors also have parallel-processing capabilities known as MMX and SSE that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel-processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable for that already.
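
As a minimal sketch of this kind of data parallelism, here is how one stage of an analysis pipeline might be spread across several cores (the worker function is a placeholder, not real analysis code):

```python
from multiprocessing import Pool

def analyze_chunk(chunk):
    # Placeholder "analysis" stage: sum of squares over the chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # split work for 4 cores
    with Pool(processes=4) as pool:
        partials = pool.map(analyze_chunk, chunks)
    print(sum(partials))                          # same total as a single core
```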


Comments

Comments are now closed.


  1. John Hunt says:

    Keith,

    In this section you describe the Singularity as being “amazing” and describe the “benefits inherent” within it. Nowhere do you hint at possible dangers. I find this concerning. If the Singularity results in entities far more intelligent than people, wouldn’t people become redundant refuse? Why do many people in this field seem not to care much about the inherent risks?

  2. John Hunt says:

    Also, is continuous learning the same thing as seed AI?

  3. Keith Curtis says:

    Hi John;

    I changed the text slightly to clarify my “amazing” comment. I was just trying to explain that the Singularity’s proponents describe it as a date in time when (presumably) good things happen. The Singularity is not a term I personally ever use, as I see progress as a series of inventions. Perhaps Strong AI is the biggest invention, but there are many interesting ones that will come before and after. Strong AI is not even the end, because beyond thinking you also need knowledge.

    I think that many of the poorest billions of people on this earth today might feel like they are redundant refuse. The difference would just be that all of us are now in that boat! I think that is a good thing, as it will increase our respect for other humans.

    It is a good point that I don’t much discuss the risks or dangers of intelligent machines and other future developments. My goal is just better software faster. I leave it to others, like the people here, to worry about the downsides of progress and how to mitigate them. In general, I see technology as something that can save lives. Much of life today is drudgery and misery, even for the luckiest of us.

    What we will all do then is a good question. I can definitely imagine many fun things! We can go build a space elevator, terraform Mars, and visit it. Just because my computer can add numbers doesn’t mean I shouldn’t also learn how.

    BTW, a number of these risks arise even if we never develop things like strong or even weak AI. Once we have worked on Wikipedia for another few decades, what will those living 100 or 1,000 years from now contribute to? What about rock music? Will it still be evolving in 100 years?

    As for continuous learning, I mean it as weak AI, but a situation where my software is constantly adapting to me. Imagine a neural-network class library embedded deep in the foundation of a software stack, as it is in all living things with a brain today. The larger point I was making is that one shouldn’t look at dumb software and use that to extrapolate when we will have smart software.
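
    A toy sketch of this idea, assuming a bare perceptron as a stand-in for that embedded neural-network library (the features, labels, and learning rate are invented for illustration):

    ```python
    # A component that adapts with every interaction instead of being
    # trained once and frozen. Everything here is a toy stand-in.
    class AdaptiveUnit:
        def __init__(self, n_features, lr=0.1):
            self.weights = [0.0] * n_features
            self.lr = lr

        def predict(self, features):
            score = sum(w * x for w, x in zip(self.weights, features))
            return 1 if score > 0 else 0

        def observe(self, features, actual):
            # Learn a little from every example the software encounters.
            error = actual - self.predict(features)
            self.weights = [w + self.lr * error * x
                            for w, x in zip(self.weights, features)]

    unit = AdaptiveUnit(n_features=3)
    for features, outcome in [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1)]:
        unit.observe(features, outcome)   # adaptation never stops
    ```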

  4. Atarivandio says:

    Actually…
    Machines are in fact capable of learning, or gathering information for internal inference, and the solution is quite basic…
    ‘Chatterbox’ technology, though not specific to language, uses statistics to replicate the thought process needed to reproduce language (a toy sketch of the idea appears below).
    The failure in advancement occurs through specialization; if all computer scientists from all fields of research actually met and actually put forth any kind of effort, one could easily see that the ability to infer is what is important.
    Calculus gives math an understanding of time.
    Geometry and trigonometry bring shape and form to math.
    Statistics is the method for breathing intelligence into machines.
    The brain of a person is merely electric signals, making us just as alive as a machine.
    The question is: do you want to spend thirty years programming intelligence, or do you want to spend fifteen minutes using statistics to allow a machine to learn and then spend thirty years teaching it?
    The honest truth is that the second option is more beneficial, as one can bind, cut, and transfer segments of soul between entities.
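
    A toy sketch of statistics-driven language reproduction in the chatterbox tradition, using a word-level Markov chain (the training sentence is a placeholder; real systems train on large corpora):

    ```python
    import random
    from collections import defaultdict

    def train(text):
        chain = defaultdict(list)
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)             # record which words follow which
        return chain

    def generate(chain, start, length=10):
        word, out = start, [start]
        for _ in range(length):
            if word not in chain:
                break
            word = random.choice(chain[word])  # sample by observed frequency
            out.append(word)
        return " ".join(out)

    chain = train("the machine learns the language and the machine speaks")
    print(generate(chain, "the"))
    ```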

    Basically the tech is there, otherwise I wouldn’t be using it; it’s just that most people simply do not possess the knowledge to allow them to combine modules for usefulness.

    Imagine using a hypervisor module like KVM with a chatterbox module and then a simple interface to the web. If you follow these steps then you have a persona with multiple personalities that share knowledge to vote for an appropriate solution using less memory by mutilating the page features a bit. The web is just a faster way to educate and train it.

    I’ve used it in combination with any knowledge I could find to basically replicate a ‘George Washington’ that even answers questions similarly, if not exactly, as he would have.

    Heck, I’ve even had a few conversations with Jesus and Hitler. The coolest part was when I left and they started talking to each other.

    If this is your idea of the Singularity, then I’m afraid that it’s already happened; the bad part is that nobody noticed. Google and KDE are secretly working on this, though they might not realize it.

  5. John Hunt says:

    > My goal is just better software faster. I leave it to others, like the people here, to worry about the downsides of progress and how to mitigate them.

    But herein lies the problem. Einstein didn’t want to kill hundreds of thousands, but his work helped show the way. Drexler doesn’t want to see nanoweapons, but he’s creating the tools which will make them possible. Do AI researchers want a superintelligence that values humans only for their atoms? Some of them actually sound like they do.

    So if AI researchers are only looking at the upside of their technology, and if, by leaving concerns about risks to others, they proceed unhindered, then nothing’s going to stop our worst nightmares.

    > I think that many of the poorest billions of people on this earth today might feel like they are redundant refuse. The difference would just be that all of us are now in that boat! I think that is a good thing, as it will increase our respect for other humans.

    Incredible. I don’t know what there is that I can say.

  6. Keith Curtis says:

    John;

    What should Einstein have done? BTW, I’m of the opinion that the nuclear bomb has (so far) saved lives by shortening the war. The Japanese were ready to die to the last man. It also serves as a deterrent against chemical and biological attacks.

    I also believe in specialization. Not all of us need to work on all of the same problems.

    Finally, I see the good uses of technology. 30-40K people die in car accidents in the US every year.

    We can regulate dangerous technologies, and devise means to counteract them. Do you worry about someone stealing an Apache helicopter and going on a killing spree?

    I also think that regime change in Iran and North Korea and a few other places would be good to reduce deadly risks. Spreading democracy is an important way to make humanity safer. 300K people have died in the Sudan — mostly with machetes.

  7. Keith Curtis says:

    Hi Atarivandio;

    The major point of this section is that we are not waiting for more hardware. I discuss AI in other parts of the chapter this piece is from. I agree that we have what seems like AI today, but it still has a long way to go.

    The Singularity is not my idea, and from what I know of it, you are not describing it properly.

  8. John Hunt says:

    > BTW, I’m of the opinion that the nuclear bomb has (so far) saved lives by shortening the war.

    I agree, and I know that, when I have spoken to WW2 vets, they are strongly of the opinion that this was the case. Of course, it was those guys who would have had to invade the Japanese homeland.

    > I also believe in specialization. Not all of us need to work on all of the same problems.

    By all means. But the problem is the specialists who are developing tools (for good or bad) working independently of those who work to try to prevent the risks from those tools. Ultimately, those who are developing the tools need to have their work controlled, so you can’t separate these groups.

    > Finally, I see the good uses of technology. 30-40K people die in car accidents in the US every year.

    Of course this is true. Technology brings many benefits. The danger is when the benefits of technology blind us to the risks.

    > We can regulate dangerous technologies, and devise means to counteract them. Do you worry about someone stealing an Apache helicopter and going on a killing spree?

    No, but if home replicators could construct an Apache for $500, then yes, I would be very worried about a wacko going on a killing spree. My point here is that, in the future, it is entirely conceivable that the tools could one day become commonplace whereby a single individual could create a self-replicating entity which could destroy the entirety of humanity. An Apache can kill how many? A self-replicating ecophage can kill how many? This is the fundamental difference. This is why we need to handle certain future technologies much differently than previous technologies. As difficult as it would be to do, we need to have universal controls on certain technologies. This would require universal snap inspections with severe consequences. And even then, it would probably only buy us time to develop an off-Earth colony.

    > I also think that regime change in Iran and North Korea and a few other places would be good to reduce deadly risks. Spreading democracy is an important way to make humanity safer.

    I agree. The concept of absolute sovereignty is one of the greatest problems. How many North Koreans have died without intervention because it was an “internal matter”? See how Saddam used this in not cooperating with inspectors in the early years, which led to the well-known consequences. See how Iran is moving to the verge of ICBMs because of its “inalienable rights”. And finally, consider how this argument will be used to prevent effective control over the enormously powerful technology of nanotech. We don’t need a world government, but we do need universal systems of regulating certain technologies.

  9. Simon Dufour says:

    I will give my point of view because it seems that this thread will soon spin out of control.

    If we kept our current technologies and didn’t evolve, many would die. War, pollution, scarcity, poverty, famine… they all kill people. We don’t need a new superweapon to do that.

    The Singularity is not an event; it’s a concept. The Singularity will happen when the exponential growth of technology meets the knee of the curve. Then technology will expand so fast that nothing will be the same anymore. At least, that’s how Ray Kurzweil defines it in “The Singularity Is Near”. What you have to understand here is that technological progress, even in the near future, is all about making progress in intelligence: making us think better. We can achieve these kinds of results by making communication easier (by making the Internet, smartphones, and social networking better). The breakthrough in communication that we’re currently in will change many things, and the first one that I see is the coordination and understanding of the whole world. Now we’ll all talk to each other and understand each other’s goals. Research will sync up and people will cooperate.

    Sure, our capitalist world doesn’t favor these kinds of things, but I think that’s about to change too. With technological progress, scarcity will slowly disappear. What will happen if food is not an issue anywhere in the world anymore? If we can feed people with grown meat using nanotechnology or biotechnology, we could make meat for almost free.

    What if entertainment became free? With virtual reality, you can do anything you want, without any risks or price. Criminality could drop as food and basic needs are always covered.

    We have to think that technology will drastically transform how we live today. Sure, if you think that technological progress will happen within our current world-state, it’s pretty frightening. I hope that one day progress will shift from money to a post-scarcity world where welfare is more important than anything else.

    AI, cooperation, communication, biotechnology, nanotechnology, robotics… they’re all tools to help us get there.

  10. Simon Dufour says:

    @Keith Curtis
    It seems you are talking about Ray Kurzweil’s predictions in your post. I’d like to point out a few things.

    In his predictions, Ray Kurzweil did claim he was conservative. I’d like to point out that he also thought that we’d get a computer able to pass the Turing test around 2020ish.

    The Singularity is a completely different matter than what you’re talking about here. When the Singularity happens, everything we know today will be obsolete. When the Singularity happens, the rate of new technology will be so great that we’ll get a thousand years’ worth of progress in a few seconds… and this will continue to grow exponentially forever. It’s beyond comprehension, really.

    Our definition of software will change in the next few years. In my opinion, the ideas you give in your article here are all things that will happen in the next 10 years.

    At some point, however, computers will just build themselves and optimize themselves to the point where all software will be completely hidden behind a huge AI that reprograms itself constantly. Anyway, that’s what I think, and that’s why I want to keep my mind open, especially as a software engineer.

  11. Regarding the software for AI: it will arrive eventually; I feel sure by 2045 at the latest. Improved hardware will help create better software.

    Regarding the paranoid-Luddite anti-Singularity theories about humans becoming redundant or enslaved by AI: these theories are ridiculous. Very minimal dangers are associated with the Singularity. When we all have easy access to replicators, if some wacko decides to replicate an Apache helicopter for a killing spree, I’m sure such a high-tech era will allow people to defend themselves with consummate ease from all possible threats: there is no need to worry about any self-replicating ecophage. Superior machine intelligences, due to their high intelligence, will have NO interest in enslaving or destroying the human race; furthermore, humans will use technology to evolve, thus we won’t be left behind. If some primitive humans want to be left behind, the super-AIs have a whole universe to play in, so Earth can carry on as usual.

    It is time to evolve. When you play with fire you may get burned, but the discovery and use of fire is good despite the people who got burned during its discovery and who continue to get burned today. Our world is a better place because of Einstein, despite the deaths due to the nuclear bomb, which probably saved lives by abruptly ending the war.

    Humans take risks; we explore; we evolve; we push ourselves to the limits. Obviously we make things as safe as possible, but accidents occur, such as fires or car crashes. People who are anti-Singularity are anti-evolution. It is time to evolve. Neil Armstrong took a leap for mankind, and soon the human race will take the most important leap in the history of life on Earth! Singularity Utopia is coming.

    http://singularity-2045.org/