Advisory Board

Jeffrey Herrlich

The article “Existential Risk and Fermi’s Paradox” said:

The odds may be so heavily stacked against us that the probability of success is only 0.0000000000001% for any given civilization (or worse). That doesn’t mean that we can’t possibly be that one civilization. And it doesn’t mean we shouldn’t try. What if the goal is possible (albeit very remotely possible) but all civilizations decide to give up prematurely? That would ultimately make this an entirely pointless Universe.
 
I’m starting to believe more and more that a very large “fraction” of the paradox is that an evolved intelligence like us is simply extremely rare in this Universe.
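For scale, the quoted figure of 0.0000000000001% is 10^-15 as a fraction. A rough expected-value sketch of the quote’s argument (the number of attempting civilizations, N, is an assumed illustrative quantity, not a figure from the article):

$$
p = 10^{-13}\,\% = 10^{-15}, \qquad \mathbb{E}[\text{successes}] = N \cdot p
$$

Under these assumptions, if on the order of N = 10^16 civilizations were to make the attempt, roughly ten would be expected to succeed; if every civilization gives up prematurely, the expected number is exactly zero.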

Jeffrey Herrlich is the author of this article and a Singularitarian who has been aware of the Singularity hypothesis and its implications since 2005, after reading The Singularity is Near by Ray Kurzweil. He has a deep desire to witness progress toward the mitigation of existential risks. In particular, he believes that the vigorous pursuit of “Friendly AI”, as advocated by SIAI, is the most promising pathway toward a desirable future for all sentient beings originating from Earth.
 
Jeff is a Sustaining Donor of SIAI and is currently a university student pursuing a bachelor’s degree in Computer Science.
 
He says, “If we are not careful with the design of the first Strong AI, one possibility is that the AI will pursue a course which we humans would consider of no value either to humanity or to the AI. A classic illustration is a Strong AI, without independent motivations, that ceaselessly pursues a trivial goal, such as tiling the solar system with optimal paper clips, destroying humanity in the process. It is for this reason, among others, that I believe that the design of a ‘Friendly AI’ is a win-win situation for both humanity and the Strong AI.”