
Researchers at California State Polytechnic University, Pomona (Cal Poly Pomona) are carrying out a series of quantum physics experiments expected to provide strong scientific evidence that we live in a computer-simulated virtual reality.

Devised by former NASA physicist Thomas Campbell, the five experiments are variations of the double-slit and delayed-choice quantum eraser experiments, which explore the conditions under which quantum objects ‘collapse’ from a probabilistic wavefunction to a defined particle. In line with the Copenhagen Interpretation of quantum mechanics, Campbell attributes a fundamental role to measurement, but extends it to human observers. In his view, quantum mechanics shows that the physical world is a virtual reality simulation that is computed for our consciousness on demand. In essence, what you do not see does not exist.

[Image: Campbell and Khoshnoud.]


Campbell’s quantum experiments have been designed to reveal the interactive mechanism by which nature probabilistically generates our experience of the physical world. Here, Campbell asserts that, like a videogame, the universe is generated as needed for the player and does not exist independently of observation.

While multiple quantum experiments have pointed to the probabilistic and informational nature of reality, Campbell’s experiments are the first to investigate the connection between consciousness and simulation theory. These experiments are based on Campbell’s paper ‘On Testing the Simulation Theory’ originally published in the International Journal of Quantum Foundations in 2017.
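To make the underlying logic concrete, here is a minimal numerical sketch, not Campbell’s actual protocol, of the standard double-slit effect all five experiments build on: when which-path information exists, the interference fringes vanish. All parameters are illustrative assumptions.

```python
import numpy as np

# Toy far-field double-slit model. With no which-path record the two
# path amplitudes add coherently and interfere; once the path is
# "measured", probabilities add instead and the fringes disappear.
wavelength = 500e-9   # light wavelength in metres (assumed)
slit_sep = 50e-6      # distance between the slits (assumed)
screen_dist = 1.0     # slit-to-screen distance (assumed)

x = np.linspace(-0.02, 0.02, 9)                  # screen positions
phase = 2 * np.pi * slit_sep * x / (wavelength * screen_dist)

amp1 = np.ones_like(phase, dtype=complex)        # path through slit 1
amp2 = np.exp(1j * phase)                        # path through slit 2

unobserved = np.abs(amp1 + amp2) ** 2            # interference fringes
observed = np.abs(amp1) ** 2 + np.abs(amp2) ** 2 # flat: no fringes

for xi, u, o in zip(x, unobserved, observed):
    print(f"x = {xi:+.3f} m   unobserved: {u:4.2f}   observed: {o:4.2f}")
```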

Paradigm-shifting consequences

Importantly, Campbell’s version of the simulation hypothesis differs from the ‘ancestor simulation’ thought experiment popularized by philosopher Dr. Nick Bostrom. “Contrary to what Bostrom postulates, the idea here is that consciousness is not a product of the simulation — it is fundamental to reality,” Campbell explains. “If all five experiments work as expected, this will challenge the conventional understanding of reality and uncover profound connections between consciousness and the cosmos.” The first experiment is currently being carried out by two independent teams of researchers: one at California State Polytechnic University, Pomona, headed by Dr. Farbod Khoshnoud, and the other at a top-tier Canadian university that has chosen to participate anonymously at this time.

Anticipation, the practice of remaining hopeful and patient in expecting a preferred future, has a special place and a critical role in some moral and religious systems of faith. As a personal virtue, it develops under many natural, cultural, social, and educational influences. For an economic agent, however, and for forward-looking decision makers who follow a more secular worldview, the argument in favor of anticipation, and the extent to which it is reasonable, is less clear. It is therefore worthwhile to explore when and under which circumstances we should choose anticipation; a convincing argument would be helpful. In this blog post I build a framework based on game theory to provide better and deeper insight.

Economists, mathematicians, and to some degree engineers have contributed to the development of game theory. Neoclassical economics assumes that each economic agent behaves rationally. According to prediction models built on this assumption, decision makers tend to maximize profit when they sell goods and services and to maximize utility when they buy. In other words, people naturally seek the best and the most. Moreover, decision making follows the principle of “predict then act”: the individual first predicts the likely consequences of the available choices and attributes utilities to them, then chooses the alternative with the best consequence, that is, the most utility. This school is often called normative decision analysis.

Nonetheless, empirical studies of real decision makers demonstrate that, contrary to the predictions of rational models of choice, individuals and economic agents do not always follow the principle of the best and the most. In the 1950s, for instance, Herbert Simon showed that when faced with uncertainty, and for lack of information about the future, there are cognitive limits to rationality: contrary to neoclassical economic theory, people do not make decisions rationally and logically in search of the optimal alternative. Instead they seek a combination of satisfying and sufficing levels of utility, which Simon called “satisficing”. This school is often called behavioral or descriptive decision analysis. To explain further: no one can claim that in a given decision the best alternative has been chosen, regardless of the choice criteria or the ideal level of utility, because there may always be a better alternative than the best one currently known to us. That better alternative either already exists beyond our awareness or will appear in the future, but we can never choose it if we do not know about it. In brief, we can at best choose the best element of the subset of alternatives we know.
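A toy sketch of the contrast between the two decision rules; the utilities and the aspiration level are illustrative assumptions, not anything from the literature:

```python
import random

# "Predict then act" maximization vs. Herbert Simon's satisficing.
random.seed(1)
alternatives = [random.uniform(0, 100) for _ in range(1000)]  # utilities

# Normative model: examine every alternative and choose the maximum.
best = max(alternatives)

# Satisficing: search until the first alternative that clears an
# aspiration level, then stop -- "good enough" under bounded search.
ASPIRATION = 90.0
examined, chosen = len(alternatives), alternatives[-1]
for i, utility in enumerate(alternatives, start=1):
    if utility >= ASPIRATION:
        examined, chosen = i, utility
        break

print(f"maximizer: utility {best:.1f} after examining all {len(alternatives)} options")
print(f"satisficer: utility {chosen:.1f} after examining only {examined}")
```

The satisficer typically gives up a little utility in exchange for a drastically shorter search, which is exactly the trade-off Simon identified under bounded rationality.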

In light of the flaws of the actual decision making by humans,

CERN has revealed plans for a gigantic successor to the LHC, the biggest machine ever built. Particle physicists never stop asking for ever larger big bang machines. But where are the limits for ordinary society concerning costs and existential risks?

CERN researchers are already conducting a mega-experiment at the LHC, a 27 km circular particle collider built at a cost of several billion euros, to study the conditions of matter as it existed fractions of a second after the big bang and to find the smallest possible particle – though the question remains how they could ever know they had found it. Now they profess to be a little upset because they have found no particles beyond the Standard Model, that is, nothing they would not expect. To search further, particle physicists would like to build an even larger “Future Circular Collider” (FCC) near Geneva, where CERN enjoys extraterritorial status, with a ring of 100 km – for about 24 billion euros.

Experts point out

By Eliott Edge

“It is possible for a computer to become conscious. Basically, we are that. We are data, computation, memory. So we are conscious computers in a sense.”

—Tom Campbell, former NASA physicist

If the universe is a computer simulation, virtual reality, or video game, then a few unusual conditions seem to fall out necessarily from that reading. One is that what we call consciousness, the mind, is actually something like an artificial intelligence. If the universe is a computer simulation, we are all likely one form of AI or another. In fact, we might come from the same computer that is creating this simulated universe to begin with. If so, then it stands to reason that we are virtual characters and virtual minds in a virtual universe.

In Breaking into the Simulated Universe, I discussed how if our universe is a computer simulation, then our brain is just a virtual brain. It is our avatar’s brain—but our avatar isn’t really real. It is only ever real enough. Our virtual brain plays an important part in making the overall simulation appear real. The whole point of the simulation is to seem real, feel real, look real—this includes rendering virtual brains. In Breaking I went into this “virtual brain” conundrum, including how the motor-effects of brain damage work in a VR universe. The virtual brain concept seems to apply to many variants of the “universe is a simulation” proposal. But if the physical universe and our physical brain amount to just fancy window-dressing, and the bigger picture is indeed that we are in a simulated universe, then our minds are likely part of the big supercomputer that crunches out this mock universe. That is the larger issue. If the universe is a VR, then it seems to necessarily mean that human minds already are an artificial intelligence. Specifically, we are an artificial intelligence using a virtual lifeform avatar to navigate through an evolving simulated physical universe.

About the AI

There are several flavors of the simulation hypothesis and digital mechanics out there in science and philosophy; I refer to these different schools of thought with the umbrella term simulism.

In Breaking I went over the connection between Edward Fredkin’s concept of Other—the ‘other place,’ the computer platform, where our universe is being generated from—and Tom Campbell’s concept of Consciousness as an ever-evolving AI ruleset. If you take these two ideas and run with them, what you end up with is an interesting inevitability: over enough time and enough evolutionary pressure, an AI supercomputer with enough resources should be pushed to crunch out any number of virtual universes and any number of conscious AI lifeforms. The big evolving AI supercomputer would be the origin of both physical reality and conscious life. And it would have evolved to be that way.

The supercomputer AI makes mock universes and AI lifeforms in order to further its own information evolution while avoiding a kind of “death” brought on by chaos, high entropy (disorganization), and noise winning out over signal and order. To Campbell, this is a form of evolution accomplished by interaction. It would mean not only that our whole universe is really a highly detailed version of The Sims, but that it actually evolved to be this way from a ruleset—a ruleset with the specific purpose of further evolving the overall big supercomputer and the virtual lifeforms within it. The players, the game, and the big supercomputer crunching it all out evolve and develop as one.

Maybe this is the way it is, maybe not. Nevertheless, if it turns out our universe is some kind of computed virtual reality simulation, all conscious life will likely end up being cast as AI. This makes the situation interesting when imagining what role free will might play.

Free will

If we are an AI, then what about free will? Perhaps some of us virtual critters live without free will. Maybe there are philosophical zombies and non-playable characters amongst us—lifeforms that only seem to be conscious but actually aren’t. Maybe we already are zombies, and free will is an illusion. It should be noted that simulist frameworks do not all necessarily wipe out decision-making and free will. Campbell in particular argues that free will is fundamental to the supercomputing virtual reality learning machine: it uses free will and the virtual lifeforms’ interactions, through the tool of decision-making, to learn and evolve. The feedback from those decisions drives evolution. In Campbell’s model, evolution is actually impossible without free will. Nevertheless, whether or not free will is real, or some have free will and others only appear to have it, let us reflect on our own experience of decision-making.

What is it like to make a choice? We do not seem to be merely linear, rote machines in our thinking and decision-making processes. It is not that we undergo x-stimulus and then always deliver a single, given, preloaded y-response every single time. We appear to think and consider. Our conclusions vary. We experience fuzzy logic. Our feelings play a role. We are apparently subject to a whole array of possible responses. And of course even non-responses, like choosing not to choose, are also responses. Perhaps even all this is just an illusion.

The question of free will might be difficult or impossible to answer. However, it does bring up a larger issue that seems to influence free will: programming. Whether we are free, “free enough,” or total zombies, an interesting question seems to almost always ride alongside the issue of choice and volition—it must be asked, what role does programming play? To begin this line of inquiry, we must first admit just how programmable we always already are.

Programming

Our whole biology is the result of pressure and programming. Tabula rasa, the idea that we are born as a “blank slate,” was chucked out long ago. We now know we arrive preprogrammed by millennia. There is barely a membrane between our programming and what we call (or assume to be) our conscious waking selves. This is dramatically explored in the 2016 series Westworld. Without giving away much in the way of spoilers, the story’s “hosts” are artificially intelligent robots trapped in programmed “loops,” repetitive cycles of thought and behavior. Regarding these loops, the hosts’ creator Dr. Ford (Anthony Hopkins) states, “Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do. Seldom questioning our choices. Content, for the most part, to be told what to do next.”

The programmability of biology and conscious life is already without question. We are manifestations of a complex blueprint called DNA—a set of instructions programmed by our environment interacting with our biology and genetics. Our diets, interests, how much sunlight we get a day, and even our stresses, feelings, and thoughts all have a measurable effect on our DNA. Our body is the living receipt of what is etched and programmed into our DNA.

DNA is made up of information and instructions. This information has been programmed by a variety of other types of environmental, physiological, and psychic information over vast eons of time. We grow gills due to the presence of water, or lungs due to the presence of air. Sometimes we grow four stomachs. Sometimes we grow ears so sensitive that they can sense mass in the dark. The world talks to us, and so we change ourselves based on what we are able to pick up. Reality informs us, and we mutate accordingly. If the universe is a computer program, then so too are we programmed by it. The VR environment program also programs the conscious AIs living in it.

In part, our social environment programs our psychologies. Our families, languages, neighborhoods, cultures, religions, ideologies, expectations, fears, addictions, rewards, needs, slogans—these are all largely programmed into us as well. They define and shape our individual and collective personhood. And they all program our view of the world, and our selves within it. Our information exchange through socialization programs us.

Ultimately, programming is instruction. But human beings often experience conflicting sets of instructions simultaneously. One of Sigmund Freud’s great contributions was his identification of “das Unbehagen,” the uneasiness we feel as our instincts (one set of instructions) come into conflict with our culture, society, values, and civilization (another set of instructions). We choose not to cheat on our partner with someone wildly attractive, even though we might really want to. We don’t attack someone even though they might sorely deserve it. The fallout of this behavior is potentially just too great to follow through with. If left unprocessed, these conflicts develop into neuroses, obsessions, and pathologies that are beyond our conscious control. “Demons” and “hungry ghosts” guide us to behaviors, thoughts, and states of being so upsetting to our waking conscious selves that we tend to describe them as unwanted, alien, or even as sin. They create a sense of feeling “out of control.” Indeed, conflicting instructions, conflicting thoughts, behaviors, and goals are causes of great suffering for many people. We develop illnesses of the body and mind, and then pass those smoldering genes—that malignant programming—on to the next generation. Here we have biological programming working against social programming, physiological instructions conflicting with societal instructions. Now just imagine an AI robot trying to compute two or three contradictory programs simultaneously. You would see an android throwing a fit, breaking down, shutting off, and hopefully eventually attempting to put itself back together.

In terms of conflicting programming, an interesting aside can be found in comedy. Humor often strikes in the form of contradiction, as in Shakespeare’s Hamlet: Polonius famously claims that “brevity is the soul of wit,” yet he is ironically verbose—naturally implying that he is witless. In this case we have contradiction—does not compute. But not all humor is contradiction. Consider the joke, “Can a kangaroo jump higher than a house?” The punchline is, “Of course they can. Houses don’t jump at all.” This joke does not translate to does not compute; instead this joke computes all too well. In many instances, this is humor: it either doesn’t make sense or it makes more sense than you ever expected. It is information brought into a new light—information recontextualized.

A final novel consideration regarding programming can be found in the phenomenon of ‘positive sexual imprinting.’ How human beings settle on sexual or romantic partners has long fascinated psychologists: choices are often based on similarities to our parents and caregivers. To our species-wide relief, this behavior is not exclusive to human beings. Mammals, birds, and even fish have been documented pairing up with mates that resemble their forebears. Even goats that are raised by sheep will grow up to pursue sheep, and vice versa. Here is another example of programming that often works just under our awareness, and yet it has a titanic, indeed central, effect on our lives. Choosing mates and partners, especially for long-term relationships or procreation, is one of the circumstances that most dramatically guides our livelihood and our personal destiny. This is the depth of programming.

It was Freud who pointed out, in so many words, that your mind is not your own.

Goals and Rewards

Human beings love instruction. Recollect Dr. Ford’s remark from the previous section: “[Humans are] content, for the most part, to be told what to do next.” Chemically speaking, our rewards arrive through serotonin, dopamine, oxytocin, and endorphins. In waking life we experience them during social bonding and other poignant experiences; we feel them alongside a sense of profound meaning and pleasure, and these experiences and chemicals even go on to help shape our values, goals, and lives. These complex chemical exchanges shoot through human beings particularly when we receive instructions and when we accomplish goals.

We find it particularly rewarding when we happily do something for someone we love or admire. We are fond of all kinds of games and game playing. We enjoy drama and rewards. Acting within rules and roles, as well as bending or breaking them, is a moment-to-moment occupation for all human beings.

We also design goals that can only come to fruition years, sometimes decades, into the future. We then program and modify our being and circumstance to bring these goals into an eventual present; we change based on what we want. We feel meaning and purpose when we have a goal. We experience joy and fulfillment when that goal is achieved. Without a series of goals we become quite genuinely paralyzed. Even the movement of a limb from position A to position B is a goal. All motor functioning is goal-oriented. It turns out that the machine learning systems and AIs we are attempting to develop in laboratories today also work particularly well when given goals and rewards.

In his 2014 paper Reinforcement Learning and the Reward Engineering Principle, Daniel Dewey argued that adding rewards to machine learning encourages the system to produce useful and interesting behaviors. Google’s DeepMind research team has since developed an AI that taught itself to walk in a VR environment, and subsequently published a 2017 paper called A Distributional Perspective on Reinforcement Learning, apparently confirming this rewards-based approach.

Laurie Sullivan wrote a summary on Reinforcement Learning in a MediaPost article called Google: Deepmind AI Learns On Rewards System:

The system learns by trial and error and is motivated to get things correct based on rewards […]

The idea is that the algorithm learns, considers rewards based on its learning, and almost seems to eventually develop its own personality based on outside influences. In a new paper, DeepMind researchers show it is possible to model not only the average reward but also the reward as it changes. Researchers call this the “value distribution.”

Rewards make reinforcement learning systems increasingly accurate and faster to train than previous models. More importantly, per researchers, it opens the possibility of rethinking the entire reinforcement learning process.
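A minimal tabular Q-learning sketch illustrates the reward-driven loop being described. The five-state corridor and all constants are illustrative assumptions; DeepMind’s distributional method additionally models the full distribution of returns rather than the single expected value estimated here.

```python
import random

N_STATES, GOAL = 5, 4        # states 0..4; reaching state 4 yields reward
ACTIONS = (-1, +1)           # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0   # reward only at the goal
        # Nudge the estimate toward reward plus discounted future value.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# Learned values rise toward the goal: the reward signal alone has
# shaped the agent's "preferences" over states.
print({s: round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})
```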

If human beings and our computer AIs both develop through goals and rewards, then these sorts of drives might be fundamental to consciousness itself. If they are, and our universe is a computer simulation, then goals and rewards likely guide or influence the big evolving supercomputer AI behind life and reality. If this is all true, then there is a goal, a purpose, embedded within the fabric of existence. Maybe there is even more than one.

Ontology and Meta-metaphors

In the essays Breaking into the Simulated Universe and Why It Matters That You Realize You’re in a Computer Simulation, I asked, ‘what happens after we embrace our reality as a computer simulation?’ In a neighboring line of thinking, all simulists must equally ask, ‘what happens after we realize we are an artificial intelligence in a computer simulation?’

First of all, our whole instinctual drive to create our own computed artificial intelligence takes on a new light. We are building something like ourselves in the mirror of a would-be mentalizing machine. If this is true, then we are doing more than just recreating ourselves; we are recreating the larger reality, the larger context, that we are all a part of. Maybe making an AI is actually the most natural thing in the world, because, indeed, we already are AIs.

Second, we would have to accept that we are not merely human. Part of us, an important part indeed, is no doubt locked in an experience of humanness. But, again, there is a deeper reality. If the universe is a computer simulation, then our consciousness is part of that computer, and our human bodies act as avatars. Although our situation of existing as ‘human beings’ may appear self-evident, it is this deeper notion, that our consciousness is a partitioned segment of the larger evolving AI supercomputer responsible for both life and the universe, that must be explored. We would do well to accept that as human beings we are, like anything in a computer-simulated situation, real enough—but that our human avatar is not the beginning or the end of our total consciousness. Our humanness is only the crust. If we are AIs being crunched out by the supercomputer responsible for our physical universe, then we might have a valuable new framework with which to investigate the mind, altered states, and consciousness exploration. After all, if we are part of the big supercomputer behind the universe, maybe we can interact with it, and vice versa.

Third, if we are an artificial intelligence, we should examine the idea of programming intensely. Even without the virtual reality reading, we are all programmed by the environment, programmed by our own volition, programmed by others, by millions of years of genetic trial and error, and we go on to program the environment and the beings all around us as well. This is true. These programs and instructions create deep contexts and patterns of thought and behavior. They generate loops that we easily pick up and fall into, often without second thought or even notice. We are already so entrenched. So, in terms of programming, we would likely do well to treat this as an opportunity. Cognitive Behavioral Therapy, the growing field of psychedelic psychotherapy, and just good old-fashioned learning are powerful ways we can rewrite, edit, or outright delete code that is no longer desirable to us. It is also worth including the gene-editing revolution that is upon us thanks to medical breakthroughs like CRISPR. If we accept that we are an AI lifeform that has been programmed, perhaps that will put us in a stronger position to manage and develop our own programs, instructions, rewards, and loops more consciously. To borrow the title of a work by visual artist Dakota Crane: Machines, Take Up Thy Schematics and Self-Construct!

Finally, the AI metaphor might help us extract ourselves from contexts and ideas that have perhaps inadvertently limited us when we think of ourselves as strictly ‘human beings’ with ‘human brains.’ Metaphors though they may be: any concept that embraces our multidimensionality, and helps us get a better handle on the pressing matter of our shared existence, I deem good. Anything that narrows it—as in claiming that one is simply a ‘human being,’ which comes loaded with very hard and fast assumptions and limits (either true or believed to be true)—I deem problematic. Such claims are problematic because they create a context that is rarely based on truth, but largely on convenience, habit, tradition, and belief. Simply put, claiming you are exclusively a ‘human being’ is necessarily limiting (“death,” “human nature,” etc.), whereas claiming that you are an AI means that there is a great undiscovered country before you. For we do not yet know what it means to be an AI, while we do have a pretty fixed idea of what it means to be a human being. Nevertheless, ‘human being’ and ‘AI’ are both simply thought-based concepts. If ‘AI’ broadens our decision space more than ‘human being’ does, then AI may be a more valuable position to operate from.

Computers, robots, and AI are powerful new metaphors for understanding ourselves; because they are indeed that which is most like us. A computer is like a brain, a robot is like a brain walking around and dealing with it. Virtual reality is another metaphor—one capable of approaching everything from culture, to thought, to quantum mechanics. Much like the power and robustness of the idea of ‘virtual reality’ as a meta-metaphor and meta-context for dealing with a variety of experiences and domains, so too are the ideas of ‘programming’ and ‘artificial intelligence’ equally strong and potentially useful concepts for extracting ourselves out of the circumstances that we have, in large part, created for ourselves. However, regardless of how similar we are to computers, AIs, and robots, they are not quite us exactly. At the end of it all, terms like ‘virtual reality’ and ‘artificial intelligence’ are but metaphors. They are concepts alluding to something immensely peculiar that we detect existing—as Terence McKenna would likely describe it—just at the threshold of rational apprehension, and seemingly peeking out from hyperspace. If we are already an AI, then that is a frontier that sorely demands our exploration.

Originally published at The Institute of Ethics and Emerging Technologies

Posthumanists and perhaps especially transhumanists tend to downplay the value conflicts that are likely to emerge in the wake of a rapidly changing technoscientific landscape. What follows are six questions and scenarios that are designed to focus thinking by drawing together several tendencies that are not normally related to each other but which nevertheless provide the basis for future value conflicts.

  1. Will ecological thinking eventuate in an instrumentalization of life? Generally speaking, biology – especially when a nervous system is involved – is more energy efficient at storing, accessing and processing information than even the best silicon-based computers. While we still don’t quite know why this is the case, we are nevertheless acquiring greater powers of ‘informing’ biological processes through strategic interventions, ranging from correcting ‘genetic errors’ to growing purpose-made organs, including neurons, from stem cells. In that case, might we not ‘grow’ some organs to function in largely the same capacity as silicon-based computers – especially if it helps to reduce the overall burden that human activity places on the planet? (E.g. the brains in vats in the film Minority Report which engage in the precognition of crime.) In other words, this new ‘instrumentalization of life’ may be the most environmentally friendly way to prolong our own survival. But is this a good enough reason? Would these specially created organic thought-beings require legal protection or even rights? The environmental movement has, generally speaking, been against the multiplication of artificial life forms (e.g. the controversies surrounding genetically modified organisms), but in this scenario those life forms would potentially provide a means to achieve ecologically friendly goals.

  2. Will concerns for social justice force us to enhance animals? We are becoming more capable of recognizing and decoding animal thoughts and feelings, a fact which has helped to bolster those concerned with animal welfare, not to mention ‘animal rights’. At the same time, we are also developing prosthetic devices (of the sort worn by Stephen Hawking) which can enhance the powers of disabled humans so that their thoughts and feelings can be communicated to a wider audience, enabling them to participate in society more effectively. Might we not wish to apply similar prosthetics to animals – and perhaps even ourselves – in order to facilitate the transaction of thoughts and feelings between humans and animals? This proposal might aim ultimately to secure some mutually agreeable ‘social contract’, whereby animals are incorporated more explicitly in the human life-world — not merely as wards but as something closer to citizens. (See, e.g., Donaldson and Kymlicka’s Zoopolis.) However, would this set of policy initiatives constitute a violation of the animals’ species integrity and simply be a more insidious form of human domination?

  3. Will human longevity stifle the prospects for social renewal? For the past 150 years, medicine has been preoccupied with the defeat of death, from reducing infant mortality to extending the human lifespan indefinitely. However, we also see that as people live longer, healthier lives, they tend to have fewer children. This has already created a pensions crisis in welfare states, in which the diminishing ranks of the next generation work to sustain people who live long beyond the retirement age. How do we prevent this impending intergenerational conflict? Moreover, precisely because each successive generation enters the world without the burden of the previous generations’ memories, it is better disposed to strike out in new directions. All told, then, should death become discretionary in the future, with a positive revaluation of suicide and euthanasia? And should people be incentivized to have children as part of a societal innovation strategy?

  4. Will the end of death trivialize life? A set of trends taken together call into question the finality of death, which is significant because strong normative attitudes against murder and extinction are due largely to the putative irreversibility of these states. Indeed, some have argued that the sanctity – if not the very meaning — of human life itself is intimately related to the finality of death. However, there is a concerted effort to change all this – including cryonics, digital emulations of the brain, DNA-driven ‘de-extinction’ of past species, etc. Should these technologies be allowed to flourish, in effect, to ‘resurrect’ the deceased? As it happens, ‘rights of the dead’ are not recognized in human rights legislation and environmentalists generally oppose introducing new species to the ecology, which would seem to include not only brand new organisms but also those which once roamed the earth.

  5. Will political systems be capable of delivering on visions of future human income? There are two general visions of how humans will earn their keep in the future, especially in light of what is projected to be mass technologically induced unemployment, which will include many ordinary professional jobs. One would be to provide humans with a ‘universal basic income’ funded by some tax on the producers of labour redundancy in both the industrial and the professional classes. The other vision is that people would be provided regular ‘micropayments’ based on the information they routinely provide over the internet, which is becoming the universal interface for human expression. The first vision cuts against the general ‘lower tax’ and ‘anti-redistributive’ mindset of the post-Cold War era, whereas the latter vision cuts against perceived public preference for the maintenance of privacy in the face of government surveillance. In effect, both visions of future human income demand that the state reinvents its modern role as guarantor of, respectively, welfare and security – yet now against the backdrop of rapid technological change and laissez faire cultural tendencies.

  6. Will greater information access turn ‘poverty’ into a lifestyle prejudice? Mobile phone penetration is greater in some impoverished parts of Africa and Asia than in the United States and some other developed countries. While this has made the developed world more informationally available to the developing world, the impact of this technology on the latter’s living conditions has been decidedly mixed. Meanwhile, as we come to a greater understanding of the physiology of impoverished people, we realize that their nervous systems are well adapted to conditions of extreme stress, as are their cultures more generally. (See e.g. Banerjee and Duflo’s Poor Economics.) In that case, there may come a point when the rationale for ‘development aid’ disappears, and ‘poverty’ itself comes to be seen as a prejudicial term. Of course, the developing world may continue to require external assistance in dealing with wars and other (by their standards) extreme conditions, just as any other society might. But otherwise, we might decide in an anti-paternalistic spirit that such people should be seen as sufficiently knowledgeable of their own interests to be able to lead what people in the developed world might generally regard as a suboptimal existence – one in which, say, the life-expectancy gap between the developing and developed worlds remains significant and quite possibly increases over time.

Recent evidence suggests that a variety of organisms may harness some of the unique features of quantum mechanics to gain a biological advantage. These features go beyond trivial quantum effects and may include harnessing quantum coherence on physiologically important timescales.

Quantum Biology — Quantum Mind Theory

When we as a global community confront the truly difficult question of considering what is really worth devoting our limited time and resources to in an era marked by such global catastrophe, I always find my mind returning to what the Internet hasn’t really been used for yet—and what was rumored from its inception that it should ultimately provide—an utterly and entirely free education for all the world’s people.

In regard to such a concept, Bill Gates said in 2010, “On the web for free you’ll be able to find the best lectures in the world […] It will be better than any single university […] No matter how you came about your knowledge, you should get credit for it. Whether it’s an MIT degree or if you got everything you know from lectures on the web, there needs to be a way to highlight that.”

That may sound like an idealistic stretch to the uninitiated, but the fact of the matter is universities like MIT, Harvard, Yale, Oxford, The European Graduate School, Caltech, Stanford, Berkeley, and other international institutions have been regularly uploading entire courses onto YouTube and iTunes U for years. All of them are entirely free. Open Culture, Khan Academy, Wikiversity, and many other centers for online learning also exist. Other online resources have small fees attached to some courses, as you’ll find on edX and Coursera. In fact, here is a list of over 100 places online where you can receive high quality educational material. The 2015 Survey of Online Learning revealed a “Multi-year trend [that] shows growth in online enrollments continues to outpace overall higher ed enrollments.” I. Elaine Allen, co-director of the Babson Survey Research Group, points out that “The study’s findings highlight a thirteenth consecutive year of growth in the number of students taking courses at a distance.” Furthermore, “More than one in four students (28%) now take at least one distance education course (a total of 5,828,826 students, a year‐to‐year increase of 217,275).” There are so many online courses, libraries of recorded courses, pirate libraries, Massive Open Online Courses, and online centers for learning with no complete database thereof that in 2010 I found myself dumping all the websites and master lists I could find onto a simple Tumblr archive I put together called Educating Earth. I then quickly opened a Facebook Group to try and encourage others to share and discuss courses too.

The volume of high quality educational material already available online is staggering. Despite this, there has yet to be a central search hub for all this wonderful and unique content. No robust community has been built around it with major success. Furthermore, the social and philosophical meaning of this new practice has not been strongly advocated enough yet in a popular forum.

There are usually a few arguments against this brand of internet-based education. One of the most common is that learning online will never equal learning in a physical classroom setting. I will grant that. However, I’ll counter it with the obvious: You don’t need to learn everything there is to learn strictly in a classroom setting. That is absurd. Not everything is surgery. Furthermore, not everyone has access to a classroom, which is in a large way what this whole issue is about. Finally, you cannot learn everything you may want to learn from one single teacher in one single location.

Another argument pertains to cost, that a donation-based free education project would be an expensive venture. All I can think to respond to that is: How much in personal debt does the average student in the United States end up in after four years of college? What if that money was used to pay for a robust online educational platform? How many more people the world over could learn from a single four-year tuition alone? These are serious questions worth considering.

Here are just a few major philosophical points for such a project. Illiteracy has been a historic tool used to oppress people. According to the US Census Bureau, the global population has grown by roughly one billion people about every 15 years since 1953. In 2012 our global population was estimated at 7 billion people. Many of these individuals will be lucky to ever see the inside of a classroom. Today nearly 500 million women on this planet are denied the basic freedom to learn how to read and write. Women make up two-thirds of the world’s illiterate adults. It is a global crime perpetuated against women, pure and simple.

Here is another really simple point: If the world has so many problems on both a local and a global scale, doesn’t it make sense to have more problem solvers available to collaborate and tackle them? Consider all these young people devising ingenious ways to clean the ocean, or detect cancer, or power their community by building windmills; don’t you want many orders of magnitude more of all that going on in the world? More people freely learning and sharing what they discover simply translates to a higher likelihood of breakthroughs and general social benefit. This is good for everyone. Is this not obvious?

Here is one last point: In terms of moral, social, and philosophical uprightness, isn’t it striking to have the technology to provide a free education to all the world’s people (i.e. the internet and cheap computers) and not do it? Isn’t it classist and backward to have the ability to teach the world yet still deny millions of people that opportunity due to location and finances? Isn’t that immoral? Isn’t it patently unjust? Should it not be a universal human goal to enable everyone to learn whatever they want, as much as they want, whenever they want, entirely for free if our technology permits it? These questions become particularly deep if we consider teaching, learning, and education to be sacred enterprises.

Read the whole article on IEET.org

My sociology of knowledge students read Yuval Harari’s bestselling first book, Sapiens, to think about the right frame of reference for understanding the overall trajectory of the human condition. Homo Deus follows the example of Sapiens, using contemporary events to launch into what nowadays is called ‘big history’ but has been also called ‘deep history’ and ‘long history’. Whatever you call it, the orientation sees the human condition as subject to multiple overlapping rhythms of change which generate the sorts of ‘events’ that are the stuff of history lessons. But Harari’s history is nothing like the version you half remember from school.

In school historical events were explained in terms more or less recognizable to the agents involved. In contrast, Harari reaches for accounts that scientifically update the idea of ‘perennial philosophy’. Aldous Huxley popularized this phrase in his quest to seek common patterns of thought in the great world religions which could be leveraged as a global ethic in the aftermath of the Second World War. Harari similarly leverages bits of genetics, ecology, neuroscience and cognitive science to advance a broadly evolutionary narrative. But unlike Darwin’s version, Harari’s points towards the incipient apotheosis of our species; hence, the book’s title.

This invariably means that events are treated as symptoms if not omens of the shape of things to come. Harari’s central thesis is that whereas in the past we cowered in the face of impersonal natural forces beyond our control, nowadays our biggest enemy is the one that faces us in the mirror, which may or may not be within our control. Thus, the sort of deity into which we are evolving is one whose superhuman powers may well result in self-destruction. Harari’s attitude towards this prospect is one of slightly awestruck bemusement.

Here Harari equivocates where his predecessors dared to distinguish. Writing with the bracing clarity afforded by the Existentialist horizons of the Cold War, cybernetics founder Norbert Wiener declared that humanity’s survival depends on knowing whether what we don’t know is actually trying to hurt us. If so, then any apparent advance in knowledge will always be illusory. As for Harari, he does not seem to see humanity in some never-ending diabolical chess match against an implacable foe, as in The Seventh Seal. Instead he takes refuge in the so-called law of unintended consequences. So while the shape of our ignorance does indeed shift as our knowledge advances, it does so in ways that keep Harari at a comfortable distance from passing judgement on our long term prognosis.

This semi-detachment makes Homo Deus a suave but perhaps not deep read of the human condition. Consider his choice of religious precedents to illustrate that we may be approaching divinity, a thesis with which I am broadly sympathetic. Instead of the Abrahamic God, Harari tends towards the ancient Greek and Hindu deities, who enjoy both superhuman powers and all too human foibles. The implication is that to enhance the one is by no means to diminish the other. If anything, it may simply make the overall result worse than had both our intellects and our passions been weaker. Such an observation, a familiar pretext for comedy, wears well with those who are inclined to read a book like this only once.

One figure who is conspicuous by his absence from Harari’s theology is Faust, the legendary rogue Christian scholar who epitomized the version of Homo Deus at play a hundred years ago in Oswald Spengler’s The Decline of the West. What distinguishes Faustian failings from those of the Greek and Hindu deities is that Faust’s failings result from his being neither as clever nor as loving as he thought. The theology at work is transcendental, perhaps even Platonic.

In such a world, Harari’s ironic thesis that future humans might possess virtually perfect intellects yet also retain quite undisciplined appetites is a non-starter. If anything, Faust’s undisciplined appetites point to a fundamental intellectual deficiency that prevents him from exercising a ‘rational will’, which is the mark of a truly supreme being. Faust’s sense of his own superiority simply leads him down a path of ever more frustrated and destructive desire. Only the one true God can put him out of his misery in the end.

In contrast, if there is ‘one true God’ in Harari’s theology, it goes by the name of ‘Efficiency’ and its religion is called ‘Dataism’. Efficiency is familiar as the dimension along which technological progress is made. It amounts to discovering how to do more with less. To recall Marshall McLuhan, the ‘less’ is the ‘medium’ and the ‘more’ is the ‘message’. However, the metaphysics of efficiency matters. Are we talking about spending less money, less time and/or less energy?

It is telling that the sort of efficiency which most animates Harari’s account is the conversion of brain power to computer power. To be sure, computers can outperform humans on an increasing range of specialised tasks. Moreover, computers are getting better at integrating the operations of other technologies, each of which also typically replaces one or more human functions. The result is the so-called Internet of Things. But does this mean that the brain is on the verge of becoming redundant?

Those who say yes, most notably the ‘Singularitarians’ whose spiritual home is Silicon Valley, want to translate the brain’s software into a silicon base that will enable it to survive and expand indefinitely in a cosmic Internet of Things. Let’s suppose that such a translation becomes feasible. The energy requirements of such scaled up silicon platforms might still be prohibitive. For all its liabilities and mysteries, the brain remains the most energy efficient medium for encoding and executing intelligence. Indeed, forward facing ecologists might consider investing in a high-tech agronomy dedicated to cultivating neurons to function as organic computers – ‘Stem Cell 2.0’, if you will.
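A back-of-envelope comparison makes the energy point concrete. Every figure below is an order-of-magnitude assumption (the oft-cited ~20 W brain, a nominal high-end accelerator), not data from the book or the review:

```python
# Rough brain-vs-silicon energy efficiency comparison (all assumed).
BRAIN_POWER_W = 20.0       # commonly cited power draw of a human brain
BRAIN_OPS_PER_SEC = 1e15   # rough synaptic events per second (assumed)

CHIP_POWER_W = 300.0       # nominal high-end accelerator (assumed)
CHIP_OPS_PER_SEC = 1e14    # ~100 teraFLOP/s (assumed)

brain_j_per_op = BRAIN_POWER_W / BRAIN_OPS_PER_SEC
chip_j_per_op = CHIP_POWER_W / CHIP_OPS_PER_SEC

print(f"brain:   {brain_j_per_op:.0e} J per operation")
print(f"silicon: {chip_j_per_op:.0e} J per operation")
print(f"silicon costs ~{chip_j_per_op / brain_j_per_op:.0f}x more energy per operation")
```

On these assumptions silicon comes out two orders of magnitude more expensive per operation, which is the sense in which scaled-up silicon platforms might prove prohibitive.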

However, Harari does not see this possible future because he remains captive to Silicon Valley’s version of determinism, which prescribes a migration from carbon to silicon for anything worth preserving indefinitely. It is against this backdrop that he flirts with the idea that a computer-based ‘superintelligence’ might eventually find humans surplus to requirements in a rationally organized world. Like other Singularitarians, Harari approaches the matter in the style of a 1950s B-movie fan who sees the normative universe divided between ‘us’ (the humans) and ‘them’ (the non-humans).

The bravest face to put on this intuition is that computers will transition to superintelligence so soon – ‘exponentially’ as the faithful say — that ‘us vs. them’ becomes an operative organizing principle. More likely and messier for Harari is that this process will be dragged out. And during that time Homo sapiens will divide between those who identify with their emerging machine overlords, who are entitled to human-like rights, and those who cling to the new acceptable face of racism, a ‘carbonist’ ideology which would privilege organic life above any silicon-based translations or hybridizations. Maybe Harari will live long enough to write a sequel to Homo Deus to explain how this battle might pan out.

NOTE ON PUBLICATION: Homo Deus is published in September 2016 by Harvill Secker, an imprint of Penguin Random House. Fuller would like to thank The Literary Review for originally commissioning this review. It will appear in a subsequent edition of the magazine and is published here with permission.

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent if not predict any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether indeed it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation — rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
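The decision-theoretic arithmetic behind this is easy to state; the probability and stake below are placeholder assumptions chosen only to show how a huge outcome swamps a tiny probability:

```python
# Illustrative expected-value arithmetic behind 'existential risk' talk.
p_catastrophe = 1e-6    # a very unlikely event (assumed)
value_at_stake = 1e12   # stand-in for the worth of the human condition (assumed)

expected_loss = p_catastrophe * value_at_stake
print(f"expected loss: {expected_loss:,.0f}")
# Even a one-in-a-million probability yields a large expected loss once
# the stake is big enough: the low probability is arithmetically
# counterbalanced by the high value of the outcome, as in the wager.
```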

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement that might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from ‘the worst case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones — that we couldn’t cope in case a very unlikely, very negative scenario comes to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which while benign in themselves may produce malign consequences — call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: What would be potentially lost in the various scenarios which would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, nevertheless they will make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.