
Is interstellar travel one of the most moral projects? “One of the most moral projects might be to prepare for interstellar travel. After all, if the Earth becomes uninhabitable—whether in 200 years or in 200,000 years—the only known civilization in the history of the solar system will suddenly go extinct. But if the human species has already spread to other planets, we will escape this permanent eradication, thus saving millions—possibly trillions—of lives that can come into existence after the demise of our first planet.”


The Red Planet is a freezing, faraway, uninhabitable desert. But protecting the human species from the end of life on Earth could save trillions of lives.


In terms of moral, social, and philosophical uprightness, isn’t it striking to have the technology to provide a free education to all the world’s people (i.e. the Internet and cheap computers) and not do it? Isn’t it classist and backward to have the ability to teach the world yet still deny millions of people that opportunity due to location and finances? Isn’t that immoral? Isn’t it patently unjust? Should it not be a universal human goal to enable everyone to learn whatever they want, as much as they want, whenever they want, entirely for free if our technology permits it? These questions become particularly deep if we consider teaching, learning, and education to be sacred enterprises.


When we as a global community confront the truly difficult question of what is really worth devoting our limited time and resources to in an era marked by global catastrophe, my mind always returns to what the Internet hasn’t really been used for yet—and what has been rumored since its inception that it would ultimately provide—an utterly and entirely free education for all the world’s people.

In regard to such a concept, Bill Gates said in 2010:

“On the web for free you’ll be able to find the best lectures in the world […] It will be better than any single university […] No matter how you came about your knowledge, you should get credit for it. Whether it’s an MIT degree or if you got everything you know from lectures on the web, there needs to be a way to highlight that.”

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.
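The core idea is easy to demonstrate in miniature. Below is a minimal sketch (with made-up toy data, not the Norman experiment itself): the same trivially simple “model”—a unigram frequency counter that predicts the most common word it has seen—is trained twice, once on neutral captions and once on grim ones. The code and architecture are identical; only the training data differs, and the output diverges accordingly.

```python
from collections import Counter

def train_unigram(corpus):
    """'Train' by counting word frequencies across all sentences."""
    counts = Counter()
    for sentence in corpus:
        counts.update(sentence.split())
    return counts

def predict(model):
    """The 'model' simply emits the single most frequent word it saw."""
    return model.most_common(1)[0][0]

# Two hypothetical training sets: same size, same format, different tone.
neutral_corpus = ["bird on branch", "bird in tree", "bird in sky"]
dark_corpus = ["falling man", "falling fast", "falling hard"]

neutral_model = train_unigram(neutral_corpus)
dark_model = train_unigram(dark_corpus)

# Same code, same algorithm — the data alone determines the output.
print(predict(neutral_model))  # prints "bird"
print(predict(dark_model))     # prints "falling"
```

Real caption-generating networks are vastly more complex, but the principle scales: the model can only reflect the distribution it was fed.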

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” But it still hasn’t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of employees resigned from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.


A journalist, a soup exec, and an imam walk into a room. There’s no joke here. It’s just another day at CrisprCon.

On Monday and Tuesday, hundreds of scientists, industry folk, and public health officials from all over the world filled the amphitheater at the Boston World Trade Center to reckon with the power of biology’s favorite new DNA-tinkering tool: Crispr. The topics were thorny—from the ethics of self-experimenting biohackers to the feasibility of pan-global governance structures. And more than once you could feel the air rush right out of the room. But that was kind of the point. CrisprCon is designed to make people uncomfortable.

“I’m going to talk about the monkey in the room,” said Antonio Cosme, an urban farmer and community organizer in Detroit, who appeared on a panel at the second annual conference devoted to Crispr’s big ethical questions to talk about equitable access to gene editing technologies. He was referring to the results of an audience poll that had appeared moments before in a word cloud behind him, with one word bigger than all the others: “eugenics.”


Google ends its Pentagon contract to develop AI for recognising people in drone videos after 4,000 employees signed an open letter saying that Google’s involvement was against the company’s “moral and ethical responsibility”.


Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires.

Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.

Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

Check out the internal Google film, “The Selfish Ledger”. This probably wasn’t meant to slip onto a public web server, and so I have embedded a backup copy below. Ping me if it disappears. I will locate a permanent URL.

This 8½-minute video is a lot deeper—and possibly more insidious—than it appears. Nick Foster may be the Anti-Christ, or perhaps the most brilliant sociologist of modern times. It depends on your vantage point, and your belief in the potential of user controls and cat-in-bag containment.

He talks of a species propelling itself toward “desirable goals” by cataloging, data mining, and analyzing the past behavior of peers and ancestors—and then using that data to improve the experience of each user’s future and perhaps even their future generations. But, is he referring to shared goals across cultures, sexes and incomes? Who controls the algorithms and the goal filters?! Is Google the judge, arbiter and God?

Consider these quotes from the video. Do they disturb you? The last one sends a chill down my spine. But, I may be overreacting to what is simply an unexplored frontier. The next generation in AI. I cannot readily determine if it ushers in an era of good or bad:

  • Behavioral sequencing « a phrase used throughout the video
  • Viewing human behavior through a Lamarckian lens
  • An individual is just a carrier for the gene. The gene seeks to improve itself and not its host
  • And [at 7:25]: “The mass multigenerational examination of actions and results could introduce a model of behavioral sequencing.”

There’s that odd term again: behavioral sequencing. It suggests that we are mice and that Google can help us to act in unison toward society’s ideal goals.

Today, Fortune Magazine described it this way: “Total and absolute data collection could be used to shape the decisions you make … The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit.”

The statements could apply just as easily to the NSA as they do to Google. At least we are entering into a bargain with Google: we hand them data and they hand us numerous benefits (benefits that many users often overlook). Yet, clearly, this is heavy-duty stuff—especially for the company that knows everything about everyone. Watch it a second time. Think carefully about the power that Google wields.

Don’t get me wrong. I may be in the minority, but I generally trust Google. I recognize that I am raw material and not a client. I accept the tradeoff that I make when I use Gmail, web search, navigate to a destination or share documents. I benefit from this bargain as Google matches my behavior with improved filtering of marketing directed at me.

But, in the back of my mind, I hope for the day that Google implements Blind Signaling and Response, so that my data can only be used in ways that were disclosed to me—and that strengthen and defend that bargain, without subjecting my behavior, relationships and predilections to hacking, misuse, or accidental disclosure.


Philip Raymond sits on Lifeboat’s New Money Systems board. He co-chairs CRYPSA, hosts the Bitcoin Event, publishes Wild Duck and is keynote speaker at global Cryptocurrency Conferences. Book a presentation or consulting engagement.

Credit for snagging this video: Vlad Savov @ TheVerge

Stem cell technology has advanced so much that scientists can grow miniature versions of human brains — called organoids, or mini-brains if you want to be cute about it — in the lab, but medical ethicists are concerned about recent developments in this field involving the growth of these tiny brains in other animals. Those concerns are bound to become more serious after the annual meeting of the Society for Neuroscience starting November 11 in Washington, D.C., where two teams of scientists plan to present previously unpublished research on the unexpected interaction between human mini-brains and their rat and mouse hosts.

In the new papers, according to STAT, scientists will report that the organoids survived for extended periods of time — two months in one case — and even connected to lab animals’ circulatory and nervous systems, transferring blood and nerve signals between the host animal and the implanted human cells. This is an unprecedented advancement for mini-brain research.

“We are entering totally new ground here,” Christof Koch, president of the Allen Institute for Brain Science in Seattle, told STAT. “The science is advancing so rapidly, the ethics can’t keep up.”


This week RT en Español aired a half hour show on life extension and #transhumanism on TV to millions of its #Spanish viewers. My #ImmortalityBus and work was covered. Various Lifeboat Foundation members in this video: Give it a watch:


Longevity, immortality… subjects that have never left anyone indifferent. Now some scientists claim that immortality is technically achievable in the near future. But at the same time, questions of a moral and even philosophical nature arise: what would achieving immortality mean for each of us? Moreover, in a consumerist society of transnational corporations like ours, it sounds unconvincing that immortality could ever become accessible to everyone.


A scientific experiment to reanimate dead brains could lead to humans enduring a ‘fate worse than death,’ an ethics lecturer has warned.

Last month Yale University announced it had successfully resurrected the brains of more than 100 slaughtered pigs and found that the cells were still healthy.

The reanimated brains were kept alive for up to 36 hours and scientists said the process, which should also work in primates, offered a new way to study intact organs in the lab.


This month I’m participating in Cato Institute’s Cato Unbound discussion. Cato is one of the world’s leading think tanks. Here’s my new and second essay for the project:


Professor David D. Friedman sweeps aside my belief that religion may well dictate the development of AI and other radical transhumanist tech in the future. However, at the core of a broad swath of American society lies a fearful luddite tradition. Americans—including the U.S. Congress, where every member is religious—often base their life philosophies and work ethics on their faiths. Furthermore, a recent Pew study showed 7 in 10 Americans were worried about technology in people’s bodies and brains, even if it offered health benefits.

It rarely matters at what point in American history an innovation has emerged. Anesthesia, vaccines, stem cells, and other breakthroughs have historically all battled to survive under pressure from conservatives and Christians. I believe that if formal religion had not impeded our natural secular progress as a nation over the last 250 years, we would have been much further along in terms of human evolution. Instead of discussing and arguing about our coming transhumanist future, we’d be living in it.

Our modern-day battle with genetic editing and whether our government will allow unhindered research of it is proof we are still somewhere between the Stone Age and the AI Age. Thankfully, China and Russia are forcing the issue, since one thing worse than denying Americans their religion is denying them the right to claim the United States is the greatest, most powerful nation in the world.