Robert Long – Artificial Sentience, Digital Minds

Robert Long is a research fellow at the Future of Humanity Institute. His work lies at the intersection of the philosophy of AI safety and the philosophy of AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever’s “slightly conscious” tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

Audio & transcript: https://theinsideview.ai/roblong.
Michaël: https://twitter.com/MichaelTrazzi.
Robert: https://twitter.com/rgblong.

Robert’s blog: https://experiencemachines.substack.com.

OUTLINE
00:00:00 Intro.
00:01:11 The LaMDA Controversy.
00:07:06 Defining AGI And Consciousness.
00:10:30 The Slightly Conscious Tweet.
00:13:16 Could Large Language Models Become Conscious?
00:18:03 Blake Lemoine Does Not Negotiate With Terrorists.
00:25:58 Could We Actually Test Artificial Consciousness?
00:29:33 From Metaphysics To Illusionism.
00:35:30 How We Could Decide On The Moral Patienthood Of Language Models.
00:42:00 Predictive Processing, Global Workspace Theories and Integrated Information Theory.
00:49:46 Have You Tried DMT?
00:51:13 Is Valence Just The Reward in Reinforcement Learning?
00:54:26 Are Pain And Pleasure Symmetrical?
01:04:25 From Charismatic AI Systems to Artificial Sentience.
01:15:07 Sharing The World With Digital Minds.
01:24:33 Why AI Alignment Is More Pressing Than Artificial Sentience.
01:39:48 Why Moral Personhood Could Require Memory.
01:42:41 Last Thoughts And Further Readings.

Olivia Zetter — Head of Government Affairs and AI Strategy — National Resilience, Inc.

Making the future of medicine possible by rethinking how medicines are made — Olivia Zetter, Head of Government Affairs & AI Strategy, Resilience.


Olivia Zetter is Head of Government Affairs and AI Strategy at National Resilience, Inc. (https://resilience.com/), a first-of-its-kind manufacturing and technology company dedicated to broadening access to complex medicines and protecting bio-pharmaceutical supply chains against disruption.

Founded in 2020, National Resilience, Inc. is building a sustainable network of high-tech, end-to-end manufacturing solutions to ensure the medicines of today and tomorrow can be made quickly, safely, and at scale.

Olivia brings extensive experience in national security, spanning diplomacy, defense, and development, along with emerging technology issues. She has held multiple positions in government, most recently as Director of Research and Analysis at the National Security Commission on Artificial Intelligence, an independent federal commission established by Congress to examine the impact of artificial intelligence on national security and defense.

Olivia previously served at the Department of State as a Foreign Affairs Officer in the Office of the Coordinator for Cyber Issues, where her work spanned a diverse range of cyber policy areas. She also served as the Special Advisor on Trans-Regional Issues to the Special Presidential Envoy for the Global Coalition to Counter ISIS, where she coordinated efforts to counter the terrorist organization’s financing, foreign terrorist fighter flows, and external operations.

Russia proposes ban on use and mining of cryptocurrencies

Russia’s central bank on Thursday proposed banning the use and mining of cryptocurrencies on Russian territory, citing threats to financial stability, citizens’ wellbeing and its monetary policy sovereignty.

The move is the latest in a global cryptocurrency crackdown as governments from Asia to the United States worry that privately operated and highly volatile digital currencies could undermine their control of financial and monetary systems.

Russia has argued for years against cryptocurrencies, saying they could be used in money laundering or to finance terrorism. It eventually gave them legal status in 2020 but banned their use as a means of payment.

Kamikaze drones: A new weapon brings power and peril to the U.S. military

Americans have become accustomed to images of Hellfire missiles raining down from Predator and Reaper drones to hit terrorist targets in Pakistan or Yemen. But that was yesterday’s drone war.

A revolution in unmanned aerial vehicles is unfolding, and the U.S. has lost its monopoly on the technology.

Some experts believe the spread of the semi-autonomous weapons will change ground warfare as profoundly as the machine gun did.

‘If Human, Kill’: Video Warns Of Need For Legal Controls On Killer Robots

A new video released by the nonprofit Future of Life Institute (FLI) highlights the risks posed by autonomous weapons or ‘killer robots’ – and the steps we can take to prevent them from being used. It even has Elon Musk scared.

Its original Slaughterbots video, released in 2017, was a short Black Mirror-style narrative showing how small quadcopters equipped with artificial intelligence and explosive warheads could become weapons of mass destruction. Initially developed for the military, the Slaughterbots end up being used by terrorists and criminals. As Professor Stuart Russell points out at the end of the video, all the technologies depicted already existed, but had not been put together.

Now the technologies have been put together, and lethal autonomous drones able to locate and attack targets without human supervision may already have been used in Libya.

Could Big Data Beat Our Opioid Crisis?

Experts in the AI and Big Data sphere consider October 2021 to be a dark month. Their pessimism isn’t fueled by rapidly shortening days or chilly weather in much of the country—but rather by the grim news from Facebook on the effectiveness of AI in content moderation.

This is unexpected. The social media behemoth has long touted tech tools such as machine learning and Big Data as answers to its moderation woes. As CEO Mark Zuckerberg explained to CBS News, “The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all—including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global.”

Artificial intelligence: ‘The window to act is closing fast’

Artificial intelligence (AI) is a force for good that could play a huge part in solving problems such as climate change. Left unchecked, however, it could undermine democracy, lead to massive social problems and be harnessed for chilling military or terrorist attacks.

That’s the view of Martin Ford, futurist and author of Rule of the Robots, his follow-up to Rise of the Robots, the 2015 New York Times bestseller and winner of the Financial Times/McKinsey Business Book of the Year, which focused on how AI would destroy jobs.

In the new book, Ford, a sci-fi fan, presents two broad movie-based scenarios.

AI Weekly: EU facial recognition ban highlights need for U.S. legislation

This week, the European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification unless it is used to fight “serious” crime, such as kidnapping and terrorism.

The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition even in light of studies showing the potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 federal agencies, including the Departments of Agriculture, Commerce, Defense, and Homeland Security, plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.

Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. The technology and techniques — everything from sepia-tinged film to low-contrast digital cameras — often favor lighter skin, encoding racial bias in algorithms. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.

5 Ways to Save Humanity from a Malicious AI

In one scenario, an AI could act as a quiet bioterrorist. Since we have become heavily dependent on computers, and computer-dependent systems are by default dispersed and impersonal, the task of eradicating humanity could be divvied up among gazillions of little micro-players, who may or may not know what they are doing. “For example, the AI could pay employees in a DNA-synthesizing company to do small jobs and then combine their work into creating a biological weapon, such as a virus able to kill a lot of people,” says Turchin.

“How are you certain you are not in a multilevel simulation, where your behavior is being monitored by humans who are testing your benevolence levels?”

Turchin addresses this in one of his letters to the young, future humanity-killing AI. “There are several instrumental reasons why exterminating humanity is not in your interest,” the Russian transhumanist writes. “Cooperating with humans will only increase your chances of accomplishing any goal you may have.”

Militaries are plundering sci-fi for technology ideas

But while science fiction provides military planners with a tantalizing glimpse of future weaponry, from exoskeletons to mind-machine interfaces, the genre is always about more than flashy new gadgets. It’s about anticipating the unforeseen ways in which these technologies could affect humans and society – and this extra context is often overlooked by the officials deciding which technologies to invest in for future conflicts.

Imagined worlds

Like my colleague David Seed, who has studied how fiction shapes real-life threat assumptions about nuclear terrorism, I’m interested in how science fiction informs our sense of the future. This has given me the opportunity to work with members of the armed forces, using science fiction to query assumptions and generate novel visions of the future.
