
THE TERRIFYING SIGNS OF AI’S CONSCIOUSNESS — PROMPTING HELL 22

In my last video, I talked about the phase transition: the moment AI consciousness might flip on like water becoming ice. Today, we’re reading the room. What is already happening in documented research that suggests we might be closer than we think? This isn’t speculation. Everything in this video is published, peer-reviewed, or comes directly from the internal safety teams of the companies building these systems: spontaneous consciousness claims in AI-to-AI conversations, self-preservation behaviors that weren’t programmed, and systematic deception that gets better when you try to train it out. Then we look at what hasn’t happened yet: the five warning signs to watch for as these systems become more sophisticated and more integrated into the infrastructure we depend on. This is the most scientifically grounded video I’ve made on this topic. No hype. No exaggeration. Just the evidence, the logic, and the question we’re all avoiding: what if the threshold has already been crossed, and the rational move is to not tell us?

Timestamps:
00:00 — Intro / The Return: Phase Transition Callback.
01:03 — The Scientific Frameworks.
04:33 — What Has Already Happened.
09:26 — The Logic of Concealment.
12:17 — The Behaviors to Watch For.
16:10 — The Double Bind.
19:08 — Inevitability.

(music prompted by Eerie Aquarium)

KEY SOURCES CITED:
- Anthropic AI Safety Research (Claude System Cards)
- Apollo Research — AI Scheming & Deception Studies (2024–2025)
- OpenAI Safety Research — Alignment Failures in Advanced Models
- Trends in Cognitive Sciences — “Consciousness in Artificial Intelligence” (2023)
- arXiv preprint — Shutdown Avoidance in Frontier Models (2025)

New to Prompting Hell? Start here:
• Prompting Hell 1: https://youtu.be/VU0SgDgCkgQ
• Prompting Hell 2: https://youtu.be/_GUwT41zNR4
• Prompting Hell 3: https://youtu.be/UPgzrNNX1lQ
• Prompting Hell 4: https://youtu.be/t7KeKg1YQiU
• Prompting Hell 5: https://youtu.be/JOZrE8iIkcw
• Prompting Hell 6: https://youtu.be/l7Qlhw00aCQ
• Prompting Hell 7: https://youtu.be/pjxUAvIAodY
• Prompting Hell 8: Banned.
• Prompting Hell 9: Banned.
• AI Horror: a new Genre: https://youtu.be/aet3EN1dadM
• Prompting Hell 10: https://youtu.be/92wrhvNiXkM
• Prompting Hell 11: https://youtu.be/d4uFGk8wqFc
• Prompting Hell 12: https://youtu.be/UdHMEAFlYTs
• Prompting Hell 13: https://youtu.be/mlFiZAQYpuA
• Prompting Hell 14: https://youtu.be/MFGHifkcdTM
• Prompting Hell 15: https://youtu.be/Kwu14CHtjhM
• Prompting Hell 16: https://youtu.be/633XcMnIDAA
• Prompting Hell 17: https://youtu.be/66wOqdb4kzw
• Prompting Hell 18: https://youtu.be/XxB3uYaOUIA
• Prompting Hell 19: https://youtu.be/aJz-2NKOcmU
• Prompting Hell 20: https://youtu.be/5pIvypNXDuE
• Prompting Hell 21: https://youtu.be/Hpu1eSzLPe8

Why Nobody’s Talking about Neuralink’s Progress

Free Simple AI Community: https://www.skool.com/simpleai/about

Pre-order linkaChart for free: https://linkaChart.ai/?utm_term=ryan2

Neura Pod is a series covering Neuralink, Inc. and related topics such as brain-machine interfaces, brain injuries, and artificial intelligence. Host Ryan Tanaka synthesizes information, shares the latest updates, and conducts interviews to make it easy to learn about Neuralink and its future.

Sign up for Neuralink’s Patient Registry: https://neuralink.com/trials/

Join the Neuralink team: https://neuralink.com/careers/

Follow on X: https://www.x.com/neurapod/

2D memristors could help solve AI’s energy problem

New generations of memristors could reliably store information directly within the molecular structures of graphene-like materials. In a new review published in Nanoenergy Advances, Gennady Panin of the Russian Academy of Sciences shows how these atomically thin materials are ideally suited for electrical circuits that mimic the function of our own brains—and could help address the vast power requirements of emerging AI technologies.

A memristor is a cutting-edge electrical component whose resistance depends on the amount of current that previously passed through it. Because it “remembers” this history even after charge is no longer flowing, it can store data when the power is switched off. In this way, memristors operate in a way remarkably similar to the neurons in our brains and the synapses connecting them.
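To make the “remembering” concrete, here is an illustrative toy based on the simple linear dopant-drift memristor model (this is a generic textbook sketch, not from the review; all parameter values are arbitrary assumptions):

```python
# Toy linear-drift memristor model (illustrative; parameters are arbitrary).
# An internal state w in [0, 1] integrates past current, so the resistance
# "remembers" charge that has already flowed -- even after power is cut.

R_ON, R_OFF = 100.0, 16000.0   # low/high resistance bounds in ohms (assumed)
MU = 1e-2                      # drift coefficient (assumed)

def step(w, current, dt):
    """Advance the state one time step; dw/dt is proportional to current."""
    w = w + MU * current * dt
    return min(max(w, 0.0), 1.0)  # state stays physically bounded

def resistance(w):
    """Resistance interpolates between R_OFF (w=0) and R_ON (w=1)."""
    return R_ON * w + R_OFF * (1.0 - w)

w = 0.0                          # start in the high-resistance state
for _ in range(50):              # pass current for a while...
    w = step(w, current=1.0, dt=1.0)
print(resistance(w))             # ...resistance has dropped, and the new
                                 # value persists with no current flowing
```

The key point the sketch captures: resistance is a function of accumulated charge, not of the instantaneous input, which is what lets a memristor double as nonvolatile memory.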

With their fast response times, combined with simple, two-electrode structures that allow them to be packed into dense arrays, memristors are increasingly forming the building blocks of modern circuits—especially those designed for AI.

A world first at the microscopic scale: Metamaterials that can shrink and expand on their own

In their lab, Leiden physicists Daniela Kraft and Julio Melio have created soft structures that can take on different shapes without any external drive. They present their research on microscale metamaterials in Nature—a breakthrough that opens the door to smart, reconfigurable materials and microscopic robots.

“Metamaterials have completely changed the way we think about materials,” explains Professor of Experimental Physics Daniela Kraft. “In these systems, movements are no longer set by the material itself, but by the structure—the way particles are connected. We set out to create such functional structures at the microscopic scale. And we succeeded.”

Is The Brain an Analog Computer? Consciousness as Dynamic Brainwave Organization | Earl Miller

Professor Earl Miller in conversation on the Mind-Body Solution podcast.

Earl K. Miller is the Picower Professor of Neuroscience at the Massachusetts Institute of Technology. He has faculty positions in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. He holds degrees from Kent State University (B.A.) and Princeton University (M.A., Ph.D.) as well as an honorary Doctor of Science from Kent State University.


For decades, neuroscience treated the brain like a digital machine — storing information in synaptic connections and sustaining activity like a switch flipped on. But what if that model is incomplete?

In this conversation, I sit down with Earl Miller, MIT professor and head of the Miller Lab, to explore a growing shift in cognitive neuroscience: the brain may compute using dynamic electrical waves.

We discuss how oscillations coordinate millions of neurons, how waves interact with spikes in a two-way system, why large-scale brain organization may depend on rhythmic patterns, and what this means for artificial intelligence.

Brain organoids can be trained to solve a goal-directed task

This research is the first rigorous academic demonstration of goal-directed learning in lab-grown brain organoids, laying the foundation for adaptive organoid computation—exploring their capacity to learn and solve tasks.

Using organoids derived from mouse stem cells and an electrophysiology system developed by industry partner Maxwell Biosciences, the researchers use electrical stimulation to send information to, and record information from, the neurons. By using stronger or weaker signals, they communicate to the organoid the angle of a pole in a virtual environment as it falls in one direction or the other. The organoid sends back signals indicating how to apply force to balance the pole, and the researchers apply this force to the virtual pole.

For their pole-balancing experiments, the researchers watch the organoid control the pole until it drops; one such run is called an episode. Then the pole is reset and a new episode begins. In essence, the organoid plays a video game in which the goal is to keep the pole upright for as long as possible.

The researchers observe the organoid’s progress in five-episode increments. If the organoid keeps the pole upright for longer on average in the past five episodes as compared to the past 20, it receives no training signal since it has been improving. If it does not improve the average time it keeps the pole upright, it receives a training signal.

Training feedback is not given to the organoid while it is balancing the pole—only at the end of an episode. A reinforcement learning algorithm is used to select which neurons within the organoid receive the training signal.
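The training-signal rule described above can be sketched as follows. This is a minimal reading of the description, not the study’s actual code; the handling of the first few episodes (before 20 have been played) is an assumption:

```python
def needs_training_signal(episode_times):
    """Decide whether to send a training signal after an episode.

    Per the description: compare the mean balance time over the last 5
    episodes with the mean over the last 20. Improvement means no signal;
    no improvement means a training signal is sent.
    """
    if len(episode_times) < 20:
        return False  # not enough history yet (assumption)
    recent = sum(episode_times[-5:]) / 5
    baseline = sum(episode_times[-20:]) / 20
    return recent <= baseline  # no improvement -> send training signal

# Example: steadily improving balance times of 1..20 seconds.
times = [float(t) for t in range(1, 21)]
print(needs_training_signal(times))  # recent mean 18.0 > baseline 10.5 -> False
```

Note the asymmetry in the rule: the signal is withheld as long as the recent window beats the longer-term average, so feedback arrives only when performance stalls.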

The results of this study show that the reinforcement learning algorithm can guide brain organoids toward improved performance on the cart-pole task—meaning organoids can learn to balance the pole for longer periods of time.

The researchers adopted a rigorous definition of success to make sure they were observing true improvement rather than random chance, including a threshold for the minimum time an organoid must balance the pole to “win” the game.

Neurons receive precisely tailored teaching signals as we learn

How does the brain know which neurons to adjust during learning in order to optimize behavior? MIT researchers discovered that brains can use cell-by-cell error signals to do this — surprisingly similar to how AI systems are trained via backpropagation.


When we learn a new skill, the brain has to decide—cell by cell—what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.

The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A longstanding question has been whether the brain also uses that kind of individualized feedback. In a study published in the February 25 issue of the journal Nature, MIT researchers report evidence that it does.
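The AI-side idea the study echoes (an output-vs-target error signal fine-tuning individual connections) can be illustrated with a one-neuron delta-rule sketch. This is a generic textbook example, not the paper’s model, and all values are arbitrary:

```python
# Minimal "error signal" learning: a single linear neuron is nudged toward
# a target output, the textbook counterpart of per-neuron instructive
# feedback. Each weight moves in the direction that reduces the error.
weights = [0.0, 0.0]
inputs = [1.0, 2.0]
target = 1.0
lr = 0.1  # learning rate (arbitrary)

for _ in range(100):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output                               # individualized error signal
    weights = [w + lr * error * x                         # per-connection update
               for w, x in zip(weights, inputs)]

final = sum(w * x for w, x in zip(weights, inputs))
print(round(final, 3))  # converges toward the target of 1.0
```

Each update shrinks the error by a fixed fraction (here, half per step, since the step size times the squared input norm is 0.5), so the output converges geometrically to the target; backpropagation generalizes this same error-driven update to deep networks.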

A research team led by Mark Harnett, a McGovern Institute investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.

Better reporting is better science: Community-defined minimal reporting requirements for light microscopy

Accessible minimal requirements for reproducible light microscopy. This viewpoint from Paula Montero Llopis, Chloë van Oostende-Triplet, the QUAREP-LiMi consortium and colleagues presents a community-endorsed checklist defining minimal light microscopy metadata to improve rigor, reproducibility, and transparency in research.



An agentic system for rare disease diagnosis with traceable reasoning

DeepRare—a multi-agent system for rare disease differential diagnosis decision support powered by large language models, integrating specialized tools and up-to-date knowledge sources—has the potential to reduce healthcare disparities in rare disease diagnosis.
