Rodolfo Llinás tells the story of how he developed bundles of nanowires, thinner than strands of spider silk, that can be inserted into the blood vessels of human brains.
While these wires have so far been tested only in animals, they suggest that direct communication with the deep recesses of the brain may not be far off. One indication of how significant the breakthrough is: agents from the US National Security Agency quickly showed up at the MIT laboratory where the wires were being developed.
What does this mean for the future? It might be possible to stimulate the senses directly, creating visual perceptions, auditory perceptions, movements, and feelings. Deep brain stimulation could create the ultimate virtual reality. Direct communication between human and machine, or from one human brain to another, could also become a real possibility.
Llinás poses compelling questions about the potential and ethics of his technology.
The future of warfare starts in your mind. Understand how neuroscience, technology/AI, and the OODA loop affect your flow. The world is changing, and cognitive warfare is at the forefront. In our latest podcast episode, we sit down with James Giordano, PhD, a Navy veteran and an expert in neurocognitive science, to delve into the world of cognitive warfare. From the impact of emotions on decision-making to the integration of artificial intelligence and human cognition, this episode challenges your perspective on the battlefield. Join us as we explore the ethical implications of genetic modifications, the transformative effects of psychedelics, and the complexities of data usage in the digital age. Get ready to reimagine the relationship between technology, culture, and language. Don't miss this opportunity to gain valuable insights from our thought-provoking conversation with Dr. Giordano. Tune in now to stay ahead of the curve on the evolving landscape of warfare!

Stay in the loop: https://www.aglx.com/newsletter-signup-north-america
00:00 — Understanding the OODA Loop: A Neuroscience Perspective
09:11 — Exploring Fifth Generation Warfare and Liminal Warfare
16:06 — The Long Game: China's Strategic Plan
22:19 — Understanding Cognitive Warfare and Human-Machine Teaming
25:52 — The Evolution of Human-Machine Teaming
29:11 — Human Involvement in AI Decision Making
36:01 — The Ethics of Paternalistic AI Systems
40:43 — Technology's Impact on Cognitive Engagement
45:13 — Exploring Technologies for Human Performance Enhancement
55:59 — Diving Into Attacking Mode and Ethics
56:24 — Hacking the Human Genome
59:37 — Epigenetic Modification and Phenotypic Shift
1:04:54 — The Psychedelic Revolution
1:11:18 — Revisiting Alcohol and Caffeine: Benefits and Burdens
1:19:18 — Impact of Technology on Cognitive Capacity
1:23:33 — Information Overload and Burdens
1:27:02 — Ownership and Security of Personal Data
1:31:56 — Identifying Predispositional Traits
1:33:49 — Data Manipulation and Biometrics
1:40:13 — Cultural Impact of Technology
1:48:55 — The Role of Education in Integrating Science, Technology, Ethics, and Policy
1:54:30 — Major Threats and Concerns in Today's World
Joscha Bach is a German cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and the philosophy of mind.
Dr. Theofanopoulou studies the neural circuits behind sensory-motor behaviors like speech and dance, aiming to develop drug- and arts-based therapies for brain disorders. Her brain imaging research reveals overlapping motor cortex regions controlling the muscles for speech and dance, while her transcriptomic studies show upregulation of the oxytocin gene pathway in key areas such as the motor cortex and brainstem. Using zebra finches, Bengalese finches, white-rumped munias, and humans, she demonstrates oxytocin's role in vocal production. She has also developed genomic tools to apply these findings across vertebrates. Her future work explores oxytocin-based drugs and dance therapies to treat speech and motor deficits in brain disorders. Recorded on 02/14/2025. [3/2025] [Show ID: 40384]
In this interview, Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI systems as mere tools, and strategies to mitigate suffering in artificial systems. Drawing on themes from the paper 'Taking AI Welfare Seriously' and his upcoming book 'The Moral Circle', Sebo examines how to detect markers of sentience in AI systems, and what to do about it. We explore ethical considerations through the lens of population ethics and AI governance (especially important in an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI aiding human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.
Paper 'Taking AI Welfare Seriously': https://eleosai.org/papers/20241030_T… — The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20?tag=lifeboatfound-20… Jeff's Website: https://jeffsebo.net/ Eleos AI: https://eleosai.org/

Chapters:
00:00 Intro
01:40 Implications of failing to take AI welfare seriously
04:43 Engaging the disengaged
08:18 How Blake Lemoine's 'disclosure' influenced public discourse
12:45 Will people take AI sentience seriously if AI is seen as tools or commodities?
16:19 Importance, neglectedness and tractability (INT)
20:40 Tractability: Difficulties in measuring moral significance — i.e. by aggregate brain mass
22:25 Population ethics and the repugnant conclusion
25:16 Pascal's mugging: low probabilities of infinite or astronomically large costs and rewards
31:21 Distinguishing real high-stakes causes from infinite utility scams
33:45 The nature of consciousness, and what to measure in looking for moral significance in AI
39:35 Varieties of views on what's important. Computational functionalism
44:34 AI arms race dynamics and the need for governance
48:57 Indirect approaches to achieving ideal solutions — indirect normativity
51:38 The marker method — looking for morally relevant behavioral & anatomical markers in AI
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems — reverse wireheading
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo
1:09:46 What kind of world could be achieved
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems
1:16:30 AI to help humans improve mood and quality of experience
1:18:48 How to find out more about Jeff Sebo's research
1:19:12 How to get involved

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P… Kind regards, Adam Ford
🔍 Overview: Join Robert Plomin and me as we dive deep into the fascinating world of behavioural genetics, exploring how our DNA shapes who we are, the power of environment, and whether we can rewrite our genetic destiny.
🗣️ Highlights:
[Highlight 1]: How Does Genetics Shape Who We Are?
[Highlight 2]: What Role Does the Environment Truly Play in Defining Us?
[Highlight 3]: Are We Hardwired by Our DNA, or Can We Rewrite Our Destiny?
Humans have been selectively breeding cats and dogs for thousands of years to make more desirable pets. A new startup called the Los Angeles Project aims to speed up that process with genetic engineering to make glow-in-the-dark rabbits, hypoallergenic cats and dogs, and possibly, one day, actual unicorns.
The Los Angeles Project is the brainchild of biohacker Josie Zayner, who in 2017 publicly injected herself with the gene-editing tool Crispr during a conference in San Francisco and livestreamed it. “I want to help humans genetically modify themselves,” she said at the time. She’s also given herself a fecal transplant and a DIY Covid vaccine and is the founder and CEO of The Odin, a company that sells home genetic-engineering kits.
Now, Zayner wants to create the next generation of pets. “I think, as a human species, it’s kind of our moral prerogative to level up animals,” she says.
Advanced artificial intelligence (AI) tools, including LLM-based conversational agents such as ChatGPT, have become increasingly widespread. These tools are now used by countless individuals worldwide for both professional and personal purposes.
Some users now also ask AI agents to answer everyday questions, some of which carry ethical and moral nuances. Equipping these agents with the ability to distinguish between what is generally considered 'right' and 'wrong', so that they can be programmed to provide only ethical and morally sound responses, is thus of the utmost importance.
Researchers at the University of Washington, the Allen Institute for Artificial Intelligence and other institutes in the United States recently carried out an experiment exploring the possibility of equipping AI agents with a machine equivalent of human moral judgment.