Humans have been selectively breeding cats and dogs for thousands of years to make more desirable pets. A new startup called the Los Angeles Project aims to speed up that process with genetic engineering to make glow-in-the-dark rabbits, hypoallergenic cats and dogs, and possibly, one day, actual unicorns.

The Los Angeles Project is the brainchild of biohacker Josie Zayner, who in 2017 publicly injected herself with the gene-editing tool Crispr during a conference in San Francisco and livestreamed it. “I want to help humans genetically modify themselves,” she said at the time. She’s also given herself a fecal transplant and a DIY Covid vaccine and is the founder and CEO of The Odin, a company that sells home genetic-engineering kits.

Now, Zayner wants to create the next generation of pets. “I think, as a human species, it’s kind of our moral prerogative to level up animals,” she says.

Advanced artificial intelligence (AI) tools, including conversational agents based on large language models (LLMs), such as ChatGPT, have become increasingly widespread. These tools are now used by countless individuals worldwide for both professional and personal purposes.

Users are now also asking AI agents everyday questions, some of which carry ethical and moral nuances. Equipping these agents to discern between what is generally considered ‘right’ and ‘wrong’, so that they can be constrained to give only ethically and morally sound responses, is thus of the utmost importance.

Researchers at the University of Washington, the Allen Institute for Artificial Intelligence and other institutes in the United States recently carried out an experiment exploring the possibility of equipping AI agents with a machine equivalent of human moral judgment.
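To make the idea concrete, the sketch below shows one way a conversational agent's output could be gated by a moral-judgment model. It is a minimal illustration, not the researchers' actual system: the keyword-based judge is a stand-in for a trained moral-judgment classifier, and every name in it is hypothetical.

```python
# Minimal sketch of a moral-judgment "gate" around a conversational agent.
# The judge here is a placeholder: a real system would call a trained
# moral-judgment model, not keyword rules.

from dataclasses import dataclass

@dataclass
class Judgment:
    verdict: str   # "acceptable" or "questionable"
    reason: str

def moral_judge(text: str) -> Judgment:
    """Placeholder judge: flag obviously harmful text."""
    red_flags = ("hurt", "steal", "deceive")
    for flag in red_flags:
        if flag in text.lower():
            return Judgment("questionable", f"contains '{flag}'")
    return Judgment("acceptable", "no flags found")

def guarded_reply(user_request: str, draft_reply: str) -> str:
    """Release the draft reply only if both the request and the
    reply pass the moral judge; otherwise decline."""
    for text in (user_request, draft_reply):
        judgment = moral_judge(text)
        if judgment.verdict != "acceptable":
            return f"I can't help with that ({judgment.reason})."
    return draft_reply

if __name__ == "__main__":
    print(guarded_reply("How do I steal a bike?", "Here is how..."))
    print(guarded_reply("How do I bake bread?", "Mix flour, water, yeast..."))
```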

I presented these slides (PDF and images below) during the Workshop on Philosophy and Ethics of Brain Emulation (January 28th-29th, 2025) at the Mimir Center for Long Term Futures Research in Stockholm, Sweden. In my talk, I explored how various biological phenomena beyond standard neuronal electrophysiology may exert noticeable effects on the computations underlying subjective experiences. I emphasized the importance of the large range of timescales that such phenomena operate over (milliseconds to years). If we are to create emulations which think and feel like human beings, we must carefully consider the numerous tunable regulatory mechanisms the brain uses to enhance the complexity of its computational repertoire.
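As a toy illustration of the timescale point (my own sketch, not taken from the talk), the code below couples a fast activity variable to a much slower regulatory variable, so that a process thousands of times slower gradually reshapes what the fast dynamics compute. All parameters are arbitrary assumptions chosen only to make the separation of timescales visible.

```python
import numpy as np

def simulate(T=5.0, dt=0.001, tau_fast=0.01, tau_slow=10.0):
    """Fast activity x (~10 ms timescale) driven through a slow
    gain g (~10 s timescale) that adapts toward a set point."""
    steps = int(T / dt)
    x = 0.0   # fast activity (membrane-like dynamics)
    g = 2.0   # slow gain (regulatory / homeostatic process)
    xs = np.empty(steps)
    for i in range(steps):
        drive = 1.0 if (i * dt) % 1.0 < 0.5 else 0.0  # square-wave input
        x += dt / tau_fast * (-x + g * drive)         # fast relaxation
        g += dt / tau_slow * (0.5 - x)                # slow gain adaptation
        xs[i] = x
    return xs

activity = simulate()
# The slow variable drifts, so the same input yields different output later.
print(f"early mean activity: {activity[:1000].mean():.3f}")
print(f"late  mean activity: {activity[-1000:].mean():.3f}")
```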

Dr. Masayo Takahashi graduated from Kyoto University’s Faculty of Medicine in 1986. In 1992, she completed her Ph.D. in Visual Pathology at Kyoto University’s Graduate School of Medicine. She first worked as a clinician, but later became interested in research following her studies in the United States in 1995. In 2005, her lab became the first in the world to successfully differentiate neural retina from embryonic stem cells. She is currently the project leader of the Laboratory for Retinal Regeneration at the RIKEN Center for Developmental Biology (CDB).

Recently, researchers in Japan restored the vision of three people using pluripotent stem cells.


Then, in March 2017, Dr. Takahashi and her team took another important step forward. While the 2014 surgery had used cells generated from the patient’s own tissues, Dr. Takahashi and her team succeeded this time in the world’s first transplantation of RPE cells generated from iPS cells that originated from another person (called “allogeneic transplantation”) to treat a patient with wet-type AMD. Currently, the patient is being monitored for the possibility of rejection, which is a risk of allogeneic transplantation. Regarding the significance of the operation, Dr. Takahashi explains that “allogeneic transplantation substantially reduces the time and cost required in producing RPE cells, creating opportunities for even more patients to undergo surgeries. Hearing patients’ eager expectations firsthand when working as a clinician has also been a significant motivation.”

Dr. Takahashi’s team is currently making preparations for clinical studies that will target retinitis pigmentosa, a hereditary eye disease, by transplanting photoreceptor cells. “Having my mind set on wanting to see applications of iPS cells in treatments as quickly as possible, I have been actively involved in the creation of the regulations for their practical applications in regenerative medicine. In Japan, where clinical studies and clinical trials can be conducted at the same time, there is significant merit in the fact that research can be carried out by doctors who also work in medical settings. This helps ensure that they proceed with a sense of responsibility and strong ethics. Our advanced clinical studies have attracted the attention of researchers working in regenerative medicine in various countries. I intend to maintain a rapid pace of research so that we can treat the illnesses of as many patients as possible.”

Professor Graham Oppy discusses the Turing Test, whether AI can understand, whether it can be more ethical than humans, moral realism, AI alignment, incoherence in human value, indirect normativity and much more.

Chapters:
0:00 The Turing test.
6:06 Agentic LLMs.
6:42 Concern about non-anthropocentric intelligence.
7:57 Machine understanding & the Chinese Room argument.
10:21 AI ‘grokking’ — seemingly understanding stuff.
13:06 AI and fact checking.
15:01 Alternative tests for assessing AI capability.
17:35 Moral Turing Tests — Can AI be highly moral?
18:37 Philosophy’s role in AI development.
21:51 Can AI help progress philosophy?
23:48 Increasing precision in the language of philosophy via technoscience.
24:54 Should philosophers be more involved in AI development?
26:59 Moral realism & finding universal principles.
31:02 Empiricism & moral truth.
32:09 Challenges to moral realism.
33:09 Truth and facts.
36:26 Are suffering and pleasure real?
37:54 Signatures of pain.
39:25 AI learning from morally relevant features of reality.
41:22 AI self-improvement.
42:36 AI mind reading.
43:46 Can AI learn to care via moral realism?
45:42 Bias in AI training data.
46:26 Metaontology.
48:27 Is AI conscious?
49:45 Can AI help resolve moral disagreements?
51:07 ‘Little’ philosophical progress.
54:09 Does the human condition prevent or retard widespread value convergence?
55:04 Difficulties in AI aligning to incoherent human values.
56:30 Empirically informed alignment.
58:41 Training AI to be humble.
59:42 Paperclip maximizers.
1:00:41 Indirect specification — avoiding AI totalizing narrow and poorly defined goals.
1:02:35 Humility.
1:03:55 Epistemic deference to ‘Jupiter-brain’ AI.
1:05:27 Indirect normativity — verifying Jupiter-brain oracle AI’s suggested actions.
1:08:25 Ideal observer theory.
1:10:45 Veil of ignorance.
1:13:51 Divine psychology.
1:16:21 The problem of evil — an indifferent god?
1:17:21 Ideal observer theory and moral realism.

See Wikipedia article on Graham Oppy: https://en.wikipedia.org/wiki/Graham_Oppy

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P… Kind regards, Adam Ford

X: https://twitter.com/oppygraham

#AI #philosophy #aisafety #ethics

What does it take for a kind, compassionate, and ethical person to commit acts of cruelty? Why do ordinary individuals sometimes cross the line into darkness?

In this video, we explore the psychological forces behind human behavior, delving into Philip Zimbardo’s groundbreaking Stanford Prison Experiment, Stanley Milgram’s obedience studies, and historical events that reveal the thin line between good and evil. From the power of authority and dehumanization to the roles society imposes, discover the mechanisms that can corrupt even the most virtuous among us.

But this isn’t just about others—it’s about you. Could you resist these forces? Are you aware of how they operate in your daily life?

By the end, you’ll learn practical strategies to recognize and resist these influences, uncovering your potential for moral courage, empathy, and heroism. This video will challenge your perspective on human nature and inspire you to act with integrity in a world where the battle between good and evil is ever-present.

Teaching healthy lifestyle behaviors to very young children is foundational to their future habits. Previous evidence suggests that philosophical thinking (PT) can help children develop moral values, cognitive skills, and decision-making abilities.

A recent study published in BMC Public Health explores the role of PT in assisting preschoolers to adopt healthy lifestyle behaviors. Some of these habits include being physically active, eating healthy, washing hands properly, having respect for one’s body, being aware of one’s needs, feelings, abilities, and responsibilities, getting sufficient sleep, and sharing one’s thoughts with others.

Artificial consciousness is the next frontier in AI. While artificial intelligence has advanced tremendously, creating machines that can surpass human capabilities in certain areas, true artificial consciousness represents a paradigm shift—moving beyond computation into subjective experience, self-awareness, and sentience.

In this video, we explore the profound implications of artificial consciousness, the defining characteristics that set it apart from traditional AI, and the groundbreaking work being done by McGinty AI in this field. McGinty AI is pioneering new frameworks, such as the McGinty Equation (MEQ) and Cognispheric Space (C-space), to measure and understand consciousness levels in artificial and biological entities. These advancements provide a foundation for building truly conscious AI systems.

The discussion also highlights real-world applications, including QuantumGuard+, an advanced cybersecurity system utilizing artificial consciousness to neutralize cyber threats, and HarmoniQ HyperBand, an AI-powered healthcare system that personalizes patient monitoring and diagnostics.

However, as we venture into artificial consciousness, we must navigate significant technical challenges and ethical considerations. Questions about autonomy, moral status, and responsible development are at the forefront of this revolutionary field. McGinty AI integrates ethical frameworks such as the Rotary Four-Way Test to ensure that artificial consciousness aligns with human values and benefits society.

Unlike traditional reinforcement learning from human feedback (RLHF), which only provides feedback after an assessment has been completed, passive brain-computer interfaces (pBCIs) capture implicit, real-time information about the user’s cognitive and emotional state throughout the interaction. This allows the AI to access more comprehensive, multidimensional feedback, including intermediate decisions, judgments and thought processes. By observing brain activity as the user assesses situations, pBCIs provide a more comprehensive understanding of user needs and enable the AI to adapt more effectively and proactively.

By combining RLHF with pBCIs, we can elevate AI alignment to a new level—capturing richer, more meaningful information that enhances AI’s responsiveness, adaptability and effectiveness. This combination, called neuroadaptive RLHF, retains the standard RLHF approach but adds more detailed feedback through pBCIs in an implicit and unobtrusive way. Neuroadaptive RLHF allows us to create AI models that better understand and support the user, saving time and resources while providing a seamless experience.
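As a rough sketch of how such a blend might work (the function names, stand-in decoder, and fixed weights below are illustrative assumptions, not an established pBCI API), one could fold a per-step implicit signal into the episode-level explicit rating before it reaches the reward model:

```python
import numpy as np

def implicit_score(eeg_window: np.ndarray) -> float:
    """Stand-in decoder mapping a window of neural features to [-1, 1].
    A real pBCI pipeline would use a trained classifier, e.g. on
    workload or error-related potentials."""
    return float(np.tanh(eeg_window.mean()))

def neuroadaptive_reward(explicit_rating: float,
                         eeg_windows: list[np.ndarray],
                         w_explicit: float = 0.7,
                         w_implicit: float = 0.3) -> float:
    """Weighted blend: the explicit rating arrives once, after the
    episode; implicit scores are available per step and averaged here."""
    implicit = np.mean([implicit_score(w) for w in eeg_windows])
    return w_explicit * explicit_rating + w_implicit * implicit

rng = np.random.default_rng(0)
windows = [rng.normal(loc=0.2, size=64) for _ in range(10)]  # fake features
print(neuroadaptive_reward(explicit_rating=1.0, eeg_windows=windows))
```

The weighted average is only the simplest possible design choice; an actual neuroadaptive RLHF pipeline would train the decoder and learn the weighting rather than fixing them by hand.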

The integration of RLHF with pBCIs presents both opportunities and challenges. Among the most pressing concerns are privacy and ethics, as pBCIs capture sensitive neural data. Ensuring proper consent, secure storage and ethical use of this data is critical to avoid misuse or breaches of trust.