BLOG

Archive for the ‘ethics’ category: Page 3

Jul 24, 2024

Global Versus Local Theories of Consciousness and the Consciousness Assessment Issue in Brain Organoids

Posted by in categories: biotech/medical, ethics, neuroscience

Recently, human brain organoids have attracted increasing interest from scholars across many fields, and a dynamic discussion in bioethics is ongoing. There is a serious concern that these in vitro models of brain development, based on innovative methods for three-dimensional stem cell culture, might deserve a specific moral status [1, 2]. This would especially be the case if these small stem cell constructs were to develop physiological features of organisms endowed with nervous systems, suggesting that they may be able to feel pain or develop some form of sentience or consciousness. Whether one wants to envision or discard the possibility of conscious brain organoids, and whether one wants to acknowledge or dispute its moral relevance, the notion of consciousness is a main pillar of this discussion (even if not the only issue involved [3]). However, consciousness is itself a difficult notion, its nature and definition having been debated for decades [4, 5]. As a consequence, the ethical debate surrounding brain organoids is deeply entangled with epistemological uncertainty about the conceptual underpinnings of the science of consciousness and its empirical endeavor.

It has been argued that neuroethics should circumvent this fundamental uncertainty by adhering to a precautionary principle [6]. Even if we do not know with certainty at which point brain organoids could become conscious, following some experimental design principles would ensure that the research does not raise ethically problematic features in the years to come. It has also been proposed to redirect the inquiry toward the “what-kind” issue (rather than the “whether or not” issue) in order to rely on more graspable features for ethical assessment [7]. These strategies, however, make the epistemological issue even more relevant. The question of whether current and future organoids can develop a certain form of consciousness (without presupposing what these different forms of consciousness might be), and how to assess this potentiality in existing biological systems, is bound to stay with the field of brain organoid technology for some time. Even setting ethical issues aside, there is theoretical interest in determining the boundary conditions of consciousness and its potential emergence in artificial entities. Although the methodological and knowledge gap is still wide between the research community working on cellular biology and stem cell culture on the one side and the research communities working on consciousness, such as cognitive neuroscience, on the other, there will be more and more circulation of ideas and methods in the coming years. The results of this scientific endeavor will, in turn, impact ethics.

In this article, I look back at the history of consciousness research to find new perspectives on this contemporary epistemological conundrum. In particular, I suggest the distinction between “global” theories of consciousness and “local” theories of consciousness as a thought-provoking one for those engaged in the difficult task of adapting models of consciousness to the biological reality of brain organoids. The first section introduces the consciousness assessment issue as a general framework and a challenge for any discussion related to the putative consciousness of brain organoids. In the second section, I describe and critically assess the main attempt, so far, at solving the consciousness assessment issue relying on integrated information theory. In the third section, I propose to rely on the distinction between local and global theories of consciousness as a tool to navigate the theoretical landscape, before turning to the analysis of a notable local theory of consciousness, Semir Zeki’s theory of microconsciousness, in the fourth section. I conclude by drawing the epistemological and ethical lessons from this theoretical exploration.

Jul 23, 2024

Human Brain Organoid Research and Applications: Where and How to Meet Legal Challenges?

Posted by in categories: biotech/medical, ethics, law, neuroscience

One of the most debated ethical concerns regarding brain organoids is the possibility that they will become conscious (de Jongh et al. 2022). Currently, many researchers believe that human brain organoids will not become conscious in the near future (International Society for Stem Cell Research 2021). However, several consciousness theories suggest that even existing human brain organoids could be conscious (Niikawa et al. 2022). Further, the feasibility depends on the definition of “consciousness.” For the sake of argument, we assume that human brain organoids can be conscious in principle and examine the legal implications of three types of “consciousness,” in the order in which they could most easily be realized. The first is a non-valenced experience—a mere sensory experience without positive or negative evaluations. The second is a valenced experience, or sentience—an experience with evaluations such as pain and pleasure. The third is a more developed cognitive capacity. We assume that if any form of consciousness makes an entity a subject of (more complex) welfare, it may need to be legally (further) protected.

As a primitive form of consciousness, a non-valenced experience will, if possible, be realized by human brain organoids earlier than other forms of consciousness. However, the legal implications remain unclear. Suppose welfare consists solely of good or bad experiences. In that case, human brain organoids with a non-valenced experience have nothing to protect, because they cannot have good or bad experiences. However, some argue that non-valenced experiences hold moral significance even without contributing to welfare. In addition, welfare may not be limited to experience, a view recently adopted in animal ethics (Beauchamp and DeGrazia 2020). Adopting this perspective, even if human brain organoids possess only non-valenced experiences—or lack consciousness altogether—their basic sensory or motor capacities (Kataoka and Sawai 2023), or the possession of living or non-living bodies to utilize these capacities (Shepherd 2023), may warrant protection.

Jul 21, 2024

The Donation of Human Biological Material for Brain Organoid Research: The Problems of Consciousness and Consent

Posted by in categories: biotech/medical, ethics, neuroscience

Human brain organoids are three-dimensional masses of tissues derived from human stem cells that partially recapitulate the characteristics of the human brain. They have promising applications in many fields, from basic research to applied medicine. However, ethical concerns have been raised regarding the use of human brain organoids. These concerns primarily relate to the possibility that brain organoids may become conscious in the future. This possibility is associated with uncertainties about whether and in what sense brain organoids could have consciousness and what the moral significance of that would be. These uncertainties raise further concerns regarding consent from stem cell donors who may not be sufficiently informed to provide valid consent to the use of their donated cells in human brain organoid research.

Jul 15, 2024

All about Transhumanism

Posted by in categories: biological, ethics, mobile phones, neuroscience, transhumanism

I recently read the report from Sharad Agarwal; here are my takeaways, with some examples added:

Transhumanism is the concept of transcending humanity’s fundamental limitations through advances in science and technology. This intellectual movement advocates for enhancing human physical, cognitive, and ethical capabilities, foreseeing a future where technological advancements will profoundly modify and improve human biology.

Consider transhumanism as a kind of smartphone upgrade. Just as we update our phones with the latest software to add capabilities and fix problems, transhumanism seeks to use technological breakthroughs to expand human capacities. This could include strengthening our bodies to make us stronger or more resilient, enhancing our cognition to improve memory or intelligence, or even fine-tuning moral judgment. Like phone upgrades, transhumanism aspires to maximize efficiency and effectiveness by elevating the human condition beyond its inherent bounds.

Jul 11, 2024

Could AIs become conscious? Right now, we have no way to tell

Posted by in categories: biological, ethics, law, robotics/AI

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities could vanish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Jul 10, 2024

The Promise and Peril of AI

Posted by in categories: biotech/medical, drones, ethics, existential risks, law, military, robotics/AI

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.

We also have to think through whether we would really want our side to observe a ban on lethal autonomous weapons (LAWs) if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not on development—although even this is likely more for strategic and political reasons than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.

Jul 9, 2024

Thomas Hartung and colleagues | The future of organoid intelligence | Frontiers Forum Deep Dive 2023

Posted by in categories: biotech/medical, chemistry, computing, engineering, ethics, health, neuroscience, policy

Excellent.

Human brains outperform computers in many forms of processing and are far more energy efficient. What if we could harness their power in a new form of biological computing?


Jul 9, 2024

Philosopher David Chalmers: We Can Be Rigorous in Thinking about the Future

Posted by in categories: bioengineering, ethics, life extension, Ray Kurzweil, robotics/AI, singularity

David is one of the world’s best-known philosophers of mind and thought leaders on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) who have written and spoken publicly about the Matrix simulation argument and the technological singularity. (See, for example, David’s presentation at the 2009 Singularity Summit, or read his essay “The Singularity: A Philosophical Analysis.”)

During our conversation, David and I discuss topics such as: how and why Chalmers got interested in philosophy; his search for answers to what he considers some of the biggest questions—the nature of reality, consciousness, and artificial intelligence; the fact that academia in general, and philosophy in particular, doesn’t seem to engage with technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test, and other benchmarks on the way to the singularity; consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking domain of solely human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life extension; and reverse-engineering long-dead people, such as Ray Kurzweil’s father.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Jul 5, 2024

Anders Sandberg: We Are All Amazingly Stupid, But We Can Get Better

Posted by in categories: ethics, singularity, transhumanism

Want to find out how and why Anders Sandberg got interested in transhumanism and ethics? Want to hear his take on the singularity? Check out his interview for SingularityWeblog.com

Jul 5, 2024

Exploring AI, Cognitive Science, and Ethics | Deep Interview with Jay Friedenberg

Posted by in categories: biotech/medical, ethics, finance, robotics/AI, science, singularity

In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.

00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs. Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work


Page 3 of 83