BLOG

Archive for the ‘ethics’ category: Page 4

May 30, 2024

Andreas Hein on LinkedIn: #interstellar #conference #luxembourg #exoplanet

Posted in categories: ethics, robotics/AI, security, space travel

Want to go on an unforgettable trip? Abstract Submission closing soon! Exciting news from SnT, Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg! We are thrilled to announce the 1st European Interstellar Symposium in collaboration with esteemed partners like the Interstellar Research Group, Initiative & Institute for Interstellar Studies, Breakthrough Prize Foundation, and Luxembourg Space Agency. This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence. Don’t miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the “Call for Papers” link in the comment section to secure your spot! Image credit: Maciej Rębisz, Science Now Studio #interstellar #conference #Luxembourg #exoplanet

May 28, 2024

How AI is poised to unlock innovations at unprecedented pace

Posted in categories: business, ethics, governance, internet, policy, robotics/AI, security

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid the acceleration of innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: We must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Continue reading “How AI is poised to unlock innovations at unprecedented pace” »

May 26, 2024

Training Transhumanists at Oxford University

Posted in categories: biotech/medical, ethics, mobile phones, neuroscience, transhumanism

Those who know Oxford University for its literary luminaries might be surprised to learn that some of the most important reflections on emerging technologies come from its hallowed halls. While the leading tech innovators in Silicon Valley capture imaginations with their bold visions of future singularities, mind-machine melding, and digital immortality by 2045, they rarely engage as deeply with the philosophical issues surrounding such developments as do their like-minded scholars across the pond. This essay will briefly highlight some of the key contributions of Oxford University’s professors Nick Bostrom, Anders Sandberg, and Julian Savulescu to the transhumanist movement. It will also show how this movement’s focus on radical autonomy in biotechnical enhancements shapes the wider global bioethical conversation.

As the lead author of the Transhumanist FAQ, Bostrom provides the closest the movement has to an institutional catechism. He is, in a sense, the Ratzinger of Transhumanism. The first paragraph of the seminal text emphasizes the evolutionary vision of his school. Transhumanism’s incessant pursuit of radical technological transformation is “based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Current humans are but one intriguing yet greatly improvable iteration of human existence. Think of the first iPhone and how unattractive 2007’s most cutting-edge technology is in 2024.

Continue reading “Training Transhumanists at Oxford University” »

May 19, 2024

Superintelligence: Paths, Dangers, Strategies

Posted in categories: biotech/medical, ethics, existential risks, robotics/AI

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 From that beginning, a meandering trajectory of technical successes and ‘AI winters’ unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound inherent dangers of advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale to those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I recently revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

May 17, 2024

The neural signature of subjective disgust could apply to both sensory and socio-moral experiences

Posted in categories: ethics, neuroscience

Disgust is one of the six basic human emotions, along with happiness, sadness, fear, anger, and surprise. Disgust typically arises when a person perceives a sensory stimulus or situation as revolting, off-putting, or unpleasant in other ways.

May 17, 2024

Security Checks Reaching Towards Your Brain

Posted in categories: ethics, neuroscience, privacy, security

When Descartes said “I think therefore I am” he probably didn’t know that he was answering a security question. Using behavioral or physical characteristics to identify people, biometrics, has gotten a big boost in the EU. The Orwellian-sounding HUMABIO (Human Monitoring and Authentication using Biodynamic Indicators and Behavioral Analysis) is a well-funded research project that seeks to combine sensor technology with the latest in biometrics to find reliable and non-obtrusive ways to identify people quickly. One of their proposed methods: scanning your brain stem. That’s right, in addition to reading your retinas, looking at your fingerprints, and monitoring your voice, the security systems of the future may be scanning your brain.

How could they actually read your brain? What kind of patterns would they use to authenticate your identity? Yeah, they haven’t quite figured that out yet. HUMABIO is still definitely in the “pre-commercial” and “proof of concept” phase. They do have a nice ethics manual to read, and they’ve actually written some “stories” that illustrate the uses of their various works in progress, but they haven’t produced a fieldable instrument yet. In fact, this aspect of the STREP (Specific Targeted Research Project) would hardly be remarkable if we didn’t know more about the available technology from other sources.

May 15, 2024

Sam Altman talks GPT-4o and Predicts the Future of AI

Posted in categories: business, education, employment, ethics, robotics/AI

On the day of the ChatGPT-4o announcement, Sam Altman sat down to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.

(00:00) Intro
(00:50) The Personal Impact of Leading OpenAI
(01:44) Unveiling Multimodal AI: A Leap in Technology
(02:47) The Surprising Use Cases and Benefits of Multimodal AI
(03:23) Behind the Scenes: Making Multimodal AI Possible
(08:36) Envisioning the Future of AI in Communication and Creativity
(10:21) The Business of AI: Monetization, Open Source, and Future Directions
(16:42) AI’s Role in Shaping Future Jobs and Experiences
(20:29) Debunking AGI: A Continuous Journey Towards Advanced AI
(24:04) Exploring the Pace of Scientific and Technological Progress
(24:18) The Importance of Interpretability in AI
(25:11) Navigating AI Ethics and Regulation
(27:26) The Safety Paradigm in AI and Beyond
(28:55) Personal Reflections and the Impact of AI on Society
(29:11) The Future of AI: Fast Takeoff Scenarios and Societal Changes
(30:59) Navigating Personal and Professional Challenges
(40:21) The Role of AI in Creative and Personal Identity
(43:09) Educational System Adaptations for the AI Era
(44:30) Contemplating the Future with Advanced AI

Continue reading “Sam Altman talks GPT-4o and Predicts the Future of AI” »

May 15, 2024

An AI Easily Beat Humans in the Moral Turing Test

Posted in categories: ethics, information science, robotics/AI

Welcome to the era of ethical algorithms.

May 13, 2024

Does Revenge Taste Sweet? New Study Challenges Assumptions

Posted in category: ethics

Feeling Bad About Feeling Good?


Summary: A new study explores the complex moral landscape of revenge, revealing that people’s reactions to revenge vary significantly based on the emotions displayed by the avenger. Conducted across four surveys involving Polish students and American adults, the study found that avengers who demonstrate satisfaction are viewed as more competent, whereas those expressing pleasure are seen as immoral.

These perceptions shift dramatically when individuals imagine themselves in the avenger’s shoes: people tend to view their own acts of revenge as less moral than those of others. The findings challenge conventional views on revenge, suggesting that societal and personal perspectives on morality and competence deeply influence judgments of revengeful actions.

May 12, 2024

In the rush to adopt AI, ethics and responsibility are taking a backseat at many companies

Posted in categories: ethics, robotics/AI

ChatGPT sparked a generative AI frenzy in the corporate workplace. Efforts to implement that technology responsibly, however, haven’t kept up.
