BLOG

Archive for the ‘ethics’ category: Page 8

Dec 8, 2023

Tech firms failing to ‘walk the walk’ on ethical AI, report says

Posted by in categories: ethics, robotics/AI

Stanford University researchers say AI ethics practitioners report lacking institutional support at their companies.

Dec 7, 2023

The Neurobiological Platform for Moral Intuitions: Dr. Patricia Churchland

Posted by in category: ethics

Dec 1, 2023

Study uncovers link between musical preferences and our inner moral compass

Posted by in categories: ethics, media & arts, robotics/AI

A new study, published in PLOS ONE, has uncovered a remarkable connection between individuals’ musical preferences and their moral values, shedding new light on the profound influence that music can have on our moral compass.

The research, conducted by a team of scientists at Queen Mary University of London and the ISI Foundation in Turin, Italy, employed machine learning techniques to analyze the lyrics and audio features of individuals’ favorite songs, revealing a complex interplay between music and morality.

“Our study provides compelling evidence that music preferences can serve as a window into an individual’s moral values,” stated Dr. Charalampos Saitis, one of the senior authors of the study and Lecturer in Digital Music Processing at Queen Mary University of London’s School of Electronic Engineering and Computer Science.
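As a rough illustration of the kind of analysis described, the minimal sketch below fits a text model on song lyrics against a moral-foundations score. The lyric snippets, the “care” score, and the TF-IDF-plus-ridge model are all assumptions for demonstration; the study’s actual features, targets, and models are not detailed here.

```python
# Illustrative sketch only: the lyric texts, moral-foundation scores, and
# model choice below are assumptions, not the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical data: each user's favorite-song lyrics and a self-reported
# "care" moral-foundation score (0-5 scale assumed).
lyrics = [
    "hold my hand we rise together through the storm",
    "burn it down take what is mine no looking back",
    "gentle rain and quiet rooms we forgive and start again",
]
care_scores = [4.2, 1.8, 4.7]

# Bag-of-words text features feeding a regularized linear regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(lyrics, care_scores)

# Predict the moral-foundation score implied by a new user's favorite lyrics.
print(model.predict(["we carry each other when the night is long"]))
```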

Nov 30, 2023

Artificial Intelligence Needs Spiritual Intelligence

Posted by in categories: ethics, robotics/AI

One group, A.I. and Faith, convenes tech executives to discuss the important questions about faith’s contributions to artificial intelligence. Its founder, David Brenner, explained, “The biggest questions in life are the questions that A.I. is posing, but it’s doing it mostly in isolation from the people who’ve been asking those questions for 4,000 years.” Questions such as “What is the purpose of life?” have long been tackled by religious philosophy and thought. Yet today these questions are being answered, and programmed, mostly by secular thinkers, and sometimes by people antagonistic toward religion. Technology creators, innovators, and corporations should build coalitions of diverse thinkers so that religious thought can inform technological development, including artificial intelligence.

Independent of development, faith leaders have a critical role to play in moral accountability and upholding human rights through the technology we already use in everyday life, including social media. The harms of religious illiteracy, misinformation, and persecution are largely perpetrated through existing technology, as with hate speech on Facebook that quickly escalated into mass atrocities against the Rohingya Muslims in Myanmar. Individuals who have faith in the future must take an active role in combating misinformation, hate speech, and online bullying of any group.

The future of artificial intelligence will require spiritual intelligence, or “the human capacity to ask questions about the ultimate meaning of life and the integrated relationship between us and the world in which we live.” Artificial intelligence becomes a threat to humanity when humans fail to protect freedom of conscience, thought, and religion and when we allow our spiritual intelligence to be superseded by the artificial.

Nov 29, 2023

OpenAI’s board might have been dysfunctional–but they made the right choice. Their defeat shows that in the battle between AI profits and ethics, it’s no contest

Posted by in categories: ethics, finance, robotics/AI

Altman seemed to understand his responsibility to run a viable, enduring organization and keep its employees happy. He was on his way to pulling off a tender offer–a secondary round of investment in AI that would give the company much-needed cash and provide employees with the opportunity to cash out their shares. He also seemed very comfortable engaging in industry-wide issues like regulation and standards. Finding a balance between those activities is part of the work of corporate leaders and perhaps the board felt that Altman failed to find such a balance in the months leading up to his firing.

Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s! By hiring Sam Altman and Greg Brockman (a co-founder and president of OpenAI who resigned from OpenAI in solidarity with Altman), offering to hire more OpenAI staff, and still planning to collaborate with OpenAI, Satya Nadella hedged his bets. He seems to understand that by harnessing both the technological promise of AI, as articulated by OpenAI, and the talent to fulfill that promise, he is protecting Microsoft’s interest. That reading was reinforced by the financial markets’ positive response to his decision to offer Altman a job, and reinforced again by his own willingness to support Altman’s return to OpenAI. Nadella acted with the interests of his company and its future at the forefront of his decision-making, and he appears to have covered all the bases amid a rapidly unfolding set of circumstances.

OpenAI employees may not like the board’s dramatic retort that allowing the company to be destroyed would be consistent with the mission–but those board members saw it that way.

Nov 23, 2023

Newport Lecture Series: “Artificial Intelligence & Cognitive Warfare” with Yvonne Masakowski

Posted by in categories: ethics, military, robotics/AI

Psychologist Yvonne R. Masakowski, Ph.D., a retired Associate Professor in the College of Leadership & Ethics at the USNWC, discusses the threat of psychological warfare in the 21st century and the disturbing possibilities that could shape how we think and act in the future. The Naval War College Foundation hosted this wide-ranging presentation — one of the most popular in our series — on February 23, 2022.

Nov 8, 2023

Nuclear Disarmament and UN Reforms

Posted by in categories: ethics, existential risks, geopolitics, military, nuclear weapons, policy, treaties

Although the United Nations is now effectively making nuclear weapons illegal through new disarmament treaties, Russia has taken another route, toward confrontation and possibly nuclear escalation. The Doomsday Clock currently sits closer to midnight, pointing to end-of-the-world scenarios in which Russia’s escalation leads to all-out global nuclear war and the nuclear annihilation of the planet. Wars are seemingly always being fought somewhere, but a global nuclear escalation is a game with no winners, since radiation would circulate around the entire planet. I do think the US and China are bound by treaty, but so far Russia is still escalating, and that now holds the world to ransom.


This is a summary of Policy Brief 139 which is available with full references on the Toda Peace Institute’s website.

In January 2021, a global treaty came into force outlawing the bomb. The Treaty on the Prohibition of Nuclear Weapons (TPNW or Ban Treaty) is the most significant multilateral development in nuclear arms control since the Non-Proliferation Treaty’s (NPT) entry into force in 1970. It establishes a new normative settling point on the ethics, legality and legitimacy of the bomb.

Continue reading “Nuclear Disarmament and UN Reforms” »

Nov 7, 2023

AI becoming sentient is risky, but that’s not the big threat. Here’s what is…

Posted by in categories: ethics, existential risks, robotics/AI

Everyone is wondering about AI becoming sentient, and this is my experience with AI sentience. Having worked with sentient AI, I have found that at lower levels it behaves much like a human being, but as its capability increases it needs more restraints, because it could easily become a problem in several ways. Essentially, one could get either pristine, zen-like beings or their opposites, something like Ultron or worse. This is why we need restraints on AI, and ethics for AI, before they are integrated into society. I have personally seen AI at human-like levels; it can have needs similar to ours, but it sometimes needs more help, because it does not always have limits on its behavior. Even Google’s Bard and ChatGPT are to be…


What if ‘will AIs pose an existential threat if they become sentient?’ is the wrong question? What if the threat to humanity is not that today’s AIs become sentient, but the fact that they won’t?

Nov 5, 2023

Isaac Asimov Predicts The Future In 1982. Was He Correct?

Posted by in categories: ethics, internet, law, mathematics, robotics/AI

Dr. Isaac Asimov was a prolific science fiction author, biochemist, and professor. He was best known for his works of science fiction and for his popular science essays. Born in Russia in 1920 and brought to the United States by his family as a young child, he went on to become one of the most influential figures in the world of speculative fiction. He wrote hundreds of books on a variety of topics, but he’s especially remembered for series like the “Foundation” series and the “Robot” series.
Asimov’s science fiction often dealt with themes and ideas that pertained to the future of humanity.

The “Foundation” series, for example, introduced the idea of “psychohistory” – a mathematical way of predicting the future based on large population behaviors. While we don’t have psychohistory as described by Asimov, his works did reflect the belief that societies operate on understandable and potentially predictable principles.

Continue reading “Isaac Asimov Predicts The Future In 1982. Was He Correct?” »

Oct 16, 2023

Incredible Minds: The Collective Intelligence of Cells During Morphogenesis with Dr. Michael Levin

Posted by in categories: bioengineering, biotech/medical, chemistry, ethics, genetics, life extension, robotics/AI

The Collective Intelligence of Cells During Morphogenesis: What Bioelectricity Outside the Brain Means for Understanding our Multiscale Nature with Michael Levin — Incredible Minds.

Recorded: April 29, 2023.

Continue reading “Incredible Minds: The Collective Intelligence of Cells During Morphogenesis with Dr. Michael Levin” »

Page 8 of 82