We’ve been chatting with the winners of VentureBeat’s Women in AI awards. Here are the conversations, covering ethics, regulation, and more.
What transpires in comedies and cartoons when a character has a devil on one shoulder and an angel on the other is not far off from people’s perceptions of the real world, finds a new study from the University of Waterloo.
The trope is intended to illustrate a character’s decision-making dilemma to comedic effect, and the moral character and motives of the supernatural beings are obvious. People have similar expectations when it comes to individuals they see as good or bad.
The researchers explored expectations about how good and evil individuals respond to requests. In particular, they wanted to understand why movies and folktales often depict devils and demons as eager to grant accidental requests, whereas angels are not depicted this way.
The late 21st century belongs to Superhumans. Technological progress in the field of medicine through gene-editing tools like CRISPR is going to revolutionize what it means to be human. The age of Superhumans is portrayed in many science fiction movies, but for the first time in our species’ history, radically altering our genome is going to be possible through the methods and tools of science.
The gene-editing tool CRISPR, short for clustered regularly interspaced short palindromic repeats, could help us to reprogram life. It gives scientists more power and precision than they have ever had to alter human DNA.
Genetic engineering holds great promise for the future of humanity. A growing number of scientists, including David Sinclair, believe that we will soon be able to engineer and change our genes in ways that help us live longer, healthier lives.
But how much should we really tinker with our own nature? What is the moral responsibility of scientists and humans towards future generations?
A thought-provoking new article poses some hugely important scientific questions: Could brain cells initiated and grown in a lab become sentient? What would that look like, and how could scientists test for it? And would a sentient, lab-grown brain “organoid” have some kind of rights? Buckle up for a quick and dirty history of the ethics of sentience. We associate the term with computing and artificial intelligence, but the question of who (or what) is or isn’t “sentient” and deserving of rights and moral consideration goes back to the very beginning of the human experience. The debate colors everything from ethical consumption of meat to many episodes of Black Mirror.
Well, we don’t want that… or do we?
My story centers on the concept of a genetically modified virus (named) which infects the brain and gives people enhanced empathy. The narrative takes place in a fictional Middle Eastern city called Fakhoury and explores bioethical themes. Love acts as a central motif that ties the story together. Note that this piece will be available online for a limited time, after which you will need to pay for the magazine. I encourage you to check out my story!
Read Philosophy Ethics Short Stories with your friends, family, book club, and students. Each story comes with suggested discussion questions.
AI ethics is about more than just bias. That’s why Red Hat’s Noelle Silver is dedicated to spreading AI literacy.
Since 1988 and the formation of the Posthuman Movement, articles by early adopters like Max More were a sign our message was being received — although I always argued on various Extropian & Transhuman bulletin boards & Yahoo groups &c that “Trans” was a redundant middle and we should move straight to Posthuman, now armed with the new MVT knowledge (also figures on the CDR). There will be a new edition of World Philosophy, the first this millennium, to coincide with various Posthuman University events later this year. Here is the text:
THE EXTROPIAN PRINCIPLES V. 2.01, August 7, 1992.
Max More, Executive Director, Extropy Institute.
1. BOUNDLESS EXPANSION — Seeking more intelligence, wisdom, and personal power, an unlimited lifespan, and removal of natural, social, biological, and psychological limits to self-actualization and self-realization. Overcoming limits on our personal and social progress and possibilities. Expansion into the universe and infinite existence.
2. SELF-TRANSFORMATION — A commitment to continual moral…
Jeff Dean’s appearance at TED comes at a time when critics, including current Google employees, are calling for greater scrutiny of big tech’s control over the world’s AI systems. Among those critics was one who spoke right after Dean at TED. Coder Xiaowei R. Wang, creative director of the indie tech magazine Logic, argued for community-led innovations. “Within AI there is only a case for optimism if people and communities can make the case themselves, instead of people like Jeff Dean and companies like Google making the case for them, while shutting down the communities [that] AI for Good is supposed to help,” she said. (AI for Good is a movement that seeks to orient machine learning toward solving the world’s most pressing social equity problems.)
TED curator Chris Anderson and Greg Brockman, co-founder of the AI research lab OpenAI, also wrestled with the unintended consequences of powerful machine learning systems at the end of the conference. Brockman described a scenario in which humans serve as moral guides to AI. “We can teach the system the values we want, as we would a child,” he said. “It’s an important but subtle point. I think you do need the system to learn a model of the world. If you’re teaching a child, they need to learn what good and bad is.”
There is also room for some gatekeeping once the machines have been taught, Anderson suggested. “One of the key issues to keeping this thing on track is to very carefully pick the people who look at the output of these unsupervised learning systems,” he said.
A New Yorker review of “Roadrunner,” a documentary about the late celebrity chef Anthony Bourdain by the Oscar-winning filmmaker Morgan Neville, reveals that a peculiar method was used to create a voiceover of an email written by Bourdain. In addition to using clips of Bourdain’s voice from various media appearances, the filmmaker says he had an “A.I. model” of Bourdain’s voice created in order to complete the effect of Bourdain ‘reading’ from his own email in the film. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville told the reviewer, Helen Rosner. “We can have a documentary-ethics panel about it later.”
On Twitter, some media observers decided to start the panel right away.
“This is unsettling,” tweeted Mark Berman, a reporter at the Washington Post, while ProPublica reporter and media manipulation expert Craig Silverman tweeted, “this is not okay, especially if you don’t disclose to viewers when the AI is talking.” Indeed, “The ‘ethics panel’ is supposed to happen BEFORE they release the project,” tweeted David Friend, entertainment reporter at The Canadian Press.
Tests could show the probability of illnesses occurring in future years, with huge moral and ethical implications, says immunology professor Daniel M. Davis.