Cool story!
Biohackers push back as the scientific establishment charts a course through the ethics of genetic interference.
Posthumanists and perhaps especially transhumanists tend to downplay the value conflicts that are likely to emerge in the wake of a rapidly changing technoscientific landscape. What follows are six questions and scenarios that are designed to focus thinking by drawing together several tendencies that are not normally related to each other but which nevertheless provide the basis for future value conflicts.
Growing organs in the lab is an enduring sci-fi trope, but as stem cell technology brings it ever closer to reality, scientists are beginning to contemplate the ethics governing disembodied human tissue.
So-called organoids have now been created from gut, kidney, pancreas, liver and even brain tissue. Growing these mini-organs has been made possible by advances in stem cell technology and the development of 3D support matrices that allow cells to develop just like they would in vivo.
Unlike simple tissue cultures, they exhibit important structural and functional properties of organs, and many believe they could dramatically accelerate research into human development and disease.
That’s a relief.
Of all the potentially apocalyptic technologies scientists have come up with in recent years, the gene drive is easily one of the most terrifying. A gene drive is a tool that allows scientists to use genetic engineering to override natural selection during reproduction. In theory, scientists could use it to alter the genetic makeup of an entire species—or even wipe that species out. It’s not hard to imagine how a slip-up in the lab could lead to things going very, very wrong.
But like most great risks, the gene drive also offers incredible reward. Scientists are, for example, exploring how gene drives might be used to wipe out malaria and to kill off Hawaii's invasive species in order to save endangered native birds. Its perils may be horrifying, but its promise is limitless. And environmental groups have been campaigning hard to prevent that promise from ever being realized.
This week at the United Nations Convention on Biological Diversity meeting in Mexico, world governments rejected calls for a global moratorium on gene drives. Groups such as Friends of the Earth and the Council for Responsible Genetics have called gene drives “gene extinction technology,” arguing that scientists “propose to use extinction as a deliberate tool, in direct contradiction to the moral purpose of conservation organizations, which is to protect life on earth.”
Yikes!
If there’s an unavoidable accident in a self-driving car, who dies? This is the question researchers at Massachusetts Institute of Technology (MIT) want you to answer in ‘Moral Machine.’
The simplistic website is sort of like the famed ‘Trolley Problem’ on steroids. If you’re unfamiliar, according to Wikipedia, the Trolley Problem is as follows:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: do nothing, and the trolley kills the five people on the main track; or pull the lever, diverting the trolley onto the side track, where it will kill one person.
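As a purely illustrative sketch (not part of the MIT project), a dilemma like this can be encoded as a set of possible actions and scored by one candidate moral rule; the `Outcome` structure and the casualty-minimizing rule below are assumptions made for the example, not how Moral Machine itself works.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action in a trolley-style dilemma (names are illustrative)."""
    action: str
    deaths: int

def utilitarian_choice(outcomes):
    """Pick the action that minimizes total deaths -- one contested moral rule."""
    return min(outcomes, key=lambda o: o.deaths)

# The classic trolley dilemma from the excerpt above:
dilemma = [
    Outcome(action="do nothing", deaths=5),      # the trolley hits the five people
    Outcome(action="pull the lever", deaths=1),  # the trolley is diverted onto one person
]

print(utilitarian_choice(dilemma).action)  # -> "pull the lever"
```

The point of the site, of course, is that minimizing casualties is only one possible rule; the dilemmas are designed to surface how people actually weigh the trade-offs rather than to hard-code a single answer.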
Nice article raising old concerns and debates on ethics. Synbio, like any technology or science, can in the wrong hands be used for destructive ends. Placing standards and laws on such technologies does keep law-abiding researchers, labs, and companies aligned, and sadly restricted. However, it does not prevent ISIS, the black market, or any other well-funded criminal from trying to meet an intended goal. So I do caution folks to step back, assess, and think before imposing a raft of restrictions and laws on a technology, lest they keep it from helping those in need while criminals ignore ethics and the law anyway.
When artists use synthetic biology, are they playing God, or just playing with cool new toys? Scientists Drew Endy and Christina Agapakis weigh in on the ethics.
IEEE’s new standards for ethically aligned AI are a start. The document focuses heavily on building ethics and morals into AI and on not promoting the development of autonomous AI weapons. However, without government involvement and laws on the books, this set of standards is a feel-good document at best. When it gets into morals, values, and not breaking laws, the standard really must be grounded in social and cultural practices, government, and, most importantly, laws, to ensure it has the buy-in and impact you need. My suggestion to IEEE: please work with government, the tech industry, and the legal system on this one.
More than 100 experts in artificial intelligence and ethics are attempting to advance public discussion surrounding the ethical considerations of AI.
But Westworld is more than just entertainment. It raises problems that society will have to face head-on as technology gets more powerful. Here are a couple of the biggest.
1. Can we treat robots with respect?
Westworld raises a moral question — at what point do we have to treat machines in a responsible manner? We’re used to dropping our smartphones on the ground without remorse and throwing our broken gadgets in the trash. We may have to think differently as machines show more human traits.