
“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.

In de Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, whose ideas influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology.

A decade after The Human Use of Human Beings, Wiener expanded upon these ideas in a series of lectures at Yale and a philosophy seminar at Royaumont Abbey near Paris, which he reworked into the short, prophetic book God & Golem, Inc. (public library). Published by MIT Press in the final year of his life, it won him the posthumous National Book Award in the newly established category of Science, Philosophy, and Religion the following year.

Possibly a move to freeze and stall the tech, like the bioethics clowns who were able to freeze biotech. But China wouldn’t sign on to any freeze, thankfully. And the tech has already spread across third-world countries.


WASHINGTON, June 6 (Reuters) — Senate Majority Leader Chuck Schumer said on Tuesday he has scheduled three briefings for senators on artificial intelligence, including the first classified briefing on the topic.

In a letter to colleagues on Tuesday, the Democratic leader said senators need to deepen their understanding of artificial intelligence.

“AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement,” Schumer said.

The 2020 Nobel Prize for Chemistry was awarded to Dr. Jennifer Doudna and Dr. Emmanuelle Charpentier for their work on the gene editing technique known as CRISPR-Cas9. This gives us the ability to change the DNA of any living thing, from plants and animals to humans.

The applications are enormous, from improving farming to curing diseases. A decade or so from now, CRISPR will no doubt be taught in high schools and be a basic building block of medicine and agriculture. It is going to change everything.

There are ethical and moral concerns, of course, and we will need regulations to ensure this powerful technology is not abused. But we should focus on the remarkable opportunities CRISPR has opened up for us.

I quoted and responded to this remark:

“…we probably will not solve death and this actually shouldn’t be our goal.” Well, nice as she seems, thank goodness Dr. Levine does not run the scientific community involved in rejuvenation.

The first bridge looks like it’s going to be plasma dilution and this may come to the general population in just a few short years. People who have taken this treatment report things like their arthritis and back pain vanishing.

After that, epigenetic reprogramming to treat the things that kill you in old age. And so on, bridge after bridge. If you have issues with the future, some problem with people living as long as they like, then by all means you have the freedom to grow old and die. That sounds mean, but then I think it’s mean to inform me I have to die because you think we have to because of “progress.” But this idea that living for centuries or longer is some horrible moral crime just holds no water.


Science can’t stop aging, but it may be able to slow our epigenetic clocks.

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, doing so in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine, and it turns out that they also need to know about or at least be sufficiently aware of the intertwining of AI and the law during their illustrious medical careers.

Here’s why.


Is generative AI a blessing or a curse when it comes to medical doctors and the role of medical malpractice lawsuits?

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.

Year 2022


Experiments such as this one cannot be funded with federal research dollars, though they break no U.S. laws. The work was conducted in China, not because it was illegal in the United States, the researchers said, but because the monkey embryos, which are difficult to procure and expensive, were available there. The experiment used a total of 150 embryos, which were obtained without harming the monkeys, “just like in the IVF procedure,” Tan said.

But such experiments, which combine human cells with those of animals, are nevertheless controversial. This work, and other work by Izpisua Belmonte, has moved so rapidly that bioethicists have had trouble keeping up.

“The complicated thing is that we need better models of human disease, but the better those models are, the closer they bring us to the ethical issues we were trying to avoid by not doing experiments in humans,” Farahany said. “Remarkable steps forward require urgent public engagement.”

As generative AI gains traction and companies rush to incorporate it into their operations, concerns have mounted over the ethics of the technology. Deepfake images have circulated online, such as ones showing former President Donald Trump being arrested, and some testers have found that AI chatbots will give advice related to criminal activities, such as tips for how to murder people.

AI is known to sometimes hallucinate — make up information and continuously insist that it’s true — creating fears that it could spread false information. It can also develop bias and in some cases has argued with users. Some scammers have also used AI voice-cloning software in attempts to pose as relatives.

“How do you develop AI systems that are aligned to human values, including morality?” Pichai said. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.”

Thanks to advances in artificial intelligence (AI) chatbots and warnings by prominent AI researchers that we need to pause AI research lest it destroy society, people have been talking a little more about the ethics of artificial intelligence lately.

The topic is not new: Since people first imagined robots, some have tried to come up with ways of stopping them from seeking out the last remains of humanity hiding in a big field of skulls. Perhaps the most famous example of thinking about how to constrain technology so that it doesn’t destroy humanity comes from fiction: Isaac Asimov’s Laws of Robotics.

The laws, explored in Asimov’s works such as the short story “Runaround,” collected in I, Robot, are incorporated into all AI as a safety feature in those works of fiction. They are not, as some on the Internet appear to believe, real laws, nor is there currently a way to implement such laws.