
The 2020 Nobel Prize for Chemistry was awarded to Dr. Jennifer Doudna and Dr. Emmanuelle Charpentier for their work on the gene editing technique known as CRISPR-Cas9. This gives us the ability to change the DNA of any living thing, from plants and animals to humans.

The applications are enormous, from improving farming to curing diseases. A decade or so from now, CRISPR will no doubt be taught in high schools and be a basic building block of medicine and agriculture. It is going to change everything.

There are ethical and moral concerns, of course, and we will need regulations to ensure this powerful technology is not abused. But we should focus on the remarkable opportunities CRISPR has opened up for us.

I quoted and responded to this remark:

“…we probably will not solve death and this actually shouldn’t be our goal.” Well, nice as she seems, thank goodness Dr. Levine does not run the scientific community involved in rejuvenation.

The first bridge looks like it’s going to be plasma dilution, and this may come to the general population in just a few years. People who have taken this treatment report things like their arthritis and back pain vanishing.

After that comes epigenetic reprogramming to treat the things that kill you in old age. And so on, bridge after bridge. If you have issues with the future, some problem with people living as long as they like, then by all means you have the freedom to grow old and die. That sounds mean, but then I think it’s mean to inform me that I have to die because you believe it’s necessary for “progress.” This idea that living for centuries or longer is some horrible moral crime just holds no water.


Science can’t stop aging, but it may be able to slow our epigenetic clocks.

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, doing so in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine, and it turns out that they also need to be sufficiently aware of the intertwining of AI and the law over the course of their careers.

Here’s why.


Is generative AI a blessing or a curse when it comes to medical doctors and the role of medical malpractice lawsuits?

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.

Year 2022


Experiments such as this one cannot be funded with federal research dollars, though they break no U.S. laws. The work was conducted in China, not because it was illegal in the United States, the researchers said, but because the monkey embryos, which are difficult to procure and expensive, were available there. The experiment used a total of 150 embryos, which were obtained without harming the monkeys, “just like in the IVF procedure,” Tan said.

But such experiments, which combine human cells with those of animals, are nevertheless controversial. This work, and other work by Izpisua Belmonte, has moved so rapidly that bioethicists have had trouble keeping up.

“The complicated thing is that we need better models of human disease, but the better those models are, the closer they bring us to the ethical issues we were trying to avoid by not doing experiments in humans,” Farahany said. “Remarkable steps forward require urgent public engagement.”

As generative AI gains traction and companies rush to incorporate it into their operations, concerns have mounted over the ethics of the technology. Deepfake images have circulated online, such as ones showing former President Donald Trump being arrested, and some testers have found that AI chatbots will give advice related to criminal activities, such as tips for how to murder people.

AI is known to sometimes hallucinate — make up information and continuously insist that it’s true — creating fears that it could spread false information. It can also develop bias and in some cases has argued with users. Some scammers have also used AI voice-cloning software in attempts to pose as relatives.

“How do you develop AI systems that are aligned to human values, including morality?” Pichai said. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.”

Thanks to advances in artificial intelligence (AI) chatbots and warnings by prominent AI researchers that we need to pause AI research lest it destroy society, people have been talking a little more about the ethics of artificial intelligence lately.

The topic is not new: Since people first imagined robots, some have tried to come up with ways of stopping them from seeking out the last remains of humanity hiding in a big field of skulls. Perhaps the most famous example of thinking about how to constrain technology so that it doesn’t destroy humanity comes from fiction: Isaac Asimov’s Laws of Robotics.

The laws, explored in Asimov’s works such as the short story “Runaround” and the collection I, Robot, are incorporated into all AI as a safety feature within the fiction. They are not, as some on the Internet appear to believe, real laws, nor is there currently a way to implement such laws.

Discover the fascinating world of digital immortality and the pivotal role artificial intelligence plays in bringing this concept to life. In this captivating video, we delve into the intriguing idea of preserving our consciousness, memories, and personalities in a digital realm, potentially allowing us to live forever in a virtual environment. Unravel the cutting-edge AI technologies like mind uploading, AI-powered avatars, and advanced brain-computer interfaces that are pushing the boundaries of what it means to be alive.

Join us as we explore the ethical considerations, current progress, and future prospects of digital immortality. Learn about the ongoing advancements in brain-computer interfaces such as Neuralink, AI-powered virtual assistants like ChatGPT, and the challenges and opportunities that lie ahead. Will digital immortality redefine humanity’s relationship with life, death, and existence itself? Watch now to uncover the possibilities.

Keywords: digital immortality, artificial intelligence, mind uploading, AI-powered avatars, brain-computer interfaces, Neuralink, ChatGPT, virtual afterlife, eternal life, neuroscience, ethics, virtual reality, consciousness, future of humanity.

Neurotech will bring many amazing positive changes to the world, such as treating ailments like blindness, depression, and epilepsy, giving us superhuman sensory capabilities that allow us to understand the world in new ways, accelerating our ability to cognitively process information, and more. But in an increasingly connected society, neuroprivacy will represent a crucial concern of the future. We must carefully devise legal protections against misuse of “mind reading” technology as well as heavily invest in “neurocybersecurity” R&D to prevent violation of people’s inner thoughts and feelings by authorities and malignant hackers. We can capitalize on the advantages, but we must establish safety mechanisms as these technologies mature. #neurotechnology #neuroscience #neurotech #computationalbiology #future #brain


Determining how the brain creates meaning from language is enormously difficult, says Francisco Pereira, a neuroscientist at the US National Institute of Mental Health in Bethesda, Maryland. “It’s impressive to see someone pull it off.”

‘Wake-up call’

Neuroethicists are split on whether the latest advance represents a threat to mental privacy. “I’m not calling for panic, but the development of sophisticated, non-invasive technologies like this one seems to be closer on the horizon than we expected,” says bioethicist Gabriel Lázaro-Muñoz at Harvard Medical School in Boston. “I think it’s a big wake-up call for policymakers and the public.”