The AI race is transforming northwestern Nevada into one of the world's largest data-center markets, sparking fears of strain on water supplies in the nation's driest state.
CRISPR vs Aging: What’s Actually Happening Right Now
🧠 VIDEO SUMMARY:
CRISPR gene editing in 2025 is no longer science fiction. From curing rare immune disorders and type 1 diabetes to lowering cholesterol and reversing blindness in mice, breakthroughs are transforming medicine today. With AI accelerating precision tools like base editing and prime editing, CRISPR not only cures diseases but also promises longer, healthier lives and maybe even longevity escape velocity.
0:00 – Intro: First human treated with prime editing
0:35 – The DNA Problem
1:44 – CRISPR 1.0: The Breakthrough
3:19 – AI + CRISPR 2.0 & 3.0
4:47 – Epigenetic Reprogramming
5:54 – From the Lab to the Body
7:28 – Risks, Ethics & Power
8:59 – The 2030 Vision
👇 Don’t forget to check out the first three parts in this series:
Part 1 – “Longevity Escape Velocity: The Race to Beat Aging by 2030”
Part 2 – “Longevity Escape Velocity 2025: Latest Research Uncovered!”
Part 3 – “Longevity Escape Velocity: How AI Is Making Us Immortal by 2030!”
📌 Easy Insight simplifies the future — from longevity breakthroughs to mind-bending AI and quantum revolutions.
Training convolutional neural networks with the Forward–Forward Algorithm
Scodellaro, R., Kulkarni, A., Alves, F. et al. Training convolutional neural networks with the Forward–Forward Algorithm. Sci Rep 15, 38461 (2025). https://doi.org/10.1038/s41598-025-26235-2
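The Forward–Forward algorithm trains each layer with a purely local objective instead of backpropagating errors through the whole network. The PyTorch sketch below shows only that core idea; the layer size, threshold, and synthetic positive/negative batches are illustrative assumptions, and the paper's CNN-specific extensions are not shown.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the Forward-Forward idea (illustrative, not the paper's
# CNN variant): each layer is trained locally so that its "goodness" (sum of
# squared activations) is high for positive (real) inputs and low for
# negative (corrupted) inputs. No errors flow between layers.

torch.manual_seed(0)
layer = torch.nn.Linear(784, 500)      # one layer; deeper nets repeat this per layer
opt = torch.optim.SGD(layer.parameters(), lr=0.03)
threshold = 2.0                        # goodness threshold (assumed value)

def goodness(x):
    return F.relu(layer(x)).pow(2).sum(dim=1)

for step in range(200):
    pos = torch.randn(64, 784) + 1.0   # stand-in for real data
    neg = torch.randn(64, 784) - 1.0   # stand-in for corrupted/negative data
    g_pos, g_neg = goodness(pos), goodness(neg)
    # Logistic loss pushing positive goodness above the threshold, negative below
    loss = F.softplus(torch.cat([threshold - g_pos, g_neg - threshold])).mean()
    opt.zero_grad()
    loss.backward()                    # gradients stay local to this layer
    opt.step()

print(f"goodness: pos {g_pos.mean().item():.1f} vs neg {g_neg.mean().item():.1f}")
```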
Stanford AI Experts Predict What Will Happen in 2026
After years of rapid expansion and billion-dollar bets, 2026 may mark the moment artificial intelligence confronts its actual utility. In their predictions for the year ahead, Stanford faculty across computer science, medicine, law, and economics converge on a striking theme: The era of AI evangelism is giving way to an era of AI evaluation. Whether it’s standardized benchmarks for legal reasoning, real-time dashboards tracking labor displacement, or clinical frameworks for vetting the flood of medical AI startups, the coming year demands rigor over hype. The question is no longer “Can AI do this?” but “How well, at what cost, and for whom?”
Learn more about what Stanford HAI faculty expect in the new year.
New generator uses carbon fiber to turn raindrops into rooftop electricity
A research team affiliated with UNIST has introduced a technology that generates electricity from raindrops striking rooftops, offering a self-powered approach to automated drainage control and flood warning during heavy rainfall.
Led by Professor Young-Bin Park of the Department of Mechanical Engineering at UNIST, the team developed a droplet-based electricity generator (DEG) using carbon fiber-reinforced polymer (CFRP). This device, called the superhydrophobic fiber-reinforced polymer (S-FRP-DEG), converts the impact of falling rain into electrical signals capable of operating stormwater management systems without an external power source. The findings are published in Advanced Functional Materials.
CFRP composites are lightweight yet durable, and their strength and corrosion resistance have made them common in applications ranging from aerospace to construction. These characteristics also make the material well suited for long-term outdoor installation on rooftops and other exposed urban structures.
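To make the self-powered control idea concrete, here is a toy sketch (our illustration, not UNIST's implementation) in which each raindrop impact is read as a voltage pulse and the pulse rate, a proxy for rainfall intensity, gates a drainage or warning action. The 0.5 V pulse threshold and trigger rate are assumed values.

```python
import random

# Toy sketch of self-powered rain sensing (illustrative assumptions throughout):
# each droplet impact on the S-FRP-DEG produces a voltage pulse, so the pulse
# rate approximates rainfall intensity and can gate drainage or flood warnings.

PULSE_THRESHOLD_V = 0.5   # assumed voltage marking a droplet impact
TRIGGER_RATE = 50         # assumed pulses/second signaling heavy rain

def pulses_per_second(samples):
    """Count DEG voltage pulses in one second of sampled output."""
    return sum(1 for v in samples if v > PULSE_THRESHOLD_V)

random.seed(0)
for second in range(5):
    # Synthetic stand-in for one second of DEG output during intensifying rain
    samples = [random.uniform(0.0, 1.5) for _ in range(60 + 20 * second)]
    rate = pulses_per_second(samples)
    action = "open stormwater drain / send warning" if rate > TRIGGER_RATE else "normal"
    print(f"t={second}s: {rate} pulses/s -> {action}")
```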
Reinforcement learning accelerates model-free training of optical AI systems
Optical computing has emerged as a powerful approach for high-speed and energy-efficient information processing. Diffractive optical networks, in particular, enable large-scale parallel computation through the use of passive structured phase masks and the propagation of light. However, one major challenge remains: systems trained in model-based simulations often fail to perform optimally in real experimental settings, where misalignments, noise, and model inaccuracies are difficult to capture.
In a new paper published in Light: Science & Applications, researchers at the University of California, Los Angeles (UCLA) introduce a model-free in situ training framework for diffractive optical processors, driven by proximal policy optimization (PPO), a reinforcement learning algorithm known for stability and sample efficiency. Rather than relying on a digital twin or an approximate physical model, the system learns directly from real optical measurements, optimizing its diffractive features on the hardware itself.
“Instead of trying to simulate complex optical behavior perfectly, we allow the device to learn from experience or experiments,” said Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering at UCLA and the corresponding author of the study. “PPO makes this in situ process fast, stable, and scalable to realistic experimental conditions.”
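In miniature, hardware-in-the-loop training of this kind can be pictured as a policy over phase settings whose only feedback is a measured reward. The sketch below is a toy, not the authors' implementation: a Gaussian policy on phase values is updated with a PPO-style clipped policy-gradient step, and the "measurement" is a synthetic stand-in for reading a task score off real optics. None of the constants come from the paper.

```python
import numpy as np

# Toy, model-free phase optimization with a PPO-style clipped update.
# The "measurement" below is a synthetic stand-in for a real optical readout;
# all sizes and constants are assumptions.

rng = np.random.default_rng(0)
N = 64                                   # phase pixels on one diffractive layer
target = rng.uniform(0, 2 * np.pi, N)    # hidden optimum (plays the role of the optics)

def measure_reward(phases):
    """Stand-in for an optical measurement: higher is better."""
    return -np.mean(1.0 - np.cos(phases - target))

mu = np.zeros(N)          # Gaussian policy mean: the current phase settings
sigma = 0.3               # fixed exploration noise
lr, clip_eps = 0.05, 0.2

for step in range(500):
    # Sample candidate phase patterns and "measure" their rewards
    actions = mu + sigma * rng.standard_normal((16, N))
    rewards = np.array([measure_reward(a) for a in actions])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    old_logp = -0.5 * np.sum(((actions - mu) / sigma) ** 2, axis=1)

    # A few epochs on the same batch with the clipped surrogate objective
    for _ in range(4):
        diff = actions - mu
        logp = -0.5 * np.sum((diff / sigma) ** 2, axis=1)
        ratio = np.exp(logp - old_logp)
        clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps)
        active = np.minimum(ratio * adv, clipped * adv) == ratio * adv
        # Only unclipped samples contribute gradient (the clipped term is constant in mu)
        grad = np.mean((active * ratio * adv)[:, None] * diff / sigma**2, axis=0)
        mu += lr * grad   # gradient ascent on the surrogate

print(f"final measured reward: {measure_reward(mu):.3f}")
```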
Optical system uses diffractive processors to achieve large-scale nonlinear computation
Researchers at the University of California, Los Angeles (UCLA) have developed an optical computing framework that performs large-scale nonlinear computations using linear materials.
Reported in eLight, the study demonstrates that diffractive optical processors (thin, passive material structures composed of phase-only layers) can compute numerous nonlinear functions simultaneously and rapidly, with parallelism and spatial density bounded only by the diffraction limit of light.
Nonlinear operations underpin nearly all modern information-processing tasks, from machine learning and pattern recognition to general-purpose computing. Yet, implementing such operations optically has remained a challenge, as most nonlinear optical effects are weak, power-hungry, or slow.
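One commonly used route to nonlinearity with purely linear optical elements, which may differ from this paper's exact scheme, is to encode the input into the phase of the optical field and read out intensity: every propagation step remains linear in the field, yet the detected output is nonlinear in the encoded input. A small numerical illustration, with the encoding and "propagation" weights assumed for demonstration:

```python
import numpy as np

# Illustrative sketch (assumed encoding, not necessarily the paper's scheme):
# phase-encode a scalar input, pass it through a fixed passive phase mask and
# a linear "propagation" (a weighted sum), then detect intensity. Each optical
# step is linear in the field, yet the output is nonlinear in the input x.

rng = np.random.default_rng(1)
M = 128
mask = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # fixed phase-only layer
weights = rng.standard_normal(M) / np.sqrt(M)      # stand-in for diffraction to a detector

def optical_output(x):
    field = np.exp(1j * x * np.arange(M) / M)      # input written into the field's phase
    field = field * mask                           # linear, element-wise phase modulation
    amplitude = weights @ field                    # linear propagation to one output pixel
    return np.abs(amplitude) ** 2                  # intensity detection: the nonlinearity

for x in np.linspace(-3, 3, 7):
    print(f"x = {x:+.1f} -> I = {optical_output(x):.4f}")   # clearly not affine in x
```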
AI model forecasts speech development in deaf children after cochlear implants
An AI model using deep transfer learning, a technique that adapts networks pretrained on large datasets to a new task, has predicted spoken language outcomes with 92% accuracy one to three years after patients received cochlear implants (implanted electronic hearing devices).
The research is published in the journal JAMA Otolaryngology–Head & Neck Surgery.
Although cochlear implantation is the only effective treatment to improve hearing and enable spoken language for children with severe to profound hearing loss, spoken language development after early implantation is more variable than in children born with typical hearing. If children who are likely to have more difficulty with spoken language can be identified before implantation, intensified therapy can be offered earlier to improve their speech.
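For readers unfamiliar with the technique, here is a minimal deep-transfer-learning sketch in PyTorch. It is our illustration under assumed inputs, not the study's model or data: a network pretrained on natural images is frozen, and only a small new head is trained to predict a binary outcome (for example, higher versus lower spoken language scores).

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal deep transfer learning sketch (illustrative, not the study's pipeline):
# reuse a network pretrained on a large image dataset, freeze its features, and
# train only a new task-specific head on a small clinical-style dataset.

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # downloads weights
for p in backbone.parameters():
    p.requires_grad = False                           # freeze the pretrained extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new 2-class head (trainable)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real pre-implant imaging and outcome labels
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

backbone.train()
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)  # one illustrative training step
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```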