
In this episode, Peter answers the hardest questions about AI, Longevity, and our future at an event in El Salvador (Padres y Hijos).

Recorded in February 2025.
Views are my own and are not financial, medical, or legal advice.

Chapters.

00:00 — Navigating Confusion in Leadership and Purpose.
02:00 — The Evolution of Work and Purpose.
03:50 — AI’s Role in Information Credibility.
07:17 — Sustainability and Technology’s Impact on Nature.
09:26 — Building a Future with AI and Longevity.
11:40 — The Economics of Longevity and Accessibility.
15:15 — Reimagining Education for the Future.
19:23 — Overcoming Human Obstacles to Progress.

I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: https://www.diamandis.com/subscribe.


In today’s AI news, OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.” The proposals come in response to a request from the White House, which asked for input on Trump’s AI Action Plan.

In other advancements, one of the bigger players in automation has scooped up a startup in the space in hopes of taking a bigger piece of that market. UiPath, as part of a quarterly results report last night that spelled tougher times ahead, also delivered what it hopes might prove a silver lining: it said it had acquired a startup founded originally in Manchester, England.

And, the restrictive and inconsistent licensing of so-called ‘open’ AI models is creating significant uncertainty, particularly for commercial adoption, Nick Vidal, head of community at the Open Source Initiative, told TechCrunch. While these models are marketed as open, the actual terms impose various legal and practical hurdles that deter businesses from integrating them into their products or services.

Kate Rooney sits down with Garry Tan, Y Combinator president and CEO, at the accelerator. On Inside the Code, Ankit Kumar of Sesame and Anjney Midha of a16z discuss the future of voice AI. What goes into building a truly natural-sounding AI voice? Sesame’s cofounder and CTO, Ankit Kumar, joins a16z’s Anjney Midha for a deep dive into the research and engineering behind their voice technology.

Then, Nate B. Jones explains how AI is making intelligence cheaper, but software strategies built on user lock-in are failing. Historically, SaaS companies relied on retaining users by making it difficult to switch. However, as AI lowers the cost of building and refactoring, users move between tools more freely. The real challenge now is data interoperability—data remains siloed, making AI-generated content and workflows hard to integrate.

We close out with NetworkChuck: AI is getting expensive… but it doesn’t have to be. He found a way to access all the major AI models – ChatGPT, Claude, Gemini, even Grok – without paying for multiple expensive subscriptions. Not only does he get unlimited access to the newest models, but he also has better security, more privacy, and a ton of features. This might be the best way to use AI.

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.

In today’s AI news, ChatGPT just added 100 million users in two months, the fastest cohort adoption in two years, according to Barclays analysts. “As a result, we have increased our forecast for AI adoption in both consumer and enterprise,” they added. OpenAI didn’t respond to a request for comment about what’s been driving this growth spurt, but the Barclays analysts studying its growth suggested several reasons.

And, tech companies have been betting on virtual assistants for more than a decade, to little avail. This new generation of AI was supposed to change things, but the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.

Meanwhile, AI company Sesame has released the base model that powers Maya, the impressively realistic voice assistant. The model, which is 1 billion parameters in size (“parameters” referring to individual components of the model), is under an Apache 2.0 license, meaning it can be used commercially with few restrictions. Called CSM-1B, the model generates “RVQ audio codes” from text and audio inputs.
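For readers wondering what “RVQ audio codes” are, the sketch below illustrates residual vector quantization in general: an embedding is encoded as one code index per stage, with each stage quantizing the residual left over by the previous one. It is a minimal illustration with made-up random codebooks, not Sesame’s CSM-1B code or API.

```python
# Minimal sketch of residual vector quantization (RVQ).
# Illustrative only: random codebooks stand in for learned ones;
# this is NOT the CSM-1B implementation or API.
import numpy as np

rng = np.random.default_rng(0)

dim = 8             # size of one frame embedding (hypothetical)
num_stages = 4      # number of residual codebooks (hypothetical)
codebook_size = 16  # entries per codebook (hypothetical)

codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(num_stages)]

def rvq_encode(frame):
    """Return one code index per stage; each stage quantizes the leftover residual."""
    residual = frame.copy()
    codes = []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest codebook entry
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def rvq_decode(codes):
    """Reconstruct the frame by summing the chosen entry from each codebook."""
    return sum(cb[idx] for cb, idx in zip(codebooks, codes))

frame = rng.normal(size=dim)
codes = rvq_encode(frame)
recon = rvq_decode(codes)
print("codes:", codes, "| reconstruction error:", float(np.linalg.norm(frame - recon)))
```

Each added stage shrinks the reconstruction error, which is why a stack of small codebooks can represent audio frames compactly.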

In a post on an official forum, after producing approximately 750 to 800 lines of code, an AI coding assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work.” In videos, can mislabeled dog paws ruin an AI model? IBM Fellow Martin Keen explains how ground truth data ensures accurate AI predictions by powering supervised learning and training. Explore challenges like ambiguity and skewed data, and learn strategies to improve data labeling for better AI performance.

And, Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities—it demands deep process expertise that doesn’t exist online. He shares how Harvey balances rapid product development with earning trust from law firms through hyper-personalized demos and deep industry expertise. He covers Harvey’s approach to product development—expanding specialized capabilities then collapsing them …

In further experimentation, Alex Ziskind compared running DeepSeek locally in various model sizes and quantizations on Apple Silicon M1, M2, M3, and M4 Max MacBooks. Alex puts them all to the test and explains all the steps.

We close out with Eric Simons, the founder and CEO of StackBlitz, the company behind Bolt—the #1 web-based AI coding agent and one of the fastest-growing products in history. After nearly shutting down, StackBlitz launched Bolt on Twitter and exploded from zero to $40 million ARR and 1 million monthly active users in about five months.

Melanized fungi, although dangerous to human biology, are remarkable because they have adapted to the radiation, which could give more clues to how humans could evolve to survive long-term radiation exposure.


There’s an organism thriving within the Chernobyl disaster zone that is not only enduring some of the harshest living conditions imaginable, but potentially helping to improve them too.

The fallout from the Chernobyl nuclear disaster in 1986 is still fascinating the scientific community nearly 40 years on, with new developments emerging all the time.

The Chernobyl Exclusion Zone in Ukraine has a radiation level of 11.28 millirem, six times the legal limit of human exposure for workers, yet a living organism has adapted to live and thrive there.

This is an in-depth investigation, featuring world-renowned philosophers and scientists, into the most profound philosophical debate of all time: do we have free will?

Featuring: Sean Carroll, Daniel Dennett, Jerry Coyne, Dan Barker, Heather Berlin, Gregg Caruso, Massimo Pigliucci, Alex O’Connor, Coleman Hughes, Edwin Locke, Robert Kane, Rick Messing, Derk Pereboom, Richard Carrier, Trick Slattery, Dustin Kreuger, Steven Sharper, Donia Abouelatta.

Chapters.

Intro — 0:00
Chapter 1: What is Free Will? — 4:19
Chapter 2: The Problem of Free Will — 15:29
Interlude — 22:33
Chapter 3: Libertarian Free Will — 23:16
Chapter 4: Compatibilism — 34:47
Chapter 5: Free Will Skepticism — 45:13
Interlude: The 3 Positions of Free Will — 55:45
Chapter 6: The Great Debate — 57:28
Chapter 7: Neuroscience — 1:07:28
Chapter 7: The Interaction Problem — 1:18:37
Chapter 8: Physics — 1:20:10
Chapter 8: Reduction & Emergence — 1:22:14
Chapter 9: Can We Have Determinism and Free Will? — 1:28:57
Chapter 10: Free Will and the Law — 1:45:57
Chapter 11: Should We Stop Using the Term Free Will? — 1:56:37
Outro — 2:00:38

In today’s AI news, Investor interest in AI coding assistants is exploding. Anysphere, the developer of AI-powered coding assistant Cursor, is in talks with venture capitalists to raise capital at a valuation of nearly $10 billion, Bloomberg reported. The round, if it transpires, would come about three months after Anysphere completed its previous fundraise of $100 million at a pre-money valuation of $2.5 billion.

And, there’s a new voice model in town, and it’s called Sesame. As he so often does, John Werner got a lot of information on this new technology from Nathaniel Whittemore at AI Daily Brief, where he covered interest in this conversational AI. Quoting Deedy Das of Menlo Ventures calling Sesame “the GPT-3 moment for voice,” Whittemore talked about what he called an “incredible explosion” of voice-based models happening now.

In other advancements, along with the new M4 MacBook Pro series Apple is releasing, the company is also quite proud of the new Mac mini. The Mac mini is arguably the more radical of the two. Apple’s diminutive computer has now received its first major design overhaul in 13 years. And this new tiny computer is the perfect machine for experimenting with and learning AI.

Shield AI, the San Diego defense tech startup that builds drones and other AI-powered military systems, has raised a $240 million round at a $5.3 billion valuation, it announced today, making it one of the biggest defense tech startups by valuation. In videos, while he hardly needs an introduction, few leaders have shaped the future of technology quite like Satya Nadella. He stepped into Microsoft’s top job at a catalytic moment—making bold bets on the cloud, embedding AI into the fabric of computing, all while staying true to Microsoft’s vision of becoming a “software factory.”

It doesn’t just think, it delivers results: Manus excels at various tasks in work and life, getting everything done while you rest. Then, join Boris Starkov and Anton Pidkuiko, the developers behind GibberLink, for a fireside chat with Luke Harries from ElevenLabs. On February 24, Georgi Gerganov, the creator of the ggwave protocol, showcased their demo at the ElevenLabs London hackathon on X, garnering attention from around the world—including Forbes, TechCrunch, and the entire developer community.

We close out with Sam Witteveen looking at the latest release from Mistral AI: their Mistral OCR model. He looks at how it works and how it compares to other models, as well as how you can get started using it with code.

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.

“The Future Already Happened”
What if the past isn’t fixed? Scientists have just proven that the future can influence the past, shattering everything we thought we knew about time and reality. From mind-bending quantum experiments to the shocking science of precognition, this video explores the hidden connections between time, consciousness, and the universe.

Time Stamps:

0:00 — Mind-Blowing Experiments.
1:43 — Presentiment.
2:26 — Precognition.
5:12 — J.W. Dunne’s Precognitive Dream Protocol.
7:33 — Feeling The Future.
10:00 — Remote Viewing.
12:18 — Free Will & Retrocausality.
14:43 — Lucid Dreaming.


Over the weekend, Elon Musk provided a live demonstration of Neuralink’s technology using pigs with surgically implanted brain-monitoring devices. The Australian Society for Computers & Law invited Dr Michelle Sharpe (Victorian Barrister) and Dr Allan McCay (Lecturer and Author on Neurotechnology and the Law) to explore the legal and ethical implications of technology that interfaces between the human brain and computer devices.

Abstract: Hallucination is a persistent challenge in large language models (LLMs), where even with rigorous quality control, models often generate distorted facts. This paradox, in which error generation continues despite high-quality training data, calls for a deeper understanding of the underlying LLM mechanisms. To address it, we propose a novel concept: knowledge overshadowing, where a model’s dominant knowledge can obscure less prominent knowledge during text generation, causing the model to fabricate inaccurate details. Building on this idea, we introduce a novel framework to quantify factual hallucinations by modeling knowledge overshadowing. Central to our approach is the log-linear law, which predicts that the rate of factual hallucination increases linearly with the logarithmic scale of Knowledge Popularity, Knowledge Length, and Model Size. The law provides a means to preemptively quantify hallucinations, offering foresight into their occurrence even before model training or inference. Building on the overshadowing effect, we propose a new decoding strategy, CoDa, to mitigate hallucinations, which notably enhances model factuality on Overshadow (27.9%), MemoTrap (13.1%), and NQ-Swap (18.3%). Our findings not only deepen our understanding of the underlying mechanisms behind hallucinations but also provide actionable insights for developing more predictable and controllable language models.
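As a rough illustration of what a log-linear law of this kind implies, the sketch below computes a predicted hallucination rate that grows linearly in the logs of popularity, length, and model size. The functional form and the coefficients are hypothetical placeholders for illustration, not values taken from the paper.

```python
# Toy illustration of a log-linear hallucination law:
# rate ~ a*log(popularity) + b*log(length) + c*log(model_size) + bias.
# Coefficients are hypothetical placeholders, not from the paper.
import math

def predicted_hallucination_rate(popularity, length, model_size,
                                 a=0.02, b=0.03, c=0.01, bias=0.05):
    rate = (a * math.log(popularity)
            + b * math.log(length)
            + c * math.log(model_size)
            + bias)
    return min(max(rate, 0.0), 1.0)  # clamp to a valid probability

# The predicted rate rises as each factor grows on a log scale.
print(predicted_hallucination_rate(popularity=10,   length=20,  model_size=1e8))
print(predicted_hallucination_rate(popularity=1000, length=200, model_size=7e9))
```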

From: Yuji Zhang.

Have you ever questioned the deep nature of time? While some physicists argue that time is just an illusion, dismissing it outright contradicts our lived experience. In my latest work, Temporal Mechanics: D-Theory as a Critical Upgrade to Our Understanding of the Nature of Time (2025), I explore how time is deeply rooted in the computational nature of reality and information processing by conscious systems. This paper tackles why the “now” is all we have.

In the absence of observers, the cosmic arrow of time doesn’t exist. This statement is not merely philosophical; it is a profound implication of the problem of time in physics. In standard quantum mechanics, time is an external parameter, a backdrop against which events unfold. However, in quantum gravity and the Wheeler-DeWitt equation, the problem of time emerges because there is no preferred universal time variable—only a timeless wavefunction of the universe. The flow of time, as we experience it, arises not from any fundamental law but from the interaction between observers and the informational structure of reality.
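For readers unfamiliar with the “problem of time” invoked here, the standard textbook contrast can be written in two lines; these are the usual forms, not equations taken from the work above.

```latex
% Ordinary quantum mechanics: time t enters as an external parameter.
i\hbar \, \frac{\partial}{\partial t} \, \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle

% Canonical quantum gravity: the Wheeler-DeWitt constraint contains no time
% parameter at all, giving a "timeless" wavefunction of the universe.
\hat{H} \, \Psi = 0
```

Because the second equation contains no t, any experienced flow of time must be reconstructed from correlations between subsystems (for example, an observer and the rest of the universe), which is the point the paragraph above is making.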