
In today’s AI news, believe it or not, AI is alive and well, and it’s clearly going to change a lot of things forever. My personal epiphany happened just the other day, while I was “vibe coding” a personal software project. Those of us who have never written a line of code in our lives but build software and applications with AI tools like Bolt or Lovable are called vibe coders.

Then, Anthropic’s CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly “algorithmic secrets” from the U.S.’s top AI companies, and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its “large-scale industrial espionage” and that AI companies like Anthropic are almost certainly being targeted.

Meanwhile, despite all the hype, very few people have had a chance to use Manus. Currently, under 1% of the users on the wait list have received an invite code. It’s unclear how many people are on that list, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members. MIT Technology Review was able to obtain access to Manus and gave it a test drive.

In videos, Palantir CEO Alexander Karp joins New York Times DealBook creator Andrew Ross Sorkin to discuss the promise and peril of Silicon Valley, tech’s changing relationship with Washington, what it all means for our future, and his new book, The Technological Republic. Named “Best CEO of 2024” by The Economist, Karp is one of the most vital players in Silicon Valley.

Then, Piers Linney, Co-founder of Implement AI, discusses how artificial intelligence and automation can be maximized across businesses on CNBC International Live. Linney says AI poses a threat to the highest income knowledge workers around the world.

Meanwhile, Nate B. Jones is back with some commentary on how OpenAI has launched a new API aimed at helping developers build AI agents, but its strategic impact remains unclear. While enterprises with strong LLM expertise are already using tools like LangChain effectively, smaller teams struggle with agent complexity. Nate says, despite being a high-quality API, it lacks a distinct differentiator beyond OpenAI’s own ecosystem.
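The new API itself isn’t shown in the commentary, but the “agent complexity” Nate describes is easiest to see in code. Below is a minimal sketch of the tool-calling loop that frameworks like LangChain (and now OpenAI’s own agent tooling) wrap for you, written against the long-established Chat Completions tool-calling interface rather than the new API; the `get_weather` tool and its schema are hypothetical placeholders.

```python
# Minimal sketch of a tool-calling agent loop, the pattern that agent
# frameworks abstract away. The get_weather tool is a hypothetical
# placeholder, not part of any real API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_weather(city: str) -> str:
    """Hypothetical local tool; a real agent would call an actual service."""
    return f"Sunny and 22 C in {city}"

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Boston?"}]

# The agent loop: let the model either answer or request a tool call,
# execute the tool, feed the result back, and repeat until it answers.
while True:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
```

Every tool, retry policy, and piece of state a team adds multiplies the branches in a loop like this, which is the complexity smaller teams struggle with.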

We close out with Celestial AI CEO Dave Lazovsky, who outlines how their “Photonic Fabric” technology helps to scale AI as the company raises $250 million in its latest funding round, valuing the company at $2.5 billion. That’s all for today, but AI is moving fast; subscribe.

The X-37B is a reusable robotic space plane operated by the US Space Force. Resembling a miniature space shuttle at just under 9 metres long with a 4.5-metre wingspan, it is an uncrewed vehicle designed for long-duration missions in low Earth orbit.

The craft launches vertically atop a rocket, lands horizontally like a conventional aircraft and serves as a testbed for new technologies and experiments that can be returned to Earth for analysis.

Its development was a collaborative effort between NASA, Boeing, and the US Department of Defense. The vehicle was originally conceived by NASA in the late 1990s to explore reusable spaceplane technologies but transitioned to the US Air Force in 2004 for military purposes.

Researchers at Yale University, Dartmouth College, and the University of Cambridge have developed MindLLM, a subject-agnostic model for decoding functional magnetic resonance imaging (fMRI) signals into text.

By integrating a neuroscience-informed attention mechanism with a large language model (LLM), MindLLM outperforms prior models such as UMBRAE, BrainChat, and UniBrain, with a 12.0% improvement in downstream tasks, a 16.4% increase in unseen-subject generalization, and a 25.0% boost in novel task adaptation.

Decoding fMRI signals into text has significant implications for neuroscience and brain-computer interface applications. Previous attempts have faced challenges in predictive performance, limited task variety, and poor generalization across subjects. Existing approaches often require subject-specific parameters, limiting their ability to generalize across individuals.
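The paper’s exact architecture isn’t reproduced in the article, but a subject-agnostic design of this kind typically avoids per-subject input layers by letting a shared set of learnable queries attend over voxel features. Here is a purely hypothetical PyTorch sketch of that idea; all dimensions and names, and the use of positional/ROI embeddings as the “neuroscience-informed” signal, are assumptions, not MindLLM’s published details.

```python
# Hypothetical sketch of a subject-agnostic fMRI-to-text bridge:
# learnable queries cross-attend over per-voxel features, producing a
# fixed-size "brain prompt" for a frozen LLM. Dimensions and module
# names are illustrative, not the paper's.
import torch
import torch.nn as nn

class BrainEncoder(nn.Module):
    def __init__(self, voxel_dim=64, n_queries=32, llm_dim=4096):
        super().__init__()
        # Learnable query tokens replace subject-specific read-out
        # layers, so the same weights serve every subject.
        self.queries = nn.Parameter(torch.randn(n_queries, llm_dim))
        self.voxel_proj = nn.Linear(voxel_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads=8,
                                          batch_first=True)

    def forward(self, voxel_feats):
        # voxel_feats: (batch, n_voxels, voxel_dim), e.g. signal plus
        # positional/ROI embeddings (the "neuroscience-informed" part).
        kv = self.voxel_proj(voxel_feats)
        q = self.queries.unsqueeze(0).expand(voxel_feats.size(0), -1, -1)
        brain_tokens, _ = self.attn(q, kv, kv)
        return brain_tokens  # prepended to a frozen LLM's text embeddings

encoder = BrainEncoder()
fmri = torch.randn(2, 1024, 64)   # 2 scans, 1024 voxels each
print(encoder(fmri).shape)        # torch.Size([2, 32, 4096])
```

Because the queries and attention weights are shared, a new subject changes only the input features, not the model parameters, which is what unseen-subject generalization measures.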

The Automated Intimate Partner Violence Risk Support System (AIRS) uses clinical history and radiologic data to pinpoint patients seen in the emergency room who may be at risk for intimate partner violence (IPV). Developed over the past five years, AIRS has been rolled out to Brigham and Women’s Hospital’s emergency rooms in Boston as well as surrounding primary care sites. The tool has also been validated at the University of California, San Francisco Medical Center and is being evaluated by the Alameda Health System for its role in clinical workflow.

“Data labeling quality is a huge concern—not just with intimate partner violence care, but in machine learning for healthcare and machine learning, broadly speaking,” says cofounder Irene Chen. “Our hope is that with training, clinicians can be taught how to spot intimate partner violence—we are hoping to find a set of cleaner labels.”
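AIRS itself isn’t shown, but the pattern it follows, a supervised risk score over coded clinical history that flags patients for human review, is standard. Here is a hypothetical sketch with invented features and labels; Chen’s point about label quality applies directly to `y` below.

```python
# Hypothetical sketch of an IPV risk flag over coded clinical history,
# the general pattern AIRS-style tools follow. Features, labels, and
# the threshold are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative features: prior ER visits, fracture-history flags, etc.
X = rng.random((500, 6))
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.3, 500) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Flag patients whose predicted risk exceeds a screening threshold;
# in a clinical workflow this triggers review, not a diagnosis.
risk = model.predict_proba(X_te)[:, 1]
flagged = (risk > 0.5).sum()
print(f"{flagged} of {len(risk)} test patients flagged for review")
```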

In 1989, political scientist Francis Fukuyama predicted we were approaching the end of history. He meant that similar liberal democratic values were taking hold in societies around the world. How wrong could he have been? Democracy today is clearly on the decline. Despots and autocrats are on the rise.

You might, however, be thinking Fukuyama was right all along. But in a different way. Perhaps we really are approaching the end of history. As in, game over humanity.

Now there are many ways it could all end. A global pandemic. A giant meteor (something perhaps the dinosaurs would appreciate). Climate catastrophe. But one end that is increasingly talked about is artificial intelligence (AI). This is one of those potential disasters that, like climate change, appears to have slowly crept up on us but, many people now fear, might soon take us down.

John Smart has taught and written for over 20 years on topics like foresight and futurism as well as the drivers, opportunities, and problems of exponential processes throughout human history. John is President of the Acceleration Studies Foundation, co-founder of the Evo-Devo research community, and CEO of Foresight University. Most recently, Smart is the author of Introduction to Foresight, which in my view is a “one-of-a-kind all-in-one instruction manual, methodological encyclopedia, and daily work bible for both amateur and professional futurists or foresighters.”

During our 2-hour conversation with John Smart, we cover a variety of interesting topics such as the biggest tech changes since our 1st interview; machine vs human sentience; China’s totalitarianism and our new geostrategic global realignment; Citizen’s Diplomacy, propaganda, and the Russo-Ukrainian War; foresight, futurism and grappling with uncertainty; John’s Introduction to Foresight; Alvin Toffler’s 3P model aka the Evo-Devo Classic Foresight Pyramid; why the future is both predicted and created despite our anti-prediction and freedom bias; Moore’s Law and Accelerating Change; densification and dematerialization; definition and timeline to general AI; evolutionary vs developmental dynamics; autopoiesis and practopoiesis; existential threats and whether we live in a child-proof universe; the Transcension Hypothesis.

My favorite quote that I will take away from this interview with John Smart is:

The article presents an equation of state (EoS) for fluid and solid phases using artificial neural networks. This EoS accurately models thermophysical properties and predicts phase transitions, including the critical and triple points. This approach offers a unified way to understand different states of matter.
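The article gives no equations, but a common way to build such a model (an assumption on my part, not necessarily the paper’s formulation) is to learn the Helmholtz free energy a(ρ, T) with a network and recover other properties by automatic differentiation, e.g. pressure via P = ρ² (∂a/∂ρ)_T. A minimal PyTorch sketch:

```python
# Minimal sketch of a neural-network equation of state: learn the
# Helmholtz free energy a(rho, T) and recover pressure by autodiff,
# P = rho**2 * (da/drho) at fixed T. Architecture and data ranges are
# illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn

free_energy = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pressure(rho, T):
    rho = rho.clone().requires_grad_(True)
    a = free_energy(torch.stack([rho, T], dim=-1))
    da_drho, = torch.autograd.grad(a.sum(), rho, create_graph=True)
    return rho**2 * da_drho  # P = rho^2 (da/drho)_T

rho = torch.linspace(0.1, 1.0, 8)
T = torch.full_like(rho, 1.5)
print(pressure(rho, T))  # differentiable, so it can be fit to P data
```

Deriving pressure, and similarly entropy or heat capacity, from a single learned free energy keeps the predicted properties thermodynamically consistent with one another, which is what makes a unified description across states of matter possible.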

Just a few days after the full release of OpenAI’s o1 model, a company staffer is now claiming that the company has achieved artificial general intelligence (AGI).

“In my opinion,” OpenAI employee Vahid Kazemi wrote in a post on X-formerly-Twitter, “we have already achieved AGI and it’s even more clear with O1.”

If you were anticipating a fairly massive caveat, though, you weren’t wrong.