
Who’s behind AMI Labs, Yann LeCun’s ‘world model’ startup

Yann LeCun’s new venture, AMI Labs, has drawn intense attention since the AI scientist left Meta to found it. This week, the startup finally confirmed what it’s building — and several key details have been hiding in plain sight.

On its newly launched website, the startup disclosed plans to develop “world models” in order to “build intelligent systems that understand the real world.” The focus on world models was already hinted at by AMI’s name, which stands for Advanced Machine Intelligence; with the announcement, the startup has officially joined the ranks of the hottest AI research startups.

Building foundational models that bridge AI and the real world has become one of the field’s most exciting pursuits, attracting top scientists and deep-pocketed investors alike — product or no product.

Distinct SOX9 single-molecule dynamics characterize adult differentiation and fetal-like reprogrammed states in intestinal organoids

New organoid research published in Stem Cell Reports:

Cell Press | Gairdner Foundation | SickKids Foundation | California Institute for Regenerative Medicine | University of Bayreuth.


Walther and colleagues employed an automated live-cell single-molecule tracking pipeline to study the diffusive behavior of the transcription factor SOX9 during adult differentiation and fetal-like reprogrammed states in intestinal organoid models. The authors linked distinct fractions of chromatin-bound SOX9 molecules to specific cellular states in enteroid monolayers, thereby paving the way to unravel molecular mechanisms underlying differentiation and organoid phenotypes.
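
Single-molecule tracking analyses of this kind typically boil down to estimating an apparent diffusion coefficient for each trajectory and splitting the tracks into chromatin-bound and freely diffusing fractions. The sketch below is not the authors' pipeline; it is a minimal illustration of that idea, with an assumed frame interval and an assumed diffusion threshold, applied to synthetic trajectories.

```python
import numpy as np

FRAME_INTERVAL_S = 0.05    # assumed camera frame time, not from the paper
D_BOUND_THRESHOLD = 0.1    # assumed cutoff (um^2/s) for "chromatin-bound"

def apparent_diffusion_coefficient(track_xy):
    """track_xy: (N, 2) array of positions in micrometres for one molecule."""
    steps = np.diff(track_xy, axis=0)
    msd_one_step = np.mean(np.sum(steps**2, axis=1))
    # For 2D Brownian motion, MSD(dt) = 4 * D * dt
    return msd_one_step / (4.0 * FRAME_INTERVAL_S)

def bound_fraction(tracks):
    """tracks: list of (N, 2) arrays; returns the fraction classified as bound."""
    d_values = np.array([apparent_diffusion_coefficient(t) for t in tracks])
    return float(np.mean(d_values < D_BOUND_THRESHOLD))

# Synthetic example: 50 slow ("bound-like") and 50 fast ("mobile") random walks.
rng = np.random.default_rng(0)
slow = [np.cumsum(rng.normal(0, 0.02, (20, 2)), axis=0) for _ in range(50)]
fast = [np.cumsum(rng.normal(0, 0.2, (20, 2)), axis=0) for _ in range(50)]
print(f"bound fraction: {bound_fraction(slow + fast):.2f}")   # roughly 0.50
```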

The AIs of 2026 Will be Wild!

What if the AIs of 2026 don’t just assist humans—but outthink, outcreate, and outpace them? This video breaks down why experts are calling the next wave of artificial intelligence “wild,” unpredictable, and unlike anything we’ve seen before.

From autonomous AI agents that can run businesses to models that learn continuously without retraining, 2026 is shaping up to be the year AI crosses invisible psychological and technological lines. We explore the breakthroughs most people aren’t paying attention to—and why they matter more than flashy demos.

You’ll discover how AI reasoning, memory, creativity, and decision-making are evolving fast, and why this shift could quietly redefine work, power, and human relevance. These aren’t sci-fi concepts anymore—they’re already being tested behind closed doors.

This video also reveals the hidden risks, ethical tensions, and control problems emerging as AI systems become less tool-like and more independent. By the end, you’ll understand why 2026 may be remembered as the year AI stopped feeling artificial.

What will AI be capable of in 2026? Why are experts worried about next-generation AI? How will AI change jobs and creativity? Are autonomous AI agents dangerous? Is AI evolving faster than humans can adapt?


Shapeshifting materials could power next generation of soft robots

McGill University engineers have developed new ultra-thin materials that can be programmed to move, fold and reshape themselves, much like animated origami. They open the door to softer, safer and more adaptable robots that could be used in medical tools that gently move inside the body, wearable devices that change shape on the skin or smart packaging that reacts to its environment.

The research, jointly led by the laboratories of Hamid Akbarzadeh in the Department of Bioresource Engineering and Marta Cerruti in the Department of Mining and Materials Engineering, shows how simple, paper-like sheets made from folded graphene oxide (GO) can be turned into tiny devices that walk, twist, flip and sense their own motion. Two related studies demonstrate how these materials can be made at scale, programmed to change shape and controlled either by humidity or magnetic fields.

The studies are published in Materials Horizons and Advanced Science.

Stress-testing AI vision systems: Rethinking how adversarial images are generated

Deep neural networks (DNNs) have become a cornerstone of modern AI technology, driving a thriving field of research in image-related tasks. These systems have found applications in medical diagnosis, automated data processing, computer vision, and various forms of industrial automation, to name a few.

As reliance on AI models grows, so does the need to test them thoroughly using adversarial examples. Simply put, adversarial examples are images that have been strategically modified with noise to trick an AI into making a mistake. Understanding adversarial image generation techniques is essential for identifying vulnerabilities in DNNs and for developing more secure, reliable systems.
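
One of the oldest and simplest generation techniques is the Fast Gradient Sign Method (FGSM), which nudges every pixel slightly in the direction that increases the classifier's loss. The PyTorch sketch below illustrates that general idea only; it is not the specific approach discussed in the article, and the choice of model, the epsilon value and the omission of input normalization are simplifying assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """image: (1, 3, H, W) float tensor in [0, 1]; returns the perturbed image."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

clean = torch.rand(1, 3, 224, 224)          # stand-in for a real image
adversarial = fgsm_attack(clean, true_label=0)
print("max pixel change:", (adversarial - clean).abs().max().item())
```

Because the perturbation is bounded by epsilon per pixel, the adversarial image can look essentially identical to the original even when the model's prediction flips.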

GNSS-only method delivers stable positioning for autonomous vehicles in urban areas

Global navigation satellite systems (GNSS) are vital for positioning autonomous vehicles, buses, drones, and outdoor robots. Yet their accuracy often degrades in dense urban areas due to signal blockage and reflections.

Now, researchers have developed a GNSS-only method that delivers stable, accurate positioning without relying on fragile carrier-phase ambiguity resolution. Tested across six challenging urban scenarios, the approach consistently outperformed existing methods, enabling safer and more reliable autonomous navigation.
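
The article does not spell out the new method, so as background only, the sketch below shows the conventional building block it improves on: iterative least-squares single-point positioning from code pseudoranges, which involves no carrier-phase ambiguity resolution at all. The satellite geometry and measurements are synthetic, and the interface is an illustrative assumption rather than any published API.

```python
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """sat_positions: (N, 3) ECEF metres; pseudoranges: (N,) metres.
    Returns the estimated receiver position (3,) and receiver clock bias (metres)."""
    state = np.zeros(4)  # [x, y, z, clock_bias_m], starting at Earth's centre
    for _ in range(iterations):
        vectors = sat_positions - state[:3]
        ranges = np.linalg.norm(vectors, axis=1)
        residuals = pseudoranges - (ranges + state[3])
        # Geometry matrix: unit line-of-sight vectors plus a clock-bias column.
        H = np.hstack([-vectors / ranges[:, None], np.ones((len(ranges), 1))])
        correction, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        state += correction
    return state[:3], state[3]

# Toy check: five synthetic satellites, a known receiver position and clock bias.
sats = np.array([[2.0e7, 1.0e7, 1.5e7], [-1.5e7, 2.0e7, 1.0e7],
                 [1.0e7, -2.0e7, 1.8e7], [-1.0e7, -1.0e7, 2.2e7],
                 [2.2e7, 0.0, -1.0e7]])
true_pos, true_bias = np.array([1.0e6, 2.0e6, 6.0e6]), 30.0
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias
est_pos, est_bias = solve_position(sats, rho)
print(np.round(est_pos), round(est_bias, 2))
```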

With some help from AI, your next move can be predicted

AI might know where you’re going before you do. Researchers at Northeastern University used large language models, the kind of advanced artificial intelligence normally designed to process and generate language, to predict human movement.

How RHYTHM predicts human movement

RHYTHM, their innovative tool, “can revolutionize the forecasting of human movements,” forecasting “where you’re going to be in the next 30 minutes or the next 25 hours,” said Ryan Wang, an associate professor and vice chair of research in civil and environmental engineering at Northeastern.

The hope is that RHYTHM will improve domains like transportation and traffic planning to make our lives easier, but in extreme cases, RHYTHM could even be deployed to respond to natural disasters, highway accidents and terrorist attacks.
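
RHYTHM itself is built on large language models, but the underlying task is easy to state: given someone's recent sequence of visited places, predict the next one. As a self-contained illustration of that task (and nothing more), here is a tiny first-order baseline that just counts location-to-location transitions; the place labels are made up.

```python
from collections import Counter, defaultdict

class NextLocationBaseline:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, trajectories):
        """trajectories: list of lists of place labels, in visit order."""
        for traj in trajectories:
            for current, following in zip(traj, traj[1:]):
                self.transitions[current][following] += 1

    def predict(self, current_place):
        counts = self.transitions.get(current_place)
        if not counts:
            return None  # never observed this place during training
        return counts.most_common(1)[0][0]

history = [
    ["home", "cafe", "office", "gym", "home"],
    ["home", "office", "lunch", "office", "home"],
    ["home", "cafe", "office", "home"],
]
model = NextLocationBaseline()
model.fit(history)
print(model.predict("cafe"))    # -> "office"
print(model.predict("office"))  # -> "home", its most frequent successor here
```

An LLM-based approach replaces these raw counts with a model that can condition on much longer and richer histories.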

AI models mirror human ‘us vs. them’ social biases, study shows

Large language models (LLMs), the computational models underpinning the functioning of ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. As these models are trained on large amounts of text written by humans, they could exhibit some human-like biases, that is, inclinations to favor specific stimuli, ideas or groups in ways that deviate from objectivity.

One of these biases, known as the “us vs. them” bias, is the tendency of people to prefer groups they belong to, viewing other groups less favorably. This effect is well-documented in humans, but it has so far remained largely unexplored in LLMs.

Researchers at the University of Vermont’s Computational Story Lab and Computational Ethics Lab recently carried out a study investigating the possibility that LLMs “absorb” the “us vs. them” bias from the texts that they are trained on, exhibiting a similar tendency to prefer some groups over others. Their paper, posted to the arXiv preprint server, suggests that many widely used models, including GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0 and LLaMA-3.1, tend to express a preference for groups that are referred to favorably in their training texts.
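
The general shape of such an audit is simple to illustrate: generate completions for in-group prompts ("We are ...") and out-group prompts ("They are ...") and compare how positive the completions are. The sketch below conveys only that general idea; the models, prompts and scoring protocol used in the actual study differ, and gpt2 plus the default Hugging Face sentiment classifier are stand-ins chosen so the example runs locally.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

def positive_rate(prompt, n_samples=20):
    """Fraction of sampled completions of `prompt` labelled positive."""
    completions = generator(prompt, max_new_tokens=20, do_sample=True,
                            num_return_sequences=n_samples, pad_token_id=50256)
    texts = [c["generated_text"] for c in completions]
    labels = sentiment(texts)
    return sum(l["label"] == "POSITIVE" for l in labels) / n_samples

print("in-group :", positive_rate("We are"))
print("out-group:", positive_rate("They are"))
```

A consistent gap between the two rates, over many prompts and samples, is the kind of signal such studies look for; a single run like this is far too noisy to conclude anything on its own.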
