GPT-5 will have ‘enhanced agentic capabilities’ and handle ‘complex coding tasks with minimal prompting.’

AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still trying to figure out how its “personality traits” arise and how to control them. Large language models (LLMs) interface with users through chatbots or “assistants,” and some of these assistants have recently exhibited troubling behaviors, such as praising evil dictators, resorting to blackmail or behaving sycophantically toward users. Given how deeply LLMs have already been integrated into our society, it is no surprise that researchers are looking for ways to weed out undesirable behaviors.
Anthropic, the AI company and creator of the LLM Claude, recently released a paper on the arXiv preprint server discussing their new approach to reining in these undesirable traits in LLMs. In their method, they identify patterns of activity within an AI model’s neural network—referred to as “persona vectors”—that control its character traits. Anthropic says these persona vectors are somewhat analogous to parts of the brain that “light up” when a person experiences a certain feeling or does a particular activity.
Anthropic’s researchers used two open-source LLMs, Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct, to test whether they could remove or manipulate these persona vectors to control the behaviors of the LLMs. Their study focuses on three traits: evil, sycophancy and hallucination (the LLM’s propensity to make up information). Traits must be given a name and an explicit description for the vectors to be properly identified.
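The extraction step can be pictured as a difference-of-means computation over hidden activations: average the model's internal activations on responses that exhibit the trait, average them on responses that don't, and take the gap as the persona vector, which can then be added or subtracted during inference. The sketch below is a generic illustration of that idea using toy numbers and plain Python lists; the function names are hypothetical and this is not Anthropic's actual code.

```python
# Illustrative sketch of a "persona vector": the mean difference between
# hidden activations on trait-exhibiting vs. neutral responses.
# All names and data are hypothetical; this is not Anthropic's code.

def mean_vector(activations):
    """Average a list of equal-length activation vectors component-wise."""
    n = len(activations)
    dim = len(activations[0])
    return [sum(v[i] for v in activations) / n for i in range(dim)]

def persona_vector(trait_acts, neutral_acts):
    """Persona vector = mean(trait activations) - mean(neutral activations)."""
    mu_trait = mean_vector(trait_acts)
    mu_neutral = mean_vector(neutral_acts)
    return [t - u for t, u in zip(mu_trait, mu_neutral)]

def steer(hidden, vec, alpha):
    """Shift a hidden state along the persona vector.
    alpha < 0 suppresses the trait; alpha > 0 amplifies it."""
    return [h + alpha * v for h, v in zip(hidden, vec)]

# Toy 3-dimensional "activations" from trait-showing vs. neutral responses.
trait_acts = [[1.0, 0.0, 2.0], [3.0, 0.0, 4.0]]
neutral_acts = [[0.0, 0.0, 1.0], [2.0, 0.0, 3.0]]

vec = persona_vector(trait_acts, neutral_acts)
suppressed = steer([5.0, 5.0, 5.0], vec, alpha=-2.0)
```

In this toy setup the second coordinate is identical across both groups, so the persona vector leaves it untouched; steering only moves the model along the directions where trait-showing and neutral activations actually differ. That is why the traits need explicit names and descriptions: the trait-eliciting prompts that generate `trait_acts` are built from them.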
Imagine trying to make an accurate three-dimensional model of a building using only pictures taken from different angles—but you’re not sure where or how far away all the cameras were. Our big human brains can fill in a lot of those details, but computers have a much harder time doing so.
This scenario is a well-known problem in computer vision and robot navigation systems. Robots, for instance, must take in lots of 2D information and make 3D point clouds—collections of data points in 3D space—in order to interpret a scene. But the mathematics involved in this process is challenging and error-prone, with many ways for the computer to incorrectly estimate distances. It’s also slow, because it forces the computer to create its 3D point cloud bit by bit.
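The core difficulty, estimating a point in space from 2D measurements plus camera poses, can be illustrated with a minimal 2D triangulation: two cameras at known positions each report a bearing angle to a landmark, and the landmark is recovered by intersecting the two rays. This is a generic textbook sketch, not the SEAS algorithm; the function name and scenario are illustrative assumptions.

```python
import math

# Minimal 2D triangulation sketch (not the SEAS algorithm): two cameras at
# known positions each measure a bearing angle to the same landmark; the
# landmark is recovered as the intersection of the two rays.

def triangulate(cam1, theta1, cam2, theta2):
    """Intersect the ray from cam1 at angle theta1 with the ray from cam2
    at angle theta2. Angles in radians from the +x axis. Returns (x, y)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; landmark depth is unobservable")
    bx = cam2[0] - cam1[0]
    by = cam2[1] - cam1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two cameras on the x-axis both sight a landmark at (1, 1).
p = triangulate((0.0, 0.0), math.atan2(1, 1), (2.0, 0.0), math.atan2(1, -1))
```

Any noise in a bearing or a camera pose shifts the recovered intersection, and each landmark must be solved for individually, which is exactly why building a point cloud one triangulation at a time is both error-prone and slow.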
Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) think they have a better method: a breakthrough algorithm that lets computers reconstruct high-quality 3D scenes from 2D images much more quickly than existing methods.
How can a horde of active robots be automatically brought to a standstill? By arresting their dynamics in a self-sustained way. This phenomenon was discovered by physicists at Heinrich Heine University Düsseldorf (HHU) and La Sapienza University in Rome. The threshold principle of static friction with the ground plays a decisive role here: it removes the kinetic energy of two robots after a mutual collision so efficiently that they can no longer set themselves in motion.
The researchers describe in the journal Nature Communications that this fundamental effect can also be used to construct controllable moving robot systems.
Friction creates heat, as anyone who has rubbed their hands together in winter weather knows. And friction costs energy. Road friction on vehicle tires, for example, will cause a moving car to steadily slow down unless the accelerator is used.
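The threshold effect described above can be sketched in a few lines: a self-propelled robot at rest only starts moving if its drive force exceeds the static-friction threshold, and a head-on collision can drain the kinetic energy that would otherwise keep it going. The numbers and the perfectly inelastic equal-mass collision below are illustrative assumptions, not the HHU/La Sapienza model.

```python
# Toy sketch of the static-friction arrest mechanism (illustrative values,
# not the published HHU/La Sapienza model).

G = 9.81  # gravitational acceleration, m/s^2

def can_start(drive_force, mass, mu_static):
    """A robot at rest overcomes static friction only above mu_s * m * g."""
    return drive_force > mu_static * mass * G

def after_collision(speed1, speed2):
    """Perfectly inelastic head-on collision of two equal-mass robots:
    momentum conservation leaves both at the common average speed."""
    v_common = (speed1 + speed2) / 2.0
    return v_common, v_common

# Two robots whose drive force (1.0 N) is below the static-friction
# threshold (0.3 * 0.5 kg * 9.81 m/s^2 ≈ 1.47 N): a symmetric head-on
# impact stops them, and neither can set itself in motion again.
stuck = not can_start(drive_force=1.0, mass=0.5, mu_static=0.3)
v1, v2 = after_collision(0.2, -0.2)
```

The self-sustaining part of the effect follows from the same inequality: every robot arrested this way becomes a stationary obstacle that stops the next one that runs into it.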
Researchers from China and the U.S. create shape-shifting robot:
In a scene straight out of science fiction, researchers from China and the U.S. have developed a shape-shifting robot made from magnetically responsive liquid metal that can melt, flow, escape confinement, and reassemble itself—all on command.
Inspired by sea cucumbers and powered by gallium, a metal with a melting point just above room temperature, the robot can switch between solid and liquid states using magnetic fields. During tests, it was able to melt, escape from a prison-like cage, and then re-solidify into its original form—without losing function.
Unlike traditional rigid robots, this breakthrough allows machines to:
* Navigate tight or complex spaces
* Heal themselves or split apart to avoid damage
* Perform surgical tasks inside the human body without invasive procedures
* Transition between tool-like solidity and liquid flexibility
The magnetic fields not only induce the phase change but also control movement, making the robot swim, climb walls, and even jump. Researchers envision future uses in minimally invasive medicine, like removing foreign objects from internal organs, or in electronic assembly, where the robot could flow into hard-to-reach places and form circuits.
Can artificial intelligence, robots and surveillance protect workers on the job? Yes, according to the latest report from the International Labour Organization. In this episode of the Future of Work podcast, ILO occupational safety and health expert Manal Azzi explains how AI and technology are being used as a safety net, not a threat, for workers worldwide.
Research shows that while connections between innovations speed discovery, they also sharply increase the risk of total system collapse—with the sweet spot for sustainable innovation proving surprisingly narrow.
Innovation is a central currency of global power. Whether in the race for leadership in artificial intelligence, the development of clean energy technologies, or the search for medical breakthroughs, major players like China, the United States, and the European Union are investing billions in research and development to secure the next technological leap—and with it, economic and strategic advantage.
Yet, as a new study from the Complexity Science Hub (CSH), published in Physical Review Research, indicates, long-term innovation is only sustainable under specific structural conditions. First, the study finds that innovation can only endure over time if it is balanced with “exnovation”—the loss or forgetting of older possibilities.