
The Carboncopies Foundation is starting The Brain Emulation Challenge.


With the availability of high-throughput electron microscopy (EM), expansion microscopy (ExM), calcium and voltage imaging, co-registered combinations of these techniques, and further advancements, high-resolution data sets that span multiple brain regions or entire small-animal brains, such as that of the fruit fly Drosophila melanogaster, may now offer inroads to expansive neuronal circuit analysis. Results of such analysis represent a paradigm change in the conduct of neuroscience.

So far, almost all investigations in neuroscience have relied on correlational studies, in which a modicum of insight gleaned from observational data leads to the formulation of mechanistic hypotheses, corresponding computational modeling, and predictions made using those models, so that experimental testing of the predictions offers support or modification of hypotheses. These are indirect methods for the study of a black box system of highly complex internal structure, methods that have received published critique as being unlikely to lead to a full understanding of brain function (Jonas and Kording, 2017).

Large scale, high resolution reconstruction of brain circuitry may instead lead to mechanistic explanations and predictions of cognitive function with meaningful descriptions of representations and their transformation along the full trajectory of stages in neural processing. Insights that come from circuit reconstructions of this kind, a reverse engineering of cognitive processes, will lead to valuable advances in neuroprosthetic medicine, understanding of the causes and effects of neurodegenerative disease, possible implementations of similar processes in artificial intelligence, and in-silico emulations of brain function, known as whole-brain emulation (WBE).

Only weeks after Figure.ai announced ending its collaboration deal with OpenAI, the Silicon Valley startup has announced Helix – a commercial-ready, AI “hive-mind” humanoid robot that can do almost anything you tell it to.

Figure has made headlines in the past with its Figure 01 humanoid robot. The company is now on version 2 of its premier robot, and it has received more than just a few design changes: it has been given an entirely new AI brain called Helix, a VLA.

It’s not just any ordinary AI either. Helix is the very first of its kind to be put into a humanoid robot. It’s a generalist Vision-Language-Action model. The keyword being “generalist.” It can see the world around it, understand natural language, interact with the real world, and it can learn anything.
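The idea of a generalist Vision-Language-Action model can be summarized as a single policy that maps camera input plus a natural-language instruction to motor commands at every control step. The sketch below is purely conceptual; the class and method names are illustrative and are not Figure's actual Helix API.

```python
# Conceptual sketch of a Vision-Language-Action (VLA) control loop.
# All names here are hypothetical placeholders, not Figure's Helix interface.
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    image: List[float]   # flattened camera pixels (placeholder)
    instruction: str     # natural-language command


@dataclass
class Action:
    joint_targets: List[float]  # desired joint positions


class ToyVLAPolicy:
    """Maps (vision, language) observations to motor actions each step."""

    def act(self, obs: Observation) -> Action:
        # A real VLA model evaluates a learned neural network here;
        # this stub just emits a fixed-dimension zero action.
        return Action(joint_targets=[0.0] * 7)


policy = ToyVLAPolicy()
obs = Observation(image=[0.0] * 16, instruction="pick up the cup")
action = policy.act(obs)
```

The "generalist" property comes from training one such policy across many tasks and objects, rather than scripting a separate controller per behavior.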

He emphasized that enhancing intelligence models is key to Alibaba’s long-term vision as it shifts towards AI technologies.

This aligns with Alibaba’s declaration as an AI-driven company.

While e-commerce remains central, Alibaba’s cloud services saw strong growth, with revenue rising 13% last quarter. AI-related products within the cloud division posted triple-digit growth.

Microsoft introduces the first World and Human Action Model (WHAM). The WHAM, which the team has named “Muse,” is a generative AI model of a video game that can generate game visuals, controller actions, or both.


Today Nature published Microsoft’s research detailing our WHAM, an AI model that generates video game visuals & controller actions. We are releasing the model weights, sample data, & WHAM Demonstrator on Azure AI Foundry, enabling researchers to build on the work.

Very excellent.


Arc Institute researchers have developed a machine learning model called Evo 2 that is trained on the DNA of over 100,000 species across the entire tree of life. Its deep understanding of biological code means that Evo 2 can identify patterns in gene sequences across disparate organisms that experimental researchers would need years to uncover. The model can accurately identify disease-causing mutations in human genes and is capable of designing new genomes that are as long as the genomes of simple bacteria.
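One common way a genomic language model of this kind can flag disease-causing mutations is likelihood scoring: compare how probable the model finds the reference sequence versus the mutated sequence, and treat a large drop in likelihood as a signal of a potentially deleterious variant. The sketch below illustrates that general idea with a toy unigram model; it is an assumption-laden stand-in, not Evo 2 itself or its API.

```python
# Hedged sketch of likelihood-based variant-effect scoring, the general idea
# behind using a genomic language model to prioritize mutations.
# The "model" here is a toy unigram distribution over A/C/G/T, NOT Evo 2.
import math
from collections import Counter
from typing import Dict, List


def train_unigram(seqs: List[str]) -> Dict[str, float]:
    """Estimate per-base probabilities from training sequences."""
    counts = Counter("".join(seqs))
    total = sum(counts.values())
    return {base: counts[base] / total for base in "ACGT"}


def log_likelihood(seq: str, probs: Dict[str, float]) -> float:
    """Sum of log-probabilities of each base under the toy model."""
    return sum(math.log(probs.get(base, 1e-9)) for base in seq)


def variant_effect(ref_seq: str, alt_seq: str, probs: Dict[str, float]) -> float:
    # A negative delta means the mutated sequence is less likely under the
    # model -- the kind of signal used to rank potentially harmful variants.
    return log_likelihood(alt_seq, probs) - log_likelihood(ref_seq, probs)


probs = train_unigram(["ACGTACGTACGG", "ACGACGGTTACG"])
# Single substitution A->T at position 4 of the reference sequence.
delta = variant_effect("ACGTACGT", "ACGTTCGT", probs)
```

A real genomic model replaces the unigram distribution with a deep sequence model whose per-position probabilities capture long-range context, which is what lets it separate benign from pathogenic substitutions.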

Evo 2’s developers—made up of scientists from Arc Institute and NVIDIA, convening collaborators across Stanford University, UC Berkeley, and UC San Francisco—will post details about the model as a preprint on February 19, 2025, accompanied by a user-friendly interface called Evo Designer. The Evo 2 code is publicly accessible from Arc’s GitHub, and is also integrated into the NVIDIA BioNeMo framework, as part of a collaboration between Arc Institute and NVIDIA to accelerate scientific research. Arc Institute also worked with AI research lab Goodfire to develop a mechanistic interpretability visualizer that uncovers the key biological features and patterns the model learns to recognize in genomic sequences. The Evo team is sharing its training data, training and inference code, and model weights to release the largest-scale, fully open source AI model to date.

Building on its predecessor Evo 1, which was trained entirely on single-cell genomes, Evo 2 is the largest artificial intelligence model in biology to date, trained on over 9.3 trillion nucleotides—the building blocks that make up DNA or RNA—from over 128,000 whole genomes as well as metagenomic data. In addition to an expanded collection of bacterial, archaeal, and phage genomes, Evo 2 includes information from humans, plants, and other single-celled and multicellular species in the eukaryotic domain of life.

Hugo de Garis: before long, machines will become vastly more intelligent than humans…either accept that humans will become the second most intelligent species or impose a global ban. He will speak at Future Day.


Hugo de Garis believes that too many commentators on AI are avoiding the fundamental issue: before long, machines will become vastly more intelligent than humans—potentially trillions of trillions of times more, or even beyond that. Humanity will soon face a critical decision: either accept that humans will become the second most intelligent species or impose a global ban on the creation of artilects (artificial intellects).

In today’s AI news, in a social media post, DeepSeek said the daily releases it is planning for its Open Source Week would provide visibility into “these humble building blocks in our online service” that have been documented, deployed, and battle-tested in production. The post added: “As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey.”

In other advancements, Together AI, an AI cloud platform that enables companies to train and deploy artificial intelligence models, has raised $305 million in Series B funding in a round led by General Catalyst, more than doubling its valuation to $3.3 billion from $1.25 billion last March. The funding comes amid growing demand for computing power to run advanced open-source models.

In personal and professional development, if you’re curious about how to integrate AI smartly into your business, here are some friendly tips to get you started while keeping things safe and effective. The key is strategic integration with safeguards in place: use AI’s strengths without losing your own.

Then, search startup Genspark has raised $100 million in a series A funding round, valuing the startup at $530 million, according to a source familiar with the matter, as the race to use artificial intelligence to disrupt Google’s stranglehold on the search engine market heats up. The Palo Alto-based company currently has over 2 million monthly active users, and the round was led by a group of U.S. and Singapore-based investors.

…what it’s like to compete with Google, and what the future of search could look like. Then, as AI scales from the cloud to the very edges of our devices, the potential for transformative innovation grows exponentially. In this Imagination In Action session at Davos, Daniel Newman, CEO of The Futurum Group, moderates an expert panel that includes: Åsa Tamsons, Executive VP, Ericsson; Gill Pratt, CEO, Toyota Research, and Chief Scientist, Toyota; Kinuko Masaki, CEO, VoiceBrain; Cyril Perducat, CTO, Rockwell Automation; and Alexander Amini, CSO, Liquid AI.