
Deep learning has become an essential part of computer vision, with deep neural networks (DNNs) excelling in predictive performance. However, they often fall short in other critical quality dimensions, such as robustness, calibration, or fairness. While existing studies have focused on a subset of these quality dimensions, none have explored a more general form of “well-behavedness” of DNNs. With this work, we address this gap by simultaneously studying nine different quality dimensions for image classification. Through a large-scale study, we provide a bird’s-eye view by analyzing 326 backbone models and how different training paradigms and model architectures affect the quality dimensions. We reveal several new insights: (i) vision-language models exhibit high fairness on ImageNet-1k classification and strong robustness against domain changes; (ii) self-supervised learning is an effective training paradigm to improve almost all considered quality dimensions; and (iii) the training dataset size is a major driver for most of the quality dimensions. We conclude our study by introducing the QUBA score (Quality Understanding Beyond Accuracy), a novel metric that ranks models across multiple dimensions of quality, enabling tailored recommendations based on specific user needs.
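To make the idea of a multi-dimension quality score concrete, here is a minimal sketch of how such an aggregate could be computed. The abstract does not give the QUBA formula, so the normalization and weighting below (min-max normalization per dimension, then a weighted average reflecting user needs) are assumptions for illustration only, and the model names and numbers are made up.

```python
# Hypothetical sketch of a QUBA-style aggregate score (NOT the paper's
# actual formula): min-max normalize each quality dimension across models,
# then take a weighted average with user-supplied weights.

def quba_score(models, weights):
    """models: {name: {dimension: raw score}}; weights: {dimension: float}."""
    dims = list(weights)
    # Min-max normalize each dimension across all models (higher = better).
    lo = {d: min(m[d] for m in models.values()) for d in dims}
    hi = {d: max(m[d] for m in models.values()) for d in dims}

    def norm(v, d):
        return 0.0 if hi[d] == lo[d] else (v - lo[d]) / (hi[d] - lo[d])

    total_w = sum(weights.values())
    return {
        name: sum(weights[d] * norm(m[d], d) for d in dims) / total_w
        for name, m in models.items()
    }

# Made-up scores: a vision-language model vs. a supervised CNN baseline.
scores = quba_score(
    {
        "vit_clip": {"accuracy": 0.75, "robustness": 0.70, "fairness": 0.80},
        "resnet50": {"accuracy": 0.76, "robustness": 0.55, "fairness": 0.60},
    },
    weights={"accuracy": 1.0, "robustness": 1.0, "fairness": 1.0},
)
best = max(scores, key=scores.get)
```

With equal weights, the slightly less accurate but more robust and fairer model wins the aggregate ranking, which is exactly the kind of trade-off a beyond-accuracy score is meant to surface; a user who cares only about accuracy would simply set the other weights to zero.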

Summary: GPT-4 has demonstrated superiority on various student exams, revealing its potential to support academic learning and improve educational outcomes, particularly in test preparation. With its accessibility and affordability compared to traditional tutoring services, AI tutoring can help address the increasing demand for academic support, especially as universities begin to reinstate standardized testing requirements.

In 2023, OpenAI shook the foundation of the education system by releasing GPT-4. The previous version of ChatGPT had already disrupted K–12 classrooms and beyond by offering a free academic tool capable of writing essays and answering exam questions. Teachers struggled with the idea that widely accessible artificial intelligence (AI) technology could meet the demands of most traditional classroom work and academic skills. GPT-3.5 was far from perfect, though, and lacked creativity, nuance, and reliability. However, reports showed that GPT-4 could score better than 90 percent of participants on the bar exam, the LSAT, the SAT reading, writing, and math sections, and several Advanced Placement (AP) exams. This marked a significant improvement over GPT-3.5, which struggled to score as well as 50 percent of participants.

This marked a major shift in the role of AI, from an easy way out of busy work to a tool that could improve your chances of getting into college. The US Department of Education published a report noting several areas where AI could support teacher instruction and student learning. Among the top examples were intelligent tutoring systems. Early models of these systems showed that an AI tutor could not only recognize when a student was right or wrong in a math problem but also identify the steps the student took and guide them through an explanation of the process.

AI agents need two things to succeed in this space: infinite scalability and the ability to connect agents from different blockchains. Without the former, agents lack infrastructure with sufficient capacity to transact. Without the latter, agents would be stranded on their own island blockchains, unable to truly connect with each other. As agent actions become more complex on-chain, more of their data will also have to live on the ledger, making it important to optimize for both of these factors now.

Because of all of this, I believe the next frontier of AI agents on blockchains is in gaming, where their training in immersive worlds will inevitably lead to more agentic behavior crossing over to non-gaming consumer spaces.

If the future of autonomous consumer AI agents sounds scary, it is because we have not yet had a way to independently verify LLM training models or the actions of AI agents so far. Blockchain provides the necessary transparency and transaction security so that this inevitable phenomenon can operate on safer rails. I believe the final home for these AI agents will be Web3.

March 30, 2025 — At yesterday’s annual meeting of the 2025 Zhongguancun Forum, the Beijing General Artificial Intelligence Research Institute officially released version 2.0 of “Tong Tong,” billed as the world’s first general-purpose intelligent virtual human.

“Tong Tong” is positioned as a virtual human with autonomous learning, cognition, and decision-making capabilities, and is expected to reach the intelligence of a six-year-old within the year.

Novel artificial neurons learn independently and are more closely modeled on their biological counterparts. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them. Their special feature is that the individual artificial neurons learn in a self-organized way, drawing the necessary information from their immediate environment in the network.

The results were published in PNAS (“A general framework for interpretable neural learning based on local information-theoretic goal functions”).

Both the human brain and modern artificial neural networks are extremely powerful. At the lowest level, the neurons work together as rather simple computing units. An artificial neural network typically consists of several layers composed of individual neurons. An input signal passes through these layers and is processed by artificial neurons in order to extract relevant information. However, conventional artificial neurons differ significantly from their biological models in the way they learn.
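The contrast between conventional and local learning can be made concrete with a toy example. Conventional training (backpropagation) routes a global error signal through the whole network, whereas a purely local rule updates a neuron's weights using only the signals at its own synapses. The sketch below uses Oja's rule, a classic local Hebbian-style rule, purely as an illustration of local learning; it is not the information-theoretic goal function of the infomorphic neurons described in the paper.

```python
import random

# Toy illustration of LOCAL learning (Oja's rule), not the paper's actual
# information-theoretic goal function: the update uses only the neuron's
# own input x and output y -- no global error signal is needed.

random.seed(0)


def local_hebbian_step(w, x, lr=0.1):
    """One Oja's-rule update: Hebbian growth plus a decay term that keeps
    the weight vector bounded. Everything used here is local to the neuron."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]


# Synthetic inputs whose first component carries most of the variance.
data = [[random.gauss(0, 1.0), random.gauss(0, 0.1)] for _ in range(500)]
w = [0.5, 0.5]
for x in data:
    w = local_hebbian_step(w, x)

# Oja's rule converges toward the input's leading principal direction, so
# the weight on the high-variance first component comes to dominate.
```

The point of the sketch is the information flow, not the specific rule: each neuron improves using only what it can observe in its immediate environment, which is the property the infomorphic neurons share with their biological counterparts.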

To adapt the existing software to microscopy, the research team first evaluated it on a large set of open-source data, which showed the model’s potential for microscopy segmentation. To improve quality, the team retrained it on a large microscopy dataset. This dramatically improved the model’s performance for the segmentation of cells, nuclei and tiny structures in cells known as organelles.
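Improvements like the one described above are typically quantified by comparing predicted masks against hand-annotated ground truth, most commonly with intersection-over-union (IoU). The following minimal sketch shows that metric on made-up 3×3 binary masks; it is a generic illustration of how segmentation quality can be measured, not the team's evaluation code.

```python
# Intersection-over-union (IoU) between a predicted and a hand-annotated
# binary mask: overlapping foreground pixels divided by the union of
# foreground pixels. Masks below are made-up toy examples.

def iou(pred, truth):
    """IoU of two binary masks given as nested lists of 0/1."""
    inter = sum(p & t for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    union = sum(p | t for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    return inter / union if union else 1.0


truth = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
pred = [[0, 1, 1],
        [0, 1, 0],
        [0, 0, 0]]

score = iou(pred, truth)  # 3 overlapping pixels / 4 in the union = 0.75
```

In practice such scores are averaged over many cells, nuclei, or organelles per image, which is how a retrained model's "dramatically improved" performance would show up as a higher mean IoU on held-out annotated data.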

The team then created their software, μSAM, which enables researchers and medical doctors to analyze images without the need to first manually paint structures or train a specific AI model. The software is already in wide use internationally, for example to analyze nerve cells in the ear as part of a project on hearing restoration, to segment artificial tumor cells for cancer research, or to analyze electron microscopy images of volcanic rocks.

“Analyzing cells or other structures is one of the most challenging tasks for researchers working in microscopy and is an important task for both basic research in biology and medical diagnostics,” says the author.


Identifying and delineating cell structures in microscopy images is crucial for understanding the complex processes of life. This task is called “segmentation” and it enables a range of applications, such as analyzing the reaction of cells to drug treatments or comparing cell structures across different genotypes. Automatic segmentation of these biological structures was already possible, but the dedicated methods only worked under specific conditions, and adapting them to new conditions was costly.

An international research team has now developed a method by retraining the existing AI-based software Segment Anything on over 17,000 microscopy images with over 2 million structures annotated by hand.

Their new model is called Segment Anything for Microscopy and it can precisely segment images of tissues, cells and similar structures in a wide range of settings. To make it available to researchers and medical doctors, they have also created μSAM, a user-friendly software to “segment anything” in microscopy images. Their work was published in Nature Methods.