
Machine-learning algorithms can now estimate the “brain age” of infants with unprecedented precision by analyzing electrical brain signals recorded using electroencephalography (EEG).

A team led by Sarah Lippé at Université de Montréal’s Department of Psychology has developed a method that can determine in minutes whether a baby’s brain development is advanced, delayed or in line with their chronological age.

This breakthrough promises to enable early screening and personalized monitoring of developmental disorders in babies.

Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work. In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced.

But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations. Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.

The contribution of our new work, published in Nature, is to show that a recent development in computer science, one that yields significant improvements in performance on reinforcement learning problems, may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.

Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
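To make the idea concrete, here is a minimal sketch of temporal-difference value learning in Python, the textbook form of this kind of reward-prediction update. The states, reward values, and learning parameters below are purely illustrative assumptions, not details taken from the Nature paper.

# Minimal sketch of temporal-difference (TD) value learning.
# All names and numbers are illustrative, not taken from the work discussed above.

GAMMA = 0.9   # discount factor: how much future rewards count relative to immediate ones
ALPHA = 0.1   # learning rate: how strongly each prediction error updates the estimate

def td_update(value, state, next_state, reward):
    """Update the predicted total future reward for `state` after one observed step."""
    # Prediction error: (reward received + discounted prediction for the next state)
    # minus what we previously predicted for the current state.
    error = reward + GAMMA * value[next_state] - value[state]
    value[state] += ALPHA * error
    return error

# Toy episode: cue -> study -> exam, with reward only at the final step,
# loosely echoing the "studying leads to better test scores" example above.
value = {"cue": 0.0, "study": 0.0, "exam": 0.0}
for _ in range(200):
    td_update(value, "cue", "study", reward=0.0)
    td_update(value, "study", "exam", reward=1.0)

print(value)  # value["study"] approaches 1.0; value["cue"] approaches GAMMA * value["study"]

Run repeatedly, the update propagates value backwards in time: the state that directly precedes the reward comes to predict it fully, while earlier states predict a discounted version of it, which is how the algorithm reasons many steps into the future.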

Kirigami is a traditional Japanese art form that entails cutting and folding paper to produce complex three-dimensional (3D) structures or objects. Over the past decades, this creative practice has also been applied in the context of physics, engineering, and materials science research to create new materials, devices and even robotic systems.

Researchers at Sichuan University and McGill University recently devised a new approach for the inverse engineering of kirigami, which does not rely on advanced computational tools and numerical algorithms. This new method, outlined in a paper published in Physical Review Letters, could simplify the design of intricate kirigami for a wide range of real-world applications.

“This work is a natural extension of our previous work on kirigami,” Damiano Pasini, senior corresponding author of the paper, told Phys.org.

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle’s gambling habits.

Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
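As a simple illustration of the repeated-random-sampling idea, the short Python sketch below approximates pi by scattering random points in the unit square and counting how many land inside the quarter circle; the function name and sample count are just illustrative choices.

import random

def estimate_pi(num_samples=1_000_000):
    """Monte Carlo estimate of pi from uniform random points in the unit square."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()   # uniform point in [0, 1) x [0, 1)
        if x * x + y * y <= 1.0:                  # falls inside the quarter circle of radius 1
            inside += 1
    # The quarter circle covers pi/4 of the unit square, so scale the hit rate by 4.
    return 4.0 * inside / num_samples

print(estimate_pi())  # approaches 3.14159... as the number of samples grows

The estimate is only approximate, and its error shrinks roughly in proportion to one over the square root of the number of samples, which is why Monte Carlo methods trade exactness for the ability to handle problems that are otherwise intractable.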

Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.

A broad systematic review has revealed that quantum computing applications in health care remain more theoretical than practical, despite growing excitement in the field.

The comprehensive study published in npj Digital Medicine, which analyzed 4,915 research papers published between 2015 and 2024, found little evidence that quantum machine learning (QML) algorithms currently offer any meaningful advantage over classical computing methods for health care applications.

“Despite research claiming quantum benefits for health care, our analysis shows no consistent evidence that quantum algorithms outperform classical methods for clinical decision-making or health service delivery,” said Dr. Riddhi Gupta from the School of Mathematics and Physics and the Queensland Digital Health Center (QDHeC) at the University of Queensland.

A recent study has mathematically clarified how the presence of crystals and gas bubbles in magma affects the propagation of seismic P-waves. The researchers derived a new equation that characterizes the travel of these waves through magma, revealing how the relative proportions of crystals and bubbles influence wave velocity and waveform properties.

The ratio of crystals to bubbles in subterranean magma reservoirs is crucial for forecasting volcanic eruptions. Due to the inaccessibility of direct observations, scientists analyze seismic P-waves recorded at the surface to infer these internal characteristics.

Previous studies have predominantly focused on the influence of gas bubbles, with limited consideration given to crystal content. Moreover, conventional models have primarily addressed variations in wave velocity and amplitude decay, without capturing detailed waveform transformations.

Proteins are among the most studied molecules in biology, yet new research from the University of Göttingen shows they can still hold surprising secrets. Researchers have discovered previously undetected chemical bonds within archived protein structures, revealing an unexpected complexity in protein chemistry.

These newly identified nitrogen-oxygen-sulfur (NOS) linkages broaden our understanding of how proteins respond to oxidative stress, a condition where harmful oxygen-based molecules build up and can damage proteins, DNA, and other essential parts of the cell. The new findings are published in Communications Chemistry.

The research team systematically re-analyzed over 86,000 high-resolution protein structures from the Protein Data Bank, a global public repository of protein structures, using a new algorithm that they developed in-house called SimplifiedBondfinder. This pipeline combines machine learning, quantum mechanical modeling, and structural refinement methods to reveal subtle bonding patterns that were missed by conventional analyses.

The qualia problem of perception simply points out that we perceive the world in terms of subjective qualities rather than numerical quantities. For example, we perceive the color of light in the things we see rather than the frequency or wavelength of light wave vibrations, just as we perceive the quality of the sounds we hear rather than the frequency of sound wave vibrations. Another example is emotional qualities, such as the perception of pleasure and pain, or the emotional qualities that color the bodily feelings we perceive in expressions of fear and desire. There is no possible way to understand the perception of these emotional qualities, just as there is no way to understand the perception of the colors we see or the qualities of the sounds we hear, in terms of the firing rates of neurons in the brain or other nervous systems. The frequency of wave vibrations and the firing rates of neurons are both examples of quantities. The problem is that we do not perceive things in terms of numerical quantities, but rather in terms of subjective qualities.

All our physical theories are formulated in terms of numerical quantities, not in terms of subjective qualities. For example, in ordinary quantum theory or in quantum field theory, we speak of the frequency of light wave vibrations or the wavelength of a light wave in terms of a quantum particle called the photon. A photon or light wave is characterized by the numerical quantities of frequency and wavelength. When we formulate the nature of a light wave or photon in quantum theory in terms of Maxwell’s equations for the electromagnetic field, we can only describe numerical quantities. In ordinary quantum theory and quantum field theory, the electromagnetic field is the quantum wave-function, ψ(x, t), that specifies the quantum probability that the point particle called the photon can be measured at a position x in space at a moment t in time. That quantum probability is specified in terms of the frequency and wavelength that characterize the wave-function for the photon.
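As a rough, textbook-style illustration of the purely quantitative description being referred to here (a schematic sketch, not the author’s specific formulation), a single plane-wave mode and its associated detection probability can be written as:

% Schematic plane-wave mode for a photon, written only in numerical quantities:
% amplitude A, wavelength \lambda (wave number k = 2\pi/\lambda), frequency f (angular frequency \omega = 2\pi f).
\[
  \psi(x, t) = A\, e^{\,i(kx - \omega t)}, \qquad
  k = \frac{2\pi}{\lambda}, \qquad
  \omega = 2\pi f,
\]
% The detection probability is given (up to normalization) by the squared magnitude of the wave-function:
\[
  P(x, t) \propto |\psi(x, t)|^{2}.
\]

Every symbol here (amplitude A, wavelength λ, frequency f) is a numerical quantity; nothing in the formalism refers to a perceived quality such as color.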