Enter AI. Multiple deep learning methods can already accurately predict protein structures, a breakthrough half a century in the making. Subsequent studies using increasingly powerful algorithms have hallucinated protein structures untethered by the forces of evolution.

Yet these AI-generated structures have a drawback: although highly intricate, most are completely static, essentially a sort of digital protein sculpture frozen in time.

A new study in Science this month broke the mold by adding flexibility to designer proteins. The new structures aren’t limitless contortionists. However, the designer proteins can stabilize into two different forms—think a hinge in either an open or closed configuration—depending on an external biological “lock.” Each state is analogous to a computer’s “0” or “1,” which subsequently controls the cell’s output.
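
The two-state idea above can be pictured as a one-bit switch. The sketch below is purely an illustrative analogy, not the study's model; the names `Hinge` and `conformation` are invented for this example.

```python
from enum import Enum

class Hinge(Enum):
    """Two stable conformations of the designed protein, analogous to a bit."""
    CLOSED = 0  # the "0" state
    OPEN = 1    # the "1" state

def conformation(lock_engaged: bool) -> Hinge:
    # Toy analogy only: the external biological "lock" selects which of the
    # two stable states the designed protein settles into.
    return Hinge.CLOSED if lock_engaged else Hinge.OPEN
```

In the study's framing, downstream cellular output then depends on which of the two states the protein occupies, just as a circuit's behavior depends on a bit's value.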

Researchers from The University of Queensland applied an algorithm from a video game to study the dynamics of molecules in living brain cells.

Dr. Tristan Wallis and Professor Frederic Meunier from UQ’s Queensland Brain Institute came up with the idea while in lockdown during the COVID-19 pandemic.

Brain age was estimated using an algorithm that combined multiple measures of brain structure obtained through MRI scans when the participants were 45 years old. This algorithm quantified the difference between estimated brain age and the participants’ chronological age, referred to as brain age gap estimate.

If the estimated brain age is higher than the chronological age, it suggests that the brain’s structural characteristics are more similar to those of an older individual. Conversely, if the estimated brain age is lower than the chronological age, the brain’s structural characteristics resemble those of a younger individual.
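
The brain age gap estimate described above is a simple difference. A minimal sketch, with a hypothetical estimated brain age for illustration (the function name is invented, not from the study):

```python
def brain_age_gap(estimated_brain_age: float, chronological_age: float) -> float:
    """Brain age gap estimate: positive values mean the brain's structure
    looks older than the participant's actual age; negative values, younger."""
    return estimated_brain_age - chronological_age

# All participants in the study were scanned at age 45.
gap = brain_age_gap(49.5, 45.0)  # hypothetical estimate: gap of +4.5 years
```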

Lay-Yee and his colleagues also adjusted their analyses for various potential confounding factors. These included socio-demographic factors like sex and socio-economic status, as well as family factors (teen-aged mother, single parent, change in residence, maltreatment) and child-behavioral factors (self-control, worry/fearfulness).

The development of robotic avatars could benefit from an improvement in how computers detect objects in low-resolution images.

A team at RIKEN has improved computer vision recognition capabilities by training algorithms to better identify objects in low-resolution images. Inspired by human brain memory formation techniques, the model degrades the quality of high-resolution images to train the algorithm in self-supervised learning, enhancing object recognition in low-quality images. The development is expected to benefit not only traditional computer vision applications but also the creation of cybernetic avatars and terahertz imaging technology.
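
The core trick, degrading high-resolution images to create training pairs without human labels, can be sketched as below. This is an assumption-laden toy, not RIKEN's actual pipeline; `degrade` and the nearest-neighbour scheme are stand-ins for whatever degradation the model uses.

```python
import numpy as np

def degrade(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Crude low-resolution proxy: downsample, then upsample back with
    nearest-neighbour repetition so the shapes match for paired training."""
    small = img[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[: img.shape[0], : img.shape[1]]

# Self-supervised pair: a model would be trained so that its features for
# degrade(img) agree with its features for img, with no human labels needed.
img = np.random.default_rng(0).random((32, 32))
low = degrade(img)
```

The design point is that the high-resolution image serves as its own supervision signal, which is what lets the approach scale without annotation effort.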

Robotic avatar vision enhancement inspired by human perception.

Ticking clocks and flashing fireflies that start out of sync will fall into sync, a tendency that has been observed for centuries. A discovery two decades ago therefore came as a surprise: the dynamics of identical coupled oscillators can also be asynchronous. The ability to fall in and out of sync, a behavior dubbed a chimera state, is generic to identical coupled oscillators and requires only that the coupling is nonlocal. Now Yasuhiro Yamada and Kensuke Inaba of NTT Basic Research Laboratories in Japan show that this behavior can be analyzed using a lattice model (the XY model) developed to understand antiferromagnetism [1]. Besides a pleasing correspondence, Yamada and Inaba say that their finding offers a path to study the partial synchronization of neurons that underlie brain function and dysfunction.

The chimera states of a system are typically analyzed by looking at how the relative phases of the coupled oscillators fall in and out of sync. But that approach struggles to describe the system when the system contains distantly separated pockets of synchrony or when there are nontrivial configurations of the oscillators, such as twisted or spiral waves. It also requires knowledge of the network’s structure and the oscillators’ equations of motion.

In seeking an alternative approach, Yamada and Inaba turned to a two-dimensional lattice model used to tackle phase transitions in 2D condensed-matter systems. A crucial ingredient in that model is a topological defect called a vortex. Yamada and Inaba found that they could capture the asynchronous dynamics of pairs of oscillators by formulating the problem in terms of an analogous quantity that they call pseudovorticity, whose absence indicates synchrony and whose presence indicates asynchrony. Their calculations show that their pseudovorticity-based lattice model can successfully recover the chimera-state behavior of a simulated neural network made up of 200 model oscillators of a type commonly used to study brain activity.
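
The setting can be made concrete with a standard nonlocally coupled phase-oscillator model. The sketch below is a generic Kuramoto-style simulation, not Yamada and Inaba's XY-model formulation; the parameters and the local order parameter used to spot pockets of synchrony are illustrative choices.

```python
import numpy as np

def step(theta: np.ndarray, K: float = 1.0, R: int = 30, dt: float = 0.05) -> np.ndarray:
    """One Euler step for N identical phase oscillators on a ring, each
    coupled only to its R nearest neighbours on either side (nonlocal coupling)."""
    N = len(theta)
    new = np.empty(N)
    for i in range(N):
        js = np.arange(i - R, i + R + 1) % N  # nonlocal neighbourhood
        new[i] = theta[i] + dt * (K / (2 * R)) * np.sin(theta[js] - theta[i]).sum()
    return new

def local_sync(theta: np.ndarray, R: int = 5) -> np.ndarray:
    """Local order parameter in [0, 1]: values near 1 mean local synchrony."""
    N = len(theta)
    return np.array([abs(np.exp(1j * theta[np.arange(i - R, i + R + 1) % N]).mean())
                     for i in range(N)])

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
for _ in range(100):
    theta = step(theta)
r = local_sync(theta)  # coexisting high-r and low-r pockets = partial synchrony
```

A chimera state corresponds to some regions of the ring showing high local order while others stay disordered, which is the behavior the pseudovorticity description is meant to diagnose without tracking every phase.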

Wave-based analog computing has recently emerged as a promising computing paradigm due to its potential for high computational efficiency and minimal crosstalk. Although low-frequency acoustic analog computing systems exist, their bulky size makes it difficult to integrate them into chips that are compatible with complementary metal-oxide semiconductors (CMOS). This research paper addresses this issue by introducing a compact analog computing system (ACS) that leverages the interactions between ultrasonic waves and metasurfaces to solve ordinary and partial differential equations. The results of our wave propagation simulations, conducted using MATLAB, demonstrate the high accuracy of the ACS in solving such differential equations. Our proposed device has the potential to enhance the prospects of wave-based analog computing systems as the supercomputers of tomorrow.

A new study led by University of Maryland physicists sheds light on the cellular processes that regulate genes. Published in the journal Science Advances, the paper explains how the dynamics of a polymer called chromatin—the structure into which DNA is packaged—regulate gene expression.

Through the use of machine learning and statistical algorithms, a research team led by Physics Professor Arpita Upadhyaya and National Institutes of Health Senior Investigator Gordon Hager discovered that chromatin can switch between a lower and a higher mobility state within seconds. The team found that the extent to which chromatin moves inside cells is an overlooked but important process, with the lower mobility state being linked to gene expression.

Notably, transcription factors (TFs), proteins that bind specific DNA sequences within the chromatin polymer and turn genes on or off, exhibit the same mobility as the piece of chromatin they are bound to. In their study, the researchers analyzed a group of TFs that are targeted by drugs treating a variety of diseases and conditions.
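
The idea of classifying a trajectory into lower and higher mobility states can be sketched with a threshold on mean squared step size. This is a crude stand-in for the study's machine learning and statistical algorithms; the window size and threshold are arbitrary illustrative values.

```python
import numpy as np

def mobility_states(track: np.ndarray, window: int = 10,
                    threshold: float = 0.5) -> np.ndarray:
    """Label each window of a 1D position trajectory as low (0) or high (1)
    mobility from its mean squared step size. Toy version of the idea only."""
    steps = np.diff(track)
    n = len(steps) // window
    msd = np.array([np.mean(steps[i * window:(i + 1) * window] ** 2)
                    for i in range(n)])
    return (msd > threshold).astype(int)

# Deterministic toy trajectory: 100 small steps, then 100 large steps.
track = np.cumsum(np.concatenate([np.full(100, 0.1), np.full(100, 2.0)]))
states = mobility_states(track)  # early windows labeled 0, late windows 1
```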

Recent advancements in deep learning have significantly impacted computational imaging, microscopy, and holography-related fields. These technologies have applications in diverse areas, such as biomedical imaging, sensing, diagnostics, and 3D displays. Deep learning models have demonstrated remarkable flexibility and effectiveness in tasks like image translation, enhancement, super-resolution, denoising, and virtual staining. They have been successfully applied across various imaging modalities, including bright-field and fluorescence microscopy; deep learning’s integration is reshaping our understanding and capabilities in visualizing the intricate world at microscopic scales.

In computational imaging, prevailing techniques predominantly employ supervised learning models, necessitating substantial datasets with annotations or ground-truth experimental images. These models often rely on labeled training data acquired through various methods, such as classical algorithms or registered image pairs from different imaging modalities. However, these approaches have limitations, including the laborious acquisition, alignment, and preprocessing of training images and the potential introduction of inference bias. Despite efforts to address these challenges through unsupervised and self-supervised learning, the dependence on experimental measurements or sample labels persists. While some attempts have used labeled simulated data for training, accurately representing experimental sample distributions remains complex and requires prior knowledge of sample features and imaging setups.

To address these inherent issues, researchers from the UCLA Samueli School of Engineering introduced GedankenNet, a self-supervised learning framework that eliminates the need for labeled or experimental training data and requires no resemblance to real-world samples. By training on physics consistency and artificial random images, GedankenNet overcomes the challenges posed by existing methods. It establishes a new paradigm in hologram reconstruction, offering a promising solution to the limitations of the supervised learning approaches commonly used in microscopy, holography, and computational imaging tasks.
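
The physics-consistency idea can be sketched numerically: a predicted object field, propagated forward by a wave-optics model, should reproduce the measured hologram. The code below is a minimal illustration using the standard angular-spectrum propagator, not GedankenNet's actual network or training loop; the wavelength, pixel size, and function names are assumptions.

```python
import numpy as np

def angular_spectrum(field: np.ndarray, dz: float,
                     wavelength: float = 530e-9, dx: float = 1e-6) -> np.ndarray:
    """Propagate a complex optical field a distance dz (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz2 = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * dz * np.sqrt(kz2))  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_consistency_loss(predicted_field: np.ndarray,
                             measured_hologram: np.ndarray, dz: float) -> float:
    """Penalize disagreement between the hologram implied by the predicted
    object field and the hologram actually measured. No labels needed."""
    simulated = np.abs(angular_spectrum(predicted_field, dz)) ** 2
    return float(np.mean((simulated - measured_hologram) ** 2))
```

Because the supervision comes from the wave equation rather than from annotated examples, a loss of this shape can in principle be driven to zero on synthetic random images, which is the property the GedankenNet approach exploits.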

A mysterious quantum phenomenon reveals an image of an atom like never before. You can even see the difference between protons and neutrons.

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the United States is a sophisticated device capable of accelerating gold ions to up to 99.995% of the speed of light. Thanks to it, it has recently been possible to verify, for example, Einstein’s famous equation E=mc².