
Scattering takes place across the universe at large and minuscule scales. Billiard balls clack off each other in bars, atomic nuclei collide to power the stars and create heavy elements, and even sound waves deviate from their original trajectories when they strike particles in the air.

Understanding such scattering can lead to discoveries about the forces that govern the universe. In a recent publication in Physical Review C, researchers from Lawrence Livermore National Laboratory (LLNL), the InQubator for Quantum Simulations and the University of Trento developed an algorithm for a quantum computer that accurately simulates scattering.

“Scattering experiments help us probe [particles] and their interactions,” said LLNL scientist Sofia Quaglioni. “The scattering of particles in matter [materials, atoms, molecules, nuclei] helps us understand how that matter is organized at a [microscopic] level.”

To identify signs of particles like the Higgs boson, CERN researchers work with mountains of data generated by LHC collisions.

Hunting for evidence of an object whose behavior is predicted by existing theories is one thing. But having successfully observed the elusive boson, identifying new and unexpected particles and interactions is an entirely different matter.

To speed up their analysis, physicists feed data from the billions of collisions that occur in LHC experiments into machine learning algorithms. These models are then trained to identify anomalous patterns.
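The anomaly-hunting idea can be sketched with a toy model: learn the subspace that "normal" events occupy, then flag events that sit far outside it. Everything below (the feature dimension, the PCA-style detector, the synthetic data) is illustrative only and is not the actual LHC analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "normal" collision events: correlated 10-dimensional
# feature vectors standing in for reconstructed event observables.
normal = rng.normal(size=(5000, 10)) @ rng.normal(size=(10, 10))

# Train: find the principal subspace that normal events occupy.
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
basis = Vt[:4]  # keep the top 4 principal directions

def anomaly_score(x):
    """Distance from x to its projection onto the learned subspace."""
    r = x - mu
    return np.linalg.norm(r - (r @ basis.T) @ basis)

# Score a typical event against one lying far off the normal manifold.
typical = normal[0]
weird = mu + 25 * rng.normal(size=10)
```

A real trigger or offline analysis would use far richer models (boosted trees, deep autoencoders) and physics-informed features, but the logic is the same: events that the "normal" model reconstructs poorly get a second look.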


Human brain organoids (“mini-brains”) are being grown in labs around the world. They’re being fed neurotransmitters, competing with AI to solve non-linear equations, and going to space to study the effects of microgravity. This video reviews three preprints, preliminary reports of new scientific studies. (My AI voice caught a cold this week.)


Preprints:

- Brain Organoid Computing for Artificial Intelligence (Cai et al.) https://www.biorxiv.org/content/10.1101/2023.02.28.530502v1.full.

- Modulation of neuronal activity in cortical organoids with bioelectronic delivery of ions and neurotransmitters (Park et al.) https://www.biorxiv.org/content/10.1101/2023.06.10.544416v1.full.



Our memristor is inspired and supported by a comprehensive theory derived directly from the underlying physical equations of diffusive and electric continuum ion transport. We experimentally verified the quantitative predictions of our theory on multiple occasions, among them the specific and surprising prediction that the memory retention time of the channel is set by the channel diffusion time, even though the channel is constantly voltage-driven. The theory relies exclusively on physical parameters, such as channel dimensions and ion concentrations, and enabled streamlined experimentation by pinpointing the relevant signal timescales, signal voltages, and a suitable reservoir computing protocol. Additionally, we identify an inhomogeneous charge density as the key ingredient for iontronic channels to exhibit current rectification (provided they are well described by slab-averaged PNP equations). Consequently, our theory paves the way for targeted advancements in iontronic circuits and facilitates efficient exploration of their diverse applications.
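As a toy illustration of that picture (not the authors' actual equations), one can integrate a single relaxation model in which the channel conductance chases a voltage-dependent steady state and, once the drive is removed, decays back on a diffusion timescale. All parameter values and functional forms below are assumptions for illustration.

```python
import numpy as np

# Assumed toy model: dg/dt = (g_inf(V) - g) / tau, with the memory
# retention time tau set by ionic diffusion across the channel.
L = 10e-6        # channel length (m), assumed
D = 1.3e-9       # aqueous ion diffusion coefficient (m^2/s), typical value
tau = L**2 / (12 * D)

def g_inf(V, g0=1.0, alpha=0.5):
    # assumed monotone steady-state conductance vs. voltage
    return g0 * (1 + np.tanh(alpha * V))

def simulate(voltages, dt, g_start=1.0):
    """Integrate the relaxation equation with forward Euler."""
    g = g_start
    trace = []
    for V in voltages:
        g += dt * (g_inf(V) - g) / tau
        trace.append(g)
    return np.array(trace)

# A write pulse followed by zero bias: the conductance rises during
# the pulse, then relaxes back on the timescale tau -- the "memory".
dt = tau / 100
pulse = np.concatenate([np.full(200, 1.0), np.zeros(1000)])
trace = simulate(pulse, dt)
```

The point of the sketch is only the qualitative behaviour the paragraph describes: the written state persists for a time set by diffusion, not by the drive.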

Looking ahead, a next step is the integration of multiple devices; the flexible fabrication methods offer a clear path toward circuits that couple multiple channels. Additionally, optimizing the device to exhibit strong conductance modulation at lower voltages would be of interest, both to bring electric potentials found in nature into the scope of possible inputs and to reduce the energy consumption of conductance modulation. From a theoretical perspective, the understanding of the (origin of the) inhomogeneous space charge and the surface conductance is still somewhat limited. Both involve physical parameters that are currently chosen from a plausible physical regime to yield good agreement rather than derived directly from the underlying physical equations. We also assume that the inhomogeneous ionic space-charge distribution is constant, while it might well be voltage-dependent.

A Canadian startup called Xanadu has built a new quantum computer it says can be easily scaled up to achieve the computational power needed to tackle scientific challenges ranging from drug discovery to more energy-efficient machine learning.

Aurora is a “photonic” quantum computer, which means it crunches numbers using photonic qubits—information encoded in light. In practice, this means combining and recombining laser beams on multiple chips using lenses, fibers, and other optics according to an algorithm. Xanadu’s computer is designed in such a way that the answer to an algorithm it executes corresponds to the final number of photons in each laser beam. This approach differs from one used by Google and IBM, which involves encoding information in properties of superconducting circuits.
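The mode-combining step can be sketched with a minimal linear-optics example. For laser-like (coherent) light, a lossless beam splitter acts as a unitary on the two mode amplitudes, and the mean photon number in each output arm is the squared amplitude. The splitter angle and input state below are assumptions for illustration; this is not Xanadu's actual architecture.

```python
import numpy as np

# A lossless beam splitter mixes two optical modes; for coherent light
# the complex amplitudes transform linearly under a 2x2 unitary.
def beam_splitter(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a_in = np.array([1.0 + 0j, 0.0 + 0j])    # all light enters mode 0
a_out = beam_splitter(np.pi / 4) @ a_in  # 50:50 splitter

# Mean photon number per output mode; the unitary conserves the total.
mean_photons = np.abs(a_out) ** 2
```

Chaining many such unitaries across modes, then counting photons at the end, is the basic grammar of photonic computation the passage describes.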

Question Can an electrocardiography (ECG)–based artificial intelligence risk estimator for hypertension (AIRE-HTN) predict incident hypertension and stratify risk for incident hypertension-associated adverse events?

Findings In this prognostic study including an ECG algorithm trained on 189 539 patients at Beth Israel Deaconess Medical Center and externally validated on 65 610 patients from UK Biobank, AIRE-HTN predicted incident hypertension and stratified risk for cardiovascular death, heart failure, myocardial infarction, ischemic stroke, and chronic kidney disease.

Meaning Results suggest that AIRE-HTN can predict the development of hypertension and may identify at-risk patients for enhanced surveillance.

Optical fibers are fundamental components in modern science and technology due to their inherent advantages, providing an efficient and secure medium for applications such as internet communication and big data transmission. Compared with single-mode fibers (SMFs), multimode fibers (MMFs) can support a much larger number of guided modes (~10³ to ~10⁴), offering the attractive advantage of high-capacity information and image transportation within the diameter of a hair. This capability has positioned MMFs as a critical tool in fields such as quantum information and micro-endoscopy.

However, MMFs pose a significant challenge: their highly scattering nature introduces severe modal dispersion during transmission, which significantly degrades the quality of the transmitted information. Existing technologies, such as artificial neural networks (ANNs) and spatial light modulators (SLMs), have achieved limited success in reconstructing distorted images after MMF transmission. Despite these advancements, the direct optical transmission of undistorted images through MMFs using micron-scale integrated devices has remained an elusive goal in optical research.
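The modal-scrambling problem, and the correction strategy that SLM-based methods rely on, can be sketched with the standard transmission-matrix picture: the fiber applies a fixed (but a priori unknown) unitary to the input field, so once that matrix is measured, the input can be pre-compensated. The mode count and random unitary below are illustrative assumptions, not the published device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model strong mode mixing in an MMF as a fixed random unitary T
# acting on the vector of mode amplitudes (a flattened "image").
n_modes = 64
T, _ = np.linalg.qr(rng.normal(size=(n_modes, n_modes))
                    + 1j * rng.normal(size=(n_modes, n_modes)))

image = rng.normal(size=n_modes)   # target field to transmit
scrambled = T @ image              # what exits the bare fiber: distorted

# If T has been characterized, an SLM can launch T^dagger @ image so
# that the fiber itself undoes the pre-compensation.
pre_comp = T.conj().T @ image
received = T @ pre_comp            # equals the original image
```

The hard part in practice is measuring T (thousands of modes, drifting with bending and temperature), which is why a device that transmits undistorted images directly, without this calibration loop, is such a sought-after goal.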

Addressing the longstanding challenges of multi-mode fiber (MMF) transmission, the research team led by Prof. Qiming Zhang and Associate Prof. Haoyi Yu from the School of Artificial Intelligence Science and Technology (SAIST) at the University of Shanghai for Science and Technology (USST) has introduced a groundbreaking solution. The study is published in the journal Nature Photonics.

Estimating spectral features of quantum many-body systems has attracted great attention in condensed matter physics and quantum chemistry. To achieve this task, various experimental and theoretical techniques have been developed, such as spectroscopy techniques [1–7] and quantum simulation, either by engineering controlled quantum devices [8–16] or by executing quantum algorithms [17–20] such as quantum phase estimation and variational algorithms. However, probing the behaviour of complex quantum many-body systems remains a challenge, which demands substantial resources for both approaches. For instance, a real probe by neutron spectroscopy requires access to large-scale facilities with high-intensity neutron beams, while quantum computation of eigenenergies typically requires controlled operations with a long coherence time [17,18]. Efficient estimation of spectral properties has become a topic of increasing interest in this noisy intermediate-scale quantum era [21].

A potential solution for efficient spectral property estimation is to extract the spectral information from the dynamics of observables, rather than relying on real probes such as scattering spectroscopy or on direct computation of eigenenergies. This approach capitalises on the basic fact of quantum mechanics that spectral information is naturally carried by an observable's dynamics [10,20,22–26]. In a solid with translation invariance, for instance, the dynamic structure factor, which can be probed in spectroscopy experiments [7,26], reaches its local maximum when both the energy and momentum selection rules are satisfied. The energy dispersion can therefore be inferred by tracking the peak intensities in the energy excitation spectrum.
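A minimal numerical sketch of that idea: evolve a small system, record an observable's time trace, and read the energy gap off the Fourier peak, with no direct eigenvalue computation in the readout step. The two-level Hamiltonian and its parameters below are assumptions chosen for illustration.

```python
import numpy as np

# Assumed toy Hamiltonian H = (Delta/2) sigma_z + (Omega/2) sigma_x.
# <sigma_z(t)> oscillates at the gap E = sqrt(Delta^2 + Omega^2),
# so a Fourier transform of the trace recovers E.
Delta, Omega = 1.0, 0.75
H = 0.5 * np.array([[Delta, Omega], [Omega, -Delta]])

# Diagonalize once, purely to generate the exact dynamics we "measure".
evals, evecs = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0])        # start in |0>
sz = np.diag([1.0, -1.0])

dt, n_steps = 0.05, 4096
ts = dt * np.arange(n_steps)
phases = np.exp(-1j * np.outer(ts, evals))            # exp(-i E_k t)
psis = (evecs * phases[:, None, :]) @ (evecs.conj().T @ psi0)
signal = np.einsum('ti,ij,tj->t', psis.conj(), sz, psis).real

# The peak of the nonzero-frequency spectrum sits at the gap E.
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n_steps, d=dt) * 2 * np.pi    # angular frequency
E_est = freqs[np.argmax(spec)]
E_true = np.sqrt(Delta**2 + Omega**2)
```

On hardware, the same time trace would come from repeated measurements of the evolving system rather than from exact diagonalization, which is exactly what makes the approach attractive for noisy devices.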

Imagine a world where the act of observation itself holds the key to solving our most complex problems, a world where the very fabric of reality becomes a canvas for computation. This is the tantalizing promise of Observational Computation (OC), a radical new paradigm poised to redefine the very nature of computation and our understanding of the universe itself.

Forget silicon chips and algorithms etched in code; OC harnesses the enigmatic dance of quantum mechanics and the observer effect, where the observer and the observed are inextricably intertwined. Instead of relying on traditional processing power, OC seeks to translate computational problems into carefully crafted observer-environment systems. Picture a quantum stage where potential solutions exist in a hazy superposition, like ghostly apparitions waiting for the spotlight of observation to solidify them into reality.

By meticulously designing these “observational experiments,” we can manipulate quantum systems, nudging them towards desired outcomes. This elegant approach offers tantalizing advantages over our current computational methods. Imagine harnessing the inherent parallelism of quantum superposition for exponentially faster processing, or tapping into the natural energy flows of the universe for unprecedented energy efficiency.

AI's transformational impact is well under way. But as AI technologies develop, so too does their power consumption. Further advancements will require AI chips that can process AI calculations with high energy efficiency. This is where spintronic devices enter the equation: their integrated memory and computing capabilities mimic the human brain, and they can serve as building blocks for lower-power AI chips.

Now, researchers at Tohoku University, National Institute for Materials Science, and Japan Atomic Energy Agency have developed a new spintronic device that allows for the electrical mutual control of non-collinear antiferromagnets and ferromagnets. This means the device can switch magnetic states efficiently, storing and processing information with less energy—just like a brain-like AI chip.

This breakthrough could revolutionize AI hardware through its high efficiency and low energy costs. The team published their results in Nature Communications on February 5, 2025.