Machine-learning program reveals genes responsible for sex-specific differences in Alzheimer’s disease progression

Alzheimer’s disease (AD) is a complex neurodegenerative illness with genetic and environmental origins. Females experience faster cognitive decline and cerebral atrophy than males, while males have greater mortality rates. Using a new machine-learning method they developed called “Evolutionary Action Machine Learning (EAML),” researchers at Baylor College of Medicine and the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children’s Hospital have discovered sex-specific genes and molecular pathways that contribute to the development and progression of this condition. The study was published in Nature Communications.

“We have developed a unique machine-learning software that uses an advanced computational predictive metric called the evolutionary action (EA) score as a feature to identify genes that influence AD risk separately in males and females,” said Dr. Olivier Lichtarge, professor of biochemistry at Baylor College of Medicine. “This approach lets us exploit a massive amount of evolutionary data efficiently, so we can now probe smaller cohorts with greater accuracy and identify the genes involved in AD.”

EAML is an ensemble computational approach that combines nine machine-learning algorithms. It analyzes non-synonymous coding variants, defined as DNA mutations that alter the structure and function of the resulting protein, and estimates their deleterious effect using the evolutionary action (EA) score.
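
To make the idea concrete, here is a minimal sketch of the general pattern EAML follows: several off-the-shelf classifiers score how well per-gene variant-impact features separate cases from controls, and their results are pooled. The data, feature definitions, and choice of models below are placeholders for illustration, not the authors' actual pipeline or cohort.

```python
# Toy stand-in for an EAML-style ensemble: several classifiers evaluate whether per-gene
# variant-impact (EA-like) features separate AD cases from controls. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.uniform(0, 100, size=(n_samples, n_genes))  # hypothetical per-gene EA burden scores
y = rng.integers(0, 2, size=n_samples)              # 1 = AD case, 0 = control (synthetic labels)

# A small ensemble standing in for EAML's nine algorithms.
models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "grad_boosting": GradientBoostingClassifier(random_state=0),
    "logistic_reg": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")  # ~0.5 on random data, as expected
```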

New data-driven algorithm can forecast the mortality risk for certain cardiac surgery patients

A machine learning-based method developed by a Mount Sinai research team allows medical facilities to forecast the mortality risk for certain cardiac surgery patients. The new method is the first institution-specific model for determining a cardiac patient's risk before surgery, and it was developed using vast amounts of electronic health record (EHR) data.

Comparing the data-driven approach to the current population-derived models reveals a considerable performance improvement.
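
As a rough illustration of what "institution-specific" means in practice (and not a description of Mount Sinai's actual model), the sketch below fits a gradient-boosted classifier to synthetic pre-operative features standing in for one hospital's own EHR data; the feature set, labels, and model choice are assumptions.

```python
# Illustrative only: train a risk model on one institution's own (synthetic) pre-operative
# EHR-style features instead of applying a fixed population-derived score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
# Hypothetical pre-operative features: age, serum creatinine, ejection fraction, redo surgery.
age = rng.normal(65, 10, n)
creatinine = rng.normal(1.1, 0.3, n)
ejection_fraction = rng.normal(55, 8, n)
redo = rng.integers(0, 2, n)
X = np.column_stack([age, creatinine, ejection_fraction, redo])

# Synthetic mortality labels loosely tied to the features, purely for demonstration.
logit = -3 + 0.04 * (age - 65) + 1.5 * (creatinine - 1.1) - 0.05 * (ejection_fraction - 55) + 0.5 * redo
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC on this institution's (synthetic) data: {auc:.2f}")
```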

New algorithm-backed tool offers accurate tracking for deforestation crisis

Scientists have unveiled an innovative and comprehensive strategy to effectively detect and track large-scale forest disturbances, according to a new study published in the Journal of Remote Sensing.

Approximately 27 football fields’ worth of forests are lost every minute around the globe, resulting in a massive annual loss of 15 billion trees, according to the WWF. Given this concerning context, the new forest monitoring approach could be a valuable tool for effectively monitoring and managing forests as they undergo changes over time.
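
The study's algorithm is not reproduced here, but the toy below shows the basic remote-sensing idea such monitoring tools build on: compute a vegetation index from two satellite snapshots and flag pixels where it drops sharply. The bands and threshold are invented for the example.

```python
# Toy change detection for forest loss: compare NDVI between two dates and flag pixels whose
# vegetation index drops sharply. Arrays are synthetic stand-ins for red/near-infrared bands.
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

# Year 1: healthy forest (high NIR reflectance). Year 2: a cleared patch in one corner.
nir_t1 = rng.uniform(0.5, 0.6, shape); red_t1 = rng.uniform(0.04, 0.08, shape)
nir_t2 = nir_t1.copy();                red_t2 = red_t1.copy()
nir_t2[:30, :30] = rng.uniform(0.2, 0.3, (30, 30))   # cleared area reflects less near-infrared
red_t2[:30, :30] = rng.uniform(0.2, 0.3, (30, 30))   # and more red

loss_mask = (ndvi(nir_t1, red_t1) - ndvi(nir_t2, red_t2)) > 0.3
print(f"Flagged {loss_mask.sum()} of {loss_mask.size} pixels as probable forest loss")
```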

A programmable surface plasmonic neural network to detect and process microwaves

AI tools based on artificial neural networks (ANNs) are being introduced in a growing number of settings, helping humans to tackle many problems faster and more efficiently. While most of these algorithms run on conventional digital devices and computers, electronic engineers have been exploring the potential of running them on alternative platforms, such as diffractive optical devices.

A research team led by Prof. Tie Jun Cui at Southeast University in China has recently developed a new programmable neural network based on a so-called spoof surface plasmon polariton (SSPP), a surface electromagnetic wave that propagates along planar interfaces. This newly proposed surface plasmonic neural network (SPNN) architecture, introduced in a paper in Nature Electronics, can detect and process microwaves, which could be useful for wireless communication and other technological applications.

“In digital hardware research for the implementation of artificial neural networks, optical neural networks and diffractive deep neural networks recently emerged as promising solutions,” Qian Ma, one of the researchers who carried out the study, told Tech Xplore. “Previous research focusing on optical neural networks showed that simultaneous high-level programmability and nonlinear computing can be difficult to achieve. Therefore, these ONN devices have usually been limited to fixed designs without programmability, or applied only to simple recognition tasks (i.e., linear problems).”
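
As a conceptual software analogue only (not a model of the SSPP hardware), a wave-based neural layer can be pictured as a reprogrammable complex transmission matrix followed by a nonlinear intensity readout; changing the "program" swaps the matrix without changing the device.

```python
# Conceptual analogue of a programmable wave-based layer: linear mixing by a complex matrix W
# (the reprogrammable part) followed by a nonlinearity such as intensity detection |z|^2.
import numpy as np

rng = np.random.default_rng(3)

def wave_layer(x, W):
    """Linear wave mixing (complex matrix W) followed by intensity readout."""
    return np.abs(W @ x) ** 2

x = rng.normal(size=8) + 1j * rng.normal(size=8)            # incoming microwave field samples
W_task_a = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
W_task_b = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))

print("outputs for task A:", wave_layer(x, W_task_a).round(2))
print("outputs for task B:", wave_layer(x, W_task_b).round(2))  # same input, new "program"
```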

How is human behaviour impacted by an unfair AI? A game of Tetris reveals all

A team of researchers put a spin on Tetris and observed how people behaved as they played the game.

We live in a world run by machines. They make important decisions for us, like whom to hire, who gets approved for a loan, or what content we see on social media. Machines and computer programs have an increasing influence over our lives, now more than ever, as artificial intelligence (AI) makes inroads into our lives in new ways. And this influence goes far beyond the person directly interacting with a machine.


A Cornell University-led experiment in which two people played a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.

Compression algorithms run on AI hardware to simulate nature’s most complex systems

High-performance computing (HPC) has become an essential tool for processing large datasets and simulating nature's most complex systems. However, researchers face difficulties in developing more intensive models because Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, is slowing, and memory bandwidth cannot keep pace with compute. But scientists can speed up simulations of complex systems by using compression algorithms running on AI hardware.

A team led by computer scientist Hatem Ltaief is tackling this problem head-on by employing hardware designed for artificial intelligence (AI) to help scientists make their code more efficient. In a paper published in the journal High Performance Computing, they report making simulations up to 150 times faster in the diverse fields of climate modeling, astronomy, seismic imaging and wireless communications.

Previously, Ltaief and co-workers showed that many scientists were riding the wave of hardware development and “over-solving” their models, carrying out lots of unnecessary calculations.
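
The paper's implementation is not reproduced here, but the toy below captures the intuition behind not over-solving: matrices that arise from smooth physical kernels are often numerically low-rank, so a truncated factorization gives nearly the same answer with far fewer operations.

```python
# Generic low-rank compression demo (not the paper's code): a smooth Gaussian kernel matrix
# is numerically low-rank, so a truncated SVD reproduces matrix-vector products cheaply.
import numpy as np

n = 1500
pts = np.linspace(0.0, 1.0, n)
A = np.exp(-((pts[:, None] - pts[None, :]) ** 2) / (2 * 0.1**2))  # smooth kernel matrix
x = np.random.default_rng(4).normal(size=n)

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-6 * s[0]))                 # keep singular values above a tolerance
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

y_full = A @ x                                   # O(n^2) work per product
y_low = Uk @ (sk * (Vtk @ x))                    # O(n*k) work per product once factored
rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
print(f"kept rank {k} of {n}; relative error in the product: {rel_err:.1e}")
```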

Supercomputing simulations spot electron orbital signatures

No one will ever be able to see a purely mathematical construct such as a perfect sphere. But now, scientists using supercomputer simulations and atomic resolution microscopes have imaged the signatures of electron orbitals, which are defined by mathematical equations of quantum mechanics and predict where an atom’s electron is most likely to be.

Scientists at UT Austin, Princeton University, and ExxonMobil have directly observed the signatures of electron orbitals in two different transition-metal atoms, iron (Fe) and cobalt (Co), present in metal-phthalocyanines. Those signatures are apparent in the forces measured by atomic force microscopes, which often reflect the underlying orbitals and can be interpreted accordingly.

Their study was published in March 2023 as an Editors’ Highlight in the journal Nature Communications.
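
As a reminder of what those orbital "signatures" trace back to (a generic textbook calculation, not the paper's simulation), the angular probability density of a real d_z² orbital peaks along the z axis and vanishes on a nodal cone near 54.7 degrees:

```python
# Angular probability density of a real d_z^2 orbital: |Y_2^0|^2 with
# Y_2^0 = sqrt(5/(16*pi)) * (3*cos^2(theta) - 1). Generic quantum-mechanics illustration.
import numpy as np

theta = np.linspace(0.0, np.pi, 181)                      # polar angle on a 1-degree grid
Y = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta) ** 2 - 1.0)
density = Y ** 2

peak = np.degrees(theta[np.argmax(density)])
node = np.degrees(theta[np.argmin(density)])              # analytic value is ~54.7 degrees
print(f"density peaks at theta = {peak:.0f} degrees (along the z axis)")
print(f"nodal cone near theta = {node:.0f} degrees")
```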

Quantum Computing Algorithm Breakthrough Brings Practical Use Closer to Reality

Out of all common refrains in the world of computing, the phrase “if only software would catch up with hardware” would probably rank pretty high. And yet, software does sometimes catch up with hardware. In fact, it seems that this time, software can go as far as unlocking quantum computations for classical computers. That’s according to researchers with the RIKEN Center for Quantum Computing, Japan, who have published work on an algorithm that significantly accelerates a specific quantum computing workload. More significantly, the workload itself — called time evolution operators — has applications in condensed matter physics and quantum chemistry, two fields that can unlock new worlds within our own.
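
For readers unfamiliar with the term, the sketch below shows in miniature what a time-evolution operator is and how it is commonly approximated by Trotter splitting. It is a generic textbook construction on a toy two-qubit Hamiltonian, not the RIKEN algorithm described above.

```python
# Toy time evolution: a system with Hamiltonian H evolves under U = exp(-i H t).
# When H = A + B, a standard trick (Trotterization) approximates U with many small
# alternating steps of exp(-i A dt) and exp(-i B dt). Generic illustration only.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                       # two-qubit interaction term
B = np.kron(X, I2) + np.kron(I2, X)     # transverse-field term
H = A + B
t, n_steps = 1.0, 100

U_exact = expm(-1j * H * t)
step = expm(-1j * A * t / n_steps) @ expm(-1j * B * t / n_steps)
U_trotter = np.linalg.matrix_power(step, n_steps)

print("Trotter error (spectral norm):", np.linalg.norm(U_exact - U_trotter, 2))
```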

Normally, an improved algorithm wouldn’t be completely out of the ordinary; updates are everywhere, after all. Every app update, software update, or firmware upgrade is essentially bringing revised code that either solves problems or improves performance (hopefully). And improved algorithms are nice, as anyone with a graphics card from either AMD or NVIDIA can attest. But let’s face it: We’re used to being disappointed with performance updates.

Powering AI On Mobile Devices Requires New Math And Qualcomm Is Pioneering It

The feature image you see above was generated by an AI text-to-image rendering model called Stable Diffusion, which typically runs in the cloud via a web browser and is driven by data center servers with big power budgets and a ton of silicon horsepower. However, the image above was generated by Stable Diffusion running on a smartphone in airplane mode, with no connection to that cloud data center and no connectivity whatsoever. And the AI model rendering it was powered by a Qualcomm Snapdragon 8 Gen 2 mobile chip on a device that operates at under 7 watts or so.

It took Stable Diffusion only a few short phrases and 14.47 seconds to render this image.


This is an example of a 540p-resolution input image being upscaled to 4K resolution, which results in much cleaner lines, sharper textures, and a better overall experience. Though Qualcomm has a non-AI version of this available today, called Snapdragon GSR, someday in the future mobile gaming enthusiasts will be treated to even better image quality without sacrificing battery life, and at even higher frame rates.
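
The pixel arithmetic behind that upscaling is worth spelling out. Assuming a 960x540 input frame and a 3840x2160 output, the upscaler synthesizes the large majority of the pixels that end up on screen:

```python
# Back-of-envelope pixel math for 540p -> 4K upscaling (assumed resolutions: 960x540 in,
# 3840x2160 out). The GPU renders the small frame; the upscaler fills in the rest.
in_w, in_h = 960, 540
out_w, out_h = 3840, 2160

rendered = in_w * in_h
displayed = out_w * out_h
print(f"scale factor: {out_w // in_w}x per axis, {displayed // rendered}x total pixels")
print(f"rendered pixels: {rendered:,}; displayed pixels: {displayed:,}")
print(f"share of displayed pixels synthesized by the upscaler: {1 - rendered / displayed:.0%}")
```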

This is just one example of gaming and media enhancement with pre-trained and quantized machine learning models, but you can quickly think of a myriad of applications that could benefit greatly, from recommendation engines to location-aware guidance, to computational photography techniques and more.

We just needed new math for all this AI heavy lifting on smartphones and other lower-power edge devices, and it appears Qualcomm is leading that charge.
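
Much of that "new math" is low-precision arithmetic. As a minimal generic sketch (not Qualcomm's specific toolchain), symmetric 8-bit quantization of a weight tensor looks like this:

```python
# Generic symmetric INT8 quantization: store weights as 8-bit integers plus one scale factor,
# cutting memory and enabling integer math, at the cost of a small rounding error.
import numpy as np

rng = np.random.default_rng(5)
w = rng.normal(0.0, 0.2, size=(4, 4)).astype(np.float32)   # float32 weights

scale = float(np.abs(w).max()) / 127.0                     # one scale for the whole tensor
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale              # what the runtime effectively uses

print("max absolute quantization error:", float(np.abs(w - w_dequant).max()))
print("bytes per weight: 4 (fp32) -> 1 (int8)")
```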

Generative AI Breaks The Data Center: Data Center Infrastructure And Operating Costs Projected To Increase To Over $76 Billion By 2028

With the launch of Large Language Models (LLMs) for Generative Artificial Intelligence (GenAI), the world has become both enamored and concerned with the potential for AI. The ability to hold a conversation, pass a test, develop a research paper, or write software code are tremendous feats of AI, but they are only the beginning of what GenAI will be able to accomplish over the next few years. All this innovative capability comes at a high cost in terms of processing performance and power consumption. So, while the potential for AI may be limitless, physics and costs may ultimately be the boundaries.

Tirias Research forecasts that on the current course, generative AI data center server infrastructure plus operating costs will exceed $76 billion by 2028, with growth challenging the business models and profitability of emergent services such as search, content creation, and business automation incorporating GenAI. For perspective, this cost is more than twice the estimated annual operating cost of Amazon’s cloud service AWS, which today holds one third of the cloud infrastructure services market according to Tirias Research estimates. This forecast incorporates an aggressive 4X improvement in hardware compute performance, but this gain is overrun by a 50X increase in processing workloads, even with a rapid rate of innovation around inference algorithms and their efficiency. Neural Networks (NNs) designed to run at scale will be even more highly optimized and will continue to improve over time, which will increase each server’s capacity. However, this improvement is countered by increasing usage, more demanding use cases, and more sophisticated models with orders of magnitude more parameters. The cost and scale of GenAI will demand innovation in optimizing NNs and is likely to push the computational load out from data centers to client devices like PCs and smartphones.
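
Read as back-of-envelope arithmetic, and assuming the two projected factors simply compose, the forecast's headline ratio works out as follows:

```python
# The forecast's headline ratio, taken at face value: workload growth outpaces per-server
# compute gains, so far more server capacity (and spend) is needed than today.
workload_growth = 50.0   # projected increase in GenAI processing demand by 2028
hardware_gain = 4.0      # projected improvement in per-server compute performance

capacity_multiple = workload_growth / hardware_gain
print(f"server capacity needed vs. today: ~{capacity_multiple:.1f}x")
```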
