
This AI Uses a Scan of Your Retina to Predict Your Risk of Heart Disease

They then used QUARTZ to analyze retinal images from 7,411 more people, aged 48 to 92, and combined this data with information about their health history (such as smoking, statin use, and previous heart attacks) to predict their risk of heart disease. Participants’ health was tracked for seven to nine years, and their outcomes were compared to Framingham risk score (FRS) predictions.

A common tool for estimating heart disease risk, the FRS looks at age, gender, total cholesterol, high density lipoprotein cholesterol, smoking habits, and systolic blood pressure to estimate the probability someone will develop heart disease within a given span of time, usually 10 to 30 years.
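In spirit, the FRS is a weighted scoring function over those risk factors, mapped to a probability. A minimal sketch of that idea follows; the coefficients and offset below are illustrative placeholders, not the published Framingham model parameters.

```python
import math

def framingham_style_risk(age, total_chol, hdl_chol, systolic_bp, smoker):
    """Toy 10-year risk estimate in the Framingham style.

    The weights below are made-up for illustration; the real FRS uses
    published, sex-specific coefficients fit to cohort data."""
    score = (
        0.05 * age
        + 0.01 * total_chol
        - 0.03 * hdl_chol        # higher HDL lowers the score
        + 0.02 * systolic_bp
        + 0.7 * (1 if smoker else 0)
    )
    # Map the linear score to a probability with a logistic link.
    return 1 / (1 + math.exp(-(score - 7.5)))
```

A lower-risk profile (younger, non-smoker, high HDL) yields a small probability, while an older smoker with low HDL and high blood pressure yields a much larger one, matching the qualitative behavior described above.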

The QUARTZ team compared their data to 10-year FRS predictions and said the algorithm’s accuracy was on par with that of the conventional tool.

Journal of Experimental and Theoretical Physics

Circa 2020. Basically, this means a magnetic transistor can have not only quantum properties but also extremely high processing speeds, which means we could eventually have nanomachines with near-infinite speeds.


Abstract The discovery of spin superfluidity in antiferromagnetic superfluid 3He is a remarkable achievement associated with the name of Andrey Stanislavovich Borovik-Romanov. After 30 years, quantum effects in a magnon gas (such as the magnon Bose–Einstein condensate and spin superfluidity) have become quite topical. We consider analogies between spin superfluidity and superconductivity. The results of quantum calculations using a 53-qubit programmable superconducting processor have been published quite recently [1]. These results demonstrate the advantage of using the quantum algorithm of calculations with this processor over the classical algorithm for some types of calculations. We consider the possibility of constructing an analogous (in many respects) processor based on spin superfluidity.

Dr. David Markowitz, PhD — IARPA — High-Risk, High-Payoff Research For National Security Challenges



Dr. David A. Markowitz, Ph.D. (https://www.markowitz.bio/) is a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA — https://www.iarpa.gov/) which is an organization that invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the U.S. Intelligence Community (IC).

IARPA’s mission is to push the boundaries of science to develop solutions that empower the U.S. IC to do its work better and more efficiently for national security. IARPA does not have an operational mission and does not deploy technologies directly to the field, but instead, they facilitate the transition of research results to IC customers for operational application.

Currently, Dr. Markowitz leads three research programs at the intersection between biology, engineering, and computing. These programs are: FELIX, which is revolutionizing the field of bio-surveillance with new experimental and computational tools for detecting genetic engineering in complex biological samples; MIST, which is developing compact and inexpensive DNA data storage devices to address rapidly growing enterprise storage needs; and MICrONS, which is guiding the development of next-generation machine learning algorithms by reverse-engineering the computations performed by mammalian neocortex.

Previously, as a researcher in neuroscience, Dr. Markowitz published first-author papers on neural computation, the neural circuit basis of cognition in primates, and neural decoding strategies for brain-machine interfaces.

Researchers at MIT Solve a Differential Equation Behind the Interaction of Two Neurons Through Synapses to Unlock a New Type of Speedy and Efficient Artificial Intelligence AI Algorithm

Continuous-time neural networks are one subset of machine learning systems capable of representation learning for spatiotemporal decision-making tasks. These models are frequently described by continuous differential equations (DEs). When run on computers, however, numerical DE solvers limit their expressive potential. This restriction has severely hampered the scaling and understanding of many natural physical processes, such as the dynamics of neural systems.

Inspired by the brains of microscopic creatures, MIT researchers have developed “liquid” neural networks, a fluid, robust ML model that can learn and adapt to changing situations. These methods can be used in safety-critical tasks such as driving and flying.

However, as the number of neurons and synapses in the model grows, the underlying mathematics becomes more difficult to solve, and the processing cost of the model rises.

Why This Breakthrough AI Now Runs A Nuclear Fusion Reactor | New AI Supercomputer

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
Nuclear fusion researchers have created a machine learning algorithm to detect and track plasma blobs that build up inside the tokamak, for the prediction of plasma disruptions, the diagnosis of plasma using spectroscopy and tomography, and the tracking of turbulence inside the fusion reactor. A new AI supercomputer built by Cerebras has over 13.5 million processor cores and over 1 exaflop of compute power. A new study reveals an innovative neuro-computational model of the human brain which could lead to the creation of conscious AI or artificial general intelligence (AGI).

AI News Timestamps:
0:00 Breakthrough AI Runs A Nuclear Fusion Reactor.
3:07 New AI Supercomputer.
6:19 New Brain Model For Conscious AI

#ai #ml #nuclear

Using game theory mathematics to resolve human conflicts

This could even help achieve world peace ✌️ via the equilibrium theory of John Nash.


Game theory mathematics is used to predict outcomes in conflict situations. Now it is being adapted through big data to resolve highly contentious issues between people and the environment.

Game theory is a mathematical concept that aims to predict outcomes and solutions to an issue in which parties with conflicting, overlapping or mixed interests interact.

In “theory,” the “game” will bring everyone towards an optimal solution or “equilibrium.” It promises a scientific approach to understanding how people make decisions and reach compromises in real-world situations.
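The equilibrium idea can be made concrete with a small example. The sketch below finds pure-strategy Nash equilibria of a two-player game by checking that neither party gains by deviating alone; the prisoner's dilemma payoffs are a standard textbook instance, chosen here for illustration.

```python
def pure_nash_equilibria(payoffs):
    """Find pure-strategy Nash equilibria of a two-player game.

    payoffs[(i, j)] = (payoff to row player, payoff to column player).
    A cell is an equilibrium when neither player can improve their own
    payoff by unilaterally switching strategies."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    equilibria = []
    for i in rows:
        for j in cols:
            r, c = payoffs[(i, j)]
            row_ok = all(payoffs[(i2, j)][0] <= r for i2 in rows)
            col_ok = all(payoffs[(i, j2)][1] <= c for j2 in cols)
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: mutual defection ("D", "D") is the unique
# equilibrium, even though mutual cooperation pays both players more.
prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
```

This also shows why the "optimal" equilibrium of the text is in quotes: the stable outcome the game converges to is not always the one that is best for everyone.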

Solving brain dynamics gives rise to flexible machine-learning models

It’s why we should reverse engineer lab rat brains, crow brains, pig brains, and chimp brains, ending with fully reverse engineering the human brain, even if it’s a hassle. I still think it could all be done by the end of 2025.


Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their numbers of neurons and synapses increase, and they require clunky computer programs to solve their underlying, complicated math. And all of this math becomes harder to solve with size, meaning computing lots of small steps to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics of liquid neural nets—flexible, causal, robust, and explainable—but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they’re compact and adaptable even after training—while many traditional models are fixed.

MIT reveals a new type of faster AI algorithm for solving a complex equation

Researchers solved a differential equation behind the interaction of two neurons through synapses, creating a faster AI algorithm.

Artificial intelligence uses a technique called artificial neural networks (ANN) to mimic the way a human brain works. A neural network uses input from datasets to “learn” and output its prediction based on the given information.
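The "learn from a dataset, then predict" loop can be shown in a few lines. The following is a minimal single-neuron network (a classic perceptron) trained on a toy dataset; everything here is a generic textbook construction, not code from the MIT work.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single artificial neuron on labeled examples.

    Each example is ((x1, x2), label). The neuron adjusts its weights
    whenever its prediction is wrong -- the 'learning' step -- and the
    returned function makes predictions on new inputs."""
    w = [0.0, 0.0]
    b = 0.0

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(epochs):
        for x, y in data:
            err = y - predict(x)       # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]    # nudge weights toward the answer
            w[1] += lr * err * x[1]
            b += lr * err
    return predict

# Learn the logical OR function from four labeled examples.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

After training, the returned predictor reproduces the labels it was shown, which is the basic behavior the paragraph above describes: input data in, learned prediction out.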




Recently, researchers from the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab (MIT CSAIL) have discovered a quicker way to solve an equation used in the algorithms for ‘liquid’ neural networks.

MIT solved a century-old differential equation to break ‘liquid’ AI’s computational bottleneck

Last year, MIT developed an AI/ML algorithm capable of learning and adapting to new information while on the job, not just during its initial training phase. These “liquid” neural networks (in the Bruce Lee sense) literally play 4D chess — their models requiring time-series data to operate — which makes them ideal for use in time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. But, the problem is that data throughput has become a bottleneck, and scaling these systems has become prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved, “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said in a Tuesday press statement. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”
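The quote's core idea, replacing iterative numerical integration with a closed-form expression, can be sketched on a simple leaky-integrator neuron. This is a simplified illustration of the principle, not the published CfC formulation; the dynamics dx/dt = (drive − x)/tau are chosen because they admit an exact closed-form solution.

```python
import math

def solver_step(x0, drive, tau, t, n_steps=1000):
    """State after time t, found by numerically integrating
    dx/dt = (drive - x) / tau with many small Euler steps."""
    x, dt = x0, t / n_steps
    for _ in range(n_steps):
        x += dt * (drive - x) / tau
    return x

def closed_form_step(x0, drive, tau, t):
    """Same state computed in closed form -- one expression, no
    iterative solver, mirroring the CfC idea (simplified here)."""
    decay = math.exp(-t / tau)
    return x0 * decay + drive * (1 - decay)
```

Both functions give essentially the same answer, but the closed form does constant work per query while the solver's cost grows with the number of steps, which is the speedup the CfC models exploit at network scale.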
