
Thanks to an algorithm created by an Idaho State University professor, engineers, doctors, and physicists could all change the way they tackle the hard questions in their respective fields.

Emanuele Zappala, an assistant professor of mathematics at ISU, and his colleagues at Yale have developed the Attentional Neural Integral Equations algorithm, or ANIE for short. Their work was recently published in Nature Machine Intelligence and describes how ANIE can model large, complex systems using data alone.

“Natural phenomena, everything from plasma physics to how viruses spread, are all governed by equations which we do not fully understand,” explains Zappala. “One of the main complexities lies in long-distance relations between different data points in the systems over space and time. What ANIE does is it allows us to learn these complex systems using just those known data points.”
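The blurb never shows what an integral equation actually looks like. As a point of reference only, here is a classical toy solver for a Fredholm integral equation of the second kind, the family of equations ANIE is built around; the kernel, forcing term, and constant below are invented for illustration, and none of this is the ANIE method itself, which learns the integral operator from data using attention:

```python
import numpy as np

# Toy Fredholm integral equation of the second kind:
#   y(t) = f(t) + lam * \int_0^1 K(t, s) y(s) ds
# Discretizing the integral on a uniform grid turns it into a
# linear system (I - lam * w * K) y = f.  K, f, and lam are
# arbitrary choices here; ANIE instead *learns* the operator.

n = 200                                   # grid resolution
t = np.linspace(0.0, 1.0, n)
w = 1.0 / n                               # quadrature weight (rectangle rule)
lam = 0.5

# A dense kernel couples every grid point to every other one --
# the discrete analogue of the "long-distance relations" Zappala mentions.
K = np.exp(-np.abs(t[:, None] - t[None, :]))
f = np.sin(2 * np.pi * t)                 # forcing term

y = np.linalg.solve(np.eye(n) - lam * w * K, f)
print(y[:5])                              # solution at the first few grid points
```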

Michael Levin is a Distinguished Professor in the Biology department at Tufts University and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University. He holds the Vannevar Bush endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Before college, he worked as a software engineer and independent contractor in the field of scientific computing. He attended Tufts University, drawn to artificial intelligence and unconventional computation. To explore the algorithms by which the biological world implements complex adaptive behavior, he earned dual B.S. degrees, in computer science and in biology, and then received a PhD from Harvard University. He did postdoctoral training at Harvard Medical School, where he began to uncover a new bioelectric language by which cells coordinate their activity during embryogenesis. His independent laboratory develops new molecular-genetic and conceptual tools to probe large-scale information processing in regeneration, embryogenesis, and cancer suppression.

TIMESTAMPS:
0:00 — Introduction.
1:41 — Creating High-level General Intelligences.
7:00 — Ethical implications of Diverse Intelligence beyond AI & LLMs.
10:30 — Solving the Fundamental Paradox that faces all Species.
15:00 — Evolution creates Problem Solving Agents & the Self is a Dynamical Construct.
23:00 — Mike on Stephen Grossberg.
26:20 — A Formal Definition of Diverse Intelligence (DI).
30:50 — Intimate relationships with AI? Importance of Cognitive Light Cones.
38:00 — Cyborgs, hybrids, chimeras, & a new concept called “Synthbiosis”.
45:51 — Importance of the symbiotic relationship between Science & Philosophy.
53:00 — The Space of Possible Minds.
58:30 — Is Mike Playing God?
1:02:45 — A path forward: through the ethics filter for civilization.
1:09:00 — Mike on Daniel Dennett (RIP).
1:14:02 — An Ethical Synthbiosis that goes beyond “are you real or faking it”.
1:25:47 — Conclusion.

EPISODE LINKS:
- Mike’s Round 1: https://youtu.be/v6gp-ORTBlU
- Mike’s Round 2: https://youtu.be/kMxTS7eKkNM
- Mike’s Channel: https://www.youtube.com/@drmichaellevin
- Mike’s Website: https://drmichaellevin.org/
- Blog Website: https://thoughtforms.life
- Mike’s Twitter: https://twitter.com/drmichaellevin
- Mike’s Publications: https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en
- Mike’s NOEMA piece: https://www.noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence/
- Stephen Grossberg: https://youtu.be/bcV1eSgByzg
- Mark Solms: https://youtu.be/rkbeaxjAZm4
- VPRO Roundtable: https://youtu.be/RVrnn7QW6Jg?feature=shared

Here’s Malur Narayan of Latimer AI discussing how to remove bias from artificial intelligence systems and LLMs, and how to set a standard for identifying and measuring it.

Malur is a tech leader in AI/ML, mobile, and quantum technologies, and an advocate for tech for good and responsible AI.



The AI system is dubbed a “quantum-tunneling deep neural network” and combines neural networks with quantum tunneling. A deep neural network is a collection of machine learning algorithms inspired by the structure and function of the brain, with multiple layers of nodes between the input and output. Unlike conventional shallow networks, which include a single hidden layer between input and output, deep neural networks stack many hidden layers, which lets them model complex non-linear relationships.
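To make the shallow-versus-deep distinction concrete, here is a minimal NumPy sketch of a deep network’s forward pass; the layer sizes and activation are arbitrary choices for illustration and are not specific to the quantum-tunneling system described here:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A "shallow" network has one hidden layer between input and output;
# a deep network stacks several.  Layer sizes here are arbitrary.
layer_sizes = [8, 32, 32, 32, 4]   # input, three hidden layers, output
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each hidden layer applies a linear map followed by a nonlinearity,
    # which is what lets the network model non-linear relationships.
    for W in weights[:-1]:
        x = relu(x @ W)
    return x @ weights[-1]             # linear output layer

print(forward(rng.normal(size=8)).shape)   # -> (4,)
```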

Quantum tunneling, meanwhile, occurs when a subatomic particle, such as an electron or photon (a particle of light), passes through a barrier that classical physics says it cannot cross. Because a subatomic particle can also behave as a wave, occupying no fixed location when it is not directly observed, it has a small but finite probability of being on the other side of the barrier. When enough particles are present, some will “tunnel” through.
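For a rough sense of the numbers involved, the standard textbook (WKB-style) estimate for the probability that a particle of mass m and energy E passes a rectangular barrier of height V0 > E and width L is:

```latex
T \approx e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

The exponent grows with the barrier’s width and height, so T is tiny but never exactly zero, which is why a sufficiently large number of particles always yields some that tunnel through.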

After the data representing the optical illusion passes through the quantum tunneling stage, the slightly altered image is processed by a deep neural network.

“If you constantly use an AI to find the music, career or political candidate you like, you might eventually forget how to do this yourself.” Ethicist Muriel Leuenberger considers the personal impact of relying on AI.

A new proof shows that an upgraded version of the 70-year-old Dijkstra’s algorithm reigns supreme: It finds the most efficient pathways through any graph.

In an interview toward the end of his life, Dijkstra credited his algorithm’s enduring appeal in part to its unusual origin story. “Without pencil and paper you are almost forced to avoid all avoidable complexities,” he said.

Dijkstra’s algorithm doesn’t just tell you the fastest route to one destination. Instead, it gives you an ordered list of travel times from your current location to every other point that you might want to visit — a solution to what researchers call the single-source shortest-paths problem. The algorithm works in an abstracted road map called a graph: a network of interconnected points (called vertices) in which the links between vertices are labeled with numbers (called weights). These weights might represent the time required to traverse each road in a network, and they can change depending on traffic patterns. The larger a weight, the longer it takes to traverse that path.
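For the curious, the textbook version of the algorithm fits in a few lines. This is a standard binary-heap sketch of classic Dijkstra, not the upgraded variant the new proof analyzes, and the small road network at the bottom is made up for the example:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs,
           with non-negative weights.
    Returns the cheapest known distance to every reachable vertex.
    """
    dist = {source: 0}
    heap = [(0, source)]                     # (distance so far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale entry; already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: weights as travel times between points a-d.
roads = {"a": [("b", 5), ("c", 1)], "c": [("b", 2), ("d", 7)], "b": [("d", 1)]}
print(dijkstra(roads, "a"))   # {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```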

PRESS RELEASE — After more than a year of evaluation, NIST has selected 14 candidates for the second round of the Additional Digital Signatures for the NIST PQC Standardization Process. The advancing digital signature algorithms are: CROSS, FAEST, HAWK, LESS, MAYO, Mirath, MQOM, PERK, QR-UOV, RYDE, SDitH, SNOVA, SQIsign, and UOV.

NIST Internal Report (IR) 8528 describes the evaluation criteria and selection process. Questions may be directed to [email protected]. NIST thanks all of the candidate submission teams for their efforts in this standardization process as well as the cryptographic community at large, which helped analyze the signature schemes.

Moving forward, the second-round candidates have the option of submitting updated specifications and implementations (i.e., “tweaks”). NIST will provide more details to the submission teams in a separate message. This second phase of evaluation and review is estimated to last 12–18 months.

In 2022, a nuclear-fusion experiment yielded more energy than was delivered by the lasers that ignited the fusion reaction (see Viewpoint: Nuclear-Fusion Reaction Beats Breakeven). That demonstration was an example of indirect-drive inertial-confinement fusion, in which lasers collapse a fuel pellet by heating a gold can that surrounds it. This approach is less efficient than heating the pellet directly since the pellet absorbs less of the lasers’ energy. Nevertheless, it has been favored by researchers at the largest laser facilities because it is less sensitive to nonuniform laser illumination. Now Duncan Barlow at the University of Bordeaux, France, and his colleagues have devised an efficient way to improve illumination uniformity in direct-drive inertial-confinement fusion [1]. This advance helps overcome a remaining barrier to high-yield direct-drive fusion using existing facilities.

Triggering self-sustaining fusion by inertial confinement requires pressures and temperatures that are achievable only if the fuel pellet implodes with high uniformity. Such uniformity can be prevented by heterogeneities in the laser illumination and in the way the beams interact with the resulting plasma. Usually, researchers identify the laser configuration that minimizes these heterogeneities by iterating radiation-hydrodynamics simulations that are computationally expensive and labor intensive. Barlow and his colleagues developed an automatic, algorithmic approach that bypasses the need for such iterative simulations by approximating some of the beam–plasma interactions.

Compared with an experiment using a spherical, plastic target at the National Ignition Facility in California, the team’s optimization method should deliver an implosion that reaches 2 times the density and 3 times the pressure. But the approach can also be applied to other pellet geometries and at other facilities.

In a world powered by artificial intelligence applications, data is king, but it’s also the crown’s biggest burden.


As described in the article, quantum memory stores data in ways that classical memory systems cannot match. In quantum systems, information is stored in quantum states, using the principles of superposition and entanglement to represent data more efficiently. This ability allows quantum systems to process and store vastly more information, potentially impacting data-heavy industries like AI.
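A rough way to see where that density comes from: an n-qubit state is described by 2^n complex amplitudes, so the representable state space doubles with every added qubit. Below is a toy NumPy state-vector illustration of superposition and entanglement, not a model of any real quantum-memory hardware:

```python
import numpy as np

# An n-qubit register is described by 2**n complex amplitudes, so the
# space of representable states doubles with every added qubit.
n = 3
superposition = np.full(2**n, 1 / np.sqrt(2**n))  # equal weight on all 8 basis states
print(np.vdot(superposition, superposition))       # ~1.0 -> a valid (normalized) state

# Entanglement: a Bell pair (|00> + |11>) / sqrt(2) on two qubits cannot be
# split into two independent one-qubit descriptions.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)

# The classical cost of storing the full state vector grows exponentially:
for qubits in (10, 20, 30):
    print(qubits, "qubits ->", 2**qubits, "complex amplitudes")
```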

In a 2021 study from the California Institute of Technology, researchers showed that quantum memory could dramatically reduce the number of steps needed to model complex systems. Their method proved that quantum algorithms using memory could require exponentially fewer steps, cutting down on both time and energy. However, this early work required vast amounts of quantum memory—an obstacle that could have limited its practical application.

Now, two independent teams have derived additional insights, demonstrating how these exponential advantages can be achieved with much less quantum memory. Sitan Chen from Harvard University, along with his team, found that just two quantum copies of a system were enough to provide the same computational efficiency previously thought to require many more.

Predicting the behavior of many interacting quantum particles is a complex task, but it’s essential for unlocking the potential of quantum computing in real-world applications. A team of researchers, led by EPFL, has developed a new method to compare quantum algorithms and identify the most challenging quantum problems to solve.

Quantum systems, from subatomic particles to complex molecules, hold the key to understanding the workings of the universe. However, modeling these systems quickly becomes overwhelming due to their immense complexity. It’s like trying to predict the behavior of a massive crowd where everyone constantly influences everyone else. When you replace the crowd with quantum particles, you encounter what’s known as the “quantum many-body problem.”

Quantum many-body problems involve predicting the behavior of numerous interacting quantum particles. Solving these problems could lead to major breakthroughs in fields like chemistry and materials science, and even accelerate the development of technologies like quantum computers.
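To make the “immense complexity” concrete, here is a minimal sketch of why exact many-body simulation blows up, using a transverse-field Ising spin chain as a stand-in; the model choice and couplings are illustrative assumptions, not the EPFL team’s method:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site_op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain."""
    out = site_op if site == 0 else I2
    for k in range(1, n):
        out = np.kron(out, site_op if k == site else I2)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """H = -J * sum Z_i Z_{i+1} - h * sum X_i  (illustrative couplings)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
    for i in range(n):
        H -= h * op_on(X, i, n)
    return H

# Every added spin doubles the matrix dimension: 2**n x 2**n.
for n in (2, 4, 8):
    print(n, "spins ->", ising_hamiltonian(n).shape)
```

Each extra particle doubles the size of the matrix describing the system, which is why brute-force simulation stalls after a few dozen particles and why researchers need to compare approximate quantum algorithms instead.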