
Understanding the behavior of nuclear matter—including the quarks and gluons that make up the protons and neutrons of atomic nuclei—is extremely complicated. This is particularly true in our three-dimensional world. Mathematical techniques from condensed matter physics that consider interactions in just one spatial dimension (plus time) greatly simplify the challenge.

Using this two-dimensional approach, scientists solved the complex equations that describe how low-energy excitations ripple through a system of dense nuclear matter. This work indicates that the cores of stars, where such dense nuclear matter exists in nature, may take an unexpected form.

Being able to understand the quark interactions in two dimensions opens a new window into understanding neutron stars, the densest form of matter in the universe. The approach could help advance the current “golden age” for studying these exotic stars. This surge in research success was triggered by recent discoveries of gravitational waves and electromagnetic emissions in the cosmos.

Computer scientists have, for decades, been optimizing how computers sort data to shave off crucial milliseconds in returning search results or alphabetizing contact lists. Now DeepMind, based in London, has vastly improved sorting speeds by applying the technology behind AlphaZero — its artificial-intelligence system for playing the board games chess, Go and shogi — to a game of building sorting algorithms. “This is an exciting result,” said Emma Brunskill, a computer scientist at Stanford University, California.

The system, AlphaDev, is described in a paper in Nature, and has invented faster algorithms that are already part of two standard C++ coding libraries, and so are being run trillions of times per day by programmers around the world.

One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.

IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”

It could be that people’s fear and distrust of AI comes partly from a lack of understanding of it, and from a stronger focus on unsettling examples than positive ones. The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.

“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.

In De Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, the ideas in which influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology.

A decade after The Human Use of Human Beings, Wiener expanded upon these ideas in a series of lectures at Yale and a philosophy seminar at Royaumont Abbey near Paris, which he reworked into the short, prophetic book God & Golem, Inc. (public library). Published by MIT Press in the final year of his life, it won him the posthumous National Book Award in the newly established category of Science, Philosophy, and Religion the following year.

“It’s an interesting new approach,” says Peter Sanders, who studies the design and implementation of efficient algorithms at the Karlsruhe Institute of Technology in Germany and who was not involved in the work. “Sorting is still one of the most widely used subroutines in computing,” he says.

DeepMind published its results in Nature today. But the techniques that AlphaDev discovered are already being used by millions of software developers. In January 2022, DeepMind submitted its new sorting algorithms to the organization that manages C++, one of the most popular programming languages in the world, and after two months of rigorous independent vetting, AlphaDev’s algorithms were added to the language. This was the first change to C++’s sorting algorithms in more than a decade and the first update ever to involve an algorithm discovered using AI.

Digital society is driving increasing demand for computation, and with it energy use. For the last five decades, we have relied on improvements in hardware to keep pace. But as microchips approach their physical limits, it’s critical to improve the code that runs on them to make computing more powerful and sustainable. This is especially important for the algorithms that make up the code running trillions of times a day.

In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.


Robots are the future of many industries. Robots are being trained all over the world to perform a wide range of tasks more meticulously — be it cleaning or playing football.

Mastering the art of cooking is one task that still has a long way to go. However, robots may soon pick up this human skill: one team used publicly available neural network algorithms to program a robotic chef to learn recipes.

General relativity is part of the wide-ranging physical theory of relativity developed by the German-born physicist Albert Einstein, who conceived it in 1915. It explains gravity based on the way space can ‘curve’ or, to put it more accurately, it associates the force of gravity with the changing geometry of space-time; hence the shorthand “Einstein’s gravity.”

The mathematical equations of Einstein’s general theory of relativity, tested time and time again, are currently the most accurate way to predict gravitational interactions, replacing those developed by Isaac Newton more than two centuries earlier.
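The association of gravity with geometry described above is captured by the Einstein field equations, which relate space-time curvature on the left to matter and energy on the right:

```latex
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

Here $R_{\mu\nu}$ and $R$ encode the curvature of space-time, $g_{\mu\nu}$ is the metric, and $T_{\mu\nu}$ is the stress-energy tensor of matter; Newton’s inverse-square law reappears as the weak-field, slow-motion limit of these equations.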

Over the last century, many experiments have confirmed the validity of both special and general relativity. In the first major test of general relativity, astronomers in 1919 measured the deflection of light from distant stars as the starlight passed by our sun, proving that gravity does, in fact, distort or curve space.
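The 1919 test checked a specific number: general relativity predicts that a light ray passing a mass $M$ at closest approach $b$ is deflected by

```latex
\theta = \frac{4GM}{c^2 b},
```

which for starlight grazing the Sun works out to about 1.75 arcseconds, twice the value a purely Newtonian treatment of light gives; the eclipse expeditions measured the larger, relativistic value.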

Read it on: https://kllonusk.wordpress.com/2022/11/19/general-relativity…ed-simply/

Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, have achieved a quantum speedup advantage in the context of a “bitstring guessing game.” By effectively suppressing the errors typically seen at this scale, they managed strings up to 26 bits long, significantly longer than previously possible. (A bit is a binary digit, either zero or one.) Their paper is published in the journal Physical Review Letters.

Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone condition of current quantum computers is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speedup necessitates noise reduction.

The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.