## Archive for the ‘mathematics’ category: Page 67

Try out my quantum mechanics course (and many others on math and science) on https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription.

Welcome everybody to our first episode of Science News without the gobbledygook. Today we’ll talk about this year’s Nobel Prize in Physics, trouble with the new data from the Webb telescope, what’s next after NASA’s collision with an asteroid, new studies about the environmental impact of Bitcoin and exposure to smoke from wildfires, a test run of a new electric airplane, and dogs that can smell mathematics.

Recent explorations of unique geometric worlds reveal perplexing patterns, including the Fibonacci sequence and the golden ratio.

In an effort to clarify how deductive reasoning is accomplished, an fMRI study was performed to observe the neural substrates of logical reasoning and mathematical calculation. Participants viewed a problem statement and three premises, and then either a conclusion or a mathematical formula. They had to indicate whether the conclusion followed from the premises, or to solve the mathematical formula. Language areas of the brain (Broca’s and Wernicke’s area) responded as the premises and the conclusion were read, but solution of the problems was then carried out by non-language areas. Regions in right prefrontal cortex and inferior parietal lobe were more active for reasoning than for calculation, whereas regions in left prefrontal cortex and superior parietal lobe were more active for calculation than for reasoning. In reasoning, only those problems calling for a search for counterexamples to conclusions recruited right frontal pole. These results have important implications for understanding how higher cognition, including deduction, is implemented in the brain. Different sorts of thinking recruit separate neural substrates, and logical reasoning goes beyond linguistic regions of the brain.

Matrix multiplication is at the heart of many machine learning breakthroughs, and it just got faster, twice. Last week, DeepMind announced it had discovered a more efficient way to perform matrix multiplication, breaking a 50-year-old record. This week, two Austrian researchers at Johannes Kepler University Linz claim they have bested that new record by one step.

In 1969, the German mathematician Volker Strassen discovered the previously best-known algorithm, which reduces the number of multiplications needed to compute a matrix product. Strassen's method multiplies two 2×2 matrices using 7 multiplications instead of 8; applied recursively, it multiplies two 4×4 matrices in 49 multiplications, while the traditional schoolroom method takes 64.
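Strassen's recursion can be sketched in a few lines of plain Python. This is an illustrative toy, not an optimized implementation: it handles square matrices whose size is a power of two and counts scalar multiplications, showing 7 × 7 = 49 of them for the 4×4 case, versus 4³ = 64 for the schoolroom method.

```python
mults = 0  # running count of scalar multiplications

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) with 7 recursive products."""
    global mults
    n = len(A)
    if n == 1:
        mults += 1
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    # Strassen's seven products instead of the obvious eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # Reassemble the quadrants into the full result.
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
C = strassen(A, B)
print(C)      # equals A, since B is the identity matrix
print(mults)  # 49 scalar multiplications, vs 4**3 = 64 naively
```

Multiplying by the identity makes the correctness check easy to eyeball; the 49 count comes from two levels of recursion, each replacing 8 products with 7.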

How many bottles does he have to sell to buy out Twitter? You do the math.

The world’s richest person, Elon Musk, launched a new perfume, and within about 24 hours he had orders worth two million dollars. Despite having no prior presence in the business, Musk sold the perfume on his reputation alone; fittingly, the Tesla CEO has since changed his Twitter description to “Perfume Salesman.”

Last Sunday, Musk unveiled the Burnt Hair perfume to his Twitter followers, announcing it as a product from his tunneling venture, The Boring Company.

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”

AlphaTensor opens the door to a world where AI designs programs that outperform anything humans engineer—including AI itself.

It was a big year. Researchers found a way to idealize deep neural networks using kernel machines—an important step toward opening these black boxes. There were major developments toward an answer about the nature of infinity. And a mathematician finally managed to model quantum gravity. Read the articles in full at Quanta Magazine: https://www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/

Quanta Magazine is an editorially independent publication supported by the Simons Foundation.

In this episode we explore a User Interface Theory of reality. Since the invention of the computer, virtual reality theories have been gaining in popularity, often to explain some difficulties around the hard problem of consciousness (see Episode #1 with Sue Blackmore for a full analysis of the problem of how subjective experiences might emerge from our brain neurology), but also to explain other non-local anomalies coming out of physics and psychology, like ‘quantum entanglement’ or ‘out-of-body experiences’. Do check the dedicated episodes #4 and #28, respectively, for a full breakdown of those two phenomena.
As you will hear today, the vast majority of cognitive scientists believe consciousness is an emergent phenomenon of matter, and that virtual reality theories are science fiction, ‘woo-woo’, or new age. One of this podcast’s jobs is to look at some of these claims and separate the wheat from the chaff, so the open-minded among us can find the threshold beyond which evidence-based thinking, no matter how contrary to the consensus, can be considered and separated from wishful thinking.
So you can imagine my joy when a hugely respected cognitive scientist and User Interface theorist, one who can cut through the polemic and orthodoxy with calm, respectful, evidence-based argumentation, agreed to come on the show: the one and only Donald D. Hoffman.

Hoffman is a full professor of cognitive science at the University of California, Irvine, where he studies consciousness, visual perception and evolutionary psychology using mathematical models and psychophysical experiments. His research subjects include facial attractiveness, the recognition of shape, the perception of motion and colour, the evolution of perception, and the mind-body problem. So he is perfectly placed to comment on how we interpret reality.

Improving the efficiency of algorithms for fundamental computations is a crucial task, as it influences the overall pace of a large number of computations that can have a significant impact. One such fundamental task is matrix multiplication, which appears in systems from neural networks to scientific computing routines. Machine learning has the potential to go beyond human intuition and beat the best human-designed algorithms currently available. However, because the space of possible algorithms is vast, automated algorithm discovery is complicated. DeepMind recently made a breakthrough by developing AlphaTensor, the first artificial intelligence (AI) system for discovering new, efficient, and provably correct algorithms for essential operations like matrix multiplication. Their approach addresses a mathematical question that has been open for over 50 years: how to multiply two matrices as quickly as possible.

AlphaTensor is built on AlphaZero, an agent that showed superhuman performance in board games like chess, Go, and shogi. The system marks AlphaZero’s first step from playing traditional games to solving complex mathematical problems. The team believes this study represents an important milestone in DeepMind’s mission to advance science and use AI to solve the most fundamental problems. The research has been published in the journal Nature.
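What AlphaTensor actually searches for can be made concrete: a matrix multiplication algorithm corresponds to a low-rank decomposition of the matrix multiplication tensor, written as three factor matrices U, V, W whose rank equals the number of multiplications. As a minimal sketch, the code below encodes Strassen's classic rank-7 decomposition for 2×2 matrices (the coefficients are the standard Strassen formulas, written out by hand here) and verifies that evaluating the decomposition reproduces an ordinary matrix product.

```python
import numpy as np

# Columns of U and V define the 7 linear combinations of entries of A and B
# that get multiplied together; rows of W recombine those 7 products into C.
# These are the standard Strassen coefficients; A, B, C are flattened row-major.
U = np.array([[ 1, 0, 1, 0, 1, -1,  0],
              [ 0, 0, 0, 0, 1,  0,  1],
              [ 0, 1, 0, 0, 0,  1,  0],
              [ 1, 1, 0, 1, 0,  0, -1]])
V = np.array([[ 1, 1, 0, -1, 0, 1, 0],
              [ 0, 0, 1,  0, 0, 1, 0],
              [ 0, 0, 0,  1, 0, 0, 1],
              [ 1, 0, -1, 0, 1, 0, 1]])
W = np.array([[ 1,  0, 0, 1, -1, 0, 1],
              [ 0,  0, 1, 0,  1, 0, 0],
              [ 0,  1, 0, 1,  0, 0, 0],
              [ 1, -1, 1, 0,  0, 1, 0]])

def bilinear(A, B):
    """Evaluate the rank-R decomposition: R multiplications, here R = 7."""
    a, b = A.reshape(-1), B.reshape(-1)
    m = (U.T @ a) * (V.T @ b)   # the only elementwise products: 7 of them
    return (W @ m).reshape(A.shape)

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(bilinear(A, B))  # [[19 22] [43 50]], same as A @ B
```

AlphaTensor's game is to place integer entries into U, V, W so that the decomposition is exact with as few columns (multiplications) as possible; any exact decomposition found this way is correct by construction.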

Matrix multiplication has numerous real-world applications despite being one of the simplest algorithms taught to students in high school. It is used for many things, including processing images on smartphones, recognizing spoken commands, rendering graphics for video games, and much more. Considerable resources go into developing computing hardware that multiplies matrices efficiently; therefore, even small gains in matrix multiplication efficiency can have a significant impact. The study investigates how the automatic development of new matrix multiplication algorithms can be advanced by using contemporary AI approaches. AlphaTensor goes beyond human intuition to find algorithms that are more efficient than the state of the art for many matrix sizes. Its AI-designed algorithms outperform those created by humans, which represents a significant advancement in algorithmic discovery.
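For reference, the high-school method referred to above uses n³ scalar multiplications for n×n matrices, which is 64 in the 4×4 case that both articles discuss. A minimal sketch with a multiplication counter:

```python
def schoolroom_multiply(A, B):
    """Multiply n x n matrices entry by entry, counting scalar multiplications."""
    n = len(A)
    count = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # one scalar multiplication
                count += 1
    return C, count

A = [[i * 4 + j for j in range(4)] for i in range(4)]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
C, count = schoolroom_multiply(A, I4)
print(count)  # 4**3 = 64 scalar multiplications for the 4x4 case
```

Because every entry of the result needs n products, the count is exactly n³; shaving even one or two multiplications off small cases compounds when those cases are applied recursively to large matrices.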
