BLOG

Archive for the ‘mathematics’ category: Page 77

Sep 24, 2022

Musing on Understanding & AI — Hugo de Garis, Adam Ford, Michel de Haan

Posted by in categories: education, existential risks, information science, mapping, mathematics, physics, robotics/AI

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI.
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (And, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copying the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — which, when strong AI comes about, will dissolve into illusions and then tell us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intel — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp rehashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy-like model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in supersymmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers Zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed to be respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!

Sep 22, 2022

China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #techno

Posted by in categories: government, mathematics, quantum physics, supercomputing

https://www.youtube.com/watch?v=slEceKBmqts

China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #technology.

“Techno Jungles”


Sep 22, 2022

Nine Inch Nails — Me I’m Not — Music Video

Posted by in categories: computing, mathematics, media & arts, military

Nine Inch Nails “Me I’m Not” remixed with US military, math, science, and computer footage from the Prelinger Archives.

Sep 21, 2022

Her work helped her boss win the Nobel Prize. Now the spotlight is on her

Posted by in categories: computing, information science, mathematics, space

Scientists have long studied the work of Subrahmanyan Chandrasekhar, the Indian-born American astrophysicist who won the Nobel Prize in 1983, but few know that his research on stellar and planetary dynamics owes a deep debt of gratitude to an almost forgotten woman: Donna DeEtte Elbert.

From 1948 to 1979, Elbert worked as a “computer” for Chandrasekhar, tirelessly devising and solving mathematical equations by hand. Though she shared authorship with the Nobel laureate on 18 papers and Chandrasekhar enthusiastically acknowledged her seminal contributions, her greatest achievement went unrecognized until a postdoctoral scholar at UCLA connected threads in Chandrasekhar’s work that all led back to Elbert.

Elbert’s achievement? Before anyone else, she predicted the conditions argued to be optimal for a planet or star to generate its own magnetic field, said the scholar, Susanne Horn, who has spent half a decade building on Elbert’s work.

Sep 21, 2022

Advancing AI trustworthiness: Updates on responsible AI research

Posted by in categories: mathematics, robotics/AI

Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can’t be wrong. The truth is AI failures are not a matter of if but when. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, with the possibility for errors throughout an AI system’s lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people’s safety and wellbeing can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?

Sep 20, 2022

To infinity and some glimpses of beyond

Posted by in category: mathematics

Certain physical problems, such as the rupture of a thin sheet, can be difficult to solve because computations break down at the point of rupture. Here the authors propose a regularization approach to overcome this breakdown, which could help in dealing with mathematical models that have finite-time singularities.
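
As a toy illustration of what a finite-time singularity and its regularization look like (a generic example, not the authors’ actual sheet-rupture model): the equation du/dt = u^2 with u(0) = u0 > 0 has the exact solution u(t) = u0 / (1 − u0·t), which blows up at the finite time t* = 1/u0, so any straightforward numerical integration fails as t approaches t*. A regularized variant such as du/dt = u^2 / (1 + ε·u^2), with a small ε > 0, caps the growth rate at 1/ε, so the solution exists for all time while staying close to the original dynamics away from the singularity and recovering it as ε → 0.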

Sep 16, 2022

Understanding how a cell becomes a person, with math

Posted by in category: mathematics

We all start from a single cell, the fertilized egg. From this cell, through a process involving cell division, cell differentiation and cell death, a human being takes shape, ultimately made up of over 37 trillion cells across hundreds or thousands of different cell types.

While we broadly understand many aspects of this developmental process, we do not know many of the details.

A better understanding of how a fertilized egg turns into trillions of cells to form a human is primarily a mathematical challenge. What we need are mathematical models that can predict and show what happens.
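
(A rough back-of-the-envelope calculation gives a feel for the scale, though it is of course a simplification: each division at most doubles the cell count, and 2^45 ≈ 3.5 × 10^13, so only around 45 to 46 rounds of synchronous doubling would already yield on the order of 37 trillion cells. The real modelling challenge is that divisions are asynchronous and lineage-specific, and are interleaved with differentiation and cell death.)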

Sep 15, 2022

New method for comparing neural networks exposes how artificial intelligence works

Posted by in categories: mathematics, robotics/AI, transportation

A team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks within the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks recognize patterns in datasets; they are used everywhere in society, in applications such as virtual assistants, facial recognition systems and self-driving cars.

“The research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”

Jones is the lead author of the paper “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks.
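
For readers who want a concrete picture of what “comparing neural networks” can mean in practice, one widely used representational-similarity measure is linear centered kernel alignment (CKA). The sketch below is a generic, minimal illustration of that idea and is not necessarily the exact metric used in the Los Alamos study; the toy activation matrices and their shapes are invented for demonstration.

import numpy as np

def linear_cka(x, y):
    # Linear centered kernel alignment between two activation matrices.
    # x: (n_samples, d1) activations from network A on a shared input batch
    # y: (n_samples, d2) activations from network B on the same batch
    # Returns a similarity score in [0, 1]; higher means more similar representations.
    x = x - x.mean(axis=0, keepdims=True)  # center each feature
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(x.T @ y, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return cross / (norm_x * norm_y)

# Toy usage: hypothetical hidden-layer activations from two networks
# evaluated on the same 256 inputs.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(256, 128))           # "network A" features
acts_b = acts_a @ rng.normal(size=(128, 64))   # a linear readout of A: scores higher
acts_c = rng.normal(size=(256, 64))            # unrelated features: scores lower
print(linear_cka(acts_a, acts_b))
print(linear_cka(acts_a, acts_c))

In a robustness study of this kind, such a score would be computed layer by layer between models of different architectures evaluated on the same inputs; the point here is only to show what a representational-similarity comparison can look like.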

Sep 13, 2022

Voxengo plugin developer says he’s broken into “some ‘backdoor’ in mathematics itself” that proves that the universe has a ‘creator’

Posted by in categories: cosmology, mathematics, physics

Vaneev posits that: “‘intelligent impulses’ or even ‘human mind’ itself (because a musician can understand these impulses) existed long before the ‘Big Bang’ happened. This discovery is probably both the greatest discovery in the history of mankind, and the worst discovery (for many) as it poses very unnerving questions that touch religious grounds.”

The Voxengo developer sums up his findings as follows: “These results of 1-bit PRVHASH say the following: if abstract mathematics contains not just a system of rules for manipulating numbers, but also a freely-defined fixed information that is also ‘readable’ by a person, then mathematics does not just ‘exist’, but ‘it was formed’, because mathematics does not evolve (beside human discovery of new rules and patterns). And since physics cannot be formulated without such mathematics, and physical processes clearly obey these mathematical rules, it means that a Creator/Higher Intelligence/God exists in relation to the Universe. For the author personally, everything is proven here.”

Vaneev says that he wanted to “share my astonishment and satisfaction with the results of this work that took much more of my time than I had wished for,” but that you don’t need to concern yourself too much with his findings if you don’t want to.

Sep 11, 2022

Particle physics on the brain

Posted by in categories: biological, mathematics, neuroscience, particle physics, quantum physics

Circa 2018.


Understanding the fundamental constituents of the universe is tough. Making sense of the brain is another challenge entirely. Each cubic millimetre of human brain contains around 4 km of neuronal “wires” carrying millivolt-level signals, connecting innumerable cells that define everything we are and do. The ancient Egyptians already knew that different parts of the brain govern different physical functions, and a couple of centuries have passed since physicians entertained crowds by passing currents through corpses to make them seem alive. But only in recent decades have neuroscientists been able to delve deep into the brain’s circuitry.

On 25 January, speaking to a packed audience in CERN’s Theory department, Vijay Balasubramanian of the University of Pennsylvania described a physicist’s approach to solving the brain. Balasubramanian did his PhD in theoretical particle physics at Princeton University and also worked on the UA1 experiment at CERN’s Super Proton Synchrotron in the 1980s. Today, his research ranges from string theory to theoretical biophysics, where he applies methodologies common in physics to model the neural topography of information processing in the brain.


Page 77 of 155