BLOG

Archive for the ‘mathematics’ category: Page 77

Oct 3, 2022

‘Quantum hair’ could resolve Hawking’s black hole paradox, say scientists

Posted by in categories: cosmology, mathematics, quantum physics

Circa 2022 😀


New mathematical formulation means huge paradigm shift in physics would not be necessary.

Oct 2, 2022

Wiggling toward bio-inspired machine intelligence

Posted by in categories: biological, mathematics, robotics/AI

Juncal Arbelaiz Mugica is a native of Spain, where octopus is a common menu item. Arbelaiz, however, appreciates octopuses and similar creatures in a different way: they inspire her research into soft-robotics theory.

More than half of an octopus’ nerves are distributed through its eight arms, each of which has some degree of autonomy. This distributed sensing and information processing system intrigued Arbelaiz, who is researching how to design decentralized intelligence for human-made systems with embedded sensing and computation. At MIT, Arbelaiz is an applied math student who is working on the fundamentals of optimal distributed control and estimation in the final weeks before completing her PhD this fall.
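As a loose illustration of what distributed sensing and computation can look like in practice, here is a toy consensus-averaging sketch in Python. It is an assumption made for this post, not Arbelaiz's actual models: each node fuses only its neighbours' values, yet the whole network settles on a shared estimate without any central processor.

```python
import numpy as np

# Toy sketch of decentralized estimation: each node ("arm segment") has a noisy
# local measurement and may only exchange values with its neighbours on a ring.
# Repeated local averaging lets every node converge toward the global mean
# without any central processor ever seeing all the data.

rng = np.random.default_rng(0)
n = 8                                        # eight hypothetical sensing nodes
true_value = 3.0
x = true_value + rng.normal(0.0, 0.5, n)     # local noisy measurements

# Ring topology: node i talks only to nodes i-1 and i+1.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):                         # purely local update, repeated
    x = W @ x

print(x)   # every node now holds (approximately) the same global estimate
```

The design appeal is that every update is local, so the scheme scales naturally as more sensors or segments are added.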

Continue reading “Wiggling toward bio-inspired machine intelligence” »

Sep 30, 2022

Posits, a New Kind of Number, Improves the Math of AI

Posted by in categories: mathematics, robotics/AI

Training the large neural networks behind many modern AI tools requires real computational might: for example, OpenAI's most advanced language model, GPT-3, required an astounding million billion billion operations (on the order of 10^24) to train and cost about US $5 million in compute time. Engineers think they have figured out a way to ease the burden by using a different way of representing numbers.

Back in 2017, John Gustafson, then jointly appointed at A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.

Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and shown that, bit for bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week's IEEE Symposium on Computer Arithmetic.
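For readers curious what a posit actually looks like at the bit level, below is a minimal Python sketch that decodes a 16-bit posit (with es = 2, as in the 2022 posit standard) into an ordinary float. It is purely illustrative and unrelated to the Madrid team's hardware; the function name and the choice of nbits/es here are assumptions for the example.

```python
def decode_posit(pattern, nbits=16, es=2):
    """Decode an nbits-wide posit bit pattern (an unsigned int) into a float.
    Illustrative sketch only; follows the sign/regime/exponent/fraction layout
    of the posit format with es exponent bits."""
    mask = (1 << nbits) - 1
    p = pattern & mask
    if p == 0:
        return 0.0
    if p == 1 << (nbits - 1):
        return float("nan")                  # NaR ("not a real")
    negative = bool(p >> (nbits - 1))
    if negative:
        p = (-p) & mask                      # two's complement of the whole word
    bits = [(p >> i) & 1 for i in range(nbits - 2, -1, -1)]  # bits after the sign bit
    # Regime: a run of identical bits; its length sets the coarse scale.
    run = 1
    while run < len(bits) and bits[run] == bits[0]:
        run += 1
    k = (run - 1) if bits[0] == 1 else -run
    idx = run + 1                            # skip the run and its terminating bit
    # Exponent: the next es bits (missing bits count as 0).
    e = 0
    for _ in range(es):
        e <<= 1
        if idx < len(bits):
            e |= bits[idx]
            idx += 1
    # Fraction: the remaining bits, with an implicit leading 1.
    frac = 1.0
    for i, b in enumerate(bits[idx:], start=1):
        frac += b * 2.0 ** (-i)
    value = frac * 2.0 ** (k * (1 << es) + e)
    return -value if negative else value


print(decode_posit(0x4000))   # 1.0
print(decode_posit(0x5000))   # 4.0
print(decode_posit(0x0001))   # minpos = 2**-56, far smaller than float16 can represent
```

The key design idea is the variable-length regime field, which trades fraction bits for range: posits concentrate precision around magnitudes near 1 while still covering a very wide dynamic range.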

Sep 30, 2022

A computational shortcut for neural networks

Posted by in categories: information science, mathematics, quantum physics, robotics/AI

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex forms, for example between different phases of magnetic materials, where they are often difficult to detect.
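The general principle behind such a shortcut can be sketched in a toy example: when the data distributions of the two phases are known, the best possible classifier output is simply the posterior probability, which can be written down and evaluated directly rather than learned by gradient descent. The snippet below is a hedged illustration of that idea, not the Basel group's actual derivation.

```python
import numpy as np

# Toy sketch: two "phases" produce data x with known distributions p1(x) and p2(x).
# A network trained to tell them apart can, at best, converge to the posterior
#     P(phase 1 | x) = p1(x) / (p1(x) + p2(x))      (assuming equal priors),
# which we can evaluate in closed form, with no training loop at all.

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-4.0, 4.0, 9)
p1 = gaussian(x, mu=-1.0, sigma=1.0)   # hypothetical data distribution in phase 1
p2 = gaussian(x, mu=+1.0, sigma=1.0)   # hypothetical data distribution in phase 2

optimal_output = p1 / (p1 + p2)        # what a perfectly trained classifier would output
print(np.round(optimal_output, 3))
```

A sharp change in such an analytically computed output as a control parameter is varied is the kind of signature that could flag a phase transition without ever training a network.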

Sep 25, 2022

Are We Living in a Simulation with David Chalmers [S3 Ep.12]

Posted by in categories: computing, cryptocurrencies, mathematics, neuroscience, virtual reality

Welcome to another episode of Conversations with Coleman.

My guest today is David Chalmers. David is a professor of philosophy and neuroscience at NYU and the co-director of the NYU Center for Mind, Brain and Consciousness.

Continue reading “Are We Living in a Simulation with David Chalmers [S3 Ep.12]” »

Sep 25, 2022

How infinity threatens cosmology

Posted by in categories: cosmology, mathematics, physics

Infinity is back. Or rather, it never (ever, ever…) went away. While mathematicians have a good sense of the infinite as a concept, cosmologists and physicists are finding it much more difficult to make sense of the infinite in nature, writes Peter Cameron.

Each of us has to face a moment, often fairly early in our life, when we realize that a loved one, formerly a fixture in our life, was not infinite, but has left us, and that someday we too will have to leave this place.

This experience, probably as much as the experience of looking at the stars and wondering how far they go on, shapes our views of infinity. And we urgently want answers to our questions. This has been so since the time, two and a half millennia ago, when Malunkyaputta put his doubts to the Buddha and demanded answers: among them he wanted to know if the world is finite or infinite, and if it is eternal or not.

Sep 24, 2022

MIT professor shares in $3 million Breakthrough Prize for quantum computing discoveries

Posted by in categories: computing, mathematics, quantum physics

An MIT professor who studies quantum computing is sharing a $3 million Breakthrough Prize.

MIT math professor Peter Shor shared in the Breakthrough Prize in Fundamental Physics with three other researchers, David Deutsch at the University of Oxford, Charles Bennett at IBM Research, and Gilles Brassard at the University of Montreal. All of them are “pioneers in the field of quantum information,” the prize foundation said in a statement.

Continue reading “MIT professor shares in $3 million Breakthrough Prize for quantum computing discoveries” »

Sep 24, 2022

Musing on Understanding & AI — Hugo de Garis, Adam Ford, Michel de Haan

Posted by in categories: education, existential risks, information science, mapping, mathematics, physics, robotics/AI

Started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Is generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive form of understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — which when strong AI comes about will dissolve into illusions, and then tell us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intel — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think of pulp rehashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy-like model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in super-symmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers Zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed to be respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!

Sep 22, 2022

China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #techno

Posted by in categories: government, mathematics, quantum physics, supercomputing

https://www.youtube.com/watch?v=slEceKBmqts

China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #technology.

“Techno Jungles”

Continue reading “China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #techno” »

Sep 22, 2022

Nine Inch Nails — Me I’m Not — Music Video

Posted by in categories: computing, mathematics, media & arts, military

Nine Inch Nails “Me I’m Not” remixed with US military, math, science, and computer footage from the Prelinger Archives.

Page 77 of 155