Physicists from Trinity have unlocked the secret that explains how large groups of individual “oscillators”—from flashing fireflies to cheering crowds, and from ticking clocks to clicking metronomes—tend to synchronize when in each other’s company.
Their work, just published in the journal Physical Review Research, provides a mathematical basis for a phenomenon that has perplexed millions—their newly developed equations help explain how individual randomness seen in the natural world and in electrical and computer systems can give rise to synchronization.
We have long known that when one clock runs slightly faster than another, physically connecting them can make them tick in time. But making a large assembly of clocks synchronize in this way was thought to be much more difficult—or even impossible, if there are too many of them.
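The textbook setting for this phenomenon is the Kuramoto model of coupled phase oscillators (a standard model, not necessarily the exact equations in the Trinity paper). A minimal simulation shows the transition the article describes: below a critical coupling strength a large assembly of "clocks" stays incoherent, while above it they lock together.

```python
import numpy as np

def kuramoto_order(K, N=200, dt=0.01, steps=5000, seed=0):
    """Simulate N coupled phase oscillators (Kuramoto model) and return
    the final order parameter r in [0, 1]: r near 0 means the oscillators
    drift independently, r near 1 means they tick in unison."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)        # each clock's natural frequency
    theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()      # mean field of the population
        r, psi = np.abs(z), np.angle(z)
        # every oscillator is nudged toward the population's mean phase
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

print(kuramoto_order(K=0.5))  # weak coupling: stays incoherent (small r)
print(kuramoto_order(K=4.0))  # strong coupling: synchronizes (r close to 1)
```

With Gaussian frequencies of unit width, the critical coupling is roughly K ≈ 1.6, so the two calls above land on opposite sides of the transition.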
Strategy accelerates the best algorithmic solvers for large sets of cities.
Waiting for a holiday package to be delivered? There’s a tricky math problem that needs to be solved before the delivery truck pulls up to your door, and MIT researchers have a strategy that could speed up the solution.
The approach applies to vehicle routing problems such as last-mile delivery, where the goal is to deliver goods from a central depot to multiple cities while keeping travel costs down. While there are algorithms designed to solve this problem for a few hundred cities, these solutions become too slow when applied to a larger set of cities.
The solver algorithms work by breaking up the problem of delivery into smaller subproblems to solve — say, 200 subproblems for routing vehicles between 2,000 cities. Wu and her colleagues augment this process with a new machine-learning algorithm that identifies the most useful subproblems to solve, instead of solving all the subproblems, to increase the quality of the solution while using orders of magnitude less compute.
Their approach, which they call “learning-to-delegate,” can be used across a variety of solvers and a variety of similar problems, including scheduling and pathfinding for warehouse robots, the researchers say.
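The delegation idea can be sketched in a few lines. The toy below is illustrative only, not the authors' implementation: it splits a tour into contiguous subproblems, ranks them with a stand-in scorer (current segment length as a proxy for the learned model's predicted improvement), and re-solves only the top-ranked subproblems with a cheap heuristic. All function names here are hypothetical.

```python
import numpy as np

def tour_length(pts, order):
    """Length of the closed tour visiting pts in the given order."""
    p = pts[np.asarray(order)]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum() + np.linalg.norm(p[-1] - p[0])

def path_length(pts, seq):
    """Length of an open path; used to score subproblems."""
    p = pts[np.asarray(seq)]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def nearest_neighbor(pts, idx):
    """Cheap stand-in re-solver for one subproblem."""
    idx = list(idx)
    route = [idx.pop(0)]
    while idx:
        last = pts[route[-1]]
        nxt = min(idx, key=lambda i: np.linalg.norm(pts[i] - last))
        idx.remove(nxt)
        route.append(nxt)
    return np.array(route)

def improve_by_delegation(pts, order, n_sub=10, top_k=3, score=None):
    """Split the tour into n_sub contiguous subproblems, rank them, and
    re-solve only the top_k. `score` stands in for the learned model that
    predicts which subproblems are worth delegating to the solver; the
    default proxy simply ranks longer (messier) segments first."""
    segments = np.array_split(np.asarray(order), n_sub)
    if score is None:
        score = lambda seg: path_length(pts, seg)
    ranked = sorted(range(n_sub), key=lambda i: -score(segments[i]))
    for i in ranked[:top_k]:
        segments[i] = nearest_neighbor(pts, segments[i])
    return np.concatenate(segments)

rng = np.random.default_rng(1)
pts = rng.random((200, 2))    # 200 "cities" in the unit square
order = rng.permutation(200)  # a poor initial tour
better = improve_by_delegation(pts, order)
print(tour_length(pts, order), "->", tour_length(pts, better))  # shorter after delegation
```

The point of the sketch is the control flow: only a few subproblems are ever handed to the solver, which is where the orders-of-magnitude compute savings come from; the real system replaces the length-based proxy with a trained network.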
More than a score of companies are pushing to be early winners in the race for self-driving taxis — robotaxis — with the potential that brings to capture the entire value chain of car transport from riders. They are all at different stages, and almost all of them want to convince the public and investors that they are far along.
To really know how far along a project is, you need the chance to look inside it: to see the data only insiders see on just how well the vehicle is performing, as well as what it can and can't do. Most teams want to keep those inside details secret, though in time they will need to reveal them to convince the public, and eventually regulators, that they are ready to deploy.
Because teams keep these details secret, those of us looking in from the outside can only scrape for clues. The biggest clues come when they reach certain milestones, and when they take risks that tell us their own internal math has said it's OK to take that risk. Most teams announce successes and release videos of drives, but these offer only limited information because they can be cherry-picked. The best indicators are what they do, not what they say.
Working with two teams of mathematicians, DeepMind engineered an algorithm that can look across different mathematical fields and spot connections that previously escaped the human mind. The AI doesn’t do all the work—when fed sufficient data, it finds patterns. These patterns are then passed on to human mathematicians to guide their intuition and creativity towards new laws of nature.
“I was not expecting to have some of my preconceptions turned on their head,” Dr. Marc Lackenby of the University of Oxford, one of the scientists collaborating with DeepMind, told Nature, where the study was published.
The AI comes just a few months after DeepMind’s previous triumph in solving a 50-year-old challenge in biology. This is different. For the first time, machine learning is aiming at the core of mathematics—a science of spotting patterns that eventually leads to formally proven ideas, or theorems, about how our world works. It also emphasizes collaboration between human and machine in bridging observations to working theorems.
Computer simulations and visualizations of knots and other objects have long helped mathematicians to look for patterns and develop their intuition, says Jeffrey Weeks, a mathematician based in Canton, New York, who has pioneered some of those techniques since the 1980s. But, he adds, “Getting the computer to seek out patterns takes the research process to a qualitatively different level.”
The authors say the approach, described in a paper in the 2 December issue of Nature, could benefit other areas of maths that involve large data sets.
We can add suggesting and proving mathematical theorems to the long list of what artificial intelligence is capable of: mathematicians and AI experts have teamed up to demonstrate how machine learning can open up new avenues to explore in the field.
While mathematicians have been using computers to discover patterns for decades, the increasing power of machine learning means that these networks can work through huge swathes of data and identify patterns that haven’t been spotted before.
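The workflow these articles describe (fit a model to predict one mathematical quantity from others, then use attribution to see which inputs drive the prediction, and hand that hint back to the mathematicians) can be sketched on synthetic data. Everything below is an illustrative stand-in: the data is fabricated, and least squares plus permutation importance replace the neural networks and saliency methods used in the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the setup: several "invariants" X of an object,
# and a target invariant y that secretly depends on only one of them.
n, d = 2000, 6
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 2] + 0.1 * rng.normal(size=n)   # hidden relationship

# Step 1: fit a model to predict y from X (least squares here).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def r2(Xm):
    """Fraction of the variance in y explained by the fitted model."""
    pred = Xm @ w
    return 1 - np.mean((y - pred) ** 2) / np.var(y)

# Step 2: attribution. Shuffle one feature at a time and measure how much
# predictive power is lost; large drops flag the invariants that matter.
base = r2(X)
drops = []
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - r2(Xp))

print("most salient invariant:", int(np.argmax(drops)))  # index 2
```

The model never proves anything; it only points at feature 2 as the one worth examining, which is the division of labor the researchers describe: the machine surfaces a candidate pattern, and humans turn it into a conjecture and a proof.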
Graphene consists of a planar structure, with carbon atoms connected in a hexagonal shape that resembles a beehive. When graphene is reduced to several nanometers (nm) in size, it becomes a graphene quantum dot that exhibits fluorescent and semiconductor properties. Graphene quantum dots can be used in various applications as a novel material, including display screens, solar cells, secondary batteries, bioimaging, lighting, photocatalysis, and sensors. Interest in graphene quantum dots is growing, because recent research has demonstrated that controlling the proportion of heteroatoms (such as nitrogen, sulfur, and phosphorus) within the carbon structures of certain materials enhances their optical, electrical, and catalytic properties.
For the first time, computer scientists and mathematicians have used artificial intelligence to help prove or suggest new mathematical theorems in the complex fields of knot theory and representation theory.
The astonishing results have been published today in the pre-eminent scientific journal, Nature.
Professor Geordie Williamson is Director of the University of Sydney Mathematical Research Institute and one of the world’s foremost mathematicians. As a co-author of the paper, he applied the power of DeepMind’s AI processes to explore conjectures in his field of speciality, representation theory.
What does it mean when someone calls you smart or intelligent? According to developmental psychologist Howard Gardner, it could mean one of eight things. In this video interview, Dr. Gardner addresses his eight classifications for intelligence: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, naturalistic, interpersonal, and intrapersonal.
HOWARD GARDNER: Howard Gardner is a developmental psychologist and the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education. He holds positions as Adjunct Professor of Psychology at Harvard University and Senior Director of Harvard Project Zero. Among numerous honors, Gardner received a MacArthur Prize Fellowship in 1981. In 1990, he was the first American to receive the University of Louisville’s Grawemeyer Award in Education, and in 2000 he received a Fellowship from the John Simon Guggenheim Memorial Foundation. In 2005 and again in 2008 he was selected by Foreign Policy and Prospect magazines as one of the 100 most influential public intellectuals in the world. He has received honorary degrees from twenty-two colleges and universities, including institutions in Ireland, Italy, Israel, and Chile. The author of over twenty books translated into twenty-seven languages, and several hundred articles, Gardner is best known in educational circles for his theory of multiple intelligences, a critique of the notion that there exists but a single human intelligence that can be assessed by standard psychometric instruments. During the past twenty-five years, he and colleagues at Project Zero have been working on the design of performance-based assessments, education for understanding, and the use of multiple intelligences to achieve more personalized curriculum, instruction, and assessment. In the mid-1990s, Gardner and his colleagues launched The GoodWork Project. “GoodWork” is work that is excellent in quality, personally engaging, and exhibits a sense of responsibility with respect to implications and applications. Researchers have examined how individuals who wish to carry out good work succeed in doing so during a time when conditions are changing very quickly, market forces are very powerful, and our sense of time and space is being radically altered by technologies such as the web.
Gardner and colleagues have also studied curricula. Among his books are The Disciplined Mind: Beyond Facts and Standardized Tests, The K-12 Education that Every Child Deserves (Penguin Putnam, 2000), Intelligence Reframed (Basic Books, 2000), Good Work: When Excellence and Ethics Meet (Basic Books, 2001), Changing Minds: The Art and Science of Changing Our Own and Other People’s Minds (Harvard Business School Press, 2004), and Making Good: How Young People Cope with Moral Dilemmas at Work (Harvard University Press, 2004; with Wendy Fischman, Becca Solomon, and Deborah Greenspan). These books are available through the Project Zero eBookstore. Currently Gardner continues to direct the GoodWork Project, which is concentrating on issues of ethics with secondary and college students. In addition, he co-directs the GoodPlay and Trust projects; a major current interest is the way in which ethics are being affected by the new digital media. In 2006 Gardner published Multiple Intelligences: New Horizons, The Development and Education of the Mind, and Howard Gardner Under Fire. In Howard Gardner Under Fire, Gardner’s work is examined critically; the book includes a lengthy autobiography and a complete bibliography. In the spring of 2007, Five Minds for the Future was published by Harvard Business School Press. Responsibility at Work, which Gardner edited, was published in the summer of 2007.
TRANSCRIPT: Howard Gardner: Currently I think there are eight intelligences that I’m very confident about and a few more that I’ve been thinking about. I’ll share that with our audience. The first two intelligences are the ones which IQ tests and other kinds of standardized tests valorize, and as long as we know there are only two out of eight, it’s perfectly fine to look at them. Linguistic intelligence is how well you’re able to use language. It’s a kind of skill that poets have, other kinds of writers; journalists tend to have linguistic intelligence, orators. The second intelligence is logical mathematical intelligence. As the name implies logicians, mathematicians…Read the full transcript at https://bigthink.com/videos/howard-gardner-on-the-eight-intelligences