Elon Musk explains why we could meet aliens soon, and he may be on to something. He disagrees with research arguing that there are no aliens, and lays out why the Drake equation is important and why the Fermi paradox is wrong.
The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing “thinking machines”: computer systems with human-like general intelligence. Think of humanoid robots that not only look human but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious original roots.
Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project that combines artificial intelligence and blockchain to democratize access to AI. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in mathematics from Temple University in 1990. His approach to AGI over many decades has been inspired by many disciplines, in particular human cognitive psychology and computer science. To date, Ben’s work has been mostly theoretically driven. He thinks that most of the deep learning approaches to AGI today try to model the brain: they may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented.
Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. In his view, biological systems tend to be messy, complex and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science onto a qualitatively different domain.
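As a purely illustrative caricature of that integrative view (the class, its memory sizes and its rules are all invented for this sketch, and are not taken from Goertzel’s actual work), one can picture a mind as distinct, interacting components rather than a single algorithm:

```python
from collections import deque

class ToyCognitiveAgent:
    """Illustrative skeleton only: separate interacting components
    (working memory, long-term memory, reactive and deliberative layers)
    rather than one monolithic learning algorithm."""

    def __init__(self):
        self.working_memory = deque(maxlen=7)    # small, transient store
        self.long_term_memory = {}               # persistent associations
        self.reflexes = {"obstacle": "stop"}     # reactive layer: fast lookup

    def perceive(self, stimulus):
        self.working_memory.append(stimulus)

    def act(self, stimulus):
        self.perceive(stimulus)
        # Reactive processing: immediate stimulus-response, no search.
        if stimulus in self.reflexes:
            return self.reflexes[stimulus]
        # Deliberative processing: consult long-term memory first.
        if stimulus in self.long_term_memory:
            return self.long_term_memory[stimulus]
        # Nothing known: explore, and consolidate the outcome for next time.
        action = "explore"
        self.long_term_memory[stimulus] = action
        return action
```

The point of the sketch is only that the components are explicit and interact; each would be far richer in a real architecture.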
Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar.
Training robots to complete tasks in the real world can be a very time-consuming process: it involves building a fast and efficient simulator, performing numerous trials in it, and then transferring the behaviors learned during those trials to the real world. In many cases, however, the performance achieved in simulation does not match that attained in the real world, due to unpredictable changes in the environment or task.
Researchers at the University of California, Berkeley (UC Berkeley) have recently developed DayDreamer, a tool that could be used to train robots to complete real-world tasks more effectively. Their approach, introduced in a paper pre-published on arXiv, is based on learning models of the world that allow robots to predict the outcomes of their movements and actions, reducing the need for extensive trial-and-error training in the real world.
“We wanted to build robots that continuously learn directly in the real world, without having to create a simulation environment,” Danijar Hafner, one of the researchers who carried out the study, told TechXplore. “We had only learned world models of video games before, so it was super exciting to see that the same algorithm allows robots to quickly learn in the real world, too!”
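The core idea of learning a world model and then planning inside it can be caricatured in a few lines. The sketch below is a toy illustration only (a hand-rolled 1-D “robot” with linear dynamics and a one-parameter model, not the actual DayDreamer/Dreamer algorithm): the agent first fits a model of its own dynamics from random interaction, then chooses actions by imagining outcomes in the learned model rather than by real-world trial and error.

```python
import random

random.seed(1)

# Toy 1-D "robot": true dynamics next_x = x + 0.5 * action (unknown to the agent).
def step(x, a):
    return x + 0.5 * a

# 1) Learn a world model next_x ~= x + w * a from random interactions.
w = 0.0
lr = 0.1
x = 0.0
for _ in range(500):
    a = random.uniform(-1, 1)
    nx = step(x, a)
    pred = x + w * a
    w += lr * (nx - pred) * a          # SGD on squared prediction error
    x = nx if abs(nx) < 5 else 0.0     # reset if the robot wanders too far

# 2) Plan inside the learned model ("dream"): pick each action by its
#    imagined outcome instead of trying it in the real world first.
goal = 2.0
x = 0.0
for _ in range(20):
    best_a = min((-1.0, 0.0, 1.0), key=lambda a: abs((x + w * a) - goal))
    x = step(x, best_a)
```

With the model learned (`w` close to the true 0.5), the planner walks the robot to the goal without any goal-directed real-world exploration.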
City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. They did this by developing a quantum algorithm that they ran on an IBM quantum computer. “To the best of our knowledge, such a quantum algorithm, which can simulate how interacting quantum particles evolve over time, has not been implemented before,” said Ghaemi, associate professor in CCNY’s Division of Science.
Entitled “Probing geometric excitations of fractional quantum Hall states on quantum computers,” the study appears in the journal Physical Review Letters.
“Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons,” said Ghaemi. “But unfortunately there is no easy way to use the equations of quantum mechanics when we want to study the properties of a large number of electrons that are also exerting force on each other due to their electric charge.”
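The basic computation at issue, evolving a quantum state under a Hamiltonian over time, can be sketched classically for a tiny system. The example below is purely illustrative (a hypothetical two-spin exchange Hamiltonian in units with ħ = 1, not Ghaemi’s algorithm, and it is exactly this classical brute force that becomes infeasible for many particles): one excitation shared between two coupled spins oscillates back and forth with probability sin²(Jt).

```python
import math

# Two spins exchanging one excitation: in the {|01>, |10>} basis the exchange
# Hamiltonian is H = J * [[0, 1], [1, 0]]  (toy units, hbar = 1).
J = 1.0
t = 0.7

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=40):
    # Taylor series exp(A) = sum_n A^n / n!, ample for these small matrices.
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, A)
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

H = [[0.0, J], [J, 0.0]]
U = mat_exp([[-1j * t * H[i][j] for j in range(2)] for i in range(2)])  # U = exp(-iHt)
psi0 = [1.0, 0.0]                       # start in |01>
psi_t = [U[0][0] * psi0[0] + U[0][1] * psi0[1],
         U[1][0] * psi0[0] + U[1][1] * psi0[1]]
p_swap = abs(psi_t[1]) ** 2             # probability the excitation hopped to |10>
```

The state vector here has only 2 relevant components; for N interacting particles it grows exponentially, which is precisely why quantum hardware is attractive for this task.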
What does the future of AI look like? Let’s try out some AI software that’s readily available for consumers and see how it holds up against the human brain.
Whether you welcome our new AI overlords with open arms, or you’re a little terrified about what an AI future may look like, many say it’s not really a question of ‘if,’ but more of a question of ‘when.’
Okay, you’ve got AI technologies on every scale, from Siri to self-driving cars, from text generators to humanoid robots. But what really is the real threat? As far back as 2013, Oxford University (ironically) used a machine-learning algorithm to determine whether 702 different jobs throughout America could become automated, and found that a whopping 47% could in fact be replaced by machines.
A huge concern that comes alongside this is whether the technology will be reliable enough. We’re already seeing AI technology in countless professions, most recently in the boom of AI-generated text used in over 300 different apps. It’s even used beyond this planet, out in space. If anything, this is a wake-up call about the future potential of AI technology outside the industrial market.
Staff Scientist Daniele Filippetto working on the High Repetition-Rate Electron Scattering Apparatus. (Credit: Thor Swift/Berkeley Lab)
– By Will Ferguson
Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.
Energy, mass, velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could have discovered relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double pendulum known to have exactly four “state variables”—the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI produced the answer: 4.7.
Using Newtonian physics, physicists have found an expression for the value of kinetic energy, specifically KE = ½ m v^2. Einstein came up with a very different expression, specifically KE = (gamma – 1) m c^2. In this video, Fermilab’s Dr. Don Lincoln shows how these two equations are the same at low energy and how you get from one to the other.
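The low-energy agreement is easy to check numerically. In the sketch below (mass and velocity chosen arbitrarily for illustration), Taylor-expanding gamma gives KE ≈ ½mv² + (3/8)mv⁴/c² + …, so at v = 1% of c the two formulas should differ by a fraction of about (3/4)(v/c)² ≈ 7.5 × 10⁻⁵:

```python
import math

m = 1.0            # mass in kg (arbitrary)
c = 299_792_458.0  # speed of light, m/s
v = 3.0e6          # about 1% of c: "low energy" for this comparison

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ke_newton = 0.5 * m * v ** 2              # KE = 1/2 m v^2
ke_einstein = (gamma - 1.0) * m * c ** 2  # KE = (gamma - 1) m c^2

# Fractional disagreement; leading correction term is (3/4)(v/c)^2.
rel_diff = (ke_einstein - ke_newton) / ke_newton
```

The relativistic value is always slightly larger, and the gap shrinks quadratically as v/c falls, which is why Newton’s formula worked for centuries.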
The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.
ABACUS-R is trained on the task of predicting the amino acid (AA) at a given residue, using information about that residue’s backbone structure and the backbones and AAs of neighboring residues in space. To do this, ABACUS-R uses the Transformer neural network architecture [6], which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to a previous network [2], ABACUS-R adds auxiliary training tasks, such as predicting secondary structures, solvent exposure and sidechain torsion angles. These outputs aren’t needed during design but help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative ‘denoising’ process (Fig.
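The iterative denoising loop can be caricatured in a few lines. The sketch below is purely illustrative (a seeded random compatibility table stands in for ABACUS-R’s trained Transformer, and `neighbors` is a toy along-the-chain neighborhood rather than a real 3D one): each sweep re-predicts every residue from the current identities of its neighbors, and the sweeps repeat until the sequence stops changing.

```python
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
random.seed(0)
# Hypothetical stand-in for a trained network's learned preferences:
# compat[a][b] = how well AA 'a' at a residue fits AA 'b' at a neighbor.
compat = {a: {b: random.random() for b in AAS} for a in AAS}

def neighbors(i, n, k=2):
    # Toy "spatial" neighborhood: the k residues on each side along the chain.
    return [j for j in range(max(0, i - k), min(n, i + k + 1)) if j != i]

def denoise(seq, sweeps=10):
    seq = list(seq)
    n = len(seq)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            nbrs = neighbors(i, n)
            # Re-predict residue i: the AA most compatible with its neighbors.
            best = max(AAS, key=lambda a: sum(compat[a][seq[j]] for j in nbrs))
            if best != seq[i]:
                seq[i], changed = best, True
        if not changed:
            break            # fixed point: no residue wants to change
    return "".join(seq)

start = "".join(random.choice(AAS) for _ in range(15))
designed = denoise(start)
```

The real system replaces the lookup table with Transformer predictions conditioned on backbone geometry, but the overall structure, which is repeated local re-prediction until self-consistency, is the same.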
Daniele Filippetto and colleagues at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) developed the setup to automatically compensate for real-time changes to accelerator beams and other components, such as magnets. Their machine-learning approach is also better than contemporary beam-control systems at understanding why things fail and then using physics to formulate a response. A paper describing the research was published late last year in Scientific Reports.
“We are trying to teach physics to a chip, while at the same time providing it with the wisdom and experience of a senior scientist operating the machine,” said Filippetto, a staff scientist in the Accelerator Technology & Applied Physics Division (ATAP) at Berkeley Lab and deputy director of the Berkeley Accelerator Controls and Instrumentation (BACI) program.
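The flavor of such model-based beam control can be illustrated with a toy feedback loop. The sketch below is not the Berkeley Lab system; the response coefficient, offset, noise level and learning rate are all invented. A small online model of how the beam position responds to a corrector-magnet setting is fitted by gradient descent after every shot, then inverted to choose the setting that should zero the beam:

```python
import random

random.seed(0)

# Toy accelerator: beam position y responds linearly to a corrector setting u,
# plus an unknown systematic offset the controller must discover and cancel.
A_TRUE = 2.0    # true (hidden) response of the beam to the corrector
OFFSET = 1.0    # hidden miscalibration offset

def beam_position(u):
    return A_TRUE * u + OFFSET + random.gauss(0.0, 0.01)   # shot noise

# Online linear model y ~= a_hat * u + b_hat, updated by SGD each shot,
# then inverted to pick the corrector setting for the next shot.
a_hat, b_hat, lr = 1.0, 0.0, 0.05
u = 0.0
errors = []
for _ in range(500):
    y = beam_position(u)
    errors.append(abs(y))
    pred = a_hat * u + b_hat
    a_hat += lr * (y - pred) * u   # gradient step on squared prediction error
    b_hat += lr * (y - pred)
    u = -b_hat / a_hat             # model-based correction for the next shot
```

Starting from a badly off-center beam, the controller drives the residual error down to the noise floor within a few hundred shots; the real systems add physics constraints and far richer models, but the learn-then-invert loop is the common skeleton.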