
Using astrocytes to change the behavior of robots controlled by neuromorphic chips

Neurons, specialized cells that transmit nerve impulses, have long been known to be a vital element for the functioning of the human brain. Over the past century, however, neuroscience research has given rise to the false belief that neurons are the only cells in the brain that can process information and learn. This misconception, or 'neurocomputing dogma', is far from true.

An astrocyte is a different type of brain cell that has recently been found to do far more than merely fill the spaces between neurons, as researchers believed for over a century. Studies are finding that these cells also play key roles in brain functions, including learning and central pattern generation (CPG), which is the basis for critical rhythmic behaviors such as breathing and walking.

Although astrocytes are now known to underlie numerous brain functions, most existing brain-inspired algorithms target only the structure and function of neurons. Aware of this gap in the literature, researchers at Rutgers University are developing brain-inspired algorithms that also account for and replicate the functions of astrocytes. In a paper pre-published on arXiv and set to be presented at the ICONS 2020 conference in July, they introduce a neuromorphic central pattern generator (CPG) modulated by artificial astrocytes that successfully entrained several rhythmic walking behaviors in their in-house robots.
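To make the idea concrete, here is a toy sketch of how an astrocyte-like signal can modulate a CPG. It is not the Rutgers model (their neuromorphic hardware, network, and parameters are not reproduced here); it simply couples a classic two-neuron half-center oscillator, in the style of a Matsuoka oscillator, to a slow variable that integrates neural activity and adjusts the tonic drive, shifting the rhythm over time.

```python
# Toy sketch (not the Rutgers model): a Matsuoka-style half-center oscillator
# as a CPG, with a slow "astrocyte" variable that modulates the tonic drive.
# All parameter names and values are illustrative assumptions.
import numpy as np

def simulate_cpg(T_total=10.0, dt=1e-3, tau=0.05, tau_adapt=0.25,
                 beta=2.5, w_inhib=2.0, drive=1.0,
                 tau_astro=2.0, k_astro=0.5):
    steps = int(T_total / dt)
    u = np.array([0.1, 0.0])   # membrane-like states of the two half-centers
    v = np.zeros(2)            # adaptation (fatigue) states
    a = 0.0                    # slow astrocyte-like modulation variable
    y_hist = np.zeros((steps, 2))
    for t in range(steps):
        y = np.maximum(u, 0.0)                 # rectified firing rates
        a += dt * (-a + y.sum()) / tau_astro   # astrocyte slowly integrates activity
        s_eff = drive * (1.0 + k_astro * a)    # modulated tonic drive
        # mutual inhibition between the two half-centers
        du = (-u - beta * v - w_inhib * y[::-1] + s_eff) / tau
        dv = (-v + y) / tau_adapt
        u += dt * du
        v += dt * dv
        y_hist[t] = y
    return y_hist

rhythm = simulate_cpg()
print("oscillation range per neuron:", rhythm.min(axis=0), rhythm.max(axis=0))
```

Varying the astrocyte gain (k_astro) or time constant (tau_astro) changes the amplitude and period of the rhythm, which is the sense in which the slow cell "entrains" different gaits in this simplified picture.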

Robot scientist discovers a new catalyst

The robot seen here can work almost 24–7, carrying out experiments by itself. The automated scientist – the first of its kind – can make its own decisions about which chemistry experiments to perform next, and has already discovered a new catalyst.

With humanoid dimensions, and working in a standard laboratory, it uses instruments much like a human does. Unlike a real person, however, this 400 kg robot has infinite patience, and works for 21.5 hours each day, pausing only to recharge its battery.

This new technology – reported in the journal Nature and featured on the front cover – is designed to tackle problems of a scale and complexity that are currently beyond our grasp. New drug formulations could be autonomously discovered, for example, by searching vast and unexplored chemical spaces.

6 Dimensionality Reduction Algorithms With Python

Dimensionality reduction is an unsupervised learning technique.

Nevertheless, it can be used as a data-transform pre-processing step for supervised learning algorithms on classification and regression predictive modeling datasets.

There are many dimensionality reduction algorithms to choose from and no single best algorithm for all cases. Instead, it is a good idea to explore a range of dimensionality reduction algorithms and different configurations for each algorithm.
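As a minimal sketch of that advice, each reduction technique can be wrapped in a scikit-learn pipeline in front of a classifier and compared by cross-validated accuracy. The particular methods, the 10-component setting, and the synthetic dataset below are illustrative choices, not prescriptions.

```python
# Compare several dimensionality reduction methods as a pre-processing step
# for a classifier, using cross-validated accuracy on a synthetic dataset.
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.manifold import Isomap

# synthetic classification dataset with 20 input features
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=10, random_state=7)

reducers = {
    "pca": PCA(n_components=10),
    "svd": TruncatedSVD(n_components=10),
    "isomap": Isomap(n_components=10),
}

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
for name, reducer in reducers.items():
    model = Pipeline(steps=[("reduce", reducer), ("clf", LogisticRegression())])
    scores = cross_val_score(model, X, y, scoring="accuracy", cv=cv, n_jobs=-1)
    print(f"{name}: {mean(scores):.3f} ({std(scores):.3f})")
```

Because no single method wins everywhere, the same loop can be extended with other reducers (LDA, LLE, and so on) and different component counts to find what works for a given dataset.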

President’s Council Targets AI, Quantum, STEM; Recommends Spending Growth

Last week the President's Council of Advisors on Science and Technology (PCAST) met (via webinar) to review policy recommendations around three sub-committee reports: 1) Industries of the Future (IotF), chaired by Dario Gil (director of research, IBM); 2) Meeting STEM Education and Workforce Needs, chaired by Catherine Bessant (CTO, Bank of America); and 3) New Models of Engagement for Federal/National Laboratories in the Multi-Sector R&D Enterprise, chaired by Dr. A.N. Sreeram (SVP, CTO, Dow Corp.).

Yesterday, the full report (Recommendations For Strengthening American Leadership In Industries Of The Future) was issued and it is fascinating and wide-ranging. To give you a sense of the scope, here are three highlights taken from the executive summary of the full report:

Quantum classifiers with tailored quantum kernel

Quantum information scientists have introduced a new method for machine learning classification in quantum computing. The non-linear quantum kernels in a quantum binary classifier provide new insights for improving the accuracy of quantum machine learning, which is considered capable of outperforming current AI technology.

The research team, led by Professor June-Koo Kevin Rhee from the School of Electrical Engineering, proposed a quantum classifier based on quantum state fidelity, using a different initial state and replacing the Hadamard classification with a swap test. Unlike the conventional approach, this method is expected to significantly enhance classification when the training dataset is small, by exploiting the quantum advantage in finding non-linear features in a large feature space.

Quantum machine learning holds promise as one of the imperative applications for quantum computing. In machine learning, one fundamental problem for a wide range of applications is classification, a task needed for recognizing patterns in labeled training data in order to assign a label to new, previously unseen data; and the kernel method has been an invaluable classification tool for identifying non-linear relationships in complex data.
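As a rough illustration of the fidelity-kernel idea, here is a classical simulation sketch, not the authors' circuit: data points are amplitude-encoded as normalized state vectors, and the overlap |⟨ψ|φ⟩|² — which a quantum device would estimate with a swap test — acts as the kernel in a fidelity-weighted vote over training labels. The dataset and the encoding are illustrative assumptions.

```python
# Classical sketch of a fidelity-based kernel classifier (not the authors' method).
import numpy as np

def amplitude_encode(x):
    """Map a real feature vector onto a normalized 'state vector'."""
    v = np.asarray(x, dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fidelity_kernel(x1, x2):
    """|<psi1|psi2>|^2 — the overlap a swap test estimates on hardware."""
    return np.abs(np.dot(amplitude_encode(x1), amplitude_encode(x2))) ** 2

def classify(x_test, X_train, y_train):
    """Sign of the fidelity-weighted sum of labels (+1 / -1)."""
    score = sum(y * fidelity_kernel(x_test, x) for x, y in zip(X_train, y_train))
    return 1 if score >= 0 else -1

# tiny illustrative training set with labels in {+1, -1}
X_train = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 1.0], [0.2, 0.8]])
y_train = np.array([1, 1, -1, -1])
print(classify([0.95, 0.15], X_train, y_train))  # expected: +1
print(classify([0.15, 0.90], X_train, y_train))  # expected: -1
```

The point of moving this onto quantum hardware is that the states can encode data in a feature space whose overlaps are hard to compute classically, which is where the hoped-for advantage on non-linear problems comes from.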

AI finds 250 foreign stars that migrated to our galaxy

Astrophysicists have used AI to discover 250 new stars in the Milky Way, which they believe were born outside the galaxy.

Caltech researcher Lina Necib named the collection Nyx, after the Greek goddess of the night. She suspects the stars are remnants of a dwarf galaxy that merged with the Milky Way many moons ago.

To develop the AI, Necib and her team first tracked stars across a simulated galaxy created by the Feedback in Realistic Environments (FIRE) project. They labeled the stars as either born in the host galaxy or formed through galaxy mergers. These labels were used to train a deep learning model to spot where a star was born.
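A hedged sketch of that training setup might look like the following; the kinematic features and the synthetic "simulation" data are placeholders, not the FIRE catalog or the Nyx pipeline.

```python
# Sketch of the setup described above (not the Nyx pipeline): stars carry
# kinematic features and a binary label (in-situ vs. accreted in a merger),
# and a neural network learns to predict that label. Data are synthetic
# stand-ins for a simulation catalog.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000

# placeholder kinematics: in-situ stars on roughly circular disk orbits,
# accreted stars on hotter, more radial orbits (illustrative assumption)
in_situ = np.column_stack([rng.normal(220, 20, n), rng.normal(0, 15, n), rng.normal(0, 10, n)])
accreted = np.column_stack([rng.normal(50, 80, n), rng.normal(0, 120, n), rng.normal(0, 90, n)])
X = np.vstack([in_situ, accreted])              # (v_phi, v_r, v_z) per star
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = in-situ, 1 = accreted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", clf.score(scaler.transform(X_test), y_test))
```

Once trained on labeled simulated stars, the same kind of model can be applied to real survey data to flag candidate accreted stars, which is how the Nyx collection was identified.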
