BLOG

Archive for the ‘information science’ category: Page 188

Mar 24, 2021

Crucial Milestone for Scalable Quantum Technology: 2D Array of Semiconductor Qubits That Functions as a Quantum Processor

Posted by in categories: computing, information science, quantum physics

The heart of any computer, its central processing unit, is built using semiconductor technology, which is capable of putting billions of transistors onto a single chip. Now, researchers from the group of Menno Veldhorst at QuTech, a collaboration between TU Delft and TNO, have shown that this technology can be used to build a two-dimensional array of qubits that functions as a quantum processor. Their work, a crucial milestone for scalable quantum technology, was published today (March 24, 2021) in Nature.

Quantum computers have the potential to solve problems that are impossible to address with classical computers. Whereas current quantum devices hold tens of qubits — the basic building block of quantum technology — a future universal quantum computer capable of running any quantum algorithm will likely consist of millions to billions of qubits. Quantum dot qubits hold the promise to be a scalable approach as they can be defined using standard semiconductor manufacturing techniques. Veldhorst: “By putting four such qubits in a two-by-two grid, demonstrating universal control over all qubits, and operating a quantum circuit that entangles all qubits, we have made an important step forward in realizing a scalable approach for quantum computation.”
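The paper's actual device and control sequence are not reproduced here, but what "a quantum circuit that entangles all qubits" means can be illustrated with a toy statevector simulation in plain Python: a Hadamard on one qubit followed by a chain of CNOTs drives a four-qubit register into a fully entangled GHZ state. This is a generic textbook circuit, not the authors' experiment.

```python
import math

def apply_h(state, q):
    # Hadamard gate on qubit q of a statevector (list of complex amplitudes)
    s = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:
            j = i | (1 << q)
            a, b = state[i], state[j]
            new[i] = s * (a + b)
            new[j] = s * (a - b)
    return new

def apply_cnot(state, ctrl, targ):
    # Flip the target bit wherever the control bit is set
    new = state[:]
    for i in range(len(state)):
        if (i >> ctrl) & 1 and not (i >> targ) & 1:
            j = i | (1 << targ)
            new[i], new[j] = state[j], state[i]
    return new

n = 4
state = [0.0] * (1 << n)
state[0] = 1.0                      # start in |0000>
state = apply_h(state, 0)
for q in range(n - 1):              # entangle all four qubits pairwise down the chain
    state = apply_cnot(state, q, q + 1)

# GHZ state: only |0000> and |1111> carry probability
probs = {format(i, "04b"): round(abs(a) ** 2, 3)
         for i, a in enumerate(state) if abs(a) > 1e-9}
print(probs)  # {'0000': 0.5, '1111': 0.5}
```

Measuring any one qubit of this state collapses all four, which is the "universal control plus entanglement" behavior the two-by-two grid demonstrates in hardware.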

Mar 24, 2021

Tiny swimming robots reach their target faster thanks to AI nudges

Posted by in categories: information science, particle physics, robotics/AI

Swimming robots the size of bacteria can be knocked off course by particles in the fluid they are moving through, but an AI algorithm learns from feedback to get them to their target quickly.
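The article does not spell out the training setup; purely as a loose illustration of the idea (a learner using reward feedback to reach a target despite random knocks), here is a minimal tabular Q-learning sketch in which random "kicks" stand in for particles deflecting the swimmer. All states, rewards, and parameters are invented for the example.

```python
import random

random.seed(0)
TARGET, N_STATES, ACTIONS = 9, 10, (-1, +1)
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-table: one row per position

def step(s, a):
    # Intended move plus a random "particle kick" that can knock the swimmer off course
    kick = random.choice((-1, 0, 0, 1))
    s2 = max(0, min(N_STATES - 1, s + a + kick))
    return s2, (10.0 if s2 == TARGET else -1.0)   # step penalty rewards arriving fast

for episode in range(500):
    s, eps = 0, 0.1
    for _ in range(50):
        # epsilon-greedy action selection
        ai = random.randrange(2) if random.random() < eps else q[s].index(max(q[s]))
        s2, r = step(s, ACTIONS[ai])
        # standard Q-learning update (learning rate 0.1, discount 0.9)
        q[s][ai] += 0.1 * (r + 0.9 * max(q[s2]) - q[s][ai])
        s = s2
        if s == TARGET:
            break

# After training, the greedy policy should push toward the target from interior states
policy = [ACTIONS[row.index(max(row))] for row in q]
print(policy)
```

The step penalty is what makes the learned policy "fast": dawdling costs reward, so the swimmer learns to fight through the kicks rather than wait them out.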

Mar 23, 2021

‘Doodles of light’ in real time mark leap for holograms at home

Posted by in categories: holograms, information science, supercomputing

Researchers from Tokyo Metropolitan University have devised and implemented a simplified algorithm for turning freely drawn lines into holograms on a standard desktop CPU. They dramatically cut down the computational cost and power consumption of algorithms that require dedicated hardware. It is fast enough to convert writing into lines in real time, and makes crisp, clear images that meet industry standards. Potential applications include hand-written remote instructions superimposed on landscapes and workbenches.

The potential applications of holography include important enhancements to vital, practical tasks, such as remote instructions for surgical procedures, electronic assembly on circuit boards, or directions projected on landscapes for navigation. Making holograms available in a wide range of settings is essential to bringing this technology out of the lab and into daily life.

One of the major drawbacks of this state-of-the-art technology is the computational load of generation. The kind of quality we’ve come to expect in our 2D displays is prohibitive in 3D, requiring supercomputing levels of number crunching to achieve. There is also the issue of power consumption. More widely available hardware like GPUs in gaming rigs might be able to overcome some of these issues with raw power, but the amount of electricity they use is a major impediment to mobile applications. Despite improvements to available hardware, the solution can’t be achieved by brute force.
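To see why hologram generation is so expensive, consider the textbook point-source method (not the Tokyo Metropolitan University team's simplified algorithm): every hologram pixel sums a spherical-wave contribution from every sampled point of the drawn stroke, so the cost grows as pixels × points. A tiny pure-Python sketch with hypothetical wavelength and geometry:

```python
import cmath
import math

WL = 0.5e-6      # wavelength in meters (hypothetical green-ish laser)
Z = 0.1          # distance from hologram plane to the drawn stroke, in meters
PITCH = 8e-6     # pixel pitch in meters
N = 32           # N x N hologram (tiny, for illustration; real SLMs are megapixels)

# Sample points along a freely drawn stroke (here: a short diagonal line)
points = [(k * PITCH * 4, k * PITCH * 4) for k in range(8)]

def hologram(points):
    # For each pixel, sum the spherical wave from every stroke point,
    # then keep only the phase (a kinoform-style phase hologram).
    k = 2 * math.pi / WL
    H = []
    for iy in range(N):
        row = []
        for ix in range(N):
            x, y = ix * PITCH, iy * PITCH
            field = 0j
            for (px, py) in points:
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + Z * Z)
                field += cmath.exp(1j * k * r) / r
            row.append(cmath.phase(field))
        H.append(row)
    return H

H = hologram(points)
print(len(H), len(H[0]))  # 32 32
```

Even this toy grid performs N² × points complex exponentials per frame; scaling it to a full-resolution display at video rate is what normally demands dedicated hardware, and reducing that inner loop is where the simplified algorithm earns its speedup.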

Mar 22, 2021

Researchers’ algorithm designs soft robots that sense

Posted by in categories: information science, robotics/AI

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”
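The MIT system is a deep-learning method; as a much simpler stand-in that still shows the combinatorial flavor of sensor placement, here is a greedy sketch that picks sensor sites whose synthetic "strain readings" best distinguish a set of deformation states. The data, threshold, and site counts are all invented for illustration.

```python
import itertools
import random

random.seed(1)
N_SITES, N_STATES, K = 12, 20, 3   # candidate sites, deformation states, sensor budget

# Synthetic readings[state][site], standing in for soft-body deformation data
readings = [[round(random.random(), 2) for _ in range(N_SITES)]
            for _ in range(N_STATES)]

def pairs_distinguished(sites):
    # Count state pairs whose readings differ at some chosen site by more than a threshold
    count = 0
    for a, b in itertools.combinations(range(N_STATES), 2):
        if any(abs(readings[a][s] - readings[b][s]) > 0.1 for s in sites):
            count += 1
    return count

# Greedy placement: repeatedly add the site that distinguishes the most new pairs
chosen = []
for _ in range(K):
    best = max((s for s in range(N_SITES) if s not in chosen),
               key=lambda s: pairs_distinguished(chosen + [s]))
    chosen.append(best)

print(chosen, pairs_distinguished(chosen))
```

The hard part the MIT algorithm tackles is that a deformable body has effectively infinite "states," so the scoring function cannot be enumerated like this; a learned model has to predict which placements will be informative.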

The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin, and professors Wojciech Matusik and Daniela Rus.

Mar 21, 2021

Deep learning model advances how robots can independently grasp objects

Posted by in categories: information science, robotics/AI

Robots still cannot perform everyday manipulation tasks, such as grasping or rearranging objects, with the same dexterity as humans. Brazilian scientists have now moved this research a step forward by developing a new system that uses deep learning algorithms to improve a robot’s ability to independently detect how to grasp an object, known as autonomous robotic grasp detection.

In a paper published Feb. 24 in Robotics and Autonomous Systems, a team of engineers from the University of São Paulo addressed existing problems with the visual perception phase that occurs when a robot grasps an object. They created a model using deep learning neural networks that decreased the time a robot needs to process visual data, perceive an object’s location and successfully grasp it.

Deep learning is a subset of machine learning in which computer algorithms learn from data and improve automatically through experience. Inspired by the structure and function of the human brain, deep learning uses a multilayered structure of algorithms called neural networks, operating much like the human brain in identifying patterns and classifying different types of information. Deep learning models for vision are often based on convolutional neural networks, which specialize in analyzing visual imagery.
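The convolution that gives these networks their name fits in a few lines. This toy example (pure Python, no deep learning library, and not the São Paulo team's network) applies a 1×2 vertical-edge kernel, the kind of low-level feature detector a grasp network's first layer typically learns:

```python
def conv2d(image, kernel):
    # Valid-mode 2-D convolution (really cross-correlation, as in most DL libraries)
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# An image whose right half is bright, and a kernel that responds to vertical edges
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]

print(conv2d(image, kernel))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The output lights up exactly where the brightness changes. A real grasp-detection CNN stacks many such learned kernels, so deeper layers respond to object edges, corners, and finally graspable regions.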

Mar 20, 2021

AI Meets Chipmaking: Applied Materials Incorporates AI In Wafer Inspection Process

Posted by in categories: information science, robotics/AI

Advanced system-on-chip designs are extremely complex in terms of transistor count and are hard to build using the latest fabrication processes. In a bid to make production of next-generation chips economically feasible, chip fabs need to ensure high yields early in their lifecycle by quickly finding and correcting defects.

But finding and fixing defects is not easy today, as traditional optical inspection tools don’t offer sufficiently detailed image resolution, while high-resolution e-beam and multibeam inspection tools are relatively slow. Looking to bridge the gap on inspection costs and time, Applied Materials has been developing a technology called ExtractAI, which uses a combination of the company’s latest Enlight optical inspection tool, SEMVision G7 e-beam review system, and deep learning (AI) to quickly find flaws. And surprisingly, this solution has been in use for about a year now.
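Applied Materials has not published ExtractAI's internals; purely as an illustration of the general pattern described above (a fast, noisy inspector generating candidates and a slow, accurate reviewer labeling a small sample to tune the classifier), here is a toy active-learning loop with entirely invented numbers:

```python
import random

random.seed(2)

def optical_scan(n=200):
    # Stand-in for the fast optical tool: each candidate gets a noisy "defect score";
    # true defects score higher on average than nuisance events.
    cands = []
    for _ in range(n):
        is_defect = random.random() < 0.3
        score = random.gauss(0.7 if is_defect else 0.4, 0.1)
        cands.append((score, is_defect))
    return cands

def ebeam_review(cand):
    # Stand-in for e-beam review: accurate ground truth, affordable only for a few samples
    return cand[1]

cands = optical_scan()

# Send only the most ambiguous candidates to review, then tune the decision
# threshold on those labeled examples
reviewed = sorted(cands, key=lambda c: abs(c[0] - 0.55))[:30]
thresholds = [i / 100 for i in range(30, 81)]
best = max(thresholds,
           key=lambda t: sum((c[0] > t) == ebeam_review(c) for c in reviewed))

flagged = [c for c in cands if c[0] > best]
recall = sum(c[1] for c in flagged) / sum(c[1] for c in cands)
print(round(best, 2), round(recall, 2))
```

Re-running this loop as the process drifts is the toy analogue of "learning and adapting to process changes in real time": the reviewer's labels keep the fast classifier calibrated without reviewing every candidate.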

“Applied’s new playbook for process control combines Big Data and AI to deliver an intelligent and adaptive solution that accelerates our customers’ time to maximum yield,” said Keith Wells, group vice president and general manager, Imaging and Process Control at Applied Materials. “By combining our best-in-class optical inspection and eBeam review technologies, we have created the industry’s only solution with the intelligence to not only detect and classify yield-critical defects but also learn and adapt to process changes in real-time. This unique capability enables chipmakers to ramp new process nodes faster and maintain high capture rates of yield-critical defects over the lifetime of the process.”

Mar 20, 2021

Deep science: AI is in the air, water, soil and steel

Posted by in categories: biotech/medical, government, information science, robotics/AI, science

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week brings a few unusual applications of or developments in machine learning, as well as a particularly unusual rejection of the method for pandemic-related analysis.


Mar 20, 2021

Efficacy of the radial pair potential approximation for molecular dynamics simulations of dense plasmas

Posted by in categories: computing, information science, nuclear energy, particle physics

In this work, we carry out KS-MD simulations for a range of elements, temperatures, and densities, allowing for a systematic comparison of three RPP models. While multiple RPP models can be selected [7–11], we choose to compare the widely used Yukawa potential, which accounts for screening by linearly perturbing around a uniform density in the long-wavelength (Thomas–Fermi) limit; a potential constructed from a neutral pseudo-atom (NPA) approach [12–15]; and the optimal force-matched RPP that is constructed directly from KS-MD simulation data.

References:

7. J. Vorberger and D. Gericke, “Effective ion–ion potentials in warm dense matter,” High Energy Density Phys. 9, 178 (2013). https://doi.org/10.1016/j.hedp.2012.12.009
8. Y. Hou, J. Dai, D. Kang, W. Ma, and J. Yuan, “Equations of state and transport properties of mixtures in the warm dense regime,” Phys. Plasmas 22, 022711 (2015). https://doi.org/10.1063/1.4913424
9. K. Wünsch, J. Vorberger, and D. Gericke, “Ion structure in warm dense matter: Benchmarking solutions of hypernetted-chain equations by first-principle simulations,” Phys. Rev. E 79, 010201 (2009). https://doi.org/10.1103/PhysRevE.79.010201
10. L. Stanton and M. Murillo, “Unified description of linear screening in dense plasmas,” Phys. Rev. E 91, 033104 (2015). https://doi.org/10.1103/PhysRevE.91.033104
11. W. Wilson, L. Haggmark, and J. Biersack, “Calculations of nuclear stopping, ranges, and straggling in the low-energy region,” Phys. Rev. B 15, 2458 (1977). https://doi.org/10.1103/PhysRevB.15.2458
12. L. Harbour, M. Dharma-wardana, D. D. Klug, and L. J. Lewis, “Pair potentials for warm dense matter and their application to x-ray Thomson scattering in aluminum and beryllium,” Phys. Rev. E 94, 053211 (2016). https://doi.org/10.1103/PhysRevE.94.053211
13. M. Dharma-wardana, “Electron-ion and ion-ion potentials for modeling warm dense matter: Applications to laser-heated or shock-compressed Al and Si,” Phys. Rev. E 86, 036407 (2012). https://doi.org/10.1103/PhysRevE.86.036407
14. F. Perrot and M. Dharma-Wardana, “Equation of state and transport properties of an interacting multispecies plasma: Application to a multiply ionized Al plasma,” Phys. Rev. E 52, 5352 (1995). https://doi.org/10.1103/PhysRevE.52.5352
15. L. Harbour, G. Förster, M. Dharma-wardana, and L. J. Lewis, “Ion-ion dynamic structure factor, acoustic modes, and equation of state of two-temperature warm dense aluminum,” Phys. Rev. E 97, 043210 (2018). https://doi.org/10.1103/PhysRevE.97.043210

Each of the models we chose impacts our physics understanding and has clear computational consequences. For example, success of the Yukawa model reveals the insensitivity to choices in the pseudopotential and screening function and allows for the largest-scale simulations. Large improvements are expected from the NPA model, which makes many fewer assumptions at a modest cost of pre-computing and tabulating forces. (See the Appendix for more details on the NPA model.) The force-matched RPP requires KS-MD data and is therefore the most expensive to produce, but it reveals the limitations of RPPs themselves since it is by definition the optimal RPP.

Using multiple metrics of comparison between RPP-MD and KS-MD, including the relative force error, ion–ion equilibrium radial distribution function g(r), Einstein frequency, power spectrum, and the self-diffusion transport coefficient, the accuracy of each RPP model is analyzed. By simulating disparate elements, namely, an alkali metal, multiple transition metals, a halogen, a nonmetal, and a noble gas, we see that force-matched RPPs are valid for simulating dense plasmas at temperatures from fractions of an eV upward. We find that for all cases except low-temperature carbon, force-matched RPPs reproduce the results obtained from KS-MD to within a few percent. By contrast, the Yukawa model systematically fails to describe the KS-MD results at low temperatures for the conditions studied here, validating the need for alternative models, such as the force-matching and NPA approaches, at these conditions.
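The force-matching idea can be sketched compactly: choose the RPP parameter that minimizes the squared force error against reference data. The snippet below uses a Yukawa form V(r) = Z² exp(−r/λ)/r (units suppressed) and synthetic "reference" forces in place of real KS-MD data, so the fit simply recovers the screening length used to generate them; the grid search, ranges, and parameters are illustrative only.

```python
import math

def yukawa_force(r, z=1.0, lam=0.5):
    # Radial force from V(r) = z^2 * exp(-r/lam) / r:
    # F(r) = -dV/dr = z^2 * exp(-r/lam) * (1/r^2 + 1/(lam*r))
    return z * z * math.exp(-r / lam) * (1.0 / r ** 2 + 1.0 / (lam * r))

# Synthetic "reference" forces standing in for KS-MD data, generated from a
# Yukawa potential with lam = 0.8 so the fit has a known answer to recover
rs = [0.5 + 0.1 * i for i in range(20)]
ref = [yukawa_force(r, lam=0.8) for r in rs]

# Force matching: scan the screening length and minimize the squared force error
def sq_error(lam):
    return sum((yukawa_force(r, lam=lam) - f) ** 2 for r, f in zip(rs, ref))

best = min((l / 100 for l in range(40, 121)), key=sq_error)
print(best)  # 0.8
```

With real KS-MD forces the residual error never reaches zero, and the size of that residual is exactly what exposes the limitations of the radial-pair-potential form itself.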

Mar 19, 2021

Solving ‘barren plateaus’ is the key to quantum machine learning

Posted by in categories: information science, mathematics, quantum physics, robotics/AI

Many machine learning algorithms on quantum computers suffer from the dreaded “barren plateau” of unsolvability, where they run into dead ends on optimization problems. This challenge had been relatively unstudied—until now. Rigorous theoretical work has established theorems that guarantee whether a given machine learning algorithm will work as it scales up on larger computers.

“The work solves a key problem of usability for quantum machine learning. We rigorously proved the conditions under which certain architectures of variational quantum algorithms will or will not have barren plateaus as they are scaled up,” said Marco Cerezo, lead author on the paper published in Nature Communications today by a Los Alamos National Laboratory team. Cerezo is a postdoctoral researcher at Los Alamos. “With our theorems, you can guarantee that the architecture will be scalable to quantum computers with a large number of qubits.”

“Usually the approach has been to run an optimization and see if it works, and that was leading to fatigue among researchers in the field,” said Patrick Coles, a coauthor of the study. Establishing mathematical theorems and deriving results from first principles takes the guesswork out of developing algorithms.
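One intuition behind barren plateaus is concentration of measure: for random states, a cost like C = |⟨0…0|ψ⟩|² clusters ever more tightly around its mean as qubits are added, leaving an optimizer almost no gradient signal. The toy numeric illustration below (not the paper's variational circuits, and with invented sampling parameters) shows the spread of this cost collapsing as the qubit count grows:

```python
import math
import random

random.seed(3)

def random_state(n):
    # Haar-like random n-qubit state via normalized complex Gaussian amplitudes
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2 ** n)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]

def cost_spread(n, samples=200):
    # Empirical variance of C = |<0...0|psi>|^2 over random states; it shrinks
    # exponentially with n, which is the flat "plateau" an optimizer sees
    vals = [abs(random_state(n)[0]) ** 2 for _ in range(samples)]
    mean = sum(vals) / samples
    return sum((v - mean) ** 2 for v in vals) / samples

for n in (2, 4, 6, 8):
    print(n, cost_spread(n))
```

The printed variances drop by orders of magnitude from n = 2 to n = 8. The Los Alamos theorems characterize which circuit architectures avoid this exponential flattening rather than merely observing it empirically.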

Mar 18, 2021

Is the Schrödinger Equation True?

Posted by in categories: information science, mathematics

Just because a mathematical formula works does not mean it reflects reality.
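For reference, the formula in question is the time-dependent Schrödinger equation, which evolves a quantum system's wave function in time:

```latex
i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
```

Its predictions are confirmed to extraordinary precision; the open question the essay raises is what, if anything, the wave function itself corresponds to in reality.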