BLOG

Archive for the ‘information science’ category: Page 213

Jun 7, 2020

Physicists create quantum-inspired optical sensor

Posted in categories: biological, information science, quantum physics, space

Researchers from the Moscow Institute of Physics and Technology, joined by a colleague from Argonne National Laboratory, U.S., have implemented an advanced quantum algorithm for measuring physical quantities using simple optical tools. Published in Scientific Reports, their study takes us a step closer to affordable linear optics-based sensors with high performance characteristics. Such tools are sought after in diverse research fields, from astronomy to biology.

Maximizing the sensitivity of measurement tools is crucial for any field of science and technology. Astronomers seek to detect remote cosmic phenomena, biologists need to discern exceedingly tiny organic structures, and engineers have to measure the positions and velocities of objects, to name a few examples.

Until recently, no measurement could ensure precision beyond the so-called shot noise limit, which stems from the statistical fluctuations inherent in classical observations. Quantum technology provides a way around this, boosting precision to the fundamental Heisenberg limit, which follows from the basic principles of quantum mechanics. The LIGO experiment, which announced the first detection of gravitational waves in 2016, shows it is possible to achieve Heisenberg-limited sensitivity by combining complex optical interference schemes with quantum techniques.
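
To make the difference concrete, here is a minimal numerical illustration (not from the paper) of how measurement uncertainty shrinks with the number of detected photons N under the two scalings: the shot-noise limit improves as 1/sqrt(N), while the Heisenberg limit improves as 1/N.

```python
import numpy as np

# Illustrative comparison of the two precision limits for phase estimation.
# Shot-noise (classical) limit: uncertainty ~ 1/sqrt(N).
# Heisenberg (quantum) limit:   uncertainty ~ 1/N.
for N in [10, 100, 1_000, 10_000]:
    shot_noise = 1 / np.sqrt(N)  # best achievable without quantum resources
    heisenberg = 1 / N           # fundamental bound from quantum mechanics
    print(f"N = {N:>6}: shot-noise ~ {shot_noise:.4f}, Heisenberg ~ {heisenberg:.5f}")
```

At N = 10,000 photons the Heisenberg limit is a hundred times tighter than the shot-noise limit, which is why quantum-enhanced schemes matter for faint-signal applications such as gravitational-wave detection.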

Jun 7, 2020

Quantum Dots Shift Sunlight’s Spectrum to Speed Plant Growth

Posted in categories: information science, quantum physics

In the consumer electronics industry, quantum dots are used to dramatically improve color reproduction in TV displays. That’s because LCD TV displays, the kind in most of our living rooms, require a backlight. This light is typically made up of white or white-ish LEDs. The LCD filters the white light into red, green, and blue pixels; their combinations create the colors that appear on the screen.

Before quantum dots, filtering meant that much of the light didn’t make it to the screen. Putting a layer of quantum dots between the LEDs and the LCD, however, changes that equation. QD TVs use blue LEDs as the light source, then take advantage of the quantum effect to shift some of that light to tightly constrained red and green wavelengths. Because only this purified light reaches the filters—instead of the full spectrum that makes up white light—far less is blocked and wasted.
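
The energy bookkeeping behind this trick is simple: a photon’s energy is E = hc/λ, so shifting blue light to longer wavelengths discards only a small energy difference as heat. A quick calculation with typical wavelengths (assumed values, not from the article):

```python
# Photon-energy bookkeeping for quantum-dot down-conversion.
# Wavelengths below are typical illustrative values, not from the article.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

blue = 450  # nm, the LED source
for name, lam in [("green", 530), ("red", 630)]:
    e_in, e_out = photon_energy_eV(blue), photon_energy_eV(lam)
    print(f"blue -> {name}: {e_in:.2f} eV -> {e_out:.2f} eV, "
          f"~{100 * e_out / e_in:.0f}% of the photon energy retained")
```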

It turns out that this same approach to making your TV picture better can make plants grow faster, because plants, like LCD filters, are tuned to certain colors of light.

Jun 6, 2020

Locus Robotics raises another $40M as retailers increasingly look to automate

Posted in categories: biotech/medical, business, information science, robotics/AI

The COVID-19 pandemic will have a profound impact on robotics, as more companies look to automation as a way forward. While wide-scale automation had long seemed like an inevitability, the pandemic is set to accelerate the push as corporations look for processes that remove the human element from the equation.

Of course, Locus Robotics hasn’t had too much of an issue raising money previously. The Massachusetts-based startup, which raised $26 million back in April of last year, is adding a $40 million Series D to its funds. That brings the full amount to north of $105 million. This latest round, led by Zebra Technologies, comes as the company looks to expand operations with the launch of a European HQ.

“The new funding allows Locus to accelerate expansion into global markets,” CEO Rick Faulk said in a release, “enabling us to strengthen our support of retail, industrial, healthcare, and 3PL businesses around the world as they navigate through the COVID-19 pandemic, ensuring that they come out stronger on the other side.”

Jun 2, 2020

Research finds some AI advances are over-hyped

Posted in categories: information science, robotics/AI

Is it possible some instances of artificial intelligence are not as intelligent as we thought?

Call it artificial artificial intelligence.

A team of computer science graduate students reports that, on closer examination, several dozen information retrieval algorithms hailed as milestones in artificial intelligence research were in fact nowhere near as revolutionary as claimed. The AI techniques used in those algorithms were often merely minor tweaks of previously established routines.
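
The underlying methodology check is simple to express: score the “novel” method and a well-tuned baseline on the same queries, then test whether the gap is statistically meaningful. A minimal sketch with made-up per-query scores (hypothetical data, not from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical per-query effectiveness scores (e.g., nDCG) for a tuned
# baseline and a "novel" variant that is really just a minor tweak.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.42, scale=0.08, size=50)
variant = baseline + rng.normal(loc=0.003, scale=0.02, size=50)

t, p = stats.ttest_rel(variant, baseline)  # paired t-test across queries
print(f"mean gain = {np.mean(variant - baseline):+.4f}, p = {p:.3f}")
# A tiny mean gain with a large p-value is the statistical signature of a
# minor tweak rather than a genuine advance.
```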

Jun 2, 2020

AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation

Posted in categories: finance, information science, robotics/AI

Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning — a discipline within artificial intelligence — to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published their findings in the IEEE/CAA Journal of Automatica Sinica.

The University of Cagliari-based team set out to create an AI-managed “buy and hold” (B&H) strategy, a system that chooses one of three possible actions each day: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (not investing in a stock that day). At the heart of their proposed system is an automated cycle of analyzing layered images generated from current and past market data. Older B&H systems based their decisions on machine learning, a discipline that leans heavily on predictions based on past performance.

By letting their proposed network analyze current data layered over past data, they are taking market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. Their proposed network can adjust its buy/sell thresholds based on what is happening both in the present moment and the past. Taking into account present-day factors increases the yield over both random guessing and trading algorithms not capable of real-time learning.
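
As a concrete picture of this kind of system, here is a minimal convolutional classifier (a generic sketch, not the authors’ published architecture) that maps a multi-channel “image” of layered market data to the three possible actions:

```python
import torch
import torch.nn as nn

class MarketCNN(nn.Module):
    """Toy CNN mapping layered market-data 'images' to long/short/hold."""

    def __init__(self, in_channels=4, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_actions)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: a batch of 8 four-channel 32x32 "market images", where the
# channels might hold current and past price series rendered as 2D arrays.
model = MarketCNN()
logits = model(torch.randn(8, 4, 32, 32))
actions = logits.argmax(dim=1)  # 0 = long, 1 = short, 2 = hold
print(actions)
```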

Jun 2, 2020

Carnegie Mellon tool automatically turns math into pictures

Posted in categories: information science, mathematics

Some people look at an equation and see a bunch of numbers and symbols; others see beauty. Thanks to a new tool created at Carnegie Mellon University, anyone can now translate the abstractions of mathematics into beautiful and instructive illustrations.

Jun 1, 2020

100-Year-Old Physics Problem Finally Solved – Accurately Predicts Transmission of Infectious Diseases

Posted in categories: biotech/medical, computing, information science, mathematics

A Bristol academic has achieved a milestone in statistical/mathematical physics by solving a 100-year-old physics problem – the discrete diffusion equation in finite space.

The long-sought-after solution could be used to accurately predict encounter and transmission probability between individuals in a closed environment, without the need for time-consuming computer simulations.

In his paper, published in Physical Review X, Dr. Luca Giuggioli from the Department of Engineering Mathematics at the University of Bristol describes how to analytically calculate the probability of occupation (in discrete time and discrete space) of a diffusing particle or entity in a confined space – something that until now was only possible computationally.
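
To see what the analytic result replaces, the sketch below brute-forces the discrete diffusion (master) equation for a symmetric random walker on a finite 1D lattice with reflecting walls. The lattice size, step count, and hop probability q are arbitrary illustrative choices:

```python
import numpy as np

L, steps, q = 20, 100, 0.5  # lattice sites, time steps, hop probability
P = np.zeros(L)
P[L // 2] = 1.0  # the walker starts in the middle of the box

for _ in range(steps):
    new = (1 - q) * P            # probability of staying put
    new[1:] += (q / 2) * P[:-1]  # hops to the right
    new[:-1] += (q / 2) * P[1:]  # hops to the left
    new[0] += (q / 2) * P[0]     # reflecting wall on the left
    new[-1] += (q / 2) * P[-1]   # reflecting wall on the right
    P = new

print(P.sum())  # stays 1.0: probability is conserved in the closed box
print(P)        # occupation probability of each site after `steps` steps
```

Giuggioli’s closed-form solution yields these occupation probabilities directly, without stepping through the dynamics.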

May 31, 2020

Black Holes Help Prove That a Special Kind of Space-Time Is Unstable

Posted in categories: cosmology, information science, quantum physics

Einstein’s equations describe three canonical configurations of space-time. Now one of these three — important in the study of quantum gravity — has been shown to be inherently unstable.

May 31, 2020

Self-driving laboratory for accelerated discovery of thin-film materials

Posted in categories: information science, robotics/AI, solar power, sustainability

Discovering and optimizing commercially viable materials for clean energy applications typically takes more than a decade. Self-driving laboratories that iteratively design, execute, and learn from materials science experiments in a fully autonomous loop present an opportunity to accelerate this research process. We report here a modular robotic platform driven by a model-based optimization algorithm capable of autonomously optimizing the optical and electronic properties of thin-film materials by modifying the film composition and processing conditions. We demonstrate the power of this platform by using it to maximize the hole mobility of organic hole transport materials commonly used in perovskite solar cells and consumer electronics. This demonstration highlights the possibilities of using autonomous laboratories to discover organic and inorganic materials relevant to materials sciences and clean energy technologies.

Optimizing the properties of thin films is time intensive because of the large number of compositional, deposition, and processing parameters available (1, 2). These parameters are often correlated and can have a profound effect on the structure and physical properties of the film and any adjacent layers present in a device. There exist few computational tools for predicting the properties of materials with compositional and structural disorder, and thus, the materials discovery process still relies heavily on empirical data. High-throughput experimentation (HTE) is an established method for sampling a large parameter space (4, 5), but it is still nearly impossible to sample the full set of combinatorial parameters available for thin films. Parallelized methodologies are also constrained by the experimental techniques that can be used effectively in practice.
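
The closed loop such platforms run is a standard model-based optimization cycle: fit a surrogate model to the experiments performed so far, use an acquisition function to pick the most promising next conditions, run them, and repeat. Below is a generic sketch of that loop using Gaussian-process regression with expected improvement; run_experiment is a made-up stand-in for the robotic synthesis-and-measurement step, not the platform’s actual algorithm:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x):
    # Stand-in for robotic film deposition + mobility measurement;
    # a made-up function of two normalized parameters (e.g., dopant
    # fraction and annealing temperature).
    return -(x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(size=(3, 2))  # a few random initial conditions
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    cand = rng.uniform(size=(500, 2))  # candidate process conditions
    mu, sigma = gp.predict(cand, return_std=True)
    z = (mu - y.max()) / (sigma + 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]  # most promising next experiment
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best conditions found:", X[np.argmax(y)], "value:", y.max())
```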

May 30, 2020

Scientists Use Artificial Intelligence and Computer Vision to Study Lithium-Ion Batteries

Posted in categories: information science, particle physics, robotics/AI

New machine learning methods bring insights into how lithium-ion batteries degrade, and show it’s more complicated than many thought.

Lithium-ion batteries lose their juice over time, and scientists and engineers are working hard to understand that process in detail. Now, scientists at the Department of Energy’s SLAC National Accelerator Laboratory have combined sophisticated machine learning algorithms with X-ray tomography data to produce a detailed picture of how one battery component, the cathode, degrades with use.

The new study, published this month in Nature Communications, focused on how to better visualize what’s going on in cathodes made of nickel-manganese-cobalt, or NMC. In these cathodes, NMC particles are held together by a conductive carbon matrix, and researchers have speculated that one cause of performance decline could be particles breaking away from that matrix. The team’s goal was to combine cutting-edge capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and the European Synchrotron Radiation Facility (ESRF) to develop a comprehensive picture of how NMC particles break apart and break away from the matrix and how that might contribute to performance losses.
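
Downstream of the learned models, the basic analysis step is to segment particle-like regions in each tomography slice and quantify them. The toy example below uses synthetic data and classical thresholding as a stand-in for the study’s machine-learning segmentation, purely to illustrate the label-and-measure workflow:

```python
import numpy as np
from scipy import ndimage
from skimage import filters, measure

# Synthetic 2D "tomography slice": smoothed random noise with bright blobs.
rng = np.random.default_rng(42)
slice_2d = ndimage.gaussian_filter(rng.random((256, 256)), sigma=6)

# Threshold, label connected regions, and measure each "particle".
mask = slice_2d > filters.threshold_otsu(slice_2d)
labels = measure.label(mask)
for region in measure.regionprops(labels)[:5]:
    print(f"region {region.label}: area = {region.area} px, "
          f"centroid = {region.centroid}")
print(f"{labels.max()} particle-like regions in total")
```

In the real study, per-particle measurements of this kind, tracked as the battery cycles, are what build up the picture of particles cracking and breaking away from the carbon matrix.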