BLOG

Archive for the ‘information science’ category: Page 178

Mar 3, 2020

Honeywell says it will soon launch the world’s most powerful quantum computer

Posted by in categories: computing, information science, quantum physics

“The best-kept secret in quantum computing.” That’s what Cambridge Quantum Computing (CQC) CEO Ilyas Khan called Honeywell’s efforts to build the world’s most powerful quantum computer. In a race where most of the major players are vying for attention, Honeywell has quietly pursued its work for the last few years (under strict NDAs, it seems). But today the company announced a major breakthrough that it claims will allow it to launch the world’s most powerful quantum computer within the next three months.

Honeywell also announced today that it has made strategic investments in CQC and Zapata Computing, both of which focus on the software side of quantum computing. In addition, the company has partnered with JPMorgan Chase to develop quantum algorithms using Honeywell’s quantum computer, and it recently announced a partnership with Microsoft.

Mar 3, 2020

SLIDE algorithm for training deep neural nets faster on CPUs than GPUs

Posted by in categories: information science, robotics/AI

Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as a part of the machine learning systems conference MLSys.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs: an algorithm called the “sub-linear deep learning engine” (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.
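The key idea behind SLIDE, as described in the accompanying MLSys paper, is to use locality-sensitive hashing (LSH) so that for each input only the handful of neurons likely to produce large activations are evaluated, rather than the whole layer. A toy sketch of that selection step using SimHash (random-hyperplane) buckets — all names, sizes, and the single-table setup here are illustrative, not taken from the SLIDE codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(vectors, planes):
    """Bucket each vector by the sign pattern of random-hyperplane projections."""
    bits = (vectors @ planes.T) > 0                   # (n, k) sign pattern
    return bits @ (1 << np.arange(planes.shape[0]))   # pack bits into bucket ids

# A toy layer: 1024 neurons with 64-dimensional weight vectors.
n_neurons, dim, n_bits = 1024, 64, 8
weights = rng.standard_normal((n_neurons, dim))
planes = rng.standard_normal((n_bits, dim))

# Hash all neurons into buckets once (re-hashed periodically as weights change).
buckets = simhash(weights, planes)

def active_neurons(x):
    """Indices of neurons sharing the input's bucket -- only these are evaluated."""
    return np.flatnonzero(buckets == simhash(x[None, :], planes)[0])

x = rng.standard_normal(dim)
idx = active_neurons(x)
# Sub-linear work: evaluate only the selected neurons instead of all 1024.
sparse_out = weights[idx] @ x
```

Because similar vectors tend to land in the same bucket, the neurons retrieved are biased toward those with a large inner product with the input, which is what lets plain CPUs compete with dense GPU computation.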

Mar 2, 2020

Novel camera calibration algorithm aims at making autonomous vehicles safer

Posted by in categories: information science, robotics/AI, transportation

Some autonomous vehicles watch the road ahead using built-in cameras. Maintaining accurate camera orientation during driving is therefore, in some systems, key to letting these vehicles out on roads. Now, scientists from Korea have developed what they say is an accurate and efficient camera-orientation estimation method to enable such vehicles to navigate safely across distances.


A fast camera-orientation estimation algorithm that pinpoints vanishing points could make self-driving cars safer.

John Wallace

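The Korean team’s exact algorithm isn’t detailed here, but vanishing-point estimation itself is standard projective geometry: in homogeneous coordinates the line through two image points is their cross product, and the intersection of two such lines (say, the projections of parallel lane markings) is again a cross product. A minimal sketch with made-up pixel coordinates:

```python
import numpy as np

def to_h(p):
    """Lift a 2-D pixel (x, y) to homogeneous coordinates (x, y, 1)."""
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross(to_h(p), to_h(q))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, normalized to pixel coordinates."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Two lane markings, parallel on the road, converging in the image
# (coordinates are illustrative):
left_lane = line_through((100, 480), (300, 240))
right_lane = line_through((540, 480), (340, 240))

vp = intersection(left_lane, right_lane)  # vanishing point in pixels
```

Given the focal length, the camera’s pitch and yaw then follow from the vanishing point’s offset from the principal point, which is why tracking it can monitor camera orientation while driving.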

Mar 1, 2020

How China is using AI and big data to combat coronavirus outbreak

Posted by in categories: biotech/medical, information science, robotics/AI, surveillance

Authorities in China step up surveillance and roll out new artificial intelligence tools to fight deadly epidemic.

Mar 1, 2020

Meet Xenobot, an Eerie New Kind of Programmable Organism

Posted by in categories: bioengineering, information science

Under the watchful eye of a microscope, busy little blobs scoot around in a field of liquid—moving forward, turning around, sometimes spinning in circles. Drop cellular debris onto the plain and the blobs will herd it into piles. Flick any blob onto its back and it’ll lie there like a flipped-over turtle.

Their behavior is reminiscent of a microscopic flatworm in pursuit of its prey, or even a tiny animal called a water bear—a creature complex enough in its bodily makeup to manage sophisticated behaviors. The resemblance is an illusion: These blobs consist of only two things, skin cells and heart cells from frogs.

Writing today in the Proceedings of the National Academy of Sciences, researchers describe how they’ve engineered so-called xenobots (named for the species of frog, Xenopus laevis, whence their cells came) with the help of evolutionary algorithms. They hope that this new kind of organism—contracting cells and passive cells stuck together—and its eerily advanced behavior can help scientists unlock the mysteries of cellular communication.
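The paper’s actual pipeline evolves simulated cell layouts inside a physics engine and scores them on locomotion; as a toy illustration of the evolutionary loop itself, here is a minimal mutate-select cycle over a body plan of contracting and passive cells (the 4×4 grid and the stand-in fitness function are invented for this sketch):

```python
import random

random.seed(0)

GRID = 16  # a 4x4 body plan: 1 = contracting (heart) cell, 0 = passive (skin) cell

def fitness(body):
    """Toy stand-in for a physics simulation: reward a roughly even mix of
    cell types (the real work scores simulated locomotion instead)."""
    return -abs(sum(body) - GRID // 2)

def mutate(body, rate=0.1):
    """Flip each cell type with a small probability."""
    return [1 - c if random.random() < rate else c for c in body]

def evolve(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        pop = survivors + [mutate(b) for b in survivors]
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the loop knows *how* to build a good body; designs emerge purely from variation plus selection, which is what makes the approach suited to organisms whose cell-to-cell dynamics are too complex to design by hand.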

Feb 28, 2020

AI Is an Energy-Guzzler. We Need to Re-Think Its Design, and Soon

Posted by in categories: information science, robotics/AI

Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.

The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.

For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.

Feb 28, 2020

Witnessing the birth of baby universes 46 times: The link between gravity and soliton

Posted by in categories: information science, quantum physics

Scientists have long been attempting to come up with an equation to unify the micro and macro laws of the Universe: quantum mechanics and gravity. A new paper brings us one step closer by demonstrating that this unification is successfully realized in JT gravity. In this simplified toy model with a one-dimensional domain, the holographic principle (how information stored on a boundary manifests in another dimension) is revealed.

How did the universe begin? How does quantum mechanics, the study of the smallest things, relate to gravity and the study of big things? These are some of the questions physicists have been working to solve ever since Einstein released his theory of relativity.

Formulas show that baby universes pop in and out of the main Universe, though we neither notice nor experience this as humans. To calculate how this scales, physicists devised so-called JT gravity, which turns the Universe into a toy-like model with only one dimension of time or space. These restricted parameters allow a model in which scientists can test their theories.
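For reference, JT (Jackiw–Teitelboim) gravity is a two-dimensional dilaton-gravity model whose dynamical boundary is one-dimensional. In one common convention (this standard form is background material, not taken from the paper summarized above, and omits the purely topological term and overall factors of the gravitational coupling), its action reads:

```latex
I_{\mathrm{JT}}
  = -\frac{1}{2}\int_{\mathcal{M}} d^2x\,\sqrt{g}\,\phi\,(R+2)
    \;-\; \int_{\partial\mathcal{M}} du\,\sqrt{h}\,\phi_b\,(K-1),
```

where \(\phi\) is the dilaton field, \(R\) the bulk scalar curvature, \(K\) the extrinsic curvature of the boundary, and \(\phi_b\) the boundary value of the dilaton; the bulk term fixes the geometry, so all the dynamics live on the one-dimensional boundary.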

Feb 26, 2020

Scientists propose new regulatory framework to make AI safer

Posted by in categories: information science, robotics/AI

Scientists from Imperial College London have proposed a new regulatory framework for assessing the impact of AI, called the Human Impact Assessment for Technology (HIAT).

The researchers believe the HIAT could identify the ethical, psychological and social risks of technological progress, which are already being exposed in a growing range of applications, from voter manipulation to algorithmic sentencing.

Feb 26, 2020

We’re Making Progress in Explainable AI, but Major Pitfalls Remain

Posted by in categories: information science, robotics/AI

Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.
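Conclusions like the jersey finding typically come from saliency analysis. One simple variant is occlusion-based saliency: mask each region of the input in turn and measure how much the model’s score drops. A minimal sketch against a stand-in scoring function (the “model” here is invented for illustration, not the basketball classifier discussed above):

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Score drop when each patch is zeroed: a bigger drop means the
    region mattered more to the decision."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model" that, like the jersey example, keys on a single region:
def toy_score(img):
    return img[:4, :4].sum()  # only the top-left patch matters

img = np.ones((8, 8))
heat = occlusion_saliency(img, toy_score)
# The heat map exposes that only the top-left patch drives the decision.
```

The same probe applied to a real classifier is what reveals the counter-intuitive “psychology” above: the heat map lights up on the jerseys, not the ball.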

Can You Explain What You Don’t Understand?

While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?

Feb 25, 2020

Progressing Towards Assuredly Safer Autonomous Systems

Posted by in categories: information science, mathematics, robotics/AI, transportation

The sophistication of autonomous systems currently being developed across various domains and industries has markedly increased in recent years, due in large part to advances in computing, modeling, sensing, and other technologies. While much of the technology that has enabled this technical revolution has moved forward expeditiously, formal safety assurances for these systems still lag behind. This is largely due to their reliance on data-driven machine learning (ML) technologies, which are inherently unpredictable and lack the necessary mathematical framework to provide guarantees on correctness. Without assurances, trust in any learning-enabled cyber-physical system’s (LE-CPS’s) safety and correct operation is limited, impeding their broad deployment and adoption for critical defense situations or capabilities.

To address this challenge, DARPA’s Assured Autonomy program is working to provide continual assurance of an LE-CPS’s safety and functional correctness, both at the time of its design and while operational. The program is developing mathematically verifiable approaches and tools that can be applied to different types and applications of data-driven ML algorithms in these systems to enhance their autonomy and assure they are achieving an acceptable level of safety. To help ground the research objectives, the program is prioritizing challenge problems in the defense-relevant autonomous vehicle space, specifically related to air, land, and underwater platforms.
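One family of mathematically verifiable techniques aimed at exactly this gap (mentioned here as general background, not as the Assured Autonomy program’s specific toolchain) is interval bound propagation: push an interval of possible inputs through a network’s layers to obtain guaranteed bounds on every reachable output. A toy single-layer sketch:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound output bounds for W @ x + b when x lies anywhere in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy layer and a box of possible inputs (each coordinate in [-0.1, 0.1]):
W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.0, 1.0])
lo, hi = interval_affine(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), W, b)
lo, hi = interval_relu(lo, hi)
# Every reachable output is provably inside [lo, hi] -- a hard guarantee
# of the kind purely data-driven components otherwise lack.
```

Chaining such steps through a whole network yields certified envelopes on behavior, which is the flavor of assurance (bounds that hold for all inputs in a set, not just tested ones) that safety cases for LE-CPSs need.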

The first phase of the Assured Autonomy program recently concluded. To assess the technologies in development, research teams integrated them into a small number of autonomous demonstration systems and evaluated each against various defense-relevant challenges. After 18 months of research and development on the assurance methods, tools, and learning-enabled capabilities (LECs), the program is exhibiting early signs of progress.