
Using an algorithm to reduce energy bills—rain or shine

Researchers proposed implementing the residential energy scheduling algorithm by training three action-dependent heuristic dynamic programming (ADHDP) networks, one for each of three weather types: sunny, partly cloudy, and cloudy. ADHDP networks are considered ‘smart,’ as their responses adapt to changing conditions.
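As a rough illustration of the idea (not the paper’s actual method), one can imagine a dispatcher that routes each day’s scheduling decision to a policy trained for that weather type. Every name and number below is hypothetical:

```python
# Hypothetical sketch: dispatch to one of three pre-trained scheduling
# policies based on the day's weather forecast. All names and numbers
# here are illustrative, not taken from the paper.

def make_policy(price_weight, battery_weight):
    """Toy stand-in for a trained ADHDP action network."""
    def policy(solar_forecast_kwh, demand_kwh):
        # Charge the battery when expected solar exceeds demand,
        # otherwise draw the shortfall from the grid.
        surplus = solar_forecast_kwh - demand_kwh
        return {"battery_charge_kwh": max(surplus, 0.0) * battery_weight,
                "grid_draw_kwh": max(-surplus, 0.0) * price_weight}
    return policy

POLICIES = {
    "sunny": make_policy(price_weight=1.0, battery_weight=0.9),
    "partly_cloudy": make_policy(price_weight=1.0, battery_weight=0.6),
    "cloudy": make_policy(price_weight=1.0, battery_weight=0.3),
}

def schedule(weather, solar_forecast_kwh, demand_kwh):
    return POLICIES[weather](solar_forecast_kwh, demand_kwh)
```

The point of the per-weather split is that a sunny-day policy can afford to lean on the battery, while a cloudy-day policy must rely more on the grid.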

“In the future, we expect to have various types of supplies to every household including the grid, windmills, and biogenerators. The issues here are the varying nature of these power sources, which do not generate electricity at a stable rate,” said Derong Liu, a professor with the School of Automation at the Guangdong University of Technology in China and an author on the paper. “For example, power generated from windmills and solar panels depends on the weather, and they vary a lot compared to the more stable power supplied by the grid. In order to improve these power sources, we need much smarter algorithms in managing/scheduling them.”

The details were published in the January 10th issue of IEEE/CAA Journal of Automatica Sinica, a joint bimonthly publication of the IEEE and the Chinese Association of Automation.

Looking to Listen: Audio-Visual Speech Separation

People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers.

In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.
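The model’s output can be thought of as a time-frequency mask for the selected speaker. Below is a minimal sketch of how such a mask would be applied, with random arrays standing in for the real model and audio:

```python
import numpy as np

# Illustrative sketch only: an audio-visual model of this kind predicts a
# time-frequency mask for the selected speaker; here a random array fakes
# that mask to show how it would be applied to a mixture spectrogram.

rng = np.random.default_rng(0)
mixture_spec = rng.random((257, 100))    # |STFT| of the mixed audio (freq x time)
predicted_mask = rng.random((257, 100))  # model output in [0, 1] for the chosen face

isolated_spec = predicted_mask * mixture_spec  # element-wise soft masking
```

Inverting `isolated_spec` back to a waveform (e.g. with the mixture’s phase) then yields audio in which the selected speaker is enhanced and everything else is suppressed.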

A Global Arms Race for Killer Robots Is Transforming Warfare

Over the weekend, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva for the start of a week’s talks on autonomous weapons systems. Many of them fear that after gunpowder and nuclear weapons, we are now on the brink of a “third revolution in warfare,” heralded by killer robots — the fully autonomous weapons that could decide who to target and kill without human input. With autonomous technology already in development in several countries, the talks mark a crucial point for governments and activists who believe the U.N. should play a key role in regulating the technology.

The meeting comes at a critical juncture. In July, Kalashnikov, the main defense contractor of the Russian government, announced it was developing a weapon that uses neural networks to make “shoot-no shoot” decisions. In January 2017, the U.S. Department of Defense released a video showing an autonomous drone swarm of 103 individual robots successfully flying over California. Nobody was in control of the drones; their flight paths were choreographed in real time by an advanced algorithm. The drones “are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” a spokesman said. The drones in the video were not weaponized — but the technology to do so is rapidly evolving.

This April also marks five years since the launch of the International Campaign to Stop Killer Robots, which called for “urgent action to preemptively ban the lethal robot weapons that would be able to select and attack targets without any human intervention.” The 2013 launch letter — signed by a Nobel Peace Laureate and the directors of several NGOs — noted that they could be deployed within the next 20 years and would “give machines the power to decide who lives or dies on the battlefield.”

US approves artificial-intelligence device for diabetic eye problems

US regulators on Wednesday approved the first device that uses artificial intelligence to detect eye damage from diabetes, allowing regular doctors to diagnose the condition without interpreting any data or images.

The device, called IDx-DR, can diagnose a condition called diabetic retinopathy, the most common cause of vision loss among the more than 30 million Americans living with diabetes.

Its software uses an artificial intelligence algorithm to analyze images of the eye, taken with a retinal camera called the Topcon NW400, the FDA said.

FDA approves AI-powered diagnostic that doesn’t need a doctor’s help

Marking a new era of “diagnosis by software,” the US Food and Drug Administration on Wednesday gave permission to a company called IDx to market an AI-powered diagnostic device for ophthalmology.

What it does: The software is designed to detect a greater-than-mild level of diabetic retinopathy, a condition that causes vision loss and threatens the more than 30 million people in the US living with diabetes. It occurs when high blood sugar damages blood vessels in the retina.

How it works: The program uses an AI algorithm to analyze images of the adult eye taken with a special retinal camera. A doctor uploads the images to a cloud server, and the software then delivers a positive or negative result.
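A toy sketch of this kind of screening decision, assuming a simple threshold on a model’s per-image scores. The threshold and return messages are illustrative, not IDx’s actual logic:

```python
# Illustrative sketch (not IDx's actual software): the cloud service
# reduces a model's scores on the uploaded retinal images to a single
# binary screening result.

REFERRAL_THRESHOLD = 0.5  # assumed cutoff, for illustration only

def screen(image_scores):
    """image_scores: model-estimated probability of more-than-mild
    diabetic retinopathy for each uploaded retinal image."""
    worst = max(image_scores)
    if worst >= REFERRAL_THRESHOLD:
        return "positive: refer to an eye-care professional"
    return "negative: rescreen in 12 months"
```

The key design point the FDA highlighted is that the result is actionable on its own: a positive means referral, a negative means routine rescreening, with no specialist needed to read the images.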

Human bias is a huge problem for AI. Here’s how we’re going to fix it

Machines don’t actually have bias. AI doesn’t ‘want’ something to be true or false for reasons that can’t be explained through logic. Unfortunately, human bias exists in machine learning from the creation of an algorithm to the interpretation of data – and until now hardly anyone has tried to solve this huge problem.

A team of scientists from the Czech Republic and Germany recently conducted research to determine the effect human cognitive bias has on interpreting the output used to create machine learning rules.

The team’s white paper explains how 20 different cognitive biases could potentially alter the development of machine learning rules and proposes methods for “debiasing” them.

What 40 Years of Research Reveals About the Difference Between Disruptive and Radical Innovation

“If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.” Jeff Immelt, former CEO of General Electric

The second wave of digitization is set to disrupt all spheres of economic life. As venture capital investor Marc Andreessen pointed out, “software is eating the world.” Yet, despite the unprecedented scope and momentum of digitization, many decision makers remain unsure how to cope, and turn to scholars for guidance on how to approach disruption.

The first thing they should know is that not all technological change is “disruptive.” It’s important to distinguish between different types of innovation, and the responses they require from firms. In a recent publication in the Journal of Product Innovation Management, we undertook a systematic review of 40 years (1975 to 2016) of innovation research. Using a natural language processing approach, we analyzed and organized 1,078 articles published on the topics of disruptive, architectural, breakthrough, competence-destroying, discontinuous, and radical innovation. We used a topic-modeling algorithm, which attempts to determine the topics present in a set of text documents. After quantitatively comparing several candidate models, we selected the one that best described the underlying text data, explaining the most variability in assigning words to topics and topics to documents while minimizing noise. This model clustered the text into 84 distinct topics.
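Topic modeling of this kind is commonly done with latent Dirichlet allocation (LDA). The article does not specify the exact model used, but a minimal sketch with scikit-learn on a toy corpus looks like this:

```python
# Minimal sketch of the kind of topic modeling described above, using
# LDA from scikit-learn on a toy four-document corpus. The study's
# actual model, preprocessing, and corpus are not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "disruptive innovation displaces incumbent firms",
    "radical innovation creates breakthrough technology",
    "incumbent firms respond to disruptive entrants",
    "breakthrough technology enables radical change",
]

# Bag-of-words counts, then fit a 2-topic model (the study used 84 topics
# on 1,078 articles; the small numbers here are purely for illustration).
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

doc_topics = lda.transform(counts)  # each row: topic mixture for one document
```

Model selection in such studies typically compares fits like these across different topic counts, keeping the one that best explains the word-topic and topic-document assignments.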

Replicating human memory structures in neural networks to create precise NLU algorithms

Machine learning and artificial intelligence developments are happening at breakneck speed! At such a pace, you need to understand the developments at multiple levels – you obviously need to understand the underlying tools and techniques, but you also need to develop an intuitive understanding of what is happening.

By the end of this article, you will have developed an intuitive understanding of RNNs, especially LSTMs and GRUs.
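As a preview of that intuition, here is a from-scratch forward step of a GRU cell in NumPy, using one common formulation of the gating equations. The weights are random, so this illustrates the mechanics rather than a trained model:

```python
import numpy as np

# One forward step of a GRU cell (one common formulation).
# Random weights: this shows the gating mechanics, not a trained network.

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    sigmoid = lambda a: 1 / (1 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to refresh
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much past to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
weights = [rng.standard_normal(s) for s in [(d_h, d_in), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for t in range(5):                             # run a short input sequence
    h = gru_step(rng.standard_normal(d_in), h, *weights)
```

Note how the gates keep the hidden state a convex blend of its past value and a bounded candidate; that is the structural trick that lets these cells carry information across long sequences without blowing up.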

Ready?

Protein Synthesis in Aging

Protein synthesis is a critical part of how our cells operate and keep us alive, and when it goes wrong, it drives the aging process. We take a look at how it works and what happens when things break down.


Suppose that your full-time job is to proofread machine-translated texts. The translation algorithm commits mistakes at a constant rate all day long; from this point of view, the quality of the translation stays the same. However, as a poor human proofreader, your ability to focus on this task will likely decline throughout the day; therefore, the number of missed errors, and therefore the number of translations that go out with mistakes, will likely go up with time, even though the machine doesn’t make any more errors at dusk than it did at dawn.
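The arithmetic of the analogy is easy to make concrete. Assuming a constant machine error rate and a linearly declining human catch rate (both numbers invented for illustration):

```python
# Toy model of the analogy above: the machine's error rate is constant,
# but the proofreader's catch rate declines over the day, so the number
# of errors that slip through rises hour by hour.

machine_errors_per_hour = 10.0          # constant all day

def catch_rate(hour):
    """Assumed linear fatigue: 95% of errors caught at hour 0,
    falling 5 percentage points per hour (illustrative numbers)."""
    return max(0.95 - 0.05 * hour, 0.0)

missed = [machine_errors_per_hour * (1 - catch_rate(h)) for h in range(8)]
# missed errors grow each hour even though the machine never got worse
```

By hour 7 the proofreader misses eight times as many errors as at hour 0, with zero change in the machine’s output quality.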

To an extent, this is pretty much what is going on with protein synthesis in your body.

Protein synthesis in a nutshell

The so-called coding regions of your DNA consist of genes that encode the information needed to assemble the proteins that your cells use. As your DNA is, for all intents and purposes, the blueprint to build you, it is pretty important information, and as such, you want to keep it safe. That’s why DNA is contained within the double-layered membrane of the cell nucleus, where it is relatively safe from oxidative stress and other factors that might damage it. The cell’s protein-assembling machinery, the ribosomes, is located outside the nucleus, so when a cell needs to build new proteins, what’s sent out to the assembly lines is not the blueprint itself but a disposable mRNA (messenger RNA) copy of it, which the ribosomes read to build the corresponding protein. The process of making an mRNA copy of DNA is called “transcription”, and much like the machine translation in our initial analogy, it is not error-free.
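The two steps can be sketched in code, with a toy codon table standing in for the real genetic code:

```python
# Toy illustration of the two steps described above: transcription
# (DNA -> mRNA copy) and translation (mRNA -> protein at the ribosome).
# Four-entry codon table for brevity; real cells decode all 64 codons.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(coding_strand):
    """The mRNA carries the coding-strand sequence, with uracil (U)
    in place of thymine (T)."""
    return coding_strand.replace("T", "U")

def translate(mrna):
    """The ribosome reads the mRNA three bases (one codon) at a time,
    appending one amino acid per codon until a STOP codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("ATGTTTGGCTAA")   # -> "AUGUUUGGCUAA"
protein = translate(mrna)           # -> ["Met", "Phe", "Gly"]
```

An error in the disposable mRNA copy affects only the proteins built from that copy, which is part of why keeping the DNA master blueprint protected in the nucleus matters so much.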
