BLOG

Archive for the ‘information science’ category: Page 106

Feb 3, 2022

Mimicking the brain to realize ‘human-like’ virtual assistants

Posted in categories: information science, robotics/AI

Speech is more than just a form of communication. A person’s voice conveys emotions and personality and is a unique trait we can recognize. Our use of speech as a primary means of communication is a key reason for the development of voice assistants in smart devices and technology. Typically, virtual assistants analyze speech and respond to queries by converting the received speech signals into a model they can understand and process to generate a valid response. However, they often have difficulty capturing and incorporating the complexities of human speech and end up sounding very unnatural.

Now, in a study published in the journal IEEE Access, Professor Masashi Unoki from the Japan Advanced Institute of Science and Technology (JAIST), and Dung Kim Tran, a doctoral student at JAIST, have developed a system that can capture the information in speech signals similarly to how humans perceive speech.

“In humans, the auditory periphery converts the information contained in input speech signals into neural activity patterns (NAPs) that the brain can identify. To emulate this function, we used a matching pursuit algorithm to obtain sparse representations of speech signals, or signal representations with the minimum possible significant coefficients,” explains Prof. Unoki. “We then used psychoacoustic principles, such as the equivalent rectangular bandwidth scale, gammachirp function, and masking effects to ensure that the auditory sparse representations are similar to that of the NAPs.”
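The matching pursuit step described above can be sketched in a few lines. This is a minimal toy version with a random dictionary, not the gammachirp/ERB dictionary the JAIST team actually uses; all names and sizes here are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit: approximate `signal` with a small
    number of unit-norm dictionary atoms (columns), yielding a
    sparse representation with few significant coefficients."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy dictionary: 64 random unit-norm atoms of length 32.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]            # sparse ground truth
c, r = matching_pursuit(x, D, n_atoms=5)
print(np.linalg.norm(r) < np.linalg.norm(x))  # → True (residual shrinks)
```

Each iteration subtracts the best-matching atom's projection, so the residual norm strictly decreases; in the full system, psychoacoustic constraints then shape which atoms are allowed.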

Feb 3, 2022

Does AI Improve Human Judgment?

Posted in categories: business, information science, robotics/AI

Decision-making has mostly revolved around learning from mistakes and making gradual, steady improvements. For several centuries, evolutionary experience has served humans well when it comes to decision-making. So, it is safe to say that most decisions human beings make are based on trial and error. Additionally, humans rely heavily on data to make key decisions. The larger the amount of high-integrity data available, the more balanced and rational their decisions will be. However, in the age of big data analytics, businesses and governments around the world are reluctant to rely on basic human instinct and know-how to make major decisions. Statistically, a large percentage of companies globally use big data for the purpose. Therefore, the application of AI in decision-making is an idea that is being adopted more widely today than in the past.

However, there are several debatable aspects of using AI in decision-making. Firstly, are *all* the decisions made with inputs from AI algorithms correct? And does the involvement of AI in decision-making cause avoidable problems? Read on to find out. The involvement of AI in decision-making simplifies the process of making strategies for businesses and governments around the world. However, AI has had its fair share of missteps on several occasions.

Feb 3, 2022

Mathematicians Prove 30-Year-Old André-Oort Conjecture

Posted in categories: information science, mathematics

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x³ + y³ = z³ have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.
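The question itself is easy to state in code. A brute-force search over a small range finds no positive integer solutions to the cubic equation, exactly as Fermat's Last Theorem guarantees for every range (this is an illustration of the statement, of course, not of Wiles's proof):

```python
# Search for positive integers with x**3 + y**3 == z**3 in a small range.
# Fermat's Last Theorem says none exist for any exponent n > 2.
N = 50
solutions = [(x, y, z)
             for x in range(1, N)
             for y in range(x, N)          # y >= x avoids mirror duplicates
             for z in range(1, 2 * N)      # 2*N safely covers cbrt(2*N**3)
             if x**3 + y**3 == z**3]
print(solutions)  # → []
```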

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.

Feb 2, 2022

Chip designer mimicking brain, backed by Sam Altman, gets $25 million funding

Posted in categories: information science, robotics/AI

(Reuters) — Rain Neuromorphics Inc., a startup that designs chips mimicking the way the brain works and aims to serve companies using artificial intelligence (AI) algorithms, said on Wednesday that it has raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company’s technology is analogue. Digital chips read 1s and 0s while analogue chips can decipher incremental information such as sound waves.
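The digital/analogue contrast can be made concrete with a toy quantization example. This sketch is purely illustrative (it says nothing about Rain's actual circuit design): a graded sine wave stands in for the analogue domain, and a crude 1-bit threshold stands in for the digital one.

```python
import numpy as np

# A 1 kHz "sound wave" sampled over 5 ms: the analogue signal is graded,
# taking a continuum of values between -1 and 1.
t = np.linspace(0, 0.005, 80, endpoint=False)
wave = np.sin(2 * np.pi * 1000 * t)

# A digital view collapses it to discrete levels; here, 1-bit quantization.
digital = np.where(wave >= 0, 1, 0)

print(sorted(set(digital.tolist())))   # → [0, 1]  only two levels survive
print(wave.min() < -0.9 < 0.9 < wave.max())  # → True  the analogue signal is graded
```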

Feb 1, 2022

This AI Learned the Design of a Million Algorithms to Help Build New AIs Faster

Posted in categories: information science, robotics/AI

Might there be a better way? Perhaps.

A new paper published on the preprint server arXiv describes how a type of algorithm called a “hypernetwork” could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms so it could pre-configure the parameters of new, untrained algorithms.

The AI, called GHN-2, can predict and set the parameters of an untrained neural network in a fraction of a second. And in most cases, the algorithms using GHN-2’s parameters performed as well as algorithms that had cycled through thousands of rounds of training.
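The core idea — one network emitting the parameters of another in a single forward pass — can be sketched in miniature. GHN-2 itself is a graph hypernetwork trained on about a million example architectures; the descriptor, sizes, and (untrained) generator below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def hypernetwork(arch, generator):
    """Toy hypernetwork: map an architecture descriptor (a list of
    layer widths) to a flat parameter vector for that target network,
    in one forward pass and with no gradient-descent training loop."""
    # Embed the architecture as a fixed-length feature vector.
    feats = np.zeros(4)
    feats[:len(arch)] = np.array(arch) / 100.0
    # Number of weights the target network needs (dense layers, no biases).
    n_params = sum(a * b for a, b in zip(arch, arch[1:]))
    # A learned mapping in GHN-2; a random linear map in this sketch.
    return np.tanh(feats @ generator[:, :n_params])

arch = [8, 16, 4]                     # layer widths of the *target* network
G = rng.standard_normal((4, 8 * 16 + 16 * 4))
params = hypernetwork(arch, G)
print(params.shape)                   # → (192,)  one weight per connection
```

The point of the sketch is the shape of the computation: the cost of "initializing" the target network is a single matrix multiply, not thousands of training rounds.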

Feb 1, 2022

Will brains or algorithms rule the kingdom of science?

Posted in categories: information science, neuroscience, science

Science today stands at a crossroads: will its progress be driven by human minds or by the machines that we’ve created?

Feb 1, 2022

AI nanny created by Chinese scientists to grow humans in robot wombs

Posted in categories: biotech/medical, ethics, information science, robotics/AI

The AI nanny is here! In a new feat for science, robots and AI can now be paired to optimise the creation of human life. In a Matrix-esque reality, robotics and artificial intelligence can now help to develop babies with algorithms and artificial wombs.

As reported by the South China Morning Post, Chinese scientists in Suzhou have developed the new technology. However, there are worries surrounding the ethics of actually growing human babies artificially.

Jan 31, 2022

IBM and CERN use quantum computing to hunt elusive Higgs boson

Posted in categories: computing, finance, information science, particle physics, quantum physics

That is not to say that the advantage has been proven yet. The quantum algorithm developed by IBM performed comparably to classical methods on the limited quantum processors that exist today – but those systems are still in their very early stages.

And with only a small number of qubits, today’s quantum computers are not capable of carrying out computations that are useful. They also remain crippled by the fragility of qubits, which are highly sensitive to environmental changes and are still prone to errors.

Rather, IBM and CERN are banking on future improvements in quantum hardware to demonstrate tangibly, and not only theoretically, that quantum algorithms have an advantage.

Jan 30, 2022

What jobs are affected by AI? Better-paid, better-educated workers face the most exposure

Posted in categories: economics, employment, information science, robotics/AI

In part because the technologies have not yet been widely adopted, previous analyses have had to rely either on case studies or subjective assessments by experts to determine which occupations might be susceptible to a takeover by AI algorithms. What’s more, most research has concentrated on an undifferentiated array of “automation” technologies including robotics, software, and AI all at once. The result has been a lot of discussion—but not a lot of clarity—about AI, with prognostications that range from the utopian to the apocalyptic.

Given that, the analysis presented here demonstrates a new way to identify the kinds of tasks and occupations likely to be affected by AI’s machine learning capabilities, rather than automation’s robotics and software impacts on the economy. By employing a novel technique developed by Stanford University Ph.D. candidate Michael Webb, the new report establishes job exposure levels by analyzing the overlap between AI-related patents and job descriptions. In this way, the following paper homes in on the impacts of AI specifically and does it by studying empirical statistical associations as opposed to expert forecasting.
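The patent/job-description overlap idea reduces to a simple score. This is a drastically simplified sketch of the approach, not Webb's method (which matches verb-noun pairs against full patent corpora); the term set and job descriptions below are invented.

```python
# Toy exposure score: overlap between vocabulary drawn from AI patents
# and the text of a job description.
ai_patent_terms = {"predict", "classify", "detect", "recommend",
                   "translate", "forecast", "recognize"}

def exposure(job_description):
    """Fraction of AI-patent terms that appear in the job description."""
    words = {w.strip(".,").lower() for w in job_description.split()}
    return len(words & ai_patent_terms) / len(ai_patent_terms)

analyst = "Forecast demand, classify risk, and predict market trends."
driver = "Drive delivery routes and load packages safely."
print(exposure(analyst) > exposure(driver))  # → True
```

Even this crude version reproduces the report's headline pattern: analytical, better-educated occupations share far more vocabulary with AI patents than manual ones do.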

Jan 28, 2022

The danger of AI micro-targeting in the metaverse

Posted in categories: information science, robotics/AI

Artificial intelligence will soon become one of the most important, and likely most dangerous, aspects of the metaverse. I’m talking about agenda-driven artificial agents that look and act like any other users but are virtual simulations that will engage us in “conversational manipulation,” targeting us on behalf of paying advertisers.

This is especially dangerous when the AI algorithms have access to data about our personal interests, beliefs, habits and temperament, while also reading our facial expressions and vocal inflections. Such agents will be able to pitch us more skillfully than any salesman. And it won’t just be to sell us products and services – they could easily push political propaganda and targeted misinformation on behalf of the highest bidder.

And because these AI agents will look and sound like anyone else in the metaverse, our natural skepticism toward advertising will not protect us. For these reasons, we need to regulate some aspects of the coming metaverse, especially AI-driven agents. If we don’t, promotional AI avatars will fill our lives, sensing our emotions in real time and quickly adjusting their tactics for a level of micro-targeting never before experienced.