BLOG

Archive for the ‘information science’ category: Page 147

Jan 22, 2021

AI and Big Data Memory Solutions: Improving our everyday lives | Samsung

Posted by in categories: information science, robotics/AI

Samsung’s memory technology brings innovation to artificial intelligence and Big Data analytics, driving impactful change in the way we live, work, and interact with each other. Through next-generation memory that enables faster and more complex AI and Big Data workloads, Samsung takes part in the revolutionary advancement of technology that is enriching our everyday lives.

Jan 22, 2021

Healthy skin with OneSkin — Interview//Presentation with Carolina Reis Oliveira

Posted by in categories: biotech/medical, information science, life extension

OneSkin — the first skin cream that destroys senescent cells:


Longevity, health, long lifespans and healthspans, psychology, spirituality — Carolina Reis Oliveira and I talk about all of these in relation to the skin. Find out how you can have very healthy skin with OneSkin!


Jan 21, 2021

New MIT Social Intelligence Algorithm Helps Build Machines That Better Understand Human Goals

Posted by in category: information science

A new algorithm capable of inferring goals and plans could help machines better adapt to the imperfect nature of human planning.

In a classic experiment on human social intelligence by psychologists Felix Warneken and Michael Tomasello (see video below), an 18-month-old toddler watches a man carry a stack of books towards an unopened cabinet. When the man reaches the cabinet, he clumsily bangs the books against the door of the cabinet several times, then makes a puzzled noise.
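Goal inference of this kind can be sketched in a few lines: observe an agent's actions, score each action under a noisily rational model for every candidate goal, and update a posterior with Bayes' rule. The corridor world, the two goals, and the rationality parameter below are illustrative assumptions, not the MIT algorithm itself.

```python
# Toy Bayesian goal inference ("inverse planning"), a minimal sketch of the
# general idea of inferring goals from observed actions. The 1-D corridor,
# goal set, and Boltzmann-rationality model are illustrative assumptions.
import math

GOALS = [0, 10]          # two candidate goal positions on a 1-D corridor
BETA = 2.0               # rationality: higher = agent more reliably optimal

def step_likelihood(pos, action, goal):
    """Probability of a step (-1 or +1) under a Boltzmann-rational agent."""
    def utility(a):
        return -abs((pos + a) - goal)     # closer to the goal = higher utility
    weights = {a: math.exp(BETA * utility(a)) for a in (-1, +1)}
    return weights[action] / sum(weights.values())

def infer_goal(start, actions):
    """Update a uniform prior over goals after each observed action."""
    posterior = {g: 1.0 / len(GOALS) for g in GOALS}
    pos = start
    for a in actions:
        for g in GOALS:
            posterior[g] *= step_likelihood(pos, a, g)
        z = sum(posterior.values())
        posterior = {g: p / z for g, p in posterior.items()}
        pos += a
    return posterior

# An agent starting at 5 takes three steps to the right: posterior mass
# shifts heavily toward the goal at position 10.
print(infer_goal(5, [+1, +1, +1]))
```

An imperfect planner is handled gracefully: a single wrong-way step lowers but does not zero out the likely goal's posterior, because the Boltzmann model assigns every action nonzero probability.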

Jan 19, 2021

Rethinking spin chemistry from a quantum perspective

Posted by in categories: biotech/medical, chemistry, computing, information science, quantum physics

Researchers at Osaka City University use quantum superposition states and Bayesian inference to create a quantum algorithm, easily executable on quantum computers, that accurately and directly calculates energy differences between the electronic ground and excited spin states of molecular systems in polynomial time.

Understanding how the natural world works enables us to mimic it for the benefit of humankind. Think of how much we rely on batteries. At the core is an understanding of molecular structures and the behavior of electrons within them. Calculating the energy differences between a molecule’s electronic ground and excited spin states helps us understand how to better use that molecule in a variety of chemical, biomedical, and industrial applications. We have made much progress with closed-shell systems, in which electrons are paired up and stable. Open-shell systems, on the other hand, are less stable, and their underlying electronic behavior is complex and thus more difficult to understand. They have unpaired electrons in their ground state, which cause their energy to vary due to the intrinsic nature of electron spins and make measurements difficult, especially as the molecules increase in size and complexity.
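The Bayesian side of such an approach can be illustrated classically: an energy difference shows up as a phase accumulated during time evolution, so each measurement outcome is an interference fringe that updates a posterior over candidate energies. The candidate grid, evolution times, and noise-free measurement model below are toy assumptions, not the Osaka City University algorithm itself.

```python
# Classical simulation of Bayesian inference on phase measurements: the
# "unknown" energy gap imprints a phase gap*t on a superposition state, and
# each simulated measurement sharpens a posterior over candidate gaps.
import math, random

random.seed(0)
TRUE_GAP = 0.73                                  # unknown energy difference
candidates = [i / 100 for i in range(1, 100)]    # candidate gaps 0.01..0.99
posterior = [1.0 / len(candidates)] * len(candidates)

def p_zero(gap, t):
    """P(measure |0>) after phase evolution for time t: a cosine fringe."""
    return (1.0 + math.cos(gap * t)) / 2.0

for trial in range(200):
    t = random.uniform(1.0, 20.0)                      # random evolution time
    outcome0 = random.random() < p_zero(TRUE_GAP, t)   # simulated measurement
    likes = [p_zero(g, t) if outcome0 else 1.0 - p_zero(g, t)
             for g in candidates]
    posterior = [p * l for p, l in zip(posterior, likes)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]             # renormalize

best = candidates[posterior.index(max(posterior))]
print(f"estimated gap: {best:.2f} (true {TRUE_GAP})")
```

Varying the evolution time across trials is what breaks the aliasing of the cosine fringe: no wrong candidate matches the true gap's fringe pattern at every time.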

Jan 19, 2021

A Language AI Is Accurately Predicting Covid-19 ‘Escape’ Mutations

Posted by in categories: biotech/medical, genetics, information science, robotics/AI

Weird, right?

The team’s critical insight was to construct a “viral language” of sorts, based purely on viral genetic sequences. This language, given sufficient examples, can then be analyzed with NLP techniques to predict how changes to the genome alter the virus’s interaction with our immune system. That is, using artificial language techniques, it may be possible to hunt down key areas in a viral genome that, when mutated, allow the virus to escape roaming antibodies.

It’s a seriously kooky idea. Yet when tested on some of our greatest viral foes, like influenza (the seasonal flu), HIV, and SARS-CoV-2, the algorithm was able to discern critical mutations that “transform” each virus just enough to escape the grasp of our immune surveillance system.
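One way to make this concrete is to pair two scores for each candidate mutation: a fluency-style score from a language model trained on viral sequences (is the mutant still plausible "viral language"?) and a representation-change score (how far does the mutation move the sequence's internal encoding?). The smoothed bigram model, bigram-count "embedding," and tiny hypothetical corpus below are deliberately crude stand-ins for the study's neural language model.

```python
# Toy mutation scoring in the spirit of the viral-language idea: rank
# single-residue mutants by language-model plausibility and by how much they
# change a crude sequence "embedding". Corpus and sequences are hypothetical.
from collections import Counter
import math

CORPUS = ["MKTAYIAK", "MKTAYIAR", "MKSAYIAK", "MKTAYLAK", "MKTGYIAK"]

def bigrams(seq):
    return [seq[i:i + 2] for i in range(len(seq) - 1)]

counts = Counter(b for s in CORPUS for b in bigrams(s))
total = sum(counts.values())

def log_prob(seq):
    """Add-one-smoothed bigram log-likelihood (400 = 20x20 possible bigrams)."""
    return sum(math.log((counts[b] + 1) / (total + 400)) for b in bigrams(seq))

def semantic_change(wt, mut):
    """L1 distance between bigram-count profiles of wild type and mutant."""
    e1, e2 = Counter(bigrams(wt)), Counter(bigrams(mut))
    return sum(abs(e1[b] - e2[b]) for b in set(e1) | set(e2))

WT = "MKTAYIAK"
AMINO = "ACDEFGHIKLMNPQRSTVWY"
scored = []
for pos in range(len(WT)):
    for aa in AMINO:
        if aa == WT[pos]:
            continue
        mut = WT[:pos] + aa + WT[pos + 1:]
        scored.append((log_prob(mut), semantic_change(WT, mut), mut))

# Escape candidates: mutants that stay "grammatical" yet change the profile.
scored.sort(key=lambda x: (-x[0], -x[1]))
for lp, sc, mut in scored[:3]:
    print(f"{mut}: log-prob={lp:.2f}, semantic change={sc}")
```

A mutant using a bigram seen in the corpus scores strictly higher than one introducing a never-seen bigram, which is the bigram-level analogue of "grammaticality."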

Jan 17, 2021

Accelerating AI computing to the speed of light

Posted by in categories: information science, robotics/AI

Artificial intelligence and machine learning are already an integral part of our everyday lives online. For example, search engines such as Google use intelligent ranking algorithms, and video streaming services such as Netflix use machine learning to personalize movie recommendations.

As the demands for AI online continue to grow, so does the need to speed up AI performance and find ways to reduce its energy consumption.

Now a University of Washington-led team has come up with a system that could help: an optical computing core prototype that uses phase-change material. This system is fast and energy efficient, and capable of accelerating the computations used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.
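The workhorse operation such hardware accelerates is the dense matrix-vector multiply inside every neural-network layer; a photonic core performs those multiply-accumulates in the analog optical domain rather than in digital logic. The sketch below simply counts that cost in software for an illustrative (assumed) network shape, to show why it dominates inference.

```python
# Counting the multiply-accumulate (MAC) operations of a small dense network's
# forward pass: these MACs are exactly what an optical core offloads to analog
# light-matter interaction. Network shape and weights are illustrative.
import random

random.seed(0)
LAYERS = [784, 256, 10]              # assumed toy network shape

def matvec(W, x):
    """Dense matrix-vector product: the operation done optically in hardware."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

x = [random.gauss(0, 1) for _ in range(LAYERS[0])]
macs = 0
for n_in, n_out in zip(LAYERS, LAYERS[1:]):
    W = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    x = [max(v, 0.0) for v in matvec(W, x)]   # dense layer + ReLU
    macs += n_out * n_in                      # MACs this layer performs

print(f"multiply-accumulates per forward pass: {macs}")
```

Even this toy network needs over 200,000 multiply-accumulates per input, which is why performing them at the speed of light, instead of clocked digital logic, pays off.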

Jan 14, 2021

FTC settlement with Ever orders data and AIs deleted after facial recognition pivot

Posted by in categories: cybercrime/malcode, information science, robotics/AI

The maker of a defunct cloud photo storage app that pivoted to selling facial recognition services has been ordered to delete user data and any algorithms trained on it, under the terms of an FTC settlement.

The regulator investigated complaints the Ever app — which gained earlier notoriety for using dark patterns to spam users’ contacts — had applied facial recognition to users’ photographs without properly informing them what it was doing with their selfies.

Under the proposed settlement, Ever must delete photos and videos of users who deactivated their accounts and also delete all face embeddings (i.e. data related to facial features which can be used for facial recognition purposes) that it derived from photos of users who did not give express consent to such a use.

Jan 13, 2021

A framework to assess the importance of variables for different predictive models

Posted by in categories: information science, robotics/AI

Two researchers at Duke University have recently devised a useful approach to examine how essential certain variables are for increasing the reliability/accuracy of predictive models. Their paper, published in Nature Machine Intelligence, could ultimately aid the development of more reliable and better performing machine-learning algorithms for a variety of applications.

“Most people pick a predictive machine-learning technique and examine which variables are important or relevant to its predictions afterwards,” Jiayun Dong, one of the researchers who carried out the study, told TechXplore. “What if there were two models that had similar performance but used wildly different variables? If that was the case, an analyst could make a mistake and think that one variable is important, when in fact, there is a different, equally good model for which a totally different set of variables is important.”

Dong and his colleague Cynthia Rudin introduced a method that researchers can use to examine the importance of variables across a variety of almost-optimal predictive models. This approach, which they refer to as “variable importance clouds,” could be used to gain a better understanding of machine-learning models before selecting the most promising one for a given task.
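The idea can be sketched as: enumerate a simple model class, keep every model whose loss is within a tolerance of the best, and report each variable's importance across that whole near-optimal set. The data-generating process, the class of two-variable linear regressions, and coefficient-magnitude importance below are illustrative assumptions, not the authors' method.

```python
# Minimal "variable importance cloud" sketch: two nearly duplicate features
# mean two different near-optimal models exist, and a variable that looks
# essential in one is absent from the other. All choices here are toy.
import itertools, random

random.seed(0)
N = 400
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [v + random.gauss(0, 0.02) for v in x1]         # near-duplicate of x1
x3 = [random.gauss(0, 1) for _ in range(N)]
X = list(zip(x1, x2, x3))
y = [a + 0.5 * c + random.gauss(0, 0.1) for a, _, c in X]

def fit_two(i, j):
    """Least-squares fit y ~ b_i*x_i + b_j*x_j via 2x2 normal equations."""
    sii = sum(r[i] * r[i] for r in X); sjj = sum(r[j] * r[j] for r in X)
    sij = sum(r[i] * r[j] for r in X)
    siy = sum(r[i] * t for r, t in zip(X, y))
    sjy = sum(r[j] * t for r, t in zip(X, y))
    det = sii * sjj - sij * sij
    bi = (siy * sjj - sjy * sij) / det
    bj = (sjy * sii - siy * sij) / det
    mse = sum((t - bi * r[i] - bj * r[j]) ** 2 for r, t in zip(X, y)) / N
    return (i, j), {i: abs(bi), j: abs(bj)}, mse       # importance = |coef|

models = [fit_two(i, j) for i, j in itertools.combinations(range(3), 2)]
best = min(m[2] for m in models)
cloud = [m for m in models if m[2] <= 1.2 * best]      # near-optimal set

for vars_, imp, mse in cloud:
    print(vars_, {k: round(v, 2) for k, v in imp.items()}, round(mse, 4))
# Both {x1,x3} and {x2,x3} land in the cloud: x1 is "important" in one
# near-optimal model and unused in the other, because x2 substitutes for it.
```

Reporting the spread of each variable's importance over the cloud, rather than a single number from one model, is exactly what guards against the mistake Dong describes.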

Jan 12, 2021

Deconstructing Schrödinger’s Cat – Solving the Paradox

Posted by in categories: information science, particle physics, quantum physics

The French theoretical physicist Franck Laloë presents a modification of Schrödinger’s famous equation that ensures all measured states are unique, helping to solve the problem neatly encompassed in the Schrödinger’s cat paradox.

The paradox of Schrödinger’s cat – the feline that is, famously, both alive and dead until its box is opened – is the most widely known example of a recurrent problem in quantum mechanics: its dynamics seems to predict that macroscopic objects (like cats) can, sometimes, exist simultaneously in more than one completely distinct state. Many physicists have tried to solve this paradox over the years, but no approach has been universally accepted. Now, however, theoretical physicist Franck Laloë from Laboratoire Kastler Brossel (ENS-Université PSL) in Paris has proposed a new interpretation that could explain many features of the paradox. He sets out a model of this possible theory in a new paper in EPJ D.

One approach to solving this problem involves adding a small, random extra term to the Schrödinger equation, which allows the quantum state vector to ‘collapse’, ensuring that – as is observed in the macroscopic universe – the outcome of each measurement is unique. Laloë’s theory combines this interpretation with another from de Broglie and Bohm and relates the origins of the quantum collapse to the universal gravitational field. This approach can be applied equally to all objects, quantum and macroscopic: that is, to cats as much as to atoms.
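A "small, random extra term" of this kind is usually written as a stochastic differential equation. The equation below is a generic, norm-preserving continuous-localization form given for illustration; Laloë's specific proposal additionally ties the collapse terms to the universal gravitational field, which is not spelled out here.

```latex
% Generic dynamical-collapse (stochastic Schrödinger) equation, Itô form.
% The first term is ordinary unitary evolution; the stochastic term (dW_t is
% a Wiener increment) and the nonlinear damping term together drive the state
% toward eigenstates of the localization operator \hat{A}, with strength
% \gamma, while preserving the norm of |\psi_t\rangle.
\begin{equation}
  d|\psi_t\rangle =
    \Bigl[ -\tfrac{i}{\hbar}\,\hat{H}\,dt
           + \sqrt{\gamma}\,\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)\,dW_t
           - \tfrac{\gamma}{2}\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)^{2}\,dt
    \Bigr]\,|\psi_t\rangle
\end{equation}
```

Because the collapse rate grows with the spread of \(\hat{A}\) across the superposed branches, the extra terms are negligible for atoms but rapidly select a unique outcome for macroscopic superpositions: for cats, not for electrons.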

Jan 12, 2021

Diffractive networks improve optical image classification accuracy

Posted by in categories: information science, robotics/AI

Recently, there has been a reemergence of interest in optical computing platforms for artificial intelligence-related applications. Optics is ideally suited for realizing neural network models because of the high speed, large bandwidth and high interconnectivity of optical information processing. Introduced by UCLA researchers, Diffractive Deep Neural Networks (D2NNs) constitute such an optical computing framework, comprising successive transmissive and/or reflective diffractive surfaces that can process input information through light-matter interaction. These surfaces are designed using standard deep learning techniques in a computer, and are then fabricated and assembled to build a physical optical network. Through experiments performed at terahertz wavelengths, the capability of D2NNs to classify objects all-optically was demonstrated. Beyond object classification, D2NNs have also been demonstrated on miscellaneous optical design and computation tasks, including spectral filtering, spectral information encoding, and optical pulse shaping.

In their latest paper published in Light: Science & Applications, the UCLA team reports a leapfrog advance in D2NN-based image classification accuracy through ensemble learning. The key ingredient behind the success of their approach can be intuitively understood through the experiment of Sir Francis Galton (1822–1911), an English polymath and statistician, who, while visiting a livestock fair, asked the participants to guess the weight of an ox. None of the hundreds of participants guessed correctly. But to his astonishment, Galton found that the median of all the guesses came quite close: 1,207 pounds, within 1% of the true weight of 1,198 pounds. This experiment reveals the power of combining many predictions to obtain a much more accurate one. Ensemble learning manifests this idea in machine learning, where improved predictive performance is attained by combining multiple models.

In their scheme, the UCLA researchers formed an ensemble of multiple D2NNs operating in parallel, each individually trained and diversified by optically filtering its inputs with a variety of filters. 1,252 D2NNs, uniquely designed in this manner, formed the initial pool of networks, which was then pruned with an iterative pruning algorithm so that the resulting physical ensemble is not prohibitively large. The final prediction comes from a weighted average of the decisions of all the constituent D2NNs in the ensemble. The researchers evaluated the resulting D2NN ensembles on the CIFAR-10 image dataset, which contains 60,000 natural images in 10 classes and is an extensively used benchmark for machine-learning algorithms. Simulations of their designed ensemble systems revealed that diffractive optical networks can significantly benefit from the ‘wisdom of the crowd’.
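The two ensemble ingredients described here, averaging member predictions and pruning the pool down to a small subset, can be sketched with simulated classifiers standing in for individually trained D2NNs. For simplicity the sketch uses uniform rather than learned weights, a greedy forward selection rather than the paper's iterative pruning, and synthetic noisy "models"; all of these are illustrative assumptions.

```python
# Ensemble-of-classifiers sketch: combine per-class scores from many weak
# members, then greedily keep only members that improve ensemble accuracy.
# The simulated models (noisy copies of the truth) stand in for D2NNs.
import random

random.seed(0)
N_MODELS, N_SAMPLES, N_CLASSES = 20, 300, 10
truth = [random.randrange(N_CLASSES) for _ in range(N_SAMPLES)]

def make_model(noise):
    """Fake per-class scores: mostly right, wrong with probability `noise`."""
    preds = []
    for t in truth:
        guess = t if random.random() > noise else random.randrange(N_CLASSES)
        preds.append([1.0 if c == guess else 0.0 for c in range(N_CLASSES)])
    return preds

models = [make_model(random.uniform(0.2, 0.5)) for _ in range(N_MODELS)]

def ensemble_accuracy(members):
    """Accuracy when members' class scores are averaged (uniform weights)."""
    hits = 0
    for s in range(N_SAMPLES):
        scores = [sum(models[m][s][c] for m in members)
                  for c in range(N_CLASSES)]
        hits += scores.index(max(scores)) == truth[s]
    return hits / N_SAMPLES

# Greedy forward selection: grow the ensemble only while accuracy improves.
chosen, best_acc = [], 0.0
while len(chosen) < 5:
    cand = max((m for m in range(N_MODELS) if m not in chosen),
               key=lambda m: ensemble_accuracy(chosen + [m]))
    acc = ensemble_accuracy(chosen + [cand])
    if acc <= best_acc:
        break
    chosen, best_acc = chosen + [cand], acc

single_best = max(ensemble_accuracy([m]) for m in range(N_MODELS))
print(f"best single model: {single_best:.3f}, pruned ensemble: {best_acc:.3f}")
```

As in Galton's ox-weighing crowd, the members' independent errors tend to cancel when their scores are combined, so a small pruned ensemble beats its best individual member.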