BLOG

Archive for the ‘information science’ category: Page 66

Feb 7, 2023

A New AI Research From MIT Reduces Variance in Denoising Score-Matching, Improving Image Quality, Stability, and Training Speed in Diffusion Models

Posted by in categories: information science, robotics/AI

Diffusion models have recently produced outstanding results on a variety of generative tasks, including the creation of images, 3D point clouds, and molecular conformers. Itô stochastic differential equations (SDEs) provide a unified framework that encompasses these models: the models learn time-dependent score fields via score matching, and these scores then guide the reverse SDE during generative sampling. Variance-exploding (VE) and variance-preserving (VP) SDEs are common diffusion formulations, and EDM, which builds on them, offers the best performance to date. Despite these strong empirical results, the training procedure for diffusion models can still be improved.
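
As a concrete illustration of the standard training setup described above, here is a minimal sketch of a single denoising score-matching step for a VE-style diffusion, written in plain PyTorch. The `score_net` placeholder, the sigma-squared weighting, and the tensor shapes are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of one denoising score-matching (DSM) step for a
# variance-exploding (VE) diffusion. `score_net` is a placeholder model
# mapping (noisy sample, noise level) -> predicted score.
import torch

def dsm_loss(score_net, x0, sigma):
    """Single-point DSM: the regression target depends only on the one data
    point x0 that produced the noisy sample, which is what makes the target
    high-variance at intermediate noise levels."""
    eps = torch.randn_like(x0)
    xt = x0 + sigma * eps                     # forward perturbation x_t = x_0 + sigma * eps
    target = -eps / sigma                     # score of the Gaussian kernel N(x0, sigma^2 I)
    pred = score_net(xt, sigma)
    return ((sigma * (pred - target)) ** 2).mean()   # sigma^2-weighted squared error
```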

The Stable Target Field (STF) objective is a generalized version of the denoising score-matching (DSM) objective. In particular, the high variance of the DSM objective's training targets can lead to subpar performance. To understand the source of this variance, the authors divide the score field into three regimes. According to their analysis, the problem arises mostly in the intermediate regime, where multiple modes or data points have a comparable influence on the scores; in other words, in this regime it is ambiguous which data points the noisy samples produced during the forward process originated from. Figure 1(a) illustrates the difference between the DSM objective and the proposed STF objective.

Figure 1: A comparison of the DSM objective and the proposed STF objective.
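
To make the contrast concrete, here is a rough sketch of the stable-target idea: the single-point DSM target is replaced by an average over a reference batch, weighted by how likely each candidate data point is to have produced the noisy sample. The function name, shapes, and weighting scheme are illustrative; the published STF objective may differ in its details.

```python
# Rough sketch of a stable (reference-batch-averaged) score target.
import torch

def stable_target(xt, x0, ref_batch, sigma):
    """xt: noisy sample (d,), x0: the data point that actually produced it (d,),
    ref_batch: (n, d) additional data points drawn from the dataset."""
    cand = torch.cat([x0.unsqueeze(0), ref_batch], dim=0)        # always include x0 itself
    log_w = -((xt - cand) ** 2).sum(dim=-1) / (2 * sigma ** 2)   # log N(xt; x_i, sigma^2 I)
    w = torch.softmax(log_w, dim=0)                              # posterior-style weights
    targets = -(xt - cand) / sigma ** 2                          # per-candidate score targets
    return (w.unsqueeze(-1) * targets).sum(dim=0)                # weighted average target
```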

Feb 7, 2023

Echolocation could give small robots the ability to find lost people

Posted by in categories: drones, information science, robotics/AI

Scientists and roboticists have long looked to nature for inspiration when developing new features for machines. In this case, researchers from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland were inspired by bats and other animals that rely on echolocation to design a method that gives small robots the ability to navigate on their own, one that doesn't need expensive hardware or components too large or too heavy for tiny machines. In fact, according to PopSci, the team used only the integrated audio hardware of an interactive puck robot and built an audio extension deck with an inexpensive microphone and speakers for a tiny flying drone that can fit in the palm of your hand.

The system works much like bat echolocation: the robot emits sounds across a range of frequencies, and its microphone picks them up as they bounce off the walls. An algorithm created by the team then analyzes the returning sound waves and builds a map of the room's dimensions.
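
The published method is model-based and considerably more sophisticated; the sketch below only illustrates the basic time-of-flight idea behind echolocation, estimating the distance to a wall from the delay of an echo. The function name and arguments are invented for illustration.

```python
# Illustrative time-of-flight sketch (not the EPFL algorithm): find the echo
# delay by cross-correlating the recording with the emitted chirp, then
# convert the round-trip delay into a distance.
import numpy as np

def wall_distance(emitted, recorded, sample_rate, speed_of_sound=343.0):
    # Assumes `recorded` contains only the echo; a real pipeline would first
    # mask out the much louder direct path from speaker to microphone.
    corr = np.correlate(recorded, emitted, mode="full")
    lag = int(np.argmax(corr)) - (len(emitted) - 1)   # delay in samples
    delay = lag / sample_rate                         # delay in seconds
    return speed_of_sound * delay / 2.0               # sound travels out and back
```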

In a paper published in IEEE Robotics and Automation Letters, the researchers said existing “algorithms for active echolocation are less developed and often rely on hardware requirements that are out of reach for small robots.” They also said their “method is model-based, runs in real time and requires no prior calibration or training.” Their solution could give small machines the capability to be sent on search-and-rescue missions or to previously uncharted locations that bigger robots wouldn’t be able to reach. And since the system only needs onboard audio equipment or cheap additional hardware, it has a wide range of potential applications.

Feb 7, 2023

AI can predict the effectiveness of breast cancer chemotherapy

Posted by in categories: biotech/medical, information science, robotics/AI

Engineers at the University of Waterloo have developed artificial intelligence (AI) technology to predict if women with breast cancer would benefit from chemotherapy prior to surgery.

The new AI algorithm, part of the open-source Cancer-Net initiative led by Dr. Alexander Wong, could help unsuitable candidates avoid the serious side effects of chemotherapy and pave the way for better surgical outcomes for those who are suitable.

“Determining the right treatment for a given breast cancer patient is very difficult right now, and it is crucial to avoid unnecessary side effects from using treatments that are unlikely to have real benefit for that patient,” said Wong, a professor of systems design engineering.

Feb 7, 2023

An extension of FermiNet to discover quantum phase transitions

Posted by in categories: chemistry, information science, quantum physics, robotics/AI

Architectures based on artificial neural networks (ANNs) have proved very helpful in research settings, as they can quickly analyze vast amounts of data and make accurate predictions. In 2020, Google's British AI subsidiary DeepMind used a new ANN architecture dubbed the Fermionic neural network (FermiNet) to solve the Schrödinger equation for electrons in molecules, a central problem in the field of chemistry.

The Schrödinger equation is a partial differential equation based on the well-established theory of energy conservation, and it can be used to derive information about the behavior of electrons and to solve problems related to the properties of matter. Using FermiNet, a conceptually simple method, DeepMind was able to solve this equation in the context of chemistry, attaining highly accurate results comparable to those obtained with sophisticated quantum chemistry techniques.
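
For reference, these are the textbook relations behind that claim: the time-independent Schrödinger equation and the variational principle that a neural-network ansatz such as FermiNet optimizes (standard forms, not transcribed from the paper).

```latex
% Time-independent Schrödinger equation and the variational bound on the
% ground-state energy E_0 for a parameterized wavefunction psi_theta.
\hat{H}\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N),
\qquad
E_\theta \;=\; \frac{\langle \psi_\theta \,|\, \hat{H} \,|\, \psi_\theta \rangle}
                    {\langle \psi_\theta \,|\, \psi_\theta \rangle}
\;\ge\; E_0 .
```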

Researchers at Imperial College London, DeepMind, Lancaster University, and the University of Oxford recently adapted the FermiNet architecture to tackle a quantum physics problem. In their paper, published in Physical Review Letters, they used FermiNet to calculate the ground states of periodic Hamiltonians and to study the homogeneous electron gas (HEG), a simplified quantum mechanical model of electrons interacting in solids.
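
For context, the HEG is usually written in the following textbook form (Hartree atomic units): kinetic energy plus electron-electron Coulomb repulsion, with the uniform neutralizing background (and, in a periodic simulation cell, the Ewald-summed interactions) gathered into a constant term. This is the standard model, not a transcription of the paper.

```latex
% Homogeneous electron gas (HEG) Hamiltonian in Hartree atomic units.
\hat{H}_{\text{HEG}} \;=\; -\tfrac{1}{2}\sum_{i=1}^{N} \nabla_i^{2}
  \;+\; \sum_{i<j} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}
  \;+\; \text{const.}
```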

Feb 6, 2023

Code-generating platform Magic challenges GitHub’s Copilot with $23M in VC backing

Posted by in categories: information science, robotics/AI

Magic, a startup developing a code-generating platform similar to GitHub’s Copilot, today announced that it raised $23 million in a Series A funding round led by Alphabet’s CapitalG with participation from Elad Gil, Nat Friedman and Amplify Partners. So what’s its story?

Magic’s CEO and co-founder, Eric Steinberger, says that he was inspired by the potential of AI at a young age. In high school, he and his friends wired up the school’s computers for machine learning algorithm training, an experience that planted the seeds for Steinberger’s computer science degree and his job at Meta as an AI researcher.

“I spent years exploring potential paths to artificial general intelligence, and then large language models (LLMs) were invented,” Steinberger told TechCrunch in an email interview. “I realized that combining LLMs trained on code with my research on neural memory and reinforcement learning might allow us to build an AI software engineer that feels like a true colleague, not just a tool. This would be extraordinarily useful for companies and developers.”

Feb 6, 2023

Vectors of Cognitive AI: Attention

Posted by in categories: information science, robotics/AI

Panelists: Michael Graziano, Jonathan Cohen, Vasudev Lal, Joscha Bach.

The seminal contribution "Attention Is All You Need" (Vaswani et al., 2017), which introduced the Transformer architecture, triggered a small revolution in machine learning. Unlike convolutional neural networks, which construct each feature from a fixed neighborhood of signals, Transformers learn which data a feature on the next layer of a neural network should attend to. However, attention in neural networks is very different from the integrated attention of a human mind. In our minds, attention appears to be part of a top-down mechanism that actively creates a coherent, dynamic model of reality and plays a crucial role in planning, inference, reflection, and creative problem solving. Our consciousness appears to be involved in maintaining the control model of our attention.
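
For readers unfamiliar with the mechanism, here is a minimal sketch of the scaled dot-product attention at the core of the Transformer, following the textbook formulation rather than any particular library's API. Each output row is a mixture of the value vectors, with mixing weights computed from query-key similarity, which is what lets the model learn what to attend to.

```python
# Minimal scaled dot-product attention (single head, no masking).
import torch

def scaled_dot_product_attention(q, k, v):
    """q, k, v: tensors of shape (seq_len, d_model)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (seq, seq) query-key similarity
    weights = torch.softmax(scores, dim=-1)         # attention weights over positions
    return weights @ v                              # weighted sum of the values
```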

Feb 5, 2023

Generalist AI beyond Deep Learning

Posted by in categories: biological, information science, robotics/AI

Generative AI represents a big breakthrough towards models that can make sense of the world by dreaming up visual, textual and conceptual representations, and these models are becoming increasingly generalist. While current AI systems are based on scaling up deep learning algorithms with massive amounts of data and compute, biological systems seem to be able to make sense of the world using far fewer resources. This phenomenon of efficient, intelligent self-organization still eludes AI research, creating an exciting new frontier for the next wave of developments in the field. Our panelists will explore the potential of incorporating principles of intelligent self-organization from biology and cybernetics into technical systems as a way to move closer to general intelligence. Join in on this exciting discussion about the future of AI and how we can move beyond traditional approaches like deep learning!

This event is hosted and sponsored by Intel Labs as part of the Cognitive AI series.

Feb 4, 2023

Google’s ChatGPT rival to be released in coming ‘weeks and months’

Posted by in categories: information science, robotics/AI

"We are just at the beginning of our AI journey, and the best is yet to come," said Google CEO Sundar Pichai.

Search engine giant Google is looking to make its artificial intelligence (AI)-based large language models available as a "companion to search," CEO Sundar Pichai said during an earnings call on Thursday, Bloomberg reported.

A large language model (LLM) is a deep learning algorithm that can recognize and summarize content from massive datasets and use it to predict or generate text. OpenAI’s GPT-3 is one such LLM that powers the hugely popular chatbot, ChatGPT.
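
To make "predict or generate text" concrete, here is a toy sketch of the autoregressive loop an LLM runs at inference time: predict a distribution over the next token, append a token, repeat. The `toy_next_token_probs` function is invented for illustration and does not call any real model.

```python
# Toy autoregressive generation loop; a real LLM replaces the stand-in
# probability function with a neural network conditioned on the full context.
import numpy as np

def toy_next_token_probs(tokens, vocab_size=5):
    # Stand-in "model": produces a distribution that only depends on context length.
    rng = np.random.default_rng(len(tokens))
    p = rng.random(vocab_size)
    return p / p.sum()

def generate(prompt_tokens, steps=10):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probs = toy_next_token_probs(tokens)
        tokens.append(int(np.argmax(probs)))  # greedy decoding: pick the most likely token
    return tokens

print(generate([0, 1, 2]))
```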

Feb 4, 2023

Google tries to reassure investors on AI progress as ChatGPT breathes down its neck

Posted by in categories: business, information science, robotics/AI

Google worked to reassure investors and analysts on Thursday during its quarterly earnings call that it’s still a leader in developing AI. The company’s Q4 2022 results were highly anticipated as investors and the tech industry awaited Google’s response to the popularity of OpenAI’s ChatGPT, which has the potential to threaten its core business.

During the call, Google CEO Sundar Pichai talked about the company’s plans to make AI-based large language models (LLMs) like LaMDA available in the coming weeks and months. Pichai said users will soon be able to use large language models as a companion to search. An LLM, like ChatGPT, is a deep learning algorithm that can recognize, summarize and generate text and other content based on knowledge from enormous amounts of text data. Pichai said the models that users will soon be able to use are particularly good for composing, constructing and summarizing.

“Now that we can integrate more direct LLM-type experiences in Search, I think it will help us expand and serve new types of use cases, generative use cases,” Pichai said. “And so, I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.”

Feb 3, 2023

Will an AI Be the First to Discover Alien Life?

Posted by in categories: alien life, information science, robotics/AI

SETI, the search for extraterrestrial intelligence, is deploying machine-learning algorithms that filter out Earthly interference and spot signals humans might miss.
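
As a toy illustration of one classic interference-rejection idea used in radio SETI (not the machine-learning pipeline described in the article), a candidate signal can be required to appear only when the telescope points at the target and to vanish when it points away.

```python
# ON/OFF cadence check: a candidate narrowband signal is kept only if it is
# detected in every on-target scan and in no off-target scan; signals present
# in both are likely local radio-frequency interference.
def keep_candidate(on_scan_detections, off_scan_detections):
    """Each argument is a list of booleans, one per scan in the cadence."""
    return all(on_scan_detections) and not any(off_scan_detections)

# Example: seen in all three ON scans, absent in all OFF scans -> keep.
print(keep_candidate([True, True, True], [False, False, False]))  # True
```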
