BLOG

Archive for the ‘robotics/AI’ category

Jul 26, 2024

Creation of a deep learning algorithm to detect unexpected gravitational wave events

Posted in categories: information science, physics, robotics/AI

Starting with the direct detection of gravitational waves in 2015, scientists have relied on a bit of a kludge: they can only detect those waves that match theoretical predictions, which is rather the opposite of how science is usually done.
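
For context, the "kludge" in question is matched filtering: a candidate event is flagged only when the detector strain correlates strongly with a pre-computed theoretical waveform, so signals with no matching template go unseen. The sketch below is a toy illustration of that idea, not LIGO's actual pipeline; the chirp, noise level, and detection statistic are all assumptions made for the example.

```python
# Toy matched-filter sketch (illustrative only): detect a known template
# buried in noise via cross-correlation. Waveform and noise are synthetic.
import numpy as np

rng = np.random.default_rng(7)
fs, duration = 1024, 4.0                       # sample rate (Hz) and segment length (s)
t = np.arange(0, duration, 1 / fs)

# Toy "template": a short chirp-like waveform from a theoretical model.
template = np.sin(2 * np.pi * (30 + 40 * t) * t) * np.exp(-((t - 2.0) ** 2) / 0.1)
noise = rng.standard_normal(t.size)
data = noise + 0.5 * np.roll(template, 512)    # weak, time-shifted copy buried in noise

# Slide the template across the data; "detection" means a strong correlation peak.
corr = np.correlate(data, template, mode="same")
snr = np.abs(corr) / np.std(corr)
print(f"peak matched-filter statistic: {snr.max():.1f} near t ~ {t[snr.argmax()]:.2f} s")
# A real signal whose shape is missing from the template bank would leave no
# such peak, which is the gap an anomaly-detecting deep network aims to close.
```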

Jul 26, 2024

Optimization algorithm successfully computes the ground state of interacting quantum matter

Posted in categories: information science, quantum physics, robotics/AI

Over the past decades, computer scientists have developed various computing tools that could help to solve challenges in quantum physics. These include large-scale deep neural networks that can be trained to predict the ground states of quantum systems. This method is now referred to as neural quantum states (NQSs).
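
As a rough sketch of what a neural quantum state is, the example below uses a restricted-Boltzmann-machine-style ansatz for a tiny transverse-field Ising chain and minimizes its variational energy. The system size, couplings, brute-force enumeration, and finite-difference optimizer are toy assumptions chosen for readability; practical NQS calculations use Monte Carlo sampling and analytic gradients on far larger systems.

```python
# Toy neural quantum state: RBM-style amplitudes for a tiny transverse-field
# Ising chain, with the variational energy minimized by crude gradient descent.
import itertools
import numpy as np

N, M = 4, 4            # visible spins and hidden units (toy sizes)
J, h = 1.0, 0.5        # Ising coupling and transverse field
rng = np.random.default_rng(0)
theta = 0.01 * rng.standard_normal(N + M + N * M)   # parameters [a, b, W]

spins = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N configurations

def log_psi(theta, s):
    """RBM log-amplitude: a.s + sum_j log 2cosh(b_j + W_j . s)."""
    a, b, W = theta[:N], theta[N:N + M], theta[N + M:].reshape(M, N)
    return s @ a + np.sum(np.log(2.0 * np.cosh(b + W @ s)))

def energy(theta):
    """Variational energy <psi|H|psi>/<psi|psi> by exact enumeration."""
    logs = np.array([log_psi(theta, s) for s in spins])
    amps = np.exp(logs - logs.max())                      # shift to avoid overflow
    diag = -J * np.sum(spins * np.roll(spins, -1, axis=1), axis=1)  # -J sum_i s_i s_{i+1}
    num = np.sum(amps * diag * amps)
    for i in range(N):                                    # off-diagonal term: -h sum_i sigma^x_i
        flipped = spins.copy()
        flipped[:, i] *= -1
        amps_flip = np.exp(np.array([log_psi(theta, s) for s in flipped]) - logs.max())
        num -= h * np.sum(amps * amps_flip)
    return num / np.sum(amps ** 2)

# Crude training loop: finite-difference gradient descent on the energy.
eps, lr = 1e-4, 0.05
for step in range(101):
    grad = np.array([(energy(theta + eps * d) - energy(theta - eps * d)) / (2 * eps)
                     for d in np.eye(theta.size)])
    theta -= lr * grad
    if step % 25 == 0:
        print(f"step {step:3d}  variational energy = {energy(theta):.4f}")
```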

Jul 26, 2024

Ongoing Cyberattack Targets Exposed Selenium Grid Services for Crypto Mining

Posted in categories: cybercrime/malcode, robotics/AI

Discover how the SeleniumGreed campaign exploits exposed Selenium Grid services for crypto mining, posing risks to automated testing frameworks.

Jul 26, 2024

A Complete No-Brainer: ReRAM for Neuromorphic Computing

Posted in category: robotics/AI

In the last 60 years, technology has evolved at such an exponentially fast rate that we are now regularly conversing with AI-based chatbots, and that same OpenAI technology has been put into a humanoid robot. It's truly amazing to see this rapid development. Continued advancement […].

Jul 25, 2024

Tony Blair, Prophet of the Inevitable, Embraces AI

Posted in category: robotics/AI

He pushed the British left to accept capitalism. Now he’s asking the world to make peace with artificial intelligence.

Jul 25, 2024

OpenAI announces SearchGPT, its AI-powered search engine

Posted in category: robotics/AI

SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

It’s the start of what could become a meaningful threat to Google, which has rushed to bake in AI features across its search engine, fearing that users will flock to competing products that offer the tools first. It also puts OpenAI in more direct competition with the startup Perplexity, which bills itself as an AI “answer” engine. Perplexity has recently come under criticism for an AI summaries feature that publishers claimed was directly ripping off their work.

Jul 25, 2024

AI could enhance almost two-thirds of British jobs, claims Google

Posted in categories: employment, robotics/AI

Research commissioned by Google estimates 31% of jobs would be insulated from AI and 61% radically transformed by it.

Jul 25, 2024

Network properties determine neural network performance

Posted in categories: information science, mapping, mathematics, mobile phones, robotics/AI, transportation

Machine learning influences numerous aspects of modern society, empowers new technologies, from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanisms. The difficulty is rooted in the many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps the neural network's performance to the network characteristics of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric to universally capture a model's generalization capability on a downstream task and predict model performance using only early training results. The numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
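
As a loose illustration of the underlying idea, predicting a model's final performance from only its early training results, the sketch below fits a saturating power law to synthetic early validation-accuracy curves and ranks candidates by the extrapolated plateau. This is not the paper's neural capacitance metric or its line-graph formulation; the curve model, the candidate names, and the data are assumptions made for the example.

```python
# Illustrative early model selection: extrapolate synthetic early-epoch
# validation curves with a power law and rank models by the fitted plateau.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    """Saturating learning curve: accuracy approaches a as epoch t grows."""
    return a - b * t ** (-c)

def predicted_final_accuracy(epochs, acc):
    """Fit the early curve and return the extrapolated plateau `a`."""
    p0 = [acc[-1] + 0.05, 0.5, 0.5]                # rough initial guess
    (a, b, c), _ = curve_fit(power_law, epochs, acc, p0=p0, maxfev=10000)
    return a

rng = np.random.default_rng(1)
epochs = np.arange(1, 11)                          # only the first 10 epochs are observed
candidates = {                                     # synthetic early accuracy curves
    "model_A": 0.85 - 0.30 * epochs ** -0.7 + 0.005 * rng.standard_normal(10),
    "model_B": 0.90 - 0.45 * epochs ** -0.5 + 0.005 * rng.standard_normal(10),
    "model_C": 0.80 - 0.20 * epochs ** -0.9 + 0.005 * rng.standard_normal(10),
}

ranking = sorted(((predicted_final_accuracy(epochs, acc), name)
                  for name, acc in candidates.items()), reverse=True)
for score, name in ranking:
    print(f"{name}: predicted plateau accuracy ~ {score:.3f}")
```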

Jul 25, 2024

Cancer Drug Shows Promise for Autism Cognitive Function

Posted in categories: biotech/medical, robotics/AI

Summary: A new experimental cancer drug could ease cognitive difficulties for those with Rett syndrome, a rare autism-linked disorder, by enhancing brain cell functions. The drug, ADH-503, improves the activity of microglia, which are crucial for maintaining neural networks.

Researchers found that healthy microglia restored synapse function in brain organoids mimicking Rett syndrome. This breakthrough suggests potential therapies for Rett syndrome and other neurological conditions.

Jul 25, 2024

Using AI to train AI: Model collapse could be coming for LLMs, say researchers

Posted in categories: internet, mathematics, robotics/AI

Using AI-generated datasets to train future generations of machine learning models may pollute their output, a concept known as model collapse, according to a new paper published in Nature. The research shows that within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.

Generative AI tools such as large language models (LLMs) have grown in popularity and have been primarily trained using human-generated inputs. However, as these AI models continue to proliferate across the Internet, computer-generated content may be used to train other AI models, or themselves, in a recursive loop.

Ilia Shumailov and colleagues present mathematical models to illustrate how AI models may experience model collapse. The authors demonstrate that an AI may overlook certain outputs (for example, less common lines of text) in training data, causing it to train itself on only a portion of the dataset.
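
A toy way to see that mechanism: the sketch below repeatedly fits a categorical "language model" (empirical token frequencies) to text sampled from the previous generation's model. Rare tokens that happen to miss a finite sample get zero probability and can never return, so the effective vocabulary shrinks generation after generation. The Zipf-like vocabulary and sample sizes are illustrative assumptions, not the paper's LLM experiments.

```python
# Toy model-collapse simulation: each generation trains (fits frequencies)
# on data generated by the previous generation's model; rare tokens vanish.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_tokens, n_generations = 1000, 5000, 10

# "Human" data follows a heavy-tailed (Zipf-like) token distribution.
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()

for gen in range(n_generations):
    sample = rng.choice(vocab_size, size=n_tokens, p=probs)   # this generation's training set
    counts = np.bincount(sample, minlength=vocab_size)
    print(f"gen {gen}: distinct tokens seen = {np.count_nonzero(counts)} / {vocab_size}")
    probs = counts / counts.sum()                              # next model = empirical frequencies
```

In this toy setup the loss is irreversible: once a token's estimated probability hits zero, no later generation trained only on generated text can bring it back, and larger samples merely slow the shrinkage.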
