BLOG

Archive for the ‘information science’ category: Page 20

Jul 19, 2024

Amazon proposes a new AI benchmark to measure RAG

Posted in categories: information science, robotics/AI

Choosing the right algorithm for RAG could yield more AI improvements than scaling to larger and larger language models, say AWS researchers.

Jul 19, 2024

Bioplausible Artificial Intelligence

Posted in categories: information science, robotics/AI

Listen to this episode from The Futurists on Spotify. Monica Anderson returns to The Futurists to share a radical concept: future AI models based on Darwinism. The “AI epistemologist” shares provocative opinions about where the current crop of generative AI systems went wrong, why generative AI is computationally expensive and energy-intensive, and why scaling AI with hardware will not achieve general intelligence. Instead she offers a radical alternative: a design for machine intelligence inspired by biology, and in particular by the Darwinian process of selection. Topics include: why generative AI is not a plagiarism machine; syntax versus semantics and why AI needs both; why there is only one algorithm for creativity; and how to construct an AI that consumes a million times less energy.

Jul 18, 2024

Visualization and Quantitative Evaluation of Functional Structures of Soybean Root Nodules via Synchrotron X-ray Imaging

Posted in categories: information science, robotics/AI

Published in Plant Phenomics. Click the link to read the full article for free:


The efficiency of N2-fixation in legume–rhizobia symbiosis is a function of root nodule activity. Nodules consist of 2 functionally important tissues: (a) a central infected zone (CIZ), colonized by rhizobia bacteria, which serves as the site of N2-fixation, and (b) vascular bundles (VBs), serving as conduits for the transport of water, nutrients, and fixed nitrogen compounds between the nodules and plant. A quantitative evaluation of these tissues is essential to unravel their functional importance in N2-fixation. Employing synchrotron-based X-ray microcomputed tomography (SR-μCT) at submicron resolutions, we obtained high-quality tomograms of fresh soybean root nodules in a non-invasive manner. A semi-automated segmentation algorithm was employed to generate 3-dimensional (3D) models of the internal root nodule structure of the CIZ and VBs, and their volumes were quantified based on the reconstructed 3D structures. Furthermore, synchrotron X-ray fluorescence imaging revealed a distinctive localization of Fe within CIZ tissue and Zn within VBs, allowing for their visualization in 2 dimensions. This study represents a pioneering application of the SR-μCT technique for volumetric quantification of CIZ and VB tissues in fresh, intact soybean root nodules. The proposed methods enable the exploitation of root nodules’ anatomical features as novel traits in breeding, aiming to enhance N2-fixation through improved root nodule activity.
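The volumetric quantification step described above reduces to counting labeled voxels in the segmented tomogram and scaling by the voxel volume. A minimal sketch, where the label values, array shape, and voxel size are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical labeled 3D tomogram: voxel values mark tissue type
# (0 = background, 1 = central infected zone, 2 = vascular bundle).
labels = np.zeros((100, 100, 100), dtype=np.uint8)
labels[20:80, 20:80, 20:80] = 1   # toy CIZ region
labels[45:55, 45:55, :] = 2       # toy VB running through the nodule

voxel_size_um = 0.65              # assumed submicron voxel edge length (µm)
voxel_volume = voxel_size_um ** 3 # volume of one voxel in µm³

# Tissue volume = number of voxels carrying that label × voxel volume
ciz_volume = np.count_nonzero(labels == 1) * voxel_volume
vb_volume = np.count_nonzero(labels == 2) * voxel_volume
print(f"CIZ volume: {ciz_volume:.0f} µm³, VB volume: {vb_volume:.0f} µm³")
```

In practice the `labels` array would come from the semi-automated segmentation of the SR-μCT reconstruction rather than being filled in by hand.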

Jul 17, 2024

Researchers ‘Crack the Code’ for Quelling Electromagnetic Interference

Posted in categories: information science, robotics/AI

Florida Atlantic Center for Connected Autonomy and Artificial Intelligence (CA-AI.fau.edu) researchers have “cracked the code” on interference when machines need to talk with each other—and people.

Electromagnetic waves make wireless connectivity possible but create a lot of unwanted chatter. Referred to as “electromagnetic interference,” this noisy byproduct of wireless communications poses formidable challenges in modern-day dense IoT and AI robotic environments. With the demand for lightning-fast data rates reaching unprecedented levels, the need to quell this interference is more pressing than ever.

Equipped with a breakthrough algorithmic solution, researchers from the FAU Center for Connected Autonomy and AI, within the College of Engineering and Computer Science, and the FAU Institute for Sensing and Embedded Network Systems Engineering (I-SENSE), have figured out a way to do just that.

Jul 15, 2024

The real long-term dangers of AI

Posted in categories: information science, mathematics, robotics/AI

Read & tell me what you think 🙂


There is a rift between near and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator, we should be focusing on the harm to the mind itself – to our very freedom to think.

There has been a growing debate between near and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and fixating on science fiction-like Terminator-style scenarios to distract regulators from the real, more near-term, issues, such as algorithmic bias and data privacy.

Continue reading “The real long-term dangers of AI” »

Jul 15, 2024

Cosmological constraints in symmetric teleparallel gravity with bulk viscosity

Posted in categories: information science, space

In this study, we explore the accelerated expansion of the universe within the framework of modified f(Q) gravity. The investigation focuses on the role of bulk viscosity in understanding the universe’s accelerated expansion. Specifically, a bulk viscous matter-dominated cosmological model is considered, with the bulk viscosity coefficient expressed as $\zeta = \zeta_0 \rho H^{-1} + \zeta_1 H$. We consider the power-law f(Q) function $f(Q) = \alpha Q^n$, where $\alpha$ and $n$ are arbitrary constants, and derive the analytical solutions for the field equations corresponding to a flat FLRW metric. Subsequently, we use the combined Cosmic Chronometers (CC) + Pantheon+SH0ES sample to estimate the free parameters of the obtained analytic solution.
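In the Eckart framework commonly used for bulk viscous cosmology, the viscosity enters the field equations through an effective pressure. A sketch under that standard assumption (not the paper’s full f(Q) derivation), using the stated viscosity coefficient:

```latex
% Effective pressure with bulk viscosity (Eckart frame, standard assumption)
\bar{p} = p - 3\zeta H
        = p - 3\left(\zeta_0 \rho H^{-1} + \zeta_1 H\right) H
        = p - 3\zeta_0 \rho - 3\zeta_1 H^{2}
```

The two viscosity terms thus act like a density-proportional correction and an $H^2$ correction to the pressure, which is what lets the model mimic accelerated expansion in a matter-dominated universe.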

Jul 15, 2024

New framework enables animal-like agile movements in four-legged robots

Posted in categories: information science, robotics/AI

Four-legged animals are innately capable of agile and adaptable movements, which allow them to move on a wide range of terrains. Over the past decades, roboticists worldwide have been trying to effectively reproduce these movements in quadrupedal (i.e., four-legged) robots.

Computational models trained via reinforcement learning have been found to achieve particularly promising results for enabling agile locomotion in quadruped robots. However, these models are typically trained in simulated environments and their performance sometimes declines when they are applied to real robots in real-world environments.

Alternative approaches to realizing agile quadruped locomotion utilize footage of moving animals, collected by cameras, as demonstrations to train controllers (i.e., algorithms for executing the movements of robots). This approach, dubbed “imitation learning,” was found to enable the reproduction of animal-like movements in some quadrupedal robots.

Jul 15, 2024

Integrating small-angle neutron scattering with machine learning enhances measurements of complex molecular structures

Posted in categories: chemistry, information science, nanotechnology, robotics/AI

Small-angle scattering (SAS) is a powerful technique for studying nanoscale samples. So far, however, its use in research has been held back by its inability to operate without some prior knowledge of a sample’s chemical composition. Through new research published in The European Physical Journal E, Eugen Anitas at the Bogoliubov Laboratory of Theoretical Physics in Dubna, Russia, presents a more advanced approach, which integrates SAS with machine learning algorithms.

Jul 13, 2024

Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time

Posted in categories: computing, information science, neuroscience

One of the variables in TD algorithms is called the reward prediction error (RPE): the difference between the discounted predicted reward at the current state and the actual reward plus the discounted predicted reward at the next state. TD learning theory gained traction in neuroscience once it was demonstrated that firing patterns of dopaminergic neurons in the ventral tegmental area (VTA) during reinforcement learning resemble the RPE [5,9,10].
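The RPE described above can be sketched as a one-line tabular TD(0) update; the states, reward, discount factor, and learning rate here are illustrative assumptions, not values from the paper:

```python
import numpy as np

gamma, alpha = 0.9, 0.1   # discount factor, learning rate (illustrative)
V = np.zeros(3)           # value estimates for three toy states

# One transition: state 0 -> state 1, with a reward of 1.0 at state 1
s, s_next, r = 0, 1, 1.0
rpe = r + gamma * V[s_next] - V[s]   # reward prediction error
V[s] += alpha * rpe                  # TD update moves V(s) toward the target
print(rpe, V[s])                     # first pass: rpe = 1.0, V[0] = 0.1
```

After repeated transitions the RPE shrinks toward zero as predictions improve, mirroring how dopaminergic responses shift away from fully predicted rewards.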

Implementations of TD using computer algorithms are straightforward, but are more complex when they are mapped onto plausible neural machinery [11,12,13]. Current implementations of neural TD assume a set of temporal basis functions [13,14], which are activated by external cues. For this assumption to hold, each possible external cue must activate a separate set of basis functions, and these basis functions must tile all possible learnable intervals between stimulus and reward.
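The fixed temporal basis-function assumption can be sketched as a value estimate formed from a weighted sum of functions tiling the cue-reward interval; the Gaussian shapes, the 2-second interval, and the hand-set weight below are illustrative assumptions, not the paper’s model:

```python
import numpy as np

def gaussian_basis(t, centers, width=0.1):
    """Activation of each basis function at time t (seconds after the cue)."""
    return np.exp(-((t - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(0.0, 2.0, 21)   # basis functions tiling a 2 s interval
w = np.zeros_like(centers)            # weights adjusted by TD-style updates

def value(t):
    """Value estimate at time t: weighted sum over the fixed basis set."""
    return float(w @ gaussian_basis(t, centers))

w[10] = 1.0        # suppose learning potentiated the basis centered at t = 1.0 s
print(value(1.0))  # the value estimate now peaks at the expected reward time
```

The scalability objection raised in the paper is visible here: every distinct cue would need its own `centers` array spanning every learnable interval, which is what the FLEX framework replaces with learned temporal representations.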

In this paper, we argue that these assumptions are unscalable and therefore implausible at a fundamental conceptual level, and demonstrate that some predictions of such algorithms are inconsistent with established experimental results. Instead, we propose that the temporal basis functions used by the brain are themselves learned. We call this theoretical framework Flexibly Learned Errors in Expected Reward, or FLEX for short. We also propose a biophysically plausible implementation of FLEX as a proof-of-concept model. We show that key predictions of this model are consistent with actual experimental results but are inconsistent with some key predictions of TD theory.

Jul 12, 2024

A New Large-Scale Simulation Platform to Train Robots on Everyday Tasks

Posted in categories: information science, internet, robotics/AI

The performance of artificial intelligence (AI) tools, including large computational models for natural language processing (NLP) and computer vision algorithms, has been rapidly improving over the past decades. One reason for this is that the datasets used to train these algorithms have grown exponentially, now comprising hundreds of thousands of images and texts, often gathered from the internet.

Training data for robot control and planning algorithms, on the other hand, remains far less abundant, in part because acquiring it is not as straightforward. Some computer scientists have thus been trying to create larger datasets and platforms that could be used to train computational models for a wide range of robotics applications.

In a recent paper, pre-published on the preprint server arXiv and set to be presented at the Robotics: Science and Systems 2024 conference, researchers at the University of Texas at Austin and NVIDIA Research introduced one such platform, called RoboCasa.
