BLOG

Archive for the ‘information science’ category: Page 9

Nov 21, 2023

New computer code for mechanics of tissues and cells in three dimensions

Posted by in categories: biological, genetics, information science, mathematics, supercomputing

Biological materials are made of individual components, including tiny motors that convert fuel into motion. These motors create patterns of movement, and the material shapes itself through coherent flows sustained by constant energy consumption. Such continuously driven materials are called active matter.

The mechanics of cells and tissues can be described by active matter theory, a scientific framework for understanding the shape, flow, and form of living materials. The theory is expressed as a set of challenging mathematical equations.

Scientists from the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, the Center for Systems Biology Dresden (CSBD), and the TU Dresden have now developed an algorithm, implemented in an open-source supercomputer code, that can for the first time solve the equations of active matter theory in realistic scenarios.
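The article does not reproduce the equations themselves, but a minimal formulation commonly used in active matter (active gel) theory, assumed here purely for illustration, couples force balance to an actively generated stress:

$$
\nabla \cdot \boldsymbol{\sigma} = 0, \qquad
\boldsymbol{\sigma} = \eta\left(\nabla \mathbf{v} + (\nabla \mathbf{v})^{\mathsf{T}}\right) - P\,\mathbb{1}
+ \zeta\,\Delta\mu\left(\mathbf{p}\otimes\mathbf{p} - \tfrac{1}{3}\mathbb{1}\right), \qquad
\nabla \cdot \mathbf{v} = 0
$$

Here $\mathbf{v}$ is the flow velocity, $P$ the pressure, $\eta$ the viscosity, $\mathbf{p}$ the local orientation of the active components, and $\zeta\,\Delta\mu$ the strength of the stress generated by fuel consumption. Solving such coupled, nonlinear equations on realistic three-dimensional cell and tissue geometries is the kind of problem the new open-source code is built to handle.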

Nov 21, 2023

A scientist explains an approaching milestone marking the arrival of quantum computers

Posted by in categories: computing, encryption, information science, quantum physics

Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.

Nov 20, 2023

MIT Researchers Introduce MechGPT: A Language-Based Pioneer Bridging Scales, Disciplines, and Modalities in Mechanics and Materials Modeling

Posted by in categories: information science, materials

Researchers in materials science confront a formidable challenge: efficiently distilling essential insights from densely packed scientific texts. The task involves navigating complex content and generating coherent question-answer pairs that capture the essence of the material.

Current methodologies in this domain often lean on general-purpose language models for information extraction. However, these approaches struggle with text refinement and with accurately incorporating equations. In response, a team of MIT researchers introduced MechGPT, a novel model grounded in a pretrained language model. The approach employs a two-step process, using a general-purpose language model to formulate insightful question-answer pairs. Beyond mere extraction, MechGPT enhances the clarity of key facts.

MechGPT's training process is implemented in PyTorch within the Hugging Face ecosystem. Based on the Llama 2 transformer architecture, the model has 40 transformer layers and uses rotary positional embeddings to support extended context lengths. With a paged 32-bit AdamW optimizer, training reaches a loss of approximately 0.05. The researchers apply Low-Rank Adaptation (LoRA) during fine-tuning: additional trainable layers are added while the original pretrained weights stay frozen, preventing the model from erasing its initial knowledge base. The result is higher memory efficiency and faster training throughput.
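As a rough illustration of that recipe, here is a minimal LoRA fine-tuning sketch in the Hugging Face ecosystem. The checkpoint name, LoRA ranks, target modules, and dataset variable are illustrative assumptions on our part, not the actual MechGPT configuration.

```python
# Minimal sketch of LoRA fine-tuning in the Hugging Face ecosystem.
# Checkpoint, LoRA hyperparameters, and dataset are assumptions, not MechGPT's real setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint (Llama 2 family, 40 layers)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# LoRA: train small adapter matrices while the pretrained weights stay frozen,
# preserving the base model's knowledge and reducing memory use.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

args = TrainingArguments(
    output_dir="mechgpt-sketch",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    num_train_epochs=3,
    optim="paged_adamw_32bit",  # the paged 32-bit AdamW optimizer mentioned above
    bf16=True,
)

qa_pairs = ...  # placeholder: a tokenized question-answer dataset, prepared separately
trainer = Trainer(model=model, args=args, train_dataset=qa_pairs)
trainer.train()
```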

Nov 20, 2023

Researchers Refute a Widespread Belief About Online Algorithms

Posted by in categories: computing, information science

“It’s really simple to define this problem,” said Marcin Bieńkowski, an algorithms researcher at the University of Wrocław in Poland. But it “turns out to be bizarrely difficult.” Since researchers began attacking the k-server problem in the late 1980s, they have wondered exactly how well online algorithms can handle the task.

Over the decades, researchers began to believe there’s a certain level of algorithmic performance you can always achieve for the k-server problem. So no matter what version of the problem you’re dealing with, there’ll be an algorithm that reaches this goal. But in a paper first published online last November, three computer scientists showed that this isn’t always achievable. In some cases, every algorithm falls short.
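For readers unfamiliar with the setup, here is a toy illustration of the k-server problem on the real line: requests arrive one at a time and must each be served immediately by moving one of k servers to the requested point, paying the distance moved. The greedy rule below (always move the closest server) is not the algorithm from the paper and is known to perform poorly in the worst case; it is only meant to show the online decision structure.

```python
# Toy k-server instance on the real line with a naive greedy online rule.

def greedy_k_server(initial_positions, requests):
    servers = list(initial_positions)
    total_cost = 0.0
    for r in requests:
        # Online decision: pick the closest server, with no knowledge of future requests.
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total_cost += abs(servers[i] - r)
        servers[i] = r
    return total_cost

# Two servers starting at 0 and 10; alternating requests at 4 and 6 make greedy
# drag a single server back and forth (cost 14), while an offline optimum would
# park one server at each point after the first two moves (cost 8).
print(greedy_k_server([0, 10], [4, 6, 4, 6, 4, 6]))
```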

Nov 20, 2023

UC Berkeley Researchers Propose an Artificial Intelligence Algorithm that Achieves Zero-Shot Acquisition of Goal-Directed Dialogue Agents

Posted by in categories: information science, policy, robotics/AI

Large Language Models (LLMs) have shown great capabilities in various natural language tasks such as text summarization, question answering, and code generation, emerging as a powerful solution to many real-world problems. One area where these models struggle, though, is goal-directed conversation, where they must accomplish an objective over the course of a dialogue, for example acting as an effective travel agent that provides tailored travel plans. In practice, they tend to give verbose, non-personalized responses.

Models trained with supervised fine-tuning or single-step reinforcement learning (RL) commonly struggle with such tasks because they are not optimized for the overall outcome of a conversation spanning multiple interactions. They also handle uncertainty in such conversations poorly. In this paper, researchers from UC Berkeley explore a new method to adapt LLMs with RL for goal-directed dialogues. Their contributions include an optimized zero-shot algorithm and a novel system called the imagination engine (IE), which generates task-relevant and diverse questions to train downstream agents.

Since the IE cannot produce effective agents by itself, the researchers use an LLM to generate possible scenarios. To make an agent effective at achieving desired outcomes, multi-step reinforcement learning is needed to determine the optimal strategy. The researchers make one modification to this approach: instead of using any on-policy samples, they use offline value-based RL to learn a policy from the synthetic data itself.
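A toy sketch of that two-stage recipe, simplified by us rather than taken from the authors' code, looks like this: an "imagination engine" produces synthetic dialogue rollouts, and an offline, value-based learner (here, tabular Q-learning over the logged transitions only) extracts a policy without any further environment interaction. A real system would use an LLM for the first stage and a neural Q-function for the second.

```python
# Toy offline value-based RL on synthetic ("imagined") dialogue rollouts.
from collections import defaultdict

def imagination_engine():
    # Stand-in for LLM-generated dialogues: each rollout is a list of
    # (state, action, reward, next_state) transitions. "ask" gathers the
    # traveler's preferences; "recommend" ends the dialogue.
    return [
        [("start", "recommend", 0.0, "done")],                # generic plan, low reward
        [("start", "ask", 0.0, "informed"),
         ("informed", "recommend", 1.0, "done")],             # tailored plan, high reward
    ]

def offline_q_learning(rollouts, gamma=0.95, lr=0.5, epochs=200):
    Q = defaultdict(float)
    for _ in range(epochs):
        for rollout in rollouts:
            for s, a, r, s2 in rollout:
                best_next = max((Q[(s2, a2)] for a2 in ("ask", "recommend")), default=0.0)
                target = r + (0.0 if s2 == "done" else gamma * best_next)
                Q[(s, a)] += lr * (target - Q[(s, a)])
    return Q

Q = offline_q_learning(imagination_engine())
# The learned values favor asking a clarifying question before recommending.
print(Q[("start", "ask")], ">", Q[("start", "recommend")])
```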

Nov 19, 2023

The origins of the black hole information paradox

Posted by in categories: cosmology, information science, mathematics, quantum physics

While physics tells us that information can neither be created nor destroyed (if it could, the entire raison d'être of physics, namely predicting future events and identifying the causes of existing situations, would be impossible), it does not demand that the information be accessible. For decades physicists assumed that the information that fell into a black hole is still there, still existing, just locked away from view.

This was fine until the 1970s, when Stephen Hawking discovered the secret complexities of the event horizon. It turns out that these dark beasts were not as simple as we had been led to believe, and that the event horizons of black holes are one of the few places in the entire cosmos where gravity meets quantum mechanics in a manifest way.

Nov 19, 2023

Could Photosynthesis Blossom Into Quantum Computing Technology?

Posted by in categories: biotech/medical, computing, information science, quantum physics

As we learned in middle school science classes, inside this common variety of greens—and most other plants—are intricate circuits of biological machinery that convert sunlight into usable energy, a process better known as photosynthesis. These processes keep plants alive. Boston University researchers have a vision for how they could also be harnessed into programmable units that would enable scientists to construct the first practical quantum computer.

A quantum computer would be able to perform calculations much faster than the classical computers that we use today. The laptop sitting on your desk is built on units that can represent 0 or 1, but never both or a combination of those states at the same time. While a classical computer can run only one analysis at a time, a quantum computer could run a billion or more versions of the same equation at the same time, increasing the ability of computers to better model extremely complex systems—like weather patterns or how cancer will spread through tissue—and speeding up how quickly huge datasets can be analyzed.
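The "billion versions at once" intuition comes from superposition: a register of n qubits is described by 2^n complex amplitudes, all evolving together. A minimal numerical illustration of the scaling, unrelated to the photosynthetic hardware discussed in the article, is below.

```python
# Why qubit counts scale so fast: n qubits are described by 2**n amplitudes,
# and applying a Hadamard to every qubit puts the register into an equal
# superposition over all 2**n basis states. (Illustrative only.)
import numpy as np

n = 30                                  # 30 qubits
num_states = 2 ** n                     # ~1.07 billion basis states
amplitude = 1 / np.sqrt(num_states)     # amplitude of each state in the uniform superposition
print(f"{num_states:,} simultaneous basis states, each with amplitude {amplitude:.2e}")
```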

The idea of using photosynthetic molecules from, say, a spinach leaf to power quantum computing services might sound like science fiction. It’s not. It is “on the fringe of possibilities,” says David Coker, a College of Arts & Sciences professor of chemistry and a College of Engineering professor of materials science and engineering. Coker and collaborators at BU and Princeton University are using computer simulations and experiments to provide proofs of concept that photosynthetic circuits could unlock new technological capabilities. Their work is showing promising early results.

Nov 18, 2023

This company is building AI for African languages

Posted by in categories: information science, internet, robotics/AI

The AI startups working to build products that support African languages often get ignored by investors, says Hadgu, owing to the small size of the potential market, a lack of political support, and poor internet infrastructure. However, Hadgu says small African startups including Lesan, GhanaNLP, and Lelapa AI are playing an important role: “Big tech companies do not give focus to our languages,” he says, “but we cannot wait for them.”

Lelapa AI is trying to create a new paradigm for AI models in Africa, says Vukosi Marivate, a data scientist on the company’s AI team. Instead of tapping into the internet alone to collect data to train its model, like companies in the West, Lelapa AI works both online and offline with linguists and local communities to gather data, annotate it, and identify use cases where the tool might be problematic.

Bonaventure Dossou, a researcher at Lelapa AI specializing in natural-language processing (NLP), says that working with linguists enables them to develop a model that’s context-specific and culturally relevant. “Embedding cultural sensitivity and linguistic perspectives makes the technological system better,” says Dossou. For example, the Lelapa AI team built sentiment and tone analysis algorithms tailored to specific languages.

Nov 18, 2023

These noise-canceling headphones can filter specific sounds on command, thanks to deep learning

Posted by in categories: information science, mobile phones, robotics/AI, transportation

Scientists have built noise-canceling headphones that filter out specific types of sound in real time — such as birds chirping or car horns blaring — thanks to a deep learning artificial intelligence (AI) algorithm.

The system, which researchers at the University of Washington dub “semantic hearing,” streams all sounds captured by the headphones to a smartphone, which cancels everything before letting wearers pick the specific types of audio they’d like to hear. They described the prototype in a paper published Oct. 29 in the ACM Digital Library.
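A highly simplified sketch of that idea: cancel everything by default, then pass back through only the sound classes the wearer selected. The real system runs a neural network on the phone to extract target sounds from the mixture; the frame classifier below is a stub standing in for that model and is our assumption, not the authors' implementation.

```python
# Sketch: per-frame "semantic" gating of audio. Everything is silenced unless
# its class is in the wearer's selected set.
import numpy as np

SELECTED = {"birds"}          # classes the wearer wants to hear

def classify_frame(frame: np.ndarray) -> str:
    # Placeholder: a real system would run a trained sound-event model here.
    return "birds" if frame.std() > 0.5 else "traffic"

def semantic_filter(frames: list[np.ndarray]) -> list[np.ndarray]:
    out = []
    for frame in frames:
        label = classify_frame(frame)
        # Cancel (silence) every frame whose class was not selected.
        out.append(frame if label in SELECTED else np.zeros_like(frame))
    return out

# Example: two 10 ms frames of fake audio at 48 kHz.
frames = [np.random.randn(480), 0.1 * np.random.randn(480)]
filtered = semantic_filter(frames)
print([f.any() for f in filtered])   # [True, False]: only the "birds" frame survives
```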

Nov 18, 2023

LHC physicists can’t save them all

Posted by in categories: information science, particle physics, robotics/AI

In 2010, Mike Williams traveled from London to Amsterdam for a physics workshop. Everyone there was abuzz with the possibilities—and possible drawbacks—of machine learning, which Williams had recently proposed incorporating into the LHCb experiment. Williams, now a professor of physics and leader of an experimental group at the Massachusetts Institute of Technology, left the workshop motivated to make it work.

LHCb is one of the four main experiments at the Large Hadron Collider at CERN. Every second, inside the detectors for each of those experiments, proton beams cross 40 million times, generating hundreds of millions of proton collisions, each of which produces an array of particles flying off in different directions. Williams wanted to use machine learning to improve LHCb’s trigger system, a set of decision-making algorithms programmed to recognize and save only collisions that display interesting signals—and discard the rest.

Of the 40 million crossings, or events, that happen each second in the ATLAS and CMS detectors—the two largest particle detectors at the LHC—data from only a few thousand are saved, says Tae Min Hong, an associate professor of physics and astronomy at the University of Pittsburgh and a member of the ATLAS collaboration. “Our job in the trigger system is to never throw away anything that could be important,” he says.
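At its core, a trigger decision is a score-and-threshold problem: rate each event's interestingness and keep only those above a cut chosen so the save rate fits the storage budget. The toy sketch below uses a placeholder score; the real LHCb, ATLAS, and CMS triggers combine momentum thresholds, displaced-vertex criteria, and machine-learning classifiers evaluated within microseconds.

```python
# Toy trigger decision: keep only events whose score exceeds a threshold
# set by the storage budget. The score is a stand-in for real trigger logic.
import random

SAVE_BUDGET = 0.0001          # keep roughly 1 event in 10,000

def interestingness(event) -> float:
    # Placeholder score; a real trigger evaluates physics selections and classifiers.
    return event["score"]

events = [{"id": i, "score": random.random() ** 4} for i in range(1_000_000)]
threshold = sorted(e["score"] for e in events)[int(len(events) * (1 - SAVE_BUDGET))]
saved = [e for e in events if interestingness(e) > threshold]
print(f"kept {len(saved)} of {len(events)} events")
```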

Page 9 of 280