BLOG

Archive for the ‘information science’ category: Page 12

Nov 2, 2023

Late not great—imperfect timekeeping places significant limit on quantum computers

Posted by in categories: computing, information science, mobile phones, quantum physics

New research from a consortium of quantum physicists, led by Trinity College Dublin’s Dr. Mark Mitchison, shows that imperfect timekeeping places a fundamental limit on quantum computers and their applications. The team claims that even tiny timing errors accumulate to significantly impact any large-scale algorithm, posing another problem that must eventually be solved if quantum computers are to fulfill the lofty aspirations that society has for them.

The paper is published in the journal Physical Review Letters.

It is difficult to imagine modern life without clocks to help organize our daily schedules; with a digital clock in every person’s smartphone or watch, we take precise timekeeping for granted—although that doesn’t stop people from being late.
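As a back-of-the-envelope illustration (my own toy model, not the one in the paper), suppose each gate in a circuit picks up a small Gaussian phase error from clock jitter. The errors add, so the spread of the total error grows with the square root of the gate count, and the average fidelity decays accordingly:

```python
import math
import random

def avg_fidelity(n_gates, sigma, trials=50000, seed=1):
    """Monte Carlo estimate of the average fidelity after n_gates
    phase rotations, each with independent Gaussian timing jitter of
    standard deviation sigma (radians). The summed phase error is
    Gaussian with standard deviation sigma * sqrt(n_gates)."""
    rng = random.Random(seed)
    spread = sigma * math.sqrt(n_gates)
    # fidelity of a state rotated by phase error e is cos(e/2)^2
    return sum(math.cos(rng.gauss(0.0, spread) / 2) ** 2
               for _ in range(trials)) / trials

for n in (10, 100, 1000, 10000):
    print(n, round(avg_fidelity(n, sigma=0.01), 4))
```

With a jitter of 0.01 rad per gate, fidelity is nearly perfect at 10 gates but degrades noticeably by 10,000, a cartoon of how tiny timing errors add up at scale.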

Oct 31, 2023

Like Humans, This Breakthrough AI Makes Concepts Out of the Words It Learns

Posted by in categories: information science, robotics/AI

Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”

Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.

When pitted against adult humans in a language task for generalization, the model matched their performance. It also beat GPT-4, the AI algorithm behind ChatGPT.

Oct 30, 2023

Research claims novel algorithm can exactly compute information rate for any system

Posted by in category: information science

75 years ago, Claude Shannon, the “father of information theory,” showed how information transmission can be quantified mathematically via the so-called information transmission rate.

Yet, until now this quantity could only be computed approximately. AMOLF researchers Manuel Reinhardt and Pieter Rein ten Wolde, together with a collaborator from Vienna, have now developed a simulation technique that—for the first time—makes it possible to compute the information rate exactly for any system. The researchers have published their results in the journal Physical Review X.

To calculate the information rate exactly, the AMOLF researchers developed a novel simulation algorithm. It works by representing a complex physical system as an interconnected network that transmits the information via connections between its nodes. The researchers hypothesized that by looking at all the different paths the information can take through this network, it should be possible to obtain the information rate exactly.
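For reference, the information transmission rate in question is the long-time limit of the mutual information between input and output trajectories per unit time. This is the standard textbook definition, not notation taken from the paper itself:

```latex
R = \lim_{T \to \infty} \frac{1}{T}\, I\!\left(X_{[0,T]};\, Y_{[0,T]}\right),
\qquad
I(X;Y) = \left\langle \ln \frac{\mathcal{P}[x,y]}{\mathcal{P}[x]\,\mathcal{P}[y]} \right\rangle ,
```

where the average runs over joint input–output trajectories. It is this path-space average that the new simulation algorithm evaluates exactly by summing over routes through the network.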

Oct 28, 2023

Memes, Genes, and Brain Viruses

Posted by in categories: biotech/medical, information science, robotics/AI

Go to https://brilliant.org/EmergentGarden to get a 30-day free trial + the first 200 people will get 20% off their annual subscription.

Continue reading “Memes, Genes, and Brain Viruses” »

Oct 27, 2023

New AI Model Counters Bias In Data With A DEI Lens

Posted by in categories: information science, robotics/AI

AI has exploded onto the scene in recent years, bringing both promise and peril. Systems like ChatGPT and Stable Diffusion showcase the tremendous potential of AI to enhance productivity and creativity. Yet they also reveal a dark reality: the algorithms often reflect the same systemic prejudices and societal biases present in their training data.

While the corporate world has quickly capitalized on integrating generative AI systems, many experts urge caution, considering the critical flaws in how AI represents diversity. Whether it’s text generators reinforcing stereotypes or facial recognition exhibiting racial bias, the ethical challenges cannot be ignored.


Continue reading “New AI Model Counters Bias In Data With A DEI Lens” »

Oct 27, 2023

AI-ready architecture doubles power with FeFETs

Posted by in categories: drones, information science, robotics/AI

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature Communications (“First demonstration of in-memory computing crossbar using multi-level Cell FeFET”), the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.

  • The new architecture enables both data storage and calculations to be carried out on the same transistors, boosting efficiency and reducing heat.
  • The chip performs at 885 TOPS/W, significantly outperforming current CMOS chips which operate in the range of 10–20 TOPS/W, making it ideal for applications like real-time drone calculations, generative AI, and deep learning algorithms.
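To put the TOPS/W figures in perspective, inverting them gives the energy cost per operation. This is a quick unit conversion of my own, not a calculation from the article:

```python
def joules_per_op(tops_per_watt: float) -> float:
    """Convert an efficiency in TOPS/W to energy per operation.
    1 TOPS/W = 1e12 operations per second per watt, which is the
    same as 1e12 operations per joule."""
    return 1.0 / (tops_per_watt * 1e12)

fefet = joules_per_op(885)   # reported FeFET figure
cmos = joules_per_op(15)     # midpoint of the 10-20 TOPS/W CMOS range
print(f"FeFET: {fefet:.2e} J/op, CMOS: {cmos:.2e} J/op, "
      f"ratio: {cmos / fefet:.0f}x")
```

At the quoted numbers, the FeFET design spends roughly 59 times less energy per operation than a mid-range CMOS chip.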
Oct 25, 2023

Atom Computing is the first to announce a 1,000+ qubit quantum computer

Posted by in categories: computing, information science, particle physics, quantum physics

How many qubits does a quantum computer need, and how widely accessible must it be, before we truly have something sci-fi worthy?


Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180 qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had only built one prior system based on neutral atom qubits—a system that operated using only 100 qubits.

The error rate for individual qubit operations is high enough that it won’t be possible to run an algorithm that relies on the full qubit count without it failing due to an error. But it does back up the company’s claims that its technology can scale rapidly and provides a testbed for work on quantum error correction. And, for smaller algorithms, the company says it’ll simply run multiple instances in parallel to boost the chance of returning the right answer.
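The parallel-instances trick rests on simple probability: if a single run returns the correct answer with probability p, then at least one of n independent runs succeeds with probability 1 − (1 − p)^n. This sketch assumes, as a simplification, that a correct answer can be recognized or dominates a vote:

```python
def p_at_least_one_success(p_single: float, n_runs: int) -> float:
    """Probability that at least one of n independent runs returns
    the correct answer, given each run succeeds with p_single."""
    return 1.0 - (1.0 - p_single) ** n_runs

# e.g. a noisy circuit that succeeds only 10% of the time per run
for n in (1, 10, 50):
    print(n, round(p_at_least_one_success(0.10, n), 3))
```

Even a 10%-reliable circuit exceeds a 99% chance of producing at least one correct answer within 50 parallel runs.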

Continue reading “Atom Computing is the first to announce a 1,000+ qubit quantum computer” »

Oct 24, 2023

Eureka: With GPT-4 overseeing training, robots can learn much faster

Posted by in categories: information science, robotics/AI, space

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model for designing training goals (called “reward functions”) to enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run through trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
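To make the idea concrete, here is a hypothetical sketch of the kind of dense reward function a language model might generate for an in-hand reorientation task. The function name, arguments, and weights are all illustrative assumptions, not code from Eureka:

```python
import numpy as np

def reward_pen_spin(pen_quat, target_quat, pen_angvel, action):
    """Illustrative dense reward for an in-hand pen-reorientation task,
    in the style of the reward functions Eureka asks GPT-4 to write.
    All arguments are numpy arrays; names and weights are hypothetical."""
    # orientation term: closer alignment of pen to target -> higher reward
    align = np.abs(np.sum(pen_quat * target_quat, axis=-1))  # |<q1, q2>|
    orient_reward = 2.0 * align - 1.0
    # penalize excessive angular velocity and large control actions
    vel_penalty = 0.01 * np.linalg.norm(pen_angvel, axis=-1)
    action_penalty = 0.001 * np.sum(action ** 2, axis=-1)
    return orient_reward - vel_penalty - action_penalty
```

Candidate functions like this can be scored in bulk inside a parallel simulator, with the best-performing candidates fed back to the language model for refinement.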

Oct 24, 2023

Finding flows of a Navier–Stokes fluid through quantum computing

Posted by in categories: computing, information science, quantum physics

This looks awesome :3.


There is great interest in using quantum computers to efficiently simulate a quantum system’s dynamics as existing classical computers cannot do this. Little attention, however, has been given to quantum simulation of a classical nonlinear continuum system such as a viscous fluid even though this too is hard for classical computers. Such fluids obey the Navier–Stokes nonlinear partial differential equations, whose solution is essential to the aerospace industry, weather forecasting, plasma magneto-hydrodynamics, and astrophysics. Here we present a quantum algorithm for solving the Navier–Stokes equations. We test the algorithm by using it to find the steady-state inviscid, compressible flow through a convergent-divergent nozzle when a shockwave is (is not) present.
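For reference, the compressible Navier–Stokes equations the abstract refers to are, in one standard form (continuity plus momentum, with dynamic viscosity \(\mu\) under the Stokes hypothesis):

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{u}) = 0,
\qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
= -\nabla p + \mu \nabla^2 \mathbf{u} + \tfrac{\mu}{3}\,\nabla(\nabla\cdot\mathbf{u}).
```

The nonlinear advection term \((\mathbf{u}\cdot\nabla)\mathbf{u}\) is what makes these equations hard for classical solvers and an interesting target for quantum algorithms.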

Oct 24, 2023

Artificial intelligence predicts the future of artificial intelligence research

Posted by in categories: information science, robotics/AI

It has become nearly impossible for human researchers to keep track of the overwhelming abundance of scientific publications in the field of artificial intelligence and to stay up-to-date with advances.

Scientists in an international team led by Mario Krenn from the Max-Planck Institute for the Science of Light have now developed an AI algorithm that not only assists researchers in orienting themselves systematically but also predictively guides them in the direction in which their own research field is likely to evolve. The work was published in Nature Machine Intelligence.

In the field of artificial intelligence (AI) and machine learning (ML), the number of publications is growing exponentially, approximately doubling every 23 months. For human researchers, it is nearly impossible to keep up with progress and maintain a comprehensive overview.
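A doubling time of 23 months compounds quickly; a two-line calculation of my own, using the doubling time quoted above, shows the implied growth:

```python
def growth_factor(months: float, doubling_months: float = 23.0) -> float:
    """Multiplicative growth over `months` if volume doubles
    every `doubling_months` months."""
    return 2.0 ** (months / doubling_months)

print(f"over 5 years:  {growth_factor(60):.1f}x")
print(f"over 10 years: {growth_factor(120):.1f}x")
```

At that pace, the publication volume grows about 6x over five years and about 37x over ten.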

Page 12 of 280