БЛОГ

Archive for the ‘supercomputing’ category

Sep 12, 2022

Most Powerful Supercomputer — SURPASSES The HUMAN BRAIN (64 EXAFLOPS)

Posted by in categories: biotech/medical, robotics/AI, supercomputing

The most powerful exascale supercomputer was slated for release in 2021 and will feature a total of 64 exaflops, more than six times as much as the Leonardo supercomputer that is also set to launch this year.
This is accomplished with the help of a new type of processor technology from Tachyum called "Prodigy," which is described as the first Universal Processor.

This new processor is set to enable general artificial intelligence at the speed of the human brain in real time. It is many times faster than the fastest Intel Xeon, Nvidia graphics card, or Apple Silicon chip. The new supercomputer will enable simulations of the brain, medicine, and more that were previously thought impossible.


Sep 11, 2022

No knowledge, only intuition!

Posted by in categories: big data, complex systems, computing, innovation, internet, life extension, lifeboat, machine learning, posthumanism, robotics/AI, science, singularity, supercomputing, transhumanism

Article originally published on LINKtoLEADERS under the Portuguese title “Sem saber ler nem escrever!”

In the 80s, "with no knowledge, only intuition," I discovered the world of computing. I believed computers could do everything, as if they were an electronic god. But when I asked the TIMEX Sinclair 1000 to draw the planet Saturn — I am fascinated by this planet, maybe because it has rings — all I glimpsed was a strange message on the black-and-white TV:

0/0


Sep 3, 2022

Quantum Matter Is Being Studied At A Temperature 3 Billion Times Colder Than Deep Space

Posted by in categories: particle physics, quantum physics, space, supercomputing

A team of Japanese and US physicists has cooled thousands of ytterbium atoms to within a billionth of a degree above absolute zero to understand how matter behaves at these extreme temperatures. The approach treats the atoms as fermions, the class of particles that includes electrons and protons, which cannot collapse into the so-called fifth state of matter at those extreme temperatures: a Bose-Einstein condensate.

When fermions are cooled this far, they exhibit quantum properties in a way that we cannot simulate even with the most powerful supercomputers. The extremely cold atoms are placed in a lattice, where they simulate a "Hubbard model," which is used to study the magnetic and superconducting behavior of materials, in particular the collective motion of electrons through them.

The symmetry of these models is described by the special unitary group SU(N), where N depends on the number of possible spin states. In the case of ytterbium, N is 6. Calculating the behavior of just 12 particles in an SU(6) Hubbard model is already beyond the reach of classical computers. However, as reported in Nature Physics, the team used laser cooling to bring 300,000 atoms to a temperature almost three billion times colder than that of outer space.
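For reference, a standard textbook form of the SU(N) Fermi–Hubbard Hamiltonian looks like this (generic notation, not taken from the Nature Physics paper itself; for ytterbium N = 6):

```latex
H \;=\; -t \sum_{\langle i,j \rangle} \sum_{\sigma=1}^{N}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
        \;+\; \frac{U}{2} \sum_{i} \sum_{\sigma \neq \sigma'}
        n_{i\sigma}\, n_{i\sigma'}
```

Here t is the tunneling amplitude between neighboring lattice sites, U is the on-site interaction energy, and n_{iσ} = c†_{iσ} c_{iσ} counts atoms of spin flavor σ on site i. The Hamiltonian treats all N spin flavors identically, which is the SU(N) symmetry the experiment probes.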


Sep 2, 2022

Revolutionizing image generation through AI: Turning text into images

Posted by in categories: information science, robotics/AI, supercomputing

Creating images from text in seconds—and doing so with a conventional graphics card and without supercomputers? As fanciful as it may sound, this is made possible by the new Stable Diffusion AI model. The underlying algorithm was developed by the Machine Vision & Learning Group led by Prof. Björn Ommer (LMU Munich).

"Even for laypeople not blessed with artistic talent and without special computing know-how, the new model is an effective tool that enables computers to generate images on command. As such, the model removes a barrier to ordinary people expressing their creativity," says Ommer. But there are benefits for seasoned artists as well, who can use Stable Diffusion to quickly convert new ideas into a variety of graphic drafts. The researchers are convinced that such AI-based tools will be able to expand the possibilities of creative image generation with paintbrush and Photoshop as fundamentally as computer-based word processing revolutionized writing with pens and typewriters.

In their project, the LMU scientists had the support of the start-up Stability AI, on whose servers the AI model was trained. "This additional computing power and the extra training examples turned our AI model into one of the most powerful image synthesis algorithms," says the computer scientist.

Aug 31, 2022

Making Computer Chips Act More like Brain Cells

Posted by in categories: biological, chemistry, neuroscience, supercomputing

The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb.

Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft “neuromorphic” computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. “Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them,” says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research.


Aug 30, 2022

ROBE Array could let small companies access popular form of AI

Posted by in categories: information science, robotics/AI, supercomputing

A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence—deep-learning recommendation models (DLRM)—within reach of small companies.

DLRM recommendation systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.

Rice's "random offset block embedding," or ROBE Array, could change that. It's an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
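The core trick behind such methods, replacing a huge per-category embedding table with lookups into one small shared parameter array via hashing, can be sketched as follows. This is a simplified single-component hashed-embedding illustration, not the authors' actual ROBE implementation (which maps contiguous blocks of the table into the shared array); the hash constants, sizes, and function names here are invented for the example:

```python
import numpy as np

def hashed_embedding(ids, shared, dim, seed=0):
    """Look up a dim-sized embedding vector for each integer id by
    hashing each (id, component) pair into one small shared array,
    instead of storing a full row per id in a giant table."""
    out = np.empty((len(ids), dim))
    for row, i in enumerate(ids):
        for d in range(dim):
            # Cheap deterministic hash of (id, component) -> slot in shared
            slot = (i * 2654435761 + d * 40503 + seed) % len(shared)
            out[row, d] = shared[slot]
    return out

# ~1,000 learned floats stand in for what would otherwise be a table
# with one dim-sized row per category id.
shared = np.random.default_rng(0).standard_normal(1000)
vecs = hashed_embedding([3, 42, 3], shared, dim=8)
```

The same id always hashes to the same slots, so lookups are consistent across training steps, while the total parameter count is fixed by the shared array's size rather than by the number of categories.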

Aug 28, 2022

Inside Tesla’s Innovative And Homegrown “Dojo” AI Supercomputer

Posted by in categories: military, nuclear weapons, robotics/AI, space travel, supercomputing

How expensive and difficult does hyperscale-class AI training have to be for a maker of self-driving electric cars to take a side excursion to spend how many hundreds of millions of dollars to go off and create its own AI supercomputer from scratch? And how egotistical and sure would the company’s founder have to be to put together a team that could do it?

Like many questions, when you ask these precisely, they tend to answer themselves. And what is clear is that Elon Musk, founder of both SpaceX and Tesla as well as a co-founder of the OpenAI consortium, doesn’t have time – or money – to waste on science projects.


Aug 27, 2022

Wickedly Fast Frontier Supercomputer Officially Ushers in the Next Era of Computing

Posted by in categories: mathematics, supercomputing

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second—a milestone computing has pursued for 14 years.

That's a big number. So before we go on, it's worth putting it in more human terms.

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic or multiplication problems. Now, ask everyone to solve one problem per second for four and a half years. By marshaling the math skills of the Earth's population for nearly half a decade, you've now solved over a quintillion problems.
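The arithmetic behind that analogy is easy to check:

```python
# Sanity-check the claim: 7.9 billion people, one problem per second,
# for four and a half years, should exceed one quintillion (1e18).
people = 7.9e9
seconds = 4.5 * 365.25 * 24 * 3600   # ~1.42e8 seconds in 4.5 years
total_problems = people * seconds
print(f"{total_problems:.2e}")        # ~1.12e+18, just over a quintillion
```

That is the same number of calculations Frontier performs in a single second.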


Aug 25, 2022

Supercomputer Emulator—AI’s New Role in Science

Posted by in categories: robotics/AI, science, supercomputing

Bishop: They can still be computationally very expensive. Additionally, emulators learn from data, so they’re typically not more accurate than the data used to train them. Moreover, they may give insufficiently accurate results when presented with scenarios that are markedly different from those on which they’re trained.

"I believe in 'use-inspired basic research' — [like] the work of Pasteur. He was a consultant for the brewing industry. Why did this beer keep going sour? He basically founded the whole field of microbiology." —Chris Bishop, Microsoft Research.

Aug 24, 2022

Supercomputing center dataset aims to accelerate AI research into optimizing high-performance computing systems

Posted by in categories: biotech/medical, employment, robotics/AI, supercomputing

When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner—and the team is looking for ways to improve.

“We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC.

To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset as open source to the computing community.
