BLOG

Archive for the ‘supercomputing’ category: Page 15

Aug 5, 2023

Calculations reveal high-resolution view of quarks inside protons

Posted by in categories: nuclear energy, particle physics, supercomputing

A collaboration of nuclear theorists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, Argonne National Laboratory, Temple University, Adam Mickiewicz University of Poland, and the University of Bonn, Germany, has used supercomputers to predict the spatial distributions of charges, momentum, and other properties of “up” and “down” quarks within protons. The results, just published in Physical Review D, revealed key differences in the characteristics of the up and down quarks.

“This work is the first to leverage a new theoretical approach to obtain a high-resolution map of quarks within a proton,” said Swagato Mukherjee of Brookhaven Lab’s nuclear theory group and a co-author on the paper. “Our calculations show that the up quark is more symmetrically distributed and spread over a smaller distance than the down quark. These differences imply that up and down quarks may make different contributions to the fundamental properties and structure of the proton, including its internal energy and spin.”

Co-author Martha Constantinou of Temple University noted, “Our calculations provide input for interpreting data from nuclear physics experiments exploring how quarks and the gluons that hold them together are distributed within the proton, giving rise to the proton’s overall properties.”

Jul 31, 2023

‘Organoid Intelligence’ — how mini-brains could replace AI for supercomputing

Posted by in categories: robotics/AI, supercomputing

While artificial intelligence can crunch huge amounts of data in a short span of time, it still lags behind the brain when it comes to making complex decisions in an energy-efficient way. Researchers from Johns Hopkins University in the US are now proposing that 3D cell structures that mimic brain functions can be used to create biocomputers.



Jul 31, 2023

Microsoft warns of service disruptions if it can’t get enough A.I. chips for its data centers

Posted by in categories: robotics/AI, supercomputing

Those efforts and the interest in ChatGPT have led Microsoft to seek more GPUs than it had expected.

“I am thrilled that Microsoft announced Azure is opening private previews to their H100 AI supercomputer,” Jensen Huang, Nvidia’s CEO, said at his company’s GTC developer conference in March.

Microsoft has begun looking outside its own data centers to secure enough capacity, signing an agreement with Nvidia-backed CoreWeave, which rents out GPUs to third-party developers as a cloud service.

Jul 29, 2023

Tesla Commences Production of Dojo Supercomputer for Autonomous Vehicle Training

Posted by in categories: Elon Musk, robotics/AI, supercomputing, sustainability

In its second-quarter earnings report for 2023, Tesla revealed its ambitious plan to address vehicle autonomy at scale with four key technology pillars: an extensive real-world dataset, neural net training, vehicle hardware, and vehicle software. Notably, the electric vehicle manufacturer asserted its commitment to developing each of these pillars in-house. A significant milestone in this endeavor was announced as Tesla started the production of its custom-built Dojo training computer, a critical component in achieving faster and more cost-effective neural net training.

While Tesla already possesses one of the world’s most potent Nvidia GPU-based supercomputers, the Dojo supercomputer takes a different approach, using chips designed in-house by Tesla. Back in 2019, Tesla CEO Elon Musk christened the project “Dojo,” envisioning it as an exceptionally powerful training computer. He claimed that Dojo would be capable of performing an exaflop, or one quintillion (10^18) floating-point operations per second, an astounding level of computational power. To put that into perspective, performing one calculation every second, it would take over 31 billion years to match what a one-exaFLOP system does in a single second, as reported by Network World.
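As a quick sanity check on that comparison (using only the standard definition of an exaFLOP, not any Tesla-specific figure), the arithmetic works out as follows:

```python
# Arithmetic behind the "over 31 billion years" comparison.
# A 1 exaFLOP machine performs 1e18 floating-point operations per second.
OPS_IN_ONE_EXAFLOP_SECOND = 1e18
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7

# Time needed to do the same work at one operation per second:
years = OPS_IN_ONE_EXAFLOP_SECOND / SECONDS_PER_YEAR
print(f"{years:.2e} years")  # ~3.17e+10, i.e. over 31 billion years
```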

The development of Dojo has been a continuous process. At Tesla’s AI Day in 2021, the automaker showcased its initial chip and training tiles, which would eventually form a complete Dojo cluster, also known as an “exapod.” Tesla’s plan involves combining two sets of three tiles in a tray, and then placing two trays in a computer cabinet to achieve over 100 petaflops per cabinet. With a 10-cabinet system, Tesla’s Dojo exapod will exceed the exaflop barrier of compute power.
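Multiplying out the figures quoted above gives a rough roll-up of the exapod; the per-cabinet performance is taken as the stated “over 100 petaflops” rather than derived from the tile specifications:

```python
# Rough roll-up of the quoted Dojo figures into a full "exapod" cluster.
TILES_PER_TRAY = 2 * 3        # two sets of three training tiles per tray
TRAYS_PER_CABINET = 2         # two trays per computer cabinet
PETAFLOPS_PER_CABINET = 100   # "over 100 petaflops per cabinet" (quoted figure)
CABINETS_PER_EXAPOD = 10      # a 10-cabinet system forms one exapod

tiles = TILES_PER_TRAY * TRAYS_PER_CABINET * CABINETS_PER_EXAPOD
petaflops = PETAFLOPS_PER_CABINET * CABINETS_PER_EXAPOD

print(tiles)      # 120 tiles per exapod
print(petaflops)  # 1,000+ petaflops, i.e. past the exaflop barrier
```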

Jul 25, 2023

‘Quantum avalanche’ explains how nonconductors turn into conductors

Posted by in categories: particle physics, quantum physics, supercomputing

Looking only at their subatomic particles, most materials can be placed into one of two categories.

Metals—like copper and iron—have free-flowing electrons that allow them to conduct electricity, while insulators—like glass and rubber—keep their electrons tightly bound and therefore do not conduct electricity.

Insulators can turn into metals when hit with an intense electric field, offering tantalizing possibilities for microelectronics and supercomputing, but the physics behind this phenomenon, called resistive switching, is not well understood.

Jul 21, 2023

Finding game-changing superconductors with machine learning tools

Posted by in categories: biotech/medical, nuclear energy, robotics/AI, supercomputing

Superconductors—found in MRI machines, nuclear fusion reactors and magnetic-levitation trains—work by conducting electricity with no resistance at temperatures near absolute zero, or −459.67°F.

The search for a conventional superconductor that can function at room temperature has been ongoing for roughly a century, but research has sped up dramatically in the last decade because of new advances in machine learning (ML) using supercomputers such as Expanse at the San Diego Supercomputer Center (SDSC) at UC San Diego.

Most recently, Huan Tran, a senior research scientist at the Georgia Institute of Technology (Georgia Tech) School of Materials Science and Engineering, has worked on Expanse with Professor Tuoc Vu from Hanoi University of Science and Technology (Vietnam) to create an artificial intelligence/machine learning (AI/ML) approach to help identify new candidates for potential superconductors in a much faster and more reliable way.
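The article does not describe the model itself, but the general shape of this kind of AI/ML screening is to train a regression model on known superconductors and their measured critical temperatures, then rank unseen candidate materials by predicted value. The sketch below is only an illustration of that general idea; the file names, feature columns, and model choice are hypothetical and not the actual Georgia Tech/Hanoi pipeline.

```python
# Illustrative sketch of ML-based superconductor screening (not the authors' actual code).
# Assumes hypothetical CSV files: known materials with composition-derived features and a
# measured critical temperature "Tc" (kelvin), plus a set of candidate materials to screen.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

known = pd.read_csv("known_superconductors.csv")     # hypothetical training data
candidates = pd.read_csv("candidate_materials.csv")  # hypothetical screening set

feature_cols = [c for c in known.columns if c not in ("material", "Tc")]
X_train, X_test, y_train, y_test = train_test_split(
    known[feature_cols], known["Tc"], test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Rank unseen candidates by predicted Tc and keep the most promising for follow-up study.
candidates["predicted_Tc"] = model.predict(candidates[feature_cols])
shortlist = candidates.sort_values("predicted_Tc", ascending=False).head(20)
print(shortlist[["material", "predicted_Tc"]])
```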

Jul 21, 2023

The world’s fastest supercomputer with a processing power of 4 exaflops unveiled

Posted by in categories: business, robotics/AI, space, supercomputing

The supercomputer is part of a larger constellation of interconnected supercomputers with a combined capacity of 36 exaFLOPS.

Abu Dhabi-based technology holding group G42 has unveiled the world’s fastest supercomputer, the Condor Galaxy-1 (CG-1), which has 54 million cores and a processing capacity of four exaflops, a press release said. The supercomputer is located in Santa Clara, California, and will be operated under US laws by Cerebras, a US-based AI firm.
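For context, the 36 exaFLOPS constellation figure is consistent with the planned count of up to nine Condor Galaxy-class systems (see the Cerebras deal item below) at four exaflops each:

```python
# Sanity check on the constellation figure quoted above.
SYSTEMS_PLANNED = 9       # up to nine Condor Galaxy-class systems
EXAFLOPS_PER_SYSTEM = 4   # CG-1's quoted processing capacity
print(SYSTEMS_PLANNED * EXAFLOPS_PER_SYSTEM, "exaFLOPS combined")  # 36 exaFLOPS
```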

As artificial intelligence (AI) technology takes center stage, there is strong demand for supercomputers to help businesses train their own models. Companies like Microsoft have offered to build the extremely expensive infrastructure and rent it out to other companies.

Jul 21, 2023

Cerebras Systems signs $100 million AI supercomputer deal with UAE’s G42

Posted by in categories: robotics/AI, space, supercomputing

July 20 (Reuters) — Cerebras Systems on Thursday said that it has signed an approximately $100 million deal to deliver the first of what could be up to nine artificial intelligence (AI) supercomputers in a partnership with United Arab Emirates-based technology group G42.

The deal comes as cloud computing providers around the world are searching for alternatives to chips from Nvidia Corp (NVDA.O), the market leader in AI computing whose products are in short supply, thanks to the surging popularity of ChatGPT and other services. Cerebras is one of several startups looking to challenge Nvidia.

Silicon Valley-based Cerebras said that G42 has agreed to purchase three of what it calls its Condor Galaxy systems, all of which it will build in the U.S. to speed up the rollout. The first one will come online this year, with two more coming in early 2024.

Jul 20, 2023

An A.I. Supercomputer Whirs to Life, Powered by Giant Computer Chips

Posted by in categories: robotics/AI, supercomputing

The new supercomputer, made by the Silicon Valley start-up Cerebras, was unveiled as the A.I. boom drives demand for chips and computing power.

Jul 20, 2023

Tesla is building a custom $1 billion A.I. supercomputer because it cannot get enough Nvidia chips

Posted by in categories: Elon Musk, robotics/AI, supercomputing, transportation

Elon Musk is pushing Tesla hard to complete development of its Full Self-Driving software and power forward with its Optimus robot program as the company looks to celebrate its own “ChatGPT moment.”
