Nov 18, 2022

How AI has made hardware interesting again

Posted in categories: robotics/AI, supercomputing

Lawrence Livermore National Laboratory has long been one of the world’s largest consumers of supercomputing capacity. With computing power of more than 200 petaflops, or 200 quadrillion floating-point operations per second, the U.S. Department of Energy-operated institution runs supercomputers from every major U.S. manufacturer.
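As a quick sanity check on the units above: one petaflop is 10^15 floating-point operations per second, so the lab's cited 200 petaflops works out to 2 x 10^17 operations per second. A minimal sketch of that conversion:

```python
# One petaflop = 10^15 floating-point operations per second.
PETA = 10**15

lab_capacity_pflops = 200  # figure cited in the article
flops = lab_capacity_pflops * PETA

print(f"{flops:.1e} FLOPS")  # 2.0e+17
```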

For the past two years, that lineup has included two newcomers: Cerebras Systems Inc. and SambaNova Systems Inc. The two startups, which have collectively raised more than $1.8 billion in funding, are attempting to upend a market so far dominated by off-the-shelf x86 central processing units and graphics processing units, offering hardware purpose-built for training artificial intelligence models and running inference on them.

Cerebras says its WSE-2 chip, built on a wafer-scale architecture, can bring 2.6 trillion transistors and 850,000 compute cores to bear on the task of training neural networks. That’s roughly 50 times as many transistors and more than 100 times as many cores as are found on a high-end GPU. With 40 gigabytes of on-chip memory and the ability to access up to 2.4 petabytes of external memory, the company claims, the architecture can process AI models that are too massive to be practical on GPU-based machines. The company has raised $720 million at a $4 billion valuation.
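The comparison above can be reproduced as back-of-the-envelope arithmetic. The GPU figures below are the publicly stated NVIDIA A100 specifications, used here as an assumed reference point for "a high-end GPU" (the article does not name a specific part):

```python
# WSE-2 figures quoted in the article.
wse2_transistors = 2.6e12   # 2.6 trillion transistors
wse2_cores = 850_000        # compute cores

# Assumed comparison point: published NVIDIA A100 specs
# (not taken from the article).
a100_transistors = 54.2e9   # ~54.2 billion transistors
a100_cores = 6_912          # CUDA cores

print(f"transistor ratio: ~{wse2_transistors / a100_transistors:.0f}x")  # ~48x
print(f"core ratio:       ~{wse2_cores / a100_cores:.0f}x")              # ~123x
```

Against that reference point, the transistor advantage lands near 50x and the core-count advantage above 100x, which is the basis for the hedged multipliers in the text.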
