Cerebras Systems, the Silicon Valley startup recognised for its dinner-plate-sized chip built for artificial intelligence work, on Monday unveiled its AI supercomputer, Andromeda. The 13.5 million-core AI supercomputer is now available for commercial and academic work, the company said.
Andromeda is made by linking up 16 Cerebras CS-2 systems. The company claims that Andromeda features more cores than 1,953 Nvidia A100 GPUs and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores.
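The core-count comparisons above can be sanity-checked with simple arithmetic. A minimal sketch, using the figures quoted in the article; the per-GPU figure of 6,912 FP32 CUDA cores for the A100 comes from Nvidia's published specifications and is not stated in the piece:

```python
# Sanity check of the core counts quoted in the article.
andromeda_cores = 13_500_000  # 13.5 million cores across 16 CS-2 systems
frontier_cores = 8_700_000    # Frontier's core count per the article
a100_cores = 6_912            # FP32 CUDA cores per Nvidia A100 (Nvidia spec, assumed)

# "1.6 times as many cores as ... Frontier"
ratio_vs_frontier = andromeda_cores / frontier_cores
print(f"vs Frontier: {ratio_vs_frontier:.2f}x")  # ~1.55x, i.e. roughly 1.6x

# "more cores than 1,953 Nvidia A100 GPUs"
equivalent_a100s = andromeda_cores / a100_cores
print(f"equivalent A100s: {equivalent_a100s:.0f}")  # ~1953

# Cores contributed by each of the 16 CS-2 systems
print(f"cores per CS-2: {andromeda_cores // 16:,}")  # 843,750
```

The ~1.55 ratio is consistent with the article's "1.6 times" claim after rounding, and 13.5 million cores divided by the A100's core count lands almost exactly on the 1,953-GPU figure.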
“Andromeda can be used simultaneously by multiple users. Users can easily specify how many of Andromeda’s CS-2s they want to use within seconds. It can be used as a 16 CS-2 supercomputer cluster for a single user working on a single job, or 16 individual CS-2 systems for sixteen distinct users with 16 distinct jobs, or any combination in between,” Cerebras said.
Andromeda can deliver one exaflop of AI compute - at least one quintillion (10 to the power of 18) operations per second - at 16-bit floating-point (FP16) precision.
According to Reuters, Frontier at Oak Ridge National Laboratory - the fastest U.S. supercomputer, capable of nuclear weapons simulations - surpassed 1 exaflop this year at 64-bit double precision.
“They’re a bigger machine. We’re not beating them. They cost $600 million to build. This is less than $35 million,” Andrew Feldman, founder and CEO of Cerebras, told Reuters when asked about the Frontier supercomputer.
He added that while complicated nuclear and weather simulations have historically run at 64-bit double precision, this is a computationally expensive format, so researchers are exploring whether AI algorithms can ultimately match such results.
Andromeda is deployed across 16 racks at Colovore, a leading high-performance data centre in Santa Clara, California. Companies and researchers can access it remotely.
Cerebras says the system is designed to accelerate AI work, enabling customers to speed up their deep learning jobs by orders of magnitude.
(Inputs from Reuters)