The Edinburgh Parallel Computing Centre (EPCC) now has its Cerebras CS-1 wafer-scale system in operation and, in addition to its own research into programming and AI models and projects in natural language processing, is already working with European companies in fields including biomedicine, cybersecurity, and genome-wide association studies.
EPCC Director Mark Parsons says that while the center has traditionally focused on supercomputing modeling and simulation, over the past five years it has pushed into data science, creating a new facility, the Edinburgh International Data Facility, which will host the CS-1 paired with an HPE SuperDome Flex system.
The CS-1 arrived in March and was ready for its first workloads in May. Parsons says they chose the SuperDome Flex because it serves as a “stepping stone between the massively parallel supercomputing world and virtualized data science.”
That push-and-pull between the demands of traditional HPC and emerging data science and AI/ML is also part of what made the center look to Cerebras. “The problem on the supercomputing side is that the system limits the software, while on the data science side the demanding users run out of performance. We wanted to solve that with the CS-1.”
EPCC’s setup is similar to that of the Pittsburgh Supercomputing Center with its own CS-1. EPCC has connected a large ClusterStor system to the network, which communicates with the SuperDome Flex system (18 TB of main memory and 100 TB of NVMe storage for the Cerebras system).
“AI has transformative potential to change the way we process data and make decisions, but today it is fundamentally constrained by compute infrastructure. The main symptom is that models take too long to train. In NLP it’s not uncommon for training on real-world data sets to take days or weeks, even on large GPU clusters,” says Andy Hock, VP of Product at Cerebras.
“Existing machines were not built for this work. CPUs and GPUs are suitable, but they do not scale optimally; they can be underutilized, inefficient, and difficult to program.”
Keep in mind that Cerebras has announced its CS-2 system, which doubles compute power and memory, although it did not arrive in time for EPCC’s new center. The CS-2 jumps from around 400,000 cores to 850,000, raises on-chip memory from 18 GB to 40 GB, and increases memory bandwidth from 9 PB/s to 20 PB/s, with more than double the fabric bandwidth. As Cerebras co-founder and CEO Andrew Feldman tells us, all of this results in a system with a redesigned chassis and performance and cooling optimizations that is around 20-25% more expensive than its predecessor.
Several companies and institutions are in the pipeline for the CS-1, and several for the CS-2, Feldman says. Remember, Cerebras already has CS-1s at Argonne, LLNL, and Edinburgh, with another system now being used for drug discovery at GlaxoSmithKline.