Understanding the high-end computing power of Cerebras and its role in AI and healthcare: Exclusive talk with Natalia Vassilieva, Director of Product, Machine Learning at Cerebras Systems

Natalia Vassilieva is Director of Product, Machine Learning at Cerebras Systems, a computer systems company dedicated to accelerating deep learning. Her focus is machine learning and artificial intelligence, analytics, and application-driven software-hardware optimization and co-design. Prior to joining Cerebras, Natalia was a Sr. Research Manager at Hewlett Packard Labs, where she led the Software and AI group and served as the head of HP Labs Russia from 2011 until 2015. Prior to HPE, she was an Associate Professor at St. Petersburg State University in Russia and worked as a software engineer for several IT companies. Natalia holds a Ph.D. in computer science from St. Petersburg State University. 
Q1. Tell us about Cerebras Systems.

Natalia: Cerebras is an AI systems company. We have built a novel computing system to greatly accelerate deep neural network (DNN) training and open up new areas of research, enabling scientists and practitioners to do previously impossible work.

Q2. Can you tell us a little about your role as Director of Product, Machine Learning, at Cerebras?

Natalia: My role is a bridging one. As Director of Product, I sit between our engineers, our customers and the market at large. My job is to understand what kind of product we should build, how it’s useful for our customers and how it can potentially open doors in other markets. In practice, this means looking at trends in the industry as a whole. When it comes to AI, we need to keep an eye on the latest research to understand where the field is going. Machine learning is a rapidly evolving field, and many new research papers are published every day.

As new research becomes available, companies adopt these new methods, usually with some delay, and turn them into something practical to run on hardware and in real applications. We look at cutting-edge research, at what customers want to achieve with that research, and at what kinds of applications they want to build. We ask which methods can help them with their task. Collecting these requirements and bringing them to our engineering team allows us to shape the next version of our product or software release.

Q3. Can you tell us a bit about your products, specifically those with applications in healthcare, pharma, and drug discovery?

Natalia: At Cerebras, we built the largest and fastest AI computer in the world – the CS-2 system. It’s a very powerful computer that allows you to train deep neural networks in hours or days, as opposed to the weeks or months required with traditional hardware. What we hear from our customers is that when you’re working on cutting-edge research, time is of the essence. The ability to train a model in hours or days means researchers can test many more hypotheses that could lead to major scientific breakthroughs.

For example, we are working with leading pharmaceutical company GlaxoSmithKline to use AI for drug discovery. They hypothesized that adding epigenomic data to their AI models would lead to more accurate and useful models. Until then, however, they had not been able to test this hypothesis because it would have taken too long to run on legacy hardware. They came to us, and we gave them access to our CS-1 system. They were able to confirm their hypothesis: adding epigenomic data improved their models.

In the pharmaceutical industry specifically, model quality has improved greatly when these techniques are applied to sequence data. You can think of natural language text as a sequence of characters or a sequence of words. People have learned how to use self-supervision to train efficient, representative models on this kind of data, where you don’t need any labels. You just feed the model all the text you have, it learns representations, and it can then do useful work for you. Many models have been designed to represent natural language and other sequence data, and the models created for language are directly applicable to modeling biological sequences.
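
To make the self-supervision idea concrete, here is a minimal, illustrative PyTorch sketch of masked-token pretraining: tokens are hidden at random and a small Transformer learns to reconstruct them from raw text alone, with no labels. The model size, masking rate and data are toy placeholders and are not tied to Cerebras hardware or any customer workload.

```python
# Toy masked-language-model pretraining: self-supervised, no labels required.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog".split()
vocab = {w: i for i, w in enumerate(sorted(set(text)), start=2)}
MASK = 1                                              # reserved id (0 kept for padding)

ids = torch.tensor([[vocab[w] for w in text]])        # shape: (1, seq_len)

class TinyMaskedLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.head(self.encoder(self.embed(x)))  # (batch, seq, vocab)

model = TinyMaskedLM(vocab_size=len(vocab) + 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

for step in range(100):
    masked, targets = ids.clone(), torch.full_like(ids, -100)
    pick = torch.rand(ids.shape) < 0.15               # mask ~15% of positions
    if not pick.any():                                # always mask at least one token
        pick[0, torch.randint(0, ids.size(1), (1,))] = True
    targets[pick] = ids[pick]                         # loss only on masked tokens
    masked[pick] = MASK
    logits = model(masked)
    loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```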

Interest in working with domain-specific text is also growing. It is important to gain insight from the medical literature and to understand what kind of information can be derived from clinical reports and other written texts. And there are many examples of sequence data in biology itself: proteins are sequences of amino acids, and DNA is a sequence of nucleotides. If you want to model what happens in the genome, you have to model these sequences.
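
As an illustration of that point, the snippet below (using made-up placeholder sequences) shows how biological data can be turned into tokens: amino acids become character-level tokens for proteins, and overlapping k-mers become the “words” of DNA. The resulting integer sequences can then be fed to the same masked-language-model recipe sketched above.

```python
# Placeholder biological sequences, for illustration only.
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"           # one letter per amino-acid residue
dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"

# Character-level tokens for the protein: one id per amino-acid letter.
aa_vocab = {aa: i for i, aa in enumerate(sorted(set(protein)), start=2)}
protein_ids = [aa_vocab[aa] for aa in protein]

# k-mer tokens for DNA (k=3): overlapping windows play the role of words.
k = 3
kmers = [dna[i:i + k] for i in range(len(dna) - k + 1)]
kmer_vocab = {km: i for i, km in enumerate(sorted(set(kmers)), start=2)}
dna_ids = [kmer_vocab[km] for km in kmers]

print(len(protein_ids), len(dna_ids))                  # token counts fed to the same masked-LM setup
```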

These models are usually very computationally intensive, and training them in a reasonable amount of time requires extensive infrastructure. It is challenging to train them at scale on existing, traditional hardware. Our hardware can significantly speed up the training of this type of model, which is why we are relevant to the pharmaceutical industry: we can process the data faster with the CS-2 system.

Q4. What is the Cerebras CS-2 system? How does Cerebras use AI to drive faster drug discovery? How does the CS-2 differ from your competitors? 

Natalia: The Cerebras CS-2 is our second-generation system. While our competitors try to connect many weak processors together, we have built one huge processor capable of training very large models. One of the main innovations in the CS-2 is the Wafer-Scale Engine with 850,000 cores. That’s significantly more than you can find on any CPU or GPU, and it gives us the ability to significantly speed up tasks that require a lot of computing power.

With traditional hardware, for compute-intensive tasks researchers are forced to cluster, or connect, multiple traditional processors to complete the work in a reasonable time. That is not very efficient. Instead of connecting many processors with small core counts, we can use our single big chip. It’s easier to harness the computing power of many cores when they are all packed into a single device. The CS-2 system accelerates a range of computational tasks, such as deep neural network training.
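
To see why clustering many small processors runs into diminishing returns, here is a toy, Amdahl-style back-of-the-envelope model. The numbers are purely illustrative assumptions, not measured Cerebras or GPU figures: they only show how a fixed per-step communication cost caps the speedup a cluster can reach.

```python
# Toy scaling model: each training step has a compute part that shrinks as devices
# are added and a gradient-synchronization part that does not. The 5% sync share
# is an arbitrary illustrative assumption.
def cluster_speedup(n_devices: int, sync_fraction: float = 0.05) -> float:
    compute = (1.0 - sync_fraction) / n_devices
    return 1.0 / (compute + sync_fraction)

for n in (1, 8, 64, 512):
    print(f"{n:4d} devices -> ~{cluster_speedup(n):4.1f}x speedup")
# The speedup flattens near 1/sync_fraction = 20x no matter how many devices are added.
# Packing many cores into a single device sidesteps this cross-device synchronization.
```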

Q5. What are some of the biggest challenges that Cerebras is looking to address in healthcare and other verticals?

Natalia: Across all industries, the main value proposition we offer is a powerful tool that enables subject matter experts to complete their experiments much faster. We want to enable researchers to learn more quickly from the results of their experiments.

The field of machine learning is, by and large, a field of trial and error. There is no golden book with rules on how to assemble a specific model. Usually you have to try a lot of different things before you settle on something that works for your problem. The speed of experimentation and the speed at which you can carry out these different experiments is extremely important.

With our hardware, we give researchers a way to make these experiments faster. We let them test many more hypotheses than they otherwise could. What we often find in practice is that researchers start with one idea or another that they want to test. It often takes months to test a single idea in traditional environments. With Cerebras, you can test more ideas and test them faster.

The reality is, if you have to wait a long time for the results of these experiments, it slows your imagination. When a tool can deliver results in hours or days, the number of ideas researchers generate explodes. Once a researcher can see what works and what doesn’t work, they can come up with 2-5 new ideas to test. It encourages creativity and speeds up research significantly.

Q6. Can you tell us about Cerebras's latest partnerships in healthcare and AI? 

Natalia: We have several projects. Our partnerships with pharmaceutical companies provide a tool that allows them to create and develop new AI-driven methods. In the case of GlaxoSmithKline (GSK), we help them on their way to new therapeutics and new vaccines, while they gain insights using artificial intelligence.

Another example is our collaboration with AstraZeneca. AstraZeneca was interested in developing an internal search engine with question-answering capabilities. This Q&A engine allows their researchers to quickly find answers to questions about previous research and previous clinical studies. Another task was to create a domain-specific language model that can help them build the question-answering and machine-translation engines.
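
For readers unfamiliar with question-answering engines, here is a minimal extractive-QA sketch using the Hugging Face Transformers `pipeline` API. The checkpoint, question and context are generic placeholders; this is not AstraZeneca’s internal system, which would use a domain-specific language model and a retrieval step over its own documents.

```python
from transformers import pipeline

# Generic public checkpoint as a placeholder; a biomedical, domain-specific model
# would be substituted in a real deployment.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "In the phase II study, the candidate compound showed a 42% response rate "
    "in patients who had not responded to prior standard-of-care therapy."
)  # illustrative text standing in for a passage retrieved from internal reports

result = qa(question="What response rate was reported in the phase II study?",
            context=context)
print(result["answer"], round(result["score"], 3))
```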

Q7. How does the Cerebras platform give value to its customers?

Natalia: Typically in healthcare we work with computational chemists and experts in biology and bioinformatics. Many of them are machine learning experts, but almost none are distributed programming experts. It should be really easy for them to test their ideas without knowing how the hardware underneath works and without spending too much time thinking about optimizing certain tasks. There is great value in making experiments much faster and easier for researchers. Ease of use and quick experimentation are key, and that’s what our system brings to the table.

I’m from Russia, so let me share one more analogy from my college days. I took my first programming class at a time when we were only allowed to sit at a computer for an hour. You had to write your whole program out on a piece of paper first, and you got only one chance to test whether it ran properly. You really had to think carefully about how to design the program and how to write it down, and then you either got it right or you didn’t; there were no other options. In many cases, researchers are in the same situation today with these deep neural networks. When it takes you months to test a hypothesis, you know you only have one chance, and that limits your options.

Our system significantly reduces the cost of curiosity. You don’t have to spend so many resources checking whether an idea is worth pursuing. You can go ahead and test it and get more insight faster.

This interview was originally published in our AI in Healthcare Magazine (March 2022)
