Introduction to AWS ParallelCluster 3


Running HPC workloads such as computational fluid dynamics (CFD), molecular dynamics, or weather forecasting typically involves many moving parts. You need hundreds or thousands of cores, a job scheduler to drive them, a shared file system tuned for throughput or IOPS (or both), lots of libraries, a fast network, and a head node to make sense of it all. And these are just the table stakes, because when you move to the cloud you expect to do more ambitious things, most likely because you are a researcher who needs to solve a problem while a lab full of colleagues waits for the answer.

Since 2018, AWS ParallelCluster has simplified the orchestration of HPC environments, helping researchers and engineers solve some of the most ambitious problems the world is facing today. Watching customers discover what "infrastructure as code" means in the context of HPC has pushed us to find new ways to excite them. When a single shell command can create something as complex as an HPC cluster, complete with a Lustre file system and a visualization studio, more people than ever try out the cloud and start asking us for new capabilities.
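
To make that concrete, here is a minimal sketch of what that single command looks like with the ParallelCluster 3 CLI. The cluster name, region, and the cluster-config.yaml file are placeholders for illustration, not values taken from this post; a configuration sketch appears later on.

    # Create a cluster from a YAML configuration file (all names are placeholders).
    pcluster create-cluster \
        --cluster-name my-hpc-cluster \
        --cluster-configuration cluster-config.yaml \
        --region us-east-1

    # The CLI output is JSON, so a script or pipeline can poll the cluster status.
    pcluster describe-cluster --cluster-name my-hpc-cluster --region us-east-1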

That’s why we’re announcing AWS ParallelCluster 3 today. Customers, system integrators, and other developers have told us that they want to create end-to-end “recipes” for HPC that span everything from infrastructure to middleware, libraries, and runtime code. They also explained their need for an API-like interface so they can interact with ParallelCluster programmatically and build interfaces and services for their users. As we like to do, we worked backwards from this feedback, drawing on thousands of customer conversations, to create what we are showing you today.

There are many changes that you will notice, big and small. Here are a few highlights before we dive deeper later in this post:

  • A new flexible AWS ParallelCluster API – This makes it easier to build solutions and interfaces on top of ParallelCluster, or to include your cluster lifecycle as part of a pipeline. We also updated the CLI to make scripted or event-driven workflows easy.
  • Build custom AMIs with EC2 Image Builder – Support for custom AMIs in ParallelCluster has gone from being a niche feature in 2018 to being a mainstream process. With EC2 Image Builder we now have a way to automate that process without anyone having to invent the automation themselves. Creating clusters from custom AMIs also scales faster, because the image-creation step happens ahead of time. It improves reliability too, and you’ll find it easier to stay patched and harder to compromise your security posture (see the image-build commands after this list).
  • A new configuration file format – ParallelCluster configurations now use YAML, and each file defines only one cluster. Along with a few other changes, we think this will make it easier to keep your cluster configurations organized and readable (a configuration sketch follows this list).
  • Simplified network configuration options – We streamlined networking support so you can use existing private Route 53 hosted zones, and added more flexibility in how Elastic IPs are used.
  • More granular IAM permissions – We changed the way permissions are assigned. You can specify an IAM role or an instance profile separately for the head node and the compute nodes, and we support IAM permissions boundaries for organizations that require limits on how roles are applied.
  • Runtime customization scripts – You can now change the pre- and post-install scripts for the compute nodes of a live, running cluster, and the changes are picked up when you run the pcluster update-cluster command (the OnNodeConfigured hook in the configuration sketch below shows where such a script plugs in).
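
To show what the new YAML format looks like, and how the per-node IAM roles and runtime customization hooks above fit into it, here is a minimal, hypothetical cluster-config.yaml for a Slurm cluster. The instance types, subnet ID, key pair, role ARNs, and S3 script path are placeholders, only a small subset of the schema is shown, and the ParallelCluster 3 documentation remains the authoritative reference.

    Region: us-east-1
    Image:
      Os: alinux2                                  # Amazon Linux 2 base image
    HeadNode:
      InstanceType: c5.xlarge
      Networking:
        SubnetId: subnet-0123456789abcdef0         # an existing subnet (placeholder)
      Ssh:
        KeyName: my-keypair                        # placeholder key pair
      Iam:
        InstanceRole: arn:aws:iam::111122223333:role/MyHeadNodeRole     # role for the head node only
    Scheduling:
      Scheduler: slurm
      SlurmQueues:
        - Name: compute
          ComputeResources:
            - Name: c5n-xlarge
              InstanceType: c5n.xlarge
              MinCount: 0                          # scale down to zero when idle
              MaxCount: 10
          Networking:
            SubnetIds:
              - subnet-0123456789abcdef0
          Iam:
            InstanceRole: arn:aws:iam::111122223333:role/MyComputeRole  # a separate role for compute nodes
          CustomActions:
            OnNodeConfigured:
              Script: s3://my-bucket/post-install.sh                    # runtime customization script

One cluster per file keeps configurations easy to version and review, and a file like this is exactly what the earlier pcluster create-cluster command would consume.

For the custom AMI workflow, the same CLI can drive EC2 Image Builder. As a hedged sketch, assuming an image-config.yaml whose Build section names the InstanceType to build on and the ParentImage AMI to customize, the build might be started and monitored like this (the image ID and file name are placeholders):

    # Kick off an image build with EC2 Image Builder, then check on its status.
    pcluster build-image --image-id my-custom-image --image-configuration image-config.yaml --region us-east-1
    pcluster describe-image --image-id my-custom-image --region us-east-1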

These features simplify initial cluster setup and make cluster organization and reproducibility easier, saving customers time when building custom environments. Read the full blog post to learn more about the latest features in ParallelCluster 3.

As a reminder, you can learn a lot from AWS HPC engineers by watching the HPC Tech Shorts YouTube channel and by following the AWS HPC blog.

