Introducing AWS HPC Connector for NICE EnginFrame

HPC customers regularly tell us how excited they are when they use the cloud for the first time. In those conversations, we always want to dig a little deeper to see how we can improve those early experiences and help them get the most out of the potential they see. Most often, they've told us they need an easier way to get started migrating and bursting their workloads to the cloud.

Today we’re introducing AWS HPC Connector, a new feature in NICE EnginFrame that enables customers to use managed HPC resources on AWS. With this release, EnginFrame provides a unified interface for administrators to make hybrid HPC resources available both on-premises and within AWS. This means that specialized users such as scientists and engineers can use EnginFrame’s portal interface to carry out their critical work without having to understand the details of the underlying infrastructure. After all, HPC is a tool for people: your productivity is the real measure of success, and we believe AWS HPC Connector will make a world of difference for you.

In this post, we provide context on typical EnginFrame use cases and show how you can use AWS HPC Connector to provision HPC compute resources on AWS.

Background

NICE EnginFrame is an installable server-side application that provides a user-friendly application portal for submitting, controlling, and monitoring HPC jobs. It includes sophisticated data management for each phase of a job’s lifecycle, and integrates with HPC job schedulers and middleware tools to submit, monitor, and manage those jobs. EnginFrame’s modular system allows extensive customization to add new functionality (application integrations, authentication sources, license monitoring, and more) via the web portal.

The most popular feature for end users is EnginFrame’s web portal, which offers an easy-to-understand, consistent user interface. The underlying HPC compute and storage capabilities can be used without being familiar with command line interfaces (CLIs) or writing scripts. This lets you extend the reach of your HPC systems and make them available to non-IT audiences who are focused on curing cancer or designing a better wind turbine.

Behind the scenes, EnginFrame creates a “spooler” management process for each submitted job. This spooler runs in the background to manage data movement and job placement on the selected computing resource, and returns the results when the job is complete. All of this is transparent to the end user. As an administrator, you provide the configuration needed to set up an application: application-specific parameters, where the data is stored, where analyses should run, and who can submit jobs. The admin portal also shows health and status information for the registered HPC systems, as shown in Figure 1.

Figure 1: NICE EnginFrame operations portal showing historical resource usage.

Prior to this release, EnginFrame treated all registered HPC clusters equally, whether they were static on-premises resources or elastic clusters in the cloud. Specific to AWS, EnginFrame left all decisions about your AWS infrastructure to you, including network layout, security posture, and scaling. Customers very often used AWS ParallelCluster (our cluster management tool that makes it easy to deploy and manage HPC clusters on AWS) to build clusters within an AWS Region. You would then manually install EnginFrame on the head node and integrate the two. While this approach worked, we knew the experience could be better.

In September, we introduced new API capabilities in ParallelCluster 3 in preparation for today’s launch, so that you can leverage all of ParallelCluster’s capabilities from within EnginFrame, with a single administration, management, and deployment path for hybrid HPC.
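For orientation, here is a brief sketch of the ParallelCluster 3 CLI that exposes those operations; the cluster name and configuration file name are placeholders you would replace with your own.

    # Create a cluster from a YAML configuration file (names are placeholders)
    pcluster create-cluster \
      --cluster-name demo-cluster \
      --cluster-configuration cluster-config.yaml

    # Check provisioning status until CREATE_COMPLETE is reported
    pcluster describe-cluster --cluster-name demo-cluster

    # List all clusters in the current AWS Region
    pcluster list-clusters

    # Tear the cluster down when it's no longer needed
    pcluster delete-cluster --cluster-name demo-cluster

With AWS HPC Connector, EnginFrame drives these same cluster lifecycle operations for you, so administrators don’t have to script them by hand.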

AWS HPC Connector

AWS HPC Connector starts by allowing you to register ParallelCluster 3 configuration files in the EnginFrame management portal. The ParallelCluster configuration file is a simple YAML text file that describes the resources required for your HPC applications and automates their provisioning in a secure manner. Once a ParallelCluster configuration is registered in EnginFrame, you can start and stop clusters as needed. The cluster scales its computing resources based on the jobs submitted, according to the scaling criteria and node types you define, up to the limits you set for running instances. Once the submitted jobs are complete, ParallelCluster automatically stops the compute instances it created, scaling down to the minimum number of instances you define, which is usually zero. At that point, only the head node remains running, ready to receive new jobs. Figure 2 shows a high-level architecture diagram of how AWS HPC Connector in EnginFrame works with ParallelCluster to provision resources on AWS.

Figure 2: High-level architecture of NICE EnginFrame AWS HPC Connector.
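
To give a sense of what such a registration involves, here is a minimal sketch of a ParallelCluster 3 configuration file. The Region, subnet IDs, key pair name, and instance types are placeholder values you would replace with your own; MinCount of 0 is what lets the compute fleet scale down to nothing between jobs.

    Region: us-east-1
    Image:
      Os: alinux2
    HeadNode:
      InstanceType: c5.xlarge
      Networking:
        SubnetId: subnet-0123456789abcdef0   # placeholder subnet
      Ssh:
        KeyName: my-ssh-key                  # placeholder EC2 key pair
    Scheduling:
      Scheduler: slurm
      SlurmQueues:
        - Name: compute
          ComputeResources:
            - Name: c5n-large
              InstanceType: c5n.large
              MinCount: 0     # scale down to zero when idle
              MaxCount: 16    # upper limit on running instances
          Networking:
            SubnetIds:
              - subnet-0123456789abcdef0     # placeholder subnet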

Read the full blog post to learn more about using the NICE EnginFrame AWS HPC Connector to manage your workflows both on-premises and on AWS.

As a reminder, you can learn a lot from AWS HPC engineers on the HPC Tech Shorts YouTube channel, and by following the AWS HPC Blog.

