Sunday, February 8, 2026

Manage Amazon SageMaker HyperPod clusters using the HyperPod CLI and SDK


Training and deploying large AI models requires advanced distributed computing capabilities, but managing these distributed systems shouldn't be complex for data scientists and machine learning (ML) practitioners. The command line interface (CLI) and software development kit (SDK) for Amazon SageMaker HyperPod with Amazon Elastic Kubernetes Service (Amazon EKS) orchestration simplify how you manage cluster infrastructure and use the service's distributed training and inference capabilities.

The SageMaker HyperPod CLI gives data scientists an intuitive command-line experience, abstracting away the underlying complexity of distributed systems. Built on top of the SageMaker HyperPod SDK, the CLI offers straightforward commands for managing HyperPod clusters and common workflows like launching training or fine-tuning jobs, deploying inference endpoints, and monitoring cluster performance. This makes it ideal for rapid experimentation and iteration.

A layered architecture for simplicity

The HyperPod CLI and SDK follow a multi-layered, shared architecture. The CLI and the Python module serve as user-facing entry points and are both built on top of common SDK components to provide consistent behavior across interfaces. For infrastructure automation, the SDK orchestrates cluster lifecycle management through a combination of AWS CloudFormation stack provisioning and direct AWS API interactions. Training and inference workloads and integrated development environments (IDEs) (Spaces) are expressed as Kubernetes Custom Resource Definitions (CRDs), which the SDK manages through the Kubernetes API.

In this post, we demonstrate how to use the CLI and the SDK to create and manage SageMaker HyperPod clusters in your AWS account. We walk through a practical example and dive deeper into the user workflow and parameter choices.

This post focuses on cluster creation and management. For a deep dive into using the HyperPod CLI and SDK to submit training jobs and deploy inference endpoints, see our companion post: Train and deploy models on Amazon SageMaker HyperPod using the new HyperPod CLI and SDK.

Prerequisites

To follow the examples in this post, you must have the following prerequisites:

Install the SageMaker HyperPod CLI

First, install the latest version of the SageMaker HyperPod CLI and SDK. The examples in this post are based on version 3.5.0. From your local environment, run the following command; you can alternatively install the CLI in a Python virtual environment:

# Install the HyperPod CLI and SDK
pip install sagemaker-hyperpod

This command sets up the tools needed to interact with SageMaker HyperPod clusters. For an existing installation, make sure you have the latest version of the package installed (SageMaker HyperPod 3.5.0 or later) to be able to use the relevant set of features described in this post. To verify that the CLI is installed correctly, run the hyp command and check the output:

# Check if the HyperPod CLI is correctly installed
hyp

The output will be similar to the following, and includes instructions on how to use the CLI:

Usage: hyp [OPTIONS] COMMAND [ARGS]...

Options:
  --version  Show version information
  --help     Show this message and exit.

Commands:
  configure                       Update any subset of fields in ./config.yaml by passing -- flags.
  create                          Create endpoints, pytorch jobs, cluster stacks, space, space access or space admin config.
  delete                          Delete endpoints, pytorch jobs, space, space access or space template.
  describe                        Describe endpoints, pytorch jobs or cluster stacks, spaces or space template.
  exec                            Execute commands in pods for endpoints or pytorch jobs.
  get-cluster-context             Get context related to the currently set cluster.
  get-logs                        Get pod logs for endpoints, pytorch jobs or spaces.
  get-monitoring                  Get monitoring configurations for Hyperpod cluster.
  get-operator-logs               Get operator logs for endpoints.
  init                            Initialize a TEMPLATE scaffold in DIRECTORY.
  invoke                          Invoke model endpoints.
  list                            List endpoints, pytorch jobs, cluster stacks, spaces, and space templates.
  list-accelerator-partition-type
                                  List available accelerator partition types for an instance type.
  list-cluster                    List SageMaker Hyperpod Clusters with metadata.
  list-pods                       List pods for endpoints or pytorch jobs.
  reset                           Reset the current directory's config.yaml to an "empty" scaffold: all schema keys set to default values (but preserving the...
  set-cluster-context             Connect to a HyperPod EKS cluster.
  start                           Start space resources.
  stop                            Stop space resources.
  update                          Update an existing HyperPod cluster configuration, space, or space template.
  validate                        Validate this directory's config.yaml against the appropriate schema.

For more information on CLI usage and the available commands and their respective parameters, see the CLI reference documentation.

The HyperPod CLI provides commands to manage the full lifecycle of HyperPod clusters. The following sections explain how to create new clusters, monitor their creation, modify instance groups, and delete clusters.

Creating a new HyperPod cluster

HyperPod clusters can be created through the AWS Management Console or the HyperPod CLI, both of which provide streamlined experiences for cluster creation. The console offers the easiest and most guided approach, while the CLI is especially useful for customers who prefer a programmatic experience, for example to enable reproducibility or to build automation around cluster creation. Both methods use the same underlying CloudFormation template, which is available in the SageMaker HyperPod cluster setup GitHub repository. For a walkthrough of the console-based experience, see the cluster creation experience blog post.

Creating a new cluster through the HyperPod CLI follows a configuration-based workflow: the CLI first generates configuration files, which are then edited to match the intended cluster specifications. These files are subsequently submitted as a CloudFormation stack that creates the HyperPod cluster along with the required resources, such as a VPC and an FSx for Lustre file system, among others. Initialize a new cluster configuration by running the following command:

hyp init cluster-stack

This initializes a new cluster configuration in the current directory and generates a config.yaml file that you can use to specify the configuration of the cluster stack. Additionally, it will create a README.md with information about the functionality and workflow, along with a template for the CloudFormation stack parameters in cfn_params.jinja.

(base) xxxxxxxx@3c06303f9abb hyperpod % hyp init cluster-stack
Initializing new scaffold for 'cluster-stack'…
✔️ cluster-stack for schema version='1.0' is initialized in .
🚀 Welcome!
📘 See ./README.md for usage.

The cluster stack's configuration variables are defined in config.yaml. The following is an excerpt from the file:

...
# Prefix to be used for all resources. A 4-digit UUID will be added to the prefix during submission
resource_name_prefix: hyp-eks-stack
# Boolean to create HyperPod cluster stack
create_hyperpod_cluster_stack: True
# Name of the SageMaker HyperPod cluster
hyperpod_cluster_name: hyperpod-cluster
# Boolean to create EKS cluster stack
create_eks_cluster_stack: True
# The Kubernetes version
kubernetes_version: 1.31
...

The resource_name_prefix parameter serves as the primary identifier for the AWS resources created during deployment. Each deployment must use a unique resource name prefix to avoid conflicts. The value of the prefix parameter is automatically appended with a unique identifier during cluster creation to provide resource uniqueness.
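The suffixing behavior can be sketched as follows; the helper name and the 4-character hex suffix are illustrative assumptions, since the CLI controls the exact format:

```python
import uuid

def make_resource_prefix(base_prefix: str) -> str:
    """Append a short unique suffix to a base prefix, similar in spirit to
    how the CLI disambiguates resources across deployments (format assumed)."""
    suffix = uuid.uuid4().hex[:4]  # short random suffix for illustration
    return f"{base_prefix}-{suffix}"

name = make_resource_prefix("hyp-eks-stack")
print(name)  # e.g. hyp-eks-stack-5b83
```

Because the suffix changes on every call, two deployments from the same config.yaml still end up with distinct resource names.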

The configuration can be edited either directly, by opening config.yaml in an editor of your choice, or by running the hyp configure command. The following example shows how to specify the Kubernetes version of the Amazon EKS cluster that will be created by the stack:

hyp configure --kubernetes-version 1.33

Updating variables through the CLI commands provides added safety by performing validation against the defined schema before setting the value in config.yaml.
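To illustrate the kind of check such schema validation performs, here is a minimal sketch; the rule and the function name are assumptions for illustration, not the CLI's actual schema:

```python
def validate_kubernetes_version(value: str) -> str:
    """Accept only '<major>.<minor>' style values (illustrative rule only;
    the real schema may also restrict the set of supported versions)."""
    parts = value.split(".")
    if len(parts) != 2 or not all(p.isdigit() for p in parts):
        raise ValueError(f"kubernetes_version must look like '1.33', got {value!r}")
    return value

print(validate_kubernetes_version("1.33"))  # passes validation
```

A value like "latest" would be rejected before it ever reaches config.yaml, which is the benefit of going through hyp configure instead of editing the file by hand.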

Besides the Kubernetes version and the resource name prefix, some examples of significant parameters are listed below:

# List of strings containing instance group configurations
instance_group_settings:
  - {'InstanceCount': 1, 'InstanceGroupName': 'default', 'InstanceType': 'ml.t3.medium', 'TargetAvailabilityZoneId': 'use2-az2', 'ThreadsPerCore': 1, 'InstanceStorageConfigs': [{'EbsVolumeConfig': {'VolumeSizeInGB': 500}}]}

# Boolean to create EKS cluster stack
create_eks_cluster_stack: True

# The name of the S3 bucket used to store the cluster lifecycle scripts
s3_bucket_name: amzn-s3-demo-bucket

# Storage capacity for the FSx file system in GiB
storage_capacity: 1200

There are two important nuances when updating configuration values through hyp configure commands:

  • Underscores (_) in variable names inside config.yaml become hyphens (-) in the CLI commands. Thus kubernetes_version in config.yaml is configured via hyp configure --kubernetes-version in the CLI.
  • Variables that contain lists of entries inside config.yaml are configured as JSON lists in the CLI command. For example, multiple instance groups are configured inside config.yaml as the following:
instance_group_settings:
  - {'InstanceCount': 1, 'InstanceGroupName': 'default', 'InstanceType': 'ml.t3.medium', 'TargetAvailabilityZoneId': 'use2-az2', 'ThreadsPerCore': 1, 'InstanceStorageConfigs': [{'EbsVolumeConfig': {'VolumeSizeInGB': 500}}]}
  - {'InstanceCount': 2, 'InstanceGroupName': 'worker', 'InstanceType': 'ml.t3.large', 'TargetAvailabilityZoneId': 'use2-az2', 'ThreadsPerCore': 1, 'InstanceStorageConfigs': [{'EbsVolumeConfig': {'VolumeSizeInGB': 1000}}]}

This translates to the following CLI command:

hyp configure --instance-group-settings "[{'InstanceCount': 1, 'InstanceGroupName': 'default', 'InstanceType': 'ml.t3.medium', 'TargetAvailabilityZoneId': 'use2-az2', 'ThreadsPerCore': 1, 'InstanceStorageConfigs': [{'EbsVolumeConfig': {'VolumeSizeInGB': 500}}]}, {'InstanceCount': 2, 'InstanceGroupName': 'worker', 'InstanceType': 'ml.t3.large', 'TargetAvailabilityZoneId': 'use2-az2', 'ThreadsPerCore': 1, 'InstanceStorageConfigs': [{'EbsVolumeConfig': {'VolumeSizeInGB': 1000}}]}]"
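Rather than hand-writing that string, you can build the instance group list in Python and serialize it. The to_cli_flag helper below is a hypothetical illustration of the underscore-to-hyphen rule, not part of the CLI, and whether the CLI accepts double-quoted JSON exactly as emitted here is worth verifying against the CLI reference:

```python
import json

def to_cli_flag(config_key: str) -> str:
    """config.yaml keys use underscores; the equivalent CLI flags use hyphens."""
    return "--" + config_key.replace("_", "-")

# Build the instance group settings as plain Python dicts.
instance_groups = [
    {"InstanceCount": 1, "InstanceGroupName": "default", "InstanceType": "ml.t3.medium",
     "TargetAvailabilityZoneId": "use2-az2", "ThreadsPerCore": 1,
     "InstanceStorageConfigs": [{"EbsVolumeConfig": {"VolumeSizeInGB": 500}}]},
    {"InstanceCount": 2, "InstanceGroupName": "worker", "InstanceType": "ml.t3.large",
     "TargetAvailabilityZoneId": "use2-az2", "ThreadsPerCore": 1,
     "InstanceStorageConfigs": [{"EbsVolumeConfig": {"VolumeSizeInGB": 1000}}]},
]

flag = to_cli_flag("instance_group_settings")  # --instance-group-settings
value = json.dumps(instance_groups)
print(f"hyp configure {flag} '{value}'")
```

Serializing with json.dumps avoids quoting mistakes that are easy to make when editing a long one-line list by hand.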

After you're done making the desired changes, validate your configuration file by running the following command:

hyp validate

This validates the parameters in config.yaml against the defined schema. If successful, the CLI will output the following:

(base) xxxxxxxx@3c06303f9abb hyperpod % hyp validate
✔️  config.yaml is valid!

The cluster creation stack can be submitted to CloudFormation by running the following command:

hyp create --region <region>

The hyp create command performs validation and injects values from config.yaml into the cfn_params.jinja template. If no AWS Region is explicitly provided, the command uses the default Region from your AWS credentials configuration. The resolved configuration file and CloudFormation template values are saved to a timestamped subdirectory under the ./run/ directory, providing a lightweight local versioning mechanism to track which configuration was used to create a cluster at a given point in time. You can also choose to commit these artifacts to your version control system to improve reproducibility and auditability. If successful, the command outputs the CloudFormation stack ID:

(base) xxxxxxxx@3c06303f9abb dev % hyp create
✔️ config.yaml is valid!
✔️ Submitted! Data written to run/20251118T101501
Submitting to default region: us-east-1.
Stack creation initiated. Stack ID: arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351/5b83ed40-c491-11f0-a31f-1234073395a1
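The run/<timestamp> bookkeeping shown in the output above can be sketched as follows; the directory layout and file contents here are simplified assumptions inferred from the sample output, not the CLI's exact behavior:

```python
import tempfile
from datetime import datetime, timezone
from pathlib import Path

# Timestamp format matching the sample output (e.g. run/20251118T101501).
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")

# Use a temporary directory so the sketch is side-effect free.
run_dir = Path(tempfile.mkdtemp()) / "run" / stamp
run_dir.mkdir(parents=True)

# The CLI saves the resolved configuration here; the content below is a stand-in.
(run_dir / "config.yaml").write_text("resource_name_prefix: hyp-eks-stack\n")
print(run_dir)
```

Keeping one subdirectory per submission means each cluster creation leaves an auditable record of the exact configuration used.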

Monitoring the HyperPod cluster creation process

You can list the existing CloudFormation stacks by running the following command:

hyp list cluster-stack --region <region>

You can optionally filter the output by stack status by adding the following flag: --status "['CREATE_COMPLETE', 'UPDATE_COMPLETE']".

The output of this command will look similar to the following:

(base) xxxxxxxx@3c06303f9abb dev % hyp list cluster-stack
📋 HyperPod Cluster Stacks (94 found)

[1] Stack Details:
 Field | Value
---------------------+---------------------------------------------------------------------------------------------------------------------------------------------------
 StackId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8/e2898250-c491-11f0-bf25-0afff7e082cf
 StackName | HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8
 TemplateDescription | S3 Endpoint Stack
 CreationTime | :18:50
 StackStatus | CREATE_COMPLETE
 ParentId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351/5b83ed40-c491-11f0-a31f-1234073395a1
 RootId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351/5b83ed40-c491-11f0-a31f-1234073395a1
 DriftInformation | {'StackDriftStatus': 'NOT_CHECKED'}

Depending on the configuration in config.yaml, multiple nested stacks are created that cover different aspects of the HyperPod cluster setup, such as the EKSClusterStack, FsxStack, and the VPCStack.

You can use the describe command to view details about any of the individual stacks:

hyp describe cluster-stack <stack-name> --region <region>

The output for an example substack, the S3EndpointStack, will look like the following:

(base) xxxxxxxx@3c06303f9abb dev % hyp describe cluster-stack HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8
📋 Stack Details for: HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8
Status: CREATE_COMPLETE
 Field | Value 
-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------
 StackId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8/e2898250-c491-11f0-bf25-0afff7e082cf
 StackName | HyperpodClusterStack-d5351-S3EndpointStack-10JBD25F965A8
 Description | S3 Endpoint Stack
 Parameters | [
 |  "ParameterKey": "ResourceNamePrefix",
 ,
 |  "ParameterValue": "vpc-XXXXXXXXXXXXXX"
 ,
 |  ,
 |  "ParameterValue": "rtb-XXXXXXXXXXXXXX,rtb-XXXXXXXXXXXXXX"
 
 | ]
 CreationTime | :18:50.007000+00:00
 RollbackConfiguration | {}
 StackStatus | CREATE_COMPLETE
 DisableRollback | True
 NotificationARNs | []
 Capabilities | [
 | "CAPABILITY_AUTO_EXPAND",
 | "CAPABILITY_IAM",
 | "CAPABILITY_NAMED_IAM"
 | ]
 Tags | []
 EnableTerminationProtection | False
 ParentId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351/5b83ed40-c491-11f0-a31f-1234073395a1
 RootId | arn:aws:cloudformation:us-east-1:xxxxxxxxxxx:stack/HyperpodClusterStack-d5351/5b83ed40-c491-11f0-a31f-1234073395a1
 DriftInformation | {
 | "StackDriftStatus": "NOT_CHECKED"
 | }

If any of the stacks show a CREATE_FAILED, ROLLBACK_* or DELETE_* status, open the CloudFormation page in the console to investigate the root cause. Failed cluster creation stacks are often related to insufficient service quotas for the cluster itself, the instance groups, or network components such as VPCs or NAT gateways. Check the corresponding SageMaker HyperPod quotas to learn more about the required quotas for SageMaker HyperPod.

Connecting to a cluster

After the cluster stack has successfully created the required resources and the status has changed to CREATE_COMPLETE, you can configure the CLI and your local Kubernetes environment to interact with the HyperPod cluster.

hyp set-cluster-context --cluster-name <cluster-name> --region <region>

The --cluster-name option specifies the name of the HyperPod cluster to connect to, and the --region option specifies the Region where the cluster has been created. Optionally, a specific namespace can be configured using the --namespace parameter. The command updates your local Kubernetes config in ~/.kube/config, so that you can use both the HyperPod CLI and Kubernetes utilities such as kubectl to manage the resources in your HyperPod cluster.
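The kubeconfig mechanics behind set-cluster-context can be illustrated with a simplified sketch; the field names follow the standard kubeconfig schema, while the cluster and namespace values are placeholders, not values the CLI actually writes:

```python
# Simplified model of the kubeconfig fields that set-cluster-context manages.
kubeconfig = {
    "current-context": "my-hyperpod-cluster",
    "contexts": [
        {"name": "my-hyperpod-cluster",
         "context": {"cluster": "my-hyperpod-cluster", "namespace": "default"}},
    ],
}

def current_namespace(cfg: dict) -> str:
    """Resolve the namespace that kubectl-style tools would use by default."""
    ctx_name = cfg["current-context"]
    ctx = next(c for c in cfg["contexts"] if c["name"] == ctx_name)
    return ctx["context"].get("namespace", "default")

print(current_namespace(kubeconfig))  # default
```

This is why both the HyperPod CLI and kubectl pick up the same cluster after the command runs: they read the same current-context entry.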

See our companion blog post for further information about how to use the CLI to submit training jobs and inference deployments to your newly created HyperPod cluster: Train and deploy models on Amazon SageMaker HyperPod using the new HyperPod CLI and SDK.

Modifying an existing HyperPod cluster

The HyperPod CLI provides a way to modify the instance groups and node recovery mode of an existing HyperPod cluster through the hyp update cluster command. This can be useful if you need to scale your cluster by adding or removing worker nodes, or if you want to change the instance types used by the node groups.

To update the instance groups, run the following command, adapted with your cluster name and desired instance group settings:

hyp update cluster --cluster-name <cluster-name> --region <region> \
 --instance-groups '[{
        "instance_count": 2,
        "instance_group_name": "worker-nodes",
        "instance_type": "ml.m5.large",
        "execution_role": "arn:aws:iam::<account-id>:role/<role-name>",
        "life_cycle_config": {
            "source_s3_uri": "s3://amzn-s3-demo-source-bucket/",
            "on_create": "on_create.sh"
        }
    }]'

Note that all of the fields in the preceding command are required to run the update command, even if, for example, only the instance count is modified. You can list the current cluster and instance group configurations to obtain the required values by running the hyp describe cluster --region <region> command.
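Because every field is required, a practical pattern is to start from the existing group configuration and change only what you need. The following sketch uses made-up example values for the role ARN and bucket:

```python
import json

# Existing group spec, as you would obtain it from a describe call
# (all values below are hypothetical examples).
existing = {
    "instance_count": 2,
    "instance_group_name": "worker-nodes",
    "instance_type": "ml.m5.large",
    "execution_role": "arn:aws:iam::111122223333:role/example-exec-role",
    "life_cycle_config": {
        "source_s3_uri": "s3://amzn-s3-demo-source-bucket/",
        "on_create": "on_create.sh",
    },
}

# Scale the group to 4 nodes while keeping every other required field unchanged.
updated = {**existing, "instance_count": 4}
cli_value = json.dumps([updated])  # value to pass to --instance-groups
print(cli_value)
```

Merging over the existing spec avoids accidentally dropping a required field when all you want to change is the count.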

The output of the update command will look like the following:

[11/18/25 13:21:57] Update Params: {'instance_groups': [ClusterInstanceGroupSpecification(instance_count=2, instance_group_name="worker-nodes", instance_type="ml.m5.large", life_cycle_config=ClusterLifeCycleConfig(source_s3_uri='s3://amzn-s3-demo-source-bucket2', on_create="on_create.sh"), execution_role="arn:aws:iam::037065979077:role/hyp-eks-stack-4e5aExecRole", threads_per_core=, instance_storage_configs=, on_start_deep_health_checks=, training_plan_arn=, override_vpc_config=, scheduled_update_config=, image_id=)], 'node_recovery': 'Automatic'}
[11/18/25 13:21:58]  Updating cluster resource. resources.py:3506
INFO:sagemaker_core.main.resources:Updating cluster resource.
Cluster has been updated
Cluster hyperpod-cluster has been updated 

The --node-recovery option allows you to configure the node recovery behavior, which can be set to either Automatic or None. For information about the SageMaker HyperPod automatic node recovery feature, see Automatic node recovery.

Deleting an existing HyperPod cluster

To delete an existing HyperPod cluster, run the following command. Note that this action is not reversible:

hyp delete cluster-stack --region <region>

This command removes the specified CloudFormation stack and the associated AWS resources. You can use the optional --retain-resources flag to specify a comma-separated list of logical resource IDs to retain during the deletion process. It's important to carefully consider which resources you need to retain, because the delete operation can't be undone.

The output of this command will look like the following, asking you to confirm the resource deletion:

⚠ WARNING: This will delete the following 12 resources:

Other (12):
 - EKSClusterStack
 - FsxStack
 - HelmChartStack
 - HyperPodClusterStack
 - HyperPodParamClusterStack
 - LifeCycleScriptStack
 - PrivateSubnetStack
 - S3BucketStack
 - S3EndpointStack
 - SageMakerIAMRoleStack
 - SecurityGroupStack
 - VPCStack

Proceed? [y/N]: y
✓ Stack 'HyperpodClusterStack-d5351' deletion initiated successfully

SageMaker HyperPod SDK

SageMaker HyperPod also includes a Python SDK for programmatic access to the features described earlier. The Python SDK is used by the CLI commands and is installed when you install the sagemaker-hyperpod Python package as described at the beginning of this post. The HyperPod CLI is best suited for users who prefer a streamlined, interactive experience for common HyperPod management tasks like creating and monitoring clusters, training jobs, and inference endpoints. It's particularly helpful for rapid prototyping, experimentation, and automating repetitive HyperPod workflows through scripts or continuous integration and delivery (CI/CD) pipelines. In contrast, the HyperPod SDK provides more programmatic control and flexibility, making it the preferred choice when you need to embed HyperPod functionality directly into your application, integrate with other AWS or third-party services, or build complex, customized HyperPod management workflows. Consider the complexity of your use case, the need for automation and integration, and your team's familiarity with programming languages when deciding whether to use the HyperPod CLI or SDK.

The SageMaker HyperPod CLI GitHub repository shows examples of how cluster creation and management can be implemented using the Python SDK.

Conclusion

The SageMaker HyperPod CLI and SDK simplify cluster creation and management. With the examples in this post, we've demonstrated how these tools provide value through:

  • Simplified lifecycle management – From initial configuration to cluster updates and cleanup, the CLI aligns with how teams manage long-running training and inference environments and abstracts away unnecessary complexity.
  • Declarative control when needed – The SDK exposes the underlying configuration model, so that teams can codify cluster specifications, instance groups, storage filesystems, and more.
  • Integrated observability – Visibility into CloudFormation stacks is available without switching tools, supporting smooth iteration during development and operation.

Getting started with these tools is as simple as installing the SageMaker HyperPod package. The SageMaker HyperPod CLI and SDK provide the right level of abstraction for both data scientists looking to quickly experiment with distributed training and ML engineers building production systems.

If you're interested in using the HyperPod CLI and SDK to submit training jobs and deploy models to your new cluster, make sure to check our companion blog post: Train and deploy models on Amazon SageMaker HyperPod using the new HyperPod CLI and SDK.


About the authors

Nicolas Jourdan

Nicolas Jourdan is a Specialist Solutions Architect at AWS, where he helps customers unlock the full potential of AI and ML in the cloud. He holds a PhD in Engineering from TU Darmstadt in Germany, where his research focused on the reliability and MLOps of industrial ML applications. Nicolas has extensive hands-on experience across industries, including autonomous driving, drones, and manufacturing, having worked in roles ranging from research scientist to engineering manager. He has contributed to award-winning research, holds patents in object detection and anomaly detection, and is passionate about applying cutting-edge AI to solve complex real-world problems.

Andrew Brown

Andrew Brown is a Sr. Solutions Architect who has been working at AWS in the Energy Industry for the last four years. He specializes in Deep Learning and High Performance Computing.

Giuseppe Angelo Porcelli

Giuseppe Angelo Porcelli is a Principal Machine Learning Specialist Solutions Architect for Amazon Web Services. With several years of software engineering and an ML background, he works with customers of any size to understand their business and technical needs and design AI and ML solutions that make the best use of the AWS Cloud and the Amazon Machine Learning stack. He has worked on projects in various domains, including MLOps, computer vision, and NLP, involving a broad set of AWS services. In his free time, Giuseppe enjoys playing soccer.
