Operating machine learning (ML) models in production requires more than just infrastructure resilience and scaling efficiency. You need near-continuous visibility into performance and resource utilization. When latency increases, invocations fail, or resources become constrained, you need immediate insight to diagnose and resolve issues before they impact your customers.
Until now, Amazon SageMaker AI provided Amazon CloudWatch metrics that offered helpful high-level visibility, but these were aggregate metrics across all instances and containers. While useful for overall health monitoring, these aggregated metrics obscured individual instance and container details, making it difficult to pinpoint bottlenecks, improve resource utilization, or troubleshoot effectively.
SageMaker AI endpoints now support enhanced metrics with a configurable publishing frequency. This launch provides the granular visibility needed to monitor, troubleshoot, and improve your production endpoints. With SageMaker AI endpoint enhanced metrics, we can now drill down into container-level and instance-level metrics, which provide capabilities such as:
- View metrics for a specific model copy. With multiple model copies deployed across a SageMaker AI endpoint using inference components, it's useful to view per-copy metrics such as concurrent requests, GPU utilization, and CPU utilization to help diagnose issues and provide visibility into production workload traffic patterns.
- View how much each model costs. With multiple models sharing the same infrastructure, calculating the true cost per model can be complex. With enhanced metrics, we can now calculate and associate cost per model by tracking GPU allocation at the inference component level.
What’s new
Enhanced metrics introduce two categories of metrics with multiple levels of granularity:
- EC2 resource utilization metrics: Track CPU, GPU, and memory consumption at the instance and container level.
- Invocation metrics: Monitor request patterns, errors, latency, and concurrency with precise dimensions.
Each category provides different levels of visibility depending on your endpoint configuration.
Instance-level metrics: available for all endpoints
Every SageMaker AI endpoint now has access to instance-level metrics, giving you visibility into what's happening on each Amazon Elastic Compute Cloud (Amazon EC2) instance in your endpoint.
Resource utilization (CloudWatch namespace: /aws/sagemaker/Endpoints)
Track CPU utilization, memory consumption, and per-GPU utilization and memory usage for every host. When an issue occurs, you can immediately identify which specific instance needs attention. For accelerator-based instances, you will see utilization metrics for each individual accelerator.
Invocation metrics (CloudWatch namespace: AWS/SageMaker)
Track request patterns, errors, and latency by drilling down to the instance level. Monitor invocations, 4XX/5XX errors, model latency, and overhead latency with precise dimensions that help you pinpoint exactly which instance experienced issues. These metrics help you diagnose uneven traffic distribution, identify error-prone instances, and correlate performance issues with specific resources.
Container-level metrics: for inference components
If you're using inference components to host multiple models on a single endpoint, you now have container-level visibility.
Resource utilization (CloudWatch namespace: /aws/sagemaker/InferenceComponents)
Monitor resource consumption per container. See CPU, memory, GPU utilization, and GPU memory utilization for each model copy. This visibility helps you understand which inference component model copies are consuming resources, maintain fair allocation in multi-tenant scenarios, and identify containers experiencing performance issues. These detailed metrics include dimensions for InferenceComponentName and ContainerId.
Invocation metrics (CloudWatch namespace: AWS/SageMaker)
Track request patterns, errors, and latency at the container level. Monitor invocations, 4XX/5XX errors, model latency, and overhead latency with precise dimensions that help you pinpoint exactly where issues occurred.
Configuring enhanced metrics
Enable enhanced metrics by adding one parameter when creating your endpoint configuration:
response = sagemaker_client.create_endpoint_config(
    EndpointConfigName='my-config',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'my-model',
        'InstanceType': 'ml.g6.12xlarge',
        'InitialInstanceCount': 2
    }],
    MetricsConfig={
        'EnableEnhancedMetrics': True,
        'MetricsPublishFrequencyInSeconds': 10  # Default is 60s
    }
)
Choosing your publishing frequency
After you've enabled enhanced metrics, configure the publishing frequency based on your monitoring needs:
Standard resolution (60 seconds): The default frequency provides detailed visibility for most production workloads. This is sufficient for capacity planning, troubleshooting, and optimization, while keeping costs manageable.
High resolution (10 or 30 seconds): For critical applications needing near real-time monitoring, enable 10-second publishing. This is recommended for aggressive auto scaling, highly variable traffic patterns, or deep troubleshooting.
Example use cases
In this post, we walk through three common scenarios where enhanced metrics deliver measurable business value, all of which are available in this notebook:
- Real-time GPU utilization monitoring across inference components
When running multiple models on shared infrastructure using inference components, understanding GPU allocation and utilization is critical for cost optimization and performance tuning. With enhanced metrics, you can query GPU allocation per inference component:
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            'Id': 'm1',
            'Expression': 'SEARCH(\'{/aws/sagemaker/InferenceComponents,InferenceComponentName,GpuId} MetricName="GPUUtilizationNormalized" InferenceComponentName="IC-my-model"\', \'SampleCount\', 10)'
        },
        {
            'Id': 'e1',
            'Expression': 'SUM(m1)'  # Returns GPU count
        }
    ],
    StartTime=start_time,
    EndTime=end_time
)
This query uses the GpuId dimension to count individual GPUs allocated to each inference component. By monitoring the SampleCount statistic, you get a precise count of GPUs in use for a specific inference component, which is essential for:
- Validating that resource allocation matches your configuration
- Detecting when inference components scale up or down
- Calculating per-GPU costs for chargeback models
- Per-model cost attribution in multi-model deployments
One of the most requested capabilities is understanding the true cost of each model when multiple models share the same endpoint infrastructure. Enhanced metrics make this possible through container-level GPU monitoring. Here's how to calculate cumulative cost per model:
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            'Id': 'e1',
            'Expression': 'SEARCH(\'{/aws/sagemaker/InferenceComponents,InferenceComponentName,GpuId} MetricName="GPUUtilizationNormalized" InferenceComponentName="IC-my-model"\', \'SampleCount\', 10)'
        },
        {
            'Id': 'e2',
            'Expression': 'SUM(e1)'  # GPU count
        },
        {
            'Id': 'e3',
            'Expression': 'e2 * 5.752 / 4 / 360'  # Cost per 10s based on the ml.g6.12xlarge hourly cost
        },
        {
            'Id': 'e4',
            'Expression': 'RUNNING_SUM(e3)'  # Cumulative cost
        }
    ],
    StartTime=start_time,
    EndTime=end_time
)
This calculation:
- Counts GPUs allocated to the inference component (e2)
- Calculates the cost per 10-second interval based on the instance hourly cost (e3)
- Accumulates total cost over time using RUNNING_SUM (e4)
For example, with an ml.g6.12xlarge instance ($5.752/hour for 4 GPUs), if your model uses 4 GPUs, the cost per 10 seconds is $0.016. The RUNNING_SUM provides a continuously increasing total, perfect for dashboards and cost monitoring.
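The arithmetic behind the e3 and e4 expressions can be checked in a few lines of plain Python. This is a minimal sketch using the figures from the example above (the $5.752 hourly price and 4 GPUs per instance are specific to ml.g6.12xlarge; substitute your own instance type's values):

```python
# Cost attribution arithmetic mirroring the metric math expressions above.
# Assumes ml.g6.12xlarge pricing: $5.752/hour for a 4-GPU instance.
INSTANCE_HOURLY_COST = 5.752
GPUS_PER_INSTANCE = 4
INTERVALS_PER_HOUR = 360  # number of 10-second intervals in one hour

def cost_per_interval(gpus_used: int) -> float:
    """Cost of running `gpus_used` GPUs for one 10-second interval (the e3 expression)."""
    return gpus_used * INSTANCE_HOURLY_COST / GPUS_PER_INSTANCE / INTERVALS_PER_HOUR

def cumulative_cost(gpu_counts_per_interval: list[int]) -> float:
    """Running total across intervals, mirroring RUNNING_SUM(e3) in the e4 expression."""
    return sum(cost_per_interval(g) for g in gpu_counts_per_interval)

# A model holding all 4 GPUs costs about $0.016 per 10 seconds:
print(round(cost_per_interval(4), 3))          # 0.016
# One hour at 4 GPUs recovers the full instance hourly price:
print(round(cumulative_cost([4] * 360), 3))    # 5.752
```

As a sanity check, a full hour of intervals at the instance's full GPU count should sum back to the instance's hourly price, which the second print confirms.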
- Cluster-wide resource monitoring
Enhanced metrics enable comprehensive cluster monitoring by aggregating metrics across all inference components on an endpoint:
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            'Id': 'e1',
            'Expression': 'SUM(SEARCH(\'{/aws/sagemaker/InferenceComponents,EndpointName,GpuId} MetricName="GPUUtilizationNormalized" EndpointName="my-endpoint"\', \'SampleCount\', 10))'
        },
        {
            'Id': 'm2',
            'MetricStat': {
                'Metric': {
                    'Namespace': '/aws/sagemaker/Endpoints',
                    'MetricName': 'CPUUtilizationNormalized',
                    'Dimensions': [
                        {'Name': 'EndpointName', 'Value': 'my-endpoint'},
                        {'Name': 'VariantName', 'Value': 'AllTraffic'}
                    ]
                },
                'Period': 10,
                'Stat': 'SampleCount'  # Returns instance count
            }
        },
        {
            'Id': 'e2',
            'Expression': 'm2 * 4 - e1'  # Free GPUs (assuming 4 GPUs per instance)
        }
    ],
    StartTime=start_time,
    EndTime=end_time
)
This query provides:
- Total GPUs in use across all inference components (e1)
- Number of instances in the endpoint (m2)
- Available GPUs for new deployments (e2)
This visibility is crucial for capacity planning and for making sure that you have sufficient resources for new model deployments or for scaling existing ones.
Creating operational dashboards
The accompanying notebook demonstrates how to programmatically create CloudWatch dashboards that combine these metrics:
from endpoint_metrics_helper import create_dashboard

create_dashboard(
    dashboard_name='my-endpoint-monitoring',
    endpoint_name='my-endpoint',
    inference_components=[
        {'name': 'IC-model-a', 'label': 'MODEL_A'},
        {'name': 'IC-model-b', 'label': 'MODEL_B'}
    ],
    cost_per_hour=5.752,
    region='us-east-1'
)
This creates a dashboard with:
- Cluster-level resource utilization (instances, used/unused GPUs)
- Per-model cost tracking with cumulative totals
- Real-time cost per 10-second interval
The notebook also includes interactive widgets for ad hoc analysis.
from endpoint_metrics_helper import create_metrics_widget, create_cost_widget

# Cluster metrics
create_metrics_widget('my-endpoint')

# Per-model cost analysis
create_cost_widget('IC-model-a', cost_per_hour=5.752)
These widgets provide a dropdown time range selection (last 5/10/30 minutes, 1 hour, or a custom range) and display:
- Number of instances
- Total/used/free GPUs
- Cumulative cost per model
- Cost per 10-second interval
Best practices
- Start with a 60-second resolution: This provides sufficient granularity for most use cases while keeping CloudWatch costs manageable. Note that only utilization metrics generate CloudWatch charges; all other metric types are published at no additional cost to you.
- Use 10-second resolution selectively: Enable high-resolution metrics only for critical endpoints or during troubleshooting periods.
- Use dimensions strategically: Use the InferenceComponentName, ContainerId, and GpuId dimensions to drill down from cluster-wide views to specific containers.
- Create cost allocation dashboards: Use RUNNING_SUM expressions to track cumulative costs per model for accurate chargeback and budgeting.
- Set up alarms on unused GPU capacity: Monitor the unused GPU metric to make sure you maintain buffer capacity for scaling or new deployments.
- Combine with invocation metrics: Correlate resource utilization with request patterns to understand the relationship between traffic and resource consumption.
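To illustrate the alarm practice above, here is a hedged sketch that assembles a CloudWatch metric math alarm on the free-GPU calculation from the cluster-wide example. build_unused_gpu_alarm is a hypothetical helper; the endpoint name, threshold, and evaluation periods are placeholder values, and the 4-GPUs-per-instance assumption again matches ml.g6.12xlarge:

```python
def build_unused_gpu_alarm(endpoint_name: str, gpus_per_instance: int = 4,
                           min_free_gpus: float = 1.0) -> dict:
    """Build a put_metric_alarm request dict that fires when free GPU
    capacity on the endpoint stays below min_free_gpus. The metric math
    mirrors the cluster-wide free-GPU query shown earlier in this post."""
    used_gpus = (
        "SUM(SEARCH('{/aws/sagemaker/InferenceComponents,EndpointName,GpuId} "
        f'MetricName="GPUUtilizationNormalized" EndpointName="{endpoint_name}"\', '
        "'SampleCount', 60))"
    )
    return {
        "AlarmName": f"{endpoint_name}-low-free-gpus",
        "ComparisonOperator": "LessThanThreshold",
        "Threshold": min_free_gpus,
        "EvaluationPeriods": 3,
        "Metrics": [
            # Total GPUs in use across inference components on the endpoint
            {"Id": "used", "Expression": used_gpus, "ReturnData": False},
            # Instance count, via SampleCount on an instance-level metric
            {"Id": "instances", "MetricStat": {
                "Metric": {
                    "Namespace": "/aws/sagemaker/Endpoints",
                    "MetricName": "CPUUtilizationNormalized",
                    "Dimensions": [
                        {"Name": "EndpointName", "Value": endpoint_name},
                        {"Name": "VariantName", "Value": "AllTraffic"},
                    ],
                },
                "Period": 60,
                "Stat": "SampleCount",
            }, "ReturnData": False},
            # Free GPUs: the quantity the alarm actually evaluates
            {"Id": "free", "Expression": f"instances * {gpus_per_instance} - used",
             "ReturnData": True},
        ],
    }

# When ready, pass the request to boto3:
# boto3.client("cloudwatch").put_metric_alarm(**build_unused_gpu_alarm("my-endpoint"))
```

Building the request as a plain dict keeps it easy to review and unit-test before anything touches your AWS account.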
Conclusion
Enhanced metrics for Amazon SageMaker AI endpoints transform how you monitor, improve, and operate production ML workloads. By providing container-level visibility with a configurable publishing frequency, you gain the operational intelligence needed to:
- Accurately attribute costs to individual models in multi-tenant deployments
- Monitor real-time GPU allocation and utilization across inference components
- Track cluster-wide resource availability for capacity planning
- Troubleshoot performance issues with precise, granular metrics
The combination of detailed metrics, flexible publishing frequency, and rich dimensions enables you to build sophisticated monitoring solutions that scale with your ML operations. Whether you're running a single model or managing dozens of inference components across multiple endpoints, enhanced metrics provide the visibility you need to run AI efficiently at scale.
Get started today by enabling enhanced metrics on your SageMaker AI endpoints, and explore the accompanying notebook for full implementation examples and reusable helper functions.
About the authors
Dan Ferguson
Dan Ferguson is a Solutions Architect at AWS, based in New York, USA. As a machine learning services expert, Dan works to help customers on their journey to integrating ML workflows efficiently, effectively, and sustainably.
Marc Karp
Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

