Amazon SageMaker now supports deploying large models through configurable volume size and timeout quotas - devamazonaws.blogspot.com

Amazon SageMaker enables customers to deploy ML models to make predictions (also known as inference) for any use case. You can now deploy large models (up to 500 GB) for inference on Amazon SageMaker's Real-Time and Asynchronous Inference options by configuring the maximum Amazon EBS volume size and timeout quotas for your endpoint. This launch enables customers to leverage SageMaker's fully managed Real-Time and Asynchronous Inference capabilities to deploy and manage large ML models, such as variants of GPT and OPT.
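For reference, below is a minimal boto3 sketch of how these quotas can be set on the production variant when creating an endpoint configuration. The endpoint configuration name, endpoint name, model name, instance type, and numeric values are illustrative assumptions, not part of the announcement.

```python
# Minimal sketch (boto3): create an endpoint configuration for a large model,
# setting the EBS volume size and timeout quotas on the production variant.
# Names, instance type, and values below are illustrative assumptions.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="large-model-config",      # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-opt-model",          # assumes a SageMaker Model already exists
            "InstanceType": "ml.p3.8xlarge",      # illustrative EBS-backed GPU instance
            "InitialInstanceCount": 1,
            # Attach a larger EBS volume so large model artifacts fit on disk.
            "VolumeSizeInGB": 256,
            # Raise the timeouts so downloading and loading a large model
            # does not fail the endpoint startup checks.
            "ModelDataDownloadTimeoutInSeconds": 1800,
            "ContainerStartupHealthCheckTimeoutInSeconds": 1800,
        }
    ],
)

sm.create_endpoint(
    EndpointName="large-model-endpoint",          # hypothetical name
    EndpointConfigName="large-model-config",
)
```

The same production variant settings apply whether the endpoint is used for Real-Time or Asynchronous Inference; for the asynchronous case, an AsyncInferenceConfig would also be supplied in the endpoint configuration.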

Post Updated on September 09, 2022 at 08:06PM
