Amazon SageMaker now supports deploying large models through configurable volume size and timeout quotas

Amazon SageMaker enables customers to deploy ML models to make predictions (also known as inference) for any use case. You can now deploy large models (up to 500 GB) for inference on Amazon SageMaker's Real-Time and Asynchronous Inference options by configuring the maximum Amazon EBS volume size and the timeout quotas for your endpoints. This launch lets customers use SageMaker's fully managed Real-Time and Asynchronous Inference capabilities to deploy and manage large ML models such as variants of GPT and OPT.
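As an illustration, a minimal boto3 sketch of what raising these quotas for a real-time endpoint might look like is shown below. The endpoint configuration name, endpoint name, model name, and instance type are hypothetical placeholders, and the model is assumed to have already been created in SageMaker; the volume setting applies only to instance types that use EBS-backed storage.

```python
import boto3

sm = boto3.client("sagemaker")

# Create an endpoint configuration with a larger EBS volume and longer
# timeouts so a multi-hundred-GB model artifact can be downloaded and loaded.
sm.create_endpoint_config(
    EndpointConfigName="large-model-config",  # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-large-model",     # hypothetical model created beforehand
            "InstanceType": "ml.p3.2xlarge",   # placeholder; must be an EBS-backed instance type
            "InitialInstanceCount": 1,
            # Larger EBS volume so the model artifact fits on disk.
            "VolumeSizeInGB": 512,
            # More time to download a very large model artifact from Amazon S3.
            "ModelDataDownloadTimeoutInSeconds": 2400,
            # More time for the container to load weights before health checks must pass.
            "ContainerStartupHealthCheckTimeoutInSeconds": 2400,
        }
    ],
)

# Deploy the configuration as a real-time endpoint.
sm.create_endpoint(
    EndpointName="large-model-endpoint",       # hypothetical name
    EndpointConfigName="large-model-config",
)
```

The same volume and timeout fields are set per production variant, so an asynchronous endpoint would use the identical settings with an `AsyncInferenceConfig` added to the endpoint configuration.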

Post Updated on September 09, 2022 at 08:06PM
