%pip install sagemaker --upgrade --quiet
import sagemaker
from sagemaker.djl_inference.model import DJLModel
role = sagemaker.get_execution_role() # execution role for the endpoint
session = sagemaker.session.Session() # sagemaker session for interacting with different AWS APIs
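If you run this notebook outside of SageMaker (for example, locally), get_execution_role() raises a ValueError because no execution role is attached. A minimal fallback sketch, assuming you already have an IAM role with SageMaker permissions; "MySageMakerExecutionRole" is a placeholder name:

import boto3

try:
    role = sagemaker.get_execution_role()
except ValueError:
    # No role is attached outside SageMaker; look one up by name instead.
    # "MySageMakerExecutionRole" is a placeholder -- substitute your own role.
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="MySageMakerExecutionRole")["Role"]["Arn"]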
Step 2: Start building SageMaker endpoint¶
In this step, we will build a SageMaker endpoint from scratch.
Getting the container image URI (optional)¶
Check out available images: Large Model Inference available DLC
# Choose a specific version of the LMI image directly:
# image_uri = "763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.28.0-lmi10.0.0-cu124"
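Alternatively, you can look up the image URI programmatically with sagemaker.image_uris.retrieve. A minimal sketch, assuming your SDK release supports the "djl-lmi" framework key (verify the key and version against the SDK docs for your region):

# image_uri = sagemaker.image_uris.retrieve(
#     framework="djl-lmi",  # assumed framework key -- check your SDK version
#     region=session.boto_session.region_name,
#     version="0.28.0",
# )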
Create SageMaker model¶
Here we are using the LMI Python SDK integration (DJLModel) to create the model.
Check out more configuration options.
model_id = "OpenAssistant/llama2-13b-orca-8k-3319"  # model will be downloaded from the Hugging Face Hub
env = {
    "TENSOR_PARALLEL_DEGREE": "4",       # shard the model across 4 GPUs
    "OPTION_ROLLING_BATCH": "vllm",      # use vLLM for rolling batch inference
    "OPTION_TRUST_REMOTE_CODE": "true",  # allow custom model code from the Hub
}
model = DJLModel(
    model_id=model_id,
    env=env,
    role=role,
)
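Beyond the options above, LMI exposes many more settings through OPTION_* environment variables. A hedged sketch of a few commonly used ones; the values are illustrative rather than tuned recommendations, so verify names and values against the configuration options linked above:

# Illustrative extras -- see the LMI configuration docs for the full list:
# env["OPTION_MAX_ROLLING_BATCH_SIZE"] = "32"  # cap concurrent requests per batch
# env["OPTION_DTYPE"] = "fp16"                 # load weights in half precision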
Create SageMaker endpoint¶
You need to specify the instance type to use and the endpoint name.
instance_type = "ml.g5.12xlarge"
endpoint_name = sagemaker.utils.name_from_base("lmi-model")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    endpoint_name=endpoint_name,
    # Uncomment to give large models more time to download and load (seconds):
    # container_startup_health_check_timeout=3600,
)
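deploy() blocks until the endpoint is in service, but you can also inspect the endpoint yourself (for example, from another session). A minimal sketch using the boto3 SageMaker client:

import boto3

sm_client = boto3.client("sagemaker", region_name=session.boto_session.region_name)
desc = sm_client.describe_endpoint(EndpointName=endpoint_name)
print(desc["EndpointStatus"])  # "InService" once the endpoint is ready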
Step 3: Test and benchmark the inference¶
predictor.predict(
    {"inputs": "tell me a story of the little red riding hood", "parameters": {}}
)
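An empty parameters dict falls back to the server defaults. A hedged sketch passing a few common generation parameters; the names below follow the usual LMI/vLLM rolling-batch conventions, so verify them against the configuration options linked above:

predictor.predict(
    {
        "inputs": "tell me a story of the little red riding hood",
        "parameters": {
            "max_new_tokens": 256,  # cap the length of the generation
            "temperature": 0.7,     # soften the sampling distribution
            "do_sample": True,      # sample instead of greedy decoding
        },
    }
)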
%%timeit -n3 -r1
predictor.predict(
    {"inputs": "tell me a story of the little red riding hood", "parameters": {}}
)
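%%timeit gives a quick per-call estimate. For a slightly more controlled benchmark, you can time each request yourself; a minimal sketch using time.perf_counter:

import time

payload = {"inputs": "tell me a story of the little red riding hood", "parameters": {}}
latencies = []
for _ in range(5):
    start = time.perf_counter()
    predictor.predict(payload)
    latencies.append(time.perf_counter() - start)

print(f"mean: {sum(latencies) / len(latencies):.2f}s, worst: {max(latencies):.2f}s")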
Clean up the environment¶
session.delete_endpoint(endpoint_name)
session.delete_endpoint_config(endpoint_name)
model.delete_model()