
# Deploying DeepSeek-R1 Distill Llama Models on Amazon Bedrock

Deploying the DeepSeek-R1 Distill Llama models on Amazon Bedrock uses the Custom Model Import feature, which lets you bring externally fine-tuned models into the Bedrock environment. Once imported, the model runs on Bedrock's serverless infrastructure and is reachable through the same unified API as Bedrock's native models.
- Model Compatibility: Ensure your DeepSeek-R1 Distill model is based on a supported architecture, such as Llama 2, Llama 3, Llama 3.1, Llama 3.2, or Llama 3.3. Amazon Bedrock supports these architectures for custom model imports.
- Model Files Preparation: Prepare the necessary model files in the Hugging Face format, including:
  - Model weights in `.safetensors` format
  - Configuration file (`config.json`)
  - Tokenizer files (`tokenizer_config.json`, `tokenizer.json`, `tokenizer.model`)

  These files should be stored in an Amazon S3 bucket accessible to your AWS account.

  > Important: The model is already available in safetensors format, so we don't need to prepare the files separately.
- Upload the `DeepSeek-R1-Distill-Llama-8B` model files to an S3 bucket in a Region where Custom Model Import is available: `us-east-1` or `us-west-2`.
- In the Bedrock console, select "Custom models" and choose "Import model."
- Provide the S3 URI where your model files are stored (e.g., `s3://your-s3-bucket-name/DeepSeek-R1-Distill-Llama-8B/`).
- Follow the prompts to complete the import process.
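As an alternative to the console flow above, the import job can be started and the resulting model invoked with boto3. The sketch below is illustrative, not the post's own code: the job name and IAM role ARN are placeholders, the imported-model ARN format reflects how Bedrock names Custom Model Import models, and the Llama-style request body fields are assumptions to adapt for your use case.

```python
import json


def import_job_params(job_name: str, model_name: str,
                      role_arn: str, s3_uri: str) -> dict:
    """Request parameters for Bedrock's CreateModelImportJob API."""
    return {
        "jobName": job_name,
        "importedModelName": model_name,
        "roleArn": role_arn,
        "modelDataSource": {"s3DataSource": {"s3Uri": s3_uri}},
    }


def imported_model_arn(region: str, account_id: str, model_id: str) -> str:
    """ARN format Bedrock assigns to models brought in via Custom Model Import."""
    return f"arn:aws:bedrock:{region}:{account_id}:imported-model/{model_id}"


if __name__ == "__main__":
    import boto3

    # Start the import job (role and job name are placeholders).
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    job = bedrock.create_model_import_job(
        **import_job_params(
            job_name="deepseek-r1-distill-import",
            model_name="DeepSeek-R1-Distill-Llama-8B",
            role_arn="arn:aws:iam::your-account-id:role/your-import-role",
            s3_uri="s3://your-s3-bucket-name/DeepSeek-R1-Distill-Llama-8B/",
        )
    )
    print("Started import job:", job["jobArn"])

    # Once the job completes and the model is active, invoke it through the
    # bedrock-runtime client. Llama-architecture imports take a Llama-style
    # body; the exact fields here are assumptions.
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
    arn = imported_model_arn("us-east-1", "your-account-id", "your-model-id")
    response = runtime.invoke_model(
        modelId=arn,
        body=json.dumps({"prompt": "What is 2 + 2?", "max_gen_len": 512}),
    )
    print(json.loads(response["body"].read()))
```

Note that the IAM role passed as `roleArn` must grant Bedrock read access to the S3 bucket holding the model files, or the import job will fail at startup.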
When invoking the imported model programmatically (for example, with the AWS SDK), replace `'your-account-id'` and `'your-model-id'` with your specific AWS account ID and model ID, respectively.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.