
Turbocharge Your Favorite LLM Client with Bedrock

Connect the desktop LLM client MSTY to AWS Bedrock and enjoy a friendly interface with powerful foundation models in minutes.

Qinjie Zhang
Amazon Employee
Published Mar 13, 2025
Last Modified Mar 26, 2025

Background

It's common to use desktop applications like MSTY, Chatbox AI, and LM Studio to simplify working with LLMs. Most of them offer a friendly user interface and a more feature-packed experience than web apps. You can switch between providers and models to save costs, connect to local models for privacy and security, or connect to popular LLM providers such as OpenAI and Anthropic using API keys.
But what if you also want the wide variety of foundation models offered in AWS Bedrock? This blog provides a step-by-step guide on connecting your favorite local LLM client to AWS Bedrock so you can enjoy the latest LLMs on the market.
Steps:
  • Deploy an OpenAI-compatible Bedrock proxy API endpoint using bedrock-access-gateway.
  • Configure the LLM client (MSTY in this case) to use the Bedrock proxy API.

Step 1: Deploy Bedrock Proxy API

Bedrock Access Gateway is an open-source project by AWS. It provides an OpenAI-compatible API interface that allows applications designed for OpenAI to connect with AWS Bedrock's diverse foundation models without code changes.

Request AWS Model Access

We need to enable access for the models we would like to use.
1. In the AWS console, go to `Amazon Bedrock > Bedrock configurations > Model access`; modify the model access and request access to the desired models.
Request Model Access
2. Note that some models can only be used through cross-region inference. To use these models, you need to request model access in all AWS Regions covered by the inference profile. For example, request model access in `us-east-1`, `us-east-2`, and `us-west-2` for the US Regions.
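Optionally, you can check from the terminal which foundation models are offered in each Region. This is a minimal sketch using the AWS CLI; it assumes the CLI is configured with credentials for your account, and it lists the models offered in each Region (access grants themselves are still managed in the console):

```bash
# List the foundation model IDs offered in each US Region used by
# cross-region inference. Assumes AWS CLI v2 with valid credentials.
for region in us-east-1 us-east-2 us-west-2; do
  echo "== $region =="
  aws bedrock list-foundation-models \
    --region "$region" \
    --query "modelSummaries[].modelId" \
    --output text
done
```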

Create a Secret

Our API endpoint will be exposed to the public. We will use a secret key to protect it. Requests without a valid key will be rejected.
1. In the AWS console, go to `Secrets Manager > Store a new secret`. Create a key/value pair with key = `api_key` (don't change this key name).
Create a Secret as API Key
2. After creation, take note of the Secret ARN.
Get Secret ARN
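If you prefer the terminal, the same secret can be created with the AWS CLI. A minimal sketch follows; the secret name `BedrockProxyAPIKey` and the key value are placeholders, but the `api_key` field name must stay as-is. The command prints the Secret ARN you need:

```bash
# Create the secret with the mandatory "api_key" field and print its ARN.
# The secret name and key value below are placeholders; choose your own.
aws secretsmanager create-secret \
  --name BedrockProxyAPIKey \
  --secret-string '{"api_key": "replace-with-a-long-random-string"}' \
  --query "ARN" \
  --output text
```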

Deploy bedrock-access-gateway

The `bedrock-access-gateway` project provides CloudFormation templates that can be deployed with one click.
1. Go to the bedrock-access-gateway repository on GitHub. Scroll down to the CloudFormation stack section. Choose to deploy either the Lambda or the Fargate version.
Choose a CloudFormation Template
2. Set the Secret ARN for the API key from the previous step. Set a default model ID, which can be found in the Bedrock models list; for example, I chose `us.anthropic.claude-3-7-sonnet-20250219-v1:0`. Check the acknowledgement.
Configure CloudFormation Template
3. In the AWS console, go to CloudFormation and check the stack. Wait for the deployment to complete, which takes around 5 minutes.
4. After completion, find the `APIBaseUrl` in the `Outputs` of the stack. This will be the endpoint URL.
Get the API Base URL
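You can also read this output from the terminal once the stack is complete. A small sketch, assuming a stack named `BedrockProxyAPI` (replace it with the stack name you chose during deployment):

```bash
# Print the APIBaseUrl output of the deployed stack.
# "BedrockProxyAPI" is a placeholder; use your actual stack name.
aws cloudformation describe-stacks \
  --stack-name BedrockProxyAPI \
  --query "Stacks[0].Outputs[?OutputKey=='APIBaseUrl'].OutputValue" \
  --output text
```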

Testing

Let's test the endpoint from the terminal to make sure it's working; the sketch after this list covers all three steps.
1. Configure two environment variables, one for the secret key value and one for the API base URL. Replace the placeholder values with your own from the previous steps.
2. Example 1: Invoke the endpoint to list the models available in Bedrock.
3. Example 2: Chat with the model.
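The endpoint URL and API key below are placeholders; the `/models` and `/chat/completions` paths follow the standard OpenAI API format that the gateway exposes:

```bash
# 1. Environment variables -- replace both values with your own
#    from the previous steps.
export OPENAI_API_KEY="your-secret-api-key-value"
export OPENAI_BASE_URL="https://xxxx.execute-api.us-east-1.amazonaws.com/api/v1"

# 2. Example 1: list the models available through the gateway.
curl -s "$OPENAI_BASE_URL/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# 3. Example 2: send a chat completion request to a model.
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        "messages": [{"role": "user", "content": "Hello! Which model are you?"}]
      }'
```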

Step 2: Configure MSTY

MSTY is a feature-rich desktop application for interacting with large language models (LLMs). It provides a user-friendly interface that allows seamless switching between different AI providers and models, including both local and cloud-based options. It supports knowledge stacks for context-aware responses, internet search capabilities, and customizable chat experiences.

Add Model Provider

Assuming you already have MSTY installed on your computer, we will configure it to use the endpoint above.
1. In MSTY, go to `Settings > Remote Model Providers`. Click `+ Add Remote Model Provider`.
Add a Remote Model Provider
2. Configure the model provider's API Endpoint URL and API Key. Click `Fetch Models` to fetch the model list, then search for and add the desired models.
Configure Remote Model Provider

Test Endpoint using MSTY

In a new chat window, choose the newly added model, e.g. `us.anthropic.claude-3-7-sonnet-20250219-v1:0`.
1. Test that we are able to chat with the model.
Chat Example 1
2. If you have set up a knowledge stack, test that the model is able to answer questions from your selected knowledge stack.
Chat Example 2
3. Enable internet access. Test that the model is able to answer questions using internet search results.
Chat Example 3

Summary

This guide demonstrates how to seamlessly integrate popular local LLM clients (MSTY in this blog) with AWS Bedrock's extensive foundation model offerings.
This integration offers the best of both worlds—the user-friendly interface and features of desktop LLM clients combined with AWS Bedrock's enterprise-grade security and diverse model selection. The solution enables users to leverage advanced AI capabilities while maintaining flexibility in their choice of interface, creating a more versatile and powerful AI workflow that adapts to individual preferences and requirements.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
