Spring AI 1.0 Brings AI to the Developer Masses

Spring AI 1.0 has been released, enabling millions of developers to quickly and easily add AI to their Java and Kotlin systems.

James Ward
Amazon Employee
Published May 20, 2025
Last Modified May 21, 2025
Spring AI 1.0 has been released, enabling millions of developers to quickly and easily add AI to their Java and Kotlin systems. Before Spring AI, developers often had to reach for languages or frameworks that were unfamiliar or not already in use in their organization. Now the many Spring developers who have built everything from simple microservices to massive enterprise systems can easily add LLM-based chat, MCP, RAG, and many other AI features to their systems.
Many organizations use Spring because of its expansive, “batteries included” approach. For example, if you need observability, just add Spring Boot Actuator and it works across the many “vertical” areas Spring covers (Spring Data, Spring Web, etc.) and, of course, Spring AI. The same is true for security with Spring Security and many other foundational services. Like all of the other Spring “verticals,” Spring AI fits well alongside the rest of the Spring ecosystem. Of course, you can use it standalone as well.
For those running Spring on AWS, Spring AI provides a number of ways to get to production at scale. For example, Spring AI provides Amazon Bedrock support, enabling easy integration of AI models for multi-modal chat, embeddings, and more. Spring AI supports MCP servers, enabling easy integrations between LLMs and the data and services that the LLMs don’t have direct access to. These MCP clients and servers can be run on AWS, creating the foundation for AI agents which can be assigned “tasks” and then autonomously carry out the planning and execution. Read more about how to do that: Running MCP-Based Agents (Clients & Servers) on AWS. Watch the associated video: Building Agents with AWS: Complete Tutorial (Java, Spring AI, Amazon Bedrock & MCP). MCP can also be used for inter-agent communication by exposing agents as MCP servers themselves. Read an overview about inter-agent communication with MCP: Open Protocols for Agent Interoperability Part 1: Inter-Agent Communication on MCP.
To pull this all together, let’s walk through an end-to-end inter-agent scenario that uses Bedrock for chat/inference and MCP for integrating AI agents with data and other agents. The scenario for this example is a human resources (HR) employee who asks a question about employee skills to an HR Agent. The HR Agent relies on another agent, the Employee Info Agent, to answer the user’s question. The Employee Info Agent integrates data from an internal employee database. At a high level, the flow is: HR user → HR Agent → Employee Info Agent → employee data MCP server → employee database.
To achieve this, let’s start with the MCP server that exposes employee data to the LLM. This one is written in Kotlin (for brevity) while the rest will be written in Java. We start by exposing the employee data as MCP tools:
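A minimal sketch of such a tools class might look like the following (the `Employee` data, names, and skills here are illustrative stand-ins for the internal employee database):

```kotlin
import org.springframework.ai.tool.annotation.Tool
import org.springframework.ai.tool.annotation.ToolParam
import org.springframework.stereotype.Service

// Illustrative in-memory data; a real system would query the employee database.
@Service
class EmployeeTools {

    private val employees = mapOf(
        "Carlos Garcia" to listOf("Java", "Spring", "Kubernetes"),
        "Maya Patel" to listOf("Kotlin", "Terraform", "AWS"),
    )

    @Tool(description = "Get the names of all employees")
    fun getEmployeeNames(): List<String> =
        employees.keys.toList()

    @Tool(description = "Get the skills of an employee, looked up by full name")
    fun getEmployeeSkills(
        @ToolParam(description = "The employee's full name") name: String
    ): List<String> =
        employees[name] ?: emptyList()
}
```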
The tool descriptions and tool parameters will describe to the LLM what data and services are possible and how to call them with the right inputs.
To set up our Spring MVC or WebFlux server to expose this over HTTP (using the MCP SSE transport), we just create a new Bean that exposes our tools:
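A sketch of that Bean, assuming the `spring-ai-starter-mcp-server-webmvc` dependency is on the classpath to auto-configure the SSE transport:

```kotlin
import org.springframework.ai.tool.ToolCallbackProvider
import org.springframework.ai.tool.method.MethodToolCallbackProvider
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class McpServerConfig {

    // Registers the @Tool methods on EmployeeTools with the
    // auto-configured MCP server.
    @Bean
    fun employeeTools(employeeTools: EmployeeTools): ToolCallbackProvider =
        MethodToolCallbackProvider.builder()
            .toolObjects(employeeTools)
            .build()
}
```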
Once we’ve started this MCP server (by running the Spring application), we can use the MCP Inspector to investigate the tools:
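The inspector can be launched locally (it requires Node.js) and pointed at the server’s SSE endpoint:

```shell
# Launch the MCP Inspector, then connect it to the running
# MCP server's SSE endpoint, e.g. http://localhost:8080/sse
npx @modelcontextprotocol/inspector
```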
The MCP server returns the requested employee data. So far there isn’t an agent or LLM involved, only an MCP server which will be used to provide additional context to an LLM.
For the next piece we will create the Employee Info Agent which answers natural language queries about employees using the employee data MCP server.
First we need to configure our Spring AI agent to use Bedrock for AI chat using our employee data MCP server (in this case using application.properties):
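A sketch of that configuration (the region, model id, and `EMPLOYEE_DATA_URL` environment variable name are illustrative assumptions):

```properties
# Bedrock Converse chat (via the Spring AI Bedrock Converse starter)
spring.ai.bedrock.aws.region=us-east-1
# Amazon Nova Pro; any Bedrock model you have access to would work
spring.ai.bedrock.converse.chat.options.model=us.amazon.nova-pro-v1:0

# MCP client pointing at the employee data MCP server (SSE transport),
# defaulting to localhost when EMPLOYEE_DATA_URL is not set
spring.ai.mcp.client.sse.connections.employee-data.url=${EMPLOYEE_DATA_URL:http://localhost:8080}
```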
This uses the Amazon Nova Pro model in Bedrock; there are many other models you could use. An MCP client is configured to point to the employee data MCP server, with a default localhost-based setting used when the URL isn’t set by an environment variable or other configuration.
Next we will create an Agent using Spring AI and Java, starting with a Bedrock chat client and an MCP client (connected to the configured employee data MCP server):
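A sketch of the chat client wiring, assuming the auto-configured `ChatClient.Builder` (backed by Bedrock Converse) and the `McpSyncClient` beans from the MCP client starter; the system prompt wording is illustrative:

```java
import java.util.List;

import io.modelcontextprotocol.client.McpSyncClient;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.mcp.SyncMcpToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AgentConfig {

    // The ChatClient.Builder is auto-configured against Bedrock Converse;
    // the McpSyncClient beans come from the configured MCP connections.
    @Bean
    ChatClient chatClient(ChatClient.Builder builder, List<McpSyncClient> mcpClients) {
        return builder
                .defaultSystem("""
                        You answer questions about employees.
                        Abbreviate first names to their first initial,
                        e.g. "Carlos Garcia" becomes "C. Garcia".
                        """)
                .defaultToolCallbacks(
                        new SyncMcpToolCallbackProvider(mcpClients).getToolCallbacks())
                .build();
    }
}
```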
The default system prompt is configured to perform a small natural-language transformation on the data we get from the employee data MCP server (abbreviating first names). Now, with the chat client, we can build a multi-turn agent that may perform multiple “steps” to complete its task or return the results of a provided query. Multi-turn can be thought of as a dynamic workflow engine that uses the AI model to determine which tools to call and then, based on their outputs, may make additional tool calls, iterating until it can complete the request. Here is a basic implementation:
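A minimal sketch of such an agent (the class and method names are illustrative):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Component;

@Component
public class EmployeeInfoAgent {

    private final ChatClient chatClient;

    public EmployeeInfoAgent(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    // Spring AI runs the multi-turn loop for us: the model decides which
    // MCP tools to call, the tool results are fed back to the model, and
    // the cycle repeats until the model can answer the query.
    public String query(String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```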
In this case the query method uses the chat client, which in turn uses the MCP servers, to complete the task of answering employee-related queries. At this point the agent isn’t accessible outside of this program. In our scenario we want to combine this agent with a more general HR Agent, which likely has access to many different agents. This inter-agent communication can be achieved with MCP.
Like before we need to expose our data/services as an MCP tool:
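A sketch of that tool, wrapping the agent’s query method (names and descriptions are illustrative):

```java
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
import org.springframework.stereotype.Service;

@Service
public class EmployeeInfoTools {

    private final EmployeeInfoAgent agent;

    public EmployeeInfoTools(EmployeeInfoAgent agent) {
        this.agent = agent;
    }

    // Delegates natural language questions to the Employee Info Agent.
    @Tool(description = "Answer natural language questions about employees and their skills")
    public String inquire(
            @ToolParam(description = "A natural language question about employees") String question) {
        return agent.query(question);
    }
}
```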
To configure this new MCP server to be exposed via the MCP SSE transport, we add a Bean to expose the new tools:
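As with the Kotlin server, a sketch of that Bean, assuming the `spring-ai-starter-mcp-server-webmvc` dependency provides the SSE transport:

```java
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class McpServerConfig {

    // Exposes the EmployeeInfoTools @Tool methods through the
    // auto-configured MCP server.
    @Bean
    ToolCallbackProvider employeeInfoTools(EmployeeInfoTools tools) {
        return MethodToolCallbackProvider.builder()
                .toolObjects(tools)
                .build();
    }
}
```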
After running this Spring application we can now test it again with the MCP inspector:
Now our Employee Info Agent, exposed via an MCP server, can handle natural language queries using the employee data MCP server. We can now call this agent from another agent. For this example, we need to configure the HR Agent to use the Nova Pro model via Bedrock and utilize the Employee Info Agent:
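A sketch of the HR Agent’s configuration (the port, region, model id, and `EMPLOYEE_INFO_URL` environment variable name are illustrative assumptions):

```properties
# Run on a different port than the Employee Info Agent
server.port=8081

spring.ai.bedrock.aws.region=us-east-1
spring.ai.bedrock.converse.chat.options.model=us.amazon.nova-pro-v1:0

# MCP client pointing at the Employee Info Agent's MCP server
spring.ai.mcp.client.sse.connections.employee-info.url=${EMPLOYEE_INFO_URL:http://localhost:8080}
```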
The HR agent needs to be exposed as a REST service (instead of an MCP service) using Spring Web:
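A sketch of the chat client Bean and REST controller together (endpoint path, prompt, and class names are illustrative):

```java
import java.util.List;

import io.modelcontextprotocol.client.McpSyncClient;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.mcp.SyncMcpToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@Configuration
class HrAgentConfig {

    // Bedrock-backed chat client with the Employee Info Agent's
    // MCP tools registered as callable tools.
    @Bean
    ChatClient chatClient(ChatClient.Builder builder, List<McpSyncClient> mcpClients) {
        return builder
                .defaultSystem("You are an HR assistant. Use the available tools to answer questions.")
                .defaultToolCallbacks(
                        new SyncMcpToolCallbackProvider(mcpClients).getToolCallbacks())
                .build();
    }
}

@RestController
class HrAgentController {

    private final ChatClient chatClient;

    HrAgentController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    // e.g. GET /inquire?question=Who%20knows%20Kubernetes%3F
    @GetMapping("/inquire")
    String inquire(@RequestParam String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```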
Like before, we create a Bean for the chat client, based on Bedrock, which connects to the configured MCP servers (in this case only the Employee Info Agent), and then we expose a REST endpoint via a REST controller.
With a few lines of Spring AI code we’ve been able to create an agent that communicates with another agent, which communicates with some backing tools! Get the complete example source code. Of course, you can run all of this on AWS! The source includes a CloudFormation template that runs everything, in containers, on Amazon ECS.
These high-level abstractions are just the surface of the AI capabilities you can easily build with Spring AI. For example, add Spring Boot Actuator for observability, or add Spring Security for authentication and authorization. Read more about securing Spring AI MCP servers with Spring Security.
The Spring AI team has created a number of sample applications and we’ve also created some samples that are deployable on AWS.
The Spring AI 1.0 release is a huge step forward in bringing the power of AI to millions of developers! Whether you are building MCP servers or complex AI agents, Spring AI makes it easy to build for the new AI-powered future and run the inference and compute on AWS! Let me know in the comments if there are other things you’d like to see in this space.
Note: Co-authored with Josh Long, Spring Developer Advocate
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.