Beyond Auto-Replies: Building an AI-Powered E-commerce Support System

This article implements an AI-driven multi-agent system for automated e-commerce customer email support. It covers the architecture and setup of specialized AI agents using the Multi-Agent Orchestrator framework, integrating automated processing with human-in-the-loop oversight. The guide explores email ingestion, intelligent routing, automated response generation, and human verification, providing a comprehensive approach to balancing AI efficiency with human expertise in customer support.

Published Sep 12, 2024
Ever received an automated customer support email so useless, you wondered if it was secretly written by a random word generator? "We value your inquiry about your missing order. Have you tried turning your mailbox off and on again?". Hold onto your sanity, because I've discovered a customer service revolution that makes those robo-responses look like cave paintings: say hello to AI Agents and the Multi-Agent Orchestrator framework.
In this article, I'm fixing the messy world of online store help, where shoppers get angry quicker than people leaving an email list after a bad ad. I'll show you how to build an AI-powered support system so effective, it makes those "Your call is important to us" messages sound sincere šŸ¤©.
We'll explore how the Multi-Agent Orchestrator and Project Lakechain team up to create the superhero of customer support - no cape or secret identity required. Ready to rescue customers from the clutches of confusion? Let's swoop in!
Our intelligent email processing system leverages Amazon SQS, Lakechain's EmailTextProcessor, and the Multi-Agent Orchestrator framework to efficiently handle customer inquiries. Here's how it works:
  • Incoming emails are ingested into an SQS queue
  • Lakechain's EmailTextProcessor extracts the text and forwards it to another SQS queue
  • A Lambda function processes this queue using the Multi-Agent Orchestrator, which:
    • Determines the most appropriate AI Agent for each query
    • Generates a response using the chosen agent
  • The response is then sent to a final SQS queue for further processing or direct customer communication
Before we dive into the technical implementation, let's visualize the high-level workflow of our intelligent email processing system:
This diagram outlines the journey of customer emails through our AI-powered support system:
  1. Inbound emails first land in a queue for raw emails.
  2. The Email Content Extraction step processes these raw emails, extracting the relevant text.
  3. Processed emails are then placed in another queue, ready for AI analysis.
  4. The Response Generation phase is where our Multi-Agent Orchestrator shines, determining the most appropriate AI agent to handle the query and generating a response.
  5. Finally, the generated responses are queued for outbound delivery to customers.
This streamlined architecture ensures efficient email processing, intelligent query routing, and prompt customer response generation.

The Cast of AI Agents

Based on the kinds of emails we could get, here's our help team:
  1. šŸ“¦ Order Management Agent: Handles everything related to orders, shipments, and returns.
  2. šŸ·ļø Product Information Agent: Answers questions about product specifications, compatibility, and availability.
  3. šŸ’ Customer Service Agent: For all those general inquiries and account stuff. It's basically a digital version of that super-helpful store clerk we all wish we had.
  4. šŸ‘¤ Human Agent: Handles complex issues that require human intervention.
  5. šŸ¤–šŸ‘€ AI with Human Verification Agent: Generates AI responses for high-priority inquiries, which are then verified by a human.

Orchestrating our AI Team

Setting Up Your Project:
A. Create a new project directory and navigate into it.
B. Initialize a Node.js project and install the necessary dependencies.
C. Create a src directory and the agents.ts file.
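The steps above can be run roughly as follows. This is a sketch: the project name, the `multi-agent-orchestrator` npm package name, and the dev tooling are assumptions to adjust to your environment.

```shell
# A. Create a new project directory and navigate into it
mkdir ecommerce-support && cd ecommerce-support

# B. Initialize a Node.js project and install dependencies
npm init -y
npm install multi-agent-orchestrator
npm install --save-dev typescript ts-node @types/node
npx tsc --init

# C. Create the src directory and the agents.ts file
mkdir src
touch src/agents.ts
```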

Step 1: Create mock functions:

Create a new file named agents.ts inside the src directory:
These imports and mock databases set up the foundation for our system. In a real-world scenario, you should replace these mock databases with actual database connections.
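As an illustration, the mock stores and helper functions might look like the sketch below. All names here (`mockOrders`, `orderLookup`, and so on) are hypothetical stand-ins rather than the article's exact code.

```typescript
// Hypothetical mock databases standing in for real data stores.
// In production, replace these with actual database connections.
interface Order {
  orderId: string;
  status: string;
  items: string[];
}

const mockOrders: Record<string, Order> = {
  "12345": { orderId: "12345", status: "shipped", items: ["Wireless Mouse"] },
  "67890": { orderId: "67890", status: "processing", items: ["USB-C Cable"] },
};

const mockShipments: Record<string, string> = {
  "12345": "In transit - expected delivery in 2 days",
};

// Retrieves order details, or null if the order is unknown.
function orderLookup(orderId: string): Order | null {
  return mockOrders[orderId] ?? null;
}

// Returns the latest shipping status for an order.
function shipmentTracker(orderId: string): string {
  return mockShipments[orderId] ?? "No shipment information found";
}

// Initiates a return request for a known order.
function returnProcessor(orderId: string): string {
  return orderId in mockOrders
    ? `Return initiated for order ${orderId}`
    : `Cannot process return: order ${orderId} not found`;
}
```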

Step 2: Order Management Agent Setup:

In this step, we create an Order Management Agent using the BedrockLLMAgent, which combines Amazon Bedrock's powerful language understanding capabilities with the Converse API. This agent is designed to handle complex order-related inquiries and perform actions like order lookup, shipment tracking, and return processing.
The key feature of this setup is the definition and implementation of tools that the LLM can call to execute specific tasks based on customer requests. We've defined three tools:
  1. OrderLookup: Retrieves order details from the database
  2. ShipmentTracker: Gets real-time shipping information
  3. ReturnProcessor: Initiates and manages return requests
Here's how the tools are defined and implemented:
  1. Tool Definition: We define the tools in the orderManagementToolConfig array. Each tool has a name, description, and an input schema that specifies what information is needed to use the tool.
  2. Tool Implementation: The orderManagementToolHandler function contains the actual implementation of these tools. When the LLM determines it needs to use a tool, it calls this handler. The handler then:
    • Identifies which tool is being called
    • Executes the corresponding function (e.g., orderLookup, shipmentTracker, or returnProcessor)
    • Formats the result and adds it to the conversation
  3. LLM Integration: The BedrockLLMAgent is configured with these tools using the toolConfig property. This allows the LLM to:
    • Understand what tools are available
    • Decide when to use a tool based on the customer's request
    • Call the appropriate tool and receive the results
This setup enables the Order Management Agent to seamlessly combine natural language understanding with specific order-related actions. For example, when a customer asks about their order status, the LLM can recognize the need for order information, use the OrderLookup tool to fetch the details, and then formulate a natural language response based on the retrieved data.
By defining these tools and their implementation, we're giving the LLM the ability to perform concrete actions in response to customer queries, making the agent much more capable and useful in handling real-world order management scenarios.
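To make this concrete, here is a sketch of what the tool configuration and handler could look like. The toolSpec shape follows the Bedrock Converse API, but the handler signature, the inline mock implementations, and all names here are illustrative assumptions rather than the article's exact code.

```typescript
// Tool definitions in the Bedrock Converse API toolSpec shape.
const orderManagementToolConfig = [
  {
    toolSpec: {
      name: "OrderLookup",
      description: "Retrieves order details from the database",
      inputSchema: {
        json: {
          type: "object",
          properties: { orderId: { type: "string", description: "The order ID to look up" } },
          required: ["orderId"],
        },
      },
    },
  },
  {
    toolSpec: {
      name: "ShipmentTracker",
      description: "Gets real-time shipping information",
      inputSchema: {
        json: { type: "object", properties: { orderId: { type: "string" } }, required: ["orderId"] },
      },
    },
  },
  {
    toolSpec: {
      name: "ReturnProcessor",
      description: "Initiates and manages return requests",
      inputSchema: {
        json: { type: "object", properties: { orderId: { type: "string" } }, required: ["orderId"] },
      },
    },
  },
];

// Hypothetical mock implementations (stand-ins for real services).
const mockOrders: Record<string, { status: string }> = { "12345": { status: "shipped" } };
function orderLookup(orderId: string) { return mockOrders[orderId] ?? null; }
function shipmentTracker(orderId: string) { return orderId === "12345" ? "In transit" : "Unknown"; }
function returnProcessor(orderId: string) { return `Return initiated for order ${orderId}`; }

interface ToolUse { name: string; toolUseId: string; input: { orderId: string } }

// Dispatches a tool call from the LLM to the matching implementation
// and formats the result as a Converse-style toolResult block.
function orderManagementToolHandler(toolUse: ToolUse) {
  let result: unknown;
  switch (toolUse.name) {
    case "OrderLookup": result = orderLookup(toolUse.input.orderId); break;
    case "ShipmentTracker": result = shipmentTracker(toolUse.input.orderId); break;
    case "ReturnProcessor": result = returnProcessor(toolUse.input.orderId); break;
    default: throw new Error(`Unknown tool: ${toolUse.name}`);
  }
  return {
    toolResult: { toolUseId: toolUse.toolUseId, content: [{ json: { result } }] },
  };
}
```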

Step 3: Customer Service Agent Setup:

Important Note: The AmazonBedrockAgent requires a pre-existing Bedrock Agent with an attached knowledge base. This agent should be created and configured in the AWS Bedrock console before using it in this code. The knowledge base should contain information relevant to customer service, such as:
  • Frequently Asked Questions (FAQs)
  • Company policies (returns, shipping, privacy, etc.)
  • Account management procedures
  • Troubleshooting guides for common issues
Replace "your-agent-id" and "your-agent-alias-id" with the actual IDs of your Bedrock Agent.
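For reference, the configuration passed to the agent typically looks like the sketch below. The field names mirror the framework's documented AmazonBedrockAgent options, but verify them against the version you install; the description text is my own illustration.

```typescript
// Sketch of the options typically passed to AmazonBedrockAgent.
interface AmazonBedrockAgentOptions {
  name: string;
  description: string;
  agentId: string;       // ID of a pre-existing Bedrock Agent
  agentAliasId: string;  // alias ID of that agent
}

const customerServiceAgentOptions: AmazonBedrockAgentOptions = {
  name: "Customer Service Agent",
  description:
    "Handles general inquiries, account questions, FAQs, and company policies " +
    "using a Bedrock Agent backed by a knowledge base",
  agentId: "your-agent-id",             // replace with your Bedrock Agent ID
  agentAliasId: "your-agent-alias-id",  // replace with your agent alias ID
};

// In the real project this object would be passed to the constructor:
// const customerServiceAgent = new AmazonBedrockAgent(customerServiceAgentOptions);
```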

Step 4: Product Information Agent Setup:

We chose the BedrockLLMAgent, configured with the Claude 3 Haiku model. This setup offers several advantages:
  1. Knowledge Integration: This setup implements Retrieval-Augmented Generation (RAG). The AmazonKnowledgeBasesRetriever connects to a vector database containing contextual product data. When a query is received, relevant information is retrieved from this database and fed into the language model, allowing it to provide responses based on the most up-to-date and specific product information.
  2. Stateless Interactions: Setting saveChat to false optimizes the agent for independent queries, as product information requests typically don't require context from previous interactions.
This configuration creates a Product Information Agent capable of handling a wide range of product-related inquiries. By using RAG, it ensures that responses are drawing from the latest data in the vector database.
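A configuration sketch for this agent might look as follows. The model ID is the public Claude 3 Haiku identifier; the retriever fields (`knowledgeBaseId`, `numberOfResults`) are illustrative placeholders rather than exact framework options.

```typescript
// Sketch of a RAG-enabled product information agent configuration.
const productInfoAgentOptions = {
  name: "Product Information Agent",
  description: "Answers questions about product specifications, compatibility, and availability",
  modelId: "anthropic.claude-3-haiku-20240307-v1:0", // public Claude 3 Haiku model ID
  saveChat: false, // stateless: product queries rarely need prior context
  retriever: {
    knowledgeBaseId: "your-product-kb-id", // hypothetical placeholder
    numberOfResults: 5, // how many retrieved chunks feed the prompt (assumption)
  },
};
```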

Step 5: Human Agent Setup:

In this setup, we create a Human Agent by extending the base Agent class. This agent is designed to handle specific domains or topics that are beyond the capabilities of our AI models. The key idea here is that we've defined a "domain" of queries that should be directly routed to human operators, rather than being processed by an LLM.
This Human Agent serves several purposes:
  1. Handling Complex Issues: For topics that require nuanced understanding or emotional intelligence, the Human Agent ensures these queries are directed to actual human operators.
  2. Sensitive Matters: Complaints, legal issues, or other sensitive topics that an LLM might not handle appropriately are routed through this agent.
  3. Compliance: For industries with strict regulations or where human oversight is mandatory, this agent ensures that certain types of inquiries are always handled by authorized personnel.
In a real-world implementation, the simulateHumanResponse method would likely integrate with a ticketing system. This could involve placing the query in a queue for human review, sending an email to a relevant department, or triggering an alert in a customer service dashboard.
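The idea can be sketched with a minimal local Agent base class. The real framework's Agent class has more members than shown here, and simulateHumanResponse simply pushes to an in-memory array standing in for a ticketing system.

```typescript
// Minimal local sketch of the base Agent shape (illustrative only).
interface AgentResponse { role: string; content: { text: string }[] }

abstract class Agent {
  constructor(public name: string, public description: string) {}
  abstract processRequest(inputText: string, userId: string, sessionId: string): Promise<AgentResponse>;
}

// Queries routed here bypass the LLM entirely and are queued for humans.
class HumanAgent extends Agent {
  // Stand-in for a ticketing system integration.
  public ticketQueue: { userId: string; query: string }[] = [];

  async processRequest(inputText: string, userId: string, _sessionId: string): Promise<AgentResponse> {
    this.simulateHumanResponse(userId, inputText);
    return {
      role: "assistant",
      content: [{ text: "Your request has been forwarded to our support team. A human agent will reply shortly." }],
    };
  }

  private simulateHumanResponse(userId: string, query: string): void {
    // In production: create a ticket, email a department, or alert a dashboard.
    this.ticketQueue.push({ userId, query });
  }
}
```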

Step 6: AI with Human Verification Agent Setup:

This setup demonstrates how to combine AI efficiency with human oversight for handling sensitive or complex customer inquiries. In this approach, the LLM (customerServiceAgent) is utilized to generate an initial response. This AI-generated response serves as a helpful starting point, potentially saving time and ensuring consistency.
However, recognizing that some domains require absolute accuracy or may involve nuances that AI might miss, a human reviewer then steps in. The human verifier checks the AI-generated response for accuracy, appropriateness, and completeness.
This hybrid approach leverages the strengths of both AI and human expertise:
  1. The LLM quickly generates a draft response, providing efficiency and consistency.
  2. A human expert reviews and refines the response, ensuring accuracy and adding nuance where necessary.
By using a ChainAgent to combine these steps, we create a workflow that balances speed with precision, which is particularly valuable in high-stakes communications or industries with strict regulatory requirements.
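The chain pattern can be sketched as two sequential steps, where the AI draft feeds into a verification step. Both steps below are stubs of my own: in production, the first would call the customerServiceAgent and the second would block on a human review queue rather than resolve immediately.

```typescript
// A step takes the previous step's output and produces the next.
type Step = (input: string) => Promise<string>;

// Runs steps in order, chaining each output into the next input,
// mirroring the ChainAgent idea.
async function runChain(steps: Step[], input: string): Promise<string> {
  let current = input;
  for (const step of steps) current = await step(current);
  return current;
}

// Step 1: hypothetical LLM call producing a draft reply.
const aiDraftStep: Step = async (query) =>
  `DRAFT: Thank you for contacting us about "${query}". ...`;

// Step 2: hypothetical human verification of the draft.
const humanVerifyStep: Step = async (draft) =>
  draft.replace("DRAFT:", "VERIFIED:");
```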

Step 7: Add Agents to the Orchestrator:

In this final step, we create an instance of the MultiAgentOrchestrator and add all the agents we've created to it. This process registers each agent with the orchestrator, allowing it to manage and coordinate their actions.
Depending on your specific use case, you can easily add more specialized agents to the orchestrator. The flexibility of this system allows you to expand and customize your agent network to meet the unique needs of your application or business domain.
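Registration itself is a series of addAgent calls. The sketch below substitutes a keyword matcher for the orchestrator's classifier, purely to illustrate the routing idea; the real MultiAgentOrchestrator classifies queries with a language model, and the MiniOrchestrator class here is my own stand-in.

```typescript
// Illustrative stand-in for the real orchestrator's routing logic.
interface SimpleAgent { name: string; keywords: string[] }

class MiniOrchestrator {
  private agents: SimpleAgent[] = [];

  // Registers an agent, mirroring orchestrator.addAgent(...).
  addAgent(agent: SimpleAgent): void {
    this.agents.push(agent);
  }

  // Picks the first agent whose keywords match; falls back to the last.
  route(query: string): string {
    const q = query.toLowerCase();
    const match = this.agents.find(a => a.keywords.some(k => q.includes(k)));
    return (match ?? this.agents[this.agents.length - 1]).name;
  }
}

const orchestrator = new MiniOrchestrator();
orchestrator.addAgent({ name: "Order Management Agent", keywords: ["order", "shipment", "return"] });
orchestrator.addAgent({ name: "Product Information Agent", keywords: ["spec", "compatib", "availab"] });
orchestrator.addAgent({ name: "Customer Service Agent", keywords: [] }); // fallback
```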

A. Local usage

To test the orchestrator locally, you can create a simple script. Here's an example local-test.ts:
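A self-contained sketch of such a script is shown below. The orchestrator here is a stub; in the real project you would instead import your configured MultiAgentOrchestrator from ./src/agents, whose routeRequest method takes the query, a user ID, and a session ID (verify the signature against the framework docs).

```typescript
// local-test.ts — stubbed sketch of a local test harness.
const orchestrator = {
  async routeRequest(query: string, _userId: string, _sessionId: string) {
    // Stub standing in for the real MultiAgentOrchestrator.
    const agentName = query.toLowerCase().includes("order")
      ? "Order Management Agent"
      : "Customer Service Agent";
    return { agentName, output: `(${agentName}) response to: ${query}` };
  },
};

async function main() {
  const queries = [
    "What is the status of my order #12345?",
    "How do I reset my account password?",
  ];
  for (const q of queries) {
    const res = await orchestrator.routeRequest(q, "test-user", "test-session");
    console.log(`[${res.agentName}] ${res.output}`);
  }
}

main();
```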
Run this script using ts-node local-test.ts to see how the orchestrator handles different types of inquiries locally.

B. Cloud Deployment with CDK

I've shown you how to process emails locally using our multi-agent orchestrator, but in a real-world scenario, you'll likely want to integrate this system into a more robust, scalable architecture. Imagine a workflow where incoming customer emails are sent to a queue for processing, and the responses are then placed in another queue to be sent back to customers.
This is where Project Lakechain comes in handy. Project Lakechain is a powerful tool that allows us to quickly set up and deploy complex data processing pipelines on AWS. In this section, I'll show you how to use Project Lakechain components to create an end-to-end email processing system that leverages our multi-agent orchestrator.
Let's set up the infrastructure using AWS CDK:
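A minimal sketch of the surrounding infrastructure using only aws-cdk-lib primitives is shown below. The Lakechain EmailTextProcessor middleware is deliberately omitted (see the gist for the full pipeline); it would sit between the raw-email queue and the processed queue. Stack, queue, and function names are my own placeholders.

```typescript
import * as cdk from "aws-cdk-lib";
import * as sqs from "aws-cdk-lib/aws-sqs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";
import { Construct } from "constructs";

export class SupportSystemStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Entry point for raw inbound emails; the Lakechain email-text
    // extraction step (omitted here) consumes this queue.
    const rawEmailQueue = new sqs.Queue(this, "RawEmailQueue");

    // Extracted email text lands here, ready for the orchestrator.
    const processedQueue = new sqs.Queue(this, "ProcessedEmailQueue");

    // Generated responses are placed here for outbound delivery.
    const responseQueue = new sqs.Queue(this, "ResponseQueue");

    // Lambda running the Multi-Agent Orchestrator.
    const responder = new lambda.Function(this, "ResponderFunction", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda"),
      timeout: cdk.Duration.minutes(2),
      environment: { OUTPUT_QUEUE_URL: responseQueue.queueUrl },
    });

    // Trigger the Lambda from the processed-email queue and allow it
    // to write responses to the output queue.
    responder.addEventSource(new SqsEventSource(processedQueue));
    responseQueue.grantSendMessages(responder);
  }
}
```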
Implementing the Lambda Function
Here's an example of how the Lambda function might be implemented:
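Here is a testable sketch of the handler's core logic. The orchestrator and the SQS send are injected as parameters so the logic stays independent of AWS clients (in the deployed function you would pass the real MultiAgentOrchestrator and an SQS client send); the message shape with from/text fields is an assumption.

```typescript
// Minimal shapes for an SQS-triggered Lambda event.
interface SqsRecord { body: string }
interface SqsEvent { Records: SqsRecord[] }

interface OrchestratorLike {
  routeRequest(input: string, userId: string, sessionId: string):
    Promise<{ agentName: string; output: string }>;
}

// Processes each queued email through the orchestrator and forwards
// the structured response to the output queue. Returns the count of
// processed records.
export async function processEvent(
  event: SqsEvent,
  orchestrator: OrchestratorLike,
  sendToOutputQueue: (message: string) => Promise<void>,
): Promise<number> {
  let processed = 0;
  for (const record of event.Records) {
    // Each message body is assumed to carry the extracted email text.
    const email = JSON.parse(record.body) as { from: string; text: string };
    const response = await orchestrator.routeRequest(email.text, email.from, email.from);
    await sendToOutputQueue(JSON.stringify({
      to: email.from,
      handlingAgent: response.agentName,
      body: response.output,
    }));
    processed++;
  }
  return processed;
}
```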
šŸ’» The full CDK code for this implementation is available on my gist for easy reference and use. This gist provides a complete, runnable example of the AI-powered customer support system we've discussed.
This Lambda function reads messages from the SQS queue, processes them using the Multi-Agent Orchestrator, and sends the responses to the output queue.
Creating this system was so straightforward, I had more trouble choosing my lunch than setting up these AI Agents!
After diving deep into the implementation details, let me show you the full picture of what I've built. Here's a diagram that captures all the components of our AI-powered customer support system:
Testing this system is remarkably simple: just send raw emails to the entry SQS queue and observe the responses in the output queue. This allows you to easily verify the end-to-end flow of your AI-powered customer support system.

Test Scenarios and Results

During my testing phase, I sent various sample emails to the system and observed the responses. Here are some example scenarios with their inputs and outputs:

Scenario 1: Order Status Inquiry

Sample Input Email:
System Response:

Scenario 2: Return Request

Sample Input Email:
System Response:
These examples demonstrate how our multi-agent system handles a variety of customer inquiries. Each query starts as a raw email in the first SQS queue, is processed by the EmailTextProcessor, routed to the most appropriate agent by the Multi-Agent Orchestrator, and finally output as a structured response in the final SQS queue. The handlingAgent field in the output shows which specialized agent was selected to handle the specific inquiry.

Conclusion

By leveraging the Multi-Agent Orchestrator framework and Project Lakechain, I've not only solved the customer support headaches but also unlocked a new level of efficiency and scalability for e-commerce businesses. This intelligent system, with its specialized AI Agents, can handle everything from simple order inquiries to complex customer issues, all while maintaining that crucial human touch when needed.
Now, I'm curious to hear from you. How do you see these AI Agents and the Multi-Agent Orchestrator framework transforming your business? What other innovative applications can you envision for this technology? Let me know in the comments below ā€” who knows, your idea might be the next big revolution in AI-powered customer service!
While this example uses TypeScript, the Multi-Agent Orchestrator framework is also available for Python, allowing you to implement the same functionality.

Explore more

If you enjoyed this article and want to explore more of what the Multi-Agent Orchestrator framework can do, check out our related piece: From 'Bonjour' to 'Boarding Pass': Multilingual AI Chatbot for Flight Reservations. This article walks you through building a global flight reservation chatbot, demonstrating how to chain AI agents for instant language processing and booking across multiple languages.

Multi Agent Orchestrator

Project Lakechain

If you find these frameworks helpful, please consider giving us a star on GitHub. We would also love to hear your thoughts, so feel free to leave a comment below. And if you have ideas for new features or improvements, don't hesitate to create a feature request on our GitHub repository.
