Diagrams to CDK/Terraform using Claude 3 on Amazon Bedrock

Use AI to generate infrastructure as code from architecture diagrams and images

Deepayan Pandharkar
Amazon Employee
Published Mar 5, 2024
Last Modified Mar 15, 2024
Update #1: Updated content to incorporate feedback.

Introduction 

In today's cloud-native world, infrastructure as code (IaC) has become an indispensable practice for developers and DevOps teams.
With the recent announcement of Claude 3 Sonnet on Amazon Bedrock and its image-to-text capabilities, we enter a new era of seamless integration between architecture diagrams and IaC tools such as the AWS Cloud Development Kit (CDK) and Terraform.
This blog post will explore how you can harness the power of this integration to streamline your infrastructure provisioning and management processes.

Architecture Diagrams

Architecture diagrams are a visual representation of your system's components, their relationships, and the overall structure of your application or infrastructure. They serve as a blueprint for communication, collaboration, and decision-making among team members. However, manually translating these diagrams into code can be time-consuming and error-prone, especially in complex environments.

Enter Claude 3 Sonnet on Amazon Bedrock

Anthropic has announced the Claude 3 family, the next generation of its state-of-the-art models: Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku. These models can parse text from images, and that is exactly the capability this solution uses. https://www.anthropic.com/news/claude-3-family
As for performance, Sonnet is the sweet spot for most workloads. It is faster than Anthropic's previous Claude 2 and 2.1 models on both inputs and outputs, and it delivers a higher level of intelligence. Sonnet is also more steerable, which means more predictable, higher-quality outcomes.
Here's where it gets even better: Amazon Bedrock has announced support for the Anthropic Claude 3 family. https://www.aboutamazon.com/news/aws/amazon-bedrock-anthropic-ai-claude-3
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies, including Anthropic, along with a broad set of capabilities for building and scaling generative AI applications.
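Claude 3 on Bedrock accepts requests in the Anthropic Messages API format. As a taste of what the vision capability looks like on the wire, here is a minimal sketch of the request body; the placeholder values are illustrative, and the full script in the Solution section builds exactly this structure:

```python
# A minimal sketch of a multimodal request body: one user message carrying
# a text prompt plus a base64-encoded image. Values here are placeholders.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 4000,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this architecture diagram."},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": "<base64-encoded image bytes>",
                    },
                },
            ],
        }
    ],
}
```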

Solution

OK, let's get right into the solution. Follow the steps below to build an architecture extractor for yourself.
For this walkthrough, you should have the following prerequisites:

  1. An AWS account with credentials configured for the AWS CLI/boto3.
  2. Python 3 with the boto3 and click packages installed (`pip install boto3 click`).
  3. Node.js and the AWS CDK Toolkit installed (`npm install -g aws-cdk`) for Step 5.

Step 1: Enable Anthropic Claude 3 Sonnet on Amazon Bedrock
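Enablement is granted on the Model access page of the Bedrock console. As an optional sanity check, here is a minimal boto3 sketch that lists the Anthropic model IDs offered in your region (note this lists what is offered, not what your account has been granted):

```python
import boto3

# Use the "bedrock" control-plane client here; the script in Step 2
# uses the "bedrock-runtime" client for inference.
bedrock = boto3.client("bedrock")

# List the Anthropic model IDs offered in the current region.
response = bedrock.list_foundation_models(byProvider="Anthropic")
for summary in response["modelSummaries"]:
    print(summary["modelId"])

# Expect "anthropic.claude-3-sonnet-20240229-v1:0" to appear in the output.
```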
Step 2: Create a file called claude_vision.py and copy the code below.
```python
import base64
import json
import logging
import mimetypes
import os

import boto3
import click
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


def call_claude_multi_model(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.

    Args:
        bedrock_runtime: The Amazon Bedrock boto3 runtime client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.

    Returns:
        None.
    """
    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    # Infer the media type from the file extension so a PNG is not sent as a JPEG.
    media_type = mimetypes.guess_type(image)[0] or "image/jpeg"

    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": input_text},
                        {
                            "type": "image",
                            "source": {
                                "type": "base64",
                                "media_type": media_type,
                                "data": encoded_string.decode("utf-8"),
                            },
                        },
                    ],
                }
            ],
        }
    )

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id
    )

    # Print the streamed text deltas as they arrive.
    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])
        if chunk["type"] == "content_block_delta":
            if chunk["delta"]["type"] == "text_delta":
                print(chunk["delta"]["text"], end="")


@click.command()
@click.option("--image_path", prompt="path to image", help="Image you want to parse")
def main(image_path):
    """
    Entrypoint for the Anthropic Claude 3 Sonnet multimodal prompt example.
    """
    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = """
    You are an AWS solutions architect.
    The image provided is an architecture diagram. Use two headings to explain the below.
    1. Explain the technical data flow in detail.
    2. Provide CDK TypeScript code to implement it using aws-cdk-lib.

    Do not:
    1. use preambles.
    2. make assumptions.
    """

    max_tokens = 4000

    try:
        bedrock_runtime = boto3.client("bedrock-runtime")
        call_claude_multi_model(
            bedrock_runtime,
            model_id,
            input_text,
            os.path.abspath(image_path),
            max_tokens,
        )
    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
```
Step 3: Note the path of the image file. For the purposes of demonstration, I will use the diagram below, saved as s3_extractor.png on local disk.
Step 4: Run the script with `python src/utils/claude_vision.py --image_path ~/Desktop/s3_extractor.png` and save the output.
Here's a sample response:
$ python src/utils/claude_vision.py --image_path ~/Desktop/s3_extractor.png

Technical Data Flow Explanation

The architecture diagram depicts a serverless data pipeline using AWS services. The data flow can be described as follows:

1. An Amazon S3 bucket is the source of data, which likely contains files or objects that need to be processed.
2. When a new object is uploaded or modified in the S3 bucket, it triggers an AWS Lambda function named "LambdaExtractor."
3. The LambdaExtractor function is responsible for extracting relevant data or events from the input files or objects in the S3 bucket.
4. After processing, the extracted data or events are stored in an Amazon DynamoDB table named "StoreEventsDB."

CDK Typescript Code Implementation

To implement this architecture using the AWS Cloud Development Kit (CDK) and TypeScript, you can use the following code:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambdaEventSources from 'aws-cdk-lib/aws-lambda-event-sources';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DataPipelineStack');

// Create the S3 bucket
const sourceBucket = new s3.Bucket(stack, 'SourceBucket');

// Create the DynamoDB table
const eventsTable = new dynamodb.Table(stack, 'StoreEventsDB', {
  partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});

// Create the Lambda function
const lambdaExtractor = new lambda.Function(stack, 'LambdaExtractor', {
  runtime: lambda.Runtime.NODEJS_14_X,
  code: lambda.Code.fromAsset('lambda'),
  handler: 'hello.handler',
  environment: {
    EVENTS_TABLE_NAME: eventsTable.tableName,
  },
});

// Grant permissions for the Lambda function to access the DynamoDB table
eventsTable.grantWriteData(lambdaExtractor);

// Create the S3 event source and associate it with the Lambda function
lambdaExtractor.addEventSource(
  new lambdaEventSources.S3EventSource(sourceBucket, {
    events: [s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_REMOVED],
    recursive: true,
  })
);
```

In this code:

1. An S3 bucket named "SourceBucket" is created to serve as the source of data.
2. A DynamoDB table named "StoreEventsDB" is created with a partition key "id" of type string.
3. A Lambda function named "LambdaExtractor" is created with the provided source code and environment variables.
4. The Lambda function is granted write permissions to the DynamoDB table.
5. An S3 event source is created and associated with the Lambda function, triggering the function when objects are created or removed from the "SourceBucket."

Note: You need to replace 'lambda/hello.js' with your own Lambda function code and provide the necessary logic in the Lambda function to extract data from the S3 objects and store it in the DynamoDB table.
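As the title promises, the same script can produce Terraform instead of CDK; as noted in Step 5 below, the output is steered entirely by the prompt. Here is an illustrative sketch of an alternative input_text for the script from Step 2 (the exact wording is an assumption, not a prompt I have validated):

```python
# Illustrative prompt variant: ask for Terraform HCL instead of CDK TypeScript.
input_text = """
You are an AWS solutions architect.
The image provided is an architecture diagram. Use two headings to explain the below.
1. Explain the technical data flow in detail.
2. Provide Terraform HCL code to implement it using the AWS provider.

Do not:
1. use preambles.
2. make assumptions.
"""
```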
Step 5: Initialize a CDK project and update it by copying the relevant parts of the generated code. You could copy the code as is and it would work, but I am making a couple more changes to follow best practices: copying only the resource code into the stack construct in the /lib directory, and changing the scope argument to the this keyword. Note that the same result can be generated directly by the model by tweaking the prompt.
```
> cdk init --language=typescript --generate-only
Applying project template app for typescript
# Welcome to your CDK TypeScript project

This is a blank project for CDK development with TypeScript.

The `cdk.json` file tells the CDK Toolkit how to execute your app.

## Useful commands

* `npm run build` compile typescript to js
* `npm run watch` watch for changes and compile
* `npm run test` perform the jest unit tests
* `npx cdk deploy` deploy this stack to your default AWS account/region
* `npx cdk diff` compare deployed stack with current state
* `npx cdk synth` emits the synthesized CloudFormation template

✅ All done!
****************************************************
*** Newer version of CDK is available [2.131.0] ***
*** Upgrade recommended (npm install -g aws-cdk) ***
****************************************************
```
Update the stack construct.
```typescript
// lib/genai-iac-stack.ts
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambdaEventSources from 'aws-cdk-lib/aws-lambda-event-sources';
import { Construct } from 'constructs';

export class GenAiIacStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create the S3 bucket
    const sourceBucket = new s3.Bucket(this, 'SourceBucket');

    // Create the DynamoDB table
    const eventsTable = new dynamodb.Table(this, 'StoreEventsDB', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Create the Lambda function
    const lambdaExtractor = new lambda.Function(this, 'LambdaExtractor', {
      runtime: lambda.Runtime.NODEJS_20_X,
      code: lambda.Code.fromAsset('lambda'),
      handler: 'hello.handler',
      environment: {
        EVENTS_TABLE_NAME: eventsTable.tableName,
      },
    });

    // Grant permissions for the Lambda function to access the DynamoDB table
    eventsTable.grantWriteData(lambdaExtractor);

    // Create the S3 event source and associate it with the Lambda function
    lambdaExtractor.addEventSource(
      new lambdaEventSources.S3EventSource(sourceBucket, {
        events: [s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_REMOVED],
      })
    );
  }
}
```
Finally, create a lambda folder at the project root and add a sample hello.js file with the code below.
```javascript
exports.handler = async function (event) {
  console.log("request:", JSON.stringify(event, undefined, 2));
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/plain" },
    body: `Hello, CDK! You've hit ${event.path}\n`,
  };
};
```
Step 6: Run `cdk synth` and `cdk deploy --all`. Voila!

Clean Up

To avoid incurring future charges, delete the resources:
  1. Run `cdk destroy`.
  2. Disable Amazon Bedrock model access.

Benefits of Converting Architecture Diagrams to IaC

  1. Seamless Automation with AI-Powered Assistance: Using AI to read diagrams and generate code streamlines the development process, even when the output is initial boilerplate rather than fully structured code. As AI continues to evolve, the potential for generating more sophisticated and structured code increases, promising even greater efficiency gains in the future.
  2. Accessibility for Non-Programmers: Diagram-to-code tools empower team members without extensive programming backgrounds to contribute to infrastructure development. By providing a user-friendly interface for creating diagrams and generating code, these tools democratize the process, enabling more team members to participate effectively in infrastructure-as-code initiatives.
  3. Accelerated Prototyping and Iteration: The ability to quickly generate boilerplate code from diagrams accelerates prototyping and iteration cycles. Teams can rapidly translate architectural designs into functional code, enabling faster feedback loops and more agile development practices.
  4. Facilitated Learning and Skill Development: For individuals looking to enhance their coding skills, diagram-to-code tools provide a valuable learning resource. By observing the generated code and its relationship to the architectural diagrams, team members can gain insights into coding principles and practices, fostering skill development over time.

Conclusion

The integration of Claude 3 Sonnet with Amazon Bedrock, and the ability to convert architecture diagrams to CDK or Terraform code, represents a significant step forward for developers and DevOps teams.
By embracing this approach, you can unlock the power of infrastructure as code, and accelerate the delivery of reliable and scalable cloud infrastructure. Embark on this journey and experience the seamless fusion of visual design and automated code generation, empowering you to build and manage your cloud environments with unprecedented efficiency and confidence.
Next up: a Streamlit-based web UI that offers a friendly, interactive way to use this solution for those who would rather avoid the CLI.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
