It’s not a chatbot: Writing Documentation
Generative AI use cases that just work
Published Jul 3, 2024
Last Modified Jul 4, 2024
As developers, we’ve all encountered it: the dreaded task of writing documentation. Whether it’s explaining your own code or trying to understand someone else’s, a lack of clear documentation can be a major headache. Often, we spend hours deciphering code when we could be writing new features or fixing bugs.
For those of you who follow me across my various channels, you’ll have noticed a recurring theme in my content over the last three months: “It’s not a chatbot!” It seems to be becoming a series in its own right, one in which I advocate for Generative AI use cases without showcasing a chatbot.
If you haven’t followed my previous works, here are some links to catch up:
Back to the problem at hand: as a developer, have you ever written an application or script and then been asked to document it? Or read someone else’s code and wished it had better documentation?
“Oh my! It’s Sir Claude the Third!”
I saw this as an opportunity to test out the latest Bedrock feature, the Converse API, with Claude 3 Haiku. I wanted to create something independent of any architecture, so there are no serverless Lambda functions or API Gateways in this example. Just a script that can:
- Run from a command line
- Run in pipelines like GitHub Actions
- Be portable enough to run anywhere
“Amazon Bedrock announces the new Converse API, which provides developers a consistent way to invoke Amazon Bedrock models removing the complexity to adjust for model-specific differences such as inference parameters” - Amazon Bedrock announces new Converse API - AWS
As the AWS blog post suggests, Converse API is designed to reduce the complexity of invoking a machine learning model, making it even quicker and easier to write code with Bedrock. In fact, here is my Converse API call:
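In Python with boto3, it’s a short sketch along these lines (the system_prompt, messages, and inference_config variables are built up later in this post; the model ID is Claude 3 Haiku’s public Bedrock identifier):

```python
import boto3

# One client, one call. The Converse API takes the same shape
# regardless of which Bedrock model sits behind it.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    system=system_prompt,
    messages=messages,
    inferenceConfig=inference_config,
)
```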
Compare this with a similar invoke_model prompt:
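For Claude 3 models, invoke_model means hand-building the Anthropic Messages payload, JSON-encoding it, and parsing the response body back out again; roughly this (the payload values and file path are illustrative):

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("my_script.py", encoding="utf-8") as f:  # path is illustrative
    source_code = f.read()

# The request body is model-specific: anthropic_version, the nested
# content blocks, and the JSON encoding are all on you.
response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 4096,
            "temperature": 0.5,
            "system": "You are a technical writer who documents source code.",
            "messages": [
                {"role": "user", "content": [{"type": "text", "text": source_code}]}
            ],
        }
    ),
)

# And the response needs the same treatment in reverse.
output_text = json.loads(response["body"].read())["content"][0]["text"]
```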
Granted, I’ve tucked a few things into variables in the Converse API call, but even allowing for that, the invoke_model call looks fairly ugly and hard to decipher.
So, as with all good Generative AI projects, we need a good prompt first. I’ve learned a fair bit about Prompt Engineering recently, so let me remind you of my top tips:
- Keep your prompt simple and concise.
- Treat it like you’re training someone to do something for the first time.
- Give the model a purpose.
- Let it think.
- Structure your output.
In this case, I’m going to use a system prompt (below) to tell the model what its purpose is and then hand it the source code in a text prompt.
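Something along these lines does the job (the exact wording and the <documentation> tag name are illustrative, but the tags matter: they set up the extraction step later on):

```python
# System prompt: give the model a purpose, let it think, and
# structure the output so it's easy to extract afterwards.
system_prompt = [
    {
        "text": (
            "You are a technical writer who documents source code. "
            "Read the code carefully and think through what it does before "
            "you write anything. Then produce clear Markdown documentation "
            "covering the script's purpose, usage and functions, wrapped in "
            "<documentation></documentation> tags."
        )
    }
]

# The source code to document goes in as the user message.
with open("my_script.py", encoding="utf-8") as f:  # path is illustrative
    source_code = f.read()

messages = [{"role": "user", "content": [{"text": source_code}]}]
```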
I then construct the rest of my Converse API call, applying some simple configuration.
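The configuration is just the Converse API’s inferenceConfig dictionary; values along these lines are a sensible starting point:

```python
# Plausible defaults: enough tokens for a long README, and a
# middling temperature so the writing stays consistent.
inference_config = {
    "maxTokens": 4096,
    "temperature": 0.5,
    "topP": 0.9,
}
```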
Yep, that’s it. The rest of the complexity comes down to how I want to read the output and store it later.
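Reading the output is a one-liner walk through the Converse response structure:

```python
# The generated documentation comes back inside the response message.
output_text = response["output"]["message"]["content"][0]["text"]
```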
Once I’ve extracted the content from the markup tags, I store it in a file.
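The extraction can be as simple as a regular expression; a minimal sketch, assuming the <documentation> tags from the system prompt above and an illustrative output filename:

```python
import re

# Pull the documentation out of the markup tags and write it to disk.
match = re.search(r"<documentation>(.*?)</documentation>", output_text, re.DOTALL)
if match:
    with open("GENERATED_DOCS.md", "w", encoding="utf-8") as f:
        f.write(match.group(1).strip())
```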
And there we have it: the script uses its own source to document itself, going from raw code to Markdown docs.
This code can now run from my laptop as I write new code, or sit in GitHub Actions or AWS CodePipeline so that when I deploy new code, it automatically documents itself!
If you want to play with this, feel free to check out the repo below. I’ve also made some code samples and their generated documentation available there.