Getting Started with Valkey using Docker and Go

Valkey is an open source (BSD) high-performance key/value data store that supports a variety of workloads, such as caching and message queues, and can also act as a primary database. It was forked from the open source Redis project right before its transition to new source-available licenses. In this blog post, I'll give you a head start on how to get your hands dirty with Valkey using Docker and Go.

Ricardo Ferreira
Amazon Employee
Published May 14, 2024
Last Modified May 15, 2024
If you are reading this blog post, chances are you are a current user of Redis. Moreover, you may have heard that Redis changed its licensing model to a source-available license. Because of this change, the Valkey project was born. It is a direct fork of the last open source version of Redis, and as a community-driven project, it allows anyone to contribute to how this technology is going to evolve. If you are interested in understanding this change and the origins of the Valkey project in more detail, you should watch this amazing interview with David Nalley, Director of Open Source Strategy and Marketing at AWS, where he explains all of it.
My job here is to get you started with the technology. In this blog post, I will share how to spin up a development instance of Valkey using Docker, how to use the CLI to perform a quick check, and how to test things out with code written in Go.
Let's go (no pun intended).

Valkey with Docker

Start by creating a folder to hold the code and configuration files. In this example, I'm naming this folder getting-started-with-valkey-using-docker. Inside this folder, create the following folders and files.
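A layout along these lines works well (the file names docker-compose.yml and main.go are just the conventional choices):

```
getting-started-with-valkey-using-docker
├── conf
│   └── valkey.conf
├── data
├── docker-compose.yml
└── main.go
```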
I will explain the need for the conf and data folders in a minute. For now, let's implement the Docker Compose file. Here is what you need to write into this file.
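A minimal Compose file in this spirit could look like the following; I'm assuming the valkey/valkey image published on Docker Hub here:

```yaml
services:
  valkey:
    image: valkey/valkey:7.2.5
    container_name: valkey
    ports:
      - "6379:6379"
    volumes:
      - ./conf/valkey.conf:/etc/valkey/valkey.conf
      - ./data:/data
    command: valkey-server /etc/valkey/valkey.conf
```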
This creates a container named valkey running Valkey version 7.2.5. It is going to be a single development server exposed over port 6379. Since there is a port binding in the Compose file, you will be able to access Valkey at the localhost:6379 endpoint. But only if you configure Valkey to allow this access.
This is why you must implement the valkey.conf file under the conf folder. This file tells Valkey to expose the server over all network interfaces within the container, and to disable protected mode, making the server accessible even from other hosts. Write this configuration in the valkey.conf file.
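Something like the following two directives is enough for a development setup:

```conf
bind 0.0.0.0
protected-mode no
```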
To provide this configuration file as a parameter to the container, a volume binding is required. This is why in the Docker Compose file you have written the following binding:
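In the Compose sketch above, that binding is the first entry under volumes:

```yaml
    volumes:
      - ./conf/valkey.conf:/etc/valkey/valkey.conf
```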
It maps the local file into the container's /etc/valkey folder. Docker Compose can then use the container's startup command to point Valkey at this configuration file.
Now that you understand that part, let's go back to the data folder. You may have noticed that in the Docker Compose file, I used another volume binding for that folder as well. This is not required, but it serves the purpose of saving all the data stored in Valkey to that folder. This way, if you bounce the container, the data will still be there when it restarts. Otherwise, every time the container starts, it will create a server with no data.
Let's start Valkey. With Docker installed, execute the following command:
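From the folder that holds the Compose file:

```bash
docker compose up -d
```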
If you list the containers running with the command docker ps -a, you should have at least one container named valkey.
Now, how can you test if everything is working? You can use the Valkey CLI for this. You can either install Valkey locally to have a distribution native to your computer, or you can use the binary from the Docker container. To install a native distribution, browse to the page https://valkey.io/download and grab a distribution for your operating system. If you are using macOS, you can install Valkey using Homebrew:
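Assuming the valkey formula is available in your Homebrew setup:

```bash
brew install valkey
```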
Then type:
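The CLI defaults to 127.0.0.1:6379, so no extra flags are needed here:

```bash
valkey-cli
```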
You will connect to Valkey and be presented with the prompt 127.0.0.1:6379>. You can issue the command PING to test the connection. If everything goes well, you should get PONG as the reply.
As mentioned before, you can also use the binary from the Docker container. To do this, you must get into the running container with the following command:
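Something along these lines drops you into a shell inside the container (the name valkey comes from the Compose file; use sh if your image variant ships without bash):

```bash
docker exec -it valkey /bin/bash
```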
Once you are inside the container, just repeat the same steps shared before as if you had Valkey locally. To exit the container, just type exit.
Valkey inherits many of the commands available with Redis, as you can check in this list. Play around with some of these commands using the Valkey CLI to get yourself familiar with the technology. When you are done playing with Docker, just execute the command docker compose down to stop the container.

Connecting from Go

Now that you know Valkey is up and running, it is time for you to play with the server from a small application. Create a new Go file named main.go, and write the following code:
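The listing below is a minimal sketch of what that code can look like. I'm using the go-redis client here because Valkey speaks the same protocol; any compatible client, such as valkey-go, would work just as well:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Connect to the Valkey server exposed by the Docker container.
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	defer client.Close()

	// Make sure the connection actually works before moving on.
	if err := client.Ping(ctx).Err(); err != nil {
		panic(err)
	}

	// Write the key 'new-redis' with the value 'Valkey'.
	if err := client.Set(ctx, "new-redis", "Valkey", 0).Err(); err != nil {
		panic(err)
	}

	// Read the value back and print it.
	value, err := client.Get(ctx, "new-redis").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(value)
}
```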
To execute this code, just open a new terminal and type:
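If you haven't created a Go module yet, do that first, then run the program (the module name below is just an example):

```bash
go mod init valkey-demo
go get github.com/redis/go-redis/v9
go run main.go
```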
You should see a simple output saying Valkey. Now let me explain why. 🙂
This Go code starts a connection with Valkey using the localhost:6379 endpoint. If no errors are found, it continues its execution, deferring the connection close operation until the code finishes doing everything else.
The first operation executed against Valkey is the write operation using the SET command. It writes the key new-redis with the value Valkey. As in, who is the new Redis? 😅
Finally, it uses the GET command to retrieve the value associated with the key new-redis. When the value is received, it is converted into a string and then printed into the output.
This is why you see Valkey after executing the Go code.

Valkey with TestContainers

Another interesting way for you to play with Valkey running on Docker is to use TestContainers for Go. It allows you to declare Valkey as a dependency programmatically in your code while the Docker container plumbing is managed for you behind the scenes. Here is an updated version of the existing code.
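Again, as a sketch, here is what that updated version can look like using the testcontainers-go module wrapped around the same go-redis logic from before; the image tag and wait strategy are assumptions that match the earlier Compose file:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	ctx := context.Background()

	// Describe the container we need: image, exposed port, and readiness criteria.
	req := testcontainers.ContainerRequest{
		Image:        "valkey/valkey:7.2.5",
		ExposedPorts: []string{"6379/tcp"},
		WaitingFor:   wait.ForListeningPort("6379/tcp"),
	}

	// Create and start the container; the Docker plumbing is handled for us.
	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		panic(err)
	}
	defer container.Terminate(ctx)

	// Retrieve the host:port endpoint mapped to the container's 6379 port.
	endpoint, err := container.Endpoint(ctx, "")
	if err != nil {
		panic(err)
	}

	client := redis.NewClient(&redis.Options{Addr: endpoint})
	defer client.Close()

	// Same write and read as before.
	if err := client.Set(ctx, "new-redis", "Valkey", 0).Err(); err != nil {
		panic(err)
	}

	value, err := client.Get(ctx, "new-redis").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(value)
}
```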
Executing this code will produce the same output and print Valkey. So what is the difference, right? Well, the difference is that you don't need to start the container yourself before executing the code. In fact, you don't even need to remember to stop the container afterwards, because TestContainers takes care of disposing of the container instance for you. All you need to do is create a container using the strategy below.
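In the sketch above, that strategy boils down to this fragment:

```go
req := testcontainers.ContainerRequest{
	Image:        "valkey/valkey:7.2.5",
	ExposedPorts: []string{"6379/tcp"},
	WaitingFor:   wait.ForListeningPort("6379/tcp"),
}

container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
	ContainerRequest: req,
	Started:          true,
})
```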
Here, we are creating a container request given the specifications we want to work with: the exact container image, which port bindings to expose, and the criteria used to decide that the container is ready. With the container request in place, you can create a container instance, which will act as the working container for your code.
Once you have the container instance, you can retrieve the endpoint your code needs to create a connection with Valkey. With TestContainers, the endpoint may not be the same every time you execute the code, but that is an implementation detail you don't have to worry about, as container instances are ephemeral and disposable.
Using TestContainers is a great way to create dependencies in your code programmatically, such as databases, messaging systems, and any sort of backend that in the real world usually lives outside the scope of your application. It is a great fit for implementing functional and integration tests.
💡 You can find the complete source code of this blog post here.

Summary

Valkey is an exciting new project that continues the great work done with Redis, but in the open. With a community-based project, anyone can shape the future of the technology. In this blog post, I showed you how to get started with Valkey using Docker, how to perform initial tests, and how to use Go to give Valkey a whirl. This is only the tip of the iceberg about what is possible, and I'm excited to see what you are going to build next with Valkey.
Follow me on LinkedIn if you want to geek out about technologies.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
