
Getting Started with Valkey using Docker and Go

Valkey is an open source (BSD) high-performance key/value data store that supports a variety of workloads, such as caching and message queues, and can also act as a primary database. It was forked from the open source Redis project right before Redis transitioned to its new source-available licenses. In this blog post, I'll give you a head start on getting your hands dirty with Valkey using Docker and Go.

Ricardo Ferreira
Amazon Employee
Published May 14, 2024
Last Modified May 15, 2024
If you are reading this blog post, chances are you are a current user of Redis. Moreover, you may have heard that Redis changed its licensing model to a source-available license. Because of this change, the Valkey project was born. It is a direct fork of the last open source version of Redis, and as a community-driven project, it allows anyone to contribute to how this technology is going to evolve. If you are interested in understanding this change and the origins of the Valkey project in more detail, you should watch this amazing interview with David Nalley, Director of Open Source Strategy and Marketing at AWS, where he explains all of it in detail.
My job here is to get you started with the technology. In this blog post, I will share how to spin up a development instance of Valkey using Docker, how to use the CLI to perform a quick check, and how to test things out with code written in Go.
Let's go (no pun intended).

Valkey with Docker

Start by creating a folder to hold the code and configuration files. In this example, I'm naming this folder getting-started-with-valkey-using-docker. Inside this folder, create the following folders and files.
├── conf
│   └── valkey.conf
├── data
└── docker-compose.yml
I will explain the need for the conf and data folders in a minute. For now, let's implement the Docker Compose file. Here is what you need to write into it.
services:
  valkey:
    container_name: valkey
    hostname: valkey
    image: valkey/valkey:7.2.5
    volumes:
      - ./conf/valkey.conf:/etc/valkey/valkey.conf
      - ./data:/data
    command: valkey-server /etc/valkey/valkey.conf
    healthcheck:
      test: ["CMD-SHELL", "valkey-cli ping | grep PONG"]
      interval: 1s
      timeout: 3s
      retries: 5
    ports:
      - 6379:6379
This creates a container named valkey running Valkey version 7.2.5. It is going to be a single development server exposed on port 6379. Since there is a port binding in the Compose file, you will be able to access Valkey from the localhost:6379 endpoint, but only if you configure Valkey to allow this access.
This is why you must implement the valkey.conf file under the conf folder. This file tells Valkey to expose the server on all network interfaces within the container and to disable protected mode, making the server accessible even from other hosts. Write this configuration in the valkey.conf file.
bind 0.0.0.0 -::1
protected-mode no
To provide this configuration file as a parameter to the container, a volume binding is required. This is why in the Docker Compose file you have written the following binding:
volumes:
  - ./conf/valkey.conf:/etc/valkey/valkey.conf
It maps the local file into the container's /etc/valkey folder. Docker Compose then uses the command directive to pass the configuration file to Valkey.
command: valkey-server /etc/valkey/valkey.conf
Now that you understand that part, let's go back to the data folder. You may have noticed that in the Docker Compose file, I used another volume binding for that folder as well. This is not required, but it serves the purpose of saving all the data stored in Valkey to that folder. This way, if you bounce the container, the data will still be there when it restarts. Otherwise, every time the container starts, it will create a server with no data.
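As a side note, out of the box Valkey persists to /data through periodic RDB snapshots. If you want more durable persistence while experimenting, you could additionally enable the append-only file in valkey.conf. The two directives below are a sketch of that optional setup, not something this walkthrough requires:

```
appendonly yes
appendfsync everysec
```

With the append-only file enabled and fsynced once per second, a container restart loses at most about one second of writes.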
Let's start Valkey. With Docker installed, execute the following command:
docker compose up -d
If you list the containers running with the command docker ps -a, you should have at least one container named valkey.
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                    PORTS                    NAMES
8415fa4e2732   valkey/valkey:7.2.5   "docker-entrypoint.s…"   27 seconds ago   Up 27 seconds (healthy)   0.0.0.0:6379->6379/tcp   valkey
Now, how can you test whether everything is working? You can use the Valkey CLI for this. You can either install Valkey locally to have a distribution native to your computer, or use the binary from the Docker container. To install a native distribution, browse to https://valkey.io/download and grab a distribution for your operating system. If you are using macOS, you can install Valkey using Homebrew:
brew install valkey
Then type:
valkey-cli
You will connect to Valkey and be presented with the prompt 127.0.0.1:6379>. You can issue the PING command to test the connection. If everything goes well, you will get PONG back.
127.0.0.1:6379> PING
PONG
127.0.0.1:6379>
As mentioned before, you can also use the binary from the Docker container. To do this, you must get into the running container with the following command:
docker exec -it -u root valkey bash
Once you are inside the container, just repeat the same steps shared before as if you had Valkey locally. To exit the container, just type exit.
Valkey inherits many of the commands available in Redis, as you can check in this list. Play around with some of these commands using the Valkey CLI to get yourself familiar with the technology. When you are done playing with Docker, just execute the command docker compose down to stop the container.
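To give you a flavor, a quick session might look like this; greeting is just a hypothetical key I'm using for illustration:

```
127.0.0.1:6379> SET greeting "hello"
OK
127.0.0.1:6379> GET greeting
"hello"
127.0.0.1:6379> EXPIRE greeting 60
(integer) 1
127.0.0.1:6379> TTL greeting
(integer) 60
```

EXPIRE sets a time-to-live in seconds on a key, and TTL reports how many seconds remain before the key is deleted.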

Connecting from Go

Now that you know Valkey is up and running, it is time for you to play with the server from a small application. Create a new Go file named main.go, and write the following code:
package main

import (
    "context"
    "fmt"

    "github.com/valkey-io/valkey-go"
)

const (
    redisAddr = "localhost:6379"
    keyData   = "new-redis"
    valueData = "Valkey"
)

func main() {
    client, err := valkey.NewClient(valkey.ClientOption{
        InitAddress: []string{redisAddr},
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()

    ctx := context.Background()
    err = client.Do(ctx, client.B().Set().Key(keyData).Value(valueData).Build()).Error()
    if err != nil {
        panic(err)
    }

    value, err := client.Do(ctx, client.B().Get().Key(keyData).Build()).ToString()
    if err != nil {
        panic(err)
    }
    fmt.Println(value)
}
To execute this code, just open a new terminal and type:
go run main.go
You should see a simple output saying Valkey. Now let me explain why. 🙂
This Go code opens a connection to Valkey at the localhost:6379 endpoint. If no errors occur, it continues its execution, deferring the connection close operation until the code has finished doing everything else.
client, err := valkey.NewClient(valkey.ClientOption{
    InitAddress: []string{redisAddr},
})
if err != nil {
    panic(err)
}
defer client.Close()
The first operation executed against Valkey is the write operation using the SET command. It writes the key new-redis with the value Valkey. As in, who is the new Redis? 😅
ctx := context.Background()
err = client.Do(ctx, client.B().Set().Key(keyData).Value(valueData).Build()).Error()
if err != nil {
panic(err)
}
Finally, it uses the GET command to retrieve the value associated with the key new-redis. When the value is received, it is converted into a string and printed to the output.
value, err := client.Do(ctx, client.B().Get().Key(keyData).Build()).ToString()
if err != nil {
panic(err)
}
fmt.Println(value)
This is why you see Valkey after executing the Go code.

Valkey with TestContainers

Another interesting way to play with Valkey running on Docker is using TestContainers for Go. It allows you to declare Valkey as a dependency programmatically in your code while the Docker container plumbing is managed for you behind the scenes. Here is an updated version of the existing code.
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
    "github.com/valkey-io/valkey-go"
)

const (
    keyData   = "new-redis"
    valueData = "Valkey"
)

func main() {
    ctx := context.Background()

    containerRequest := testcontainers.ContainerRequest{
        Name:         "valkey",
        Image:        "valkey/valkey:7.2.5",
        ExposedPorts: []string{"6379/tcp"},
        WaitingFor:   wait.ForListeningPort("6379/tcp"),
    }
    container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: containerRequest,
        Started:          true,
        Reuse:            true,
    })
    if err != nil {
        log.Fatalf("Could not start Valkey: %s", err)
    }
    defer func() {
        if err := container.Terminate(ctx); err != nil {
            log.Fatalf("Unable to stop Valkey: %s", err)
        }
    }()

    endpoint, err := container.Endpoint(ctx, "")
    if err != nil {
        log.Fatalf("Unable to retrieve the endpoint: %s", err)
    }

    client, err := valkey.NewClient(valkey.ClientOption{
        InitAddress: []string{endpoint},
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()

    err = client.Do(ctx, client.B().Set().Key(keyData).Value(valueData).Build()).Error()
    if err != nil {
        panic(err)
    }

    value, err := client.Do(ctx, client.B().Get().Key(keyData).Build()).ToString()
    if err != nil {
        panic(err)
    }
    fmt.Println(value)
}
Executing this code will produce the same output and print Valkey. So what is the difference, right? Well, the difference is that you don't need to start the container with Docker Compose before executing the code. In fact, you don't even need to remember to stop the container afterwards, because TestContainers takes care of disposing of the container instance for you. All you need to do is create a container using the strategy below.
containerRequest := testcontainers.ContainerRequest{
    Name:         "valkey",
    Image:        "valkey/valkey:7.2.5",
    ExposedPorts: []string{"6379/tcp"},
    WaitingFor:   wait.ForListeningPort("6379/tcp"),
}
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    ContainerRequest: containerRequest,
    Started:          true,
    Reuse:            true,
})
if err != nil {
    log.Fatalf("Could not start Valkey: %s", err)
}
defer func() {
    if err := container.Terminate(ctx); err != nil {
        log.Fatalf("Unable to stop Valkey: %s", err)
    }
}()
Here, we are creating a container request with the specifications we want to work with: the exact container image, which port bindings to expose, and the criteria used to decide that the container is ready. With the container request in place, you can create a container instance, which will act as the working container for your code.
endpoint, err := container.Endpoint(ctx, "")
if err != nil {
    log.Fatalf("Unable to retrieve the endpoint: %s", err)
}

client, err := valkey.NewClient(valkey.ClientOption{
    InitAddress: []string{endpoint},
})
if err != nil {
    panic(err)
}
defer client.Close()
Once you have the container instance, you can retrieve the endpoint your code needs to create a connection with Valkey. With TestContainers, the endpoint may not be the same every time you execute the code, but that is an implementation detail you don't have to worry about, as container instances are ephemeral and disposable.
Using TestContainers is a great way to create dependencies for your code programmatically, such as databases, messaging systems, and any sort of backend that in the real world usually lives outside the scope of your application. It is a great fit for implementing functional and integration tests.
💡 You can find the complete source code of this blog post here.

Summary

Valkey is an exciting new project that continues the great work done in Redis, but in the open. As a community-driven project, anyone can help shape the future of the technology. In this blog post, I showed you how to get started with Valkey using Docker, how to perform initial tests, and how to use Go to give Valkey a whirl. This is only the tip of the iceberg of what is possible, and I'm excited to see what you build next with Valkey.
Follow me on LinkedIn if you want to geek out about technologies.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
