From Podman to Kubernetes: A Practical Integration Guide
Podman is a lightweight container engine that provides an easy-to-use command-line interface for managing images and containers. It is often used as a drop-in replacement for Docker because, apart from the Docker Swarm commands, its CLI is compatible with the Docker CLI.
However, Podman's capabilities extend beyond Docker compatibility, one of them being Kubernetes integration (the ability to parse and generate Kubernetes manifests). This feature offers additional convenience and flexibility, allowing you to easily deploy and manage your Podman workloads in a Kubernetes cluster or seamlessly transfer existing workloads from a Kubernetes cluster to a Podman installation.
This guide aims to demonstrate how Podman and Kubernetes can be integrated to leverage the benefits of both technologies in an efficient and practical manner. We will go through a basic introduction to pods before diving into more advanced topics and scenarios involving Kubernetes.
By the end of this article, you'll have a clear understanding of how Podman and Kubernetes can be utilized together to optimize your container management workflows and maximize the efficiency of your deployments.
Let's start with an overview of pods and how they're used in Podman.
Prerequisites
- Good Linux command-line skills.
- Basic experience with Podman and Kubernetes.
- Recent version of Podman installed on your system.
- (Optional) Docker Engine installed on your system for running the minikube examples.
Understanding pods
As you know, the concept of pods doesn't exist in all container engines. For instance, Docker doesn't support pods. Thus, many engineers are unaware of pods and their use-cases and prefer working with individual containers instead. However, with the increasing popularity of Kubernetes, it has become essential for many users to understand and integrate pods into their containerization workflows.
In Kubernetes, pods represent the smallest and simplest deployable objects, consisting of one or more containers managed as a cohesive unit. Containers within a pod can share resources like network and storage while maintaining separate filesystems and process namespaces, ensuring tighter security and better stability.
Podman aligns with this concept by allowing users to organize containers into pods. While there are differences in the implementations of Kubernetes and Podman, the core idea of managing containers as a unified entity remains consistent, making Podman pods capable of performing similar tasks.
To create a new pod, you execute:
podman pod create my-first-pod
This outputs a SHA-256 hash uniquely identifying the pod on your system:
e22b6a695bd8e808cadd2c39490951aba29c971c7be83eacc643b11a0bdc4ec7
You can issue the following command to further confirm that the pod is created successfully:
podman pod ls
It produces a similar output:
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
e22b6a695bd8 my-first-pod Created 23 seconds ago 131ee0bcd059 1
Let's examine each column:
- POD ID shows the unique identifier of the newly created pod. Upon closer examination, you'll notice that its value corresponds to the initial 12 characters of the SHA-256 hash generated by the podman pod create command. You can use this ID to distinguish this pod in subsequent commands and operations.
- NAME indicates the name of the newly created pod. Most podman commands allow you to reference a pod by either its name or its ID interchangeably.
- STATUS indicates the state of the newly created pod, which can be one of Created, Running, Stopped, Exited, or Dead. In this case, the status is Created, which means that the pod definition has been created, but no container processes are currently running inside.
- CREATED simply indicates how long ago the pod was created.
- INFRA ID is an interesting one. It shows the identifier of the infrastructure container that the pod was created with (in this case, 131ee0bcd059). The infrastructure container is what allows containers running inside a pod to share various Linux namespaces. By default, Podman orchestrates the pod in a way that allows its containers to share the net, uts, and ipc namespaces. This allows containers within the pod to communicate with each other and reuse certain resources.
- # OF CONTAINERS shows the number of containers attached to the pod. A pod always starts with one container attached to it by default (the infrastructure container), even though its process is not started automatically, as you will see in a moment.
To examine the existing containers, type:
podman container ps -a
The output shows the infrastructure container of the pod that you just created:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
131ee0bcd059 localhost/podman-pause:4.3.1-0 51 seconds ago Created e22b6a695bd8-infra
Notice how the CONTAINER ID matches the INFRA ID of the created pod, and how the first 12 characters of the container name, e22b6a695bd8-infra, match the POD ID. These relationships are always true and make it very simple to identify the infrastructure container for each pod on systems where several pods might be running simultaneously.
When you create a new empty pod, the infrastructure container is prepared for launch, but no process is actually started. Because of that, the container initially shows as Created instead of Running, and the -a flag is required for the podman container ps command to display it.
At this point, no namespaces have been established for the pod containers either. Type in the following command to verify this:
lsns -T
You will see a similar output:
NS TYPE NPROCS PID USER COMMAND
4026531837 user 4 98786 marin /lib/systemd/systemd --user
├─4026531834 time 5 98786 marin /lib/systemd/systemd --user
├─4026531835 cgroup 5 98786 marin /lib/systemd/systemd --user
├─4026531836 pid 5 98786 marin /lib/systemd/systemd --user
├─4026531838 uts 5 98786 marin /lib/systemd/systemd --user
├─4026531839 ipc 5 98786 marin /lib/systemd/systemd --user
├─4026531840 net 5 98786 marin /lib/systemd/systemd --user
├─4026531841 mnt 4 98786 marin /lib/systemd/systemd --user
├─4026532336 mnt 0 root
└─4026532337 user 1 99106 marin catatonit -P
└─4026532338 mnt 1 99106 marin catatonit -P
The /lib/systemd/systemd --user lines display the namespaces utilized by the service manager that was initiated when you logged in to your user account on the given Linux machine. The catatonit -P lines, on the other hand, display the namespaces held by the global pause process that Podman maintains while you interact with it in rootless mode. We won't delve into the details of why these namespaces exist in the first place, but it's important to know that they are there and that this is the standard lsns output you will typically observe even before a new pod has performed any actual work.
Let's add a container to the newly created pod and see what happens. For this experiment, we'll use the hashicorp/http-echo image from Docker Hub (http-echo is a small in-memory web server commonly employed for testing purposes):
podman run -d --pod my-first-pod docker.io/hashicorp/http-echo:1.0.0
List the containers once again:
podman container ps
This time both the infrastructure container and the http-echo container appear as Running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
131ee0bcd059 localhost/podman-pause:4.3.1-0 6 minutes ago Up 23 seconds ago e22b6a695bd8-infra
c57f4d354eb4 docker.io/hashicorp/http-echo:1.0.0 22 seconds ago Up 23 seconds ago gallant_wescoff
The pod is listed as Running as well:
podman pod ps
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
e22b6a695bd8 my-first-pod Running 7 minutes ago 131ee0bcd059 2
If you perform an lsns again, you'll notice several changes:
lsns -T
NS TYPE NPROCS PID USER COMMAND
4026531837 user 4 98786 marin /lib/systemd/systemd --user
├─4026531834 time 10 98786 marin /lib/systemd/systemd --user
├─4026531835 cgroup 8 98786 marin /lib/systemd/systemd --user
├─4026531836 pid 8 98786 marin /lib/systemd/systemd --user
├─4026531838 uts 8 98786 marin /lib/systemd/systemd --user
├─4026531839 ipc 8 98786 marin /lib/systemd/systemd --user
├─4026531840 net 8 98786 marin /lib/systemd/systemd --user
├─4026531841 mnt 4 98786 marin /lib/systemd/systemd --user
├─4026532336 mnt 0 root
└─4026532337 user 6 99106 marin catatonit -P
├─4026532338 mnt 3 99106 marin catatonit -P
├─4026532340 net 2 100589 marin /catatonit -P
├─4026532401 mnt 1 100589 marin /catatonit -P
├─4026532402 mnt 1 100584 marin /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/netns-844a415e-435c-39aa-9962-b04eaf69e806 tap0
├─4026532403 uts 2 100589 marin /catatonit -P
├─4026532404 ipc 2 100589 marin /catatonit -P
├─4026532405 pid 1 100589 marin /catatonit -P
├─4026532406 cgroup 1 100589 marin /catatonit -P
├─4026532407 mnt 1 100594 165531 /http-echo
├─4026532408 pid 1 100594 165531 /http-echo
└─4026532409 cgroup 1 100594 165531 /http-echo
The /catatonit -P process (PID 100589) is the main process of the infrastructure container. As you can see, it operates inside net, mnt, uts, ipc, pid, and cgroup namespaces that are completely different from the root namespaces (as indicated by the systemd process). The /http-echo process, in turn, runs in separate mnt, pid, and cgroup namespaces, but shares its net, uts, and ipc namespaces with the catatonit process in the infrastructure container.
This may not be completely obvious at first, so to confirm this, you can also run:
lsns -T -p $(pgrep http-echo)
The output is clear:
NS TYPE NPROCS PID USER COMMAND
4026531837 user 4 98786 marin /lib/systemd/systemd --user
├─4026531834 time 10 98786 marin /lib/systemd/systemd --user
└─4026532337 user 6 99106 marin catatonit -P
├─4026532340 net 2 100589 marin /catatonit -P
├─4026532403 uts 2 100589 marin /catatonit -P
├─4026532404 ipc 2 100589 marin /catatonit -P
├─4026532407 mnt 1 100594 165531 /http-echo
├─4026532408 pid 1 100594 165531 /http-echo
└─4026532409 cgroup 1 100594 165531 /http-echo
- The net, uts, and ipc namespaces are the same as the ones held by the infrastructure container.
- The user namespace is the same as the one held by the global pause process maintained by rootless Podman.
- The time namespace is the root time namespace.
- The mnt, pid, and cgroup namespaces are unique to the http-echo container, isolating it from other containers in the pod.
This solidifies the idea that pods are essentially a group of containers capable of sharing namespaces.
As I said earlier, pods also allow you to manage containers as one cohesive unit. To see this in practice, type:
podman pod stop my-first-pod
Output:
e22b6a695bd8e808cadd2c39490951aba29c971c7be83eacc643b11a0bdc4ec7
This command stops the pod and all of its associated containers. To confirm this, type:
podman container ps -a
You will see that both containers were stopped:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
131ee0bcd059 localhost/podman-pause:4.3.1-0 25 minutes ago Exited (0) 22 seconds ago e22b6a695bd8-infra
c57f4d354eb4 docker.io/hashicorp/http-echo:1.0.0 19 minutes ago Exited (2) 22 seconds ago gallant_wescoff
The pod itself was stopped as well:
podman pod ls
Output:
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
e22b6a695bd8 my-first-pod Exited 28 minutes ago 131ee0bcd059 2
When you no longer need a pod, you can remove it completely by typing:
podman pod rm my-first-pod
Output:
e22b6a695bd8e808cadd2c39490951aba29c971c7be83eacc643b11a0bdc4ec7
This removes not only the pod, but also all of its associated containers.
You can verify this worked by repeating the podman pod ls and podman container ps -a commands. You will see that neither pods nor containers exist on your system:
podman pod ls
Output:
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
podman container ps -a
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
With that, you have covered the basics of working with Podman pods. Now, let's explore their practical use through a real-world example.
Exploring sidecar containers
Pods are often used for adding sidecar containers to an application. Sidecar containers provide additional functionality and support to the main application container, enabling use cases such as configuration management, log shipping, role-based access control, and more.
To understand this better, let's explore a practical log shipping example, where a web server logs incoming HTTP requests and a log shipper forwards them to an external service for indexing. In this scenario, the application pod will include two containers:
- A Caddy container for serving web pages over HTTP.
- A Vector container configured to ship logs from your web server to Better Stack.
Create a new pod by typing:
podman pod create --name example --publish 8080:80
Output:
e21066fdb234833ffd3167a1b3bda8f5910df7708176da594a054dd09200fae
Note how the command looks slightly different compared to your previous invocation of podman pod create.

First, you are using the --name option to specify the name of the pod. A name can be provided to the podman pod create command either through the --name option or as the very last positional argument. In other words, the command podman pod create --publish 8080:80 example is equally valid and serves the very same purpose, but when passing multiple command-line options, using --name is usually a lot easier to read and comprehend.

Most importantly, though, you specified the additional command-line option --publish 8080:80. As you remember, we already established that containers within a pod share the same network namespace by default. Therefore, if you want to receive any web traffic, you need to expose port 8080 to the host for the entire pod. You can't do it for just an individual container, as it shares its network namespace with the other containers in the pod, and the network namespace is configured when the pod is originally created. By using the --publish option, you ensure that any traffic coming to port 8080 on the host machine is forwarded to port 80 within the pod, where the Caddy container will be listening.
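If you ever want to double-check the pod-level port mapping after the fact, you can query the pod's infra configuration. A small sketch (the template fields mirror the JSON produced by podman pod inspect):
podman pod inspect example --format '{{.InfraConfig.PortBindings}}'
On a typical setup, this prints a bindings map along the lines of map[80/tcp:[{ 8080}]], confirming that host port 8080 is wired to port 80 of the pod.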
Add Caddy to the pod by typing:
podman create --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
Here, through the --pod example option, you are specifying that you want Podman to attach the container to an existing pod named example (the one that you created earlier). You're also giving the container a specific name with the --name caddy option. Finally, docker.io/library/caddy:2.7.6-alpine specifies the precise image that the container should be created from.
Podman fulfills the request and produces the following output:
Trying to pull docker.io/library/caddy:2.7.6-alpine...
Getting image source signatures
Copying blob b7343593237d done
Copying blob c926b61bad3b done
Copying blob 6fd2155878b9 done
Copying blob 08886dfc0722 done
Copying config 657b947906 done
Writing manifest to image destination
Storing signatures
7307f130b2951ea8202bbf6d1d6d1a81fbdb66d022d65c26f9c209ee2e664bf2
Keep in mind that a container's assigned name isn't scoped to its pod but is reserved globally. If you try to create another container with the same name, you will get an error, even though the new container isn't part of the same pod:
podman create --name caddy docker.io/library/caddy:2.7.6-alpine
Output:
Error: creating container storage: the container name "caddy" is already in use by 7307f130b2951ea8202bbf6d1d6d1a81fbdb66d022d65c26f9c209ee2e664bf2. You have to remove that container to be able to reuse that name: that name is already in use
Now that the Caddy container has been created, it's interesting to see it in action. Run the following command:
curl localhost:8080
Surprisingly, it turns out that the web server is currently unreachable:
curl: (7) Failed to connect to localhost port 8080 after 0 ms: Couldn't connect to server
Why is that? While the podman create command indeed creates the container and attaches it to the example pod, it doesn't actually start its main process. If you wish the process to start immediately after the container is created, you should execute podman run instead of podman create, like this:
podman run -d --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
Currently, however, not starting the process is desired, because the default Caddy configuration doesn't emit logs, and that leaves you without any data for Vector to process. You can rectify this by modifying the default configuration first, and only then starting the main caddy process inside the container.
Create a new file named Caddyfile and paste the following contents to ensure that logs will be generated:
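A minimal Caddyfile along these lines produces the behavior described below (treat it as a sketch; the important part is the log directive, whose net output address must match the Vector socket you'll configure shortly):

:80 {
    # serve the default site root
    root * /usr/share/caddy
    file_server

    # ship access logs to a TCP socket inside the pod
    log {
        output net localhost:9000
    }
}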
The log directive instructs Caddy to start emitting logs over a network socket listening for TCP connections at localhost:9000 inside the pod. This network socket doesn't exist yet, but it will be created by the Vector container that you'll set up next.
Copy the updated Caddyfile to the Caddy container by issuing:
podman cp Caddyfile caddy:/etc/caddy/Caddyfile
Note how you are referring to the container by the name that you specified earlier (caddy). This is a lot easier than writing:
podman cp Caddyfile 7307f130b295:/etc/caddy/Caddyfile
You're almost ready to start the main caddy process. But before that, let's quickly customize the homepage it's going to serve, just so it's easier to display its contents in a terminal.
Create a new file named index.html and paste the following contents:
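The markup matches what you'll see curl print back in a moment:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Example</title>
</head>
<body>
<p>This is an example page.</p>
</body>
</html>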
Then copy the index.html file to the container by issuing:
podman cp index.html caddy:/usr/share/caddy/index.html
Finally, start the Caddy container:
podman start caddy
Once again, you're using the name you specified earlier (caddy) to identify the container. This is why choosing clear and descriptive names is so important.
Confirm that the container is running by typing:
podman ps -f name=caddy
A similar output should appear:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7307f130b295 docker.io/library/caddy:2.7.6-alpine caddy run --confi... 7 minutes ago Up About a minute ago 0.0.0.0:8080->80/tcp caddy
Try accessing the server again:
curl localhost:8080
This time, the expected output appears:
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Example</title>
</head>
<body>
<p>This is an example page.</p>
</body>
</html>
Good, Caddy works, and the example pod is capable of receiving HTTP requests on port 8080 and forwarding them for processing to the Caddy container (on port 80).
You can also access your server from a web browser. Type in localhost:8080 and a similar web page should appear:
Earlier, we mentioned that you cannot expose additional ports for a specific container after providing the initial pod definition. Let's confirm this.
Create another pod:
podman pod create dummy-pod
Now, try adding a new Caddy container to that pod, attempting to publish port 80 of the container to port 8081 on the host:
podman create --pod dummy-pod --publish 8081:80 docker.io/library/caddy:2.7.6-alpine
You get an error:
Error: invalid config provided: published or exposed ports must be defined when the pod is created: network cannot be configured when it is shared with a pod
With this clarified, you're now ready to start setting up the Vector container.
Sign into your Better Stack account and create a new data source:
In the presented form, specify Podman tutorial as the name and Vector as the platform, then click Create source:
If all goes well, the new source will be created successfully. Copy the token presented under the Source token field. We'll refer to this token as <your_source_token> and use it for configuring Vector to send logs to Better Stack.
Now create a new file named vector.yaml and paste the following contents:
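A configuration along the following lines does the job; treat it as a sketch rather than a definitive listing. The socket source and http sink are standard Vector components, and the ingestion URL is the one Better Stack documents for Vector setups; replace <your_source_token> with the token you copied:

sources:
  caddy_logs:
    # listen for Caddy's log stream on a TCP socket inside the pod
    type: socket
    address: 0.0.0.0:9000
    mode: tcp

sinks:
  better_stack:
    # forward everything collected from the socket to Better Stack over HTTP
    type: http
    inputs:
      - caddy_logs
    uri: https://in.logs.betterstack.com/
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: <your_source_token>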
This file instructs the main process running inside the Vector container to create a new network socket listening for TCP connections on port 9000. Caddy will connect to this socket to emit its logs. Furthermore, this configuration tells Vector to forward all collected logs to Better Stack via HTTP.
Create a new container running the official Vector image and add it to the example pod:
podman create --pod example --name vector docker.io/timberio/vector:0.35.0-alpine
Copy the configuration file to the container:
podman cp vector.yaml vector:/etc/vector/vector.yaml
Finally, start the container:
podman start vector
Verify that all containers inside the pod are running by typing:
podman ps --pod
You should see a similar output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME
5827494c3cce localhost/podman-pause:4.3.1-0 12 minutes ago Up 3 minutes ago 0.0.0.0:8080->80/tcp bf97c02c7c07-infra bf97c02c7c07 example
7307f130b295 docker.io/library/caddy:2.7.6-alpine caddy run --confi... 12 minutes ago Up 3 minutes ago 0.0.0.0:8080->80/tcp caddy bf97c02c7c07 example
cd2daa5962e1 docker.io/timberio/vector:0.35.0-alpine 33 seconds ago Up 21 seconds ago 0.0.0.0:8080->80/tcp vector bf97c02c7c07 example
Now navigate back to your browser and refresh the web page at localhost:8080 a couple of times, or issue a few curl requests from the terminal; the range syntax below makes ten requests in one go:
curl localhost:8080/[1-10]
In Better Stack, navigate to Live tail:
You should see some logs collected from the Caddy container:
Your setup works. The Caddy and Vector containers run in the same network namespace, so they can communicate over the TCP socket that vector established.
To confirm that the network namespace is the same, run:
lsns -t net -p $(pgrep caddy)
Output:
NS TYPE NPROCS PID USER NETNSID NSFS COMMAND
4026532340 net 5 166215 marin unassigned rootlessport
Then run:
lsns -t net -p $(pgrep vector)
Output:
NS TYPE NPROCS PID USER NETNSID NSFS COMMAND
4026532340 net 5 166215 marin unassigned rootlessport
Both processes run in the network namespace with inode 4026532340. The rootlessport command is a port forwarder which, when running Podman in rootless mode, forwards traffic from port 8080 on the host machine to port 80 within the network namespace held by the pod.
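You can also spot the forwarder from the host side. Assuming the iproute2 ss utility is available, the socket listening on port 8080 should be owned by a rootlessport process:
ss -ltnp 'sport = :8080'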
With all of this out of the way, let's go ahead and explore how Podman can be used for generating manifests and deploying them to a Kubernetes cluster, and how existing Kubernetes manifests can be deployed into a local Podman installation.
Make sure to leave your example pod running, as you're going to need it in the next section.
Integrating with Kubernetes
As I mentioned earlier, Podman doesn't ship with a tool such as Docker Swarm for managing container orchestration. In a more sophisticated deployment scenario, where high availability, scalability, and fault tolerance are required and multiple hosts need to be involved, Podman users can leverage an orchestrator such as Kubernetes to handle the complexity of managing their workloads.
Podman aims to ease the transition to and from Kubernetes by exposing commands for converting existing workloads to YAML files (manifests) that Kubernetes can understand. Furthermore, users can import existing Kubernetes manifests into Podman, and Podman can parse and run these workloads locally.
If you're not familiar with what a Kubernetes manifest is, it's a file that describes the desired state of your Kubernetes cluster. It includes information about the pods, volumes, and other resources that have to be created and managed by Kubernetes.
Before proceeding with this example, you have to install Minikube to be able to experiment with Kubernetes locally. Minikube is a tool that lets you run a single-node Kubernetes cluster on your local machine.
Follow the official Minikube installation instructions and run:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
This downloads a binary file named minikube-linux-amd64 into your current directory. Use the following command to move this file to one of the directories specified in your $PATH:
sudo install minikube-linux-amd64 /usr/local/bin/minikube
This enables you to run the minikube command from anywhere in your terminal.
Since the install command doesn't move but only copies the minikube-linux-amd64 file to the /usr/local/bin directory, you can go ahead and remove the redundant copy by issuing:
rm minikube-linux-amd64
To confirm that minikube has been installed successfully, run:
minikube version
You should see a similar output:
minikube version: v1.32.0
commit: 8220a6eb95f0a4d75f7f2d7b14cef975f050512d
At the time of this writing, the Podman driver for Minikube is still experimental, and depending on the underlying setup, it can cause networking and DNS resolution issues inside Minikube. For a stable Minikube experience on Linux, you still have to use Docker.
If you don't have Docker installed, you can follow the official Docker installation instructions.
The examples that follow assume that Docker Engine is already installed and running on your system, which you can verify by issuing:
docker --version
You should see a similar output:
Docker version 24.0.7, build afdd53b
You also need to make sure that your current user is added to the docker group, so sudo isn't required for running commands against the Docker daemon:
sudo usermod -aG docker $USER && newgrp docker
Otherwise, Minikube will fail with a similar error:
👎 Unable to pick a default driver. Here is what was considered, in preference order:
▪ docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}:{{.Server.Platform.Name}}" exit status 1: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/versio
n": dial unix /var/run/docker.sock: connect: permission denied
With all of that out of the way, go ahead and start Minikube:
minikube start
You should see a similar output:
😄 minikube v1.32.0 on Ubuntu 23.10 (kvm/amd64)
✨ Automatically selected the docker driver. Other choices: none, ssh
📌 Using Docker driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.28.3 preload ...
> preloaded-images-k8s-v18-v1...: 403.35 MiB / 403.35 MiB 100.00% 36.90 M
> gcr.io/k8s-minikube/kicbase...: 453.88 MiB / 453.90 MiB 100.00% 36.32 M
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
With Minikube running, you can proceed to generating Kubernetes manifests from your Podman resources.
Verify that the example pod that you created earlier, along with all of its containers, is still running:
podman pod ls
Output:
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
bf97c02c7c07 example Running 7 minutes ago 5827494c3cce 3
Then:
podman container ps
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5827494c3cce localhost/podman-pause:4.3.1-0 8 minutes ago Up 8 minutes ago 0.0.0.0:8080->80/tcp bf97c02c7c07-infra
7307f130b295 docker.io/library/caddy:2.7.6-alpine caddy run --confi... 8 minutes ago Up 8 minutes ago 0.0.0.0:8080->80/tcp caddy
cd2daa5962e1 docker.io/timberio/vector:0.35.0-alpine 8 minutes ago Up 7 minutes ago 0.0.0.0:8080->80/tcp vector
Podman can easily build a Kubernetes manifest from a running pod through the podman kube generate command. It expects you to provide the following parameters:
podman kube generate <pod_name> --service -f <output_file>
To create the necessary manifest corresponding to your example pod, type:
podman kube generate example --service -f example.yaml
In this process, you may observe the following warning, but since these particular annotations don't carry any significant meaning, you can safely disregard the message:
WARN[0000] Truncation Annotation: "5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf" to "5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140b": Kubernetes only allows 63 characters
WARN[0000] Truncation Annotation: "5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf" to "5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140b": Kubernetes only allows 63 characters
5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf in this case is the SHA-256 ID of the infrastructure container associated with the pod, which is used for populating the io.kubernetes.cri-o.SandboxID/caddy and io.kubernetes.cri-o.SandboxID/vector annotations inside the generated manifest file. These annotations play no significant role in the deployment of this pod to Kubernetes.
An example.yaml file should now appear in your current folder:
ls -l example.yaml
Output:
-rw-r--r-- 1 marin marin 2270 Jan 19 11:21 example.yaml
Let's examine its contents:
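The full file includes generated annotations and timestamps, but its essential structure looks roughly like the sketch below (IDs, annotations, and minor fields will differ on your system):

# Pod definition mirroring the local example pod
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
    - name: caddy
      image: docker.io/library/caddy:2.7.6-alpine
      ports:
        - containerPort: 80
          hostPort: 8080
    - name: vector
      image: docker.io/timberio/vector:0.35.0-alpine
---
# Service definition generated because of the --service flag
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - name: "80"
      port: 80
      targetPort: 80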
You can now run the following command to deploy this manifest to your Kubernetes cluster:
minikube kubectl -- create -f example.yaml
This results in a similar output:
> kubectl.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
> kubectl: 47.56 MiB / 47.56 MiB [------------] 100.00% 2.42 GiB p/s 200ms
service/example created
pod/example created
Wait a minute or two, then type:
minikube kubectl -- get all
You should see a similar output:
NAME READY STATUS RESTARTS AGE
pod/example 2/2 Running 0 7m11s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/example NodePort 10.110.196.168 <none> 80:30381/TCP 7m11s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
This indicates that the pod is up and running inside your local Kubernetes cluster.
From the output, it appears that the pod is ready to accept incoming HTTP requests on port 80 through the corresponding NodePort service. In this case, the NodePort service maps port 30381 of the Kubernetes node that the pod is running on to port 80 in the pod.
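To see the exact mapping Kubernetes assigned, you can ask for the service details:
minikube kubectl -- describe service example
The NodePort line in the output shows the node port (30381 in this example) that fronts port 80 of the pod.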
However, if you type in:
curl localhost:80
You'll notice that the web server is unreachable:
curl: (7) Failed to connect to localhost port 80 after 0 ms: Couldn't connect to server
That's because the minikube network is isolated from your host network. You can run the following command to determine the URL that you can connect to:
minikube service list
This will output a similar table:
|-------------|------------|--------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------|--------------|---------------------------|
| default | example | 80/80 | http://192.168.49.2:30381 |
| default | kubernetes | No node port | |
| kube-system | kube-dns | No node port | |
|-------------|------------|--------------|---------------------------|
The address listed in the URL column is the one enabling access to your web server.
Try again: open http://192.168.49.2:30381 in a browser or type:
curl http://192.168.49.2:30381
You'll see the familiar "Caddy, works!" page:
Your pod is now successfully running on Kubernetes. The changes you made earlier through podman cp are, of course, missing from the deployed images, so Caddy defaults to displaying the "Caddy, works!" page, but essentially all it took to deploy the application to Kubernetes was a single command.
You can remove the pod from the Kubernetes cluster by typing:
minikube kubectl -- delete -f example.yaml
The reverse direction works just as well: podman kube play takes an existing Kubernetes manifest and runs it locally under Podman. Try it with the manifest you just generated (if your original example pod is still running locally, remove it first with podman pod rm -f example so the names don't clash):
podman kube play example.yaml
When you no longer need the resulting pod, tear it down together with all of its containers:
podman kube down example.yaml
This produces a similar output:
Pods stopped:
98e78483cfd2258fa5d82fb77d113b9cbdd39adc33712ea448b4de15800bb4ce
Pods removed:
98e78483cfd2258fa5d82fb77d113b9cbdd39adc33712ea448b4de15800bb4ce
As you can see, with only a few commands, you were able to generate a manifest for deploying your application on Kubernetes. Then, you took an existing Kubernetes manifest and ran it locally with Podman. This demonstrates the power and flexibility that Podman can provide for orchestrating your containerized workloads.
Exploring Podman Desktop
Even though using the CLI is a common way to interact with Podman, users who prefer a graphical interface have the additional option of using Podman Desktop, an open-source tool that provides a user-friendly GUI for managing containers and images and interacting with Kubernetes manifests.
Podman Desktop aims to abstract away the low level details and let users focus more on application development.
The usual way to install Podman Desktop is through its corresponding Flatpak bundle. If you don't happen to have flatpak installed on your system, you can install it by running:
sudo apt install flatpak
Then add the flathub repository, as follows:
flatpak remote-add --if-not-exists --user flathub https://flathub.org/repo/flathub.flatpakrepo
You may have to restart your session for all changes to take effect. When you're done, you can run the following command to install Podman Desktop:
flatpak install --user flathub io.podman_desktop.PodmanDesktop
Finally, to start Podman Desktop, run:
flatpak run io.podman_desktop.PodmanDesktop
Soon after, the Podman Desktop GUI will appear:
Let's recreate the pod from our previous examples by issuing the following commands in the terminal:
podman pod create --name example --publish 8080:80
podman create --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
podman create --pod example --name vector docker.io/timberio/vector:0.35.0-alpine
Then, in Podman Desktop, navigate to Pods:
You will see the example pod listed:
Instead of having to type podman kube generate to create a Kubernetes manifest from this pod, you can use the Generate Kube action:
A manifest appears, containing the same content that you would otherwise get by running podman kube generate example -f example.yaml.
You may have noticed, though, that a Service definition is missing from that manifest. Earlier, you requested it explicitly by passing the --service flag to podman kube generate. At first sight, it may appear that Podman Desktop doesn't allow you to define a Service easily. However, this isn't the case.
Go back to the Pods screen and select the Deploy to Kubernetes action:
The same YAML definition appears, but there is also an additional checkbox allowing you to define a Service:
Scroll down a little, and you will see minikube listed as the Kubernetes context. This corresponds to the minikube cluster you created earlier:
Click Deploy, and after a few moments the pod will get deployed to your local minikube cluster:
Go back to the terminal and issue:
minikube service list
This outputs:
|-------------|--------------|--------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|--------------|--------------|-----|
| default | example-8080 | No node port | |
| default | kubernetes | No node port | |
| kube-system | kube-dns | No node port | |
|-------------|--------------|--------------|-----|
Unlike before, even though a service was created, there is no node port available for connecting to Caddy. That's because Podman Desktop created a service of type ClusterIP instead of NodePort.
To verify this, issue:
minikube kubectl -- get all
You'll see that the example-8080 service created by Podman Desktop has a type of ClusterIP:
NAME READY STATUS RESTARTS AGE
pod/example 2/2 Running 0 4m25s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/example-8080 ClusterIP 10.105.82.9 <none> 8080/TCP 4m25s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m21s
One possible way to address this problem and access Caddy is to patch the service, changing its type:
minikube kubectl -- patch svc example-8080 --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
Output:
service/example-8080 patched
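Alternatively, if you'd rather leave the service type untouched, kubectl can forward a local port to the ClusterIP service for the duration of a session:
minikube kubectl -- port-forward service/example-8080 8080:8080
While that command is running, curl localhost:8080 reaches Caddy through the forwarded port.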
You can now re-run:
minikube service list
This time, a URL appears allowing you to access Caddy:
|-------------|--------------|-------------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|--------------|-------------------|---------------------------|
| default | example-8080 | example-8080/8080 | http://192.168.49.2:30381 |
| default | kubernetes | No node port | |
| kube-system | kube-dns | No node port | |
|-------------|--------------|-------------------|---------------------------|
Open the listed URL in a browser and you'll see a familiar page:
Everything appears to work correctly!
Next, let's explore how importing an existing Kubernetes manifest works with Podman Desktop. Before that, however, let's remove all pods created so far in order to start in a clean state.
Open Podman Desktop and navigate to the Pods page. You will see both the Podman and the Kubernetes example pods appearing in the list:
Click the Delete buttons next to each pod in order to remove them from your system:
When you're done, you should see an empty list of pods:
Click the Play Kubernetes YAML button at the top right of the Pods screen, and a form will appear prompting you to specify a *.yaml file to execute:
Select the example.yaml file that you created earlier and click Play:
A message appears prompting you to wait while Podman orchestrates your containers:
After a moment, the process completes and Podman Desktop displays a JSON document indicating that the pod was started:
You can click the Done button, after which you'll see the newly created example pod in the list of pods:
Effectively, this entire process performs the same actions as the podman kube play example.yaml command you used earlier.
Open localhost:8080 in a browser, and it will take you to the familiar Caddy homepage:
To remove the pod and all of its attached containers in a way similar to podman kube down, navigate back to the Pods page and click Delete Pod:
A loader icon appears and soon after the pod is gone:
As you can see, Podman Desktop provides a convenient interface for managing your pods, making it easy to create, view, and delete them with just a few clicks. It also simplifies the process of working with Kubernetes and allows you to quickly perform actions like creating pods, accessing their public-facing services, and removing them when they are no longer needed. With Podman Desktop, you can effectively manage your containerized applications without the need for complex command-line instructions.
Final thoughts
The ability of Podman to integrate with Kubernetes presents a promising and flexible solution for container orchestration in modern IT environments. You can take advantage of these capabilities to seamlessly manage and deploy your containers across development, staging, and production environments.
For example, you can prototype your applications locally using Podman before eventually deploying them to a shared Kubernetes cluster for testing. You can also import externally provided Kubernetes manifests into your local Podman environments in order to explore and validate the behavior of applications without the need to run full-fledged Kubernetes clusters.
The options are endless, and both Podman CLI and Podman Desktop provide the necessary tools and flexibility for you to efficiently work with pods in various scenarios. To explore Podman further, consider visiting the official Podman website, exploring its documentation, and joining its growing community.
Thanks for reading!