Kubernetes Tutorial for Swift on the Server

In this tutorial, you’ll learn how to use Kubernetes to deploy a Kitura server that’s resilient, with crash recovery and replicas. You’ll start by using the kubectl CLI, then use Helm to combine it all into one command. By David Okun.

Tagging Your RazeKube Docker Image

Open a web browser and navigate to localhost:8080 to make sure you can see the home page. Next, press Control-C in your Terminal to stop the container.

Now, enter the command docker image ls — your output should look like this:

REPOSITORY            TAG     IMAGE ID      CREATED         SIZE
razekube-swift-run    latest  eb85ef44e45f  2 minutes ago   598MB
razekube-swift-tools  latest  2008ae41e316  3 minutes ago   1.97GB

The Kitura CLI configures your app to build in one container, razekube-swift-tools, and to run in a separate, much smaller one, razekube-swift-run, all in the name of saving space in your runtime image.

If you think that this is still a bit large for a container, you aren’t alone – “slim” images and multi-stage Dockerfiles are in the works as you read this!

Lastly, tag your image like so:

docker tag razekube-swift-run razekube-swift-run:1.0.0

Type docker image ls again to make sure your razekube-swift-run tag was created:

REPOSITORY            TAG     IMAGE ID       CREATED         SIZE
razekube-swift-run    1.0.0   eb85ef44e45f   3 minutes ago   598MB
razekube-swift-run    latest  eb85ef44e45f   3 minutes ago   598MB
razekube-swift-tools  latest  2008ae41e316   4 minutes ago   1.97GB
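
Note: Because this tutorial runs everything on a local cluster, the locally tagged image is all you need. If you were deploying to a remote cluster instead, you would also push the image to a registry the cluster can pull from; here's a rough sketch, assuming a hypothetical Docker Hub account named your-dockerhub-user:

# retag the image under a registry namespace, then upload it
docker tag razekube-swift-run:1.0.0 your-dockerhub-user/razekube-swift-run:1.0.0
docker push your-dockerhub-user/razekube-swift-run:1.0.0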

All right, next you’ll put this inside your Kubernetes cluster!

Deploying RazeKube to Kubernetes

First, type kubectl get all and kubectl get pods, and check that the output looks like so:

➜ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19h
➜ kubectl get pods
No resources found.
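
If your output looks noticeably different, kubectl may be pointed at a different cluster. A quick sanity check is to print the current context; the name you see depends on how you set up your local cluster:

kubectl config current-context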

In Kubernetes, a pod is the smallest deployable unit: a set of one or more co-located containers. For the purposes of this tutorial, observing a pod is effectively observing a running instance of the app you deployed.
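
If you'd like to see how Kubernetes itself documents this resource type, kubectl can print its built-in API reference for pods:

kubectl explain pod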

Create a deployment for RazeKube, which spins up a pod for your app, by entering the following command in Terminal:

kubectl create deployment razekube --image=razekube-swift-run:1.0.0
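
Behind the scenes, this command creates a Deployment object that manages the pod for you. If you're curious about what Kubernetes generated on your behalf, you can inspect it with either of these commands (the second prints the full, rather verbose, object definition):

kubectl describe deployment razekube
kubectl get deployment razekube -o yaml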

Confirm that your app deployed by running kubectl get pods, and check that your output looks similar to this:

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   0          26s

Kubernetes creates a unique identifier for each pod as it runs, unless you specify otherwise. While this is great to see that your app is running, you haven’t yet configured a way to access it!
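
In the meantime, you can still peek inside the pod from the command line. For example, to view the app's logs, enter the following, substituting the pod name from your own kubectl get pods output:

kubectl logs razekube-6dfd6844f7-74j7f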

Creating a RazeKube Service

This is where Kubernetes begins to shine. Rather than taking control away from you, Kubernetes gives you complete control over how your end users access each deployment, via a service.

Add a point of access for your app by creating a service like so:

kubectl expose deployment razekube --type="NodePort" --port=8080

Now type kubectl get svc to get a list of exposed services currently in flight on Kubernetes, and you should see output like so:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   1m

Notice the PORT(S) column — Kubernetes has mapped port 8080 on your app to a randomly assigned port. This port will be different every time, so make sure you note which port Kubernetes opened for you. Open a web browser, and navigate to that address, which would be localhost:32612 in my case. If you see the home page, ask the almighty Kube to demonstrate its power by navigating to localhost:32612/kubed?number=4 — you should see this:

4 cubed running within Kubernetes

Nice! You are now running a Swift app on Kubernetes!!!
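
By the way, if you'd rather not read the randomly assigned port out of the PORT(S) column each time, you can ask kubectl for it directly; one way is JSONPath output:

kubectl get svc razekube -o jsonpath='{.spec.ports[0].nodePort}'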

Recovering From a Crash

Now you’re going to test out how Kubernetes keeps things working for you. First, type kubectl get all in Terminal, and you should see the following output:

NAME                            READY     STATUS    RESTARTS   AGE
pod/razekube-6dfd6844f7-74j7f   1/1       Running   0          11m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
service/razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   8m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/razekube   1         1         1            1           11m

NAME                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/razekube-6dfd6844f7   1         1         1         11m

Notice how every component of your state is enumerated for you.

Next, type the command kubectl get pods, but don’t press Return just yet. In a moment, what you’re going to do is:

  • Navigate to localhost:32612/uhoh in your browser, which will deliberately crash your app.
  • Press Return in Terminal, then run the same kubectl get pods command repeatedly until you see that your STATUS is Running. Hint: Press the Up Arrow to redisplay the previous command, or use the --watch tip after this list.
  • Navigate to localhost:32612 in your browser.
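
If pressing the Up Arrow over and over gets old, kubectl can also stream pod changes to your Terminal as they happen; press Control-C when you're done watching:

kubectl get pods --watch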

As you keep entering your command in Terminal, you will see your pod state evolve like so:

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       Error     0          17m

NAME                        READY     STATUS             RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       CrashLoopBackOff   0          17m

NAME                        READY     STATUS                RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       ContainerCreating     1          17m

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   1          17m

As Kubernetes scans the state of everything in your cluster, it reconciles how things actually are (your app just crashed) with how they should be, according to the desired state recorded in etcd. If there is a mismatch, Kubernetes works to resolve the difference!

You have dictated that there should be a functioning deployment called razekube, but by triggering the /uhoh route, that deployment is no longer functioning. When Kubernetes notices that this non-functional state doesn't match the desired functional state in etcd, it restarts the container to bring the pod back to a functional state. Once your deployment is running again, you can access your app and see that you're back in business!
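
If you want to see Kubernetes' own record of the crash and recovery, you can list recent cluster events; one way to sort them chronologically is shown below, though flag behavior can vary slightly between kubectl versions:

kubectl get events --sort-by=.metadata.creationTimestamp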

Deploying Replicas

Running or not running isn't the only state that Kubernetes can manage for you. Consider a scenario where a bunch of people have heard about the almighty Kube and want to check out its power. You'll need more than one instance of your app running concurrently to handle all that traffic!

In Terminal, enter the following command:

kubectl scale --replicas=5 deployment razekube

Typically, with heavier apps, you could enter this command to watch the rollout happen in real time:

kubectl rollout status deployment razekube

But this is a fairly lightweight app, so the change will happen immediately.
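
Either way, the deployment keeps a history of its rollouts that you can inspect after the fact:

kubectl rollout history deployment razekube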

Enter kubectl get pods and kubectl get deployments to check out the new app state:

➜ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   4          32m
razekube-6dfd6844f7-88wr7   1/1       Running   0          1m
razekube-6dfd6844f7-b4snx   1/1       Running   0          1m
razekube-6dfd6844f7-tn6mr   1/1       Running   0          1m
razekube-6dfd6844f7-vnr7w   1/1       Running   0          1m
➜ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
razekube   5         5         5            5           33m

In this case, you’ve told etcd the desired state of your cluster should be that there are 5 replicas for your razekube deployment.

Hit your /uhoh route a couple of times, and type kubectl get pods over and over again in Terminal to observe your pods as Kubernetes works to maintain the state you dictated!
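
When you're done experimenting, you can dial the desired state back down the same way, and Kubernetes will terminate the extra pods for you:

kubectl scale --replicas=1 deployment razekube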

Kubernetes can manage so much more than just these two examples. You can do things like:

  • Manage TLS certificate secrets for encrypted traffic.
  • Create an Ingress controller to handle where certain traffic goes into your cluster.
  • Handle a load balancer so that deployments inside your cluster receive equal amounts of traffic (see the sketch after this list).
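
For example, on a cluster that supports the LoadBalancer service type, you could expose the same deployment behind a load balancer instead of a NodePort. Here's a rough sketch; the razekube-lb service name is just an example, and a local cluster won't always provision an external address for it:

kubectl expose deployment razekube --name=razekube-lb --type=LoadBalancer --port=8080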

And because you worked with a Docker container this whole time, none of this is specific to Swift: Kubernetes works for any app that you can put into Docker ;].