Serverless Kotlin on Google Cloud Run

Learn how to build a serverless API using Ktor, then dockerize and deploy it to Google Cloud Run. By Kshitij Chauhan.


Managing servers is a hassle. Provisioning resources for traffic spikes, applying security updates, managing phased rollouts, maintaining hardware and other related tasks get in the way of developers who want to focus on building their applications. The serverless computing paradigm helps solve this problem by abstracting servers away from developers.

Kotlin is a great fit for writing APIs that run on serverless platforms. In this tutorial, you’ll use Kotlin to build an HTTP API that detects a user’s location from their IP address. Along the way, you’ll learn how to:

  1. Build back-end APIs with Kotlin and Ktor.
  2. Dockerize Ktor applications.
  3. Publish Docker containers on Google Artifact Registry.
  4. Deploy Docker containers to Google Cloud Run.
Note: This tutorial assumes you’re familiar with the basics of Kotlin and building REST APIs.

Getting Started

Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial.

You’ll find two projects: a shell project for a Ktor back-end API (the api directory) and an Android app that consumes this API (the app directory).

You’ll work with the back-end API first. You’ll add code step by step to this project to get to a fully functional back-end API. First, though, it’s important to understand the basics of the serverless computing model.

Defining Serverless

Contrary to what the name suggests, the serverless model doesn’t eliminate the need for servers. It just makes it someone else’s responsibility to manage the servers for you. In most cases, that “someone else” is a cloud provider with decades of expertise in managing servers.

In the serverless model, you provide your application’s source code to a cloud provider, which invokes it in response to one or more triggers. How you ship that code depends on the product you use: services built on FaaS (Functions as a Service) accept your source code directly, while others require you to package it in a container instead.

Serverless applications scale up and down to meet demand automatically, including scaling down to zero. This enables a billing model in which you pay only for what you use: if your application receives no traffic, you won’t have to pay for it.


Diagram representing how serverless scales up or down in response to traffic

In this tutorial, you’ll use Cloud Run to run Docker containers serverlessly on Google Cloud, and then you’ll configure it to invoke your containers in response to incoming HTTP requests.

Understanding Cloud Run

Simple serverless offerings like Firebase Functions (Cloud Functions) let you upload your raw source code to the cloud provider, which then packages it into an executable format automatically. This is great for simple use cases, but it doesn’t fit more complex ones well: you trade control for convenience.

Google introduced Cloud Run in 2019 to help solve this problem. It leverages Docker to give developers the flexibility to customize their app’s runtime environment.

Using Docker

Docker helps you package applications in reproducible runtime environments using containers. It’s based on low-level Linux kernel primitives of namespaces and cgroups, but provides a high-level and developer-friendly API to work with.

To package your application as a Docker container, you create a Dockerfile with instructions on how to build and run it. Once built, you can push the resulting image to a container registry so other developers (or services like Cloud Run) can pull it.

For Cloud Run, you typically ship containers to a private Google Artifact Registry repository.
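
To make this concrete, here’s a minimal sketch of what a Dockerfile for a JVM app could look like. The base image, JAR path and port are assumptions chosen for illustration; the Dockerfile you write later in this tutorial may look different.

# Hypothetical example: run a prebuilt fat JAR on a slim Java runtime image.
FROM eclipse-temurin:17-jre
# Copy the application JAR produced by your build (the path is an assumption).
COPY build/libs/app-all.jar /app/app.jar
# The server inside the container listens on port 8080.
EXPOSE 8080
# Start the application when the container runs.
CMD ["java", "-jar", "/app/app.jar"]

Cloud Run runs an image like this and routes incoming HTTP requests to the port the container listens on.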

That’s enough theory — time to move on to building your back-end API now!

Getting Started with Ktor

Ktor is a framework based on Kotlin Coroutines for building asynchronous client and server applications. You’ll use it both as a server framework and as an HTTP client, starting with the server side.

Open the empty starter project in the api directory in IntelliJ IDEA. Navigate to build.gradle.kts and add the Ktor dependencies inside the dependencies block:

val ktorVersion = "2.0.2"
implementation("io.ktor:ktor-server-core:$ktorVersion")
implementation("io.ktor:ktor-server-netty:$ktorVersion")

Ktor also requires an implementation of the SLF4J logging API. In this case, you’ll use Logback. Add its dependency in the same block:

implementation("ch.qos.logback:logback-classic:1.2.11")

The starter project includes the Gradle application plugin, which lets you run the project as an app with Gradle. You need to configure it with the name of the class that contains the main() function. Add this configuration block above the dependencies block:

application {
  mainClass.set("com.yourcompany.serverlesskt.ApplicationKt")
}

Note that this file doesn’t exist yet. In the next steps, you’ll create it with the code that starts your application. Go ahead and synchronize your project now.

Creating the HTTP Server

First, create a Kotlin source set directory with the path src/main/kotlin.

Then, create a package path under the kotlin directory: com.yourcompany.serverlesskt (if you’re using a different package name, modify it accordingly).

Finally, create an Application.kt file in this directory. The full file path should be src/main/kotlin/com/yourcompany/serverlesskt/Application.kt.

Create a server in Application.kt using the embeddedServer function:

import io.ktor.server.engine.*
import io.ktor.server.netty.*

val server = embeddedServer(Netty, port = 8080) {}

To communicate with clients over HTTP, you need to create a server that can respond to incoming requests. While Ktor lets you pick from a variety of HTTP servers, here you’re using the well-known Netty server running on port 8080.
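
For example, swapping engines only changes the factory you pass in, plus the matching artifact. The sketch below assumes you’ve added the io.ktor:ktor-server-cio dependency, which isn’t part of this tutorial’s setup; the rest of the tutorial sticks with Netty.

import io.ktor.server.cio.*
import io.ktor.server.engine.*

// Same embeddedServer API, different engine: CIO is Ktor's coroutine-based server engine.
val cioServer = embeddedServer(CIO, port = 8080) {}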

This server doesn’t do much yet. To add functionality, you must create API routes that define what it can do. REST is a popular architectural style for building APIs: it models operations on resources using the HTTP verbs GET, POST, PUT, PATCH and DELETE.

Ktor lets you add routes to your server using the Routing module. Use routing to configure this module:

import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

val server = embeddedServer(Netty, port = 8080) {
  // 1
  routing {
    // 2
    get("/") {
      // 3
      call.respond("Hello, world!")
    }
  }
}

Here’s what’s happening in the code above:

  1. routing configures the Routing module; the trailing lambda you pass to it runs with a Routing receiver.
  2. get is an extension function on that receiver that adds an HTTP GET route on the given path (“/”). Whenever a client sends a GET / request to the server, the handler function mounted on this route handles it.
  3. The handler function is another trailing lambda that handles the incoming request, which is represented by the call property. In this case, the handler simply responds to the client with Hello, world!.
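
One last detail: the application plugin you configured earlier points at ApplicationKt, so Application.kt needs a main() function that actually starts the server. If you want to try the API locally right away, a minimal sketch, added below the server declaration, looks like this:

fun main() {
  // Start the Netty server and block the main thread so the process keeps serving requests.
  server.start(wait = true)
}

With this in place, running ./gradlew run should start the API, and a GET request to http://localhost:8080/ should return Hello, world!.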