Fri. Dec 2nd, 2022

Hello Android


Serverless Kotlin on Google Cloud Run


Learn how to build a serverless API using Ktor then dockerize and deploy it to Google Cloud Run.

Managing servers is a hassle. Provisioning resources for traffic spikes, managing security updates, phased rollouts, hardware maintenance and other related tasks get in the way of developers who want to focus on building their applications. The serverless paradigm of computing helps solve this problem by abstracting servers away from developers.

Kotlin is a great fit for writing APIs that run on serverless platforms. In this tutorial, you’ll use Kotlin to build an HTTP API that detects a user’s location from their IP address. Along the way, you’ll learn how to:

  1. Build back-end APIs with Kotlin and Ktor.
  2. Dockerize Ktor Applications.
  3. Publish Docker containers on Google Artifact Registry.
  4. Deploy Docker containers to Google Cloud Run.

Note: This tutorial assumes you’re familiar with the basics of Kotlin and building REST APIs.

Getting Started

Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial.

You’ll find two projects: a shell project for a Ktor back-end API (api directory), and an Android App that consumes this API (app directory).

You’ll work with the back-end API first. You’ll add code step by step to this project to get to a fully functional back-end API. First, though, it’s important to understand the basics of the serverless computing model.

Defining Serverless

Contrary to what the name suggests, the serverless model doesn’t eliminate the need for servers. It just makes it someone else’s responsibility to manage the servers for you. In most cases, that “someone else” is a cloud provider with decades of expertise in managing servers.

In the serverless model, you provide your application’s source code to a cloud provider to invoke in response to one or more triggers. The mode in which you ship your source code to the cloud provider depends on the product you use. Services built on FaaS (Functions as a Service) ask for your source code directly, and others require you to package it in a container instead.

Serverless applications scale up and down to meet demand automatically, including scaling down to zero. This enables a billing model in which you pay only for what you use: if your application receives no traffic, you won’t have to pay for it.

In this tutorial, you’ll use Cloud Run to run Docker containers serverlessly on Google Cloud, and then you’ll configure it to invoke your containers in response to incoming HTTP requests.

Understanding Cloud Run

Simple serverless offerings like Firebase Functions (Cloud Functions) let you upload your raw source code to the cloud provider, which then packages it into an executable format automatically. It’s great for simple use cases, but it doesn’t fit well with more complex ones — you trade control for convenience.

Google introduced Cloud Run in 2019 to help solve this problem. It leverages Docker to give developers the flexibility of customizing their app’s runtime environment.

Using Docker

Docker helps you package applications in reproducible runtime environments using containers. It’s based on low-level Linux kernel primitives of namespaces and cgroups, but provides a high-level and developer-friendly API to work with.

To package your application as a Docker container, you create a Dockerfile with instructions on how to build and run it. Once built, you can ship the container to a container registry to let other developers fetch it.

For Cloud Run, you typically ship containers to a private Google Artifact Registry repository.

That’s enough theory — time to move on to building your back-end API now!

Getting Started with Ktor

Ktor is a framework based on Kotlin Coroutines for building asynchronous client and server applications. You’ll use Ktor as both a server application framework and an HTTP client, starting with the server side.

Open the empty starter project in the api directory in IntelliJ IDEA. Navigate to the build.gradle.kts file, and add the dependencies for Ktor:

val ktorVersion = "2.0.2"

dependencies {
  implementation("io.ktor:ktor-server-core:$ktorVersion")
  implementation("io.ktor:ktor-server-netty:$ktorVersion")
}

Ktor also requires an implementation of the SLF4J logging API. In this case, you’ll use Logback. Add the dependency for it in the same block:

implementation("ch.qos.logback:logback-classic:1.2.11")
The starter project includes the Gradle application plugin, which allows you to run the project as an app with Gradle. You need to configure it with the name of the class that contains the main() function. Add this configuration line above the dependencies block:

application {
  mainClass.set("com.yourcompany.android.serverlesskt.ApplicationKt")
}
Note this file doesn’t exist yet. In the next steps, you’ll create this file with code that starts your application. Go ahead and synchronize your project now.

Creating the HTTP Server

First, create a Kotlin source set directory with the path src/main/kotlin.

New directory

New Kotlin directory

Then, create a package path under the kotlin directory: com.yourcompany.android.serverlesskt (if you’re using a different package name, modify it accordingly).

Finally, create an Application.kt file in this directory. The file path should be src/main/kotlin/com/yourcompany/android/serverlesskt/Application.kt.

Create a server in Application.kt using the embeddedServer function:

import io.ktor.server.engine.*
import io.ktor.server.netty.*

val server = embeddedServer(Netty, port = 8080) {}

To communicate with clients over HTTP, you need to create a server that can respond to incoming requests. While Ktor lets you pick from a variety of HTTP servers, here you’re using the well-known Netty server running on port 8080.

This server doesn’t do much yet. To add functionality to it, you must create API routes that define what it can do. REST is a popular architectural style for building APIs. It models operations using HTTP verbs: GET, POST, PUT, PATCH and DELETE.

Ktor lets you add routes to your server using the Routing module. Use routing to configure this module:

import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

val server = embeddedServer(Netty, port = 8080) {
  // 1
  routing {
    // 2
    get("/") {
      // 3
      call.respond("Hello, world!")
    }
  }
}

Here’s what’s happening in the code above:

  1. routing lets you configure the Routing module with the trailing lambda passed to it through a receiver.
  2. get is an extension function on the lambda’s receiver that adds an HTTP GET route on its path (“/”). Whenever a client sends a GET / request to the server, it’s handled by the handler function mounted on this route.
  3. The handler function is another trailing lambda that handles the incoming request (represented by the call extension property). In this case, the handler simply responds with Hello, world! to the client.
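If the lambda-with-receiver pattern is new to you, here’s a toy, Ktor-free sketch of how a DSL like routing/get can be built in plain Kotlin. All names below are made up for illustration; Ktor’s real implementation is far more capable:

```kotlin
// A miniature routing DSL, mimicking the shape of Ktor's (illustrative only).
class RouteBuilder {
    val handlers = mutableMapOf<String, () -> String>()

    // Available without a qualifier inside the routing { } block
    fun get(path: String, handler: () -> String) {
        handlers[path] = handler
    }
}

// The trailing lambda runs with a RouteBuilder as its receiver,
// which is why get(...) can be called directly inside it.
fun routing(configure: RouteBuilder.() -> Unit): RouteBuilder =
    RouteBuilder().apply(configure)

fun main() {
    val routes = routing {
        get("/") { "Hello, world!" }
    }
    // Dispatch a fake GET / request
    println(routes.handlers["/"]?.invoke()) // prints: Hello, world!
}
```

This is the same trick Ktor uses: the receiver gives the block access to route-registration functions without any explicit object reference.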

Starting the Server

In the Application.kt file, add a main method that starts the server:

val server = ...

fun main() {
  server.start(wait = true)
}

The wait parameter tells the application to block until the server terminates.
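If it helps, wait = true behaves much like joining a thread: the caller blocks until the work finishes. Here’s a plain-Kotlin analogy, with no Ktor involved:

```kotlin
fun main() {
    val worker = Thread {
        // Stand-in for a running server
        println("server running")
    }
    worker.start()
    worker.join() // analogous to server.start(wait = true): block here until done
    println("main continues only after the worker finishes")
}
```

With wait = false, main() would return immediately and the JVM could exit before the server handles any requests.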

At this point, you have everything you need to get a basic server up and running. To start the server, use the green run icon next to main in IntelliJ:

Screenshot with an arrow that points to the "run" icon in IntelliJ

If everything went well, you’ll see logs indicating your server is running!

Screenshot that shows logs produced by a running Ktor server

To test your server, use the curl command line utility. Enter the following command in the terminal:

curl -X GET "http://0.0.0.0:8080/"

You’ll see the correct response: “Hello, world!”.

➜  ~ curl -X GET "http://0.0.0.0:8080/"
Hello, world!

Note: If you get a response similar to curl: (7) Failed to connect to 0.0.0.0 port 8080 after 0 ms: Address not available, replace your curl request with curl -X GET "http://localhost:8080/".
You could update the networking configuration in order to continue using 0.0.0.0, but that’s outside the scope of this tutorial. In subsequent requests, you’ll have to keep using localhost instead of 0.0.0.0.

Detecting the Client’s IP Address

In the routing function, add a route that returns the client’s IP address back to them. To get the client’s IP address, use the origin property of the request object associated with a call.

import io.ktor.server.plugins.*

// Add this in the `routing` block:
get("/ip") {
  val ip = call.request.origin.remoteHost
  call.respond(ip)
}

This adds an HTTP GET route on the “/ip” path. On each request, the handler extracts the client’s IP address using call.request.origin.remoteHost and returns it in the response.

Restart the server, and try this new route using curl again:

➜  ~ curl -X GET "http://localhost:8080/ip"
localhost

The server responds with localhost, which just means the client and server are on the same machine.

Fetching Locations Using IP Addresses

To fetch a client’s location from their IP address, you need a geolocation database. There are many free third-party services that let you query geolocation databases. IP-API is an example.

IP-API provides a JSON API to query the geolocation data for an IP address. To interact with it from your server, you’ll need to make HTTP requests to it using an HTTP client. For this tutorial, you’ll use the Ktor client.

Additionally, you’ll need the ability to parse JSON responses from IP-API. Parsing and marshalling JSON data is a part of data serialization. Kotlin has an excellent first-party library, kotlinx.serialization, to help with it.
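To make the idea of serialization concrete, here’s a hand-rolled sketch of what kotlinx.serialization automates for you. The toJson helper below is made up for illustration; the real library generates equivalent code from a @Serializable annotation:

```kotlin
data class LocationResponse(
    val country: String,
    val regionName: String,
    val city: String,
    val query: String
)

// Hypothetical hand-written equivalent of library-generated serialization code
fun LocationResponse.toJson(): String =
    """{"country":"$country","regionName":"$regionName","city":"$city","query":"$query"}"""

fun main() {
    val location = LocationResponse("India", "Delhi", "New Delhi", "1.2.3.4")
    println(location.toJson())
    // prints: {"country":"India","regionName":"Delhi","city":"New Delhi","query":"1.2.3.4"}
}
```

Writing this by hand for every model is tedious and error-prone, which is exactly why you’ll let kotlinx.serialization do it instead.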

The process of detecting the client’s location will look like this:

Detecting client's location

Adding Kotlinx Serialization and Ktor Client

The kotlinx.serialization library requires a compiler plugin as well as a support library.

Add the compiler plugin inside the plugins of the build.gradle.kts file:

plugins {
  // ...
  kotlin("plugin.serialization") version "1.6.10"
}

Then add these dependencies to interop with it using Ktor:

dependencies {
  // ...
  implementation("io.ktor:ktor-client-core:$ktorVersion")
  implementation("io.ktor:ktor-client-cio:$ktorVersion")
  implementation("io.ktor:ktor-client-serialization:$ktorVersion")
  implementation("io.ktor:ktor-serialization-kotlinx-json:$ktorVersion")
  implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
}

Here’s a description of these artifacts:

  • io.ktor:ktor-client-core provides core Ktor client APIs.
  • io.ktor:ktor-client-cio provides a Coroutines-based Ktor client engine.
  • io.ktor:ktor-client-serialization, io.ktor:ktor-serialization-kotlinx-json and io.ktor:ktor-client-content-negotiation provide APIs to serialize request/response data in JSON format using the kotlinx.serialization library.

Using Ktor Client

So far you’ve used Ktor as an application server. Now you’ll use the other side of Ktor: an HTTP client.

First, create a data class to model the responses of IP-API. Create a file named IpToLocation.kt, and add the following code to it:


import kotlinx.serialization.Serializable

@Serializable
data class LocationResponse(
  val country: String,
  val regionName: String,
  val city: String,
  val query: String
)

Then, create a function that sends an HTTP request to IP-API with the client’s IP address. In the same file, add the following code:

import io.ktor.client.*
import io.ktor.client.call.*
import io.ktor.client.request.*

/**
 * Specifies which fields to expect in the
 * response from the API.
 * More info: https://ip-api.com/docs/api:json
 */
private const val FIELDS = "country,regionName,city,query"

/**
 * Prefix URL for all requests made to the IP to location API
 */
private const val BASE_URL = "http://ip-api.com/json/"

/**
 * Fetches the [LocationResponse] for the given IP address
 * from the IP to Location API
 * @param ip The IP address to fetch the location for
 * @param client The HTTP client to make the request from
 */
suspend fun getLocation(ip: String, client: HttpClient): LocationResponse {
  // 1
  val url = buildString {
    append(BASE_URL)
    if (ip != "localhost" && ip != "_gateway") {
      append(ip)
    }
  }

  // 2
  val response = client.get(url) {
    parameter("fields", FIELDS)
  }

  // 3
  return response.body()
}
getLocation fetches the location data for an IP address using IP-API. It uses an HttpClient supplied to it to make the HTTP request.

First, it constructs the URL to send the request to. Second, it adds FIELDS as a query parameter to the URL. This parameter tells IP-API which fields you want in the response (learn more here). Finally, it sends an HTTP GET request to the constructed URL and returns the response.
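The URL-building step can be pulled out into a standalone, testable function. This sketch reproduces that logic in plain Kotlin (buildLocationUrl is a made-up name, and the base URL is whatever your BASE_URL constant holds):

```kotlin
// Builds the request URL; local/gateway addresses fall back to the bare
// base URL so IP-API geolocates the caller's own public IP instead.
fun buildLocationUrl(ip: String, baseUrl: String = "http://ip-api.com/json/"): String =
    buildString {
        append(baseUrl)
        if (ip != "localhost" && ip != "_gateway") {
            append(ip)
        }
    }

fun main() {
    println(buildLocationUrl("8.8.8.8"))   // http://ip-api.com/json/8.8.8.8
    println(buildLocationUrl("localhost")) // http://ip-api.com/json/
}
```

Separating pure string logic from the suspending network call like this also makes the code easy to unit-test without a running server.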

Fetching Location Data

To use getLocation, you must create an instance of the Ktor HTTP client. In the Application.kt file, add the following code above main:

import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.contentnegotiation.ContentNegotiation as ClientContentNegotiation
import io.ktor.serialization.kotlinx.json.*

val client = HttpClient(CIO) {
  install(ClientContentNegotiation) {
    json()
  }
}
This not only creates an HttpClient, it also installs the Ktor ContentNegotiation plugin (aliased as ClientContentNegotiation to avoid an import collision with the server plugin of the same name) for JSON serialization/deserialization.

Then, add a route to your server to fetch the location data. In the routing block, add the following route:

get("/location") {
  val ip = call.request.origin.remoteHost
  val location = getLocation(ip, client)
  call.respond(location)
}

Note this route responds with an object of type LocationResponse, which must be serialized to JSON before it’s sent to the client. To tell Ktor how to do this, install the server-side ContentNegotiation plugin.

First, add the following dependency in the dependencies block of the build.gradle.kts file:

implementation("io.ktor:ktor-server-content-negotiation:$ktorVersion")
In the Application.kt file, modify the configuration block for embeddedServer by adding the following code:

import io.ktor.server.plugins.contentnegotiation.ContentNegotiation as ServerContentNegotiation

val server = embeddedServer(Netty, port = 8080) {
  install(ServerContentNegotiation) {
    json()
  }
  // ...
}

Finally, restart the server and use curl to send a request to the “/location” route. You’ll see a response like this:

➜  ~ curl -X GET "http://localhost:8080/location"
{"country":"<country>","regionName":"<region>","city":"<city>","query":"<ip>"}

That’s it for your back-end API! So far you’ve built three API routes:

  • /: Returns “Hello, world!”.
  • /ip: Returns the client’s IP address.
  • /location: Fetches the client’s IP geolocation data and returns it.

The next step is to containerize the application to deploy it on Cloud Run.

Containerizing the Application

To deploy your API on Cloud Run, you need to containerize it first. Create a file named Dockerfile in the root directory of the project.

Add the following code to it:

# 1
FROM gradle:latest AS builder
WORKDIR /app

# 2
COPY . .
RUN ./gradlew installDist

# 3
FROM openjdk:latest
WORKDIR /app

# 4
COPY --from=builder /app/build/install/serverlesskt ./
CMD ["./bin/serverlesskt"]

The code above defines a multi-stage Docker build to ensure the final assembled image is as small as possible. Here’s what’s happening:

  1. This step instructs Docker to use the gradle:latest base image for the builder stage. It provides you with a pre-existing Gradle installation, which is great because you need it to build your application.
  2. The next few steps instruct Docker on how to assemble your application’s executable binary file by copying the source code to the image and invoking the installDist task.
  3. This step instructs Docker to add a second stage to the build using the openjdk:latest base image, which provides you with an existing Java installation.
  4. Finally, it copies over the built binary of your application from the previous builder stage and sets up the image to run it whenever a container with this image starts.

Note: You must have the gcloud and docker CLIs installed, along with an existing Google Cloud Platform project with billing enabled.

To build an image with this Dockerfile, first make sure to stop the application within IntelliJ. Then open the terminal and run the following command:

docker build -t serverlesskt-api .

Note: If you’re on an M1 Mac, or any architecture other than x64, add the --platform linux/amd64 flag to this command. This ensures your image can run on Cloud Run too.

Here’s what you’ll see:

➜  docker build -t serverlesskt-api .

 => exporting to image                                                        0.0s
 => => exporting layers                                                       0.0s
 => => writing image sha256:3e39d9e1ab51ba1f16e1a75be7978c85de26eb6fafc2f65b5d603eb922125c0b  0.0s
 => => naming to docker.io/library/serverlesskt-api                           0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Optionally, you can test the image locally by starting a container with this command:

docker container run -p 8080:8080 serverlesskt-api

Test it with curl:

➜  ~ curl -X GET "http://localhost:8080/"
Hello, world!

If you got the same response, your image is built correctly! The next step is to push the image to an image registry on Google Cloud.

Pushing Images to Artifact Registry

Artifact Registry is a GCP product that lets you host build artifacts such as container images, Maven packages, etc. on Google Cloud in public or private repositories.

To push your image to Artifact Registry, you must have an existing Google Cloud Project with billing enabled, as well as a Docker image repository on Artifact Registry. See the documentation on how to accomplish this if you haven’t already.

Once you’ve created a repository for Docker images on Artifact Registry, the next step is to use the gcloud CLI to authenticate docker CLI with your repository. The process is as simple as running the following command:

gcloud auth configure-docker "<project-region>-docker.pkg.dev"

project-region depends on the specifics of your Google Cloud project.

Next, you must tag your Docker image with the pattern LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE, where:

  • LOCATION is the region of your GCP project (e.g., us-east1).
  • PROJECT-ID is the ID of your GCP project.
  • REPOSITORY is the name of your Artifact Registry repository.
  • IMAGE is the name you want to give to your image.
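As a quick illustration of how those four pieces compose, here’s a tiny Kotlin helper (made up for this tutorial; in practice you’d just type the tag by hand):

```kotlin
// Assembles an Artifact Registry image tag from its four components.
fun artifactRegistryTag(
    location: String,
    projectId: String,
    repository: String,
    image: String
): String = "$location-docker.pkg.dev/$projectId/$repository/$image"

fun main() {
    // Values mirror the example project used later in this tutorial
    println(artifactRegistryTag("asia-south2", "cloud-run-kt", "cloud-run-kt", "serverlesskt-api"))
    // prints: asia-south2-docker.pkg.dev/cloud-run-kt/cloud-run-kt/serverlesskt-api
}
```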

Remember to substitute the placeholders with information specific to your project and then tag the Docker image:

docker tag serverlesskt-api <LOCATION>-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY>/<IMAGE>

Finally, push the image to Artifact Registry:

docker push <LOCATION>-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY>/<IMAGE>

You’ll see your uploaded image in Artifact Registry.

➜  ~ gcloud artifacts docker images list
Listing items under project cloud-run-kt, location asia-south2, repository cloud-run-kt.

IMAGE                                                                  DIGEST                                                                   CREATE_TIME          UPDATE_TIME
asia-south2-docker.pkg.dev/cloud-run-kt/cloud-run-kt/serverlesskt-api  sha256:a6803aa97e720e3870fca2e63e49ce0739ac4c4a322e93f34c0b7ddf5b49efe7  2022-02-06T12:03:18  2022-02-06T12:03:18

Deploying Image to Cloud Run

Once you’ve pushed an image to Artifact Registry, deploying it to Cloud Run is as simple as invoking a single command.

Deploy your image to Cloud Run with the gcloud CLI:

gcloud run deploy iplocation2 \
   --image <image-url> \
   --project <project-id> \
   --region <project-region> \
   --port 8080 \
   --allow-unauthenticated

Here’s what each flag does:

  • image flag specifies the URL of the image to deploy. Set it to the tag of the image you uploaded to Artifact Registry.
  • project and region flags are specific to your project’s settings.
  • port flag tells Cloud Run which port the container listens on for incoming requests.
  • allow-unauthenticated flag lets anyone on the internet invoke your API. For the purposes of this tutorial, the API you deploy should be public.

You’ll see a success message followed by the URL of your deployed API!

Deploying container to Cloud Run service [iplocation] in project [cloud-run-kt] region [asia-south2]
✓ Deploying... Done.
  ✓ Creating Revision...
  ✓ Routing traffic...
  ✓ Setting IAM Policy...
Service [iplocation2] revision [iplocation2-00002-qaq] has been deployed and is serving 100 percent of traffic.
Service URL:

Try sending requests to your API’s URL with curl:

➜  ~ curl "<your-service-url>"
Hello, world!

You’ve successfully deployed a Kotlin Ktor application to Cloud Run!

Try experimenting with all three endpoints to test your deployment. Once you’re satisfied with the results, you can begin integrating the API within an Android app.

Consuming the API

The starter material for this project also includes an Android app to consume the API. Head over to Android Studio and open the starter app project in it.

Build and run the app. You’ll see a simple screen that lets you request your current location.

App main screen

In the next few steps, you’ll add code to integrate the API with this application using the Retrofit library.

Defining a Service Interface

To interact with an API using Retrofit, you must have an interface to model its routes.

Create a new package api under the app’s base package, and then add a new file named LocationApi.kt to it. Within this file, add the following code:

import com.jakewharton.retrofit2.converter.kotlinx.serialization.asConverterFactory
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.json.Json
import okhttp3.MediaType
import retrofit2.Retrofit

// Put your Cloud Run service's URL here
private const val API_URL = ""

val contentType: MediaType = MediaType.get("application/json")

@OptIn(ExperimentalSerializationApi::class)
val retrofit: Retrofit = Retrofit.Builder()
  .baseUrl(API_URL)
  .addConverterFactory(Json.asConverterFactory(contentType))
  .build()
This defines a Retrofit instance to communicate with your API. Populate API_URL with the URL of your deployed API on Cloud Run (the Service URL printed by the deploy command).

Note: You can also test the Android app against your locally running server. Populate API_URL with http://10.0.2.2:8080/ if you’re using an emulator (10.0.2.2 maps to the host machine’s loopback address), or use the host machine’s IP address if you’re using a device connected to the same network.

Then, create models to communicate with your API. First, define a class to model the API response, and then an interface LocationApi to model the API routes:

import retrofit2.http.GET
import kotlinx.serialization.Serializable

@Serializable
data class LocationResponse(
  val country: String,
  val regionName: String,
  val city: String,
  val query: String
)

interface LocationApi {

  @GET("location")
  suspend fun getLocation(): LocationResponse
}

With this complete, move on to instantiating the service interface and using it.

Using the Service Interface

Create LocationService.kt in the location package. Within this file, add a new object LocationService with a private property to hold a reference to the Retrofit service:


object LocationService {
  private val api = retrofit.create(LocationApi::class.java)
}

Finally, add a method to send network requests to the API:


object LocationService {
  private val api = // ...

  suspend fun fetchLocation(): Result<LocationResponse> {
    return try {
      val location = api.getLocation()
      Result.success(location)
    } catch (ex: Throwable) {
      Result.failure(ex)
    }
  }
}
Build the application to make sure there aren’t compilation errors.
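fetchLocation leans on Kotlin’s built-in Result type to report success or failure without throwing. Here’s a standalone sketch of that pattern, where parseNumber is a made-up stand-in for the network call:

```kotlin
// Wraps a fallible operation in Result, just as fetchLocation wraps the API call.
fun parseNumber(raw: String): Result<Int> = try {
    Result.success(raw.toInt())
} catch (ex: Throwable) {
    Result.failure(ex)
}

fun main() {
    parseNumber("42").onSuccess { println("got $it") }      // got 42
    parseNumber("oops").onFailure { println("fell back") }  // fell back
}
```

The caller chains onSuccess/onFailure instead of wrapping every call site in try/catch, which is exactly how the ViewModel will consume fetchLocation in the next step.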

Integrating with ViewModel

While the LocationService knows how to make network requests to the API, the trigger for the requests resides in the view layer of the application.

Navigate to the LocationViewModel.kt file in the location package. It defines a simple ViewModel that contains a dummy implementation of the network request in fetchLocation.

Modify fetchLocation to call the service instead:

fun fetchLocation() {
  viewModelScope.launch {
    _state.value = LocationState.Loading
    LocationService.fetchLocation()
      .onSuccess { response -> _state.value = LocationState.Success(response) }
      .onFailure { error ->
        _state.value = LocationState.Error(error.message ?: "An unknown error occurred")
      }
  }
}
You’ll need to add the following to the LocationState sealed class:

data class Success(val location: LocationResponse) : LocationState()

Open LocationFragment.kt and handle this state inside onCreateView:

override fun onCreateView(...) {
  // ...
  when (state) {
    // ...
    is LocationState.Success -> renderSuccess(state.location)
  }
}

Finally, add the missing renderSuccess method with the following content:

private fun renderSuccess(location: LocationResponse) = binding.apply {
  progressBar.visibility = View.GONE
  locationInfo.text = buildString {
    appendLine("City: ${location.city}")
    appendLine("Region: ${location.regionName}")
    appendLine("Country: ${location.country}")
  }
}

Don’t forget to include the internet permission in the app’s AndroidManifest.xml file!

<uses-permission android:name="android.permission.INTERNET"/>

Build and run. Press the Locate Me button to send a network request and get your approximate location.

App main screen with location

Where to Go From Here?

Download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial.

Congratulations! You learned a lot in this tutorial and can now build and deploy serverless Kotlin apps on Google Cloud Run.

If you’re wondering what to learn next, check out the official documentation for Google Cloud Run, or this tutorial for building Kotlin APIs on GCP.

We hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!
