
You already know how to set up and run Ktor applications. Today we will take a closer look at the engines we can use with Ktor.

Ktor Engines

When you create an application with Ktor, you can use the routing system to add handlers for the different URLs users request. All the work of receiving, processing, and sending HTTP requests is done by the web engine, which saves us from having to work with raw HTTP.

In this approach, you don't run your application directly. You first initialize your web server, which runs your business logic internally. The server receives incoming user requests, processes them, and calls our handler, where it passes a special object (in Ktor's case, it's a call object). With this object, our application can read the input parameters of the request and return some data to the output.
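This relationship can be sketched with a toy model. Note that the Call and ToyEngine classes below are purely illustrative assumptions for this sketch, not Ktor's real API:

```kotlin
// A toy model of the engine/handler relationship described above.
// Call and ToyEngine are illustrative only -- they are NOT Ktor's real API.
class Call(val path: String) {
    var response: String? = null
    fun respondText(text: String) { response = text }
}

// The "engine" owns the network loop; our application only supplies a handler.
class ToyEngine(private val handler: (Call) -> Unit) {
    fun receive(path: String): String? {
        val call = Call(path) // the engine builds the call object...
        handler(call)         // ...and passes it to our handler
        return call.response
    }
}

fun main() {
    val engine = ToyEngine { call ->
        if (call.path == "/") call.respondText("Hello World!")
    }
    println(engine.receive("/")) // prints "Hello World!"
}
```

The key point is the inversion of control: our code never listens on a socket; the engine calls us.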

By default, you use Netty as the web engine. But Ktor also supports other engines such as Jetty, Tomcat, CIO (Coroutine-based I/O) and ServletApplicationEngine. All of them run on the JVM platform and integrate well with Ktor applications.

Jetty is a simple web server by Eclipse that is used to handle common HTTP requests. Jetty runs on the JVM.

Tomcat is a simple web server by Apache that is also designed to handle common HTTP requests. Tomcat also runs on the JVM.

CIO (Coroutine-based I/O) is a special engine based on Kotlin coroutines. It uses them to implement the logic of an IETF RFC or another protocol without relying on external JVM-based libraries. Therefore, CIO can work either on the JVM or without it (on Kotlin/Native or GraalVM).

Netty is a complete framework for developing asynchronous applications. It supports not only TCP but also UDP connections. Of all the engines listed here, Netty is the most powerful and flexible, which is probably why Ktor's developers recommended it to all new users by default. Netty runs on the JVM.

A Ktor application can also be run and deployed inside servlet containers (ServletApplicationEngine) that include Tomcat and Jetty. We will not consider this engine within this topic.

Now let's look at how we can connect the desired engine to the Ktor application.

Launching the server with embeddedServer

There are two ways to initialize your Ktor application: using the embeddedServer function and using the EngineMain method.
The first method involves passing server parameters directly in the application code when calling embeddedServer. In the second method, the parameters are specified in a separate configuration file, application.conf (or application.yaml).

First, let's consider the variant of specifying the parameters directly in the code, because this way is simpler. You probably used it when you ran your first Ktor application.

Before we start, we need to include dependencies for the desired engine in build.gradle.kts:

  • For Netty: implementation("io.ktor:ktor-server-netty:$ktor_version")

  • For Jetty: implementation("io.ktor:ktor-server-jetty:$ktor_version")

  • For Tomcat: implementation("io.ktor:ktor-server-tomcat:$ktor_version")

  • For CIO: implementation("io.ktor:ktor-server-cio:$ktor_version")
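In context, the dependencies block of build.gradle.kts might look like this (a sketch: the ktor-server-core artifact and the ktor_version property taken from gradle.properties are assumptions based on a typical Ktor project layout):

```kotlin
// build.gradle.kts (fragment)
val ktor_version: String by project // assumed to be defined in gradle.properties

dependencies {
    implementation("io.ktor:ktor-server-core:$ktor_version")  // Ktor core
    implementation("io.ktor:ktor-server-netty:$ktor_version") // the chosen engine
}
```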

We also need to add the appropriate import in the main Application.kt file:

  • For Netty: import io.ktor.server.netty.*

  • For Jetty: import io.ktor.server.jetty.*

  • For Tomcat: import io.ktor.server.tomcat.*

  • For CIO: import io.ktor.server.cio.*

Now we can use the engine we have chosen.

As you know, the entry point of any Kotlin application is the main function. In this function, we just have to start the server, which will launch our route handlers. With the embeddedServer function, this is very easy to do:

package com.example

import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0") {

    }.start(wait = true)
}

The first parameter we pass to the embeddedServer function is a special object Netty (an engine factory), which we need to create the Netty engine. This object is available because we added the corresponding import at the beginning of the file. We also specify the port and host on which the server will run as parameters.

Inside the embeddedServer function, we define route handlers, which the engine will invoke automatically when a request is received from a user:

package com.example

import io.ktor.server.engine.*
import io.ktor.server.netty.*

import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.ktor.server.application.*

fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0") {

        routing {
            get("/") {
                call.respondText("Hello World!")
            }
        }

    }.start(wait = true)
}

Other engines are started in the same way. You just need to pass the corresponding engine factory object (Jetty, Tomcat, or CIO) to embeddedServer.

Configuring engines with embeddedServer

The embeddedServer function, besides the port and host, also accepts the configure parameter, which can set various other settings, including those unique to a particular engine.

There are 3 parameters common to all engines:

fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0", configure = {
        callGroupSize = 10
        connectionGroupSize = 2
        workerGroupSize = 5
    }) {
        //...
    }.start(wait = true)
}

The callGroupSize parameter specifies the minimum size of a thread pool used to process application calls.

The connectionGroupSize parameter specifies how many threads are used to accept new connections and start call processing.

The workerGroupSize parameter specifies the size of the event group for processing connections, parsing messages, and doing the engine's internal work.

By default, these values are set as follows:

callGroupSize = parallelism
connectionGroupSize = parallelism / 2 + 1
workerGroupSize = parallelism / 2 + 1

Where parallelism is the current parallelism level, for example, the number of available processors.
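As a rough sketch of this arithmetic (assuming the parallelism level equals the number of available processors; the helper functions are illustrative, not part of Ktor's API), the defaults could be computed like this:

```kotlin
// Sketch: how the default group sizes are derived from the parallelism level.
fun defaultCallGroupSize(parallelism: Int) = parallelism
fun defaultConnectionGroupSize(parallelism: Int) = parallelism / 2 + 1
fun defaultWorkerGroupSize(parallelism: Int) = parallelism / 2 + 1

fun main() {
    // Assume parallelism is the number of available processors.
    val parallelism = Runtime.getRuntime().availableProcessors()
    println("callGroupSize       = ${defaultCallGroupSize(parallelism)}")
    println("connectionGroupSize = ${defaultConnectionGroupSize(parallelism)}")
    println("workerGroupSize     = ${defaultWorkerGroupSize(parallelism)}")
}
```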

If you're not sure what these parameters do, you're better off not specifying them. They will then keep their default values, which are appropriate in most cases.

In addition to the general settings, there are also engine-specific ones.

Netty-specific settings are specified in the same way as general settings, in the configure parameter:

fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0", configure = {
        requestReadTimeoutSeconds = 10
        responseWriteTimeoutSeconds = 10
        //other Netty properties
    }) {
        //...
    }.start(wait = true)
}

Here we set timeouts for reading and writing network requests. See the documentation for a complete list of Netty parameters. CIO-specific parameters are also specified via configure; at the moment, CIO has only one such parameter, which you can also find in the documentation.

The configurations of Jetty and Tomcat are slightly different. Their options are also set with the configure parameter, but they need to be wrapped in a special property: configureServer for Jetty and configureTomcat for Tomcat.

fun main() {
    embeddedServer(Jetty, port = 8080, host = "0.0.0.0", configure = {
        configureServer = {
            //Jetty properties
        }
    }) {
        //...
    }.start(wait = true)
}
fun main() {
    embeddedServer(Tomcat, port = 8080, host = "0.0.0.0", configure = {
        configureTomcat = {
            //Tomcat properties
        }
    }) {
        //...
    }.start(wait = true)
}

See the documentation for the full lists of parameters for Jetty and for Tomcat.

Launching the server with EngineMain

Now let's consider a way to start the server using the EngineMain method, where all the server configuration settings are placed in a separate file. This provides more flexibility in configuring the server and allows you to change the configuration without recompiling your application.

Here, in main(), we call the EngineMain.main method and pass it the array of command-line arguments received by our program.

package com.example

import io.ktor.server.netty.*

fun main(args: Array<String>) {
    EngineMain.main(args)
}

This method is imported along with the engine you selected:

  • io.ktor.server.netty.EngineMain

  • io.ktor.server.jetty.EngineMain

  • io.ktor.server.tomcat.EngineMain

  • io.ktor.server.cio.EngineMain

Here you might wonder where we can define our routing handlers then. With this approach, we have to put our routing handlers outside of the main function in a separate module:

package com.example

import io.ktor.server.application.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main(args: Array<String>) {
    EngineMain.main(args)
}

fun Application.module() {
    routing {
        get("/") {
            call.respondText("Hello, world!")
        }
    }
}

The only thing left to do is to create a file that will contain our server settings.

The configuration file can be in either HOCON (application.conf) or YAML (application.yaml) format.

In the resources folder, let's create an application.conf file, where we put the following contents:

ktor {
    deployment {
        port = 8080
    }
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}

Here in the deployment block, we specified the port on which our server will start. And in the application block, we put information about the router handler module, which will be called by the server.

You can also use the YAML format for configuration. Here, the contents of the application.yaml configuration file will look like this:

ktor:
    deployment:
        port: 8080
    application:
        modules:
            - com.example.ApplicationKt.module

As you can see, there is nothing complicated in this approach either.

Configuring engines with EngineMain

Since we now have a configuration file, the engine settings should go there as well.

The parameters common to all the engines, which we already discussed earlier in the topic, we can simply put in the deployment block of the application.conf configuration file:

ktor {
    deployment {
        port = 8080

        callGroupSize = 10
        connectionGroupSize = 2
        workerGroupSize = 5
    }
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}

Or for YAML:

ktor:
    deployment:
        port: 8080
        callGroupSize: 10
        connectionGroupSize: 2
        workerGroupSize: 5
    application:
        modules:
            - com.example.ApplicationKt.module

Netty-specific parameters are also specified in the deployment block. If you need to specify settings specific to Tomcat or Jetty, use the embeddedServer method to start the server.
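For example, the Netty timeouts shown earlier could then be moved into application.conf like this (a sketch, assuming Netty accepts these options under the deployment block as described above):

```
ktor {
    deployment {
        port = 8080
        requestReadTimeoutSeconds = 10
        responseWriteTimeoutSeconds = 10
    }
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}
```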

Reading Configuration

Ktor allows you to get the configuration parameters in the application code.

Let's, for example, get the value of the port variable, which is in the configuration file.

To do this, we can use the environment.config.propertyOrNull method, which takes as input the full name of the configuration parameter that we want to get. In our case, the full name of the parameter port is "ktor.deployment.port" because the port is in the deployment section, which is in the Ktor section.

In this way, we can get the port as follows:

environment.config.propertyOrNull("ktor.deployment.port")?.getString()

We can use the obtained value in the application code:

fun Application.module() {
    val port = environment.config.propertyOrNull("ktor.deployment.port")?.getString() ?: "8080"
    routing {
        get {
            call.respondText("Answering on port $port")
        }
    }
}

It can also be useful to know whether the application is running in development or production mode. The configuration file can help us with this, too.

Let's add to the configuration file, in the Ktor section, the environment variable:

ktor {
    environment = ${?KTOR_ENV}
}

In this case, the environment parameter is not built-in. We could have called it anything we wanted. The main thing is that we assign the value of the environment variable KTOR_ENV to it, which will be different depending on the mode of the application.
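Put together with the earlier settings, the whole application.conf might look like this (the surrounding values are reused from the examples above):

```
ktor {
    environment = ${?KTOR_ENV}
    deployment {
        port = 8080
    }
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}
```

You would then set KTOR_ENV in the environment before launching the application, for example to dev on your machine and prod on the server.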

Then we can read this value from the configuration file just like we did before:

fun Application.module() {
    val env = environment.config.propertyOrNull("ktor.environment")?.getString()
    routing {
        get {
            when (env) {
                "dev" -> call.respondText("Development")
                "prod" -> call.respondText("Production")
                else -> call.respondText("Unknown")
            }
        }
    }
}

Conclusion

In this topic, we discovered what web servers are, why they are needed, and what role they play. We learned how to integrate different servers into our Ktor application, as well as how to configure them.

There are two ways to connect a server to the Ktor application:

  • Using the embeddedServer function, engine parameters are passed as a configure argument.

  • Using the EngineMain method, engine parameters are defined in the application.conf file.

In the first case, the routing handlers are placed inside the embeddedServer function.

In the second case, the handlers are placed in a separate module, which is referenced in the application.conf.
