The Easy Way to Serverless Computing

Serverless computing gained wide popularity in 2014 after the announcement of AWS Lambda, one of the first Serverless platforms. Since then, the popularity of the Serverless approach has only grown, but the tooling, alas, has not kept pace.



My name is Vladislav Tankov. In 2018-2020 I studied in the JetBrains corporate master's program at ITMO University, and since 2017 I have been working at JetBrains.



In the summer of 2018, at a JetBrains hackathon, a few colleagues and I tried to build a tool for Kotlin that simplifies the creation of Serverless applications by analyzing the application code.



After the hackathon, as part of my research in the JetBrains corporate master's program, I decided to continue developing this project. Over the course of two years, the tool has grown considerably in functionality, but it has kept its name: Kotless, the Kotlin Serverless Framework.



What is Serverless



First, let's remember what the simplest Serverless computing platform consists of. Such a platform includes three main components:



  • an execution system for Serverless functions - small applications that process individual events;
  • a set of interfaces from the outside world (or from the cloud platform itself, such as AWS) to the platform's event system, for example an HTTP interface;
  • the event system itself, which delivers events from the interfaces to the functions and the processing results from the functions back to the interfaces.


These three components are enough to build a fairly complex application. For example, a web application is just an external HTTP interface (in the case of AWS, this is API Gateway) plus its own Serverless handler function for each resource served (like /route/my). You can build a more complex application that uses databases and itself calls other Serverless functions, as in the picture.



Okay, you can build such applications, but why?



Serverless applications have several undeniable advantages that justify the architectural gymnastics.



  • Serverless functions do not run when they are not needed. Indeed, a function only processes events - why should it occupy computing resources if there are no events?
  • Serverless functions can handle events of the same type in parallel. That is, if /route/my has become very popular and a thousand users have requested it at once, the Serverless platform can simply launch 1000 handlers, one per event.


Together, these points add up to perhaps the most important Serverless mantra: a Serverless application scales from zero to infinity. Such an application spends no money when not in demand and can process thousands of requests per second when needed.



Problem



Let's take a look at a very simple example in Kotlin:



@Get("/my/route")
fun handler() = "Hello World"


It is pretty obvious that such an application can be implemented using the Serverless approach. At first glance, it is enough to create an HTTP interface with some DNS address and map /my/route to fun handler().



In fact, creating such an application would take a lot more than adding a single annotation. For example, in the case of AWS:



  • You will need to implement a handler for the specific event type, in this case AWS's RequestStreamHandler interface (see the sketch after this list).
  • You will need to describe the infrastructure of the Serverless application: describe its HTTP API, describe all the handler functions, associate them with the interface, and carefully choose the permissions.
  • Finally, you will have to build all the handler functions, upload them to the Serverless platform, and deploy the corresponding infrastructure.
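To make the first step concrete, here is a minimal sketch of what such a hand-written handler might look like, assuming API Gateway's Lambda proxy integration; the class name and the hard-coded response are illustrative, not taken from Kotless:

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestStreamHandler
import java.io.InputStream
import java.io.OutputStream

// A hand-written AWS Lambda handler that always answers "Hello World".
class HelloHandler : RequestStreamHandler {
    override fun handleRequest(input: InputStream, output: OutputStream, context: Context) {
        // The Lambda proxy integration expects a JSON object with statusCode, headers and body.
        val response = """{"statusCode": 200, "headers": {"Content-Type": "text/plain"}, "body": "Hello World"}"""
        output.write(response.toByteArray())
    }
}

And this is only the first of the three steps above.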


Quite a few steps for such a simple application, aren't there?



For those initiated into the mysteries of Infrastructure as Code, I will note that, of course, part of this process can be automated, but that automation itself requires learning a completely new approach (namely, describing infrastructure as code) and a new language. This seems like an unnecessarily difficult task for a developer who just wants to deploy a rudimentary application.



Is it possible to do something simpler? In some cases (and specifically in this one) - yes!



Infrastructure in Code



Let's look at it from the other side: instead of forcing the user to describe the infrastructure, let's try to derive it from the code the user has already written.



Consider the same example again:



@Get("/my/route")
fun handler() = "Hello World"


We know that the user wants requests to /my/route to be handled by this function - so let's synthesize an infrastructure that will create an HTTP API with /my/route, create the required Serverless function, and do all the necessary magic to connect them!



In my article at Automated Software Engineering 2019, I called this approach Infrastructure in Code. Indeed, we extract the description of the infrastructure from the application code that defines it implicitly - that is, the description is actually contained "inside" the code.



It should be noted that from here on, only the synthesis of HTTP API applications is considered. A similar approach could be used for processing queues and for handling events on the cloud platform, but that is a matter for the further development of Kotless.



Implementation



Hopefully the idea is clear by now, and three main questions remain:



  • How do we extract information from the code?
  • How do we create infrastructure from this information?
  • How do we run the application in the cloud?


Analysis



The Kotlin Compiler Embeddable will help us with the first of these questions.



Although the example above uses annotations, in reality the application's HTTP API can be defined in completely different ways depending on the library used, for example:



// Ktor-like style
get("my-route") {
    "Hello World"
}


For analyzing arbitrary code, the Kotlin Compiler Embeddable turned out to be both the most familiar and the most convenient option (thanks to the large number of usage examples available).



At the moment, Kotless can analyze three main frameworks:



  • Kotless DSL - Kotless's own annotation-based framework;
  • Spring Boot - a popular Web framework; its annotations are analyzed (see the controller sketch after this list);
  • Ktor - a popular Kotlin Web framework; its extension function calls are analyzed.
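For instance, in the Spring Boot case the route from the earlier example might be declared like this (a minimal illustrative controller, not taken from a Kotless sample project):

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

// The @GetMapping annotation is exactly the kind of declaration the analysis picks up.
@RestController
class HelloController {
    @GetMapping("/my/route")
    fun handler() = "Hello World"
}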


During code analysis, a Kotless Schema is built - a platform-independent representation of the Serverless application. It is used to synthesize the infrastructure and makes the analysis process independent of any specific cloud platform.
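To give a feel for what such a representation might contain, here is a deliberately simplified, hypothetical sketch - the names are mine for illustration, not Kotless's actual classes:

// Hypothetical, simplified schema types - illustrative only, not the real Kotless Schema.
data class Route(val method: String, val path: String, val handlerFqName: String)

data class ServerlessFunction(val name: String, val entrypoint: String, val memoryMb: Int)

data class Schema(
    val routes: List<Route>,                 // the HTTP API extracted from the code
    val functions: List<ServerlessFunction>  // the handlers that will be deployed
)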



Synthesis



We will synthesize Terraform code. Terraform was chosen as one of the most popular Infrastructure as Code tools, with a wide range of supported cloud platforms, which makes it possible for Kotless to support new cloud platforms and ensures stable application deployment.



Synthesis starts from the Kotless Schema, which contains a description of the application's HTTP API and its functions, as well as some additional data (for example, the desired DNS name).



For the synthesis itself, a specially created Terraform DSL library is used. The synthesis code looks something like this:



// Declares a Terraform resource of type aws_api_gateway_rest_api with the Terraform name "tf_name".
val resource = api_gateway_rest_api("tf_name") {
    name = "aws_name"
    binary_media_types = arrayOf(MimeType.PNG)
}


The DSL guarantees consistent formatting and referential integrity between different Terraform resources, which makes it much easier to expand the set of synthesized resources.
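To make the referential-integrity point concrete, here is a tiny self-contained DSL in the same spirit - my own illustration, not the actual Kotless Terraform DSL. Resources are Kotlin objects, and references between them go through those objects rather than hand-typed strings, so a missing or renamed resource becomes a compile-time error instead of a broken Terraform identifier:

// Illustrative mini-DSL, not the Kotless Terraform DSL itself.
class TfResource(val type: String, val tfName: String, val attributes: Map<String, String>) {
    // In the generated HCL this becomes an interpolation like "${aws_api_gateway_rest_api.api.id}".
    fun ref(attribute: String): String = "\${$type.$tfName.$attribute}"

    fun render(): String = buildString {
        appendLine("resource \"$type\" \"$tfName\" {")
        attributes.forEach { (key, value) -> appendLine("  $key = \"$value\"") }
        appendLine("}")
    }
}

fun main() {
    val api = TfResource("aws_api_gateway_rest_api", "api", mapOf("name" to "aws_name"))
    // The deployment refers to the API through the object, not through a hand-typed identifier.
    val deployment = TfResource(
        "aws_api_gateway_deployment", "deployment",
        mapOf("rest_api_id" to api.ref("id"))
    )
    println(api.render())
    println(deployment.render())
}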



The synthesized code is deployed to the cloud platform with a simple terraform apply.



Running



It remains to run the application on the Serverless platform. As already mentioned, all Serverless functions are essentially handlers for certain events - in our case, HTTP requests.



We need to connect the framework used to build the application (for example, Spring Boot) with the Serverless platform. To do this, at build time Kotless adds a special "dispatcher" to the application code - a platform-specific event handler that serves as an adapter between the framework used in the application and the cloud platform.
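Conceptually, such a dispatcher might look something like the sketch below. This is only an illustration of the adapter idea - a toy in-memory routing table stands in for the real framework integration, and none of it is Kotless's actual code:

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestStreamHandler
import java.io.InputStream
import java.io.OutputStream

// Toy routing table standing in for the framework's own routing.
object Routes {
    private val handlers = mutableMapOf<Pair<String, String>, () -> String>()
    fun register(method: String, path: String, handler: () -> String) {
        handlers[method to path] = handler
    }
    fun dispatch(method: String, path: String): String? = handlers[method to path]?.invoke()
}

// Platform-specific entry point: translates the API Gateway event into a call
// to the routing table and writes the result back in the format the platform expects.
class Dispatcher : RequestStreamHandler {
    override fun handleRequest(input: InputStream, output: OutputStream, context: Context) {
        val event = input.bufferedReader().readText()
        // Naive extraction of the method and path from the proxy event, purely for illustration.
        val method = Regex("\"httpMethod\"\\s*:\\s*\"(\\w+)\"").find(event)?.groupValues?.get(1) ?: "GET"
        val path = Regex("\"path\"\\s*:\\s*\"([^\"]+)\"").find(event)?.groupValues?.get(1) ?: "/"
        val body = Routes.dispatch(method, path)
        val status = if (body != null) 200 else 404
        output.write("""{"statusCode": $status, "body": "${body ?: "Not Found"}"}""".toByteArray())
    }
}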



Tool



The tool itself, which implements the entire pipeline for creating the infrastructure described above, is a plugin for the Gradle build system. Moreover, all of its main modules are separate libraries, which greatly simplifies supporting other build systems.



Using the plugin is straightforward. After configuration, the user has a single Gradle task - deploy (typically invoked as ./gradlew deploy) - which performs all the steps needed to deploy the current application to the cloud.



Configuration on the user's side is pretty straightforward too. First, the plugin itself is applied:



plugins {
  id("io.kotless") version "0.1.5" apply true
}


After that, the user adds the framework they need:



dependencies {
  // Kotless DSL
  implementation("io.kotless", "lang", "0.1.5")
}


Finally, the user sets up access to AWS so that Kotless can deploy the application:



kotless {
  config {
    // S3 bucket Kotless uses to store deployment artifacts
    bucket = "kotless.s3.example.com"

    terraform {
      // AWS credentials profile and region used for the deployment
      profile = "example"
      region = "us-east-1"
    }
  }
}


Local launch



It is easy to see that this last step requires the user to be familiar with AWS and to have at least an AWS account. Such requirements scared off users who wanted to first check locally whether the tool was right for them.



This is why Kotless supports a local run mode. Using the standard capabilities of the chosen framework (Ktor, Spring Boot, and the Kotless DSL can all, of course, run applications locally), Kotless deploys the application on the user's machine.



Moreover, Kotless can run an AWS emulation (using LocalStack) so that the user can check locally that the application behaves as expected.



Further development



While working on Kotless (and on my master's thesis along with it), I managed to present it at ASE 2019, at KotlinConf 2019, and on the Talking Kotlin podcast. Overall, the tool was received favorably, although by the end of 2019 it no longer seemed such a novelty (by that time, Zappa, Claudia.js, and AWS Chalice had become popular).



However, at the moment Kotless is perhaps the best-known tool of its class in the Kotlin world, and I certainly plan to keep developing it.



In the near future, I plan to stabilize the current API and functionality and to prepare tutorials and demo projects to make it easier for new users to learn the tool.



For example, we plan to prepare a set of tutorials on creating chat bots with Kotless. Serverless technologies seem to be a great fit for this use case (and Kotless users are already writing Telegram bots), but the lack of suitable tools significantly hinders wider adoption.



Finally, one of the most important aspects of the tool's architecture is its platform independence. In the not too distant future, I hope to support the Google Cloud Platform and Microsoft Azure, which will allow applications to move from cloud to cloud with literally the push of a button.



I would like to hope that Kotless and tools like it will really help bring Serverless technologies to the masses, and that more and more applications will consume resources only when they are actually running, slightly reducing the entropy of the universe :)


