Serverless hosting in AWS for Ruby

In the first part, we deployed the application to the Oracle cloud. Now let’s try to do the same in AWS and ask ourselves if Rails is really needed.

So we have: an SPA application, a REST API, and Terraform as the means of deploying and managing resources in the cloud. Now, with the sources at hand – go!

Differences from Oracle Cloud

Amazon has many more cloud services, so there is plenty to choose from when designing an app. In fact, AWS has become the de facto standard, and many providers align their services with its APIs.

Among the key differences: to create a lambda in Amazon, it is enough to upload a zip with the source code. This lets us drop the Docker image layer from our application. In this example we will also forgo running locally for now – the application will live only in the cloud. But remember that LocalStack exists, so it will probably be possible to run everything locally. I have not tried it yet; in general, it may be more convenient to have a separate set of cloud resources for development.

Also, following Amazon's recommendations, we will use DynamoDB as storage. This is the most significant change: we no longer have a relational database. But you can keep the schema in the Terraform state.


The structure is the same as in the Oracle solution: lambdas live in the functions folder, the JS client in the client folder, and the gem with models and shared code in the retro folder.

The task

The goal of the project is a board for retrospectives – three columns: “good”, “bad”, “actions”. Anonymous users write notes in these columns, which are then discussed at the meeting.

The user can create boards (Board). A board has versions (BoardVersion), each capturing the state of the board at some point in time. Boards are accessed via a direct link.


Disclaimer: please do not judge the implementation too strictly; it was written for fun, in haste, in a spare minute, just to make it work. Treat it as a PoC.

First, let’s set up the AWS CLI. Install aws-vault (it lets you avoid storing keys locally and access the AWS API more securely – more details here), install aws-cli, and create a secret access key in the AWS console. Now create a ~/.aws/config file; mine looks something like this:

[profile fwd-retro]
sts_regional_endpoints = regional
mfa_serial = arn:aws:iam::{your_aws_account_id}:mfa/{your_aws_username}
credential_process = aws-vault exec --json --prompt=osascript fwd-retro
role_session_name = {your_aws_username}
output = json

As you can see, I have MFA enabled and a separate profile (fwd-retro) for the application. This lets me specify explicitly in the Terraform provider which credentials to use, and avoids wrapping terraform apply in aws-vault exec.
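With that profile in place, the Terraform provider can reference it directly; a sketch (the region here is an assumption, pick your own):

```terraform
provider "aws" {
  profile = "fwd-retro"
  region  = "eu-west-1" # assumption: use your region
}
```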

Next, configure S3 storage for the Terraform remote state and let’s go.
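A minimal remote-state backend might look like this (bucket and key names are placeholders, not from the original project):

```terraform
terraform {
  backend "s3" {
    bucket = "my-terraform-state"        # placeholder bucket name
    key    = "retro/terraform.tfstate"   # placeholder state path
    region = "eu-west-1"
  }
}
```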

We create a DynamoDB table that will store all the project data; since this is a document store, one table is enough:

resource "aws_dynamodb_table" "data" {
  name         = "retro-data"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "pid"
  range_key    = "cid"

  attribute {
    name = "pid"
    type = "S"
  }

  attribute {
    name = "cid"
    type = "S"
  }

  global_secondary_index {
    name            = "cid_si"
    hash_key        = "cid"
    projection_type = "KEYS_ONLY"
  }
}

pid is the partition key, also known as the parent id; cid is the sort key, also known as the child key of the record. Both are uuids and are used to store the object hierarchy. At the top of the hierarchy is User, which is one-to-many with Board. Board, in turn, is one-to-many with BoardVersion, which in turn is one-to-many with Note.

The secondary index is needed so that children can be found directly by their uuid, without bothering with the parent.
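The key scheme can be illustrated in plain Ruby – this is a model of the table layout, not actual SDK calls:

```ruby
require "securerandom"

# Each item stores its parent's id as pid and its own id as cid.
def item(parent)
  { pid: parent ? parent[:cid] : "root", cid: SecureRandom.uuid }
end

user    = item(nil)
board   = item(user)
version = item(board)
note    = item(version)
table   = [user, board, version, note]

# Children of a parent: a query by the partition key pid.
children_of = ->(parent) { table.select { |i| i[:pid] == parent[:cid] } }
# A record by its own uuid, regardless of parent: what the cid_si index gives us.
by_cid      = ->(cid)    { table.find { |i| i[:cid] == cid } }

children_of.call(board)  # => [version]
by_cid.call(note[:cid])  # => note
```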

On top of the Ruby AWS SDK we will write a wrapper to make working with the data more like working with models. You could use something ready-made, but we are minimizing the number of dependencies. The most interesting part of the models is the CRUD:

module Retro
  class Model
    def new?
      @attributes.nil? # assumed: a record is new until it has been persisted
    end

    def destroy
      db.delete_item(api_params.merge(key: identifier_params,
                                      return_values: RETURN_OPTIONS[:all_old])).attributes
    end

    def put
      push_attributes = item_attributes
      push_attributes["updated_at"] = Time.now.to_i # assumed timestamp format
      db.put_item(api_params.merge(item: push_attributes,
                                   return_values: RETURN_OPTIONS[:all_old]))
      @attributes = push_attributes
    end
    alias :save :put

    def update(method: ATTR_TRANSFORMATIONS[:put], **updates)
      attribute_updates = prepare_attributes(updates).transform_values do |value|
        { value: value, action: method }
      end

      response = db.update_item(api_params.merge(
        key: identifier_params,
        attribute_updates: attribute_updates,
        return_values: RETURN_OPTIONS[:all_new]
      ))
      @attributes = response.attributes
    end
  end
end
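The attribute_updates structure that update passes to update_item can be shown in plain Ruby (the attribute values here are made up):

```ruby
updates = { "title" => "Sprint 42", "votes" => 3 }

# Each attribute maps to a value plus an action: PUT, ADD or DELETE.
attribute_updates = updates.transform_values do |value|
  { value: value, action: "PUT" }
end

attribute_updates
# => { "title" => { value: "Sprint 42", action: "PUT" },
#      "votes" => { value: 3, action: "PUT" } }
```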

Each of our lambdas lives in its own folder and has its own .tf files describing the AWS resources related to that lambda. We then connect these files as Terraform modules in the root config – for example, functions/users/:


resource "aws_lambda_function" "users_lambda" {
  filename      = local.dist_path
  function_name = "users_lambda"
  role          = aws_iam_role.lambda_role.arn
  handler       = "func.Retro.route"

  runtime = "ruby2.7"

  depends_on = [aws_iam_role_policy_attachment.attach_iam_policy_to_iam_role]
}
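The handler string func.Retro.route means AWS will call the route method of the Retro module defined in func.rb. A minimal sketch of such an entry point (the routing logic here is an assumption):

```ruby
require "json"

module Retro
  # Entry point: API Gateway's AWS_PROXY integration passes the request
  # as `event` and expects a statusCode/headers/body hash back.
  def self.route(event:, context:)
    action = event.dig("queryStringParameters", "action") || "index"
    {
      statusCode: 200,
      headers: { "Content-Type" => "application/json" },
      body: JSON.generate(action: action)
    }
  end
end
```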

And then in the root config:


module "users_lambda" {
  source     = "./functions/users"
  data_table = aws_dynamodb_table.data.name # the table created above
  depends_on = [data.external.app_gem]
}

Next, we create the gateway itself and wire the path to the lambda:


resource "aws_apigatewayv2_api" "retro_api" {
  name          = "retro-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.retro_api.id
  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "users_lambda_integration" {
  api_id           = aws_apigatewayv2_api.retro_api.id
  integration_type = "AWS_PROXY"

  connection_type    = "INTERNET"
  description        = "Users lambda integration"
  integration_method = "POST"
  integration_uri    = module.users_lambda.lambda.invoke_arn
  request_parameters = {
    "append:querystring.action" = "$request.path.action"
  }
}

resource "aws_apigatewayv2_route" "users_lambda_route" {
  api_id    = aws_apigatewayv2_api.retro_api.id
  route_key = "POST /users/{action}"

  target = "integrations/${aws_apigatewayv2_integration.users_lambda_integration.id}"
}

Well, that's almost everything: we write the lambda itself, describe the rules for assembling its zip, and run terraform apply. After that, we can go to AWS and check what was created there.
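The zip assembly itself can be described with the archive provider's archive_file data source; a sketch (the source path is an assumption):

```terraform
data "archive_file" "users_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src" # assumed layout of the lambda's sources
  output_path = local.dist_path
}
```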


We have a project structure where each part is separated into its own folder.

We have an SPA application deployed in the cloud and Terraform as the means of rolling it out. We can upload the application in parts: want to update the JS client – update just it; want to update a single lambda – update just that lambda. Moreover, different parts of the application can technically have different dependencies. This lets us minimize the resources consumed and reduce the startup time of our application.

Service interdependencies are written out explicitly in the Terraform language and visible to the naked eye.

We have an analogue of the Rails console: access to the data on the server through the shared gem's console.

The frontend does not depend on the backend in any way.

We have a minimum of dependencies and can manage them ourselves, which lets us forget about tasks like “update Rails everywhere”.

Our application scales easily by Amazon's own means – lambdas rule. We can easily use websockets, since API Gateway supports them. We can assemble our solution from AWS building blocks; the infrastructure is code too and lives right next to the business logic. We can easily start using event-driven communication between parts of our application through AWS SQS, SNS, …

Compared with the Oracle solution, you will notice a significantly shorter cold start time for functions and a significantly faster lambda deployment. Of course, Amazon is not as free as Oracle. But the solution is simpler: there is no explicit assembly of Docker images, only Ruby and Terraform.

What else to think about

Yes, you do not immediately get the boxed solutions that Rails offers for various situations. But you have a choice of AWS services and the freedom to build the implementation that best suits your needs.

For example, deferred execution of operations – a replacement for ActiveJob. There are options, and it all depends on the task. You can use a lambda not only as an HTTP request handler, but essentially as a service object. In that case, the service can be called asynchronously.

You can create an SQS queue to which jobs are added, and describe a lambda that will process these tasks. That way you can provide for re-execution on error and error monitoring through a dead-letter queue. There are limitations, though – a 15-minute cap on lambda execution.
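The retry semantics can be sketched in plain Ruby; this models what SQS's redrive policy and the dead-letter queue give you, not actual SDK calls:

```ruby
MAX_RECEIVES = 5 # plays the role of maxReceiveCount in the redrive policy

# Decide what happens to a message after one processing attempt.
def process(message)
  # too many failed deliveries: SQS would move it to the dead-letter queue
  return :dead_letter if message[:receive_count] > MAX_RECEIVES

  yield message[:body]
  :done   # success: the lambda finishes and SQS deletes the message
rescue StandardError
  :retry  # failure: the message becomes visible again and is retried
end

process({ receive_count: 1, body: "job" }) { |b| b.upcase }  # => :done
process({ receive_count: 1, body: "job" }) { raise "boom" }  # => :retry
process({ receive_count: 9, body: "job" }) { |b| b }         # => :dead_letter
```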

You can also make an ECS task that processes events from the queue. That avoids the time limit, but scaling becomes an issue if there are too many events in the queue.

And you can use AWS Step Functions and build an all-in-one job-processing machine. So many options – remember how good it was in the Oracle cloud, where there was nothing to choose from.

To send emails, you can also find a corresponding AWS service. In general, any Rails system can be converted into a properly configured system of AWS services this way. It is not very hard to do, the coupling of the system's components goes down, and the need to administer infrastructure disappears. If you do it yourself, you can throw out the unused parts of Rails, which reduces the number of your project's dependencies. And if you want everything at once, there is Ruby on Jets, which translates a Rails application into AWS services on deploy.

So the question is, is Rails really needed in the world of modern cloud services?
