AWS Communism – Part 1: How we cut our Load Balancing cost by more than 96%

Felix Seidel

In this series of posts, we share fresh ideas for better utilizing your AWS resources and saving money. Much effort went into designing a cloud-native architecture, and we want to share our insights with you.

Recently, Cloudflare announced their object storage service Cloudflare R2, which generated a lot of buzz in the community. Essentially, it solves a huge pain point by removing egress traffic costs from the content-hosting equation.

However, there are use cases where it's not as easy to take AWS's precise-but-not-cheap pricing out of the game. In our series "AWS Communism", we want to show yet another technique for cutting your AWS bill: resource sharing.

Resource sharing is not a new technique. In essence, it means fully utilizing every resource you create on AWS. The tricky part is managing these shared resources effectively without complicating everyone's workflow.

Load balancers are often underutilized precisely because it is so easy to create dedicated ones with Infrastructure as Code tools like Terraform:

resource "aws_lb" "main" {
  name               = "test-lb-tf"
  load_balancer_type = "application"
  # [...]
}

This Application Load Balancer (ALB) can host up to 100 applications with up to 25 different TLS certificates. However, if you wanted to share this ALB, you'd need to track how many apps are assigned to it. If you tried to use it across Terraform projects, you'd need to expose its ID. At best, that's additional work; more often, it's too much work. Thus, it's more economical for most cloud engineers to create dedicated resources and let the client pay the bill.
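To illustrate the friction described above, here is a hedged sketch of what manual cross-project sharing would look like: the consuming project has to read the ALB's outputs from the owning project's remote state. The bucket, key, and output names are illustrative assumptions, not an actual SetOps configuration.

```hcl
# Hypothetical: the consuming project reads the shared ALB's listener ARN
# from the owning project's remote state (bucket/key/output names assumed).
data "terraform_remote_state" "shared_lb" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "shared-lb/terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "aws_lb_listener_rule" "app" {
  listener_arn = data.terraform_remote_state.shared_lb.outputs.listener_arn
  # [...] plus manual bookkeeping so the ALB's rule and certificate
  # limits are never exceeded across all consuming projects
}
```

Every consuming team would need to know about this state location and coordinate capacity by hand, which is exactly the overhead that makes dedicated ALBs the path of least resistance.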

Our approach is AWS-native and allows for maximally efficient sharing – without complicating things for the user. The large cost saving comes from sharing a single ALB between 25 and 100 apps: at 25 apps per ALB, each app carries only 1/25 of the fixed hourly cost, a saving of 96% – and even more as an ALB approaches 100 apps.

Consumers of our shared load balancers create a CloudFormation (CF) stack resource in their Terraform project. This CF stack contains a custom resource that triggers a Lambda function on create/update/delete. The Lambda function is implemented in Golang and manages the shared ALBs, scaling their number up or down as necessary. The CF stack outputs the ALB ID, which ends up in the Terraform resource output, ready to be consumed by dependent resources. When the Terraform project is modified or destroyed, the CF stack is updated too, and the Lambda function runs again to free the resource allocation. The diagram below depicts the process.
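The consumer side of this flow can be sketched as follows. This is a minimal illustration, assuming a hypothetical template file, parameter name, and output key – the actual SetOps stack will differ.

```hcl
# Hypothetical consumer-side resource: the CF template (not shown) defines
# a Lambda-backed custom resource that allocates a slot on a shared ALB.
resource "aws_cloudformation_stack" "shared_alb" {
  name          = "shared-alb-${var.app_name}"
  template_body = file("${path.module}/shared-alb-custom-resource.yaml")

  parameters = {
    AppName = var.app_name # assumed parameter consumed by the Lambda
  }
}

# Dependent resources consume the allocated ALB from the stack outputs,
# e.g. to attach listener rules for this application.
output "alb_arn" {
  value = aws_cloudformation_stack.shared_alb.outputs["AlbArn"]
}
```

Because `aws_cloudformation_stack` participates in the normal Terraform lifecycle, destroying the project sends a delete event to the custom resource, and the Lambda can release the slot.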

We're using this design pattern for several use cases, and it has proven reliable and easy to understand. It allows us to embed custom logic natively into Terraform – without going (too) deep by implementing a custom Terraform provider. Our solution doesn't even depend on Terraform, as everything is managed within AWS. This enables easy sharing between Terraform projects or projects using other technologies such as the AWS CDK.
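The core of the custom logic – deciding which shared ALB an app should land on – can be sketched in Go, the language the Lambda is written in. This is an assumed bin-packing strategy, not SetOps' actual algorithm: it prefers the fullest ALB that still has capacity, so partially used ALBs drain and can eventually be removed.

```go
package main

import "fmt"

// maxAppsPerALB uses the 25-certificate limit as the conservative capacity;
// up to 100 apps fit when certificates are shared.
const maxAppsPerALB = 25

// allocate returns the name of an existing ALB with free capacity, or ""
// if a new ALB must be created. counts maps ALB name -> apps assigned.
func allocate(counts map[string]int) string {
	best, bestCount := "", -1
	for name, n := range counts {
		// Prefer the fullest ALB that still has room (bin-packing),
		// so lightly used ALBs empty out and can be scaled away.
		if n < maxAppsPerALB && n > bestCount {
			best, bestCount = name, n
		}
	}
	return best
}

func main() {
	counts := map[string]int{"shared-alb-1": 25, "shared-alb-2": 7}
	fmt.Println(allocate(counts)) // shared-alb-1 is full, so shared-alb-2 wins
}
```

An empty return value signals the Lambda to create a fresh ALB before assigning the app; conversely, freeing the last slot on an ALB makes it a candidate for deletion.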

This design pattern can be applied to many other use cases, such as sharing a single database instance across several users. Let us know if this topic interests you and you want to hear more!
