Monitoring a Next.js Application with Komiser

Recently, my wife and I embarked on a trip to a charming village nestled in the northern reaches of the Portuguese mountains within Gerês National Park. It was a delightful experience, but I soon realized that my initial cost estimate for the weekend was woefully inadequate. Factors such as car rental insurance, fuel expenses, and the limited availability of affordable dining options in a remote rural area significantly inflated the overall cost. In the end, the weekend turned out to be at least twice as expensive as I had initially anticipated.

This scenario is not uncommon for developers like us either. We are well aware that even the simplest proof-of-concept (POC) or personal project can quickly accumulate unforeseen expenses. While no one has gone bankrupt from the cost of a weekend side project, it becomes crucial to adopt correct, best-practice-aligned approaches to infrastructure cost management, especially when working on larger-scale enterprise projects. Implementing these strategies can lead to significant daily savings, ranging from hundreds to thousands of dollars.

Interestingly, the savings derived from efficient infrastructure cost management are not limited to larger organizations with numerous moving parts, teams, and provisioned infrastructure. The same cost-saving principles can be applied to small projects as well as complex infrastructures. In this article, I aim to demonstrate how even an incredibly simple Next.js app, when coupled with mismanaged cloud infrastructure, can quickly accumulate costs that could have a significant impact on your budget.

You are going to need an instance of Komiser up and running, either locally or on a remote machine. You also need to make sure the config.toml file is configured to access the AWS account where you’ll be provisioning the resources below. If you need any help with this, check out this video.
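For reference, a minimal config.toml might look something like the sketch below. This is only an illustration based on the format documented by Komiser; field names and sections can vary between versions, and the account name, credentials path, and profile are placeholders.

```toml
# Hypothetical Komiser configuration; adjust the name, path, and profile to your setup.
[[aws]]
name    = "sandbox"
source  = "CREDENTIALS_FILE"
path    = "/home/user/.aws/credentials"
profile = "default"

# Komiser persists its data locally; SQLite is the simplest option.
[sqlite]
file = "komiser.db"
```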

The plan

We are going to create a simple Next.js blog by executing create-next-app with npm, Yarn, or pnpm to bootstrap the example:

How to create a Next.js application
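If you want to follow along, the bootstrap command looks roughly like this (the project name is just an example):

```bash
# Bootstrap a Next.js project with your package manager of choice
npx create-next-app@latest my-nextjs-blog
# or
yarn create next-app my-nextjs-blog
# or
pnpm create next-app my-nextjs-blog
```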

Here is the link to the code used for the Next.js blog, and here is the Terraform templates code.

We will then create a simple Dockerfile so that we can containerize our app and simply pull the image when we want to deploy it afterward. The Dockerfile content is listed below: it uses the latest stable Node.js version as a base image, sets /app as the working directory, installs the required npm dependencies, builds the app, and sets npm start as the image entry point.

Application Dockerfile
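A minimal version of that Dockerfile, following the steps described above, might look like this (the base image tag and exposed port are assumptions):

```dockerfile
# Sketch of the blog's Dockerfile; base image tag and port are illustrative.
FROM node:lts

# Work out of /app inside the container
WORKDIR /app

# Install dependencies first to benefit from Docker layer caching
COPY package*.json ./
RUN npm install

# Copy the rest of the source and build the Next.js app
COPY . .
RUN npm run build

# Next.js serves on port 3000 by default
EXPOSE 3000

# Start the production server
CMD ["npm", "start"]
```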

Once we have containerized our app, we can proceed with the deployment process. For this demo, I have chosen to deploy the Next.js blog container directly to an EC2 instance to keep it as simple as possible. However, it is worth noting that you can also utilize an EKS or ECS cluster as an alternative deployment destination.

Before delving into the code, it is important to select an Infrastructure as Code (IaC) framework. This choice will enable us to provision cloud resources in a declarative manner. In this demonstration, we will be utilizing Terraform, which offers several advantages. Not only does it eliminate the need for manual clicking in the AWS console, but it also provides a means to enforce a straightforward and consistent tagging convention. These tags will serve as the key instrument through which we can later identify and manage resources within the Komiser resource inventory. Let's explore how we can accomplish this seamlessly.

Infrastructure diagram

As you can see, we already have a series of billable cloud resources necessary to host and serve this ultra-simple app. Imagine how quickly your infrastructure will grow when working with more complex, real-world apps.

Within our chosen VPC, the Next.js blog will be hosted on a t2.micro EC2 instance. To grant the instance the necessary permissions, we will attach an IAM Instance Profile to it. Furthermore, we need to provision an ELB and a Route 53 domain name to ensure proper traffic routing to the blog. Additionally, it is important to create an S3 bucket manually beforehand, as it will be utilized to store the Terraform state file. This file tracks the current state of the provisioned infrastructure.

Now, let's examine the requisite Terraform files that facilitate this setup.

The Terraform files

The terraform.tf file is where we declare the S3 backend where the Terraform state file will be stored. The S3 bucket will have to exist before running terraform init.

Terraform backend provider
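As a rough sketch, the backend declaration might look like this; the bucket name, key, and region are placeholders, and the provider version is an assumption:

```hcl
terraform {
  # Remote state lives in S3; this bucket must be created before `terraform init`
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "nextjs-blog/terraform.tfstate"
    region = "eu-west-1"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}
```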

In the variables.tf file we centralize all of the custom data that we want to insert into the resources.tf file below. The resources.tf file is broken down into four sections (Route 53, IAM, ELB, and EC2 resources); this is where we declare the different AWS cloud resources we will need to host, serve, and protect our simple blog.

AWS resources
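To give an idea of the shape of that file, here is a condensed sketch of the four sections. The resource names, variables, and attribute values are illustrative; the actual templates in the repository will differ in detail:

```hcl
# --- Route 53: point the blog domain at the load balancer ---
resource "aws_route53_record" "blog" {
  zone_id = var.hosted_zone_id
  name    = var.blog_domain_name
  type    = "A"

  alias {
    name                   = aws_elb.blog.dns_name
    zone_id                = aws_elb.blog.zone_id
    evaluate_target_health = true
  }
}

# --- IAM: role and instance profile attached to the EC2 instance ---
resource "aws_iam_role" "blog" {
  name = "blog-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
  tags = var.tags
}

resource "aws_iam_instance_profile" "blog" {
  name = "blog-instance-profile"
  role = aws_iam_role.blog.name
}

# --- ELB: routes incoming traffic to the instance ---
resource "aws_elb" "blog" {
  name            = "blog-elb"
  subnets         = var.subnet_ids
  security_groups = [var.elb_security_group_id]

  listener {
    instance_port     = 3000
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  instances = [aws_instance.blog.id]
  tags      = var.tags
}

# --- EC2: the t2.micro instance hosting the containerized blog ---
resource "aws_instance" "blog" {
  ami                  = var.ami_id
  instance_type        = "t2.micro"
  subnet_id            = var.subnet_ids[0]
  iam_instance_profile = aws_iam_instance_profile.blog.name
  user_data            = file("install.sh")
  tags                 = var.tags
}
```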

As you can see in the last EC2 resources section, we are loading an install.sh file through the resource's user_data. This is the script that installs all the requisite dependencies on the EC2 instance that we have provisioned. Below is the content of the install.sh file:

Dependencies installation script
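Based on the deployment steps that follow, the script essentially installs Docker and Docker Compose. A hedged sketch, assuming an Ubuntu-based AMI (package names and the default user differ on Amazon Linux), could look like this:

```bash
#!/bin/bash
set -euo pipefail

# Install Docker and Docker Compose (Ubuntu/Debian package names assumed)
apt-get update -y
apt-get install -y docker.io docker-compose

# Make sure the Docker daemon is running and starts on boot
systemctl enable --now docker

# Let the default user run docker without sudo
usermod -aG docker ubuntu
```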

The mighty tag

As seen above, for each provisioned resource, we have included the AWS tags field. This metadata plays a pivotal role in facilitating the discovery and aggregation of all the resources associated with the blog. By consistently applying these tags, we can easily identify and manage the various resources tied to our deployment.

Terraform tags
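One convenient way to keep the tags consistent is to declare them once in variables.tf and reference them from every resource, as the var.tags references in the sketch above do. The keys and values below are purely illustrative:

```hcl
variable "tags" {
  description = "Common tags applied to every provisioned resource"
  type        = map(string)
  default = {
    Project     = "jake-blog"
    Environment = "demo"
    ManagedBy   = "terraform"
  }
}
```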

I have also created an outputs.tf file in order to have access to two key bits of data that we will need to deploy and reach our blog (see the sketch after this list):

  • The provisioned EC2 instance IP address (later inserted into the Ansible inventory).

  • The blog domain name.
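Something along these lines, assuming the resource names from the earlier sketch:

```hcl
output "ec2_public_ip" {
  description = "Public IP of the blog EC2 instance, used in the Ansible inventory"
  value       = aws_instance.blog.public_ip
}

output "blog_domain_name" {
  description = "Route 53 record pointing at the ELB"
  value       = aws_route53_record.blog.fqdn
}
```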

Let’s apply it!

Once you’ve updated the variables.tf file with all of your custom data and configured the terraform.tf file with the correct backend, you can run the following commands to initialize and provision the AWS resources.

Terraform provisioning commands
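These are the standard Terraform workflow commands, run from the directory containing the .tf files:

```bash
terraform init    # initialize the S3 backend and download the AWS provider
terraform plan    # preview the resources that will be created
terraform apply   # provision the resources (confirm the prompt with "yes")
```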

Let’s deploy the app!

We will be using an Ansible playbook and an inventory file (to build the connection string to the EC2 instance) to deploy our containerized app to the EC2 instance. It will look something like this:

Ansible playbook
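A hedged sketch of such a playbook is shown below; the host group, remote paths, and module choices are assumptions rather than the exact playbook from the repository:

```yaml
- hosts: blog
  become: true
  tasks:
    - name: Upload docker-compose.yml to the remote EC2 instance
      copy:
        src: ./docker-compose.yml
        dest: /home/ubuntu/docker-compose.yml

    - name: Ensure the Docker daemon is running
      service:
        name: docker
        state: started
        enabled: true

    - name: Bring up the application with docker-compose
      command: docker-compose up -d
      args:
        chdir: /home/ubuntu
```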

The playbook above runs three tasks:

  • First, it uploads the docker-compose.yml file to the remote EC2 instance.

  • Then, it ensures the Docker daemon is running on the machine.

  • Lastly, it runs the docker-compose command that brings up the application.

Inventory file

This is the file where we build the command Ansible will use to connect to the remote EC2 instance. You will need to insert the previously outputted EC2 IP address as well as the path to the private SSH key to access the remote instance.
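A minimal inventory might look like the example below; the group name, SSH user, and key path are assumptions, and the IP placeholder should be replaced with the Terraform output:

```ini
[blog]
<EC2_PUBLIC_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/blog-key.pem
```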

Once you have added the IP address and the private key, you can run the ansible-playbook command. Once the Ansible run completes without any dreaded failed tasks, you can go over and hit your previously outputted domain to see if the app is running.
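The invocation will look roughly like this (the file names are assumptions):

```bash
ansible-playbook -i inventory playbook.yml
```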

Screenshot of our simple blog app

As mentioned above, Komiser should be running and available on your localhost or at another pre-configured address. Learn how to do so here.
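If you are running it locally, starting Komiser looks something like this (check the Komiser documentation for the exact flags in your version):

```bash
komiser start --config config.toml
```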

In the resource inventory section, you can streamline your resource management by filtering your list of resources using a specific tag. This allows you to easily locate all the resources associated with your blog. Once you have filtered them, you have the option to save the filtered search as a custom view. Additionally, you can configure custom alerts for this view, ensuring that you receive notifications whenever the cost or the number of resources surpasses a specified threshold.

As an example, I have filtered the resources by tag and created a custom view named Jake-Blog App, where I am effectively tracking all the resources related to my app. This way, I can conveniently monitor and manage the resources associated with my blog.

Komiser dashboard

Video tutorial

Final thoughts

Reflecting on the earlier statement about nobody going bankrupt over a simple app like this, I am struck by how this app, being one of the simplest, perfectly exemplifies both the problem and the solution at hand. The problem lies in the fact that despite its simplicity, the app already entails at least three billable cloud resources, in addition to several other crucial yet non-billable resources. We must keep track of these resources and ensure their efficient utilization, especially when we no longer require them. Fortunately, the solution lies in the realization that we now have a tangible means to harness the power of tags. By implementing a consistent tagging policy across all provisioned resources and leveraging tools like Komiser, managing resource allocation and associated costs becomes incredibly straightforward, regardless of scale. Adopting an Infrastructure-as-Code (IaC) approach ensures that tagging can be effortlessly applied and maintained.
