Build a simple automated deployment pipeline for Cloud Run (step-by-step tutorial)

ujwal dhakal
May 3, 2021


Automating a deployment pipeline is always challenging. It takes a lot of effort to set up servers for multiple environments, prepare the build, and deploy that same build to the server.

We will go step by step through configuring a simple deployment pipeline, so you can build on it further from there.

Intro

In this post, I am going to show you how easily you can create a deployment pipeline for your next app by simply copying the source code. We will use a simple Dockerized Node.js application hosted on Cloud Run. By the end of this read, you will be able to set up a deployment pipeline for an app written in any language.

In this tutorial, we will use the following stack:

  • Container Registry: stores the Docker images that Cloud Run will run
  • Google Cloud Storage: stores our Terraform state
  • Cloud Run: the serverless platform where our final app will be hosted
  • Terraform: spins up the Cloud Run instance and lets us create multiple environments such as staging and production
  • Go: triggers the Terraform commands whenever we want, from GitHub Actions
  • GitHub Actions: the entry point that triggers the Go binary, which in turn triggers Terraform

Why Cloud Run, Go, and Terraform?

Cloud Run

Cloud Run is a managed compute platform for running containers: Google scales the containers with incoming requests and you pay per usage. Google handles the infrastructure overhead, so we can focus on building apps. Google has excellent documentation for Cloud Run.

Go

Go is a very popular programming language these days because it is fast, reliable, and simple. With Go, we can produce binaries that run without installing anything else, which is pretty handy when working at the OS level.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. With Terraform, we can build infrastructure the same way we build an app, with code, i.e. Infrastructure as Code.

Assumptions

  1. You are familiar with how Go works
  2. You are familiar with how Cloud Run works, i.e. deploying a specified container image. If not, check this article: https://ujwaldhakal.medium.com/automate-cloud-run-deployment-in-a-minute-cb85e7db9f82
  3. You are familiar with how Terraform spins up a new server and how Terraform manages state
  4. You are familiar with Dockerizing the application.

Deployment tutorial steps

We will build the deployment pipeline in the following four steps:

1. Dockerize application

First, we will create an application and Dockerize it. Let's create an index.js file inside the src directory that returns a hello-world response.

index.js
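The embedded snippet isn't reproduced here, so below is a minimal sketch of what src/index.js could look like, assuming a plain Node HTTP server that listens on the PORT environment variable Cloud Run injects (the response text and fallback port are placeholders):

// src/index.js - minimal hello-world server (sketch)
const http = require('http');

// Cloud Run injects the port to listen on via the PORT environment variable
const port = process.env.PORT || 8080;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello world');
});

server.listen(port, () => {
  console.log(`listening on port ${port}`);
});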

Now let's Dockerize it by adding a Dockerfile. This is a simple Dockerfile with a multi-stage build; I have added a production target so that we can use the same Dockerfile for local and production. That production target is the one we build against in the coming steps, where we build and push images to Google Container Registry.
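As a rough illustration of that idea, a multi-stage Dockerfile with a production target could look like this (the base image, stage names, and paths are assumptions, not the exact file from the repository):

# Dockerfile (sketch)
FROM node:14-alpine AS base
WORKDIR /app
COPY src ./src

# the "production" target that the build step selects with --target production
FROM base AS production
ENV NODE_ENV=production
CMD ["node", "src/index.js"]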

2. Infrastructure setup with Terraform

First, we will write a Terraform file in which we tell Terraform to spin up the Cloud Run instance. Let's create a GitHub project and create a folder cicd containing the files Terraform needs to create a Cloud Run instance from a given image.

You only need to replace dev.tfvars and credentials/dev-cred.json with your actual values and credentials. We can already test this by running terraform apply; for it to work, you need an image in Google Container Registry:

terraform apply -var image_tag=docker_image_tag -var-file=dev.tfvars -auto-approve
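The actual Terraform files live under cicd in the repository; a rough sketch of what they might look like, with a GCS backend for state and an image_tag variable for the Cloud Run container (resource names, variable names other than image_tag, and the example values are assumptions):

# cicd/main.tf (sketch)
terraform {
  backend "gcs" {}   # the bucket name is supplied via -backend-config at init time
}

provider "google" {
  project     = var.project_id
  region      = var.region
  credentials = file(var.credentials_file)
}

variable "project_id" {}
variable "region" {}
variable "credentials_file" {}
variable "image_tag" {}

resource "google_cloud_run_service" "app" {
  name     = "demo-app"
  location = var.region

  template {
    spec {
      containers {
        image = var.image_tag
      }
    }
  }
}

# make the service publicly invokable
resource "google_cloud_run_service_iam_member" "public" {
  service  = google_cloud_run_service.app.name
  location = google_cloud_run_service.app.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}

# cicd/dev.tfvars (example values)
project_id       = "my-gcp-project"
region           = "us-central1"
credentials_file = "credentials/dev-cred.json"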

Once we are able to push a Docker image to Container Registry we can already use this, but we want to make it simpler and more automated: what if we could deploy just by running ./cicd/deploy dev master, where dev is the environment and master is the branch name in Git? Let's work on a few files to achieve this.

3. Triggering Terraform with Go

With Go, we can create binaries that run without installing any further dependencies. The Go program will build a Docker image, push it to Container Registry if the image does not already exist there, and then trigger Terraform to deploy the new image to Cloud Run.

I am using the https://github.com/ujwaldhakal/gcp-deployment-utils package for all the utilities in this demo. Getting the commit hash of the current branch, logging into Container Registry, and pushing images are all handled by this package.

If you look closely at the Build function in https://github.com/ujwaldhakal/gcp-deployment-utils/blob/master/docker/docker.go#LC24, a Docker build target is used, because the build often differs between production and the local environment, so make sure you define a target in your Dockerfile.
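In shell terms, what that Build helper does is roughly the following (the image path and tag are placeholders):

docker build --target production -t gcr.io/PROJECT_ID/app:COMMIT_HASH .
docker push gcr.io/PROJECT_ID/app:COMMIT_HASH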

Inside the initTerraform function:

cmd := exec.Command("terraform", "init", "-backend-config", "bucket=tf-test-app")

tf-test-app is the name of our bucket, which you have to create manually in Cloud Storage.

This deployer.go produces a deployer binary when you run go build deployer.go. The deployer binary expects one parameter, the name of the environment, and from it prepares the service account JSON (dev-cred.json), the dev.tfvars file, and the Git commit hash to be used in terraform apply. After that, it checks whether the image we are trying to build already exists. If it does, it won't build; if not, it builds and pushes the image. Finally, it runs terraform apply with the given credentials.

Since this runs inside the GitHub Action after checking out the branch given by ./cicd/deploy dev master, we use that branch's commit hash both to tag the Docker image and to pick the image to deploy.
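The repository's deployer.go is not reproduced here; the following is a simplified, illustrative sketch of that flow, assuming the image is tagged with the current commit hash (helper names, the manifest-inspect existence check, and the GCP_PROJECT variable are assumptions, not the actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run executes a command, optionally with extra environment variables, and streams its output.
func run(extraEnv []string, name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Env = append(os.Environ(), extraEnv...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		panic("usage: deployer <environment>")
	}
	env := os.Args[1]

	// resolve per-environment files, e.g. dev.tfvars and credentials/dev-cred.json
	tfVars := env + ".tfvars"
	creds := "credentials/" + env + "-cred.json"
	auth := []string{"GOOGLE_APPLICATION_CREDENTIALS=" + creds}

	// tag the image with the current commit hash
	out, err := exec.Command("git", "rev-parse", "--short", "HEAD").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	image := fmt.Sprintf("gcr.io/%s/app:%s", os.Getenv("GCP_PROJECT"), hash) // project id is a placeholder

	// build and push only if this tag is not already in the registry
	// (manifest inspect is one simple way to check; the utils package may do it differently)
	if err := run(auth, "docker", "manifest", "inspect", image); err != nil {
		if err := run(nil, "docker", "build", "--target", "production", "-t", image, "."); err != nil {
			panic(err)
		}
		if err := run(auth, "docker", "push", image); err != nil {
			panic(err)
		}
	}

	// initialise terraform against the GCS state bucket, then apply with the new image
	if err := run(auth, "terraform", "init", "-backend-config", "bucket=tf-test-app"); err != nil {
		panic(err)
	}
	if err := run(auth, "terraform", "apply",
		"-var", "image_tag="+image,
		"-var-file="+tfVars,
		"-auto-approve"); err != nil {
		panic(err)
	}
}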

Note: Make sure your IAM service account JSON has permissions to read the bucket, create and destroy Cloud Run services, and read and push container images in Google Container Registry.

If you want to modify anything, you can copy the relevant function from the utils package into your own project and use your version instead of the package.

4. Glue everything together with a GitHub Action and a Bash file

Finally, these two pieces run inside a GitHub Actions container upon a manual command trigger, i.e. deploy dev master. We create a deploy bash script inside cicd which processes our deploy dev master command and triggers a GitHub repository dispatch, which in turn triggers deploy.yml with a payload containing the environment name and the branch. The GitHub Action checks out the branch given in the deploy command and runs the deployer binary with the environment name; we have already discussed how deployer.go handles that.

In order to run this, you need to obtain a GitHub personal access token (follow GitHub's documentation to create one) and put it inside the .env file. Usually we don't track the .env file in Git, but for the purpose of this demo I am pushing it to GitHub.
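A sketch of what that cicd/deploy script might look like, assuming the token is stored as GITHUB_TOKEN in .env and the workflow listens for a "deploy" repository-dispatch event (the repository slug and event name are placeholders):

#!/usr/bin/env bash
# cicd/deploy (sketch) - usage: ./cicd/deploy <environment> <branch>
set -euo pipefail

ENVIRONMENT="$1"
BRANCH="$2"

# load GITHUB_TOKEN from the .env file at the repository root
source "$(dirname "$0")/../.env"

# fire a repository_dispatch event that deploy.yml listens for
curl -s -X POST \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/username/reponame/dispatches \
  -d "{\"event_type\":\"deploy\",\"client_payload\":{\"environment\":\"${ENVIRONMENT}\",\"branch\":\"${BRANCH}\"}}"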

Once you trigger ./cicd/deploy dev master, the deploy script triggers the GitHub Action deploy.yml and passes the dev and master arguments; the GitHub Action checks out the given branch, i.e. master, and passes the environment name to the deployer binary. Once the binary knows the commit hash and the environment name, it loads the necessary credentials and triggers terraform apply with the new image hash, and it takes a few minutes for the change to be reflected.
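And a rough sketch of the corresponding .github/workflows/deploy.yml (action versions, step names, and where registry authentication happens are assumptions):

# .github/workflows/deploy.yml (sketch)
name: deploy

on:
  repository_dispatch:
    types: [deploy]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # check out the branch sent in the dispatch payload
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.client_payload.branch }}
      - uses: actions/setup-go@v2
      # authentication to Google Container Registry would also happen around here
      - name: Build and run the deployer
        working-directory: cicd
        run: |
          go build -o deployer deployer.go
          ./deployer ${{ github.event.client_payload.environment }}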

You will see a green checkmark next to the deploy text once your deployment is successful. You can find the action run at https://github.com/username/reponame/actions

Add More Environments

We have only seen how to set up one environment, i.e. dev. But with more team members we will need more environments, which could be production, staging, test1, test2, based on your preference.

Adding more environments is as simple as copying the config files and editing them, as the most challenging part, getting up and running with one environment, is already done. We just need to create separate service account credentials, by creating a separate project under the same Google Cloud account, and make them accessible in deployer.go as we did for dev and production. You can add more environments either with an if-else pattern or by passing the environment name dynamically, i.e. ${environment}.tfvars and credentials/${environment}-cred.json.

Note: If we don't want to create multiple Google Cloud projects to handle multiple environments, we can use separate Terraform state to handle multiple environments within the same project.

func getTfVarFileName(env string) string {
	if env == "dev" {
		return "dev.tfvars"
	}
	if env == "production" {
		return "prof.tfvars"
	}
	panic("Please select correct environment only dev & production available at the moment")
}

func getCredentialsFilePath(env string) string {
	if env == "dev" {
		return "credentials/dev-cred.json"
	}
	if env == "production" {
		return "credentials/prod-cred.json"
	}
	panic("error on loading credentials")
}
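If you prefer the dynamic variant mentioned above over the if-else pattern, those two helpers could collapse into something like this (the list of allowed environments is an assumption):

// dynamic variant: derive both file names from the environment name
var allowedEnvs = map[string]bool{"dev": true, "production": true}

func getTfVarFileName(env string) string {
	if !allowedEnvs[env] {
		panic("unknown environment: " + env)
	}
	return env + ".tfvars"
}

func getCredentialsFilePath(env string) string {
	if !allowedEnvs[env] {
		panic("unknown environment: " + env)
	}
	return "credentials/" + env + "-cred.json"
}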

This demo does not cover DNS automation for adding and verifying a domain in Cloud Run. Since the exposed credentials are not encrypted, please do not use this approach in public repositories; only use it in private repositories.

If you want to encrypt the credentials, you could use Google KMS: encrypt them with a key, and decrypt them with that key whenever you need them to authenticate to Google Cloud.
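For example, something along these lines with the gcloud CLI (the keyring and key names are placeholders):

# encrypt the service account file before committing it
gcloud kms encrypt --location=global --keyring=deploy-keys --key=deploy-key \
  --plaintext-file=credentials/dev-cred.json --ciphertext-file=credentials/dev-cred.json.enc

# decrypt it inside the pipeline before running terraform
gcloud kms decrypt --location=global --keyring=deploy-keys --key=deploy-key \
  --ciphertext-file=credentials/dev-cred.json.enc --plaintext-file=credentials/dev-cred.json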

Conclusion

This is just a basic idea of how to build CI/CD with minimal tooling. With this approach, you can set up CI/CD for a typical application at a company and attach other Google services in Terraform as well, such as Cloud SQL, Redis, etc.

Special thanks and credit to Ujjwal Ojha for helping me understand and build CI/CD.

Let me know your thoughts!

Source: https://github.com/ujwaldhakal/cloud-run-cicd-boilerplate
