Create and Deploy Your API Handler in the Cloud (for Beginners)

Hariharan Sivakumar
6 min read · Apr 14, 2020


Everyone loves building applications, and the flow runs from designing the application all the way to releasing the product to its intended users.

This blog mainly focuses on beginners who would like to go through the complete cycle, i.e., creating an application from scratch and deploying it in the cloud. I assume the reader has basic knowledge of services, SaaS, CI/CD, and Docker.

This blog covers creating a backend handler, deploying it in the cloud, and using CI/CD (Continuous Integration and Continuous Deployment) along the way.

To begin with, when you are creating an application that requires a backend handler, you have to select from the existing stack pool, which can be Spring Boot, Laravel, Django, Flask, Servlets, JSP, Golang, NodeJS, etc. (your selection will mainly depend on the application's requirements).

In this blog, I have chosen Golang for my backend handler. If you follow the instructions closely, you should end up with a working backend handler deployed in the cloud. For continuous integration you can use any CI/CD tool such as Jenkins, GitLab CI, GoCD, Travis CI, etc.

I have selected GitLab as my CI/CD tool. There are certain things you should be aware of before proceeding with continuous integration. For GitLab, you need to create a CI/CD configuration for the various operations that run before the actual deployment; the configuration is mostly YAML scripts or JSON. As a standard practice, it is good to define targets in the CI/CD configuration such as test, build, and lint, to ensure code quality and to make sure the code you are going to deploy in the cloud is compatible. The test target runs your tests, and if any test fails, the pipeline fails.

All of these targets deal with multiple dependencies, which we need to take care of in the configuration. A sample configuration is below:

image: golang:1.14

cache:
  paths:
    - /apt-cache
    - /go/src/

variables:
  GROUP: repo-name
  PROJECT: executable-api-name

before_script:
  - mkdir -p $GOPATH/src/$REPO/$GROUP $GOPATH/src/_/builds
  - ln -s $GOPATH/src/$REPO/$GROUP $GOPATH/src/_/builds/$GROUP
  - go get -v -d ./...
  - apt-get update -qq
  - apt-get install -qq git
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stages:
  - build
  - deploy-staging

build:
  stage: build
  script:
    - go build

deploy-staging:
  stage: deploy-staging
  environment:
    name: staging
    url: $TARGET
  script:
    - go build
    - touch ~/.ssh/config
    - echo "$SSH_PRIVATE_KEY" >> temp
    - scp -i temp executable-api-name root@$FILE_SYSTEM
    # You can start your server here
  only:
    - master

Yeah, the above code might look scary, but it is just for understanding and for giving it a try. Let's focus on the important parts: the stage listing and the stage details. The configuration above contains two stages, build and deploy-staging, which execute in top-down order, so once the build passes successfully, the deploy is triggered. You can also restrict the pipeline to run only for specific branches of the repository; above it is restricted to master. You need to pull the required image to build the service, and since I'm using Golang, the Golang build command is used. The deploy target deploys your code/executable binary to the target system/cloud. The remaining parts, such as cache and before_script, let you specify whatever should be cached or executed before the stages run.
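
At the "#You can start your server here" comment in the deploy script, you could restart the service over SSH so the new binary is picked up. A sketch under my own assumptions (the destination path /root/api and the unit name testapi.service are placeholders, not from the original setup):

```shell
# Copy the freshly built binary to the droplet (destination path is illustrative)
scp -i temp executable-api-name root@$FILE_SYSTEM:/root/api/executable-api-name

# Restart a (hypothetical) systemd unit so the new binary starts serving
ssh -i temp root@$FILE_SYSTEM "systemctl restart testapi.service"
```

These are deployment-environment commands; adapt the host variable, key handling, and unit name to your own setup.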

Deployment Target

For development purposes I prefer DigitalOcean, since it is cost-efficient for creating development environments such as dev, test, and stage. I created a simple CentOS droplet (you can choose whatever you prefer) in DigitalOcean and added my executable binary as a systemctl service, so whenever the droplet instance restarts, the API starts running. I also opened the firewall port for the port my app runs on in the droplet instance (you can use any of the firewall commands to open a port). For the DB setup, since this is a development environment, I chose to run the DB in Docker on my instance; if you want, you can use a managed DB from any of the providers, such as AWS RDS, GCP Cloud SQL, etc.
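
On a CentOS droplet, opening the app's port might look like this with firewalld (a sketch; port 8080 is an assumption of mine, substitute whichever port your API listens on):

```shell
# Permanently allow the application's TCP port (8080 assumed here)
firewall-cmd --permanent --add-port=8080/tcp

# Reload firewalld so the new rule takes effect
firewall-cmd --reload

# Verify the port now appears in the open-ports list
firewall-cmd --list-ports
```

These commands require root on the instance and firewalld running; other distributions may use ufw or raw iptables instead.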


In the code above, some SSH keys and SSH features are used to log in to the DigitalOcean machine. For security purposes, you can store those SSH keys and SSH target IPs as environment variables; GitLab supports this, which you can check out in the project settings.

System Service Definition

Once you have purchased a cloud engine or instance to deploy to, you need to test the deployment manually first: copy the executable binary and run it in the instance. From that, analyse whatever dependencies you require and get them installed via the GitLab configuration. Below is the code for the system service definition:

[Unit]
Description = Test API
After =

[Service]
ExecStart = /bin/bash /root/start_script/

[Install]
WantedBy = multi-user.target

Please check Google on how to write a systemctl service if you need to run your service that way. The other way is to simply run the executable binary somewhere on your machine; the only requirement is that the API should come back up every time the instance restarts. That is why I added the Test API service above as a systemctl service: WantedBy = multi-user.target makes the service run whenever the instance restarts. ExecStart points to your executable shell file, which is triggered on restart, so you need to start your executable binary inside that shell file. Alternatively, you can run the binary directly from ExecStart, since the working directory points to the API binary folder. Use whichever fits your requirement.
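
Once the unit file is saved (at, say, /etc/systemd/system/testapi.service, a path and name I am assuming for illustration), registering it might look like:

```shell
# Reload systemd so it picks up the new unit file
systemctl daemon-reload

# Enable the service at boot and start it immediately
systemctl enable --now testapi.service

# Confirm the API service is active
systemctl status testapi.service
```

These need root on the instance; after a droplet restart, systemd will bring the service back up on its own.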

Dockerizing API

Dockerizing all the dependent stuff

The other way to run your service is to build it as a Docker image and run the container inside the DigitalOcean instance. Here is a sample Dockerfile for a Golang API:

FROM golang:1.14

WORKDIR /app

COPY go.mod .
COPY go.sum .

RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o executable-api-name

ENTRYPOINT ["/app/executable-api-name"]

This builds a Docker image, which you then run inside the instance to create the container. You need to expose a port on the instance and bind it to the Docker container's port (where the Golang API is running), and also expose the running port in the firewall; this can all be done via Docker Compose. Similarly, you can dockerize your API whatever language it is in: create an image and run it as a container. (Check out the advantages and disadvantages of dockerizing things if you'd like to know more before doing this.)
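
Building and running the image on the instance might look like the following sketch (the image name reuses executable-api-name, and the 8080:8080 port binding is my assumption):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t executable-api-name .

# Run it detached, binding the instance's port 8080 to the container's port 8080
docker run -d --name api -p 8080:8080 executable-api-name

# Check the container is up and the port mapping is in place
docker ps --filter name=api
```

Remember the firewall on the instance still needs the bound port open, exactly as with the non-dockerized deployment.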

Database Creation (Docker)

version: "3.7"

services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_USER: 'db_user'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'db_password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'root_password'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - my-db:/var/lib/mysql
    networks:
      - test_mysql_net

volumes:
  my-db:

networks:
  test_mysql_net:
    driver: bridge

The above code creates a MySQL Docker container which you can use for development. This should also be executed inside the instance; simply put, whatever we do locally to run the service has to be done in the instance too. You can create an automation script for all of this, or do it by logging in to the instance manually. Note: expose the database port in the firewall to gain access from any database connection tool.
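
Bringing the database up on the instance and checking the connection might look like this sketch (the compose file is assumed to be in the current directory, and 127.0.0.1 assumes you are connecting from the instance itself):

```shell
# Start the MySQL container in the background
docker-compose up -d

# Verify the container came up
docker-compose ps

# Connect with the credentials from the compose file (password is prompted)
mysql -h 127.0.0.1 -P 3306 -u db_user -p
```

From outside the instance, replace 127.0.0.1 with the droplet's IP, provided the firewall port is open as noted above.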

That's all for the beginners. I hope the above helps you set up a development environment with reduced resources and in a cost-efficient manner.



Hariharan Sivakumar

Enthusiastic software engineer, working as Software Development Engineer II at Intel Corporation; ex-ThoughtWorks.