Experiment: Setting Up a Single Kubernetes Cluster on the Cloud to Host Multiple Applications

Concept and Goal

I wanted to do a proof of concept to see what it would take to set up a single Kubernetes cluster on the cloud that could host multiple applications. The goal was to learn how the technology works, how much it costs to run the smallest cluster that can host various applications for development and testing, and how that compares with deploying multiple applications on a single VPS.

To set up the experiment, I wanted to create the infrastructure so that when a user visits devlabca.com or www.devlabca.com, the request goes to the WordPress website hosting, but when a user visits subdomain.k8s.devlabca.com, the response comes from the Kubernetes cluster. Eventually, I want to automate infrastructure provisioning and app deployment using CI/CD.

Technology

For this experiment, I chose the following technologies:

  • GKE (Google Kubernetes Engine): This is a managed Kubernetes service in which you don’t pay for the master node and don’t need to worry about system processes. You pay only for worker nodes and anything else you use, such as storage and networking.
  • Terraform: With Terraform, I can provision infrastructure using code. There are a couple of benefits to doing it this way: I can write the specification of my infrastructure as code and store it in Git, which lets me not only reproduce the infrastructure but also track changes over time. In addition, I can automate provisioning using GitLab CI/CD. Terraform also stores the “state” of the infrastructure.
  • GitLab: To store the code and establish the CI/CD pipeline.

Specification and Architecture

To start with, I decided on a minimum configuration: a single worker node with an e2-micro instance in a single region and a single zone. Later, I could easily scale it up as required. I also needed to provision one global IP that I could use in the name server. The estimated cost of running the experiment without scaling beyond one e2-micro node was around $5-6/month (good enough for testing the concept with an echo server).
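The minimum configuration above could be sketched in Terraform roughly as follows. This is a simplified sketch, not the exact code from my repo: the resource names, zone, and machine settings are placeholders.

```hcl
# Sketch of the minimal setup described above; names and the zone are
# placeholders, not the exact values from the repo.

resource "google_container_cluster" "poc" {
  name     = "gke-poc"
  location = "us-central1-a" # single zone, not a regional cluster

  # Manage the node pool separately, so drop the default one.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "poc_nodes" {
  name       = "poc-node-pool"
  location   = "us-central1-a"
  cluster    = google_container_cluster.poc.name
  node_count = 1 # single worker node

  node_config {
    machine_type = "e2-micro" # smallest instance, enough for an echo server
    disk_size_gb = 10         # avoid the costly 100 GB default
  }
}

# One global static IP to point DNS records at.
resource "google_compute_global_address" "poc_ip" {
  name = "gke-poc-ip"
}
```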

The overall architecture is shown in the following diagram.

In addition to setting up k8s, I needed to configure the name server so that requests are routed to GCP properly. The name server is configured as shown below.
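If the zone were managed in Cloud DNS, the routing described above could look roughly like the records below. This is a hypothetical sketch: the zone name and both IP addresses are placeholders, not my actual configuration.

```hcl
# Hypothetical Cloud DNS records matching the routing described above:
# devlabca.com / www -> WordPress host; *.k8s -> the cluster's global IP.

resource "google_dns_record_set" "apex" {
  managed_zone = "devlabca-zone" # placeholder zone name
  name         = "devlabca.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["203.0.113.10"] # placeholder: WordPress host IP
}

resource "google_dns_record_set" "www" {
  managed_zone = "devlabca-zone"
  name         = "www.devlabca.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["devlabca.com."]
}

resource "google_dns_record_set" "k8s_wildcard" {
  managed_zone = "devlabca-zone"
  name         = "*.k8s.devlabca.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["203.0.113.20"] # placeholder: the reserved global IP
}
```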

Code

Juned Munshi / gke-devlabca-poc · GitLab

In this repo you will find the following:

  • Terraform code to provision GKE.
  • YAML files for two demo applications. I chose to deploy a simple echo server on two different subdomains.

Result

Problems

Costs were higher than expected: The expected cost of running one e2-micro node was approximately $5-6/month. However, I was seeing $1-$2/day, and by day 3 it had already cost around $5.

Cause 1: Defaulted to a 100 GB balanced persistent disk

In the Compute Engine section, I noticed that a 100 GB disk had been created. I had left the disk unspecified in my initial Terraform code, and it defaulted to 100 GB.

main.tf
...............

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]


    disk_size_gb = 10 # This part added to cap the disk at 10 GB
    disk_type    = "pd-ssd"

    labels = {
      env = var.project_id
    }

 ............

Cause 2: Load balancer service

I used the LoadBalancer service type to create the service, which caused GKE to provision a load balancer (with its forwarding rules) in Compute Engine, and that was expensive. For this experiment I don’t need a load balancer, so I switched to NodePort in service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: app2-echo-server-service
spec:
  type: NodePort # was LoadBalancer
  selector:
    app: app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
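One caveat with NodePort that is worth noting: traffic now arrives at node-ip:nodePort (a port auto-assigned from the 30000-32767 default range), so the VPC firewall must allow that range through to the nodes. A hypothetical rule for this, assuming the cluster runs on the default network (this is a sketch, not code from the repo), could look like:

```hcl
# Hypothetical firewall rule (not from the repo) opening the default
# Kubernetes NodePort range so the echo servers are reachable at
# <node-external-ip>:<nodePort>.
resource "google_compute_firewall" "allow_nodeport" {
  name    = "allow-nodeport-range"
  network = "default" # assumption: cluster is on the default VPC network

  allow {
    protocol = "tcp"
    ports    = ["30000-32767"] # default NodePort range
  }

  source_ranges = ["0.0.0.0/0"] # wide open; acceptable only for a short experiment
}
```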

A few references I found that discuss the cost of load balancers and how to avoid or reduce it:

Anybody else find load-balancing forwarding rules stupid expensive? : r/googlecloud (reddit.com)

HTTP Load Balancing – reducing costs : r/googlecloud (reddit.com)

kubernetes – Is Ingress working with ClusterIP services? – Stack Overflow

https://serverfault.com/questions/801189/expose-port-80-and-443-on-google-container-engine-without-load-balancer

Next Step

The next step is to install Argo CD on the cluster and deploy apps to it following GitOps practices.

