Setting Up a Free Tier Kubernetes Cluster With Kops

Scotty Parlor

March 16, 2020

Read Time 4 min

I've been working through a job posting this week that requires Kubernetes experience, and I decided the easiest way to get up to speed with it is via Minikube. As I prepare that tutorial, I keep thinking it would be useful to put out a spin-off article showing you how to deploy a free tier cluster of t2.micro instances using Kops.

 

Warning: Yes, this is free tier, but make sure you set up billing alerts and remember to tear down your cluster when you are not practicing. Left running, it will absolutely burn through the free tier compute limit after a day or two.

If it wasn't clear above, we will be using AWS EC2 for this tutorial. If you don't have an account, go set one up now.

 

Also, if you enjoy this content and want to follow more, consider subscribing to our YouTube channel.

 

AWS Project Setup

Our IAM user will need certain permissions so that Kops can create and read the resources it needs, and we will also need an S3 bucket to hold our cluster state.

Let's start with the user:

Here I have created a new IAM user called devops_02_kube with a new group that has access to AmazonEC2FullAccess, IAMFullAccess, AmazonS3FullAccess, and AmazonVPCFullAccess.
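If you prefer the command line over the console, roughly the same setup can be scripted with the AWS CLI (a sketch, run with an existing admin profile; the group name devops_02_kube_group is my own placeholder):

# create a group and attach the four managed policies kops needs
aws iam create-group --group-name devops_02_kube_group
for policy in AmazonEC2FullAccess IAMFullAccess AmazonS3FullAccess AmazonVPCFullAccess; do
  aws iam attach-group-policy --group-name devops_02_kube_group --policy-arn "arn:aws:iam::aws:policy/$policy"
done

# create the user, add it to the group, and generate its access keys
aws iam create-user --user-name devops_02_kube
aws iam add-user-to-group --group-name devops_02_kube_group --user-name devops_02_kube
aws iam create-access-key --user-name devops_02_kube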

 

 

Grab those access and secret keys and configure a named profile for the new user with:

aws configure --profile devops_02_kube

You can check those credentials with:

cat ~/.aws/credentials
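You should see a block for the profile you just configured, roughly like this (placeholder values shown):

[devops_02_kube]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx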

Next, let's create the S3 bucket we need for our kube state. I will call mine devops-02-kops-state. Make sure you turn on versioning.
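If you would rather create the bucket from the CLI, a sketch like this should do the same thing (bucket names are global, so yours will need to be unique; I'm assuming us-east-1 here):

# create the state bucket and turn on versioning so kops can keep a history of the state
aws s3api create-bucket --bucket devops-02-kops-state --region us-east-1
aws s3api put-bucket-versioning --bucket devops-02-kops-state --versioning-configuration Status=Enabled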

 

 

Now let's move on to the Kops and kubectl setup.

 

Kops and Kubectl Setup

Kubernetes is an amazing tool and likely the foreseeable future of container orchestration, but it can also be a bit difficult to configure, still being in its early stages. Thankfully, there are tools like Kops to help us get the job done easily and headache-free.

As hinted at above, we will be taking a more extensive look at Kubernetes in a future job posting tutorial, so I am not going to go into a ton of detail here. What we need for now is Kops and kubectl installed on our machine.

To install Kops on Linux:

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

For other operating systems see their documentation.

Then install kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For more information or installation methods, see the docs.
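To confirm both binaries landed on your PATH, check their versions:

kops version
kubectl version --client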

 

Configuring and Provisioning our Cluster

Now that we have installed the requirements, let's export our variables and provision our cluster.

We need our AWS profile, keys, and S3 bucket name that we created. Export the following:

export AWS_PROFILE="devops_02_kube"
export AWS_ACCESS_KEY_ID=$(aws configure get devops_02_kube.aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get devops_02_kube.aws_secret_access_key)
export KOPS_STATE_STORE=s3://devops-02-kops-state
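Before provisioning anything, it's worth a quick sanity check that the profile and state bucket are reachable (the identity returned should be the devops_02_kube user, and the bucket listing should simply come back empty):

aws sts get-caller-identity
aws s3 ls s3://devops-02-kops-state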

Once we have done that, let's run Kops on the command line to create a master and 3 nodes (I named mine sfs.k8s.local):

Note: I create a new SSH key called kube.pub with:

ssh-keygen -f ~/.ssh/kube

Then create the cluster:

kops create cluster \
--name="sfs.k8s.local" \
--zones="us-east-1a" \
--master-size="t2.micro" \
--node-size="t2.micro" \
--node-count="3" \
--ssh-public-key="~/.ssh/kube.pub"

 

kops create cluster only writes the cluster configuration to the state store. To actually build the AWS resources (or to retry after an error), run:

kops update cluster --name sfs.k8s.local --yes

and to check that everything came up healthy:

kops validate cluster --name sfs.k8s.local

Once the cluster is up, you should be able to see the instances in your EC2 dashboard. Kops will also save your cluster's kubeconfig to .kube/ in your home directory.
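If kubectl can't see the cluster for some reason, you can re-export the kubeconfig from the state store (on newer Kops releases you may also need to add --admin):

kops export kubecfg --name sfs.k8s.local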

Try:

kubectl get nodes

You should be able to see the nodes you just created.

You can now use this cluster to try things out, but again, I'd recommend Minikube for everyday testing. This is still a good piece of knowledge to have in your professional tool belt.

When you are finished, go ahead and bring it down to save your free tier compute hours:

kops delete cluster sfs.k8s.local --yes

and verify that the instances have been terminated in your EC2 dashboard. And remember, your cluster state is still stored in the S3 bucket that you created!
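If you are completely done practicing, you can remove the state bucket as well. A sketch, using the bucket name from earlier; because versioning is enabled, aws s3 rb may refuse to delete the bucket until all object versions are removed (the console's "Empty bucket" action handles that):

# delete the remaining objects and then the bucket itself
aws s3 rb s3://devops-02-kops-state --force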