I've been working through a job posting this week that requires Kubernetes experience, and I decided that the easiest way to get up to speed with it is via minikube. As I prepare that tutorial, I keep thinking it would be beneficial to put out a spin-off article showing you how to deploy a free-tier t2.micro cluster using Kops.
Warning: Yes, this is free tier, but make sure you set up billing alerts and remember to tear down your cluster when you are not practicing. This will absolutely burn through the free-tier compute limit after a day or two of running.

If it wasn't clear above, we will be using AWS EC2 for this tutorial. If you don't have an account, go set one up now.

Also, if you enjoy this content and want to follow more, consider subscribing to our YouTube channel.
AWS Project Setup

Our IAM user will need certain permissions to allow Kops to create and read what it needs, as well as an S3 bucket for our state.

Let's start with the user. Here I have created a new IAM user called devops_02_kube with a new group that has access to AmazonEC2FullAccess, IAMFullAccess, AmazonS3FullAccess, and AmazonVPCFullAccess.
Grab those access and secret keys and configure your profile with them, then check that the creds were saved correctly.

Next, let's create the S3 bucket with versioning that we need for our kube state. I will call mine devops-02-kops-state. Make sure you turn on versioning.
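The inline commands for this step were lost from the original post; a minimal sketch using the AWS CLI might look like the following. The profile name devops_02_kube and bucket name devops-02-kops-state come from the article; using us-east-1 (the CLI default region for bucket creation) is my assumption.

```shell
# Configure a named AWS CLI profile (prompts for the access key and secret key)
aws configure --profile devops_02_kube

# Verify the credentials were saved
cat ~/.aws/credentials

# Create the S3 bucket for the Kops state store (us-east-1 assumed)
aws s3api create-bucket \
    --bucket devops-02-kops-state \
    --profile devops_02_kube

# Turn on versioning, as the article recommends
aws s3api put-bucket-versioning \
    --bucket devops-02-kops-state \
    --versioning-configuration Status=Enabled \
    --profile devops_02_kube
```

Versioning matters here because Kops uses the bucket as its source of truth for cluster state, and versioning lets you recover from a bad state write.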
Now let's work on the Kops and Kubernetes segment.
Kops and Kubectl Setup

Kubernetes is an amazing tool and likely the foreseeable future of container orchestration, but it can also be a bit difficult to configure while still in its early stages. Thankfully, there are tools like Kops to help us get the job done easily and headache-free.

As hinted at above, we will be taking a more extensive look at Kubernetes in a future job-post tutorial, so I am not going to go into a ton of detail here. What we will need for now is Kops and kubectl installed on our machine. First, install Kops on Linux; for other operating systems, see their documentation. Then install kubectl; for more information or installation methods, see the docs.
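The original install snippets didn't survive extraction; a sketch of the standard Linux install for both tools, following the upstream release endpoints (the "latest" URLs here are the generic documented ones, not necessarily the exact versions the article used):

```shell
# Install the latest Kops release binary for Linux
curl -Lo kops https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64
chmod +x kops
sudo mv kops /usr/local/bin/kops

# Install the latest stable kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```

Run `kops version` and `kubectl version --client` afterward to confirm both binaries are on your PATH.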
Configuring and Provisioning our Cluster

Now that we have installed the requirements, let's export our variables and provision our cluster. We need the AWS profile, keys, and S3 bucket name that we created, so export those as environment variables.

Once we have done that, let's run Kops on the command line to create a master and 3 nodes (I named mine sfs.k8s.local). Note: I created a new SSH key called kube.pub for the cluster. If you get any errors, try running the update command again, then validate the cluster.

Once the cluster is complete, you should be able to see it in your EC2 dashboard. Kops will also save your configuration to .kube/ in your home directory, so try listing the nodes with kubectl; you should see the nodes you just created.

You can now use this cluster to try things out, but again, I'd recommend minikube for testing. This is a good piece of knowledge to have in your professional tool belt.

When you are finished, go ahead and bring the cluster down to save your free-tier compute hours, and verify that the cluster has been terminated in EC2. And remember, your cluster state is stored in the S3 bucket that you created!

Look for the new tutorial to come out at the end of the week!

SFS
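The individual snippets in the section above were lost in extraction, so here is a reconstructed sketch of the full command sequence. The cluster name sfs.k8s.local, the kube.pub key, the devops-02-kops-state bucket, and the one-master/three-node t2.micro layout all come from the article; the zone us-east-1a and the exact flag choices are my assumptions.

```shell
# Export the profile, keys, and state store created earlier
export AWS_PROFILE=devops_02_kube
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile devops_02_kube)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile devops_02_kube)
export KOPS_STATE_STORE=s3://devops-02-kops-state

# Generate the new SSH key mentioned in the article
ssh-keygen -t rsa -f ~/.ssh/kube

# Create a master and 3 worker nodes on free-tier t2.micro instances
kops create cluster \
    --name sfs.k8s.local \
    --zones us-east-1a \
    --master-size t2.micro \
    --node-size t2.micro \
    --node-count 3 \
    --ssh-public-key ~/.ssh/kube.pub \
    --yes

# If you hit errors, re-apply the configuration, then validate
kops update cluster --name sfs.k8s.local --yes
kops validate cluster --name sfs.k8s.local

# List the nodes via the kubeconfig Kops wrote under your home directory
kubectl get nodes

# When finished, tear the cluster down to save free-tier hours
kops delete cluster --name sfs.k8s.local --yes
```

Note that `kops delete cluster` removes the EC2 resources but leaves the S3 state bucket itself in place, which is why the article reminds you the state lives there.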
Disclosure: scottyfullstack.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.