Kubernetes Cluster w/ Vagrant, conjure-up, juju on AWS

This guide is for anyone looking to configure a K8s cluster for testing purposes on AWS. It leverages Vagrant, Ubuntu, conjure-up, and juju to provision a cluster on nearly any cloud provider in about 10 minutes. This guide specifically uses the AWS CLI, and therefore AWS, to configure the cluster. The deployment spins up 3 EC2 instances in total within your account and automatically configures your K8s cluster's IAM Roles, VPC, and Security Groups.

Detailed info on conjure-up: https://conjure-up.io/.

Detailed info on juju: https://jaas.ai/.

Prerequisites:

VirtualBox: https://www.virtualbox.org/wiki/Downloads

Vagrant: https://www.vagrantup.com/downloads.html

AWS Account w/ a valid payment method

Create a directory for your Vagrant VM
cd to the created directory
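
For example (the directory name here is only an illustration):

mkdir k8s-vagrant
cd k8s-vagrant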

vagrant init ubuntu/xenial64
vagrant box add ubuntu/xenial64

Open your Vagrantfile with a text editor. Uncomment the forwarded_port line (line 26 in the default Vagrantfile) and change both ports to 8001: config.vm.network "forwarded_port", guest: 8001, host: 8001
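
After the edit, the relevant lines of the Vagrantfile should look roughly like this (only the box name and the forwarded port matter here):

Vagrant.configure("2") do |config|
  # box added above
  config.vm.box = "ubuntu/xenial64"
  # forward guest port 8001 to the same port on the host
  config.vm.network "forwarded_port", guest: 8001, host: 8001
end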

run your VirtualBox VM:

vagrant up --provider virtualbox

ssh into the local Ubuntu Vagrant VM:

vagrant ssh

run updates:

sudo apt-get update

install/configure ntp:

sudo apt-get install ntp ntpdate ntpstat
sudo service ntp stop
sudo ntpdate time.nist.gov
sudo service ntp start
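
Optionally, confirm the VM's clock is now synchronized (ntpstat prints a short sync summary and exits non-zero if the clock is unsynchronised):

ntpstat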

install the AWS CLI:

sudo apt-get install python3-pip
pip3 install awscli --user
aws configure

-AWS Access Key ID
-AWS Secret Access Key
-Default region name (e.g. us-east-1)
-Default output format (e.g. table)
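
If the aws command is not found after the --user install, pip placed it in ~/.local/bin, which may not yet be on your PATH in the current shell; add it, verify the install, and re-run aws configure:

export PATH=$HOME/.local/bin:$PATH
aws --version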

test connectivity:

aws ec2 describe-regions

install/configure conjure-up:

sudo snap install conjure-up --classic
conjure-up kubernetes-core

  • Deploy New Self-Hosted Controller
  • Use flannel as your Network Plugin
  • Leave sudo password empty
  • Wait for the deployment to complete (you can change instance types during configuration if you want to lower the cost per hour).

run juju status and ensure that every app/unit listed shows an active state.

Congrats, you've setup a full k8s cluster on EC2!
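
The port 8001 forwarded in the Vagrantfile can be used to reach the cluster's API from your host. Assuming conjure-up installed kubectl and wrote the cluster config on the VM (the kubernetes-core spell normally does this), a minimal sketch would be:

kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8001

With the proxy running inside the VM, http://localhost:8001 on your host reaches the Kubernetes API through the forwarded port. Note that --address=0.0.0.0 exposes the unauthenticated proxy to anything that can reach the VM, so keep this to local testing.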

If for any reason you need to tear down your cluster, run juju switch to see the name of the current controller, then destroy it:

juju switch
juju destroy-controller <controller-name> --destroy-all-models
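
For example, if juju switch reports something like conjure-canonical-kubern-xxx:admin/conjure-canonical-kubern-xxx (the generated controller name will differ), the part before the colon is the controller name:

juju destroy-controller conjure-canonical-kubern-xxx --destroy-all-models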

Check that your instances in the AWS console now show as terminated.
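
You can also check from the AWS CLI; this simply lists instance IDs and their states in your configured region:

aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table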

You can spin a fresh cluster back up at any time by running:

conjure-up kubernetes-core

Banner Art by: https://www.deviantart.com/grenadekitten