We'll go ahead and use the .k8s.local settings mentioned previously to simplify the DNS setup of the cluster. If you'd prefer, you can also use the name and state flags available within kops to avoid using environment variables. Let's prepare the local environment first:
$ export NAME=gswk8s3.k8s.local
$ export KOPS_STATE_STORE=s3://gsw-k8s-3-state-store
$ aws s3api create-bucket --bucket gsw-k8s-3-state-store --region us-east-1
{
"Location": "/gsw-k8s-3-state-store"
}
$
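If you'd rather not export environment variables, the same information can be passed on each invocation instead; most kops commands accept --name and --state flags. As a sketch, the cluster creation we're about to do could equally be written like this:
$ kops create cluster --zones us-east-2a --name gswk8s3.k8s.local --state s3://gsw-k8s-3-state-store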
Let's spin up our cluster in Ohio, and verify that we can see that region first:
$ aws ec2 describe-availability-zones --region us-east-2
{
"AvailabilityZones": [
{
"State": "available",
"ZoneName": "us-east-2a",
"Messages": [],
"RegionName": "us-east-2"
},
{
"State": "available",
"ZoneName": "us-east-2b",
"Messages": [],
"RegionName": "us-east-2"
},
{
"State": "available",
"ZoneName": "us-east-2c",
"Messages": [],
"RegionName": "us-east-2"
}
]
}
Great! Let's make some Kubernetes. We're going to use the most basic kops create cluster command available, though much more complex examples are available in the documentation (https://github.com/kubernetes/kops/blob/master/docs/high_availability.md):
kops create cluster --zones us-east-2a ${NAME}
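We're sticking with the simplest form here, but for reference, a more elaborate invocation along the lines of the documentation above might spread the nodes across several zones and set explicit instance sizes and counts. The values below are purely illustrative:
kops create cluster --zones us-east-2a,us-east-2b,us-east-2c --node-count 3 --node-size t2.medium --master-size t2.medium ${NAME}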
With kops, and with Kubernetes on AWS in general, the cluster's instances are going to be created within Auto Scaling groups (ASGs).
Once you run this command, you'll get a whole lot of configuration output in what we call a dry run format. This is similar to a Terraform plan, which lets you review what you're about to build in AWS and edit the output accordingly before applying it.
At the end of the output, you'll see the following text, which gives you some basic suggestions on the next steps:
Must specify --yes to apply changes
Cluster configuration has been created.
Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster gswk8s3.k8s.local
* edit your node instance group: kops edit ig --name=gswk8s3.k8s.local nodes
* edit your master instance group: kops edit ig --name=gswk8s3.k8s.local master-us-east-2a
Finally configure your cluster with: kops update cluster gswk8s3.k8s.local --yes
Once you've confirmed that you like the look of the output, you can create the cluster:
kops update cluster gswk8s3.k8s.local --yes
This will give you a lot of output about cluster creation that you can follow along with:
I0320 21:37:34.761784 29197 apply_cluster.go:450] Gossip DNS: skipping DNS validation
I0320 21:37:35.172971 29197 executor.go:91] Tasks: 0 done / 77 total; 30 can run
I0320 21:37:36.045260 29197 vfs_castore.go:435] Issuing new certificate: "apiserver-aggregator-ca"
I0320 21:37:36.070047 29197 vfs_castore.go:435] Issuing new certificate: "ca"
I0320 21:37:36.727579 29197 executor.go:91] Tasks: 30 done / 77 total; 24 can run
I0320 21:37:37.740018 29197 vfs_castore.go:435] Issuing new certificate: "apiserver-proxy-client"
I0320 21:37:37.758789 29197 vfs_castore.go:435] Issuing new certificate: "kubecfg"
I0320 21:37:37.830861 29197 vfs_castore.go:435] Issuing new certificate: "kube-controller-manager"
I0320 21:37:37.928930 29197 vfs_castore.go:435] Issuing new certificate: "kubelet"
I0320 21:37:37.940619 29197 vfs_castore.go:435] Issuing new certificate: "kops"
I0320 21:37:38.095516 29197 vfs_castore.go:435] Issuing new certificate: "kubelet-api"
I0320 21:37:38.124966 29197 vfs_castore.go:435] Issuing new certificate: "kube-proxy"
I0320 21:37:38.274664 29197 vfs_castore.go:435] Issuing new certificate: "kube-scheduler"
I0320 21:37:38.344367 29197 vfs_castore.go:435] Issuing new certificate: "apiserver-aggregator"
I0320 21:37:38.784822 29197 executor.go:91] Tasks: 54 done / 77 total; 19 can run
I0320 21:37:40.663441 29197 launchconfiguration.go:333] waiting for IAM instance profile "nodes.gswk8s3.k8s.local" to be ready
I0320 21:37:40.889286 29197 launchconfiguration.go:333] waiting for IAM instance profile "masters.gswk8s3.k8s.local" to be ready
I0320 21:37:51.302353 29197 executor.go:91] Tasks: 73 done / 77 total; 3 can run
I0320 21:37:52.464204 29197 vfs_castore.go:435] Issuing new certificate: "master"
I0320 21:37:52.644756 29197 executor.go:91] Tasks: 76 done / 77 total; 1 can run
I0320 21:37:52.916042 29197 executor.go:91] Tasks: 77 done / 77 total; 0 can run
I0320 21:37:53.360796 29197 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to gswk8s3.k8s.local
As with GCE, the setup activity will take a few minutes. It will stage files in S3 and create the appropriate instances, Virtual Private Cloud (VPC), security groups, and so on in our AWS account. Then, the Kubernetes cluster will be set up and started. Once everything is finished and started, we should see some options on what comes next:
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.gswk8s3.k8s.local
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
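While the cluster is coming up, you can also peek at the configuration kops has staged in the state store; the exact keys under the bucket will vary, but a listing along these lines shows the cluster spec, instance group definitions, and PKI material:
$ aws s3 ls s3://gsw-k8s-3-state-store/gswk8s3.k8s.local/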
You'll be able to see instances and security groups, and a VPC will be created for your cluster. The kubectl context will also be pointed at your new AWS cluster so that you can interact with it.
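You can confirm the context switch and take a quick inventory from the AWS side. The tag filter below assumes the KubernetesCluster tag that kops normally applies to the resources it creates, so treat it as a sketch rather than a guaranteed query:
$ kubectl config current-context
$ aws ec2 describe-instances --region us-east-2 --filters "Name=tag:KubernetesCluster,Values=gswk8s3.k8s.local"
$ aws autoscaling describe-auto-scaling-groups --region us-east-2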
Once again, we will SSH into the master. This time, we can use the native SSH client and the admin user, as the AMI that kops uses for Kubernetes is Debian-based. We'll find the key files in /home/<username>/.ssh:
$ ssh -v -i /home/<username>/.ssh/<your_id_rsa_file> admin@<Your master IP>
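If you're not sure which public IP belongs to the master, one quick way to find it, assuming your kubectl context already points at the new cluster, is to list the nodes along with their external addresses:
$ kubectl get nodes -o wide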
If you have trouble with your SSH key, you can set it manually on the cluster by creating a secret, adding it to the cluster, and checking if the cluster requires a rolling update:
$ kops create secret --name gswk8s3.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub
$ kops update cluster --yes
Using cluster from kubectl context: gswk8s3.k8s.local
I0320 22:03:42.823049 31465 apply_cluster.go:450] Gossip DNS: skipping DNS validation
I0320 22:03:43.220675 31465 executor.go:91] Tasks: 0 done / 77 total; 30 can run
I0320 22:03:43.919989 31465 executor.go:91] Tasks: 30 done / 77 total; 24 can run
I0320 22:03:44.343478 31465 executor.go:91] Tasks: 54 done / 77 total; 19 can run
I0320 22:03:44.905293 31465 executor.go:91] Tasks: 73 done / 77 total; 3 can run
I0320 22:03:45.385288 31465 executor.go:91] Tasks: 76 done / 77 total; 1 can run
I0320 22:03:45.463711 31465 executor.go:91] Tasks: 77 done / 77 total; 0 can run
I0320 22:03:45.675720 31465 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to gswk8s3.k8s.local
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
$ kops rolling-update cluster --name gswk8s3.k8s.local
NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-us-east-2a Ready 0 1 1 1 1
nodes Ready 0 2 2 2 2
No rolling-update required.
$
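In our case no restart was needed, but if kops had reported instances needing an update, you could apply it in place. The --yes flag performs the actual rolling replacement of the affected instances:
$ kops rolling-update cluster --name gswk8s3.k8s.local --yes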
Once you've gotten onto the cluster master, we can look at the containers. We'll use sudo docker container ls --format 'table {{.Image}}\t{{.Status}}' to explore the running containers. We should see the following:
admin@ip-172-20-47-159:~$ sudo docker container ls --format 'table {{.Image}}\t{{.Status}}'
IMAGE STATUS
kope/dns-controller@sha256:97f80ad43ff833b254907a0341c7fe34748e007515004cf0da09727c5442f53b Up 29 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 29 minutes
gcr.io/google_containers/kube-apiserver@sha256:71273b57d811654620dc7a0d22fd893d9852b6637616f8e7e3f4507c60ea7357 Up 30 minutes
gcr.io/google_containers/etcd@sha256:19544a655157fb089b62d4dac02bbd095f82ca245dd5e31dd1684d175b109947 Up 30 minutes
gcr.io/google_containers/kube-proxy@sha256:cc94b481f168bf96bd21cb576cfaa06c55807fcba8a6620b51850e1e30febeb4 Up 30 minutes
gcr.io/google_containers/kube-controller-manager@sha256:5ca59252abaf231681f96d07c939e57a05799d1cf876447fe6c2e1469d582bde Up 30 minutes
gcr.io/google_containers/etcd@sha256:19544a655157fb089b62d4dac02bbd095f82ca245dd5e31dd1684d175b109947 Up 30 minutes
gcr.io/google_containers/kube-scheduler@sha256:46d215410a407b9b5a3500bf8b421778790f5123ff2f4364f99b352a2ba62940 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
gcr.io/google_containers/pause-amd64:3.0 Up 30 minutes
protokube:1.8.1
We can see some of the same containers that our GCE cluster had, but several are missing. The core Kubernetes components are all present, but the fluentd-gcp service is absent, as are some of the newer utilities such as node-problem-detector, rescheduler, glbc, kube-addon-manager, and etcd-empty-dir-cleanup. This reflects some of the subtle differences between the provisioning tools (kube-up for GCE versus kops for AWS) and the add-ons they install on the various public cloud providers. This is ultimately decided by the efforts of the large Kubernetes open-source community, but GCP often gets many of the latest features first.
kops also gives you a command for checking on the state of the cluster, kops validate cluster, which lets you make sure the cluster is working as expected. There are also a number of handy modes that kops provides for working with the output, provisioners, and configuration of the cluster.
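As a quick sketch of those modes, the following commands validate the running cluster, dump its full specification as YAML, and, instead of applying changes directly, emit Terraform configuration for the cluster into the current directory. The flags shown are standard kops options, though the exact output will depend on your cluster:
$ kops validate cluster --name gswk8s3.k8s.local
$ kops get cluster gswk8s3.k8s.local -o yaml
$ kops update cluster gswk8s3.k8s.local --target=terraform --out=.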