Kubernetes worker nodes
In this post I’ll describe how to set up a kubernetes worker node from scratch. I use Terraform to start the worker nodes on DigitalOcean; the remaining steps would normally be automated with Ansible. In this post I only show what those Ansible steps would have to do, and I run them manually, without Ansible.
The Terraform part is very easy: we launch as many worker nodes as required with the following resource definition:
resource "digitalocean_droplet" "worker_server" {
  image              = "ubuntu-16-04-x64"
  name               = "k8s-worker-${count.index}.${var.cloudflare_domain}"
  region             = "fra1"
  size               = "512mb"
  private_networking = true
  count              = 1
  tags               = ["${digitalocean_tag.master_tag.id}"]
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]
}
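To actually create the droplets we run the usual Terraform workflow (just the standard commands, nothing specific to this setup):
terraform init
terraform plan
terraform apply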
Now it is time to install the components required on each kubernetes worker:
- Flannel
- Docker
- Kubelet
- Kube-Proxy
Flannel
The first component, the flannel network, stretches an overlay network across the worker nodes and the master. This allows pods to communicate with each other and with the kubernetes api. The configuration and the management of the network happen inside of etcd, so flannel on both worker and master also needs a connection to etcd. For this we reuse the recently written etcd-members finder script: it queries the DigitalOcean api with a read-only api key to find all etcd nodes and connects to them. You can find this script in the setup tutorial for the kubernetes-master nodes.
After this script and its dependent services are installed, we download flannel and create a flanneld.service file in /etc/systemd/system.
wget https://github.com/coreos/flannel/releases/download/v0.9.0/flanneld-amd64
cp flanneld-amd64 /usr/bin/flanneld
chmod 740 /usr/bin/flanneld
In the flanneld.service file we run flanneld only after the etcd-lookup has finished and inject the environment file that etcd-lookup.service has written to /etc/flannel/etcd-members. This file contains a variable called ETCD_MEMBERS, which lists all etcd members comma-separated, so if one etcd member crashes flanneld will fail over to the next one. With -etcd-prefix we specify where flannel looks inside etcd for its network configuration; this configuration was already created in our previous post about the kubernetes-master. Additionally we specify the client certificates used to authenticate against etcd via x.509.
[Unit]
Description=flanneld
Requires=etcd-lookup.service
After=etcd-lookup.service
[Service]
EnvironmentFile=/etc/flannel/etcd-members
ExecStart=/usr/bin/flanneld \
-etcd-endpoints=${ETCD_MEMBERS} \
-etcd-prefix=/flannel.com/network \
-etcd-cafile=/etc/flannel/etcd-ca.pem \
-etcd-certfile=/etc/flannel/etcd-client.pem \
-etcd-keyfile=/etc/flannel/etcd-client-key.pem \
-ip-masq
Restart=always
RestartSec=15
[Install]
WantedBy=multi-user.target
After a daemon-reload we can start the flanneld service, and we should then see a new network interface generated by flannel in our network configuration.
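A minimal sketch of these steps (the exact interface name depends on the flannel backend, so the grep is deliberately loose):
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
# a flannel interface (e.g. flannel0 or flannel.1) should now exist
ip -o link | grep flannel
# and flannel should have written the subnet information that docker will use later
cat /run/flannel/subnet.env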
Docker
When we install docker, we need to make sure that it is started with a dependency on flannel. First we download docker and place the binaries in the /usr/bin directory:
wget https://download.docker.com/linux/static/stable/x86_64/docker-17.03.2-ce.tgz
tar xvzf docker-17.03.2-ce.tgz
cp docker/docker* /usr/bin
Then we install the docker.service file into /etc/systemd/system with the following configuration:
[Unit]
Description=dockerd
Requires=flanneld.service
After=network.target flanneld.service
[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \
--iptables=false \
--ip-masq=false \
--host=unix:///var/run/docker.sock \
--storage-driver=overlay2 \
--bip=${FLANNEL_SUBNET} \
--mtu=${FLANNEL_MTU}
Restart=always
RestartSec=15
[Install]
WantedBy=multi-user.target
In this docker.service file we specify that docker starts after flanneld has started, because flannel generates the environment file /run/flannel/subnet.env that holds the flannel subnet (FLANNEL_SUBNET) and the maximum packet size (FLANNEL_MTU). Additionally we tell docker not to manage iptables and ip-masquerading itself (masquerading is already handled by flanneld via -ip-masq) and to use overlay2 as its storage driver.
Now we can start docker; it should be configured to use the flannel network for container communication.
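A quick way to check this is to start docker and compare the docker0 bridge address with the subnet flannel wrote to its environment file (a minimal sketch):
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# the docker0 bridge should sit inside FLANNEL_SUBNET from /run/flannel/subnet.env
ip addr show docker0
cat /run/flannel/subnet.env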
Kubelet
Kubelet is the component that communicates with the kubernetes master via the kube-apiserver. It also talks to the docker.sock to run pods on the node. The service looks like this:
[Unit]
Description=kubelet
Requires=docker.service public_ip.service
After=network.target docker.service public_ip.service
[Service]
EnvironmentFile=/etc/kubelet/public_ip
ExecStart=/usr/bin/kubelet \
--allow-privileged=true \
--cert-dir=/etc/kubelet \
--cluster-dns=10.32.0.10 \
--cluster-domain=cluster.local \
--container-runtime=docker \
--docker-endpoint=unix:///var/run/docker.sock \
--hostname-override=${PUBLIC_IP} \
--kubeconfig=/etc/kubelet/kube-config.yml \
--network-plugin= \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin
Restart=always
RestartSec=15
[Install]
WantedBy=multi-user.target
First we specify that kubelet.service starts after docker and after a new service called public_ip, which only looks up the public ip of our droplet and writes it into an environment file (a sketch of this service follows at the end of this section). We use this public ip to override the hostname via --hostname-override. By defining a --cert-dir we tell the kubelet to generate its own self-signed certificates in that directory. With --cluster-dns=10.32.0.10 we specify the ip on which the kube-dns service will later run to resolve dns requests from inside the kubernetes cluster. We also specify the path to the docker.sock and that the kubelet should use docker as its container runtime. The most important part is the kubeconfig location: in this file, kube-config.yml, we specify the connection information for the kube-api and the location of the certificates used to connect to it:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubelet/kube-ca.pem
    server: { kube_api_endpoint }
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    client-certificate: /etc/kubelet/kubelet.pem
    client-key: /etc/kubelet/kubelet-key.pem
We have to place at /etc/kubelet/kube-ca.pem the CA certificate that the kube-apiserver certificate was signed with, and next to it the kubelet.pem and kubelet-key.pem. These two files are the certificates used for X.509 authentication. We can generate them again with cfssl, as already described in the kubernetes-master setup tutorial.
cfssl gencert -ca=root/ca.pem -ca-key=root/ca-key.pem -config=ca-config.json -profile=kubernetes kubelet.json | cfssljson -bare kubelet/kubelet
This generates the kubelet certificate and private key, using the following configuration for the certificate:
{
  "CN": "kubelet",
  "hosts": ["*.{public-domain}"],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "DE",
      "L": "NW",
      "ST": "Wesel"
    }
  ]
}
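After generating the files we copy them to the paths referenced in kube-config.yml. This is only a sketch: it assumes the cfssl output directories from the command above and that the root CA from the master setup (root/ca.pem) is also the CA behind the kube-apiserver certificate.
mkdir -p /etc/kubelet
cp kubelet/kubelet.pem /etc/kubelet/kubelet.pem
cp kubelet/kubelet-key.pem /etc/kubelet/kubelet-key.pem
# assumption: root/ca.pem also signed the kube-apiserver certificate
cp root/ca.pem /etc/kubelet/kube-ca.pem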
After this we can start the kubelet.service. It should now connect to the kubernetes-api and register itself as a node.
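One piece I have not shown in detail is the public_ip.service that kubelet.service depends on. It is not a kubernetes component, just a small oneshot unit that writes the droplet’s public ip into the environment file /etc/kubelet/public_ip. A minimal sketch, assuming the DigitalOcean metadata endpoint is reachable from the droplet:
[Unit]
Description=lookup the public ip of the droplet
After=network-online.target
[Service]
Type=oneshot
RemainAfterExit=true
# query the DigitalOcean metadata service and write PUBLIC_IP for the kubelet
ExecStart=/bin/sh -c 'mkdir -p /etc/kubelet && echo -n "PUBLIC_IP=" > /etc/kubelet/public_ip && curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address >> /etc/kubelet/public_ip'
[Install]
WantedBy=multi-user.target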
Kube-proxy
The last and easiest part is the kube-proxy installation: the binary only needs to be downloaded, placed in /usr/bin and started with a small service file.
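A hedged example of the download step, assuming the v1.8.0 release binaries (the same version that kubectl get nodes reports later; adjust this to your cluster):
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-proxy
cp kube-proxy /usr/bin/kube-proxy
chmod 740 /usr/bin/kube-proxy
# the kubelet binary can be fetched from the same release location
The service file then looks like this: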
[Unit]
Description=kube-proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
--kubeconfig=/etc/kubelet/kube-config.yml \
--proxy-mode=iptables
Restart=always
RestartSec=15
[Install]
WantedBy=multi-user.target
In the service we specify where kube-proxy finds the connection information for the kube-apiserver (we reuse the kubelet’s kube-config.yml). We also specify that kube-proxy uses the iptables mode to route service traffic.
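As with the other components, we reload systemd and start the service:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy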
After this
After we have installed and started these four components on the first worker node, we repeat the same steps on all other worker nodes. With kubectl and the same kube-config.yml file we can now check whether everything works. The kube-config.yml can either be placed at ~/.kube/config, where kubectl loads it automatically, or we can point the KUBECONFIG environment variable at the place where our kube-config.yml is installed.
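For example (the paths are assumptions, use wherever the file actually lives):
# either point kubectl at the file directly
export KUBECONFIG=/etc/kubelet/kube-config.yml
# or copy it to the default location
mkdir -p ~/.kube && cp /etc/kubelet/kube-config.yml ~/.kube/config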
Then we can run kubectl get cs, which should list the component status of the kubernetes installation:
NAME                 STATUS      MESSAGE              ERROR
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-0               Unhealthy   {"health": "true"}
And with kubectl get nodes we can see all nodes connected to the kubernetes apiserver.
NAME        STATUS   AGE   VERSION
public_ip   Ready    16m   v1.8.0
public_ip   Ready    14m   v1.8.0
public_ip   Ready    15m   v1.8.0
The last step is to install kube-dns for dns resolution inside of the kubernetes network. For this we create the following service and deployment files and apply them via kubectl apply -f filename.yml.
Deployment.yml:
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default
      serviceAccountName: kube-dns
Service.yml:
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Here we see again the ip already described in the kubelet section: via its clusterIP the kube-dns service listens for dns requests on 10.32.0.10, the same address we passed to the kubelet with --cluster-dns.
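After applying both files we can check that kube-dns actually comes up and resolves names inside the cluster. A hedged verification sketch (the throwaway busybox pod is just one way to test; busybox:1.28 is chosen because nslookup in newer busybox images is known to misbehave):
kubectl apply -f Deployment.yml
kubectl apply -f Service.yml
kubectl get pods -n kube-system -l k8s-app=kube-dns
# run a temporary pod and resolve the kubernetes service through kube-dns
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default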