CKA 2020.12


1 Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:

  • Deployment
  • StatefulSet
  • DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1. Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

Answer:

kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets 

kubectl create serviceaccount cicd-token --namespace=app-team1 

kubectl create rolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
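
An optional check that the binding works (assuming RBAC is active on the cluster):

kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1    # should print yes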

2 Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

Answer:

kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
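
A quick optional check that the node is unschedulable and its workloads were evicted:

kubectl get node ek8s-node-1    # STATUS should show Ready,SchedulingDisabled
kubectl get pods -A -o wide | grep ek8s-node-1    # only DaemonSet pods should remain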

3 Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.19.0. You are also expected to upgrade kubelet and kubectl on the master node.

Answer:

kubectl cordon k8s-master
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force
apt-get update && apt-get install -y kubeadm=1.19.0-00 kubelet=1.19.0-00 kubectl=1.19.0-00
# (on a yum-based master: yum install kubeadm-1.19.0-0 kubelet-1.19.0-0 kubectl-1.19.0-0 --disableexcludes=kubernetes)
kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master
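
Optional check after the upgrade:

kubectl get nodes    # k8s-master should report VERSION v1.19.0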

4 Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 saving the snapshot to /srv/data/etcd-snapshot.db

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.

The following TLS certificates/key are supplied for connecting to the server with etcdctl:

  • CA certificate: /opt/KUIN00601/ca.crt
  • Client certificate: /opt/KUIN00601/etcd-client.crt
  • Client key: /opt/KUIN00601/etcd-client.key

Answer:

#backup
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db

#restore (optionally add --data-dir=<new-dir> if the restored data should go to a fresh directory)
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
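
The saved snapshot can be inspected with etcdctl (optional check):

ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db --write-out=table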

5 Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace. Ensure that the new NetworkPolicy:

  • does not allow access to Pods not listening on port 9000
  • does not allow access from Pods not in namespace internal

Answer:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000
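
A quick optional check of the created policy:

kubectl describe networkpolicy allow-port-from-namespace -n internal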

6 Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled

Answer:

kubectl edit deployment front-end    # add the http container port to the nginx container, see the snippet below
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
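
The port specification to add under the existing nginx container (a minimal sketch of the relevant fragment):

        ports:
        - name: http
          containerPort: 80
          protocol: TCP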

7 Create a new nginx Ingress resource as follows:

  • Name: pong
  • Namespace: ing-internal
  • Exposing service hi on path /hi using service port 5678

The availability of service hi can be checked using the following command, which should return hi: curl -kL <INTERNAL_IP>/hi

Answer:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
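
A quick optional check once the ingress controller has admitted the resource (<INTERNAL_IP> is a node IP, as given in the task):

kubectl get ingress pong -n ing-internal
curl -kL <INTERNAL_IP>/hi    # should return hi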

8 Scale the deployment loadbalancer to 6 pods.

Answer:

kubectl scale deploy loadbalancer --replicas=6

9 Schedule a pod as follow:

  • Name: nginx-kusc00401
  • Image: nginx
  • Node selector: disk=spinning

Answer: generate a Pod manifest with kubectl run, then edit it to add the nodeSelector:

kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    role: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
    - name: nginx
      image: nginx
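
Apply the manifest and optionally confirm the scheduling:

kubectl apply -f pod.yaml
kubectl get pod nginx-kusc00401 -o wide    # NODE should be one labelled disk=spinning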


10 Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum

Answer:

kubectl get nodes | grep -w Ready
kubectl describe nodes <nodeName> | grep -i taints | grep -i noschedule
# Subtract the NoSchedule-tainted count from the Ready count and write the result to /opt/nodenum
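
A minimal scripted sketch of the same idea (assuming every NoSchedule taint shows up on the Taints: line of kubectl describe):

ready=$(kubectl get nodes --no-headers | grep -cw Ready)
tainted=$(kubectl describe nodes | grep -i 'Taints:' | grep -c NoSchedule)
echo $((ready - tainted)) > /opt/nodenum
cat /opt/nodenum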

11 Create a pod named kucc1 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul

Answer: generate a manifest with kubectl run, add the remaining containers, then apply it:

kubectl run kucc1 --image=nginx --dry-run=client -o yaml > kucc1.yaml
# edit kucc1.yaml to add the other containers, then: kubectl apply -f kucc1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
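
Optionally confirm that all containers start:

kubectl get pod kucc1    # READY should show 4/4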

12 Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config

Answer: refer to: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"

13 Create a new PVC:

  • Name: pv-volume
  • Class: csi-hostpath-sc
  • Capacity: 10Mi

Create a new Pod which mounts the PVC as a volume:

  • Name: web-server
  • Image: nginx
  • Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume. Finally, using kubectl edit or kubectl patch, expand the PVC to a capacity of 70Mi and record that change.

Answer:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc

# Create the Pod that mounts the PVC

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: pv-volume
  volumes:
    - name: pv-volume
      persistentVolumeClaim:
        claimName: pv-volume

kubectl edit pvc pv-volume --record    # change spec.resources.requests.storage from 10Mi to 70Mi
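
The same change can be made with a patch (a sketch, since the task also allows kubectl patch):

kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'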


14 Monitor the logs of pod foobar and:

  • Extract log lines corresponding to error unable-to-access-website
  • Write them to /opt/KUTR00101/foobar

Answer:

kubectl logs foobar | grep unable-to-access-website > /opt/KUTR00101/foobar
cat /opt/KUTR00101/foobar

15 Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes’s built-in logging architecture(e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command: /bin/sh -c tail -n+1 -f /var/log/legacy-app.log

Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

  • Don’t modify the existing container.
  • Don’t modify the path of the log file, both containers must access it at /var/log/legacy-app.log.

Answer: export the existing Pod, add the sidecar container, then recreate it:

kubectl get pod legacy-app -o yaml > legacy-app.yaml

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-ap.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}

# Verify: kubectl logs legacy-app -c count-log-1


16 From the pods labelled name=cpu-user, find the pod running the highest CPU workload and write its name to the file /opt/KUT00401/KUT00401.txt (which already exists).

Answer:

kubectl top pod -l name=cpu-user -A
echo '<pod_name>' > /opt/KUT00401/KUT00401.txt
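
If the installed kubectl supports --sort-by for top (an optional convenience), the heaviest consumer is listed first:

kubectl top pod -A -l name=cpu-user --sort-by=cpu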

17 A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

Answer:

ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
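
Back on the control plane node, confirm the node returns to Ready (this can take a minute):

kubectl get node wk8s-node-0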