Switch to the context for the question:

kubectl config use-context k8s

Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:

  • Deployment
  • StatefulSet
  • DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

# Create a new ClusterRole named deployment-clusterrole that is only allowed to
# create deployment, statefulset, and daemonset resources.
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets

# Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
kubectl create serviceaccount cicd-token --namespace=app-team1

# In namespace app-team1, bind the ClusterRole deployment-clusterrole to the ServiceAccount cicd-token.
kubectl create rolebinding bindrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
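
# (optional) verify the binding: kubectl auth can-i supports impersonating the ServiceAccount.
# Creating deployments should be allowed ("yes"), anything else (e.g. pods) should be denied ("no").
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1
kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1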

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

# Mark node ek8s-node-1 as unschedulable
kubectl cordon ek8s-node-1

# Evict and reschedule all pods running on ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
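
# (optional) after the drain, the node should show SchedulingDisabled and only DaemonSet pods should remain on it
kubectl get node ek8s-node-1
kubectl get pods -A -o wide | grep ek8s-node-1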

Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.19.0.

You are also expected to upgrade kubelet and kubectl on the master node.

# Upgrade sequence: drain the node - install - upgrade - uncordon the node

# Drain the k8s-master node
kubectl cordon k8s-master
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force

# Install version 1.19.0
apt-get update && apt-get install -y kubeadm=1.19.0-00 kubelet=1.19.0-00 kubectl=1.19.0-00
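
# (optional) check which versions kubeadm can upgrade to before applying
kubeadm upgrade plan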

# Upgrade to version 1.19.0
kubeadm upgrade apply 1.19.0 --etcd-upgrade=false

# Bring the node back into service
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master
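
# (optional) the master node should now report version v1.19.0
kubectl get nodes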

Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 saving the snapshot to /srv/data/etcd-snapshot.db

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.

The following TLS certificates/key are supplied for connecting to the server with etcdctl:

CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key

# Back up the instance
ETCDCTL_API=3 etcdctl --endpoints="<instance URL>" --cacert=<CA Cert File> --cert=<Client Cert File> --key=<Client Key File> snapshot save <DB Path>

ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db

# Restore from the previous snapshot
ETCDCTL_API=3 etcdctl --endpoints="<instance URL>" --cacert=<CA Cert File> --cert=<Client Cert File> --key=<Client Key File> snapshot restore <DB Path>

ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
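
# (optional) sanity-check the saved snapshot file
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db -w table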

# Ref:https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster

Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:

  • does not allow access to Pods not listening on port 9000
  • does not allow access from Pods not in namespace internal

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000
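
# apply the policy and check it landed in the internal namespace (the file name is just an example)
kubectl apply -f allow-port-from-namespace.yaml
kubectl describe networkpolicy allow-port-from-namespace -n internal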

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled
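
# First add the named containerPort to the existing nginx container; a minimal sketch of the
# fields to add under spec.template.spec.containers[] in kubectl edit (the container name nginx comes from the question):
kubectl edit deployment front-end

        ports:
        - name: http
          containerPort: 80
          protocol: TCP

# Then create the NodePort service with the expose command below; --target-port=http would also work in place of 80.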

kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort

Create a new nginx Ingress resource as follows:

  • Name: pong
  • Namespace: ing-internal
  • Exposing service hello on path /hello using service port 5678
    The availability of service hello can be checked using the following command, which should return hello:
Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
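
# (optional) confirm the Ingress exists in the target namespace
kubectl get ingress pong -n ing-internal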

Scale the deployment loadbalancer to 6 pods.

Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment

kubectl scale deployment loadbalancer --replicas=6
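
# (optional) the deployment should now report 6/6 replicas
kubectl get deployment loadbalancer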

Schedule a pod as follows:

  • Name: nginx-kusc00401
  • Image: nginx
  • Node selector: disk=spinning
Ref: https://kubernetes.io/docs/concepts/workloads/pods/

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: spinning
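
# (optional) apply the manifest and confirm the pod was scheduled to a node carrying the disk=spinning label
# (the file name nginx-kusc00401.yaml is just an example)
kubectl apply -f nginx-kusc00401.yaml
kubectl get pod nginx-kusc00401 -o wide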

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum

kubectl describe node $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep -i taints | grep -ivc NoSchedule > /opt/nodenum
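
# the pipeline can be sanity-checked step by step (<node-name> is a placeholder)
kubectl get nodes | grep -w Ready | wc -l            # nodes reporting Ready (grep -w excludes NotReady)
kubectl describe node <node-name> | grep -i taints   # taints of an individual node
cat /opt/nodenum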

Create a pod named kucc1 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul

apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config

Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"
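
# (optional) create the volume and confirm it is Available (the file name app-config.yaml is just an example)
kubectl apply -f app-config.yaml
kubectl get pv app-config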

Create a new PersistentVolumeClaim:

  • Name: pv-volume
  • Class: csi-hostpath-sc
  • Capacity: 10Mi

Create a new Pod which mounts the PVC as a volume:

  • Name: web-server
  • Image: nginx
  • Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PVC to a capacity of 70Mi and record that change.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: pv-volume
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pv-volume

kubectl edit pvc pv-volume --record
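
# alternatively, the expansion can be done with kubectl patch and recorded the same way
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record
kubectl get pvc pv-volume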

Monitor the logs of pod foobar and:

  • Extract log lines corresponding to the error unable-to-access-website
  • Write them to /opt/KUTR00101/foobar
Ref: https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes

kubectl logs foobar | grep unable-to-access-website > /opt/KUTR00101/foobar

cat /opt/KUTR00101/foobar

Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command:
/bin/sh -c 'tail -n+1 -f /var/log/legacy-app.log'

Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

  • Don’t modify the existing container.
  • Don’t modify the path of the log file, both containers must access it at /var/log/legacy-app.log.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done      
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/legacy-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}

kubectl logs counter count-log-1
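
# On the exam the same sidecar pattern is applied to the existing legacy-app Pod rather than a new counter Pod:
# export its manifest, add the busybox sidecar mounting the shared volume named logs at /var/log,
# then recreate the Pod (the file name below is arbitrary).
kubectl get pod legacy-app -o yaml > legacy-app.yaml
# edit legacy-app.yaml: add the busybox sidecar container and the shared logs volume
kubectl replace --force -f legacy-app.yaml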

From the pods with label name=cpu-user, find the pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUT00401/KUT00401.txt (which already exists).

kubectl top pod -l name=cpu-user -A
# write the name of the pod with the highest CPU usage into the existing file
echo '<pod-name>' >> /opt/KUT00401/KUT00401.txt
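
# kubectl top pod also accepts --sort-by=cpu, which puts the heaviest consumer on the first line of output
kubectl top pod -l name=cpu-user -A --sort-by=cpu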

A Kubernetes worker node, named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

# connect to the failing node and check the kubelet
ssh wk8s-node-0
sudo -i
systemctl status kubelet
# start the kubelet and enable it so that the fix survives a reboot
systemctl start kubelet
systemctl enable kubelet
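
# if starting the kubelet is not enough, its logs usually point to the root cause
journalctl -u kubelet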
