Second Edition
In-Depth Guidance and Practice
A major addition to the curriculum, the Gateway API represents the future of ingress traffic management in Kubernetes. This modern alternative to traditional Ingress resources provides more expressive routing rules, better multi-tenancy support, and extensible traffic management capabilities. You’ll learn how to implement and manage Gateway resources, HTTPRoutes, and GatewayClasses.
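As an illustration of the resource model, here is a minimal sketch of an HTTPRoute that attaches to a Gateway and forwards traffic to a backend Service. The object names (example-gateway, web-app-service) and the hostname are hypothetical and only meant to show the shape of the API:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /orders
    backendRefs:
    - name: web-app-service
      port: 3000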
Package management with Helm is now part of the CKA curriculum. You’ll need to understand how to use Helm for deploying applications, managing releases, and working with Helm charts. This reflects the reality that Helm has become the de facto standard for Kubernetes application packaging.
Declarative configuration management using Kustomize is now required knowledge. The curriculum covers how to use Kustomize for managing environment-specific configurations without templating, a critical skill for managing applications across multiple environments.
Understanding how to extend Kubernetes through CRDs and work with Operators is now essential. This addition reflects the widespread adoption of the Operator pattern in production Kubernetes environments.
The curriculum now includes coverage of extension interfaces like CNI (Container Network Interface), CSI (Container Storage Interface), and CRI (Container Runtime Interface). Understanding these interfaces is crucial for troubleshooting and configuring production clusters.
While DNS was always part of the curriculum, there’s now expanded focus on CoreDNS configuration, troubleshooting DNS issues, and understanding service discovery mechanics in detail.
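CoreDNS is configured through a ConfigMap named coredns in the kube-system namespace. The following abbreviated Corefile is a sketch of the default plugin chain you would typically inspect when troubleshooting DNS; the exact contents depend on how your cluster was provisioned:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }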
Network policies are now explicitly mentioned in the curriculum. You’ll need to understand how to define and enforce network policies to control traffic flow between Pods, namespaces, and external endpoints. This addition emphasizes the growing importance of network segmentation and security in multi-tenant clusters.
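For example, a minimal NetworkPolicy of the following shape allows ingress traffic to Pods labeled app=web-app only from Pods labeled app=frontend in the same namespace, and denies all other ingress to the selected Pods. The labels and port are placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000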
The topic “Configure Pod admission and scheduling” has been made more prominent in the curriculum. This includes deeper coverage of scheduling mechanisms, node affinity, taints and tolerations, pod priority and preemption, and admission controllers. The increased emphasis reflects the complexity of workload placement in production clusters.
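As a taste of those mechanisms, the Pod sketch below combines a toleration (so the Pod can be scheduled onto a node tainted with env=prod:NoSchedule) with a required node affinity rule. The taint key/value and node label are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  tolerations:
  - key: env
    operator: Equal
    value: prod
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx:1.25.1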
Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
Constant width bold
Shows commands or other text that should be typed literally by the user.
Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.
The exam won’t ask you to install a Kubernetes cluster from scratch. Read up on the basics of Kubernetes and its architectural components. Reference Chapter 2 for a jump start on Kubernetes’ architecture and concepts.
kubectl CLI tool
The kubectl command-line tool is the central tool you will use during the exam to interact with the Kubernetes cluster. Even if you have only a little time to prepare for the exam, it’s essential to practice how to operate kubectl, as well as its commands and their relevant options. You will have no access to the web dashboard UI during the exam. Chapter 3 provides a short summary of the most important ways of interacting with a Kubernetes cluster.
Installing a Kubernetes cluster from scratch and upgrading the Kubernetes version of an existing cluster are performed using the tool kubeadm. It’s important to understand its usage and the steps involved in walking through either process. Reference Chapter 4 for more information. Additionally, you need a good understanding of the tools etcdctl and etcdutl, including their command-line options for backing up and restoring the etcd database, covered in Chapter 5.
Kubernetes uses a container runtime engine for managing images. The default container runtime engine in Kubernetes is containerd. At a minimum, understand the difference between container images and containers, and their purpose. This topic goes beyond the coverage of this book and isn’t directly tested for in the exam.
Kubernetes objects are represented by YAML or JSON. The content of this book will use examples in YAML, as it is more commonly used than JSON in the Kubernetes world. You will have to edit YAML during the exam to create a new object declaratively or when modifying the configuration of a live object. Ensure that you have a good handle on basic YAML syntax, data types, and indentation conforming to the specification. How do you edit the YAML definitions, you may ask? From the terminal, of course. The exam terminal environment comes with the tools vi and vim preinstalled. Practice the keyboard shortcuts for common operations (especially how to exit the editor). The last tool I want to mention is GNU Bash. It’s imperative that you understand the basic syntax and operators of the scripting language. It’s absolutely possible that you may have to read, modify, or even extend a multiline Bash command running in a container.
$ kubectl config set-context <context-of-question> \
  --namespace=<namespace-of-question>
$ kubectl config use-context <context-of-question>
$ alias k=kubectl
$ k version
$ kubectl api-resources
NAME                     SHORTNAMES   APIGROUP   NAMESPACED   KIND
...
persistentvolumeclaims   pvc                     true         PersistentVolumeClaim
...
$ kubectl describe pvc my-claim
You do not have to write imperative code using a programming language to tell Kubernetes how to operate an application. All you need to do as an end user is to declare a desired state. The desired state can be defined using a YAML or JSON manifest that conforms to an API schema. Kubernetes then maintains the state and recovers it in case of a failure.
You will want to scale up resources when your application load increases, and scale down when traffic to your application decreases. This can be achieved in Kubernetes by manual or automated scaling. The most practical, optimized option is to let Kubernetes automatically scale resources needed by a containerized application.
Changes to applications, e.g., new features and bug fixes, are usually baked into a container image with a new tag. You can easily roll out those changes across all containers running them using Kubernetes’ convenient replication feature. If needed, Kubernetes also allows for rolling back to a previous application version in case of a blocking bug or if a security vulnerability is detected.
Containers offer only a temporary filesystem. Upon restart of the container, all data written to the filesystem is lost. Depending on the nature of your application, you may need to persist data for longer, for example, if your application interacts with a database. Kubernetes offers the ability to mount storage required by application workloads.
To support a microservices architecture, the container orchestrator needs to allow for communication between containers, and from end users to containers from outside of the cluster. Kubernetes employs internal and external load balancing for routing network traffic.
This node exposes the Kubernetes API through the API server and manages the nodes that make up the cluster. It also responds to cluster events, for example, when an end user requests scaling up the number of Pods to distribute the load for an application. Production clusters employ a highly available (HA) architecture that usually involves three or more control plane nodes.
The worker node executes workload in containers managed by Pods. Every worker node needs a container runtime engine installed on the host machine to be able to manage containers.
The API server exposes the API endpoints clients use to communicate with the Kubernetes cluster. For example, if you execute the tool kubectl, a command-line based Kubernetes client, you will make a RESTful API call to an endpoint exposed by the API server as part of its implementation. The API processing procedure inside of the API server will ensure aspects like authentication, authorization, and admission control. For more information on that topic, see Chapter 6.
The scheduler is a background process that watches for new Kubernetes Pods with no assigned nodes and assigns them to a worker node for execution.
The controller manager watches the state of your cluster and implements changes where needed. For example, if you make a configuration change to an existing object, the controller manager will try to bring the object into the desired state.
Cluster state data needs to be persisted over time so it can be reconstructed upon a node or even a full cluster restart. That’s the responsibility of etcd, an open source software Kubernetes integrates with. At its core, etcd is a key-value store used to persist all data related to the Kubernetes cluster.
The kubelet runs on every node in the cluster; however, it makes the most sense on a worker node. The reason is that the control plane node usually doesn’t execute workload, and the worker node’s primary responsibility is to run workload. The kubelet is an agent that makes sure that the necessary containers are running in a Pod. You could say that the kubelet is the glue between Kubernetes and the container runtime engine and ensures that containers are running and healthy.
The kube proxy is a network proxy that runs on each node in a cluster to maintain network rules and enable network communication. In part, this component is responsible for implementing the Service concept covered in Chapter 17.
As mentioned earlier, the container runtime is the software responsible for managing containers. Kubernetes can be configured to choose from a range of different container runtime engines. While you can install a container runtime engine on a control plane, it’s not necessary as the control plane node usually doesn’t handle workload.
A container runtime engine can manage a container independent of its runtime environment. The container image bundles everything it needs to work, including the application’s binary or code, its dependencies, and its configuration. Kubernetes can run applications in a container in on-premise and cloud environments. As an administrator, you can choose the platform you think is most suitable to your needs without having to rewrite the application. Many cloud offerings provide product-specific, opt-in features. While using product-specific features helps with operational aspects, be aware that they will diminish your ability to switch easily between platforms.
Kubernetes is designed as a declarative state machine. Controllers are reconciliation loops that watch the state of your cluster, then make or request changes where needed. The goal is to move the current cluster state closer to the desired state.
Enterprises run applications at scale. Just imagine how many software components retailers like Amazon, Walmart, or Target need to operate to run their businesses. Kubernetes can scale the number of Pods based on demand or automatically according to resource consumption or historical trends.
Kubernetes exposes its functionality through APIs. We learned that every client needs to interact with the API server to manage objects. It is easy to implement a new client that can make RESTful API calls to exposed endpoints.
The API aspect stretches even further. Sometimes, the core functionality of Kubernetes doesn’t fulfill your custom needs, but you can implement your own extensions to Kubernetes. With the help of specific extension points, the Kubernetes community can build custom functionality according to their requirements, e.g., monitoring or logging solutions.
The Kubernetes API version defines the structure of a primitive and uses it to validate the correctness of the data. The API version serves a similar purpose as XML schemas to an XML document or JSON schemas to a JSON document. The version usually undergoes a maturity process—for example, from alpha to beta to final. Sometimes you see an API group prefix separated from the version by a slash (e.g., apps/v1). You can list the API versions compatible with your cluster version by running the command kubectl api-versions.
The kind defines the type of primitive—e.g., a Pod or a Service. It ultimately answers the question, “What kind of resource are we dealing with here?”
Metadata describes higher-level information about the object—e.g., its name, what namespace it lives in, or whether it defines labels and annotations. This section also defines the UID.
The specification (spec for short) declares the desired state—e.g., how should this object look after it has been created? Which image should run in the container, or which environment variables should be set?
The status describes the actual state of an object. The Kubernetes controllers and their reconciliation loops constantly try to transition a Kubernetes object from its actual state into the desired state. The object has not yet been materialized if the YAML status shows the value {}.
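Putting those parts together, a bare-bones manifest has the following shape; the names are placeholders, and the status section is populated by Kubernetes after the object has been created:

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: default
  labels:
    app: example
spec:
  containers:
  - name: nginx
    image: nginx:1.25.1
status: {}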
$ kubectl [command] [TYPE] [NAME] [flags]
Kubectl usage pattern
$ kubectl run frontend --image=nginx:1.24.0 --port=80
pod/frontend created
$ kubectl edit pod frontend
$ kubectl patch pod frontend -p '{"spec":{"containers":[{"name":"frontend",\
"image":"nginx:1.25.1"}]}}'
pod/frontend patched
$ kubectl delete pod frontend
pod "frontend" deleted
$ kubectl delete pod nginx --now
.
├── app-stack
│ ├── mysql-pod.yaml
│ ├── mysql-service.yaml
│ ├── web-app-pod.yaml
│ └── web-app-service.yaml
├── nginx-deployment.yaml
└── web-app
├── config
│ ├── db-configmap.yaml
│ └── db-secret.yaml
└── web-app-pod.yaml
$ kubectl apply -f nginx-deployment.yaml deployment.apps/nginx-deployment created
$ kubectl apply -f app-stack/
pod/mysql-db created
service/mysql-service created
pod/web-app created
service/web-app-service created
$ kubectl apply -f web-app/ -R
configmap/db-config configured
secret/db-creds created
pod/web-app created
$ kubectl apply -f https://raw.githubusercontent.com/bmuschko/\ cka-study-guide/master/ch03/object-management/nginx-deployment.yaml deployment.apps/nginx-deployment created
$ kubectl get pod web-app -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{}, \
"labels":{"app":"web-app"},"name":"web-app","namespace":"default"}, \
"spec":{"containers":[{"envFrom":[{"configMapRef":{"name":"db-config"}}, \
{"secretRef":{"name":"db-creds"}}],"image":"bmuschko/web-app:1.0.1", \
"name":"web-app","ports":[{"containerPort":3000,"protocol":"TCP"}]}], \
"restartPolicy":"Always"}}
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    team: red
spec:
  replicas: 5
...
$ kubectl apply -f nginx-deployment.yaml deployment.apps/nginx-deployment configured
$ kubectl get deployment nginx-deployment -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{}, \
"labels":{"app":"nginx","team":"red"},"name":"nginx-deployment", \
"namespace":"default"},"spec":{"replicas":5,"selector":{"matchLabels": \
{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}}, \
"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", \
"ports":[{"containerPort":80}]}]}}}}
...
$ kubectl delete -f nginx-deployment.yaml deployment.apps "nginx-deployment" deleted
$ kubectl run frontend --image=nginx:1.25.1 --port=80 \ -o yaml --dry-run=client > pod.yaml
$ vim pod.yaml
$ kubectl apply -f pod.yaml
pod/frontend created
$ ssh kube-control-plane Welcome to Ubuntu 24.10 (GNU/Linux 6.11.0-8-generic aarch64) ...
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on \
each as root:
kubeadm join 172.16.0.5:6443 --token fi8io0.dtkzsy9kws56dmsp \
--discovery-token-ca-cert-hash \
sha256:cc89ea1f82d5ec460e21b69476e0c052d691d0c52cce83fbd7e403559c1ebdac
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/\ download/kube-flannel.yml namespace/kube-flannel created serviceaccount/flannel created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created configmap/kube-flannel-cfg created daemonset.apps/kube-flannel-ds created
$ kubectl create ns kube-flannel
$ kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/\
  enforce=privileged
$ helm repo add flannel https://flannel-io.github.io/flannel/
$ helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel \
  flannel/flannel
namespace/kube-flannel created
namespace/kube-flannel labeled
"flannel" has been added to your repositories
NAME: flannel
LAST DEPLOYED: Wed Jan 29 23:03:33 2025
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None
$ kubectl get pods -n kube-flannel NAME READY STATUS RESTARTS AGE kube-flannel-ds-h6455 1/1 Running 0 25s
$ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-control-plane Ready control-plane 5m31s v1.31.1
$ exit logout ...
$ ssh kube-worker-1 Welcome to Ubuntu 24.10 (GNU/Linux 6.11.0-8-generic aarch64) ...
$ sudo kubeadm join 172.16.0.5:6443 --token fi8io0.dtkzsy9kws56dmsp \ --discovery-token-ca-cert-hash \ sha256:cc89ea1f82d5ec460e21b69476e0c052d691d0c52cce83fbd7e403559c1ebdac [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with \ 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file \ "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with \ flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control plane to see this node join the cluster.
$ ssh kube-control-plane
Welcome to Ubuntu 24.10 (GNU/Linux 6.11.0-8-generic aarch64)
...
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kube-control-plane   Ready    control-plane   2m14s   v1.31.1
kube-worker-1        Ready    <none>          6m43s   v1.31.1
$ ssh kube-control-plane Welcome to Ubuntu 24.10 (GNU/Linux 6.11.0-8-generic aarch64) ...
$ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-control-plane Ready control-plane 4m54s v1.31.1 kube-worker-1 Ready <none> 3m18s v1.31.1
$ sudo apt update ... $ sudo apt-cache madison kubeadm kubeadm | 1.31.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages kubeadm | 1.31.4-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages kubeadm | 1.31.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages kubeadm | 1.31.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages kubeadm | 1.31.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages kubeadm | 1.31.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb Packages
$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install \
-y kubeadm=1.31.5-1.1 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
...
Unpacking kubeadm (1.31.5-1.1) over (1.31.1-1.1) ...
Setting up kubeadm (1.31.5-1.1) ...
kubeadm set on hold.
$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages \
kubeadm=1.31.5-1.1
...
kubeadm is already the newest version (1.31.5-1.1).
0 upgraded, 0 newly installed, 0 to remove and 94 not upgraded.
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.5", \
GitCommit:"af64d838aacd9173317b39cf273741816bd82377", GitTreeState:"clean", \
BuildDate:"2025-01-15T14:39:21Z", GoVersion:"go1.22.10", Compiler:"gc", \
Platform:"linux/arm64"}
$ sudo kubeadm upgrade plan ... [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: 1.31.5 [upgrade/versions] kubeadm version: v1.31.5 I0130 22:26:53.887541 13574 version.go:261] remote version is \ much newer: v1.32.1; falling back to: stable-1.31 [upgrade/versions] Target version: v1.31.5 [upgrade/versions] Latest version in the v1.31 series: v1.31.
$ sudo kubeadm upgrade apply v1.31.5 ... [upgrade/version] You have chosen to change the cluster version to "v1.31.5" [upgrade/versions] Cluster version: v1.31.5 [upgrade/versions] kubeadm version: v1.31.5 ... [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.31.5". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed \ with upgrading your kubelets if you haven't already done so.
$ kubectl drain kube-control-plane --ignore-daemonsets node/kube-control-plane cordoned WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-qndb9, \ kube-system/kube-proxy-vpvms evicting pod kube-system/calico-kube-controllers-65f8bc95db-krp72 evicting pod kube-system/coredns-f9fd979d6-2brkq pod/calico-kube-controllers-65f8bc95db-krp72 evicted pod/coredns-f9fd979d6-2brkq evicted node/kube-control-plane evicted
$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo \ apt-get install -y kubelet=1.31.5-1.1 kubectl=1.31.5-1.1 && sudo apt-mark \ hold kubelet kubectl ... Setting up kubelet (1.31.5-1.1) ... Setting up kubectl (1.31.5-1.1) ... kubelet set on hold. kubectl set on hold.
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon kube-control-plane node/kube-control-plane uncordoned
$ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-control-plane Ready control-plane 21h v1.31.5 kube-worker-1 Ready <none> 21h v1.31.1
$ exit logout ...
$ ssh kube-worker-1 Welcome to Ubuntu 24.10 (GNU/Linux 6.11.0-8-generic aarch64) ...
$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install \
-y kubeadm=1.31.5-1.1 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
...
Unpacking kubeadm (1.31.5-1.1) over (1.31.1-1.1) ...
Setting up kubeadm (1.31.5-1.1) ...
kubeadm set on hold.
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.5", \
GitCommit:"af64d838aacd9173317b39cf273741816bd82377", GitTreeState:"clean", \
BuildDate:"2025-01-15T14:39:21Z", GoVersion:"go1.22.10", Compiler:"gc", \
Platform:"linux/arm64"}
$ sudo kubeadm upgrade node [upgrade] Reading configuration from the cluster... [upgrade] FYI: You can look at this config file with \ 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks [preflight] Skipping prepull. Not a control plane node. [upgrade] Skipping phase. Not a control plane node. [upgrade] Skipping phase. Not a control plane node. [upgrade] Backing up kubelet config file to \ /etc/kubernetes/tmp/kubeadm-kubelet-config3058962439/config.yaml [kubelet-start] Writing kubelet configuration to file \ "/var/lib/kubelet/config.yaml" [upgrade] The configuration for this node was successfully updated! [upgrade] Now you should go ahead and upgrade the kubelet package \ using your package manager.
$ kubectl drain kube-worker-1 --ignore-daemonsets node/kube-worker-1 cordoned WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2hrxg, \ kube-system/kube-proxy-qf6nl evicting pod kube-system/calico-kube-controllers-65f8bc95db-kggbr evicting pod kube-system/coredns-f9fd979d6-7zm4q evicting pod kube-system/coredns-f9fd979d6-tlmhq pod/calico-kube-controllers-65f8bc95db-kggbr evicted pod/coredns-f9fd979d6-7zm4q evicted pod/coredns-f9fd979d6-tlmhq evicted node/kube-worker-1 evicted
$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get \ install -y kubelet=1.31.5-1.1 kubectl=1.31.5-1.1 && sudo apt-mark hold kubelet \ kubectl ... Setting up kubelet (1.31.5-1.1) ... Setting up kubectl (1.31.5-1.1) ... kubelet set on hold. kubectl set on hold.
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon kube-worker-1 node/kube-worker-1 uncordoned
$ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-control-plane Ready control-plane 24h v1.31.5 kube-worker-1 Ready <none> 24h v1.31.5
$ exit logout ...
Every cluster node needs to run on a physical or virtual machine. Provisioning the hardware is the job of an administrator, though you will not have to have hands-on experience for the exam. Explore the manual and automated approaches for provisioning hardware as you will need it for installing a cluster.
Installing new cluster nodes and upgrading the version of an existing cluster node are typical tasks performed by a Kubernetes administrator. You do not need to memorize all the steps involved. The documentation provides a step-by-step, easy-to-follow manual for those operations. During the exam, pull the relevant documentation and copy-paste the commands.
The cluster upgrade process involves executing more commands than the installation process. It’s important to remember that you can only jump up by a single minor version (or by multiple patch versions within the same minor version) before tackling the next higher version. I’d suggest you open the upgrade documentation page and walk through the process a couple of times.
High-availability clusters help with redundancy and scalability. For the exam, you will need to understand the different HA topologies, though it’s unlikely that you’ll have to configure one of them as the process would involve a suite of different hosts.
$ ssh kube-control-plane Welcome to Ubuntu 24.04 LTS (GNU/Linux 6.8.0-51-generic x86_64) ...
$ etcdctl version etcdctl version: 3.5.15 API version: 3.5
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
...
etcd-kube-control-plane 1/1 Running 0 33m
...
$ kubectl describe pod etcd-kube-control-plane -n kube-system
...
Containers:
etcd:
Container ID: containerd://47a6cf3ed27d455be6c9b782d2e35ee77b429ee5c0b \
3c6c3d6282628f6492b15
Image: registry.k8s.io/etcd:3.5.15-0
Image ID: registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6 \
b629afb18f28d8aa3fab5a6e91b4af60026a
...
$ kubectl describe pod etcd-kube-control-plane -n kube-system
...
Containers:
etcd:
...
Command:
etcd
...
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://172.16.0.5:2379
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
...
$ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \ --cert=/etc/kubernetes/pki/etcd/server.crt \ --key=/etc/kubernetes/pki/etcd/server.key \ snapshot save /opt/etcd-backup.db ... Snapshot saved at /opt/etcd-backup.db
$ exit logout ...
$ sudo ETCDCTL_API=3 etcdutl --data-dir=/var/lib/from-backup snapshot restore \ /opt/etcd-backup.db ... $ sudo ls /var/lib/from-backup member
$ cd /etc/kubernetes/manifests/
$ sudo vim etcd.yaml
...
spec:
volumes:
...
- hostPath:
path: /var/lib/from-backup
type: DirectoryOrCreate
name: etcd-data
...
$ kubectl get pod etcd-kube-control-plane -n kube-system NAME READY STATUS RESTARTS AGE etcd-kube-control-plane 1/1 Running 0 5m1s
$ exit logout ...
Backing up etcd requires the installation of a compatible version of the etcdctl executable. You can identify the version of etcd by inspecting the container image tag used to run the etcd Pod (assuming we are talking about a cluster without high-availability characteristics). The etcd process running in the container of the Pod also lists the command-line flags needed to drive the backup process. In the exam, you can assume that the etcdctl executable has been preinstalled.
Restoring etcd requires the use of the executable etcdutl. You will need to point the command to the snapshot file created in the backup process and to a target directory into which the etcd data will be extracted. Just extracting the etcd data into a directory doesn’t tell the etcd process to use it. You need to configure the host path to the directory in the configuration for etcd.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /Users/bmuschko/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 07:33:01 MDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://127.0.0.1:63709
  name: minikube
contexts:
- context:
    cluster: minikube
    user: bmuschko
  name: bmuschko
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 07:33:01 MDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
preferences: {}
users:
- name: bmuschko
  user:
    client-key-data: <REDACTED>
- name: minikube
  user:
    client-certificate: /Users/bmuschko/.minikube/profiles/minikube/client.crt
    client-key: /Users/bmuschko/.minikube/profiles/minikube/client.key
$ kubectl config view apiVersion: v1 kind: Config clusters: ...
$ kubectl config current-context minikube
$ kubectl config use-context bmuschko Switched to context "bmuschko".
$ kubectl config set-credentials myuser \ --client-key=myuser.key --client-certificate=myuser.crt \ --embed-certs=true
The user or service account that wants to access a resource
The Kubernetes API resource type (e.g., a Deployment or node)
The operation that can be executed on the resource (e.g., creating a Pod or deleting a Service)
The Role API primitive declares the API resources and their operations this rule should operate on in a specific namespace. For example, you may want to say “allow listing and deleting of Pods,” or you may express “allow watching the logs of Pods,” or even both with the same Role. Any operation that is not spelled out explicitly is disallowed as soon as it is bound to the subject.
The RoleBinding API primitive binds the Role object to the subject(s) in a specific namespace. It is the glue for making the rules active. For example, you may want to say “bind the Role that permits updating Services to the user John Doe.”
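Expressed declaratively, the two primitives could look like the following sketch, which allows the hypothetical user johndoe to list and delete Pods in the default namespace; the object names are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-manager-binding
  namespace: default
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: johndoe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-manager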
| Default ClusterRole | Description |
|---|---|
| cluster-admin | Allows read and write access to resources across all namespaces. |
| admin | Allows read and write access to resources in a namespace, including Roles and RoleBindings. |
| edit | Allows read and write access to resources in a namespace except Roles and RoleBindings. Provides access to Secrets. |
| view | Allows read-only access to resources in a namespace except Roles, RoleBindings, and Secrets. |
$ kubectl create role read-only --verb=list,get,watch \ --resource=pods,deployments,services role.rbac.authorization.k8s.io/read-only created
$ kubectl get roles NAME CREATED AT read-only 2021-06-23T19:46:48Z
$ kubectl describe role read-only Name: read-only Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- pods [] [] [list get watch] services [] [] [list get watch] deployments.apps [] [] [list get watch]
$ kubectl create rolebinding read-only-binding --role=read-only --user=bmuschko rolebinding.rbac.authorization.k8s.io/read-only-binding created
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-only
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bmuschko
$ kubectl get rolebindings NAME ROLE AGE read-only-binding Role/read-only 24h
$ kubectl describe rolebinding read-only-binding Name: read-only-binding Labels: <none> Annotations: <none> Role: Kind: Role Name: read-only Subjects: Kind Name Namespace ---- ---- --------- User bmuschko
$ kubectl config current-context
minikube
$ kubectl create deployment myapp --image=nginx:1.25.2 --port=80 --replicas=2
deployment.apps/myapp created
$ kubectl config use-context bmuschko-context Switched to context "bmuschko-context".
$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE myapp 2/2 2 2 8s
$ kubectl get replicasets Error from server (Forbidden): replicasets.apps is forbidden: User "bmuschko" \ cannot list resource "replicasets" in API group "apps" in the namespace "default"
$ kubectl delete deployment myapp Error from server (Forbidden): deployments.apps "myapp" is forbidden: User \ "bmuschko" cannot delete resource "deployments" in API group "apps" in the \ namespace "default"
$ kubectl auth can-i --list --as bmuschko
Resources          Non-Resource URLs   Resource Names   Verbs
...
pods               []                  []               [list get watch]
services           []                  []               [list get watch]
deployments.apps   []                  []               [list get watch]
$ kubectl auth can-i list pods --as bmuschko
yes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-pods
  namespace: rbac-example
  labels:
    rbac-pod-list: "true"
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: delete-services
  namespace: rbac-example
  labels:
    rbac-service-delete: "true"
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - delete
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pods-services-aggregation-rules
  namespace: rbac-example
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac-pod-list: "true"
  - matchLabels:
      rbac-service-delete: "true"
rules: []
$ kubectl describe clusterroles pods-services-aggregation-rules -n rbac-example Name: pods-services-aggregation-rules Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- services [] [] [delete] pods [] [] [list]
$ kubectl get serviceaccounts NAME SECRETS AGE default 0 4d
$ kubectl create serviceaccount cicd-bot serviceaccount/cicd-bot created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-bot
apiVersion: v1
kind: Namespace
metadata:
  name: k97
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-api
  namespace: k97
---
apiVersion: v1
kind: Pod
metadata:
  name: list-objects
  namespace: k97
spec:
  serviceAccountName: sa-api
  containers:
  - name: pods
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/k97/pods; sleep 10; done']
  - name: deployments
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/apis/apps/v1/namespaces/k97/deployments; sleep 10; done']
The service account, referenced by name, used for communicating with the Kubernetes API.
Performs an API call to retrieve the list of Pods in the namespace k97.
Performs an API call to retrieve the list of Deployments in the namespace k97.
$ kubectl apply -f setup.yaml namespace/k97 created serviceaccount/sa-api created pod/list-objects created
$ kubectl logs list-objects -c pods -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:k97:sa-api\" \
cannot list resource \"pods\" in API group \"\" in the \
namespace \"k97\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
$ kubectl logs list-objects -c deployments -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps is forbidden: User \
\"system:serviceaccount:k97:sa-api\" cannot list resource \
\"deployments\" in API group \"apps\" in the namespace \
\"k97\"",
"reason": "Forbidden",
"details": {
"group": "apps",
"kind": "deployments"
},
"code": 403
}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods-role
  namespace: k97
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
$ kubectl apply -f role.yaml role.rbac.authorization.k8s.io/list-pods-role created
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccount-pod-rolebinding
  namespace: k97
subjects:
- kind: ServiceAccount
  name: sa-api
roleRef:
  kind: Role
  name: list-pods-role
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f rolebinding.yaml rolebinding.rbac.authorization.k8s.io/serviceaccount-pod-rolebinding created
$ kubectl logs list-objects -c pods -n k97
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "628"
},
"items": [
{
"metadata": {
"name": "list-objects",
"namespace": "k97",
...
}
]
}
$ kubectl logs list-objects -c deployments -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps is forbidden: User \
\"system:serviceaccount:k97:sa-api\" cannot list resource \
\"deployments\" in API group \"apps\" in the namespace \
\"k97\"",
"reason": "Forbidden",
"details": {
"group": "apps",
"kind": "deployments"
},
"code": 403
}
$ kube-apiserver --enable-admission-plugins=NamespaceLifecycle,PodSecurity,\ LimitRanger
This chapter demonstrated some ways to communicate with the Kubernetes API. We performed API requests by switching to a user context and with the help of a RESTful API call using curl. Explore the Kubernetes API and its endpoints on your own for broader exposure.
Anonymous user requests to the Kubernetes API will not allow any substantial operations. For requests coming from a user or a service account, you will need to carefully analyze permissions granted to the subject. Learn the ins and outs of defining RBAC rules by creating the relevant objects to control permissions. Service accounts automount a token when used in a Pod. Expose the token as a volume only if you are intending to make API calls from the Pod.
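If a Pod does not need to talk to the API server, you can opt out of the automatic token mount. A minimal sketch, assuming a Pod that only serves traffic and has no need for API access:

apiVersion: v1
kind: Pod
metadata:
  name: no-api-access
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx:1.25.1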
The API server comes with preconfigured admission control plugins that support the functionality of Kubernetes primitives like the LimitRange. For the exam, you will not have to have a deep understanding of enabling or configuring admission control plugins.
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/\ releases/download/v0.31.0/install.sh | bash -s v0.31.0
$ kubectl create -f https://operatorhub.io/install/argocd-operator.yaml subscription.operators.coreos.com/my-argocd-operator created
$ kubectl get csv -n operators NAME DISPLAY VERSION REPLACES PHASE argocd-operator.v0.13.0 Argo CD 0.13.0 argocd-operator.v0.12.0 Succeeded
$ kubectl get crds NAME CREATED AT applications.argoproj.io 2025-03-21T23:02:40Z applicationsets.argoproj.io 2025-03-21T23:02:39Z appprojects.argoproj.io 2025-03-21T23:02:39Z argocdexports.argoproj.io 2025-03-21T23:02:39Z argocds.argoproj.io 2025-03-21T23:02:39Z notificationsconfigurations.argoproj.io 2025-03-21T23:02:39Z
$ kubectl describe crd applications.argoproj.io
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
spec:
  project: default
  source:
    repoURL: https://github.com/bmuschko/cka-study-guide.git
    targetRevision: HEAD
    path: ./ch07/nginx
  destination:
    server: https://kubernetes.default.svc
    namespace: default
$ kubectl apply -f nginx-application.yaml
application.argoproj.io/nginx created
$ kubectl describe application nginx
Name:         nginx
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  argoproj.io/v1alpha1
Kind:         Application
...
$ kubectl delete application nginx application.argoproj.io "nginx" deleted
$ kubectl get deployments,pods -n operators NAME READY UP-TO-DATE ... deployment.apps/argocd-operator-controller-manager 1/1 1 ... NAME READY STATUS ... pod/argocd-operator-controller-manager-6998544bff-zx8bg 1/1 Running ...
Operators built and managed by the Kubernetes community are available on searchable sites like Artifact Hub and Operator Hub. You will find installation instructions on the corresponding web pages. You do not have to memorize them for the exam. If you want to explore further, install an open source operator, such as the Prometheus operator or the Jaeger operator.
You are not expected to implement a custom CRD schema. All you need to know is how to discover and interact with them using kubectl. Practice the definition of a CR in the form of a YAML manifest, and create the objects for it. Controller implementations are definitely outside the scope of the exam.
$ helm repo list Error: no repositories to show
$ helm repo add jenkinsci https://charts.jenkins.io/ "jenkinsci" has been added to your repositories
$ helm repo list NAME URL jenkinsci https://charts.jenkins.io/
$ helm search repo jenkinsci NAME CHART VERSION APP VERSION DESCRIPTION jenkinsci/jenkins 5.8.26 2.492.2 ...
$ helm install my-jenkins jenkinsci/jenkins --version 5.8.25 NAME: my-jenkins LAST DEPLOYED: Wed Mar 26 13:48:50 2025 NAMESPACE: default STATUS: deployed REVISION: 1 ...
$ kubectl get all NAME READY STATUS RESTARTS AGE pod/my-jenkins-0 2/2 Running 0 12m NAME TYPE CLUSTER-IP EXTERNAL-IP ... service/my-jenkins ClusterIP 10.99.166.189 <none> ... service/my-jenkins-agent ClusterIP 10.110.246.141 <none> ... NAME READY AGE statefulset.apps/my-jenkins 1/1 12m
$ helm show values jenkinsci/jenkins ... controller: # When enabling LDAP or another non-Jenkins identity source, the built-in \ # admin account will no longer exist. # If you disable the non-Jenkins identity store and instead use the Jenkins \ # internal one, # you should revert controller.adminUser to your preferred admin user: adminUser: "admin" # adminPassword: <defaults to random> ...
$ helm install my-jenkins jenkinsci/jenkins --version 4.6.4 \ --set controller.adminUser=boss --set controller.adminPassword=password \ -n jenkins --create-namespace
$ helm list --all-namespaces NAME NAMESPACE REVISION UPDATED STATUS CHART my-jenkins default 1 2023-09-28... deployed jenkins-4.6.4
$ helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "jenkinsci" chart repository Update Complete. *Happy Helming!*
$ helm upgrade my-jenkins jenkinsci/jenkins --version 5.8.26 Release "my-jenkins" has been upgraded. Happy Helming! ...
$ helm uninstall my-jenkins release "my-jenkins" uninstalled
The first mode uses the kustomize subcommand to render the produced result on the console but does not create the objects. This command works similarly to the dry-run option you might know from the run command:
$ kubectl kustomize <target>
The second mode uses the apply command in conjunction with the -k command-line option to apply the resources processed by Kustomize, as explained in the previous section:
$ kubectl apply -k <target>
.
├── kustomization.yaml
├── web-app-deployment.yaml
└── web-app-service.yaml
resources:
- web-app-deployment.yaml
- web-app-service.yaml
$ kubectl kustomize ./
apiVersion: v1
kind: Service
metadata:
labels:
app: web-app-service
name: web-app-service
spec:
ports:
- name: web-app-port
port: 3000
protocol: TCP
targetPort: 3000
selector:
app: web-app
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web-app-deployment
name: web-app-deployment
spec:
replicas: 3
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- env:
- name: DB_HOST
value: mysql-service
- name: DB_USER
value: root
- name: DB_PASSWORD
value: password
image: bmuschko/web-app:1.0.1
name: web-app
ports:
- containerPort: 3000
.
├── config
│   ├── db-config.properties
│   └── db-secret.properties
├── kustomization.yaml
└── web-app-pod.yaml
configMapGenerator:
- name: db-config
  files:
  - config/db-config.properties
secretGenerator:
- name: db-creds
  files:
  - config/db-secret.properties
resources:
- web-app-pod.yaml
$ kubectl apply -k ./ configmap/db-config-t4c79h4mtt unchanged secret/db-creds-4t9dmgtf9h unchanged pod/web-app created
$ kubectl kustomize ./
apiVersion: v1
data:
db-config.properties: |-
DB_HOST: mysql-service
DB_USER: root
kind: ConfigMap
metadata:
name: db-config-t4c79h4mtt
---
apiVersion: v1
data:
db-secret.properties: REJfUEFTU1dPUkQ6IGNHRnpjM2R2Y21RPQ==
kind: Secret
metadata:
name: db-creds-4t9dmgtf9h
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
labels:
app: web-app
name: web-app
spec:
containers:
- envFrom:
- configMapRef:
name: db-config-t4c79h4mtt
- secretRef:
name: db-creds-4t9dmgtf9h
image: bmuschko/web-app:1.0.1
name: web-app
ports:
- containerPort: 3000
protocol: TCP
restartPolicy: Always
namespace: persistence
commonLabels:
  team: helix
resources:
- web-app-deployment.yaml
- web-app-service.yaml
$ kubectl create namespace persistence
namespace/persistence created
$ kubectl apply -k ./
service/web-app-service created
deployment.apps/web-app-deployment created
$ kubectl kustomize ./
apiVersion: v1
kind: Service
metadata:
labels:
app: web-app-service
team: helix
name: web-app-service
namespace: persistence
spec:
ports:
- name: web-app-port
port: 3000
protocol: TCP
targetPort: 3000
selector:
app: web-app
team: helix
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web-app-deployment
team: helix
name: web-app-deployment
namespace: persistence
spec:
replicas: 3
selector:
matchLabels:
app: web-app
team: helix
template:
metadata:
labels:
app: web-app
team: helix
spec:
containers:
- env:
- name: DB_HOST
value: mysql-service
- name: DB_USER
value: root
- name: DB_PASSWORD
value: password
image: bmuschko/web-app:1.0.1
name: web-app
ports:
- containerPort: 3000
resources:
- nginx-deployment.yaml
patchesStrategicMerge:
- security-context.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    spec:
      containers:
      - name: nginx
        securityContext:
          runAsUser: 1000
          runAsGroup: 3000
          fsGroup: 2000
$ kubectl kustomize ./
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
name: nginx
ports:
- containerPort: 80
securityContext:
fsGroup: 2000
runAsGroup: 3000
runAsUser: 1000
Kustomize is bundled with the kubectl command line. You do not need to install another tool, nor do you have to learn another templating engine.
Helm requires installing a separate executable and becoming familiar with its commands and workflow.
Kustomize builds upon the knowledge of an administrator or developer familiar with writing Kubernetes YAML manifests.
Helm has a steeper learning curve. You will have to become familiar with its package management system, the notation for its templating engine, and how to pass in user-defined values upon invocation.
Kustomize doesn’t require the end user to produce an archive file. All you need is a set of YAML manifests and the kustomization.yaml file, which you would check into a Git repository.
Helm, on the other hand, requires the creation of a metadata file named Chart.yaml, default values represented in a file named values.yaml, and a set of template files in a templates subdirectory. To be able to distribute the Helm chart, you’ll need to package it into a TAR file.
Kustomize solely focuses on generating the desired state in a cluster through YAML manifests. You can track changes over time by Git commit hashes or tags to indicate a version.
Helm’s rigid project structure requires the definition of a chart version inside of the Chart.yaml file. Every time you make a change, you’d bump up the version number, often represented by semantic versioning.
Unfortunately, the exam FAQ does not mention any details about the Helm and Kustomize executable. It’s fair to assume that it will be preinstalled for you and therefore you do not need to memorize installation instructions.
Artifact Hub provides a web-based UI for Helm charts. It’s worthwhile to explore the search capabilities and the details provided by individual charts, more specifically the repository the chart file lives in, and its configurable values. During the exam, you’ll likely not be asked to navigate to Artifact Hub because its URL hasn’t been listed as one of the permitted documentation pages. You can assume that the exam question will provide you with the repository URL.
The exam does not ask you to build and publish your own chart file. All you need to understand is how to consume an existing chart. You will need to be familiar with the helm repo add command to register a repository, the helm search repo command to find available chart versions, and the helm install command to install a chart. You should have a basic understanding of the upgrade process for an already installed Helm chart using the helm upgrade command.
$ kubectl run hazelcast --image=hazelcast/hazelcast:5.1.7 \ --port=5701 --env="DNS_DOMAIN=cluster" --labels="app=hazelcast,env=prod"
| Option | Example value | Description |
|---|---|---|
| --image | nginx:1.25.1 | The image for the container to run. |
| --port | 8080 | The port that this container exposes. |
| --rm | N/A | Deletes the Pod after the command in the container finishes. See “Creating a Temporary Pod” for more information. |
| --env | PROFILE=dev | The environment variables to set in the container. |
| --labels | app=frontend | A comma-separated list of labels to apply to the Pod. |
apiVersion: v1
kind: Pod
metadata:
  name: hazelcast
  labels:
    app: hazelcast
    env: prod
spec:
  containers:
  - name: hazelcast
    image: hazelcast/hazelcast:5.1.7
    env:
    - name: DNS_DOMAIN
      value: cluster
    ports:
    - containerPort: 5701
Assigns the name of hazelcast to the Pod.
Assigns labels to the Pod.
Declares the container image to be executed in the container of the Pod.
Injects one or more environment variables into the container.
The port number to expose on the Pod’s IP address.
$ kubectl apply -f pod.yaml pod/hazelcast created
$ kubectl get pods NAME READY STATUS RESTARTS AGE hazelcast 1/1 Running 0 17s
$ kubectl get pods hazelcast NAME READY STATUS RESTARTS AGE hazelcast 1/1 Running 0 17s
| Phase | Description |
|---|---|
| Pending | The Pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. |
| Running | At least one container is still running or is in the process of starting or restarting. |
| Succeeded | All containers in the Pod terminated successfully. |
| Failed | Containers in the Pod terminated; at least one failed with an error. |
| Unknown | The state of the Pod could not be obtained. |
| Policy | Description |
|---|---|
| Always | Automatically restarts the container after any termination. |
| OnFailure | Only restarts the container if it exits with an error (non-zero exit status). |
| Never | Does not automatically restart the terminated container. |
apiVersion: v1
kind: Pod
metadata:
  name: hazelcast
spec:
  containers:
  - name: hazelcast
    image: hazelcast/hazelcast:5.1.7
  restartPolicy: Never
$ kubectl describe pods hazelcast
Name: hazelcast
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: docker-desktop/192.168.65.3
Start Time: Wed, 20 May 2020 19:35:47 -0600
Labels: app=hazelcast
env=prod
Annotations: <none>
Status: Running
IP: 10.1.0.41
Containers:
...
Events:
...
$ kubectl describe pods hazelcast | grep Image:
Image: hazelcast/hazelcast:5.1.7
$ kubectl logs hazelcast ... May 25, 2020 3:36:26 PM com.hazelcast.core.LifecycleService INFO: [10.1.0.46]:5701 [dev] [4.0.1] [10.1.0.46]:5701 is STARTED
$ kubectl exec -it hazelcast -- /bin/sh # ...
$ kubectl exec hazelcast -- env ... DNS_DOMAIN=cluster
$ kubectl run busybox --image=busybox:1.36.1 --rm -it --restart=Never -- env ... HOSTNAME=busybox pod "busybox" deleted
$ kubectl run nginx --image=nginx:1.25.1 --port=80
pod/nginx created
$ kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE       \
  NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          37s   10.244.0.5   minikube   \
  <none>           <none>
$ kubectl get pod nginx -o yaml
...
status:
  podIP: 10.244.0.5
...
$ kubectl run busybox --image=busybox:1.36.1 --rm -it --restart=Never \ -- wget 172.17.0.4:80 Connecting to 172.17.0.4:80 (172.17.0.4:80) saving to 'index.html' index.html 100% |********************************| 615 0:00:00 ETA 'index.html' saved pod "busybox" deleted
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-app
spec:
  containers:
  - name: spring-boot-app
    image: bmuschko/spring-boot-app:1.5.3
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: prod
    - name: VERSION
      value: '1.5.3'
$ kubectl run mypod --image=busybox:1.36.1 -o yaml --dry-run=client \ > pod.yaml -- /bin/sh -c "while true; do date; sleep 10; done"
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: busybox:1.36.1
    args:
    - /bin/sh
    - -c
    - while true; do date; sleep 10; done
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: busybox:1.36.1
    command: ["/bin/sh"]
    args: ["-c", "while true; do date; sleep 10; done"]
$ kubectl apply -f pod.yaml
pod/mypod created
$ kubectl logs mypod -f
Fri May 29 00:49:06 UTC 2020
Fri May 29 00:49:16 UTC 2020
Fri May 29 00:49:26 UTC 2020
...
$ kubectl delete pod hazelcast pod "hazelcast" deleted
$ kubectl delete -f pod.yaml pod "hazelcast" deleted
$ kubectl get namespaces NAME STATUS AGE default Active 157d kube-node-lease Active 157d kube-public Active 157d kube-system Active 157d
$ kubectl create namespace code-red
namespace/code-red created
$ kubectl get namespace code-red
NAME       STATUS   AGE
code-red   Active   16s
apiVersion: v1
kind: Namespace
metadata:
  name: code-red
$ kubectl run pod --image=nginx:1.25.1 -n code-red
pod/pod created
$ kubectl get pods -n code-red
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          13s
$ kubectl config set-context --current --namespace=code-red
Context "minikube" modified.
$ kubectl config view --minify | grep namespace:
namespace: code-red
$ kubectl get pods NAME READY STATUS RESTARTS AGE pod 1/1 Running 0 13s
$ kubectl config set-context --current --namespace=default Context "minikube" modified.
$ kubectl delete namespace code-red
namespace "code-red" deleted
$ kubectl get pods -n code-red
No resources found in code-red namespace.
A Pod runs an application inside of a container. You can check on the status and the configuration of the Pod by inspecting the object with the kubectl get or kubectl describe commands. Get familiar with the life cycle phases of a Pod to be able to quickly diagnose errors. The command kubectl logs can be used to download the container log information without having to shell into the container. Use the command kubectl exec to further explore the container environment, e.g., to check on processes or to examine files.
Sometimes you have to start with the YAML manifest of a Pod and then create the Pod declaratively. This could be the case if you wanted to provide environment variables to the container or declare a custom command. Practice different configuration options by copy-pasting relevant code snippets from the Kubernetes documentation.
Most questions in the exam will ask you to work within a given namespace. You need to understand how to interact with that namespace from kubectl using the options --namespace and -n. To avoid accidentally working on the wrong namespace, know how to permanently set a namespace.
| Option | Example | Description |
|---|---|---|
| --from-literal | | Literal values, which are key-value pairs as plain text |
| --from-env-file | | A file that contains key-value pairs and expects them to be environment variables |
| --from-file | | A file with arbitrary contents |
| --from-file | | A directory with one or many files |
$ kubectl create configmap db-config --from-literal=DB_HOST=mysql-service \ --from-literal=DB_USER=backend configmap/db-config created
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_HOST: mysql-service
  DB_USER: backend
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    envFrom:
    - configMapRef:
        name: db-config
$ kubectl exec backend -- env ... DB_HOST=mysql-service DB_USER=backend ...
{"db":{"host":"mysql-service","user":"backend"}}
$ kubectl create configmap db-config --from-file=db.json configmap/db-config created
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  db.json: |-
    {"db": {"host": "mysql-service", "user": "backend"}}
The multiline string syntax (|-) used in this YAML structure strips the final line break and any trailing blank lines. For more information, see the YAML specification’s coverage of multiline strings.
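To illustrate the difference, assume the hypothetical keys below: the strip indicator (|-) drops the final line break, while the plain literal indicator (|) keeps a single trailing newline.

data:
  stripped.txt: |-
    first line
    last line
  clipped.txt: |
    first line
    last line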
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    volumeMounts:
    - name: db-config-volume
      mountPath: /etc/config
  volumes:
  - name: db-config-volume
    configMap:
      name: db-config
$ kubectl exec -it backend -- /bin/sh
# ls -1 /etc/config
db.json
# cat /etc/config/db.json
{
"db": {
"host": "mysql-service",
"user": "backend"
}
}
| CLI option | Description | Internal Type |
|---|---|---|
| generic | Creates a Secret from a file, directory, or literal value | Opaque |
| docker-registry | Creates a Secret for use with a Docker registry, e.g., to pull images from a private registry when requested by a Pod | kubernetes.io/dockerconfigjson |
| tls | Creates a TLS Secret | kubernetes.io/tls |
| Option | Example | Description |
|---|---|---|
| --from-literal | | Literal values, which are key-value pairs as plain text |
| --from-env-file | | A file that contains key-value pairs and expects them to be environment variables |
| --from-file | | A file with arbitrary contents |
| --from-file | | A directory with one or many files |
$ kubectl create secret generic db-creds --from-literal=pwd=s3cre! secret/db-creds created
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  pwd: czNjcmUh
The value Opaque for the type has been assigned to represent generic sensitive data.
The plain-text value has been Base64-encoded automatically if the object has been created imperatively.
$ echo -n 's3cre!' | base64 czNjcmUh
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
stringData:
  pwd: s3cre!
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    envFrom:
    - secretRef:
        name: secret-basic-auth
$ kubectl exec backend -- env ... username=bmuschko password=secret ...
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: secret-basic-auth
          key: username
    - name: PWD
      valueFrom:
        secretKeyRef:
          name: secret-basic-auth
          key: password
$ kubectl exec backend -- env ... USER=bmuschko PWD=secret ...
$ cp ~/.ssh/id_rsa ssh-privatekey $ kubectl create secret generic secret-ssh-auth --from-file=ssh-privatekey \ --type=kubernetes.io/ssh-auth secret/secret-ssh-auth created
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    volumeMounts:
    - name: ssh-volume
      mountPath: /var/app
      readOnly: true
  volumes:
  - name: ssh-volume
    secret:
      secretName: secret-ssh-auth
Files provided by a Secret mounted as a volume cannot be modified.
Note that the attribute secretName that points to the Secret name is not the same as for the ConfigMap (which is name).
$ kubectl exec -it backend -- /bin/sh # ls -1 /var/app ssh-privatekey # cat /var/app/ssh-privatekey -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,8734C9153079F2E8497C8075289EBBF1 ... -----END RSA PRIVATE KEY-----
The quickest way to create those objects is with the imperative kubectl create configmap and kubectl create secret commands. Understand how to provide the data with the help of different command-line flags. A ConfigMap specifies plain-text key-value pairs in the data section of its YAML manifest.
Creating a Secret using the imperative command kubectl create secret does not require you to Base64-encode the provided values. kubectl performs the encoding operation automatically. The declarative approach requires the Secret YAML manifest to specify a Base64-encoded value in the data section. You can use the stringData convenience attribute in place of the data attribute if you prefer providing a plain-text value. The live object will use a Base64-encoded value. Functionally, there’s no difference at runtime between the use of data and stringData.
Secrets offer specialized types, e.g., kubernetes.io/basic-auth or kubernetes.io/service-account-token, to represent data for specific use cases. Read up on the different types in the Kubernetes documentation and understand their purpose.
The exam may confront you with existing ConfigMap and Secret objects. You need to understand how to use the kubectl get or the kubectl describe command to inspect the data of those objects. The live object of a Secret will always represent the value in a Base64-encoded format.
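For example, assuming the db-creds Secret created earlier still exists, you could inspect it and decode one of its values along these lines:

$ kubectl get secret db-creds -o yaml
$ kubectl get secret db-creds -o jsonpath='{.data.pwd}' | base64 -d
s3cre!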
The primary use case for ConfigMaps and Secrets is the consumption of the data from a Pod. Pods can inject configuration data into a container as environment variables or mount the configuration data as Volumes. For the exam, you need to be familiar with both consumption methods.
$ kubectl create deployment app-cache --image=memcached:1.6.8 --replicas=4 deployment.apps/app-cache created
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-cache
  labels:
    app: app-cache
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app-cache
  template:
    metadata:
      labels:
        app: app-cache
    spec:
      containers:
      - name: memcached
        image: memcached:1.6.8
$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE app-cache 4/4 4 4 125m
| Column Title | Description |
|---|---|
| READY | Lists the number of replicas available to end users in the format of <ready>/<desired>. The number of desired replicas corresponds to the value of spec.replicas. |
| UP-TO-DATE | Lists the number of replicas that have been updated to achieve the desired state. |
| AVAILABLE | Lists the number of replicas available to end users. |
$ kubectl get replicasets,pods NAME DESIRED CURRENT READY AGE replicaset.apps/app-cache-596bc5586d 4 4 4 6h5m NAME READY STATUS RESTARTS AGE app-cache-596bc5586d-84dkv 1/1 Running 0 6h5m app-cache-596bc5586d-8bzfs 1/1 Running 0 6h5m app-cache-596bc5586d-rc257 1/1 Running 0 6h5m app-cache-596bc5586d-tvm4d 1/1 Running 0 6h5m
$ kubectl describe deployment app-cache
Name: app-cache
Namespace: default
CreationTimestamp: Sat, 07 Aug 2021 09:44:18 -0600
Labels: app=app-cache
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=app-cache
Replicas: 4 desired | 4 updated | 4 total | 4 available | \
0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=app-cache
Containers:
memcached:
Image: memcached:1.6.10
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: app-cache-596bc5586d (4/4 replicas created)
Events: <none>
$ kubectl get replicasets,pods NAME DESIRED CURRENT READY AGE replicaset.apps/app-cache-596bc5586d 4 4 4 6h47m NAME READY STATUS RESTARTS AGE pod/app-cache-596bc5586d-84dkv 1/1 Running 0 6h47m pod/app-cache-596bc5586d-8bzfs 1/1 Running 0 6h47m pod/app-cache-596bc5586d-rc257 1/1 Running 0 6h47m pod/app-cache-596bc5586d-tvm4d 1/1 Running 0 6h47m
$ kubectl delete pod app-cache-596bc5586d-rc257 pod "app-cache-596bc5586d-rc257" deleted
$ kubectl get replicasets,pods NAME DESIRED CURRENT READY AGE replicaset.apps/app-cache-596bc5586d 4 4 4 6h47m NAME READY STATUS RESTARTS AGE pod/app-cache-596bc5586d-84dkv 1/1 Running 0 6h47m pod/app-cache-596bc5586d-8bzfs 1/1 Running 0 6h47m pod/app-cache-596bc5586d-lwflz 1/1 Running 0 5s pod/app-cache-596bc5586d-tvm4d 1/1 Running 0 6h47m
$ kubectl get deployments,replicasets,pods NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/app-cache 4/4 4 4 6h47m NAME DESIRED CURRENT READY AGE replicaset.apps/app-cache-596bc5586d 4 4 4 6h47m NAME READY STATUS RESTARTS AGE pod/app-cache-596bc5586d-84dkv 1/1 Running 0 6h47m pod/app-cache-596bc5586d-8bzfs 1/1 Running 0 6h47m pod/app-cache-596bc5586d-rc257 1/1 Running 0 6h47m pod/app-cache-596bc5586d-tvm4d 1/1 Running 0 6h47m
$ kubectl delete deployment app-cache deployment.apps "app-cache" deleted $ kubectl get deployments,replicasets,pods No resources found in default namespace.
$ kubectl apply -f deployment.yaml
$ kubectl edit deployment web-server
$ kubectl set image deployment web-server nginx=nginx:1.25.2
$ kubectl replace -f deployment.yaml
$ kubectl patch deployment web-server -p '{"spec":{"template":{"spec":\
{"containers":[{"name":"nginx","image":"nginx:1.25.2"}]}}}}'
$ kubectl set image deployment app-cache memcached=memcached:1.6.10 deployment.apps/app-cache image updated
$ kubectl rollout status deployment app-cache Waiting for rollout to finish: 2 out of 4 new replicas have been updated... deployment "app-cache" successfully rolled out
$ kubectl rollout history deployment app-cache deployment.apps/app-cache REVISION CHANGE-CAUSE 1 <none> 2 <none>
$ kubectl rollout history deployments app-cache --revision=2
deployment.apps/app-cache with revision #2
Pod Template:
Labels: app=app-cache
pod-template-hash=596bc5586d
Containers:
memcached:
Image: memcached:1.6.10
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
$ kubectl annotate deployment app-cache kubernetes.io/change-cause=\ "Image updated to 1.6.10" deployment.apps/app-cache annotated
$ kubectl rollout history deployment app-cache deployment.apps/app-cache REVISION CHANGE-CAUSE 1 <none> 2 Image updated to 1.6.10
$ kubectl rollout undo deployment app-cache --to-revision=1 deployment.apps/app-cache rolled back
$ kubectl rollout history deployment app-cache deployment.apps/app-cache REVISION CHANGE-CAUSE 2 Image updated to 1.6.10 3 <none>
Given that a Deployment is such a central primitive in Kubernetes, you can expect that the exam will test you on it. Know how to create and configure a Deployment.
Learn how to scale a Deployment to multiple replicas. One of the Deployment’s superior features over a bare ReplicaSet is its rollout functionality for new revisions. Practice how to roll out a new revision, inspect the rollout history, and roll back to a previous revision.
Learn how to configure the built-in strategies in the Deployment primitive and their options for fine-tuning the runtime behavior. You can implement even more sophisticated deployment scenarios with the help of the Deployment and Service primitives. Examples are the blue-green and canary deployment strategies, which require a multi-phased rollout process, but will not be covered by the exam.
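As a sketch of what such fine-tuning could look like, the following Deployment pins down the built-in RollingUpdate strategy; the surge and unavailability values are only illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-cache
spec:
  replicas: 4
  # Control how Pods are replaced during a rollout
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most one extra Pod above the desired count
      maxUnavailable: 0    # never drop below the desired count during the rollout
  selector:
    matchLabels:
      app: app-cache
  template:
    metadata:
      labels:
        app: app-cache
    spec:
      containers:
      - name: memcached
        image: memcached:1.6.10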
$ kubectl scale deployment app-cache --replicas=6 deployment.apps/app-cache scaled
$ kubectl get pods -w NAME READY STATUS RESTARTS AGE app-cache-5d6748d8b9-6cc4j 0/1 ContainerCreating 0 11s app-cache-5d6748d8b9-6rmlj 1/1 Running 0 28m app-cache-5d6748d8b9-6z7g5 0/1 ContainerCreating 0 11s app-cache-5d6748d8b9-96dzf 1/1 Running 0 28m app-cache-5d6748d8b9-jkjsv 1/1 Running 0 28m app-cache-5d6748d8b9-svrxw 1/1 Running 0 28m
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  serviceName: "redis"
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2.5
        command: ["redis-server", "--appendonly", "yes"]
        ports:
        - containerPort: 6379
          name: web
        volumeMounts:
        - name: redis-vol
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-vol
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
$ kubectl apply -f redis.yaml service/redis created statefulset.apps/redis created $ kubectl get statefulset redis NAME READY AGE redis 1/1 2m10s $ kubectl get pods NAME READY STATUS RESTARTS AGE redis-0 1/1 Running 0 2m
$ kubectl scale statefulset redis --replicas=3 statefulset.apps/redis scaled $ kubectl get statefulset redis NAME READY AGE redis 3/3 3m43s $ kubectl get pods NAME READY STATUS RESTARTS AGE redis-0 1/1 Running 0 101m redis-1 1/1 Running 0 97m redis-2 1/1 Running 0 97m
$ kubectl autoscale deployment app-cache --cpu-percent=80 --min=3 --max=5 horizontalpodautoscaler.autoscaling/app-cache autoscaled
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-cache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-cache
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS \ AGE app-cache Deployment/app-cache <unknown>/80% 3 5 4 \ 58s
# ...
spec:
  # ...
  template:
    # ...
    spec:
      containers:
      - name: memcached
        # ...
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE app-cache Deployment/app-cache 15%/80% 3 5 4 58s
$ kubectl describe hpa app-cache
Name: app-cache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sun, 15 Aug 2021 \
15:54:11 -0600
Reference: Deployment/app-cache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (1m) / 80%
Min replicas: 3
Max replicas: 5
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully \
calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is less \
than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 13m horizontal-pod-autoscaler New size: 3; \
reason: All metrics below target
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-cache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-cache
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi
# ...
spec:
  # ...
  template:
    # ...
    spec:
      containers:
      - name: memcached
        # ...
        resources:
          requests:
            cpu: 250m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS \ REPLICAS AGE app-cache Deployment/app-cache 1994752/500Mi, 0%/80% 3 5 \ 3 2m14s
Kubernetes workloads can be scaled either manually or automatically. In real-world scenarios, autoscaling is the preferred approach, as it allows the number of replicas to adjust dynamically based on actual resource consumption relative to defined thresholds. This ensures optimal performance and resource efficiency without the need for constant manual oversight.
For the HPA to function correctly, it’s essential to have the metrics server installed and to define resource requests for your containers. Without these in place, the HPA won’t have the necessary data to make informed scaling decisions, rendering it ineffective.
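One way to verify those prerequisites before creating an HPA, assuming the metrics server runs in the kube-system namespace:

$ kubectl get deployment metrics-server -n kube-system
$ kubectl get deployment app-cache \
  -o jsonpath='{.spec.template.spec.containers[*].resources.requests}'
$ kubectl top pods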
An HPA YAML manifest is built around three essential components: the scaling target (the named resource the HPA manages, e.g., a Deployment or a ReplicaSet), the minimum and maximum number of replicas the HPA can scale between, and the scaling rules, that is, the conditions that trigger scaling, typically resource usage thresholds such as CPU or memory utilization.
| YAML attribute | Description | Example value |
|---|---|---|
| spec.containers[].resources.requests.cpu | CPU resource type | 500m |
| spec.containers[].resources.requests.memory | Memory resource type | 64Mi |
| spec.containers[].resources.requests.hugepages-<size> | Huge page resource type | 60Mi |
| spec.containers[].resources.requests.ephemeral-storage | Ephemeral storage resource type | 4Gi |
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "256Mi"
        cpu: "1"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
| YAML attribute | Description | Example value |
|---|---|---|
| spec.containers[].resources.limits.cpu | CPU resource type | 500m |
| spec.containers[].resources.limits.memory | Memory resource type | 64Mi |
| spec.containers[].resources.limits.hugepages-<size> | Huge page resource type | 60Mi |
| spec.containers[].resources.limits.ephemeral-storage | Ephemeral storage resource type | 4Gi |
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      limits:
        memory: "256Mi"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      limits:
        memory: "64Mi"
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "256Mi"
        cpu: "1"
      limits:
        memory: "256Mi"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "64Mi"
$ kubectl create namespace team-awesome namespace/team-awesome created
apiVersion: v1
kind: ResourceQuota
metadata:
  name: awesome-quota
  namespace: team-awesome
spec:
  hard:
    pods: 2
    requests.cpu: "1"
    requests.memory: 1024Mi
    limits.cpu: "4"
    limits.memory: 4096Mi
Limit the number of Pods to 2.
Limit the total amount of CPU and memory that all Pods in a non-terminal state may request to 1 CPU and 1024Mi of RAM.
Limit the total CPU and memory limits across all Pods in a non-terminal state to 4 CPUs and 4096Mi of RAM.
$ kubectl create -f awesome-quota.yaml resourcequota/awesome-quota created
$ kubectl describe resourcequota awesome-quota -n team-awesome Name: awesome-quota Namespace: team-awesome Resource Used Hard -------- ---- ---- limits.cpu 0 4 limits.memory 0 4Gi pods 0 2 requests.cpu 0 1 requests.memory 0 1Gi
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: team-awesome
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
$ kubectl apply -f nginx-pod.yaml Error from server (Forbidden): error when creating "nginx-pod.yaml": \ pods "nginx" is forbidden: failed quota: awesome-quota: must specify \ limits.cpu for: nginx; limits.memory for: nginx; requests.cpu for: \ nginx; requests.memory for: nginx
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: team-awesome
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
    resources:
      requests:
        cpu: "0.5"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1024Mi"
$ kubectl apply -f nginx-pod1.yaml pod/nginx1 created $ kubectl apply -f nginx-pod2.yaml pod/nginx2 created $ kubectl describe resourcequota awesome-quota -n team-awesome Name: awesome-quota Namespace: team-awesome Resource Used Hard -------- ---- ---- limits.cpu 2 4 limits.memory 2Gi 4Gi pods 2 2 requests.cpu 1 1 requests.memory 1Gi 1Gi
$ kubectl apply -f nginx-pod3.yaml Error from server (Forbidden): error when creating "nginx-pod3.yaml": \ pods "nginx3" is forbidden: exceeded quota: awesome-quota, requested: \ pods=1,requests.cpu=500m,requests.memory=512Mi, used: pods=2,requests.cpu=1,\ requests.memory=1Gi, limited: pods=2,requests.cpu=1,requests.memory=1Gi
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 200m
    default:
      cpu: 200m
    min:
      cpu: 100m
    max:
      cpu: "2"
The context to apply the constraints to. In this case, to a container running in a Pod.
The default CPU resource request value assigned to a container if not provided.
The default CPU resource limit value assigned to a container if not provided.
The minimum and maximum CPU resource request and limit value assignable to a container.
$ kubectl apply -f cpu-resource-constraint.yaml limitrange/cpu-resource-constraint created
$ kubectl describe limitrange cpu-resource-constraint Name: cpu-resource-constraint Namespace: default Type Resource Min Max Default Request Default Limit ... ---- -------- --- --- --------------- ------------- Container cpu 100m 2 200m 200m ...
apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-resource-requirements
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
$ kubectl apply -f nginx-without-resource-requirements.yaml pod/nginx-without-resource-requirements created
$ kubectl describe pod nginx-without-resource-requirements
...
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu \
request for container nginx; cpu limit for container nginx
...
Containers:
nginx:
...
Limits:
cpu: 200m
Requests:
cpu: 200m
...
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-resource-requirements
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
    resources:
      requests:
        cpu: "50m"
      limits:
        cpu: "3"
$ kubectl apply -f nginx-with-resource-requirements.yaml Error from server (Forbidden): error when creating "nginx-with-resource-\ requirements.yaml": pods "nginx-with-resource-requirements" is forbidden: \ [minimum cpu usage per Container is 100 m, but request is 50 m, maximum cpu \ usage per Container is 2, but limit is 3]
A container defined by a Pod can specify resource requests and limits. Work through scenarios where you define those requirements individually and together for single- and multi-container Pods. Upon creation of the Pod, you should be able to see the effects on scheduling the object on a node. Furthermore, practice how to identify the available resource capacity of a node.
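For example, to check a node’s allocatable capacity and what has already been allocated on it (using the single-node minikube cluster from earlier as an example), you can filter the relevant sections of the describe output:

$ kubectl describe node minikube | grep -A 6 "Allocatable:"
$ kubectl describe node minikube | grep -A 10 "Allocated resources:"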
A ResourceQuota defines the resource boundaries for objects living within a namespace. The most commonly used boundaries apply to computing resources. Practice defining them and understand their effect on the creation of Pods. It’s important to know the command for listing the hard requirements of a ResourceQuota and the resources currently in use. You will find that a ResourceQuota offers other options. Discover them in more detail for a broader exposure to the topic.
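For instance, beyond compute resources, a ResourceQuota can also cap the number of API objects in a namespace. A minimal sketch with illustrative object counts:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: team-awesome
spec:
  hard:
    count/configmaps: "10"
    count/secrets: "10"
    services.nodeports: "0"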
A LimitRange can specify resource constraints and defaults of specific primitives. Should you run into a situation where you receive an error message upon creation of an object, check if a limit range object enforces those constraints. Unfortunately, the error message does not point out the object that enforces it so you may have to proactively list LimitRange objects to identify the constraints.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app
spec:
  hard:
    pods: "2"
    requests.cpu: "2"
    requests.memory: 500Mi
$ kubectl get nodes NAME STATUS ROLES AGE VERSION multi-node Ready control-plane 2m33s v1.32.2 multi-node-m02 Ready <none> 2m22s v1.32.2 multi-node-m03 Ready <none> 2m15s v1.32.2 multi-node-m04 Ready <none> 2m9s v1.32.2
$ minikube start --kubernetes-version=v1.32.2 --nodes=4 -p multi-node
$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE kube-scheduler-multi-node 1/1 Running 0 5m10s
$ kubectl get pod nginx -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... nginx 1/1 Running 0 3m49s 10.244.2.2 multi-node-m03 ... $ kubectl get pod nginx -o yaml | grep nodeName: nodeName: multi-node-m03 $ kubectl describe pod nginx Name: nginx Namespace: default Priority: 0 Service Account: default Node: multi-node-m03/192.168.49.4 ...
$ kubectl label node multi-node-m03 disk=ssd node/multi-node-m03 labeled
$ kubectl get nodes --show-labels NAME STATUS ROLES AGE VERSION LABELS multi-node Ready control-plane 14m v1.32.2 ... multi-node-m02 Ready <none> 14m v1.32.2 ... multi-node-m03 Ready <none> 14m v1.32.2 ... multi-node-m04 Ready <none> 14m v1.32.2 ...,disk=ssd,...
| Type | Description |
|---|---|
| requiredDuringSchedulingIgnoredDuringExecution | Rules that must be met for a Pod to be scheduled onto a node. |
| preferredDuringSchedulingIgnoredDuringExecution | Rules that specify preferences that the scheduler will try to enforce but will not guarantee. |
| Operator | Behavior |
|---|---|
| In | A node has an assigned label value in the given set of strings. |
| NotIn | A node does not have an assigned label value in the given set of strings. |
| Exists | A node has a label key assigned to it that matches the given string. |
| DoesNotExist | A node does not have a label key assigned to it that matches the given string. |
| Gt | The node’s label value, interpreted as an integer, is greater than the given value. |
| Lt | The node’s label value, interpreted as an integer, is less than the given value. |
$ kubectl taint node multi-node-m02 special=true:NoSchedule node/multi-node-m02 tainted
$ kubectl get node multi-node-m02 -o yaml | grep -C 3 taints:
...
spec:
taints:
- effect: NoSchedule
key: special
value: "true"
| Effect | Description |
|---|---|
| NoSchedule | Unless a Pod has a matching toleration, it won’t be scheduled on the node. |
| PreferNoSchedule | Try not to place a Pod that does not tolerate the taint on the node, but it is not required. |
| NoExecute | Evicts a Pod from the node if it is already running on it. No future scheduling on the node. |
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: nginx
        image: nginx:1.27.1
As an end user of Kubernetes, you can easily find out which node a Pod runs on. Become familiar with the relevant kubectl commands that give you access to this information. During the exam, you may be asked which Pods run on which nodes of the cluster.
You’ll need to be familiar with a wide range of Pod scheduling concepts. Tasks in the exam may ask you to select the most suitable concept to define soft or hard requirements for specific scheduling scenarios. Most likely, the Pod scheduling concept is spelled out explicitly, and you will need to be able to apply the syntax appropriately.
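As a minimal sketch that ties together the disk=ssd label and the special=true:NoSchedule taint used earlier, a Pod could combine a hard node selection with a toleration (the Pod and container names are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-workload
spec:
  # Hard requirement: schedule only on nodes labeled disk=ssd
  nodeSelector:
    disk: ssd
  # Tolerate the special=true:NoSchedule taint so tainted nodes remain candidates
  tolerations:
  - key: "special"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx:1.27.1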
Applications running in a container can use the temporary filesystem to read and write files. In the case of a container crash or a cluster/node restart, the kubelet will restart the container. Any data that had been written to the temporary filesystem is lost and cannot be retrieved anymore. The container effectively starts with a clean slate again. Volumes can provide persistent storage that survives container restarts, ensuring important data isn’t lost.
There are many use cases for wanting to mount a Volume in a container. One of the most prominent is the multi-container Pod that uses a Volume to exchange data between a main application container and a sidecar, enabling them to share files and communicate through the filesystem.
Volumes abstract storage details from the application, allowing you to change storage backends without modifying container images.
| Type | Description |
|---|---|
| emptyDir | Empty directory in Pod with read/write access. Persisted for only the lifespan of a Pod. A good choice for cache implementations or data exchange between containers of a Pod. |
| hostPath | File or directory from the host node’s filesystem. |
| configMap, secret | Provides a way to inject configuration data. For practical examples, see Chapter 10. |
| nfs | An existing Network File System (NFS) share. Preserves data after Pod restart. |
| persistentVolumeClaim | Claims a persistent volume. For more information, see “Creating PersistentVolumeClaims”. |
apiVersion: v1
kind: Pod
metadata:
  name: business-app
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:1.27.1
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.37.0
    volumeMounts:
    - name: shared-data
      mountPath: /data
Specifies a volume of type emptyDir. The curly braces mean that we don’t want to provide additional configuration, e.g. a size limit.
Mounts the volume to containers with different mount paths inside of the container.
$ kubectl apply -f pod-with-volume.yaml pod/business-app created $ kubectl get pod business-app NAME READY STATUS RESTARTS AGE business-app 2/2 Running 0 43s
$ kubectl exec business-app -it -c nginx -- /bin/sh # cd /usr/share/nginx/html # pwd /usr/share/nginx/html # ls # touch example.html # ls example.html
Many production-ready application stacks running in a cloud-native environment need to persist data. Read up on common use cases and explore recipes that describe typical scenarios. You can find some examples in the O’Reilly books Kubernetes Best Practices and Cloud Native DevOps with Kubernetes.
Volumes are a cross-cutting concept applied in different areas of the exam. Know where to find the relevant documentation for defining a volume and the multitude of ways to consume a volume from a container. Definitely revisit Chapter 10 for a deep dive on how to mount ConfigMaps and Secrets as a volume.
| Type | Description |
|---|---|
| hostPath | Mounts a file or directory from the host node’s filesystem into the Pod. Useful for development and testing but not recommended for production multi-node clusters as it ties the Pod to a specific node. |
| local | Represents a mounted local storage device such as a disk, partition, or directory. Provides better performance than remote storage but requires node affinity to ensure Pods are scheduled on the correct node. |
| nfs | Allows multiple Pods to share the same Network File System (NFS) mount. Supports the ReadWriteMany access mode. |
| csi | Container Storage Interface driver that provides a standardized way to expose storage systems to containerized workloads. Most modern storage solutions use CSI drivers. |
| fc | Fibre Channel volume that allows existing FC storage to be attached to Pods. Requires FC hardware and proper configuration on nodes. |
| iscsi | iSCSI (Internet Small Computer Systems Interface) volume that allows existing iSCSI storage to be mounted to Pods. Provides block-level storage over IP networks. |
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/db
The storage capacity available to the persistent volume.
The read-write access modes applicable to the persistent volume.
$ kubectl apply -f db-pv.yaml
persistentvolume/db-pv created
$ kubectl get pv db-pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS \
CLAIM STORAGECLASS REASON AGE
db-pv 1Gi RWO Retain Available \
10s
| Type | Description |
|---|---|
| Filesystem | Default. Mounts the volume into a directory of the consuming Pod. Creates a filesystem first if the volume is backed by a block device and the device is empty. |
| Block | Used for a volume as a raw block device without a filesystem on it. |
$ kubectl get pv -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS \
CLAIM STORAGECLASS REASON AGE VOLUMEMODE
db-pv 1Gi RWO Retain Available \
19m Filesystem
| Type | Short Form | Description |
|---|---|---|
| ReadWriteOnce | RWO | Read/write access by a single node |
| ReadOnlyMany | ROX | Read-only access by many nodes |
| ReadWriteMany | RWX | Read/write access by many nodes |
| ReadWriteOncePod | RWOP | Read/write access mounted by a single Pod |
$ kubectl get pv db-pv -o jsonpath='{.spec.accessModes}'
["ReadWriteOnce"]
| Type | Description |
|---|---|
| Retain | Default. When the PersistentVolumeClaim is deleted, the PersistentVolume is “released” and can be reclaimed. |
| Delete | Deletion removes the PersistentVolume and its associated storage. |
| Recycle | This value is deprecated. You should use one of the other values. |
$ kubectl get pv db-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
Retain
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01
          - node02
Uses the local volume type, which requires node affinity.
Defines node affinity rules for this PersistentVolume.
Restricts the volume to nodes with hostnames node01 or node02.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 256Mi
The access modes we’re asking an unbound persistent volume to provide.
Uses an empty string assignment to indicate that we want to use static provisioning.
The minimum amount of storage an unbound persistent volume needs to have available.
$ kubectl apply -f db-pvc.yaml persistentvolumeclaim/db-pvc created $ kubectl get pvc db-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-pvc Bound db-pv 1Gi RWO 111s
$ kubectl describe pvc db-pvc ... Used By: <none> ...
apiVersion: v1
kind: Pod
metadata:
  name: app-consuming-pvc
spec:
  volumes:
  - name: app-storage
    persistentVolumeClaim:
      claimName: db-pvc
  containers:
  - image: alpine:3.18.2
    name: app
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 60; done;"]
    volumeMounts:
    - mountPath: "/mnt/data"
      name: app-storage
The volume type that selects a persistent volume claim by name.
The name of the persistent volume claim object we want to bind to.
$ kubectl apply -f app-consuming-pvc.yaml
pod/app-consuming-pvc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-consuming-pvc 1/1 Running 0 3s
$ kubectl describe pod app-consuming-pvc
...
Volumes:
app-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim \
in the same namespace)
ClaimName: db-pvc
ReadOnly: false
...
$ kubectl describe pvc db-pvc ... Used By: app-consuming-pvc ...
$ kubectl exec app-consuming-pvc -it -- /bin/sh / # cd /mnt/data /mnt/data # ls -l total 0 /mnt/data # touch test.db /mnt/data # ls -l total 0 -rw-r--r-- 1 root root 0 Sep 29 23:59 test.db
$ kubectl get storageclass NAME PROVISIONER RECLAIMPOLICY \ VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard (default) k8s.io/minikube-hostpath Delete \ Immediate false 108d
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
$ kubectl create -f fast-sc.yaml storageclass.storage.k8s.io/fast created $ kubectl get storageclass NAME PROVISIONER RECLAIMPOLICY \ VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE fast kubernetes.io/gce-pd Delete \ Immediate false 4s ...
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
  storageClassName: standard
$ kubectl get pv,pvc
NAME CAPACITY \
ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS \
REASON AGE
persistentvolume/pvc-b820b919-f7f7-4c74-9212-ef259d421734 512Mi \
RWO Delete Bound default/db-pvc standard \
2s
NAME STATUS VOLUME \
CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/db-pvc Bound pvc-b820b919-f7f7-4c74-9212-ef259d421734 \
512Mi RWO standard 2s
Creating a PersistentVolume involves a couple of moving parts. Understand the configuration options for PersistentVolumes and PersistentVolumeClaims and how they play together. Try to emulate situations that prevent a successful binding of a PersistentVolumeClaim. Then fix the situation by taking counteractions. Internalize the short-form commands pv and pvc to save precious time during the exam.
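One way to emulate a failed binding, for instance, is a claim that requests more storage than the 1Gi db-pv can offer; the claim name below is only illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: too-big-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi

After applying the manifest, kubectl get pvc should report the claim in the Pending status, and kubectl describe pvc explains why no PersistentVolume could be matched.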
A PersistentVolume can be created statically by creating the object from a YAML manifest using the create command. Alternatively, you can let Kubernetes provision a PersistentVolume dynamically without your direct involvement. For this to happen, assign a storage class to the PersistentVolumeClaim. The provisioner of the storage class takes care of creating the PersistentVolume object for you.
| Type | Description |
|---|---|
| ClusterIP | Exposes the Service on a cluster-internal IP. Reachable only from within the cluster. Kubernetes uses a round-robin algorithm to distribute traffic evenly among the targeted Pods. |
| NodePort | Exposes the Service on each node’s IP address at a static port. Accessible from outside of the cluster. The Service type does not provide any load balancing across multiple nodes. |
| LoadBalancer | Exposes the Service externally using a cloud provider’s load balancer. |
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \ --port=8080 pod/echoserver created
$ kubectl create service clusterip echoserver --tcp=80:8080 service/echoserver created
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \ --port=8080 --expose service/echoserver created pod/echoserver created
$ kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.10 \ --replicas=5 deployment.apps/echoserver created $ kubectl expose deployment echoserver --port=80 --target-port=8080 service/echoserver exposed
$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE echoserver ClusterIP 10.109.241.68 <none> 80/TCP 6s
$ kubectl describe service echoserver Name: echoserver Namespace: default Labels: app=echoserver Annotations: <none> Selector: app=echoserver Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.109.241.68 IPs: 10.109.241.68 Port: <unset> 80/TCP TargetPort: 8080/TCP Endpoints: 172.17.0.4:8080,172.17.0.5:8080,172.17.0.7:8080 + 2 more... Session Affinity: None Events: <none>
$ kubectl get endpoints echoserver NAME ENDPOINTS AGE echoserver 172.17.0.4:8080,172.17.0.5:8080,172.17.0.7:8080 + 2 more... 8m5s
$ kubectl describe endpoints echoserver
Name: echoserver
Namespace: default
Labels: app=echoserver
Annotations: endpoints.kubernetes.io/last-change-trigger-time: \
2021-11-15T19:09:04Z
Subsets:
Addresses: 172.17.0.4,172.17.0.5,172.17.0.7,172.17.0.8,172.17.0.9
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8080 TCP
Events: <none>
ClusterIP
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \ --port=8080 -l app=echoserver pod/echoserver created $ kubectl create service clusterip echoserver --tcp=5005:8080 service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: ClusterIP
  clusterIP: 10.96.254.0
  selector:
    app: echoserver
  ports:
  - port: 5005
    targetPort: 8080
    protocol: TCP
$ kubectl get service echoserver NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE echoserver ClusterIP 10.96.254.0 <none> 5005/TCP 8s
$ wget 10.96.254.0:5005 --timeout=5 --tries=1 --2021-11-15 15:45:36-- http://10.96.254.0:5005/ Connecting to 10.96.254.0:5005... failed: Operation timed out. Giving up.
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \ -- wget 10.96.254.0:5005 Connecting to 10.96.254.0:5005 (10.96.254.0:5005) saving to 'index.html' index.html 100% |********************************| 408 0:00:00 ETA 'index.html' saved pod "tmp" deleted
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \ -- wget echoserver:5005 Connecting to echoserver:5005 (10.96.254.0:5005) saving to 'index.html' index.html 100% |********************************| 408 0:00:00 ETA 'index.html' saved pod "tmp" deleted
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \ -n other -- wget echoserver.default:5005 Connecting to echoserver.default:5005 (10.96.254.0:5005) saving to 'index.html' index.html 100% |********************************| 408 0:00:00 ETA 'index.html' saved pod "tmp" deleted
$ kubectl exec -it echoserver -- env ECHOSERVER_SERVICE_HOST=10.96.254.0 ECHOSERVER_SERVICE_PORT=8080 ...
NodePort
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \ --port=8080 -l app=echoserver pod/echoserver created $ kubectl create service nodeport echoserver --tcp=5005:8080 service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: NodePort
  clusterIP: 10.96.254.0
  selector:
    app: echoserver
  ports:
  - port: 5005
    nodePort: 30158
    targetPort: 8080
    protocol: TCP
The Service type set to NodePort.
The statically-assigned node port that makes the Service accessible from outside of the cluster.
$ kubectl get service echoserver NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE echoserver NodePort 10.101.184.152 <none> 5005:30158/TCP 5s
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \ -- wget 10.101.184.152:5005 Connecting to 10.101.184.152:5005 (10.101.184.152:5005) saving to 'index.html' index.html 100% |********************************| 414 0:00:00 ETA 'index.html' saved pod "tmp" deleted
$ kubectl get nodes -o \
jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
192.168.64.15
$ wget 192.168.64.15:30158
--2021-11-16 14:10:16-- http://192.168.64.15:30158/
Connecting to 192.168.64.15:30158... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘index.html’
...
LoadBalancer
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \ --port=8080 -l app=echoserver pod/echoserver created $ kubectl create service loadbalancer echoserver --tcp=5005:8080 service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: LoadBalancer
  clusterIP: 10.96.254.0
  loadBalancerIP: 10.109.76.157
  selector:
    app: echoserver
  ports:
  - port: 5005
    targetPort: 8080
    nodePort: 30158
    protocol: TCP
$ kubectl get service echoserver NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE echoserver LoadBalancer 10.109.76.157 10.109.76.157 5005:30642/TCP 5s
$ wget 10.109.76.157:5005 --2021-11-17 11:30:44-- http://10.109.76.157:5005/ Connecting to 10.109.76.157:5005... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/plain] Saving to: ‘index.html’ ...
Pod-to-Pod communication via their IP addresses doesn’t guarantee a stable network interface over time. A restarted Pod is assigned a new virtual IP address. The purpose of a Service is to provide that stable network interface so that you can operate a complex microservice architecture running in a Kubernetes cluster. In most cases, Pods call a Service by hostname. The hostname is provided by the DNS server named CoreDNS running as a Pod in the kube-system namespace.
The exam expects you to understand the differences between the Service types ClusterIP, NodePort, and LoadBalancer. Depending on the assigned type, a Service becomes accessible from inside the cluster or from outside the cluster.
It’s easy to get the configuration of a Service wrong. Any misconfiguration won’t allow network traffic to reach the set of Pods it was intended for. Common misconfigurations include incorrect label selection and port assignments. The kubectl get endpoints command will give you an idea of which Pods a Service can route traffic to.
$ kubectl get pods -n ingress-nginx NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create-qqhrp 0/1 Completed 0 60s ingress-nginx-admission-patch-56z26 0/1 Completed 1 60s ingress-nginx-controller-7c6974c4d8-2gg8c 1/1 Running 0 60s
$ kubectl get ingressclasses NAME CONTROLLER PARAMETERS AGE nginx k8s.io/ingress-nginx <none> 14m
| Type | Example | Description |
|---|---|---|
| An optional host | next.example.com | If provided, the rules apply to that host. If no host is defined, all inbound HTTP(S) traffic is handled (e.g., if made through the IP address of the Ingress). |
| A list of paths | /app, /metrics | Incoming traffic must match the host and path to correctly forward the traffic to a Service. |
| The backend | app-service:8080 | A combination of a Service name and port. |
$ kubectl create ingress next-app \ --rule="next.example.com/app=app-service:8080" \ --rule="next.example.com/metrics=metrics-service:9090" ingress.networking.k8s.io/next-app created
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: next-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: next.example.com
    http:
      paths:
      - backend:
          service:
            name: app-service
            port:
              number: 8080
        path: /app
        pathType: Exact
  - host: next.example.com
    http:
      paths:
      - backend:
          service:
            name: metrics-service
            port:
              number: 9090
        path: /metrics
        pathType: Exact
| Path Type | Rule | Incoming Request |
|---|---|---|
| Exact | /app | Matches /app but not /app/ or /app/index.html |
| Prefix | /app | Matches /app, /app/, and /app/index.html |
$ kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE next-app nginx next.example.com 192.168.66.4 80 5m38s
$ kubectl describe ingress next-app
Name: next-app
Labels: <none>
Namespace: default
Address: 192.168.66.4
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
next.example.com
/app app-service:8080 (<error: endpoints \
"app-service" not found>)
/metrics metrics-service:9090 (<error: endpoints \
"metrics-service" not found>)
Annotations: <none>
Events:
Type Reason Age From ...
---- ------ ---- ---- ...
Normal Sync 6m45s (x2 over 7m3s) nginx-ingress-controller ...
$ kubectl run app --image=k8s.gcr.io/echoserver:1.10 --port=8080 \ -l app=app-service pod/app created $ kubectl run metrics --image=k8s.gcr.io/echoserver:1.10 --port=8080 \ -l app=metrics-service pod/metrics created $ kubectl create service clusterip app-service --tcp=8080:8080 service/app-service created $ kubectl create service clusterip metrics-service --tcp=9090:8080 service/metrics-service created
$ kubectl describe ingress next-app
Name: next-app
Labels: <none>
Namespace: default
Address: 192.168.66.4
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
next.example.com
/app app-service:8080 (10.244.0.6:8080)
/metrics metrics-service:9090 (10.244.0.7:8080)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 13m (x2 over 13m) nginx-ingress-controller Scheduled for sync
$ kubectl get ingress next-app \
--output=jsonpath="{.status.loadBalancer.ingress[0]['ip']}"
192.168.66.4
$ sudo vim /etc/hosts
...
192.168.66.4 next-app
$ wget next.example.com/app --timeout=5 --tries=1 --2021-11-30 19:34:57-- http://next.example.com/app Resolving next.example.com (next.example.com)... 192.168.66.4 Connecting to next.example.com (next.example.com)|192.168.66.4|:80... \ connected. HTTP request sent, awaiting response... 200 OK
$ wget next.example.com/app/ --timeout=5 --tries=1 --2021-11-30 15:36:26-- http://next.example.com/app/ Resolving next.example.com (next.example.com)... 192.168.66.4 Connecting to next.example.com (next.example.com)|192.168.66.4|:80... \ connected. HTTP request sent, awaiting response... 404 Not Found 2021-11-30 15:36:26 ERROR 404: Not Found.
An Ingress is not to be confused with a Service. The Ingress is meant for routing cluster-external HTTP(S) traffic to one or many Services based on an optional hostname and mandatory path. A Service routes traffic to a set of Pods.
An Ingress controller needs to be installed before an Ingress can function properly. Without installing an Ingress controller, Ingress rules will have no effect. You can choose from a range of Ingress controller implementations, all documented on the Kubernetes documentation page. Assume that an Ingress controller will be preinstalled for you in the exam environment.
You can define one or many rules in an Ingress. Every rule consists of an optional host, the URL context path, and the Service DNS name and port. Try defining more than a single rule and how to access the endpoint. You will not have to understand the process for configuring TLS termination for an Ingress—this aspect is covered by the CKS exam.
Advanced features like traffic splitting, rate limiting, and request/response manipulation are provided by nonportable annotations specific to individual Ingress implementations.
The Ingress API is not well suited for multi-tenant environments that call for a strong permission model.
Defines an instance of traffic handling infrastructure, such as a cloud load balancer.
Each Gateway is associated with a GatewayClass, which describes the actual kind of gateway controller that will handle traffic for the Gateway.
Defines HTTP or GRPC-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a Service.
Can be used to enable cross-namespace references within the Gateway API, e.g. routes may forward traffic to backends in other namespaces.
$ kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/\ download/v1.3.0/standard-install.yaml customresourcedefinition.apiextensions.k8s.io/\ gatewayclasses.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/\ gateways.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/\ grpcroutes.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/\ httproutes.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/\ referencegrants.gateway.networking.k8s.io created
$ kubectl get crds | grep gateway.networking.k8s.io gatewayclasses.gateway.networking.k8s.io 2025-08-07T18:14:16Z gateways.gateway.networking.k8s.io 2025-08-07T18:14:16Z grpcroutes.gateway.networking.k8s.io 2025-08-07T18:14:16Z httproutes.gateway.networking.k8s.io 2025-08-07T18:14:16Z referencegrants.gateway.networking.k8s.io 2025-08-07T18:14:17Z
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.4.2 \ -n envoy-gateway-system --create-namespace
$ kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway \ --for=condition=Available deployment.apps/envoy-gateway condition met
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
$ kubectl apply -f gateway-class.yaml gatewayclass.gateway.networking.k8s.io/envoy created
$ kubectl get gatewayclasses NAME CONTROLLER ACCEPTED AGE envoy gateway.envoyproxy.io/gatewayclass-controller True 31s
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: hello-world-gateway
spec:
  gatewayClassName: envoy
  listeners:
  - name: http
    protocol: HTTP
    port: 80
$ kubectl apply -f gateway.yaml gateway.gateway.networking.k8s.io/hello-world-gateway created
$ kubectl get gateways NAME CLASS ADDRESS PROGRAMMED AGE hello-world-gateway envoy False 16s
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hello-world-httproute
spec:
  parentRefs:
  - name: hello-world-gateway
  hostnames:
  - "hello-world.exposed"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: web
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
The hostname to make traffic routing available for.
The Service backend to route the traffic to.
Determines the proportion of traffic that will be sent to that specific Service.
The routing rules based on the requested URL context.
$ kubectl apply -f httproute.yaml httproute.gateway.networking.k8s.io/hello-world-httproute created
$ kubectl get httproutes NAME HOSTNAMES AGE hello-world-httproute ["hello-world.exposed"] 64s
$ export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system \
--selector=gateway.envoyproxy.io/owning-gateway-namespace=default,\
gateway.envoyproxy.io/owning-gateway-name=hello-world-gateway \
-o jsonpath='{.items[0].metadata.name}')
$ kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8889:80 &
[2] 93490
Forwarding from 127.0.0.1:8889 -> 10080
Forwarding from [::1]:8889 -> 10080
$ curl hello-world.exposed:8889 Handling connection for 8889 Hello World
Remember that the exam environment may have preinstalled Gateway controllers, so always check the available GatewayClasses before creating resources.
Practice creating different Gateway configurations, experiment with various HTTPRoute patterns, and understand how the components work together to build a solid foundation for managing production traffic.
$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE cilium-k5td6 1/1 Running 0 110s cilium-operator-f5dcdcc8d-njfbk 1/1 Running 0 110s
$ kubectl run grocery-store --image=nginx:1.25.3-alpine \ -l app=grocery-store,role=backend --port 80 pod/grocery-store created $ kubectl run payment-processor --image=nginx:1.25.3-alpine \ -l app=payment-processor,role=api --port 80 pod/payment-processor created $ kubectl run coffee-shop --image=nginx:1.25.3-alpine \ -l app=coffee-shop,role=backend --port 80
$ kubectl get pod payment-processor --template '{{.status.podIP}}'
10.244.0.136
$ kubectl exec grocery-store -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
remote file exists
$ kubectl exec coffee-shop -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
remote file exists
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: payment-processor
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: coffee-shop
Selects the Pod the policy should apply to by label selection.
Allows incoming traffic from the Pod with matching labels within the same namespace.
| Attribute | Description |
|---|---|
| podSelector | Selects the Pods in the namespace to apply the network policy to. |
| policyTypes | Defines the type of traffic (i.e., ingress and/or egress) the network policy applies to. |
| ingress | Lists the rules for incoming traffic. Each rule can define from and ports sections. |
| egress | Lists the rules for outgoing traffic. Each rule can define to and ports sections. |
| Attribute | Description |
|---|---|
| podSelector | Selects Pods by label(s) in the same namespace as the network policy that should be allowed as ingress sources or egress destinations. |
| namespaceSelector | Selects namespaces by label(s) for which all Pods should be allowed as ingress sources or egress destinations. |
| namespaceSelector and podSelector | Selects Pods by label(s) within namespaces by label(s). |
$ kubectl apply -f networkpolicy-api-allow.yaml networkpolicy.networking.k8s.io/api-allow created
$ kubectl exec grocery-store -it -- wget --spider --timeout=1 10.244.0.136 Connecting to 10.244.0.136 (10.244.0.136:80) wget: download timed out command terminated with exit code 1 $ kubectl exec coffee-shop -it -- wget --spider --timeout=1 10.244.0.136 Connecting to 10.244.0.136 (10.244.0.136:80) remote file exists
$ kubectl get networkpolicy api-allow NAME POD-SELECTOR AGE api-allow app=payment-processor,role=api 83m
$ kubectl describe networkpolicy api-allow
Name: api-allow
Namespace: default
Created on: 2024-01-10 09:06:59 -0700 MST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=payment-processor,role=api
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
PodSelector: app=coffee-shop
Not affecting egress traffic
Policy Types: Ingress
$ kubectl create namespace internal-tools namespace/internal-tools created $ kubectl run metrics-api --image=nginx:1.25.3-alpine --port=80 \ -l app=api -n internal-tools pod/metrics-api created $ kubectl run metrics-consumer --image=nginx:1.25.3-alpine --port=80 \ -l app=consumer -n internal-tools pod/metrics-consumer created
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: internal-tools
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
The curly braces for spec.podSelector mean "apply to all Pods in the namespace."
Defines the types of traffic the rule should apply to, in this case ingress and egress traffic.
$ kubectl apply -f networkpolicy-deny-all.yaml networkpolicy.networking.k8s.io/default-deny-all created
$ kubectl get pod metrics-api --template '{{.status.podIP}}' -n internal-tools
10.244.0.182
$ kubectl exec metrics-consumer -it -n internal-tools \
-- wget --spider --timeout=1 10.244.0.182
Connecting to 10.244.0.182 (10.244.0.182:80)
wget: download timed out
command terminated with exit code 1
$ kubectl get pod metrics-consumer --template '{{.status.podIP}}' \
-n internal-tools
10.244.0.70
$ kubectl exec metrics-api -it -n internal-tools \
-- wget --spider --timeout=1 10.244.0.70
Connecting to 10.244.0.70 (10.244.0.70:80)
wget: download timed out
command terminated with exit code 1
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: port-allow
  namespace: internal-tools
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: consumer
    ports:
    - protocol: TCP
      port: 80
By default, Pod-to-Pod communication is unrestricted. Instantiate a default deny rule to restrict Pod-to-Pod network traffic with the principle of least privilege. The attribute spec.podSelector of a network policy selects the target Pod the rules apply to based on label selection. The ingress and egress rules define Pods, namespaces, IP addresses, and ports for allowing incoming and outgoing traffic.
Network policies can be aggregated. A default deny rule can disallow ingress and/or egress traffic. An additional network policy can open up those rules with a more fine-grained definition.
To explore common scenarios, look at the GitHub repository named “Kubernetes Network Policy Recipes”. The repository comes with a visual representation for each scenario and walks you through the steps to set up the network policy and the involved Pods. This is a great practice resource.
$ kubectl get pods NAME READY STATUS RESTARTS AGE misbehaving-pod 0/1 ErrImagePull 0 2s
| Status | Root cause | Potential fix |
|---|---|---|
| ErrImagePull / ImagePullBackOff | Image could not be pulled from registry. | Check correct image name, check that image name exists in registry, verify network access from node to registry, ensure proper authentication. |
| CrashLoopBackOff | Application or command run in container crashes. | Check command executed in container, ensure that image can properly execute (e.g., by creating a container with Docker). |
| CreateContainerConfigError | ConfigMap or Secret referenced by container cannot be found. | Check correct name of the configuration object, verify the existence of the configuration object in the namespace. |
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
secret-pod 0/1 ContainerCreating 0 4m57s
$ kubectl describe pod secret-pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully \
assigned
default/secret-pod \
to minikube
Warning FailedMount 3m15s kubelet, minikube Unable to attach or \
mount volumes: \
unmounted \
volumes=[mysecret], \
unattached volumes= \
[default-token-bf8rh \
mysecret]: timed out \
waiting for the \
condition
Warning FailedMount 68s (x10 over 5m18s) kubelet, minikube MountVolume.SetUp \
failed for volume \
"mysecret" : secret \
"mysecret" not found
Warning FailedMount 61s kubelet, minikube Unable to attach or \
mount volumes: \
unmounted volumes= \
[mysecret], \
unattached \
volumes=[mysecret \
default-token-bf8rh \
]: timed out \
waiting for the \
condition
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
3m14s Warning BackOff pod/custom-cmd Back-off \
restarting \
failed container
2s Warning FailedNeedsStart cronjob/google-ping Cannot determine \
if job needs to \
be started: too \
many missed start \
time (> 100). Set \
or decrease \
.spec. \
startingDeadline \
Seconds or check \
clock skew
$ kubectl create deployment nginx --image=nginx:1.24.0 --replicas=3 --port=80 deployment.apps/nginx created
$ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-595dff4799-pfgdg 1/1 Running 0 6m25s nginx-595dff4799-ph4js 1/1 Running 0 6m25s nginx-595dff4799-s76s8 1/1 Running 0 6m25s
$ kubectl port-forward nginx-595dff4799-ph4js 2500:80 Forwarding from 127.0.0.1:2500 -> 80 Forwarding from [::1]:2500 -> 80
$ curl -Is localhost:2500 | head -n 1 HTTP/1.1 200 OK
apiVersion: v1
kind: Pod
metadata:
  name: incorrect-cmd-pod
spec:
  containers:
  - name: test-container
    image: busybox:1.36.1
    command: ["/bin/sh", "-c", "unknown"]
$ kubectl create -f crash-loop-backoff.yaml pod/incorrect-cmd-pod created $ kubectl get pods incorrect-cmd-pod NAME READY STATUS RESTARTS AGE incorrect-cmd-pod 0/1 CrashLoopBackOff 5 3m20s $ kubectl logs incorrect-cmd-pod /bin/sh: unknown: not found
apiVersion: v1
kind: Pod
metadata:
  name: failing-pod
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do echo $(date) >> ~/tmp/curr-date.txt; sleep 5; done;
    image: busybox:1.36.1
    name: failing-pod
$ kubectl create -f failing-pod.yaml pod/failing-pod created $ kubectl get pods failing-pod NAME READY STATUS RESTARTS AGE failing-pod 1/1 Running 0 5s $ kubectl logs failing-pod /bin/sh: can't create /root/tmp/curr-date.txt: nonexistent directory
$ kubectl exec failing-pod -it -- /bin/sh # mkdir -p ~/tmp # cd ~/tmp # ls -l total 4 -rw-r--r-- 1 root root 112 May 9 23:52 curr-date.txt
apiVersion: v1
kind: Pod
metadata:
  name: minimal-pod
spec:
  containers:
  - image: k8s.gcr.io/pause:3.1
    name: pause
$ kubectl create -f minimal-pod.yaml pod/minimal-pod created $ kubectl get pods minimal-pod NAME READY STATUS RESTARTS AGE minimal-pod 1/1 Running 0 8s $ kubectl exec minimal-pod -it -- /bin/sh OCI runtime exec failed: exec failed: container_linux.go:349: starting \ container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file \ or directory": unknown command terminated with exit code 126
$ kubectl alpha debug -it minimal-pod --image=busybox
Defaulting debug container name to debugger-jf98g.
If you don't see a command prompt, try pressing enter.
/ # pwd
/
/ # exit
Session ended, resume using 'kubectl alpha attach minimal-pod -c \
debugger-jf98g -i -t' command when the pod is running
$ kubectl describe service myservice
...
Selector:  app=myapp
...
$ kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE     LABELS
myapp-68bf896d89-qfhlv   1/1     Running   0          7m39s   app=hello
myapp-68bf896d89-tzt55   1/1     Running   0          7m37s   app=world
$ kubectl get service myapp -o yaml | grep targetPort:
targetPort: 80
$ kubectl get pods myapp-68bf896d89-qfhlv -o yaml | grep containerPort:
- containerPort: 80
$ kubectl get endpoints myservice
NAME        ENDPOINTS                     AGE
myservice   172.17.0.5:80,172.17.0.6:80   9m31s
$ kubectl get services
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
myservice   ClusterIP   10.99.155.165   <none>        80/TCP    15m
...
$ kubectl run tmp --image=busybox:1.37.0 -it --rm -- wget 10.99.155.165:80
$ kubectl run tmp --image=busybox:1.37.0 -it --rm -- wget myservice:80
$ kubectl run tmp --image=busybox:1.37.0 -it --rm -- wget \ myservice.default.svc.cluster.local:80
$ kubectl get pods -n kube-system -l k8s-app=kube-dns
$ kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
$ kubectl get networkpolicies -n production
$ kubectl describe networkpolicy api-policy -n production
$ kubectl exec -it frontend-pod -n production -- curl http://backend-service:8080
$ minikube addons enable metrics-server
The 'metrics-server' addon is enabled
$ kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   283m         14%    1262Mi          32%
$ kubectl top pod frontend
NAME       CPU(cores)   MEMORY(bytes)
frontend   0m           2Mi
Applications running in a Pod can easily break due to misconfiguration. Think of scenarios that can occur in practice and model them proactively to reproduce a failure situation. Then use the get, logs, and exec commands to get to the bottom of the issue and fix it, as in the sketch that follows. Deliberately dreaming up obscure scenarios will make you more comfortable with finding and fixing application issues across different resource types. Refer to the Kubernetes documentation to learn more about debugging other Kubernetes resource types.
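For instance, you could break a Pod on purpose and then walk the standard triage path. The following sketch is not taken from the exam; it assumes a hypothetical Pod named broken-app whose manifest mounts a ConfigMap called app-config that has not been created yet:
$ kubectl apply -f broken-app.yaml
$ kubectl get pod broken-app
$ kubectl describe pod broken-app
$ kubectl create configmap app-config --from-literal=mode=debug
$ kubectl get pod broken-app
The describe output should point at the missing ConfigMap in its Events section; once the ConfigMap exists, the Pod eventually transitions to the Running status.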
Accessing container logs is straightforward: use the logs command and practice all of its relevant command-line options. The -c option targets a specific container; it can be omitted for single-container Pods. The -f option tails the log entries if you want to watch an application's live output. The -p option retrieves the logs of the previous container instance after a restart, which is useful when a container crashed but you still want to inspect what it logged.
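As a quick drill, assuming a hypothetical two-container Pod named web with the containers nginx and sidecar, the relevant variations look roughly like this:
$ kubectl logs web -c sidecar
$ kubectl logs web -c nginx -f
$ kubectl logs web -c nginx -p
$ kubectl logs web --all-containers --tail=20
The last command combines two options you will often reach for together: --all-containers prints the logs of every container in the Pod, and --tail limits the output to the most recent entries.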
Follow a systematic approach: first verify basic connectivity and DNS, then check service endpoints and selectors, examine port configurations, investigate any network policies, and finally test the complete traffic path from source to destination. Always remember that the exam environment may have pre-configured network policies or specific networking plugins that affect troubleshooting strategies.
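Applied to a hypothetical Service named web in the namespace demo, that sequence could look like the following; every object name here is a placeholder:
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never --rm -it -n demo \
  -- nslookup web
$ kubectl get endpoints web -n demo
$ kubectl describe service web -n demo
$ kubectl get pods -n demo --show-labels
$ kubectl get networkpolicies -n demo
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never --rm -it -n demo \
  -- wget -O- web:80
If the endpoints list is empty, compare the Service selector with the Pod labels; if the endpoints are populated but the request still fails, look at target ports and network policies next.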
Monitoring a Kubernetes cluster is an important aspect of operating it successfully in the real world. Read up on commercial monitoring products and on the data the metrics server can collect. You can assume that the exam environment provides an installation of the metrics server. Learn how to use the kubectl top command to render Pod and node resource metrics, and how to interpret them.
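For practice, assuming the metrics server is already installed, the following variations are worth rehearsing; the --sort-by and --containers flags are optional refinements:
$ kubectl top nodes
$ kubectl top pods --all-namespaces --sort-by=cpu
$ kubectl top pods -n kube-system --containers
Sorting by CPU or memory helps you spot the heaviest consumers quickly, and --containers breaks the numbers down per container instead of per Pod.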
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
control-plane-node   Ready    control-plane   2m45s   v1.33.2
worker-node-1        Ready    <none>          2m36s   v1.33.2
worker-node-2        Ready    <none>          2m29s   v1.33.2
worker-node-3        Ready    <none>          2m22s   v1.33.2
$ kubectl get pods -n kube-system
NAME                      READY   STATUS    RESTARTS      AGE
etcd                      1/1     Running   1 (11d ago)   29d
kube-apiserver            1/1     Running   1 (11d ago)   29d
kube-controller-manager   1/1     Running   1 (11d ago)   29d
kube-scheduler            1/1     Running   1 (11d ago)   29d
...
$ kubectl logs kube-apiserver -n kube-system
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.64.21:8443
CoreDNS is running at https://192.168.64.21:8443/api/v1/namespaces/ \
kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
$ kubectl cluster-info dump
$ kubectl get nodes
NAME            STATUS     ROLES           AGE     VERSION
control-plane   Ready      control-plane   4d20h   v1.33.2
worker-1        NotReady   <none>          4d20h   v1.33.2
worker-2        Ready      <none>          4d20h   v1.33.2
$ kubectl describe node worker-1
....
Conditions:
Type Status LastHeartbeatTime \
LastTransitionTime Reason Message
---- ------ ----------------- \
------------------ ------ -------
NetworkUnavailable False Thu, 20 Jan 2022 18:12:13 +0000 \
Thu, 20 Jan 2022 18:12:13 +0000 CalicoIsUp \
Calico is running on this node
MemoryPressure False Tue, 25 Jan 2022 15:59:18 +0000 \
Thu, 20 Jan 2022 18:11:47 +0000 KubeletHasSufficientMemory \
kubelet has sufficient memory available
DiskPressure False Tue, 25 Jan 2022 15:59:18 +0000 \
Thu, 20 Jan 2022 18:11:47 +0000 KubeletHasNoDiskPressure \
kubelet has no disk pressure
PIDPressure False Tue, 25 Jan 2022 15:59:18 +0000 \
Thu, 20 Jan 2022 18:11:47 +0000 KubeletHasSufficientPID \
kubelet has sufficient PID available
Ready True Tue, 25 Jan 2022 15:59:18 +0000 \
Thu, 20 Jan 2022 18:12:07 +0000 KubeletReady \
kubelet is posting ready status. AppArmor enabled
...
$ top top - 18:45:09 up 1 day, 2:21, 1 user, load average: 0.13, 0.13, 0.15 Tasks: 116 total, 3 running, 70 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.5 us, 0.8 sy, 0.0 ni, 97.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 1008552 total, 134660 free, 264604 used, 609288 buff/cache KiB Swap: 0 total, 0 free, 0 used. 611248 avail Mem ...
$ df -h Filesystem Size Used Avail Use% Mounted on udev 480M 0 480M 0% /dev tmpfs 99M 1.0M 98M 2% /run /dev/sda1 39G 2.7G 37G 7% / tmpfs 493M 0 493M 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 493M 0 493M 0% /sys/fs/cgroup vagrant 1.9T 252G 1.6T 14% /vagrant tmpfs 99M 0 99M 0% /run/user/1000
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; \
vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2022-01-20 18:11:41 UTC; 5 days ago
Docs: https://kubernetes.io/docs/home/
Main PID: 6537 (kubelet)
Tasks: 15 (limit: 1151)
CGroup: /system.slice/kubelet.service
└─6537 /usr/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--config=/var/lib/kubelet/config.yaml ...
$ journalctl -u kubelet.service -- Logs begin at Thu 2022-01-20 18:10:41 UTC, end at Tue 2022-01-25 18:44:05 UTC. -- Jan 20 18:11:31 worker-1 systemd[1]: Started kubelet: The Kubernetes Node Agent. Jan 20 18:11:31 worker-1 systemd[1]: kubelet.service: Current command vanished \ from the unit file, execution of the command list won't be resumed. Jan 20 18:11:31 worker-1 systemd[1]: Stopping kubelet: The Kubernetes Node Agent... Jan 20 18:11:31 worker-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent. Jan 20 18:11:31 worker-1 systemd[1]: Started kubelet: The Kubernetes Node Agent. ....
$ systemctl restart kubelet
$ openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = worker-1-ca@1642702301
Validity
Not Before: Jan 20 17:11:41 2022 GMT
Not After : Jan 20 17:11:41 2023 GMT
Subject: CN = worker-1@1642702301
...
$ kubeadm certs check-expiration [check-expiration] Reading configuration from the cluster... [check-expiration] FYI: You can look at this config file with 'kubectl -n \ kube-system get cm kubeadm-config -o yaml' CERTIFICATE EXPIRES RESIDUAL TIME ... admin.conf Aug 31, 2026 14:28 UTC 364d ... apiserver Aug 31, 2026 14:28 UTC 364d ... apiserver-etcd-client Aug 31, 2026 14:28 UTC 364d ... apiserver-kubelet-client Aug 31, 2026 14:28 UTC 364d ... controller-manager.conf Aug 31, 2026 14:28 UTC 364d ... etcd-healthcheck-client Aug 31, 2026 14:28 UTC 364d ... etcd-peer Aug 31, 2026 14:28 UTC 364d ... etcd-server Aug 31, 2026 14:28 UTC 364d ... front-proxy-client Aug 31, 2026 14:28 UTC 364d ... scheduler.conf Aug 31, 2026 14:28 UTC 364d ... super-admin.conf Aug 31, 2026 14:28 UTC 364d ... CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME ... ca Aug 29, 2035 14:28 UTC 9y ... etcd-ca Aug 29, 2035 14:28 UTC 9y ... front-proxy-ca Aug 29, 2035 14:28 UTC 9y ...
$ kubeadm certs renew all
$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE ... kube-proxy-csrww 1/1 Running 0 4d22h kube-proxy-fjd48 1/1 Running 0 4d22h kube-proxy-tvf52 1/1 Running 0 4d22h
$ kubectl describe pod kube-proxy-csrww -n kube-system $ kubectl describe daemonset kube-proxy -n kube-system
$ kubectl describe pod kube-proxy-csrww -n kube-system | grep Node: Node: worker-1/10.0.2.15 $ kubectl logs kube-proxy-csrww -n kube-system
In the exam, you must rapidly identify why nodes are “NotReady” and fix them within minutes. Memorize this sequence: kubectl get nodes, kubectl describe node <node-name>, check the “Conditions” section for specific issues (MemoryPressure, DiskPressure, PIDPressure, Ready). Know how to SSH into nodes and use systemctl status kubelet, systemctl restart kubelet, and journalctl -u kubelet | tail -50 to diagnose and fix kubelet issues.
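Condensed into commands, that triage could look as follows for a hypothetical node named worker-1; adapt the node name and the SSH mechanism to the exam environment:
$ kubectl get nodes
$ kubectl describe node worker-1 | grep -A 15 Conditions:
$ ssh worker-1
$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet | tail -50
$ sudo systemctl restart kubelet
$ exit
$ kubectl get nodes
A stopped or misconfigured kubelet is a common cause of a NotReady node, so checking and restarting it is usually the fastest path back to a healthy state.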
You must instantly recognize and fix control plane component failures. Remember that static pods (API server, controller-manager, scheduler, etcd) have their manifests in the directory /etc/kubernetes/manifests and their logs accessible via kubectl logs <component>-<node-name> -n kube-system. For crashed components, know that editing the manifest file directly will trigger the kubelet to restart the Pod automatically.
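A minimal sketch of that workflow, assuming a hypothetical control plane node named control-plane and a typo in the scheduler manifest:
$ ssh control-plane
$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
$ kubectl logs kube-scheduler-control-plane -n kube-system
$ sudo vim /etc/kubernetes/manifests/kube-scheduler.yaml
$ kubectl get pods -n kube-system --watch
After you save the corrected manifest, the kubelet picks up the change and recreates the static Pod without any further action on your part.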
Understand how node issues prevent Pod scheduling and know the quick fixes. When Pods are “Pending”, check for node taints (kubectl describe node | grep Taint), verify node capacity (kubectl describe node | grep -A5 "Allocated resources"), and look for cordoned nodes (kubectl get nodes | grep SchedulingDisabled). Master these recovery commands: kubectl uncordon <node> to enable scheduling, kubectl taint nodes <node> <taint-key>- to remove taints (note the minus sign), and kubectl drain <node> --ignore-daemonsets --delete-emptydir-data for node maintenance.
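The same checks expressed as commands, using a hypothetical node named worker-2 and a hypothetical taint key named maintenance:
$ kubectl describe node worker-2 | grep Taint
$ kubectl describe node worker-2 | grep -A5 "Allocated resources"
$ kubectl get nodes | grep SchedulingDisabled
$ kubectl uncordon worker-2
$ kubectl taint nodes worker-2 maintenance-
$ kubectl drain worker-2 --ignore-daemonsets --delete-emptydir-data
Keep in mind that removing a taint only helps Pods that were rejected by that taint; Pods stuck in Pending because of insufficient CPU or memory need smaller resource requests or additional node capacity instead.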
$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready control-plane 2m20s v1.32.2 minikube-m02 Ready <none> 2m10s v1.32.2 minikube-m03 Ready <none> 2m3s v1.32.2 minikube-m04 Ready <none> 116s v1.32.2
$ kubectl run nginx --image=nginx:1.27.4-alpine
$ kubectl get pod nginx -o jsonpath='{.spec.nodeName}'
minikube-m02
$ kubectl drain minikube-m02 --ignore-daemonsets --force evicting pod default/nginx pod/nginx evicted node/minikube-m02 drained
$ vagrant ssh kube-control-plane
$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get \ install -y kubeadm=1.32.2-1.1 && sudo apt-mark hold kubeadm $ sudo kubeadm upgrade apply v1.32.2
$ kubectl drain kube-control-plane --ignore-daemonsets $ sudo apt-get update && sudo apt-get install -y \ --allow-change-held-packages kubelet=1.32.2-1.1 kubectl=1.32.2-1.1 $ sudo systemctl daemon-reload $ sudo systemctl restart kubelet $ kubectl uncordon kube-control-plane
$ kubectl get nodes $ exit
$ vagrant ssh kube-worker-1
$ sudo apt-get update && sudo apt-get install -y \ --allow-change-held-packages kubeadm=1.32.2-1.1 $ sudo kubeadm upgrade node
$ kubectl drain kube-worker-1 --ignore-daemonsets $ sudo apt-get update && sudo apt-get install -y \ --allow-change-held-packages kubelet=1.32.2-1.1 kubectl=1.32.2-1.1 $ sudo systemctl daemon-reload $ sudo systemctl restart kubelet $ kubectl uncordon kube-worker-1
$ kubectl get nodes $ exit
$ vagrant ssh kube-control-plane
$ kubectl get pods -n kube-system
$ kubectl get pod etcd-kube-control-plane -n kube-system \
-o jsonpath="{.spec.containers[0].image}"
$ echo "3.5.16" > etcd-version.txt
$ exit
$ vagrant ssh kube-control-plane
$ kubectl describe pod etcd-kube-control-plane -n kube-system $ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \ --cert=/etc/kubernetes/pki/etcd/server.crt \ --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd.bak
$ sudo ETCDCTL_API=3 etcdutl --data-dir=/var/bak snapshot restore \ /opt/etcd.bak $ sudo vim /etc/kubernetes/manifests/etcd.yaml
$ kubectl get pod etcd-kube-control-plane -n kube-system $ exit
$ kubectl create clusterrole service-view --verb=get,list \ --resource=services
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-view
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
$ kubectl apply -f service-view-clusterrole.yaml
$ kubectl create rolebinding ellasmith-service-view --user=ellasmith \ --clusterrole=service-view -n development
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ellasmith-service-view
  namespace: development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-view
subjects:
- kind: User
  name: ellasmith
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f ellasmith-service-view-rolebinding.yaml
$ kubectl create clusterrole combined \ --aggregation-rule="rbac.cka.cncf.com/aggregate=true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: combined
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.cka.cncf.com/aggregate: "true"
rules: []
$ kubectl apply -f combined-clusterrole.yaml
$ kubectl describe clusterrole combined Name: combined Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- -----
$ kubectl create clusterrole deployment-modify \ --verb=create,delete,patch,update --resource=deployments \ --dry-run=client -o yaml > deployment-modify-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-modify
  labels:
    rbac.cka.cncf.com/aggregate: "true"
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "patch", "update"]
$ kubectl apply -f deployment-modify-clusterrole.yaml
$ kubectl describe clusterrole combined Name: combined Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- deployments.apps [] [] [create \ delete patch update]
$ kubectl auth can-i list services --as=ellasmith --namespace=development yes
$ echo 'yes' > list-services-ellasmith.txt
$ kubectl auth can-i watch deployments --as=ellasmith --namespace=production no
$ echo 'no' > watch-deployments-ellasmith.txt
$ kubectl create namespace apps $ kubectl create serviceaccount api-access -n apps
apiVersion: v1
kind: Namespace
metadata:
  name: apps
$ kubectl apply -f apps-namespace.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-access
  namespace: apps
$ kubectl create -f api-serviceaccount.yaml
$ kubectl create clusterrole api-clusterrole --verb=watch,list,get \ --resource=pods
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["watch", "list", "get"]
$ kubectl apply -f api-clusterrole.yaml
$ kubectl create clusterrolebinding api-clusterrolebinding \ --serviceaccount=apps:api-access --clusterrole=api-clusterrole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: api-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: api-clusterrole
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: api-access
  namespace: apps
$ kubectl apply -f api-clusterrolebinding.yaml
$ kubectl run operator --image=nginx:1.21.1 --restart=Never \ --port=80 --serviceaccount=api-access -n apps $ kubectl create namespace rm $ kubectl run disposable --image=nginx:1.21.1 --restart=Never \ -n rm
apiVersion: v1
kind: Namespace
metadata:
  name: rm
apiVersion: v1
kind: Pod
metadata:
  name: operator
  namespace: apps
spec:
  serviceAccountName: api-access
  containers:
  - name: operator
    image: nginx:1.21.1
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: disposable
  namespace: rm
spec:
  containers:
  - name: disposable
    image: nginx:1.21.1
$ kubectl create -f rm-namespace.yaml $ kubectl create -f api-pods.yaml
$ kubectl config view --minify -o \
jsonpath='{.clusters[0].cluster.server}'
https://192.168.64.4:8443
$ kubectl get secret $(kubectl get serviceaccount api-access -n apps \
-o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' -n apps \
| base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI...
$ kubectl exec operator -it -n apps -- /bin/sh
# curl https://192.168.64.4:8443/api/v1/namespaces/rm/pods --header \
"Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI..." \
--insecure
{
"kind": "PodList",
"apiVersion": "v1",
...
}
# curl -X DELETE https://192.168.64.4:8443/api/v1/namespaces \
/rm/pods/disposable --header \
"Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI..." \
--insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "pods \"disposable\" is forbidden: User \
\"system:serviceaccount:apps:api-access\" cannot delete \
resource \"pods\" in
API group \"\" in the namespace \"rm\"",
"reason": "Forbidden",
"details": {
"name": "disposable",
"kind": "pods"
},
"code": 403
}
$ kubectl apply -f https://raw.githubusercontent.com/mongodb/\ mongodb-kubernetes-operator/master/config/crd/bases/mongodbcommunity.\ mongodb.com_mongodbcommunity.yaml customresourcedefinition.apiextensions.k8s.io/mongodbcommunity.\ mongodbcommunity.mongodb.com created
$ kubectl get crds NAME CREATED AT mongodbcommunity.mongodbcommunity.mongodb.com 2023-12-18T23:44:04Z
$ kubectl describe crds mongodbcommunity.mongodbcommunity.mongodb.com
$ kubectl apply -f backup-resource.yaml customresourcedefinition.apiextensions.k8s.io/backups.example.com created
$ kubectl get crd backups.example.com NAME CREATED AT backups.example.com 2023-05-24T15:11:15Z $ kubectl describe crd backups.example.com ...
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nginx-backup
spec:
  cronExpression: "0 0 * * *"
  podName: nginx
  path: /usr/local/nginx
$ kubectl apply -f backup.yaml backup.example.com/nginx-backup created
$ kubectl get backups NAME AGE nginx-backup 24s $ kubectl describe backup nginx-backup ...
$ helm repo add prometheus-community https://prometheus-community.\ github.io/helm-charts "prometheus-community" has been added to your repositories
$ helm repo update prometheus-community Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "prometheus-community" \ chart repository Update Complete. ⎈Happy Helming!⎈
$ helm search hub prometheus-community URL CHART VERSION ... https://artifacthub.io/packages/helm/prometheus... 70.3.0 ...
$ helm install prometheus prometheus-community/kube-prometheus-stack NAME: prometheus LAST DEPLOYED: Wed Mar 26 14:14:53 2025 NAMESPACE: default STATUS: deployed REVISION: 1 ...
$ helm list NAME NAMESPACE REVISION UPDATED ... prometheus default 1 2025-03-26 ...
$ kubectl get service prometheus-operated NAME TYPE CLUSTER-IP EXTERNAL-IP ... prometheus-operated ClusterIP None <none> ...
$ kubectl port-forward service/prometheus-operated 8080:9090 Forwarding from 127.0.0.1:8080 -> 9090 Forwarding from [::1]:8080 -> 9090
$ helm uninstall prometheus release "prometheus" uninstalled
$ kubectl apply -f manifests/ -R configmap/logs-config created pod/nginx created
$ vim manifests/configmap.yaml $ kubectl apply -f manifests/configmap.yaml configmap/logs-config configured
$ kubectl delete -f manifests/ -R configmap "logs-config" deleted pod "nginx" deleted
namespace: t012
resources:
- pod.yaml
$ kubectl kustomize ./
apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: t012
spec:
containers:
- image: nginx:1.21.1
name: nginx
$ kubectl create namespace j43
$ kubectl run nginx --image=nginx:1.17.10 --port=80 --namespace=j43
apiVersion: v1
kind: Namespace
metadata:
  name: j43
$ kubectl apply -f namespace.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.10
    ports:
    - containerPort: 80
$ kubectl apply -f nginx-pod.yaml --namespace=j43
$ kubectl get pod nginx --namespace=j43 -o wide
$ kubectl describe pod nginx --namespace=j43 | grep IP:
$ kubectl run busybox --image=busybox:1.36.1 --restart=Never --rm -it \ -n j43 -- wget -O- 10.1.0.66:80
$ kubectl logs nginx --namespace=j43
$ kubectl edit pod nginx --namespace=j43
$ kubectl delete pod nginx --namespace=j43
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.10
    ports:
    - containerPort: 80
    env:
    - name: DB_URL
      value: postgresql://mydb:5432
    - name: DB_USERNAME
      value: admin
$ kubectl apply -f nginx-pod.yaml --namespace=j43
$ kubectl exec -it nginx --namespace=j43 -- /bin/sh # ls -l
$ kubectl run loop --image=busybox:1.36.1 -o yaml --dry-run=client \ --restart=Never -- /bin/sh -c 'for i in 1 2 3 4 5 6 7 8 9 10; \ do echo "Welcome $i times"; done' \ > pod.yaml
$ kubectl apply -f pod.yaml --namespace=j43
$ kubectl get pod loop --namespace=j43
$ kubectl delete pod loop --namespace=j43
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: loop
  name: loop
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do date; sleep 10; done
    image: busybox:1.36.1
    name: loop
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl apply -f pod.yaml --namespace=j43
$ kubectl describe pod loop --namespace=j43 | grep -C 10 Events:
$ kubectl delete namespace j43
$ kubectl create configmap app-config --from-file=application.yaml configmap/app-config created
$ kubectl get configmap app-config -o yaml
apiVersion: v1
data:
application.yaml: |-
dev:
url: http://dev.bar.com
name: Developer Setup
prod:
url: http://foo.bar.com
name: My Cool App
kind: ConfigMap
metadata:
creationTimestamp: "2023-05-22T17:47:52Z"
name: app-config
namespace: default
resourceVersion: "7971"
uid: 00cf4ce2-ebec-48b5-a721-e1bde2aabd84
$ kubectl run backend --image=nginx:1.23.4-alpine -o yaml \ --dry-run=client --restart=Never > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
$ kubectl apply -f pod.yaml pod/backend created
$ kubectl exec backend -it -- /bin/sh / # cd /etc/config /etc/config # ls application.yaml /etc/config # cat application.yaml dev: url: http://dev.bar.com name: Developer Setup prod: url: http://foo.bar.com name: My Cool App /etc/config # exit
$ kubectl create secret generic db-credentials --from-literal=\ db-password=passwd secret/db-credentials created
$ kubectl get secret db-credentials -o yaml apiVersion: v1 data: db-password: cGFzc3dk kind: Secret metadata: creationTimestamp: "2023-05-22T16:47:33Z" name: db-credentials namespace: default resourceVersion: "7557" uid: 2daf580a-b672-40dd-8c37-a4adb57a8c6c type: Opaque
$ kubectl run backend --image=nginx:1.23.4-alpine -o yaml \ --dry-run=client --restart=Never > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: db-password
$ kubectl apply -f pod.yaml pod/backend created
$ kubectl exec -it backend -- env DB_PASSWORD=passwd
$ kubectl apply -f fix-me-deployment.yaml
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels:\
Invalid value: map[string]string{"app":"nginx"}: `selector` does not \
match template `labels`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: server
  template:
    metadata:
      labels:
        run: server
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
$ kubectl apply -f fix-me-deployment.yaml deployment.apps/nginx-deployment created
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      labels:
        app: v1
    spec:
      containers:
      - image: nginx:1.23.0
        name: nginx
$ kubectl apply -f nginx-deployment.yaml deployment.apps/nginx created $ kubectl get deployment nginx NAME READY UP-TO-DATE AVAILABLE AGE nginx 3/3 3 3 10s
$ kubectl set image deployment/nginx nginx=nginx:1.23.4
deployment.apps/nginx image updated
$ kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
$ kubectl rollout history deployment nginx --revision=2
deployment.apps/nginx with revision #2
Pod Template:
Labels: app=v1
pod-template-hash=5bd95c598
Containers:
nginx:
Image: nginx:1.23.4
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
$ kubectl annotate deployment nginx kubernetes.io/change-cause=\ "Pick up patch version" deployment.apps/nginx annotated
$ kubectl rollout history deployment nginx deployment.apps/nginx REVISION CHANGE-CAUSE 1 <none> 2 Pick up patch version
$ kubectl scale deployment nginx --replicas=5 deployment.apps/nginx scaled $ kubectl get pod -l app=v1 NAME READY STATUS RESTARTS AGE nginx-5bd95c598-25z4j 1/1 Running 0 3m39s nginx-5bd95c598-46mlt 1/1 Running 0 3m38s nginx-5bd95c598-bszvp 1/1 Running 0 48s nginx-5bd95c598-dwr8r 1/1 Running 0 48s nginx-5bd95c598-kjrvf 1/1 Running 0 3m37s
$ kubectl rollout undo deployment/nginx --to-revision=1
deployment.apps/nginx rolled back
$ kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
2 Pick up patch version
3 <none>
$ kubectl rollout history deployment nginx --revision=3
deployment.apps/nginx with revision #3
Pod Template:
Labels: app=v1
pod-template-hash=f48dc88cd
Containers:
nginx:
Image: nginx:1.23.0
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      run: hello-world
  template:
    metadata:
      labels:
        run: hello-world
    spec:
      containers:
      - image: bmuschko/nodejs-hello-world:1.0.0
        name: hello-world
$ kubectl apply -f hello-world-deployment.yaml deployment.apps/hello-world created
$ kubectl get deployment hello-world NAME READY UP-TO-DATE AVAILABLE AGE hello-world 3/3 3 3 32s
$ vim hello-world-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: hello-world spec: replicas: 8 ...
$ kubectl apply -f hello-world-deployment.yaml deployment.apps/hello-world configured
$ kubectl get deployment hello-world NAME READY UP-TO-DATE AVAILABLE AGE hello-world 8/8 8 8 56s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.23.4
        name: nginx
        resources:
          requests:
            cpu: "0.5"
            memory: "500Mi"
          limits:
            memory: "500Mi"
$ kubectl apply -f nginx-deployment.yaml deployment.apps/nginx created
$ kubectl get deployment nginx NAME READY UP-TO-DATE AVAILABLE AGE nginx 1/1 1 1 49s $ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-5bbd9746c-9b4np 1/1 Running 0 24s
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60
$ kubectl apply -f nginx-hpa.yaml horizontalpodautoscaler.autoscaling/nginx-hpa created
$ kubectl get hpa nginx-hpa NAME REFERENCE TARGETS MINPODS MAXPODS \ REPLICAS AGE nginx-hpa Deployment/nginx 0%/60%, 0%/75% 3 8 \ 3 2m19s
apiVersion:v1kind:Podmetadata:name:hellospec:containers:-image:bmuschko/nodejs-hello-world:1.0.0name:helloports:-name:nodejs-portcontainerPort:3000volumeMounts:-name:log-volumemountPath:"/var/log"resources:requests:cpu:100mmemory:500Miephemeral-storage:1Gilimits:memory:500Miephemeral-storage:2Givolumes:-name:log-volumeemptyDir:{}
$ kubectl apply -f pod.yaml pod/hello created
$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready control-plane 65s v1.32.2 minikube-m02 Ready <none> 44s v1.32.2 minikube-m03 Ready <none> 26s v1.32.2
$ kubectl get pod hello -o wide NAME READY STATUS RESTARTS AGE IP NODE hello 1/1 Running 0 25s 10.244.2.2 minikube-m03
$ kubectl describe pod hello
...
Containers:
hello:
...
Limits:
ephemeral-storage: 2Gi
memory: 500Mi
Requests:
cpu: 100m
ephemeral-storage: 1Gi
memory: 500Mi
...
$ kubectl create namespace rq-demo namespace/rq-demo created $ kubectl apply -f resourcequota.yaml --namespace=rq-demo resourcequota/app created
$ kubectl describe quota app --namespace=rq-demo Name: app Namespace: rq-demo Resource Used Hard -------- ---- ---- pods 0 2 requests.cpu 0 2 requests.memory 0 500Mi
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: nginx
    name: mypod
    resources:
      requests:
        cpu: "0.5"
        memory: "1Gi"
  restartPolicy: Never
$ kubectl apply -f pod.yaml --namespace=rq-demo Error from server (Forbidden): error when creating "pod.yaml": pods \ "mypod" is forbidden: exceeded quota: app, requested: \ requests.memory=1Gi, used: requests.memory=0, limited: \ requests.memory=500Mi
$ kubectl apply -f pod.yaml --namespace=rq-demo pod/mypod created
$ kubectl describe quota --namespace=rq-demo Name: app Namespace: rq-demo Resource Used Hard -------- ---- ---- pods 1 2 requests.cpu 500m 2 requests.memory 255Mi 500Mi
$ kubectl apply -f setup.yaml namespace/d92 created limitrange/cpu-limit-range created
$ kubectl describe limitrange cpu-limit-range -n d92 Name: cpu-limit-range Namespace: d92 Type Resource Min Max Default Request Default Limit ... ---- -------- --- --- --------------- ------------- Container cpu 200m 500m 500m 500m ...
apiVersion: v1
kind: Pod
metadata:
  name: pod-without-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx
$ kubectl apply -f pod-without-resource-requirements.yaml pod/pod-without-resource-requirements created
$ kubectl describe pod pod-without-resource-requirements -n d92
...
Containers:
nginx:
Limits:
cpu: 500m
Requests:
cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-more-cpu-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx
    resources:
      requests:
        cpu: 400m
      limits:
        cpu: 1.5
$ kubectl apply -f pod-with-more-cpu-resource-requirements.yaml Error from server (Forbidden): error when creating \ "pod-with-more-cpu-resource-requirements.yaml": pods \ "pod-with-more-cpu-resource-requirements" is forbidden: \ maximum cpu usage per Container is 500m, but limit is 1500m
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-less-cpu-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx
    resources:
      requests:
        cpu: 350m
      limits:
        cpu: 400m
$ kubectl apply -f pod-with-less-cpu-resource-requirements.yaml pod/pod-with-less-cpu-resource-requirements created
$ kubectl describe pod pod-with-less-cpu-resource-requirements -n d92
...
Containers:
nginx:
Limits:
cpu: 400m
Requests:
cpu: 350m
$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready control-plane 3m16s v1.32.2 minikube-m02 Ready <none> 3m6s v1.32.2 minikube-m03 Ready <none> 2m59s v1.32.2
$ kubectl label nodes minikube-m02 color=green node/minikube-m02 labeled $ kubectl label nodes minikube-m03 color=red node/minikube-m03 labeled
$ kubectl get nodes --show-labels NAME STATUS ROLES AGE VERSION LABELS minikube Ready control-plane 27m v1.32.2 ... minikube-m02 Ready <none> 27m v1.32.2 ...,color=green,... minikube-m03 Ready <none> 26m v1.32.2 ...,color=red,...
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  nodeSelector:
    color: green
  containers:
  - name: nginx
    image: nginx:1.27.1
$ kubectl apply -f pod.yaml pod/app created $ kubectl get pod app -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... app 1/1 Running 0 21s 10.244.1.2 minikube-m02 ...
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - green
            - red
  containers:
  - name: nginx
    image: nginx:1.27.1
$ kubectl delete -f pod.yaml pod "app" deleted $ kubectl apply -f pod.yaml pod/app created $ kubectl get pod app -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... app 1/1 Running 0 12s 10.244.1.3 minikube-m02 ...
$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready control-plane 3m16s v1.32.2 minikube-m02 Ready <none> 3m6s v1.32.2 minikube-m03 Ready <none> 2m59s v1.32.2
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: nginx
    image: nginx:1.27.1
$ kubectl apply -f pod.yaml pod/app created $ kubectl get pod app -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... app 1/1 Running 0 89s 10.244.2.2 minikube-m03 ...
$ kubectl taint nodes minikube-m03 exclusive=yes:NoExecute node/minikube-m03 tainted
$ kubectl get pods No resources found in default namespace.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  tolerations:
  - key: "exclusive"
    operator: "Equal"
    value: "yes"
    effect: "NoExecute"
  containers:
  - name: nginx
    image: nginx:1.27.1
$ kubectl apply -f pod.yaml pod/app created $ kubectl get pod app -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... app 1/1 Running 0 9s 10.244.2.3 minikube-m03 ...
$ kubectl taint nodes minikube-m03 exclusive- node/minikube-m03 untainted $ kubectl get pod app -o=wide NAME READY STATUS RESTARTS AGE IP NODE ... app 1/1 Running 0 37s 10.244.2.3 minikube-m03 ...
$ kubectl run alpine --image=alpine:3.12.0 --dry-run=client \ --restart=Never -o yaml -- /bin/sh -c "while true; do sleep 60; \ done;" > multi-container-alpine.yaml $ vim multi-container-alpine.yaml
apiVersion:v1kind:Podmetadata:creationTimestamp:nulllabels:run:alpinename:alpinespec:containers:-args:-/bin/sh--c-while true; do sleep 60; done;image:alpine:3.12.0name:container1resources:{}-args:-/bin/sh--c-while true; do sleep 60; done;image:alpine:3.12.0name:container2resources:{}dnsPolicy:ClusterFirstrestartPolicy:Alwaysstatus:{}
apiVersion:v1kind:Podmetadata:creationTimestamp:nulllabels:run:alpinename:alpinespec:volumes:-name:shared-volemptyDir:{}containers:-args:-/bin/sh--c-while true; do sleep 60; done;image:alpine:3.12.0name:container1volumeMounts:-name:shared-volmountPath:/etc/aresources:{}-args:-/bin/sh--c-while true; do sleep 60; done;image:alpine:3.12.0name:container2volumeMounts:-name:shared-volmountPath:/etc/bresources:{}dnsPolicy:ClusterFirstrestartPolicy:Alwaysstatus:{}
$ kubectl apply -f multi-container-alpine.yaml pod/alpine created $ kubectl get pods NAME READY STATUS RESTARTS AGE alpine 2/2 Running 0 18s
$ kubectl exec alpine -c container1 -it -- /bin/sh / # cd /etc/a /etc/a # ls -l total 0 /etc/a # mkdir data /etc/a # cd data/ /etc/a/data # echo "Hello World" > hello.txt /etc/a/data # cat hello.txt Hello World /etc/a/data # exit
$ kubectl exec alpine -c container2 -it -- /bin/sh / # cat /etc/b/data/hello.txt Hello World / # exit
kind: PersistentVolume
apiVersion: v1
metadata:
  name: logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  hostPath:
    path: /var/logs
$ kubectl apply -f logs-pv.yaml
persistentvolume/logs-pv created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM \
STORAGECLASS REASON AGE
logs-pv 5Gi RWO,ROX Retain Available \
18s
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: logs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""
$ kubectl apply -f logs-pvc.yaml persistentvolumeclaim/logs-pvc created $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS ... logs-pvc Bound logs-pv 5Gi RWO,ROX ...
$ kubectl run nginx --image=nginx:1.25.1 --dry-run=client \ -o yaml > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes:
  - name: logs-volume
    persistentVolumeClaim:
      claimName: logs-pvc
  containers:
  - image: nginx:1.25.1
    name: nginx
    volumeMounts:
    - mountPath: "/var/log/nginx"
      name: logs-volume
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl apply -f nginx-pod.yaml pod/nginx created $ kubectl get pods NAME READY STATUS RESTARTS AGE nginx 1/1 Running 0 8s
$ kubectl exec nginx -it -- /bin/sh # cd /var/log/nginx # touch my-nginx.log # ls access.log error.log my-nginx.log # exit
$ kubectl delete pod nginx pod "nginx" deleted $ kubectl apply -f nginx-pod.yaml pod/nginx created $ kubectl exec nginx -it -- /bin/sh # cd /var/log/nginx # ls access.log error.log my-nginx.log # exit
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
  namespace: persistence
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
$ kubectl apply -f db-pvc.yaml persistentvolumeclaim/db-pvc created
$ kubectl get pvc -n persistence NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS ... db-pvc Pending local-path ...
$ kubectl get pv -n persistence No resources found
$ kubectl run app-consuming-pvc --image=alpine:3.21.3 -n persistence \ --dry-run=client --restart=Never -o yaml \ -- /bin/sh -c "while true; do sleep 60; done;" \ > alpine-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-consuming-pvc
  namespace: persistence
spec:
  volumes:
  - name: app-storage
    persistentVolumeClaim:
      claimName: db-pvc
  containers:
  - image: alpine:3.21.3
    name: app
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 60; done;"]
    volumeMounts:
    - mountPath: "/mnt/data"
      name: app-storage
  restartPolicy: Never
$ kubectl apply -f alpine-pod.yaml pod/app-consuming-pvc created
$ kubectl get pods -n persistence NAME READY STATUS RESTARTS AGE app-consuming-pvc 1/1 Running 0 8s
$ kubectl get pv -n persistence NAME CAPACITY ACCESS MODES ... pvc-af39068d-0cc2-4625-8a56-7b5207b79ace 128Mi RWO ...
$ kubectl exec app-consuming-pvc -n persistence -it -- /bin/sh # cd /mnt/data # touch test.db # ls test.db # exit
apiVersion:apps/v1kind:Deploymentmetadata:name:webappspec:replicas:3selector:matchLabels:app:webapptemplate:metadata:labels:app:webappspec:containers:-name:webappimage:nginxdemos/hello:0.4-plain-textports:-containerPort:80-containerPort:9090
apiVersion:v1kind:Servicemetadata:name:webapp-servicespec:type:NodePortselector:app:webappports:-name:webport:80targetPort:80nodePort:30080-name:metricsport:9090targetPort:9090
$ kubectl apply -f deployment.yaml $ kubectl apply -f service.yaml
$ kubectl get deployment,service
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/webapp 3/3 3 3 6s
NAME TYPE CLUSTER-IP EXTERNAL-IP \
PORT(S) AGE
service/webapp-service NodePort 10.111.80.190 <none> \
80:30080/TCP,9090:31231/TCP 6s
$ kubectl port-forward service/webapp-service 9091:80 & Forwarding from 127.0.0.1:9091 -> 80 Forwarding from [::1]:9091 -> 80 Handling connection for 9091
$ curl localhost:9091 Server address: 127.0.0.1:80 Server name: webapp-6dc64898b-hwll6 Date: 21/Aug/2025:02:09:01 +0000 URI: / Request ID: b5e0f6324ad7bac1b513dba9d2c1cf64
apiVersion:apps/v1kind:Deploymentmetadata:name:databasespec:replicas:1selector:matchLabels:app:databasetemplate:metadata:labels:app:databasespec:containers:-name:mysqlimage:mysql:9.4.0env:-name:MYSQL_ROOT_PASSWORDvalue:secretpass-name:MYSQL_DATABASEvalue:myappports:-containerPort:3306
apiVersion:v1kind:Servicemetadata:name:database-servicespec:type:ClusterIPselector:app:databaseports:-port:3306targetPort:3306
apiVersion:apps/v1kind:Deploymentmetadata:name:frontendspec:replicas:2selector:matchLabels:app:frontendtemplate:metadata:labels:app:frontendspec:containers:-name:frontendimage:busybox:1.35command:-sh--c-"whiletrue;donc-zvdatabase-service3306;sleep5;done"
$ kubectl apply -f database-deployment.yaml $ kubectl apply -f database-service.yaml $ kubectl apply -f frontend-deployment.yaml
$ kubectl logs -l app=frontend database-service (10.101.125.103:3306) open database-service (10.101.125.103:3306) open
apiVersion:v1kind:Namespacemetadata:name:webapp
apiVersion:apps/v1kind:Deploymentmetadata:name:frontendnamespace:webappspec:replicas:2selector:matchLabels:app:frontendtemplate:metadata:labels:app:frontendspec:containers:-name:frontendimage:nginx:1.29.1-alpineports:-containerPort:80---apiVersion:apps/v1kind:Deploymentmetadata:name:apinamespace:webappspec:replicas:2selector:matchLabels:app:apitemplate:metadata:labels:app:apispec:containers:-name:apiimage:httpd:2.4.65-alpineports:-containerPort:80
apiVersion:v1kind:Servicemetadata:name:frontend-servicenamespace:webappspec:selector:app:frontendports:-port:80targetPort:80protocol:TCPtype:ClusterIP---apiVersion:v1kind:Servicemetadata:name:api-servicenamespace:webappspec:selector:app:apiports:-port:80targetPort:80protocol:TCPtype:ClusterIP
apiVersion:networking.k8s.io/v1kind:Ingressmetadata:name:webapp-ingressnamespace:webappannotations:nginx.ingress.kubernetes.io/rewrite-target:/spec:ingressClassName:nginxrules:-host:app.example.comhttp:paths:-path:/pathType:Prefixbackend:service:name:frontend-serviceport:number:80-path:/apppathType:Prefixbackend:service:name:frontend-serviceport:number:80-path:/apipathType:Prefixbackend:service:name:api-serviceport:number:80
$ kubectl apply -f namespace.yaml $ kubectl apply -f deployments.yaml $ kubectl apply -f services.yaml $ kubectl apply -f ingress.yaml
$ sudo vim /etc/hosts
127.0.0.1 app.example.com
$ kubectl get ingress webapp-ingress -n webapp NAME CLASS HOSTS ADDRESS PORTS AGE webapp-ingress nginx app.example.com 192.168.49.2 80 2m
$ curl -H "Host: app.example.com" http://localhost/ $ curl -H "Host: app.example.com" http://localhost/app $ curl -H "Host: app.example.com" http://localhost/api
$ kubectl create namespace production-apps
$ kubectl create deployment app-blue \ --image=nginxdemos/hello:0.3-plain-text \ --replicas=3 -n production-apps $ kubectl create deployment app-green \ --image=nginxdemos/hello:0.4-plain-text \ --replicas=3 -n production-apps
$ kubectl expose deployment app-blue --name=app-blue-svc \ --port=80 -n production-apps $ kubectl expose deployment app-green --name=app-green-svc \ --port=80 -n production-apps
apiVersion:networking.k8s.io/v1kind:Ingressmetadata:name:app-mainnamespace:production-appsspec:ingressClassName:nginxrules:-host:app.production.comhttp:paths:-path:/pathType:Prefixbackend:service:name:app-blue-svcport:number:80
apiVersion:networking.k8s.io/v1kind:Ingressmetadata:name:app-canary-weightnamespace:production-appsannotations:nginx.ingress.kubernetes.io/canary:"true"nginx.ingress.kubernetes.io/canary-weight:"20"spec:ingressClassName:nginxrules:-host:app.production.comhttp:paths:-path:/pathType:Prefixbackend:service:name:app-green-svcport:number:80
$ kubectl apply -f blue-ingress.yaml $ kubectl apply -f green-ingress.yaml
$ sudo vim /etc/hosts
127.0.0.1 app.production.com
$ kubectl get ingresses -n production-apps NAME CLASS HOSTS ADDRESS PORTS ... app-canary-weight nginx app.production.com 192.168.49.2 80 ... app-main nginx app.production.com 192.168.49.2 80 ...
for i in {1..10}; do curl -s -H "Host: app.production.com" \
http://localhost | grep -o "Server name:.*"; done
$ kubectl apply -f setup.yaml
$ kubectl get deployments,services,pods
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    hostname: example.local
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-routes
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - example.local
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /web
    backendRefs:
    - name: web-app
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-app
      port: 80
$ kubectl apply -f gateway.yaml $ kubectl apply -f httproute.yaml
$ kubectl get gateway main-gateway -o \
jsonpath='{.status.addresses[0].value}'
$ kubectl get httproute app-routes -o \
jsonpath='{.status.parents[*].conditions[?(@.type=="Accepted")].status}'
$ kubectl port-forward svc/main-gateway-nginx 8080:80 &
$ curl -H "Host: example.local" http://localhost:8080/web/ $ curl -H "Host: example.local" http://localhost:8080/api/
$ kubectl apply -f setup.yaml
apiVersion:gateway.networking.k8s.io/v1kind:Gatewaymetadata:name:gatewaynamespace:productionspec:gatewayClassName:nginxlisteners:-name:httpport:80protocol:HTTPhostname:example.comallowedRoutes:namespaces:from:All
apiVersion:gateway.networking.k8s.io/v1kind:HTTPRoutemetadata:name:prod-routenamespace:productionspec:parentRefs:-name:gatewayhostnames:-example.comrules:-matches:-path:type:Exactvalue:/appbackendRefs:-name:prod-webport:80filters:-type:URLRewriteurlRewrite:path:type:ReplaceFullPathreplaceFullPath:/-type:RequestHeaderModifierrequestHeaderModifier:set:-name:X-Environmentvalue:production
apiVersion:gateway.networking.k8s.io/v1kind:HTTPRoutemetadata:name:staging-routenamespace:stagingspec:parentRefs:-name:gatewaynamespace:productionhostnames:-example.comrules:-matches:-path:type:PathPrefixvalue:/stagingbackendRefs:-name:staging-webport:80filters:-type:URLRewriteurlRewrite:path:type:ReplacePrefixMatchreplacePrefixMatch:/-type:RequestHeaderModifierrequestHeaderModifier:set:-name:X-Environmentvalue:staging
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-staging-to-gateway
  namespace: production
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: staging
  to:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: gateway
$ kubectl apply -f gateway.yaml $ kubectl apply -f production-route.yaml $ kubectl apply -f staging-route.yaml $ kubectl apply -f reference-grant.yaml
$ kubectl port-forward -n production svc/gateway-nginx 8080:80 &
$ curl -H "Host: example.com" http://localhost:8080/app $ curl -H "Host: example.com" http://localhost:8080/staging
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: alpha-app-policy
  namespace: team-alpha
spec:
  podSelector:
    matchLabels:
      app: alpha-app
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          team: beta
  - to:
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: beta-app-policy
  namespace: team-beta
spec:
  podSelector:
    matchLabels:
      app: beta-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: alpha
    ports:
    - protocol: TCP
      port: 80
$ kubectl apply -f alpha-app-policy.yaml $ kubectl apply -f beta-app-policy.yaml
$ kubectl exec -it alpha-app -n team-alpha -- \ curl -v --connect-timeout 2 \ http://beta-app.team-beta.svc.cluster.local:8080
$ kubectl exec -it alpha-app -n team-alpha -- \ curl -v --connect-timeout 2 http://google.com
$ kubectl run test-pod --image=alpine/curl:8.14.1 -n team-beta --rm -it \ --restart=Never -- curl -v --connect-timeout 2 http://beta-app:8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 6379
apiVersion:networking.k8s.io/v1kind:NetworkPolicymetadata:name:backend-policynamespace:productionspec:podSelector:matchLabels:tier:backendpolicyTypes:-Ingress-Egressingress:-from:-podSelector:matchLabels:tier:frontendports:-protocol:TCPport:80egress:-to:-podSelector:matchLabels:tier:databaseports:-protocol:TCPport:6379-to:ports:-protocol:UDPport:53-protocol:TCPport:53
$ kubectl apply -f default-deny-all.yaml $ kubectl apply -f database-policy.yaml $ kubectl apply -f backend-policy.yaml
$ kubectl exec -it frontend -n production -- curl -v --connect-timeout 2 \ telnet://database:3306
$ kubectl exec -it frontend -n production -- curl -v --connect-timeout 2 \ http://backend:80
$ kubectl exec -it backend -n production -- curl -v --connect-timeout 2 \ telnet://database:3306
$ kubectl apply -f setup.yaml pod/date-recorder created
$ kubectl get pods NAME READY STATUS RESTARTS AGE date-recorder 1/1 Running 0 5s
$ kubectl logs date-recorder
[Error: ENOENT: no such file or directory, open \
'/root/tmp/startup-marker.txt'] {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/root/tmp/startup-marker.txt'
}
$ kubectl exec -it date-recorder -- /bin/sh OCI runtime exec failed: exec failed: unable to start container \ process: exec: "/bin/sh": stat /bin/sh: no such file or \ directory: unknown command terminated with exit code 126
$ kubectl debug -it date-recorder --image=busybox --target=debian \
--share-processes
Targeting container "debian". If you don't see processes from this \
container it may be because the container runtime doesn't support \
this feature.
Defaulting debug container name to debugger-rns89.
If you don't see a command prompt, try pressing enter.
/ # ps
PID USER TIME COMMAND
1 root 4:21 /nodejs/bin/node -e const fs = require('fs'); \
let timestamp = Date.now(); fs.writeFile('/root/tmp/startup-m
35 root 0:00 sh
41 root 0:00 ps
$ kubectl exec failing-pod -it -- /bin/sh / # ls /root/tmp ls: /root/tmp: No such file or directory
apiVersion: v1
kind: Pod
metadata:
  name: date-recorder
spec:
  containers:
  - name: debian
    image: gcr.io/distroless/nodejs20-debian11
    command: ["/nodejs/bin/node", "-e", "const fs = require('fs'); \
      let timestamp = Date.now(); \
      fs.writeFile('/var/startup/startup-marker.txt', \
      timestamp.toString(), err => { if (err) { console.error(err); } \
      while (true) {} });"]
    volumeMounts:
    - mountPath: /var/startup
      name: init-volume
  volumes:
  - name: init-volume
    emptyDir: {}
$ kubectl apply -f setup.yaml namespace/y72 created deployment.apps/web-app created service/web-app created
$ kubectl get all -n y72 NAME READY STATUS RESTARTS AGE pod/web-app-5f77f59c78-8svdm 1/1 Running 0 10m pod/web-app-5f77f59c78-mhvjz 1/1 Running 0 10m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) service/web-app ClusterIP 10.106.215.153 <none> 80/TCP NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/web-app 2/2 2 2 10m NAME DESIRED CURRENT READY AGE replicaset.apps/web-app-5f77f59c78 2 2 2 10m
$ kubectl run tmp --image=busybox --restart=Never -it --rm -n y72 \ -- wget web-app Connecting to web-app (10.106.215.153:80) wget: can't connect to remote host (10.106.215.153): Connection refused pod "tmp" deleted pod y72/tmp terminated (Error)
$ kubectl get endpoints -n y72 NAME ENDPOINTS AGE web-app <none> 15m
$ kubectl describe service web-app -n y72 Name: web-app Namespace: y72 Labels: <none> Annotations: <none> Selector: run=myapp Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.106.215.153 IPs: 10.106.215.153 Port: <unset> 80/TCP TargetPort: 3001/TCP Endpoints: <none> Session Affinity: None Events: <none>
$ kubectl get endpoints -n y72 NAME ENDPOINTS AGE web-app 10.244.0.3:3000,10.244.0.4:3000 24m
$ kubectl edit service web-app -n y72 service/web-app edited
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm -n y72 \ -- wget web-app Connecting to web-app (10.106.215.153:80) saving to 'index.html' index.html 100% |********************************| ... 'index.html' saved pod "tmp" deleted
$ kubectl create ns stress-test namespace/stress-test created
$ kubectl apply -f ./ pod/stress-1 created pod/stress-2 created pod/stress-3 created
$ kubectl top pods -n stress-test NAME CPU(cores) MEMORY(bytes) stress-1 50m 77Mi stress-2 74m 138Mi stress-3 58m 94Mi
$ kubectl get nodes NAME STATUS ROLES AGE VERSION control-plane Ready control-plane 23m v1.33.2 worker-1 Ready <none> 23m v1.33.2 worker-2 NotReady,SchedulingDisabled <none> 23m v1.33.2
$ kubectl describe node worker-2 | grep -i taint
$ kubectl describe node worker-2 | grep -A10 Conditions
$ kubectl uncordon worker-2
$ ssh worker-2
$ sudo systemctl status kubelet
$ sudo systemctl restart kubelet
$ sudo systemctl status kubelet
$ kubectl get nodes
$ kubectl run test-pod --image=nginx:1.29.1 --overrides=\
'{"spec":{"nodeSelector":{"kubernetes.io/hostname":"worker-2"}}}'
$ kubectl get pod test-pod -o wide
$ kubectl create deployment test-app --image=nginx:1.29.1 --replicas=3
$ kubectl get pods NAME READY STATUS RESTARTS AGE test-app-5d4d5b6c7b-h2x4m 0/1 Pending 0 2m test-app-5d4d5b6c7b-k9p3n 0/1 Pending 0 2m test-app-5d4d5b6c7b-x7v2q 0/1 Pending 0 2m
$ kubectl describe pod test-app-5d4d5b6c7b-h2x4m | tail -10
$ kubectl get pods -n kube-system | grep -E \ "scheduler|controller|apiserver|etcd"
$ kubectl describe pod kube-scheduler-control-plane -n kube-system \ | grep -A5 Events
$ ssh control-plane $ sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
$ kubectl get pods NAME READY STATUS RESTARTS AGE test-app-5d4d5b6c7b-h2x4m 1/1 Running 0 8m test-app-5d4d5b6c7b-k9p3n 1/1 Running 0 8m test-app-5d4d5b6c7b-x7v2q 1/1 Running 0 8m
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
Manage role based access control (RBAC) |
Using RBAC Authorization, Role Based Access Control Good Practices |
N/A |
|
Prepare underlying infrastructure for installing a Kubernetes cluster |
N/A |
N/A |
|
Create and manage Kubernetes clusters using kubeadm |
|||
Manage the lifecycle of Kubernetes clusters |
|||
Implement and configure a highly-available control plane |
|||
Use Helm and Kustomize to install cluster components |
Declarative Management of Kubernetes Objects Using Kustomize, Managing Secrets using Kustomize |
||
Understand extension interfaces (CNI, CSI, CRI, etc.) |
Infrastructure extensions, Compute, Storage, and Networking Extensions |
N/A |
|
Understand CRDs, install and configure operators |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
Understand application deployments and how to perform rolling update and rollbacks |
|||
Use ConfigMaps and Secrets to configure applications |
Configure a Pod to Use a ConfigMap, Managing Secrets using kubectl, Managing Secrets using Configuration File |
||
Configure workload autoscaling |
Running Multiple Instances of Your App, Horizontal Pod Autoscaling |
||
Understand the primitives used to create robust, self-healing, application deployments |
N/A |
||
Configure Pod admission and scheduling (limits, node affinity, etc.) |
Resource Management for Pods and Containers, Limit Ranges, Resource Quotas, Assigning Pods to Nodes, Taints and Tolerations, Pod Topology Spread Constraints |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
Implement storage classes and dynamic volume provisioning |
N/A |
||
Configure volume types, access modes and reclaim policies |
N/A |
||
Manage persistent volumes and persistent volume claims |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
Understand connectivity between Pods |
N/A |
||
Define and enforce Network Policies |
|||
Understand ClusterIP, NodePort, LoadBalancer service types and endpoints |
|||
Use the Gateway API to manage Ingress traffic |
N/A |
||
Know how to use Ingress controllers and Ingress resources |
Set up Ingress on Minikube with the NGINX Ingress Controller |
||
Understand and use CoreDNS |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
Troubleshoot clusters and nodes |
N/A |
||
Troubleshoot cluster components |
N/A |
||
Monitor cluster and application resource usage |
N/A |
||
Manage and evaluate container output streams |
N/A |
||
Troubleshoot services and networking |
N/A |