Benjamin Muschko is a cloud native guru and has expanded and sharpened this CKAD study guide. There is much to learn in Kubernetes, yet Ben keeps his focus on the concepts that are expected in the exam. Add this guide to ease your journey toward certification or to simply be a well-informed cloud native developer.
Jonathan Johnson, Independent Software Architect
A direct, example-driven guide essential for CKAD exam preparation.
Bilgin Ibryam, Coauthor of Kubernetes Patterns, Principal Product Manager at Diagrid
As someone who oversaw the creation of the CKAD in CNCF, I’m happy to see an updated study guide that covers the latest version of the ever-evolving certification.
Chris Aniszczyk, CTO and Cofounder, CNCF
Second Edition
In-Depth Guidance and Practice
The coverage of deployment strategies goes beyond the ones directly supported by the Deployment primitive. You will need to understand how to implement and manage blue/green deployments and canary deployments.
Helm is a tool that automates the bundling, configuration, and deployment of Kubernetes applications by combining your configuration files into a single reusable package. You will need to understand how to use Helm for discovering and installing existing Helm charts.
You need to be aware of Kubernetes’ release process and what it means to the usage of APIs that are deprecated and removed. You will learn how to handle situations that require you to switch to a newer or replaced API version.
CRDs allow for extending the Kubernetes API by creating your own custom resource types. You need to be aware of how to create CRDs, as well as how to manage objects based on the CRD type.
Every call to the Kubernetes API needs to be authenticated. As daily users of kubectl, application developers need to understand how to manage and use their credentials. Once authenticated, the request to the API also needs to pass the authorization phase. You need a rough understanding of role-based access control (RBAC), the concept that guards access to Kubernetes resources. Admission control is a topic covered by the CKA and Certified Kubernetes Security Specialist (CKS) exams, and therefore I’ll just scratch the surface of this aspect.
You will want to expose your customer-facing applications running in Kubernetes to outside consumers. The Ingress primitive routes HTTP(S) traffic to one of many Service backends. The Ingress is now part of the CKAD curriculum.
Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
Constant width bold
Shows commands or other text that should be typed literally by the user.
Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.
The exam won’t ask you to install a Kubernetes cluster from scratch. Read up on the basics of Kubernetes and its architectural components. Reference Chapter 2 for a jump start on Kubernetes’ architecture and concepts.
kubectl CLI tool
The kubectl command-line tool is the central tool you will use during the exam to interact with the Kubernetes cluster. Even if you have only a little time to prepare for the exam, it’s essential to practice how to operate kubectl, as well as its commands and their relevant options. You will have no access to the web dashboard UI during the exam. Chapter 3 provides a short summary of the most important ways of interacting with a Kubernetes cluster.
Kubernetes uses a container runtime engine for managing images. A widely used container runtime engine is Docker Engine. At a minimum, understand container files, container images, containers, and relevant CLI commands. Chapter 4 explains all you need to know about containers for the exam.
Kubernetes objects are represented by YAML or JSON. The content of this book will use examples in YAML, as it is more commonly used than JSON in the Kubernetes world. You will have to edit YAML during the exam to create a new object declaratively or when modifying the configuration of a live object. Ensure that you have a good handle on basic YAML syntax, data types, and indentation conforming to the specification. How do you edit the YAML definitions, you may ask? From the terminal, of course. The exam terminal environment comes with the tools vi and vim preinstalled. Practice the keyboard shortcuts for common operations (especially how to exit the editor). The last tool I want to mention is GNU Bash. It’s imperative that you understand the basic syntax and operators of the scripting language. It’s absolutely possible that you may have to read, modify, or even extend a multiline Bash command running in a container.
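Because all of that YAML editing happens in vi/vim, it can pay off to configure the editor for two-space YAML indentation before you start solving questions. The following setup is a personal-preference sketch, not an exam requirement:

$ cat <<EOF > ~/.vimrc
set tabstop=2
set shiftwidth=2
set expandtab
EOF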
$ kubectl config set-context <context-of-question> \
  --namespace=<namespace-of-question>
$ kubectl config use-context <context-of-question>
$ alias k=kubectl
$ k version
$ kubectl api-resources
NAME                     SHORTNAMES   APIGROUP   NAMESPACED   KIND
...
persistentvolumeclaims   pvc                     true         PersistentVolumeClaim
...
$ kubectl describe pvc my-claim
You do not have to write imperative code using a programming language to tell Kubernetes how to operate an application. All you need to do as an end user is to declare a desired state. The desired state can be defined using a YAML or JSON manifest that conforms to an API schema. Kubernetes then maintains the state and recovers it in case of a failure.
You will want to scale up resources when your application load increases, and scale down when traffic to your application decreases. This can be achieved in Kubernetes by manual or automated scaling. The most practical, optimized option is to let Kubernetes automatically scale resources needed by a containerized application.
Changes to applications, e.g., new features and bug fixes, are usually baked into a container image with a new tag. You can easily roll out those changes across all containers running them using Kubernetes’ convenient replication feature. If needed, Kubernetes also allows for rolling back to a previous application version in case of a blocking bug or if a security vulnerability is detected.
Containers offer only a temporary filesystem. Upon restart of the container, all data written to the filesystem is lost. Depending on the nature of your application, you may need to persist data for longer, for example, if your application interacts with a database. Kubernetes offers the ability to mount storage required by application workloads.
To support a microservices architecture, the container orchestrator needs to allow for communication between containers, and from end users to containers from outside of the cluster. Kubernetes employs internal and external load balancing for routing network traffic.
This node exposes the Kubernetes API through the API server and manages the nodes that make up the cluster. It also responds to cluster events, for example, when an end user requests scaling up the number of Pods to distribute the load for an application. Production clusters employ a highly available (HA) architecture that usually involves three or more control plane nodes.
The worker node executes workload in containers managed by Pods. Every worker node needs a container runtime engine installed on the host machine to be able to manage containers.
The API server exposes the API endpoints clients use to communicate with the Kubernetes cluster. For example, if you execute the tool kubectl, a command-line based Kubernetes client, you will make a RESTful API call to an endpoint exposed by the API server as part of its implementation. The API processing procedure inside of the API server will ensure aspects like authentication, authorization, and admission control. For more information on that topic, see Chapter 17.
The scheduler is a background process that watches for new Kubernetes Pods with no assigned nodes and assigns them to a worker node for execution.
The controller manager watches the state of your cluster and implements changes where needed. For example, if you make a configuration change to an existing object, the controller manager will try to bring the object into the desired state.
Cluster state data needs to be persisted over time so it can be reconstructed upon a node or even a full cluster restart. That’s the responsibility of etcd, an open source software Kubernetes integrates with. At its core, etcd is a key-value store used to persist all data related to the Kubernetes cluster.
The kubelet runs on every node in the cluster; however, it makes the most sense to exist on a worker node. The reason is that the control plane node usually doesn’t execute workload, and the worker node’s primary responsibility is to run workload. The kubelet is an agent that makes sure that the necessary containers are running in a Pod. You could say that the kubelet is the glue between Kubernetes and the container runtime engine and ensures that containers are running and healthy. We’ll have a touch point with the kubelet in Chapter 14.
The kube proxy is a network proxy that runs on each node in a cluster to maintain network rules and enable network communication. In part, this component is responsible for implementing the Service concept covered in Chapter 21.
As mentioned earlier, the container runtime is the software responsible for managing containers. Kubernetes can be configured to choose from a range of different container runtime engines. While you can install a container runtime engine on a control plane, it’s not necessary as the control plane node usually doesn’t handle workload. We’ll use a container runtime in Chapter 4 to create a container image and run a container with the produced image.
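Many of the control plane components described above run as Pods in the kube-system namespace. A quick way to see them on a typical single-node cluster is shown below; the exact Pod names vary by distribution, so treat this output as illustrative:

$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
etcd-minikube                      1/1     Running   0          5m
kube-apiserver-minikube            1/1     Running   0          5m
kube-controller-manager-minikube   1/1     Running   0          5m
kube-proxy-7k4qd                   1/1     Running   0          5m
kube-scheduler-minikube            1/1     Running   0          5m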
A container runtime engine can manage a container independent of its runtime environment. The container image bundles everything it needs to work, including the application’s binary or code, its dependencies, and its configuration. Kubernetes can run applications in a container in on-premise and cloud environments. As an administrator, you can choose the platform you think is most suitable to your needs without having to rewrite the application. Many cloud offerings provide product-specific, opt-in features. While using product-specific features helps with operational aspects, be aware that they will diminish your ability to switch easily between platforms.
Kubernetes is designed as a declarative state machine. Controllers are reconciliation loops that watch the state of your cluster, then make or request changes where needed. The goal is to move the current cluster state closer to the desired state.
Enterprises run applications at scale. Just imagine how many software components retailers like Amazon, Walmart, or Target need to operate to run their businesses. Kubernetes can scale the number of Pods based on demand or automatically according to resource consumption or historical trends.
Kubernetes exposes its functionality through APIs. We learned that every client needs to interact with the API server to manage objects. It is easy to implement a new client that can make RESTful API calls to exposed endpoints.
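You can observe the RESTful nature of the API without writing any code. The following sketch assumes a running cluster and uses kubectl proxy to take care of authentication before querying an endpoint with curl; the port is an arbitrary choice:

$ kubectl proxy --port=8080 &
Starting to serve on 127.0.0.1:8080
$ curl http://localhost:8080/api/v1/namespaces/default/pods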
The API aspect stretches even further. Sometimes, the core functionality of Kubernetes doesn’t fulfill your custom needs, but you can implement your own extensions to Kubernetes. With the help of specific extension points, the Kubernetes community can build custom functionality according to their requirements, e.g., monitoring or logging solutions.
The Kubernetes API version defines the structure of a primitive and uses it to validate the correctness of the data. The API version serves a similar purpose as XML schemas to an XML document or JSON schemas to a JSON document. The version usually undergoes a maturity process—for example, from alpha to beta to final. Sometimes you see different prefixes separated by a slash (e.g., apps/v1). You can list the API versions compatible with your cluster version by running the command kubectl api-versions.
The kind defines the type of primitive—e.g., a Pod or a Service. It ultimately answers the question, “What kind of resource are we dealing with here?”
Metadata describes higher-level information about the object—e.g., its name, what namespace it lives in, or whether it defines labels and annotations. This section also defines the UID.
The specification (“spec” for short) declares the desired state—e.g., how should this object look after it has been created? Which image should run in the container, or which environment variables should be set?
The status describes the actual state of an object. The Kubernetes controllers and their reconciliation loops constantly try to transition a Kubernetes object from its actual state into the desired state. The object has not yet been materialized if the YAML status shows the value {}.
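The following minimal Pod manifest illustrates how those parts fit together. The metadata.uid and the status section are populated by Kubernetes after object creation and are therefore not spelled out by the author:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25.1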
$ kubectl [command] [TYPE] [NAME] [flags]
$ kubectl run frontend --image=nginx:1.24.0 --port=80
pod/frontend created
$ kubectl edit pod frontend
$ kubectl patch pod frontend -p '{"spec":{"containers":[{"name":"frontend",\
"image":"nginx:1.25.1"}]}}'
pod/frontend patched
$ kubectl delete pod frontend
pod "frontend" deleted
$ kubectl delete pod nginx --now
.
├── app-stack
│ ├── mysql-pod.yaml
│ ├── mysql-service.yaml
│ ├── web-app-pod.yaml
│ └── web-app-service.yaml
├── nginx-deployment.yaml
└── web-app
├── config
│ ├── db-configmap.yaml
│ └── db-secret.yaml
└── web-app-pod.yaml
$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
$ kubectl apply -f app-stack/
pod/mysql-db created
service/mysql-service created
pod/web-app created
service/web-app-service created
$ kubectl apply -f web-app/ -R
configmap/db-config configured
secret/db-creds created
pod/web-app created
$ kubectl apply -f https://raw.githubusercontent.com/bmuschko/\
ckad-study-guide/master/ch03/object-management/nginx-deployment.yaml
deployment.apps/nginx-deployment created
$ kubectl get pod web-app -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{}, \
"labels":{"app":"web-app"},"name":"web-app","namespace":"default"}, \
"spec":{"containers":[{"envFrom":[{"configMapRef":{"name":"db-config"}}, \
{"secretRef":{"name":"db-creds"}}],"image":"bmuschko/web-app:1.0.1", \
"name":"web-app","ports":[{"containerPort":3000,"protocol":"TCP"}]}], \
"restartPolicy":"Always"}}
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    team: red
spec:
  replicas: 5
...
$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment configured
$ kubectl get deployment nginx-deployment -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{}, \
"labels":{"app":"nginx","team":"red"},"name":"nginx-deployment", \
"namespace":"default"},"spec":{"replicas":5,"selector":{"matchLabels": \
{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}}, \
"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", \
"ports":[{"containerPort":80}]}]}}}}
...
$ kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
$ kubectl run frontend --image=nginx:1.25.1 --port=80 \
  -o yaml --dry-run=client > pod.yaml
$ vim pod.yaml
$ kubectl apply -f pod.yaml
pod/frontend created
FROM azul/zulu-openjdk:21-jre
WORKDIR /app
COPY target/java-hello-world-0.0.1.jar java-hello-world.jar
ENTRYPOINT ["java", "-jar", "/app/java-hello-world.jar"]
EXPOSE 8080
Defines the base image.
Sets the working directory of a container. Any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction will be executed in the specified working directory.
Copies the JAR containing the compiled application code into the working directory.
Sets the default command that executes when a container starts from an image.
Documents the network port(s) the container should listen on.
$ docker build -t java-hello-world:1.1.0 .
[+] Building 2.0s (9/9) FINISHED
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 284B
 => [internal] load metadata for docker.io/azul/zulu-openjdk:21-jre
 => [auth] azul/zulu-openjdk:pull token for registry-1.docker.io
 => [1/3] FROM docker.io/azul/zulu-openjdk:21-jre@sha256:d1e675cac0e5...
 => => resolve docker.io/azul/zulu-openjdk:21-jre@sha256:d1e675cac0e5...
 => => sha256:d1e675cac0e5ce9604283df2a6600d3b46328d32d83927320757ca7...
 => => sha256:67aa3090031eac26c946908c33959721730e42f9195f4f70409e4ce...
 => => sha256:ba408da684370e4d8448bec68b36fadf15c3819b282729df3bc8494...
 => [internal] load build context
 => => transferring context: 19.71MB
 => [2/3] WORKDIR /app
 => [3/3] COPY target/java-hello-world-0.0.1.jar java-hello-world.jar
 => exporting to image
 => => exporting layers
 => => writing image sha256:4b676060678b63de137536da24a889fc9d2d5fe0c...
 => => naming to docker.io/library/java-hello-world:1.1.0

What's Next?
  View a summary of image vulnerabilities and recommendations → ...
$ docker images
REPOSITORY         TAG     IMAGE ID       CREATED          SIZE
java-hello-world   1.1.0   4b676060678b   49 seconds ago   342MB
$ docker run -d -p 8080:8080 java-hello-world:1.1.0
b0ee04accf078ea7c73cfe3be0f9d1ac6a099ac4e0e903773bc6bf6258acbb66
$ curl localhost:8080
Hello World!
$ docker container ls
CONTAINER ID   IMAGE                    COMMAND                  ...
b0ee04accf07   java-hello-world:1.1.0   "java -jar /app/java…"   ...
$ docker logs b0ee04accf07
...
2023-06-19 21:06:27.757  INFO 1 --- [nio-8080-exec-1] \
o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing \
Spring DispatcherServlet 'dispatcherServlet'
2023-06-19 21:06:27.757  INFO 1 --- [nio-8080-exec-1] \
o.s.web.servlet.DispatcherServlet : Initializing \
Servlet 'dispatcherServlet'
2023-06-19 21:06:27.764  INFO 1 --- [nio-8080-exec-1] \
o.s.web.servlet.DispatcherServlet : Completed \
initialization in 7 ms
$ docker exec -it b0ee04accf07 bash
root@b0ee04accf07:/app# pwd
/app
root@b0ee04accf07:/app# exit
exit
$ docker tag java-hello-world:1.1.0 bmuschko/java-hello-world:1.1.0
$ docker images
REPOSITORY                  TAG     IMAGE ID       CREATED         SIZE
bmuschko/java-hello-world   1.1.0   4b676060678b   6 minutes ago   342MB
java-hello-world            1.1.0   4b676060678b   6 minutes ago   342MB
$ docker login --username=bmuschko
Password: *****
Login Succeeded
$ docker push bmuschko/java-hello-world:1.1.0
The push refers to repository [docker.io/bmuschko/java-hello-world]
a7b86a39983a: Pushed
df1b2befe5f0: Pushed
e4db97f0e9ef: Mounted from azul/zulu-openjdk
8e87ff28f1b5: Mounted from azul/zulu-openjdk
1.1.0: digest: sha256:6a5069bd9396a7eded10bf8e24ab251df434c121f8f4293c2d3ef...
$ docker save -o java-hello-world.tar java-hello-world:1.1.0
$ docker load --input java-hello-world.tar
Loaded image: java-hello-world:1.1.0
$ docker images
REPOSITORY         TAG     IMAGE ID       CREATED         SIZE
java-hello-world   1.1.0   4b676060678b   7 minutes ago   342MB
Pods run containers instantiated from container images. You need to understand how to define, build, run, and publish a container image apart from Kubernetes. Practice the use of the container runtime engine’s command-line tool to fulfill the workflow.
You should get familiar with Docker Engine specifically for understanding the containerization process. At the time of writing, Docker Engine is still the most widely used container runtime engine. Branch out by playing around with other container runtime engines like containerd or Podman.
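Podman’s command-line interface deliberately mirrors Docker’s, so the workflow shown in this chapter translates almost one-to-one. A brief sketch, assuming Podman is installed:

$ podman build -t java-hello-world:1.1.0 .
$ podman run -d -p 8080:8080 java-hello-world:1.1.0
$ podman images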
As an application developer, you will deal with defining, building, and modifying container images daily. Container runtime engines support other, lesser-known features and workflows. It can’t hurt to read through the container runtime engine’s documentation to gain broader exposure.
$ kubectl run hazelcast --image=hazelcast/hazelcast:5.1.7 \
  --port=5701 --env="DNS_DOMAIN=cluster" --labels="app=hazelcast,env=prod"
| Option | Example value | Description |
|---|---|---|
| --image | nginx:1.25.1 | The image for the container to run. |
| --port | 8080 | The port that this container exposes. |
| --rm | N/A | Deletes the Pod after the command in the container finishes. See “Creating a Temporary Pod” for more information. |
| --env | PROFILE=dev | The environment variables to set in the container. |
| --labels | app=frontend | A comma-separated list of labels to apply to the Pod. Chapter 9 explains labels in more detail. |
apiVersion: v1
kind: Pod
metadata:
  name: hazelcast
  labels:
    app: hazelcast
    env: prod
spec:
  containers:
  - name: hazelcast
    image: hazelcast/hazelcast:5.1.7
    env:
    - name: DNS_DOMAIN
      value: cluster
    ports:
    - containerPort: 5701
Assigns the name of hazelcast to the Pod.
Assigns labels to the Pod.
Declares the container image to be executed in the container of the Pod.
Injects one or many environment variables to the container.
The port number to expose on the Pod’s IP address.
$ kubectl apply -f pod.yaml
pod/hazelcast created
$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
hazelcast   1/1     Running   0          17s
$ kubectl get pods hazelcast
NAME        READY   STATUS    RESTARTS   AGE
hazelcast   1/1     Running   0          17s
| Phase | Description |
|---|---|
| Pending | The Pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. |
| Running | At least one container is still running or is in the process of starting or restarting. |
| Succeeded | All containers in the Pod terminated successfully. |
| Failed | Containers in the Pod terminated; at least one failed with an error. |
| Unknown | The state of the Pod could not be obtained. |
$ kubectl describe pods hazelcast
Name: hazelcast
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: docker-desktop/192.168.65.3
Start Time: Wed, 20 May 2020 19:35:47 -0600
Labels: app=hazelcast
env=prod
Annotations: <none>
Status: Running
IP: 10.1.0.41
Containers:
...
Events:
...
$ kubectl describe pods hazelcast | grep Image:
Image: hazelcast/hazelcast:5.1.7
$ kubectl logs hazelcast
...
May 25, 2020 3:36:26 PM com.hazelcast.core.LifecycleService
INFO: [10.1.0.46]:5701 [dev] [4.0.1] [10.1.0.46]:5701 is STARTED
$ kubectl exec -it hazelcast -- /bin/sh
# ...
$ kubectl exec hazelcast -- env
...
DNS_DOMAIN=cluster
$ kubectl run busybox --image=busybox:1.36.1 --rm -it --restart=Never -- env
...
HOSTNAME=busybox
pod "busybox" deleted
$ kubectl run nginx --image=nginx:1.25.1 --port=80
pod/nginx created
$ kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE       \
NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          37s   10.244.0.5   minikube   \
<none>           <none>
$ kubectl get pod nginx -o yaml
...
status:
  podIP: 10.244.0.5
...
$ kubectl run busybox --image=busybox:1.36.1 --rm -it --restart=Never \
  -- wget 172.17.0.4:80
Connecting to 172.17.0.4:80 (172.17.0.4:80)
saving to 'index.html'
index.html           100% |********************************|   615  0:00:00 ETA
'index.html' saved
pod "busybox" deleted
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-app
spec:
  containers:
  - name: spring-boot-app
    image: bmuschko/spring-boot-app:1.5.3
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: prod
    - name: VERSION
      value: '1.5.3'
$ kubectl run mypod --image=busybox:1.36.1 -o yaml --dry-run=client \
  > pod.yaml -- /bin/sh -c "while true; do date; sleep 10; done"
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: busybox:1.36.1
    args:
    - /bin/sh
    - -c
    - while true; do date; sleep 10; done
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: busybox:1.36.1
    command: ["/bin/sh"]
    args: ["-c", "while true; do date; sleep 10; done"]
$ kubectl apply -f pod.yaml
pod/mypod created
$ kubectl logs mypod -f
Fri May 29 00:49:06 UTC 2020
Fri May 29 00:49:16 UTC 2020
Fri May 29 00:49:26 UTC 2020
...
$ kubectl delete pod hazelcast
pod "hazelcast" deleted
$ kubectl delete -f pod.yaml
pod "hazelcast" deleted
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   157d
kube-node-lease   Active   157d
kube-public       Active   157d
kube-system       Active   157d
$ kubectl create namespace code-red
namespace/code-red created
$ kubectl get namespace code-red
NAME       STATUS   AGE
code-red   Active   16s
apiVersion: v1
kind: Namespace
metadata:
  name: code-red
$ kubectl run pod --image=nginx:1.25.1 -n code-red
pod/pod created
$ kubectl get pods -n code-red
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          13s
$ kubectl config set-context --current --namespace=code-red
Context "minikube" modified.
$ kubectl config view --minify | grep namespace:
namespace: code-red
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          13s
$ kubectl config set-context --current --namespace=default
Context "minikube" modified.
$ kubectl delete namespace code-red
namespace "code-red" deleted
$ kubectl get pods -n code-red
No resources found in code-red namespace.
A Pod runs an application inside of a container. You can check on the status and the configuration of the Pod by inspecting the object with the kubectl get or kubectl describe commands. Get familiar with the life cycle phases of a Pod to be able to quickly diagnose errors. The command kubectl logs can be used to download the container log information without having to shell into the container. Use the command kubectl exec to further explore the container environment, e.g., to check on processes or to examine files.
Sometimes you have to start with the YAML manifest of a Pod and then create the Pod declaratively. This could be the case if you wanted to provide environment variables to the container or declare a custom command. Practice different configuration options by copy-pasting relevant code snippets from the Kubernetes documentation.
Most questions in the exam will ask you to work within a given namespace. You need to understand how to interact with that namespace from kubectl using the options --namespace and -n. To avoid accidentally working on the wrong namespace, know how to permanently set a namespace.
$ kubectl create job counter --image=nginx:1.25.1 \
  -- /bin/sh -c 'counter=0; while [ $counter -lt 3 ]; do \
  counter=$((counter+1)); echo "$counter"; sleep 3; done;'
job.batch/counter created
apiVersion: batch/v1
kind: Job
metadata:
  name: counter
spec:
  template:
    spec:
      containers:
      - name: counter
        image: nginx:1.25.1
        command:
        - /bin/sh
        - -c
        - counter=0; while [ $counter -lt 3 ]; do counter=$((counter+1)); echo "$counter"; sleep 3; done;
      restartPolicy: Never
$ kubectl get jobs
NAME      COMPLETIONS   DURATION   AGE
counter   0/1           13s        13s
$ kubectl get jobs
NAME      COMPLETIONS   DURATION   AGE
counter   1/1           15s        19s
$ kubectl get pods
NAME            READY   STATUS      RESTARTS   AGE
counter-z6kdj   0/1     Completed   0          51s
$ kubectl logs counter-z6kdj
1
2
3
$ kubectl get jobs counter -o yaml | grep -C 1 "completions"
...
  completions: 1
  parallelism: 1
...
| Type | spec.completions | spec.parallelism | Description |
|---|---|---|---|
| Non-parallel with one completion count | 1 | 1 | Completes as soon as its Pod terminates successfully. |
| Parallel with a fixed completion count | >= 1 | >= 1 | Completes when the specified number of tasks finish successfully. |
| Parallel with worker queue | unset | >= 1 | Completes when at least one Pod has terminated successfully and all Pods are terminated. |
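To make the second mode more tangible, the following sketch defines a parallel Job with a fixed completion count; the object name and the values for spec.completions and spec.parallelism are arbitrary choices:

apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-counter
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      containers:
      - name: counter
        image: nginx:1.25.1
        command: ["/bin/sh", "-c", "echo 'task done'"]
      restartPolicy: Never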
$ kubectl create cronjob current-date --schedule="* * * * *" \
  --image=nginx:1.25.1 -- /bin/sh -c 'echo "Current date: $(date)"'
cronjob.batch/current-date created
apiVersion: batch/v1
kind: CronJob
metadata:
  name: current-date
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: current-date
            image: nginx:1.25.1
            args:
            - /bin/sh
            - -c
            - 'echo "Current date: $(date)"'
          restartPolicy: OnFailure
Defines the cron expression that determines when a new Job object needs to be created.
The section that describes the Job template.
$ kubectl get cronjobs
NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
current-date   * * * * *   False     0        <none>          28s
$ kubectl get cronjobs
NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
current-date   * * * * *   False     1        14s             53s
$ kubectl get jobs,pods
NAME                              COMPLETIONS   DURATION   AGE
job.batch/current-date-28473049   1/1           3s         2m23s
job.batch/current-date-28473050   1/1           3s         83s
job.batch/current-date-28473051   1/1           3s         23s

NAME                              READY   STATUS      RESTARTS   AGE
pod/current-date-28473049-l6hc7   0/1     Completed   0          2m23s
pod/current-date-28473050-csq7n   0/1     Completed   0          83s
pod/current-date-28473051-jg8st   0/1     Completed   0          23s
$ kubectl get cronjobs current-date -o yaml | grep successfulJobsHistoryLimit:
  successfulJobsHistoryLimit: 3
$ kubectl get cronjobs current-date -o yaml | grep failedJobsHistoryLimit:
  failedJobsHistoryLimit: 1
apiVersion: batch/v1
kind: CronJob
metadata:
  name: current-date
spec:
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 3
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: current-date
            image: nginx:1.25.1
            args:
            - /bin/sh
            - -c
            - 'echo "Current date: $(date)"'
          restartPolicy: OnFailure
Jobs and CronJobs manage Pods that should finish the work at least once or periodically. You will need to understand the creation of those objects and how to inspect them at runtime. Make sure to play around with the different configuration options and how they affect the runtime behavior.
Jobs can operate in three modes: non-parallel with one completion count, in parallel with a fixed completion count, and in parallel with worker queue.
The default behavior of a Job is to run the workload in a single Pod and expect one successful completion (non-parallel Job). The attribute spec.completions controls the number of required successful completions. The attribute spec.parallelism allows for executing the workload by multiple Pods in parallel.
These exist for the lifespan of a Pod. Ephemeral Volumes are useful if you want to share data between multiple containers running in the Pod or if you can easily reconstruct the data stored on the Volume upon a Pod restart.
These preserve data beyond the lifespan of a Pod. Persistent Volumes are a good option for applications that require data to exist longer, for example, in the form of storage for a database-driven application.
| Type | Description |
|---|---|
| emptyDir | Empty directory in Pod with read/write access. Only persisted for the lifespan of a Pod. A good choice for cache implementations or data exchange between containers of a Pod. |
| hostPath | File or directory from the host node’s filesystem. Supported only on single-node clusters and not meant for production. |
| configMap, secret | Provides a way to inject configuration data. For practical examples, see Chapter 19. |
| nfs | An existing NFS (Network File System) share. Preserves data after Pod restart. |
| persistentVolumeClaim | Claims a PersistentVolume. For more information, see “Creating PersistentVolumeClaims”. |
apiVersion: v1
kind: Pod
metadata:
  name: business-app
spec:
  volumes:
  - name: logs-volume
    emptyDir: {}
  containers:
  - image: nginx:1.25.1
    name: nginx
    volumeMounts:
    - mountPath: /var/log/nginx
      name: logs-volume
$ kubectl create -f pod-with-volume.yaml
pod/business-app created
$ kubectl get pod business-app
NAME           READY   STATUS    RESTARTS   AGE
business-app   1/1     Running   0          43s
$ kubectl exec business-app -it -- /bin/sh
# cd /var/log/nginx
# pwd
/var/log/nginx
# ls
# touch app-logs.txt
# ls
app-logs.txt
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/db
$ kubectl create -f db-pv.yaml
persistentvolume/db-pv created
$ kubectl get pv db-pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS \
CLAIM STORAGECLASS REASON AGE
db-pv 1Gi RWO Retain Available \
10s
| Type | Description |
|---|---|
| Filesystem | Default. Mounts the volume into a directory of the consuming Pod. Creates a filesystem first if the volume is backed by a block device and the device is empty. |
| Block | Used for a volume as a raw block device without a filesystem on it. |
$ kubectl get pv -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS \
CLAIM STORAGECLASS REASON AGE VOLUMEMODE
db-pv 1Gi RWO Retain Available \
19m Filesystem
| Type | Short Form | Description |
|---|---|---|
| ReadWriteOnce | RWO | Read/write access by a single node |
| ReadOnlyMany | ROX | Read-only access by many nodes |
| ReadWriteMany | RWX | Read/write access by many nodes |
| ReadWriteOncePod | RWOP | Read/write access mounted by a single Pod |
$ kubectl get pv db-pv -o jsonpath='{.spec.accessModes}'
["ReadWriteOnce"]
| Type | Description |
|---|---|
| Retain | Default. When the PersistentVolumeClaim is deleted, the PersistentVolume is “released” and can be reclaimed. |
| Delete | Deletion removes the PersistentVolume and its associated storage. |
| Recycle | This value is deprecated. You should use one of the other values. |
$ kubectl get pv db-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
Retain
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 256Mi
$ kubectl create -f db-pvc.yaml
persistentvolumeclaim/db-pvc created
$ kubectl get pvc db-pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
db-pvc   Bound    db-pv    1Gi        RWO                           111s
$ kubectl describe pvc db-pvc
...
Used By: <none>
...
apiVersion: v1
kind: Pod
metadata:
  name: app-consuming-pvc
spec:
  volumes:
  - name: app-storage
    persistentVolumeClaim:
      claimName: db-pvc
  containers:
  - image: alpine:3.18.2
    name: app
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 60; done;"]
    volumeMounts:
    - mountPath: "/mnt/data"
      name: app-storage
$ kubectl create -f app-consuming-pvc.yaml
pod/app-consuming-pvc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-consuming-pvc 1/1 Running 0 3s
$ kubectl describe pod app-consuming-pvc
...
Volumes:
app-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim \
in the same namespace)
ClaimName: db-pvc
ReadOnly: false
...
$ kubectl describe pvc db-pvc
...
Used By: app-consuming-pvc
...
$ kubectl exec app-consuming-pvc -it -- /bin/sh
/ # cd /mnt/data
/mnt/data # ls -l
total 0
/mnt/data # touch test.db
/mnt/data # ls -l
total 0
-rw-r--r--    1 root     root             0 Sep 29 23:59 test.db
$ kubectl get storageclass
NAME                 PROVISIONER                RECLAIMPOLICY   \
VOLUMEBINDINGMODE    ALLOWVOLUMEEXPANSION       AGE
standard (default)   k8s.io/minikube-hostpath   Delete          \
Immediate            false                      108d
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
$ kubectl create -f fast-sc.yaml
storageclass.storage.k8s.io/fast created
$ kubectl get storageclass
NAME   PROVISIONER            RECLAIMPOLICY   \
VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION       AGE
fast   kubernetes.io/gce-pd   Delete          \
Immediate           false                      4s
...
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
  storageClassName: standard
$ kubectl get pv,pvc
NAME CAPACITY \
ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS \
REASON AGE
persistentvolume/pvc-b820b919-f7f7-4c74-9212-ef259d421734 512Mi \
RWO Delete Bound default/db-pvc standard \
2s
NAME STATUS VOLUME \
CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/db-pvc Bound pvc-b820b919-f7f7-4c74-9212-ef259d421734 \
512Mi RWO standard 2s
Volumes are a cross-cutting concept applied in different areas of the exam. Know where to find the relevant documentation for defining a Volume as well as the multitude of ways to consume a Volume from a container. Definitely read Chapter 19 for a deep dive on how to mount ConfigMaps and Secrets as a Volume, and Chapter 8 for sharing a Volume between two containers.
Creating a PersistentVolume involves a couple of moving parts. Understand the configuration options for PersistentVolumes and PersistentVolumeClaims and how they play together. Try to emulate situations that prevent a successful binding of a PersistentVolumeClaim. Then fix the situation by taking counteractions. Internalize the short-form commands pv and pvc to save precious time during the exam.
apiVersion: v1
kind: Pod
metadata:
  name: business-app
spec:
  initContainers:
  - name: configurer
    image: busybox:1.36.1
    command: ['sh', '-c', 'echo Configuring application... && mkdir -p /usr/shared/app && echo -e "{\"dbConfig\":{\"host\":\"localhost\",\"port\":5432,\"dbName\":\"customers\"}}" > /usr/shared/app/config.json']
    volumeMounts:
    - name: configdir
      mountPath: "/usr/shared/app"
  containers:
  - image: bmuschko/nodejs-read-config:1.0.0
    name: web
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: configdir
      mountPath: "/usr/shared/app"
  volumes:
  - name: configdir
    emptyDir: {}
$ kubectl create -f init.yaml
pod/business-app created
$ kubectl get pod business-app
NAME           READY   STATUS     RESTARTS   AGE
business-app   0/1     Init:0/1   0          2s
$ kubectl get pod business-app
NAME           READY   STATUS    RESTARTS   AGE
business-app   1/1     Running   0          8s
$ kubectl logs business-app -c configurer
Configuring application...
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: nginx
    image: nginx:1.25.1
    volumeMounts:
    - name: logs-vol
      mountPath: /var/log/nginx
  - name: sidecar
    image: busybox:1.36.1
    command: ["sh", "-c", "while true; do if [ \"$(cat /var/log/nginx/error.log | grep 'error')\" != \"\" ]; then echo 'Error discovered!'; fi; sleep 10; done"]
    volumeMounts:
    - name: logs-vol
      mountPath: /var/log/nginx
  volumes:
  - name: logs-vol
    emptyDir: {}
$ kubectl create -f sidecar.yaml
pod/webserver created
$ kubectl get pods webserver
NAME        READY   STATUS              RESTARTS   AGE
webserver   0/2     ContainerCreating   0          4s
$ kubectl get pods webserver
NAME        READY   STATUS    RESTARTS   AGE
webserver   2/2     Running   0          5s
$ kubectl logs webserver -c sidecar
$ kubectl exec webserver -it -c sidecar -- /bin/sh
/ # wget -O- localhost?unknown
Connecting to localhost (127.0.0.1:80)
wget: server returned error: HTTP/1.1 404 Not Found
/ # cat /var/log/nginx/error.log
2020/07/18 17:26:46 [error] 29#29: *2 open() "/usr/share/nginx/html/unknown" \
failed (2: No such file or directory), client: 127.0.0.1, server: localhost, \
request: "GET /unknown HTTP/1.1", host: "localhost"
/ # exit
$ kubectl logs webserver -c sidecar
Error discovered!
apiVersion: v1
kind: Pod
metadata:
  name: adapter
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - 'while true; do echo "$(date) | $(du -sh ~)" >> /var/logs/diskspace.txt; sleep 5; done;'
    image: busybox:1.36.1
    name: app
    volumeMounts:
    - name: config-volume
      mountPath: /var/logs
  - image: busybox:1.36.1
    name: transformer
    args:
    - /bin/sh
    - -c
    - 'sleep 20; while true; do while read LINE; do echo "$LINE" | cut -f2 -d"|" >> $(date +%Y-%m-%d-%H-%M-%S)-transformed.txt; done < /var/logs/diskspace.txt; sleep 20; done;'
    volumeMounts:
    - name: config-volume
      mountPath: /var/logs
  volumes:
  - name: config-volume
    emptyDir: {}
$ kubectl create -f adapter.yaml
pod/adapter created
$ kubectl get pods adapter
NAME      READY   STATUS    RESTARTS   AGE
adapter   2/2     Running   0          10s
$ kubectl exec adapter --container=transformer -it -- /bin/sh
/ # cat /var/logs/diskspace.txt
Sun Jul 19 20:28:07 UTC 2020 | 4.0K    /root
Sun Jul 19 20:28:12 UTC 2020 | 4.0K    /root
/ # ls -l
total 40
-rw-r--r--    1 root     root            60 Jul 19 20:28 \
2020-07-19-20-28-28-transformed.txt
...
/ # cat 2020-07-19-20-28-28-transformed.txt
 4.0K    /root
 4.0K    /root
const express = require('express');
const app = express();
const rateLimit = require('express-rate-limit');
const https = require('https');

const rateLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 5,
  message: 'Too many requests have been made from this IP, please try again after an hour'
});

app.get('/test', rateLimiter, function (req, res) {
  console.log('Received request...');
  var id = req.query.id;
  var url = 'https://postman-echo.com/get?test=' + id;
  console.log("Calling URL %s", url);
  https.get(url, (resp) => {
    let data = '';
    resp.on('data', (chunk) => {
      data += chunk;
    });
    resp.on('end', () => {
      res.send(data);
    });
  }).on("error", (err) => {
    res.send(err.message);
  });
})

var server = app.listen(8081, function () {
  var port = server.address().port
  console.log("Ambassador listening on port %s...", port)
})
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
$ kubectl create -f ambassador.yaml
pod/rate-limiter created
$ kubectl get pods rate-limiter
NAME READY STATUS RESTARTS AGE
rate-limiter 2/2 Running 0 5s
$ kubectl exec rate-limiter -it -c business-app -- /bin/sh
# curl localhost:8080/test
{"args":{"test":"123"},"headers":{"x-forwarded-proto":"https", \
"x-forwarded-port":"443","host":"postman-echo.com", \
"x-amzn-trace-id":"Root=1-5f177dba-e736991e882d12fcffd23f34"}, \
"url":"https://postman-echo.com/get?test=123"}
...
# curl localhost:8080/test
Too many requests have been made from this IP, please try again after an hour
Pods can run multiple containers. You will need to understand the difference between init containers and sidecar containers and their respective life cycles. Practice accessing a specific container in a multi-container Pod with the help of the command-line option --container or -c.
Init containers see a lot of use in enterprise Kubernetes cluster environments. Understand the need for using them in their respective scenarios. Practice defining a Pod with one or even more init containers and observe their linear execution when creating the Pod. It’s important to experience the behavior of a Pod in failure situations that occur in an init container.
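You can provoke such a failure situation deliberately. In the following sketch, the init container exits with a nonzero code, so the Pod never reaches the Running phase; kubectl get pods will report a status such as Init:Error or Init:CrashLoopBackOff:

apiVersion: v1
kind: Pod
metadata:
  name: failing-init
spec:
  initContainers:
  - name: broken
    image: busybox:1.36.1
    command: ['sh', '-c', 'exit 1']
  containers:
  - name: app
    image: nginx:1.25.1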
Multi-container Pods are best understood by implementing a scenario for one of the established patterns. Based on what you’ve learned, come up with your own applicable use case and create a multi-container Pod to solve it. It’s helpful to be able to identify sidecar patterns and understand why they are important in practice and how to stand them up yourself. As you implement your own sidecars, you may notice that you have to brush up on your knowledge of bash.
$ kubectl run labeled-pod --image=nginx:1.25.1 \
  --labels=tier=backend,env=dev
pod/labeled-pod created
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  labels:
    env: dev
    tier: backend
spec:
  containers:
  - image: nginx:1.25.1
    name: nginx
$ kubectl describe pod labeled-pod | grep -C 2 Labels:
...
Labels: env=dev
tier=backend
...
$ kubectl get pod labeled-pod -o yaml | grep -C 1 labels:
metadata:
labels:
env: dev
tier: backend
...
$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
labeled-pod   1/1     Running   0          38m   env=dev,tier=backend
$ kubectl label pod labeled-pod region=eu
pod/labeled-pod labeled
$ kubectl get pod labeled-pod --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
labeled-pod   1/1     Running   0          22h   env=dev,region=eu,tier=backend
$ kubectl label pod labeled-pod region=us --overwrite
pod/labeled-pod labeled
$ kubectl get pod labeled-pod --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
labeled-pod   1/1     Running   0          22h   env=dev,region=us,tier=backend
$ kubectl label pod labeled-pod region-
pod/labeled-pod labeled
$ kubectl get pod labeled-pod --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
labeled-pod   1/1     Running   0          22h   env=dev,tier=backend
$ kubectl run frontend --image=nginx:1.25.1 --labels=env=prod,team=shiny
pod/frontend created
$ kubectl run backend --image=nginx:1.25.1 --labels=env=prod,team=legacy,\
app=v1.2.4
pod/backend created
$ kubectl run database --image=nginx:1.25.1 --labels=env=prod,team=storage
pod/database created
$ kubectl get pods --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
backend    1/1     Running   0          37s   app=v1.2.4,env=prod,team=legacy
database   1/1     Running   0          32s   env=prod,team=storage
frontend   1/1     Running   0          42s   env=prod,team=shiny
$ kubectl get pods -l env=prod --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
backend    1/1     Running   0          37s   app=v1.2.4,env=prod,team=legacy
database   1/1     Running   0          32s   env=prod,team=storage
frontend   1/1     Running   0          42s   env=prod,team=shiny
$ kubectl get pods -l 'team in (shiny, legacy)' --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
backend    1/1     Running   0          19m   app=v1.2.4,env=prod,team=legacy
frontend   1/1     Running   0          20m   env=prod,team=shiny
$ kubectl get pods -l 'team in (shiny, legacy)',app=v1.2.4 --show-labels
NAME      READY   STATUS    RESTARTS   AGE   LABELS
backend   1/1     Running   0          29m   app=v1.2.4,env=prod,team=legacy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-network-policy
spec:
  podSelector:
    matchLabels:
      tier: frontend
...
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/version: "1.25.1"
    app.kubernetes.io/component: server
spec:
  containers:
  - name: nginx
    image: nginx:1.25.1
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  annotations:
    commit: 866a8dc
    author: 'Benjamin Muschko'
    branch: 'bm/bugfix'
spec:
  containers:
  - image: nginx:1.25.1
    name: nginx
$ kubectl describe pod annotated-pod | grep -C 2 Annotations:
...
Annotations: author: Benjamin Muschko
branch: bm/bugfix
commit: 866a8dc
...
$ kubectl get pod annotated-pod -o yaml | grep -C 3 annotations:
metadata:
annotations:
author: Benjamin Muschko
branch: bm/bugfix
commit: 866a8dc
...
$ kubectl annotate pod annotated-pod oncall='800-555-1212'
pod/annotated-pod annotated
$ kubectl annotate pod annotated-pod oncall='800-555-2000' --overwrite
pod/annotated-pod annotated
$ kubectl annotate pod annotated-pod oncall-
pod/annotated-pod annotated
apiVersion: v1
kind: Namespace
metadata:
  name: secured
  annotations:
    pod-security.kubernetes.io/enforce: "baseline"
Labels are an extremely important concept in Kubernetes, as many other primitives work with label selection. Practice how to declare labels for different objects, and use the -l command-line option to query for them based on equality-based and set-based requirements. Label selection in a YAML manifest might look slightly different depending on the API version of the spec. Extensively practice label selection for primitives that use them heavily.
All you need to know about annotations is their declaration from the command line and in a YAML manifest. Be aware that annotations are meant only for assigning metadata to objects and they cannot be queried for.
$ kubectl create deployment app-cache --image=memcached:1.6.8 --replicas=4
deployment.apps/app-cache created
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-cache
  labels:
    app: app-cache
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app-cache
  template:
    metadata:
      labels:
        app: app-cache
    spec:
      containers:
      - name: memcached
        image: memcached:1.6.8
$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
app-cache   4/4     4            4           125m
| Column Title | Description |
|---|---|
| READY | Lists the number of replicas available to end users in the format of <ready>/<desired>. The number of desired replicas corresponds to the value of spec.replicas. |
| UP-TO-DATE | Lists the number of replicas that have been updated to achieve the desired state. |
| AVAILABLE | Lists the number of replicas available to end users. |
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
app-cache-596bc5586d-84dkv   1/1     Running   0          6h5m
app-cache-596bc5586d-8bzfs   1/1     Running   0          6h5m
app-cache-596bc5586d-rc257   1/1     Running   0          6h5m
app-cache-596bc5586d-tvm4d   1/1     Running   0          6h5m
$ kubectl describe deployment app-cache
Name: app-cache
Namespace: default
CreationTimestamp: Sat, 07 Aug 2021 09:44:18 -0600
Labels: app=app-cache
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=app-cache
Replicas: 4 desired | 4 updated | 4 total | 4 available | \
0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=app-cache
Containers:
memcached:
Image: memcached:1.6.10
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: app-cache-596bc5586d (4/4 replicas created)
Events: <none>
$ kubectl get deployments,pods,replicasets
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app-cache   4/4     4            4           6h47m

NAME                             READY   STATUS    RESTARTS   AGE
pod/app-cache-596bc5586d-84dkv   1/1     Running   0          6h47m
pod/app-cache-596bc5586d-8bzfs   1/1     Running   0          6h47m
pod/app-cache-596bc5586d-rc257   1/1     Running   0          6h47m
pod/app-cache-596bc5586d-tvm4d   1/1     Running   0          6h47m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/app-cache-596bc5586d   4         4         4       6h47m
$ kubectl delete deployment app-cache
deployment.apps "app-cache" deleted
$ kubectl get deployments,pods,replicasets
No resources found in default namespace.
$ kubectl apply -f deployment.yaml
$ kubectl edit deployment web-server
$ kubectl set image deployment web-server nginx=nginx:1.25.2
$ kubectl replace -f deployment.yaml
$ kubectl patch deployment web-server -p '{"spec":{"template":{"spec":\
{"containers":[{"name":"nginx","image":"nginx:1.25.2"}]}}}}'
$ kubectl set image deployment app-cache memcached=memcached:1.6.10
deployment.apps/app-cache image updated
$ kubectl rollout status deployment app-cache
Waiting for rollout to finish: 2 out of 4 new replicas have been updated...
deployment "app-cache" successfully rolled out
$ kubectl rollout history deployment app-cache
deployment.apps/app-cache
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
$ kubectl rollout history deployments app-cache --revision=2
deployment.apps/app-cache with revision #2
Pod Template:
Labels: app=app-cache
pod-template-hash=596bc5586d
Containers:
memcached:
Image: memcached:1.6.10
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
$ kubectl annotate deployment app-cache kubernetes.io/change-cause=\
"Image updated to 1.6.10"
deployment.apps/app-cache annotated
$ kubectl rollout history deployment app-cache
deployment.apps/app-cache
REVISION  CHANGE-CAUSE
1         <none>
2         Image updated to 1.6.10
$ kubectl rollout undo deployment app-cache --to-revision=1
deployment.apps/app-cache rolled back
$ kubectl rollout history deployment app-cache
deployment.apps/app-cache
REVISION  CHANGE-CAUSE
2         Image updated to 1.6.10
3         <none>
$ kubectl scale deployment app-cache --replicas=6
deployment.apps/app-cache scaled
$ kubectl get pods -w
NAME                         READY   STATUS              RESTARTS   AGE
app-cache-5d6748d8b9-6cc4j   1/1     ContainerCreating   0          11s
app-cache-5d6748d8b9-6rmlj   1/1     Running             0          28m
app-cache-5d6748d8b9-6z7g5   1/1     ContainerCreating   0          11s
app-cache-5d6748d8b9-96dzf   1/1     Running             0          28m
app-cache-5d6748d8b9-jkjsv   1/1     Running             0          28m
app-cache-5d6748d8b9-svrxw   1/1     Running             0          28m
$ kubectl autoscale deployment app-cache --cpu-percent=80 --min=3 --max=5
horizontalpodautoscaler.autoscaling/app-cache autoscaled
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-cache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-cache
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
$ kubectl get hpa
NAME        REFERENCE              TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
app-cache   Deployment/app-cache   <unknown>/80%   3         5         4          58s
# ...
spec:
# ...
  template:
  # ...
    spec:
      containers:
      - name: memcached
        # ...
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
$ kubectl get hpa
NAME        REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
app-cache   Deployment/app-cache   15%/80%   3         5         4          58s
$ kubectl describe hpa app-cache
Name: app-cache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sun, 15 Aug 2021 \
15:54:11 -0600
Reference: Deployment/app-cache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (1m) / 80%
Min replicas: 3
Max replicas: 5
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully \
calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is less \
than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 13m horizontal-pod-autoscaler New size: 3; \
reason: All metrics below target
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-cache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-cache
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: memcached
        ...
        resources:
          requests:
            cpu: 250m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
$ kubectl get hpa
NAME        REFERENCE              TARGETS                 MINPODS   MAXPODS   REPLICAS   AGE
app-cache   Deployment/app-cache   1994752/500Mi, 0%/80%   3         5         3          2m14s
Given that a Deployment is such a central primitive in Kubernetes, you can expect that the exam will test you on it. Know how to create a Deployment and learn how to scale to multiple replicas. One of the superior features of a Deployment is its rollout functionality for new revisions. Practice how to roll out a new revision, inspect the rollout history, and roll back to a previous revision.
The number of replicas controlled by a Deployment can be scaled up or down using the Horizontal Pod Autoscaler (HPA). An HPA defines thresholds for resources like CPU and memory that will tell the object that a scaling event needs to happen. It’s important to understand that the HPA functions properly only if you install the metrics server component and define resource requests and limits for containers.
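A quick way to check whether the metrics server is installed and serving data is the kubectl top command; on minikube, the component can be enabled as an addon. The output values below are illustrative:

$ minikube addons enable metrics-server
$ kubectl top pods
NAME                         CPU(cores)   MEMORY(bytes)
app-cache-596bc5586d-84dkv   1m           4Mi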
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 40%
      maxSurge: 10%
  minReadySeconds: 60
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.23-alpine
        ports:
        - containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: 80
The percentage of Pods that can be unavailable during the update.
The percentage of Pods that can temporarily exceed the total number of replicas.
The number of seconds for which the readiness probe in a Pod needs to be healthy until the rollout process can continue.
The readiness probe for all replicas referred to by spec.minReadySeconds.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 4
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.23-alpine
        ports:
        - containerPort: 80
          protocol: TCP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-blue
spec:
  replicas: 4
  selector:
    matchLabels:
      type: blue
  template:
    metadata:
      labels:
        type: blue
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.23-alpine
        ports:
        - containerPort: 80
          protocol: TCP
Assigns the label type: blue to every replica managed by the corresponding ReplicaSet.
The old application version 2.4.23-alpine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-green
spec:
  replicas: 4
  selector:
    matchLabels:
      type: green
  template:
    metadata:
      labels:
        type: green
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.57-alpine
        ports:
        - containerPort: 80
          protocol: TCP
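Traffic is switched between the two stacks by a Service whose label selector points at either the blue or the green set of Pods. The following Service is a sketch with an illustrative name and port; changing spec.selector.type from blue to green performs the cut-over:

apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  selector:
    type: blue
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP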
The exam may confront you with different deployment strategies. You need to understand how to implement the most common strategies and how to modify an existing deployment scenario. Learn how to configure the built-in strategies in the Deployment primitive and their options for fine-tuning the runtime behavior.
You can implement even more sophisticated deployment scenarios with the help of the Deployment and Service primitives. Examples are the blue-green and canary deployment strategies, which require a multi-phased rollout process. Expose yourself to implementation techniques and rollout procedures. Operators provided by the Kubernetes community, e.g., Argo Rollouts, offer higher-level abstractions for more sophisticated deployment strategies. The exam does not require you to understand external tooling to implement deployment strategies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 6
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - image: grafana/grafana:9.5.9
        name: grafana
        ports:
        - containerPort: 3000
$ helm repo list
Error: no repositories to show
$ helm repo add jenkinsci https://charts.jenkins.io/
"jenkinsci" has been added to your repositories
$ helm repo list
NAME        URL
jenkinsci   https://charts.jenkins.io/
$ helm search repo jenkinsci
NAME                CHART VERSION   APP VERSION   DESCRIPTION
jenkinsci/jenkins   4.6.5           2.414.2       ...
$ helm install my-jenkins jenkinsci/jenkins --version 4.6.4
NAME: my-jenkins
LAST DEPLOYED: Thu Sep 28 09:47:21 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
...
$ kubectl get all
NAME               READY   STATUS    RESTARTS   AGE
pod/my-jenkins-0   2/2     Running   0          12m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   ...
service/my-jenkins         ClusterIP   10.99.166.189    <none>        ...
service/my-jenkins-agent   ClusterIP   10.110.246.141   <none>        ...

NAME                          READY   AGE
statefulset.apps/my-jenkins   1/1     12m
$ helm show values jenkinsci/jenkins
...
controller:
  # When enabling LDAP or another non-Jenkins identity source, the built-in
  # admin account will no longer exist.
  # If you disable the non-Jenkins identity store and instead use the Jenkins
  # internal one, you should revert controller.adminUser to your preferred
  # admin user:
  adminUser: "admin"
  # adminPassword: <defaults to random>
...
$ helm install my-jenkins jenkinsci/jenkins --version 4.6.4 \
  --set controller.adminUser=boss --set controller.adminPassword=password \
  -n jenkins --create-namespace
$ helm list --all-namespaces
NAME         NAMESPACE   REVISION   UPDATED         STATUS     CHART
my-jenkins   default     1          2023-09-28...   deployed   jenkins-4.6.4
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jenkinsci" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade my-jenkins jenkinsci/jenkins --version 4.6.5
Release "my-jenkins" has been upgraded. Happy Helming!
...
$ helm uninstall my-jenkins
release "my-jenkins" uninstalled
Unfortunately, the exam FAQ does not mention any details about the Helm executable or the Helm version to expect. It’s fair to assume that it will be preinstalled for you and therefore you do not need to memorize installation instructions. You will be able to browse the Helm documentation pages.
Artifact Hub provides a web-based UI for Helm charts. It’s worthwhile to explore the search capabilities and the details provided by individual charts, more specifically the repository the chart file lives in, and its configurable values. During the exam, you’ll likely not be asked to navigate to Artifact Hub because its URL hasn’t been listed as one of the permitted documentation pages. You can assume that the exam question will provide you with the repository URL.
The exam does not ask you to build and publish your own chart file. All you need to understand is how to consume an existing chart. You will need to be familiar with the helm repo add command to register a repository, the helm search repo to find available chart versions, and the helm install command to install a chart. You should have a basic understanding of the upgrade process for an already installed Helm chart using the helm upgrade command.
$ kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
$ kubectl create -f hpa.yaml Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, \ unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
The autoscaling/v2beta2 API version of HorizontalPodAutoscaler is no longer served as of v1.26.

Migrate manifests and API clients to use the autoscaling/v2 API version, available since v1.23. All existing persisted objects are accessible via the new API.

Deprecated API Migration Guide
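Following that advice, the fix here is a one-line change to the apiVersion attribute; the remaining attributes carry over to autoscaling/v2 unchanged:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10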
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
$ kubectl apply -f clusterrole.yaml error: resource mapping not found for name: "pod-reader" namespace: "" from \ "clusterrole.yaml": no matches for kind "ClusterRole" in version \ "rbac.authorization.k8s.io/v1beta1"
The rbac.authorization.k8s.io/v1beta1 API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding is no longer served as of v1.22.

Migrate manifests and API clients to use the rbac.authorization.k8s.io/v1 API version, available since v1.8. All existing persisted objects are accessible via the new APIs. No notable changes.

Deprecated API Migration Guide
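Again, swapping in the supported API version is all that's required; the rules themselves are unaffected:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]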
The exam will likely expose you to an object that uses a deprecated API. You will need to keep the Deprecated API Migration Guide documentation page handy to tackle such scenarios. The page describes deprecated, removed, and replaced APIs categorized by Kubernetes version. Use the browser’s search capability to quickly find relevant information about an API. The quick reference links on the right side of the page let you quickly navigate to a specific Kubernetes version.
Even after an application has started up, it may still need to execute configuration procedures—for example, connecting to a database and preparing data. This probe checks if the application is ready to serve incoming requests. Figure 14-1 shows the readiness probe.
Once the application is running, you want to make sure that it still works as expected without issues. This probe periodically checks for the application's responsiveness. Kubernetes restarts the container automatically if the probe considers the application to be in an unhealthy state, as shown in Figure 14-2.
Legacy applications in particular can take a long time to start up—possibly several minutes. A startup probe can be instantiated to wait for a predefined amount of time before a liveness probe is allowed to start probing. By setting up a startup probe, you can prevent the application process from being overwhelmed with probing requests. Startup probes kill the container if the application can’t start within the set time frame. Figure 14-3 illustrates the behavior of a startup probe.
| Method | Option | Description |
|---|---|---|
| Custom command | exec.command | Executes a command inside the container (e.g., a cat command on a file). The probe is considered successful if the command exits with status code 0. |
| HTTP GET request | httpGet | Sends an HTTP GET request to an endpoint exposed by the application. An HTTP response code in the range of 200 to 399 indicates success. Any other response code is regarded as an error. |
| TCP socket connection | tcpSocket | Tries to open a TCP socket connection to a port. If the connection could be established, the probing attempt was successful. The inability to connect is accounted for as an error. |
| gRPC | grpc | The application implements the gRPC Health Checking Protocol, which verifies whether the server is able to handle a Remote Procedure Call (RPC). |
| Attribute | Default value | Description |
|---|---|---|
| initialDelaySeconds | 0 | Delay in seconds until the first check is executed. |
| periodSeconds | 10 | Interval for executing a check (e.g., every 20 seconds). |
| timeoutSeconds | 1 | Maximum number of seconds until the check operation times out. |
| successThreshold | 1 | Number of successful check attempts until the probe is considered successful after a failure. |
| failureThreshold | 3 | Number of failed check attempts before the probe is marked failed and action is taken. |
| terminationGracePeriodSeconds | 30 | Grace period before forcing a container to stop upon failure. |
apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod
spec:
  containers:
  - image: bmuschko/nodejs-hello-world:1.0.0
    name: hello-world
    ports:
    - name: nodejs-port
      containerPort: 3000
    readinessProbe:
      httpGet:
        path: /
        port: nodejs-port
      initialDelaySeconds: 2
      periodSeconds: 8
You can assign a name to a port so that it can be referenced in a probe.
Instead of assigning port 3000 again, we simply use the port name.
$ kubectl apply -f readiness-probe.yaml
pod/readiness-pod created
$ kubectl get pod readiness-pod
NAME            READY   STATUS    RESTARTS   AGE
readiness-pod   0/1     Running   0          6s
$ kubectl get pod readiness-pod
NAME            READY   STATUS    RESTARTS   AGE
readiness-pod   1/1     Running   0          68s
$ kubectl describe pod readiness-pod
...
Containers:
hello-world:
...
Readiness: http-get http://:nodejs-port/ delay=2s timeout=1s \
period=8s #success=1 #failure=3
...
apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - image: busybox:1.36.1
    name: app
    args:
    - /bin/sh
    - -c
    - 'while true; do touch /tmp/heartbeat.txt; sleep 5; done;'
    livenessProbe:
      exec:
        command:
        - test `find /tmp/heartbeat.txt -mmin -1`
      initialDelaySeconds: 5
      periodSeconds: 30
$ kubectl apply -f liveness-probe.yaml
pod/liveness-pod created
$ kubectl get pod liveness-pod
NAME READY STATUS RESTARTS AGE
pod/liveness-pod 1/1 Running 0 22s
$ kubectl describe pod liveness-pod
...
Containers:
app:
...
Restart Count: 0
Liveness: exec [test `find /tmp/heartbeat.txt -mmin -1`] delay=5s \
timeout=1s period=30s #success=1 #failure=3
...
apiVersion: v1
kind: Pod
metadata:
  name: startup-pod
spec:
  containers:
  - image: httpd:2.4.46
    name: http-server
    startupProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 15
    livenessProbe:
      ...
$ kubectl apply -f startup-probe.yaml
pod/startup-pod created
$ kubectl get pod startup-pod
NAME READY STATUS RESTARTS AGE
pod/startup-pod 1/1 Running 0 31s
$ kubectl describe pod startup-pod
...
Containers:
http-server:
...
Startup: tcp-socket :80 delay=3s timeout=1s period=15s \
#success=1 #failure=3
...
To prepare for this section of the exam, focus on understanding and using health probes. You should understand the purpose of startup, readiness, and liveness probes and practice how to configure them. In your Kubernetes cluster, try to emulate success and failure conditions to see the effects of probes and the actions they take.
You can choose from a variety of verification methods applicable to probes. Gain a high-level understanding when to apply which verification method, and how to configure each one of them.
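As an illustration of the one method this chapter doesn't demonstrate, a liveness probe using gRPC only needs the port serving the health endpoint. The following is a minimal sketch; the image and port 9090 are hypothetical and assume an application that implements the gRPC Health Checking Protocol:

apiVersion: v1
kind: Pod
metadata:
  name: grpc-pod
spec:
  containers:
  - image: my-registry/grpc-app:1.0.0   # hypothetical image with a gRPC health endpoint
    name: grpc-app
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090                      # port serving the gRPC Health Checking Protocol
      initialDelaySeconds: 10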
$ kubectl get pods
NAME              READY   STATUS         RESTARTS   AGE
misbehaving-pod   0/1     ErrImagePull   0          2s
| Status | Root cause | Potential fix |
|---|---|---|
| ErrImagePull / ImagePullBackOff | Image could not be pulled from registry. | Check correct image name, check that image name exists in registry, verify network access from node to registry, ensure proper authentication. |
| CrashLoopBackOff | Application or command run in container crashes. | Check command executed in container, ensure that image can properly execute (e.g., by creating a container with Docker). |
| CreateContainerConfigError | ConfigMap or Secret referenced by container cannot be found. | Check correct name of the configuration object, verify the existence of the configuration object in the namespace. |
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
secret-pod 0/1 ContainerCreating 0 4m57s
$ kubectl describe pod secret-pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully \
assigned
default/secret-pod \
to minikube
Warning FailedMount 3m15s kubelet, minikube Unable to attach or \
mount volumes: \
unmounted \
volumes=[mysecret], \
unattached volumes= \
[default-token-bf8rh \
mysecret]: timed out \
waiting for the \
condition
Warning FailedMount 68s (x10 over 5m18s) kubelet, minikube MountVolume.SetUp \
failed for volume \
"mysecret" : secret \
"mysecret" not found
Warning FailedMount 61s kubelet, minikube Unable to attach or \
mount volumes: \
unmounted volumes= \
[mysecret], \
unattached \
volumes=[mysecret \
default-token-bf8rh \
]: timed out \
waiting for the \
condition
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
3m14s Warning BackOff pod/custom-cmd Back-off \
restarting \
failed container
2s Warning FailedNeedsStart cronjob/google-ping Cannot determine \
if job needs to \
be started: too \
many missed start \
time (> 100). Set \
or decrease \
.spec. \
startingDeadline \
Seconds or check \
clock skew
$ kubectl create deployment nginx --image=nginx:1.24.0 --replicas=3 --port=80
deployment.apps/nginx created

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-595dff4799-pfgdg   1/1     Running   0          6m25s
nginx-595dff4799-ph4js   1/1     Running   0          6m25s
nginx-595dff4799-s76s8   1/1     Running   0          6m25s

$ kubectl port-forward nginx-595dff4799-ph4js 2500:80
Forwarding from 127.0.0.1:2500 -> 80
Forwarding from [::1]:2500 -> 80

$ curl -Is localhost:2500 | head -n 1
HTTP/1.1 200 OK
apiVersion: v1
kind: Pod
metadata:
  name: incorrect-cmd-pod
spec:
  containers:
  - name: test-container
    image: busybox:1.36.1
    command: ["/bin/sh", "-c", "unknown"]
$ kubectl create -f crash-loop-backoff.yaml
pod/incorrect-cmd-pod created
$ kubectl get pods incorrect-cmd-pod
NAME                READY   STATUS             RESTARTS   AGE
incorrect-cmd-pod   0/1     CrashLoopBackOff   5          3m20s
$ kubectl logs incorrect-cmd-pod
/bin/sh: unknown: not found
apiVersion: v1
kind: Pod
metadata:
  name: failing-pod
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do echo $(date) >> ~/tmp/curr-date.txt; sleep 5; done;
    image: busybox:1.36.1
    name: failing-pod
$ kubectl create -f failing-pod.yaml
pod/failing-pod created
$ kubectl get pods failing-pod
NAME          READY   STATUS    RESTARTS   AGE
failing-pod   1/1     Running   0          5s
$ kubectl logs failing-pod
/bin/sh: can't create /root/tmp/curr-date.txt: nonexistent directory
$ kubectl exec failing-pod -it -- /bin/sh
# mkdir -p ~/tmp
# cd ~/tmp
# ls -l
total 4
-rw-r--r-- 1 root root 112 May 9 23:52 curr-date.txt
apiVersion: v1
kind: Pod
metadata:
  name: minimal-pod
spec:
  containers:
  - image: k8s.gcr.io/pause:3.1
    name: pause
$ kubectl create -f minimal-pod.yaml
pod/minimal-pod created
$ kubectl get pods minimal-pod
NAME          READY   STATUS    RESTARTS   AGE
minimal-pod   1/1     Running   0          8s
$ kubectl exec minimal-pod -it -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:349: starting \
container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file \
or directory": unknown
command terminated with exit code 126
$ kubectl debug -it minimal-pod --image=busybox
Defaulting debug container name to debugger-jf98g.
If you don't see a command prompt, try pressing enter.
/ # pwd
/
/ # exit
Session ended, resume using 'kubectl attach minimal-pod -c debugger-jf98g \
-i -t' command when the pod is running
$ minikube addons enable metrics-server
The 'metrics-server' addon is enabled

$ kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   283m         14%    1262Mi          32%

$ kubectl top pod frontend
NAME       CPU(cores)   MEMORY(bytes)
frontend   0m           2Mi
In this chapter, we mainly focused on troubleshooting problematic Pods and containers. Practice all relevant kubectl commands that can help with diagnosing issues. Refer to the Kubernetes documentation to learn more about debugging other Kubernetes resource types.
Monitoring a Kubernetes cluster is an important aspect of successfully operating in a real-world environment. You should read up on commercial monitoring products and which data the metrics server can collect. You can assume that the exam environment provides you with an installation of the metrics server. Learn how to use the kubectl top command to render Pod and node resource metrics and how to interpret them.
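One variation worth practicing: the --containers flag breaks the metrics down per container, which helps with multi-container Pods. The output below is a sketch; the container name app is illustrative:

$ kubectl top pod frontend --containers
POD        NAME   CPU(cores)   MEMORY(bytes)
frontend   app    0m           2Mi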
Assume you are in charge of managing a web-based application. The Kubernetes objects needed to manage the application consist of a Deployment for running an application in Pods and the Service object that routes the network traffic to the replicas.
A smoke test should be triggered automatically after deploying the Kubernetes objects responsible for operating the application. At runtime, the smoke test executes HTTPS calls to an endpoint of the application by targeting the Service’s DNS name. The result of the smoke test (in the case of success or failure) will be sent to an external service so it can be rendered as charts and graphs in a dashboard.
To implement this functionality, you could decide to write a CRD and controller. In this chapter, we are only going to cover the CRD for a smoke test, not the controller that would do the heavy lifting of executing the smoke test.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: smoketests.stable.bmuschko.com
spec:
  group: stable.bmuschko.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              service:
                type: string
              path:
                type: string
              timeout:
                type: integer
              retries:
                type: integer
  scope: Namespaced
  names:
    plural: smoketests
    singular: smoketest
    kind: SmokeTest
    shortNames:
    - st
The combination of the identifiers <plural>.<group>.
The API group to be used by the CRD.
The versions supported by the CRD. A version can define 0..n attributes.
The attributes to be set by the custom type.
The identifiers for the custom type, e.g., the kind and the singular/plural/short names.
$ kubectl apply -f smoketest-resource.yaml
customresourcedefinition.apiextensions.k8s.io/smoketests.stable.bmuschko.com \
created
apiVersion: stable.bmuschko.com/v1
kind: SmokeTest
metadata:
  name: backend-smoke-test
spec:
  service: backend
  path: /health
  timeout: 600
  retries: 3
The group and version of the custom kind.
The kind defined by the CRD.
The attributes and their values that make the custom kind configurable.
$ kubectl apply -f smoketest.yaml
smoketest.stable.bmuschko.com/backend-smoke-test created

$ kubectl get smoketest backend-smoke-test
NAME                 AGE
backend-smoke-test   12s
$ kubectl delete smoketest backend-smoke-test
smoketest.stable.bmuschko.com "backend-smoke-test" deleted

$ kubectl api-resources --api-group=stable.bmuschko.com
NAME         SHORTNAMES   APIVERSION               NAMESPACED   KIND
smoketests   st           stable.bmuschko.com/v1   true         SmokeTest

$ kubectl get crds
NAME                             CREATED AT
smoketests.stable.bmuschko.com   2023-05-04T14:49:40Z
You are not expected to implement a CRD schema. All you need to know is how to discover and use them with kubectl. Controller implementations are definitely outside the scope of the exam.
Learn how to use the kubectl get crds command to discover installed CRDs, and how to create objects from a CRD schema. If you want to explore further, install an open source CRD, such as the Prometheus operator or the Jaeger operator, and inspect its schema.
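Because a CRD like the one above ships an OpenAPI schema, kubectl explain renders its field-level documentation just as it does for built-in types, which is a quick way to discover the attributes an object may set (output omitted here):

$ kubectl explain smoketests.spec
$ kubectl explain smoketests.spec.retries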
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /Users/bmuschko/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 07:33:01 MDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://127.0.0.1:63709
  name: minikube
contexts:
- context:
    cluster: minikube
    user: bmuschko
  name: bmuschko
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 07:33:01 MDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
preferences: {}
users:
- name: bmuschko
  user:
    client-key-data: <REDACTED>
- name: minikube
  user:
    client-certificate: /Users/bmuschko/.minikube/profiles/minikube/client.crt
    client-key: /Users/bmuschko/.minikube/profiles/minikube/client.key
$ kubectl config view
apiVersion: v1
kind: Config
clusters:
...

$ kubectl config current-context
minikube

$ kubectl config use-context bmuschko
Switched to context "bmuschko".

$ kubectl config set-credentials myuser \
  --client-key=myuser.key --client-certificate=myuser.crt \
  --embed-certs=true
The user or service account that wants to access a resource
The Kubernetes API resource type (e.g., a Deployment or node)
The operation that can be executed on the resource (e.g., creating a Pod or deleting a Service)
The Role API primitive declares the API resources and their operations this rule should operate on in a specific namespace. For example, you may want to say “allow listing and deleting of Pods,” or you may express “allow watching the logs of Pods,” or even both with the same Role. Any operation that is not spelled out explicitly is disallowed as soon as it is bound to the subject.
The RoleBinding API primitive binds the Role object to the subject(s) in a specific namespace. It is the glue for making the rules active. For example, you may want to say “bind the Role that permits updating Services to the user John Doe.”
| Default ClusterRole | Description |
|---|---|
| cluster-admin | Allows read and write access to resources across all namespaces. |
| admin | Allows read and write access to resources in a namespace, including Roles and RoleBindings. |
| edit | Allows read and write access to resources in a namespace, except Roles and RoleBindings. Provides access to Secrets. |
| view | Allows read-only access to resources in a namespace, except Roles, RoleBindings, and Secrets. |
$ kubectl create role read-only --verb=list,get,watch \
  --resource=pods,deployments,services
role.rbac.authorization.k8s.io/read-only created

$ kubectl get roles
NAME        CREATED AT
read-only   2021-06-23T19:46:48Z

$ kubectl describe role read-only
Name:         read-only
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  pods              []                 []              [list get watch]
  services          []                 []              [list get watch]
  deployments.apps  []                 []              [list get watch]

$ kubectl create rolebinding read-only-binding --role=read-only --user=bmuschko
rolebinding.rbac.authorization.k8s.io/read-only-binding created
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-only
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bmuschko
$ kubectl get rolebindings
NAME                ROLE             AGE
read-only-binding   Role/read-only   24h

$ kubectl describe rolebinding read-only-binding
Name:         read-only-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  read-only
Subjects:
  Kind  Name      Namespace
  ----  ----      ---------
  User  bmuschko
$ kubectl config current-context
minikube
$ kubectl create deployment myapp --image=nginx:1.25.2 --port=80 --replicas=2
deployment.apps/myapp created
$ kubectl config use-context bmuschko-context
Switched to context "bmuschko-context".

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
myapp   2/2     2            2           8s

$ kubectl get replicasets
Error from server (Forbidden): replicasets.apps is forbidden: User "bmuschko" \
cannot list resource "replicasets" in API group "apps" in the namespace "default"

$ kubectl delete deployment myapp
Error from server (Forbidden): deployments.apps "myapp" is forbidden: User \
"bmuschko" cannot delete resource "deployments" in API group "apps" in the \
namespace "default"

$ kubectl auth can-i --list --as bmuschko
Resources         Non-Resource URLs  Resource Names  Verbs
...
pods              []                 []              [list get watch]
services          []                 []              [list get watch]
deployments.apps  []                 []              [list get watch]
$ kubectl auth can-i list pods --as bmuschko
yes
$ kubectl get serviceaccounts
NAME      SECRETS   AGE
default   0         4d

$ kubectl create serviceaccount cicd-bot
serviceaccount/cicd-bot created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-bot
apiVersion: v1
kind: Namespace
metadata:
  name: k97
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-api
  namespace: k97
---
apiVersion: v1
kind: Pod
metadata:
  name: list-objects
  namespace: k97
spec:
  serviceAccountName: sa-api
  containers:
  - name: pods
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H \
      "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/\
      serviceaccount/token)" https://kubernetes.default.svc.cluster.\
      local/api/v1/namespaces/k97/pods; sleep 10; done']
  - name: deployments
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H \
      "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/\
      serviceaccount/token)" https://kubernetes.default.svc.cluster.\
      local/apis/apps/v1/namespaces/k97/deployments; sleep 10; done']
The service account referenced by name used for communicating with the Kubernetes API.
Performs an API call to retrieve the list of Pods in the namespace k97.
Performs an API call to retrieve the list of Deployments in the namespace k97.
$ kubectl apply -f setup.yaml
namespace/k97 created
serviceaccount/sa-api created
pod/list-objects created
$ kubectl logs list-objects -c pods -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:k97:sa-api\" \
cannot list resource \"pods\" in API group \"\" in the \
namespace \"k97\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
$ kubectl logs list-objects -c deployments -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps is forbidden: User \
\"system:serviceaccount:k97:sa-api\" cannot list resource \
\"deployments\" in API group \"apps\" in the namespace \
\"k97\"",
"reason": "Forbidden",
"details": {
"group": "apps",
"kind": "deployments"
},
"code": 403
}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods-role
  namespace: k97
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
$ kubectl apply -f role.yaml
role.rbac.authorization.k8s.io/list-pods-role created
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccount-pod-rolebinding
  namespace: k97
subjects:
- kind: ServiceAccount
  name: sa-api
roleRef:
  kind: Role
  name: list-pods-role
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f rolebinding.yaml
rolebinding.rbac.authorization.k8s.io/serviceaccount-pod-rolebinding created
$ kubectl logs list-objects -c pods -n k97
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "628"
},
"items": [
{
"metadata": {
"name": "list-objects",
"namespace": "k97",
...
}
]
}
$ kubectl logs list-objects -c deployments -n k97
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps is forbidden: User \
\"system:serviceaccount:k97:sa-api\" cannot list resource \
\"deployments\" in API group \"apps\" in the namespace \
\"k97\"",
"reason": "Forbidden",
"details": {
"group": "apps",
"kind": "deployments"
},
"code": 403
}
$ kube-apiserver --enable-admission-plugins=NamespaceLifecycle,PodSecurity,\
LimitRanger
This chapter demonstrated some ways to communicate with the Kubernetes API. We performed API requests by switching to a user context and with the help of a RESTful API call using curl. Explore the Kubernetes API and its endpoints on your own for broader exposure.
Anonymous user requests to the Kubernetes API will not allow any substantial operations. For requests coming from a user or a service account, you will need to carefully analyze permissions granted to the subject. Learn the ins and outs of defining RBAC rules by creating the relevant objects to control permissions. Service accounts automount a token when used in a Pod. Expose the token as a volume only if you are intending to make API calls from the Pod.
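Conversely, if a Pod has no need to call the API, you can switch off the token automounting altogether. The following minimal sketch (the Pod name is illustrative) opts out on the Pod level:

apiVersion: v1
kind: Pod
metadata:
  name: no-api-access       # illustrative name
spec:
  serviceAccountName: cicd-bot
  automountServiceAccountToken: false   # no token volume will be mounted
  containers:
  - name: app
    image: nginx:1.25.3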
For the exam you will not need to understand how to configure admission control plugins in the API server. Developers interact with them, but configuration tasks are up to the cluster administrator. Read up on different plugins to gain a better understanding of the admission control landscape.
| YAML attribute | Description | Example value |
|---|---|---|
| spec.containers[].resources.requests.cpu | CPU resource type | 500m |
| spec.containers[].resources.requests.memory | Memory resource type | 64Mi |
| spec.containers[].resources.requests.hugepages-<size> | Huge page resource type | hugepages-2Mi: 60Mi |
| spec.containers[].resources.requests.ephemeral-storage | Ephemeral storage resource type | 4Gi |
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "256Mi"
        cpu: "1"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
| YAML attribute | Description | Example value |
|---|---|---|
| spec.containers[].resources.limits.cpu | CPU resource type | 500m |
| spec.containers[].resources.limits.memory | Memory resource type | 64Mi |
| spec.containers[].resources.limits.hugepages-<size> | Huge page resource type | hugepages-2Mi: 60Mi |
| spec.containers[].resources.limits.ephemeral-storage | Ephemeral storage resource type | 4Gi |
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      limits:
        memory: "256Mi"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      limits:
        memory: "64Mi"
apiVersion: v1
kind: Pod
metadata:
  name: rate-limiter
spec:
  containers:
  - name: business-app
    image: bmuschko/nodejs-business-app:1.0.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "256Mi"
        cpu: "1"
      limits:
        memory: "256Mi"
  - name: ambassador
    image: bmuschko/nodejs-ambassador:1.0.0
    ports:
    - containerPort: 8081
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "64Mi"
$ kubectl create namespace team-awesome
namespace/team-awesome created
apiVersion: v1
kind: ResourceQuota
metadata:
  name: awesome-quota
  namespace: team-awesome
spec:
  hard:
    pods: 2
    requests.cpu: "1"
    requests.memory: 1024Mi
    limits.cpu: "4"
    limits.memory: 4096Mi
Limit the number of Pods to 2.
Caps the total amount of resources requested by all Pods in a non-terminal state at 1 CPU and 1024Mi of RAM.
Caps the total amount of resource limits of all Pods in a non-terminal state at 4 CPUs and 4096Mi of RAM.
$ kubectl create -f awesome-quota.yaml
resourcequota/awesome-quota created

$ kubectl describe resourcequota awesome-quota -n team-awesome
Name:            awesome-quota
Namespace:       team-awesome
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     4
limits.memory    0     4Gi
pods             0     2
requests.cpu     0     1
requests.memory  0     1Gi
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: team-awesome
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
$ kubectl apply -f nginx-pod.yaml
Error from server (Forbidden): error when creating "nginx-pod.yaml": \
pods "nginx" is forbidden: failed quota: awesome-quota: must specify \
limits.cpu for: nginx; limits.memory for: nginx; requests.cpu for: \
nginx; requests.memory for: nginx
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: team-awesome
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
    resources:
      requests:
        cpu: "0.5"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1024Mi"
$ kubectl apply -f nginx-pod1.yaml
pod/nginx1 created
$ kubectl apply -f nginx-pod2.yaml
pod/nginx2 created
$ kubectl describe resourcequota awesome-quota -n team-awesome
Name:            awesome-quota
Namespace:       team-awesome
Resource         Used  Hard
--------         ----  ----
limits.cpu       2     4
limits.memory    2Gi   4Gi
pods             2     2
requests.cpu     1     1
requests.memory  1Gi   1Gi

$ kubectl apply -f nginx-pod3.yaml
Error from server (Forbidden): error when creating "nginx-pod3.yaml": \
pods "nginx3" is forbidden: exceeded quota: awesome-quota, requested: \
pods=1,requests.cpu=500m,requests.memory=512Mi, used: pods=2,requests.cpu=1,\
requests.memory=1Gi, limited: pods=2,requests.cpu=1,requests.memory=1Gi
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 200m
    default:
      cpu: 200m
    min:
      cpu: 100m
    max:
      cpu: "2"
The context to apply the constraints to. In this case, to a container running in a Pod.
The default CPU resource request value assigned to a container if not provided.
The default CPU resource limit value assigned to a container if not provided.
The minimum and maximum CPU resource request and limit value assignable to a container.
$ kubectl apply -f cpu-resource-constraint.yaml
limitrange/cpu-resource-constraint created

$ kubectl describe limitrange cpu-resource-constraint
Name:       cpu-resource-constraint
Namespace:  default
Type        Resource  Min   Max  Default Request  Default Limit  ...
----        --------  ---   ---  ---------------  -------------  ...
Container   cpu       100m  2    200m             200m           ...
apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-resource-requirements
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
$ kubectl apply -f nginx-without-resource-requirements.yaml
pod/nginx-without-resource-requirements created
$ kubectl describe pod nginx-without-resource-requirements
...
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu \
request for container nginx; cpu limit for container nginx
...
Containers:
nginx:
...
Limits:
cpu: 200m
Requests:
cpu: 200m
...
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-resource-requirements
spec:
  containers:
  - image: nginx:1.25.3
    name: nginx
    resources:
      requests:
        cpu: "50m"
      limits:
        cpu: "3"
$ kubectl apply -f nginx-with-resource-requirements.yaml
Error from server (Forbidden): error when creating "nginx-with-resource-\
requirements.yaml": pods "nginx-with-resource-requirements" is forbidden: \
[minimum cpu usage per Container is 100m, but request is 50m, maximum cpu \
usage per Container is 2, but limit is 3]
A container defined by a Pod can specify resource requests and limits. Work through scenarios where you define those requirements individually and together for single- and multi-container Pods. Upon creation of the Pod, you should be able to see the effects on scheduling the object on a node. Furthermore, practice how to identify the available resource capacity of a node.
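One quick way to identify a node's capacity is kubectl describe node: its Allocatable section lists the schedulable CPU and memory, and its "Allocated resources" section sums up the requests and limits of the Pods already placed on the node. The node name minikube assumes a minikube cluster:

$ kubectl describe node minikube | grep -A 6 "Allocatable:"
$ kubectl describe node minikube | grep -A 8 "Allocated resources:"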
A ResourceQuota defines the resource boundaries for objects living within a namespace. The most commonly used boundaries apply to computing resources. Practice defining them and understand their effect on the creation of Pods. It’s important to know the command for listing the hard requirements of a ResourceQuota and the resources currently in use. You will find that a ResourceQuota offers other options. Discover them in more detail for a broader exposure to the topic.
A LimitRange can specify resource constraints and defaults for specific primitives. Should you run into a situation where you receive an error message upon creation of an object, check if a LimitRange enforces those constraints. Unfortunately, the error message does not point out the object that enforces them, so you may have to proactively list LimitRange objects to identify the constraints.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app
spec:
  hard:
    pods: "2"
    requests.cpu: "2"
    requests.memory: 500Mi
| Option | Example | Description |
|---|---|---|
| --from-literal | --from-literal=locale=en_US | Literal values, which are key-value pairs as plain text |
| --from-env-file | --from-env-file=config.env | A file that contains key-value pairs and expects them to be environment variables |
| --from-file | --from-file=app-config.json | A file with arbitrary contents |
| --from-file | --from-file=config-dir | A directory with one or many files |
$ kubectl create configmap db-config --from-literal=DB_HOST=mysql-service \
  --from-literal=DB_USER=backend
configmap/db-config created
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_HOST: mysql-service
  DB_USER: backend
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    envFrom:
    - configMapRef:
        name: db-config
$ kubectl exec backend -- env
...
DB_HOST=mysql-service
DB_USER=backend
...
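The envFrom attribute injects all key-value pairs wholesale. If you only need individual keys, possibly under different variable names, you can reference them one by one with configMapKeyRef; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    env:
    - name: DATABASE_HOST      # the variable name can differ from the key
      valueFrom:
        configMapKeyRef:
          name: db-config
          key: DB_HOST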
{"db":{"host":"mysql-service","user":"backend"}}
$ kubectl create configmap db-config --from-file=db.json
configmap/db-config created
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  db.json: |-
    {"db": {"host": "mysql-service","user": "backend"}}
The multiline block syntax (|-) preserves the line breaks within the value but strips the trailing newline and any trailing blank lines. For more information, see the YAML syntax for multiline strings.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    volumeMounts:
    - name: db-config-volume
      mountPath: /etc/config
  volumes:
  - name: db-config-volume
    configMap:
      name: db-config
$ kubectl exec -it backend -- /bin/sh
# ls -1 /etc/config
db.json
# cat /etc/config/db.json
{
"db": {
"host": "mysql-service",
"user": "backend"
}
}
| CLI option | Description | Internal Type |
|---|---|---|
| generic | Creates a secret from a file, directory, or literal value | Opaque (default; a different type can be set with --type) |
| docker-registry | Creates a secret for use with a Docker registry, e.g., to pull images from a private registry when requested by a Pod | kubernetes.io/dockerconfigjson |
| tls | Creates a TLS secret | kubernetes.io/tls |
| Option | Example | Description |
|---|---|---|
| --from-literal | --from-literal=pwd=s3cre! | Literal values, which are key-value pairs as plain text |
| --from-env-file | --from-env-file=config.env | A file that contains key-value pairs and expects them to be environment variables |
| --from-file | --from-file=id_rsa | A file with arbitrary contents |
| --from-file | --from-file=secret-dir | A directory with one or many files |
$ kubectl create secret generic db-creds --from-literal=pwd=s3cre!
secret/db-creds created
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  pwd: czNjcmUh
The type Opaque has been assigned to represent generic sensitive data.
The plain-text value has been Base64-encoded automatically because the object was created imperatively.
$ echo -n 's3cre!' | base64
czNjcmUh
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
stringData:
  pwd: s3cre!
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    envFrom:
    - secretRef:
        name: secret-basic-auth
$ kubectl exec backend -- env
...
username=bmuschko
password=secret
...
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: secret-basic-auth
          key: username
    - name: PWD
      valueFrom:
        secretKeyRef:
          name: secret-basic-auth
          key: password
$ kubectl exec backend -- env
...
USER=bmuschko
PWD=secret
...

$ cp ~/.ssh/id_rsa ssh-privatekey
$ kubectl create secret generic secret-ssh-auth --from-file=ssh-privatekey \
  --type=kubernetes.io/ssh-auth
secret/secret-ssh-auth created
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: bmuschko/web-app:1.0.1
    name: backend
    volumeMounts:
    - name: ssh-volume
      mountPath: /var/app
      readOnly: true
  volumes:
  - name: ssh-volume
    secret:
      secretName: secret-ssh-auth
Files provided by the Secret mounted as volume cannot be modified.
Note that the attribute secretName that points to the Secret name is not the same as for the ConfigMap (which is name).
$ kubectl exec -it backend -- /bin/sh
# ls -1 /var/app
ssh-privatekey
# cat /var/app/ssh-privatekey
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,8734C9153079F2E8497C8075289EBBF1
...
-----END RSA PRIVATE KEY-----
The quickest way to create a ConfigMap is the imperative kubectl create configmap command. Understand how to provide the data with the help of the different command-line flags. The ConfigMap specifies plain-text key-value pairs in the data section of its YAML manifest.
Creating a Secret using the imperative command kubectl create secret does not require you to Base64-encode the provided values. kubectl performs the encoding operation automatically. The declarative approach requires the Secret YAML manifest to specify a Base64-encoded value with the data section. You can use the stringData convenience attribute in place of the data attribute if you prefer providing a plain-text value. The live object will use a Base64-encoded value. Functionally, there’s no difference at runtime between the use of data and stringData.
Secrets offer specialized types, e.g., kubernetes.io/basic-auth or kubernetes.io/service-account-token, to represent data for specific use cases. Read up on the different types in the Kubernetes documentation and understand their purpose.
The exam may confront you with existing ConfigMap and Secret objects. You need to understand how to use the kubectl get or the kubectl describe command to inspect the data of those objects. The live object of a Secret will always represent the value in a Base64-encoded format.
The primary use case for ConfigMaps and Secrets is the consumption of the data from a Pod. Pods can inject configuration data into a container as environment variables or mount the configuration data as Volumes. For the exam, you need to be familiar with both consumption methods.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-non-root
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - image: nginx:1.25.3
    name: secured-container
$ kubectl apply -f container-nginx-root-user.yaml
pod/nginx-non-root created

$ kubectl get pod nginx-non-root
NAME             READY   STATUS                       RESTARTS   AGE
nginx-non-root   0/1     CreateContainerConfigError   0          7s
$ kubectl describe pod nginx-non-root
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned \
default/non-root to \
minikube
Normal Pulling 18s kubelet, minikube Pulling image \
"nginx:1.25.3"
Normal Pulled 14s kubelet, minikube Successfully pulled \
image "nginx:1.25.3"
Warning Failed 0s (x3 over 14s) kubelet, minikube Error: container has \
runAsNonRoot and image \
will run as root
apiVersion: v1
kind: Pod
metadata:
  name: bitnami-nginx-non-root
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - image: bitnami/nginx:1.25.3
    name: secured-container
$ kubectl apply -f container-bitnami-nginx-root-user.yaml
pod/bitnami-nginx-non-root created

$ kubectl get pod bitnami-nginx-non-root
NAME                     READY   STATUS    RESTARTS   AGE
bitnami-nginx-non-root   1/1     Running   0          7s

$ kubectl exec -it bitnami-nginx-non-root -- id -u
1001
apiVersion: v1
kind: Pod
metadata:
  name: fs-secured
spec:
  securityContext:
    fsGroup: 3500
  containers:
  - image: nginx:1.25.3
    name: secured-container
    volumeMounts:
    - name: data-volume
      mountPath: /data/app
  volumes:
  - name: data-volume
    emptyDir: {}
$ kubectl apply -f pod-file-system-group.yaml
pod/fs-secured created
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
fs-secured   1/1     Running   0          24s

$ kubectl exec -it fs-secured -- /bin/sh
# cd /data/app
# touch logs.txt
# ls -l
-rw-r--r-- 1 root 3500 0 Jul 9 01:41 logs.txt
apiVersion: v1
kind: Pod
metadata:
  name: non-root-user-override
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - image: nginx:1.25.3
    name: root
    securityContext:
      runAsNonRoot: false
  - image: bitnami/nginx:1.25.3
    name: non-root
Assign the default value true to all containers of the Pod.
The value false will take precedence even though true has been assigned on the Pod level.
$ kubectl apply -f pod-non-root-user-override.yaml
pod/non-root-user-override created

$ kubectl exec -it -c root non-root-user-override -- id -u
0
$ kubectl exec -it -c non-root non-root-user-override -- id -u
1001
The Kubernetes user documentation and API documentation are good starting points for exploring security context options. You will find that there's an overlap between the options available via the PodSecurityContext and SecurityContext APIs. While working through the different use cases solved by a security context option, verify the outcome by running an operation that should either be permitted or disallowed.
You can define a security context on the Pod level with spec.securityContext, and on the container level with spec.containers[].securityContext. If defined on the Pod level, settings can be overridden by specifying them with a different value on the container level. The exam may confront you with existing Pods that set a security context on both levels. Understand which value will take effect.
| Type | Description |
|---|---|
| ClusterIP | Exposes the Service on a cluster-internal IP. Reachable only from within the cluster. Kubernetes uses a round-robin algorithm to distribute traffic evenly among the targeted Pods. |
| NodePort | Exposes the Service on each node's IP address at a static port. Accessible from outside of the cluster. The Service type does not provide any load balancing across multiple nodes. |
| LoadBalancer | Exposes the Service externally using a cloud provider's load balancer. |
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \
  --port=8080
pod/echoserver created

$ kubectl create service clusterip echoserver --tcp=80:8080
service/echoserver created

$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \
  --port=8080 --expose
service/echoserver created
pod/echoserver created

$ kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.10 \
  --replicas=5
deployment.apps/echoserver created
$ kubectl expose deployment echoserver --port=80 --target-port=8080
service/echoserver exposed

$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echoserver   ClusterIP   10.109.241.68   <none>        80/TCP    6s

$ kubectl describe service echoserver
Name:               echoserver
Namespace:          default
Labels:             app=echoserver
Annotations:        <none>
Selector:           app=echoserver
Type:               ClusterIP
IP Family Policy:   SingleStack
IP Families:        IPv4
IP:                 10.109.241.68
IPs:                10.109.241.68
Port:               <unset>  80/TCP
TargetPort:         8080/TCP
Endpoints:          172.17.0.4:8080,172.17.0.5:8080,172.17.0.7:8080 + 2 more...
Session Affinity:   None
Events:             <none>

$ kubectl get endpoints echoserver
NAME         ENDPOINTS                                                     AGE
echoserver   172.17.0.4:8080,172.17.0.5:8080,172.17.0.7:8080 + 2 more...   8m5s
$ kubectl describe endpoints echoserver
Name: echoserver
Namespace: default
Labels: app=echoserver
Annotations: endpoints.kubernetes.io/last-change-trigger-time: \
2021-11-15T19:09:04Z
Subsets:
Addresses: 172.17.0.4,172.17.0.5,172.17.0.7,172.17.0.8,172.17.0.9
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8080 TCP
Events: <none>
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \
  --port=8080 -l app=echoserver
pod/echoserver created
$ kubectl create service clusterip echoserver --tcp=5005:8080
service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: ClusterIP
  clusterIP: 10.96.254.0
  selector:
    app: echoserver
  ports:
  - port: 5005
    targetPort: 8080
    protocol: TCP
$ kubectl get service echoserver
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
echoserver   ClusterIP   10.96.254.0   <none>        5005/TCP   8s

$ wget 10.96.254.0:5005 --timeout=5 --tries=1
--2021-11-15 15:45:36--  http://10.96.254.0:5005/
Connecting to 10.96.254.0:5005... failed: Operation timed out.
Giving up.

$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -- wget 10.96.254.0:5005
Connecting to 10.96.254.0:5005 (10.96.254.0:5005)
saving to 'index.html'
index.html           100% |********************************|  408  0:00:00 ETA
'index.html' saved
pod "tmp" deleted

$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -- wget echoserver:5005
Connecting to echoserver:5005 (10.96.254.0:5005)
saving to 'index.html'
index.html           100% |********************************|  408  0:00:00 ETA
'index.html' saved
pod "tmp" deleted

$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -n other -- wget echoserver.default:5005
Connecting to echoserver.default:5005 (10.96.254.0:5005)
saving to 'index.html'
index.html           100% |********************************|  408  0:00:00 ETA
'index.html' saved
pod "tmp" deleted
$ kubectl exec -it echoserver -- env
ECHOSERVER_SERVICE_HOST=10.96.254.0
ECHOSERVER_SERVICE_PORT=5005
...
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \
  --port=8080 -l app=echoserver
pod/echoserver created
$ kubectl create service nodeport echoserver --tcp=5005:8080
service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: NodePort
  clusterIP: 10.96.254.0
  selector:
    app: echoserver
  ports:
  - port: 5005
    nodePort: 30158
    targetPort: 8080
    protocol: TCP
The Service type set to NodePort.
The statically-assigned node port that makes the Service accessible from outside of the cluster.
$ kubectl get service echoserver
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
echoserver   NodePort   10.101.184.152   <none>        5005:30158/TCP   5s

$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -- wget 10.101.184.152:5005
Connecting to 10.101.184.152:5005 (10.101.184.152:5005)
saving to 'index.html'
index.html           100% |********************************|  414  0:00:00 ETA
'index.html' saved
pod "tmp" deleted
$ kubectl get nodes -o \
jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
192.168.64.15
$ wget 192.168.64.15:30158
--2021-11-16 14:10:16-- http://192.168.64.15:30158/
Connecting to 192.168.64.15:30158... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘index.html’
...
$ kubectl run echoserver --image=k8s.gcr.io/echoserver:1.10 --restart=Never \
  --port=8080 -l app=echoserver
pod/echoserver created
$ kubectl create service loadbalancer echoserver --tcp=5005:8080
service/echoserver created
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: LoadBalancer
  clusterIP: 10.96.254.0
  loadBalancerIP: 10.109.76.157
  selector:
    app: echoserver
  ports:
  - port: 5005
    targetPort: 8080
    nodePort: 30158
    protocol: TCP
$ kubectl get service echoserver
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
echoserver   LoadBalancer   10.109.76.157   10.109.76.157   5005:30642/TCP   5s

$ wget 10.109.76.157:5005
--2021-11-17 11:30:44--  http://10.109.76.157:5005/
Connecting to 10.109.76.157:5005... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘index.html’
...
Pod-to-Pod communication via IP addresses doesn't guarantee a stable network interface over time: a restarted Pod is assigned a new virtual IP address. The purpose of a Service is to provide that stable network interface so that you can operate a complex microservice architecture running in a Kubernetes cluster. In most cases, Pods call a Service by hostname. The hostname is provided by the DNS server CoreDNS, which runs as a Pod in the kube-system namespace.
The exam expects you to understand the differences between the Service types ClusterIP, NodePort, and LoadBalancer. Depending on the assigned type, a Service becomes accessible from inside the cluster or from outside the cluster.
It's easy to get the configuration of a Service wrong. A misconfigured Service won't allow network traffic to reach the set of Pods it was intended for. Common misconfigurations include incorrect label selection and port assignments. The kubectl get endpoints command will give you an idea of which Pods a Service can route traffic to.
$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-qqhrp        0/1     Completed   0          60s
ingress-nginx-admission-patch-56z26         0/1     Completed   1          60s
ingress-nginx-controller-7c6974c4d8-2gg8c   1/1     Running     0          60s

$ kubectl get ingressclasses
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       14m
| Type | Example | Description |
|---|---|---|
| An optional host | next.example.com | If provided, the rules apply to that host. If no host is defined, all inbound HTTP(S) traffic is handled (e.g., if made through the IP address of the Ingress). |
| A list of paths | /app | Incoming traffic must match the host and path to correctly forward the traffic to a Service. |
| The backend | app-service:8080 | A combination of a Service name and port. |
$ kubectl create ingress next-app \
  --rule="next.example.com/app=app-service:8080" \
  --rule="next.example.com/metrics=metrics-service:9090"
ingress.networking.k8s.io/next-app created
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: next-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: next.example.com
    http:
      paths:
      - backend:
          service:
            name: app-service
            port:
              number: 8080
        path: /app
        pathType: Exact
  - host: next.example.com
    http:
      paths:
      - backend:
          service:
            name: metrics-service
            port:
              number: 9090
        path: /metrics
        pathType: Exact
| Path Type | Rule | Incoming Request |
|---|---|---|
| Exact | /app | Matches /app but not /app/ or /app/test. |
| Prefix | /app | Matches /app and /app/ as well as any subpath, e.g., /app/test. |
$ kubectl get ingress
NAME       CLASS   HOSTS              ADDRESS        PORTS   AGE
next-app   nginx   next.example.com   192.168.66.4   80      5m38s
$ kubectl describe ingress next-app
Name: next-app
Labels: <none>
Namespace: default
Address: 192.168.66.4
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
next.example.com
/app app-service:8080 (<error: endpoints \
"app-service" not found>)
/metrics metrics-service:9090 (<error: endpoints \
"metrics-service" not found>)
Annotations: <none>
Events:
Type Reason Age From ...
---- ------ ---- ---- ...
Normal Sync 6m45s (x2 over 7m3s) nginx-ingress-controller ...
$ kubectl run app --image=k8s.gcr.io/echoserver:1.10 --port=8080 \
  -l app=app-service
pod/app created
$ kubectl run metrics --image=k8s.gcr.io/echoserver:1.10 --port=8080 \
  -l app=metrics-service
pod/metrics created
$ kubectl create service clusterip app-service --tcp=8080:8080
service/app-service created
$ kubectl create service clusterip metrics-service --tcp=9090:8080
service/metrics-service created
$ kubectl describe ingress next-app
Name: next-app
Labels: <none>
Namespace: default
Address: 192.168.66.4
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
next.example.com
/app app-service:8080 (10.244.0.6:8080)
/metrics metrics-service:9090 (10.244.0.7:8080)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 13m (x2 over 13m) nginx-ingress-controller Scheduled for sync
$ kubectl get ingress next-app \
--output=jsonpath="{.status.loadBalancer.ingress[0]['ip']}"
192.168.66.4
$ sudo vim /etc/hosts
...
192.168.66.4 next.example.com
$ wget next.example.com/app --timeout=5 --tries=1
--2021-11-30 19:34:57--  http://next.example.com/app
Resolving next.example.com (next.example.com)... 192.168.66.4
Connecting to next.example.com (next.example.com)|192.168.66.4|:80... \
connected.
HTTP request sent, awaiting response... 200 OK

$ wget next.example.com/app/ --timeout=5 --tries=1
--2021-11-30 15:36:26--  http://next.example.com/app/
Resolving next.example.com (next.example.com)... 192.168.66.4
Connecting to next.example.com (next.example.com)|192.168.66.4|:80... \
connected.
HTTP request sent, awaiting response... 404 Not Found
2021-11-30 15:36:26 ERROR 404: Not Found.
An Ingress is not to be confused with a Service. The Ingress is meant for routing cluster-external HTTP(S) traffic to one or many Services based on an optional hostname and mandatory path. A Service routes traffic to a set of Pods.
An Ingress controller needs to be installed before an Ingress can function properly. Without installing an Ingress controller, Ingress rules will have no effect. You can choose from a range of Ingress controller implementations, all documented on the Kubernetes documentation page. Assume that an Ingress controller will be preinstalled for you in the exam environment.
You can define one or many rules in an Ingress. Every rule consists of an optional host, the URL context path, and the Service DNS name and port. Try defining more than a single rule and how to access the endpoint. You will not have to understand the process for configuring TLS termination for an Ingress—this aspect is covered by the CKS exam.
$ kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
cilium-k5td6                      1/1     Running   0          110s
cilium-operator-f5dcdcc8d-njfbk   1/1     Running   0          110s

$ kubectl run grocery-store --image=nginx:1.25.3-alpine \
  -l app=grocery-store,role=backend --port 80
pod/grocery-store created
$ kubectl run payment-processor --image=nginx:1.25.3-alpine \
  -l app=payment-processor,role=api --port 80
pod/payment-processor created
$ kubectl run coffee-shop --image=nginx:1.25.3-alpine \
  -l app=coffee-shop,role=backend --port 80
pod/coffee-shop created
$ kubectl get pod payment-processor --template '{{.status.podIP}}'
10.244.0.136
$ kubectl exec grocery-store -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
remote file exists
$ kubectl exec coffee-shop -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
remote file exists
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: payment-processor
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: coffee-shop
Selects the Pod the policy should apply to by label selection.
Allows incoming traffic from the Pod with matching labels within the same namespace.
| Attribute | Description |
|---|---|
| podSelector | Selects the Pods in the namespace to apply the network policy to. |
| policyTypes | Defines the type of traffic (i.e., ingress and/or egress) the network policy applies to. |
| ingress | Lists the rules for incoming traffic. Each rule can define from and ports sections. |
| egress | Lists the rules for outgoing traffic. Each rule can define to and ports sections. |
| Attribute | Description |
|---|---|
| podSelector | Selects Pods by label(s) in the same namespace as the network policy that should be allowed as ingress sources or egress destinations. |
| namespaceSelector | Selects namespaces by label(s) for which all Pods should be allowed as ingress sources or egress destinations. |
| namespaceSelector and podSelector | Selects Pods by label(s) within namespaces by label(s). |
$ kubectl apply -f networkpolicy-api-allow.yaml
networkpolicy.networking.k8s.io/api-allow created

$ kubectl exec grocery-store -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
wget: download timed out
command terminated with exit code 1
$ kubectl exec coffee-shop -it -- wget --spider --timeout=1 10.244.0.136
Connecting to 10.244.0.136 (10.244.0.136:80)
remote file exists

$ kubectl get networkpolicy api-allow
NAME        POD-SELECTOR                     AGE
api-allow   app=payment-processor,role=api   83m
$ kubectl describe networkpolicy api-allow
Name: api-allow
Namespace: default
Created on: 2024-01-10 09:06:59 -0700 MST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=payment-processor,role=api
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
PodSelector: app=coffee-shop
Not affecting egress traffic
Policy Types: Ingress
$ kubectl create namespace internal-tools
namespace/internal-tools created
$ kubectl run metrics-api --image=nginx:1.25.3-alpine --port=80 \
  -l app=api -n internal-tools
pod/metrics-api created
$ kubectl run metrics-consumer --image=nginx:1.25.3-alpine --port=80 \
  -l app=consumer -n internal-tools
pod/metrics-consumer created
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: internal-tools
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
The curly braces for spec.podSelector mean "apply to all Pods in the namespace."
Defines the types of traffic the rule should apply to, in this case ingress and egress traffic.
$ kubectl apply -f networkpolicy-deny-all.yaml
networkpolicy.networking.k8s.io/default-deny-all created
$ kubectl get pod metrics-api --template '{{.status.podIP}}' -n internal-tools
10.244.0.182
$ kubectl exec metrics-consumer -it -n internal-tools \
-- wget --spider --timeout=1 10.244.0.182
Connecting to 10.244.0.182 (10.244.0.182:80)
wget: download timed out
command terminated with exit code 1
$ kubectl get pod metrics-consumer --template '{{.status.podIP}}' \
-n internal-tools
10.244.0.70
$ kubectl exec metrics-api -it -n internal-tools \
-- wget --spider --timeout=1 10.244.0.70
Connecting to 10.244.0.70 (10.244.0.70:80)
wget: download timed out
command terminated with exit code 1
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: port-allow
  namespace: internal-tools
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: consumer
    ports:
    - protocol: TCP
      port: 80
By default, Pod-to-Pod communication is unrestricted. Instantiate a default deny rule to restrict Pod-to-Pod network traffic with the principle of least privilege. The attribute spec.podSelector of a network policy selects the target Pod the rules apply to based on label selection. The ingress and egress rules define Pods, namespaces, IP addresses, and ports for allowing incoming and outgoing traffic.
Network policies can be aggregated. A default deny rule can disallow ingress and/or egress traffic. An additional network policy can open up those rules with a more fine-grained definition.
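For example, an additional policy could reopen just DNS egress for the consumer Pod while the default deny rule above stays in place. The following sketch relies on the kubernetes.io/metadata.name label that Kubernetes sets on every namespace; the policy name is illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dns-egress-allow      # illustrative name
  namespace: internal-tools
spec:
  podSelector:
    matchLabels:
      app: consumer
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53                # DNS lookups via CoreDNS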
To explore common scenarios, look at the GitHub repository named “Kubernetes Network Policy Recipes”. The repository comes with a visual representation for each scenario and walks you through the steps to set up the network policy and the involved Pods. This is a great practice resource.
$ docker build -t nodejs-hello-world:1.0.0 .
$ docker images
REPOSITORY           TAG     IMAGE ID       CREATED          SIZE
nodejs-hello-world   1.0.0   0cc723ca8b06   15 seconds ago   180MB

$ docker run -d -p 80:3000 nodejs-hello-world:1.0.0
9e0f1abcef415e902422117de7644544cdd08ae158a1cd0b2a2d182fcf056cab

$ docker container ls
CONTAINER ID   IMAGE                      COMMAND                  ...
9e0f1abcef41   nodejs-hello-world:1.0.0   "docker-entrypoint.s…"   ...

$ curl localhost
Hello World
$ wget localhost
--2023-05-09 08:38:30--  http://localhost/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
...
2023-05-09 08:38:30 (2.29 MB/s) - ‘index.html’ saved [12/12]

$ docker logs 9e0f1abcef41
Magic happens on port 3000
FROM node:20.4-alpine
WORKDIR /node
...
$ docker build -t nodejs-hello-world:1.1.0 .
$ docker images
REPOSITORY           TAG     IMAGE ID       CREATED         SIZE
nodejs-hello-world   1.1.0   d332031cb5b6   4 seconds ago   181MB
$ docker pull alpine:3.18.2
$ docker images alpine
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
alpine       3.18.2   c1aabb73d233   4 weeks ago   7.33MB
$ docker save -o alpine-3.18.2.tar alpine:3.18.2
$ ls
alpine-3.18.2.tar
$ docker image rm alpine:3.18.2
Untagged: alpine:3.18.2
Untagged: alpine@sha256:82d1e9d7ed48a7523bdebc18cf6290bdb97b82302a8a9 \
c27d4fe885949ea94d1
Deleted: sha256:c1aabb73d2339c5ebaa3681de2e9d9c18d57485045a4e311d9f80 \
04bec208d67
Deleted: sha256:78a822fe2a2d2c84f3de4a403188c45f623017d6a4521d23047c9 \
fbb0801794c
$ docker images alpine
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
$ docker load --input alpine-3.18.2.tar
78a822fe2a2d: Loading layer [==================================>] \
7.622MB/7.622MB
Loaded image: alpine:3.18.2
$ docker images alpine
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
alpine       3.18.2   c1aabb73d233   4 weeks ago   7.33MB
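On a cluster node without registry access, the tarball can also be preloaded directly into the node's container runtime. As a sketch, assuming a minikube cluster like the one used throughout this book:

$ minikube image load alpine-3.18.2.tar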
$ kubectl create namespace ckad
$ kubectl run nginx --image=nginx:1.17.10 --port=80 --namespace=ckad
apiVersion: v1
kind: Namespace
metadata:
  name: ckad
$ kubectl apply -f ckad-namespace.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.10
    ports:
    - containerPort: 80
$ kubectl apply -f nginx-pod.yaml --namespace=ckad
$ kubectl get pod nginx --namespace=ckad -o wide
$ kubectl describe pod nginx --namespace=ckad | grep IP:
$ kubectl run busybox --image=busybox:1.36.1 --restart=Never --rm -it \
  -n ckad -- wget -O- 10.1.0.66:80
$ kubectl logs nginx --namespace=ckad
$ kubectl edit pod nginx --namespace=ckad
$ kubectl delete pod nginx --namespace=ckad
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.10
    ports:
    - containerPort: 80
    env:
    - name: DB_URL
      value: postgresql://mydb:5432
    - name: DB_USERNAME
      value: admin
$ kubectl apply -f nginx-pod.yaml --namespace=ckad
$ kubectl exec -it nginx --namespace=ckad -- /bin/sh
# ls -l
$ kubectl run loop --image=busybox:1.36.1 -o yaml --dry-run=client \
  --restart=Never -- /bin/sh -c 'for i in 1 2 3 4 5 6 7 8 9 10; \
  do echo "Welcome $i times"; done' > pod.yaml
$ kubectl apply -f pod.yaml --namespace=ckad
$ kubectl get pod loop --namespace=ckad
$ kubectl delete pod loop --namespace=ckad
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: loop
  name: loop
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do date; sleep 10; done
    image: busybox:1.36.1
    name: loop
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl apply -f pod.yaml --namespace=ckad
$ kubectl describe pod loop --namespace=ckad | grep -C 10 Events:
$ kubectl delete namespace ckad
apiVersion: batch/v1
kind: Job
metadata:
  name: random-hash
spec:
  parallelism: 2
  completions: 5
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: random-hash
        image: alpine:3.17.3
        command: ["/bin/sh", "-c", "echo $RANDOM | base64 | head -c 20"]
      restartPolicy: Never

$ kubectl apply -f random-hash-job.yaml
job.batch/random-hash created
$ kubectl get pods | grep "random-hash-"
NAME                READY   STATUS      RESTARTS   AGE
random-hash-4qk96   0/1     Completed   0          46s
random-hash-ld2sl   0/1     Completed   0          39s
random-hash-xcmts   0/1     Completed   0          35s
random-hash-xxlhk   0/1     Completed   0          46s
random-hash-z9xc4   0/1     Completed   0          39s
$ kubectl logs random-hash-4qk96
MTgxMTIK
$ kubectl delete job random-hash
job.batch "random-hash" deleted
$ kubectl get pods | grep -E "random-hash-" -c
0
$ kubectl create cronjob google-ping --schedule="*/2 * * * *" \
  --image=nginx:1.25.1 -- /bin/sh -c 'curl google.com'
cronjob.batch/google-ping created
$ kubectl get cronjob -w
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
google-ping   */2 * * * *   False     0        115s            2m10s
google-ping   */2 * * * *   False     1        6s              2m21s
google-ping   */2 * * * *   False     0        16s             2m31s
google-ping   */2 * * * *   False     1        6s              4m21s
google-ping   */2 * * * *   False     0        16s             4m31s
...
spec:
  successfulJobsHistoryLimit: 7
...

...
spec:
  concurrencyPolicy: Forbid
...
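Pieced together, the modified CronJob could look like the following sketch. The schedule, image, and command are carried over from the kubectl create cronjob invocation above; only successfulJobsHistoryLimit and concurrencyPolicy are additions:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: google-ping
spec:
  schedule: "*/2 * * * *"
  successfulJobsHistoryLimit: 7
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: google-ping
            image: nginx:1.25.1
            command: ["/bin/sh", "-c", "curl google.com"]
          restartPolicy: OnFailure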
$ kubectl run alpine --image=alpine:3.12.0 --dry-run=client \
  --restart=Never -o yaml -- /bin/sh -c "while true; do sleep 60; \
  done;" > multi-container-alpine.yaml
$ vim multi-container-alpine.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: alpine
  name: alpine
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 60; done;
    image: alpine:3.12.0
    name: container1
    resources: {}
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 60; done;
    image: alpine:3.12.0
    name: container2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: alpine
  name: alpine
spec:
  volumes:
  - name: shared-vol
    emptyDir: {}
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 60; done;
    image: alpine:3.12.0
    name: container1
    volumeMounts:
    - name: shared-vol
      mountPath: /etc/a
    resources: {}
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 60; done;
    image: alpine:3.12.0
    name: container2
    volumeMounts:
    - name: shared-vol
      mountPath: /etc/b
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
$ kubectl apply -f multi-container-alpine.yaml
pod/alpine created
$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
alpine   2/2     Running   0          18s
$ kubectl exec alpine -c container1 -it -- /bin/sh
/ # cd /etc/a
/etc/a # ls -l
total 0
/etc/a # mkdir data
/etc/a # cd data/
/etc/a/data # echo "Hello World" > hello.txt
/etc/a/data # cat hello.txt
Hello World
/etc/a/data # exit
$ kubectl exec alpine -c container2 -it -- /bin/sh
/ # cat /etc/b/data/hello.txt
Hello World
/ # exit
kind: PersistentVolume
apiVersion: v1
metadata:
  name: logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  hostPath:
    path: /var/logs
$ kubectl create -f logs-pv.yaml
persistentvolume/logs-pv created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM \
STORAGECLASS REASON AGE
logs-pv 5Gi RWO,ROX Retain Available \
18s
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: logs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""

$ kubectl create -f logs-pvc.yaml
persistentvolumeclaim/logs-pvc created
$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY \
  ACCESS MODES   STORAGECLASS   AGE
logs-pvc   Bound    pvc-47ac2593-2cd2-4213-9e31-450bc98bb43f   2Gi      \
  RWO            standard       11s

$ kubectl run nginx --image=nginx:1.25.1 --dry-run=client \
  -o yaml > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes:
  - name: logs-volume
    persistentVolumeClaim:
      claimName: logs-pvc
  containers:
  - image: nginx:1.25.1
    name: nginx
    volumeMounts:
    - mountPath: "/var/log/nginx"
      name: logs-volume
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

$ kubectl apply -f nginx-pod.yaml
pod/nginx created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          8s
$ kubectl exec nginx -it -- /bin/sh
# cd /var/log/nginx
# touch my-nginx.log
# ls
access.log  error.log  my-nginx.log
# exit
$ kubectl delete pod nginx
$ kubectl apply -f nginx-pod.yaml
pod/nginx created
$ kubectl exec nginx -it -- /bin/sh
# cd /var/log/nginx
# ls
access.log  error.log  my-nginx.log
# exit
$ kubectl run complex-pod --image=nginx:1.25.1 --port=80 \
  --restart=Never -o yaml --dry-run=client > complex-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: complex-pod
spec:
  initContainers:
  - image: busybox:1.36.1
    name: setup
    command: ['sh', '-c', 'wget -O- google.com']
  containers:
  - image: nginx:1.25.1
    name: app
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

$ kubectl apply -f complex-pod.yaml
pod/complex-pod created
$ kubectl get pod complex-pod
NAME          READY   STATUS    RESTARTS   AGE
complex-pod   1/1     Running   0          27s
$ kubectl logs complex-pod -c setup
Connecting to google.com (172.217.1.206:80)
Connecting to www.google.com (172.217.2.4:80)
writing to stdout
...
$ kubectl exec complex-pod -it -c app -- /bin/sh
# ls
bin   dev  docker-entrypoint.sh  home   lib64  mnt  proc  run \
srv   tmp  var  boot  docker-entrypoint.d  etc  lib  media  opt \
root  sbin  sys  usr
# exit
$ kubectl delete pod complex-pod --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the \
running resource has been terminated. The resource may continue to run \
on the cluster indefinitely.
pod "complex-pod" force deleted
$ kubectl run data-exchange --image=busybox:1.36.1 --restart=Never \
  -o yaml --dry-run=client > data-exchange.yaml
apiVersion:v1kind:Podmetadata:name:data-exchangespec:containers:-image:busybox:1.36.1name:main-appcommand:['sh','-c','counter=1;whiletrue;dotouch\"/var/app/data/$counter-data.txt";counter=$((counter+1));\sleep30;done']resources:{}dnsPolicy:ClusterFirstrestartPolicy:Neverstatus:{}
apiVersion:v1kind:Podmetadata:name:data-exchangespec:containers:-image:busybox:1.36.1name:main-appcommand:['sh','-c','counter=1;whiletrue;dotouch\"/var/app/data/$counter-data.txt";counter=$((counter+1));\sleep30;done']resources:{}-image:busybox:1.36.1name:sidecarcommand:['sh','-c','whiletrue;dols-dq/var/app/data/*-data.txt\|wc-l;sleep30;done']dnsPolicy:ClusterFirstrestartPolicy:Neverstatus:{}
apiVersion:v1kind:Podmetadata:name:data-exchangespec:containers:-image:busybox:1.36.1name:main-appcommand:['sh','-c','counter=1;whiletrue;dotouch\"/var/app/data/$counter-data.txt";counter=$((counter+1));\sleep30;done']volumeMounts:-name:data-dirmountPath:"/var/app/data"resources:{}-image:busybox:1.36.1name:sidecarcommand:['sh','-c','whiletrue;dols-d/var/app/data/*-data.txt\|wc-l;sleep30;done']volumeMounts:-name:data-dirmountPath:"/var/app/data"volumes:-name:data-diremptyDir:{}dnsPolicy:ClusterFirstrestartPolicy:Neverstatus:{}
$ kubectl apply -f data-exchange.yaml pod/data-exchange created $ kubectl get pod data-exchange NAME READY STATUS RESTARTS AGE data-exchange 2/2 Running 0 31s $ kubectl logs data-exchange -c sidecar -f 1 2 ...
$ kubectl delete pod data-exchange pod "data-exchange" deleted
$ kubectl run pod-1 --image=nginx:1.25.1 \
  --labels=tier=frontend,team=artemidis
pod/pod-1 created
$ kubectl run pod-2 --image=nginx:1.25.1 \
  --labels=tier=backend,team=artemidis
pod/pod-2 created
$ kubectl run pod-3 --image=nginx:1.25.1 \
  --labels=tier=backend,team=artemidis
pod/pod-3 created
$ kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
pod-1   1/1     Running   0          30s   team=artemidis,tier=frontend
pod-2   1/1     Running   0          24s   team=artemidis,tier=backend
pod-3   1/1     Running   0          16s   team=artemidis,tier=backend
$ kubectl annotate pod pod-1 pod-3 deployer='Benjamin Muschko'
pod/pod-1 annotated
pod/pod-3 annotated
$ kubectl describe pod pod-1 pod-3 | grep Annotations:
Annotations:  deployer: Benjamin Muschko
Annotations:  deployer: Benjamin Muschko
$ kubectl get pods -l tier=backend,'team in (artemidis,aircontrol)' \
  --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
pod-2   1/1     Running   0          6m38s   team=artemidis,tier=backend
pod-3   1/1     Running   0          6m30s   team=artemidis,tier=backend
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: F5-nginx
    app.kubernetes.io/managed-by: helm
spec:
  containers:
  - image: nginx:1.25.1
    name: nginx

$ kubectl apply -f pod-well-known.yaml
pod/nginx created
$ kubectl describe pod nginx
...
Labels: app.kubernetes.io/managed-by=helm
app.kubernetes.io/name=F5-nginx
...
$ kubectl get pod nginx --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 18s app.kubernetes.io/\
managed-by=helm,app.kubernetes.io/name=F5-nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      labels:
        app: v1
    spec:
      containers:
      - image: nginx:1.23.0
        name: nginx

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created
$ kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           10s
$ kubectl set image deployment/nginx nginx=nginx:1.23.4
deployment.apps/nginx image updated
$ kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
$ kubectl rollout history deployment nginx --revision=2
deployment.apps/nginx with revision #2
Pod Template:
Labels: app=v1
pod-template-hash=5bd95c598
Containers:
nginx:
Image: nginx:1.23.4
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
$ kubectl annotate deployment nginx kubernetes.io/change-cause=\
"Pick up patch version"
deployment.apps/nginx annotated
$ kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         Pick up patch version
$ kubectl scale deployment nginx --replicas=5
deployment.apps/nginx scaled
$ kubectl get pod -l app=v1
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5bd95c598-25z4j   1/1     Running   0          3m39s
nginx-5bd95c598-46mlt   1/1     Running   0          3m38s
nginx-5bd95c598-bszvp   1/1     Running   0          48s
nginx-5bd95c598-dwr8r   1/1     Running   0          48s
nginx-5bd95c598-kjrvf   1/1     Running   0          3m37s
$ kubectl rollout undo deployment/nginx --to-revision=1
deployment.apps/nginx rolled back
$ kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
2 Pick up patch version
3 <none>
$ kubectl rollout history deployment nginx --revision=3
deployment.apps/nginx with revision #3
Pod Template:
Labels: app=v1
pod-template-hash=f48dc88cd
Containers:
nginx:
Image: nginx:1.23.0
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.23.4
        name: nginx
        resources:
          requests:
            cpu: "0.5"
            memory: "500Mi"
          limits:
            memory: "500Mi"

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created
$ kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           49s
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5bbd9746c-9b4np   1/1     Running   0          24s
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60

$ kubectl apply -f nginx-hpa.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created
$ kubectl get hpa nginx-hpa
NAME        REFERENCE          TARGETS          MINPODS   MAXPODS \
  REPLICAS   AGE
nginx-hpa   Deployment/nginx   0%/60%, 0%/75%   3         8       \
  3          2m19s
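Both targets sit at 0% because nothing is exercising the application yet. To watch the HPA scale up, you can generate artificial load, for example with a throwaway Pod that requests the application in a loop. The sketch below assumes a Service named nginx exposes the Deployment's Pods:

$ kubectl run load-generator --image=busybox:1.36.1 --restart=Never -it \
  --rm -- /bin/sh -c "while true; do wget -q -O- http://nginx; done"

Rerunning kubectl get hpa after a minute or two should show rising utilization and an increased replica count.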
$ kubectl apply -f deployment-grafana.yaml
deployment.apps/grafana created
$ kubectl get deployments,pods
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana   6/6     6            6           39s
NAME                           READY   STATUS    RESTARTS   AGE
pod/grafana-5f6b77b687-4h7bq   1/1     Running   0          39s
pod/grafana-5f6b77b687-88bnb   1/1     Running   0          39s
pod/grafana-5f6b77b687-97d6g   1/1     Running   0          39s
pod/grafana-5f6b77b687-h8mhq   1/1     Running   0          39s
pod/grafana-5f6b77b687-lfgcf   1/1     Running   0          39s
pod/grafana-5f6b77b687-v9nkq   1/1     Running   0          39s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 6
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
    type: RollingUpdate
  minReadySeconds: 20
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - image: grafana/grafana:10.1.2
        name: grafana
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /
            port: 3000
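With maxSurge: 2 and maxUnavailable: 2, a rolling update may temporarily run up to eight Pods while never dropping below four available ones; minReadySeconds: 20 additionally holds each new Pod for 20 seconds after its readiness probe succeeds before the rollout continues. You can observe this pacing by triggering an update and watching its status; the newer image tag used here is only an assumption:

$ kubectl set image deployment grafana grafana=grafana/grafana:10.1.3
$ kubectl rollout status deployment grafana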
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      version: blue
  template:
    metadata:
      labels:
        version: blue
    spec:
      containers:
      - image: nginx:1.23.0
        name: nginx
        ports:
        - containerPort: 80

$ kubectl apply -f blue-deployment.yaml
deployment.apps/nginx-blue created
$ kubectl get pods -l version=blue
NAME                         READY   STATUS    RESTARTS   AGE
nginx-blue-99f499479-h9wq4   1/1     Running   0          9s
nginx-blue-99f499479-trsjf   1/1     Running   0          9s
nginx-blue-99f499479-wndkg   1/1     Running   0          9s
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    version: blue
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

$ kubectl apply -f service.yaml
service/nginx created
$ kubectl run tmp --image=alpine/curl:3.14 --restart=Never -it \
  --rm -- curl -sI nginx.default.svc.cluster.local | grep Server
Server: nginx/1.23.0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green
spec:
  replicas: 3
  selector:
    matchLabels:
      version: green
  template:
    metadata:
      labels:
        version: green
    spec:
      containers:
      - image: nginx:1.23.4
        name: nginx
        ports:
        - containerPort: 80

$ kubectl apply -f green-deployment.yaml
deployment.apps/nginx-green created
$ kubectl get pods -l version=green
NAME                           READY   STATUS    RESTARTS   AGE
nginx-green-658cfdc9c6-8pvpp   1/1     Running   0          11s
nginx-green-658cfdc9c6-fdgm6   1/1     Running   0          11s
nginx-green-658cfdc9c6-zg6gl   1/1     Running   0          11s
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    version: green
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

$ kubectl apply -f service.yaml
service/nginx configured
$ kubectl delete deployment nginx-blue
deployment.apps "nginx-blue" deleted
$ kubectl run tmp --image=alpine/curl:3.14 --restart=Never -it --rm \
  -- curl -sI nginx.default.svc.cluster.local | grep Server
Server: nginx/1.23.4
$ helm repo add prometheus-community https://prometheus-community.\
github.io/helm-charts
"prometheus-community" has been added to your repositories
$ helm repo update prometheus-community
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" \
chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search hub prometheus-community
URL                                                  CHART VERSION   ...
https://artifacthub.io/packages/helm/prometheus...   45.28.1         ...
$ helm install prometheus prometheus-community/kube-prometheus-stack
NAME: prometheus
LAST DEPLOYED: Thu May 18 11:32:31 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
...
$ helm list
NAME         NAMESPACE   REVISION   UPDATED      ...
prometheus   default     1          2023-05-18   ...
$ kubectl get service prometheus-operated
NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   ...
prometheus-operated   ClusterIP   None         <none>        ...
$ kubectl port-forward service/prometheus-operated 8080:9090
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
$ helm uninstall prometheus
release "prometheus" uninstalled
$ kubectl apply -f ./
configmap/data-config created
error: resource mapping not found for name: "nginx" namespace: "" \
from "deployment.yaml": no matches for kind "Deployment" in version \
"apps/v1beta2" ensure CRDs are installed first
$ kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1
Migrate to use the apps/v1 API version, available since v1.9. Existing persisted data can be retrieved/updated via the new version.
Deprecated API Migration Guide
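For a single manifest, rewriting the apiVersion by hand is quick. For a larger set of files, the optional kubectl convert plugin, which has to be installed separately, can perform the mechanical part of the migration:

$ kubectl convert -f deployment.yaml --output-version apps/v1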
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - image: nginx:1.23.4
        name: nginx
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: data-config
$ kubectl apply -f ./
configmap/data-config unchanged
The Deployment "nginx" is invalid:
* spec.selector: Required value
* spec.template.metadata.labels: Invalid value: \
map[string]string{"run":"app"}: `selector` does \
not match template `labels`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - image: nginx:1.23.4
        name: nginx
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: data-config

$ kubectl apply -f ./
configmap/data-config unchanged
deployment.apps/nginx created
$ kubectl get deployments,configmaps
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           45s
NAME                    DATA   AGE
configmap/data-config   2      19m

$ kubectl run web-server --image=nginx:1.23.0 --port=80 -o yaml \
  --dry-run=client > probed-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - image: nginx:1.23.0
    name: web-server
    ports:
    - containerPort: 80
      name: nginx-port
    startupProbe:
      httpGet:
        path: /
        port: nginx-port
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - image: nginx:1.23.0
    name: web-server
    ports:
    - containerPort: 80
      name: nginx-port
    startupProbe:
      httpGet:
        path: /
        port: nginx-port
    readinessProbe:
      httpGet:
        path: /
        port: nginx-port
      initialDelaySeconds: 5
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - image: nginx:1.23.0
    name: web-server
    ports:
    - containerPort: 80
      name: nginx-port
    startupProbe:
      httpGet:
        path: /
        port: nginx-port
    readinessProbe:
      httpGet:
        path: /
        port: nginx-port
      initialDelaySeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: nginx-port
      initialDelaySeconds: 10
      periodSeconds: 30

$ kubectl create -f probed-pod.yaml
pod/probed-pod created
$ kubectl get pod web-server
NAME         READY   STATUS              RESTARTS   AGE
web-server   0/1     ContainerCreating   0          7s
$ kubectl get pod web-server
NAME         READY   STATUS    RESTARTS   AGE
web-server   0/1     Running   0          8s
$ kubectl get pod web-server
NAME         READY   STATUS    RESTARTS   AGE
web-server   1/1     Running   0          38s
$ kubectl describe pod web-server
...
Containers:
web-server:
...
Ready: True
Restart Count: 0
Liveness: http-get http://:nginx-port/ delay=10s timeout=1s \
period=30s #success=1 #failure=3
Readiness: http-get http://:nginx-port/ delay=5s timeout=1s \
period=10s #success=1 #failure=3
Startup: http-get http://:nginx-port/ delay=0s timeout=1s \
period=10s #success=1 #failure=3
...
$ kubectl apply -f pod.yaml
pod/date-recorder created
$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
date-recorder   1/1     Running   0          5s
$ kubectl logs date-recorder
[Error: ENOENT: no such file or directory, open \
'/root/tmp/startup-marker.txt'] {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/root/tmp/curr-date.txt'
}
$ kubectl exec -it date-recorder -- /bin/sh
OCI runtime exec failed: exec failed: unable to start container \
process: exec: "/bin/sh": stat /bin/sh: no such file or \
directory: unknown
command terminated with exit code 126
$ kubectl debug -it date-recorder --image=busybox --target=debian \
--share-processes
Targeting container "debian". If you don't see processes from this \
container it may be because the container runtime doesn't support \
this feature.
Defaulting debug container name to debugger-rns89.
If you don't see a command prompt, try pressing enter.
/ # ps
PID USER TIME COMMAND
1 root 4:21 /nodejs/bin/node -e const fs = require('fs'); \
let timestamp = Date.now(); fs.writeFile('/root/tmp/startup-m
35 root 0:00 sh
41 root 0:00 ps
$ kubectl exec failing-pod -it -- /bin/sh
/ # ls /root/tmp
ls: /root/tmp: No such file or directory
apiVersion: v1
kind: Pod
metadata:
  name: date-recorder
spec:
  containers:
  - name: debian
    image: gcr.io/distroless/nodejs20-debian11
    command: ["/nodejs/bin/node", "-e", "const fs = require('fs');
      let timestamp = Date.now(); fs.writeFile('/var/startup/startup-marker.txt',
      timestamp.toString(), err => { if (err) { console.error(err); }
      while (true) {} });"]
    volumeMounts:
    - mountPath: /var/startup
      name: init-volume
  volumes:
  - name: init-volume
    emptyDir: {}
$ kubectl create ns stress-test
namespace/stress-test created
$ kubectl apply -f ./
pod/stress-1 created
pod/stress-2 created
pod/stress-3 created
$ kubectl top pods -n stress-test
NAME       CPU(cores)   MEMORY(bytes)
stress-1   50m          77Mi
stress-2   74m          138Mi
stress-3   58m          94Mi
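For namespaces with many Pods, sorting the metrics helps to spot the heaviest consumers right away; kubectl top supports sorting by cpu or memory:

$ kubectl top pods -n stress-test --sort-by=memory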
$ kubectl apply -f https://raw.githubusercontent.com/mongodb/\
mongodb-kubernetes-operator/master/config/crd/bases/mongodbcommunity.\
mongodb.com_mongodbcommunity.yaml
customresourcedefinition.apiextensions.k8s.io/mongodbcommunity.\
mongodbcommunity.mongodb.com created
$ kubectl get crds
NAME                                            CREATED AT
mongodbcommunity.mongodbcommunity.mongodb.com   2023-12-18T23:44:04Z
$ kubectl describe crds mongodbcommunity.mongodbcommunity.mongodb.com
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronExpression:
                type: string
              podName:
                type: string
              path:
                type: string
  scope: Namespaced
  names:
    kind: Backup
    singular: backup
    plural: backups
    shortNames:
    - bu

$ kubectl apply -f backup-resource.yaml
customresourcedefinition.apiextensions.k8s.io/backups.example.com created
$ kubectl get crd backups.example.com
NAME                  CREATED AT
backups.example.com   2023-05-24T15:11:15Z
$ kubectl describe crd backups.example.com
...
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nginx-backup
spec:
  cronExpression: "0 0 * * *"
  podName: nginx
  path: /usr/local/nginx

$ kubectl apply -f backup.yaml
backup.example.com/nginx-backup created
$ kubectl get backups
NAME           AGE
nginx-backup   24s
$ kubectl describe backup nginx-backup
...
$ kubectl config set-credentials mary --client-key=mary.key \
  --client-certificate=mary.crt --embed-certs=true
$ kubectl config set-context mary-context --cluster=kubernetes \
  --user=mary
$ kubectl config use-context mary-context
$ kubectl run nginx --image=nginx:1.25.2 --port=80
Error from server (Forbidden): pods is forbidden: User "mary" cannot \
create resource "pods" in API group "" in the namespace "default"
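Instead of switching contexts just to probe permissions, you can ask the API server directly. The kubectl auth can-i command, combined with user impersonation, answers with a plain yes or no; this assumes your own user is allowed to impersonate:

$ kubectl auth can-i create pods --as mary
no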
$ kubectl create namespace t23
$ kubectl create serviceaccount api-call -n t23
apiVersion: v1
kind: Pod
metadata:
  name: service-list
  namespace: t23
spec:
  serviceAccountName: api-call
  containers:
  - name: service-list
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H
      "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
      https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/services;
      sleep 10; done']
$ kubectl apply -f pod.yaml
$ kubectl logs service-list -n t23
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "services is forbidden: User \"system:serviceaccount:t23 \
:api-call\" cannot list resource \"services\" in API \
group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "services"
},
"code": 403
}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-services-clusterrole
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list"]

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccount-service-rolebinding
subjects:
- kind: ServiceAccount
  name: api-call
  namespace: t23
roleRef:
  kind: ClusterRole
  name: list-services-clusterrole
  apiGroup: rbac.authorization.k8s.io

$ kubectl apply -f clusterrole.yaml
$ kubectl apply -f rolebinding.yaml
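Before checking the Pod's logs again, you can verify that the binding took effect by impersonating the ServiceAccount; as before, this assumes your current user may impersonate:

$ kubectl auth can-i list services --as=system:serviceaccount:t23:api-call
yes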
$ kubectl logs service-list -n t23
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1108"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
"uid": "30eb5425-8f60-4bb7-8331-f91fe0999e20",
"resourceVersion": "199",
"creationTimestamp": "2022-09-08T18:06:52Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
},
...
}
]
}
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,\
      ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,\
      NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,\
      ResourceQuota
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - image: bmuschko/nodejs-hello-world:1.0.0
    name: hello
    ports:
    - name: nodejs-port
      containerPort: 3000
    volumeMounts:
    - name: log-volume
      mountPath: "/var/log"
    resources:
      requests:
        cpu: 100m
        memory: 500Mi
        ephemeral-storage: 1Gi
      limits:
        memory: 500Mi
        ephemeral-storage: 2Gi
  volumes:
  - name: log-volume
    emptyDir: {}

$ kubectl apply -f pod.yaml
pod/hello created
$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
minikube       Ready    control-plane   65s   v1.28.2
minikube-m02   Ready    <none>          44s   v1.28.2
minikube-m03   Ready    <none>          26s   v1.28.2
$ kubectl get pod hello -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
hello   1/1     Running   0          25s   10.244.2.2   minikube-m03
$ kubectl describe pod hello
...
Containers:
hello:
...
Limits:
ephemeral-storage: 2Gi
memory: 500Mi
Requests:
cpu: 100m
ephemeral-storage: 1Gi
memory:             500Mi
...
$ kubectl create namespace rq-demo
namespace/rq-demo created
$ kubectl apply -f resourcequota.yaml --namespace=rq-demo
resourcequota/app created
$ kubectl describe quota app --namespace=rq-demo
Name:             app
Namespace:        rq-demo
Resource          Used   Hard
--------          ----   ----
pods              0      2
requests.cpu      0      2
requests.memory   0      500Mi
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: nginx
    name: mypod
    resources:
      requests:
        cpu: "0.5"
        memory: "1Gi"
  restartPolicy: Never

$ kubectl apply -f pod.yaml --namespace=rq-demo
Error from server (Forbidden): error when creating "pod.yaml": pods \
"mypod" is forbidden: exceeded quota: app, requested: \
requests.memory=1Gi, used: requests.memory=0, limited: \
requests.memory=500Mi

$ kubectl apply -f pod.yaml --namespace=rq-demo
pod/mypod created
$ kubectl describe quota --namespace=rq-demo
Name:             app
Namespace:        rq-demo
Resource          Used    Hard
--------          ----    ----
pods              1       2
requests.cpu      500m    2
requests.memory   255Mi   500Mi
$ kubectl apply -f setup.yaml
namespace/d92 created
limitrange/cpu-limit-range created
$ kubectl describe limitrange cpu-limit-range -n d92
Name:       cpu-limit-range
Namespace:  d92
Type        Resource   Min    Max    Default Request   Default Limit   ...
----        --------   ---    ---    ---------------   -------------
Container   cpu        200m   500m   500m              500m            ...
apiVersion: v1
kind: Pod
metadata:
  name: pod-without-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx

$ kubectl apply -f pod-without-resource-requirements.yaml
pod/pod-without-resource-requirements created
$ kubectl describe pod pod-without-resource-requirements -n d92
...
Containers:
nginx:
Limits:
cpu: 500m
Requests:
cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-more-cpu-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx
    resources:
      requests:
        cpu: 400m
      limits:
        cpu: 1.5

$ kubectl apply -f pod-with-more-cpu-resource-requirements.yaml
Error from server (Forbidden): error when creating \
"pod-with-more-cpu-resource-requirements.yaml": pods \
"pod-with-more-cpu-resource-requirements" is forbidden: \
maximum cpu usage per Container is 500m, but limit is 1500m
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-less-cpu-resource-requirements
  namespace: d92
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: nginx
    resources:
      requests:
        cpu: 350m
      limits:
        cpu: 400m

$ kubectl apply -f pod-with-less-cpu-resource-requirements.yaml
pod/pod-with-less-cpu-resource-requirements created
$ kubectl describe pod pod-with-less-cpu-resource-requirements -n d92
...
Containers:
nginx:
Limits:
cpu: 400m
Requests:
cpu: 350m
$ kubectl create configmap app-config --from-file=application.yaml
configmap/app-config created
$ kubectl get configmap app-config -o yaml
apiVersion: v1
data:
application.yaml: |-
dev:
url: http://dev.bar.com
name: Developer Setup
prod:
url: http://foo.bar.com
name: My Cool App
kind: ConfigMap
metadata:
creationTimestamp: "2023-05-22T17:47:52Z"
name: app-config
namespace: default
resourceVersion: "7971"
uid: 00cf4ce2-ebec-48b5-a721-e1bde2aabd84
$ kubectl run backend --image=nginx:1.23.4-alpine -o yaml \
  --dry-run=client --restart=Never > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config

$ kubectl apply -f pod.yaml
pod/backend created
$ kubectl exec backend -it -- /bin/sh
/ # cd /etc/config
/etc/config # ls
application.yaml
/etc/config # cat application.yaml
dev:
  url: http://dev.bar.com
  name: Developer Setup
prod:
  url: http://foo.bar.com
  name: My Cool App
/etc/config # exit
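Mounting the ConfigMap as a volume fits whole configuration files like this one. If the ConfigMap instead held flat key-value pairs, injecting them as environment variables would be leaner. A minimal sketch of that alternative for the same container, assuming flat keys in app-config:

...
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    envFrom:
    - configMapRef:
        name: app-config
...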
$ kubectl create secret generic db-credentials --from-literal=\
db-password=passwd
secret/db-credentials created
$ kubectl get secret db-credentials -o yaml
apiVersion: v1
data:
  db-password: cGFzc3dk
kind: Secret
metadata:
  creationTimestamp: "2023-05-22T16:47:33Z"
  name: db-credentials
  namespace: default
  resourceVersion: "7557"
  uid: 2daf580a-b672-40dd-8c37-a4adb57a8c6c
type: Opaque
$ kubectl run backend --image=nginx:1.23.4-alpine -o yaml \
  --dry-run=client --restart=Never > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: db-password

$ kubectl apply -f pod.yaml
pod/backend created
$ kubectl exec -it backend -- env
DB_PASSWORD=passwd
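Environment variables can leak into logs and child processes. As a more defensive alternative, the same Secret can be mounted as a volume so the value surfaces as a file named after the key; a minimal sketch:

...
spec:
  containers:
  - image: nginx:1.23.4-alpine
    name: backend
    volumeMounts:
    - name: secret-volume
      mountPath: /var/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-credentials
...

The container would then read the password from the file /var/secrets/db-password.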
apiVersion: v1
kind: Pod
metadata:
  name: busybox-security-context
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "sleep 1h"]
apiVersion: v1
kind: Pod
metadata:
  name: busybox-security-context
spec:
  containers:
  - name: busybox
    image: busybox:1.36.1
    command: ["sh", "-c", "sleep 1h"]
    volumeMounts:
    - name: vol
      mountPath: /data/test
  volumes:
  - name: vol
    emptyDir: {}
apiVersion: v1
kind: Pod
metadata:
  name: busybox-security-context
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: busybox
    image: busybox:1.36.1
    command: ["sh", "-c", "sleep 1h"]
    volumeMounts:
    - name: vol
      mountPath: /data/test
    securityContext:
      allowPrivilegeEscalation: false
  volumes:
  - name: vol
    emptyDir: {}

$ kubectl apply -f pod.yaml
pod/busybox-security-context created
$ kubectl exec -it busybox-security-context -- sh
/ $ cd /data/test
/data/test $ touch logs.txt
/data/test $ ls -l
total 0
-rw-r--r--    1 1000     2000             0 May 23 01:10 logs.txt
/data/test $ exit
command terminated with exit code 1
apiVersion:apps/v1kind:Deploymentmetadata:name:nginxnamespace:h20spec:replicas:3selector:matchLabels:app:nginxtemplate:metadata:labels:app:nginxspec:containers:-name:nginximage:nginx:1.25.3-alpine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: h20
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.3-alpine
        securityContext:
          capabilities:
            drop:
            - all

$ kubectl apply -f deployment-security-context.yaml
deployment.apps/nginx created
$ kubectl get pods -n h20
NAME                     READY   STATUS             RESTARTS      AGE
nginx-674df44dfc-5l7jl   0/1     CrashLoopBackOff   4 (18s ago)   111s
nginx-674df44dfc-fmlrh   0/1     CrashLoopBackOff   4 (27s ago)   111s
nginx-674df44dfc-rqdkp   0/1     CrashLoopBackOff   4 (22s ago)   111s
$ kubectl logs nginx-674df44dfc-rqdkp -n h20
...
2023/12/15 23:59:56 [emerg] 1#1: chown("/var/cache/nginx/client_temp", \
101) failed (1: Operation not permitted) \
nginx: [emerg] chown("/var/cache/nginx/client_temp", 101) failed \
(1: Operation not permitted)
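The logs reveal that nginx needs to change the ownership of its cache directories at startup, which requires Linux capabilities that the manifest dropped wholesale. One way to resolve the crash loop, sketched here as one possible fix rather than the only valid one, is to add back just the capabilities the image commonly needs:

        securityContext:
          capabilities:
            drop:
            - all
            add:
            - CHOWN
            - DAC_OVERRIDE
            - SETGID
            - SETUID
            - NET_BIND_SERVICE

Alternatively, switching to an image designed to run without extra capabilities, such as nginxinc/nginx-unprivileged, avoids the problem altogether.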
$ kubectl create service clusterip myapp --tcp=80:80
service/myapp created
$ kubectl get services
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
myapp   ClusterIP   10.109.149.59   <none>        80/TCP    4s
$ kubectl create deployment myapp --image=nginx:1.23.4-alpine --port=80
deployment.apps/myapp created
$ kubectl get deployments,pods
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myapp   1/1     1            1           79s
NAME                         READY   STATUS    RESTARTS   AGE
pod/myapp-7d6cd46d65-jrc2q   1/1     Running   0          78s
$ kubectl scale deployment myapp --replicas=2
deployment.extensions/myapp scaled
$ kubectl get deployments,pods
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myapp   2/2     2            2           107s
NAME                         READY   STATUS    RESTARTS   AGE
pod/myapp-7d6cd46d65-8vr8t   1/1     Running   0          5s
pod/myapp-7d6cd46d65-jrc2q   1/1     Running   0          106s
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -- wget -O- 10.109.149.59:80
Connecting to 10.109.149.59:80 (10.109.149.59:80)
writing to stdout
...
written to stdout
pod "tmp" deleted
$ kubectl edit service myapp
...
spec:
  type: NodePort
...
$ kubectl get services
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
myapp   NodePort   10.109.149.59   <none>        80:31205/TCP   6m44s
$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    ...
minikube   Ready    control-plane   11s   v1.28.3   192.168.49.2   ...
$ wget -O- 192.168.49.2:31205
--2019-05-10 16:32:35--  http://192.168.49.2:31205/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:31205... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
...
2019-05-10 16:32:35 (24.3 MB/s) - written to stdout [612/612]
$ kubectl apply -f setup.yaml
namespace/y72 created
deployment.apps/web-app created
service/web-app created
$ kubectl get all -n y72
NAME                           READY   STATUS    RESTARTS   AGE
pod/web-app-5f77f59c78-8svdm   1/1     Running   0          10m
pod/web-app-5f77f59c78-mhvjz   1/1     Running   0          10m
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
service/web-app   ClusterIP   10.106.215.153   <none>        80/TCP
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web-app   2/2     2            2           10m
NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/web-app-5f77f59c78   2         2         2       10m
$ kubectl run tmp --image=busybox --restart=Never -it --rm -n y72 \
  -- wget web-app
Connecting to web-app (10.106.215.153:80)
wget: can't connect to remote host (10.106.215.153): Connection refused
pod "tmp" deleted
pod y72/tmp terminated (Error)
$ kubectl get endpoints -n y72
NAME      ENDPOINTS   AGE
web-app   <none>      15m
$ kubectl describe service web-app -n y72
Name:               web-app
Namespace:          y72
Labels:             <none>
Annotations:        <none>
Selector:           run=myapp
Type:               ClusterIP
IP Family Policy:   SingleStack
IP Families:        IPv4
IP:                 10.106.215.153
IPs:                10.106.215.153
Port:               <unset>  80/TCP
TargetPort:         3001/TCP
Endpoints:          <none>
Session Affinity:   None
Events:             <none>
$ kubectl get endpoints -n y72
NAME      ENDPOINTS                         AGE
web-app   10.244.0.3:3000,10.244.0.4:3000   24m
$ kubectl edit service web-app -n y72
service/web-app edited
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm -n y72 \
  -- wget web-app
Connecting to web-app (10.106.215.153:80)
saving to 'index.html'
index.html           100% |********************************| ...
'index.html' saved
pod "tmp" deleted
$ kubectl create deployment web --image=bmuschko/nodejs-hello-world:1.0.0
deployment.apps/web created
$ kubectl get deployment web
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           6s
$ kubectl expose deployment web --type=ClusterIP --port=3000
service/web exposed
$ kubectl get service web
NAME   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
web    ClusterIP   10.100.86.59   <none>        3000/TCP   6s
$ kubectl run tmp --image=busybox:1.36.1 --restart=Never -it --rm \
  -- wget -O- web:3000
Connecting to web:3000 (10.100.86.59:3000)
writing to stdout
Hello World
...
pod "tmp" deleted
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.exposed
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 3000

$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/hello-world-ingress created
$ kubectl get ingress hello-world-ingress
NAME                  CLASS   HOSTS                 ADDRESS         ...
hello-world-ingress   nginx   hello-world.exposed   192.168.64.38   ...
192.168.64.38 hello-world.exposed
$ curl hello-world.exposed
Hello World
$ kubectl apply -f setup.yaml
namespace/s96 created
deployment.apps/nginx created
service/nginx created
ingress.networking.k8s.io/nginx created
$ wget faulty.ingress.com/
--2024-01-04 09:45:44--  http://faulty.ingress.com/
Resolving faulty.ingress.com (faulty.ingress.com)... 127.0.0.1
Connecting to faulty.ingress.com (faulty.ingress.com)|127.0.0.1|:80...
HTTP request sent, awaiting response... 503 Service Temporarily \
Unavailable
2024-01-04 09:45:44 ERROR 503: Service Temporarily Unavailable.
$ kubectl apply -f setup.yaml
namespace/s96 unchanged
deployment.apps/nginx unchanged
service/nginx unchanged
ingress.networking.k8s.io/nginx configured
$ wget faulty.ingress.com/
--2024-01-04 09:43:03--  http://faulty.ingress.com/
Resolving faulty.ingress.com (faulty.ingress.com)... 127.0.0.1
Connecting to faulty.ingress.com (faulty.ingress.com)|127.0.0.1|:80...
HTTP request sent, awaiting response... 200 OK
...
$ kubectl apply -f setup.yaml
namespace/end-user created
namespace/internal created
pod/frontend created
pod/backend created
$ kubectl get pod backend --template '{{.status.podIP}}' -n internal
10.244.0.49
$ kubectl exec frontend -it -n end-user \
-- wget --spider --timeout=1 10.244.0.49
Connecting to 10.244.0.49 (10.244.0.49:80)
remote file exists
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-stack
  namespace: end-user
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
      namespaceSelector:
        matchLabels:
          access: inside
    ports:
    - protocol: TCP
      port: 80

$ kubectl apply -f allow-egress-networkpolicy.yaml
networkpolicy.networking.k8s.io/allow-egress-networkpolicy created
$ kubectl exec frontend -it -n end-user \
  -- wget --spider --timeout=1 10.244.0.49
Connecting to 10.244.0.49 (10.244.0.49:80)
remote file exists
$ kubectl apply -f setup.yaml
namespace/k1 created
namespace/k2 created
pod/busybox created
pod/nginx created
networkpolicy.networking.k8s.io/default-deny-ingress created
$ kubectl get pod -n k1
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s
$ kubectl get pod nginx --template '{{.status.podIP}}' -n k2
10.0.0.101
$ kubectl exec -it busybox -n k1 -- wget --timeout=5 10.0.0.101:80
Connecting to 10.0.0.101:80 (10.0.0.101:80)
wget: download timed out
command terminated with exit code 1
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-networkpolicy
  namespace: k2
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: consumer
    ports:
    - protocol: TCP
      port: 80

$ kubectl apply -f allow-ingress-networkpolicy.yaml
networkpolicy.networking.k8s.io/allow-ingress-networkpolicy created
$ kubectl exec -it busybox -n k1 -- wget --timeout=5 10.0.0.101:80
Connecting to 10.0.0.101:80 (10.0.0.101:80)
saving to 'index.html'
...
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
| Define, build, and modify container images | | | N/A |
| Understand Jobs and CronJobs | | | Indexed Job for Parallel Processing with Static Work Assignment, Automatic Cleanup for Finished Jobs, Running Automated Tasks with a CronJob |
| Understand multi-container Pod design patterns | | | N/A |
| Utilize persistent and ephemeral volumes | | | N/A |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
| Use Kubernetes primitives to implement common deployment strategies | | | |
| Understand Deployments and how to perform rolling updates | | | Using kubectl to Create a Deployment, Performing a Rolling Update, Running Multiple Instances of Your App |
| Use the Helm package manager to deploy existing packages | | | Helm Charts: making it simple to package and deploy common applications on Kubernetes |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
| Understand API deprecations | | Kubernetes Deprecation Policy, Deprecated API Migration Guide | N/A |
| Implement probes and health checks | | | |
| Use provided tools to monitor Kubernetes applications | | | |
| Utilize container logs | | | |
| Debugging in Kubernetes | | N/A | Troubleshooting Applications, Debug Running Pods, Debug Pods, Use Port Forwarding to Access Applications in a Cluster |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
| Discover and use resources that extend Kubernetes (CRD) | | | |
| Understand authentication, authorization and admission control | | Controlling Access to the Kubernetes API, Authenticating, Using RBAC Authorization, Admission Controllers Reference | N/A |
| Understand and define resource requirements, limits and quotas | | Resource Management for Pods and Containers, Limit Ranges, Resource Quotas | N/A |
| Understand ConfigMaps | | | |
| Create and consume Secrets | | Managing Secrets using kubectl, Managing Secrets using Configuration File | |
| Understand ServiceAccounts | | | |
| Understand SecurityContext | | | N/A |
| Exam Objective | Chapter | Reference Documentation | Tutorial |
|---|---|---|---|
| Demonstrate basic understanding of NetworkPolicies | | | |
| Provide and troubleshoot access to applications via services | | | |
| Use Ingress rules to expose applications | | | Set up Ingress on Minikube with the NGINX Ingress Controller |