In-Depth Guidance and Practice
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: orion
  name: g04
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: backend
  name: backend
  namespace: g04
spec:
  containers:
  - image: bmuschko/nodejs-hello-world:1.0.0
    name: hello
    ports:
    - containerPort: 3000
  restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: frontend
  name: frontend
  namespace: g04
spec:
  containers:
  - image: alpine
    name: frontend
    args:
    - /bin/sh
    - -c
    - while true; do sleep 5; done;
  restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: outside
  name: other
spec:
  containers:
  - image: alpine
    name: other
    args:
    - /bin/sh
    - -c
    - while true; do sleep 5; done;
  restartPolicy: Never
$ kubectl apply -f setup.yaml namespace/g04 created pod/backend created pod/frontend created pod/other created
$ kubectl get pods -n g04 -o wide NAME READY STATUS RESTARTS AGE IP NODE \ NOMINATED NODE READINESS GATES backend 1/1 Running 0 15s 10.0.0.43 minikube \ <none> <none> frontend 1/1 Running 0 15s 10.0.0.193 minikube \ <none> <none>
$ kubectl get pods NAME READY STATUS RESTARTS AGE other 1/1 Running 0 4h45m
$ kubectl exec frontend -it -n g04 -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) remote file exists / # exit
$ kubectl exec other -it -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) remote file exists / # exit
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: g04
spec:
  podSelector: {}
  policyTypes:
  - Ingress
$ kubectl apply -f deny-all-ingress-network-policy.yaml networkpolicy.networking.k8s.io/default-deny-ingress created
$ kubectl exec frontend -it -n g04 -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) wget: download timed out / # exit
$ kubectl exec other -it -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) wget: download timed out
$ kubectl get ns g04 --show-labels NAME STATUS AGE LABELS g04 Active 12m app=orion,kubernetes.io/metadata.name=g04 $ kubectl get pods -n g04 --show-labels NAME READY STATUS RESTARTS AGE LABELS backend 1/1 Running 0 9m46s tier=backend frontend 1/1 Running 0 9m46s tier=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-ingress
  namespace: g04
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          app: orion
      podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 3000
$ kubectl apply -f backend-ingress-network-policy.yaml networkpolicy.networking.k8s.io/backend-ingress created
$ kubectl exec frontend -it -n g04 -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) remote file exists / # exit
$ kubectl exec other -it -- /bin/sh / # wget --spider --timeout=1 10.0.0.43:3000 Connecting to 10.0.0.43:3000 (10.0.0.43:3000) wget: download timed out
$ kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/\ main/job-master.yaml job.batch/kube-bench-master created
$ kubectl get pods NAME READY STATUS RESTARTS AGE kube-bench-master-8f6qh 0/1 Completed 0 45s
$ kubectl logs kube-bench-master-8f6qh
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are \
set to 644 or more restrictive (Automated)
... [INFO] 1.2 API Server [WARN] 1.2.1 Ensure that the --anonymous-auth argument is set to false \ (Manual)
... [FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set \ as appropriate (Automated)
== Remediations master ==
...
1.2.1 Edit the API server pod specification file /etc/kubernetes/manifests/ \
kube-apiserver.yaml on the control plane node and set the below parameter.
--anonymous-auth=false
...
1.2.6 Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and \
set the --kubelet-certificate-authority parameter to the path to the cert \
file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
...
== Summary total ==
42 checks PASS
9 checks FAIL
11 checks WARN
0 checks INFO

1. The inspected node, in this case the control plane node.
2. A passed check. Here, the file permissions of the API server configuration file.
3. A warning message that prompts you to manually check the value of an argument provided to the API server executable.
4. A failed check. For example, the flag --kubelet-certificate-authority should be set for the API server executable.
5. The remediation action to take to fix a problem. The number, e.g., 1.2.1, of the failure or warning corresponds to the number assigned to the remediation action.
6. The summary of all passed and failed checks plus warning and informational messages.
[INFO] 1.2 API Server
...
[WARN] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is \
set (Manual)

== Remediations master ==
...
1.2.12 Edit the API server pod specification file /etc/kubernetes/manifests/ \
kube-apiserver.yaml on the control plane node and set the \
--enable-admission-plugins parameter to include AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: \
      192.168.56.10:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.56.10
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,AlwaysPullImages
...
$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE ... kube-apiserver-control-plane 1/1 Running 0 71m ...
$ kubectl delete job kube-bench-master job.batch "kube-bench-master" deleted
$ kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/\ main/job-master.yaml job.batch/kube-bench-master created $ kubectl get pods NAME READY STATUS RESTARTS AGE kube-bench-master-5gjdn 0/1 Completed 0 10s $ kubectl logs kube-bench-master-5gjdn | grep 1.2.12 [PASS] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is \ set (Manual)
apiVersion: v1
kind: Namespace
metadata:
  name: t75
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: t75
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: accounting-service
  namespace: t75
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
$ kubectl apply -f setup.yaml namespace/t75 created deployment.apps/nginx-deployment created service/accounting-service created
$ kubectl get all -n t75 NAME READY STATUS RESTARTS AGE pod/nginx-deployment-6595874d85-5rdrh 1/1 Running 0 108s pod/nginx-deployment-6595874d85-jmhvh 1/1 Running 0 108s pod/nginx-deployment-6595874d85-vtwxp 1/1 Running 0 108s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) \ AGE service/accounting-service ClusterIP 10.97.101.228 <none> 80/TCP \ 108s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deployment 3/3 3 3 108s
$ kubectl run tmp --image=busybox --restart=Never -it --rm \ -- wget 10.97.101.228:80 Connecting to 10.97.101.228:80 (10.97.101.228:80) saving to 'index.html' index.html 100% |**| 612 0:00:00 ETA 'index.html' saved pod "tmp" deleted
$ openssl req -nodes -new -x509 -keyout accounting.key -out accounting.crt \
-subj "/CN=accounting.tls"
Generating a 2048 bit RSA private key
...........................+
..........................+
writing new private key to 'accounting.key'
-----
$ ls
accounting.crt accounting.key
$ kubectl create secret tls accounting-secret --cert=accounting.crt \ --key=accounting.key -n t75 secret/accounting-secret created
apiVersion: v1
kind: Secret
metadata:
  name: accounting-secret
  namespace: t75
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk...
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk...
$ base64 accounting.crt LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNyakNDQ...
$ kubectl create ingress accounting-ingress \ --rule="accounting.internal.acme.com/*=accounting-service:80, \ tls=accounting-secret" -n t75 ingress.networking.k8s.io/accounting-ingress created
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: accounting-ingress
  namespace: t75
spec:
  tls:
  - hosts:
    - accounting.internal.acme.com
    secretName: accounting-secret
  rules:
  - host: accounting.internal.acme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: accounting-service
            port:
              number: 80
$ kubectl get ingress -n t75 NAME CLASS HOSTS ADDRESS \ PORTS AGE accounting-ingress nginx accounting.internal.acme.com 192.168.64.91 \ 80, 443 55s
$ kubectl describe ingress accounting-ingress -n t75
Name: accounting-ingress
Labels: <none>
Namespace: t75
Address: 192.168.64.91
Ingress Class: nginx
Default backend: <default>
TLS:
accounting-secret terminates accounting.internal.acme.com
Rules:
Host Path Backends
---- ---- --------
accounting.internal.acme.com
/ accounting-service:80 \
(172.17.0.5:80,172.17.0.6:80,172.17.0.7:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 1s (x2 over 31s) nginx-ingress-controller Scheduled for sync
$ kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP \ EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME minikube Ready control-plane 3d19h v1.24.1 192.168.64.91 \ <none> Buildroot 2021.02.12 5.10.57 docker://20.10.16
$ sudo vim /etc/hosts ... 192.168.64.91 accounting.internal.acme.com
$ wget -O- https://accounting.internal.acme.com --no-check-certificate --2022-07-28 15:32:43-- https://accounting.internal.acme.com/ Resolving accounting.internal.acme.com (accounting.internal.acme.com)... \ 192.168.64.91 Connecting to accounting.internal.acme.com (accounting.internal.acme.com) \ |192.168.64.91|:443... connected. WARNING: cannot verify accounting.internal.acme.com's certificate, issued \ by ‘CN=Kubernetes Ingress Controller Fake Certificate,O=Acme Co’: Self-signed certificate encountered. WARNING: no certificate subject alternative name matches requested host name ‘accounting.internal.acme.com’. HTTP request sent, awaiting response... 200 OK
| Port range | Purpose |
|---|---|
| 6443 | Kubernetes API server |
| 2379–2380 | etcd server client API |
| 10250 | Kubelet API |
| 10259 | kube-scheduler |
| 10257 | kube-controller-manager |
| Port range | Purpose |
|---|---|
| 10250 | Kubelet API |
| 30000–32767 | NodePort Services |
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress-metadata-server
  namespace: a12
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/\ v2.6.0/aio/deploy/recommended.yaml
$ kubectl get deployments,pods,services -n kubernetes-dashboard NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/dashboard-metrics-scraper 1/1 1 1 11m deployment.apps/kubernetes-dashboard 1/1 1 1 11m NAME READY STATUS RESTARTS AGE pod/dashboard-metrics-scraper-78dbd9dbf5-f8z4x 1/1 Running 0 11m pod/kubernetes-dashboard-5fd5574d9f-ns7nl 1/1 Running 0 11m NAME TYPE CLUSTER-IP EXTERNAL-IP \ PORT(S) AGE service/dashboard-metrics-scraper ClusterIP 10.98.6.37 <none> \ 8000/TCP 11m service/kubernetes-dashboard ClusterIP 10.102.234.158 <none> \ 80/TCP 11m
$ kubectl proxy Starting to serve on 127.0.0.1:8001
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
$ kubectl create -f admin-user-serviceaccount.yaml serviceaccount/admin-user created $ kubectl create -f admin-user-clusterrolebinding.yaml clusterrolebinding.rbac.authorization.k8s.io/admin-user created
$ kubectl create token admin-user -n kubernetes-dashboard eyJhbGciOiJSUzI1NiIsImtpZCI6...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: developer-user
  namespace: kubernetes-dashboard
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: cluster-developer
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list
  - watch
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developer-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-developer
subjects:
- kind: ServiceAccount
  name: developer-user
  namespace: kubernetes-dashboard
$ kubectl create -f restricted-user-serviceaccount.yaml serviceaccount/developer-user created $ kubectl create -f restricted-user-clusterrole.yaml clusterrole.rbac.authorization.k8s.io/cluster-developer created $ kubectl create -f restricted-user-clusterrolebinding.yaml clusterrolebinding.rbac.authorization.k8s.io/developer-user created
$ kubectl create token developer-user -n kubernetes-dashboard eyJhbGciOiJSUzI1NiIsImtpZCI6...
$ curl -LO "https://dl.k8s.io/v1.26.1/bin/linux/amd64/kubeadm" $ curl -LO "https://dl.k8s.io/v1.26.1/bin/linux/amd64/kubeadm.sha256"
$ echo "$(cat kubeadm.sha256) kubeadm" | shasum -a 256 --check kubeadm: OK
By default, Pod-to-Pod communication is unrestricted. Instantiate a default deny rule to restrict Pod-to-Pod network traffic following the principle of least privilege. The attribute spec.podSelector of a network policy selects the target Pods the rules apply to based on labels. The ingress and egress rules define the Pods, namespaces, IP addresses, and ports for which incoming and outgoing traffic is allowed. Network policies can be aggregated: a default deny rule may disallow ingress and/or egress traffic, and an additional network policy can then open up those rules with a more fine-grained definition.
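As a quick reference, a default deny rule covering both traffic directions could look like the following sketch, modeled on the upstream documentation (the namespace g04 is reused from the earlier walkthrough):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: g04
spec:
  # An empty podSelector selects all Pods in the namespace.
  podSelector: {}
  # Listing both policy types denies all incoming and outgoing
  # traffic until more specific policies allow it again.
  policyTypes:
  - Ingress
  - Egress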
The Kubernetes CIS Benchmark is a set of best practices for recommended security settings in a production Kubernetes environment. You can automate the process of detecting security risks with the help of the tool kube-bench. The generated report from running kube-bench describes detailed remediation actions to fix a detected issue. Learn how to interpret the results and how to mitigate the issue.
An Ingress can be configured to send and receive encrypted data by exposing an HTTPS endpoint. For this to work, you need to create a TLS Secret object and assign it a TLS certificate and key. The Secret can then be consumed by the Ingress using the attribute spec.tls[].
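If you want to verify which certificate the Ingress actually serves, one option is to inspect the TLS handshake directly. The following probe is a sketch assuming the /etc/hosts mapping from the earlier example; it should report the subject of the certificate stored in the Secret (here CN=accounting.tls):

$ openssl s_client -connect accounting.internal.acme.com:443 \
  -servername accounting.internal.acme.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject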
GUI elements, such as the Kubernetes Dashboard, provide a convenient way to manage objects. Attackers can cause harm to your cluster if the application isn’t protected from unauthorized access. For the exam, you need to know how to properly set up RBAC for specific stakeholders. Moreover, you are expected to have a rough understanding of security-related command line arguments. Practice the installation process for the Dashboard, learn how to tweak its command line arguments, and understand the effects of setting permissions for different users.
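As an illustration of such command line arguments, the Dashboard's Deployment can be tightened with flags along these lines (a sketch; flag support may vary by Dashboard version, so verify against the release you install):

spec:
  containers:
  - name: kubernetes-dashboard
    args:
    # Accept only bearer tokens for login instead of less secure options.
    - --authentication-mode=token
    # Serve the UI over HTTPS with generated certificates.
    - --auto-generate-certificates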
Platform binaries like kubectl and kubeadm can be verified against their corresponding hash code. Know where to find the hash file and how to use a validation tool to identify whether the binary has been tampered with.
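The same pattern applies to the other platform binaries; for example, verifying a kubectl download is a near-identical sketch (URL mirroring the kubeadm example above, checksum tool depending on your OS):

$ curl -LO "https://dl.k8s.io/v1.26.1/bin/linux/amd64/kubectl"
$ curl -LO "https://dl.k8s.io/v1.26.1/bin/linux/amd64/kubectl.sha256"
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK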
$ kubectl cluster-info Kubernetes control plane is running at https://172.28.40.5:6443 ...
$ kubectl get service kubernetes NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32s
$ kubectl get endpoints kubernetes NAME ENDPOINTS AGE kubernetes 172.28.40.5:6443 4m3s
$ kubectl run kubernetes-envs --image=alpine:3.16.2 -it --rm --restart=Never \ -- env KUBERNETES_SERVICE_HOST=10.96.0.1 KUBERNETES_SERVICE_PORT=443
$ curl https://172.28.40.5:6443/api/v1/namespaces -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:anonymous\" cannot list \
    resource \"namespaces\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
$ kubectl config view --raw
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...
    server: https://172.28.132.5:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktL...

1. The base64-encoded value of the certificate authority
2. The user entry with administrator permissions created by default
3. The base64-encoded value of the user's client certificate
4. The base64-encoded value of the user's private key
$ echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL... | base64 -d > ca
$ echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL... | base64 -d > kubernetes-admin.crt
$ echo LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktL... | base64 -d \
  > kubernetes-admin.key
$ curl --cacert ca --cert kubernetes-admin.crt --key kubernetes-admin.key \
https://172.28.132.5:6443/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "2387"
  },
  "items": [
    ...
  ]
}
$ openssl genrsa -out johndoe.key 2048
Generating RSA private key, 2048 bit long modulus
...+
......................................................................+
e is 65537 (0x10001)
$ openssl req -new -key johndoe.key -out johndoe.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:
State or Province Name (full name) []:
Locality Name (eg, city) []:
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:johndoe
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
$ cat johndoe.csr | base64 | tr -d "\n" LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tL...
$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: johndoe
spec:
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tL...
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
EOF
certificatesigningrequest.certificates.k8s.io/johndoe created
$ kubectl get csr johndoe NAME AGE SIGNERNAME REQUESTOR \ REQUESTEDDURATION CONDITION johndoe 6s kubernetes.io/kube-apiserver-client minikube-user \ 24h Pending
$ kubectl certificate approve johndoe certificatesigningrequest.certificates.k8s.io/johndoe approved $ kubectl get csr johndoe NAME AGE SIGNERNAME REQUESTOR \ REQUESTEDDURATION CONDITION johndoe 17s kubernetes.io/kube-apiserver-client minikube-user \ 24h Approved,Issued
$ kubectl get csr johndoe -o jsonpath={.status.certificate}| base64 \
-d > johndoe.crt
$ kubectl create role developer --verb=create --verb=get --verb=list \ --verb=update --verb=delete --resource=pods role.rbac.authorization.k8s.io/developer created
$ kubectl create rolebinding developer-binding-johndoe --role=developer \ --user=johndoe rolebinding.rbac.authorization.k8s.io/developer-binding-johndoe created
$ kubectl config set-credentials johndoe --client-key=johndoe.key \ --client-certificate=johndoe.crt --embed-certs=true User "johndoe" set. $ kubectl config set-context johndoe --cluster=minikube --user=johndoe Context "johndoe" created.
$ kubectl config use-context johndoe Switched to context "johndoe".
$ kubectl get pods No resources found in default namespace.
$ kubectl get namespaces Error from server (Forbidden): namespaces is forbidden: User "johndoe" cannot \ list resource "namespaces" in API group "" at the cluster scope
$ kubectl config use-context minikube Switched to context "minikube".
apiVersion: v1
kind: Namespace
metadata:
  name: k97
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-api
  namespace: k97
---
apiVersion: v1
kind: Pod
metadata:
  name: list-objects
  namespace: k97
spec:
  serviceAccountName: sa-api
  containers:
  - name: pods
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H \
      "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/\
      serviceaccount/token)" https://kubernetes.default.svc.cluster.\
      local/api/v1/namespaces/k97/pods; sleep 10; done']
  - name: deployments
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5 -H \
      "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/\
      serviceaccount/token)" https://kubernetes.default.svc.cluster.\
      local/apis/apps/v1/namespaces/k97/deployments; sleep 10; done']
$ kubectl apply -f setup.yaml namespace/k97 created serviceaccount/sa-api created pod/list-objects created
$ kubectl logs list-objects -c pods -n k97
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:k97:sa-api\" \
    cannot list resource \"pods\" in API group \"\" in the \
    namespace \"k97\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
$ kubectl logs list-objects -c deployments -n k97
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps is forbidden: User \
    \"system:serviceaccount:k97:sa-api\" cannot list resource \
    \"deployments\" in API group \"apps\" in the namespace \
    \"k97\"",
  "reason": "Forbidden",
  "details": {
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-pods-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
$ kubectl apply -f clusterrole.yaml clusterrole.rbac.authorization.k8s.io/list-pods-clusterrole created
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccount-pod-rolebinding
  namespace: k97
subjects:
- kind: ServiceAccount
  name: sa-api
roleRef:
  kind: ClusterRole
  name: list-pods-clusterrole
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f rolebinding.yaml rolebinding.rbac.authorization.k8s.io/serviceaccount-pod-rolebinding created
$ kubectl logs list-objects -c pods -n k97
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "628"
  },
  "items": [
    {
      "metadata": {
        "name": "list-objects",
        "namespace": "k97",
        ...
    }
  ]
}
$ kubectl logs list-objects -c deployments -n k97
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps is forbidden: User \
    \"system:serviceaccount:k97:sa-api\" cannot list resource \
    \"deployments\" in API group \"apps\" in the namespace \
    \"k97\"",
  "reason": "Forbidden",
  "details": {
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-api
  namespace: k97
automountServiceAccountToken: false
apiVersion: v1
kind: Pod
metadata:
  name: list-objects
  namespace: k97
spec:
  serviceAccountName: sa-api
  automountServiceAccountToken: false
...
$ kubectl create token sa-api eyJhbGciOiJSUzI1NiIsImtpZCI6IjBtQkJzVWlsQjl...
$ kubectl create token sa-api --duration 10m eyJhbGciOiJSUzI1NiIsImtpZCI6IjBtQkJzVWlsQjl...
$ kubectl get serviceaccount sa-api -n k97 NAME SECRETS AGE sa-api 0 42m
apiVersion: v1
kind: Secret
metadata:
  name: sa-api-secret
  namespace: k97
  annotations:
    kubernetes.io/service-account.name: sa-api
type: kubernetes.io/service-account-token
$ kubectl create -f secret.yaml secret/sa-api-secret created
$ kubectl describe secret sa-api-secret -n k97 ... Data ==== ca.crt: 1111 bytes namespace: 3 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjBtQkJzVWlsQjl...
This chapter demonstrated the different ways to communicate with the Kubernetes API. We performed API requests by switching to a user context and by making RESTful API calls with curl. You will need to understand how to determine the endpoint of the API server and how to use different authentication methods, e.g., client credentials and bearer tokens. Explore the Kubernetes API and its endpoints on your own for broader exposure.
Anonymous user requests to the Kubernetes API will not allow any substantial operations. For requests coming from a user or a service account, you will need to carefully analyze the permissions granted to the subject. Learn the ins and outs of defining RBAC rules by creating the relevant objects to control permissions. Service accounts automatically mount a token when used in a Pod. Only expose the token as a Volume if you intend to make API calls from the Pod.
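A quick way to check such permissions without issuing real API calls is kubectl auth can-i combined with impersonation; for the service account from the earlier exercise, the checks below should reflect the RBAC setup created there:

$ kubectl auth can-i list pods \
  --as=system:serviceaccount:k97:sa-api -n k97
yes
$ kubectl auth can-i list deployments.apps \
  --as=system:serviceaccount:k97:sa-api -n k97
no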
A Kubernetes cluster needs to be maintained over time for security reasons. Attackers may try to take advantage of known vulnerabilities in outdated Kubernetes versions. The version upgrade process is part of every administrator's job and shouldn't be ignored.
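The detailed steps depend on the setup, but for kubeadm-managed clusters the control plane upgrade roughly follows the sequence sketched below (version and node name are placeholders; consult the kubeadm upgrade documentation for the exact procedure):

$ sudo apt-get update && sudo apt-get install -y kubeadm=1.26.1-00
$ sudo kubeadm upgrade plan
$ sudo kubeadm upgrade apply v1.26.1
$ kubectl drain <node-name> --ignore-daemonsets
$ sudo apt-get install -y kubelet=1.26.1-00 kubectl=1.26.1-00
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
$ kubectl uncordon <node-name>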
$ systemctl | grep running ... snapd.service loaded active running Snap Daemon
$ systemctl status snapd
● snapd.service - Snap Daemon
Loaded: loaded (/lib/systemd/system/snapd.service; enabled; vendor \
preset: enabled)
Active: active (running) since Mon 2022-09-19 22:49:56 UTC; 30min ago
TriggeredBy: ● snapd.socket
Main PID: 704 (snapd)
Tasks: 12 (limit: 2339)
Memory: 45.9M
CGroup: /system.slice/snapd.service
└─704 /usr/lib/snapd/snapd
$ sudo systemctl stop snapd Warning: Stopping snapd.service, but it can still be activated by: snapd.socket
$ sudo systemctl disable snapd Removed /etc/systemd/system/multi-user.target.wants/snapd.service.
$ systemctl status snapd
● snapd.service - Snap Daemon
Loaded: loaded (/lib/systemd/system/snapd.service; disabled; vendor \
preset: enabled)
Active: inactive (dead) since Mon 2022-09-19 23:22:22 UTC; 4min 4s ago
TriggeredBy: ● snapd.socket
Main PID: 704 (code=exited, status=0/SUCCESS)
$ sudo apt purge --auto-remove snapd Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: snapd* squashfs-tools* 0 upgraded, 0 newly installed, 2 to remove and 116 not upgraded. After this operation, 147 MB disk space will be freed. Do you want to continue? [Y/n] y ...
$ systemctl status snapd Unit snapd.service could not be found.
$ cat /etc/passwd root:x:0:0:root:/root:/bin/bash nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin ...
$ ps aux | grep bash root 956 0.0 0.4 22512 19200 pts/0 Ss 17:57 0:00 -bash root 7064 0.0 0.0 6608 2296 pts/0 S+ 18:08 0:00 grep \ --color=auto bash
$ sudo adduser ben Adding user ‘ben’ ... Adding new group ‘ben’ (1001) ... Adding new user ‘ben’ (1001) with group ‘ben’ ... Creating home directory ‘/home/ben’ ... Copying files from ‘/etc/skel’ ... New password: Retype new password: ...
$ cat /etc/passwd ... ben:x:1001:1001:,,,:/home/ben:/bin/bash
$ su ben Password: ben@controlplane:/root$ pwd /root
$ su - ben ben@controlplane:~$ pwd /home/ben
$ sudo -u ben pwd /root
$ sudo userdel -r ben
$ cat /etc/group root:x:0: plugdev:x:46:packer nogroup:x:65534: ...
$ sudo groupadd kube-developers
$ cat /etc/group ... kube-developers:x:1004:
$ sudo usermod -g kube-developers ben
$ cat /etc/passwd | grep ben ben:x:1001:1004:,,,:/home/ben:/bin/bash
$ sudo groupdel kube-developers groupdel: cannot remove the primary group of user ben
$ sudo usermod -g kube-admins ben $ sudo groupdel kube-developers
$ touch my-file
$ ls -l total 0 -rw-r--r-- 1 root root 0 Sep 26 17:53 my-file
$ chown ben my-file $ ls -l total 0 -rw-r--r-- 1 ben root 0 Sep 26 17:53 my-file
$ chmod -w my-file $ ls -l total 0 -r--r--r-- 1 ben root 0 Sep 26 17:53 my-file
$ sudo apt update $ sudo apt install apache2
$ sudo ss -ltpn
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
...
LISTEN 0 511 *:80 *:* users: \
(("apache2",pid=18435,fd=4),("apache2",pid=18434,fd=4),("apache2", \
pid=18432,fd=4))
$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor \
preset: enabled)
Active: active (running) since Tue 2022-09-20 22:25:25 UTC; 39s ago
Docs: https://httpd.apache.org/docs/2.4/
Main PID: 18432 (apache2)
Tasks: 55 (limit: 2339)
Memory: 5.6M
CGroup: /system.slice/apache2.service
├─18432 /usr/sbin/apache2 -k start
├─18434 /usr/sbin/apache2 -k start
└─18435 /usr/sbin/apache2 -k start
$ sudo systemctl stop apache2 $ sudo systemctl disable apache2 Synchronizing state of apache2.service with SysV service script with \ /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install disable apache2 Removed /etc/systemd/system/multi-user.target.wants/apache2.service. $ sudo apt purge --auto-remove apache2
$ sudo ss -ltpn | grep :80
$ sudo ufw allow ssh Rules updated Rules updated (v6) $ sudo ufw default deny outgoing Default outgoing policy changed to deny (be sure to update your rules accordingly) $ sudo ufw default deny incoming Default incoming policy changed to deny (be sure to update your rules accordingly) $ sudo ufw enable Command may disrupt existing ssh connections. Proceed with operation (y|n)? y Firewall is active and enabled on system startup
$ sudo ufw allow 6443 Rule added Rule added (v6)
$ sudo aa-status apparmor module is loaded. 31 profiles are loaded. 31 profiles are in enforce mode. /snap/snapd/15177/usr/lib/snapd/snap-confine ... 0 profiles are in complain mode. 14 processes have profiles defined. 14 processes are in enforce mode. /pause (11934) docker-default ... 0 processes are in complain mode. 0 processes are unconfined but have a profile defined.
Enforce mode: The system enforces the rules, reports violations, and writes them to the syslog. You will want to use this mode to prevent a program from making specific calls.
Complain mode: The system does not enforce the rules but will write violations to the log. This mode is helpful if you want to discover the calls a program makes.
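For reference, the k8s-deny-write profile loaded below could look something like this sketch (mirroring the example from the Kubernetes documentation): it allows all file operations except writes:

#include <tunables/global>

profile k8s-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  # Allow all file access by default ...
  file,

  # ... but deny write access to any path.
  deny /** w,
}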
$ sudo apparmor_parser /etc/apparmor.d/k8s-deny-write
$ sudo aa-status apparmor module is loaded. 32 profiles are loaded. 32 profiles are in enforce mode. k8s-deny-write ...
$ sudo apt-get update $ sudo apt-get install apparmor-utils
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: \
      localhost/k8s-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]

1. The annotation key that consists of a hard-coded prefix and the container name, separated by a slash character.
2. The profile name available on the current node, indicated by localhost.
3. The container name.
$ kubectl apply -f pod.yaml pod/hello-apparmor created $ kubectl get pod hello-apparmor NAME READY STATUS RESTARTS AGE hello-apparmor 1/1 Running 0 4s
$ kubectl exec -it hello-apparmor -- /bin/sh / # touch test.txt touch: test.txt: Permission denied
apiVersion: v1
kind: Pod
metadata:
  name: hello-seccomp
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: hello
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello seccomp!' && sleep 1h"]
$ kubectl apply -f pod.yaml pod/hello-seccomp created $ kubectl get pod hello-seccomp NAME READY STATUS RESTARTS AGE hello-seccomp 1/1 Running 0 4s
$ kubectl logs hello-seccomp Hello seccomp!
$ sudo mkdir -p /var/lib/kubelet/seccomp/profiles
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
      "names": ["mkdir"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}

1. The default action applies to all system calls. Here we'll allow all syscalls using SCMP_ACT_ALLOW.
2. You can filter for specific architectures the default action should apply to. The definition of the field is optional.
3. The default action can be overwritten by declaring more fine-grained rules. The SCMP_ACT_ERRNO action will prevent the execution of the mkdir syscall.
apiVersion: v1
kind: Pod
metadata:
  name: hello-seccomp
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/mkdir-violation.json
  containers:
  - name: hello
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello seccomp!' && sleep 1h"]
    securityContext:
      allowPrivilegeEscalation: false

1. Refers to a profile on the current node.
2. Applies the profile with the name mkdir-violation.json in the subdirectory profiles.
$ kubectl apply -f pod.yaml pod/hello-seccomp created $ kubectl get pod hello-seccomp NAME READY STATUS RESTARTS AGE hello-seccomp 1/1 Running 0 4s
$ kubectl exec -it hello-seccomp -- /bin/sh / # mkdir test mkdir: can't create directory test: Operation not permitted
The CKS exam primarily focuses on security functionality in Kubernetes. This domain crosses the boundary to Linux OS security features. It won’t hurt to explore Linux-specific tools and security aspects independent from the content covered in this chapter. On a high level, familiarize yourself with service, package, user, and network management on Linux.
AppArmor and seccomp are just some kernel hardening tools that can be integrated with Kubernetes to restrict system calls made from a container. Practice the process of loading a profile and applying it to a container. In order to expand your horizons, you may also want to explore other kernel functionality that works alongside Kubernetes, such as SELinux or sysctl.
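As a starting point on the sysctl side, Kubernetes lets you set a small group of "safe" sysctls straight from the Pod's security context; a minimal sketch (the Pod name is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    # kernel.shm_rmid_forced belongs to the safe set and needs
    # no extra kubelet configuration.
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: busybox
    image: busybox:1.35.0
    command: ["sh", "-c", "sleep 1h"]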
apiVersion: v1
kind: Pod
metadata:
  name: non-root-error
spec:
  containers:
  - image: nginx:1.23.1
    name: nginx
    securityContext:
      runAsNonRoot: true
$ kubectl apply -f container-non-root-user-error.yaml pod/non-root-error created $ kubectl get pod non-root-error NAME READY STATUS RESTARTS AGE non-root-error 0/1 CreateContainerConfigError 0 9s $ kubectl describe pod non-root-error ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 24s default-scheduler Successfully \ assigned default/non-root to minikube Normal Pulling 24s kubelet Pulling image \ "nginx:1.23.1" Normal Pulled 16s kubelet Successfully \ pulled image "nginx:1.23.1" in 7.775950615s Warning Failed 4s (x3 over 16s) kubelet Error: container \ has runAsNonRoot and image will run as root (pod: "non-root-error_default \ (6ed9ed71-1002-4dc2-8cb1-3423f86bd144)", container: secured-container) Normal Pulled 4s (x2 over 16s) kubelet Container image \ "nginx:1.23.1" already present on machine
apiVersion: v1
kind: Pod
metadata:
  name: non-root-success
spec:
  containers:
  - image: bitnami/nginx:1.23.1
    name: nginx
    securityContext:
      runAsNonRoot: true
$ kubectl apply -f container-non-root-user-success.yaml pod/non-root-success created $ kubectl get pod non-root-success NAME READY STATUS RESTARTS AGE non-root-success 1/1 Running 0 7s
$ kubectl exec non-root-success -it -- /bin/sh $ id uid=1001 gid=0(root) groups=0(root) $ exit
apiVersion: v1
kind: Pod
metadata:
  name: user-id
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
$ kubectl apply -f container-user-id.yaml pod/user-id created $ kubectl get pods user-id NAME READY STATUS RESTARTS AGE user-id 1/1 Running 0 6s
$ kubectl exec user-id -it -- /bin/sh / $ id uid=1000 gid=3000 groups=3000 / $ touch test.txt touch: test.txt: Permission denied / $ touch /tmp/test.txt / $ exit
apiVersion: v1
kind: Pod
metadata:
  name: non-privileged
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
$ kubectl apply -f non-privileged.yaml pod/non-privileged created $ kubectl get pods NAME READY STATUS RESTARTS AGE non-privileged 1/1 Running 0 6s
$ kubectl exec non-privileged -it -- /bin/sh / # sysctl kernel.hostname=test sysctl: error setting key 'kernel.hostname': Read-only file system / # exit
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      privileged: true
$ kubectl apply -f privileged.yaml pod/privileged created $ kubectl get pod privileged NAME READY STATUS RESTARTS AGE privileged 1/1 Running 0 6s
$ kubectl exec privileged -it -- /bin/sh / # sysctl kernel.hostname=test kernel.hostname = test / # exit
metadata:
  labels:
    pod-security.kubernetes.io/enforce: restricted
| Mode | Behavior |
|---|---|
| enforce | Violations will cause the Pod to be rejected. |
| audit | Pod creation will be allowed. Violations will be appended to the audit log. |
| warn | Pod creation will be allowed. Violations will be rendered on the console. |
| Level | Behavior |
|---|---|
| privileged | Fully unrestricted policy. |
| baseline | Minimally restrictive policy that covers crucial standards. |
| restricted | Heavily restricted policy following best practices for hardening Pods from a security perspective. |
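Mode and level are combined in a single label of the form pod-security.kubernetes.io/<mode>: <level>, and several modes can be mixed on the same namespace. For example (a sketch using the label keys documented upstream):

metadata:
  labels:
    # Reject Pods that violate the baseline level ...
    pod-security.kubernetes.io/enforce: baseline
    # ... and additionally warn about violations of the stricter
    # restricted level without blocking them.
    pod-security.kubernetes.io/warn: restricted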
apiVersion: v1
kind: Namespace
metadata:
  name: psa
  labels:
    pod-security.kubernetes.io/enforce: restricted
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
$ kubectl create -f psa-namespace.yaml namespace/psa created $ kubectl apply -f psa-violating-pod.yaml Error from server (Forbidden): error when creating "psa-violating-pod.yaml": \ pods "busybox" is forbidden: violates PodSecurity "restricted:latest": \ allowPrivilegeEscalation != false (container "busybox" must set \ securityContext.allowPrivilegeEscalation=false), unrestricted \ capabilities (container "busybox" must set securityContext. \ capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container \ "busybox" must set securityContext.runAsNonRoot=true), seccompProfile \ (pod or container "busybox" must set securityContext.seccompProfile. \ type to "RuntimeDefault" or "Localhost") $ kubectl get pod -n psa No resources found in psa namespace.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
      runAsUser: 2000
      runAsGroup: 3000
      seccompProfile:
        type: RuntimeDefault
$ kubectl apply -f psa-non-violating-pod.yaml pod/busybox created $ kubectl get pod busybox -n psa NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 10s
$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/\ gatekeeper/master/deploy/gatekeeper.yaml
$ kubectl get namespaces NAME STATUS AGE default Active 29h gatekeeper-system Active 4s ...
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }

1. Declares the kind to be used by the constraint.
2. Specifies the validation schema of the constraint. In this case, we allow passing in a property named labels that captures the required label keys.
3. Uses Rego to check for the existence of labels and compares them to the list of required keys.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-app-label-key
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["app"]

1. Uses the kind defined by the constraint template.
2. Defines the API resources the constraint template should apply to.
3. Declares that the labels property expects the key app to exist.
$ kubectl apply -f constraint-template-labels.yaml constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created $ kubectl apply -f constraint-ns-labels.yaml k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-app-label-key created
$ kubectl create ns governed-ns
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" \
denied the request: [ns-must-have-app-label-key] you must provide labels: {"app"}
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: orion
  name: governed-ns
$ kubectl apply -f namespace-app-label.yaml namespace/governed-ns created
$ kubectl create secret generic app-config --from-literal=password=passwd123 secret/app-config created
$ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/\
etcd/server.key get /registry/secrets/default/app-config | hexdump -C
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 61 70 70 2d 63 6f |s/default/app-co|
00000020 6e 66 69 67 0a 6b 38 73 00 0a 0c 0a 02 76 31 12 |nfig.k8s.....v1.|
00000030 06 53 65 63 72 65 74 12 d9 01 0a b7 01 0a 0a 61 |.Secret........a|
00000040 70 70 2d 63 6f 6e 66 69 67 12 00 1a 07 64 65 66 |pp-config....def|
00000050 61 75 6c 74 22 00 2a 24 36 38 64 65 65 34 34 38 |ault".*$68dee448|
00000060 2d 34 39 62 37 2d 34 34 32 66 2d 39 62 32 66 2d |-49b7-442f-9b2f-|
00000070 33 66 39 62 39 62 32 61 66 66 36 64 32 00 38 00 |3f9b9b2aff6d2.8.|
00000080 42 08 08 97 f8 a4 9b 06 10 00 7a 00 8a 01 65 0a |B.........z...e.|
00000090 0e 6b 75 62 65 63 74 6c 2d 63 72 65 61 74 65 12 |.kubectl-create.|
000000a0 06 55 70 64 61 74 65 1a 02 76 31 22 08 08 97 f8 |.Update..v1"....|
000000b0 a4 9b 06 10 00 32 08 46 69 65 6c 64 73 56 31 3a |.....2.FieldsV1:|
000000c0 31 0a 2f 7b 22 66 3a 64 61 74 61 22 3a 7b 22 2e |1./{"f:data":{".|
000000d0 22 3a 7b 7d 2c 22 66 3a 70 61 73 73 77 6f 72 64 |":{},"f:password|
000000e0 22 3a 7b 7d 7d 2c 22 66 3a 74 79 70 65 22 3a 7b |":{}},"f:type":{|
000000f0 7d 7d 42 00 12 15 0a 08 70 61 73 73 77 6f 72 64 |}}B.....password|
00000100 12 09 70 61 73 73 77 64 31 32 33 1a 06 4f 70 61 |..passwd123..Opa|
00000110 71 75 65 1a 00 22 00 0a |que.."..|
$ head -c 32 /dev/urandom | base64 W68xlPT/VXcOSEZJvWeIvkGJnGfQNFpvZYfT9e+ZYuY=
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: W68xlPT/VXcOSEZJvWeIvkGJnGfQNFpvZYfT9e+ZYuY=
      - identity: {}

1. Defines the API resource to be encrypted in etcd. We are only encrypting Secrets data here.
2. The base64-encoded key assigned to an AES-CBC encryption provider.
$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: \
      192.168.56.10:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
    volumeMounts:
    ...
    - name: enc
      mountPath: /etc/kubernetes/enc
      readOnly: true
  volumes:
  ...
  - name: enc
    hostPath:
      path: /etc/kubernetes/enc
      type: DirectoryOrCreate
...
$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE ... kube-apiserver-control-plane 1/1 Running 0 69s
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f - ... secret/app-config replaced
$ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/\
etcd/server.key get /registry/secrets/default/app-config | hexdump -C
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 61 70 70 2d 63 6f |s/default/app-co|
00000020 6e 66 69 67 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 |nfig.k8s:enc:aes|
00000030 63 62 63 3a 76 31 3a 6b 65 79 31 3a ae 26 e9 c2 |cbc:v1:key1:.&..|
00000040 7b fd a2 74 30 24 85 61 3c 18 1e 56 00 a1 24 65 |{..t0$.a<..V..$e|
00000050 52 3c 3f f1 24 43 9f 6d de 5f b0 84 32 18 84 47 |R<?.$C.m._..2..G|
00000060 d5 30 e9 64 84 22 f5 d0 0b 6f 02 af db 1d 51 34 |.0.d."...o....Q4|
00000070 db 57 c8 17 93 ed 9e 00 ea 9a 7b ec 0e 75 0c 49 |.W........{..u.I|
00000080 6a e9 97 cd 54 d4 ae 6b b6 cb 65 8a 5d 4c 3c 9c |j...T..k..e.]L<.|
00000090 db 9b ed bc ce bf 3c ef f6 2e cb 6d a2 53 25 49 |......<....m.S%I|
000000a0 d4 26 c5 4c 18 f3 65 bb a8 4c 0f 8d 6e be 7b d3 |.&.L..e..L..n.{.|
000000b0 24 9b a8 09 9c bb a3 f9 53 49 78 86 f5 24 e7 10 |$.......SIx..$..|
000000c0 ad 05 45 b8 cb 31 bd 38 b6 5c 00 02 b2 a4 62 13 |..E..1.8.\....b.|
000000d0 d5 82 6b 73 79 97 7e fa 2f 5d 3b 91 a0 21 50 9d |..ksy.~./];..!P.|
000000e0 77 1a 32 44 e1 93 9b 9c be bf 49 d2 f9 dc 56 23 |w.2D......I...V#|
000000f0 07 a8 ca a5 e3 e7 d1 ae 9c 22 1f 98 b1 63 b8 73 |........."...c.s|
00000100 66 3f 9f a5 6a 45 60 a7 81 eb 32 e5 42 4d 2b fd |f?..jE`...2.BM+.|
00000110 65 6c c2 c7 74 9f 1d 6a 1c 24 32 0e 7a 94 a2 60 |el..t..j.$2.z..`|
00000120 22 77 58 c9 69 c3 55 72 e8 fb 0b 63 9d 7f 04 31 |"wX.i.Ur...c...1|
00000130 00 a2 07 76 af 95 4e 03 0a 92 10 b8 bb 1e 89 94 |...v..N.........|
00000140 45 60 01 45 bf d7 95 df ff 2e 9e 31 0a |E`.E.......1.|
0000014d
$ sudo apt-get update && \
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg
$ curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/\ keyrings/gvisor-archive-keyring.gpg $ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/\ gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases \ release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null
$ sudo apt-get update && sudo apt-get install -y runsc
$ cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
$ sudo systemctl restart containerd
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx:1.23.2
$ kubectl apply -f runtimeclass.yaml runtimeclass.node.k8s.io/gvisor created $ kubectl apply -f pod.yaml pod/nginx created
$ kubectl exec nginx -- dmesg [ 0.000000] Starting gVisor... [ 0.123202] Preparing for the zombie uprising... [ 0.415862] Rewriting operating system in Javascript... [ 0.593368] Reading process obituaries... [ 0.741642] Segmenting fault lines... [ 0.797360] Daemonizing children... [ 0.831010] Creating bureaucratic processes... [ 1.313731] Searching for needles in stacks... [ 1.455084] Constructing home... [ 1.834278] Gathering forks... [ 1.928142] Mounting deweydecimalfs... [ 2.109973] Setting up VFS... [ 2.157224] Ready!
In the course of this chapter, we looked at OS-level security settings and how to govern them with different core features and external tooling. You need to understand the different options, their benefits and limitations, and be able to apply them to implement contextual requirements. Practice the use of security contexts, Pod Security Admission, and Open Policy Agent Gatekeeper. The Kubernetes ecosystem offers more tooling in this space. Feel free to explore those on your own to expand your horizon.
The CKA exam already covers the workflow of creating and using Secrets to inject sensitive configuration data into Pods. I am assuming that you already know how to do this. Every Secret key-value pair is stored in etcd. Expand your knowledge of Secret management by learning how to encrypt etcd so that an attacker with access to a host running etcd isn’t able to read information in plain text.
Container runtime sandboxes help with adding stricter isolation to containers. You will not be expected to install a container runtime sandbox, such as Kata Containers or gVisor. You do need to understand the process for configuring a container runtime sandbox with the help of a RuntimeClass object and how to assign the RuntimeClass to a Pod by name.
Setting up mTLS for all microservices running in Pods can be extremely tedious due to certificate management. For the exam, understand the general use case for wanting to set up mTLS for Pod-to-Pod communication. You are likely not expected to actually implement it manually, though. Production Kubernetes clusters use service meshes to provide mTLS as a feature.
$ docker pull alpine:3.17.0 ... $ docker image ls alpine REPOSITORY TAG IMAGE ID CREATED SIZE alpine 3.17.0 49176f190c7e 3 weeks ago 7.05MB
$ docker run -it alpine:3.17.0 /bin/sh / # exit
$ docker pull gcr.io/distroless/static-debian11 ... $ docker image ls gcr.io/distroless/static-debian11:latest REPOSITORY TAG IMAGE ID CREATED \ SIZE gcr.io/distroless/static-debian11 latest 901590160d4d 53 years ago \ 2.34MB
$ docker run -it gcr.io/distroless/static-debian11:latest /bin/sh docker: Error response from daemon: failed to create shim task: OCI runtime \ create failed: runc create failed: unable to start container process: exec: \ "/bin/sh": stat /bin/sh: no such file or directory: unknown.
FROM golang:1.19.4-alpine
WORKDIR /app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go test -v
RUN go build -o /go-sample-app .
CMD ["/go-sample-app"]

1. Uses a Go base image
2. Executes the tests against the application code
3. Builds the binary of the Go application
$ docker build . -t go-sample-app:0.0.1 ...
$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE go-sample-app 0.0.1 88175f3ab0d3 44 seconds ago 358MB
FROM golang:1.19.4-alpine AS build
RUN apk add --no-cache git
WORKDIR /tmp/go-sample-app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go test -v
RUN go build -o ./out/go-sample-app .

FROM alpine:3.17.0
RUN apk add ca-certificates
COPY --from=build /tmp/go-sample-app/out/go-sample-app /app/go-sample-app
CMD ["/app/go-sample-app"]

1. Uses a Go base image for building and testing the program in the stage named build.
2. Executes the tests against the application code.
3. Builds the binary of the Go application.
4. Uses a much smaller base image for running the application in a container.
5. Copies the application binary produced in the build stage and uses it as the command to run when the container is started.
$ docker build . -t go-sample-app:0.0.1 ... $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE go-sample-app 0.0.1 88175f3ab0d3 44 seconds ago 12MB
FROM ubuntu:22.10
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install -y curl
FROM ubuntu:22.10
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y curl
DockerSlim will optimize and secure your container image by analyzing your application and its dependencies. You can find more information in the tool’s GitHub repository.
Dive is a tool for exploring the layers baked into a container image. It makes it easy to identify unnecessary layers, which you can further optimize on. The code and documentation for Dive are available in a GitHub repository.
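Both tools operate on a locally built image. A hedged usage sketch using the image produced earlier (command and flag names may differ between tool versions, so check each project's documentation):

$ dive go-sample-app:0.0.1
$ docker-slim build --target go-sample-app:0.0.1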
apiVersion: v1
kind: Pod
metadata:
  name: alpine-valid
spec:
  containers:
  - name: alpine
    image: alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b40 \
      97540327217dfab52262adc02380c
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
$ kubectl apply -f pod-valid-image-digest.yaml pod/alpine-valid created $ kubectl get pod alpine-valid NAME READY STATUS RESTARTS AGE alpine-valid 1/1 Running 0 6s
apiVersion: v1
kind: Pod
metadata:
  name: alpine-invalid
spec:
  containers:
  - name: alpine
    image: alpine@sha256:d006a643bccb6e9adbabaae668533c7f2e5 \
      111572fffb5c61cb7fcba7ef4150b
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
$ kubectl get pods NAME READY STATUS RESTARTS AGE alpine-invalid 0/1 ErrImagePull 0 29s $ kubectl describe pod alpine-invalid ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 13s default-scheduler Successfully assigned default \ /alpine-invalid to minikube Normal Pulling 13s kubelet Pulling image "alpine@sha256: \ d006a643bccb6e9adbabaae668533c7f2e5111572fffb5c61cb7fcba7ef4150b" Warning Failed 11s kubelet Failed to pull image \ "alpine@sha256:d006a643bccb6e9adbabaae668533c7f2e5111572fffb5c61cb7fcba7ef4 \ 150b": rpc error: code = Unknown desc = Error response from daemon: manifest \ for alpine@sha256:d006a643bccb6e9adbabaae668533c7f2e5111572fffb5c61cb7fcba7e \ f4150b not found: manifest unknown: manifest unknown Warning Failed 11s kubelet Error: ErrImagePull Normal BackOff 11s kubelet Back-off pulling image \ "alpine@sha256:d006a643bccb6e9adbabaae668533c7f2e5111572fffb5c61cb7fcba7ef415 \ 0b" Warning Failed 11s kubelet Error: ImagePullBackOff
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
  annotations:
    metadata.gatekeeper.sh/title: "Allowed Repositories"
    metadata.gatekeeper.sh/version: 1.0.0
    description: >-
      Requires container images to begin with a string from the
      specified list.
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          type: object
          properties:
            repos:
              description: The list of prefixes a container image is
                allowed to have.
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          satisfied := [good | repo = input.parameters.repos[_] ; \
            good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("container <%v> has an invalid image repo <%v>, \
            allowed repos are %v", [container.name, container.image, \
            input.parameters.repos])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.initContainers[_]
          satisfied := [good | repo = input.parameters.repos[_] ; \
            good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("initContainer <%v> has an invalid image repo <%v>, \
            allowed repos are %v", [container.name, container.image, \
            input.parameters.repos])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.ephemeralContainers[_]
          satisfied := [good | repo = input.parameters.repos[_] ; \
            good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("ephemeralContainer <%v> has an invalid image repo \
            <%v>, allowed repos are %v", [container.name, container.image, \
            input.parameters.repos])
        }
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: repo-is-gcr
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "gcr.io/"
$ kubectl apply -f allowed-repos-constraint-template.yaml constrainttemplate.templates.gatekeeper.sh/k8sallowedrepos created $ kubectl apply -f gcr-allowed-repos-constraint.yaml k8sallowedrepos.constraints.gatekeeper.sh/repo-is-gcr created
$ kubectl run nginx --image=nginx:1.23.3 Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" \ denied the request: [repo-is-gcr] container <nginx> has an invalid image \ repo <nginx:1.23.3>, allowed repos are ["gcr.io/"]
$ kubectl run busybox --image=gcr.io/google-containers/busybox:1.27.2 pod/busybox created $ kubectl get pods NAME READY STATUS RESTARTS AGE busybox 0/1 Completed 1 (2s ago) 3s
$ curl -X POST -H "Content-Type: application/json" -k -d '{"apiVersion": \
"imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "spec": \
{"containers": [{"image": "nginx:1.19.0"}]}}' https://localhost:8080/validate
{"apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", \
"status": {"allowed": false, "reason": "Denied request: [container 1 \
has an invalid image repo nginx:1.19.0, allowed repos are [gcr.io/]]"}}
$ curl -X POST -H "Content-Type: application/json" -k -d '{"apiVersion": \
"imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "spec": {"containers": \
[{"image": "gcr.io/nginx:1.19.0"}]}}' https://localhost:8080/validate
{"apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", \
"status": {"allowed": true, "reason": ""}}
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/admission-control/\
          imagepolicywebhook.kubeconfig
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

1. Provides the configuration for the ImagePolicyWebhook plugin.
2. Points to the configuration file used to configure the backend.
3. Denies an API request if the backend cannot be reached. The default is true but setting it to false is far more sensible.
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: image-validation-webhook
  cluster:
    certificate-authority: /etc/kubernetes/admission-control/ca.crt
    server: https://image-validation-webhook:8080/validate
contexts:
- context:
    cluster: image-validation-webhook
    user: api-server-client
  name: image-validation-webhook
current-context: image-validation-webhook
users:
- name: api-server-client
  user:
    client-certificate: /etc/kubernetes/admission-control/api-server-client.crt
    client-key: /etc/kubernetes/admission-control/api-server-client.key
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/admission-control/image-policy-webhook-admission-configuration.yaml
    ...
    volumeMounts:
    ...
    - name: admission-control
      mountPath: /etc/kubernetes/admission-control
      readOnly: true
  volumes:
  ...
  - name: admission-control
    hostPath:
      path: /etc/kubernetes/admission-control
      type: DirectoryOrCreate
...
$ kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
...
kube-apiserver-control-plane   1/1     Running   0          69s
FROM golang
COPY main.go .
RUN go build main.go
CMD ["./main"]
$ hadolint Dockerfile
Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly
Dockerfile:2 DL3045 warning: `COPY` to a relative destination without \
`WORKDIR` set.
FROM golang:1.19.4-alpine
WORKDIR /app
COPY main.go .
RUN go build main.go
CMD ["./main"]
$ hadolint Dockerfile
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
$ docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin \
< pod-initial-kubesec-test.yaml
[
{
"object": "Pod/kubesec-demo.default",
"valid": true,
"message": "Passed with a score of 1 points",
"score": 1,
"scoring": {
"advise": [
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and \
should be configured with least privilege"
},
{
"selector": ".metadata .annotations .\"container.apparmor.security. \
beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater \
protection from unknown threats. WARNING: NOT PRODUCTION \
READY"
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing of \
resources across the cluster"
},
{
"selector": ".metadata .annotations .\"container.seccomp.security. \
alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against \
unknown threats"
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource \
exhaustion"
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource exhaustion"
},
{
"selector": "containers[] .securityContext .runAsNonRoot == true",
"reason": "Force the running image to run as a non-root user to \
ensure least privilege"
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing of \
resources across the cluster"
},
{
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container \
limits its attack surface"
},
{
"selector": "containers[] .securityContext .runAsUser -gt 10000",
"reason": "Run as a high-UID user to avoid conflicts with the \
host's user table"
},
{
"selector": "containers[] .securityContext .capabilities .drop | \
index(\"ALL\")",
"reason": "Drop all capabilities and add only those required to \
reduce syscall attack surface"
}
]
}
}
]
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 20000
      capabilities:
        drop: ["ALL"]
$ docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin \
< pod-improved-kubesec-test.yaml
[
{
"object": "Pod/kubesec-demo.default",
"valid": true,
"message": "Passed with a score of 9 points",
"score": 9,
"scoring": {
"advise": [
{
"selector": ".metadata .annotations .\"container.seccomp.security. \
alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against \
unknown threats"
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and should \
be configured with least privilege"
},
{
"selector": ".metadata .annotations .\"container.apparmor.security. \
beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater \
protection from unknown threats. WARNING: NOT PRODUCTION \
READY"
}
]
}
}
]
$ trivy -v
Version: 0.36.1
Vulnerability DB:
  Version: 2
  UpdatedAt: 2022-12-13 12:07:14.884952254 +0000 UTC
  NextUpdate: 2022-12-13 18:07:14.884951854 +0000 UTC
  DownloadedAt: 2022-12-13 17:09:28.866739 +0000 UTC
In this section, we described some techniques for reducing the size of a container image during the build process. I would suggest you read the best practices published on the Docker web page and apply them to sample container images. Compare the size of the produced container image before and after applying each technique. Challenge yourself by reducing a container image to the smallest size possible while avoiding the loss of crucial functionality.
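One of the most effective of those techniques is a multi-stage build, which compiles the application with the full toolchain but ships only the resulting artifact. The following Dockerfile is a minimal sketch based on the Go example shown earlier; the stage name builder and the alpine base tag are assumptions, not part of the original exercise.

# Build stage: compile the binary with the full Go toolchain
FROM golang:1.19.4-alpine AS builder
WORKDIR /app
COPY main.go .
RUN go build -o main main.go

# Final stage: copy only the compiled binary onto a minimal base image
FROM alpine:3.17.1
WORKDIR /app
COPY --from=builder /app/main .
CMD ["./main"]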
OPA Gatekeeper offers a way to define the registries users are allowed to resolve container images from. Set up the objects for the constraint template and constraint, and verify that the rules apply properly for a Pod that defines a main application container, an init container, and an ephemeral container, as sketched below. To broaden your exposure, also look at other products in the Kubernetes space that provide similar functionality, such as Kyverno.
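A test Pod along the following lines exercises the constraint for the main application container and the init container; the object names are illustrative. Ephemeral containers cannot be declared at creation time, so add one afterward with kubectl debug (for example, kubectl debug -it repo-test --image=gcr.io/google-containers/busybox:1.27.2) and check whether the constraint is evaluated for it as well.

apiVersion: v1
kind: Pod
metadata:
  name: repo-test
spec:
  initContainers:
  # Both images must match the allowed "gcr.io/" prefix
  - name: init
    image: gcr.io/google-containers/busybox:1.27.2
    command: ["sh", "-c", "echo init"]
  containers:
  - name: main
    image: gcr.io/google-containers/busybox:1.27.2
    command: ["sh", "-c", "sleep 1h"]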
After building a container image, make sure to also create a digest for it. Publish the container image, as well as the digest, to a registry. Practice how to use the digest in a Pod definition and verify the behavior of Kubernetes upon pulling the container image.
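With Docker, for example, the digest becomes available once the image has been pushed to a registry and can be rendered with the following command; the image name is arbitrary. The digest can then be referenced in a Pod definition in the form <image>@sha256:<digest>.

$ docker images --digests nginx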
You are not expected to write a backend for an ImagePolicyWebhook. That’s out of scope for the exam and requires knowledge of a programming language. You do need to understand how to enable the plugin in the API server configuration, though. I would suggest you practice the workflow even if you don’t have a running backend application available.
The CKS curriculum doesn’t prescribe a specific tool for analyzing Dockerfiles and Kubernetes manifests. During the exam, you may be asked to run a specific command that will produce a list of error and/or warning messages. Understand how to interpret the messages, and how to fix them in the relevant resource files.
The FAQ of the CKS lists the documentation page for Trivy. Therefore, it's fair to assume that Trivy may come up in one of the questions. You will need to understand the different ways to invoke Trivy to scan a container image. The produced report will give you a clear indication of what needs to be fixed and the severity of each vulnerability found. Given that you can't easily modify the container image, you will likely be asked to flag Pods that run container images with known vulnerabilities.
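For example, you can narrow down a scan to the severities you care about; the image name here is arbitrary:

$ trivy image --severity HIGH,CRITICAL nginx:1.23.3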
FROM node:latest
ENV NODE_ENV development
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "app.js"]
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  securityContext:
    runAsUser: 0
  containers:
  - name: linux
    image: hello-world:linux
$ curl -s https://falco.org/repo/falcosecurity-packages.asc | apt-key add -
$ echo "deb https://download.falco.org/packages/deb stable main" | tee -a \
/etc/apt/sources.list.d/falcosecurity.list
$ apt-get update -y
$ apt-get -y install linux-headers-$(uname -r)
$ apt-get install -y falco=0.33.1
$ sudo systemctl status falco
● falco.service - Falco: Container Native Runtime Security
Loaded: loaded (/lib/systemd/system/falco.service; enabled; vendor preset: \
enabled)
Active: active (running) since Tue 2023-01-24 15:42:31 UTC; 43min ago
Docs: https://falco.org/docs/
Main PID: 8718 (falco)
Tasks: 12 (limit: 1131)
Memory: 30.2M
CGroup: /system.slice/falco.service
└─8718 /usr/bin/falco --pidfile=/var/run/falco.pid
$ tree /etc/falco
/etc/falco
├── aws_cloudtrail_rules.yaml
├── falco.yaml
├── falco_rules.local.yaml
├── falco_rules.yaml
├── k8s_audit_rules.yaml
├── rules.available
│   └── application_rules.yaml
└── rules.d
$ sudo systemctl restart falco
$ kubectl run nginx --image=nginx:1.23.3
pod/nginx created
$ kubectl exec -it nginx -- bash
root@nginx:/# exit
$ kubectl get pod nginx -o jsonpath='{.spec.nodeName}'
kube-worker-1
$ sudo journalctl -fu falco
...
Jan 24 18:03:37 kube-worker-1 falco[8718]: 18:03:14.632368639: Notice A shell \
was spawned in a container with an attached terminal (user=root user_loginuid=0 \
nginx (id=18b247adb3ca) shell=bash parent=runc cmdline=bash pid=47773 \
terminal=34816 container_id=18b247adb3ca image=docker.io/library/nginx)
root@nginx:/# apt update
root@nginx:/# apt install git
$ sudo journalctl -fu falco
Jan 24 18:55:48 ubuntu-focal falco[8718]: 18:55:05.173895727: Error Package \
management process launched in container (user=root user_loginuid=0 \
command=apt update pid=60538 container_id=18b247adb3ca container_name=nginx \
image=docker.io/library/nginx:1.23.3)
Jan 24 18:55:48 ubuntu-focal falco[8718]: 18:55:11.050925982: Error Package \
management process launched in container (user=root user_loginuid=0 \
command=apt install git-all pid=60823 container_id=18b247adb3ca \
container_name=nginx image=docker.io/library/nginx:1.23.3)
...
- rule: access_camera
  desc: a process other than skype/webex tries to access the camera
  condition: evt.type = open and fd.name = /dev/video0 and not proc.name in (skype, webex)
  output: Unexpected process opening camera video device (command=%proc.cmdline)
  priority: WARNING
- macro: camera_process_access
  condition: evt.type = open and fd.name = /dev/video0 and not proc.name in (skype, webex)
- rule: access_camera
  desc: a process other than skype/webex tries to access the camera
  condition: camera_process_access
  output: Unexpected process opening camera video device (command=%proc.cmdline)
  priority: WARNING
- list: video_conferencing_software
  items: [skype, webex]
- macro: camera_process_access
  condition: evt.type = open and fd.name = /dev/video0 and not proc.name in (video_conferencing_software)
- macro: spawned_process
  condition: (evt.type in (execve, execveat) and evt.dir=<)

- macro: container
  condition: (container.id != host)

- macro: container_entrypoint
  condition: (not proc.pname exists or proc.pname in (runc:[0:PARENT],
    runc:[1:CHILD], runc, docker-runc, exe, docker-runc-cur))

- macro: user_expected_terminal_shell_in_container_conditions
  condition: (never_true)

- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an
    attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    A shell was spawned in a container with an attached terminal (user=%user.name
    user_loginuid=%user.loginuid %container.info
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline pid=%proc.pid
    terminal=%proc.tty container_id=%container.id image=%container.image.repository)
  priority: NOTICE
  tags: [container, shell, mitre_execution]

Defines a macro, a reusable condition that can be referenced by name across multiple rules.

Specifies the name of the rule.

The aggregated condition composed of multiple macros.

The alerting message to be rendered should the event happen. The message may use built-in fields to reference runtime values.

Indicates how serious a violation of the rule is.

Categorizes the rule set into groups of related rules for ease of management.
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an
    attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    Opened shell: %evt.time,%user.name,%container.name
  priority: ALERT
  tags: [container, shell, mitre_execution]

Simplifies the log output rendered for a violation.

Treats a violation of the rule with ALERT priority.
$ sudo systemctl restart falco
$ sudo journalctl -fu falco
...
Jan 24 21:19:13 kube-worker-1 falco[100017]: 21:19:13.961970887: Alert Opened \
shell: 21:19:13.961970887,<NA>,nginx
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21.6
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: nginx-run
      mountPath: /var/run
    - name: nginx-cache
      mountPath: /var/cache/nginx
    - name: nginx-data
      mountPath: /usr/local/nginx
  volumes:
  - name: nginx-run
    emptyDir: {}
  - name: nginx-data
    emptyDir: {}
  - name: nginx-cache
    emptyDir: {}
| Level | Effect |
|---|---|
| None | Do not log events matching this rule. |
| Metadata | Only log request metadata for the event. |
| Request | Log metadata and the request body for the event. |
| RequestResponse | Log metadata, request, and response body for the event. |
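For reference, a minimal policy sketch combining two of these levels could look as follows; the resource selection is illustrative, and a fuller example appears in the sample exercises later in this book.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log Pod requests at the highest level of detail
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]
# Log everything else at the metadata level only
- level: Metadata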
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    ...
    volumeMounts:
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/
      name: audit-log
      readOnly: false
    ...
  volumes:
  - name: audit
    hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit/
      type: DirectoryOrCreate

Provides the location of the policy file and log file to the API server process.

Mounts the policy file and the audit log directory to the given paths.

Defines the Volumes for the policy file and the audit log directory.
$ kubectl run nginx --image=nginx:1.21.6
pod/nginx created
$ sudo grep 'audit.k8s.io/v1' /var/log/kubernetes/audit/audit.log
...
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse", \
"auditID":"285f4b99-951e-405b-b5de-6b66295074f4","stage":"ResponseComplete", \
"requestURI":"/api/v1/namespaces/default/pods/nginx","verb":"get", \
"user":{"username":"system:node:node01","groups":["system:nodes", \
"system:authenticated"]},"sourceIPs":["172.28.116.6"],"userAgent": \
"kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8","objectRef": \
{"resource":"pods","namespace":"default","name":"nginx","apiVersion":"v1"}, \
"responseStatus":{"metadata":{},"code":200},"responseObject":{"kind":"Pod", \
"apiVersion":"v1","metadata":{"name":"nginx","namespace":"default", \
...
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID": \
"5c8e5ecc-0ce0-49e0-8ab2-368284f2f785","stage":"ResponseComplete", \
"requestURI":"/api/v1/namespaces/default/pods/nginx/status","verb":"patch", \
"user":{"username":"system:node:node01","groups":["system:nodes", \
"system:authenticated"]},"sourceIPs":["172.28.116.6"],"userAgent": \
"kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8","objectRef": \
{"resource":"pods","namespace":"default","name":"nginx","apiVersion":"v1", \
"subresource":"status"},"responseStatus":{"metadata":{},"code":200}, \
...
Falco is definitely going to come up as a topic during the exam. You will need to understand how to read and modify a rule in a configuration file. I would suggest you browse through the syntax and options in more detail in case you need to write a rule yourself. The main entry point for running Falco is the command-line tool. It's fair to assume that it will have been preinstalled in the exam environment.
Immutable containers are a central topic of this exam domain. Understand how to set the spec.containers[].securityContext.readOnlyRootFilesystem attribute for a Pod and how to mount a Volume to a specific path in case the container process requires write access.
Setting up audit logging consists of two steps. For one, you need to understand the syntax and structure of an audit policy file. The other aspect is how to configure the API server to consume the audit policy file, provide a reference to a backend, and mount the relevant filesystem Volumes. Make sure to practice all of those aspects hands-on.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-external
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
$ kubectl apply -f deny-egress-external.yaml
$ kubectl run web --image=busybox:1.36.0 -l app=frontend --port=80 -it \
--rm --restart=Never -- wget http://google.com --timeout=5 --tries=1
Connecting to google.com (142.250.69.238:80)
Connecting to www.google.com (142.250.72.4:80)
saving to 'index.html'
index.html           100% |**|  13987  0:00:00 ETA
'index.html' saved
pod "web" deleted
$ kubectl run web --image=busybox:1.36.0 -l app=backend --port=80 -it \
--rm --restart=Never -- wget http://google.com --timeout=5 --tries=1
wget: download timed out
pod "web" deleted
pod default/web terminated (Error)
$ kubectl get ns kubernetes-dashboard
NAME                   STATUS   AGE
kubernetes-dashboard   Active   109s
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/\
dashboard/v2.6.0/aio/deploy/recommended.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: observer-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: cluster-observer
rules:
- apiGroups:
  - 'apps'
  resources:
  - 'deployments'
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: observer-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-observer
subjects:
- kind: ServiceAccount
  name: observer-user
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-observer-user.yaml
$ kubectl create token observer-user -n kubernetes-dashboard \
--duration 0s
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5lNFMxZ1...
$ kubectl proxy
$ curl -LO "https://dl.k8s.io/v1.26.1/bin/linux/amd64/kube-apiserver"
$ curl -LO "https://dl.k8s.io/v1.23.1/bin/linux/amd64/\ kube-apiserver.sha256"
$ echo "$(cat kube-apiserver.sha256) kube-apiserver" | shasum -a 256 \ --check kube-apiserver: FAILED shasum: WARNING: 1 computed checksum did NOT match
$ openssl genrsa -out jill.key 2048
$ openssl req -new -key jill.key -out jill.csr -subj \
"/CN=jill/O=observer"
$ cat jill.csr | base64 | tr -d "\n"
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tL...
$ cat <<EOF | kubectl apply -f - apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: name: jill spec: request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tL... signerName: kubernetes.io/kube-apiserver-client expirationSeconds: 86400 usages: - client auth EOF
$ kubectl certificate approve jill
$ kubectl get csr jill -o jsonpath={.status.certificate}| base64 \
-d > jill.crt
$ kubectl config set-credentials jill --client-key=jill.key \
--client-certificate=jill.crt --embed-certs=true
$ kubectl config set-context jill --cluster=minikube --user=jill
$ kubectl create role observer --verb=create --verb=get --verb=list \
--verb=watch --resource=pods --resource=configmaps --resource=secrets
$ kubectl create rolebinding observer-binding --role=observer \
--group=observer
$ kubectl config use-context jill
$ kubectl get configmaps
NAME               DATA   AGE
kube-root-ca.crt   1      16m
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "jill" cannot \
list resource "nodes" in API group "" at the cluster scope
$ kubectl config use-context minikube
$ kubectl create namespace t23
$ kubectl create serviceaccount api-call -n t23
apiVersion: v1
kind: Pod
metadata:
  name: service-list
  namespace: t23
spec:
  serviceAccountName: api-call
  containers:
  - name: service-list
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5
      -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
      https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/services;
      sleep 10; done']
$ kubectl apply -f pod.yaml
$ kubectl logs service-list -n t23
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "services is forbidden: User \"system:serviceaccount:t23 \
:api-call\" cannot list resource \"services\" in API \
group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "services"
},
"code": 403
}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-services-clusterrole
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list"]
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccount-service-rolebinding
subjects:
- kind: ServiceAccount
  name: api-call
  namespace: t23
roleRef:
  kind: ClusterRole
  name: list-services-clusterrole
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f clusterrole.yaml
$ kubectl apply -f rolebinding.yaml
$ kubectl logs service-list -n t23
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1108"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
"uid": "30eb5425-8f60-4bb7-8331-f91fe0999e20",
"resourceVersion": "199",
"creationTimestamp": "2022-09-08T18:06:52Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
},
...
}
]
}
$ kubectl create token api-call -n t23
eyJhbGciOiJSUzI1NiIsImtpZCI6IjBtQkJzVWlsQjl...
apiVersion: v1
kind: Pod
metadata:
  name: service-list
  namespace: t23
spec:
  serviceAccountName: api-call
  automountServiceAccountToken: false
  containers:
  - name: service-list
    image: alpine/curl:3.14
    command: ['sh', '-c', 'while true; do curl -s -k -m 5
      -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjBtQkJzVWlsQjl"
      https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/services;
      sleep 10; done']
$ kubectl logs service-list -n t23
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "81194"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
"uid": "30eb5425-8f60-4bb7-8331-f91fe0999e20",
"resourceVersion": "199",
"creationTimestamp": "2022-09-08T18:06:52Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
},
...
}
]
}
$ vagrant ssh kube-control-plane
$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get \
install -y kubeadm=1.26.1-00 && sudo apt-mark hold kubeadm
$ sudo kubeadm upgrade apply v1.26.1
$ kubectl drain kube-control-plane --ignore-daemonsets
$ sudo apt-get update && sudo apt-get install -y \
--allow-change-held-packages kubelet=1.26.1-00 kubectl=1.26.1-00
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon kube-control-plane
$ kubectl get nodes
$ exit
$ vagrant ssh kube-worker-1
$ sudo apt-get update && sudo apt-get install -y \
--allow-change-held-packages kubeadm=1.26.1-00
$ sudo kubeadm upgrade node
$ kubectl drain kube-worker-1 --ignore-daemonsets
$ sudo apt-get update && sudo apt-get install -y \
--allow-change-held-packages kubelet=1.26.1-00 kubectl=1.26.1-00
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon kube-worker-1
$ kubectl get nodes
$ exit
$ vagrant ssh kube-worker-1
$ sudo lsof -i :21
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
vsftpd  10178 root    3u  IPv6  56850      0t0  TCP *:ftp (LISTEN)
$ sudo ss -at -pn '( dport = :21 or sport = :21 )'
State Recv-Q Send-Q Local Address:Port \
Peer Address:Port Process
LISTEN 0 32 *:21 \
*:* users:(("vsftpd",pid=10178,fd=3))
$ sudo systemctl status vsftpd
● vsftpd.service - vsftpd FTP server
Loaded: loaded (/lib/systemd/system/vsftpd.service; enabled; \
vendor preset: enabled)
Active: active (running) since Thu 2022-10-06 14:39:12 UTC; \
11min ago
Main PID: 10178 (vsftpd)
Tasks: 1 (limit: 1131)
Memory: 604.0K
CGroup: /system.slice/vsftpd.service
└─10178 /usr/sbin/vsftpd /etc/vsftpd.conf
Oct 06 14:39:12 kube-worker-1 systemd[1]: Starting vsftpd FTP server...
Oct 06 14:39:12 kube-worker-1 systemd[1]: Started vsftpd FTP server.
$ sudo systemctl stop vsftpd
$ sudo systemctl disable vsftpd
$ sudo apt purge --auto-remove -y vsftpd
$ sudo lsof -i :21
$ exit
$ vagrant ssh kube-worker-1
#include <tunables/global>
profile network-deny flags=(attach_disconnected) {
  #include <abstractions/base>

  network,
}
$ sudo apparmor_parser /etc/apparmor.d/network-deny
$ kubectl get pod -o yaml > pod.yaml
$ kubectl delete pod network-call
apiVersion: v1
kind: Pod
metadata:
  name: network-call
  annotations:
    container.apparmor.security.beta.kubernetes.io/network-call: localhost/network-deny
spec:
  containers:
  - name: network-call
    image: alpine/curl:3.14
    command: ["sh", "-c", "while true; do ping -c 1 google.com; sleep 5; done"]
$ kubectl create -f pod.yaml
$ kubectl get pod network-call
NAME           READY   STATUS    RESTARTS   AGE
network-call   1/1     Running   0          27s
$ kubectl logs network-call
...
sh: ping: Permission denied
sh: sleep: Permission denied
$ exit
$ vagrant ssh kube-worker-1
$ sudo mkdir -p /var/lib/kubelet/seccomp/profiles
{"defaultAction":"SCMP_ACT_LOG"}
$ kubectl get pod -o yaml > pod.yaml
$ kubectl delete pod network-call
apiVersion: v1
kind: Pod
metadata:
  name: network-call
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: network-call
    image: alpine/curl:3.14
    command: ["sh", "-c", "while true; do ping -c 1 google.com; sleep 5; done"]
    securityContext:
      allowPrivilegeEscalation: false
$ kubectl create -f pod.yaml
$ kubectl get pod network-call
NAME           READY   STATUS    RESTARTS   AGE
network-call   1/1     Running   0          27s
$ sudo cat /var/log/syslog
Oct  6 16:25:06 ubuntu-focal kernel: [ 2114.894122] audit: type=1326 \
audit(1665073506.099:23761): auid=4294967295 uid=0 gid=0 \
ses=4294967295 pid=19226 comm="sleep" exe="/bin/busybox" \
sig=0 arch=c000003e syscall=231 compat=0 ip=0x7fc026adbf0b \
code=0x7ffc0000
$ exit
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-pod
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn
      value: "1024"
    - name: debug.iotrace
      value: "1"
  containers:
  - name: nginx
    image: nginx:1.23.1
$ kubectl create -f pod.yaml
$ kubectl get pods
NAME         READY   STATUS            RESTARTS   AGE
sysctl-pod   0/1     SysctlForbidden   0          4s
$ kubectl describe pod sysctl-pod
...
Events:
Type Reason Age From \
Message
---- ------ ---- ---- \
-------
Warning SysctlForbidden 2m48s kubelet \
forbidden sysctl: "net.core.somaxconn" \
not allowlisted
apiVersion: v1
kind: Pod
metadata:
  name: busybox-security-context
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: vol
    emptyDir: {}
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "sleep 1h"]
    volumeMounts:
    - name: vol
      mountPath: /data/test
    securityContext:
      allowPrivilegeEscalation: false
$ kubectl apply -f busybox-security-context.yaml
$ kubectl get pod busybox-security-context
NAME                       READY   STATUS    RESTARTS   AGE
busybox-security-context   1/1     Running   0          54s
$ kubectl exec busybox-security-context -it -- /bin/sh
/ $ cd /data/test
/data/test $ touch hello.txt
/data/test $ ls -l
total 0
-rw-r--r--    1 1000     2000             0 Nov 21 18:29 hello.txt
/data/test $ exit
apiVersion: v1
kind: Namespace
metadata:
  name: audited
  labels:
    pod-security.kubernetes.io/warn: baseline
$ kubectl apply -f psa-namespace.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: audited
spec:
  hostNetwork: true
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "sleep 1h"]
$ kubectl apply -f psa-pod.yaml
Warning: would violate PodSecurity "baseline:latest": host namespaces \
(hostNetwork=true)
pod/busybox created
$ kubectl get pod busybox -n audited
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          2m21s
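A variation worth practicing, though not part of the original exercise, is to switch the namespace label to enforce mode, which makes the API server reject the Pod instead of merely warning:

apiVersion: v1
kind: Namespace
metadata:
  name: audited
  labels:
    # enforce rejects non-compliant Pods rather than just warning
    pod-security.kubernetes.io/enforce: baseline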
$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/\
gatekeeper/master/deploy/gatekeeper.yaml
$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/\
gatekeeper-library/master/library/general/replicalimits/template.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sReplicaLimits
metadata:
  name: replica-limits
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment"]
  parameters:
    ranges:
    - min_replicas: 3
      max_replicas: 10
$ kubectl apply -f replica-limits-constraint.yaml
$ kubectl create deployment nginx --image=nginx:1.23.2 --replicas=15
error: failed to create deployment: admission webhook \
"validation.gatekeeper.sh" denied the request: [replica-limits] \
The provided number of replicas is not allowed for deployment: nginx. \
Allowed ranges: {"ranges": [{"max_replicas": 10, "min_replicas": 3}]}
$ kubectl create deployment nginx --image=nginx:1.23.2 --replicas=7
deployment.apps/nginx created
$ kubectl create secret generic db-credentials \
--from-literal=api-key=YZvkiWUkycvspyGHk3fQRAkt
$ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/default/db-credentials | hexdump -C
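The hexdump output will show the Secret's value in plain text. The typical remedy, sketched here with the aescbc provider and a placeholder key, is an EncryptionConfiguration passed to the API server via the --encryption-provider-config flag; the key name and value are assumptions.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  # Encrypts newly written Secret data with AES-CBC
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  # Allows reading data that is still stored unencrypted
  - identity: {}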
$ vagrant ssh kube-worker-1
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: container-runtime-sandbox
handler: runsc
$ kubectl apply -f runtime-class.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  runtimeClassName: container-runtime-sandbox
  containers:
  - name: nginx
    image: nginx:1.23.2
$ kubectl apply -f pod.yaml
$ kubectl get pod nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m21s
$ exit
$ docker build . -t node-app:0.0.1
...
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED         SIZE
node-app     0.0.1   7ba99d4ba3af   3 seconds ago   998MB
$ docker run -p 3001:3001 -d node-app:0.0.1
c0c8a301eeb4ac499c22d10399c424e1063944f18fff70ceb5c49c4723af7969
$ curl -L http://localhost:3001/
Hello World
$ docker build . -t node-app:0.0.1
...
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED         SIZE
node-app     0.0.1   ef2fbec41a75   2 seconds ago   176MB
$ kubectl create -f https://raw.githubusercontent.com/kyverno/\
kyverno/main/config/install.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/category: Best Practices, EKS Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/minversion: 1.6.0
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Images from unknown, public registries can be of dubious quality
      and may not be scanned and secured, representing a high degree of
      risk. Requiring use of known, approved registries helps reduce
      threat exposure by ensuring image pulls only come from them. This
      policy validates that container images only originate from the
      registry `eu.foo.io` or `bar.io`. Use of this policy requires
      customization to define your allowable registries.
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: validate-registries
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Unknown image registry."
      pattern:
        spec:
          containers:
          - image: "gcr.io/*"
$ kubectl apply -f restrict-image-registries.yaml
$ kubectl run nginx --image=nginx:1.23.3
Error from server: admission webhook "validate.kyverno.svc-fail" \
denied the request:
policy Pod/default/nginx for resource violation:
restrict-image-registries:
validate-registries: 'validation error: Unknown image registry. \
rule validate-registries
failed at path /spec/containers/0/image/'
$ kubectl run busybox --image=gcr.io/google-containers/busybox:1.27.2
pod/busybox created
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx@sha256:c1b9fe3c0c015486cf1e4a0ecabe78d05864475e279638e9713eb55f013f907f
$ kubectl apply -f pod-validate-image.yaml
pod/nginx created
$ kubectl get pods nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          29s
$ docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < pod.yaml
[
{
"object": "Pod/hello-world.default",
"valid": true,
"message": "Passed with a score of 0 points",
"score": 0,
"scoring": {
"advise": [
{
"selector": "containers[] .securityContext .capabilities \
.drop | index(\"ALL\")",
"reason": "Drop all capabilities and add only those \
required to reduce syscall attack surface"
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing \
of resources across the cluster"
},
{
"selector": "containers[] .securityContext .runAsNonRoot \
== true",
"reason": "Force the running image to run as a non-root \
user to ensure least privilege"
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource \
exhaustion"
},
{
"selector": "containers[] .securityContext .capabilities \
.drop",
"reason": "Reducing kernel capabilities available to a \
container limits its attack surface"
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing \
of resources across the cluster"
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource \
exhaustion"
},
{
"selector": "containers[] .securityContext \
.readOnlyRootFilesystem == true",
"reason": "An immutable root filesystem can prevent malicious \
binaries being added to PATH and increase attack \
cost"
},
{
"selector": ".metadata .annotations .\"container.seccomp. \
security.alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure \
against unknown threats"
},
{
"selector": ".metadata .annotations .\"container.apparmor. \
security.beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater \
protection from unknown threats. WARNING: NOT \
PRODUCTION READY"
},
{
"selector": "containers[] .securityContext .runAsUser -gt \
10000",
"reason": "Run as a high-UID user to avoid conflicts with \
the host's user table"
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access \
and should be configured with least privilege"
}
]
}
}
]
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  serviceAccountName: default
  containers:
  - name: linux
    image: hello-world:linux
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 20000
      capabilities:
        drop: ["ALL"]
$ kubectl apply -f setup.yaml
namespace/r61 created
pod/backend created
pod/loop created
pod/logstash created
$ kubectl get pods -n r61
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          115s
logstash   1/1     Running   0          115s
loop       1/1     Running   0          115s
$ kubectl describe pod backend -n r61
...
Containers:
hello:
Container ID: docker://eb0bdefc75e635d03b625140d1e \
b229ca2db7904e44787882147921c2bd9c365
Image: bmuschko/nodejs-hello-world:1.0.0
...
$ trivy image bmuschko/nodejs-hello-world:1.0.0
$ trivy image alpine:3.13.4
$ trivy image elastic/logstash:7.13.3
$ kubectl delete pod backend -n r61
$ kubectl delete pod logstash -n r61
$ kubectl delete pod loop -n r61
$ vagrant ssh kube-worker-1
$ kubectl get pod malicious -o jsonpath='{.spec.containers[0].args}'
...
spec:
containers:
- args:
- /bin/sh
- -c
- while true; do echo "attacker intrusion" >> /etc/threat; \
sleep 5; done
...
$ sudo journalctl -fu falco
Jan 24 23:40:18 kube-worker-1 falco[8575]: 23:40:18.359740123: Error \
File below /etc opened for writing (user=<NA> user_loginuid=-1 \
command=sh -c while true; do echo "attacker intrusion" >> /etc/threat; \
sleep 5; done pid=9763 parent=<NA> pcmdline=<NA> file=/etc/threat \
program=sh gparent=<NA> ggparent=<NA> gggparent=<NA> \
container_id=e72a6dbb63b8 image=docker.io/library/alpine)
...
- rule: Write below etc
  desc: an attempt to write to any file below /etc
  condition: write_etc_common
  output: "File below /etc opened for writing (user=%user.name user_loginuid=%user.loginuid
    command=%proc.cmdline pid=%proc.pid parent=%proc.pname pcmdline=%proc.pcmdline
    file=%fd.name program=%proc.name gparent=%proc.aname[2] ggparent=%proc.aname[3]
    gggparent=%proc.aname[4] container_id=%container.id image=%container.image.repository)"
  priority: ERROR
  tags: [filesystem, mitre_persistence]
- rule: Write below etc
  desc: an attempt to write to any file below /etc
  condition: write_etc_common
  output: "%evt.time,%user.name,%container.id"
  priority: ERROR
  tags: [filesystem, mitre_persistence]
$ sudo systemctl restart falco
$ sudo journalctl -fu falco
Jan 24 23:48:18 kube-worker-1 falco[17488]: 23:48:18.516903001: \
Error 23:48:18.516903001,<NA>,e72a6dbb63b8
...
file_output:
  enabled: true
  keep_alive: false
  filename: /var/log/falco.log

stdout_output:
  enabled: false
$ sudo tail -f /var/log/falco.log
00:10:30.425084165: Error 00:10:30.425084165,<NA>,e72a6dbb63b8
...
$ exit
$ kubectl apply -f setup.yaml
pod/hash created
$ kubectl get pod hash
NAME   READY   STATUS    RESTARTS   AGE
hash   1/1     Running   0          27s
$ kubectl exec -it hash -- /bin/sh
/ # ls /var/config/hash.txt
/var/config/hash.txt
apiVersion: v1
kind: Pod
metadata:
  name: hash
spec:
  containers:
  - name: hash
    image: alpine:3.17.1
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: hash-vol
      mountPath: /var/config
    command: ["sh", "-c", "if [ ! -d /var/config ]; then mkdir -p
      /var/config; fi; while true; do echo $RANDOM | md5sum | head -c 20
      >> /var/config/hash.txt; sleep 20; done"]
  volumes:
  - name: hash-vol
    emptyDir: {}
$ vagrant ssh kube-control-plane
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Request
  resources:
  - group: ""
    resources: ["services"]
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/rules/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/logs/apiserver.log
    - --audit-log-maxage=5
    ...
    volumeMounts:
    - mountPath: /etc/kubernetes/audit/rules/audit-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/logs/
      name: audit-log
      readOnly: false
    ...
  volumes:
  - name: audit
    hostPath:
      path: /etc/kubernetes/audit/rules/audit-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit/logs/
      type: DirectoryOrCreate
$ kubectl create configmap db-user --from-literal=username=tom
configmap/db-user created
$ sudo cat /var/log/kubernetes/audit/logs/apiserver.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata", \
"auditID":"1fbb409a-3815-4da8-8a5e-d71c728b98b1","stage": \
"ResponseComplete","requestURI":"/api/v1/namespaces/default/configmaps? \
fieldManager=kubectl-create\u0026fieldValidation=Strict","verb": \
"create","user":{"username":"kubernetes-admin","groups": \
["system:masters","system:authenticated"]},"sourceIPs": \
["192.168.56.10"], "userAgent":"kubectl/v1.24.4 (linux/amd64) \
kubernetes/95ee5ab", "objectRef":{"resource":"configmaps", \
"namespace":"default", "name":"db-user","apiVersion":"v1"}, \
"responseStatus":{"metadata": {},"code":201}, \
"requestReceivedTimestamp":"2023-01-25T18:57:51.367219Z", \
"stageTimestamp":"2023-01-25T18:57:51.372094Z","annotations": \
{"authorization.k8s.io/decision":"allow", \
"authorization.k8s.io/reason":""}}
$ exit