## Pod Manifests

### Basic Pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
  - name: networknuts-app
    image: lovelearnlinux/webserver:v1
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
```
### Pod with Resources

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed
  labels:
    app: apache
spec:
  containers:
  - name: networknuts-app
    image: lovelearnlinux/webserver:v1
    resources:
      limits:
        memory: 250Mi
        cpu: 200m
      requests:
        memory: 250Mi
        cpu: 200m
```

When requests equal limits for every container, the Pod is assigned the Guaranteed QoS class.
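For contrast, a container whose requests are lower than its limits makes the Pod Burstable, and a Pod with no requests or limits at all is BestEffort. An illustrative `resources` fragment (values chosen for the example, not taken from the manifests above):

```yaml
# requests below limits -> Burstable QoS class
resources:
  requests:
    memory: 100Mi
    cpu: 100m
  limits:
    memory: 250Mi
    cpu: 200m
```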
### Pod with Health Checks

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
  - name: networknuts-app
    image: lovelearnlinux/webserver:v1
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health.txt
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
    resources:
      requests:
        cpu: "400m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
```
## Deployment Manifests

### Basic Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-declarative
  annotations:
    environment: prod
    organization: sales
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```

Prefer pinning a specific image tag in production; `latest` makes rollbacks and debugging unpredictable.
### Deployment with Rolling Update Strategy

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
  namespace: learning
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-nn
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-nn
    spec:
      containers:
      - name: webserver-pod
        image: lovelearnlinux/webserver:v1
        ports:
        - containerPort: 80
```
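With `replicas: 2`, `maxSurge: 1`, and `maxUnavailable: 1`, a rollout may briefly run one extra Pod and must keep at least one available:

```
max Pods during rollout  = replicas + maxSurge       = 2 + 1 = 3
min available during it  = replicas - maxUnavailable = 2 - 1 = 1
```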
### Deployment with Pod Affinity and Liveness Probe

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gotohell
spec:
  selector:
    matchLabels:
      run: gotohell
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        run: gotohell
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                run: gotohell
      containers:
      - name: gotohell
        image: lovelearnlinux/webserver:v2
        livenessProbe:
          exec:
            command:
            - cat
            - /var/www/html/index.html
          initialDelaySeconds: 10
          timeoutSeconds: 3
          periodSeconds: 20
          failureThreshold: 3
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
```

Change `podAffinity` to `podAntiAffinity` to spread Pods across nodes instead of co-locating them.
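Applying that change, the `affinity` block would become the fragment below (same labels and topology key as the manifest above):

```yaml
# Anti-affinity variant: replicas are forced onto different nodes
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          run: gotohell
```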
## DaemonSet Manifests

### Logging Agent DaemonSet

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
```

DaemonSets ensure all (or some) nodes run a copy of a Pod. Common use cases: log collection, node monitoring, and cluster storage daemons.
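To cover only "some" nodes, a `nodeSelector` in the Pod template restricts scheduling to nodes carrying a matching label. A sketch, where the `disktype: ssd` label is an example, not part of the manifests above:

```yaml
# Hypothetical fragment: run the DaemonSet only on nodes labeled disktype=ssd
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```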
### DaemonSet with Control Plane Toleration

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
spec:
  selector:
    matchLabels:
      id: ds-important
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: httpd
        image: httpd:2-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
```

Use tolerations to run DaemonSet Pods on control plane nodes for monitoring or logging.
## StatefulSet Manifests

### Headless Service and StatefulSet

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
```

StatefulSets provide stable network identities and persistent storage for stateful applications such as databases.
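Through the headless Service above, each replica gets a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local` (namespace `default` assumed, since the manifest sets none):

```
web-0.nginx-headless.default.svc.cluster.local
web-1.nginx-headless.default.svc.cluster.local
web-2.nginx-headless.default.svc.cluster.local
```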
## Job and CronJob Manifests

### Basic Job

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: nn-batch
        image: lovelearnlinux/batch-job
```

Jobs create one or more Pods and ensure a specified number of them complete successfully.
### Parallel Job

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch-job
spec:
  completions: 5
  parallelism: 2
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: nn-batch
        image: lovelearnlinux/batch-job
```

- `completions`: total number of successful Pod completions required
- `parallelism`: number of Pods allowed to run in parallel at any time
### CronJob

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```

CronJobs run Jobs on a schedule using Cron syntax; `*/1 * * * *` means every minute.
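A few more schedule expressions for reference (standard five-field Cron order: minute, hour, day of month, month, day of week):

```
0 * * * *      # every hour, on the hour
30 2 * * *     # daily at 02:30
0 9 * * 1-5    # 09:00 on weekdays (Mon-Fri)
0 0 1 * *      # midnight on the first day of every month
```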
## Service Manifests

### ClusterIP Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nnappone-service
spec:
  selector:
    app: nnappone
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
```

ClusterIP is the default Service type; it is reachable only from within the cluster.
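Inside the cluster, the Service is addressable by DNS as `<service>.<namespace>.svc.cluster.local` (namespace `default` assumed, since the manifest sets none):

```
http://nnappone-service.default.svc.cluster.local:8080
```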
### NodePort Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: website-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
```

The NodePort range is 30000-32767; if `nodePort` is omitted, Kubernetes auto-assigns one.
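The Service is then reachable on the chosen port of any node in the cluster (`<node-ip>` is a placeholder for a real node address):

```
http://<node-ip>:30007
```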
### Full NodePort Example

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nnweb-svc
  namespace: learning
  labels:
    app: hello-nn
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30003
    protocol: TCP
  selector:
    app: hello-nn
```

This Service selects the `app: hello-nn` Pods created by the `hello-deploy` Deployment earlier in this document.
## ConfigMap and Secret Manifests

### ConfigMap - Literal Values

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: prod
data:
  database_url: "postgresql://db:5432"
  max_connections: "100"
  log_level: "info"
```

ConfigMaps store non-confidential configuration data as key-value pairs.
### ConfigMap - Script File

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mytestscript
data:
  test.sh: |
    echo "testing script"
    df -h
    date
```

Use the literal block scalar `|` for multi-line values to preserve line breaks.
### Pod with ConfigMap as Env

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database_url
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log_level
```
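To import every key from the ConfigMap at once instead of listing keys individually, an `envFrom` block can replace the `env` list above:

```yaml
# Imports all keys of app-config as environment variables
envFrom:
- configMapRef:
    name: app-config
```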
### Pod with ConfigMap as Volume

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: script-pod
spec:
  volumes:
  - name: testing
    configMap:
      name: mytestscript
      defaultMode: 0777
  containers:
  - name: nginx
    image: lovelearnlinux/webserver:v1
    command: ["/bin/bash", "/tmp/test.sh"]
    volumeMounts:
    - mountPath: /tmp
      name: testing
```

The script is executed from `/tmp`, where the ConfigMap volume is mounted; `defaultMode: 0777` makes it executable.
### Secret - Opaque

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: prod
type: Opaque
data:
  username: YWRtaW4=            # "admin"
  password: cGFzc3dvcmQxMjM=    # "password123"
```

Secret values under `data` must be base64-encoded; use `echo -n 'value' | base64` to encode. Alternatively, the `stringData` field accepts plain-text values and encodes them on write.
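Encoding and decoding on the command line, assuming a standard `base64` utility (the `-n` flag keeps the trailing newline out of the encoded value):

```shell
# Encode a value for the data: field
echo -n 'admin' | base64            # -> YWRtaW4=

# Decode to double-check what a Secret actually contains
echo -n 'YWRtaW4=' | base64 --decode   # -> admin
```

Note that base64 is an encoding, not encryption; anyone who can read the Secret can decode it.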
### Pod with Secret

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: password
```
## Storage Manifests

### PersistentVolume (NFS)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvone-nfs
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /foldername
    server: ip-address-nfs-server
```

Note that the `Recycle` reclaim policy is deprecated; prefer `Retain`, or `Delete` with dynamic provisioning, on current clusters.
### PersistentVolumeClaim

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: slow
```

This claim can bind to `pvone-nfs` above: same `storageClassName`, matching access mode, and a request within the volume's 5Gi capacity.
### Pod Using PVC

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-using-pvc
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim
```
## Autoscaling Manifests

### HorizontalPodAutoscaler

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 66
```

HPA automatically scales Pods based on CPU/memory utilization or custom metrics. The older `autoscaling/v2beta2` API was removed in Kubernetes 1.26; use `autoscaling/v2`.
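The scaling decision follows the documented formula `desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)`. A worked example with illustrative numbers against the 66% target above:

```
current: 2 replicas at 99% average CPU utilization, target 66%
desired = ceil(2 × 99 / 66) = ceil(3.0) = 3 replicas
```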
## Resource Management

### LimitRange - Default Resource Limits

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: def-cpu-mem-limit
  namespace: dev
spec:
  limits:
  - default:
      cpu: 111m
      memory: 99Mi
    defaultRequest:
      cpu: 101m
      memory: 91Mi
    max:
      cpu: 200m
      memory: 100Mi
    min:
      cpu: 100m
      memory: 90Mi
    type: Container
```

How it works:
- If a container has no `resources` block, `default` is applied as its limits and `defaultRequest` as its requests.
- If a container does specify resources, the values must fall between `min` and `max`, or the Pod is rejected at admission.
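For example, a container in the `dev` namespace declaring the fragment below would be rejected at admission by the LimitRange above (illustrative values):

```yaml
# Rejected: the 300m cpu limit exceeds the LimitRange max of 200m
resources:
  limits:
    cpu: 300m
    memory: 100Mi
```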
## NetworkPolicy Manifests

### Deny All Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

The empty `podSelector: {}` matches every Pod in the namespace, and listing `Ingress` with no `ingress` rules denies all incoming traffic by default.
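A matching default-deny for outgoing traffic uses the same shape with `Egress` (sketch of the `spec` only):

```yaml
# Deny all egress from every Pod in the namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
```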
### Restrict Egress by Pod and Port

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222
```

This allows backend Pods to connect to db1 on port 1111 and db2 on port 2222. All other egress is blocked, including DNS on port 53, so name resolution may need an additional rule.
### Allow Ingress from a Namespace

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          environment: production
    ports:
    - protocol: TCP
      port: 8080
```

Use `namespaceSelector` to control traffic between namespaces.
### Allow External Traffic with ipBlock

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
    ports:
    - protocol: TCP
      port: 80
```

This allows traffic on port 80 from any address except the 10.0.0.0/8 range.
## Access Modes Reference

| Access Mode | Abbreviation | Description |
|---|---|---|
| ReadWriteOnce | RWO | Volume can be mounted read-write by a single node |
| ReadOnlyMany | ROX | Volume can be mounted read-only by many nodes |
| ReadWriteMany | RWX | Volume can be mounted read-write by many nodes |
| ReadWriteOncePod | RWOP | Volume can be mounted read-write by a single Pod |
## Common kubectl Commands

```shell
# Imperative: creates the resource, errors if it already exists
kubectl create -f manifest.yaml

# Declarative: creates or updates to match the manifest; preferred
# for manifests kept in version control
kubectl apply -f manifest.yaml
```