Requirements
Ensure kubectl is installed and configured. See the links below:
- Local installation of kubectl
- Local installation of helm
Test kubectl
To verify the installation, run:
kubectl version --short
You should see output similar to this:
Client Version: v1.16.0
Server Version: v1.14.8
Command Cheat Sheet

| Command | Description | Reference Links |
| --- | --- | --- |
| kubectl config view | View the current config | kubernetes |
| kubectl config get-contexts | List all installed contexts | kubernetes |
| kubectl delete ns <namespace> | Delete a Kubernetes namespace | kubernetes |
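For example, kubectl config get-contexts marks the active context with an asterisk. The row below is purely illustrative (a typical kubeadm default, not output from a real cluster):
kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   default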
Utilities
Deleting resources in bulk using a regex
# Delete all pods matching a given pattern
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}'| xargs kubectl delete -n mynamespace pod
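Before deleting anything, you can preview which pods match by dropping the xargs stage (same placeholder namespace and patterns as above):
# Dry run: list the matching pod names only
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}'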
Port Forwarding
Port forwarding lets you access and interact with internal Kubernetes cluster services from your localhost. You can use this
method to investigate issues and make changes locally without needing to expose the services.
# Port Forwarding Command
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT
# List Services
kubectl get svc -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-dex-server ClusterIP 10.108.149.200 <none> 5556/TCP,5557/TCP,5558/TCP 12h
argocd-metrics ClusterIP 10.103.23.208 <none> 8082/TCP 12h
argocd-redis ClusterIP 10.111.143.18 <none> 6379/TCP 12h
argocd-repo-server ClusterIP 10.98.189.78 <none> 8081/TCP,8084/TCP 12h
argocd-server ClusterIP 10.97.109.37 <none> 80/TCP,443/TCP 12h
argocd-server-metrics ClusterIP 10.108.87.103 <none> 8083/TCP 12h
To forward local port 8080 to port 443 of the argocd-server service, run the following:
kubectl port-forward svc/argocd-server -n argocd 8080:443
To make the forwarded port 8080 reachable on all local addresses, not just localhost, run the following:
kubectl port-forward --address 0.0.0.0 svc/argocd-server -n argocd 8080:443
To bind the forwarded port 8080 to localhost plus specific additional addresses, run the following:
kubectl port-forward --address localhost,192.168.1.34 svc/argocd-server -n argocd 8080:443
You can also target deployments or other resources as needed; for example:
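A minimal sketch forwarding to the argocd-repo-server deployment instead of its service (assuming its pods listen on the same port, 8081, as shown in the service listing above):
# Forward local port 8081 to port 8081 of the deployment's pod
kubectl port-forward deploy/argocd-repo-server -n argocd 8081:8081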
Kubernetes Context
kubectl uses the default namespace after a clean setup. As you add more namespaces, you have to pass -n <namespace>
as part of every command.
You can change the current context's default namespace to avoid repeating that option:
kubectl get ns
NAME STATUS AGE
poc1 Active 19h
default Active 20h
poc2 Active 4h
kube-node-lease Active 20h
kube-public Active 20h
kube-system Active 20h
# Change the default namespace for the current context
kubectl config set-context --current --namespace=poc1
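To confirm the change took effect, inspect the current context:
# Show the namespace recorded in the current context
kubectl config view --minify | grep namespace: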
Kubernetes Options
To list the global options that apply to every kubectl command, type this command:
kubectl options
Kubernetes ClusterRole Description
To describe all ClusterRoles, type:
kubectl describe clusterrole.rbac
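To describe a single ClusterRole instead, name it explicitly; for example, the built-in cluster-admin role:
kubectl describe clusterrole cluster-admin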
Clean Stale Kubernetes Namespace
Delete a stale namespace and all resources in it:
export NAMESPACE="test"
kubectl get namespace $NAMESPACE -o json > $NAMESPACE.json
# Edit $NAMESPACE.json and set spec.finalizers to an empty list
kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f ./$NAMESPACE.json
kubectl delete pods -n $NAMESPACE --force
kubectl get namespace
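As an alternative to editing the file by hand, a sketch that clears the finalizers in one pipeline (assuming jq is installed):
# Strip spec.finalizers and submit the result straight to the finalize endpoint
kubectl get namespace $NAMESPACE -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f -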
Drain and Remove Pods
Move pods to other hosts when doing maintenance or retiring a host:
export rNode="kube-node-01"
# Drain Node
kubectl drain $rNode --ignore-daemonsets --delete-emptydir-data
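# Optional check: after draining, only DaemonSet-managed pods should remain on the node
kubectl get pods --all-namespaces -o wide | grep $rNode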
# Remove Node
kubectl delete node $rNode
# Un-cordon the node to allow scheduling again (if you are keeping it rather than deleting it)
kubectl uncordon $rNode
Force Delete Pods
# Force-delete a stuck pod immediately, skipping the grace period
kubectl delete pods name-of-pod --grace-period=0 --force
kubectl delete pod pod-two --force --grace-period=0 --namespace=default
# If the pod still will not go away, clear its finalizers
kubectl patch pod pod-two -p '{"metadata":{"finalizers":null}}'
Scale in or scale out
# Scale out to 3 replicas
kubectl scale --replicas=3 deployment/nginx -n nginx-demo
# Scale back in to 1 replica
kubectl scale --replicas=1 deployment/nginx -n nginx-demo
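To verify the new replica count (the READY column should match):
kubectl get deployment nginx -n nginx-demo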
Check Rollout Status
To check the rollout status of multiple deployments in a namespace, type:
deploy=$(kubectl -n mynamespace get deploy -o name)
for i in $deploy; do kubectl -n mynamespace rollout status $i; done
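rollout status waits until each rollout completes; if a deployment can hang, you can bound the wait with a timeout:
for i in $deploy; do kubectl -n mynamespace rollout status $i --timeout=120s; done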
Check Node Taints
kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
Troubleshooting Persistent Volume Claim issues
Issue: 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
- Check all persistent volumes and claims and clean up as needed (see the commands below)
- Check that schedulable nodes exist; if none do, fix that. For example, remove the master taint to allow scheduling there:
kubectl taint nodes node1 node-role.kubernetes.io/master-
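Commands for the first check, listing volumes and claims across the cluster:
kubectl get pv
kubectl get pvc --all-namespaces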
Reference Links