Installing a Kubernetes cluster involves many steps and can be quite complex. Once a cluster is running, it is easy to manage from the master node, or from any node with kubectl configured. This becomes cumbersome and difficult to manage as the cluster increases in size, and also when there is a need to integrate Kubernetes with CI/CD tools. This post shows how to simplify the kubectl configuration for managing multiple clusters.
Copy the Kubernetes admin config to the admin node.
export SERVER1="server-ip-address" #192.168.1.2
export SERVER2="server-ip-address" #192.168.1.3
export KUSER="admin-user"
# download the Kubernetes admin configs for the dev and prod clusters
scp $KUSER@$SERVER1:/etc/rancher/rke2/rke2.yaml ~/.kube/dev-kube-context
scp $KUSER@$SERVER2:/etc/rancher/rke2/rke2.yaml ~/.kube/prod-kube-context
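The copied rke2.yaml points kubectl at https://127.0.0.1:6443 by default, so each file needs its server line changed to the real address of its master. A sketch of that edit, run against a throwaway sample file rather than the real ~/.kube/dev-kube-context (the IP is the example dev address from above):

```shell
# Create a sample fragment mimicking the copied RKE2 config,
# which targets 127.0.0.1 by default.
cat > /tmp/dev-kube-context.sample <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the config at the dev master's real address (example IP 192.168.1.2).
sed -i 's|https://127.0.0.1:6443|https://192.168.1.2:6443|' /tmp/dev-kube-context.sample
grep 'server:' /tmp/dev-kube-context.sample
```

Run the same substitution on the prod file with the prod master's address.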
Clusters Section.
clusters:
- cluster:
    server: https://192.168.1.x:6443
  name: dev-cluster
Users Section.
users:
- name: dev-admin
  user:
    client-certificate-data: <redacted>
    client-key-data: <redacted>
Contexts Section.
contexts:
- context:
    cluster: dev-cluster
    user: dev-admin
  name: dev-cluster-context
current-context: dev-cluster-context
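RKE2 names the cluster, user, and context `default` in the copied file; the dev-cluster, dev-admin, and dev-cluster-context names shown in the sections above come from editing those fields. A sketch of the renames, again on a throwaway sample rather than the real file:

```shell
# Sample fragment carrying RKE2's default names.
cat > /tmp/contexts.sample <<'EOF'
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
EOF

# Rename each field to the dev names used in this post
# (use the prod-* names when editing the prod file).
sed -i \
  -e 's/cluster: default/cluster: dev-cluster/' \
  -e 's/user: default/user: dev-admin/' \
  -e 's/name: default/name: dev-cluster-context/' \
  -e 's/current-context: default/current-context: dev-cluster-context/' \
  /tmp/contexts.sample
cat /tmp/contexts.sample
```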
export KUBECONFIG="$HOME/.kube/dev-kube-context:$HOME/.kube/prod-kube-context"
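The export above only lasts for the current shell session. To keep both clusters available in new sessions, append it to the shell profile (~/.bashrc is an assumption here; use your shell's equivalent):

```shell
# Persist the merged KUBECONFIG for new shells.
# ~/.bashrc is assumed; zsh users would use ~/.zshrc instead.
echo 'export KUBECONFIG="$HOME/.kube/dev-kube-context:$HOME/.kube/prod-kube-context"' >> "$HOME/.bashrc"
```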
kubectl config get-contexts
CURRENT   NAME                   CLUSTER        AUTHINFO     NAMESPACE
*         dev-cluster-context    dev-cluster    dev-admin
          prod-cluster-context   prod-cluster   prod-admin
kubectl get nodes
NAME              STATUS   ROLES                       AGE     VERSION
dev-kube-master   Ready    control-plane,etcd,master   4d16h   v1.20.6+rke2r1
dev-kube-node1    Ready    <none>                      4d16h   v1.20.6+rke2r1
dev-kube-node2    Ready    <none>                      4d16h   v1.20.6+rke2r1
kubectl config use-context prod-cluster-context
# Switched to context "prod-cluster-context".
kubectl config get-contexts
# CURRENT   NAME                   CLUSTER        AUTHINFO     NAMESPACE
#           dev-cluster-context    dev-cluster    dev-admin
# *         prod-cluster-context   prod-cluster   prod-admin
kubectl get nodes
# NAME               STATUS   ROLES                       AGE   VERSION
# prod-kube-master   Ready    control-plane,etcd,master   50m   v1.20.6+rke2r1
# prod-kube-node1    Ready    <none>                      47m   v1.20.6+rke2r1