Configure Kubernetes with the Node Package on the Operating System#

After installing Kubernetes, configure the ICE ClusterWare™ software to work with your Kubernetes cluster. These instructions assume you have installed Kubernetes on control plane and worker nodes with a mutable operating system (OS). Using the ClusterWare node package on worker nodes is supported for Kubernetes distributions installed on top of the node OS. Harvester clusters, or other Kubernetes clusters installed on an immutable OS, should use a container registry.

Two Kubernetes control plane configurations are supported: a single control plane node, or a highly available (HA) control plane (using HAProxy and Keepalived) with a first control plane node and one or more additional control plane nodes. Both configurations can include additional workers (non-control-plane nodes).

The following sections include instructions for configuring single and multiple control planes on both ClusterWare and non-ClusterWare systems. Example scenarios appear at the end.

Bootstrap Kubernetes Control Plane#

Initialize the control plane node(s).

Important

For a server to function as a Kubernetes control plane or worker, swap must be turned off. Verify the current status with swapon -s, and use swapoff -a -v to disable swap. Do not use a RAM-booted or otherwise ephemeral compute node as a Kubernetes control plane.
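
For example, the following sketch disables swap and keeps it disabled across reboots. The sed edit assumes swap is configured in /etc/fstab; adjust for your distribution:

    $ swapon -s                                          # list active swap; no output means swap is already off
    $ swapoff -a -v                                      # disable all active swap immediately
    $ sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries so they stay off after reboot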

For a single node control plane:

  • For a single ClusterWare node control plane, use the following command on a ClusterWare admin node:

    cw-kube -i <control plane node ID> --init
    
  • For a single non-ClusterWare node control plane, use the following command:

    cw-kube --init
    

For a multi-node control plane:

  • For a ClusterWare multi-node control plane, use the following commands on a ClusterWare admin node:

    $ cw-kube --prepare-lb <unused IP> <first control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>
    $ cw-kube -i <first control plane node ID> --init-ha
    
  • For a non-ClusterWare multi-node control plane, use the following commands on the first control plane system:

    $ cw-kube --prepare-lb <unused IP> <first control plane hostname>:<system IP>,<additional control plane hostname>:<system IP>,<additional control plane hostname>:<system IP>
    $ cw-kube --init-ha
    

Note

In some advanced cases, you may want to pass additional Kubernetes options during initialization or when joining workers. By default, --init and --init-ha already include --pod-network-cidr=10.244.0.0/16. You can override the default using the --kubeargs option in your command. For example:

--kubeargs '--pod-network-cidr=10.55.0.0/16 --service-cidr 10.99.0.0/16 \
--control-plane-endpoint 10.110.23.22'
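
For example, a complete single-node initialization with these overrides might look like the following sketch (the node ID n0 is illustrative):

    cw-kube -i n0 --init --kubeargs '--pod-network-cidr=10.55.0.0/16 --service-cidr 10.99.0.0/16'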

Checking Deployment Status#

Verify that Kubernetes is ready on each system after the first initialization. Verify again after each control plane node or worker node joins.

  • For a ClusterWare control plane, use the following command:

    cw-nodectl -i <node ID> exec kubectl get nodes -o wide
    
  • For a non-ClusterWare control plane, use the following command:

    kubectl get nodes -o wide
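
A freshly joined node can take a short time to report Ready. As an optional convenience, kubectl can watch the status converge (shown here for a non-ClusterWare control plane):

    kubectl get nodes -o wide --watch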
    

Additional Configuration#

The following steps may be required depending on your cluster configuration.

  1. The INTERNAL-IP of the ClusterWare control plane may not match the IP address known to the ClusterWare platform. If they differ, replace the --cluster value with the INTERNAL-IP value when using the printed messages to join additional control plane nodes and workers. A quick way to read the INTERNAL-IP directly is sketched after this list.

  2. The certificate key printed in the initialization output expires after 2 hours. If you are joining additional control plane nodes, you may need to generate a new certificate key.

    For a ClusterWare control plane, use the following command to generate a new key:

    [admin@cwhead ~]$ cw-nodectl -i <node ID> exec kubeadm init phase upload-certs --upload-certs
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    ad556dcd5c795a42321be46b0a3cf8a52d7a1c7fef6e0bd96c65525569c39105
    

    On a non-ClusterWare control plane, use the following command to generate a new key:

    [root@kube1 ~]$ kubeadm init phase upload-certs --upload-certs
    

    Replace the --certificate-key value with the new certificate key you just generated when using the output messages to join additional control plane nodes.
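
For step 1 above, a quick way to read a node's INTERNAL-IP directly, rather than scanning the wide output, is a jsonpath query. This is standard kubectl; the node name placeholder follows the conventions above:

    kubectl get node <node name> -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

On a ClusterWare control plane, prefix the same command with cw-nodectl -i <node ID> exec, quoting as needed.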

Adding Workers#

There are two ways to join worker nodes to the control plane: you can join previously booted nodes directly or create a ClusterWare image and boot nodes using that image.

  1. Using the messages output after initialization as a guide, join workers to the control plane.

    To join ClusterWare nodes as workers to a ClusterWare control plane:

    cw-kube -i n[<node IDs>] --join --cluster <control plane node ID>
    

    To join ClusterWare nodes as workers to a non-ClusterWare control plane:

    cw-kube -i n[<node IDs>] --join --token <token value> --cahash <cahash value> --cluster <control plane IP>
    

    To join non-ClusterWare systems as workers to a ClusterWare control plane:

    cw-kube --join --token <token value> --cahash <cahash value> --cluster <control plane IP>
    

    To join non-ClusterWare systems as workers to a non-ClusterWare control plane:

    cw-kube --join --token <token value> --cahash <cahash value> --cluster <control plane IP>
    
  2. For ClusterWare workers, use the following commands to create a Kubernetes worker node image and then boot the nodes with the node image as workers:

    $ cw-bootctl -i DefaultBoot clone name=<boot name>
    $ cw-imgctl -i DefaultImage clone name=<image name>
    $ cw-kube --image <image name> --join --cluster <control plane node ID>
    $ cw-bootctl -i <boot name> up image=<image name>
    $ cw-nodectl -i n[<node IDs>] set _boot_config=<boot name>
    $ cw-nodectl -i n[<node IDs>] reboot
    

Examples: Configure Kubernetes with the Node Package#

This section provides examples of setting up Kubernetes clusters with ICE ClusterWare™ and non-ClusterWare systems.

Note

All examples assume you have root user or ClusterWare administrator access and that the clusterware-kubeadm package is installed.

Using a Single ClusterWare System as a Control Plane#

  1. Run the following command to initialize ClusterWare node n0 (10.154.1.100) as a control plane node:

    cw-kube -i n0 --init
    

    After a successful initialization, messages print describing how to join ClusterWare NODES/IMAGE and non-ClusterWare systems as workers to this ClusterWare control plane. For example:

    ...
    To join ClusterWare NODES/IMAGE as worker to this Clusterware control plane:
    cw-kube -i NODES --join --cluster n0
    cw-kube --image IMAGE --join --cluster n0
    
    To join non ClusterWare system as worker to this ClusterWare control plane:
    cw-kube --join --token yp6lxa.wcb6g48ud3f2cwng --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.1.100
    ...
    
  2. Verify the deployment status by running the following command:

    [admin@cwhead ~]$ cw-nodectl -i n0 exec kubectl get nodes -o wide
    NAME               STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                            KERNEL-VERSION                 CONTAINER-RUNTIME
    n0.cluster.local   Ready      control-plane   1d   v1.31.2   10.154.1.100   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32
    

    The INTERNAL-IP of the ClusterWare control plane is 10.154.1.100, which is the same as n0’s IP address known to the ClusterWare platform.

  3. Using the message output after initialization as a guide, join booted ClusterWare nodes n[1-4] as workers to the control plane node n0 with the following command:

    $ cw-kube -i n[1-4] --join --cluster n0
    
  4. Add additional worker nodes to control plane n0 by creating a Kubernetes worker node image and booting n[5-10] with the new node image:

    $ cw-bootctl -i DefaultBoot clone name=KubeWorkerBoot
    $ cw-imgctl -i DefaultImage clone name=KubeWorkerImage
    $ cw-kube --image KubeWorkerImage --join --cluster n0
    $ cw-bootctl -i KubeWorkerBoot up image=KubeWorkerImage
    $ cw-nodectl -i n[5-10] set _boot_config=KubeWorkerBoot
    $ cw-nodectl -i n[5-10] reboot
    
  5. Verify that Kubernetes is ready on each system using the following command:

    $ cw-nodectl -i n0 exec kubectl get nodes -o wide
    NAME               STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                            KERNEL-VERSION                 CONTAINER-RUNTIME
    n0.cluster.local   Ready      control-plane   1d   v1.31.2   10.154.1.100   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32
    n1.cluster.local   Ready      <none>          1d   v1.31.2   10.154.1.101   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32
    

    The abbreviated example output shows the Kubernetes cluster has ClusterWare n0 as a working control plane and n1 as a worker.

Using a Single Non-ClusterWare System as a Control Plane#

  1. On the non-ClusterWare system kube1 (10.154.3.1), where clusterware-kubeadm is installed, initialize the local system as a control plane:

    cw-kube --init
    

    The following messages print after the control plane initialization:

    ...
    To join ClusterWare NODES/IMAGE as worker to this non ClusterWare control plane:
    cw-kube -i NODES --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    cw-kube --image IMAGE --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    
    To join non ClusterWare system as worker to this non Clusterware control plane:
    cw-kube --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    ...
    
  2. Verify the deployment status by running the following command:

    kubectl get nodes -o wide
    
  3. Using the messages at the end of step 1 as a guide, join booted ClusterWare nodes n[11-14] as workers with explicit --token, --cahash, and --cluster arguments to the control plane kube1 (10.154.3.1):

    cw-kube -i n[11-14] --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    
  4. Create a Kubernetes worker node image with explicit --token, --cahash, and --cluster arguments, and then boot n[15-20] with the new node image as workers to the control plane kube1 (10.154.3.1):

    $ cw-bootctl -i DefaultBoot clone name=KubeWorkerBoot2
    $ cw-imgctl -i DefaultImage clone name=KubeWorkerImage2
    $ cw-kube --image KubeWorkerImage2 --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    $ cw-bootctl -i KubeWorkerBoot2 up image=KubeWorkerImage2
    $ cw-nodectl -i n[15-20] set _boot_config=KubeWorkerBoot2
    $ cw-nodectl -i n[15-20] reboot
    
  5. On EACH non-ClusterWare system that you want to join as a worker and where clusterware-kubeadm is installed, join the local system to control plane kube1 (10.154.3.1) with explicit --token, --cahash, and --cluster arguments:

    cw-kube --join --token nfg0ku.73f1gre8gxzco1qx --cahash sha256:cc999d4001c018a3423238773614bb8d6d8ad720e1f31a8b0e862052a67262da --cluster 10.154.3.1
    
  6. Verify the deployment status by running the following command:

    kubectl get nodes -o wide
    

    You should see kube1 as the control plane and both the ClusterWare and non-ClusterWare systems you joined as workers in the output.

Using Multiple ClusterWare Nodes as a Control Plane#

  1. Create highly available (HA) configuration files (HAProxy and Keepalived) with ClusterWare node n21 (10.154.1.121) as the first control plane node and n22 (10.154.1.122) and n23 (10.154.1.123) as additional control plane nodes:

    cw-kube --prepare-lb 10.154.2.0 n21:10.154.1.121,n22:10.154.1.122,n23:10.154.1.123
    

    Note

    10.154.2.0 is an unused IP within the cluster network. It will be the apiserver virtual IP for these Kubernetes control planes.

  2. Initialize the first control plane node on n21:

    cw-kube -i n21 --init-ha
    

    The following message prints after a successful initialization:

    ...
    To join ClusterWare NODES as control planes to this ClusterWare control plane:
    cw-kube -i NODES --join-ha --certificate-key 1271738c2ee3cda4dc022a9bef8a3166550a608e80d000cdf0dfbe3defb03776 --cluster n21
    ...
    

    Note

    There are also messages about joining non-ClusterWare systems as workers to this ClusterWare control plane.

  3. Verify the first control plane node is ready, and check whether its INTERNAL-IP matches the --cluster value (see Checking Deployment Status). If more than 2 hours have passed since the first control plane node was initialized, generate a new certificate key (see Additional Configuration).

  4. Join n22 and n23 as additional control plane nodes to the first control plane node (n21):

    cw-kube -i n[22-23] --join-ha --certificate-key 1271738c2ee3cda4dc022a9bef8a3166550a608e80d000cdf0dfbe3defb03776 --cluster n21
    
  5. Verify all control plane nodes are ready. See Checking Deployment Status.

  6. Using the messages at the end of step 2 as a guide, join booted ClusterWare nodes n[1-4] as workers to the control plane node n21:

    cw-kube -i n[1-4] --join --cluster n21
    
  7. Create a Kubernetes worker node image and then boot n[5-10] with the new node image as workers to the control plane node n21:

    $ cw-bootctl -i DefaultBoot clone name=KubeWorkerBoot
    $ cw-imgctl -i DefaultImage clone name=KubeWorkerImage
    $ cw-kube --image KubeWorkerImage --join --cluster n21
    $ cw-bootctl -i KubeWorkerBoot up image=KubeWorkerImage
    $ cw-nodectl -i n[5-10] set _boot_config=KubeWorkerBoot
    $ cw-nodectl -i n[5-10] reboot
    
  8. On EACH non-ClusterWare system that you want to join as a worker and where clusterware-kubeadm is installed, join the local system to the HA control plane through its virtual IP (10.154.2.0) with explicit --token, --cahash, and --cluster arguments:

    cw-kube --join --token yp6lxa.wcb6g48ud3f2cwng --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.2.0
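
You can confirm the full topology with the same check as in Checking Deployment Status; all three control plane nodes and the joined workers should report Ready:

    $ cw-nodectl -i n21 exec kubectl get nodes -o wide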
    

Using Multiple Non-ClusterWare Systems as a Control Plane#

  1. On the first control plane system, kube2 (10.154.3.2), where clusterware-kubeadm is installed, create highly available (HA) configuration files (HAProxy and Keepalived) with kube2 as the first control plane node and kube3 (10.154.3.3) and kube4 (10.154.3.4) as additional control plane nodes:

    cw-kube --prepare-lb 10.154.4.0 kube2:10.154.3.2,kube3:10.154.3.3,kube4:10.154.3.4
    

    Note

    10.154.4.0 is an unused IP within the cluster network. It will be the apiserver virtual IP for these Kubernetes control planes.

  2. Initialize the control plane on kube2:

    cw-kube --init-ha
    

    The following message prints after a successful initialization:

    ...
    To join non ClusterWare system as control plane to this non ClusterWare control plane:
    cw-kube --join-ha --token ka8y8y.enwcyfsk4hblayz5 --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --certificate-key 86ae5340eb592759debd51ab9a03c9f9005a5027e7900d3a2fff687de473e2be --cluster 10.154.4.0
    ...
    

    Note

    There are also messages about joining ClusterWare NODES/IMAGE as workers to this non-ClusterWare control plane.

  3. On kube2, verify the first control plane node is ready (see Checking Deployment Status). If more than 2 hours have passed since the first control plane node was initialized, generate a new certificate key (see Additional Configuration).

  4. On kube3, create the same highly available (HA) configuration files (HAProxy and Keepalived) as on kube2, and then join kube3 as an additional control plane node:

    $ cw-kube --prepare-lb 10.154.4.0 kube2:10.154.3.2,kube3:10.154.3.3,kube4:10.154.3.4
    $ cw-kube --join-ha --token ka8y8y.enwcyfsk4hblayz5 --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --certificate-key 86ae5340eb592759debd51ab9a03c9f9005a5027e7900d3a2fff687de473e2be --cluster 10.154.4.0
    
  5. Repeat step 4 on kube4.

  6. Verify all control plane nodes are ready. See Checking Deployment Status.

  7. Using the messages at the end of step 2 as a guide, join booted ClusterWare nodes n[11-14] as workers, with explicit --token, --cahash, and --cluster arguments, to the HA control plane through its virtual IP (10.154.4.0):

    cw-kube -i n[11-14] --join --token ka8y8y.enwcyfsk4hblayz5 --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.4.0
    
  8. Create a Kubernetes worker node image with explicit --token, --cahash, and --cluster arguments, and then boot n[15-20] with the new node image as workers to the HA control plane virtual IP (10.154.4.0):

    $ cw-bootctl -i DefaultBoot clone name=KubeWorkerBoot2
    $ cw-imgctl -i DefaultImage clone name=KubeWorkerImage2
    $ cw-kube --image KubeWorkerImage2 --join --token ka8y8y.enwcyfsk4hblayz5 --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.4.0
    $ cw-bootctl -i KubeWorkerBoot2 up image=KubeWorkerImage2
    $ cw-nodectl -i n[15-20] set _boot_config=KubeWorkerBoot2
    $ cw-nodectl -i n[15-20] reboot
    
  9. On EACH non-ClusterWare system that you want to join as a worker and where clusterware-kubeadm is installed, join the local system to the HA control plane through its virtual IP (10.154.4.0) with explicit --token, --cahash, and --cluster arguments:

    cw-kube --join --token ka8y8y.enwcyfsk4hblayz5 --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.4.0
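
You can confirm the full topology from any control plane system with the same check as in Checking Deployment Status; kube2, kube3, and kube4 should report the control-plane role, and the joined systems should appear as workers:

    kubectl get nodes -o wide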