Kubernetes#

ICE ClusterWare™ administrators who want to use Kubernetes as a container orchestration layer across their cluster can either install Kubernetes manually, following directions available online, or use the scripts provided by the clusterware-kubeadm package to install and bootstrap Kubernetes clusters.

The provided scripts are based on the kubeadm tool and inherit both the benefits and limitations of that tool. If you prefer to use a different tool to install Kubernetes, follow appropriate directions available online from your chosen Kubernetes provider.

ClusterWare nodes and non-ClusterWare systems can be joined into the same Kubernetes cluster when the servers are on the same network.

  • To use the clusterware-kubeadm scripts on ClusterWare nodes, install the clusterware-kubeadm package on a server from which a ClusterWare admin can access those nodes via scyld-nodectl. Use the following command to install:

    sudo yum --enablerepo=scyld* install clusterware-kubeadm clusterware-tools
    
  • To use the clusterware-kubeadm scripts on non-ClusterWare servers, install the clusterware-kubeadm package on all of those servers; the scripts are run locally on each one. Use the following command to install:

    sudo yum --enablerepo=scyld* install clusterware-kubeadm
    
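To confirm the package is installed on a given server, you can query the RPM database (a quick sanity check; rpm -q is standard on RHEL-family systems):

rpm -q clusterware-kubeadm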

After installing the software, a ClusterWare admin or a root user on a non-ClusterWare system can use the scyld-kube tool to install the Kubernetes cluster. The default Kubernetes version is hardcoded in /opt/scyld/clusterware-kubeadm/files/core/etc/yum.repos.d/kubernetes.repo.default and has been tested. To install a different version of Kubernetes, append a specific version (major.minor.patch) argument to scyld-kube. For example:

scyld-kube --version 1.31.1
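
To see which version the package currently defaults to, you can inspect the repository file mentioned above:

cat /opt/scyld/clusterware-kubeadm/files/core/etc/yum.repos.d/kubernetes.repo.default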

Two Kubernetes control plane configurations are supported: a single control plane node, or a Highly Available control plane (with HAProxy and Keepalived) consisting of a first control plane node plus additional control plane nodes. Both configurations can have additional workers (non-control-plane nodes).

Important

For a server to function as a Kubernetes control plane node or worker, swap must be turned off. Verify the current status with swapon -s, and use swapoff -a -v to disable swap. Do not use a RAM-booted or otherwise ephemeral compute node as a Kubernetes control plane node.
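
On ClusterWare nodes these checks can be run in bulk through scyld-nodectl exec; a minimal sketch, assuming a hypothetical node range n[0-4]:

# Report active swap devices on each node (no output means swap is off)
scyld-nodectl -i n[0-4] exec swapon -s
# Disable swap on each node for the current boot
scyld-nodectl -i n[0-4] exec swapoff -a -v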

The following sections include an example of a single ClusterWare control plane node plus ClusterWare nodes as workers. See Using Kubernetes for additional examples, including non-ClusterWare systems and multiple control plane nodes.

Bootstrap Kubernetes Control Plane#

Initialize the control plane node(s).

For a single-node control plane:

  • For a single ClusterWare node control plane, use the following command:

    scyld-kube -i <control plane node ID> --init
    
  • For a single non-ClusterWare node control plane, use the following command:

    scyld-kube --init
    

For a multi-node control plane:

  • For a ClusterWare multi-node control plane, use the following commands on a ClusterWare admin node (see the worked example after this list):

    scyld-kube --prepare-lb <unused IP> <first control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>
    scyld-kube -i <first control plane node ID> --init-ha
    
  • For a non-ClusterWare multi-node control plane, use the following commands on the first control plane system:

    scyld-kube --prepare-lb <unused IP> <first control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>,<additional control plane node ID>:<node IP>
    scyld-kube --init-ha
    
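For illustration, a hypothetical three-node ClusterWare HA control plane, assuming nodes n[0-2] with IPs 10.154.1.100-102 and an unused address 10.154.1.99 reserved for the load balancer, would be bootstrapped with:

scyld-kube --prepare-lb 10.154.1.99 n0:10.154.1.100,n1:10.154.1.101,n2:10.154.1.102
scyld-kube -i n0 --init-ha

The remaining control plane nodes (n1 and n2) are then joined using the messages printed after initialization; see Additional Configuration below for the certificate key those joins require.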

Example#

Run the following command to initialize ClusterWare node n0 (10.154.1.100) as a control plane node:

scyld-kube -i n0 --init

Messages about joining ClusterWare NODES/IMAGE and non-ClusterWare systems as workers to this ClusterWare control plane are printed after a successful initialization. For example:

...
To join ClusterWare NODES/IMAGE as worker to this Clusterware control plane:
scyld-kube -i NODES --join --cluster n0
scyld-kube --image IMAGE --join --cluster n0

To join non ClusterWare system as worker to this ClusterWare control plane:
scyld-kube --join --token yp6lxa.wcb6g48ud3f2cwng --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.1.100
...
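
Note that the bootstrap token in these messages also expires (after 24 hours by default in kubeadm). If it expires before you join workers, you can generate a fresh one on the control plane with the standard kubeadm command; the token and CA hash it prints can be substituted into the scyld-kube --token and --cahash arguments:

scyld-nodectl -i n0 exec kubeadm token create --print-join-command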

Checking Deployment Status#

Verify that Kubernetes is ready on each system after the first initialization. Verify again after each control plane node or worker node joins.

  • For a ClusterWare control plane, use the following command:

    scyld-nodectl -i <node ID> exec kubectl get nodes -o wide
    
  • For a non-ClusterWare control plane, use the following command:

    kubectl get nodes -o wide
    

Example#

The following example shows that the Kubernetes cluster has ClusterWare node n0 as a working control plane.

[admin@cwhead ~]$ scyld-nodectl -i n0 exec kubectl get nodes -o wide
NAME               STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                            KERNEL-VERSION                 CONTAINER-RUNTIME
n0.cluster.local   Ready      control-plane   1d   v1.31.2   10.154.1.100   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32

Additional Configuration#

Depending on your ClusterWare cluster configuration, the INTERNAL-IP of the ClusterWare control plane may not match the IP address known to the ClusterWare platform. If they differ, replace the --cluster value with the INTERNAL-IP value when using the printed messages to join additional control plane nodes and workers. In the example above, the INTERNAL-IP of the ClusterWare control plane is 10.154.1.100, which matches n0’s IP address known to the ClusterWare platform.
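
For instance, if kubectl had instead reported an INTERNAL-IP of 192.168.10.100 (a hypothetical value), the printed worker join command would become:

scyld-kube -i NODES --join --cluster 192.168.10.100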

If you are joining additional control plane nodes, you may need to generate a new certificate key because the one printed in the output expires after two hours.

For a ClusterWare control plane, use the following command to generate a new key:

[admin@cwhead ~]$ scyld-nodectl -i <node ID> exec kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ad556dcd5c795a42321be46b0a3cf8a52d7a1c7fef6e0bd96c65525569c39105

On a non-ClusterWare control plane, use the following command:

[root@kube1 ~]$ kubeadm init phase upload-certs --upload-certs

Replace the --certificate-key value with the new certificate key you just generated when using the output messages to join additional control plane nodes.
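
For example, using the key generated above, only the --certificate-key value in the printed control plane join command changes; every other argument stays as printed:

--certificate-key ad556dcd5c795a42321be46b0a3cf8a52d7a1c7fef6e0bd96c65525569c39105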

Adding Workers#

  1. Using the messages printed after initialization as a guide, join workers to the control plane.

    To join ClusterWare nodes as workers to a ClusterWare control plane:

    scyld-kube -i n[<node IDs>] --join --cluster <control plane node ID>
    

    To join ClusterWare nodes as workers to a non-ClusterWare control plane:

    scyld-kube -i n[<node IDs>] --join --token <token value> --cahash <cahash value> --cluster <control plane IP>
    

    To join non-ClusterWare systems as workers to a ClusterWare control plane:

    scyld-kube --join --token <token value> --cahash <cahash> --cluster <control plane IP>
    

    To join non-ClusterWare systems as workers to a non-ClusterWare control plane:

    scyld-kube --join --token <token value> --cahash <cahash value> --cluster <control plane IP>
    
  2. For ClusterWare workers, use the following commands to create a Kubernetes worker node image and then boot the nodes with the node image as workers:

    $ scyld-bootctl -i DefaultBoot clone name=<boot name>
    $ scyld-imgctl -i DefaultImage clone name=<image name>
    $ scyld-kube --image <image name> --join --cluster <control plane node ID>
    $ scyld-bootctl -i <boot name> up image=<image name>
    $ scyld-nodectl -i n[<node IDs>] set _boot_config=<boot name>
    $ scyld-nodectl -i n[<node IDs>] reboot

Example#

For the single ClusterWare control plane example, the following messages are printed out after the control plane initialization:

...
To join ClusterWare NODES/IMAGE as worker to this Clusterware control plane:
scyld-kube -i NODES --join --cluster n0
scyld-kube --image IMAGE --join --cluster n0

To join non ClusterWare system as worker to this ClusterWare control plane:
scyld-kube --join --token yp6lxa.wcb6g48ud3f2cwng --cahash sha256:413a6267bac67ff749734749dc8b5f60323a68c64bf7fc8e99292dd9b29040b2 --cluster 10.154.1.100
...
  1. Using the messages printed after initialization as a guide, join ClusterWare nodes (n[1-4]) as workers to the control plane node n0 with the following command:

    $ scyld-kube -i n[1-4] --join --cluster n0
    
  2. Create a Kubernetes worker node image and then boot n[5-10] with the node image as workers to control plane n0:

    $ scyld-bootctl -i DefaultBoot clone name=KubeWorkerBoot
    $ scyld-imgctl -i DefaultImage clone name=KubeWorkerImage
    $ scyld-kube --image KubeWorkerImage --join --cluster n0
    $ scyld-bootctl -i KubeWorkerBoot up image=KubeWorkerImage
    $ scyld-nodectl -i n[5-10] set _boot_config=KubeWorkerBoot
    $ scyld-nodectl -i n[5-10] reboot
    
  3. Verify that Kubernetes is ready on each system using the following command:

    $ scyld-nodectl -i n0 exec kubectl get nodes -o wide
    NAME               STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                            KERNEL-VERSION                 CONTAINER-RUNTIME
    n0.cluster.local   Ready      control-plane   1d   v1.31.2   10.154.1.100   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32
    n1.cluster.local   Ready      <none>          1d   v1.31.2   10.154.1.101   <none>        Rocky Linux 8.10 (Green Obsidian)   4.18.0-553.el8_10.x86_64       containerd://1.6.32
    

    The example output shows the Kubernetes cluster has ClusterWare node n0 as a working control plane and n1 as a worker (worker nodes display ROLES <none> by default).