KubeVirt Provider#

You can provision KubeVirt virtual machines (VMs) to work with the ICE ClusterWare™ platform. Complete the required configuration steps to set up a KubeVirt provider; you can then provision VMs that function like regular ClusterWare compute nodes.

Tip

The KubeVirt provider works with Harvester after completing the configuration steps.

Configure KubeVirt Provider#

  1. Configure your cluster networking so the ClusterWare head node and Kubernetes worker nodes are on the same layer 2 network.

    1. Ensure the worker node and the ClusterWare head node each have a bridge.

    2. Set up the Kubernetes namespace with a single NetworkAttachmentDefinition (NAD) that has a working configuration. The configuration should reference a real bridge defined on the worker node.

      The NAD must provide a way to connect the virtualization pods to the worker node network, for example through the Multus CNI plugin. With Multus CNI, the NAD .spec may look something like:

      spec:
        config: '{"cniVersion":"0.3.1","name":"mgmt-vm","type":"bridge",
          "bridge":"<worker node bridge name>","promiscMode":true,"ipam":{}}'
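
      Assembled into a full object, and assuming a namespace named development and a worker-node bridge named br0 (both hypothetical names), the complete NAD might look like:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: mgmt-vm
        namespace: development
      spec:
        config: '{"cniVersion":"0.3.1","name":"mgmt-vm","type":"bridge",
          "bridge":"br0","promiscMode":true,"ipam":{}}'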
      
    3. Put the ClusterWare head node and Kubernetes worker node on the same subnet.

  2. Create the KubeVirt provider using the kubeconfig file to populate values for the spec argument.

    For a KubeVirt provider with individual namespace credentials:

    cw-clusterctl providers mk name=<name> type=kubevirt spec='{
       "ca": "<clusters[].cluster.certificate-authority-data>",
       "server": "<clusters[].cluster.server>",
       "namespaces": {
          # insert the info below for each namespace the provider will access
          "<users[].namespace>": {
             "token": "<users[].user.token>"
             OR
             "client_cert": "<users[].user.client-certificate-data>",
             "client_key": "<users[].user.client-key-data>"
          }
       }
    }'
    

    For a KubeVirt provider with common namespace credentials:

    cw-clusterctl providers mk name=<name> type=kubevirt spec='{
       "ca": "<clusters[].cluster.certificate-authority-data>",
       "server": "<clusters[].cluster.server>",
       "token": "<users[].user.token>"
       OR
       "client_cert": "<users[].user.client-certificate-data>",
       "client_key": "<users[].user.client-key-data>"
    }'
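
For token-based credentials, the mapping from kubeconfig fields to the spec argument can be sketched in Python. The kubeconfig content below is a hypothetical, already-parsed example with placeholder values, following the field paths shown in the templates above:

```python
import json

# Hypothetical parsed kubeconfig; all certificate, server, and token
# values are placeholders.
kubeconfig = {
    "clusters": [
        {"cluster": {
            "certificate-authority-data": "CERT123",
            "server": "https://10.1.1.1:6443",
        }}
    ],
    "users": [
        {"namespace": "default", "user": {"token": "abc123"}}
    ],
}

# Build the provider spec for individual namespace credentials.
cluster = kubeconfig["clusters"][0]["cluster"]
spec = {
    "ca": cluster["certificate-authority-data"],
    "server": cluster["server"],
    "namespaces": {
        u["namespace"]: {"token": u["user"]["token"]}
        for u in kubeconfig["users"]
    },
}

# The resulting JSON is the value passed as the spec argument.
print(json.dumps(spec, indent=3))
```

For client certificate authentication, substitute the client_cert and client_key fields for the token field in the same way.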
    

Example KubeVirt Provider Create Commands#

The following command creates a provider that attempts to access the "default" namespace using the specified token when a provider action either includes the --context=`{"namespace": "default"}` argument or omits the --context argument entirely:

cw-clusterctl providers mk name=my-provider type=kubevirt spec='{
   "ca": "CERT123",
   "server": "https://10.1.1.1:6443",
   "namespaces": {
      "default": {
         "token": "abc123"
      }
   }
}'

The following command creates a provider that attempts to access the "my-ns" namespace using client certificate authentication when a provider action includes the --context=`{"namespace": "my-ns"}` argument:

cw-clusterctl providers mk name=my-provider type=kubevirt spec='{
   "ca": "CERT123",
   "server": "https://my.kubecluster.org/k8s/clusters/local",
   "namespaces": {
      "my-ns": {
         "client_cert": "realcert123",
         "client_key": "realkey123"
      }
   }
}'

The following command creates a provider that attempts to use the token below to access any namespace specified in the context:

cw-clusterctl providers mk name=my-provider type=kubevirt spec='{
   "ca": "CERT123",
   "server": "https://10.1.1.1/k8s/clusters/local",
   "token": "abc123admin"
}'

Allocate and Attach KubeVirt Virtual Machines#

  1. Query the shapes available in your Kubernetes cluster:

    kubectl get virtualmachineclusterinstancetype
    
  2. Allocate and attach a VM:

    cw-clusterctl providers -i<provider name> alloc --shape <shape name> --count 1 --attach
    

    Where:

    • <provider name> matches the KubeVirt provider name created during initial provider configuration

    • <shape name> matches one of the shapes provided by virtualmachineclusterinstancetype

The attached VMs should boot and be registered as ClusterWare compute nodes.

Work with KubeVirt Virtual Machines#

To view the VM node status:

cw-nodectl status

To list the provider VMs:

cw-clusterctl providers -i<provider name> resources

Tip

To specify a namespace listed in the spec, add --context to provider commands. For example, the following command lists the provider VMs within the my-namespace namespace:

cw-clusterctl providers -imy-provider resources --context='{"namespace": "my-namespace"}'

Remove KubeVirt Virtual Machines#

The following command removes the associated compute node and deletes the VM:

cw-nodectl rm <node name> --release

Attach Existing KubeVirt Virtual Machines to ClusterWare#

If you have an existing KubeVirt VM created outside of the ClusterWare provider commands, you can attach that VM to ClusterWare as a compute node and set its power_uri attribute to enable power control.

Prerequisites:

  • The ClusterWare KubeVirt provider has correct kubeconfig and KubeVirt access.

  • For traditional PXE boot nodes:

    • The VM's YAML definition specifies the interface and associated MAC address used to boot the VM. For example, bootOrder: 1.

    • The interface used to boot the VM is on the same subnet as the head node and can reach it via DHCP. This may require a NAD resource in the same namespace as the compute VM that references a real bridge on the host machine.
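
For a PXE boot node, the relevant parts of the VM definition might look like the following excerpt. The VM name, namespace, MAC address, and NAD name here reuse the example values from this section; treat them as placeholders, and note that a real VirtualMachine object requires additional fields omitted here:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: NodeA
  namespace: development
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: mgmt
              bridge: {}
              macAddress: b2:a4:ea:58:43:40
              bootOrder: 1
      networks:
        - name: mgmt
          multus:
            networkName: mgmt-vm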

To configure the VM:

  1. Create a compute node:

    cw-nodectl mk mac=<interface mac> power_uri=kubevirt://provider:<uid>/<namespace>/<vm>
    

    Where:

    • interface mac is the MAC address of the interface used to boot the VM

    • uid is the UID of the KubeVirt provider

    • namespace is the name of the namespace

    • vm is the name of the VM

    For example:

    cw-nodectl mk mac=b2:a4:ea:58:43:40 power_uri=kubevirt://provider:05291242174e4d149a3f9c54fee83101/development/NodeA
    
  2. Install the clusterware-node packages on the compute node.

    • For nodes with the operating system installed on a local disk, manually install the clusterware-node, telegraf, and clusterware-telegraf packages. See Installing the clusterware-node Package.

    • For PXE boot nodes, reboot the node and the ClusterWare head node will respond with a proper OS image.