Updating ICE ClusterWare Software

From time to time, updates and add-ons to the ICE ClusterWare™ platform are released. Customers on active support plans can access these updates on the Penguin Computing website. Visit https://www.penguinsolutions.com/computing/support/technical-support/ for details. That website offers answers to common technical questions and provides access to application notes, software updates, and product documentation.

ClusterWare release versions follow the traditional format of three dot-separated numbers: <major>.<minor>.<patch>. Updating to a newer major release should be done with care: updating ClusterWare 11 to ClusterWare 12 requires an awareness of specific issues that are discussed later in this section.

The Release Notes contain brief notes about the latest release, and the Changelog provides a history of significant changes for each software release and a list of Known Issues and Workarounds.

Updating head nodes

The scyld-install tool is used to update ClusterWare software on a head node, just as it was used to perform the initial installation. The tool first determines whether a newer clusterware-installer package is available, and if so it updates clusterware-installer and then restarts scyld-install.
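
For example, assuming the head node was originally installed with scyld-install and can reach the needed repositories, an update that skips the confirmation prompt described in the Note below might look like:

    scyld-install --update

Run the tool as a user with appropriate sudo privileges.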

Important

A simple yum update will not update ClusterWare packages on a head node, because the scyld-install tool has disabled /etc/yum.repos.d/clusterware.repo in order to prevent yum update from inadvertently updating the ClusterWare software. Instead, Penguin Computing strongly recommends using the scyld-install tool to update the basic ClusterWare packages that were originally installed by scyld-install. To install or update any optional ClusterWare packages described in Additional Software, you must use sudo yum <install-or-update> --enablerepo=scyld* <packages>.
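
For instance, a hypothetical optional package <some-package> could be installed and later updated with:

    sudo yum install --enablerepo=scyld* <some-package>
    sudo yum update --enablerepo=scyld* <some-package>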

Important

scyld-install uses the yum command to access the ClusterWare software repository and potentially various other repositories (for example, Red Hat RHEL or Rocky) that by default reside on Internet websites. However, if the head node(s) do not have Internet access, then the required repositories must reside on local storage that is accessible by the head node(s). See Creating Local Repositories without Internet.

Note

Executing scyld-install with no arguments presupposes that the ClusterWare platform is not yet installed. If the ClusterWare platform is currently installed, then the tool asks for positive confirmation that the user does intend to update existing software. You can avoid this interaction by providing the -u or --update argument. The same degree of caution applies when executing scyld-install --update on a server that does not currently have the ClusterWare platform installed: the tool asks for positive confirmation that the user does intend to install the ClusterWare platform as a fresh install.

Important

Updating from 12.0.1 or earlier to 12.1.0 requires reconfiguration of the Influx/Telegraf monitoring stack. Update the necessary config files by executing:

    /opt/scyld/clusterware/bin/influx_grafana_setup --tele-env
    systemctl restart telegraf

All data will persist through the upgrade.

The scyld-install tool updates only the basic ClusterWare head node software that was previously installed by the tool, plus any other dependency packages. After the ClusterWare software is updated, you can execute yum check-update --enablerepo=scyld* | grep scyld to view the optional ClusterWare packages that were previously installed using yum install --enablerepo=scyld*, and then use sudo yum update --enablerepo=scyld* <PACKAGES> to update (or not) as appropriate for your local head node.

You can also execute yum check-update to view the installed non-ClusterWare packages that have available updates, and then use sudo yum update <PACKAGES> to selectively update (or not) as appropriate for your local head node.
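
Putting these checks together, a typical post-update review might look like the following, where <PACKAGES> is a placeholder for the package names you choose to update:

    # List optional ClusterWare packages with available updates
    yum check-update --enablerepo=scyld* | grep scyld
    # List non-ClusterWare packages with available updates
    yum check-update
    # Selectively update the chosen packages
    sudo yum update --enablerepo=scyld* <PACKAGES>
    sudo yum update <PACKAGES>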

Alternatively, scyld-install --clear-all empties the database and clears the current installation. Just as during an initial installation, after a --clear-all the database should be primed with a cluster configuration. The cluster configuration can be loaded at the same time as the --clear-all by using the --config /path/to/cluster-conf argument. This uses the scyld-cluster-conf tool to load the cluster configuration's initial declaration of the private cluster interface, maximum number of nodes, starting IP address, and MAC address(es), as described in Execute the ICE ClusterWare Install Script. See scyld-cluster-conf for more details about the scyld-cluster-conf tool.

Similar to using scyld-install on a non-ClusterWare server to perform a fresh install or to join another head node to an existing cluster, executing scyld-install --clear-all --config /path/to/cluster-conf will invoke the scyld-add-boot-config script to create a new default boot image.
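
For example, to reset the database and reload a saved cluster configuration in a single step:

    scyld-install --clear-all --config /path/to/cluster-conf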

Updating compute nodes

A compute node can be dynamically updated using a simple yum update, which will use the local /etc/yum.repos.d/*repo file(s). If the compute node is executing a ClusterWare-created image, then these changes (and any other changes) can be made persistent across reboots by using scyld-modimg and performing the yum install and yum update operations inside the chroot. See Modifying Images for details.
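
As a sketch, assuming a node named n0 and an image named DefaultImage (both names, and the options shown, are illustrative; see Modifying Images for the exact scyld-modimg invocation):

    # Dynamic update of the running node; lost at the next reboot
    ssh n0 sudo yum update
    # Persistent update: perform the same yum operations inside the image chroot
    scyld-modimg -i DefaultImage --chroot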

Updating ClusterWare 11 to ClusterWare 12

ClusterWare version 11 updates cleanly to version 12, although the CW11-built boot configurations and images are retained.

Important

A cluster using the ClusterWare Couchbase database must first switch that database to etcd.

Important

You must examine /etc/yum.repos.d/clusterware.repo and potentially edit that file to reference the ClusterWare version 12 repos. If a baseurl= line contains the string clusterware/11/, then change 11 to 12. If a gpgkey line contains RPM-GPG-KEY-PenguinComputing, then change PenguinComputing to scyld-clusterware.
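
One way to make both edits is a scripted substitution such as the following sketch; it keeps a backup copy, and you should verify the resulting file before updating:

    sudo sed -i.bak \
        -e 's|clusterware/11/|clusterware/12/|' \
        -e 's|RPM-GPG-KEY-PenguinComputing|RPM-GPG-KEY-scyld-clusterware|' \
        /etc/yum.repos.d/clusterware.repo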

CW11-based compute nodes are compatible with CW12 parent head nodes. However, to take full advantage of the additional functionality of CW12, after updating the CW11 head node(s) you should also update CW11 images to CW12 with at least the newest version of clusterware-node. See Updating compute nodes, above.
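
For example, inside an image chroot (see Updating compute nodes, above), the node package could be updated along these lines; whether --enablerepo is needed depends on the repo configuration inside the image:

    yum update --enablerepo=scyld* clusterware-node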