Required and Recommended Components

ICE ClusterWare™ head nodes are expected to use x86_64 processors running a Red Hat RHEL, Rocky, or similar distribution. See Supported Distributions and Features for specifics.

Important

ClusterWare head nodes currently require a Red Hat RHEL or Rocky 8.4 (or later) or CentOS Stream 8 (or later) base distribution environment due to dependencies on newer SELinux packages. This requirement applies only to head nodes, not compute nodes.
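As a quick sanity check on a prospective head node, the following commands (a minimal sketch using standard tools rather than anything ClusterWare-specific) report the processor architecture and the base distribution version:

    # Confirm the processor architecture and base distribution of a prospective head node.
    uname -m                                        # expect x86_64
    grep -E '^(NAME|VERSION_ID)=' /etc/os-release   # expect RHEL or Rocky 8.4+, or CentOS Stream 8+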

Important

By design, ClusterWare compute nodes handle DHCP responses on the private cluster network (bootnet) using the base distribution's facilities, including NetworkManager. If your cluster installs a network file system or other software that disables this base distribution functionality, then dhclient or custom static IP addresses (and potentially additional workarounds) must be configured.
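For example, if a compute node's NetworkManager-based DHCP handling has been disabled, a static address can be configured on the private-network interface instead. The sketch below assumes a hypothetical connection name enp2s0 and hypothetical bootnet addressing; substitute values appropriate to your cluster:

    # Hypothetical connection name and addresses; adjust to match your bootnet.
    nmcli connection modify enp2s0 ipv4.method manual \
        ipv4.addresses 10.54.0.100/16 ipv4.gateway 10.54.0.1
    nmcli connection up enp2s0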

ClusterWare head nodes should ideally be "lightweight" for simplicity and contain only software that is needed for the local cluster configuration. Non-root users typically do not have direct access to head nodes and do not execute applications on head nodes.

Head node components for a production cluster:

  • x86_64 processor(s) are required, with a minimum of four cores recommended.

  • 8GB RAM (minimum) is recommended.

  • 100GB fast storage (minimum) is recommended. All storage should be backed by NVMe or other performant technology.

The largest consumer of storage is the directory containing packed images, uploaded ISOs, and other artifacts. Its location is set in the file /opt/scyld/clusterware/conf/base.ini and defaults to /opt/scyld/clusterware/storage/.

    The directory /opt/scyld/clusterware/git/cache/ consumes storage roughly the size of the git repos hosted by the system.

Other than the above storage/ and cache/ directories, the directory /opt/scyld/ consumes roughly 300MB. (A quick way to check these sizes is sketched following this list.)

Each administrator's ~/.scyldcw/workspace/ directory contains unpacked images that the administrator has downloaded for modification or viewing.

  • One Ethernet controller (required) that connects to the private cluster network which interconnects the head node(s) with all compute nodes.

  • A second Ethernet controller (recommended) that connects a head node to the Internet.

Multiple Ethernet or other high-performance network controllers (for example, InfiniBand, Omni-Path) are common on the compute nodes, but do not need to be accessible by the head node(s).
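Returning to the storage figures above: to see how much space those areas actually consume on a running head node, a simple disk-usage check such as the following can be used (the paths are the defaults described above; ~ is an administrator's home directory):

    # Report the disk usage of the directories that dominate head node storage.
    du -sh /opt/scyld/clusterware/storage/ \
           /opt/scyld/clusterware/git/cache/ \
           ~/.scyldcw/workspace/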

We recommend employing virtual machines, hosted by "bare metal" hypervisors, for head nodes, login nodes, job scheduler servers, etc., for ease of management. Virtual machines are easy to resize and easy to migrate between hypervisors. See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/ for basic Red Hat documentation.

Note

A bare metal hypervisor host must provide at least the aggregate of the resources required by each hosted virtual server (and ideally the aggregate of the recommended resources), plus several additional CPU cores and additional RAM devoted to the hypervisor functionality itself.

Note

The nmcli connection add tool can be used to create network bridges and to add physical interfaces to those newly created bridges. Once appropriate bridges exist, the virt-install command can attach the virtual interfaces to the bridges, so that the created virtual machines exist on the same networks as the physical interfaces on the hypervisor.
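As an illustrative sketch of that workflow (the bridge name cwbr0, physical interface eno1, VM name, and ISO path are all hypothetical; the memory, CPU, and disk values follow the recommendations above):

    # Create a bridge and enslave a physical interface to it.
    nmcli connection add type bridge con-name cwbr0 ifname cwbr0
    nmcli connection add type ethernet slave-type bridge con-name cwbr0-port1 \
        ifname eno1 master cwbr0
    nmcli connection up cwbr0

    # Install a head node VM whose virtual interface attaches to that bridge;
    # adjust --cdrom and --os-variant to match your installation media.
    virt-install --name cw-head1 --memory 8192 --vcpus 4 \
        --disk size=100 \
        --cdrom /var/lib/libvirt/images/Rocky-8-dvd.iso \
        --os-variant rocky8 \
        --network bridge=cwbr0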

A High Availability ("HA") cluster requires a minimum of three "production" head nodes, each a virtual machine hosted on a different bare metal hypervisor. Even if an HA cluster is not required, we recommend a minimum of two head nodes: one functioning as the production head node, and the other as a development head node that can be used to test software updates and configuration changes before applying the validated updates to the production node.

Compute nodes are generally bare metal servers for optimal performance. See Supported Distributions and Features for a list of supported distributions.

See ICE ClusterWare Overview for more details.
