Supported Technologies and Platforms

Updated: December 2, 2022


The Fred Hutch provides researchers with on-campus access to high performance computing using on-premise resources, for computing needs that rise above those that can be met with your desktop computer. The various technologies provided are outlined here, along with the basic information researchers need to identify which resource is best suited to their work.

Common reasons to move to these high performance computing (HPC) resources include:

  • reproducible compute jobs
  • version controlled, Linux-only and/or specialized software
  • increased compute capability
  • rapid access to large data sets in central data storage locations

Overview of On-Premise Resources

| Compute Resource | Access Interface | Connection to FH Data Storage |
|---|---|---|
| Rhino | CLI, FH credentials on campus/VPN off campus | Direct to all local storage types |
| NoMachine | NX Client, FH credentials on campus/VPN off campus | Direct to all local storage types |
| Gizmo | Via Rhino or NoMachine hosts (CLI, FH credentials on campus/VPN off campus) | Direct to all local storage types |

If you’re new to remote cluster usage, please see this tutorial for step-by-step instructions for logging on to rhino and gizmo.

Rhino

Rhino, or more specifically the Rhinos, are three locally managed HPC servers, all accessed via the name rhino. Together, they function as a data and compute hub for a variety of data storage resources and high performance computing (HPC) resources such as those in the table above. The specific guidance for each approach to HPC access differs slightly, but all of them require the user to learn how to access and interact with rhino.

More information on SSH configuration for access to rhino can be found here. Specific guidance for using rhino, from the basics to more detailed descriptions, is available in our Resource Library.
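As a convenience, many users add an entry for rhino to their SSH client configuration. The sketch below shows one possible ~/.ssh/config entry; the fully qualified hostname and the username are placeholders, not confirmed values, so substitute whatever your setup requires.

```
# Sketch of a possible ~/.ssh/config entry for rhino
# (hostname and username are placeholders, not confirmed values)
Host rhino
    HostName rhino.fhcrc.org    # assumption: substitute the actual fully qualified name
    User your_hutchnet_id       # placeholder: your HutchNet ID
```

With an entry like this in place, running ssh rhino from a terminal (on campus, or off campus over VPN) opens a session on one of the rhino hosts.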

Gizmo Cluster

While we generally don’t recommend interactive computing on the HPC clusters (interactive use can limit the amount of work you can do and introduce “fragility” into your computing), there are many scenarios where interactively using cluster nodes is a valid approach. For example, if you have a single task that is too much for a rhino, opening a session on a cluster node is the way to go.

If you need an interactive session with dedicated resources, you can start a job on the cluster using the command grabnode. The grabnode command will start an interactive login session on a cluster node. This command will prompt you for how many cores (probably 1 unless you know your task is multi-threaded), how much memory, and how much time you estimate will be required. This command can be run from any rhino host.

While most users will follow the interactive screen prompts to execute grabnode, the command will also accept some common sbatch options and flags. Contact scicomp if you need options beyond those offered by the grabnode prompts.
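As a rough sketch of the interactive workflow (the username is a placeholder, and the exact prompt text is not reproduced here):

```bash
# Connect to a rhino host from your workstation (on campus, or over VPN off campus)
ssh your_hutchnet_id@rhino

# Request an interactive session on a gizmo node; grabnode prompts for
# cores, memory, and estimated time, then starts a login session on the allocated node
grabnode
```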

For non-interactive use of gizmo, see our pages on Computing Environments and Software and Job Management and perhaps Parallel Computing.
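For reference, a minimal Slurm batch script might look like the following sketch; the partition name, resource requests, and workload are illustrative assumptions rather than recommendations:

```bash
#!/bin/bash
#SBATCH --job-name=example      # illustrative job name
#SBATCH --partition=campus      # example partition from the tables below
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=01:00:00

# Replace with your actual workload (placeholder command)
./my_analysis.sh
```

A script like this would be submitted from a rhino host with sbatch, for example sbatch example.sh.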

Access to the Gizmo cluster requires both a HutchNet ID and an association with a PI account on the cluster. If you get errors like “Invalid account” when using grabnode or Slurm commands like sbatch, please contact scicomp.

NoMachine Access

NoMachine is a software suite that allows you to run a Linux desktop session remotely. The session runs on the NoMachine server but is displayed on your desktop or laptop using the NoMachine client. NoMachine (also abbreviated NX) is installed on Center IT supported PC desktops and laptops.

NX has the particular advantage of maintaining your session even when you disconnect or lose connectivity. All that is required is to restart the client, and your session will be as you last left it.

The three rhino hosts are available for NX sessions: rhino01, rhino02, and rhino03. The name rhino is an alias that returns one of those three names and should not be used for NoMachine sessions.

Resource and Node Description Information

Below we describe the current basic configurations (node types, counts, and memory) available for a variety of scicomp-supported computing resources. These tables are useful when deciding what type of resources to request when using rhino and gizmo for interactive and non-interactive jobs. The tables are auto-generated and a work in progress, so that we can provide the most up-to-date information on the Wiki for your use. Please file an Issue in our GitHub repository if you notice something amiss or need clarification.

Resource Information

| Name | Type | Authentication | Authorization | Location |
|---|---|---|---|---|
| rstudio | web | web | hutchnetID | FHCRC |

Cluster Node Information

The particular number and resources of cluster nodes available to Fred Hutch researchers depend on the resource and are described here. Details include:

  • Partition: All nodes of a given generation are in one or more partitions to facilitate resource use efficiency
  • Node Gen: Nodes are named after their cluster + generation + sequential ID (ex: gizmoj23)

GIZMO

Location: FHCRC

| Partition | Node Gen | Node Count | CPU | Cores | Memory |
|---|---|---|---|---|---|
| campus, short, new | j | 42 | Intel Gold 6146 | 24 | 384GB |
| campus, short, new | k | 170 | Intel Gold 6154 | 36 | 768GB |
| chorus | harmony | 8 | AMD EPYC 9354P | 32 | 1536GB |
| none (interactive use) | rhino | 3 | Intel Gold 6154 | 14 | 384GB |

| Partition | Node Gen | GPU Count | GPU | Compute Capability | GPU Memory |
|---|---|---|---|---|---|
| campus, short, new | j | 1 | NVIDIA GTX 1080ti | 6.1 | 10.92 GB |
| campus, short, new | k | 1 | NVIDIA RTX 2080ti | 7.5 | 10.76 GB |
| chorus | harmony | 4 | NVIDIA L40S | 8.9 | 44 GB |
| none (interactive use) | rhino | 1 | NVIDIA GTX 1080ti | 6.1 | 10.92 GB |

Resource Detail

Specific details about our cluster(s):

  • Network: The underlying network fabric may affect jobs that rely on inter-node messaging
  • Local Storage: The amount and type of storage on each node, used as TMPDIR during jobs, but also available for other job use (see the sketch after this list)
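To illustrate how a job might use this local storage, here is a minimal sketch of a batch script that stages data in TMPDIR; the paths, resource requests, and commands are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=scratch-example   # illustrative job name
#SBATCH --mem=8G
#SBATCH --time=02:00:00

# TMPDIR points at the node's local storage for the duration of the job;
# staging input there can reduce repeated reads from network storage.
cp /path/to/input.dat "$TMPDIR"/           # placeholder input path
./process_data "$TMPDIR"/input.dat \
    > /path/to/output/results.out          # placeholder command and output path
```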

GIZMO

Location: FHCRC

| Node Gen | Network | Local Storage |
|---|---|---|
| j | 10G (up to 1GB/s throughput) | 7TB @ /loc (300MB/s throughput / 1000 IOps) |
| k | 10G (up to 1GB/s throughput) | 6TB @ /loc |
| harmony | 10G (up to 1GB/s throughput) | 3TB @ /loc |
| rhino | 10G (up to 1GB/s throughput) | 6TB @ /loc |
