Nodes

Login Nodes

Login nodes are front-end nodes that you log into to access the cluster. These nodes are used for compiling code, modifying applications and workflows, managing data between the different mounted filesystems, and managing jobs. Login nodes are shared among all users, so any processing or compiling should be kept as short as possible to avoid impacting other users.

Jobs are submitted using a batch scheduler; Chinook uses Slurm. The status of jobs and nodes can be monitored from the login nodes.

Current login nodes:

chinook03.alaska.edu
chinook04.alaska.edu
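
As a quick sketch (using the standard OpenSSH client and Slurm status commands; `uausername` is a placeholder, not a real account name), connecting to a login node and checking on jobs looks roughly like this:

```bash
# Connect to a login node; "uausername" is a placeholder for your username
ssh uausername@chinook04.alaska.edu

# From the login node, monitor your jobs and the state of the nodes (standard Slurm commands)
squeue -u uausername   # list your queued and running jobs
sinfo                  # show partition and node availability
```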

Data Mover Nodes

Data mover nodes are used for transferring data between the various mounted filesystems, and for moving data between RCS-managed storage and other servers. ONLINE filesystems may be accessible via Samba from the data mover nodes.

Current data mover nodes:

bigdipper.alaska.edu
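
As a hedged example (the file names, remote paths, and `uausername` are placeholders, not actual RCS paths), transferring data through the data mover node with standard tools might look like:

```bash
# Copy a file from your local machine to your cluster home directory via the data mover node
scp results.tar.gz uausername@bigdipper.alaska.edu:~/

# rsync can resume interrupted transfers and is often better suited to large directories
rsync -avP project_data/ uausername@bigdipper.alaska.edu:~/project_data/
```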

Compute Nodes

Compute nodes are where computation is performed; they are accessed through the job scheduler. Generally, to perform tasks faster than on a standard desktop or laptop, you must make use of parallel processing.
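
For illustration only (the partition name below is a placeholder; see the Available Partitions and Batch Scripts pages for actual values), a minimal Slurm batch script that runs a parallel program on the compute nodes could look like:

```bash
#!/bin/bash
#SBATCH --job-name=parallel_example
#SBATCH --partition=PARTITION_NAME   # placeholder; see Available Partitions
#SBATCH --ntasks=48                  # number of parallel tasks (e.g. MPI ranks)
#SBATCH --time=01:00:00

# srun launches the program across the cores Slurm allocated to the job
srun ./my_parallel_program
```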

Analysis/Bio Nodes

Compute nodes with more memory (RAM) than a standard compute node. The number of cores (CPUs) available on these nodes is restricted, so these nodes are used for jobs with low CPU usage but higher memory requirements.
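
A job aimed at these nodes typically requests few CPUs but a large memory allocation. As a sketch of the relevant Slurm directives (the partition name and memory figure are placeholders, not Chinook-specific values):

```bash
#SBATCH --partition=PARTITION_NAME   # placeholder for the analysis/bio partition
#SBATCH --ntasks=4                   # only a few cores...
#SBATCH --mem=192G                   # ...but a large amount of memory
```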

GPU Nodes

Compute nodes with a GPU (Graphics Processing Unit), intended for GPU-accelerated software that makes use of APIs such as CUDA.
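
As a rough sketch (the partition and module names are assumptions; confirm them with `module avail` and the Available Partitions page), a GPU is requested in a Slurm job with the generic-resource option:

```bash
#SBATCH --partition=PARTITION_NAME   # placeholder for the GPU partition
#SBATCH --gres=gpu:1                 # request one GPU on the node

module load cuda                     # module name is an assumption; check `module avail`
./my_cuda_program                    # placeholder GPU-accelerated executable
```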
