
Migrating to the New Environment



Migration Overview

The new Chinook environment is intended to be similar to the old Chinook environment, with a newer operating system, Rocky Linux 8, as well as updated software packages. A newer module system, called Lmod, has replaced the old Tcl modules system, and newer 40- and 48-core nodes have been added to Chinook.

Important Changes

  • chinook03.alaska.edu and chinook04.alaska.edu are the login nodes.

  • The Lmod module system has replaced Tcl Modules. Please see our Lmod documentation for how it differs from the old module system. One thing to note is that in Lmod you have to load modules in a particular order (see the first sketch after this list).

  • Software packages have been updated or removed, and the RCS third party software installation policies have been updated. Please see here for our current list of maintained software.

  • Due to the variety of CPU processors on our nodes, jobs may not run if they are using the 48-core nodes combined with the 40- or 28-core nodes.

    • The symptom of this is that jobs will appear to have "stalled" while running on the 48-core nodes (n26-37, n143) and other nodes (n38-104) at the same time.

    • Software built with the Intel toolchain will work in this situation.

    • Software built via Anaconda or the foss toolchain, or software that relies on the GCC suite of compilers, will run into issues. To make sure that your job does not run into this issue, add the "#SBATCH --constraint=Haswell|Broadwell|Skylake" directive to your batch script or to your sbatch command (see the batch script sketch after this list).

  • RCS now recommends Mamba as the Conda package manager due to its faster dependency solving capabilities (a brief example follows below).
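
The following is a minimal sketch of ordered module loading under Lmod. The module names are illustrative assumptions, not Chinook's actual module list; run "module avail" to see what exists on the system.

    # Hypothetical Lmod session: load the compiler toolchain before
    # software built against it (module names are examples only).
    module load foss        # toolchain first
    module load Python      # then a package that depends on the toolchain
    module list             # confirm what is currently loaded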
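
As a concrete illustration of the constraint directive, here is a minimal batch script sketch. The job name, resource request, and executable are placeholder assumptions; only the --constraint line comes from this page.

    #!/bin/bash
    #SBATCH --job-name=example      # placeholder job name
    #SBATCH --ntasks=28             # placeholder resource request
    # Pin the job to compatible CPU generations so GCC- or Anaconda-built
    # binaries are not scheduled across mismatched node types:
    #SBATCH --constraint=Haswell|Broadwell|Skylake

    srun ./my_program               # placeholder executable

On the command line, quote the constraint so the shell does not treat the | characters as pipes: sbatch --constraint="Haswell|Broadwell|Skylake" myscript.sh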
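
Below is a brief hypothetical Mamba session. Mamba accepts the same subcommands as conda; the environment and package names here are examples only.

    # Create an environment with mamba (a drop-in replacement for conda):
    mamba create -n myenv python numpy   # dependency solving is faster than conda's
    conda activate myenv                 # activation can still go through conda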
