Cardinal Research Cluster (CRC) Usage Information

The Cardinal Research Cluster (CRC) is open to all UofL faculty, and to all faculty-sponsored researchers at UofL, for research and instructional purposes. The following guidelines are intended to keep the CRC widely available to all for research and educational use. Faculty can submit a CRC account application online.

All CRC users should cite or acknowledge use of the CRC and/or research consulting services in any grants, publications, posters, or presentations resulting from work performed, at least in part, on the CRC or with ITS Research consulting assistance. We also ask that all faculty users and research groups consider writing extensions or add-ons to the CRC (software and hardware, as needed for your projects) into their grant applications. The CRC continues to exist because of these citations and faculty grant-supported extensions.

Once a user has a CRC account, they are eligible to run jobs on the cluster. Every job submitted to the cluster requires an estimate of the wall time needed for completion. The scheduler uses this wall-time estimate so that short, small jobs may receive priority over earlier long, large jobs while it waits for resources to become available for the large jobs. Contact us with questions about scheduling.
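As a sketch, a batch script can declare its wall-time estimate with scheduler directives. PBS/Torque-style syntax is assumed here (suggested by the qstat command used on the CRC), and the resource amounts and job name are placeholders, not CRC defaults; check the CRC documentation for the exact syntax.

```shell
#!/bin/bash
# Minimal job script sketch. The #PBS directives assume a PBS/Torque-style
# scheduler; the resource amounts and job name below are placeholders.
#PBS -l walltime=00:30:00   # wall-time estimate: 30 minutes
#PBS -l nodes=1:ppn=1       # one core on one node
#PBS -N short_job           # job name shown by qstat

host=$(hostname)
echo "running on $host"
# Submit from a login node with:  qsub short_job.sh
```

The scheduler kills jobs that exceed their requested wall time, so it is worth padding the estimate modestly rather than guessing low.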

The CRC uses a fairshare prioritization system for scheduling to ensure all users are given a reasonably equal share of the resources available. Several queues are available – run the command qstat -q from the login nodes to see the status and current load for each queue as well as their respective job time limits.

Our preventive maintenance (PM) window and system time, when necessary, will usually occur from 10 PM Friday to 2 AM Saturday. Users will be given at least one week's notice unless emergency maintenance is required. Typically, when a full system shutdown is required, the system will restart with an empty queue, and users are responsible for resubmitting jobs. The system is not expected to be taken down more often than once per month, except for emergencies, which by their nature do not permit advance notice before running jobs are impacted.

It is good practice to have running jobs write restart files in order to minimize lost work when jobs are affected by such downtime. It is also a good usage guideline to submit jobs so that the user is emailed when the job terminates, whether by completion or by being deleted for system time. Research computing consultants offer HPC seminars, as needed or upon request, that explain how to submit jobs with these features.
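For example, under a PBS/Torque-style scheduler (an assumption; confirm the CRC's actual syntax at a seminar or with a consultant), mail directives and a simple restart-file pattern might look like the following. The email address is a placeholder, and the counting loop stands in for a real computation:

```shell
#!/bin/bash
# Sketch of a job script with email notification and periodic restart files.
# PBS/Torque-style directives are an assumption, not confirmed CRC syntax.
#PBS -l walltime=24:00:00
#PBS -m ae                       # mail on (a)bort and (e)nd of the job
#PBS -M someone@louisville.edu   # placeholder address: use your own

step=0
if [ -f restart.dat ]; then      # resume from the last checkpoint if present
    step=$(cat restart.dat)
fi
while [ "$step" -lt 5 ]; do      # stand-in for the real computation loop
    step=$((step + 1))
    echo "$step" > restart.dat   # write a restart file each iteration
done
```

If the job is killed partway through, resubmitting the same script resumes from the last value written to restart.dat instead of starting over.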

You may also contact us for custom grant text, letters of support, and/or example citations and acknowledgements as needed. Boilerplate text for computing facilities description sections is available in our Knowledge Base. The preferred text for all general acknowledgements is: "This work was conducted in part using the resources of the University of Louisville's Research Computing group and the Cardinal Research Cluster."

A few usage requirements must be observed on the HPC cluster:

The login nodes (user1, user2, and user3) are only for compiling code and submitting jobs. Processes larger or longer-running than these tasks require will be killed automatically.

Interactive tasks should be carried out in interactive sessions on the compute nodes. The compute nodes, likewise, are for running jobs only; access to these nodes is restricted to jobs submitted through the queuing system.
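A hypothetical example of requesting such a session, using PBS/Torque-style syntax (an assumption; confirm the exact options and limits with the CRC documentation):

```shell
# Request a one-hour interactive session on a single compute-node core,
# instead of running the work on a login node. PBS/Torque-style syntax is
# an assumption; option names and limits vary by site.
qsub -I -l walltime=01:00:00 -l nodes=1:ppn=1
```

When the scheduler grants the request, it opens a shell on a compute node; exiting that shell ends the session and releases the resources.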

Each user will have access to a home directory and a scratch directory.

The home directory is backed up daily and subject to a soft disk quota of 10 GB for students and 25 GB for faculty members. Running jobs should write to the scratch space, not to a user's home directory (if possible) and not to /tmp on the compute nodes. The scratch directory is NOT backed up and is not for long-term storage.
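As a sketch of this guideline, a job body can change into the scratch area before doing its heavy I/O and copy back only the results worth keeping. The SCRATCH variable and paths below are hypothetical; substitute the actual scratch location assigned to your CRC account:

```shell
#!/bin/bash
# Write job output to scratch rather than $HOME or /tmp. The $SCRATCH
# variable is hypothetical (with a local fallback so the sketch runs);
# use the real scratch path assigned to your CRC account.
SCRATCH="${SCRATCH:-$PWD/scratch-demo}"
mkdir -p "$SCRATCH/run1"
cd "$SCRATCH/run1"
echo "results" > output.dat              # heavy I/O goes to scratch
# cp output.dat "$HOME/project/"         # copy back only what must persist
```

Copying final results back to the home directory matters because scratch is not backed up and may be purged.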

Users who have exceeded their home directory quota will have their oldest files tagged for deletion and will receive an email to that effect; the files will then be subject to purging on the specified date. Purging is not an ongoing or regular process; purges occur as needed.
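To stay under quota, users can check how much space a directory occupies with standard tools such as du (GNU coreutils). In this sketch a temporary directory stands in for a home directory:

```shell
# Measure disk usage of a directory with du. A temporary directory filled
# with a 64 KB file stands in for $HOME in this sketch.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/data.bin" bs=1024 count=64 2>/dev/null
usage_kb=$(du -sk "$dir" | awk '{print $1}')   # total usage in kilobytes
echo "usage: ${usage_kb} KB"
rm -rf "$dir"
```

Running du -sk against the home directory itself (and sorting with du -sk * | sort -n inside it) makes it easy to find and remove the largest offenders before a purge date arrives.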

During regular usage, if the available system disk space drops below a threshold value (typically 10%), no new jobs will be accepted, and in extreme cases running jobs will be killed. If users are unable to free enough space, emergency purges may occur without user notification: first of scratch space, then of user data in home directories that are over quota. Once the available space is back above the threshold, the queues will be reopened.

The CRC is the property of the University of Louisville, and use of the system is for authorized users only. It is the policy of the University to protect and safeguard protected health information created, acquired, and maintained in accordance with the Privacy Regulations promulgated pursuant to the Health Insurance Portability and Accountability Act of 1996 and all applicable state laws. Use of your account constitutes your approval and acceptance of this agreement; acceptance of this agreement is a condition of use.

Contact us with any questions or for additional information.