CRC Usage Policies

The Cardinal Research Cluster (CRC) is open to all UofL faculty, and to all faculty-sponsored researchers at UofL, for research and instructional purposes. The following policies are intended to ensure that the CRC remains widely available to anyone needing to use it for research or educational purposes.


ALL CRC users are asked to cite or acknowledge use of the CRC and/or IT's research consulting services in any grants, publications, posters, or presentations resulting from work performed, at least in part, on the CRC or with IT's research consulting assistance. We also ask that all faculty users and research groups consider writing extensions or add-ons to the CRC into their grant applications (software and hardware as needed for their projects). Without both citations and faculty grant-supported extensions, the CRC will not last forever! Assistance is available on our Materials for Grants page; in particular, boilerplate text for use in facilities description sections is available there. You may also contact us at for custom grant text, letters of support, and/or example citations and acknowledgements as needed.

The preferred text for acknowledgements is: "This work was conducted in part using the resources of the University of Louisville's research computing group and the Cardinal Research Cluster."


Once a user has a CRC account, they are eligible to run jobs on the cluster. Every job submitted on the cluster requires an estimate of the wall time needed for completion. The scheduler uses this wall-time information so that short, small jobs may receive priority over earlier long, large jobs while the scheduler waits for resources to become available for the large jobs.
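As an illustration, with a PBS/TORQUE-style scheduler (suggested by the `qstat -q` command referenced below), the wall-time estimate is supplied as a resource request in the job script. The job name, resource values, and program name here are hypothetical placeholders:

```shell
#!/bin/bash
# Hypothetical PBS/TORQUE batch script -- adjust names and values to your job.
#PBS -N my_job             # job name (example)
#PBS -l walltime=02:00:00  # wall-time estimate: 2 hours; a tight, honest
                           # estimate helps the scheduler fit the job in sooner
#PBS -l nodes=1:ppn=8      # one node, 8 processors per node (example values)

cd "$PBS_O_WORKDIR"        # start in the directory the job was submitted from
./my_program               # hypothetical executable
```

A script like this would be submitted from a login node with, e.g., `qsub my_job.sh`.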


A few usage policies must be observed on the HPC cluster. The login nodes (user1, user2, and user3) are only for compiling code and submitting jobs. Processes larger or longer-running than necessary for these tasks will be killed automatically. Interactive work should be carried out in interactive sessions on the compute nodes. Likewise, the compute nodes are for running jobs only; access to these nodes is restricted to jobs submitted through the queuing system.


The CRC uses a fair-share prioritization system for scheduling to ensure that all users receive a reasonably equal share of the available resources. Several queues are available; run the command qstat -q from a login node to see the status, current load, and job time limits for each queue. The short and long queues are for general-purpose computing, with short as the default. These queues differ only in their maximum runtimes and default priorities: longer, larger jobs receive lower default priorities. The dev queue is for interactive development work ONLY; general jobs submitted to it will be terminated. The reserved, gpgpu, p570, and crcadmin queues are for special tasks. The crcadmin queue is reserved for users needing extra priority to meet grant proposal, publication, and similar deadlines. The reserved queue is restricted to users running jobs that cannot checkpoint (i.e., are not restartable from where they left off) within the time limits placed on the long queue. The gpgpu and p570 queues are special-purpose hardware queues for users whose codes can take advantage of that hardware. Please contact us at if you believe you need access to one or more of the special queues.
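For example, an interactive development session on the dev queue can typically be requested with qsub's interactive flag; the wall-time value below is illustrative, not the queue's actual limit:

```shell
# Request an interactive shell on a compute node via the dev queue.
# Check the dev queue's actual time limit with: qstat -q
qsub -I -q dev -l walltime=01:00:00
```

When the scheduler grants the request, the session opens a shell on a compute node, which keeps interactive work off the login nodes as the policy above requires.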


Each user will have access to a home directory and a scratch directory. The home directory is backed up daily and is subject to a soft disk quota of 10 GB for students and 25 GB for faculty members. Running jobs should write to the scratch space, not to a user's home directory (if possible) or to /tmp on the compute nodes. The scratch directory is NOT backed up and is not for long-term storage. Users who have exceeded their home directory quota will have their oldest files tagged for deletion and will receive an email to that effect; the tagged files are then subject to purging on the specified date. Purging is not an ongoing or regular process; purges will occur as needed. During regular usage, if the available system disk space drops below a threshold value (typically 10%), no new jobs will be allowed, and in extreme cases running jobs will be killed. If users are unable to free enough space, emergency purges may occur without user notification, first of scratch space and then of user data in home directories that are over quota. Once the available space is back above the threshold, the queues will be re-opened.
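A job script can follow the scratch-space policy by creating a per-job directory under scratch, running there, and copying results back at the end. The scratch path below is a placeholder (as are the directive values and program name); check the CRC documentation for the actual location:

```shell
#!/bin/bash
#PBS -N scratch_example     # example job name
#PBS -l walltime=04:00:00   # example wall-time estimate

# SCRATCH_BASE is a placeholder -- substitute the cluster's real scratch path.
SCRATCH_BASE=/scratch/$USER
WORKDIR=$SCRATCH_BASE/$PBS_JOBID   # per-job directory keeps runs separate

mkdir -p "$WORKDIR"
cd "$WORKDIR"

./my_program > output.log   # hypothetical program; writes to scratch,
                            # not to $HOME and not to /tmp on the node

# Copy results home when done: scratch is not backed up and may be purged.
cp output.log "$PBS_O_WORKDIR/"
```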


System time and preventive maintenance (PM), when necessary, will usually occur Friday evenings from 10 PM to 2 AM. Users will be given at least one week's notice unless emergency maintenance is required. Typically, when a full system shutdown is required, the system will restart with an empty queue, and users are responsible for resubmitting their jobs. The system is not expected to be taken down more than once per month except for emergencies, which by their nature will not permit advance notice before running jobs are affected. It is good practice to have running jobs write restart files in order to minimize the loss when jobs are affected by such downtime. It is also good practice to submit jobs so that the user is emailed when a running job terminates, whether by completing or by being deleted due to system time. Research computing consultants will offer HPC seminars, as needed or upon request, that explain how to submit jobs with these features.
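With a PBS/TORQUE-style scheduler, email notification on job termination is usually requested with the -m and -M directives in the job script; the address below is a placeholder:

```shell
# Mail events: a = job aborted, e = job ended; combine as needed.
#PBS -m ae
# Delivery address -- placeholder, substitute your own.
#PBS -M your-email@example.edu
```

With these directives, a mail is sent both when a job finishes normally and when it is killed, for example by a system-time shutdown, which is the notification the paragraph above recommends.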

For ALL CRC Users...

The CRC is the property of the University of Louisville. Use of the system is for authorized users only. It is the policy of the University to protect and safeguard protected health information created, acquired, and maintained in accordance with the Privacy Regulations promulgated pursuant to the Health Insurance Portability and Accountability Act of 1996 and all applicable state laws. Use of your account constitutes your approval and acceptance of this agreement. Acceptance of this agreement is a condition of use.

Contact us at with any questions or for additional information.