PACER - Physics and Astronomy Computer for Education and Research

The Physics and Astronomy Computer for Education and Research (PACER), purchased through a generous gift from the family of Nathan Shrewsbury Lord and Rachel Macauley Smith Lord, will allow faculty and students to engage in frontier computational research in Astronomy, Atmospheric Science, Condensed Matter Physics, and High Energy Physics.

PACER is a 22-teraflop multi-node, multi-core computing platform with 18 compute nodes, one master node, and one GPU node. Each node is equipped with 196 GB of memory and a 2 TB hard disk; the system also includes a 100 TB disk array for data storage and a 100 Gb/s switch for inter-node communication. Each compute node contains two 20-core, 2.4 GHz Intel Xeon processors, for a total of 720 computational cores, and the GPU node contains two NVIDIA TITAN Xp 12 GB G5X graphics cards.

PACER's computing capabilities will address the needs of data-intensive as well as computationally intensive researchers. They will enable UofL researchers to become major players in the computer-aided design of materials; in big-data astronomy using ground-based and satellite telescopes, including the Large Synoptic Survey Telescope (LSST); and in high energy physics using high-luminosity particle colliders (at Tsukuba in Japan and at Fermi National Accelerator Laboratory near Chicago). Thank you to the Lord family for this generous gift, which continues to help make UofL a great place to learn, discover, connect, and work.
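As a sanity check, the quoted core count follows directly from the node configuration, and a rough peak-performance estimate lands in the same ballpark as the 22-teraflop rating. The FLOPs-per-cycle value below is an assumption about the Xeon generation, not a documented specification:

```python
# Core count: 18 compute nodes, each with two 20-core Xeon CPUs
# (the master and GPU nodes are excluded from this total).
nodes = 18
sockets_per_node = 2
cores_per_socket = 20
cores = nodes * sockets_per_node * cores_per_socket
print(cores)  # 720 computational cores, as quoted

# Theoretical peak double-precision throughput, assuming 16 DP FLOPs
# per cycle per core (AVX2 with FMA); the exact figure depends on the
# specific Xeon microarchitecture, which is not stated above.
clock_ghz = 2.4
flops_per_cycle = 16
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(peak_tflops)  # ~27.6 TFLOPS, the same order as the quoted 22 TFLOPS
```

The gap between this idealized estimate and the quoted 22 teraflops is expected: vendor ratings are often benchmarked (e.g., LINPACK) rather than theoretical peaks.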


TEAL - Technology Enabled Active Learning

With generous donations from Mr. Samuel L. Lord, the Department of Physics and Astronomy unveiled a Technology-Enabled Active-Learning (TEAL) classroom on January 6, 2016, in honor of his brother, Mr. Nathan Macauley Lord. Designed for engaged learning, the classroom has all the features of a flipped classroom and more: it integrates the lab experience with lectures and supports chat-room discussions, remote tutoring, and similar activities. The hardware installed in the room includes 18 Mac minis, two smart projectors, two video cameras, two TV screens, and a central content-capture and sharing system that routes video signals to and from several devices. Photographs from the dedication and ribbon-cutting ceremony can be found at:

Other Computational Resources

Condensed Matter Theory (CMT)

  • AMD Opteron Cluster (2.0–2.4 GHz): A distributed-memory cluster of 121 dual-CPU nodes (242 CPUs in total), with 32 nodes carrying 4 GB of memory each and 88 nodes carrying 2 GB each, for a total of 308 GB. Clock speeds range from 2.0 GHz to 2.4 GHz (master node at 2.4 GHz, 32 nodes at 2.2 GHz, and 88 nodes at 2.0 GHz). The nodes are linked by Gigabit Ethernet, and a 700 GB RAID array holds the home directory files.
  • AMD Athlon MP Cluster (1.6–2.1 GHz): An older Linux cluster with 56 compute nodes, 56 GB of memory, and a 200 GB RAID array.
  • SGI Altix 350 System (Intel Itanium 2): Two shared-memory computers (clock speeds of 1.4 GHz and 1.5 GHz, respectively), each with 16 CPUs, 64 GB of memory, and 274 GB of storage.
  • SGI Altix 450 System (Intel Itanium 2): A symmetric multiprocessing (SMP) system with 16 CPUs, 64 GB of shared memory, and 800 GB of storage.

Comparative Planetology Laboratory (CPL)

  • AMD Opteron Cluster (2 GHz): A 64-core Linux parallel computing cluster composed of 3 nodes.

University-wide Computing Resources

  • IBM iDataPlex (Intel Xeon L5420, 2.5 GHz): This distributed-memory cluster will have 312 nodes, each with two quad-core processors, for a total of 2,496 processor cores. Each node has 32 GB of memory, and the nodes will be interconnected by a mixture of Gigabit Ethernet and InfiniBand (16 Gbps) technology. The cluster is expected to have a peak performance of 20+ teraflops.
  • IBM p570 (POWER6, 4.7 GHz): An SMP system with 16 CPUs and 128 GB of shared memory.
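The iDataPlex figures can be cross-checked in the same way. The quoted 2,496-core total implies eight cores per node, i.e., two quad-core CPUs per node (the Xeon L5420 is a quad-core part); the 4 FLOPs-per-cycle value below is an assumption typical of that processor generation's SSE units:

```python
# 312 nodes, each assumed to hold two quad-core Xeon L5420 CPUs
nodes = 312
cores_per_node = 2 * 4
cores = nodes * cores_per_node
print(cores)  # 2496 processor cores, as quoted

# Peak DP throughput assuming 4 DP FLOPs per cycle per core (SSE era):
clock_ghz = 2.5
peak_tflops = cores * clock_ghz * 4 / 1000
print(peak_tflops)  # ~25 TFLOPS, consistent with the quoted 20+ teraflops
```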