Materials for Grants
The research computing group is available to assist researchers in developing grant proposals for computational research. General grant-writing support is available at the University of Louisville through the Office of the Executive Vice President for Research. Services offered by research computing include collaboration on grants when suitable in-house expertise is available and providing letters of support for grants when needed (please forward us a description of the project and allow 3 business days' advance notice).
Text describing the research cyberinfrastructure at the University of Louisville, suitable for immediate inclusion in proposals, is offered in general and detailed versions below. The general version follows in the next paragraph, while the detailed version is available in Microsoft Word and PDF formats. A diagram of the cluster is also available for use as a figure or in talks. If a letter of support is required, please contact us to request one.
A general overview of the UofL research cyberinfrastructure follows:
The University of Louisville’s central research-computing infrastructure became available in spring 2009 and was upgraded in spring 2011. This infrastructure includes multiple systems serving the research needs of the entire university, including a general-purpose high-performance distributed-memory computation cluster, a high-memory SMP system, an informatics data management system, a visualization server, and several general-purpose web and software servers.
The general-purpose compute cluster is composed of 525 IBM iDataPlex nodes, 312 of which are equipped with two Intel Xeon L5420 2.5 GHz quad-core processors and 213 of which are equipped with two Intel Xeon X5650 2.66 GHz hexa-core processors, for a total of 5052 processor cores. Each node has between 16 and 48 GB of memory (2 GB to 4 GB per core), and the node interconnects are a mixture of Gigabit Ethernet (1 Gbps) and InfiniBand (16 Gbps) technology. In addition, four of the iDataPlex nodes are equipped with dual NVIDIA M2050 GPGPU cards. The cluster has an estimated peak performance of 35+ TFLOPS.
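For readers checking the aggregate figures, the node and core counts above combine as follows. This is a minimal sketch: the node counts, cores per socket, and clock speeds come from the description above, while the 4 double-precision FLOPs per cycle per core used in the peak estimate is an assumption typical of Xeons of that generation, not a figure from this document.

```python
# Aggregate-core arithmetic behind the cluster figures above.
nodes = [
    # (node count, sockets per node, cores per socket, clock in GHz)
    (312, 2, 4, 2.50),   # quad-core Xeon nodes
    (213, 2, 6, 2.66),   # hexa-core Xeon nodes
]

total_nodes = sum(n for n, *_ in nodes)
total_cores = sum(n * s * c for n, s, c, _ in nodes)

# Theoretical peak = cores x clock x FLOPs/cycle (4 assumed here).
peak_tflops = sum(n * s * c * ghz * 4 / 1000 for n, s, c, ghz in nodes)

print(total_nodes)   # 525
print(total_cores)   # 5052
print(peak_tflops)   # a theoretical upper bound, consistent with the quoted 35+ TFLOPS
```

The quoted 35+ TFLOPS is lower than this theoretical ceiling, as sustained benchmark performance always falls below the raw cores-times-clock product.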
The high-memory SMP system is an IBM p570 server with 16 IBM POWER6 4.7 GHz CPU cores. It is equipped with 128 GB of memory, 8 Gigabit Ethernet interfaces, and a quarter terabyte of local high-performance SAS disk space for system files and scratch space.
The visualization server consists of one IBM xSeries x3755 visualization node. It is equipped with two AMD Opteron 8360 2.5 GHz quad-core processors, 16 GB of memory, an NVIDIA Quadro FX 5600 graphics adapter, and a quarter terabyte of local high-performance SAS disk space for system files and scratch space.
The informatics data management system, based on Oracle's Extended RAC database system, has 40 TB of dedicated storage and is optimized for transaction processing.
All research systems share approximately 600 TB of data storage and archiving space using IBM's General Parallel File System (GPFS). All of the systems are housed in the University's secure data center and are administered by a team of specialized HPC system administrators, supported by a team of research computing consultants with experience in HPC software and in database design, development, and optimization.
Contact us at firstname.lastname@example.org with any questions or for additional information.