The UWM High Performance Computing (HPC) Service was established in 2009. The service currently supports a large research cluster called "Avi" and a small educational cluster called "Peregrine." Avi consists of the following hardware:
- 142 compute nodes (1136 cores total). Each node is a Dell PowerEdge R410 server with two quad-core 2.67 GHz Intel Xeon X5550 processors.
- Most compute nodes have 24 GiB of RAM. A few "high memory" nodes with 128 GiB of RAM are provided for jobs that require large amounts of memory and cannot be distributed across multiple nodes.
- One LSF scheduling node, a Dell PowerEdge R710 server, with two quad-core 2.67 GHz Intel Xeon E5520 processors and 24 GiB of system memory.
- One IO node, a Dell PowerEdge R710 server, with two quad-core 2.67 GHz Intel Xeon E5520 processors and 24 GiB of system memory.
- 7 Dell PowerVault MD1000 3 Gb/s SAS-attached expansion units providing 80 TB of RAID 60 and RAID 10 storage. This storage is available to all nodes via NFS.
- Each node has both a QLogic DDR InfiniBand (16 Gb/s) interface and a gigabit Ethernet interface.
- All nodes run Red Hat Enterprise Linux 5.3.
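Jobs on Avi are submitted through the LSF scheduler mentioned above. As a minimal sketch, a batch script might look like the following; the job name, resource requests, and program name are illustrative, and site-specific queue names or limits would need to be confirmed with the service:

```shell
#!/bin/bash
# Hypothetical LSF batch script for Avi; all values are illustrative.
#BSUB -J my_mpi_job        # job name
#BSUB -n 16                # request 16 cores (two full compute nodes)
#BSUB -o output.%J         # stdout file; %J expands to the job ID
#BSUB -W 2:00              # wall-clock limit of 2 hours

# Launch a (hypothetical) MPI program across the allocated cores.
mpirun ./my_mpi_program
```

The script would typically be submitted with `bsub < myjob.lsf`, and its status checked with `bjobs`.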
Periodically, a service report of Avi usage is compiled containing graphs and charts detailing load utilization, usage by community, job submissions by job type, multi-core job requests, and job array requests. This report is helpful in seeing how the service is being utilized and in what manner.
Peregrine consists of the following hardware:
- 8 compute nodes (96 cores total). Each node is a Dell PowerEdge R415 rack-mount server with two six-core AMD Opteron 4180 2.6 GHz processors and 32 GB of system memory.
- One head node, a Dell PowerEdge R415 server, with one six-core AMD Opteron processor and 16 GB of system memory.
- The head node houses a 5 TB RAID 5 array, available to all compute nodes via NFS.
- All nodes are connected by a dedicated gigabit Ethernet network.
- Jobs are scheduled using the Portable Batch System (PBS).
- In addition, Peregrine serves as a submit node and manager for the UWM Condor grid, which provides access to hundreds of idle cores on lab PCs across campus for parallel computing.
- All nodes run FreeBSD 8.3.
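Since Peregrine schedules jobs with PBS rather than LSF, its batch scripts use PBS directives. A minimal sketch, with illustrative job name, resource requests, and program name (actual queue names and limits should be confirmed with the service):

```shell
#!/bin/bash
# Hypothetical PBS batch script for Peregrine; all values are illustrative.
#PBS -N my_mpi_job              # job name
#PBS -l nodes=2:ppn=12          # two nodes, 12 cores per node
#PBS -l walltime=01:00:00       # wall-clock limit of 1 hour
#PBS -o output.log              # stdout file

# PBS starts the job in the home directory, so change to the
# directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch a (hypothetical) MPI program across the allocated cores.
mpirun ./my_mpi_program
```

The script would typically be submitted with `qsub myjob.pbs`, and its status checked with `qstat`.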