Specifications

The UWM High Performance Computing (HPC) Service was established in 2009. The service currently supports a large research cluster called "Avi" and a small educational cluster called "Peregrine."

Avi specifications

  • 142 compute nodes (1136 cores total). Each node is a Dell PowerEdge R410 server with two quad-core Intel(R) Xeon(R) X5550 processors @ 2.67GHz.
  • Most compute nodes have 24 GiB of RAM. A few "high-memory" nodes with 128 GiB of RAM are provided for programs that require large amounts of memory on a single node.
  • One head node running the SLURM resource manager, a Dell PowerEdge R310 server with 6 Intel(R) Xeon(R) E5-2407 processors @ 2.20GHz and 32 GB of RAM. An identical backup node automatically takes over in the event of a head node failure. (A sketch of the kind of multi-node job SLURM schedules on Avi appears after this list.)
  • A primary I/O node, a Dell PowerEdge R710 server, with two quad-core Intel(R) Xeon(R) E5520 processors @ 2.27GHz, 48 GiB of system memory, and seven Dell PowerVault MD1000 3Gb/s SAS-attached expansion units, serving nine shared RAID 60 and RAID 10 partitions of approximately 7 TB each over NFSv4.
  • One high-speed I/O node, a Dell PowerEdge R720xd with two six-core Intel(R) Xeon(R) E5-2620 processors @ 2.00GHz and 32 GB of RAM, serving a single 10 TB RAID 6 partition over NFSv4.
  • All compute and I/O nodes are linked by QLogic DDR InfiniBand (16 Gb/s) and gigabit Ethernet networks.
  • All nodes currently run CentOS Linux 6.5.
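
To give a sense of how these components are used together, below is a minimal sketch of a multi-node job of the kind SLURM schedules on Avi. It assumes the mpi4py package and an MPI library are available on the cluster (an assumption, not stated above); in practice the script would be launched across compute nodes with sbatch or srun.

    # hello_mpi.py -- minimal sketch of a job spanning several Avi compute nodes.
    # Assumes mpi4py and an MPI implementation are installed (an assumption,
    # not documented above); normally launched under SLURM with srun or sbatch.
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD     # all processes started for this job
    rank = comm.Get_rank()    # this process's ID, 0 .. size-1
    size = comm.Get_size()    # total number of processes in the job

    # Each rank reports the node it landed on, showing that the work is
    # spread across the InfiniBand-connected compute nodes.
    print("rank %d of %d running on %s" % (rank, size, socket.gethostname()))

    # Rank 0 collects a simple per-rank value to demonstrate communication.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print("sum of ranks: %d" % total)

The exact partitions, module setup, and launch command on Avi are not documented here; something along the lines of srun -n 16 python hello_mpi.py is a common pattern on SLURM clusters.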

Peregrine specifications

  • 8 compute nodes (96 cores total). Each node is a Dell PowerEdge R415 rack-mount server with two six-core AMD Opteron 4180 2.6GHz processors and 32 GB of system memory.
  • One head node, a Dell PowerEdge R415 server, with one six-core AMD Opteron processor and 16 GB of system memory.
  • The head node houses a 5 TB RAID 5 array, available to all compute nodes via NFS.
  • All nodes are connected by a dedicated gigabit Ethernet network.
  • Jobs are scheduled using the SLURM resource manager.
  • In addition, Peregrine is a submit node and manager for the UWM HTCondor grid, which provides access to idle cores on lab PCs and other machines around campus for use in embarrassingly parallel computing (see the sketch after this list).
  • All nodes run FreeBSD 9.2 and offer a wide variety of open source software installed via the FreeBSD ports system.
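
To illustrate the embarrassingly parallel workloads mentioned above, here is a hedged sketch of a worker script that processes one independent slice of a larger problem, selected by an integer task ID. The file names and slice logic are hypothetical; under HTCondor the ID would typically be supplied to each job via the $(Process) macro in the submit description.

    # worker.py -- sketch of an embarrassingly parallel task for the UWM
    # HTCondor grid: each queued job handles one independent input slice.
    # Hypothetical usage: python worker.py <task_id>
    # Under HTCondor, <task_id> is commonly passed as $(Process).
    import sys

    def process_slice(task_id, num_items=1000):
        """Do independent work on slice `task_id`; no communication with
        other tasks is required, which is what makes the job grid-friendly."""
        start = task_id * num_items
        # Placeholder computation: sum the integers in this slice.
        return sum(range(start, start + num_items))

    if __name__ == "__main__":
        task_id = int(sys.argv[1])
        result = process_slice(task_id)
        # Each job writes its own output file, combined after all jobs finish.
        with open("result.%d.txt" % task_id, "w") as out:
            out.write("%d\n" % result)

A matching submit description would queue many such jobs (for example, queue 100), letting HTCondor place one on each idle core it finds around campus.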