# CMS-T2 Resources
## Computing
The High Energy Physics group at the University of Wisconsin strives to maintain an information technology infrastructure that is trouble-free, secure, highly available, and well understood. The table below summarizes the expected available computing (and storage) resources. In practice, because of disk failures and a lack of backup disks, the available storage may be less than the totals shown.
Gen | Cores | CPU Class (year purchased) | Slots* | HS06/slot** | Total HS06 | Storage (TB) |
---|---|---|---|---|---|---|
g26 | 32 | 2.2GHz Xeon E5-2660 (2013 Winter) | 384 | 9.64 | 3702 | 480 |
g27 | 40 | 2.2GHz Xeon E5-2660V2 (2013 Fall) | 1200 | 9.64 | 11568 | 320 |
g28 | 40 | 2.2GHz Xeon E5-2660V2 (2014 Fall) | 1360 | 9.64 | 13110 | 346 |
g29 | 40 | 2.3GHz Xeon E5-2650V3 (2015 Fall) | 1160 | 10.88 | 12615 | 316 |
g30 | 40 | 2.3GHz Xeon E5-2650V3 (2016 Mar) | 1200 | 10.88 | 13050 | 477 |
g31 | 48 | 2.2GHz Xeon E5-2650V4 (2016 Fall) | 3264 | 10.21 | 33320 | 1067 |
g32 | 48 | 2.2GHz Xeon E5-2650V4 (2017 Fall) | 960 | 10.21 | 9800 | 1194 |
g33 | 48 | 2.2GHz Xeon E5-2650V4 (2018 Fall) | 816 | 10.21 | 8330 | 1738 |
g34 | 48 | 2.8GHz AMD EPYC 7402P (2019 Fall) | 672 | 14.06 | 9450 | 1818 |
g35 | 64 | 2.3GHz Xeon Gold 5218 (2019 Fall) | 64 | 11.59 | 742 | 0 |
g36 | 64 | 3.0GHz AMD EPYC 7302 (2020 Fall) | 1344 | 15.5 | 20832 | 2600 |
Total | - | - | 12424 | 10.99 (Wtd) | 136519 | 10356 |
*: Slot counts are working batch slots as of March 30, 2021.
**: The HS06 benchmark was run with all services disabled other than sshd, afsd, and xinetd. The following software was used:
GCC: gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
C++: g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
FC: GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
SPEC2006 version: 1.2
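As a quick cross-check of the table, each generation's Total HS06 is simply slots × HS06/slot, and the site total is their sum. The short Python sketch below, with the slot and per-slot figures copied from the table, illustrates the arithmetic (small differences from the published totals are rounding).

```python
# Slots and HS06-per-slot figures copied from the table above.
generations = {
    "g26": (384, 9.64),   "g27": (1200, 9.64),  "g28": (1360, 9.64),
    "g29": (1160, 10.88), "g30": (1200, 10.88), "g31": (3264, 10.21),
    "g32": (960, 10.21),  "g33": (816, 10.21),  "g34": (672, 14.06),
    "g35": (64, 11.59),   "g36": (1344, 15.5),
}

total_slots = sum(slots for slots, _ in generations.values())
total_hs06 = sum(slots * hs06 for slots, hs06 in generations.values())

print(f"Total slots: {total_slots}")                          # 12424
print(f"Total HS06: {total_hs06:.0f}")                        # ~136500 (table: 136519)
print(f"Weighted HS06/slot: {total_hs06 / total_slots:.2f}")  # ~10.99
```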
We use Condor (HTCondor) batch computing software to implement a high-throughput Linux computing environment. Opportunistic computing resources from the Grid Laboratory of Wisconsin (GLOW) and the Center for High Throughput Computing (CHTC) provide the potential to utilize a total of around 10,000 additional Linux CPUs.
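For illustration only, a minimal job submission might look like the sketch below, which assumes reasonably recent HTCondor Python bindings (where `Schedd.submit()` accepts a `Submit` object). The executable name, resource requests, and job count are hypothetical placeholders, not a description of our actual workloads.

```python
import htcondor  # HTCondor Python bindings

# Hypothetical single-core job; file names and resource requests are placeholders.
job = htcondor.Submit({
    "executable": "analyze.sh",
    "arguments": "$(ProcId)",
    "output": "analyze.$(ClusterId).$(ProcId).out",
    "error": "analyze.$(ClusterId).$(ProcId).err",
    "log": "analyze.log",
    "request_cpus": "1",
    "request_memory": "2GB",
    "request_disk": "4GB",
})

schedd = htcondor.Schedd()             # local scheduler daemon
result = schedd.submit(job, count=10)  # queue 10 instances of the job
print("Submitted cluster", result.cluster())
```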
## Network
Compute nodes purchased before 2017 have 1G links to our LAN; all newer nodes have 10G links. The 10G endpoints connect to Nexus 93108TC-EX rack switches, each of which has 100G uplinks to a pair of Nexus 9336C-FX2 building switches. The building switches together connect at 2x100G to the campus research network backbone, which operates on 4x100G links. The UW-Madison research backbone reaches wide area research networks in Chicago via two independent routes, each composed of 2x100G links. In Chicago, the connections are to national research networks including Internet2 and ESnet; via ESnet, our CMS Tier-2 is connected to the LHCONE international network. A simple sketch of the nominal capacities along this path follows the diagram below.
[Research Backbone Network Diagram]
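As a rough, illustrative summary of the paragraph above (not an exact model of the topology), the sketch below lists the nominal capacity at each stage of the outbound path from a post-2017 compute node; the per-stage numbers are taken directly from the link speeds quoted above.

```python
# Nominal aggregate capacity (Gb/s) at each stage of the outbound path,
# taken from the link speeds quoted above. Illustrative only: it ignores
# how many nodes share each uplink and any routing details.
path = {
    "compute node NIC (post-2017)": 10,
    "rack switch uplink to building switches": 100,
    "building switches to campus research backbone": 2 * 100,
    "campus research backbone": 4 * 100,
    "campus to Chicago (2 routes x 2x100G)": 2 * 2 * 100,
}

for stage, gbps in path.items():
    print(f"{stage:48s} {gbps:4d} Gb/s")

# A single node is limited by its own NIC; the shared stages are limited by
# the 2x100G connection from the building switches to the campus backbone.
print("Single-node ceiling:", min(path.values()), "Gb/s")
```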
## Storage
The storage system is based on HDFS with SRM and XRootD interfaces. A total storage space of more than 4PB is distributed across many dedicated and dual-use commodity servers in the cluster. To avoid data loss when hard disks fail, two copies of each HDFS block are maintained, so the amount of data that can be stored is half the total disk space.
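As a small illustration of that last point, usable capacity under 2x block replication is simply the raw disk space divided by the replication factor; the raw capacity in the example below is a made-up round number, not our actual figure.

```python
def usable_capacity_tb(raw_tb: float, replication: int = 2) -> float:
    """Usable HDFS capacity when every block is stored `replication` times."""
    return raw_tb / replication

# Hypothetical example: 10,000 TB of raw disk with 2x replication.
print(usable_capacity_tb(10_000, replication=2), "TB usable")  # 5000.0 TB usable
```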