Computer Research Support

Network infrastructure

UC Merced's campus network is built almost entirely on ExtremeNetworks hardware. Connectivity to our upstream provider, CENIC (also known as CalREN), comes into campus over a pair of 1Gb fiber connections, one from each of two separate networks. The first connects UC Merced to the Internet at 1Gb and serves as our commodity Internet link. The second connects UC Merced to "private" CENIC research-connected facilities up and down the West Coast and serves as our private network for research connectivity.

On campus, the border switches connect via dual-redundant 2Gb aggregated fiber links to a pair of dual-redundant ExtremeNetworks Black-Diamond 10808 multi-layer core switches; each core has a 2Gb aggregated fiber link from the border switch. All internal routing is performed at these campus cores. Each campus building connects downstream to the campus cores over dual-redundant 2Gb fiber links, for a combined aggregate of 4Gb per academic building. From each building core to the building's main server room we use a 2Gb aggregated fiber link, so every server room has a 2Gb uplink back to its building core. Inside the server rooms, devices connect to access switches on 10/100/1000 switch ports, which can be aggregated into bundles of up to 2Gb per device.
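For reference, the link-aggregation arithmetic above can be tallied in a short sketch. The tier names and speeds come from the description; this is purely illustrative and not a switch configuration.

```python
# Quick tally of the aggregate link speeds described above (values in Gb/s).
# Figures are taken from the text; this is illustrative only.

building_links = 2          # dual-redundant links from the campus cores to each building
per_link_gbps  = 2          # 2Gb fiber link each
building_aggregate = building_links * per_link_gbps   # 4Gb per academic building

tiers = {
    "CENIC uplinks (commodity + research)":   1 + 1,   # two separate 1Gb connections
    "Border switch -> each campus core":      2,       # 2Gb aggregated link per core
    "Campus cores -> each academic building": building_aggregate,
    "Building core -> main server room":      2,       # 2Gb aggregated link
    "Server-room device (max bundled link)":  2,       # bundled 10/100/1000 ports
}

for tier, gbps in tiers.items():
    print(f"{tier:42s} {gbps}Gb aggregate")
```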

Computer support infrastructure (server room)

This facility is essential to providing adequate computer research support to faculty in both schools. The science-and-engineering (S&E) server room is served by a 130-kVA UPS system. The unit is equipped with a maintenance bypass feature and contains dual strings of batteries; battery status is monitored remotely at the Central Plant. The UPS can carry a full load at 100 percent for up to five minutes during events such as a momentary power failure, a switch from the primary power source to the alternate or backup source, or a complete loss of utility power. The UPS is also backed up by the Central Plant emergency generators, which come online within 15 seconds of initiation by the automatic transfer switch (ATS) serving this UPS system.

The input voltage of this UPS is 480V three-phase and the output voltage is 120/208V three-phase. The UPS serves six electrical distribution panels (120V, three-phase, 150 amps each), all located inside the S&E server room, and ample spare capacity remains available: the existing load is below the 50 percent mark.
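As a rough sanity check on the headroom described above, the quoted figures (130 kVA UPS capacity, load below 50 percent, six 150-amp panels) can be tied together with the standard three-phase apparent-power formula. The sketch below assumes the panels are fed at 208V line-to-line, which is implied by the UPS output but not stated explicitly for the panels themselves.

```python
import math

# Rough headroom check using the figures quoted above.
# Assumption: distribution panels are fed at 208 V line-to-line (implied by
# the 120/208 V UPS output, not stated explicitly for the panels).

ups_capacity_kva = 130.0
load_fraction    = 0.50          # "existing load is below the 50 percent mark"
panel_amps       = 150.0
line_to_line_v   = 208.0
num_panels       = 6

# Three-phase apparent power: S = sqrt(3) * V_LL * I
panel_kva = math.sqrt(3) * line_to_line_v * panel_amps / 1000.0
print(f"Per-panel capacity:   {panel_kva:5.1f} kVA")                         # ~54 kVA
print(f"Total panel capacity: {panel_kva * num_panels:5.1f} kVA")            # ~324 kVA
print(f"UPS load upper bound: {ups_capacity_kva * load_fraction:5.1f} kVA")  # 65 kVA
```

The distribution capacity downstream of the UPS comfortably exceeds the UPS output, consistent with the spare capacity noted above.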

Research Computing Infrastructure and Support

High Performance Computing

- 1 x 32-core (4 processors x 8 cores) 2.13GHz Intel Xeon E7-4830 system with 128GB RAM

- 1 x 32-core (4 processors x 8 cores) 1.8GHz Intel Xeon L7555 system with 64GB RAM

- 2 x 16-core (8 processors x 2 cores) systems with 2.6GHz AMD Opteron 885 processors and 32GB RAM

- 1 x 16-core (4 processors x 4 cores) 1.6GHz Intel Xeon E7310 system with 24GB RAM

- 2 x Sun X4600 systems with 8 dual-core Opteron processors and 32GB RAM, available for faculty to run simulations

- 1 x 4-core (2 processors x 2 cores) 1.8GHz AMD Opteron 265 system with 8GB RAM

- 1 x 2-core (2 processors x 1 core) 2.8GHz Intel Xeon system with 2GB RAM

- Access to the San Diego Supercomputer Center's Triton cluster. Triton comprises a medium-sized (256-node) cluster that can tackle many research computing tasks and a 28-node "large memory" cluster designed for data-intensive computing projects. The system also includes a high-performance parallel file system for staging large data sets and access to high-bandwidth research networks such as CENIC. Each Triton node has two quad-core 2.4GHz Intel Nehalem processors and 24GB of memory, giving the cluster a peak of roughly 20 TeraFlops (see the sketch below for the arithmetic).
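The quoted 20-TeraFlop figure is consistent with the node specifications if one assumes Nehalem's nominal four double-precision floating-point operations per core per cycle; the short check below makes that arithmetic explicit. The FLOPs-per-cycle value is an architectural assumption, not a figure from the source.

```python
# Back-of-the-envelope peak-FLOPS check for the 256-node Triton cluster
# described above.  flops_per_cycle = 4 is the nominal double-precision
# throughput per Nehalem core (an assumption, not stated in the text).

nodes            = 256
sockets_per_node = 2        # two quad-core processors per node
cores_per_socket = 4
clock_hz         = 2.4e9
flops_per_cycle  = 4        # assumed: SSE add + multiply each cycle

peak_flops = nodes * sockets_per_node * cores_per_socket * clock_hz * flops_per_cycle
print(f"Peak: {peak_flops / 1e12:.1f} TeraFlops")   # ~19.7, i.e. the ~20 TFlops quoted
```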

Storage, file service, backups

Engineering provides a robust file/storage service running on three Sun X4500-series servers in SE-142a. These systems run Oracle Solaris 11 and use the ZFS filesystem. Client access is via NFS, SFTP/SSHFS, or CIFS/Samba. Group access and shared folders are implemented using NFS, ACLs, and symbolic links to "projects" directories placed in users' home directories.
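As an illustration of the shared-folder convention described above, the sketch below links a hypothetical shared projects directory into a hypothetical user's home directory; both paths are placeholders, not actual paths on the Engineering servers.

```python
import os

# Illustrative only: both paths below are hypothetical placeholders for the
# NFS-exported group share and the user's home directory described above.
shared_project = "/export/projects/examplegroup"   # group-writable, ACL-controlled share
home_link      = os.path.expanduser("~/projects/examplegroup")

os.makedirs(os.path.dirname(home_link), exist_ok=True)
if not os.path.islink(home_link):
    # The symbolic link lets users reach the shared area from inside their home directory.
    os.symlink(shared_project, home_link)
```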

Snapshots/Backups

Frequent ZFS snapshots are taken on nfs00/nfs02 using the built-in "time-slider" feature of Solaris. Snapshots live in the ".zfs" directory of the top-level filesystem, e.g. /datapool/home10/.zfs. Each morning, one snapshot is transferred from the production systems to nfs01 using sftp over the private network.
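For example, an earlier copy of a file can be recovered directly from the snapshot tree exposed under the ".zfs" directory. In the sketch below the snapshot name and file path are hypothetical, since real snapshot names depend on the time-slider schedule.

```python
import os
import shutil

# Restore a file from a ZFS snapshot exposed under the ".zfs" directory.
# The snapshot name and file path are hypothetical examples; actual names
# depend on the time-slider schedule on nfs00/nfs02.
filesystem = "/datapool/home10"
snapshot   = "zfs-auto-snap_hourly-example"     # placeholder snapshot name
relpath    = "jdoe/thesis/chapter1.tex"         # placeholder file path

snap_copy = os.path.join(filesystem, ".zfs", "snapshot", snapshot, relpath)
live_copy = os.path.join(filesystem, relpath)

# Place the snapshotted version next to the live file as <name>.restored.
shutil.copy2(snap_copy, live_copy + ".restored")
```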

The backup system (engbackups01.ucmerced.edu) runs as a virtual machine under OpenVZ. Backups of all other machines occur nightly using dirvish, so versions of each machine's file system are always immediately available for examination and restoration. Restorations are currently handled by administrative staff, but a self-service web user interface is planned.
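Because dirvish keeps each nightly run as a date-stamped image containing a full directory tree, past versions of a file can be found by walking the images. The bank path, vault name, and layout in the sketch below are assumptions about a typical dirvish setup, not the actual configuration on engbackups01.

```python
import glob
import os

# List the nightly versions of one file across dirvish images.  The bank path,
# vault name, and image layout are assumptions about a typical dirvish setup,
# not the actual configuration on engbackups01.
bank   = "/backups"        # hypothetical dirvish bank
vault  = "examplehost"     # hypothetical vault, one per backed-up machine
target = "etc/fstab"       # file to look for, relative to the backed-up root

for image_tree in sorted(glob.glob(os.path.join(bank, vault, "*", "tree"))):
    candidate = os.path.join(image_tree, target)
    if os.path.exists(candidate):
        print(candidate)   # one line per nightly version still on disk
```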

Virtual servers

We provide OpenVZ virtual machines upon request. Users are given root access to basic machines and are free to administer them as they like. These machines have access to the central Engineering storage array and receive backups transparently.
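For context, provisioning such a container typically involves a handful of vzctl commands on the host node. The sketch below wraps the basic steps in Python; the container ID, OS template, and hostname are placeholder values and this is not the actual procedure used by Engineering staff.

```python
import subprocess

# Hypothetical sketch of provisioning an OpenVZ container on the host node.
# The container ID, OS template, and hostname are placeholders.
ctid     = "101"
template = "centos-6-x86_64"            # assumed OS template name
hostname = "examplevm.ucmerced.edu"     # placeholder hostname

subprocess.check_call(["vzctl", "create", ctid, "--ostemplate", template])
subprocess.check_call(["vzctl", "set", ctid, "--hostname", hostname, "--save"])
subprocess.check_call(["vzctl", "start", ctid])
# Root access inside the container is then handed to the requesting user.
```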