Department of Physics and Astronomy: Publications and Other Research

Document Type

Article

Date of this Version

2007

Comments

Published in Journal of Physics: Conference Series 119 (2008) 052004. © 2008 IOP Publishing Ltd

Abstract

The CMS computing model relies heavily on the use of “Tier-2” computing centers. At LHC startup, the typical Tier-2 center in the United States will have 1 MSpecInt2K of CPU resources, 200 TB of disk for data storage, and a 10 Gbit/s WAN connection. These centers will be the primary sites for the production of large-scale simulation samples and for the hosting of experiment data for user analysis, an interesting mix of experiment-controlled and user-controlled tasks. As a result, a wide range of services must be deployed and commissioned at these centers, covering tasks such as dataset transfer, dataset management, the hosting of jobs submitted through Grid interfaces, and several varieties of monitoring. We discuss the development of the seven CMS Tier-2 computing centers in the United States, with a focus on recent operational performance and preparations for the start of data-taking in 2008.
