:: Distributed Computing Grid Experiences
in CMS Data Challenge
Dept. of Physics and INFN, Bologna
The Compact Muon Solenoid experiment (CMS) is one of the four
High Energy Physics experiments that will collect data at the
Large Hadron Collider (LHC) being built at CERN. The CMS collaboration
is currently taking part in computing-intensive Monte Carlo simulation
studies of the detector. CMS has a long-term need to perform large-scale
simulation efforts, in which physics events are generated and
their manifestations in the CMS detector are simulated.
The challenge for the CMS computing infrastructure is to cope
with the very large computational and data access requirements.
The size of the resources required, the complexity of the software
and the physical distribution of the CMS collaboration naturally
imply a distributed computing and data access solution.
The Grid paradigm is one of the most promising solutions to be
investigated, and CMS is collaborating with many Grid projects
with the aim of understanding how the Grid can be useful for CMS
and how CMS software needs to be adapted to use Grid functionalities.
The preparation and building of the Computing System able to process
the collected data proceed through sequentially planned steps
of increasing complexity (data and physics challenges).
The Data Challenge for CMS during the year 2004 was planned to reach
a complexity scale equal to about 25% of that foreseen for the LHC.
The goal of the challenge was to run the CMS reconstruction for a sustained
period at a 25 Hz input rate, distribute the data to the CMS Tier-1
centers, and analyze them at remote sites. To achieve the challenge,
CMS undertook a large simulated event production in advance. Grid
environments developed in Europe by the LHC Computing Grid (LCG)
and in the US by Grid2003 were utilized to complete different
aspects of the challenge.
A description of the experiences, successes, and lessons learned
from the use of the grid infrastructure is presented.