Data grids are used in large-scale scientific experiments to store and access nontrivial amounts of data by combining the storage resources of multiple data centers into one system. This enables users and automated services to use the storage resources in a common and efficient way. However, as data grids grow, it becomes difficult for developers and operators to estimate how changes in policy, hardware, and software affect the performance metrics of the data grid. In this paper we address the modeling of operational data grids. We first analyze the data grid middleware of the ATLAS experiment at the Large Hadron Collider to identify the components relevant to data grid performance. We describe existing modeling approaches for the pre-transfer, network, storage, and validation components, and build black-box models for each of them. We then present a novel hybrid model that unifies these separate component models, and we evaluate it using an event simulator. The evaluation is based on historic workloads extracted from the ATLAS data grid. The median evaluation error of the hybrid model is 22%.