iSGTW - International Science Grid This Week

17 January 2007

Feature: Meeting the Data Transfer Challenge


Image: On December 12, 2006, the second large endcap disk of the CMS detector was lowered into the underground experimental cavern. The piece, weighing around 880 tons, took 10 hours to be lowered the 100 meters into the cavern.
Image copyright CERN
As part of a computing challenge held in 2006, two university groups used parts of two grid infrastructures, the Open Science Grid and TeraGrid, to transfer a massive amount of data: over a one-month period, 105 terabytes were moved between Purdue University in Indiana and the University of California, San Diego.

Both university groups run computing sites that are part of the worldwide computing infrastructure for the CMS particle physics experiment. The challenge was part of preparations for the start-up of the experiment at the end of 2007.

While all U.S. sites participating in the CMS computing infrastructure are part of the Open Science Grid, Purdue and the San Diego Supercomputer Center at UCSD are also members of the TeraGrid. Working with their respective TeraGrid sites, the two university groups combined the OSG infrastructure with a dedicated link set up through the TeraGrid network to push the limits of their network use.

The groups reached peak data transfer rates of 200 megabytes per second, with bursts of over four gigabits per second. The computing model for the CMS experiment calls for data to be passed from the CMS detector at CERN in Switzerland to a series of large computing sites around the world, and then on to centers such as Purdue and UCSD, called “Tier-2” centers.
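
(The figures above mix megabytes per second and gigabits per second. As a rough, back-of-the-envelope guide, and assuming decimal units with 8 bits per byte, the quoted rates convert as in the short Python sketch below; the numbers are approximate.)

# Approximate conversions between the units quoted in this article
# (decimal SI prefixes, 8 bits per byte).

def mbytes_to_gbits(mb_per_s):
    """Megabytes per second to gigabits per second."""
    return mb_per_s * 8 / 1000

def gbits_to_mbytes(gb_per_s):
    """Gigabits per second to megabytes per second."""
    return gb_per_s * 1000 / 8

print(mbytes_to_gbits(200))  # 200 MB/s peak           -> 1.6 Gb/s
print(gbits_to_mbytes(4))    # 4 Gb/s burst            -> 500 MB/s
print(mbytes_to_gbits(100))  # 100 MB/s Tier-2 intake  -> 0.8 Gb/s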

“The data transfer routes for CMS don’t really include moving data from one Tier-2 to another, but it’s a great way for us to learn how to better use the 10-gigabit links,” says Preston Smith from Purdue University. Experience and advances made by any one of the CMS computing centers are shared with the other centers in CMS, OSG and the TeraGrid, to the benefit of the larger grid community.

The CMS Tier-2 centers in the United States and around the world still have work to do on their network infrastructure before they are ready to accept the data rates of up to 100 megabytes per second expected when the experiment starts running. The eventual goal for the computing sites during 2007 is to sustain the use of more than 50% of their network capacity for an entire day. For the Purdue-UCSD network link, that would mean sustaining transfers at approximately four gigabits per second for one day.
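
(For a sense of scale, the rough Python sketch below, which assumes decimal units, a full 86,400-second day and a 31-day month, shows roughly how much data the one-day goal would move, and what the challenge's 105 terabytes works out to as a monthly average.)

# Rough scale check for the sustained-transfer goal described above
# (decimal SI prefixes; the 4 Gb/s and 105 TB figures come from the text).

SECONDS_PER_DAY = 86_400

# Sustaining roughly 4 gigabits per second for one full day:
bytes_per_day = 4e9 / 8 * SECONDS_PER_DAY
print(bytes_per_day / 1e12)  # about 43 terabytes moved in a day

# The challenge's 105 TB over a one-month period, as an average rate:
avg_rate = 105e12 / (31 * SECONDS_PER_DAY)
print(avg_rate / 1e6)        # about 39 megabytes per second on average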

“We’ve shown that we can exceed any peak performance goal that we set for ourselves, so now I’d like to see more focus on sustained operations,” says Frank Würthwein from UCSD. “I’m hoping in the near future we’ll see a week of daily average transfers above 100 megabytes per second between UCSD and Purdue.”

Learn more at the Open Science Grid, TeraGrid and CMS Web sites.

- Katie Yurkewicz, iSGTW Editor-in-Chief

