iSGTW - International Science Grid This Week

9 January 2008
Feature - United we compute: FermiGrid continues to yield results


From February to May 2007, about 900 million D-Zero events were reprocessed with the help of FermiGrid resources. D-Zero is a high energy physics experiment that delves into the fundamental nature of matter.
Image courtesy of Peter Ginter
Before FermiGrid, computing resources at Fermilab, the high energy physics laboratory in Illinois, U.S., were set up individually for the dedicated use of particular experiments.

In late 2004, all this began to change. The birth of FermiGrid, an initiative to unite all of Fermilab’s computing resources into a single grid infrastructure, changed the way computing was done at the lab, improving efficiency and making better use of these resources along the way.

Faster physics from FermiGrid

FermiGrid was initially deployed for small Fermilab experiments and the Compact Muon Solenoid (CMS) experiment; other large experiments, D-Zero and the Collider Detector at Fermilab, soon followed suit.

The ability to access FermiGrid resources has already paid off. As one example, researchers working on the D-Zero experiment recently decided to reprocess vast amounts of data in response to a detector upgrade. Thanks to FermiGrid, they were able to process about 900 million events between February and May of 2007, around 233 million of which ran on FermiGrid resources nominally belonging to CMS.

FermiGrid also plays a major role in the Collider Detector at Fermilab experiment, which primarily uses grid resources to generate Monte Carlo data.
The Feynman Computing Center, foreground, houses the Fermilab WLCG Tier-1 center as well as computers contributing to the Open Science Grid and FermiGrid.
Images courtesy of Reidar Hahn
Community; laboratory; community

FermiGrid’s contributions continue to grow, reaching beyond physics and Fermilab to play a part in projects including the Laser Interferometer Gravitational Wave Observatory and the nanoHUB computational nanotechnology group.

“FermiGrid operates as a universal donor of opportunistic resources to Open Science Grid,” says Keith Chadwick, associate head of FermiGrid facilities. “Since January 2007, we have given away roughly ten thousand CPU-hours per month to OSG’s virtual organizations.”  

Eleven virtual organizations are hosted at Fermilab and take advantage of FermiGrid resources. FermiGrid also contributes to the Worldwide LHC Computing Grid, which will support the upcoming operations of the Large Hadron Collider.

Of course, providing such a facility does not come without challenges. The FermiGrid team must react to changing demands without advance warning and must keep the core services up and running with minimal outages. The effort required averages two and a half full-time employees per month, and it is increasing.

So what’s next for FermiGrid?  

Although some system redundancy is already built in, the FermiGrid team is now working to configure the core grid services so that they are hosted on two independent platforms, either of which could carry the full load if the other went down.

“FermiGrid is well-positioned to continue to provide a high performance, robust and highly available grid facility to the Fermilab scientific community and the broader OSG consortium,” says Eileen Berman, head of FermiGrid facilities.

Six production analysis clusters send jobs to and receive jobs from the FermiGrid gateway. As of September 2007, 2,081 worker nodes (~9,000 job slots) are available, and an additional 400 worker nodes (~1,800 job slots) will soon be deployed.

- Marcia Teckenbrock, Open Science Grid

This story also appeared as an OSG Research Highlight.

 
