iSGTW - International Science Grid This Week

iSGTW Feature, 27 February 2008


Feature - Les Robertson: six years at the head of the LCG

Les Robertson in 2001, “just before it [the LCG project] started,” and in 1974, when he first arrived at CERN.
Images courtesy of Les Robertson

In this special feature iSGTW chats to Les Robertson, who recently stepped down after six years at the head of the Large Hadron Collider Computing Grid.

A hole in the funding bucket
Democratic and global
The Grid
Challenges so far: the big three
Our challenges for the future
Countdown to startup

In the beginning 

Les Robertson arrived at CERN in 1974 to fix a problem. The European physics research laboratory had just purchased a new supercomputer. The problem, says Robertson, was that it didn’t work.

“At that time customers fixed their own operating systems,” he explains. “I arrived as an operating systems expert and stayed on.”

Twenty-seven years later, Robertson began work on an entirely different problem: preparations for the Large Hadron Collider were well underway, but the computing resources required to handle LHC data had been left behind.

A hole in the funding bucket

“Computing wasn’t included in the original costs of the LHC,” Robertson explains. “The story was that you wouldn’t be able to estimate the costs involved, although the estimates we made at the time have proven to be more or less correct.” This decision left a big hole in funding for IT crucial to the ultimate success of the LHC.

“We clearly required computing,” says Robertson, “but the original idea was that it could be handled by other people.”

By 2001, these “other people” had not stepped forward.

“There was no funding at CERN or elsewhere,” Robertson says. “A single organization could never find the money to do it. We realized the system would have to be distributed.”

CERN began asking countries to help. The charge was led by the UK, which contributed a big chunk of e-science funding, closely followed by Italy, which continues to supply substantial funding to CERN. Germany also donated a chunk of funding, and then, says Robertson, other countries followed suit.

“This money gave us a big boost,” he says. “It allowed us to create something much bigger.”

The Grid

In 1999 Harvey Newman from Caltech had initiated the MONARC project to look at distributed architectures that could integrate computing resources for the LHC, no matter where they were located. At around the same time, Carl Kesselman and Ian Foster carved a spot on the world stage for the Grid.

“Their book motivated the idea of doing distributed computing in a very general way,” Robertson says. “It stimulated everyone’s interest. We decided to ride the wave.” But the Grid has not become a panacea, says Robertson. “It has become 250 different things, which has led to both benefits and problems. Standards haven’t emerged in the way we expected, nor have off-the-shelf products.”


Some centers involved in the WLCG; clockwise from top left: the French Tier-1 in Lyon, the Asian Tier-1 in Taipei, the University of Wisconsin Tier-2 in the U.S., and the CERN Tier-0 in Switzerland.
Images courtesy of IN2P3, ASGC, UW-Madison and CERN 

Democratic and global

A big success of the LCG has been the involvement of multiple centers from around the world.

“Different countries, universities, labs … We have over 110 Tier-2 centers up and running, some big and some very small, but all delivering resources to the experiments,” Robertson explains. “Many of these are computing centers that haven’t been a fundamental part of the experiments’ environment before, and we’ve all put a lot of effort into working as a collaboration, sensitizing people to what will be required when the first data starts to arrive. The advantage is that all these centers are now involved in the experiments, and so there are many options for injecting new resources when they are required.”


Challenges so far: the big three

When asked about the challenges he faced as head of the LCG project, Robertson laughs wryly. “There were several big problems,” he says, “and they were all a bit the same.”

“Funding was certainly a problem, and the UK and Italy were especially important in providing people to get us started. We also benefited enormously from EGEE and OSG and their predecessors. As far as equipment is concerned, with the exception of ALICE, we have what we need for the first couple of years. After that there’s still a lot of work to be done to build up resources as the data grows.”

“I was surprised by the intensity of competition within the HEP community. This is a collaboration, not a project with funding for people, so we all have to agree on what we do. People have had lots of good ideas, but in the end you have to do the practical thing. Achieving resolution has been harder than I expected.”

Distant deadlines
“When the end is far away, there’s a temptation to think of sophisticated, clever ways of doing things. But this is difficult when there is little experience and so you don’t actually know what you need. Over the past year, the LHC has drawn closer to startup and this situation has changed. People have started to realize that we have to use what is available, because we want to do physics, and we need a solution.”


Our challenges for the future

Immediate: Stabilizing operations
“The futures of HEP and the grid depend on what comes out of the LHC. It’s very important that the LHC produces something quickly and that grid operations stabilize rapidly.”

Mid-term: Managing the data
“We can physically move data around quite well, but data placement and management remain unproven. How do we distribute the data, and how do we find out where it is? There are enormous challenges yet to come.”

Long-term: Managing energy requirements
“Computing has been getting cheaper and cheaper. Now costs are going up because of power requirements. The cost of supplying energy will affect all large-scale computing. We will have to invest heavily in ways to improve efficiency.”


Countdown to startup

So is Robertson confident that all will go according to the LCG plan when the first proton beams race through the LHC? He’s hoping!

“There is a lot of work still to be done,” he says. “This is new, this idea that you start a machine and the computing required is not all in the same place as the machine. It hasn’t actually been done before. When the beams come, we don’t know what will happen. Things will be chaotic; people will want things we didn’t expect. But HEP is showing that this highly distributed environment is usable. Physicists are no longer dependent on CERN having all the funding or CERN deciding on priorities. We’ve created a democratic environment where you can plug in computer resources wherever you find them. In principle, that was the real goal of the grid.”

- Cristy Burne, iSGTW 



