iSGTW - International Science Grid This Week

iSGTW - 6 October 2010

Feature - HPC adds a spark to EDF’s computing capacities


Image courtesy Zsuzsanna Kilian, stock.exchng

Jean-Yves Berthou is responsible for IT in the research and development division of Électricité de France (EDF), a major European energy company. EDF's 2,000 researchers use computing across a range of projects, including minimizing CO2 emissions, developing alternatives to fossil fuels, and ensuring the security of electricity grids. Here, he describes the use and impact of high performance computing (HPC) at the company.


Why do you use HPC?

Berthou: In many cases, physical experiments and testing are not possible, for example in the simulation of fuel assemblies and crack propagation in nuclear reactors, or in optimizing electricity production and trading. Even when experimentation is possible, numerical simulation can probe regimes that experiments cannot reach. Nonetheless, experimentation remains an indispensable tool.

 

In what application areas do you use HPC?

Berthou: It has been used for a long time in operational matters, such as optimizing day-to-day production or choosing the safest and most effective configurations for nuclear refuelling. However, most of the advances towards higher levels of performance have been driven by the need to better explain the complex physical phenomena behind maintenance issues, to assess the impact of new vendor technology, and to anticipate changes in operating or regulatory conditions.


For your business does cloud computing offer a viable alternative to owning and managing your own computing systems?

Berthou: EDF will not use third-party resources for its production requirements in the short term. It is collaborating with CEA (the French Atomic Energy Commission) on distributed computing and, in particular, on the pooling of resources across an organization. EDF plans to combine its own resources to create an in-house “cloud” of virtual, pooled resources. It does not plan to use current external cloud offerings, because these are not yet mature enough for its requirements, which are determined by performance, portability and virtualization considerations.

Image courtesy Flavio Takemoto, stock.exchng

 

What are the challenges you see in the development of your HPC capability (e.g. scalability of applications, power consumption, cost of systems)?

Berthou: The major challenges relate to portability of codes across different systems and scalability to 10,000 cores and beyond. The requirement is to port a complete simulation environment, comprising coupled multiphysics codes, to systems with large numbers of cores. Power consumption for such large systems is a major issue because it has a significant bearing on the total cost of ownership.


Are new languages and programming paradigms needed, particularly as we move toward exascale systems?

Berthou: This is an important issue. New languages and libraries are needed for large systems. For example, EDF’s structural mechanics codes represent an investment of over €100 million, an investment that must be preserved as the codes are moved to new machines. This places constraints on development methodology and on the adoption of new languages. Staff expertise also represents a major investment, because it can take a programmer several years to become proficient and productive with the various tools required. Together, these factors create inertia and constitute a real barrier to change. Over the last 20 years, EDF has increasingly developed object-oriented codes, and this approach cannot be changed abruptly.

 

What HPC systems would you like to have available in a few years' time?

Berthou: The market continues to demand an exponential increase in computing power. This will take us towards systems with performances of tens and hundreds of petaflops. These systems will need to run existing codes and be programmable using existing tools and languages.

—Emilie Tanke, for iSGTW. An earlier version of this article appeared in Planet HPC.
