iSGTW - International Science Grid This Week


Feature - SuperComputing 2010 comes to a close


Last week 10,000 people from around the world converged on the city of New Orleans to attend SuperComputing 2010.

Hot topics included (but were hardly limited to) climate change modeling, graphic processing units, and the rise of data-intensive science.

Climate change modeling

Keynotes, panels, and technical papers all sought to address the challenges facing climate modeling in the coming years. Some speakers suggested that exascale supercomputers enabled by graphics processing will be necessary to run future climate models. But greater computational power on its own is not enough. A model that accurately describes the Earth’s climate should yield increasingly accurate results when run at increasingly high resolution – and draw on increasingly large quantities of computational power in the process, since computational cost rises with resolution. As the panelists at the “Pushing the Frontiers of Climate and Weather Models” panel pointed out, however, existing models are each optimized for a specific resolution, and become less accurate if the resolution is increased. Before the community can take advantage of the higher resolutions that greater computational resources enable, climate modelers will have to develop models that remain accurate at those resolutions.
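To see why resolution is so expensive, consider a back-of-the-envelope sketch (an illustration, not a calculation from the article): halving the grid spacing of a model multiplies the number of grid columns, and a CFL-style stability condition also shrinks the allowable time step by roughly the same factor.

```python
# Illustrative sketch: rough scaling of climate-model compute cost with
# grid refinement. The function and its assumptions are hypothetical
# simplifications, not taken from any specific model.

def relative_cost(refinement: float, refine_vertical: bool = False) -> float:
    """Rough relative compute cost of refining grid spacing by `refinement`.

    refinement = 2 means halving the grid spacing in each horizontal
    direction; the time step shrinks by the same factor (CFL condition),
    so the number of time steps grows proportionally.
    """
    horizontal = refinement ** 2        # more grid columns in x and y
    vertical = refinement if refine_vertical else 1.0
    timestep = refinement               # smaller dt -> more steps to run
    return horizontal * vertical * timestep

print(relative_cost(2))        # halve horizontal spacing: ~8x the cost
print(relative_cost(2, True))  # refine vertical levels too: ~16x the cost
```

Under these simplified assumptions, each halving of grid spacing costs roughly an order of magnitude more computation, which is why exascale machines come up in the same breath as higher-resolution models.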

Graphics Processing Units

There remains a great deal of hype around the chips known as graphics processing units, or GPUs for short. Proponents argue that GPUs are the only way to reach the exascale at a reasonable cost in both money and energy. But during the panel “Toward Exascale Computing with Heterogeneous Architectures,” NERSC director Kathy Yelick (standing in for John Shalf) pointed out several factors that are often overlooked. First, in some cases the benefits of GPUs are overstated because they are being compared against unoptimized CPU code. After extensive benchmarking, Yelick and her colleagues found that realistic speed-ups range from about 2.2x for memory-intensive code to 6.7x for compute-intensive code. Second, teaching developers and researchers to program in a new paradigm such as CUDA, and translating existing applications into it, is no small endeavor. Anyone who expects a better alternative to come along soon may conclude that they are better off waiting and transitioning to that architecture instead, skipping CUDA entirely.
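The baseline effect Yelick described can be made concrete with a small hypothetical calculation (the 10x and 4x figures below are invented for illustration; only the 2.2x–6.7x range comes from the talk): if tuning the CPU code alone would have sped it up 4x, then a GPU that looks 10x faster than the untuned code is only 2.5x faster than a fair baseline.

```python
# Illustrative sketch: how an unoptimized CPU baseline inflates a
# reported GPU speedup. All specific numbers here are hypothetical.

def fair_gpu_speedup(claimed_speedup: float, cpu_tuning_factor: float) -> float:
    """Speedup of GPU code over *optimized* CPU code, given a speedup
    claim that was measured against unoptimized CPU code."""
    return claimed_speedup / cpu_tuning_factor

# "10x faster than the CPU" vs. a CPU code that tuning would speed up 4x:
print(fair_gpu_speedup(10.0, 4.0))  # 2.5
```

The same effect explains why careful benchmarks land in the 2.2x–6.7x range while headline comparisons often claim far more.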

Bob Jones speaks on data-intensive science at SC10. Image courtesy of Miriam Boon.

Data-Intensive Science

As last year’s plenary by Leroy Hood on personalized medicine suggested, the volume of data generated by the life sciences is increasing rapidly. This year Bob Jones of CERN (who was also the project leader for EGEE) picked up that topic in his plenary talk, “Building Cyber-Infrastructure for Data Intensive Science.”

“Forget about CPU cycles,” Jones said during his plenary. “What people want to see is the data in the end.”

Jones identified interoperability, security, and single sign-on as some of the key issues that face data-intensive cyberinfrastructures. He also made a few predictions that are worth paying heed to:

  1. There will be massive adoption of virtualization techniques in grid computing centers, “because it’s cheaper from the operations point of view and it reduces dependency between building up the software that all the people need… and of course it helps simplify the grid middleware.”
  2. Federated identity will enjoy wide adoption. “I don’t think it will be the X.509-based systems,” Jones said. “It’s more like Shibboleth and OpenID-style trusted networks.”
  3. The borders between supercomputing, grids, volunteer computing, etc., will blur, “because people want to use the best services they have available.”
  4. A shared data infrastructure will emerge.
  5. E-Science requirements will always outstrip supply.
  6. The policy for resource allocation will not change.
  7. Politicians will continue to prefer to fund clusters and supercomputers rather than access to commercial clouds. “The politicians can have their photograph taken in front of the new cluster or new supercomputer, and it looks great for them,” Jones said. “You can’t give them a photograph in front of a credit for Amazon.”
  8. People will realize that the hard part is paying for immediate access to exabyte-scale data that is archived for years.
—Miriam Boon, iSGTW