Opinion - The rise of parallelism (and other computing challenges)


The ILLIAC IV supercomputer led its field in 1966 as a parallel computing machine. Only a quarter of the machine was ever completed, yet it took eleven years to build and cost nearly four times the original estimate.
Image courtesy of Steve Jurvetson

In the past, parallelism was just one solution among many available to manufacturers seeking to offer computer architectures with attractive peak performance.

Today, parallelism is no longer an “option”: manufacturers must make extensive use of parallelism in order to offer attractive solutions.

Parallelism is no longer confined to the field of high-performance or high-speed computing. As a consequence, it is almost everywhere: parallelism is used in PCs, cellular phones and much more. This extensive use of parallelism has turned “More than Moore” into reality, and it is why modern computing devices continue to amaze their users.

The double-edged parallel sword

Fields such as computer science and numerical computing have traditionally faced a number of important challenges; however, the advent of grid computing and the massive use of parallelism have now raised many new ones.

Will the convergence of parallel and distributed computing change the very nature of computer science and numerical computing? Will communication interfaces and libraries such as MPI and Open MPI continue to permit programmers to maintain high performance? Do the numerical methods presently in use suit massive parallelism and the presence of faults in these systems? These are just a few of the important questions that have arisen with the advent of parallel and distributed computing.
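By way of illustration, here is a minimal sketch (ours, not the author's) of non-blocking point-to-point communication in MPI, one technique programmers use to overlap communication with computation in order to preserve performance at scale; the ring exchange and the dummy workload are illustrative assumptions:

    /* A minimal sketch of non-blocking MPI communication overlapped with
     * computation. The ring pattern and dummy workload are illustrative.
     * Compile with: mpicc overlap.c -o overlap */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double send_buf, recv_buf = 0.0;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;        /* neighbours in a ring */
        int left  = (rank - 1 + size) % size;
        send_buf = (double)rank;

        /* Post the communication, then do useful work while it proceeds. */
        MPI_Irecv(&recv_buf, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&send_buf, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        double local_work = 0.0;
        for (int i = 0; i < 1000000; i++)     /* stand-in for real computation */
            local_work += 1e-6;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("rank %d received %.0f from rank %d\n", rank, recv_buf, left);

        MPI_Finalize();
        return 0;
    }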

To ensure efficient use of new parallel and distributed architectures, new concepts related to communication, synchronization, fault tolerance and self-organization must emerge and be widely adopted.

Parallel problems can be split into many smaller sub-problems, so that each sub-problem can be worked on by a different processor. This means that many sub-problems can be worked on “in parallel,” thus increasing the speed of the computation.
Stock image courtesy of sxc.hu
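As a concrete illustration of the decomposition described in the caption above, the following sketch (an assumed example, not drawn from the article) splits a large sum into per-process sub-sums with MPI and then combines the partial results:

    /* A minimal sketch of problem decomposition: a large sum is split into
     * per-process sub-sums that run in parallel, then combined. The problem
     * (summing 1..N) is illustrative, not from the article. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const long N = 100000000L;        /* size of the full problem */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process takes one contiguous slice of the index range. */
        long chunk = N / size;
        long lo = rank * chunk + 1;
        long hi = (rank == size - 1) ? N : lo + chunk - 1;

        double local = 0.0;
        for (long i = lo; i <= hi; i++)   /* the sub-problem, solved locally */
            local += (double)i;

        double total = 0.0;               /* combine the partial results */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 1..%ld = %.0f\n", N, total);

        MPI_Finalize();
        return 0;
    }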

Innovation through evolution

Manufacturers agree that the architecture of future supercomputers will be massively parallel; as a consequence, these machines will need to be fault tolerant and well suited to dynamic conditions. A kind of self-organization will also be needed, since efficient control of such very large systems will not necessarily be possible from the outside alone.

Parallel and distributed algorithms will also have to cope more and more with the asynchronous nature of communication networks and the presence of faults in the system.

Further, concepts such as asynchronous algorithms, whereby each process runs at its own pace according to its load and performance, present many similarities with the concept of wait-free processes in distributed computing, but they have yet to gain the popularity they deserve.
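To make the idea concrete, here is a minimal sketch of an asynchronous fixed-point iteration using POSIX threads; the two-component toy system and the deliberate slowdown of one thread are our own assumptions, chosen so each thread visibly runs at its own pace while the iteration still converges:

    /* A minimal sketch of an asynchronous iteration: each thread repeatedly
     * updates its own component using whatever value the other thread has
     * published most recently, with no barrier between sweeps. The toy system
     *   x0 = 0.5*x1 + 1,  x1 = 0.5*x0 + 1   (fixed point x0 = x1 = 2)
     * is a contraction, so the iteration converges despite the lack of
     * synchronization. Compile with: cc -std=c11 -pthread async.c -o async */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static _Atomic double x[2] = {0.0, 0.0};

    static void *worker(void *arg)
    {
        int i = *(int *)arg;              /* which component this thread owns */
        int other = 1 - i;

        for (int k = 0; k < 1000; k++) {
            /* Read the freshest available value of the other component... */
            double xo = atomic_load(&x[other]);
            /* ...and update our own component at our own pace. */
            atomic_store(&x[i], 0.5 * xo + 1.0);
            if (i == 1)
                usleep(10);               /* thread 1 is deliberately slower */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int ids[2] = {0, 1};

        pthread_create(&t[0], NULL, worker, &ids[0]);
        pthread_create(&t[1], NULL, worker, &ids[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);

        printf("x0 = %f, x1 = %f (both should approach 2)\n",
               atomic_load(&x[0]), atomic_load(&x[1]));
        return 0;
    }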

Ideas such as these are gaining increasing attention in many fields, particularly among computer scientists working on communication libraries such as Open MPI. This raises still more questions: where will parallelism lead us, and along which roads will we travel to get there? All of these questions must be answered and new solutions found if we are to continue to drive the evolution of computing.

These questions and concepts will be discussed at the 16th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2008), to be held 13-15 February 2008 in Toulouse, France. Eighty-three papers from 22 countries in Asia, Europe, North America and South America have been selected by the Program Committee.

In addition to the conference's main track, Special Sessions will address hot topics such as grids; parallel and distributed bioinformatics; virtualization in distributed systems; security in networked and distributed systems; modeling, simulation and optimization of peer-to-peer environments; and next-generation web computing. Computer manufacturers will also present their architectures, processors and strategies.

- Didier El Baz, Head of the Distributed Computing and Asynchronism team, LAAS-CNRS

 
