iSGTW - International Science Grid This Week

20 January 2010

Back to Basics - Why go parallel?


 BY DARIN OHASHI
Darin Ohashi is a senior kernel developer at Maplesoft. His background is in mathematics and computer science, with a focus on algorithm and data structure design and analysis. For the last few years he has been focused on developing tools to enable parallel programming in Maple, a well-known scientific software package.

Parallel programs are no longer limited to the realm of high-end computing. In this column, Darin Ohashi takes us back to basics to explain why we all need to go parallel.

Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?

For years, processor designers were able to increase the performance of processors simply by increasing their clock speeds. But a few years ago, they ran into several serious problems. RAM speeds could not keep up with the increased speed of processors, causing processors to waste clock cycles waiting for data. The speed at which signals can travel through a chip's wires is limited, leading to delays within the chip itself. Finally, increasing a processor's clock speed also increases its power requirements, and increased power requirements lead to the processor generating more heat (which is why overclockers come up with such ridiculous cooling solutions).

Glossary of terms:
  • core: the part of a processor responsible for executing a single series of instructions at a time.
  • processor: the physical chip that plugs into a motherboard. A computer can have multiple processors, and each processor can have multiple cores.
  • process: a running instance of a program. A process's memory is usually protected from access by other processes.
  • thread: a running instance of a process's code. A single process can have multiple threads, and multiple threads can be executing at the same time on multiple cores.
  • parallel: the ability to utilize more than one processor at a time to solve problems more quickly, usually by being multi-threaded.
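The relationship between a process and its threads can be illustrated with a short sketch (Python is used here purely for illustration; note that CPython's global interpreter lock prevents threads from executing Python bytecode on multiple cores at once, so this shows the process/thread relationship rather than a parallel speedup):

```python
import threading

# One process (this running program) containing several threads.
# All threads share the process's memory, so they can all write to `results`.
results = {}

def work(thread_id):
    # Each thread executes its own series of instructions.
    results[thread_id] = sum(range(thread_id * 1000, (thread_id + 1) * 1000))

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()   # the threads may now run concurrently
for t in threads:
    t.join()    # wait for all of them to finish

total = sum(results.values())
```

Because each thread writes to a distinct key, the shared dictionary needs no extra locking in this particular sketch; in general, shared mutable state is exactly where multi-threaded programs get difficult.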

With so many issues making it increasingly difficult to increase clock speed, designers needed to find another way to improve performance. That’s when they realized that instead of increasing the core's clock speed, they could keep the clock speed fairly constant and put more cores on the chip. Thus was born the multi-core revolution.

Unfortunately, the shift to multi-core processors has led to some serious issues on the software side. In the past, processor clock speed was doubling about every 18 months. Thus a piece of software would run faster over time simply because it was running on newer, faster processors.

With multi-core processors, this speed up no longer occurs. Clock speeds have settled around the two to three gigahertz range. New processors may be slightly faster for non-parallel (single-threaded) applications (usually due to architectural changes, as opposed to clock speed increases). But the real increase in computer processing power is due to multiple cores.

A single-threaded application will never be able to utilize the increase in power provided by multiple cores. You could be using a processor that is many times more powerful, but a single-threaded application will not show a corresponding increase in speed.

In other words, if you want an application to get faster, you can no longer rely on processor clock speed increasing over time. To take advantage of processor improvements and speed up an application, the application must be written in parallel and it must be able to scale to the number of available cores.
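As a sketch of what "written in parallel and able to scale to the number of available cores" can look like, here is one way to split a computation across however many cores are present, using Python's multiprocessing module (the function and variable names are mine, not from the article):

```python
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    # Each worker process computes one chunk of the sum of squares.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    workers = cpu_count()  # scales with the number of available cores
    step = n // workers
    # Split [0, n) into one chunk per worker; the last chunk absorbs the remainder.
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(i * i for i in range(n))
```

On a machine with more cores, the same code simply creates more workers and smaller chunks; that is the kind of scaling a fixed-speed, single-threaded program can never exhibit.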

As an aside, please note that I have been talking about parallelizing for performance reasons. Certain types of programs, or parts of programs, are traditionally implemented using multiple threads anyway, such as graphical user interfaces (GUIs). These applications generally use threads to handle multiple inputs at the same time, or to reduce the latency of certain interactions. In these cases, having access to multiple cores often does not improve overall performance. So even programs such as GUIs will need to parallelize their time-consuming algorithms if they are to take advantage of (and thus get faster on) multi-core machines.
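The latency-reduction pattern described above can be sketched with a worker thread and two queues (a toy stand-in for a real GUI event loop, not any actual toolkit's API):

```python
import threading
import queue

tasks, results = queue.Queue(), queue.Queue()

def worker():
    # Background thread: runs slow jobs so the event-handling
    # thread never blocks on them.
    while True:
        job = tasks.get()
        if job is None:        # sentinel value: shut the worker down
            break
        results.put(sum(range(job)))

t = threading.Thread(target=worker)
t.start()

tasks.put(100_000)                           # hand off the slow computation...
handled = [f"event {i}" for i in range(3)]   # ...while this thread keeps handling "events"

answer = results.get()                       # collect the result when it is ready
tasks.put(None)
t.join()
```

Note that a second core makes this more responsive but not faster overall: there is still only one slow job, which is exactly the point of the paragraph above.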

Hopefully I've convinced you that parallelizing applications is not just important, but necessary. But for a more in-depth look at the software issues, take a look at The Free Lunch Is Over by Herb Sutter, which originally appeared in Dr. Dobb's Journal, 30(3), March 2005.

A version of this story originally appeared in Darin Ohashi’s blog.
