iSGTW - International Science Grid This Week

iSGTW - 29 April 2009

Feature - Poker, parallelism and payoff


Image courtesy of PSC

You won’t find them at the Vegas casinos and what they do is hard to call gambling, but it’s fair to say that Tuomas Sandholm and grad student Andrew Gilpin of Carnegie Mellon’s School of Computer Science are professional poker players. Last July in Chicago — at the AAAI (Association for the Advancement of Artificial Intelligence) Computer Poker Competition, involving 19 programs from six countries — they walked away with no pile of cash but, nevertheless, were the biggest winners.

Their field, game theory, in which Sandholm’s work is internationally recognized, describes conflict in which the payoff is affected by actions and counter-actions of intelligent opponents. Head-to-head poker between two players is a prominent example of what’s called two-person zero-sum games: One player wins what the other loses.
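The zero-sum property can be made concrete with a toy example. The sketch below (my illustration, not from the article) uses the classic "matching pennies" game; the payoff pairs are standard for that game, and the check verifies that one player's gain is always the other's loss.

```python
# "Matching pennies" payoffs, written as (row player, column player) pairs.
# The row player wins when the coins match; the column player wins otherwise.
outcomes = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

# Zero-sum property: in every outcome the two payoffs sum to zero,
# i.e. one player wins exactly what the other loses.
is_zero_sum = all(r + c == 0 for r, c in outcomes.values())
```

Heads-up poker has the same structure, just with an astronomically larger outcome space.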

In recent years, poker has emerged as an AI challenge much as chess was for many years.

“But poker is far more demanding,” says Sandholm. “In poker, a player doesn’t know which cards the other player holds or what cards will be dealt in the future.”

Tuomas Sandholm. Image used with permission from T. Sandholm.

Like many games, poker can be formulated mathematically, but the formulations are huge. Two-player poker has a game tree of a billion billion (10^18) nodes, notes Gilpin.

“To solve that requires massive computational resources,” he says. “Our research is on scaling up game-theory solution techniques to those large games, and new algorithmic design.”

The most computationally intensive portion of their algorithm is a matrix-vector product, where the matrix is the payoff matrix and the vector is a strategy for one of the players. This operation accounts for more than 99 percent of the computation, and is a bottleneck to applying game theory to many problems of practical importance.
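The core operation can be sketched in a few lines. This is an illustrative example, not the authors' code: the payoff matrix and strategy vector below are tiny placeholders, whereas the real matrices for poker are astronomically larger.

```python
def matvec(A, x):
    """Return y = A x, where A is a payoff matrix stored as a list of rows
    and x is a mixed strategy (a probability vector) for one player."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Toy payoff matrix for the row player (placeholder values).
A = [[1.0, -1.0],
     [-1.0, 1.0]]

# Column player's mixed strategy: play each action with probability 1/2.
x = [0.5, 0.5]

# y[i] is the row player's expected payoff for action i against x;
# its maximum is the value of the row player's best response.
y = matvec(A, x)
best_response_value = max(y)
```

In iterative equilibrium-finding methods, this product is evaluated at every step, which is why it dominates the running time.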

To drastically increase the size of problems the algorithm can handle, Gilpin and Sandholm devised an approach that can potentially exploit massively parallel systems of non-uniform memory-access architecture, such as Pople, an SGI Altix 4700 TeraGrid resource at the Pittsburgh Supercomputing Center. By making all data addressable from a single process, shared memory simplifies a central, non-parallelizable operation performed in conjunction with the matrix-vector product. Sandholm and Gilpin have revised their code to run on Pople, and are doing experiments to learn how much parallelism can help, and possibly point to areas for further algorithmic improvement.
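The shared-memory approach can be illustrated with a row-partitioned matrix-vector product. This is a sketch of the general technique, not the Pople code: each worker computes the product for its own block of rows, and because every worker addresses the same memory, the partial results are available to the subsequent serial step without any data redistribution.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_matvec(A, x, workers=4):
    """Compute y = A x by splitting the rows of A across worker threads.
    All workers read the shared strategy vector x directly."""
    def block(rows):
        # Each worker handles one contiguous block of rows.
        return [sum(a * b for a, b in zip(row, x)) for row in rows]

    chunk = max(1, len(A) // workers)
    blocks = [A[i:i + chunk] for i in range(0, len(A), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(block, blocks)
    # Concatenate the partial results in row order.
    return [v for part in parts for v in part]

# Tiny placeholder matrix and vector to exercise the routine.
A = [[1, 2], [3, 4], [5, 6]]
x = [1, 1]
```

Note that CPython threads illustrate the data layout rather than true parallel speedup; on a machine like the Altix 4700 the same row partitioning would run on genuinely concurrent processors over one large shared address space.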

Michael Schneider, Pittsburgh Supercomputing Center
