Abstract
We present a framework for optimizing the distributed performance of sparse matrix computations. These computations are optimally parallelized by distributing their operations across processors in a subtly uneven balance. Because the optimal balance point depends on the non-zero patterns in the data, the algorithm, and the underlying hardware architecture, it is difficult to determine. The Hogs and Slackers genetic algorithm (GA) identifies processors with many operations (hogs) and processors with few operations (slackers). Its intelligent operation-balancing mutation operator swaps data blocks between hogs and slackers to explore new balance points. We show that this operator is integral to the performance of the GA, and we use the framework to conduct an architecture study that varies network specifications. The Hogs and Slackers GA is itself a parallel algorithm with near-linear speedup on a large computing cluster.
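The operation-balancing mutation described in the abstract can be illustrated with a minimal sketch. The Python below is not the authors' implementation; it assumes a candidate solution is encoded as a dictionary mapping data-block IDs to processor IDs (`assignment`), with `block_ops` giving each block's operation count. All names and the encoding are illustrative assumptions.

```python
import random

def proc_loads(assignment, block_ops, num_procs):
    """Total operation count per processor under the current block mapping."""
    loads = [0] * num_procs
    for block, proc in assignment.items():
        loads[proc] += block_ops[block]
    return loads

def hogs_and_slackers_mutation(assignment, block_ops, num_procs, rng=random):
    """Illustrative mutation: move one data block from the most heavily
    loaded processor (a 'hog') to the most lightly loaded one (a 'slacker'),
    nudging the individual toward a new balance point."""
    loads = proc_loads(assignment, block_ops, num_procs)
    hog = max(range(num_procs), key=lambda p: loads[p])
    slacker = min(range(num_procs), key=lambda p: loads[p])
    hog_blocks = [b for b, p in assignment.items() if p == hog]
    mutated = dict(assignment)
    if hog_blocks and hog != slacker:
        mutated[rng.choice(hog_blocks)] = slacker
    return mutated
```

In this sketch the mutation is deliberately greedy about choosing the hog and slacker but random about which block to move, so repeated applications explore a range of slightly uneven balance points rather than converging on strict equality.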
| Original language | English (US) |
|---|---|
| Pages (from-to) | 635-644 |
| Number of pages | 10 |
| Journal | Parallel Computing |
| Volume | 36 |
| Issue number | 10-11 |
| DOIs | |
| State | Published - Oct 2010 |
| Externally published | Yes |
Keywords
- Genetic algorithm
- Simulation-based optimization
- Sparse matrix
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications
- Computer Graphics and Computer-Aided Design
- Artificial Intelligence