Its features include a type-safe functional thread interface, lazy thread creation, garbage-collected types (lists, arrays, pointer structures), and controlled non-determinism (thread bags). Threads are first-class objects that can be used like any other object and are automatically reclaimed once they are no longer referenced. The package has been ported to many uniprocessors and shared-memory multiprocessors and can easily be embedded into existing application frameworks.
Comparisons with other thread libraries are on the intro page.
The program is available online. Alas, only abstracts are posted at the moment.
Posted to Parallel/Distributed by Ehud Lamm on 5/31/04; 11:11:59 PM
The Globus Alliance is a research and development project focused on enabling the application of Grid concepts to scientific and engineering computing.
Toolkit history here.
POOMA is a high-performance C++ toolkit for parallel scientific computation. POOMA's object-oriented design facilitates rapid application development. POOMA has been optimized to take full advantage of massively parallel machines. POOMA is available free of charge in order to facilitate its use in both industrial and research environments, and has been used extensively at Los Alamos National Laboratory.
Cilk is a language for multithreaded parallel programming based on ANSI C. Cilk is designed for general-purpose parallel programming, but it is especially effective for exploiting dynamic, highly asynchronous parallelism, which can be difficult to write in data-parallel or message-passing style. Using Cilk, our group has developed three world-class chess programs, StarTech, *Socrates, and Cilkchess. Cilk provides an effective platform for programming dense and sparse numerical algorithms, such as matrix factorization and N-body simulations, and we are working on other types of applications. Unlike many other multithreaded programming systems, Cilk is algorithmic, in that the runtime system employs a scheduler that allows the performance of programs to be estimated accurately based on abstract complexity measures.
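The "dynamic, highly asynchronous parallelism" mentioned above is easiest to see in the canonical example from the Cilk documentation: recursive Fibonacci, where `spawn` forks a child procedure that may run in parallel with its parent and `sync` waits for all children spawned so far. A minimal sketch in Cilk-5 syntax:

```cilk
#include <stdlib.h>
#include <stdio.h>

/* A Cilk procedure is ordinary ANSI C plus the cilk, spawn,
   and sync keywords. */
cilk int fib(int n)
{
    if (n < 2)
        return n;
    else {
        int x, y;
        x = spawn fib(n - 1);   /* may run in parallel with... */
        y = spawn fib(n - 2);   /* ...this spawned child       */
        sync;                   /* wait for both children      */
        return x + y;
    }
}

cilk int main(int argc, char *argv[])
{
    int n = (argc > 1) ? atoi(argv[1]) : 10;
    int result = spawn fib(n);
    sync;
    printf("fib(%d) = %d\n", n, result);
    return 0;
}
```

A nice property of this design: deleting the Cilk keywords yields the "serial elision" of the program, a valid C program with the same semantics, which is what makes the scheduler's performance guarantees easy to state against the sequential baseline.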
How is the work going? Are there any implementations available? Where does it fit in with other research on automated parallel execution, and what is the current state of that whole area anyway?
Posted to Parallel/Distributed by Luke Gorrie on 3/26/04; 8:43:45 AM
distcc does not require all machines to share a filesystem, have synchronized clocks, or have the same libraries or header files installed. They can even have different processors or operating systems, as long as suitable cross-compilers are installed.
Let's get our new Parallel/Distributed department off on a solid practical footing.
distcc is the software-tools paradigm applied to distributed processing. You have a tool for compiling a source file (gcc). You also have a tool for coordinating compilation of multiple files, with dependency-aware parallelism (make). Throw in a distributed front-end to the compiler (distcc) and suddenly you have a perfectly integrated distributed compile farm. Truly, the only question is: why didn't somebody do this years ago?
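For the record, wiring the three tools together takes two lines. The `DISTCC_HOSTS` variable and the `CC='distcc gcc'` convention are from the distcc documentation; the host names here are hypothetical:

```shell
# Machines allowed to compile for us (hypothetical host names);
# 'localhost' means some jobs also compile on this machine.
export DISTCC_HOSTS='localhost red green blue'

# Let make run jobs in parallel, routing each compile through distcc.
make -j8 CC='distcc gcc'
```

Note that distcc distributes only the compilation proper: preprocessing and linking stay on the local machine, which is why the remote hosts don't need your headers or libraries.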
Posted to Parallel/Distributed by Luke Gorrie on 3/26/04; 8:24:46 AM