ParGCL provides three parallel programming interfaces, each described
individually below.  The three interfaces are:
master-slave - simple, easy-to-use master-worker model of parallelism
        based on TOP-C (http://www.ccs.neu.edu/home/gene/topc.html)
        It also supports non-trivial parallelism, in which any
        slave process can cause the "shared data" to be updated,
        and so made visible to other slaves.
slave-listener - master sends LISP commands to a slave, and then receives
        replies.
MPI - simplified LISP interface to MPI.  At this time, it
        primarily implements the point-to-point layer of MPI.
        See the MPI standard on the web for a full manual of the
        MPI functions.
|
To begin, make sure that there is a procgroup file in the directory where
you start pargcl, or else call:  pargcl -p4pg FULL_PATH_OF_PROCGROUP_FILE
The procgroup file specifies the remote "slave" machines and the path
to pargcl on those machines.  pargcl/bin/procgroup is a sample file.
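
For illustration, a procgroup file might look like the sketch below.
The hostnames and install path here are hypothetical, and the p4-style
format (one line per remote host: host name, number of slave processes,
full path to pargcl) is an assumption; pargcl/bin/procgroup is the
authoritative sample.

  local 0
  slave1.example.com 1 /usr/local/pargcl/bin/pargcl
  slave2.example.com 1 /usr/local/pargcl/bin/pargcl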
|
Most people will want to begin with the slave-listener interface.
Note that (help 'COMMAND) works for most commands.
There are example parallel programs in pargcl/examples/.
Here is a sample usage of some simple commands:
  (send-message '(+ 3 4) 2)   ; send to slave 2 for evaluation
  (send-message "(+ 5 6)" 1)  ; send to slave 1 for evaluation
  (send-message "(+ 7 8)" 1)  ; send to slave 1 for evaluation
  (receive-message 2)
  (receive-message 1)
  (probe-message 1)           ; Slave 1 has a message pending
  (flush-all-messages)
  (probe-message 1)           ; Slave 1 now has no messages
  (broadcast-message '(setq myrank (MPI-Comm-rank)))  ; rank (id) of slave
  (par-eval '(defun myfnc (x) (sqrt x)))
  (send-message '(dotimes (i 1000000000) (myfnc i)) 1)
  (par-reset)                 ; useful if a slave is not responding
  (par-load "/home/joe/myfile.lsp")  ; load file on all processors
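
For instance, a minimal round trip that farms one small task to each slave
and collects the replies might look like this (a sketch: it assumes the
MPI layer's (MPI-Comm-size) counts all processes, master included, with
slaves at ranks 1 through n):

  (setq num-slaves (- (MPI-Comm-size) 1))
  (dotimes (i num-slaves)                 ; send one task per slave ...
    (send-message `(* ,(1+ i) ,(1+ i)) (1+ i)))
  (dotimes (i num-slaves)                 ; ... then collect one reply each
    (print (receive-message (1+ i))))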

;;;; A remote process is set up for each one specified in your procgroup file.
;;;; Available commands:
;;;;   (send-message <lisp-expr> <optional dest = 1>)
;;;;   (broadcast-message <lisp-expr>)  ; no reply message
;;;;   (receive-message <optional source = (MPI-Any-source)>)
;;;;   (free-msg-buffer <buf>)  ; optimization for greater efficiency; only
;;;;                            ; frees message buffer for re-use in next MPI-recv
;;;;   (probe-message <optional source = (MPI-Any-source)>)
;;;;   (flush-all-messages)     ; flush all messages pending from slaves
;;;;   (bye)   ; modified to also delete remote processes
;;;;   (quit)  ; synonym for (bye)
;;;;   (get-last-source)  ; not recommended, but can be useful in the case
;;;;                      ; of a continuation between master and slave,
;;;;                      ; if the master wants to match initial info with
;;;;                      ; later info after the continuation
;;;; ---
;;;; It works to call (send-message ... 1) n times, and then call
;;;; (receive-message 1) n times, although this may be less efficient.
;;;; Commands sent to the same processor are evaluated in sequence.
;;;; ---
;;;; CAUTION:  If you add the optional tag parameter to send-message,
;;;; note that the slave-listener does not reply if tag = broadcast-tag.
;;;; Also, tags larger than broadcast-tag are interpreted as (vector fixnum)
;;;; or (vector float), and use the corresponding MPI data types.
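;;;; For example (a sketch: it assumes broadcast-tag is bound on the master
;;;; and that the tag is passed as the optional third argument):
;;;;   (send-message '(setq x 42) 1 broadcast-tag)
;;;;   ; evaluated on slave 1, but no reply is sent, so a later
;;;;   ; (receive-message 1) would block waiting for one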

====================

The current implementation has a built-in MPI subset (MPINU).  Note,
however, that one can invoke:  ./configure --with-mpicc=XXX
where XXX is an mpicc script that compiles an application to use MPI.
This allows you to use other implementations of MPI.
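For example, to build against a hypothetical MPICH installation:
  ./configure --with-mpicc=/usr/local/mpich/bin/mpicc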

Note that messages are converted to (vector fixnum), (vector float),
or (by default) strings (general print representations) before being sent.
Conversion from a general object to its print representation as a string
implies significant overhead for very large messages.
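
For large numeric data it can therefore pay to send a specialized vector,
which travels as a native MPI type instead of as text.  A sketch (the tag
value is an assumption; per the CAUTION above, any tag larger than
broadcast-tag selects the (vector fixnum) or (vector float) encoding):

  (setq big (make-array 100000 :element-type 'fixnum :initial-element 0))
  (send-message big 1 (+ broadcast-tag 1))  ; hypothetical tag choice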

Also, a timer will kill any process
that has not received a message for some time (60 minutes by default).

Following is a summary of the master-slave layer, slave-listener layer,
and MPI layer.

=========================================================================
[ The master-slave layer uses the TOP-C model.  For a detailed description
  of the TOP-C model, see http://www.ccs.neu.edu/home/gene/topc.html ]

MASTER SLAVE LAYER