bug #38092: Optionally support distributed memory machines using MPI

Submitter:        None
Submitted:        Tue 15 Jan 2013 09:20:38 AM UTC

Category:         xargs
Severity:         1 - Wish
Item Group:       None
Status:           Postponed
Privacy:          Public
Assigned to:      jay
Originator Name:
Originator Email: -email is unavailable-
Open/Closed:      Closed
Release:          None
Fixed Release:    None


Sun 01 Sep 2019 02:20:04 PM UTC, comment #16: 

I'm going to close this for now, since we didn't get a patch. In any case, if such a patch existed, it's not clear how the maintainers would maintain the relevant functionality or how many of findutils' developers would benefit.


James Youngman <jay>
Group administrator
Fri 25 Jan 2013 07:51:13 AM UTC, comment #15: 

So any chance you could show an example of how to make xargs only process every Nth item where N is a preprocessor define?

Anonymous
Tue 22 Jan 2013 02:19:14 PM UTC, comment #14: 


> Even if there were, you would have to pass this "N" to xargs,
> and that would have to be done by the caller, mpirun.


Correct, although technically N isn't given by mpirun but by the MPI library environment in which the program is executed (typically launched by mpirun, aprun, or some such program).

The only way to know what N is in this environment is to call MPI_* functions. But once I see an example of a stock xargs processing only every Nth argument (with, say, #define N 3), it should be easy to adjust N at runtime with MPI instead.

> The problem is that you would like xargs to process
> every Nth argument on node A while xargs on node B
> processes every (N+1)th argument, etc.


Yes, and as far as I know there is no standard-compliant, portable way to find out which xargs should process which files without calling MPI functions. That's just the nature of the environment.

> it is the job of rank 0 to distribute the data to the others.


Not if each process can find out that information on its own. On all systems that I've tried, all processes have the same view of the filesystem (Lustre, lustre.org, seems to be a popular implementation) and so can open and read the same argument file. They just have to know which items to process and which ones to skip.

> I think you're better off asking in the OpenMPI forum.


Using MPI isn't the problem for me; figuring out how to make xargs.c process only, say, every Nth item in an argument file is. Modifying the loops in main() didn't seem to accomplish this. Once xargs can skip items, it should be very easy to calculate which items to skip using MPI instead.

> the MPI stuff is usually linked dynamically into a program


In this respect MPI is the same as any other library. On some systems the parallel executable must be static, but e.g. on my Ubuntu laptop it is dynamic. In any case, if the MPI portions are behind #defines, non-MPI systems shouldn't be affected in any way: the regular xargs could be compiled as usual, and a configure option could be provided to also build an MPI-aware version named, for example, mpixargs.
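
To make concrete what I'm asking for, here is a minimal stand-alone sketch (not a patch against xargs.c; the stdin-reading loop and the OFFSET constant are just placeholders for illustration) of processing only every Nth item, with N fixed at compile time:

#include <stdio.h>

#define N 3       /* with MPI this would become the communicator size */
#define OFFSET 0  /* ... and this the rank of the calling process */

int
main (void)
{
  char line[4096];
  size_t current_arg = 0;

  /* Read one "argument" per line and keep only every Nth one.  */
  while (fgets (line, sizeof line, stdin))
    {
      if (current_arg % N == OFFSET)
        fputs (line, stdout);
      current_arg++;   /* count every argument, kept or skipped */
    }
  return 0;
}

The same modulo test is what would have to sit in xargs' own argument loop; the open question for me is where in xargs.c that loop lives.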

Anonymous
Tue 22 Jan 2013 08:11:58 AM UTC, comment #13: 


> Is there an easy way to make xargs process only every Nth argument [...]?


Even if there were, you would have to pass this "N" to xargs, and that would have to be done by the caller, mpirun. The problem is that you would like xargs to process every Nth argument on node A while xargs on node B processes every (N+1)th argument, etc.

I still don't have much knowledge about mpirun - and I'm sorry that I don't have more time to dive into it - but from quickly reading through a few docs like http://www.open-mpi.org/video/general/what-is-open-mpi-1up.pdf it seems it is the job of rank 0 to distribute the data to the others. I think you're better off asking in the OpenMPI forum.

Finally, IMO MPI won't make it into upstream xargs because
the MPI stuff is usually linked dynamically into a program
... which will be a killer on 99.999% of all PCs it's used on.

Bernhard Voelker <berny>
Group administrator
Tue 22 Jan 2013 06:46:24 AM UTC, comment #12: 

Is there an easy way to make xargs process only every Nth argument (in addition to all the options given to it)? After hacking on xargs.c for a while, there didn't seem to be.

Anonymous
Thu 17 Jan 2013 07:49:49 AM UTC, comment #11: 

If you could provide a patch that makes xargs process every third file I could probably continue from there.

Anonymous
Wed 16 Jan 2013 03:25:59 PM UTC, comment #10: 


> The other obvious - and more readable - way to do this is to
> split the argument file into however many chunks you need.
> Then just run ...


Couldn't I skip xargs completely with this type of for loop?
for f in $(cat files); do process "$f"; done &

Manually fiddling with argument files kind of defeats the purpose of using xargs in the first place...

Anonymous
Wed 16 Jan 2013 03:21:46 PM UTC, comment #9: 

You can also easily test your suggestions with a stock xargs by installing Open MPI from http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.3.tar.bz2
./configure --prefix=$HOME && make && make install works for me, and most distributions have some version prepackaged.

I've tried some permutations of your suggestions, but so far nothing has worked. If a stock xargs can really be used in an MPI environment, that would be great, but I seriously doubt it.

Anonymous
Wed 16 Jan 2013 03:15:07 PM UTC, comment #8: 


> TOT_RANKS=# some decimal value
> ARGS_PER_RANK=# another decimal value
> xargs --process-slot-var=RANK -P $TOT_RANKS -n $ARGS_PER_RANK
> sh -c 'mpirun -n $RANK yourcommand "$@"' dollarzero


The above doesn't work in practice because an MPI parallel program has to be started by mpirun or an equivalent (at least on all systems that I've used: Cray XT4 & XT5, IBM Blue Gene/P, etc.). Furthermore, since the environment given to each copy of a program started by MPI is identical, xargs cannot know which files a particular instance should process without calling MPI_Comm_rank(...). At least I don't see how else that could be accomplished portably and reliably (i.e. without looking at node hostnames or such, which might not even exist).

I cannot test the above commands, but I suspect they wouldn't work anyway, because the -n given to mpirun specifies the total number of processes to start, not a value ranging from 0 to n - 1.

Anonymous
Wed 16 Jan 2013 02:51:06 PM UTC, comment #7: 

There are several reasons why the other way around doesn't work:

1) If the final program is not MPI-aware, mpirun will still have to call xargs, but the second xargs would have to take its input from command-line arguments, e.g.

xargs -t --arg-file files -P 1 -n 2 mpirun -n 1 xargs -t -P 1 -n 1

will try to run:

mpirun -n 1 xargs -t -P 1 -n 1 file1 file2
^Cmpirun: killing job...

but since no input is given on stdin, the command hangs.
On the other hand, if the final program is MPI-aware, xargs isn't needed at all, but then this distribution logic must be added to each program separately (just as, without xargs, threading logic would have to be added to each program separately).

Additionally, in many cases mpirun must still be the first program executed, so the complete command line for the above would look like:

mpirun -n 1 xargs -t --arg-file files -P 1 -n 2 mpirun -n 1 xargs -t -P 2 -n 1

I'm not even sure which arguments should be given to which program in the above, but clearly this is much more difficult to use than:

mpirun -n 1000 xargs --arg-file files -P 10 -n 1

would be, which leads to:

2) It is not standard: the MPI standard does not specify how programs using MPI should be started, and based on some experiments recursive calls to mpirun just don't work: http://www.mpi-forum.org/docs/mpi22-report/node195.htm

On some systems there is a way to launch separate parallel programs:
aprun -n 12 ./app1 : -n 8 ./app2 : -n 32 ./app3
but again the division of work would have to be done manually, e.g. by creating separate argument files for each xargs.


3) Other reasons probably also exist, but the first two are showstoppers for the other way around anyway.

Anonymous
Wed 16 Jan 2013 02:03:26 PM UTC, comment #6: 

The --process-slot-var option is documented in later findutils-4.5.x releases (you can look at the latest version of the manpage here: http://git.savannah.gnu.org/cgit/findutils.git/tree/xargs/xargs.1).

It looks to me like your requirements would be met by:

TOT_RANKS=# some decimal value
ARGS_PER_RANK=# another decimal value
xargs --process-slot-var=RANK -P $TOT_RANKS -n $ARGS_PER_RANK sh -c 'mpirun -n $RANK yourcommand "$@"' dollarzero

The other obvious - and more readable - way to do this is to split the argument file into however many chunks you need.  Then just run

for chunk in $total_chunks; do
  xargs -a chunks/$chunk mpirun -n $chunk yourcommand
done



James Youngman <jay>
Group administrator
Wed 16 Jan 2013 01:00:23 PM UTC, comment #5: 

I don't know much about mpirun, but if

  mpirun -n 1 xargs --arg-file files -P 12 -n 5 ...

doesn't work because xargs wouldn't split the files,
what about the other way round?

  xargs --arg-file files -P 12 -n 5 mpirun ...

With this, xargs splits the input files into chunks of 5, and mpirun can run the worker program on whichever nodes it likes.

Bernhard Voelker <berny>
Group administrator
Wed 16 Jan 2013 09:42:44 AM UTC, comment #4: 

After googling --process-slot-var I don't see how it could be useful, although I haven't found its exact description. Mainly, I don't think it is possible to distinguish between different invocations of xargs started with mpirun (or an equivalent) in a standard way, since all of the invocations are identical as far as non-MPI programs can tell.

Anyway, if the solution outlined previously works, the required patch would be quite small and would not be noticeable when the program is compiled with a regular gcc.

Anonymous
Wed 16 Jan 2013 07:02:59 AM UTC, comment #3: 

For reference: https://en.wikipedia.org/wiki/Message_Passing_Interface
One implementation of the above (3-clause BSD): http://www.open-mpi.org

Anonymous
Wed 16 Jan 2013 06:57:54 AM UTC, comment #2: 

I didn't find an option called --process-slot-var in the man or info page of my xargs; is it the same as --max-procs or -P? If so, those are not sufficient, because with MPI multiple copies of the same program are launched, one per shared-memory node, with identical command-line arguments and environment.

So, for example, if I have a list of files to process:

for i in $(seq 1 1 1000); do echo file"$i"; done > files

I can easily parallelize their processing on one node by calling xargs (in many environments mpirun or its equivalent must be used to launch any program on the compute nodes):

mpirun -n 1 xargs --arg-file files -P 12 -n 5

But trying to use more than one node doesn't work, because the same xargs is invoked on each node, meaning that each copy of xargs processes all of the files. I.e.:

mpirun -n 1 xargs --arg-file files -P 12 -n 5 | wc -l
200
mpirun -n 2 xargs --arg-file files -P 12 -n 5 | wc -l
400
mpirun -n 3 xargs --arg-file files -P 12 -n 5 | wc -l
600
...
mpirun -n 10 xargs --arg-file files -P 12 -n 5 | wc -l
2000

It would be nice if, in the above case where n copies of xargs are started by MPI, the file list were first divided into n parts, each of which would then be processed by only one xargs in the usual way. In principle, adding support for that should be easy, but I didn't get very far in the few hours I spent hacking on xargs.c.

For a simple program, the modifications would consist of something like this:

At the top:

#ifdef HAVE_MPI
#include <mpi.h>
#endif

Then inside main:

#ifdef MPI_VERSION
int comm_size, rank;
MPI_Init (&argc, &argv);
MPI_Comm_rank (MPI_COMM_WORLD, &rank);
MPI_Comm_size (MPI_COMM_WORLD, &comm_size);
/* optionally check the return value of each MPI_* call... */
#endif

Then, in the place where the arguments are processed, only process e.g. every comm_size-th argument, starting at offset rank:

#ifdef MPI_VERSION
size_t current_arg = 0;
#endif

in the while loop (or for loop doing bc_push_arg?):

#ifdef MPI_VERSION
if (current_arg++ % comm_size == rank) {
#endif

...do the usual stuff here...

#ifdef MPI_VERSION
} else {
  /* not this rank's argument: nothing to do, just skip it */
}
#endif


and just before returning from main:

#ifdef MPI_VERSION
MPI_Finalize ();
#endif
return child_error;

If MPI is not used, the preprocessed source should stay identical to the current version, so these changes shouldn't be too invasive.
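
As a sanity check of the idea itself, here is a minimal self-contained program along the same lines (this is a stand-alone illustration, not a patch against xargs.c; the file handling, buffer size and the printf standing in for the real command are placeholder choices):

#include <mpi.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  int comm_size, rank;
  char line[4096];
  size_t current_arg = 0;
  FILE *f;

  MPI_Init (&argc, &argv);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  MPI_Comm_size (MPI_COMM_WORLD, &comm_size);

  if (argc < 2 || (f = fopen (argv[1], "r")) == NULL)
    {
      if (rank == 0)
        fprintf (stderr, "usage: %s ARG-FILE\n", argv[0]);
      MPI_Finalize ();
      return 1;
    }

  /* Every rank reads the same argument file but handles only every
     comm_size-th line, starting at an offset equal to its own rank.  */
  while (fgets (line, sizeof line, f))
    {
      if (current_arg % comm_size == (size_t) rank)
        printf ("rank %d: %s", rank, line);   /* stand-in for the real command */
      current_arg++;
    }

  fclose (f);
  MPI_Finalize ();
  return 0;
}

Compiled with mpicc and started as e.g. mpirun -n 3 ./a.out files, each line of the argument file should be printed by exactly one rank.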

Anonymous
Tue 15 Jan 2013 05:03:49 PM UTC, comment #1: 

Your feature request is quite unclear. Could you explain what problem you're trying to solve? Why is the --process-slot-var option of xargs not sufficient for your purpose?

James Youngman <jay>
Group administrator
Tue 15 Jan 2013 09:20:38 AM UTC, original submission:  

The number of cores available on a supercomputer can easily be 10 or 100 times more than what is available on a single node, where xargs already works.

It would be nice if xargs could also be run on distributed-memory machines, and adding that shouldn't be difficult. Basically, each instance of xargs would just have to skip some parts of the command line. E.g., given these arguments, and assuming xargs was run with mpirun -n 3 xargs -P 2 -n 1 a b c d ...:
a b c d e f g h i j k
xargs that is MPI rank 0 would execute (in 2 regular processes):
a b
c d
MPI rank 1 would execute:
e f
g h
and rank 2 would execute:
i j
k

Only four calls to MPI functions would probably be required to code this: MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize.
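
A stand-alone sketch of that split for the a..k example above, using exactly those four calls (purely illustrative; how each rank's share would then go through the usual -P/-n batching is not shown):

#include <mpi.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  const char *args[] = { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k" };
  int n = sizeof args / sizeof args[0];
  int size, rank, chunk, start, end, i;

  MPI_Init (&argc, &argv);
  MPI_Comm_size (MPI_COMM_WORLD, &size);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  chunk = (n + size - 1) / size;   /* ceil(n / size): 4 when started with -n 3 */
  start = rank * chunk;
  end = start + chunk < n ? start + chunk : n;

  for (i = start; i < end; i++)    /* this rank's contiguous block */
    printf ("rank %d would pass \"%s\" to the command\n", rank, args[i]);

  MPI_Finalize ();
  return 0;
}

With mpirun -n 3 this prints a-d for rank 0, e-h for rank 1 and i-k for rank 2, matching the split described above.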

Anonymous

 


Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by berny (Posted a comment)
  • -email is unavailable- added by jay (Posted a comment)


4 latest changes:

Date        Changed by  Updated Field  Previous Value      Replaced by
2019-09-01  jay         Status         None            =>  Postponed
                        Assigned to    None            =>  jay
                        Open/Closed    Open            =>  Closed
2013-09-25  jay         Severity       3 - Normal      =>  1 - Wish
