Mon 03 Nov 2014 05:36:35 PM UTC, original submission:
Hi,
There is a bug in the GNU Parallel ("Shellshock" version) feature that allows dynamic modification of the server list ("include/exclude servers on-the-fly", #42983). I received some complaints from the administrator here at the computer lab, so I ssh-ed into some of the remote machines to see what was happening: there were tons of processes running on a single machine, far more than the number of specified slots! Not only that, but stopping GNU Parallel on the local machine did not stop the leftover remote processes either.
Every time the ssh login file is reloaded, a new batch of jobs is immediately launched on every server, regardless of its current load (the number of slots already in use).
Suppose we have the following entries in the ssh login file:
1/server1.net
5/server5.net
and that there are 1 and 5 jobs currently running on server1.net and server5.net, respectively.
If this file is reread for whatever reason, GNU Parallel will launch 1 more job on server1.net (for a total of 2 jobs) and 5 more jobs on server5.net (for a total of 10 jobs there). Moreover, when starting new jobs after the reread, GNU Parallel does not check whether the total number of jobs already running on a host is below its capacity. As a result, server5.net, for example, ends up running 5 new jobs plus however many of the previously started jobs have not yet finished.
It seems that after rereading the ssh login file, GNU Parallel simply loses track of the number of jobs still running on each machine.
Unfortunately, for any large set of reasonably long-running jobs, when the ssh login file changes frequently (which is common with unreliable machines/networks), GNU Parallel ends up launching a zillion jobs on each server, effectively rendering the servers inoperative.
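To see the over-subscription directly, one can compare each host's configured slot count with the number of jobs actually running there. The sketch below is only an illustration, not something GNU Parallel itself provides: it assumes password-less ssh to the listed hosts, an ssh login file named 'hosts.slf' as in the reproduction steps below, and a placeholder process pattern ('sleep [0-9]') that must be adapted to the actual jobs.

# Rough sketch: print configured slots vs. running jobs for every remote
# entry in an ssh login file.  'hosts.slf' and PATTERN are placeholders.
SLF=hosts.slf
PATTERN='sleep [0-9]'                       # adapt to whatever your jobs look like in ps
grep -v '^[[:space:]]*#' "$SLF" | while read -r entry; do
    case "$entry" in
        ""|":"|*/:) continue ;;             # skip blank lines and the localhost entry
        */*) slots=${entry%%/*}; host=${entry#*/} ;;   # "N/host" form
        *)   slots=""; host=$entry ;;                  # bare host name
    esac
    # -n keeps ssh from eating the rest of the host list on stdin
    running=$(ssh -n "$host" "pgrep -fc '$PATTERN'" 2>/dev/null)
    printf '%s: %s slot(s), %s job(s) running\n' "$host" "${slots:-?}" "${running:-0}"
done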
Steps to reproduce:
1) Create a 'hosts.slf' file containing some hosts. For instance:
1/:
1/server1.net
1/server2.net
1/server3.net
2) Run the following GNU Parallel command:
for i in $(seq 1 1000); do perl -e 'print rand(100)'; echo; done | ./parallel --slf hosts.slf 'echo $(hostname): {}; sleep {}'
3) Monitor how many 'sleep' instances are executing on each host:
watch -n 1 "ps aux|grep 'sleep [0-9]\+'"
4) Add or remove a host in 'hosts.slf' (e.g., comment out the '1/:' entry) and save the file. Wait until 'hosts.slf' is reread, which happens when a job finishes.
5) Go back to step 3 and observe that there is now an additional 'sleep' instance on each host, exceeding the maximum number of allowed jobs (one per host in our case).
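Finally, since stopping GNU Parallel on the local machine does not stop the remote processes, the 'sleep' test jobs left behind by the steps above have to be killed by hand. The snippet below is only a rough cleanup sketch under the same assumptions (password-less ssh, 'hosts.slf' formatted as in step 1); the pkill pattern matches the test jobs only and must be adapted before using it on real workloads.

# Rough cleanup sketch: kill leftover 'sleep' test jobs on every remote host
# listed in hosts.slf.  Adapt the pattern if your jobs are not 'sleep <number>'.
grep -v '^[[:space:]]*#' hosts.slf | sed 's|^[0-9]*/||' | while read -r host; do
    case "$host" in ""|":") continue ;; esac            # skip blanks and localhost
    # -n keeps ssh from consuming the remaining host names on stdin
    ssh -n "$host" "pkill -f 'sleep [0-9]'" && echo "cleaned $host"
done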