bug #33352: The pipe option fails to maintain order, despite --keeporder

Submitter:  Davin Shearer <davin>
Submitted:  Thu 19 May 2011 07:41:48 PM UTC

Category:  None
Severity:  4 - Important
Item Group:  None
Status:  Fixed
Privacy:  Public
Assigned to:  tange
Open/Closed:  Closed

Sat 21 May 2011 09:47:33 AM UTC, comment #13: 

Having found the solution, it is suddenly very easy to reproduce the problem - even on other hardware:

perl -e '@x=1 .. 17000; for(1..30) { print "@x\n"}' | pv -qL 200000 |parallel -j2 --pipe --keeporder --block 150000 cat | md5sum

This gives different md5sums for each run.

The problem is that read(STDIN) is being interrupted by a dead child. The chance of this happening is very small if few children die or if read(STDIN) never has to wait for data.

The test above forces data to arrive slowly (using pv), which makes read(STDIN) take a long time - and thus get interrupted by a dead child.
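As a minimal, self-contained illustration of the mechanism (this is not GNU Parallel's code): a SIGCHLD handler firing while a read is blocked on slow input can make the call return early, and if the loop treats that as end of input, the rest of the stream is silently dropped.

    use strict;
    use warnings;
    use POSIX qw(EINTR WNOHANG);

    # A reaper like the one parallel installs: runs whenever a child exits.
    $SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

    my $buf = '';
    while (1) {
        # With slow input (as forced by pv above) this call blocks, so a
        # dying child can interrupt it: it returns undef with $! == EINTR.
        my $n = sysread(STDIN, $buf, 150000, length $buf);
        if (!defined $n) {
            next if $! == EINTR;   # retry instead of treating it as end of input
            die "read: $!";
        }
        last if $n == 0;           # genuine end of file
    }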

Fixed in [59ddc7b7]

Ole Tange <tange>
Group administrator
Sat 21 May 2011 12:25:19 AM UTC, comment #12: 

I believe I fixed it. Please try inserting these two lines and let me know if that solves the issue:

--- /usr/local/bin/parallel     2011-05-19 23:29:58.000000000 +0200
+++ /home/tange/bin/parallel    2011-05-21 02:14:42.000000000 +0200
@@ -120,6 +120,7 @@
     my $recendrecstart = $recend.$recstart;
 
     while(read(STDIN,substr($buf,length $buf,0),$::opt_blocksize)) {
+       reap_if_needed();
        # substr above = append to $buf
        if($::opt_regexp) {
            if($Global::max_number_of_args) {
@@ -160,6 +161,7 @@
                }
            }
        }
+       do_not_reap(); # Disable reaping when reading from stdin
     }
     # If there is anything left in the buffer write it
     write_record_to_pipe(\$buf,$recstart,$recend);
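For readers without the source at hand: the two helpers are not part of the diff. A sketch of how such a pair could work (my illustration using a simple global flag, not GNU Parallel's actual implementation) is to postpone waitpid() while read(STDIN) may block and catch up once a block has arrived:

    use POSIX ":sys_wait_h";

    my $reap_allowed = 1;            # hypothetical flag guarding the reaper

    sub do_not_reap { $reap_allowed = 0 }

    sub reap_if_needed {
        $reap_allowed = 1;
        # catch up on children that exited while reaping was postponed
        1 while waitpid(-1, WNOHANG) > 0;
    }

    $SIG{CHLD} = sub {
        return unless $reap_allowed;   # postponed: read(STDIN) is in progress
        1 while waitpid(-1, WNOHANG) > 0;
    };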

Ole Tange <tange>
Group administrator
Sat 21 May 2011 12:00:45 AM UTC, comment #11: 

Making a copy of all being read:

    open(OUT,">debug.$$") || die;
    my $append;
    while(read(STDIN,$append,$::opt_blocksize)) {
        print OUT $append;
        substr($buf,length $buf,0) = $append;
        # substr above = append to $buf

shows that everything that is read is saved and processed correctly. The problem seems to be that read() does not do what it is supposed to do.

The read() works correctly if write_record_to_pipe() is disabled.

The only actual I/O in write_record_to_pipe is complete_write(), close() and reaper(). And we know the writing is not the problem - so are any of these messing with the STDIN we are reading from?
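For what it is worth, a debugging variant of the read (my sketch, not code from parallel) would tell a genuine end of file apart from an interrupted call:

    # inside the while loop, instead of using read() directly as the condition:
    my $n = read(STDIN, $buf, $::opt_blocksize, length $buf);  # 4-arg read appends at the offset
    if (!defined $n) {
        warn "read failed: $!\n";    # e.g. "Interrupted system call"
        last;
    }
    last if $n == 0;                 # real end of input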


Ole Tange <tange>
Group administrator
Fri 20 May 2011 11:24:20 PM UTC, comment #10: 

I still do not have a test case that fails every time - not even on perl 5.8.

However, it seems timing has a big influence. The chance of the test failing is much higher if I nice the process.

mkdir -p tmp
OPT='--block 100k --nice 10'

perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | md5sum
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --keeporder cat > offrig1
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --recend '' --keeporder cat > offrig2
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > offrig3
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > offrig4
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --tmpdir=tmp --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > offrig5
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --halt-on-error 2 --tmpdir=tmp --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > offrig6
perl -e '$x=join(" ",map {($_ % 1000) ? $_ : $_."\n" } 0..170000); print "$x\n"x100' | \
  parallel $OPT --pipe --keeporder --tmpdir=tmp --halt-on-error 2 cat > offrig7
wait
parallel md5sum ::: off*
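A quick pass/fail check on top of this (my addition, not part of the recipe above) is to count distinct checksums; anything other than one distinct value means at least one run mangled the stream:

md5sum off* | awk '{print $1}' | sort -u | wc -l   # expect 1 if every output is identical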


Ole Tange <tange>
Group administrator
Fri 20 May 2011 10:51:38 PM UTC, comment #9: 

Can you make a test file that I can have, too? E.g. by using seq.

It seems the demeter test below is more likely an issue with using >() with tee, as I cannot reproduce the error otherwise.

Ole Tange <tange>
Group administrator
Fri 20 May 2011 09:31:29 PM UTC, comment #8: 

Test case that shows the problem on the computer demeter (it is not a general test case):

perl -e '$x=join(" ",map {($_ % 10) ? $_ : $_."\n" } 0..500); print "$x\n"x100' | tee >/dev/null \
  >(parallel -j2 --pipe --keeporder --block 1500 --tmpdir=. cat | tee >`hostname`1 >(md5sum)) \
  >(parallel -j2 --pipe --keeporder --block 1500 --recend '' --tmpdir=/tmp cat | tee >`hostname`2 >(md5sum)) \
  >(parallel -j2 --pipe --keeporder --block 1500 --recend '' --tmpdir=tmp/5 --files cat | xargs cat | tee >`hostname`5 >(md5sum)) \
  >(parallel -j1 --pipe --keeporder --block 1500 --recend '' --tmpdir=/tmp cat >correct)

Ole Tange <tange>
Group administrator
Fri 20 May 2011 04:12:51 PM UTC, comment #7: 

So I found a test that I think demonstrates that it might not be a simple matter of flushing before closing.

The size of my test file is 5667673917 bytes.

My command is:

cat testfile | parallel --tmpdir=/localdata/tmp/parallel --pipe --recend '' --keeporder --files cat | tee outfiles | parallel -Xj1 -m cat > out

I totalled the number of bytes in the files listed in outfiles and they equal the number of bytes in out (5667661629).  The short file is 108349 bytes and is identical to the last 108349 bytes of testfile.  All the other files are exactly 1048576 (1M) bytes, and there are 5405 of these.  This leaves 12288 bytes missing.

Details:

cat outfiles | parallel -L1 -Xj1 ls -l | awk '{print $5}' | sort | uniq -c | sort -nbr
5404 1048576
   1 108349
   1 0  

1048576 * 5405 + 108349 = 5667661629 (the size of the temp files equals the size of the out file as we'd expect).

The input file is 5667673917 bytes, leaving a gap of 12288 bytes.
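For reference, the totalling can be done with a one-liner along these lines (my reconstruction, not necessarily the exact command used):

cat outfiles | xargs ls -l | awk '{sum += $5} END {print sum}'   # total bytes across the temp files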

Hope this helps.

Davin Shearer <davin>
Fri 20 May 2011 03:38:52 PM UTC, comment #6: 

rm -rf /localdata/tmp/parallel_test ; mkdir -p /localdata/tmp/parallel_test ; cat bigfile | parallel --tmpdir=/localdata/tmp/parallel_test --recend '' --pipe --keeporder --files cat | xargs ls -l

What I would expect to see here is all files but one containing 1048576 (1M) bytes. Most do, but there are many of length 0 and one that is smaller (it contains the last short block).

Another test:

cat largefile | parallel --tmpdir=/localdata/tmp/parallel_test --pipe --recend '' --keeporder --files cat | tee outfiles | parallel -Xj1 cat > out

The tail of out shows the last block came through OK, though the file containing the last block was not the last file in outfiles; it is, however, the last populated file.  I suppose this is just an artifact of how you're handling your job slots.

Davin Shearer <davin>
Fri 20 May 2011 02:40:55 PM UTC, comment #5: 

The lack of a flush sounds plausible, as close(2) does not do that automagically like fclose does.  I ran your seq test and used comm to inspect the differences, and it appears that bytes are missing from the ends of blocks.  Perhaps strace can shed some further light on the issue.  I'm on an isolated network, so unfortunately I can't paste my observations here very easily.

Davin Shearer <davin>
Fri 20 May 2011 09:29:40 AM UTC, comment #4: 

I have re-tested on perl 5.10, and if I run multiple instances in parallel I can even get it to fail on 5.10, so this is a serious bug.

Ole Tange <tange>
Group administrator
Thu 19 May 2011 10:17:02 PM UTC, comment #3: 

A diff between the correct output and the failing output shows missing bytes. So the order is correct, but some bytes are missing (not whole lines). The missing bytes can be in the middle of the file.

This leads me to believe this is caused by some buffers not being flushed properly in perl 5.8.8. They probably need to be explicitly flushed.

Maybe the missing bytes are all located at the end of a block?
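If the flush hypothesis were right, the cure would look something like this (illustrative only; the relevant filehandle inside parallel is not identified here, so $out_fh is a hypothetical name):

    use IO::Handle;

    $out_fh->autoflush(1);               # hypothetical handle to the child/temp file
    # or, explicitly, before the handle is closed:
    $out_fh->flush() or die "flush: $!";
    close($out_fh)   or die "close: $!";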

Ole Tange <tange>
Group administrator
Thu 19 May 2011 10:05:27 PM UTC, comment #2: 

Thank you for your error report. The function is beta precisely because I want error reports.

I am happy that you get the same results with Libya and FOSDEM, as the code for --pipe has not changed.

I have been unable to reproduce your results on a 17 GB file on:

  • an 8 core 64-bit machine, perl 5.10.0, 32 GB RAM
  • a 24 core 64-bit machine, perl 5.10.0, 128 GB RAM
  • a 48 core 64-bit machine, perl 5.10.1, 128 GB RAM


However, with 'seq 1 10000000 > largefile' on:

  • a 4 core 64-bit machine, perl 5.8.8, 16 GB RAM


I can reproduce your errors.
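Spelled out, the reproduction on that machine is roughly this combination of the commands already given in this thread (my reconstruction, not a verbatim transcript):

seq 1 10000000 > largefile
cat largefile | parallel --pipe --keeporder cat > out
cmp largefile out   # differs on the perl 5.8.8 machine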

My guess is that perl 5.8.8 is the culprit.

I will look into this at some point. For now, you might want to see if upgrading perl fixes the issue.

Ole Tange <tange>
Group administrator
Thu 19 May 2011 08:00:19 PM UTC, comment #1: 

The "Libya" release is suffering from the same issue.  The order and the size are different with --pipe and --keeporder with the tmpdir on large storage and halt on error set to 2.

cat largefile | parallel --pipe --keeporder --tmpdir=/bigstore/tmp --halt-on-error 2 cat >out
cmp largefile out #Fails

The sizes of the files are also different, with out being the smaller of the two (by about 100 KB).

Davin Shearer <davin>
Thu 19 May 2011 07:41:48 PM UTC, original submission:  

For each of these attempts:

cat largefile | parallel --pipe --keeporder cat > out
cat largefile | parallel --pipe --recend '' --keeporder cat > out
cat largefile | parallel --pipe --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > out
cat largefile | parallel --pipe --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > out
cat largefile | parallel --pipe --tmpdir=/bigstore/tmp --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > out
cat largefile | parallel --pipe --halt-on-error 2 --tmpdir=/bigstore/tmp --recend '' --keeporder --files cat | parallel -Xj1 cat {}';'rm {} > out

The input is not the same as the output:
cmp largefile out # fails

I ran this on a 16 core Dell R900 server with 128 GB of memory running Red Hat Enterprise Linux 5.6, perl version 5.8.8, parallel version 20110205.  The file tested is a 5.6 GB newline-delimited CSV file.  I know --pipe is still beta, but I hope this report helps.  I will try the latest version, "Libya".

Davin Shearer <davin>

 


 


 

Carbon-Copy List
  • Added by tange (posted a comment)
  • Added by davin (submitted the item)


    5 latest changes:

    Date        Changed by  Updated field   Previous value => Replaced by
    2011-05-21  tange       Status          Confirmed => Fixed
                            Open/Closed     Open => Closed
    2011-05-19  tange       Severity        3 - Normal => 4 - Important
                            Status          None => Confirmed
                            Assigned to     None => tange
