bug #60928: Performance of sort unexpectedly slow for DIM=2

Submitted by: None
Submitted on: Fri 16 Jul 2021 11:50:46 PM UTC
Category: Octave Function
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Performance
Status: Confirmed
Assigned to: None
Open/Closed: Open
Release: dev
Operating System: GNU/Linux


Mon 19 Jul 2021 10:32:12 PM UTC, comment #6: 

1. The permute approach has the advantage that it can be written once in libinterp/corefcn/data.cc and it would work for all data types. Doing the skip calculations and loops looks like it would need to be implemented in multiple places inside octave-value/ for real, complex, double, single, integer, etc, even if the code is the same. Is this correct? Or can it too be written in only one place with templating?

2. A generic permute approach could be:

# arr is the array to be sorted along dimension D
n = ndims(arr);
p = 1:n;
p([1 D]) = p([D 1]);   # p is now the permutation vector exchanging D and 1
returnvalue = ipermute (sort (permute (arr, p), 1), p);   # always sorts on dimension 1

From some numerical experiments, this seems to be at least as fast as calling "sort (arr, D)" directly.
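
A minimal way to check that claim on a concrete case (the 5x3x5e4 shape from comment #1 and D=2 below are just one illustrative choice):

arr = rand (5, 3, 5e4);
D = 2;
n = ndims (arr);
p = 1:n;
p([1 D]) = p([D 1]);                                       # exchange D and 1
tic; s_direct = sort (arr, D); t_direct = toc
tic; s_perm = ipermute (sort (permute (arr, p), 1), p); t_perm = toc
assert (isequal (s_direct, s_perm))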

Anonymous
Mon 19 Jul 2021 09:39:15 PM UTC, comment #5: 

I didn't look at the code, but I would think that at some point inside the sort method the algorithm (which seems to be timsort, as taken from Python) should be operating on a copy of the raw data buffer as accessible by fortran_vec(). Thus, all that would be needed to sort along a dimension other than the first would be to step through the buffer with a stride equal to the product of the dimensions before the one being sorted along. In the case of sorting a 5x3x5e4 array of doubles along the second dimension, one would call five sorts in a for (j=0;j<5;j++) loop, where every buffer[i] in the algorithm would be replaced by buffer[j+5*i]. This would minimize the copying, but there are situations where it would become inefficient due to bad caching behaviour.
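
To make the stepping idea concrete in Octave terms (this is only a sketch of the indexing pattern, not the C++ implementation; the trailing dimension is kept small here so the interpreted loops finish quickly):

A = rand (5, 3, 1000);                 # same leading shape as the report, shorter third dim
[r, c, np] = size (A);
v = A(:);                              # flat copy, column-major like fortran_vec()
stride = r;                            # product of the dimensions before dim 2
for k = 0:np-1                         # each page along dim 3
  for j = 1:r                          # each row start within the page
    idx = k*r*c + j + stride*(0:c-1);  # one slice along dim 2
    v(idx) = sort (v(idx));
  endfor
endfor
assert (isequal (reshape (v, r, c, np), sort (A, 2)))

The strided access in the inner loop is what can become cache-unfriendly when the stride gets large.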

Thus, I think it would be best to indeed do the permuting (assuming that permute is implemented cache-efficiently) and always sort along the first dimension (with special treatment for the case that the size of the dimension along which to sort is one): in the typical case, a given value will be copied around during sorting about log(N) times, where N is the size of the dimension along which to sort, and N will usually not be very small. As a consequence, the pre- and post-processing by dimensional permuting will be cheap compared to the sorting itself, and on the other hand sorting along the first dimension is always optimally cache-efficient.

For optimal efficiency, one could use the strided approach from the first paragraph for N smaller than, say, 20, and the permuting otherwise.

Michael Leitner <mleitner>
Mon 19 Jul 2021 02:28:58 AM UTC, comment #4: 

It will take some time to check, but I think you are on the right track.  The Octave project began a long, long time ago, before many parts of C++ had been firmed up.  Relevant to this, Octave was coded before move constructors even existed; everything was a copy constructor.  My guess is that it would be possible to improve this by figuring out how to avoid creating new arrays, which involves a lot of memory allocation and copying of values.

Rik <rik5>
Project Administrator
Mon 19 Jul 2021 02:03:20 AM UTC, comment #3: 

In libinterp/octave-value/ov-re-mat.cc, in this function:

octave_matrix::sort (octave_idx_type dim, sortmode mode) const

there is this line:

    return octave_base_matrix<NDArray>::sort (dim, mode);

which, when sorting an NDArray along DIM=2, seems to create a new copy of each row, sort it, and return it. The extra time seems to be spent mostly in copying existing values back and forth.

When called with DIM=1, there seems to be an idx_cache and the line

    return octave_lazy_index (*idx_cache).sort (dim, mode);

seems to be invoked instead. Can someone please confirm?

I think the reason it affects only DIM=2 and not DIM=3 for 3-dimensional arrays is that for DIM=3 the construction of "octave_base_matrix<NDArray>" is called only once, but for DIM=2 it is called for each row. By this hypothesis, sorting higher-dimensional arrays will cause DIM=3 to also slow down, by roughly 1/ncols of the DIM=2 slowdown.

Here's some additional data for sorting a 4D array. As seen, DIM=1 is very fast, but now both DIM=2 and DIM=3 are slow, presumably from all the copy constructions? Also the time for DIM=3 is roughly 1/3 that of DIM=2, because there are 3 columns in the array.

octave:11> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,1); toc
Elapsed time is 0.0273931 seconds.
octave:12> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,2); toc
Elapsed time is 32.3812 seconds.
octave:13> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,3); toc
Elapsed time is 10.9626 seconds.
octave:14> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,4); toc
Elapsed time is 0.238446 seconds.
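
If that hypothesis is right, enlarging dimension 2 should shrink the DIM=3 time relative to the DIM=2 time in proportion. A rough probe (the sizes here are arbitrary and smaller than above, and the expected ratio is only the hypothesis, not a verified result):

for nc = [3 6 12]
  tmp = rand (3, nc, 3, 1e4);
  tic; sort (tmp, 2); t2 = toc;
  tic; sort (tmp, 3); t3 = toc;
  printf ("ncols = %2d: t2 = %.3f s  t3 = %.3f s  t3/t2 = %.3f\n", nc, t2, t3, t3/t2);
endfor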

Is this thinking on the right path?

Anonymous
Sun 18 Jul 2021 09:07:02 PM UTC, comment #2: 

It seems to be made worse for arrays where size(foo,2) is small, even just 1, and size(foo,3) is big.

Test code:

function myfun (tmp)
  tic; sort (tmp,1); t1 = toc;
  tic; sort (tmp,2); t2 = toc;
  tic; sort (tmp,3); t3 = toc;
  disp ([t1 t2 t3])
endfunction

tmp = rand (1,2,1e5); myfun (tmp)
tmp = rand (1,1e5,2); myfun (tmp)
tmp = rand (2,1,1e5); myfun (tmp)
tmp = rand (2,1e5,1); myfun (tmp)
tmp = rand (1e5,1,2); myfun (tmp)
tmp = rand (1e5,2,1); myfun (tmp)

tmp = rand (1,2,5e5); myfun (tmp)
tmp = rand (1,5e5,2); myfun (tmp)
tmp = rand (2,1,5e5); myfun (tmp)
tmp = rand (2,5e5,1); myfun (tmp)
tmp = rand (5e5,1,2); myfun (tmp)
tmp = rand (5e5,2,1); myfun (tmp)

Everything takes only milliseconds or less, as expected, EXCEPT the cases sort(tmp,2) with tmp = rand (2,1,1e5) or tmp = rand (2,1,5e5), which are about 1500X and 7500X slower than the baseline. This is particularly pathological because size(tmp,2) == 1, so sort(tmp,2) doesn't even need to do anything except return the argument as-is.
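
A user-level guard for exactly this singleton case could look like the sketch below (sort_singleton_aware is a hypothetical name, not an existing Octave function):

function [s, i] = sort_singleton_aware (x, dim)
  # If the requested dimension has length 1, sorting is a no-op.
  if (size (x, dim) == 1)
    s = x;
    i = ones (size (x));   # sort indices along a length-1 dimension are all 1
  else
    [s, i] = sort (x, dim);
  endif
endfunction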

Anonymous
Sun 18 Jul 2021 02:50:39 AM UTC, comment #1: 

Confirmed.  This is really odd behavior that seems to be limited to sorting along the second dimension.  I modified your test code to look at sorting along each dimension.

data = rand (5,3,5e4);
tic; s2 = sort (data, 1); bm_sort = toc
#tic; s2 = sort (data, 2); bm_sort = toc
#tic; s2 = sort (data, 3); bm_sort = toc
perm = [1 2 3];
#perm = [2 1 3];
#perm = [3 2 1];
tic; s1 = ipermute (sort (permute (data, perm), 1), perm); bm_perm = toc
if (! isequal (s1, s2))
  disp ("s1 != s2");
endif

Timings for sorting along dimension 1 are already strange.

bm_sort = 0.016349
bm_perm = 0.011381

In this case, the benchmark for the permute strategy has a call to permute(), the exact same call to sort(), and a call to ipermute().  How can that take less time than the single call to sort?

Timings for dimension 2 are the real issue.

bm_sort = 3.8263
bm_perm = 0.016575

The call to sort() is 231X slower than using the permute strategy.

Finally, sorting on dimension 3 seems about the same.

bm_sort = 0.085065
bm_perm = 0.072229

Rik <rik5>
Project Administrator
Fri 16 Jul 2021 11:50:46 PM UTC, original submission:  

Hello,

The time performance of the sort() function is very non-intuitive for 3-dimensional arrays when trying to sort each row: it can be something like 1000 times slower than sorting each column. I am assuming some CPU cache effect is at work here, but am dubious whether a 1000x difference is attributable to that alone.

Test:

tmp = rand(5,3,5e5);
tic; tmp2 = sort(tmp,2); toc
tic; tmp1 = permute (sort (permute (tmp, [2 1 3]), 1), [2 1 3]); toc
assert(all(all(all(tmp1 == tmp2))))

Time difference:

Elapsed time is 0.154699 seconds.
Elapsed time is 152.283 seconds.

That's a 1000x difference between sorting rows and sorting columns, even after the extra calls to permute.

One workaround, therefore, is to always wrap the sort() between two calls to permute(), so that the sort only ever happens down the first dimension, which seems to be fast. I do not know whether Matlab has the same problem.

If the above is expected behavior for sort(), please consider calling permute() from inside sort to get that overall performance boost. Maybe only if the input has 3 or more dimensions and is bigger than say 2^16 elements? I will accept the dev team's decision either way.
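
A rough user-level sketch of that dispatch (the ndims >= 3 and 2^16 thresholds are just the numbers suggested above, and sort_maybe_permute is a hypothetical name; an actual fix would presumably go into the C++ sources instead):

function s = sort_maybe_permute (x, dim)
  if (ndims (x) >= 3 && numel (x) > 2^16 && dim != 1)
    p = 1:ndims (x);
    p([1 dim]) = p([dim 1]);                      # swap dim with dimension 1
    s = ipermute (sort (permute (x, p), 1), p);   # sort down the first dimension
  else
    s = sort (x, dim);
  endif
endfunction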

Anonymous

 


 

    Follow 2 latest changes:

    Date        Changed by  Updated Field  Previous Value => Replaced by
    2021-07-18  rik5        Status         None => Confirmed
    2021-07-18  rik5        Summary        Performance of sort, not sure if this is expected behavior? => Performance of sort unexpectedly slow for DIM=2
