bug #60928: Performance of sort unexpectedly slow for DIM=2

Submitter:  None
Submitted:  Fri 16 Jul 2021 11:50:46 PM UTC

Category:  Octave Function
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Performance
Status:  Fixed
Assigned to:  None
Originator Name:
Originator Email:  -email is unavailable-
Open/Closed:  Closed
Release:  dev
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Tue 17 Aug 2021 05:11:05 PM UTC, comment #22: 

filter.cc performance was also bad, as the use of the same while-loop pattern would predict.  Timings before and after are shown below.


## Baseline
DIM1: 0.020907
DIM2: 2.158784
DIM3: 0.464140
DIM4: 0.018331
---------------
## division fix
DIM1: 0.020159
DIM2: 0.020605
DIM3: 0.020099
DIM4: 0.018271


A very nice 100X improvement along dimension 2!  See changeset http://hg.savannah.gnu.org/hgweb/octave/rev/e3e0193963ea.

Rik <rik5>
Group administrator
Tue 17 Aug 2021 04:15:26 PM UTC, comment #21: 

Related to this report, there were two FIXME notes in the code asking whether testing for NaN values on integer types (which can never hold NaN; only IEEE floating-point types can) has a performance impact.  I did some testing and found no impact, so I removed the FIXME notes and left a comment telling future programmers that it is not necessary to special-case the code for integer versus floating-point values.  See http://hg.savannah.gnu.org/hgweb/octave/rev/ac5e1b64f8c9.
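
One way to see why no impact is measurable: the sort_isnan<T> test seen in the gather loops below is a template, so for integer types it can collapse to a constant that the compiler optimizes away.  A minimal sketch of that mechanism (illustrative only, not the verbatim Octave source):


// Generic case: integers can never be NaN, so the test is a
// compile-time constant false and the branch in the gather loop
// disappears entirely once inlined.
template <typename T>
inline bool sort_isnan (T) { return false; }

// Only the IEEE floating-point specializations do real work.
template <>
inline bool sort_isnan<double> (double x) { return x != x; }  // true only for NaN

template <>
inline bool sort_isnan<float> (float x) { return x != x; }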

Rik <rik5>
Group administrator
Tue 17 Aug 2021 12:32:03 AM UTC, comment #20: 

I searched for "offset2" through the Octave code base on the assumption that the same pattern might use the same variable name.  I found this in libinterp/corefcn/filter.cc (around line 123):


          octave_idx_type x_offset2 = 0;
          x_offset = num;
          while (x_offset >= x_stride)
            {
              x_offset -= x_stride;
              x_offset2++;
            }
          x_offset += x_offset2 * x_stride * x_len;
        }
      octave_idx_type si_offset = num * si_len;


It might not be a performance bottleneck, but if it doesn't break anything, it might be worth replacing with the same division expression, at your discretion.
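
For illustration, the division-based equivalent of the snippet above could look like this (a sketch only; the actual change committed in comment #22 may differ in detail):


// The while loop above just computes a quotient and a remainder by
// repeated subtraction; integer division does the same in constant time.
octave_idx_type x_offset2 = num / x_stride;   // how many times the loop ran
x_offset = num % x_stride;                    // what was left in x_offset
x_offset += x_offset2 * x_stride * x_len;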

Thanks again for this analysis with sort() and the patch.

Anonymous
Mon 16 Aug 2021 11:32:07 PM UTC, comment #19: 

I ran 100 trials (enough samples for the central limit theorem to apply) of sorting along the first dimension, with and without the special-case code.

Results:


special case: 98 +/- 2 milliseconds
regular case: 109 +/- 3 milliseconds


The slowdown from 98 to 109 milliseconds is about 11%, so it appears to be worth it to have the special case.

I checked in the patch.  Thanks to the original bug reporter for finding this odd behavior and drawing attention to it.

Rik <rik5>
Group administrator
Mon 16 Aug 2021 09:32:35 PM UTC, comment #18: 

Using Rik's patch, the times for "sort (foo, dim)" are consistently faster than "ipermute (sort (permute (foo, p), 1), p)" for the corresponding permutation vector p, roughly twice as fast as the permute hack across all dimensions.

Anonymous
Mon 16 Aug 2021 08:54:38 PM UTC, comment #17: 

Rik, compare your comment #14 to your comment #1. Does the permute strategy now take more time than the plain sorting for DIM=1, as expected, or does it still beat the plain sort?

Anonymous
Mon 16 Aug 2021 08:39:29 PM UTC, comment #16: 

Well done.  I would think the purpose of the code would be even more obvious if the for-loop over j were written as two loops, the inner (j1) counting up to stride and the outer (j2) counting up to dv.numel()/(ns*stride), forming the offset simply as offset = j1 + j2*ns*stride.  Or, rather, incrementing j2 by ns*stride, going up to dv.numel(), and having offset = j1 + j2.
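
For concreteness, a sketch of that second formulation (illustrative only; variable names follow the Array.cc excerpt in comment #12):


// Sketch: iterate the sort segments block by block so that forming
// the offset needs no division at all.  j2 advances in steps of
// ns*stride, so offset = j1 + j2 visits exactly the offsets that the
// original while-loop computation produced.
for (octave_idx_type j2 = 0; j2 < dv.numel (); j2 += ns * stride)
  for (octave_idx_type j1 = 0; j1 < stride; j1++)
    {
      octave_idx_type offset = j1 + j2;
      // ... gather into buf, sort, scatter back, as in Array.cc ...
    }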

Disregarding that, you could test whether the special case for stride==1 is even worth having.  According to your timing data, sorting along the first dimension with the special-case code is about 10% faster than along the other dimensions (no special case).  It is quite plausible that sorting along the first dimension would be fastest even on the non-optimized path, so if the special case buys in the end only, say, 3% added efficiency, it is questionable whether it is worth the complication.  The difference is that in the non-special case the data are copied from the input array into a buffer, sorted there, and copied into the output array, while in the special case they are copied directly into the output array and sorted there.

Michael Leitner <mleitner>
Mon 16 Aug 2021 08:32:34 PM UTC, comment #15: 

Good God! Yes, that makes perfect sense now.  The reason that only dimensions 2 and up were affected was the special code path for dimension 1, not a CPU cache effect.  The while loop explains the quadratic behavior spotted by mleitner (see the estimate below), and also why DIM=3 was n times as fast as DIM=2: the stride length was n times greater inside the while loop; it was not copy construction.  And the fix certainly makes all the timings far more consistent.  My congratulations to rik5 for isolating the real problem and fixing it!
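
To spell out the quadratic behavior: with offset starting at j, the while loop in comment #12 runs floor (j / stride) times, so the total across all loop iterations is


total while-loop iterations = sum over j of floor (j / stride)
                            ~ iter^2 / (2 * stride)


For the 2x2xN example in comment #10 (stride = 2, ns = 2, iter = 2N) that comes to about N^2 inner iterations, matching the observed scaling, and for fixed iter the cost falls off as 1/stride, which is why DIM=3 beat DIM=2.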

Anonymous
Mon 16 Aug 2021 08:08:32 PM UTC, comment #14: 

The discrepancy in timings for dimension 1 was an artifact of testing.  The first sort call engaged the CPU, and the extra load caused the CPU frequency to spike.  The speed remained high during the remainder of the test, which made the subsequent sorts look better.  I threw in an extra call to sort() before the benchmarking just to even things out.  The test code is now


ND = ndims (sdata);

## Do a sort just to get CPU cranking
sort (sdata, 1);

for i = 1 : ND
  tic
  sort (sdata, i);
  bm(i) = toc;
endfor

for i = 1 : ND
  printf ("DIM%d: %f\n", i, bm(i))
endfor


Results are


DIM1: 0.033301
DIM2: 0.036282
DIM3: 0.035667
DIM4: 0.145048


which makes more sense.

Rik <rik5>
Group administrator
Mon 16 Aug 2021 06:53:35 PM UTC, comment #13: 

I replaced the while loop calculation of offset with a calculation using division and multiplication of integers.  I believe the logic is correct.

The old timings for my benchmark data were


## Baseline
DIM1: 0.072642
DIM2: 2.146125
DIM3: 0.478395
DIM4: 0.137138


The new timings are


## patch applied
DIM1: 0.055149
DIM2: 0.038490
DIM3: 0.037952
DIM4: 0.148342


Much better results, and I verified that the sorted output matches that produced by the original algorithm.

Interestingly, dimension 1 is now slightly slower than dimensions 2 or 3.  I think the cause lies outside the Array<T>::sort routine.

Marking as Patch Submitted.

(file #51791)

Rik <rik5>
Group administrator
Mon 16 Aug 2021 04:45:14 PM UTC, comment #12: 

Okay, the slowdown is entirely due to this piece of code


      for (octave_idx_type j = 0; j < iter; j++)
        {
          octave_idx_type offset = j;
          octave_idx_type offset2 = 0;

          while (offset >= stride)
            {
              offset -= stride;
              offset2++;
            }

          offset += offset2 * stride * ns;


For dimension 1, stride is 1, and offset = j takes values from 0 up to 124,999 for my benchmark data.  So on the last iteration, the code reduces to a while loop that executes nearly 125,000 times simply to increment offset2.

I assume that the original coder was trying to calculate


offset = floor (j / stride) * stride * ns;


in such a way as to avoid conversion to double and to avoid an actual division operation.  However, this is causing real problems.
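
For the record, an integer-only equivalent also needs the remainder term (a sketch; the committed patch is file #51791 and may differ in detail):


// One integer division and one remainder replace the O(j/stride)
// while loop: offset2 == j / stride, and j % stride is what was left
// in offset when the loop terminated.
octave_idx_type offset = j % stride + (j / stride) * stride * ns;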

Rik <rik5>
Group administrator
Mon 16 Aug 2021 04:20:19 PM UTC, comment #11: 

I'm narrowing in on the cause of the problem, which now seems to be in the gather phase.

First, I created benchmark data so there would be no differences between various test runs due to actual differences in the data.

The N-D array is of size 5x5x5x5000.  The baseline times for sorting along each dimension were


## Baseline
DIM1: 0.072642
DIM2: 2.146125
DIM3: 0.478395
DIM4: 0.137138


Next, I shut off the special case for dimension 1 and forced everything through the gather, sort, scatter path.  Timings were:


## Everything through second code path
DIM1: 11.184590
DIM2: 2.272407
DIM3: 0.468403
DIM4: 0.146637


Clearly, it was very, very bad for dimensions 1 and 2.

Next I used a std::chrono::high_resolution_clock to bracket the gather, sort, and scatter blocks of code.  For the first three dimensions, the only thing that differs is the stride


  octave_idx_type ns = dv(dim);
  octave_idx_type iter = dv.numel () / ns;
  octave_idx_type stride = 1;

  for (int i = 0; i < dim; i++)
    stride *= dv(i);


The variable ns (number of elements per sort) is 5.  The number of loop iterations is 125,000.  Only the stride changes, from 1 to 5 to 25, depending on the dimension.
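
Concretely, for the 5x5x5x5000 benchmark array the stride loop above yields


DIM1: stride = 1
DIM2: stride = 5
DIM3: stride = 5*5 = 25
DIM4: stride = 5*5*5 = 125


(dimension 4 takes the stride-125 path with ns = 5000 and iter = 125 instead).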

The sort and scatter phases always take ~1 microsecond regardless of dimension (1, 2, or 3).  However, the gather phase takes a maximum of ~250, ~75, and ~9 microseconds for dimensions 1, 2, and 3 respectively.  Clearly something is going on in this phase.

The instrumented code is


          auto start = high_resolution_clock::now();
          octave_idx_type offset = j;
          octave_idx_type offset2 = 0;

          while (offset >= stride)
            {
              offset -= stride;
              offset2++;
            }

          offset += offset2 * stride * ns;

          // gather and partition out NaNs.
          // FIXME: impact on integer types noticeable?
          octave_idx_type kl = 0;
          octave_idx_type ku = ns;
          for (octave_idx_type i = 0; i < ns; i++)
            {
              T tmp = ov[i*stride + offset];
              if (sort_isnan<T> (tmp))
                buf[--ku] = tmp;
              else
                buf[kl++] = tmp;
            }
          auto stop = high_resolution_clock::now ();
          auto duration = duration_cast<microseconds> (stop - start);
          std::cout << "gather: " << duration.count () << std::endl;


I'm not sure what the while loop computing offset and offset2 is doing, but I don't like the look of it.  That's the next benchmark timing to run.

Rik <rik5>
Group administrator
Sun 15 Aug 2021 09:10:57 PM UTC, comment #10: 

Further, can you please point out to me where the sort routine that is called, e.g., in line 1812 (the one that actually does the sorting) is to be found?  I am not familiar with C++, only C, so I can read the code line by line, but I cannot find my way around the concept of templates etc.

Finally, I really do not think that it is a question of cache efficiency.  If it were, the elapsed times in


d=rand(2,2,100000);
tic;sort(d,1);toc
tic;sort(d,2);toc


should be the same: cache lines today are 64 bytes, meaning 8 doubles, so in both cases you have the same data transfer between main memory and cache.  But what I see is 0.018 sec for the first sort and 7.03 sec for the second.

And here is a very drastic oddity that gives a strong indication of what is going wrong: the elapsed time very much looks like it grows with the square of N, the third dimension of d (100000 in the example above): it is 1.74 sec for N=50000, 7.03 sec for N=100000, and 28.09 sec for N=200000, i.e. each doubling of N quadruples the time.  On the other hand, sorting along the first dimension is rather sub-linear, with time readings of 0.012, 0.018, and 0.028 seconds.

This cannot be right, as it is just as hard to sort the first of the N 2x2 matrices along the second dimension as the last.  The quadratic behaviour looks as if the whole result array were copied around for each elemental sort: it has a length proportional to N, and it is done N times.  Please check whether you can reproduce this, and if yes, the fix should be quite simple.

Michael Leitner <mleitner>
Sun 15 Aug 2021 08:51:59 PM UTC, comment #9: 

First: Interestingly, in the email Rik's comment was not truncated. I give here the part after "then" that belongs to comment #7:


dim = 1
dv = [3, 4, 5, 6];


When sorting over the first dimension the stride is 1 which means the subsequent code takes a shortcut and can just grab the data from the source matrix one value after another and then place the sorted data into the output matrix one after another.  Cache efficiency makes two appearances here.

When not sorting over the first dimension, the correct data has to be gathered from the source matrix, sorted, and then scattered to the destination matrix. The code is complicated by the addition of NaN processing, but you can see how the gather/scatter is working and how it could potentially be tremendously inefficient.


          // gather and partition out NaNs.
          // FIXME: impact on integer types noticeable?
          octave_idx_type kl = 0;
          octave_idx_type ku = ns;
          for (octave_idx_type i = 0; i < ns; i++)
            {
              T tmp = ov[i*stride + offset];
              if (sort_isnan<T> (tmp))
                buf[--ku] = tmp;
              else
                buf[kl++] = tmp;
            }

          // sort.
          lsort.sort (buf, kl);

          if (ku < ns)
            {
              // NaNs are in reverse order
              std::reverse (buf + ku, buf + ns);
              if (mode == DESCENDING)
                std::rotate (buf, buf + ku, buf + ns);
            }

          // scatter.
          for (octave_idx_type i = 0; i < ns; i++)
            v[i*stride + offset] = buf[i];


Seems like this can be avoided by using the equivalent of permute/ipermute, since those functions are also templated code in Array.cc.

Michael Leitner <mleitner>
Sat 14 Aug 2021 03:18:37 PM UTC, comment #8: 

My comments were cut off, which is a shame because I had written quite a bit.  I have no energy to reproduce them, so here is a brief summary.

The gather/scatter routine when stride != 1 is shown below (Array.cc:1843)


          // gather and partition out NaNs.
          // FIXME: impact on integer types noticeable?
          octave_idx_type kl = 0;
          octave_idx_type ku = ns;
          for (octave_idx_type i = 0; i < ns; i++)
            {
              T tmp = ov[i*stride + offset];
              if (sort_isnan<T> (tmp))
                buf[--ku] = tmp;
              else
                buf[kl++] = tmp;
            }

          // sort.
          lsort.sort (buf, kl);

          if (ku < ns)
            {
              // NaNs are in reverse order
              std::reverse (buf + ku, buf + ns);
              if (mode == DESCENDING)
                std::rotate (buf, buf + ku, buf + ns);
            }

          // scatter.
          for (octave_idx_type i = 0; i < ns; i++)
            v[i*stride + offset] = buf[i];


There are two potential cache inefficiencies, one at the gather stage and one at the scatter stage.  If the first dimension is being sorted, then the data values are arranged one after another and no calculation is necessary to retrieve the correct values.  Moreover, the scatter phase can be skipped entirely.  Instead of gathering data into a temporary buffer, sorting the temp buffer, and then copying data from the temp buffer to the destination (the output memory), the output memory pointer can just be handed to the sort routine.  That also improves efficiency.
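
A sketch of what that stride == 1 fast path can look like (illustrative only; it reuses the variable names from the excerpt above and omits the NaN handling, so it is not the verbatim Array.cc code):


// With stride == 1 each sort segment is contiguous: copy it straight
// into the output buffer and sort it in place.  No gather buffer and
// no scatter loop are needed.  (std::copy_n is from <algorithm>.)
for (octave_idx_type j = 0; j < iter; j++)
  {
    std::copy_n (ov + j*ns, ns, v + j*ns);  // contiguous copy in
    lsort.sort (v + j*ns, ns);              // sort directly in the output
  }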

Given that Array<T>::permute is templated code in Array.cc, I think the change can occur quite easily.

Rik <rik5>
Group administrator
Sat 14 Aug 2021 03:05:55 PM UTC, comment #7: 

The Octave architecture is built around a library of mathematical routines (in liboctave) and an interpreter (in libinterp).  The structure was created this way so that, if desired, programmers could write their own applications in whatever language they want (Fortran, C, C++, others) and link against the Octave library (liboctave).

If a performance change is made in an m-file (a new sort.m which does the permute and then calls the existing C++ sort in libinterp) or in libinterp/corefcn/data.cc (also in libinterp), then the sort algorithm will still be slow for programmers who link directly against liboctave and call the sort routines on Array objects.  I will say that I don't think many people use liboctave directly, but it would still be nice to preserve that capability.

Based on the discussion above, I think any fix should be in liboctave.  There are a few specializations of sort, such as for Range objects, but the principal function is templated code in Array.cc, so it would still require basically just one change in one location.

The code to review is in liboctave/array/Array.cc:1762.  It seems that the issue is related to cache-line efficiency.  The code begins by calculating a stride: the distance between consecutive data points of the sort dimension in an N-D array.


  octave_idx_type ns = dv(dim);
  octave_idx_type iter = dv.numel () / ns;
  octave_idx_type stride = 1;

  for (int i = 0; i < dim; i++)
    stride *= dv(i);


In the code above, dim is the 0-based dimension to sort on and dv is the dimension vector for the array.  For this example,


tmp = rand (3,4,5,6);
sort (tmp, 2)


then


dim = 1
dv = [3, 4, 5, 6];

When sorting over the first dimension the stride is 1 which means the subsequent code takes a shortcut and can just grab the data from the source matrix one value after another and then place the sorted data into the output matrix one after another.  Cache efficiency makes two appearances here.

When not sorting over the first dimension, the correct data has to be gathered from the source matrix, sorted, and then scattered to the destination matrix.  The code is complicated by the addition of NaN processing, but you can see how the gather/scatter is working and how it could potentially be tremendously inefficient.

          // gather and partition out NaNs.
          // FIXME: impact on integer types noticeable?
          octave_idx_type kl = 0;
          octave_idx_type ku = ns;
          for (octave_idx_type i = 0; i < ns; i++)
            {
              T tmp = ov[i*stride + offset];
              if (sort_isnan<T> (tmp))
                buf[--ku] = tmp;
              else
                buf[kl++] = tmp;
            }

          // sort.
          lsort.sort (buf, kl);

          if (ku < ns)
            {
              // NaNs are in reverse order
              std::reverse (buf + ku, buf + ns);
              if (mode == DESCENDING)
                std::rotate (buf, buf + ku, buf + ns);
            }

          // scatter.
          for (octave_idx_type i = 0; i < ns; i++)
            v[i*stride + offset] = buf[i];


Seems like this can be avoided by using the equivalent of permute/ipermute, since those functions are also templated code in Array.cc.
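
For illustration, a permute-based fallback might look roughly like this (a hypothetical sketch assuming Array<T>::permute takes a permutation vector plus an inverse flag; the fix eventually committed took the division route instead, see comment #13):


// Swap the sort dimension with dimension 0, sort along the now
// contiguous first dimension, then apply the inverse permutation.
int nd = ndims ();
Array<octave_idx_type> pvec (dim_vector (nd, 1));
for (int i = 0; i < nd; i++)
  pvec(i) = i;
pvec(0) = dim;
pvec(dim) = 0;

Array<T> tmp = this->permute (pvec).sort (0, mode);
return tmp.permute (pvec, true);  // the equivalent of ipermute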

Rik <rik5>
Group administrator
Mon 19 Jul 2021 10:32:12 PM UTC, comment #6: 

1. The permute approach has the advantage that it can be written once in libinterp/corefcn/data.cc and it would work for all data types.  Doing the skip calculations and loops looks like it would need to be implemented in multiple places inside octave-value/ for real, complex, double, single, integer, etc., even if the code is the same.  Is this correct?  Or can it, too, be written in only one place with templating?

2. A generic permute approach could be:

# arr is the array to be sorted along dimension D
n = ndims(arr);
p = 1:n;
p([1 D]) = p([D 1]);   # p is now the permutation vector exchanging D and 1
returnvalue = ipermute (sort (permute (arr, p), 1), p);   # always sorts on dimension 1

From some numerical experiments, this seems to be at least as fast as calling "sort (arr, D)" directly.

Anonymous
Mon 19 Jul 2021 09:39:15 PM UTC, comment #5: 

I didn't look at the code, but I would think that at some point inside the sort method the algorithm (which is timsort as taken from Python, it seems) should be operating on a copy of the raw data buffer as accessible by fortran_vec().  Thus, all that would be needed to sort along dimensions other than the first would be a stepping equal to the product of the dimensions before the one along which is sorted.  In the case of sorting a 5x3x5e4 array of doubles along the second dimension, one would call five sorts in a for (j=0;j<5;j++) loop, where every buffer[i] in the algorithm would be replaced by buffer[j+5*i].  This would minimize the copying, but there are situations where it would become inefficient due to bad caching behaviour.

Thus, I think it would be best to indeed do the permuting (assuming that permute is implemented cache-efficiently) and always sort along the first dimension (with special treatment for the case that the size of the dimension along which to sort is one): in the typical case, a given value will be copied around during sorting about log(N) times, where N is the size of the dimension along which to sort, and N will usually not be very small.  As a consequence, the pre- and post-processing by dimensional permuting will be cheap compared to the sorting, while sorting along the first dimension is always optimally cache-efficient.

For optimal efficiency, one could follow the idea of the first paragraph for N smaller than, say, 20, and otherwise permute.

Michael Leitner <mleitner>
Mon 19 Jul 2021 02:28:58 AM UTC, comment #4: 

It will take some time to check, but I think you are on the right track.  The Octave project began a long, long time ago before many parts of C++ had been firmed up.  Relevant to this, Octave was coded before move constructors even existed--everything was a copy constructor.  My guess is that it would be possible to improve this by figuring out how not to create new arrays which involves a lot of memory allocation and copying of values.

Rik <rik5>
Group administrator
Mon 19 Jul 2021 02:03:20 AM UTC, comment #3: 

In libinterp/octave-value/ov-re-mat.cc, in this function:

octave_matrix::sort (octave_idx_type dim, sortmode mode) const


there is this line:

    return octave_base_matrix<NDArray>::sort (dim, mode);


which seems to create a new copy of each row when sorting an NDArray along DIM=2, sort it, and return it.  The extra time seems to be spent mostly in copying existing values back and forth.

When called with DIM=1, there seems to be an idx_cache and the line

    return octave_lazy_index (*idx_cache).sort (dim, mode);

seems to be invoked instead. Can someone please confirm?

I think the reason it affects only DIM=2 and not DIM=3 for 3-dimensional arrays is that for DIM=3 the construction of "octave_base_matrix<NDArray>" is called only once, but for DIM=2 it is called for each row.  By this hypothesis, sorting higher-dimensional arrays will cause DIM=3 to slow down as well, by roughly 1/ncols of the DIM=2 slowdown.

Here's some additional data for sorting a 4-D array.  As seen, DIM=1 is very fast, but now both DIM=2 and DIM=3 are slow, presumably from all the copy construction?  Also, the time for DIM=3 is roughly 1/3 that of DIM=2, because there are 3 columns in the array.

octave:11> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,1); toc
Elapsed time is 0.0273931 seconds.
octave:12> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,2); toc
Elapsed time is 32.3812 seconds.
octave:13> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,3); toc
Elapsed time is 10.9626 seconds.
octave:14> tmp = rand (3,3,3,1e5); tic; tmp = sort(tmp,4); toc
Elapsed time is 0.238446 seconds.


Is this thinking on the right path?

Anonymous
Sun 18 Jul 2021 09:07:02 PM UTC, comment #2: 

It seems to be made worse for arrays where size(foo,2) is small, even just 1, and size(foo,3) is big.

Test code:

function myfun (tmp)
  tic; sort (tmp,1); t1 = toc;
  tic; sort (tmp,2); t2 = toc;
  tic; sort (tmp,3); t3 = toc;
  disp ([t1 t2 t3])
endfunction

tmp = rand (1,2,1e5); myfun (tmp)
tmp = rand (1,1e5,2); myfun (tmp)
tmp = rand (2,1,1e5); myfun (tmp)
tmp = rand (2,1e5,1); myfun (tmp)
tmp = rand (1e5,1,2); myfun (tmp)
tmp = rand (1e5,2,1); myfun (tmp)

tmp = rand (1,2,5e5); myfun (tmp)
tmp = rand (1,5e5,2); myfun (tmp)
tmp = rand (2,1,5e5); myfun (tmp)
tmp = rand (2,5e5,1); myfun (tmp)
tmp = rand (5e5,1,2); myfun (tmp)
tmp = rand (5e5,2,1); myfun (tmp)


Everything takes only milliseconds or less, as expected, EXCEPT the cases sort(tmp,2) where tmp = rand (2,1,1e5) or tmp = rand (2,1,5e5), which are about 1500X and 7500X slower than the baseline.  This is particularly pathological because size(tmp,2) == 1, so sort(tmp,2) doesn't even need to do anything except return the argument as-is.
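
As an aside, that singleton case could be short-circuited with a one-line guard near the top of Array<T>::sort (a hypothetical sketch, not part of any committed patch; dv and dim are as in the Array.cc excerpts elsewhere in this thread):


// If the sort dimension has length 1, every segment is trivially
// sorted, so return a (shallow) copy of the input unchanged.
if (dv(dim) == 1)
  return *this;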

Anonymous
Sun 18 Jul 2021 02:50:39 AM UTC, comment #1: 

Confirmed.  This is really odd behavior that seems to be limited to sorting along the second dimension.  I modified your test code to look at sorting along each dimension.


data = rand (5,3,5e4);
tic; s2 = sort (data, 1); bm_sort = toc
#tic; s2 = sort (data, 2); bm_sort = toc
#tic; s2 = sort (data, 3); bm_sort = toc
perm = [1 2 3];
#perm = [2 1 3];
#perm = [3 2 1];
tic; s1 = ipermute (sort (permute (data, perm), 1), perm); bm_perm = toc
if (! isequal (s1, s2))
  disp ("s1 != s2");
endif


Timings for sorting along dimension 1 are already strange.


bm_sort = 0.016349
bm_perm = 0.011381


In this case, the benchmark for the permute strategy has a call to permute(), the exact same call to sort(), and a call to ipermute().  How can that take less time than the single call to sort?

Timings for dimension 2 are the real issue.


bm_sort = 3.8263
bm_perm = 0.016575


The call to sort() is 231X slower than using the permute strategy.

Finally, sorting on dimension 3 seems about the same.


bm_sort = 0.085065
bm_perm = 0.072229



Rik <rik5>
Group administrator
Fri 16 Jul 2021 11:50:46 PM UTC, original submission:  

Hello,

The time performance of the sort() function is very non-intuitive for 3-dimensional arrays when sorting along the rows (it can be something like 1000x slower).  I am assuming some CPU cache effect is at work here, but am dubious whether a 1000x difference is attributable to that alone.

Test:

tmp = rand(5,3,5e5);
tic; tmp2 = sort(tmp,2); toc
tic; tmp1 = permute (sort (permute (tmp, [2 1 3]), 1), [2 1 3]); toc
assert(all(all(all(tmp1 == tmp2))))


Time difference:

Elapsed time is 152.283 seconds.
Elapsed time is 0.154699 seconds.


That's a 1000x difference between sorting rows and sorting columns, even after the extra calls to permute.

One workaround, therefore, is to always wrap the sort() inside two calls to permute(), so that the sort only ever happens down the first dimension, which seems to be fast.  I do not know whether Matlab has the same problem.

If the above is expected behavior for sort(), please consider calling permute() from inside sort to get that overall performance boost, maybe only if the input has 3 or more dimensions and is bigger than, say, 2^16 elements.  I will accept the dev team's decision either way.

Anonymous

 


Attached Files
file #51791:  bug60928.cset added by rik5 (2KiB - application/octet-stream)

 

    Follow 6 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2021-08-16  rik5        Status          Patch Submitted => Fixed
                            Open/Closed     Open => Closed
    2021-08-16  rik5        Attached File   - => Added bug60928.cset, #51791
                            Status          Confirmed => Patch Submitted
    2021-07-18  rik5        Status          None => Confirmed
                            Summary         Performance of sort, not sure if this is expected behavior? => Performance of sort unexpectedly slow for DIM=2
