bug #45890: Sparse A*x and A+B are a bit slow

Submitter:  Ceral Paquet <octavebugs>
Submitted:  Sun 06 Sep 2015 08:39:55 PM UTC

Category:  Libraries
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Performance
Status:  Need Info
Assigned to:  None
Originator Name:
Open/Closed:  Open
Release:  dev
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Tue 28 Aug 2018 07:54:39 PM UTC, comment #24: 

@jwe: Thanks, that makes the multiply/transpose operation fast again.

Rik <rik5>
Group administrator
Tue 28 Aug 2018 06:04:47 PM UTC, comment #23: 

How about the following change?

http://hg.savannah.gnu.org/hgweb/octave/rev/8ac4bfa55053

With more work, maybe the optimization for the compound negation and elementwise logical operators could be preserved when the expression is not eligible for short circuiting.

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 03:34:02 PM UTC, comment #22: 

I just asked jwe to take a look at the patch he applied for bug #54465, which disabled the compound binary operator optimization.

Rik <rik5>
Group administrator
Tue 28 Aug 2018 06:47:04 AM UTC, comment #21: 

From comment #19, I think we should open a new bug report.

Marco Caliari <caliari>
Group Member
Thu 16 Aug 2018 08:10:23 PM UTC, comment #20: 

Just a few comments.
The matrix A does not have exactly NNZ nonzeros (because of repeated entries in the pattern; see the sketch below). This may explain some of the differences.
In the sum A+B, A and B have the same sparsity pattern. Does Octave check for that and specialize the code? Maybe Matlab does. In finite difference or finite element discretizations it is quite natural to sum matrices with the same sparsity pattern, so I would be in favor of such a specialization, if it is not already done.
@Dmitri: in the sparse A*x and A'*x no BLAS is used, just two nested for loops.
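A minimal sketch of the repeated-entries effect (illustration only, with small sizes; sparse() sums duplicate (r,c) pairs, so the stored nonzero count ends up below NNZ):


% Illustration only: duplicate (r,c) entries are summed by sparse(),
% so the actual number of stored nonzeros is somewhat below NNZ.
N = 1000;  NNZ = 20*N;
r = randi (N, NNZ, 1);
c = randi (N, NNZ, 1);
v = randn (NNZ, 1);
A = sparse (r, c, v, N, N);
printf ("requested NNZ = %d, actual nnz(A) = %d\n", NNZ, nnz (A));
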

Marco Caliari <caliari>
Group Member
Wed 15 Aug 2018 10:58:20 PM UTC, comment #19: 

Good catch.  I also find that 4.2.2 is significantly faster for the transpose/multiply.  It turns out that this is a very recent (6 days ago) drop in performance due to this cset


parent: 25753:b5dc88246c02
 disable compound binary operator optimization (bug #54465)


I think I was right that transpose and multiply used to be recognized as a possible compound operator, but the expression is now evaluated as a transpose followed by a multiply, so the timing is equal to the sum of the two individual operations.
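A rough way to see this at the prompt (a sketch only; it assumes A and x are the benchmark matrix and vector, and no timings are reproduced here):


% Sketch: with the fused transpose-multiply disabled, A'*x should cost
% roughly the time of A' plus the time of A*x.
tic; At = A';    t_transpose = toc;
tic; y1 = A*x;   t_multiply  = toc;
tic; y2 = A'*x;  t_combined  = toc;
printf ("A': %.2f s  A*x: %.2f s  A'*x: %.2f s (~ sum of the first two)\n", ...
        t_transpose, t_multiply, t_combined);
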


Rik <rik5>
Group administrator
Wed 15 Aug 2018 09:56:30 PM UTC, comment #18: 

But it is fast in 4.2.2 (same computer):


octave:1> bm_sparse
----------------------------------------------------------------------
GNU Octave Version: 4.2.2
GNU Octave License: GNU General Public License
Operating System: Linux 4.17.12-200.fc28.x86_64 #1 SMP Fri Aug 3 15:01:13 UTC 2018 x86_64
----------------------------------------------------------------------
no packages installed.

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 2.505576 (0.008103)
 2*A   : 0.257853 (0.005682)
 A'    : 2.001430 (0.008049)
 A+B   : 0.841132 (0.006074)
 A*x   : 0.425208 (0.006004)
 A'*x  : 0.160962 (0.004781)


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 15 Aug 2018 09:05:43 PM UTC, comment #17: 

Here is "official" benchmark:


----------------------------------------------------------------------
GNU Octave Version: 4.4.1 (hg id: 8800e167c665)
GNU Octave License: GNU General Public License
Operating System: Linux 4.17.12-200.fc28.x86_64 #1 SMP Fri Aug 3 15:01:13 UTC 2018 x86_64
----------------------------------------------------------------------
no packages installed.

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 3.138651 (0.007312)
 2*A   : 0.307419 (0.005336)
 A'    : 2.439559 (0.001288)
 A+B   : 0.845855 (0.001971)
 A*x   : 0.389937 (0.002420)
 A'*x  : 2.844413 (0.004723)


For reference, a simple test:


octave:1> a=randn(4000);
octave:2> tic; inv(a)*a; toc
Elapsed time is 7.02823 seconds.


(OpenBLAS, 4 threads)

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 15 Aug 2018 08:46:44 PM UTC, comment #16: 

I see the same as you:


----------------------------------------------------------------------
GNU Octave Version: 5.0.0 (hg id: 95eb72d50fb0)
GNU Octave License: GNU General Public License
Operating System: Linux 4.17.0-1-amd64 #1 SMP Debian 4.17.8-1 (2018-07-20) x86_64
----------------------------------------------------------------------
no packages installed.

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 2.631752 (0.058011)
 2*A   : 0.219845 (0.003500)
 A'    : 2.159551 (0.035017)
 A+B   : 0.625339 (0.017899)
 A*x   : 0.229362 (0.007507)
 A'*x  : 2.390337 (0.034360)


Mike Miller <mtmiller>
Group Member
Wed 15 Aug 2018 08:44:27 PM UTC, comment #15: 

I get numbers that are very similar to Rik's (oldish Xeon).


t =

   3.23645   3.28782   3.28776   3.32183   3.19292
   0.30566   0.32043   0.32495   0.30760   0.31066
   2.51998   2.52266   2.61129   2.49808   2.45491
   0.88806   0.88445   0.89706   0.85344   0.85099
   0.42461   0.41365   0.44222   0.41224   0.39768
   2.97178   3.02007   3.04358   2.98643   2.97333


I also noticed that it is all running single-threaded (though I expected A*x and A'*x to utilize multithreaded BLAS).

(this is with 4.4.1)

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 15 Aug 2018 08:40:49 PM UTC, comment #14: 

Oops, I've attached bm_sparse.m.

I don't know why the results are so different for


A'*x


Maybe a few others can test just with Octave to see if the results are consistent.

Otherwise, I do agree that the multiply and addition operations are just a little bit slow compared to Matlab.


(file #44788)

Rik <rik5>
Group administrator
Wed 15 Aug 2018 08:25:30 PM UTC, comment #13: 

@Rik: I do not see your bm_sparse.m, but I ran the original test. I do not see the problem with A'*x at all: my A'*x is faster than A*x, as in the original report. In general, my results agree with the original report. About A+B, I thought the difference was due to multithreading in Matlab, but this is not true; Matlab is faster even when forced to use a single thread (matlab -singleCompThread). To summarize:


octave:7> bm_sparse
----------------------------------------------------------------------
GNU Octave Version: 5.0.0 (hg id: 8df89a90fdba+)
GNU Octave License: GNU General Public License
Operating System: Linux 4.4.0-131-generic #157~14.04.1-Ubuntu SMP Fri Jul 13 08:53:17 UTC 2018 x86_64
----------------------------------------------------------------------
no packages installed.

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 3.138752 (0.058661)
 2*A   : 0.215353 (0.006110)
 A'    : 2.566570 (0.058285)
 A+B   : 0.744875 (0.027312)
 A*x   : 0.420970 (0.012134)
 A'*x  : 0.194607 (0.009360)


and matlab -singleCompThread


MATLAB Version: 9.3.0.713579 (R2017b)

[skip]

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 3.604938 (0.018558)
 2*A   : 0.149879 (0.001227)
 A'    : 2.547904 (0.097282)
 A+B   : 0.378851 (0.001527)
 A*x   : 0.243945 (0.004315)
 A'*x  : 0.177749 (0.007021)


Marco Caliari <caliari>
Group Member
Wed 15 Aug 2018 04:32:00 PM UTC, comment #12: 

I don't think octave_quit is the problem.  I removed every instance in Sparse-op-defs.h, recompiled, and re-ran the benchmark.  The results are essentially unchanged:


----------------------------------------------------------------------
GNU Octave Version: 5.0.0 (hg id: hg-id-disabled)
GNU Octave License: GNU General Public License
Operating System: Linux 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64
----------------------------------------------------------------------
Package Name  | Version | Installation directory
--------------+---------+-----------------------
          io  |  2.4.11 | /home/rik/.octavepkg/io-2.4.11
  statistics  |   1.4.0 | /home/rik/.octavepkg/statistics-1.4.0

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 3.961243 (0.004620)
 2*A   : 0.232663 (0.010505)
 A'    : 3.068885 (0.016318)
 A+B   : 0.677771 (0.009605)
 A*x   : 0.528133 (0.014709)
 A'*x  : 3.637228 (0.007716)


The biggest difference was with the ordinary multiply, with a 3% savings, which is most likely just noise (it is within 1.5 SD, so unlikely to be significant).

Rik <rik5>
Group administrator
Wed 15 Aug 2018 04:19:29 PM UTC, comment #11: 

I extracted the benchmarking code from the original post and have uploaded it as bm_sparse.m.

Running the CLI with tip df27088c307f I get


----------------------------------------------------------------------
GNU Octave Version: 5.0.0 (hg id: hg-id-disabled)
GNU Octave License: GNU General Public License
Operating System: Linux 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64
----------------------------------------------------------------------
Package Name  | Version | Installation directory
--------------+---------+-----------------------
          io  |  2.4.11 | /home/rik/.octavepkg/io-2.4.11
  statistics  |   1.4.0 | /home/rik/.octavepkg/statistics-1.4.0

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 3.949250 (0.013439)
 2*A   : 0.233298 (0.006160)
 A'    : 3.075504 (0.006592)
 A+B   : 0.679681 (0.008432)
 A*x   : 0.544324 (0.011615)
 A'*x  : 3.657135 (0.027786)


Since the original issue report was from 2015, could someone try running the script in Matlab to get more up-to-date results?

The only thing that sticks out to me is


 A'*x  : 3.657135 (0.027786)


This looks to be the sum of the transpose operation and the multiply operation.  When calling BLAS/LAPACK, Octave tries to avoid calculating the transpose itself and instead passes an appropriate flag to the underlying library.  Is there something similar that we could do with sparse matrices?
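A minimal sketch of that idea (illustration only; the interpreted loop is far too slow for the benchmark sizes, the point is just the access pattern that avoids forming A' explicitly):


% Sketch only: for column-oriented sparse storage, (A'*x)(j) = A(:,j)'*x,
% so the product can be accumulated column by column without ever
% materializing A'.  A real change would live in the C++ sparse code.
y = zeros (columns (A), 1);
for j = 1:columns (A)
  y(j) = A(:,j)' * x;
endfor
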

Rik <rik5>
Group administrator
Fri 11 Sep 2015 02:55:08 PM UTC, comment #10: 

In the example, a_nr is very large compared to a_nc, but A*x still takes less than a second. I think that is an acceptable time to wait before an interruption. Anyway, if you modify x into


x = randn(N,100);


then I see, for A*x, 9 seconds for Matlab and 40 seconds for Octave 4.0.0. With the original vector x, 0.25 vs. 0.36.
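For completeness, the modified test is roughly the following (a sketch; r, c, v, and N are generated as in the original script):


% Sketch of the multi-column variant; r, c, v, N as in the original benchmark.
A = sparse (r, c, v, N, N);
x = randn (N, 100);
tic; y = A*x; toc    % sparse times a dense N-by-100 matrix instead of a vector
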

Marco

Marco Caliari <caliari>
Group Member
Fri 11 Sep 2015 02:12:04 PM UTC, comment #9: 

It would probably work OK in many cases, but what if a_nr is very large compared to a_nc?  In that case, interrupting would be less responsive.

It would be easy enough to do some tests just to see whether removing it completely actually makes a significant difference.

Note that octave_quit is an inline function that just does


  if (octave_signal_caught)
    {
      octave_signal_caught = 0;
      octave_handle_signal ();
    }


so it should just be checking a variable value and not performing an actual function call.  Given the other things that happen in the inner loop in SPARSE_FULL_MUL, I'd be surprised if that makes a significant difference, but I suppose it could.

John W. Eaton <jwe>
Group administrator
Fri 11 Sep 2015 01:51:28 PM UTC, comment #8: 

Looking at SPARSE_FULL_MUL, the only thing I notice is


octave_quit ();


inside the loop over the number of rows of the second term of the multiplication (a vector in the test we are considering). I guess it allows the calculation to be interrupted with Ctrl+C. How much time does it take? Wouldn't it be enough to move it into the previous loop?

Marco

Marco Caliari <caliari>
Group Member
Tue 08 Sep 2015 10:30:55 AM UTC, comment #7: 

FYI, here is a comparison of your benchmark using Octave's sparse and sparsersb:



GNU Octave Version: 4.1.0+
prog (@sparsersb)

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 0.733759 (0.023204)
 2*A   : 0.023806 (0.000919)
 A'    : 0.049507 (0.001497)
 A+B   : 0.028259 (0.002406)
 A*x   : 0.003106 (0.000072)
 A'*x  : 0.024023 (0.001079)


GNU Octave Version: 4.1.0+
prog (@sparse)

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 1.120718 (0.011892)
 2*A   : 0.002711 (0.000525)
 A'    : 0.002250 (0.000305)
 A+B   : 0.007880 (0.000594)
 A*x   : 0.005876 (0.000045)
 A'*x  : 0.003467 (0.000029)


In order to run these tests I modified your
benchmark code as follows:


function prog (sparsefun = @sparse)

N = 1000000;
NNZ = 20*N;
NRUNS = 5;

for j = 1:NRUNS

r = randi(N,NNZ,1);
c = randi(N,NNZ,1);
v = randn(NNZ,1);
x = randn(N,1);

tic; A = sparsefun (r,c,v,N,N); t(1,j) = toc;
tic; B = 2*A;                   t(2,j) = toc;
tic; A = A';                    t(3,j) = toc;
tic; B = A+B;                   t(4,j) = toc;
tic; y = A*x;                   t(5,j) = toc;
tic; y = A'*x;                  t(6,j) = toc;

end

% ignore 1st run
av = mean(t(:,2:end),2);
sd = std(t(:,2:end),0,2);

ver
fprintf('\nN=%i NNZ=%i NRUNS=%i\n',N,NNZ,NRUNS);
fprintf(' sparse: %f (%f)\n', av(1),sd(1))
fprintf(' 2*A   : %f (%f)\n', av(2),sd(2))
fprintf(' A''    : %f (%f)\n',av(3),sd(3))
fprintf(' A+B   : %f (%f)\n', av(4),sd(4))
fprintf(' A*x   : %f (%f)\n', av(5),sd(5))
fprintf(' A''*x  : %f (%f)\n',av(6),sd(6))

end


Carlo de Falco <cdf>
Group Member
Tue 08 Sep 2015 09:41:08 AM UTC, comment #6: 

Hi,

Not sure whether you are interested, but if you really care about the speed of sparse matrix * vector operations, you may want to try the sparsersb package (which links to librsb):

http://librsb.sourceforge.net

The speedup is usually very impressive and worth the time spent installing the library if you use sparse matrix * vector a lot.
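A minimal usage sketch (it assumes the sparsersb package is installed and uses the same constructor arguments as sparse() in the benchmark):


% Sketch only; assumes the sparsersb package is installed.
pkg load sparsersb
A = sparsersb (r, c, v, N, N);   % drop-in replacement for sparse() here
y = A*x;                         % matrix-vector products are handled by librsb
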




Carlo de Falco <cdf>
Group Member
Tue 08 Sep 2015 09:09:41 AM UTC, comment #5: 

The originators of the code are David Bateman, Andy Adler, and Jaroslav Hajek. Maybe you can get in contact with them regarding code improvements, or feel encouraged to get your hands on the free code yourself.

I have worked on sparse matrix functions (ILU, ICHOL) within Octave myself, so I have a basic knowledge of the sparse matrix code. But anyway, Octave consists of thousands of lines of code; there is no need to know all functions by heart before working on them or answering at the bug tracker.

Kai Torben Ohlhus <siko1056>
Group Member
Tue 08 Sep 2015 08:32:28 AM UTC, comment #4: 

So... you misread and closed my bug report, and don't even know the code?

Ceral Paquet <octavebugs>
Tue 08 Sep 2015 08:14:32 AM UTC, comment #3: 

The Octave core code is, for my taste, very ugly and hardly maintainable; if you want to change something, even just commenting things would help maintainability a lot!

I found two promising macros, SPARSE_FULL_MUL and SPARSE_SPARSE_MUL, in

liboctave/operators/Sparse-op-defs.h

If you want to get into the code, on Linux I recommend the grep command


grep -R "SPARSE_SPARSE_MUL"


to get an idea of where something could be implemented.

Kai Torben Ohlhus <siko1056>
Group Member
Tue 08 Sep 2015 07:13:58 AM UTC, comment #2: 

Hi Kai,

I compiled Octave using ./configure on 64-bit Xubuntu (sadly, I still can't get --enable-64 to work). I left out all non-essential components such as SuiteSparse, so presumably octave is using this code?

~/octave-4.0.0/liboctave/array/Sparse*
~/octave-4.0.0/liboctave/array/MSparse*

I scanned them but can't really figure out what's going on. Are there 2 separate implementations of sparse array? E.g. where is A*x implemented? If you could start me off, I'd be happy to delve into it more - thanks.

(By the way, I think you read the benchmarks wrong - Octave is much faster than Matlab at sparse creation but twice as slow at A+B and 50% slower at A*x.)

Ceral Paquet <octavebugs>
Mon 07 Sep 2015 05:31:14 AM UTC, comment #1: 

Your benchmark looks good to me! Octave's sparse matrix operations are not slower than MATLAB by a factor of 2, and A' even outperforms MATLAB. Only the sparse matrix creation looks improvable to me.

But to really understand your benchmarks you need to tell us more about your system, e.g. which SuiteSparse (https://packages.debian.org/source/jessie/suitesparse) you have installed, the MATLAB/Octave versions, and whether you applied any performance tweaks to MATLAB/Octave.

If you are interested in sparse matrix computations within Octave, you might consider contributing to Octave and finding the potential bottlenecks in the computations you performed.

http://wiki.octave.org/Projects#Sparse_Matrices

Anyway, don't be shy about publishing results like these!

Best,
Kai

Kai Torben Ohlhus <siko1056>
Group Member
Sun 06 Sep 2015 08:39:55 PM UTC, original submission:  

I was just doing some comparisons of sparse matrix operations and noticed that some seem a bit slower than in Matlab. Not serious, but kind of odd considering they are probably using very similar C++ code.

Octave results

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 2.709228 (0.011811)
 2*A   : 0.222145 (0.000448)
 A'    : 1.776688 (0.044113)
 A+B   : 1.081774 (0.007834)
 A*x   : 0.652453 (0.002009)
 A'*x  : 0.292271 (0.000704)

Matlab results

N=1000000 NNZ=20000000 NRUNS=5
 sparse: 8.625574 (0.008490)
 2*A   : 0.169647 (0.002336)
 A'    : 2.126985 (0.025129)
 A+B   : 0.543265 (0.001062)
 A*x   : 0.426534 (0.002292)
 A'*x  : 0.290045 (0.001137)

Code


function prog()

N = 1000000;
NNZ = 20*N;
NRUNS = 5;

for j = 1:NRUNS

r = randi(N,NNZ,1);
c = randi(N,NNZ,1);
v = randn(NNZ,1);
x = randn(N,1);

tic; A = sparse(r,c,v,N,N); t(1,j) = toc;
tic; B = 2*A;               t(2,j) = toc;
tic; A = A';                t(3,j) = toc;
tic; B = A+B;               t(4,j) = toc;
tic; y = A*x;               t(5,j) = toc;
tic; y = A'*x;              t(6,j) = toc;

end

% ignore 1st run
av = mean(t(:,2:end),2);
sd = std(t(:,2:end),0,2);

ver
fprintf('\nN=%i NNZ=%i NRUNS=%i\n',N,NNZ,NRUNS);
fprintf(' sparse: %f (%f)\n', av(1),sd(1))
fprintf(' 2*A   : %f (%f)\n', av(2),sd(2))
fprintf(' A''    : %f (%f)\n',av(3),sd(3))
fprintf(' A+B   : %f (%f)\n', av(4),sd(4))
fprintf(' A*x   : %f (%f)\n', av(5),sd(5))
fprintf(' A''*x  : %f (%f)\n',av(6),sd(6))

end


Ceral Paquet <octavebugs>

 


Attached Files
file #44788:  bm_sparse.m added by rik5 (794B - text/x-matlab)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • dasergatskov (Posted a comment)
  • rik5 (Posted a comment)
  • jwe (Posted a comment)
  • caliari (Posted a comment)
  • cdf (Posted a comment)
  • siko1056 (Posted a comment)
  • octavebugs (Submitted the item)


    11 latest changes:

    Date        Changed by  Updated Field   Previous Value  =>  Replaced by
    2018-08-28  mtmiller    Carbon-Copy     Removed 80942   =>  -
    2018-08-15  rik5        Priority        3 - Low         =>  5 - Normal
    2018-08-15  rik5        Attached File   -               =>  Added bm_sparse.m, #44788
    2015-12-22  mtmiller    Category        None            =>  Libraries
    2015-09-08  siko1056    Status          Wont Fix        =>  Need Info
                            Open/Closed     Closed          =>  Open
                            Release         4.0.0           =>  dev
    2015-09-07  siko1056    Priority        5 - Normal      =>  3 - Low
                            Item Group      None            =>  Performance
                            Status          None            =>  Wont Fix
                            Open/Closed     Open            =>  Closed
