Sun 15 Mar 2015 12:29:12 AM UTC, comment #12:
Just updating the bug summary and status, since no one is able to reproduce this. I just tried the comment #7 test case and am also not able to reproduce it on my system (tested with both ATLAS and OpenBLAS).
This will probably be closed if no one can reproduce or determine what is contributing to the reported performance loss.
|
Tue 04 Nov 2014 10:50:02 PM UTC, comment #11:
I recompiled with ATLAS; the result is much better, but still not ideal:
octave:1> version
ans = 3.6.4
octave:2> for L=1:6
> doit(L*1000);
> end
L = 1000
full : 1.131828
sparse : 0.817876
L = 2000
full : 3.670442
sparse : 3.672441
L = 3000
full : 9.240595
sparse : 9.546549
L = 4000
full : 16.913429
sparse : 18.988113
L = 5000
full : 31.201257
sparse : 34.039825
L = 6000
full : 45.836033
sparse : 55.160614
|
Tue 04 Nov 2014 09:10:57 PM UTC, comment #10:
Well, I don't think I can help anymore, since I can't reproduce this. I was running a self-compiled version of 3.8.2 built with gcc 4.6.3 and arpack 3.0.2. Possibly the older arpack makes a difference, or possibly the newer gcc and gfortran do.
|
Tue 04 Nov 2014 09:00:49 PM UTC, comment #9:
I suspect ACML is to blame. Can you try with a different BLAS?
|
Tue 04 Nov 2014 07:43:11 PM UTC, comment #8:
OK, I have tried this in several ways; the problem persists:
version
ans = 3.8.2
octave:2> for i=1:6
> L=i*1000;doit(L);
> end
L = 1000
full : 0.627905
sparse : 8.175758
L = 2000
full : 4.337341
sparse : 39.021067
L = 3000
full : 9.973484
sparse : 75.483524
L = 4000
full : 23.701398
sparse : 231.474810
L = 5000
full : 44.528232
sparse : 365.127491
L = 6000
full : 57.121317
sparse : 486.797995
I'm running CentOS 6.5 64-bit on an Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz with 8 GB of memory. Octave was compiled with gcc/gfortran 4.4.7-4 and linked against ACML 4.4 gfortran64_mp and arpack-ng 3.1.5.
|
Tue 04 Nov 2014 06:06:19 PM UTC, comment #7:
Summarizing: I was not able to reproduce this bug, and jwe's results also show that eigs on a sparse matrix is faster. There is probably something else going on, such as swapping, that is particular to the bug reporter's machine.
Following up on comment #6, I have rewritten the benchmark to use cputime. It would be useful if the bug reporter could run this code and report the results back here.
Code to cut/paste and execute
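(The original listing was not preserved in this transcript. A minimal sketch of such a benchmark, assuming a doit helper like the one invoked in comments #8 and #11 and the matrix construction from comment #2, might look like this:)
function doit (L)
  # Build the test matrix as in comment #2: roughly 10% dense,
  # symmetric, with a small diagonal shift.
  a = rand (L) > 0.95;
  b = a + a' + 0.001 * eye (L);
  c = sparse (b);
  printf ("L = %d\n", L);
  # Time eigs on the full matrix using CPU time rather than wall-clock time.
  t = cputime;
  eigs (b);
  printf ("full   : %f\n", cputime - t);
  # Time eigs on the sparse version of the same matrix.
  t = cputime;
  eigs (c);
  printf ("sparse : %f\n", cputime - t);
endfunction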
Results on my machine:
|
Tue 04 Nov 2014 02:58:47 PM UTC, comment #6:
Here is what I see:
My system has 16GB of memory.
To see whether this is actually CPU time being used or time spent by the system swapping, I recommend using cputime instead of tic/toc, since tic/toc measures wall-clock time rather than CPU time.
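For example (an illustrative snippet, not part of the original comment, assuming the sparse matrix c from comment #2):
t = cputime;
e = eigs (c);                                  # sparse matrix from comment #2
printf ("eigs used %f seconds of CPU time\n", cputime - t);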
|
Tue 04 Nov 2014 10:52:59 AM UTC, comment #5:
It could be that the L=8000 example in the original submission:
>> tic;e=eigs(c);toc
Elapsed time is 2501.76 seconds.
represents mainly swapping. Could the submitter check this, and tell us how much memory his system has?
|
Mon 03 Nov 2014 11:22:48 PM UTC, comment #4:
On my computer the situation is reversed: with L = 2000 and the example code from comment #2, the sparse case runs much faster than the full one.
This is Octave version 3.8.2 compiled from scratch running on Kubuntu 12.04. The arpack library is version 3.0.2-3.
|
Mon 03 Nov 2014 10:39:04 PM UTC, comment #3:
In the previous comment it should be
L=1000;
>> test
Elapsed time is 0.59783 seconds.
Elapsed time is 0.793859 seconds.
|
Mon 03 Nov 2014 10:36:20 PM UTC, comment #2:
I can confirm it's not a memory issue.
So I tried a rather sparse matrix:
%L=8000
a=rand(L)>0.95;
b=a+a'+0.001*eye(L);
c=sparse(b);
tic; e=eigs(b); toc
tic; e=eigs(c); toc
L=1000;
test
Elapsed time is 0.0945721 seconds.
Elapsed time is 0.841012 seconds.
L=2000;
>> test
Elapsed time is 0.632753 seconds.
Elapsed time is 4.99704 seconds.
L=3000;
>> test
Elapsed time is 1.71377 seconds.
Elapsed time is 13.1236 seconds.
L=4000;
>> test
Elapsed time is 3.08764 seconds.
Elapsed time is 27.1122 seconds.
L=5000;
>> test
Elapsed time is 4.88847 seconds.
Elapsed time is 43.0939 seconds.
L = 6000
>> test
Elapsed time is 8.19819 seconds.
Elapsed time is 65.2827 seconds.
whos
Variables in the current scope:

   Attr Name        Size                     Bytes  Class
   ==== ====        ====                     =====  =====
        L           1x1                          8  double
        a        6000x6000                36000000  logical
        b        6000x6000               288000000  double
        c        6000x6000                42251956  double
        e           6x1                         48  double

Total is 75519003 elements using 366252012 bytes
|
Mon 03 Nov 2014 09:17:08 PM UTC, comment #1:
I don't see comparable performance. I'm hesitant to allocate almost 2 GB of RAM on my aging laptop, but I can get somewhat close in order of magnitude:
I think the slowness here is to be expected, since this matrix is not really sparse at 50% density. At that density it actually takes slightly more memory to store as a sparse matrix than as a full one, and with 64-bit indexing it can take about 1.5 times as much memory.
What's surprising is the huge slowdown you experienced with L=8000. The times for both cases look like they grow quadratically or cubically, just with different constants, which makes me suspect that for L=8000 your system was actually thrashing. Can you confirm that it was not swapping?
Compare the memory usage of Octave and Matlab for the two cases. If they are comparable, the problem may lie elsewhere.
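For instance, here is an illustrative way to check the per-variable storage inside Octave (using a smaller L than the reported 8000 to keep the footprint modest); the same commands also work in Matlab:
L = 2000;               # smaller than the original L = 8000, for illustration
a = rand (L) > 0.5;     # ~50% dense logical matrix
b = a + a';             # full double matrix
c = sparse (b);         # sparse copy of the same data
whos a b c              # compare the Bytes column for full vs sparse storage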
|
Mon 03 Nov 2014 07:03:08 PM UTC, original submission:
eigs of a sparse matrix is much, much slower than of a full matrix.
L=8000;
>> a=rand(L)>0.5;b=a+a';c=sparse(b);
>> tic;e=eigs(b);toc
Elapsed time is 18.6111 seconds.
>> tic;e=eigs(c);toc
Elapsed time is 2501.76 seconds.
I compiled with arpack-ng 3.1.5 and linked with ACML 4.4 gfortran64_mp. The same performance is seen with version 3.6.4 too.
With Matlab on my laptop, eigs has this performance:
L=8000;
a=rand(L)>0.5;b=a+a';c=sparse(b);
tic;eigs(b);toc
Elapsed time is 20.715128 seconds.
tic;eigs(c);toc
Elapsed time is 58.428723 seconds.
|