bug #53140: Solution of a system of linear equations takes forever and hurts OS performance.

Submitter:         None
Submitted:         Tue 13 Feb 2018 10:48:10 AM UTC

Category:          Performance          Severity:          3 - Normal
Priority:          4                    Item Group:        Performance
Status:            Fixed                Assigned to:       None
Originator Name:   Octave user          Originator Email:  -email is unavailable-
Open/Closed:       Closed               Release:           other
Operating System:  Any                  Fixed Release:     None
Planned Release:   None


Mon 02 Apr 2018 04:20:36 AM UTC, comment #42: 

I benchmarked, and the latest changeset performs the same as the previous one: about 95 seconds versus 55 with the current code.  So there is still no reason to change from what we have now.

Rik <rik5>
Group administrator
Fri 30 Mar 2018 04:37:49 PM UTC, comment #41: 

@Rik: I did, sparse_linear_systems2.diff. Can you please benchmark?

(file #43729)

Marco Caliari <caliari>
Group Member
Fri 30 Mar 2018 03:30:44 PM UTC, comment #40: 

@Marco: Can we close this bug now, or do you still want to prepare a change to the guess of the initial number of nonzeros?

Rik <rik5>
Group administrator
Thu 22 Mar 2018 06:44:27 PM UTC, comment #39: 

@Marco: I ran the test 'x = A \ b' with linsolve_test.mat 10 times as a benchmark.  I got the following results:


octave:12> [bm0; bm1; bm2]
ans =

   102.389   102.051    99.073    81.978    82.861    83.698    79.992    79.089    79.394    83.341
    75.471    78.109    78.393    58.081    56.656    56.390    56.522    56.702    56.574    57.008
    95.194    95.683    96.816   100.142    96.531    96.343    99.811    99.955    99.713    96.212


bm0 is the original code; bm1 adds the patch that changes the number of refinement iterations to 1; bm2 is the latest patch, which changes the guess of the number of non-zeros but also changes UMFPACK_ZNAME (solve) to UMFPACK_ZNAME (wsolve).  The performance of bm2 is worse.

I would stop where we are, unless you think it is still worthwhile to change the guess for the initial number of nonzeros while keeping UMFPACK_ZNAME (solve).

Rik <rik5>
Group administrator
Thu 22 Mar 2018 07:36:24 AM UTC, comment #38: 

sparse_linear_systems.diff attached.

(file #43627)

Marco Caliari <caliari>
Group Member
Thu 22 Mar 2018 04:03:47 AM UTC, comment #37: 

@Marco: I pushed the first patch that changes Octave to use just a single iteration (http://hg.savannah.gnu.org/hgweb/octave/rev/ed8090ee632c).  According to comment #36, there is a second patch that implements the rest of the changeset.  However it doesn't seem to be attached to this report.  Could you upload it again?

Rik <rik5>
Group administrator
Fri 16 Mar 2018 12:24:08 PM UTC, comment #36: 

I attach a second patch which implements points 2)-4) in comment #24 (default branch). I do not see a clear gain but, as said before, timings on my system are not reliable.

Marco Caliari <caliari>
Group Member
Sun 04 Mar 2018 10:12:47 PM UTC, comment #35: 

I benchmarked, and using only one iteration is 27% faster.  In terms of numerical difference, the differences are on the order of 1e-20, so that seems acceptable too.

Rik <rik5>
Group administrator
Fri 02 Mar 2018 08:00:30 AM UTC, comment #34: 

@Carlo: sorry for the noise. I realized that it is my system that is simply unreliable for CPU timings. Or my test, which is:


load 'linsolve_test.mat'
x = A\b;
tic, for i = 1:5, x = A \ b; end, toc


I would like to prepare a patch for default which implements points 2)-4) in comment #24, but I will wait for some feedback on the patch already available.

Marco Caliari <caliari>
Group Member
Thu 01 Mar 2018 04:26:30 PM UTC, comment #33: 

@Marco are you compiling both stable and default with the same flags?

Do you have either "--enable-64" or "--disable-64" explicitly set?


Carlo de Falco <cdf>
Group Member
Thu 01 Mar 2018 04:05:28 PM UTC, comment #32: 

Here is a patch for stable which just sets UMFPACK_IRSTEP = 1.
With the same patch, default is slower than stable by about 15 seconds over 245 (6 runs of the test, discarding the first).
After Rik's patch (though not as a consequence of it), I think that default is slower than stable.

(file #43422)

Marco Caliari <caliari>
Group Member
Thu 01 Mar 2018 10:26:35 AM UTC, comment #31: 

I do not know whether the number of iterative refinements is fixed or not. Anyway, I confirm a gain of about 20% just by setting


Control (UMFPACK_IRSTEP) = 1;


Marco Caliari <caliari>
Group Member
Thu 01 Mar 2018 09:56:47 AM UTC, comment #30: 

Marco,

As for setting


Control (UMFPACK_IRSTEP) = 1;

 
The paper discusses a stopping criterion for iterative refinement, but I don't understand whether UMFPACK implements that or just unconditionally performs the given number of refinement steps.

In the latter case I would agree with setting the number to 1 and documenting it; in the former it should not have any significant effect anyway ...

I don't know much about items 2)-4), but from your comments below I would agree with your choices.


Carlo de Falco <cdf>
Group Member
Thu 01 Mar 2018 07:47:23 AM UTC, comment #29: 

@Carlo: thanks. Do you therefore suggest setting


Control (UMFPACK_IRSTEP) = 1;


instead of the default of 2? I would agree. And what is your opinion about the other points 2)-4) in comment #24?

Marco Caliari <caliari>
Group Member
Thu 01 Mar 2018 07:17:07 AM UTC, comment #28: 


>> The residual r has to be computed in higher
>> precision (I do not know the details).


@Marco, just for the sake of completeness: the UMFPACK manual references this paper for iterative refinement:

M. Arioli, J. W. Demmel, and I. S. Duff.
Solving sparse linear systems with sparse backward error.
SIAM J. Matrix Anal. Applic., 10:165–190, 1989.

which in turn cites

Skeel, R. D.,
Iterative Refinement Implies Stability for Gaussian Elimination.
Math. Comp., 35, 1980

and states that (under reasonable hypotheses)

"One step of iterative refinement is enough to guarantee that w [the a-posteriori measure for the element-wise relative backward error] is small [...] even if the residual Ax-b is computed in the same arithmetic precision as Gaussian elimination."

We might want to reference these papers in the docs.



Carlo de Falco <cdf>
Group Member
Wed 28 Feb 2018 04:32:59 PM UTC, comment #27: 

Okay.  Let's document the behavior; the original issue of 10 hours to reach a solution has already been resolved.

Rik <rik5>
Group administrator
Wed 28 Feb 2018 07:41:24 AM UTC, comment #26: 

A single iterative refinement step is


x = A \ b;
r = b - A * x;
xnew = x + (A \ r);


where of course A is factorized only once. The residual r has to be computed in higher precision (I do not know the details). Iterative refinement is a technique generally used in the solution of sparse linear systems, and it "may" improve accuracy. If I set


Control (UMFPACK_IRSTEP) = 1;


instead of the default value 2, I can see a gain of about 10 seconds (20%).

I found a very similar thread here https://github.com/JuliaLang/julia/issues/19500 with the same conclusions (UMFPACK uses iterative refinements and wsolve does not really improve).

I would prefer to keep the refinements, since other programs/languages use them, and to add an explicit description in the documentation, with a hint to rely on a direct factorization if speed is more important than accuracy.
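
For reference, here is a minimal, standalone sketch of how this control setting is expressed through UMFPACK's Control array.  This is not the Octave source: the 2x2 system, the plain C interface (umfpack_di_*), and the <umfpack.h> include path are illustrative assumptions.


// Standalone sketch: limit UMFPACK's iterative refinement to one step.
// Tiny example system, for illustration only.
#include <cstdio>
#include <umfpack.h>

int main ()
{
  // A = [2 1; 1 3] in compressed-column form, b = [3; 4].
  int Ap[] = { 0, 2, 4 };
  int Ai[] = { 0, 1, 0, 1 };
  double Ax[] = { 2.0, 1.0, 1.0, 3.0 };
  double b[] = { 3.0, 4.0 };
  double x[2];

  double Control[UMFPACK_CONTROL], Info[UMFPACK_INFO];
  umfpack_di_defaults (Control);
  // The default is UMFPACK_DEFAULT_IRSTEP (2) refinement steps;
  // allow at most one instead (0 would disable refinement).
  Control[UMFPACK_IRSTEP] = 1;

  void *Symbolic, *Numeric;
  umfpack_di_symbolic (2, 2, Ap, Ai, Ax, &Symbolic, Control, Info);
  umfpack_di_numeric (Ap, Ai, Ax, Symbolic, &Numeric, Control, Info);
  umfpack_di_free_symbolic (&Symbolic);

  // The Control array is passed to the solve, so the IRSTEP
  // setting takes effect here.
  umfpack_di_solve (UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, Control, Info);
  umfpack_di_free_numeric (&Numeric);

  std::printf ("x = [%g, %g]\n", x[0], x[1]);   // expect [1, 1]
  return 0;
}
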

Marco Caliari <caliari>
Group Member
Tue 27 Feb 2018 09:57:47 PM UTC, comment #25: 

What are the refinements that are being implemented?  Do they improve accuracy?  I'm not sure what the trade-off is if refinements are disabled.

Rik <rik5>
Group administrator
Tue 27 Feb 2018 07:52:13 PM UTC, comment #24: 

I think I have a solution now: umfpack_solve performs refinement iterations after the factorization. The parameter UMFPACK_DEFAULT_IRSTEP is set to 2 in umfpack.h and can be overridden through UMFPACK_IRSTEP. The first consequence is that W must have length 5*b_nr, not b_nr (see the documentation at the end of umfpack_wsolve.h). This would explain the strange behavior I partially described in comment #21 (and other strange things not reported). With W of the right length there are no more strange things, but it is not possible to reduce the CPU time. The only possibility is to disable the refinements, with a


Control (UMFPACK_IRSTEP) = 0;


before calling UMFPACK_DNAME (solve). If I do that, I get the same CPU time as with a manual factorization and solve. So now we have to decide:

1) Do we want iterative refinements? From comment #17 it seems that Matlab performs them, too. If we want them, I think they should be documented (stable branch).

2) I would in any case use wsolve instead of solve, with the right allocation of the buffers Wi and W (default branch).

3) I would in any case assemble Bx as described in comment #18 (default branch).

4) I would in any case set the number of nonzeros in the solution to b_nr*b_nc. I do not see a reason to set it to nnz(b); see comment #15 (default branch).

Marco Caliari <caliari>
Group Member
Tue 27 Feb 2018 06:45:00 PM UTC, comment #23: 

@Marco: Is the slowdown repeatable?  If you run it 5 times, is it always the case that the first coding strategy takes 55 seconds and the second one 40?  It might have just been an aberration during the run.

Also, if you complete and post the patch I can try it on my computer to see if it is something strange with versions of gcc or HW.

Rik <rik5>
Group administrator
Tue 27 Feb 2018 02:02:30 PM UTC, comment #22: 

No, it doesn't make much sense to me that there would be a significant change in performance just by changing the order of those allocations.

John W. Eaton <jwe>
Group administrator
Tue 27 Feb 2018 08:14:21 AM UTC, comment #21: 

I see something strange (to me): if I declare


OCTAVE_LOCAL_BUFFER (double, Bx, b_nr);
OCTAVE_LOCAL_BUFFER (double, Xx, b_nr);
OCTAVE_LOCAL_BUFFER (double, W, b_nr);
OCTAVE_LOCAL_BUFFER (octave_idx_type, Wi, b_nr);


I do not see any improvement. On the other hand, with


OCTAVE_LOCAL_BUFFER (double, Bx, b_nr);
OCTAVE_LOCAL_BUFFER (double, W, b_nr);
OCTAVE_LOCAL_BUFFER (octave_idx_type, Wi, b_nr);
OCTAVE_LOCAL_BUFFER (double, Xx, b_nr);


I see a reduction from about 55 seconds to 40. Does that make sense?

Marco Caliari <caliari>
Group Member
Mon 26 Feb 2018 11:48:47 PM UTC, comment #20: 

@Marco: I made the corresponding change for complex sparse matrices in CSparse.cc (http://hg.savannah.gnu.org/hgweb/octave/rev/d3a79cb734d2).

I think you can go ahead with your plan for using wsolve to improve performance.

Rik <rik5>
Group administrator
Fri 23 Feb 2018 03:50:51 PM UTC, comment #19: 

Marco, allocating the work arrays only once makes sense to me.  I assume the umfpack_*_solve function does that allocation itself, so it happens on every call, correct?

John W. Eaton <jwe>
Group administrator
Fri 23 Feb 2018 03:39:51 PM UTC, comment #18: 

Since b is sparse, with on average 9 nonzero elements per column, I thought that a problematic part was


for (octave_idx_type j = 0; j < b_nc; j++)
  {
    for (octave_idx_type i = 0; i < b_nr; i++)
      Bx[i] = b.elem (i, j);
    status = UMFPACK_DNAME (solve) (UMFPACK_A, Ap,
                                    Ai, Ax, Xx, Bx, Numeric,
                                    control, info);


and replaced it with


for (octave_idx_type i = 0; i < b_nr; i++)
  Bx[i] = 0.0;
for (octave_idx_type j = 0; j < b_nc; j++)
  {
    for (octave_idx_type i = b.cidx(j); i < b.cidx(j+1); i++)
      Bx[b.ridx(i)] = b.data(i);
    status = UMFPACK_DNAME (solve) (UMFPACK_A, Ap,
                                    Ai, Ax, Xx, Bx, Numeric,
                                    control, info);
    for (octave_idx_type i = b.cidx(j); i < b.cidx(j+1); i++)
      Bx[b.ridx(i)] = 0.0;


with no gain at all. Then I read


When you have many linear systems to solve, this routine [umfpack_*_wsolve] is faster than umfpack_*_solve, since the workspace (Wi, W) needs to be allocated only once, prior to calling umfpack_*_wsolve.


and replaced the original code with


for (octave_idx_type j = 0; j < b_nc; j++)
  {
    for (octave_idx_type i = 0; i < b_nr; i++)
      Bx[i] = b.elem (i, j);
    status = UMFPACK_DNAME (wsolve) (UMFPACK_A, Ap,
                                    Ai, Ax, Xx, Bx, Numeric,
                                    control, info, Wi, W);


where


OCTAVE_LOCAL_BUFFER (octave_idx_type, Wi, b_nr);
OCTAVE_LOCAL_BUFFER (double, W, b_nr);


In this way I can reduce the solution of A\b from about 60 seconds to about 40. Any comments before I prepare a patch?
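
To make the pattern above concrete, here is a minimal, standalone sketch: factorize A once, allocate the workspace (Wi, W) once outside the loop, scatter each sparse column of b into a dense vector, call umfpack_di_wsolve, and clear only the entries that were set.  The tiny matrices and the plain C interface (umfpack_di_*) are illustrative assumptions, not the Octave source; the W length of 5*n follows the umfpack_wsolve documentation cited in comment #24 (needed when iterative refinement is enabled).


// Standalone sketch: solve A*X = B column by column with wsolve,
// factorizing A once and reusing the workspace on every call.
#include <cstdio>
#include <vector>
#include <umfpack.h>

int main ()
{
  const int n = 2;
  // A = [2 1; 1 3] in compressed-column form.
  int Ap[] = { 0, 2, 4 };
  int Ai[] = { 0, 1, 0, 1 };
  double Ax[] = { 2.0, 1.0, 1.0, 3.0 };

  // b = [3 1; 4 0], also in compressed-column form, standing in
  // for the sparse right-hand side.
  const int b_nc = 2;
  int Bp[] = { 0, 2, 3 };
  int Bi[] = { 0, 1, 0 };
  double Bdata[] = { 3.0, 4.0, 1.0 };

  double Control[UMFPACK_CONTROL], Info[UMFPACK_INFO];
  umfpack_di_defaults (Control);

  void *Symbolic, *Numeric;
  umfpack_di_symbolic (n, n, Ap, Ai, Ax, &Symbolic, Control, Info);
  umfpack_di_numeric (Ap, Ai, Ax, Symbolic, &Numeric, Control, Info);
  umfpack_di_free_symbolic (&Symbolic);

  // Workspace allocated once: Wi has length n; W needs 5*n while
  // iterative refinement is enabled (n would suffice with
  // Control[UMFPACK_IRSTEP] = 0).
  std::vector<int> Wi (n);
  std::vector<double> W (5 * n);
  std::vector<double> Bx (n, 0.0), Xx (n);

  for (int j = 0; j < b_nc; j++)
    {
      // Scatter the nonzeros of column j of b into the dense Bx.
      for (int i = Bp[j]; i < Bp[j+1]; i++)
        Bx[Bi[i]] = Bdata[i];

      umfpack_di_wsolve (UMFPACK_A, Ap, Ai, Ax, Xx.data (), Bx.data (),
                         Numeric, Control, Info, Wi.data (), W.data ());

      // Clear only the entries that were set, so Bx is zero again.
      for (int i = Bp[j]; i < Bp[j+1]; i++)
        Bx[Bi[i]] = 0.0;

      std::printf ("column %d: x = [%g, %g]\n", j, Xx[0], Xx[1]);
    }

  umfpack_di_free_numeric (&Numeric);
  return 0;
}
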

Marco Caliari <caliari>
Group Member
Wed 21 Feb 2018 11:06:45 AM UTC, comment #17: 

@Rik: I set x_nz to b_nr * b_nc and I see


octave:10> tic,A\b;,toc
Elapsed time is 64.9591 seconds.
octave:11> tic; [L, U, P, Q, R] = lu (A); xlu = Q * ( U \ ( L \ ( P * ( R \ b ) ) ) ); toc;
Elapsed time is 15.2399 seconds.


that is, a factor of 4. The same test in Matlab R2017b gives


>> tic,A\b;,toc
Elapsed time is 58.941977 seconds.
>> tic; [L, U, P, Q, R] = lu (A); xlu = Q * ( U \ ( L \ ( P * ( R \ b ) ) ) ); toc;
Elapsed time is 33.345003 seconds.


Therefore, it is no longer a problem of memory allocation. And there is a difference between solving directly and factorizing first, in Matlab too. What I am not sure about is whether the factorize function in dSparse.cc performs a full [L,U,P,Q,R] factorization or not.

@Rik: you should apply your patch to CSparse.cc, too.

Marco Caliari <caliari>
Group Member
Tue 20 Feb 2018 02:49:33 PM UTC, comment #16: 

The original issue, the reason it was taking 10 hours, was definitely caused by memory allocation problems.

However, I doubt that this explains the ~2.5X speed advantage of the LU approach.  When I had the backslash code instrumented with std::cerr statements I found that the output was resized ~10 times.  In each case it didn't seem to take very long.

One way to check would be to hardcode the matrix size to be the final size of x at the beginning of the algorithm.  The resize code would never be activated and you could compare just algorithm times.

Rik <rik5>
Group administrator
Tue 20 Feb 2018 09:26:31 AM UTC, comment #15: 

@Rik: is it possible to understand whether it is a problem of memory allocation or not? Because I find this idea quite strange:


// Take a first guess that the number of nonzero terms
// will be as many as in b


If you try something like


A = sprand (1000, 1000, rand); x = A \ [1; zeros(999,1)]; nnz (x)


you will see that x is "never" sparse. I would like to investigate more, but I have no time till the end of February.

Marco Caliari <caliari>
Group Member
Mon 19 Feb 2018 08:00:47 PM UTC, comment #14: 

There is probably still an issue about choosing the best algorithm.  The method implemented by '\' still seems to be slower than using LU.


octave:1> linsolve_test
Elapsed time is 97.7086 seconds.
octave:2> lu_test
Elapsed time is 38.6278 seconds.



Rik <rik5>
Group administrator
Mon 19 Feb 2018 07:39:15 PM UTC, comment #13: 

The problem seems to have been an assumption that floating point arithmetic would be used for the expression.  But because all of the operands were integers, there was no need for the compiler to promote anything to double.  When the calculation was performed with integers, the result overflowed octave_idx_type (a signed int), and that produced a negative value for the size.

Here is how I recoded it.


   // Resize the sparse matrix
-  octave_idx_type sz = x_nz * (b_nc - j) / b_nc;
-  sz = (sz > 10 ? sz : 10) + x_nz;
+  octave_idx_type sz;
+  sz = (static_cast<double> (b_nc) - j) / b_nc
+       * x_nz;
+  sz = x_nz + (sz > 100 ? sz : 100);
   retval.change_capacity (sz);


I tried to show more clearly that a fraction of the current number of non-zero elements x_nz is added to x_nz to determine the new reserved storage space.

I only cast one of the operands to a double and then relied on the implicit conversion rules.  However, if it seems clearer, we could also code this using a static_cast on every value that we actually want to be a double, or we could use the double constructor explicitly.

I also changed the default increase in size from 10 elements to 100 elements.  10 double values is ~80 bytes, which is pretty small given most modern machines' installed memory.  Even 100 elements is ~800 bytes, or about 1 kB, which is also pretty small.  As an example, I found that the smallest increase in size using the problem matrix was ~8,000 elements.
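
To make the overflow concrete, here is a small standalone sketch.  The values of x_nz, b_nc, and j are illustrative (chosen to be of the same magnitude as the runtime output in comment #12), and octave_idx_type is modeled as a 32-bit signed int, as in the affected build.


// Standalone sketch of the capacity-computation overflow and the fix.
#include <cstdint>
#include <cstdio>

int main ()
{
  // Illustrative values, roughly the magnitude seen in this report.
  std::int32_t x_nz = 3519011;   // nonzeros stored in x so far
  std::int32_t b_nc = 1000;      // number of right-hand-side columns
  std::int32_t j    = 300;       // current column

  // The true product does not fit in 32 bits ...
  std::int64_t product = static_cast<std::int64_t> (x_nz) * (b_nc - j);
  std::printf ("x_nz * (b_nc - j) = %lld, INT32_MAX = %d\n",
               static_cast<long long> (product), INT32_MAX);

  // ... so the old all-integer expression
  //   octave_idx_type sz = x_nz * (b_nc - j) / b_nc;
  // wraps around and can become negative, and the follow-up
  //   sz = (sz > 10 ? sz : 10) + x_nz;
  // then grows the capacity by only 10 elements per resize.

  // The fix promotes to double before multiplying, as in the
  // committed change above.
  std::int32_t sz = (static_cast<double> (b_nc) - j) / b_nc * x_nz;
  sz = x_nz + (sz > 100 ? sz : 100);
  std::printf ("new capacity guess: %d\n", static_cast<int> (sz));
  return 0;
}
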

I checked this change in on the stable branch here (http://hg.savannah.gnu.org/hgweb/octave/rev/956e28867c80).

Marking as Ready for Test.

Rik <rik5>
Group administrator
Mon 19 Feb 2018 06:19:26 PM UTC, comment #12: 

After some more debugging, there is definitely a problem here. 

I instrumented the code to be


octave_idx_type sz = x_nz * (b_nc - j) / b_nc;
std::cerr << "sz: " << sz << std::endl;
std::cerr << "x_nz: " << x_nz << std::endl;
sz = (sz > 10 ? sz : 10) + x_nz;
std::cerr << "before change_capacity" << std::endl;
retval.change_capacity (sz);
std::cerr << "after change_capacity" << std::endl;
x_nz = sz;
std::cerr << "new sz: " << sz << std::endl;


When running, I see that the size argument goes negative because octave_idx_type is a signed type, but I can't see why we would ever want to change the capacity of the sparse matrix to a negative value.

Runtime results:


resizing sparse matrix
sz: -2031109
x_nz: 3519011
before change_capacity
after change_capacity
new sz: 3519021
sz: -2031102
x_nz: 3519021
before change_capacity
after change_capacity
new sz: 3519031
sz: -2031094
x_nz: 3519031
before change_capacity
after change_capacity
new sz: 3519041
sz: -2031086
x_nz: 3519041


As you can see, the capacity keeps going up by just 10 elements, which means it takes many iterations before the sparse matrix is grown sufficiently in size.


Rik <rik5>
Group administrator
Mon 19 Feb 2018 04:41:22 PM UTC, comment #11: 

I think it is explainable.  jwe has done a lot of work on the symbol table on the development branch.  It is possible that calls like change_capacity() do the equivalent of a malloc, rather than a realloc, and therefore Octave starts to run out of memory.

I'll do a little more debugging to see what is actually taking the time.

Rik <rik5>
Group administrator
Mon 19 Feb 2018 08:35:36 AM UTC, comment #10: 

Strange.

1) This part of the code is the same in the stable and default branches, and thus comment #2 remains unexplained.

2) In the same file there are other matrix-resizing sections, with a slightly different computation of sz. This may partially explain comment #4.

Marco Caliari <caliari>
Group Member
Sun 18 Feb 2018 05:51:00 AM UTC, comment #9: 

I put in some debug statements, and the slowdown is in this code, which resizes the matrix:


                  std::cerr << "resizing sparse matrix" << std::endl;
                  for (octave_idx_type i = 0; i < b_nr; i++)
                    {
                      double tmp = Xx[i];
                      if (tmp != 0.0)
                        {
                          if (ii == x_nz)
                            {
                              // Resize the sparse matrix
                              octave_idx_type sz = x_nz * (b_nc - j) / b_nc;
                              sz = (sz > 10 ? sz : 10) + x_nz;
                              retval.change_capacity (sz);
                              x_nz = sz;
                            }
                          retval.xdata (ii) = tmp;
                          retval.xridx (ii++) = i;
                        }
                    }
                  std::cerr << "done resizing sparse matrix" << std::endl;




Rik <rik5>
Group administrator
Fri 16 Feb 2018 07:31:36 AM UTC, comment #8: 

@Rik: to_suitesparse_intptr does not exist in stable. What happens in previous versions of Octave?

Marco Caliari <caliari>
Group Member
Fri 16 Feb 2018 12:57:26 AM UTC, comment #7: 

Not much difference.  What happens if you backport just this one function?  Does stable then work?

Rik <rik5>
Group administrator
Thu 15 Feb 2018 08:53:16 AM UTC, comment #6: 

If my previous consideration is valid, this is the difference between SparseMatrix::fsolve on the stable and default branches:


2c2
< SparseMatrix::fsolve (MatrixType& mattype, const SparseMatrix& b,
---
> SparseMatrix::fsolve (MatrixType &mattype, const SparseMatrix& b,
39c39
<               SUITESPARSE_ASSIGN_FPTR (printf_func, cm->print_function, nullptr);
---
>               SUITESPARSE_ASSIGN_FPTR (printf_func, cm->print_function, 0);
64c64
<           A->nz = nullptr;
---
>           A->nz = 0;
75c75
<           if (A->x == nullptr)
---
>           if (A->x == 0)
87c87
<           B->nz = nullptr;
---
>           B->nz = 0;
98c98
<           if (B->x == nullptr)
---
>           if (B->x == 0)
158,159c158,159
<               static char blank_name[] = " ";
<               CHOLMOD_NAME(print_common) (blank_name, cm);
---
>               static char tmp[] = " ";
>               CHOLMOD_NAME(print_common) (tmp, cm);
207,210c207,208
<                   status = UMFPACK_DNAME (solve) (UMFPACK_A,
<                                                   octave::to_suitesparse_intptr (Ap),
<                                                   octave::to_suitesparse_intptr (Ai),
<                                                   Ax, Xx, Bx, Numeric,
---
>                   status = UMFPACK_DNAME (solve) (UMFPACK_A, Ap,
>                                                   Ai, Ax, Xx, Bx, Numeric,


Marco Caliari <caliari>
Group Member
Wed 14 Feb 2018 03:16:12 PM UTC, comment #5: 

The problem seems to be the sparsity of b. If I try A \ full (b), then it works in 4.2.1. So, the difference must lie between the Matrix and SparseMatrix overloads of SparseMatrix::fsolve.

Marco Caliari <caliari>
Group Member
Tue 13 Feb 2018 10:11:10 PM UTC, comment #4: 

I changed the test script to be


load linsolve_test.mat;

disp ('leftdiv');
tic; x = A \ b; toc;
disp ('lu factorization');
tic; [L, U, P, Q, R] = lu (A); toc;
disp ('lu solve');
tic; xlu = Q * ( U \ ( L \ ( P * ( R \ b ) ) ) ); toc;


On my system, the two results are nearly identical (magnitude of max difference is around 1e-20).  X has 3 more nonzero entries than XLU, but their absolute values are all around 1e-22.

Performance is significantly different.  About 75 seconds for the leftdiv operator and 20 seconds combined for the lu factorization and solve steps.  Again, if someone is interested in digging into this, the place to start is SparseMatrix::fsolve.

John W. Eaton <jwe>
Group administrator
Tue 13 Feb 2018 09:09:33 PM UTC, comment #3: 

If someone would like to understand this problem, the place to start is probably the xleftdiv function in libinterp/corefcn/sparse-xdiv.cc.  From there, verify that the arguments to the SparseMatrix::solve function are the same in both 4.2.1 and dev.  You should then end up in the SparseMatrix::fsolve function that takes a SparseMatrix B array.  As far as I can tell, there are no significant changes in that function between 4.2.1 and dev.

If the inputs to SparseMatrix::fsolve are the same for both versions, then why does the 4.2.1 version take so much longer than the current one in the dev sources?  If the inputs are different, then you'll have to find out why.

John W. Eaton <jwe>
Group administrator
Tue 13 Feb 2018 06:32:30 PM UTC, comment #2: 

Confirmed.  This seems to be an issue with the 4.2.1 release.  The development branch that will become 4.4.0 works fine.  I tested on a Linux system, which shows this is unrelated to the operating system.

I suspect that rather than trying to decipher the cause it may be easier to simply release a new version of Octave, which is something the maintainers would like to do anyway.

Rik <rik5>
Group administrator
Tue 13 Feb 2018 04:53:22 PM UTC, comment #1: 

I just tested the matrix with the development version of Octave on Linux:


octave:3> lu_test
Elapsed time is 41.4186 seconds.
octave:4> linsolve_test
Elapsed time is 91.4311 seconds.


The lu_test script uses your second code.  Clearly this is a big matrix, but I don't see anything like 10 hours.

I'll see if I can start a virtual machine running Windows XP and check the result there.

Rik <rik5>
Group administrator
Tue 13 Feb 2018 10:48:10 AM UTC, original submission:  

The execution of the following code takes ten hours or more.

clear all;
close all;

load linsolve_test.mat;

tic;
x = A \ b;
toc;

However, the LU decomposition of A takes only a few seconds:

tic;
[L, U, P, Q, R] = lu (A);
x = Q * ( U \ ( L \ ( P * ( R \ b ) ) ) );
toc;

Anonymous

 

Attached Files
file #43729:  sparse_linear_systems2.diff added by caliari (40KiB - application/x-tex)
file #43627:  sparse_linear_systems.diff added by caliari (54KiB - application/x-tex)
file #43422:  iterref.diff added by caliari (4KiB - application/x-tex)
file #43269:  linsolve_test.7z added by None (1MiB - application/octet-stream)

 


    13 latest changes:

    Date        Changed by  Updated Field      Previous Value     =>  Replaced by
    2018-04-04  caliari     Status             Patch Submitted    =>  Fixed
                            Open/Closed        Open               =>  Closed
    2018-03-30  caliari     Attached File      -                  =>  Added sparse_linear_systems2.diff, #43729
    2018-03-22  caliari     Attached File      -                  =>  Added sparse_linear_systems.diff, #43627
                            Status             In Progress        =>  Patch Submitted
    2018-03-22  rik5        Release            4.2.1              =>  other
    2018-03-01  caliari     Attached File      -                  =>  Added iterref.diff, #43422
    2018-02-26  rik5        Status             Ready For Test     =>  In Progress
    2018-02-19  rik5        Status             Confirmed          =>  Ready For Test
    2018-02-13  rik5        Priority           5 - Normal         =>  4
    2018-02-13  rik5        Status             None               =>  Confirmed
                            Operating System   Microsoft Windows  =>  Any
    2018-02-13  None        Attached File      -                  =>  Added linsolve_test.7z, #43269
