bug #63281: bsxfun fails to preserve sparse output for some functions

Submitter:  Nicholas Jankowski <nrjank>
Submitted:  Thu 27 Oct 2022 06:16:10 PM UTC
   
 
Category:  Octave Function
Severity:  2 - Minor
Priority:  5 - Normal
Item Group:  Incorrect Result
Status:  None
Assigned to:  None
Originator Name:  Nicholas Jankowski
Open/Closed:  Open
Release:  dev
Operating System:  Any
Fixed Release:  None
Planned Release:  None


Mon 31 Oct 2022 06:33:41 PM UTC, comment #19: 

Agreed.  Noting, however, that the diagonal and sparse broadcasting bug reports date back to 2012 and 2014, I was thinking the priority for now might lean toward improving the workarounds.

Nicholas Jankowski <nrjank>
Group Member
Mon 31 Oct 2022 05:24:34 PM UTC, comment #18: 

It seems better to me to fix the real problem instead of inserting workarounds in specific functions.  Once we start doing that, it seems we are just creating more work in the future to undo all the special fixes when the real problem is solved.

John W. Eaton <jwe>
Group administrator
Mon 31 Oct 2022 04:43:09 AM UTC, comment #17: 

Oops, that if line should just be:

if (any (strcmp (typeinfo (x), {"sparse matrix", "diagonal matrix"})))
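
For reference, folding that correction back into the snippet from comment #16 gives something like the following (a sketch only, not exercised against every input type):

try
  y = x - mean (x, dim);   # automatic broadcasting
catch err
  if (any (strcmp (typeinfo (x), {"sparse matrix", "diagonal matrix"})))
    ## bsxfun (@minus, ...) would be faster here but loses sparseness
    y = bsxfun (@(a, b) a - b, x, mean (x, dim));  # preserves sparseness
  else
    rethrow (err);
  endif
end_try_catch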


Nicholas Jankowski <nrjank>
Group Member
Mon 31 Oct 2022 04:40:15 AM UTC, comment #16: 

In the interest of restoring some of the performance gained by avoiding bsxfun, and maybe preserving sparseness:

Looking, for example, at center.m, instead of the current:

## FIXME: Use bsxfun, rather than broadcasting, until broadcasting
##        supports diagonal and sparse matrices (Bugs #41441, #35787).
y = bsxfun (@minus, x, mean (x, dim));
## y = x - mean (x, dim);   # automatic broadcasting


Would something like this be acceptable?


try
  y = x - mean (x, dim);   # automatic broadcasting
catch err
  if any (strcmp (typeinfo (sparse (x)), ...
                  {"sparse matrix", "diagonal matrix"}))
    ##  y = bsxfun (@minus, x, mean (x, dim)); faster, but loses sparseness
    y = bsxfun(@(x,y) x-y, x, mean(x, dim)); # preserves sparseness
  else
    rethrow(err);
  endif
end_try_catch


Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 07:48:11 PM UTC, comment #15: 

Since Matlab doesn't have diagonal (or permutation) matrices, and in the places where Octave creates them Matlab would create full matrices instead, I think the best thing to do would be to make all operations with them behave as if they were full.

Also, given our limited developer resources, I wouldn't object to deprecating and removing them from Octave.

John W. Eaton <jwe>
Group administrator
Thu 27 Oct 2022 07:36:01 PM UTC, comment #14: 

I guess strcmp (typeinfo (A), "diagonal matrix")?
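
Something like the following would exercise that check (a sketch; the "->" comments show what Octave is expected to print, not captured output):

A = eye (3);                                # eye returns a diagonal matrix object
typeinfo (A)                                # -> "diagonal matrix"
isdiag (A)                                  # -> 1, but this only inspects the values
strcmp (typeinfo (A), "diagonal matrix")    # -> 1, this tests the actual storage type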

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 07:29:39 PM UTC, comment #13: 

Maybe a side topic, but the bsxfun in center was also implemented to catch the same issue with Octave's diagonal matrix type.  Is there a test like issparse that can be used on a diagonal matrix?  isdiag doesn't actually test the matrix type.  Not a compatibility concern, but if we're going to capture sparseness preservation, can diagonal type preservation be tested for too?

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 07:15:13 PM UTC, comment #12: 

Like for the isfinite vs. isinf and isnan case from bug #63277, it seems that the decision was made at some point long ago that operations which will surely cause the matrix to fill in should return full matrix objects and the others not.

Separate from broadcasting, is Octave compatible with Matlab when performing sparse/full and full/sparse binary operations?  If so, do we have tests to ensure that we don't accidentally break that compatibility?  If not, I guess having tests would be a good first step.
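
A rough sketch of the kind of check that could later be turned into tests, once the Matlab reference results are agreed on (the operator list and output format here are only illustrative):

A = sprand (5, 5, 0.2);
ops = {@plus, @minus, @times, @rdivide, @and, @or};
for k = 1:numel (ops)
  f = ops{k};
  printf ("%-10s sparse OP full: %d   sparse OP sparse: %d\n", func2str (f), ...
          issparse (f (A, ones (5))), issparse (f (A, sparse (ones (5)))));
endfor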

John W. Eaton <jwe>
Group administrator
Thu 27 Oct 2022 07:08:44 PM UTC, comment #11: 

Oh wait, the Ar vs Asr results answer the 'is it different for a scalar' question.  But also:



Af = full(A);
AA = issparse(A*A)
AAf = issparse(A*Af)
AA =
  logical
   1
AAf =
  logical
   0
>> AA = issparse(A.*A)
AAf = issparse(A.*Af)
AA =
  logical
   1
AAf =
  logical
   1


Not really an expansion check, but another sparse 'consistency' data point.

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 07:01:14 PM UTC, comment #10: 

& (and) acts like multiplication, | (or) acts like addition...

&

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A & x)
sx = sparse (x);
Asx=issparse(A & sx)
r = 1:10;
Ar=issparse(A & r)
sr = sparse (r);
Asr=issparse(A & sr)
Ax =
  logical
   1
Asx =
  logical
   1
Ar =
  logical
   1
Asr =
  logical
   1


|

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A | x)
sx = sparse (x);
Asx=issparse(A | sx)
r = 1:10;
Ar=issparse(A | r)
sr = sparse (r);
Asr=issparse(A | sr)
Ax =
  logical
   0
Asx =
  logical
   1
Ar =
  logical
   0
Asr =
  logical
   1



I think the Ax vs Asx results answer your second question, but maybe 'adding a scalar' matters.  Will try a few of those next.

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:52:32 PM UTC, comment #9: 

Sorry, I could have used something smaller than 10x10...

Anyway, what happens with & instead of +?  Is it the same with some results full and some sparse depending on whether both arguments are sparse or if one is full?  What about | vs. &, do they behave the same?

John W. Eaton <jwe>
Group administrator
Thu 27 Oct 2022 06:50:27 PM UTC, comment #8: 

plus:

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A + x)
sx = sparse (x);
Asx=issparse(A + sx)
r = 1:10;
Ar=issparse(A + r)
sr = sparse (r);
Asr=issparse(A + sr)
Ax =
  logical
   0
Asx =
  logical
   1
Ar =
  logical
   0
Asr =
  logical
   1


minus:

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A - x)
sx = sparse (x);
Asx=issparse(A - sx)
r = 1:10;
Ar=issparse(A - r)
sr = sparse (r);
Asr=issparse(A - sr)
Ax =
  logical
   0
Asx =
  logical
   1
Ar =
  logical
   0
Asr =
  logical
   1


times:

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A .* x)
sx = sparse (x);
Asx=issparse(A .* sx)
r = 1:10;
Ar=issparse(A .* r)
sr = sparse (r);
Asr=issparse(A .* sr)
Ax =
  logical
   1
Asx =
  logical
   1
Ar =
  logical
   1
Asr =
  logical
   1



divide:

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(A ./ x)
sx = sparse (x);
Asx=issparse(A ./ sx)
r = 1:10;
Ar=issparse(A ./ r)
sr = sparse (r);
Asr=issparse(A ./ sr)
Ax =
  logical
   1
Asx =
  logical
   1
Ar =
  logical
   1
Asr =
  logical
   1


and:

A = sprand (10, 10, 0.1);
x = 1;
Ax=issparse(and(A,x))
sx = sparse (x);
Asx=issparse(and(A,sx))
r = 1:10;
Ar=issparse(and(A,r))
sr = sparse (r);
Asr=issparse(and(A,sr))
Ax =
  logical
   1
Asx =
  logical
   1
Ar =
  logical
   1
Asr =
  logical
   1


Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:45:59 PM UTC, comment #7: 

Playing a bit more, the results below are consistent for minus, but not for .*.

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:42:12 PM UTC, comment #6: 

r2022b

>> A = sprand (10, 10, 0.1)

A =

   (2,1)       0.6551
   (1,2)       0.7547
   (1,4)       0.2760
   (7,5)       0.4984
   (8,5)       0.9597
   (9,5)       0.3404
   (4,7)       0.1190
   (1,8)       0.6797
   (3,8)       0.1626
  (10,8)       0.5853

>> x = 1

x =

     1

>> A + x

ans =

  Columns 1 through 9

    1.0000    1.7547    1.0000    1.2760    1.0000    1.0000    1.0000    1.6797    1.0000
    1.6551    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.1626    1.0000
    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.1190    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.4984    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.9597    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.3404    1.0000    1.0000    1.0000    1.0000
    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.5853    1.0000

  Column 10

    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000

>> sx = sparse (x)

sx =

   (1,1)        1

>> A + sx

ans =

   (1,1)       1.0000
   (2,1)       1.6551
   (3,1)       1.0000
   (4,1)       1.0000
   (5,1)       1.0000
   (6,1)       1.0000
   (7,1)       1.0000
   (8,1)       1.0000
   (9,1)       1.0000
  (10,1)       1.0000
   (1,2)       1.7547
   (2,2)       1.0000
   (3,2)       1.0000
   (4,2)       1.0000
   (5,2)       1.0000
   (6,2)       1.0000
   (7,2)       1.0000
   (8,2)       1.0000
   (9,2)       1.0000
  (10,2)       1.0000
   (1,3)       1.0000
   (2,3)       1.0000
   (3,3)       1.0000
   (4,3)       1.0000
   (5,3)       1.0000
   (6,3)       1.0000
   (7,3)       1.0000
   (8,3)       1.0000
   (9,3)       1.0000
  (10,3)       1.0000
   (1,4)       1.2760
   (2,4)       1.0000
   (3,4)       1.0000
   (4,4)       1.0000
   (5,4)       1.0000
   (6,4)       1.0000
   (7,4)       1.0000
   (8,4)       1.0000
   (9,4)       1.0000
  (10,4)       1.0000
   (1,5)       1.0000
   (2,5)       1.0000
   (3,5)       1.0000
   (4,5)       1.0000
   (5,5)       1.0000
   (6,5)       1.0000
   (7,5)       1.4984
   (8,5)       1.9597
   (9,5)       1.3404
  (10,5)       1.0000
   (1,6)       1.0000
   (2,6)       1.0000
   (3,6)       1.0000
   (4,6)       1.0000
   (5,6)       1.0000
   (6,6)       1.0000
   (7,6)       1.0000
   (8,6)       1.0000
   (9,6)       1.0000
  (10,6)       1.0000
   (1,7)       1.0000
   (2,7)       1.0000
   (3,7)       1.0000
   (4,7)       1.1190
   (5,7)       1.0000
   (6,7)       1.0000
   (7,7)       1.0000
   (8,7)       1.0000
   (9,7)       1.0000
  (10,7)       1.0000
   (1,8)       1.6797
   (2,8)       1.0000
   (3,8)       1.1626
   (4,8)       1.0000
   (5,8)       1.0000
   (6,8)       1.0000
   (7,8)       1.0000
   (8,8)       1.0000
   (9,8)       1.0000
  (10,8)       1.5853
   (1,9)       1.0000
   (2,9)       1.0000
   (3,9)       1.0000
   (4,9)       1.0000
   (5,9)       1.0000
   (6,9)       1.0000
   (7,9)       1.0000
   (8,9)       1.0000
   (9,9)       1.0000
  (10,9)       1.0000
   (1,10)      1.0000
   (2,10)      1.0000
   (3,10)      1.0000
   (4,10)      1.0000
   (5,10)      1.0000
   (6,10)      1.0000
   (7,10)      1.0000
   (8,10)      1.0000
   (9,10)      1.0000
  (10,10)      1.0000

>> r = 1:10

r =

     1     2     3     4     5     6     7     8     9    10

>> A + r

ans =

  Columns 1 through 9

    1.0000    2.7547    3.0000    4.2760    5.0000    6.0000    7.0000    8.6797    9.0000
    1.6551    2.0000    3.0000    4.0000    5.0000    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.0000    6.0000    7.0000    8.1626    9.0000
    1.0000    2.0000    3.0000    4.0000    5.0000    6.0000    7.1190    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.0000    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.0000    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.4984    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.9597    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.3404    6.0000    7.0000    8.0000    9.0000
    1.0000    2.0000    3.0000    4.0000    5.0000    6.0000    7.0000    8.5853    9.0000

  Column 10

   10.0000
   10.0000
   10.0000
   10.0000
   10.0000
   10.0000
   10.0000
   10.0000
   10.0000
   10.0000

>> sr = sparse (r)

sr =

   (1,1)        1
   (1,2)        2
   (1,3)        3
   (1,4)        4
   (1,5)        5
   (1,6)        6
   (1,7)        7
   (1,8)        8
   (1,9)        9
   (1,10)      10

>> A + sr

ans =

   (1,1)       1.0000
   (2,1)       1.6551
   (3,1)       1.0000
   (4,1)       1.0000
   (5,1)       1.0000
   (6,1)       1.0000
   (7,1)       1.0000
   (8,1)       1.0000
   (9,1)       1.0000
  (10,1)       1.0000
   (1,2)       2.7547
   (2,2)       2.0000
   (3,2)       2.0000
   (4,2)       2.0000
   (5,2)       2.0000
   (6,2)       2.0000
   (7,2)       2.0000
   (8,2)       2.0000
   (9,2)       2.0000
  (10,2)       2.0000
   (1,3)       3.0000
   (2,3)       3.0000
   (3,3)       3.0000
   (4,3)       3.0000
   (5,3)       3.0000
   (6,3)       3.0000
   (7,3)       3.0000
   (8,3)       3.0000
   (9,3)       3.0000
  (10,3)       3.0000
   (1,4)       4.2760
   (2,4)       4.0000
   (3,4)       4.0000
   (4,4)       4.0000
   (5,4)       4.0000
   (6,4)       4.0000
   (7,4)       4.0000
   (8,4)       4.0000
   (9,4)       4.0000
  (10,4)       4.0000
   (1,5)       5.0000
   (2,5)       5.0000
   (3,5)       5.0000
   (4,5)       5.0000
   (5,5)       5.0000
   (6,5)       5.0000
   (7,5)       5.4984
   (8,5)       5.9597
   (9,5)       5.3404
  (10,5)       5.0000
   (1,6)       6.0000
   (2,6)       6.0000
   (3,6)       6.0000
   (4,6)       6.0000
   (5,6)       6.0000
   (6,6)       6.0000
   (7,6)       6.0000
   (8,6)       6.0000
   (9,6)       6.0000
  (10,6)       6.0000
   (1,7)       7.0000
   (2,7)       7.0000
   (3,7)       7.0000
   (4,7)       7.1190
   (5,7)       7.0000
   (6,7)       7.0000
   (7,7)       7.0000
   (8,7)       7.0000
   (9,7)       7.0000
  (10,7)       7.0000
   (1,8)       8.6797
   (2,8)       8.0000
   (3,8)       8.1626
   (4,8)       8.0000
   (5,8)       8.0000
   (6,8)       8.0000
   (7,8)       8.0000
   (8,8)       8.0000
   (9,8)       8.0000
  (10,8)       8.5853
   (1,9)       9.0000
   (2,9)       9.0000
   (3,9)       9.0000
   (4,9)       9.0000
   (5,9)       9.0000
   (6,9)       9.0000
   (7,9)       9.0000
   (8,9)       9.0000
   (9,9)       9.0000
  (10,9)       9.0000
   (1,10)     10.0000
   (2,10)     10.0000
   (3,10)     10.0000
   (4,10)     10.0000
   (5,10)     10.0000
   (6,10)     10.0000
   (7,10)     10.0000
   (8,10)     10.0000
   (9,10)     10.0000
  (10,10)     10.0000


Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:32:15 PM UTC, comment #5: 

What does Matlab do for the following expressions?


A = sprand (10, 10, 0.1)
x = 1
A + x
sx = sparse (x)
A + sx
r = 1:10
A + r
sr = sparse (r)
A + sr


Are the results all sparse?  Is Matlab consistent with regard to all operators?

John W. Eaton <jwe>
Group administrator
Thu 27 Oct 2022 06:31:35 PM UTC, comment #4: 

*whereas for @and it resulted in sparse

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:31:06 PM UTC, comment #3: 

But even in the first example, with @plus, having a scalar or equal element counts resulted in full, whereas for @and it resulted in scalar.  So if it's bypassing the expansion code, it's still changing something that results in full instead of sparse for one function but not the other.

Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:27:02 PM UTC, comment #2: 

OK, was trying to create minimal tests.  Changing a couple to require broadcasting:

plus:

>> bsxfun(@plus, sparse([0 1 0; 1 0 1]), sparse([1 0 1]))
ans =
   1   1   1
   2   0   2

bsxfun(@and, sparse([0 1 0; 1 0 1]), sparse([1 0 1]))
ans =
Compressed Column Sparse (rows = 2, cols = 3, nnz = 2 [33%])

  (2, 1) -> 1
  (2, 3) -> 1

>> bsxfun(@(x,y) x+y, sparse([0 1 0; 1 0 1]), sparse([1 0 1]))
ans =
Compressed Column Sparse (rows = 2, cols = 3, nnz = 5 [83%])

  (1, 1) -> 1
  (2, 1) -> 2
  (1, 2) -> 1
  (1, 3) -> 1
  (2, 3) -> 2


Nicholas Jankowski <nrjank>
Group Member
Thu 27 Oct 2022 06:23:24 PM UTC, comment #1: 

It's not really broadcasting if one argument is a scalar.  Or at least it is handled differently; it has always been a feature of Octave and Matlab to allow matrix OP scalar or scalar OP matrix.

If the dimensions agree exactly, then it is just an ordinary element-by-element operation.

The special broadcasting code is only invoked if there is a dimension mismatch and the dimensions are such that broadcasting makes sense.
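
A quick illustration of those three cases (a sketch with full matrices, since the point is when the broadcasting machinery runs, not sparseness):

A = magic (3);
B1 = A + 2;           # matrix OP scalar: scalar path, no broadcasting machinery
B2 = A + ones (3);    # dimensions agree exactly: ordinary element-by-element op
B3 = A + [1 2 3];     # dimension mismatch that makes sense: broadcasting code runs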

John W. Eaton <jwe>
Group administrator
Thu 27 Oct 2022 06:16:10 PM UTC, original submission:  

Noticed in center.m (from bug #51249) that bsxfun was used to prevent sparse inputs from triggering nonconformance errors from automatic broadcasting (see bug #41441).  It appears that change results in center.m always outputting a full matrix for sparse inputs, because bsxfun uses @minus.  It seems that whether or not bsxfun preserves sparseness depends on the underlying operation, but I'm not sure how or why.  Checking with other functions:

plus:

>> bsxfun(@plus, sparse([0 1 0 1 0 1]), 1)
ans =
   1   2   1   2   1   2

>> bsxfun(@plus, sparse([0 1 0 1 0 1]), sparse(1))
ans =
   1   2   1   2   1   2

>> bsxfun(@plus, sparse([0 1 0 1 0 1]), sparse([0 1 0 1 0 1]))
ans =
   0   2   0   2   0   2

>> plus(sparse([0 1 0 1 0 1]), sparse([0 1 0 1 0 1]))
ans =
Compressed Column Sparse (rows = 1, cols = 6, nnz = 3 [50%])

  (1, 2) -> 2
  (1, 4) -> 2
  (1, 6) -> 2

>> plus(sparse([0 1 0 1 0 1]), sparse([1]))
ans =
Compressed Column Sparse (rows = 1, cols = 6, nnz = 6 [100%])

  (1, 1) -> 1
  (1, 2) -> 2
  (1, 3) -> 1
  (1, 4) -> 2
  (1, 5) -> 1
  (1, 6) -> 2


But an anonymous addition function seems fine:


>> bsxfun(@(x,y) x+y, sparse([0 1 0 1 0 1]), sparse([0 1 0 1 0 1]))
ans =
Compressed Column Sparse (rows = 1, cols = 6, nnz = 3 [50%])

  (1, 2) -> 2
  (1, 4) -> 2
  (1, 6) -> 2



While AND seems to have no issues:

bsxfun(@and,sparse([0 1 0]), 1)
ans =

Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1

>> bsxfun(@and,sparse([0 1 0]), [1 1 1])
ans =
Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1

>> bsxfun(@and,sparse([0 1 0]), sparse([1 1 1]))
ans =
Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1

>> and(sparse([0 1 0]),1)
ans =
Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1

>> and(sparse([0 1 0]),sparse(1))
ans =
Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1

>> and(sparse([0 1 0]),sparse([1 1 1]))
ans =
Compressed Column Sparse (rows = 1, cols = 3, nnz = 1 [33%])

  (1, 2) -> 1


I ran into this specifically with @minus and @rdivide, and realize there are thousands of potential functions that could behave either way.  I recognize that automatic broadcasting and sparse/diag arrays are a known issue, but does anyone familiar with bsxfun have an idea of why it's so inconsistent?
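
For completeness, the pattern that triggered this report can be reproduced with something like the following (a sketch; the full/sparse comments reflect the behavior described above, not freshly captured output):

bsxfun (@minus, sparse ([0 1 0 1]), sparse (1))          # reportedly comes back full
bsxfun (@rdivide, sparse ([0 1 0 1]), sparse (2))        # reportedly comes back full
bsxfun (@(a, b) a - b, sparse ([0 1 0 1]), sparse (1))   # stays sparse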

Nicholas Jankowski <nrjank>
Group Member

 
