Mon 08 Feb 2016 04:02:14 AM UTC, comment #18:
Following jwe's recommendation in comment #14, I'm marking this as "won't fix". However, I'm not closing any reports while I'm such a neophyte.
|
Thu 17 Sep 2015 09:02:52 AM UTC, comment #17:
Thanks Jesse.
I think that means this bug can be closed.
|
Wed 30 Apr 2014 02:22:18 AM UTC, comment #16:
For the original example, Z = null(A); Z(3) gives -9.3869e-14 with Matlab 7.14.0.739 (R2012a).
|
Fri 03 Jan 2014 11:23:33 PM UTC, comment #15:
Probably Jordy was meant, not me (and currently I have no Matlab access), but a comment: you are right that the numerical problem is in principle similar to guessing, given sin(x), whether x may be a multiple of pi. But while I see no use for the latter, there can be a special interest in guessing whether some elements of the basis of the null space of a matrix could be zero, since their being zero can mean that certain elements of certain solutions involving this matrix are defined (I ran into this problem when computing the defined elements of the covariance matrix of parameters after optimization). And the previous code did already apply a tolerance for exactly this purpose (to guess whether elements could be zero), only the wrong tolerance.
Maybe I can check with Matlab in the next few days; I'll see.
|
Fri 03 Jan 2014 04:02:40 PM UTC, comment #14:
Ping?
Could you check to see what Matlab does for the original example? If it does not produce 0 for the third element, then I propose we don't attempt to fix this problem in Octave's null function. This case with null seems to me to be similar to expecting sin(pi) to produce exactly 0.
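For illustration of the analogy (the value below is just the standard IEEE double-precision result, not output copied from any particular Octave version):
sin (pi)   # gives about 1.2246e-16, small but not exactly 0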
|
Wed 15 Jun 2011 01:19:05 PM UTC, comment #13:
Olaf,
Give me a little time to understand all of this. I will probably apply your patch on the default branch once I finish reviewing it.
Thanks
|
Tue 14 Jun 2011 12:54:52 PM UTC, comment #12:
Found a way to also handle the case of more columns than rows. Also, restricted the zeroing of elements to error angles <= 0.001, since much larger error angles can occur, and it would be silly to use those for setting elements to zero.
New patch.
Since these patches keep evolving, please regard them as preliminary suggestions and don't apply them (if at all) without checking back with me, so I don't have to hurry to post each new thought.
Note that although the vectorization also computes some unneeded elements, the time for this becomes negligible compared with the time for svd() for larger matrices.
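To illustrate the idea only (this is not the attached patch; the error angle and vector values below are made up): if an error-bound angle "ang" is available for a computed unit basis vector v, elements no larger than the possible rotation cannot be distinguished from zero, and zeroing is only attempted when ang itself is small:
ang = 1e-4;                      # hypothetical error-bound angle in radians
v   = [0.8944; -0.4472; 3e-15];  # hypothetical computed basis vector
if (ang <= 1e-3)                 # only trust sufficiently small error angles
  v(abs (v) <= sin (ang)) = 0;   # elements within the angular error become zero
endif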
(file #23524)
|
Tue 14 Jun 2011 07:51:11 AM UTC, comment #11:
I missed a further simplification. Sorry. Made a new patch.
(file #23523)
|
Tue 14 Jun 2011 06:34:38 AM UTC, comment #10:
Sorry, I hit the commit button before attaching. Trying again.
(file #23522)
|
Tue 14 Jun 2011 06:33:15 AM UTC, comment #9:
Because of the numerical difficulties mentioned, I attach a revised patch with a more approximate, but numerically feasible, solution. See the comments in the patch in null.m. Also, the case of more columns than rows is now excluded and handled as in the original file.
|
Mon 13 Jun 2011 08:03:08 PM UTC, comment #8:
Sorry, in the last sentence I meant the square root of eps, of course.
|
Mon 13 Jun 2011 07:48:29 PM UTC, comment #7:
The point was to show that the numerical error of "eps" assumed by null() when setting basis elements to zero is too low, i.e. the real numerical error is higher. To show this, I had, among other things, to prove that the exact result would be zero (so that the deviation from zero is numerical error). I also gave an example of such a deviation, as an argument for why the tolerance of null() should be set to something else.
Letting the user handle numerical error is only an option if the error bound is indeed fixed; otherwise, the user would have to repeat the singular value decomposition. Even if it were fixed, the user would have to know the error bound; if the programmer knows it, it should already be taken into account in the function, IMO.
It turned out that LAPACK itself documents the error bound (http://www.netlib.org/lapack/lug/node96.html), and that the error bound is indeed not fixed but must be computed from the result of the singular value decomposition. I attach a corresponding patch (against current unstable). It defines an interface function to LAPACK's SDISNA and DDISNA and uses this function in null() to compute the error bound for the angle between the computed basis vectors and the exact basis vectors. From this angle, the elements that can be set to zero are derived.
There is still some inaccuracy in the way these bounds are taken into account: the influence of removing some small elements on the angle is so small that it is just computed to be zero. But one can conclude that the effectively used error bound is of the order of magnitude of the square of eps (i.e. much larger than before, which was just eps).
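For readers without the patch, the bound documented on that page can be sketched in plain Octave roughly as follows. This only mirrors what xDISNA computes (the gaps between singular values); it is my illustration, not the code of the patch, it omits the modestly growing factor p(m,n) from the LAPACK formula, and it assumes A has at least two singular values.
[U, S, V] = svd (A);
sigma = diag (S);                           # computed singular values
n = numel (sigma);
gap = zeros (n, 1);
for i = 1:n
  others = sigma([1:i-1, i+1:n]);
  gap(i) = min (abs (others - sigma(i)));   # separation of sigma(i) from the rest
endfor
ang_bound = eps * sigma(1) ./ gap;          # approximate bound on the angle between
                                            # computed and exact singular vectors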
(file #23521)
|
Fri 10 Jun 2011 10:09:20 PM UTC, comment #6:
Olaf,
0.3 - 0.2 - 0.1 should also be zero, but it's not. This is not a bug nor does this need fixing.
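For the record, the standard IEEE double-precision result:
0.3 - 0.2 - 0.1   # gives about -2.7756e-17, not 0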
I was expecting a discussion of why the tolerance should by default be set to something else and what this tolerance should be. A discussion proving that something is exactly zero while disregarding numerical error is not relevant to this problem.
I also do not think that the tolerance should be a fixed value, since obviously if the dimensionality of the problem is large, the entries in null(A) get smaller, so the tolerance should depend on this dimensionality (and perhaps the maximum value in each column). I would also like to understand why null by default should use a special tolerance instead of letting the user handle numerical error herself.
Thanks.
|
Fri 10 Jun 2011 10:22:39 AM UTC, comment #5:
(Tried the nomarkup tag, hope the text will be readable ...)
Here is the requested explanation.
For assessing the numeric examples, consider a 3-by-m matrix A = [A1; A2] that has rank 2, while the 2-by-m submatrix A2 has only rank 1. If A.'*x == A1.'*x(1) + A2.'*x(2:3) == y, then, since rank(A2) < 2, different linear combinations of the rows of A2 could yield the same result; but since including the single row A1 in A increases the rank by 1, no linear combination of the rows of A2 can equal a nonzero multiple of A1. So x(1) is uniquely determined (and x(2:3) is not), and therefore all basis vectors of null(A.') should have their first element equal to zero.
Since the numeric problems seem to get worse in the following, I would also like to show that, if A*A.'*x == b, then x(1) is uniquely determined (and so the first elements of all basis vectors returned by null(A*A.') should be zero). I'll base this on optimization theory:
In the linear model function y = A.'*x, x(1) is uniquely determined by y as shown above. Now let y = A.'*x be the (unique) y that minimizes sumsq(z - y) for a given z. Then x also satisfies A*A.'*x == A*z. This completes the proof (I hope without mistakes ...).
As a numeric example for A, take
octave:1> A_correct = [log(1:10); 1:10; 2*(1:10)];
and
octave:2> A_wrong = A_correct([2, 3, 1], :);
A_correct has rank 2, but A_correct([2, 3], :) has only rank 1. A_wrong merely has the rows in a different order, so that A_wrong([1, 2], :) has rank 1; this changes the above argument so that x(3) is uniquely determined and the third element of the basis vectors should be zero.
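As a quick check of these rank claims (my addition, not part of the original session):
rank (A_correct)              # 2: the log row is independent of the other two
rank (A_correct([2, 3], :))   # 1: row 3 is twice row 2
rank (A_wrong([1, 2], :))     # 1: the two dependent rows now come first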
Now
octave:3> null (A_correct.')
ans =
0.000000000000000
0.894427190999916
-0.447213595499958
and
octave:4> null (A_correct*A_correct.')
ans =
0.000000000000000
-0.894427190999916
0.447213595499958
work correctly,
but
octave:5> null (A_wrong.')
ans =
8.94427190999916e-01
-4.47213595499958e-01
-7.28583859910259e-16
and
octave:6> null (A_wrong*A_wrong.')
ans =
8.94427190999916e-01
-4.47213595499959e-01
6.21724893790088e-15
don't.
If I use the scaling
octave:7> A = A_wrong * diag (sqrt (.7) * ones (1, 10));
I get even larger values of the third element:
octave:8> null (A.')
ans =
-8.94427190999916e-01
4.47213595499958e-01
-3.57353036051222e-15
and
octave:9> null (A * A.')
ans =
8.94427190999915e-01
-4.47213595499960e-01
1.48908663177849e-14
To summarize, the case null(A.') is clearly not treated as it should be. And although in null(A*A.') part of the inaccuracy probably stems from the multiplication, null() is likely to be used in this way by some code, so I suggest that the case null(A*A.') should also be treated so that it yields the expected result.
Still, I don't know how to find the correct tolerance, except that I suppose it should be a fixed value, since the basis vectors are normalized.
|
Thu 09 Jun 2011 08:59:15 PM UTC, comment #4:
On the other hand, I now think that using a fixed value (not depending somehow on the scale of the elements of A) makes more sense, since the basis vectors are normalized. So using the variable "tol" as the tolerance is probably not as good an idea as I thought ... I don't know what to do here.
|
Thu 09 Jun 2011 08:55:31 PM UTC, comment #3:
Yes, can you please explain? The computed tolerance is used for determining how many vectors to place in the null space. Why should this tolerance also be used for setting entries of these vectors to zero? And will we get complaints in the future from Matlab users because Matlab doesn't do this?
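For context, the logic referred to here looks roughly like this (a simplified sketch of null.m from memory, not a verbatim quote of the sources):
[U, S, V] = svd (A);
s = diag (S);
tol = max (size (A)) * s(1) * eps;   # the computed tolerance in question
r = sum (s > tol);                   # singular values above tol count towards the rank
Z = V(:, r+1:end);                   # the remaining right singular vectors form the basis
Z(abs (Z) < eps) = 0;                # the zeroing step this report is about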
|
Thu 09 Jun 2011 08:24:04 PM UTC, comment #2:
I'm not sure I know what you mean. Should I post the matrix whose null space basis should contain a zero? Here it is:
The last element of ans should be zero.
To explain why I think it should be zero would be rather complicated.
|
Thu 09 Jun 2011 05:15:47 PM UTC, comment #1:
Could you post some sample code that exhibits the problem? It would help a lot in order to pinpoint it.
|
Thu 09 Jun 2011 05:03:26 PM UTC, original submission:
Octave version: 3.5.0+, stable branch, release 5bf8af73fc34 (2011-06-01)
Short description: the tolerance for setting basis elements to zero is possibly too strict (too small).
Long description: Unfortunately I can't theoretically derive a suitable value for this tolerance. But I had a practical problem in which, for theoretical reasons, a certain element of the basis of the null space of some matrix should be zero. Due to very slight differences in this matrix (accuracy of calculation) between two systems running Octave, that element was returned by null() as zero on one system but not on the other (same null() version, and in particular the same tolerance of eps used for zeroing). A larger tolerance "tol" is used by null() to decide which elements of S (from svd()) are treated as zero. If this tolerance "tol" were also used for setting elements of the returned basis to zero, I would get the expected result of zero on both systems (the order of magnitude of "tol" would be just enough for that).
So I suggest that null() should use the calculated tolerance "tol" also for setting basis elements to zero.
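A minimal sketch of that suggestion, assuming null() computes its rank tolerance roughly as max(size(A)) * s(1) * eps from the singular values s (simplified for illustration, not an actual patch):
[U, S, V] = svd (A);
s = diag (S);
tol = max (size (A)) * s(1) * eps;   # tolerance already used for the rank decision
Z = V(:, sum (s > tol)+1:end);       # null-space basis
## instead of something like  Z(abs (Z) < eps) = 0;  the suggestion is:
Z(abs (Z) < tol) = 0;                # reuse the same tolerance for zeroing basis elements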
|