Mon 19 Dec 2011 02:55:41 PM UTC, comment #9:
On Cygwin, where Octave is built with --enable-float-truncate:
octave:1> abs(2+3i) - sqrt(13)
ans = 0
I presume this is for the same reason I noted in "etc/README.Cygwin":
"--enable-float-truncate" is needed for the following bug:
http://thread.gmane.org/gmane.comp.gnu.octave.bugs/12361/focus=12404
Without it, one of the quadgk tests will fail, as "a == a" could be
false due to truncation problems with complex numbers.
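As an aside, Octave's built-in test syntax can express such a check with a
tolerance instead of exact equality; a minimal sketch, assuming a 10*eps
tolerance (my choice, not taken from the actual quadgk tests):

%!test
%! a = 2 + 3i;
%! ## assert() with a third argument compares to within a tolerance, so
%! ## float truncation cannot turn an exact-equality check false.
%! assert (abs (a), sqrt (13), 10*eps);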
|
Fri 09 Dec 2011 07:12:23 PM UTC, comment #8:
Just to add an obvious remark:
The "error" reported is (as close as FP gets)
twice eps. This suggests that someone is not being
as careful as could be the case. It would help some
to know what machine architecture produces the
"error."
In any case, this is not an Octave problem.
Michael
|
Fri 09 Dec 2011 04:30:09 PM UTC, comment #7:
Actually, reading the implementation of the GNU C++ abs(), it seems to
eventually end up in a call to hypot(), which to my eyes does quite a bit
of black magic with magic numbers and CPU registers in order to perform
the computation.
I suppose a different black-magic path could be taken on Windows that
accounts for the difference in the result.
Closing this report.
|
Fri 09 Dec 2011 03:29:38 PM UTC, comment #6:
What could abs() possibly be doing other than squaring each part (which
has no rounding error here, the numbers are too small), adding them
(also no rounding error), and then calling sqrt()?
If abs() is not calling sqrt(), then what is it doing? The sqrt() call is
the only place where I can imagine roundoff error occurring, since it's
the only step that isn't integer arithmetic on small numbers (i.e. exact).
Yet it must be doing something other than calling sqrt(13), because the
result is not identical to sqrt(13).
The abs() function is delegated to the C++ implementation: it calls the
std::abs() overload for std::complex<double> defined in the C++ standard
header <complex>. So how is the Windows compiler defining this function?
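One way to probe this from Octave itself is to compare the naive formula
with hypot(), which is what the library call ultimately reduces to; a
quick sketch (hypot() does extra work to avoid intermediate
overflow/underflow, which is why libraries prefer it over the naive
square-and-sqrt):

>> a = 2 + 3i;
>> abs (a) - sqrt (real (a)^2 + imag (a)^2)  # naive formula
>> abs (a) - hypot (real (a), imag (a))      # library-style computation

Comment #2 shows the first difference is 0 on Linux x86_64; checking
whether the Windows build's abs() also matches its own hypot() would help
localize where the last bit goes astray.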
|
Fri 09 Dec 2011 03:21:51 PM UTC, comment #5:
Richard Smith on the clang mailing list explained it to me as follows:
===
See C++98[expr]p10 / C++11[expr]p11, "The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type". There's no requirement that the compiler do this in a consistent way, or even do it the same way every time the same expression is evaluated.
(In practice, the result of the computation on, say, x86 will depend on whether the computation is performed using the 80-bit x87 FPU registers or one of the more modern 64-bit registers, whether extended precision is enabled, which rounding mode is set, whether FPU 80-bit registers got spilled to 64-bit stack slots, etc.)
===
It's a compiler issue, so the programmer has no influence over what gets
spilled where and when. I think that as compilers adapt to newer
architectures, you will see such differences.
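An Octave-level illustration of the same general point, that
mathematically equal floating-point expressions need not evaluate equally
(this example of mine is about associativity rather than register width,
but the moral is the same):

>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
ans = 0

The two groupings round differently in the last bit, so the comparison
fails.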
C'est la vie, Edward
|
Fri 09 Dec 2011 02:23:36 PM UTC, comment #4:
I'm a little curious to know what kind of computation is being performed
on Windows that accounts for this difference. It's not the first time
we've seen Windows get the last bit of a computation wrong. We frequently
have to increase the tolerance of some of our unit tests because Windows
gets them slightly wrong.
Can someone suggest a reason, or do we have a genuine bug on Windows?
|
Fri 09 Dec 2011 09:00:19 AM UTC, comment #3:
Alois, Michael,
Thanks much for your replies. I am aware of the issue; it still bites me
every time. In this case I was very surprised to see it happen in such a
simple expression. So, FP user beware (again).
I wouldn't call it a feature, though. Just real life.
Can someone close this report?
Regards, Edward.
|
Fri 09 Dec 2011 08:26:48 AM UTC, comment #2:
The comment below is helpful. Also, on a current development
system (Linux x86_64):
>> abs(a) - sqrt(real(a)*real(a) + imag(a)*imag(a))
ans = 0
Michael
|
Fri 09 Dec 2011 08:02:09 AM UTC, comment #1:
That's the world of floating point numbers. To make a long story short:
it's not a bug, it's a feature.
Octave (mostly) uses the floating point representation of IEEE 754/854,
which is supported in hardware by many CPUs and is therefore quite fast
(much, much faster than arbitrary-precision arithmetic).
If you want to know more about the issue, see [1,2]. For all practical
purposes, you solve the problem not by comparing against zero,
(abs(a) - sqrt(13)) == 0
but by testing whether the difference is within some small multiple of
the machine precision, e.g.
abs(abs(a) - sqrt(13)) < 10 * eps
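Wrapped up as a helper, the test could look like this minimal sketch (the
function name and default tolerance are my choices, not anything Octave
ships):

function tf = almost_equal (x, y, tol)
  ## True when x and y agree to within a small absolute tolerance,
  ## rather than requiring bit-exact equality.
  if (nargin < 3)
    tol = 10 * eps;  # a few ulps; scale up for larger operands
  endif
  tf = abs (x - y) < tol;
endfunction

>> almost_equal (abs (2+3i), sqrt (13))
ans = 1

This accepts both the exact 0 seen on Linux and the 2*eps residual seen
on Windows.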
Alois
[1] https://secure.wikimedia.org/wikipedia/en/wiki/IEEE_754-2008
[2] David Goldberg, "What Every Computer Scientist Should Know About
Floating-Point Arithmetic", ACM Computing Surveys, March 1991.
|
Fri 09 Dec 2011 07:40:08 AM UTC, original submission:
The magnitude of a complex number a must be equal to
sqrt(real(a)*real(a) + imag(a)*imag(a)). It isn't.
To reproduce:
>> a = 2+3i
>> abs(a) - sqrt(13)
ans = 4.4409e-016
>> abs(a) - sqrt(real(a)*real(a) + imag(a)*imag(a))
ans = 4.4409e-016
|