Thu 26 Jun 2014 04:19:23 AM UTC, comment #34:
I backported the minimal change with the protected operators to the stable branch here (http://hg.savannah.gnu.org/hgweb/octave/rev/c457a84bc7d3). Closing report.
|
Wed 25 Jun 2014 01:27:17 AM UTC, comment #33:
It seems to be okay. I compiled the attached file tst_rng.cc under 3.8.2 and then ran the resulting .oct file under 3.8.1 without problems.
(file #31614)
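For reference, a hypothetical build-and-run sequence from the Octave prompt (the entry-point name tst_rng is an assumption, not taken from the attachment):

    % under Octave 3.8.2, build the attachment:
    mkoctfile tst_rng.cc
    % then, under Octave 3.8.1, call the resulting .oct file:
    tst_rng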
|
Tue 24 Jun 2014 11:42:03 PM UTC, comment #32:
I suppose the change to the operators is OK for stable since it only adds a new protected member function. But would it cause link failures for .oct files compiled with 3.8.2 but then used with 3.8.1 libraries? Does it matter? Mostly I think we care about having code built with older libraries still working with newer versions of Octave libraries in the same stable release series. Wouldn't that still be possible?
|
Tue 24 Jun 2014 10:56:55 PM UTC, comment #31:
That sounds like a big change (development branch). Do you think it is okay to backport the current change which just affects operators to the stable branch?
|
Tue 24 Jun 2014 04:29:09 PM UTC, comment #30:
Octave's range type was implemented simply as a way to save memory. I saw no reason to store all elements of a:b if they weren't going to be needed all at once as an array object. Other than that, the range object should have behavior that is as close as possible to what Matlab does for ranges, which is to immediately convert to an array on construction.
The mistake was in the implementation of the addition and multiplication operators for Octave's ranges. I think this is fixed now. It's still possible that there will be a difference in intermediate range values, because instead of computing all the values for a range and then multiplying them all by a constant (for example), we are just adjusting the endpoints and recalculating the intermediate values. I don't know, but I suspect it is possible to end up with different values this way.
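In interpreter terms, a sketch of the two code paths just described (the residue magnitude is the one reported in this thread):

    m = [9, 10] * 0.1;  % array path: each stored element is multiplied directly
    r = (9:10) * 0.1;   % range path: only the range's endpoints and increment
                        %   are adjusted; elements are recomputed on demand
    % the two recipes can round differently, which is how a ~2e-17 residue can arise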
In any case, I'm considering providing an option, enabled by default under --traditional (aka --maximum-braindamage), to disable the range type optimization so that the behavior is even more like Matlab's (i.e., generate the array value instead of storing the more memory-efficient range type).
I'm also considering doing the same for the diagonal and permutation matrix types.
|
Tue 24 Jun 2014 04:11:39 PM UTC, comment #29:
@Rik, "Expected" is a better word. But matching Matlab is not, I think, a good target. In the specific case of the extended bits being on but ignored (as Matlab once did), Octave should, according to Kahan, achieve a factor of 2 improvement in precision. The cost appears to be the lack of exact agreement with Matlab, and the possibility that very old, not fully IEEE 754 compliant systems will produce different results.
Since it is widely held that Matlab made a mistake in turning the extended bits off (for exact compatibility across all "supported" systems), Octave making the better choice should be a definite selling point.
|
Tue 24 Jun 2014 03:57:48 PM UTC, comment #28:
Perhaps "psychological" was the wrong word. Maybe "expectations" would have been a better choice. It's always been my feeling that we need to change Octave's behavior because, even though we are doing a perfectly acceptable calculation that gets within eps of the exact value, people expect a different result. And when they can get the result they expect from other tools like Matlab or Python we end up looking bad.
|
Tue 24 Jun 2014 06:44:16 AM UTC, comment #27:
> This is not really a bug, but if it's trivial to perform
> the computation in a way that matches Matlab's own
> error (there has to be an error, since there can't be
> exact representation), let's pick Matlab's error.
I still believe that this is a bug in Octave, not in Matlab.
I fully agree that 0.1 does NOT have an exact floating point representation. But still, 10 * 0.1 - 1 == 0 is true. However, 9 * 0.1 - 1 == -0.1 is false (the result is -0.0999999999...).
But please consider the following results and tell me if you still believe that this is Octave (3.6.4) being right against all the others (Python 2.7.4, R 3.1.0 and Matlab R2012a).
In R 3.1.0:
In Python 2.7.4:
In Matlab R2012a:
And finally in Octave 3.6.4:
Even the simple fact that x1 and x2 are not equal is very disturbing (perhaps it is me again having a psychological issue?). Correct me if I'm wrong, but for a user of the Octave interpreter (not the C++ API) there is no such thing as a "range object", so [9 10] and (9:10) should be the same, shouldn't they?
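(The transcripts did not survive in this copy of the report. Reconstructed from the surrounding text, with x1 and x2 being the variables referenced above, the Octave side of the comparison was essentially:

    x1 = [9, 10] * 0.1 - 1;  % plain array: x1(2) is exactly 0
    x2 = (9:10) * 0.1 - 1;   % range: x2(2) came out as roughly 2e-17 in 3.6.4
    isequal (x1, x2)         % false before the fix
)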
|
Tue 24 Jun 2014 05:16:24 AM UTC, comment #26:
>> In double precision, 10, 0.1 and 1.0 are represented exactly
> This is false [snip]
You're perfectly right, mea culpa: I was mistakenly thinking in decimal floating point (0.1 == 1e-1).
|
Tue 24 Jun 2014 02:42:10 AM UTC, comment #25:
> In double precision, 10, 0.1 and 1.0 are represented exactly
This is false; the only fractions that can be represented exactly in hardware floats are those whose denominator is a power of 2, which 1/10 is not (it has a pesky 5 in there, and 1/5 has a repeating binary expansion). That is why 0.3 - 0.2 - 0.1 is not 0 in any language that uses hardware floats.
This is not really a bug, but if it's trivial to perform the computation in a way that matches Matlab's own error (there has to be an error, since there can't be an exact representation), let's pick Matlab's error.
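For illustration, the claim is easy to check in any IEEE 754 double environment:

    0.3 - 0.2 - 0.1   % about -2.8e-17, not 0: each of the three literals is
                      %   already rounded before the arithmetic even starts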
|
Sun 22 Jun 2014 08:58:06 PM UTC, comment #24:
@Rik,
I was afraid of that. This is the same argument that Matlab has used. They even went so far as to not allow Matlab to run on systems which do not provide bit-compatibility. There was a time when this excluded a significant number of machines. But, in any case, it reduced their numerical accuracy.
Note that leaving the extended bits on does not mean that they need to be preserved. As Kahan points out, and as his example shows, precision is improved with the bits ignored. I have not reviewed all current processors' IEEE 754 adherence, but I suspect that practically all current processors would yield the same results with the extended bits on but ignored.
You might want to look at:
http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
When this is decided, it would be good to document it.
|
Sun 22 Jun 2014 08:42:40 PM UTC, comment #23:
@Michael: I think the decision was made to go for consistency over accuracy. Not all platforms support extra floating point precision in their hardware floating point units. To get the same results across different versions of Octave, different hardware, and different compilers, Octave does its very best to enforce the IEEE standard (only 64 bits).
For example, configure.ac includes the following about the --enable-float-truncate option, which gets used for the MinGW and Cygwin platforms.
|
Sun 22 Jun 2014 02:36:24 PM UTC, comment #22:
While discussing floating point precision I noticed that (in both Matlab and Octave)

    e = eps; z = 1 + (1 + e)*e/2; d = z - 1;

yields

    d = 2.2204e-16

This means, according to Kahan, that the FP extended bits are turned off. Matlab has been criticized for doing this. Was this done intentionally in Octave for compatibility?
This only applies to "native" Octave arithmetic, but is it really required?
If the extended bits were on but ignored, the results would be z = 1 (not z = 1.0000) and d = 0 (not d = 2.2204e-16).
This is not "psychological"; it is about making use of the available FP precision. d is 0.
|
Sun 22 Jun 2014 01:17:02 PM UTC, comment #21:
@Rik: We'll have to agree to disagree, then. (We're not talking about just any double-precision numerical computation; we're talking about 10 * 0.1 - 1. I don't know how to say it otherwise: 10 * 0.1 - 1 == 1e-17 is FALSE in double-precision arithmetic. And I don't think that I have a psychological problem when I expect that Octave should return exactly [-.1 0] for (9:10) * .1 - 1.)
@JWE: Please consider adding the regression test provided below.
|
Sun 22 Jun 2014 12:58:30 AM UTC, comment #20:
Again, my point is that one shouldn't expect more than machine precision out of numerical computing. And the fact that people still do expect this is the psychological problem. The fraction of real numbers that can be represented exactly by IEEE 64-bit floating point, compared to the actual number of real numbers, is vanishingly small. Statistically, most Octave calculations will involve approximations to true numbers, and the final result will also be an approximation within eps of the true answer. In the particular case of 0.1--and approximately 2^52 others--we can do better than expected, and jwe has already made that change.
The question still under discussion is whether to backport this to stable. I vote yes, but it is up to jwe.
|
Sat 21 Jun 2014 09:15:14 PM UTC, comment #19:
Rik,
I have to disagree with you: this is NOT a psychological issue, this is a bug.
In double precision, 10, 0.1 and 1.0 are represented exactly and the result of 10 * 0.1 - 1 is 0, even in double precision arithmetic. So I don't see why "technically" we should be happy that the result of (9:10) * .1 - 1 is [-.1 1e-17]...
This is also a bug in the sense of Matlab compatibility: Matlab returns [-.1 0] in this situation.
@++
Julien
|
Sat 21 Jun 2014 04:11:12 PM UTC, comment #18:
Rik,
Right!
And, good to apply the patch to stable.
|
Sat 21 Jun 2014 02:37:49 PM UTC, comment #17:
I know all about non-linearity and the fact that small values can make large differences. My point is that at 2e-17 you are below machine precision and shouldn't expect more out of a numerical analysis package. If I solve a system of equations Ax = b with the left division operator I would be quite happy if every component of the solution x was within 2e-17 of the "true" result.
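Concretely (a generic illustration, not taken from this report):

    A = [2, 1; 1, 3];
    b = [3; 5];
    x = A \ b;        % left division: solve A*x = b
    norm (A*x - b)    % a residual on the order of eps is all one can ask for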
In this case, Octave has a range operator which produces results within eps of the "true" result. So, technically, one should be satisfied with the current situation. However, I pointed out that this is a psychological issue where people really expect to see zero, and which could generate negative publicity for Octave.
Moreover, since we do have a way to be even more accurate than eps, we might as well take up that opportunity.
|
Sat 21 Jun 2014 08:52:38 AM UTC, comment #16:
Following Julien's comment #14, "small" values can make a big difference if they cause a sign change, for example.
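A concrete instance of the sign-change hazard, using the expression from this report:

    x = (9:10) * 0.1 - 1;
    x(2) > 0   % true with the 2e-17 residue, false once the result is exactly 0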
In any case, I agree that this change should go in stable.
|
Sat 21 Jun 2014 07:28:32 AM UTC, comment #15:
Ok, it works for me on Ubuntu 13.04 / i686-linux-gnu / gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-1ubuntu1).
|
Sat 21 Jun 2014 06:22:28 AM UTC, comment #14:
Rik,
2e-17 sometimes makes a huge difference in calculations.
Actually, I found this bug because of Julia's "mandel" benchmark, where this 2e-17 results in a significant difference in the output (more than 1, I don't remember exactly).
I'm going now to build Octave with John's patch.
@++
Julien
|
Sat 21 Jun 2014 03:22:34 AM UTC, comment #13:
I would vote to apply this back on the stable branch. It's small, but it's the kind of thing that I could see getting bad press for Octave. Even though 2e-17 wouldn't make a difference in most calculations, people expect to see zero out of this kind of expression. I think, since the constructor is protected, you are correct in saying we could preserve binary compatibility.
|
Fri 20 Jun 2014 11:48:56 PM UTC, comment #12:
John said:
I checked in the following change:
http://hg.savannah.gnu.org/hgweb/octave/rev/47d4b680d0e0
Does this fix the problem for you?
This works for me on x86_64 (i3), Fedora 20.
|
Fri 20 Jun 2014 11:26:41 PM UTC, comment #11:
I checked in the following change:
http://hg.savannah.gnu.org/hgweb/octave/rev/47d4b680d0e0
Does this fix the problem for you?
I'm not sure whether this problem can be fixed without a change in the Range interface, so I'm not sure whether this is appropriate for the stable branch. However, it is just a change in the private interface, because we are adding a new constructor that is protected and only used internally in the Range class. Is that an acceptable change, or does it break binary compatibility (something we try to guarantee for the stable release series)?
|
Thu 19 Jun 2014 09:17:04 PM UTC, comment #10:
As a first step, here is a regression test for this bug.
(file #31582)
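The attachment itself is not reproduced here; a minimal test in Octave's %! syntax, assuming the expression from the original submission, might look like:

    %!test
    %! x = (9:10) * 0.1 - 1;
    %! assert (x(2), 0)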
|
Thu 19 Jun 2014 08:02:34 PM UTC, comment #9:
This is a regression. The problem code I used works in 3.2.4, but fails in 3.4.3. All versions of Octave from 3.4.3 on have the same behavior.
|
Thu 19 Jun 2014 07:54:02 PM UTC, comment #8:
Actually, I was just able to reproduce this with
A range is a special octave_value type. If I convert the range to an ordinary matrix, then things work out.
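A sketch of that conversion at the interpreter level (the residue value is the one reported elsewhere in this thread, and indexing is assumed here as one way to materialize the range):

    r = 9:10;
    r * 0.1 - 1    % range operand: second element came back as roughly 2e-17
    m = r(1:2);    % indexing materializes the range as an ordinary matrix
    m * 0.1 - 1    % matrix operand: second element is exactly 0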
This, at least, is a clue. I'm really busy right now, though, and can't look at this more.
|
Thu 19 Jun 2014 07:49:38 PM UTC, comment #7:
Really strange. It sure looks like there is some extra precision being kept during calculations. Intel floating point operations can use 80 bits rather than the IEEE 64 bits, but I was hoping that --enable-float-truncate would have taken care of that. I think you should post on the Octave maintainers mailing list, as this one is going to be difficult to solve.
|
Thu 19 Jun 2014 06:40:01 PM UTC, comment #6:
And the result is the same with the 3.6.4 (3.6.4-1, actually) that was installed by Ubuntu's package manager.
|
Thu 19 Jun 2014 06:35:33 PM UTC, comment #5:
I just built rc-3-8-2-1 with --enable-float-truncate.
It didn't change the result.
|
Thu 19 Jun 2014 06:09:38 PM UTC, comment #4:
The only thing I see different is the argument '-mieee-fp', but that should be okay. Maybe try the second solution of re-configuring Octave with '--enable-float-truncate'.
|
Thu 19 Jun 2014 06:03:32 PM UTC, comment #3:
Sorry about the linewrapping, here it is again:
|
Thu 19 Jun 2014 06:01:49 PM UTC, comment #2:
I don't know...
|
Thu 19 Jun 2014 05:11:53 PM UTC, comment #1:
I get the correct results for 3.6.4, 3.9.0+, and 4.1.0+. I'm running Kubuntu 12.04 with gcc 4.6.3.
You might try configuring with '--enable-float-truncate' and see if the problem goes away. You're not compiling with any option that might turn on '-ffast-math' are you?
|
Thu 19 Jun 2014 05:01:06 PM UTC, original submission:
I get this result with Octave 3.6.4 and 3.9.0+ (both built with gcc 4.7.3 on Ubuntu 13.04 i686-linux-gnu):
Matlab (2012a) returns exactly 0.
Trying to understand the cause of this error, I proceeded with some additional experiments, which give rather surprising results:
Again, Matlab returns exactly 0 in all three cases.
Could somebody please confirm these results before I start to investigate?
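(The result transcript did not survive in this copy; from the follow-up comments, the computation and its observed behavior were essentially:

    x = (9:10) * .1 - 1   % expected: [-0.1, 0]
                          % observed: second element off by about 2e-17
)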
|