Mon 23 May 2016 05:54:21 PM UTC, comment #13:
http://hg.savannah.gnu.org/hgweb/octave/rev/f00204dae6ee
|
Mon 23 May 2016 05:54:09 PM UTC, comment #12:
I applied this patch on the stable branch and merged it to default, so this should be fixed in the next release of Octave.
This may allow bug #43505 to be resolved as well; I'm still not sure whether those tests were failing for the same reason or a different one.
|
Thu 19 May 2016 03:48:28 PM UTC, comment #11:
I'm attaching a patch that fixes the error I saw on a Debian 32-bit system.
The problem occurs when writing uintN-based images while the library quantum depth is greater than 8: the scaling factor used to quantize the pixels may be less than 1.
There may be an equivalent bug in the read_images function, where the divisor could be less than 1 if the GM quantum depth is 8 and we are reading a 16-bit image file, but I don't have a way to test that currently.
(file #37210)
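To make that concrete, here is a small self-contained sketch (illustrative only, not Octave's actual code; it assumes a GraphicsMagick built with QuantumDepth=16) of how scaling an 8-bit sample with a divisor below 1 can go wrong, and how rounding avoids it:

    // Illustrative only: scale an 8-bit sample into a 16-bit quantum range
    // using a divisor smaller than 1, as described above.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main (void)
    {
      const unsigned int quantum_depth = 16;  // library quantum depth
      const unsigned int bitdepth = 8;        // depth of the uint8 Octave image

      // 255/65535 == 1/257, i.e. less than 1 whenever quantum_depth > bitdepth.
      const double divisor = double ((1 << bitdepth) - 1)
                             / double ((uint64_t (1) << quantum_depth) - 1);

      for (unsigned int px = 1; px <= 3; px++)
        {
          const double scaled = px / divisor;  // ideally px * 257 exactly
          // A plain integer conversion truncates; if extended-precision
          // arithmetic leaves "scaled" just below the intended integer,
          // the quantum ends up off by one.  Rounding avoids that.
          std::printf ("px=%u  scaled=%.17g  truncated=%u  rounded=%u\n",
                       px, scaled, unsigned (scaled),
                       unsigned (std::round (scaled)));
        }
      return 0;
    }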
|
Wed 11 May 2016 09:37:03 PM UTC, comment #10:
Ok, after some more testing (and more confusion from the oct-file to core function conversion), I now think this is due to simple round-off error in the values passed to GM.
It seems that if the pixel value handed to GM is 256 or less, it is written as a 0; if it is 257, it is written as a 1. The divisor being calculated for quantum depth 16 is 1/257, but round-off error is giving different quantized pixel values on i686 vs x86_64. Should we use xround or xceil here, or calculate the divisor differently? A quick test adding xround to the uint TrueColorMatteType case fixes this bug.
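The scale-down side can be illustrated in a few lines (my own sketch of the division-by-257 behavior described above, not GraphicsMagick source):

    // Illustration: with a 16-bit quantum depth, writing an 8-bit file reduces
    // each quantum by integer division by 257, so a quantum of 256 collapses
    // to 0 while 257 maps to 1.  A one-unit round-off in the value handed to
    // the library therefore flips the written byte.
    #include <cstdint>
    #include <cstdio>

    static unsigned int quantum_to_byte (uint16_t q) { return q / 257; }

    int main (void)
    {
      std::printf ("256 -> %u\n", quantum_to_byte (256));  // 0  (the off-by-one case)
      std::printf ("257 -> %u\n", quantum_to_byte (257));  // 1  (the intended value)
      std::printf ("514 -> %u\n", quantum_to_byte (514));  // 2
      return 0;
    }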
|
Wed 11 May 2016 09:11:29 PM UTC, comment #9:
I inserted my test program directly into __magick_read__.cc, right before __magick_write__ returns, and it produces the correct output file.
So I am now calling Octave with this one-liner:
Meanwhile the body of my test program is running inside of __magick_write__, producing a separate TIFF file with what I believe are the exact same pixel values being written, and the files still differ.
|
Tue 10 May 2016 07:23:26 PM UTC, comment #8:
Attaching the standalone C++ program I wrote to emulate what I think Octave is doing.
Running Octave in gdb, I get the following values (at __magick_read__.cc line 1215):
These are off by 1, but they should still evaluate to 8-bit pixel values of 1, 2, etc. I even added a round when constructing the Color; the values went up by 1 (257, 514, etc.), but the image file is still incorrect.
(file #37121)
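For readers without the attachment, here is a hypothetical sketch of the kind of standalone Magick++ program described here (it is not the attached file #37121; the geometry, pixel values and output filename are invented):

    // Hypothetical sketch, not the attached file #37121: build a tiny RGBA
    // image with explicit quantum values and write it as a TIFF file, the
    // idea being to hand GraphicsMagick exact integers so no round-off can
    // occur on the way in.
    #include <Magick++.h>

    int main (int argc, char **argv)
    {
      Magick::InitializeMagick (*argv);

      Magick::Image img (Magick::Geometry (3, 1), Magick::Color ("black"));
      img.type (Magick::TrueColorMatteType);
      img.depth (8);

      for (unsigned int i = 0; i < 3; i++)
        {
          // With QuantumDepth=16, the 8-bit value i+1 corresponds to quantum
          // (i+1)*257; the fourth quantum is the opacity, 0 meaning opaque.
          const unsigned int v = (i + 1) * 257;
          img.pixelColor (i, 0, Magick::Color (v, v, v, 0));
        }

      img.write ("standalone-test.tif");
      return 0;
    }

Inspecting the resulting file with hexdump, as in the original report, shows whether the library itself preserves the values.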
|
Tue 10 May 2016 07:16:22 PM UTC, comment #7:
Ok, building with -O0, writing an 8-bit RGB image with an alpha channel still produces the wrong color values in the file, but in the debugger all of the pixel color values look correct.
I wrote a simple standalone program to try to call GraphicsMagick in the same way as Octave does, and it produces a correct image file. So I can't tell what is different between that and what Octave is doing when it creates the file.
|
Tue 10 May 2016 06:27:08 PM UTC, comment #6:
When building with -O2 (the default optimization) on i686, the following test fails:
But it succeeds when I build Octave with -O0. When I examine the pixel values in gdb with -O0, they are identical between i686 and x86_64, but I get "optimized out" when building with -O2.
Other tests still fail even with -O0, such as the uint8 RGB image with alpha; I'm looking at those next.
|
Tue 10 May 2016 05:09:51 PM UTC, comment #5:
Rebuilding with the magick routines moved into liboctinterp instead of an oct-file, I now get all tests passing in imread, and only 6 tests failing in imwrite. So changing the way GraphicsMagick is linked into Octave definitely changed something related to this.
|
Sun 08 May 2016 10:14:05 PM UTC, comment #4:
On x86_64:
On i686:
|
Tue 03 May 2016 12:07:26 PM UTC, comment #3:
This snippet:
It should follow this code path (a single channel/color without alpha, so an Image of type Magick::GrayscaleType): http://hg.savannah.gnu.org/hgweb/octave/file/58f5a6347851/libinterp/dldfcn/__magick_read__.cc#l1096
Can you print the value of grey at line 1111 to check whether we are computing it wrong or whether it is an issue in GraphicsMagick?
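One way to split the two possibilities without gdb is to hand GraphicsMagick known grey quantum values directly, bypassing the computation of grey entirely. A rough sketch (my own, with arbitrary values and filename, assuming QuantumDepth=16):

    // Rough check, not Octave code: write a 3-pixel grayscale TIFF from
    // quantum values supplied directly, so any off-by-one in the resulting
    // file would have to come from GraphicsMagick rather than from grey.
    #include <Magick++.h>

    int main (int argc, char **argv)
    {
      Magick::InitializeMagick (*argv);

      Magick::Image img (Magick::Geometry (3, 1), Magick::Color ("black"));
      img.type (Magick::GrayscaleType);
      img.depth (8);

      for (unsigned int i = 0; i < 3; i++)
        img.pixelColor (i, 0, Magick::Color ((i + 1) * 257, (i + 1) * 257,
                                             (i + 1) * 257));

      img.write ("grey-check.tif");  // expect pixel bytes 1, 2, 3 in the file
      return 0;
    }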
|
Thu 21 Apr 2016 05:54:54 AM UTC, comment #2:
Thanks for the pointer to bug #43505. It may be related, but this bug does not require any Java functions to be called for it to occur on Debian.
GraphicsMagick is compiled with quantum depth set to 16 bits (on both 64-bit and 32-bit Debian installs).
|
Thu 21 Apr 2016 04:40:40 AM UTC, comment #1:
Is it related to bug #43505?
There is also an "off-by-one" error in imwrite, but it is on Windows machines, if GraphicsMagick is compiled with quantum-depth=16.
The error is activated (somehow) by "jobj = javaObject ("java.lang.StringBuffer")" in __run_test_suite__.
|
Thu 21 Apr 2016 02:30:42 AM UTC, original submission:
On Debian i686, the imwrite function writes an incorrect TIFF file from an array of uint8 pixel values.
I think this is the same as previously reported bugs, for example bug #33317, but it has not yet been investigated.
I have two builds on up-to-date Debian systems, the only difference being i686 vs x86_64, and the TIFF file produced by imwrite on i686 shows an off-by-one error. This shows up as unit test failures in imread.m and imwrite.m; see for example https://buildd.debian.org/status/fetch.php?pkg=octave&arch=i386&ver=4.0.1-1&stamp=1458697721
This minimal example shows the error on Debian i686:
Using hexdump on the file shows that the error is in the written TIFF file, not in reading the file back. Reading the file with imread in a 64-bit Octave also shows the incorrect vector.
|