Mon 16 Mar 2009 03:00:19 PM UTC, comment #14:
Let's not leave it hanging about, then.
|
Sun 13 Jul 2008 03:47:10 AM UTC, comment #13:
AGG doesn't support 16-bit rendering well internally, due to speed optimization considerations. A workaround is to render at 24 or 32 bits and then convert to 16 bits before displaying. The rendering speed won't be slower, but the framebuffer would be 1.5 or 2 times larger.
I guess we are not going to fix this 16-bit rendering bug...
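For reference, a minimal sketch of that workaround, done once per frame just before display (illustrative C++ only, not Gnash code; the buffer layout is assumed to be packed RGB24 in, RGB565 out):

    #include <cstdint>
    #include <cstddef>

    // Convert a packed RGB24 buffer to RGB565 right before handing it to the display.
    // The renderer keeps working at full precision; only this final copy is lossy.
    void rgb24_to_rgb565(const uint8_t* src, uint16_t* dst, size_t pixels)
    {
        for (size_t i = 0; i < pixels; ++i) {
            const uint8_t r = src[3 * i + 0];
            const uint8_t g = src[3 * i + 1];
            const uint8_t b = src[3 * i + 2];
            // Keep the top 5/6/5 bits of each channel.
            dst[i] = static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }

The extra cost is exactly the one mentioned above: the intermediate 24/32 bit buffer is 1.5 or 2 times the size of the 16 bit framebuffer.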
|
Thu 30 Aug 2007 12:03:14 PM UTC, comment #12:
For sure in our testsuite we check for white when we mean white,
so 255/255/255, which should still be max/max/max at any depth.
|
Thu 30 Aug 2007 11:46:22 AM UTC, comment #11:
The question is: should it be pure white (RGB24 255/255/255) by definition or not?
BTW, what's the point of using SDL in 16 bit mode? Since it handles 16->32 bit conversion it should be able to do 24->16 bit too, and it probably has pretty efficient algorithms for that.
The same applies to GTK-AGG. MIT-SHM should probably be disabled for 16 bit modes (i.e. work in 24 bit mode and have GDK deal with the conversion).
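A rough sketch of that idea with the SDL 1.2 API (illustrative only; the window size and the 24 bit shadow surface are assumptions, not what Gnash's SDL gui actually does):

    #include <SDL.h>

    int main()
    {
        SDL_Init(SDL_INIT_VIDEO);

        // 16 bit screen; SDL picks RGB555 or RGB565 depending on the display.
        SDL_Surface* screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);

        // Render into a 24 bit shadow surface at full precision.
        SDL_Surface* shadow = SDL_CreateRGBSurface(SDL_SWSURFACE, 640, 480, 24,
                                                   0xff0000, 0x00ff00, 0x0000ff, 0);

        // ... the renderer would write into shadow->pixels here ...

        // SDL converts 24 -> 16 bit while blitting to the screen surface.
        SDL_BlitSurface(shadow, NULL, screen, NULL);
        SDL_Flip(screen);

        SDL_FreeSurface(shadow);
        SDL_Quit();
        return 0;
    }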
|
Thu 30 Aug 2007 11:36:02 AM UTC, comment #10:
As I see it, a white background should still be white at both 16 bit and 24 bit depth.
The attached file shows a different case.
> This effect can be reduced by rendering in 24 bits and then
> reducing to 16 bit, eliminating any error accumulation. But
> this is very CPU intensive, and quality will be very poor in
> 16 bit mode anyway.
The quality depends on the displayer. The renderer could ignore quality concerns, but it had better not do anything wrong (inaccurate) before displaying. Just some opinions. It seems 16 bit depth hasn't received much attention from Gnash users before.
|
Thu 30 Aug 2007 09:31:46 AM UTC, comment #9:
The edges visible in 16 bit mode match the positions of the individual movie clip instances in that movie, right? I don't fully understand how the movie works, but if those instances are nearly invisible the cause may simply be inaccuracy / rounding errors.
In 16 bit mode there are only 32 intensity levels per R/G/B channel (with RGB555; RGB565 gives the green channel 64). The difference between adjacent levels is easily visible to the eye.
This effect can be reduced by rendering in 24 bits and then reducing to 16 bit, eliminating any error accumulation. But this is very CPU intensive, and quality will be very poor in 16 bit mode anyway.
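To put a number on those 32 levels: quantizing an 8 bit channel to 5 bits makes neighbouring displayable values 8 apart, so per-channel errors of up to 7 simply disappear in the 16 bit output. A tiny stand-alone illustration (not Gnash code):

    #include <cstdio>

    int main()
    {
        // Quantize an 8 bit channel value to 5 bits (RGB555) and expand it back.
        for (int v : {250, 251, 255}) {
            int level = v >> 3;       // one of only 32 levels
            int back  = level << 3;   // 248 for all three inputs
            std::printf("%3d -> level %2d -> %3d\n", v, level, back);
        }
        return 0;
    }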
If, instead, the movie clips should be fully invisible, then there is a bug/problem in the renderer.
Which other movie clips are you talking about? Testing other movies just showed the limited capabilities of 16 bit mode (i.e. non-smooth gradients), but nothing I wouldn't expect.
|
Thu 30 Aug 2007 08:00:34 AM UTC, comment #8:
See attached screen shot.
The difference is not only with masks; it applies to all other colors as well. I can reproduce it with many other files in our testsuite.
Yes, masks_test2runner.cpp has an xcheck there, since it uses bit depth 16.
(file #13836)
|
Wed 29 Aug 2007 05:55:26 PM UTC, comment #7:
" -b <bits> Bit depth of output window (16 or 32, default is 16)\n"
That description is bogus. It seems the default is based on the renderer: 32 for cairo, 16 otherwise.
Then, for the AGG renderer, the SDL gui supports 16, 24 and 32,
initializing the following pixel formats:
16 : RGBA16
24 : RGB24
32 : RGBA32
Yes, there is a confusing mix of depth & pixel format here.
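In code terms, the depth-to-format choice described above is roughly (a sketch; the function name is made up, the format names are the ones listed):

    // Map the -b bit depth to the AGG pixel format name used by the SDL gui.
    const char* depth_to_pixelformat(int depth)
    {
        switch (depth) {
            case 16: return "RGBA16";
            case 24: return "RGB24";
            case 32: return "RGBA32";
            default: return 0;   // unsupported depth
        }
    }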
Anyway, the original bug was about make check failing, wasn't it?
Did it actually fail, zou?
|
Mon 27 Aug 2007 07:44:40 AM UTC, comment #6:
Yes, after reading about "-b" I dug into the source code and found it (is this new?). It certainly does not matter for the GTK GUI, and I can't test the SDL GUI.
BTW, this option (and others, like "-m") is a bit confusing when it only works with certain GUIs.
|
Mon 27 Aug 2007 12:55:34 AM UTC, comment #5:
see gnash.cpp, function usage().
" -b <bits> Bit depth of output window (16 or 32, default is 16)\n"
I tested with Gnash as a standalone player, with AGG + SDL.
I am sure the -b option works here.
|
Fri 24 Aug 2007 03:05:47 PM UTC, comment #4:
> -b16 option triggered RGB555/RGB565, right? I saw the problem with the -b16 option.
Huh?? -b? Where does that option come from? Anyway, I see no difference whatsoever using "-b 16" (tried with and without MIT-SHM). A difference would surprise me, because the value set by the -b option is never used in GTK-AGG. Unless you use MIT-SHM it will always use RGB24 (see detect_pixelformat()).
> The rationale would be that the user would accept 8 * the minimum tolerance, wouldn't it?
Why multiply? We can't detect any error below 8, but anything greater would work just fine (not that accurate, but still correct). Otherwise, with a tolerance of 10 you'd end up with a tolerance of 10*8 = 80!
Better to always round up: for a tolerance of 10 that would be 16.
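A sketch of that rounding rule (illustrative; the function name is made up and this is not the actual MovieTester code):

    // Round a user-requested tolerance up to the renderer's quantization step.
    // With step = 8 (RGB555/RGB565), a requested tolerance of 10 becomes 16, not 80.
    int effective_tolerance(int requested, int step)
    {
        if (requested <= step) return step;            // errors below one step are undetectable
        return ((requested + step - 1) / step) * step; // round up to the next multiple of step
    }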
|
Fri 24 Aug 2007 08:13:19 AM UTC, comment #3:
MovieTester tries to initialize a set of renderers for AGG, in this order:
"RGB555", "RGB565", "RGBA16",
"RGB24", "BGR24", "RGBA32", "BGRA32"
Initialization of those not compiled-in will fail.
Only the first succeeding one will be used (due to limitations of the core lib, which cannot use multiple renderers).
Still, I think we found a bug, either in the RGB555 renderer OR in the computation of minTolerance. Note that zou, in his test, is using a tolerance of 8, which is the min tolerance for that renderer IIRC. Should we multiply the user-requested tolerance by the min renderer tolerance in these cases?
The rationale would be that the user would accept 8 * the minimum tolerance, wouldn't it?
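Roughly what that selection amounts to, as a hedged sketch (create_handler() is a stand-in for the real AGG factory; the declarations are only there to make the snippet self-contained):

    #include <cstddef>

    class render_handler;                              // Gnash renderer base class
    render_handler* create_handler(const char* fmt);   // stand-in factory: returns NULL if fmt was not compiled in

    static const char* formats[] = {
        "RGB555", "RGB565", "RGBA16",
        "RGB24", "BGR24", "RGBA32", "BGRA32"
    };

    // Pick the first pixel format whose renderer initializes; the core lib
    // can only drive one renderer at a time, so the rest are ignored.
    render_handler* pick_renderer()
    {
        const size_t n = sizeof(formats) / sizeof(formats[0]);
        for (size_t i = 0; i < n; ++i) {
            if (render_handler* h = create_handler(formats[i]))
                return h;
        }
        return 0;   // no AGG pixel format available at all
    }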
|
Fri 24 Aug 2007 02:38:59 AM UTC, comment #2:
strk said the MovieTester uses RGB555 internally.
I compiled agg with all pixel formats enabled.
Tested with command:
gnash -b16 mask_test.swf
-b16 option triggered RGB555/RGB565, right? I saw the problem with the -b16 option.
Also tested with command:
gnash -b24 mask_test.swf
Yes, everything is fine with -b24.
|
Thu 23 Aug 2007 12:00:17 PM UTC, comment #1:
How did you check pixel format RGB555? Using the FB GUI? Or does your X server run in RGB555 mode and you have --enable-mit-shm?
With GTK/AGG only RGB24 works, unless you use the MIT-SHM extension...
|
Wed 22 Aug 2007 12:17:02 PM UTC, original submission:
UdoG: see testsuite/misc-ming.all/masks_test2.swf or your own test files with pixel format RGB555; there are the same "shades" behind the masks.
Are they expected because of the inaccurate pixel format, or is something wrong?
With pixel format RGB24, everything is fine.
|