Fri 28 Nov 2014 08:57:09 AM UTC, comment #2:
According to ECMA-334 9.4.1, a Unicode code point in the range U+10000 to U+10FFFF is represented using two Unicode surrogate code units. I couldn't find any example in the specification, but a quick web search showed that the following expressions represent the same string:
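(The expressions themselves did not survive in this report; a pair along these lines illustrates the equivalence. The specific literals here are my reconstruction, not necessarily the ones originally posted: U+10400 written directly with \U, and as its UTF-16 surrogate pair with \u.)

    // C#: both literals denote the single code point U+10400
    string direct    = "\U00010400";      // 32-bit Unicode escape
    string surrogate = "\uD801\uDC00";    // high surrogate + low surrogate
    // direct == surrogate evaluates to true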
This indeed works with Mono, but makes xgettext abort. I'm attaching a tentative patch to fix this.
(file #32566)
|
Thu 28 Feb 2013 10:22:14 AM UTC, comment #1:
#1 and #3 are currently implemented, but the real problem is #2, because the conversion to UTF-8 happens before xgettext knows whether the string is translatable.
This problem is related to the issues I'm facing with Unicode support in C/C++. It will need some design changes, perhaps something quite simple like storing an "int buffer" instead of a "char buffer" and processing it at a later stage (a sketch of that idea follows), but there are too many places to touch and lots of testing to do before any useful patch can be posted.
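A minimal sketch of that deferred-processing idea, written in C# for illustration only (the real change would be in xgettext's C sources; all names here are hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Text;

    class StringBufferSketch
    {
        // Collect raw code points first, lone surrogates included,
        // without converting to UTF-8 yet.
        private readonly List<int> codePoints = new List<int>();

        public void AppendUnicode(int uc) => codePoints.Add(uc);

        // Called later, once we know whether the string is translatable.
        public string Finish(bool translatable)
        {
            var sb = new StringBuilder();
            foreach (int uc in codePoints)
            {
                bool loneSurrogate = uc >= 0xD800 && uc <= 0xDFFF;
                if (loneSurrogate && translatable)
                    throw new FormatException("lone surrogate in a translatable string");
                // For non-translatable strings, substitute U+FFFD so parsing can continue.
                sb.Append(char.ConvertFromUtf32(loneSurrogate ? 0xFFFD : uc));
            }
            return sb.ToString();
        }
    }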
Also, this bug report is old, but if you need a workaround now and the file does not contain any translatable strings, you can comment it out in your POTFILES.in or remove it from the xgettext arguments.
|
Wed 16 Feb 2011 04:57:42 PM UTC, original submission:
Running xgettext on a C# file which contains a string literal containing a Unicode escape for a surrogate character leads to an abnormal termination (abort()).
This is because the character reference is parsed correctly and then passed to string_buffer_append_unicode() in x-csharp.c, which uses u8_uctomb() to encode the character as UTF-8. However, u8_uctomb() refuses to encode surrogate characters (correctly, we might say: they should not be encoded separately; the “real” Unicode character should be encoded, not its UTF-16 encoding) and returns -1, after which string_buffer_append_unicode() decides to abort(), as “The caller should have ensured that uc is not out-of-range.”
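For reference, the range u8_uctomb() rejects is exactly the surrogate block plus anything past U+10FFFF. A one-liner expressing that check (a sketch of the documented behaviour, not libunistring's actual source, written in C# like the other examples here):

    // A Unicode scalar value is any code point in 0..0x10FFFF except
    // the surrogates U+D800..U+DFFF; u8_uctomb() fails for everything else.
    static bool IsScalarValue(int uc) =>
        uc >= 0 && uc <= 0x10FFFF && !(uc >= 0xD800 && uc <= 0xDFFF);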
Reproducible:
test.cs:
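(The original attachment was not preserved; a minimal file along these lines is a hypothetical reconstruction that triggers the abort:)

    // test.cs: a string literal containing a Unicode escape for a lone surrogate
    class Test
    {
        private const string s = "\uD800";
    }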
Then run:
xgettext --language=C# test.cs
A few thoughts on solving the problem:
- When the string containing the surrogate character is in fact a localizable string and would end up in the .po file, we should ensure the high surrogate is correctly followed by the corresponding low surrogate, and then UTF-8-encode the real Unicode character (see the sketch after this list). But this is probably not completely trivial.
- When the string is not localizable and we are parsing it just to get through the file, we could ignore the underlying meaning of the surrogate character and either skip it altogether or encode it UTF-8-like (i.e. CESU-8). That is not only easier, it is also necessary in some cases: there is nothing wrong with having something like “private const char firstSurrogate = '\xD800';”.
- That leaves the final case: what if the string is localizable but contains invalid UTF-16 (e.g. a bare surrogate)? In that case, xgettext should probably report an error, similar to when it detects an invalid byte sequence for the selected encoding (e.g. “Non-ASCII character at…”).
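For #1, the pairing step itself is just standard UTF-16 decoding; a minimal sketch with a hypothetical helper name:

    // Combine a high/low surrogate pair into the code point it encodes.
    // Preconditions: 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF.
    static int CombineSurrogatePair(char high, char low)
    {
        return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00);
    }
    // Example: CombineSurrogatePair('\uD801', '\uDC00') == 0x10400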
Right now, I would be happy to have just #2 above implemented, because that is my immediate problem: xgettext crashing on a file that has nothing to do with localization.
|