Sat 23 Dec 2017 02:14:16 PM UTC, comment #13:
Can this be included in the 4.2.2 release?
|
Fri 22 Dec 2017 12:31:40 PM UTC, comment #12:
@Dan: Thank you for double-checking for side effects. After re-evaluating, I still think that the patch makes the correct change:
The double variable "data" is the "fortran_vec" of an Octave "Matrix" type. As you already wrote, the range of values in character conversions comes out as [-128:127] (without the patch). Later on, "convert_to_str" is called on that matrix because there is no numeric data type in the conversion string ("%s") in the example from comment #0. That conversion goes through the lines of code quoted in comment #1, where all values outside the range [0:255] are set to 0. This is where the second bytes of the two-byte UTF-8 characters were lost.
With the cast to "unsigned char", the range in "data" is [0:255], matching what Octave expects for chars.
This also means that for mixed conversion strings (e.g. "%s %s %f"), where the output of (f)scanf is a double vector, the range for characters changes with the patch. E.g., if one byte of a string ("%c", "%s" or "[]") was read as -20 before the patch, it will be read as 236 after the patch.
But I think that is more consistent than the current behavior, because Octave's character type ranges from 0 to 255, too. So a user would probably expect the results of the scanf family of functions to be in the same range (and not to depend on the compiler used).
Hence I also think that the change should apply unconditionally to "%c", "%s" and "[]" conversions (as it does with the supplied patch).
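To illustrate the expected range (a minimal sketch, assuming a UTF-8 locale and a UTF-8 encoded script): the two bytes of a non-ASCII character already show up in Octave chars as values above 127, which is the range the patched (f)scanf would agree with.

  double ('ñ')        # ans = 195  177   (the UTF-8 bytes 0xC3 0xB1)
  char ([195 177])    # ans = ñ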
|
Thu 21 Dec 2017 09:40:03 PM UTC, comment #11:
Oy, inside a macro where type-checking isn't obvious. Well, good find. Are there any consequences to this change, in general?
I take it that scanning the character strings from the file (placed in a temporary std::string, with tmp[i] being a signed "char") was transferring them to double variables (i.e., "data"), so the values come out in [-128:127]. Then translating those back to UTF-8 would drop the sign bit, range-limiting the result to [0:127]?
And that is why, after the "range error for conversion to character value" warning, the UTF-8 characters come out broken?
I guess I'm wondering if there are any instances where we do want to treat characters as signed numbers. Having some sort of cast within a macro tends to make the macro specific rather than general. For example, this FINISH_CHARACTER_CONVERSION() is used for %c, %s and [] format types. Might we want %c to be treated as signed? If so, you could make the character type an argument of the macro so that the cast can be chosen per conversion, or something similar.
|
Thu 21 Dec 2017 07:40:11 PM UTC, comment #10:
I finally found where the read "char" was assigned to a "double", which ultimately led to the warning that discarded half of the UTF-8 characters.
The attached patch casts the "char" to "unsigned char" before the assignment. With it, the text file from comment #0 is read without errors.
(file #42699)
|
Sun 17 Dec 2017 10:10:18 PM UTC, comment #9:
Actually there is a more direct route to textscan() that issues no warnings.
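For instance, something along these lines (a sketch, assuming the sample file p1.txt from the original submission):

  fid = fopen ('p1.txt', 'r');
  c = textscan (fid, '%s');   # cell array of words, byte values kept in 0..255
  fclose (fid);
  c{1}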
People reading UTF-8 must be going that route. That internal string conversion behavior has been in place for ages. Strange that no one reading UTF-8 has mentioned the fscanf() limitation in the recent past.
|
Sun 17 Dec 2017 09:13:09 PM UTC, comment #8:
Yes, something needs to be corrected here; I was just looking for a quick solution for the OP. "textscan" is more useful than "textread" in that regard because we can get past the problem of UTF-8 (in the file) being range-limited to [0,127] by first reading the data as uint8.
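A rough sketch of that approach (again assuming the sample file p1.txt):

  fid = fopen ('p1.txt', 'r');
  bytes = fread (fid, Inf, 'uint8');   # raw bytes as doubles in 0..255
  fclose (fid);
  txt = char (bytes.');                # chars keep the full 0..255 range
  c = textscan (txt, '%s');            # split into words without warnings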
I was confused by "help unicode2native" myself, as there was no documentation. But I see now that I had some really simple home-grown version of unicode2native() in my personal utilities path. The current documentation seems OK.
|
Sun 17 Dec 2017 08:48:59 PM UTC, comment #7:
Err, I had a spelling error in there. Make that:
|
Sun 17 Dec 2017 08:48:12 PM UTC, comment #6:
@andy: While using the "CHAR_MIN" and "CHAR_MAX" macros for the range check in ov-re-mat.cc, ov-scalar.cc and co. would probably solve the problem at hand, it would also lead to "strange" effects:
Something like "double(char(200))" would be 200 on systems that have char running from 0 to 255 but would be 0 on others where the "default" char is signed. Likewise, "double(char(-20))" would be 0 on some systems and 236 on others.
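For reference, a sketch of the current behavior, which is the same on every platform (the values follow from the range check quoted in comment #1):

  double (char (200))   # ans = 200
  double (char (-20))   # warning: range error for conversion to character value
                        # ans = 0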
But maybe I have missed your point.
Reading the answers to your question on Stack Overflow, I think Octave's charMatrix and charNDArray should probably be based on Array<unsigned char> instead of Array<char>. But others are probably more aware of the implications.
@Dan: Interesting find that "textread" seems to do the job; it looks like it would solve the OP's problem. Does that function use a different code path? Nevertheless, I think the underlying reason why the scanf family of functions fails should still be investigated.
"unicode2native" and "native2unicode" are new in Octave 4.3.0+ and not yet available in 4.2.1. Do you have suggestions for how the documentation of these functions could be improved?
|
Sun 17 Dec 2017 08:44:53 PM UTC, comment #5:
The issue is mainly, I think, that reading data from a file always casts the read quantity to the specified format. Once past that, the goal seems more achievable: read the raw bytes first and only then hand the resulting character string to textscan(), either as a short version or as a somewhat longer, more explicit one.
Note that an explicit native2unicode() translation doesn't seem necessary. I suppose the usefulness of native2unicode() comes in when the encoding is something other than the Octave-assumed UTF-8.
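For example, a Latin-1 encoded byte sequence would need the explicit translation (a hypothetical sketch; native2unicode() is only available in newer Octave versions, see comment #6):

  bytes = uint8 ([241 111]);                    # "ño" in ISO-8859-1
  str = native2unicode (bytes, 'ISO-8859-1');   # now valid UTF-8 for Octave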
|
Sun 17 Dec 2017 07:42:33 PM UTC, comment #4:
@mmuetzel: It's true that "char" is signed on x86, but it's unsigned on ARM, for example.
There are macros "CHAR_MIN" and "CHAR_MAX" which may help.
I had the same problem compiling some OCT files on armhf:
https://stackoverflow.com/questions/46463064/what-causes-a-char-to-be-signed-or-unsigned-when-using-gcc
|
Sun 17 Dec 2017 07:29:50 PM UTC, comment #3:
There is a routine called textread() that comes close, but it still has some flakiness. For example, one call gives the appearance of working, but trying to read just the header seems to lose the UTF-8, while yet another variation sort of works.
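Basic usage looks something like this (a sketch against the sample file p1.txt, not the exact calls tried above):

  words = textread ('p1.txt', '%s');      # whole file as a cell array of words
  first = textread ('p1.txt', '%s', 1);   # read the format only once (the "header")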
|
Sun 17 Dec 2017 07:03:52 PM UTC, comment #2:
Just adding that there seems to be a great deal of UTF-8-related code in the project, and some poorly documented routines such as unicode2native() and native2unicode().
As a workaround, is it possible to somehow scan your file data as signed 8-bit chars, searching for the NUL character, and then run native2unicode() on that data?
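Something like this, perhaps (a hypothetical sketch; note that native2unicode() is only available in newer Octave versions, see comment #6):

  fid = fopen ('p1.txt', 'r');
  raw = fread (fid, Inf, 'int8');    # signed 8-bit values, -128..127
  fclose (fid);
  str = native2unicode (uint8 (mod (raw.', 256)), 'UTF-8');   # back to 0..255, then to char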
|
Sun 17 Dec 2017 12:52:38 AM UTC, comment #1:
I made a change to the character conversion code in ov-re-mat.cc (the lines where values outside the range [0:255] are set to 0).
This is certainly not how this should be fixed. However, with that change, all characters from the example file can be successfully read without warning.
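A minimal sketch of such a read (reusing the fscanf('%s') call from the original submission):

  fid = fopen ('p1.txt', 'r');
  s = fscanf (fid, '%s');   # with the change above: no range warnings
  fclose (fid);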
Judging from that chunk of code in ov-re-mat.cc, it looks like octave_values of type char are always expected to be unsigned chars. However, char is signed by default in gcc.
I am not sure what the correct fix would be.
|
Sat 16 Dec 2017 07:42:55 PM UTC, original submission:
I'm trying to read some text files with Spanish characters, such as ñ, ó and others.
I don't know how to do it. I have tried with fscanf('%s'), but I get a warning message: 'warning: range error for conversion to character value'.
I have prepared a sample file with UTF-8 characters to explain my problem using fscanf(fid,'%s'). The file can be downloaded from [1].
If I use fread(), it reads the correct byte values for extended characters, but if I use fscanf(), I receive the message 'warning: range error for conversion to character value', and the extended-code characters are missing from the result string. I can read the correct line if I use fgets() or fgetl(), but I need single words to process the file, and the same problem occurs with sscanf() if I try to read the words from a string variable.
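A minimal sketch of the failing call (using the sample file from [1]):

  fid = fopen ('p1.txt', 'r');
  w = fscanf (fid, '%s');   # triggers: warning: range error for conversion to character value
  fclose (fid);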
I checked the problem in Matlab and verified that Matlab can read the correct characters from strings with UTF-8 extended codes.
[1] http://mercatorlab.com/p1.txt
|