bug #52681: Bad reading for UTF-8 characters with fscanf()

Submitted by:      None
Submitted on:      Sat 16 Dec 2017 07:42:55 PM UTC
Category:          Octave Function
Severity:          3 - Normal
Priority:          5 - Normal
Item Group:        Incorrect Result
Status:            Fixed
Assigned to:       None
Originator Name:   Santiago Higuera
Originator Email:  -email is unavailable-
Open/Closed:       Closed
Release:           4.2.1
Operating System:  Any


Tue 02 Jan 2018 08:18:33 PM UTC, comment #21:

Thanks, Rik. Pushed here: http://hg.savannah.gnu.org/hgweb/octave/rev/62a7d3f292d6

Markus Mützel <mmuetzel>
Project Member
Tue 02 Jan 2018 07:32:11 PM UTC, comment #20:

Yes, push to stable.

Rik <rik5>
Project Administrator
Tue 02 Jan 2018 06:23:08 PM UTC, comment #19:

Is it OK to push the patch as it is to stable (or default)?

Markus Mützel <mmuetzel>
Project Member
Sat 30 Dec 2017 08:01:24 PM UTC, comment #18:

"std::string" is basically a "typedef basic_string<char> string;". Also "buf" is a "std::ostringstream" which is "typedef basic_ostringstream<char> ostringstream;". Casting the integer "c" to "unsigned char" before inserting it in "buf" would have no effect.
We could define our own types based on "unsigned char". But since "std::istream& is" is based on "char" as well, there would probably always be some places where casts between "char" and "unsigned char" are necessary.

Markus Mützel <mmuetzel>
Project Member
Wed 27 Dec 2017 08:29:30 PM UTC, comment #17:

This seems like a fairly local change, so I don't see why it can't be included.

In the corresponding BEGIN_{C,S,CHAR_CLASS}_CONVERSION macros, why not also cast the integer character that is read to unsigned char instead of just char? I see that tmp is a std::string object. Would a cast like that when assigning to tmp just not matter?

John W. Eaton <jwe>
Project Administrator
Tue 26 Dec 2017 07:39:06 PM UTC, comment #16:

I'm adding jwe to the CC list for this bug report.

@jwe: What do you think? Can the patch for this report be pushed to stable, or should it go on the development branch?

Rik <rik5>
Project Administrator
Tue 26 Dec 2017 07:15:40 PM UTC, comment #15:

I know what you are thinking. My experience with programming in all types of languages, especially low-level code, is that changing the role of a sign bit typically has some consequence somewhere else. But as far as testing goes, I'm not sure how the development branch would really exercise this right now. There appear to be only a few places where fscanf is used:

textread() has a few internal test scripts, several with "%s", but otherwise fscanf doesn't appear to be used anywhere else. (The alternate method for reading the example file that I gave uses textscan(), which apparently takes a different code path.) Maybe fscanf() gets used more in the packages.

Perhaps the easiest thing is to add some tests to fscanf() that include UTF-8 characters.
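A sketch of what such a test might look like (the file name and test string are made up; this is only an outline, not a proposed patch):

    %!test
    %! str = "añejo";                # word containing two-byte UTF-8 characters
    %! fname = tempname ();
    %! fid = fopen (fname, "w");
    %! fputs (fid, str);
    %! fclose (fid);
    %! fid = fopen (fname, "r");
    %! s = fscanf (fid, "%s");
    %! fclose (fid);
    %! unlink (fname);
    %! assert (s, str);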

When is the next minor release? You were thinking before the end of the year, but JWE mentioned another schedule the other day.

Dan Sebald <sebald>
Tue 26 Dec 2017 06:43:43 PM UTC, comment #14:

I like that the change is so small, but I'm concerned that we don't fully understand all of its effects. I'd like to commit the patch, but I'm wondering whether it should go on the development branch rather than the stable branch so there could be more testing.

Rik <rik5>
Project Administrator
Sat 23 Dec 2017 02:14:16 PM UTC, comment #13:

Can this be included in the 4.2.2 release?

Markus Mützel <mmuetzel>
Project Member
Fri 22 Dec 2017 12:31:40 PM UTC, comment #12:

@Dan: Thank you for double-checking for side effects. After re-evaluating, I still think that the patch makes the correct change:

The double variable "data" is the "fortran_vec" of an Octave "Matrix" type. As you already wrote, the range of values in character conversions comes out as [-128:127] (without the patch). Later on, "convert_to_str" is called on that matrix because there is no numeric data type in the conversion string ("%s") in the example in comment #0. The data then passes through the lines of code in comment #1, where all values outside the range [0:255] are set to 0. This is where the second bytes of the two-byte UTF-8 characters were lost.
With the cast to "unsigned char", the range in "data" is [0:255], matching what Octave expects for chars.
This also means that for mixed conversion strings (e.g. "%s %s %f"), where the output of (f)scanf is a double vector, the range for characters changes with the patch. E.g., if one byte of a string ("%c", "%s" or "[]") was read as -20 before the patch, it will be read as 236 after the patch.
But I think that will be more consistent than the current behavior because Octave's character type ranges from 0 to 255, too. So a user would probably expect the results of the scanf family of functions to be in the same range (and not to depend on the compiler used).
Hence I also think that the change should apply unconditionally to "%c", "%s" and "[]" conversions (as it does with the supplied patch).
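To illustrate that last point with a hypothetical mixed conversion (the input string is made up; the exact numbers depend on the bytes read and on the platform):

    v = sscanf ("ñu 7", "%s %d");   % mixed conversion: the result is a double column vector
    % The bytes of "ñ" (0xC3 0xB1) show up in v as numbers.  On platforms where plain
    % char is signed they could be returned as -61 and -79 before the patch; with the
    % patch they are 195 and 177, inside Octave's char range [0,255].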

Markus Mützel <mmuetzel>
Project Member
Thu 21 Dec 2017 09:40:03 PM UTC, comment #11:

Oy, inside a macro where type-checking isn't obvious. Well, good find. Are there any consequences to this change, in general?

I take it that scanning the character strings from the file (placed in a temporary std::string, with tmp[i] being a signed "char") was transferring them to double variables (i.e., "data"), which come out in the range [-128:127]. Then translating those back to UTF-8 would drop the sign bit, range-limiting the result to [0:127]?

And that is why, after seeing the "warning: range error for conversion to character value" message, the UTF-8 characters are broken?

I guess I'm wondering if there are any instances where we do want to treat characters as signed numbers. Having some sort of cast within a macro tends to make the macro specific rather than general. For example, this FINISH_CHARACTER_CONVERSION() is used for the %c, %s and [] format types. Might we want %c to be treated as signed? If so, you could make the cast a macro parameter, or something similar, so that each conversion type can choose how its characters are treated.

Dan Sebald <sebald>
Thu 21 Dec 2017 07:40:11 PM UTC, comment #10:

I finally found where the read "char" was assigned to a "double", which ultimately led to the warning that discarded half of the UTF-8 characters.
The attached patch casts the "char" to "unsigned char" before the assignment. With it, the text file from comment #0 is read without errors.

(file #42699)

Markus Mützel <mmuetzel>
Project Member
Sun 17 Dec 2017 10:10:18 PM UTC, comment #9:

Actually, there is a more direct route via textscan() that issues no warnings.
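A sketch of such a route (the '%s' format and the sample file are taken from the original report; the exact call is illustrative):

    fid = fopen ('p1.txt', 'r');
    c = textscan (fid, '%s');   % no "range error" warnings; the UTF-8 bytes come through intact
    fclose (fid);
    words = c{1};               % cell array of the whitespace-separated words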

People reading UTF-8 must be going that route. That internal string conversion behavior has been in place for ages. It is strange that no one reading UTF-8 has mentioned the fscanf() limitation in the recent past.

Dan Sebald <sebald>
Sun 17 Dec 2017 09:13:09 PM UTC, comment #8:

Yes, something needs to be corrected here; I was just looking for a quick solution for the OP. "textscan" is more useful than "textread" in that regard because we can get past the problem of the UTF-8 file content being range-limited to [0,127] by first reading it as uint8.

I was confused by "help unicode2native" myself, as there was no documentation. But I see now that I had some really simple home-grown version of unicode2native() in my personal utilities path. The current documentation seems OK.

Dan Sebald <sebald>
Sun 17 Dec 2017 08:48:59 PM UTC, comment #7:

Err, I had a spelling error in there. Make that:

Dan Sebald <sebald>
Sun 17 Dec 2017 08:48:12 PM UTC, comment #6:

@andy: While using the "CHAR_MIN" and "CHAR_MAX" macros for the range check in ov-re-mat.cc, ov-scalar.cc and co. would probably solve the problem at hand, it would also lead to "strange" effects:
Something like "double(char(200))" would be 200 on systems that have char running from 0 to 255 but would be 0 on others where the "default" char is signed. Likewise, "double(char(-20))" would be 0 on some systems and 236 on others.
But maybe I have missed your point.
Reading the answers to your question on stackoverflow, I think Octave's charMatrix and charNDArray should probably be based on Array<unsigned char> instead of Array<char>. But others are probably more aware of the implications.

@Dan: Interesting find that "textread" seems to do the job. It looks like it would solve the OP's problem. Does that function use a different code path? In any case, I think the underlying reason why the functions from the scanf family are failing should still be investigated.
"unicode2native" and "native2unicode" are new in Octave 4.3.0+ and not yet available in 4.2.1. Do you have suggestions for how the documentation of these functions could be improved?

Markus Mützel <mmuetzel>
Project Member
Sun 17 Dec 2017 08:44:53 PM UTC, comment #5:

The issue is mainly, I think, that reading data from a file always casts the read quantity to the specified format. Once past that, the goal seems more achievable; the trick is to read the raw bytes first and only then build the character string. Note that the extra translation through native2unicode() doesn't seem necessary. I suppose the usefulness of native2unicode() comes in when the encoding is something other than the Octave-assumed UTF-8.
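A sketch of the short form (illustrative only; p1.txt is the sample file from the original report, and strsplit() is just one way to get single words):

    fid = fopen ('p1.txt', 'r');
    s = char (fread (fid, Inf, 'uint8').');   % read the raw bytes, then build the char string
    fclose (fid);
    words = strsplit (strtrim (s));           % split into single words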

Dan Sebald <sebald>
Sun 17 Dec 2017 07:42:33 PM UTC, comment #4:

@mmuetzel: It's true that "char" is signed on x86, but it's unsigned on ARM, for example.

There are macros "CHAR_MIN" and "CHAR_MAX" which may help.

I had the same problem compiling some OCT files on armhf:
https://stackoverflow.com/questions/46463064/what-causes-a-char-to-be-signed-or-unsigned-when-using-gcc

Andreas Weber <andy1978>
Project Member
Sun 17 Dec 2017 07:29:50 PM UTC, comment #3:

There is a routine called textread() that comes close, but it still has some flakiness. For example, one call gives the appearance of working, but trying to read just the header seems to lose the UTF-8 characters, and a third variant only sort of works.
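For orientation, the kind of call meant here is roughly the following (the exact arguments are illustrative, not necessarily what was actually run):

    words = textread ('p1.txt', '%s');   % returns a cell array of the whitespace-separated words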

Dan Sebald <sebald>
Sun 17 Dec 2017 07:03:52 PM UTC, comment #2:

Just adding that there seems to be a great deal of UTF-8-related code in the project...and some poorly documented routines such as unicode2native() and native2unicode().

As a work-around, is it possible to somehow scan your file data as signed 8-bit chars searching for the NULL character, then run native2unicode() on that data?
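A rough sketch of that kind of workaround (illustrative only; native2unicode() is only available in the development version, not in 4.2.1, and reading as uint8 rather than signed chars is just one way to get at the raw bytes):

    fid = fopen ('p1.txt', 'r');
    bytes = uint8 (fread (fid, Inf, 'uint8')).';   % raw bytes, no character conversion
    fclose (fid);
    str = native2unicode (bytes, 'UTF-8');         % interpret the byte stream as UTF-8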

Dan Sebald <sebald>
Sun 17 Dec 2017 12:52:38 AM UTC, comment #1:

I made a change to the range check for character values in ov-re-mat.cc.

This is certainly not how this should be fixed. However, with that change, all characters from the example file can be read successfully and without warnings, e.g. with an fscanf (fid, "%s") call like the one in the original report.

Judging from that range check, it looks like octave_value objects of type char are always expected to hold unsigned chars. However, char is signed by default in gcc.
I am not sure what the correct fix would be.

Markus Mützel <mmuetzel>
Project Member
Sat 16 Dec 2017 07:42:55 PM UTC, original submission:

I'm trying to read some text files with Spanish characters, such as ñ, ó and others.

I don't know how to do it. I have tried with fscanf('%s'), but I get a warning message: 'warning: range error for conversion to character value'.
I have prepared a sample file with UTF-8 characters to illustrate my problem using fscanf(fid,'%s'). The file can be downloaded from [1].

If I use fread(), it reads the correct byte values for the extended characters, but if I use fscanf(), I receive the message 'warning: range error for conversion to character value', and the extended characters are missing from the result string. I can read the correct line if I use fgets() or fgetl(), but I need single words to process the file, and the same problem occurs with sscanf() if I try to read the words from a string variable.
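A minimal sketch of the behavior described above, assuming the sample file [1] is saved as p1.txt in the current directory (this is only an illustration, not the exact code I ran):

    fid = fopen ('p1.txt', 'r');
    s = fscanf (fid, '%s');    % warns "range error for conversion to character value";
    fclose (fid);              % the non-ASCII characters are missing from s

    fid = fopen ('p1.txt', 'r');
    b = fread (fid, Inf, 'uint8');   % fread returns the correct byte values
    fclose (fid);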

I checked the problem in Matlab, and I have verified that Matlab can read the correct characters from a string with UTF-8 extended codes.

[1] http://mercatorlab.com/p1.txt

Anonymous

 


Attached Files
file #42699:  bug52681_scanf_utf-8.patch added by mmuetzel (1KiB - application/octet-stream)
file #42661:  p1.txt added by None (153B - text/plain)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by jwe (Posted a comment)
  • -email is unavailable- added by rik5
  • -email is unavailable- added by rik5 (Posted a comment)
  • -email is unavailable- added by andy1978 (Posted a comment)
  • -email is unavailable- added by sebald (Posted a comment)
  • -email is unavailable- added by mmuetzel (Posted a comment)
  • -email is unavailable- added by None (Submitted the item)
    Follow 8 latest changes.

    Date        Changed by  Updated Field     Previous Value => Replaced by
    2018-01-05  rik5        Status            Ready For Test => Fixed
                            Open/Closed       Open => Closed
    2018-01-02  mmuetzel    Status            Patch Submitted => Ready For Test
    2017-12-26  rik5        Carbon-Copy       - => Added jwe
    2017-12-21  mmuetzel    Attached File     - => Added bug52681_scanf_utf-8.patch, #42699
                            Status            None => Patch Submitted
                            Operating System  GNU/Linux => Any
    2017-12-16  None        Attached File     - => Added p1.txt, #42661
