bug #57107: regexp functions fail on ISO-8859-1 input

Submitted by:  A.R. Burgers <arb>
Submitted on:  Wed 23 Oct 2019 09:38:50 AM UTC  
 
Category: Octave Function          Severity: 3 - Normal
Priority: 5 - Normal               Item Group: Matlab Compatibility
Status: In Progress                Assigned to: None
Originator Name:                   Open/Closed: Open
Release: dev                       Operating System: Any


Mon 04 Nov 2019 10:39:20 PM UTC, comment #26: 

@ comment #20:
textread.m invokes strread.m.
strread.m does a lot of byte counting behind the scenes, so I'm not surprised if it breaks. But it could also be that it's actually regexp() or regexprep() that broke: I use a pimped version of strread.m that can handle cuddling literals (textscan can't), and that version broke several months ago after some UTF-8(?) fixes in core. That problem is actually with some otherwise valid-looking regexp calls. I still want to isolate those regexp calls and report them in the bug tracker.

I can look at what the patch in comment #20 does to strread.m (in the past I did a lot of work on strread.m) but it is considered legacy.

Philip Nienhuis <philipnienhuis>
Project Member
Mon 04 Nov 2019 10:28:21 AM UTC, comment #25: 

And nope, I can't think of any actual (non-toy) use case for getting the code point values back from char strings. I'm just bothered by the asymmetry in operations on the primitive types. But you can't do round-trips between the various numeric primitives either, due to round-off or overflow issues. So it's hard to say that's a big deal.

Andrew Janke <apjanke>
Mon 04 Nov 2019 07:54:42 AM UTC, comment #24: 

No, I just mean if a user wanted to do custom M-code based comparisons on a character's code point value. isstrprop() and friends would probably be better ways to do it.

Andrew Janke <apjanke>
Mon 04 Nov 2019 07:27:34 AM UTC, comment #23: 

@apjanke: I'm not sure what you mean by your second point in comment #21. Are you referring to the isstrprop and related functions? They should be UTF-8 ready by now.
If you want to get a numeric Unicode code point outside the range of an 8-bit char, you could convert to, say, UTF-32:

double (unicode2native("∫", "utf-32le")) * 2.^((0:3).'*8)
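
The reverse direction works the same way; a sketch, assuming native2unicode accepts "utf-32le" just like unicode2native above:

native2unicode (uint8 ([43 34 0 0]), "utf-32le")   # UTF-32LE bytes of U+222B -> "∫"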

Markus Mützel <mmuetzel>
Project Member
Mon 04 Nov 2019 01:50:06 AM UTC, comment #22: 

Oh, wait. Looks like you already addressed that round-trip oddness in the email you sent out to the mailing list. Sorry.

Andrew Janke <apjanke>
Mon 04 Nov 2019 01:48:09 AM UTC, comment #21: 

That brings up an interesting edge case: under this behavior, `double(char)` and `char(double)` are no longer inverses of each other, so you can't round-trip a piece of data through those transformations. I'm not sure what practical implications that would have. Aside from that, it's now hard to get the Unicode code point value from an input character, if you wanted to do numeric comparisons, code-block membership tests, or whatever on it.

Andrew Janke <apjanke>
Sun 03 Nov 2019 01:31:28 PM UTC, comment #20: 

The attached patch wires in the validation of UTF-8 at a pretty low level. It applies on top of "bug57107_validate_u8.patch".
It breaks "strread" and "textread" (and possibly also other things). But I'm still waiting for feedback on the maintainers mailing list to see if it is worth looking into why.
Nevertheless, it demonstrates what could happen:

octave:1> char (181)
ans = µ
octave:2> double (ans)
ans =

   194   181

octave:3> char ([181 228])
ans = µä
octave:4> double (ans)
ans =

   194   181   195   164

(file #47784)

Markus Mützel <mmuetzel>
Project Member
Sat 02 Nov 2019 01:58:35 PM UTC, comment #19: 

Thanks! That email looks to me like a good description of the issues we're facing.

Andrew Janke <apjanke>
Sat 02 Nov 2019 12:30:15 PM UTC, comment #18: 

@Andrew: I asked about the general issue with invalid UTF-8 on the maintainers mailing list here:
https://octave.1599824.n4.nabble.com/How-should-we-treat-invalid-UTF-8-td4694444.html

Markus Mützel <mmuetzel>
Project Member
Sat 26 Oct 2019 11:15:09 AM UTC, comment #17: 

The attached patch is a lot cleaner at validating UTF-8-encoded strings.
It still isn't wired in to do anything meaningful though. But it should be safe enough for testing.

The question remains: where do we want this conversion from invalid UTF-8 to valid UTF-8 to happen?
It might be surprising to a user if the string they read from a file wasn't byte-identical to the content of the file.
At the same time, there are probably a lot of places inside Octave (not only regexp*) where we would need to check if char arrays contained valid UTF-8 before using them safely.
On the other hand, there is nothing that would prevent a user from creating invalid UTF-8 manually (e.g. assigning "a=char(181)"). So validating strings that are read from a file wouldn't suffice anyway.

Thus, the best option I see at the moment is identifying the critical places (e.g. just before passing the strings to PCRE) and validating them using the new function "validate_u8".
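
For experimenting before such wiring exists, the same check can be approximated in M-code. A minimal sketch (the helper name is hypothetical, and this is not the patch's C++ "validate_u8"); it skips the overlong-sequence and surrogate checks a complete validator would also need:

function valid = is_valid_utf8 (str)
  ## Minimal UTF-8 structural check: reject stray continuation bytes,
  ## invalid lead bytes, and truncated multi-byte sequences.
  b = double (uint8 (str));
  i = 1;
  valid = true;
  while (i <= numel (b))
    if (b(i) < 128)
      n = 0;                    # ASCII byte
    elseif (b(i) >= 192 && b(i) < 224)
      n = 1;                    # lead byte of a 2-byte sequence
    elseif (b(i) >= 224 && b(i) < 240)
      n = 2;                    # lead byte of a 3-byte sequence
    elseif (b(i) >= 240 && b(i) < 248)
      n = 3;                    # lead byte of a 4-byte sequence
    else
      valid = false;            # stray continuation byte or invalid lead
      return;
    endif
    ## the next n bytes must all be continuation bytes (10xxxxxx)
    if (i + n > numel (b) || any (b(i+1:i+n) < 128 | b(i+1:i+n) >= 192))
      valid = false;
      return;
    endif
    i += n + 1;
  endwhile
endfunction

With that, is_valid_utf8 (char (181)) is false, while is_valid_utf8 ("abcä") is true.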

(file #47746)

Markus Mützel <mmuetzel>
Project Member
Fri 25 Oct 2019 04:02:23 PM UTC, comment #16: 

I just checked in a patch for bug #55452. With it, I can do the following to read the file from comment #0:

f = fopen('ISO-8859.csv', 'r', 'n', 'iso-8859-1');  % read-only, native machine format, explicit ISO-8859-1 encoding
str = fgets(f);
str(end) = '';
fclose(f);

regexprep(str, '1', '2')

The output is:

ans =   2T2(°C)
>> double(str)
ans =

    32    32    49    84    49    40   194   176    67    41    32    32

Should we make this bug about choosing a fallback option for invalid UTF-8 so that specifying the input encoding isn't necessary for files encoded in ISO-8859-1?

Markus Mützel <mmuetzel>
Project Member
Thu 24 Oct 2019 06:35:55 PM UTC, comment #15: 

We're using libiconv.
I didn't find an option that does this fallback conversion automatically though.

And definitely don't use the diff I uploaded previously. It leaks and doesn't advance the pointers correctly. (But maybe it was good enough for demonstration purposes.)

Please, go ahead and ask the mailing list for feedback.

Markus Mützel <mmuetzel>
Project Member
Thu 24 Oct 2019 06:30:29 PM UTC, comment #14: 

There are so many possible scenarios and use cases for text encoding. Maybe this is something we should send a survey email out to the octave-users mailing list about, so we can get an idea for what the user community would prefer or need?

Andrew Janke <apjanke>
Thu 24 Oct 2019 06:27:23 PM UTC, comment #13: 

Also I think we'd want to get a library to handle this for us, rather than trying to write our own Unicode encoding/decoding routines, right?

Andrew Janke <apjanke>
Thu 24 Oct 2019 06:26:35 PM UTC, comment #12: 

That sounds like a pretty reasonable approach. I think it would provide "do what I want" behavior for most users without getting too fancy, and would provide decent Matlab compatibility.

Maybe we'd want to do a two-step fallback (see the sketch after this list):

1. Default to UTF-8.
2. If encountering non-UTF-8 byte sequences,
  a) If the user's locale's encoding is a non-Unicode encoding, fall back to it,
  b) Else fall back to ISO-8859-1 like this.

I don't know if that's actually viable for all multibyte encodings, though (e.g. Shift-JIS). And I'm pretty sure it's not what Matlab does. But it might be a better behavior for e.g. Eastern European, Arabic, or Thai users.

And we're only talking about what the default behavior should be when a file handle is opened without an encoding specified, right? I would expect that when using an explicitly requested encoding, invalid input would just raise an error. (Unless the user explicitly asked for a fallback behavior somehow.)
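
A minimal M-code sketch of that decision logic, under a couple of assumptions: the helper name is hypothetical, the locale charset is hard-coded for illustration, and native2unicode is assumed to raise an error for byte sequences that are invalid in the requested encoding:

function str = decode_with_fallback (bytes)
  ## Candidate encodings: UTF-8 first, then the user's locale charset
  ## (hard-coded here), then ISO-8859-1, which maps every byte value
  ## and therefore always succeeds as the last resort.
  encodings = {"utf-8", "cp1252", "iso-8859-1"};
  for k = 1:numel (encodings)
    try
      str = native2unicode (bytes, encodings{k});
      return;                   # first encoding that decodes cleanly wins
    catch
      ## decoding failed; fall through to the next candidate
    end_try_catch
  endfor
endfunction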

Andrew Janke <apjanke>
Thu 24 Oct 2019 05:56:13 PM UTC, comment #11: 

The attached diff isn't intended to be pushed. It is more like a proof of concept of what I was musing:

> __utf8_with_fallback__ (["abc" 120 52 181 121 "ä"])
warning: implicit conversion from numeric to char
ans = abcx4µyä
>> double (ans)
ans =

    97    98    99   120    52   194   181   121   195   164

Note that the (invalid) 181 is converted to (valid) [194 181].

(file #47737)

Markus Mützel <mmuetzel>
Project Member
Thu 24 Oct 2019 02:17:45 PM UTC, comment #10: 

After a little research, I don't think that we should sniff the encoding.
Instead we might want to select one of the fallback options for decoding invalid UTF-8 byte sequences [1].

I'd personally vote for the option:
"The Unicode code points U+0080–U+00FF with the same value as the byte, thus interpreting the bytes according to ISO-8859-1."

That also most closely matches what Matlab seems to be doing. And it would also solve the original report (OR).

[1]: https://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences
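
In M-code terms, that option amounts to re-decoding the offending bytes as ISO-8859-1, which the existing native2unicode can already illustrate:

bytes = uint8 ([181 228]);                   # invalid as UTF-8 lead bytes
str = native2unicode (bytes, "iso-8859-1")   # -> "µä", now valid UTF-8
double (str)                                 # -> 194 181 195 164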

Markus Mützel <mmuetzel>
Project Member
Thu 24 Oct 2019 12:37:39 PM UTC, comment #9: 

Sniffing does have one major advantage: it can reliably distinguish between UTF-8 files and legacy code-page files, which would be handy on Windows, where you might consider ISO-8859-1 to be the default encoding. But the world has largely gone UTF-8, so many (most?) of your input files are going to be UTF-8.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:28:24 PM UTC, comment #8: 

> So there are no "non-UTF-8 byte values" and byte values between 128-255 can directly be mapped to UTF-16 (in an effective no-op).


There are non-UTF-8 byte values; it's just that UTF-8 isn't involved, and the data happens to read as correct UCS-2 if you just widen the bytes to 16 bits as unsigned ints (yeah, in an effective no-op).

> I was assuming from your comment #2 that the default encoding used by Matlab on non-Windows systems was UTF-8. But if I follow you correctly in your comment #4, it is ISO-8859-1?


Yes, I believe it's possible. It's not in the doco, so I would need to actually test on Matlab to verify, which I'm unwilling to do for licensing reasons.

And if this is the case, we need to decide whether Octave should do the same thing for Matlab compatibility, or do something different, because IMHO that's a really bad default behavior. For example, if we did it that way (default ISO-8859-1 everywhere), it would probably break @mleitner's basic use case that he's concerned about here. I think it would be better for almost all users and scenarios if Octave would act like a normal Unix program and take the default encoding from the process's locale.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:18:14 PM UTC, comment #7: 

> First, think about the performance: even if I would like to just read in the first few bytes of a file, the fopen alone would have to read in quite a large chunk in order to detect the encoding, which I would never use. And how much would you read in, the whole file?


There would be little or no performance effect. In a modern OS, all file I/O is done in blocks or pages: when you read one byte, what actually happens is that the first 4K block is brought in from disk to cache and held there until you read more. And then I/O is usually further buffered by the next language layer. Sniffing would be done within the first block or buffer.

> And further, there is no reliable way to detect the encoding.


This is true. But there are some heuristics; ICU4C provides some which have an okay reputation. Sniffing would only be a convenience for casual users; the real answer to all of these scenarios is that you have to actually know and specify the file encoding to get correct, reliable behavior.

But you've got a good point: sniffing introduces variability and unpredictability into your code's behavior, and could well make Octave I/O both harder to use and harder for maintainers to debug user issues with. It could even introduce variability between different versions or builds of Octave, if they were built with different versions of the library that provides the sniffing algorithms.

> Actually, I have always been perfectly happy with the previous situation -- Octave had no idea of encodings, it read the bytes as they came in the file and fed them to the terminal emulator, which cared about how they are displayed.


That scenario is fine, as long as your input files are in the same encoding as your terminal. (And you don't want to do pattern matching on non-basic-English character classes, or do text processing on non-UTF-8 input data, like the OP for this bug report does.) Being encoding-aware allows you to work with international data where your input files are not in your current locale's encoding, or are in multiple encodings. Useful if you're working with, say, census data or energy data that comes from multiple countries or continents, or a spreadsheet that your Japanese colleague sent you.

If we make a good choice with the default encoding selection - e.g. taking it from your locale definition, assuming your locale is correctly configured - your scenario will continue to work with no code changes.

> Do you propose to make the "t" and "b" in the mode string of fopen have a meaning, while today (on linux) they are irrelevant (is this what Matlab does)?


If we stay compatible with Matlab, the "b" and "t" modes would only have an effect on Windows, where the "t" mode enables translation between Windows CRLF line endings and Unix-style LF endings. "t" mode has no other effect, and "b" mode would keep the current behavior. On Linux (and in portable code), they would remain irrelevant.

> Please, can you point me to a write-up of what is planned in this regard?


The discussion has been over at https://savannah.gnu.org/bugs/index.php?55452 and on the octave-maintainers mailing list.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:15:14 PM UTC, comment #6: 

I also don't really like the idea of sniffing the file.

The reasoning behind all of this:
Let's assume a user is reading strings from a file. This in itself doesn't require any knowledge of the used encoding. But if the user wants to use these strings to open a file or folder from the file system or wants to place a legend or annotation in a graph, encoding is important.
To try and remove all of that conversion hassle from the user, we are trying to have all character arrays in Octave encoded consistently and only convert at the interfaces.
Some time ago it was decided that this consistent encoding should be UTF-8 (different from Matlab).

@Andrew:
I think we agree. But I can't explain myself well enough. I was assuming from your comment #2 that the default encoding used by Matlab on non-Windows systems was UTF-8. But if I follow you correctly in your comment #4, it is ISO-8859-1? So there are no "non-UTF-8 byte values" and byte values between 128-255 can directly be mapped to UTF-16 (in an effective no-op).

Markus Mützel <mmuetzel>
Project Member
Thu 24 Oct 2019 06:58:43 AM UTC, comment #5: 

Please don't do sniffing, and even more definitely not by default.

First, think about the performance: even if I would like to read in just the first few bytes of a file, the fopen alone would have to read in quite a large chunk in order to detect the encoding, which I would never use. And how much would you read in, the whole file?

Further, there is no reliable way to detect the encoding. Yes, you could probably quite easily discern Latin-script languages written in UTF-8 or UTF-16, but for everything else you would also need knowledge of the language, which characters it uses and at what frequency, in order to distinguish between non-Latin-script languages written in two-byte encodings. And how would you distinguish, for instance, between German or a Nordic language written in ISO 8859-1 and an Eastern European language written in ISO 8859-2, again by the histogram of characters above 127?

Further still, it would break the principle of least surprise: what if my file consists of a list of given names of a sample of people taken in England? If fopen reads the first 1024 bytes to decide on the encoding, it will probably choose the default among any ISO-8859 or UTF-8 (as probably no byte will be above 127). However, later in the file an expatriate "Jürgen" might well appear, which is then misread. That alone would not yet be much of a problem, but the "Jürgen" could also appear in the first 1024 bytes, in which case it would be interpreted differently.

I am a late-comer to this issue of making Octave encoding-aware. Actually, I have always been perfectly happy with the previous situation -- Octave had no idea of encodings, it read the bytes as they came in the file and fed them to the terminal emulator, which cared about how they are displayed. The only issue in this sense could have been that the number of bytes is not necessarily equal to the number of displayed characters. But I do not see that this would be a problem unless you do manual positioning of characters of a fixed-width font, say in a plot -- the much more frequent problem of e.g. how large string buffers to allocate is a no-brainer.

Please, can you point me to a write-up of what is planned in this regard? Do you propose to make the "t" and "b" in the mode string of fopen have a meaning, while today (on linux) they are irrelevant (is this what Matlab does)? If the "b" then keeps the current behaviour, I could live with that, I would only have to use it consistently where today I distinguish between "t" and "b" depending on whether the file will contain text or binary data.

Michael Leitner <mleitner>
Wed 23 Oct 2019 09:49:43 PM UTC, comment #4: 

> There is at least one modern OS that still uses 8bit encodings by default: Windows 10 and its predecessors.


Good point. Windows is weird because it has both Unicode and legacy code page APIs. And both Octave and Matlab are Unicode-enabled. I guess this gets into the semantics of what the "default encoding" is. But you're right.

> But I now see that this bug is marked as affecting GNU/Linux.


My initial testing shows it affects Mac as well.

> Matlab's internal encoding is 16bit wide (maybe UCS-2).


Yep, it's UCS-2. (Though it also generally passes through UTF-16 surrogate pair code units unmolested, so UTF-16 data will generally work too, as long as you're not trying to do character counts.)

> Maybe it reads the non-UTF-8 bytes as is and they "happen" to map to the Unicode code points (for a western-encoded file).


Nope. Matlab's fopen() opens files with an "encoding" attribute (see https://savannah.gnu.org/bugs/index.php?55452), and when you do text or char-oriented I/O (depending on what read/write function you call, and what you pass for the "precision" argument for low-level I/O functions), it transcodes the input to UCS-2/UTF-16.

It just so happens that for ISO-8859-1 in particular, the non-UTF-8 byte values between 128-255 map to the Unicode code points with the same values, which in UTF-16 are represented by code units with the same numeric values. So the transcoding operation there is a no-op, except for bit width. But that won't work for Octave, because Octave's internal coding is UTF-8.
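
That identity can be checked from within Octave; a sketch, assuming the iconv-backed unicode2native/native2unicode accept these encoding names:

b = uint8 (128:255);                          # every high ISO-8859-1 byte value
s = native2unicode (b, "iso-8859-1");         # decode to Octave's UTF-8 chars
u = double (unicode2native (s, "utf-32le"));  # four bytes per character
cp = reshape (u, 4, []).' * 2.^((0:3).'*8);   # assemble code points (cf. comment #23)
isequal (cp.', double (b))                    # true: code point == byte value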

> I am not sure whether we should do something similar and transcode from a default 8bit encoding if we detect that a source contains invalid UTF-8.


I think Octave should do transcoding. I dunno about "detecting" that the source contains invalid UTF-8, if only for Matlab compatibility: I don't think they sniff the input contents to detect the encoding. But maybe that would be an advantage worth losing compatibility for? On Matlab, to be portable and properly internationalized, you pretty much have to explicitly force the encoding from your code when you do I/O. And that would still work on Octave in the face of sniffing for the default case.

Diagnostic: the 4-argout version of fopen returns the encoding. (Not supported in Octave yet.)

f = fopen('foo.txt');
[a, b, c, d] = fopen(f);   % a: filename, b: permission, c: machine format, d: encoding

Thought: Since Matlab is so Windows-focused, I wonder if it just opens all files as ISO-8859-x by default, regardless of OS?

Andrew Janke <apjanke>
Wed 23 Oct 2019 08:43:44 PM UTC, comment #3: 

There is at least one modern OS that still uses 8-bit encodings by default: Windows 10 and its predecessors.
In a western locale, the default encoding might well be ISO-8859-1 (or ANSI/CP1252).

But I now see that this bug is marked as affecting GNU/Linux. So it will most probably be necessary to specify the encoding when fopen'ing a file for reading strings.

Matlab's internal encoding is 16 bits wide (maybe UCS-2). Maybe it reads the non-UTF-8 bytes as is and they "happen" to map to the Unicode code points (for a western-encoded file).
I am not sure whether we should do something similar and transcode from a default 8-bit encoding if we detect that a source contains invalid UTF-8.

Markus Mützel <mmuetzel>
Project Member
Wed 23 Oct 2019 03:04:43 PM UTC, comment #2: 

Interesting. I wonder how Matlab decides what encoding it is in? A file in a single-byte code page like that is valid under several different encodings.

The doco for fopen (https://www.mathworks.com/help/matlab/ref/fopen.html) says: "If you do not specify an encoding scheme, fopen opens files for processing using the default encoding for your system." But that can't be all that's going on here, because on modern systems, ISO-8859-x isn't the default encoding.

Andrew Janke <apjanke>
Wed 23 Oct 2019 10:38:09 AM UTC, comment #1: 

This is probably related to bug #55452 and bug #55826.
If those two were resolved, the use case in comment #0 would probably work seamlessly again.

Markus Mützel <mmuetzel>
Project Member
Wed 23 Oct 2019 09:38:50 AM UTC, original submission:  

Consider the following text in an ISO-8859 code page, with a degree-Celsius symbol, attached as ISO-8859.csv. The Unix file command reports the file type as ISO-8859 text.

  1T1(°C)

and this script reading it:

f = fopen('ISO-8859.csv');   % no encoding specified
str = fgets(f);
str(end) = '';               % strip the trailing newline
fclose(f);

regexprep(str, '1', '2')

results in this error with octave-6.0.0:

error: regexprep: the input string is invalid UTF-8

Both octave-5.1.1 and Matlab handle this transparently. I guess that if dev does not, this will lead to quite a few error reports in the future.

The error is also triggered by commands such as strsplit and strtrim since they invoke regexp functions.
A more extensive test script uu.m is attached.

A.R. Burgers <arb>

 


Attached Files
file #47784:  bug57107_char.patch added by mmuetzel (9KiB - application/octet-stream)
file #47746:  bug57107_validate_u8.patch added by mmuetzel (4KiB - application/octet-stream)
file #47737:  bug57107_utf8_fallback.diff added by mmuetzel (2KiB - application/octet-stream)
file #47732:  ISO-8859.csv added by arb (12B - application/vnd.ms-excel)
file #47733:  uu.m added by arb (671B - text/plain)

 

Carbon-Copy List
  • mleitner (Posted a comment)
  • apjanke (Posted a comment)
  • mmuetzel (Posted a comment)
  • arb (Submitted the item)

10 latest changes:

    Date        Changed by   Updated Field       Previous Value => Replaced by
    2019-11-03  mmuetzel     Attached File       - => Added bug57107_char.patch, #47784
                             Summary             regexp functions fail on ISO-8859 input => regexp functions fail on ISO-8859-1 input
    2019-10-26  mmuetzel     Attached File       - => Added bug57107_validate_u8.patch, #47746
                             Status              None => In Progress
                             Operating System    GNU/Linux => Any
    2019-10-24  mmuetzel     Attached File       - => Added bug57107_utf8_fallback.diff, #47737
    2019-10-23  mmuetzel     Dependencies        - => Depends on bug #55826
                             Dependencies        - => Depends on bug #55452
    2019-10-23  arb          Attached File       - => Added ISO-8859.csv, #47732
                             Attached File       - => Added uu.m, #47733
