bug #57107: regexp functions fail on ISO-8859-1 input

Submitter:  A.R. Burgers <arb>
Submitted:  Wed 23 Oct 2019 09:38:50 AM UTC

Category:  Octave Function          Severity:  3 - Normal
Priority:  5 - Normal               Item Group:  Matlab Compatibility
Status:  In Progress                Assigned to:  None
Originator Name:                    Open/Closed:  Open
Release:  dev                       Operating System:  Any
Fixed Release:  None                Planned Release:  None


Mon 19 Jun 2023 06:43:35 PM UTC, comment #46: 


> That's the behavior of PCRE(2). Octave only implemented a manual check because PCRE's own check was slow (at least at some point).


I have no idea whether Perl's own regex engine has anything to do with PCRE2, but it has no issue guzzling anything I throw at it, regardless of whether it is valid UTF-8 or not.

Here is the output from Perl 5.34.0 on Xubuntu 22.04 under the en_US.UTF-8 locale:


$ perl -e 'print join(" ", split(/(..)/, (unpack "H*", "aäiöü\xFF\x00a中")))."\n"'
 61  c3  a4  69  c3  b6  c3  bc  ff  00  61  e4  b8  ad

$ perl -e 'print $-[0]."\n" while("aäiöü\xFF\x00a中" =~ /(中|ä)/g);'
1
11

$ perl -e 'print $-[0]."\n" while("aäiöü\xFF\x00a中" =~ /[ai]/g);'
0
3
10


Also, to match a multi-byte character, I rarely see anyone putting it inside `[]`; the `/(中|ä)/` form is more appropriate and avoids the issue you are seeing.
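
The equivalent match in Octave errors out instead (a sketch; the string is built byte-by-byte to mirror the Perl example above):

str = ['aäiöü' char(255) char(0) 'a中'];
regexp(str, '(中|ä)')    % Octave 6+ errors: the input string is invalid UTF-8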

Qianqian Fang <fangq>
Mon 19 Jun 2023 05:02:17 PM UTC, comment #45: 

Another possible workaround for your use case:

search_buffer = buffer;
search_buffer(buffer>127) = 0;
regexpi(search_buffer,'^\s*(http|https|ftp|file)://')


Use `search_buffer` as the input string for the regexp* functions, but use the resulting indices to index into `buffer`.
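
Because the masking replaces bytes in place, match positions in `search_buffer` line up one-to-one with positions in `buffer`. A minimal sketch of that round trip (the input string is a made-up example):

buffer = ['  http://example.org/' char(200) char(128)];  % mixed ASCII/binary input
search_buffer = buffer;
search_buffer(buffer > 127) = 0;   % zero out non-ASCII bytes; positions survive
idx = regexpi(search_buffer, '^\s*(http|https|ftp|file)://', 'end');
buffer(1:idx)                      % the index is equally valid for the original buffer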

Markus Mützel <mmuetzel>
Group administrator
Mon 19 Jun 2023 04:43:36 PM UTC, comment #44: 


> 1. Instead of throwing an error, can regexp give a warning yet still proceed with the old behavior?


That's the behavior of PCRE(2). Octave only implemented a manual check because PCRE's own check was slow (at least at some point).

> 2. If 1 is not possible, can regexp test whether the matching pattern contains no multi-byte characters and, if so, ignore the UTF-8 restriction on the input?


That would break other use cases. See comment #43.

> 3. Add an Octave-specific option to regexp/regexpi/regexprep to allow manual encoding handling (and ignore the UTF-8 input restriction)?


We've had bad experiences with syntax extensions in the past. I'll leave that decision to others.

Markus Mützel <mmuetzel>
Group administrator
Mon 19 Jun 2023 04:37:38 PM UTC, comment #43: 


> My question is: if the search pattern contains purely single-byte (say, ASCII-only) patterns, must the input string to regexp be valid UTF-8? I don't see how this could ever be a problem, no?


Consider this test, which contains only ASCII characters in the pattern:

regexp('aäiöü', '[^\w]')


In Octave 4.4.1:

>> test_regexp_utf8

ans =

   2   3   5   6   7   8


In Octave 6.4.0:

>> test_regexp_utf8

ans =

   2   5   7


The old result was wrong (it pointed into the second bytes of multi-byte characters). There should be only three matches, as in newer versions of Octave.
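
The byte-level view makes the old result intelligible (a quick check, assuming UTF-8 encoded input):

double('aäiöü')
% ans = 97  195  164  105  195  182  195  188

Byte-wise matching flagged both bytes of each two-byte umlaut (indices 2 3, 5 6, 7 8), whereas the fixed behavior reports one match per character at its starting byte (2, 5, 7).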

Markus Mützel <mmuetzel>
Group administrator
Mon 19 Jun 2023 04:27:05 PM UTC, comment #42: 

Just to use your example: if the pattern is `[ai]` instead of `[aö]`, I get consistent answers on both Octave 4.2.2 and 6.4:


octave:1> regexp('aäiöü', '[ai]')
ans =

   1   4


I understand that regexp is trying to handle multi-byte characters consistently, but throwing an error is an overly aggressive response when legacy input is provided. In the past, the caller was responsible for examining the input encoding and interpreting the output accordingly. Giving this manual encoding-handling capability back to programmers makes regexp a life saver for many complex tasks beyond mere string pattern matching. As someone coming from a Perl background, I use regex as a core part of application development.


If restoring the old behavior is not possible, can at least the following options be considered?

1. Instead of throwing an error, can regexp give a warning yet still proceed with the old behavior?

2. If 1 is not possible, can regexp test whether the matching pattern contains no multi-byte characters and, if so, ignore the UTF-8 restriction on the input?

3. Add an Octave-specific option to regexp/regexpi/regexprep to allow manual encoding handling (and ignore the UTF-8 input restriction)?

Qianqian Fang <fangq>
Mon 19 Jun 2023 03:56:53 PM UTC, comment #41: 

comment #40:

> See a slightly modified example from comment #36:


> regexp('aäiöü', '[aö]')
>

This is different: in this case, the search pattern itself contains multi-byte UTF-8 characters.

My question is: if the search pattern contains purely single-byte (say, ASCII-only) patterns, must the input string to regexp be valid UTF-8? I don't see how this could ever be a problem, no?

Qianqian Fang <fangq>
Mon 19 Jun 2023 06:46:38 AM UTC, comment #40: 


> If the pattern to be matched is ASCII-only, does regexp really care whether the input is a valid string? Can you give me a counterexample where it matters?


See a slightly modified example from comment #36:

regexp('aäiöü', '[aö]')


In Octave 4.4.1:

>> test_regexp_utf8
ans =

   1   2   5   6   7


In Octave 6.4.0 (I don't recall the exact version where this was fixed):

>> test_regexp_utf8

ans =

   1   5


The result before the related change was clearly wrong.

> Is matching MATLAB's function behavior no longer a priority for Octave development?


The function itself is Matlab compatible. The relevant difference is that char arrays in Matlab are UTF-16 encoded, while in Octave they are UTF-8 encoded byte arrays.
To get the exact same behavior in both programs, the internal type for char arrays would need to change in Octave. That would be a major change that might have unintended impacts on a lot of existing code. It might be better to discuss this possible change on Discourse to reach a wider audience of developers and users:
https://octave.discourse.group/

Please open a thread there if you think transitioning to UTF-16 encoded char-arrays is worth the possible risk of breaking existing code that was written with the current representation of char arrays in mind.

Markus Mützel <mmuetzel>
Group administrator
Sun 18 Jun 2023 09:23:25 PM UTC, comment #39: 

If the pattern to be matched is ASCII-only, does regexp really care whether the input is a valid string? Can you give me a counterexample where it matters?

Also, this workaround only works if the matching string is located at the start of the input. I have a toolbox (https://github.com/fangq/jsonlab) parsing/writing binary JSON (https://json.nlohmann.me/features/binary_formats/bjdata/), where ASCII keys and binary data are mixed. Regexp could be a powerful tool for efficiently processing/parsing such data. For example, this works in MATLAB to locate/count double-typed elements in the buffer:

dat=struct('a',pi, 'b',[], 'c',struct('d',[1,2],'e',12))
regexp(savebj('',dat,'ArrayToStruct',1),'U._ArrayType_SU.double')

or

regexp(savebj('',dat,'ArrayToStruct',1),'U[\x0B]_ArrayType_SU[\x06]double')

but now it fails in Octave.

Needless to say, from a code maintenance perspective, the proposed workaround not only makes the code difficult to read and to generalize, but also forces one to handle MATLAB and Octave separately.

Is matching MATLAB's function behavior no longer a priority for Octave development?

Qianqian Fang <fangq>
Sun 18 Jun 2023 07:30:05 PM UTC, comment #38: 

IIUC, your use case is exactly the type for which the proposed workaround should work. You could use something like the following:

first_non_ascii_idx = find(buffer > 127, 1, 'first');
if (isempty(first_non_ascii_idx))
  first_non_ascii_idx = numel(buffer) + 1;  % all-ASCII input: search everything
end
regexp(buffer(1:first_non_ascii_idx-1), '^\s*(http|https|ftp|file)://');


That should work in both Octave and Matlab. It might also speed up the regexp if `buffer` can be very large...

Could you please clarify or show an example for which the above approach doesn't work?


Markus Mützel <mmuetzel>
Group administrator
Thu 15 Jun 2023 09:10:24 PM UTC, comment #37: 

@mmuetzel, the example you gave is not the type of use case that I need regexp for, which is to match ASCII-based patterns in an arbitrary (including non-UTF-8) char array.

I want to emphasize that such a use case is still supported by MATLAB (as well as by Python, whose re module can match with or without the re.UNICODE flag), yet it has been eliminated in newer Octaves. This creates a behavioral discrepancy and potentially prevents MATLAB toolbox authors from porting their software to Octave. To me, it is a big loss of flexibility in regexp.

For example, if I have an arbitrary char array and want to tell whether it starts with a URL, I could use

regexpi(buffer,'^\s*(http|https|ftp|file)://')

to efficiently match many possible protocols, as older versions of regexp allowed. Another example: I read the first 256 bytes from a binary file and want to test its magic header (https://en.wikipedia.org/wiki/List_of_file_signatures). With this feature removed, I really don't see how to achieve goals like these in a compact, extensible, and versatile fashion.
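
A minimal sketch of that magic-header use case (the file name and the PNG signature are illustrative; this works in MATLAB and older Octave, but triggers the UTF-8 error in Octave 6+ whenever the header bytes are not valid UTF-8):

f = fopen('image.dat', 'rb');             % hypothetical binary file
header = fread(f, 256, 'uint8=>char').';  % first 256 bytes as a char row vector
fclose(f);
is_png = ~isempty(regexp(header, '^\x89PNG\r\n\x1a\n', 'once'))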

Qianqian Fang <fangq>
Mon 10 Apr 2023 09:42:06 AM UTC, comment #36: 


> On the other side, why should regexp only work with a valid string? Why can't it match an arbitrary byte array?


We are getting onto a side tangent here, but let me try to give a bit of background:
In old versions of Octave, `regexp` treated each byte of its input separately. But recall that Octave uses UTF-8 as the encoding for its `char` arrays. As an example, in UTF-8 the character `ä` is represented by the two bytes `[195, 164]`. That meant that the following expression matched only the first byte `195` (instead of the entire character `ä`):

regexp('ä', '([äa])', 'tokens')

I hope you can see how that is problematic for string processing (for which `regexp` is meant).
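
A quick way to see that byte-level representation (assuming a UTF-8 locale):

double('ä')    % ans = 195  164  -- one character, two bytes
numel('ä')     % ans = 2         -- char arrays count bytes, not characters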

In later versions of Octave, `regexp` treats UTF-8 encoded strings correctly. However, that also means that the input must be valid UTF-8. That is a requirement of PCRE2.

Matlab, on the other hand, uses UTF-16 as the encoding for its `char` arrays. `255` is a valid code point in that encoding (namely 'ÿ').

In UTF-16, it's still possible to produce invalid encoding sequences, but only involving characters outside the BMP, which are encoded as surrogate pairs. E.g.:

>> a = '🎉';
>> b = typecast(unicode2native(a, 'UTF-16LE'), 'uint16')

b =

  55356  57225


Both code units of a surrogate pair have their highest bit set (they lie in the range 0xD800-0xDFFF). You could form an invalid surrogate pair, e.g., by changing the second value to something below 2^15. But you'd never encounter an invalid surrogate if the char array consists only of byte values (below 2^8). So there are no byte sequences that are invalid UTF-16, because no byte value reaches the surrogate range.

I'd still say that it might be best to select only the part of your input up to the first non-ASCII character before passing it to `regexp` (both in Octave and in Matlab). E.g., something along these lines:

a = '.....';  % your byte sequence that you are storing in a `char` array
first_non_ascii_idx = find(a > 127, 1, 'first');
if (isempty(first_non_ascii_idx))
  first_non_ascii_idx = numel(a) + 1;
end
regexp(a(1:first_non_ascii_idx-1), 'some_regex');


Markus Mützel <mmuetzel>
Group administrator
Sat 08 Apr 2023 08:58:45 PM UTC, comment #35: 

By the way, most other string processing functions in Octave 6, such as strcat, strjoin, strfind, and strmatch, can handle char(['U' 255]) without any error; only regexp and regexprep fail.
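
For instance, a quick check in Octave 6 (the error wording follows the original report):

s = char(['U' 255]);
strfind(s, 'U')   % ans = 1 -- works
regexp(s, 'U')    % errors: "the input string is invalid UTF-8"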

Qianqian Fang <fangq>
Sat 08 Apr 2023 08:53:36 PM UTC, comment #34: 

I understand there are many workarounds that can replace a regexp call (such as ismember, find, or setdiff), but none of them is as versatile/extensible as regexp.

It appears that, regardless of the `DefaultCharacterSet` setting, MATLAB's string functions, including regexp, are able to handle arbitrary byte-array input, but Octave 6's are not.

For example, this runs in MATLAB without a problem, but fails in Octave 6:

  regexp(char(randi(255,1,100)), '^\s*[\[\{SCHiUIulmLMhdDTFZN]')

On the other side, why should regexp only work with a valid string? Why can't it match an arbitrary byte array?

> Matlab uses UTF-16 for its char arrays


Do you know of any byte sequences that are not valid UTF-16? I am very interested in verifying that MATLAB's regexp keeps working with such input.

Qianqian Fang <fangq>
Sat 08 Apr 2023 08:27:03 PM UTC, comment #33: 

I appreciate the comments. I tried the following, but neither of them solves my issue:

1. Calling, in the Octave workspace, in .octaverc, or inside my encodevarname.m function,

__mfile_encoding__ ('utf-8')

None of these allows the second command to run.

Without adding this line, a default octave-cli session reports `__mfile_encoding__` as `system`, but "octave --force-gui" reports `utf-8`. In either case, my system locale (Ubuntu 22.04) is en_US.UTF-8, so I don't believe calling `__mfile_encoding__` is relevant (or does anything) to the error reported here.

Searching `__mfile_encoding__` gives very little information on what it does, and how it affects string processing.


2. Removing 'builtin' so that my function can detect unicode2native; but then the second command fails with

error: unicode2native: converting from UTF-8 to codepage 'UTF-8': Invalid or incomplete multibyte or wide character

Qianqian Fang <fangq>
Sat 08 Apr 2023 04:37:06 PM UTC, comment #32: 


> MATLAB has feature('DefaultCharacterSet','utf8') to set the default string encoding, but I can't find an equivalent in Octave.


I don't know exactly what that does. But it might be similar to what you'd do with `__mfile_encoding__ ('utf-8')`.

Markus Mützel <mmuetzel>
Group administrator
Sat 08 Apr 2023 04:34:21 PM UTC, comment #31: 


> In MATLAB and older versions of Octave, I could use regexp to test whether an object could be a valid UBJSON buffer with something like
>
> regexp(char(['U' 255]), '^\s*[\[\{SCHiUIulmLMhdDTFZN]')
>
> but now this raises an error in Octave 6+ but not elsewhere.


Matlab uses UTF-16 for its char arrays, so `char(255)` is a valid character there. Octave uses UTF-8, and `char(255)` is not valid as part of any UTF-8 sequence.
IIUC, you are inspecting a byte sequence that happens to start with something that can be interpreted as ASCII characters but then changes later on to a "random" byte sequence.
Maybe you could first find the first byte that can't be interpreted as ASCII, with something like `find(~isascii(char(['U' 255])), 1, 'first')`, and then feed only the part up to that point to `regexp`.
That might also help speed up the regexp execution a little bit, since less data needs to be sent back and forth...


In `encodevarname`, you are checking for `exist('unicode2native','builtin')`. But that function is implemented as a .m file in Octave. E.g., for me with Octave 8.1.0 on Windows:

>> which unicode2native
'unicode2native' is a function from the file C:\Program Files\GNU Octave\Octave-8.1.0\mingw64\share\octave\8.1.0\m\s
trings\unicode2native.m
>> exist('unicode2native','builtin')
ans = 0
>> exist('unicode2native','file')
ans = 2


Would it help to adapt that check?
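
A minimal sketch of such an adapted check (hypothetical; in both programs `exist` returns 5 for a built-in function and 2 for a function file on the load path):

has_unicode2native = exist('unicode2native', 'builtin') || exist('unicode2native', 'file');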

Markus Mützel <mmuetzel>
Group administrator
Sat 08 Apr 2023 04:16:51 PM UTC, comment #30: 

I can give two examples of where my code broke because of this bug.

Part of my toolbox handles parsing/writing a binary JSON format based on UBJSON (https://ubjson.org) and its extension BJData (https://neurojson.org/bjdata/). Both formats use ASCII-based data type markers (one of '[{SCHiUIulmLMhdDTFZN', see https://raw.githubusercontent.com/NeuroJSON/bjdata/master/images/BJData_Diagram.png) followed by binary payloads.

In MATLAB and older versions of Octave, I could use regexp to test whether an object could be a valid UBJSON buffer with something like

regexp(char(['U' 255]), '^\s*[\[\{SCHiUIulmLMhdDTFZN]')

but now this raises an error in Octave 6+ but not elsewhere.

Similarly, my function encodevarname.m (https://github.com/fangq/jsonlab/blob/master/encodevarname.m) converts a string to a valid variable name (reversibly), and the call below fails in Octave 6 but not elsewhere:

encodevarname('变量')

MATLAB has feature('DefaultCharacterSet','utf8') to set the default string encoding, but I can't find an equivalent in Octave.

Qianqian Fang <fangq>
Sat 08 Apr 2023 03:59:26 PM UTC, comment #29: 


> What's the downside of setting ISO-8859-1 as the fallback when the encoding cannot be detected? Throwing an error is quite an invasive solution to this issue.


You'd need to ask the developers of the PCRE2 library of your distribution. Octave uses that as the backbone for `regexp`/`regexprep`.

Markus Mützel <mmuetzel>
Group administrator
Sat 08 Apr 2023 03:52:37 PM UTC, comment #28: 

When you end up with non-UTF-8 encoded characters in an Octave char array, something else has probably gone wrong.
Depending on where those char arrays came from, you might need to change `__mfile_encoding__` to match the encoding of your .m files, save your .m files in UTF-8 encoding, convert the string from its native encoding to UTF-8 with `native2unicode` (see the sketch below), or - if you are dealing with byte arrays - use `uint8` arrays instead.
In the latter case, you probably shouldn't use `regexp`/`regexprep` on byte arrays anyway.
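
For example, a minimal sketch of the `native2unicode` route (the byte values are the ISO-8859-1 encoding of the "1T1(°C)" string from the original report):

bytes = uint8([49 84 49 40 176 67 41]);      % "1T1(°C)" in ISO-8859-1
str = native2unicode(bytes, 'iso-8859-1');   % re-encoded as valid UTF-8
regexprep(str, '1', '2')                     % no longer errors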

It's hard to tell what might help without knowing more about the use case. Could you please show a self-contained minimal reproducer of the error you are seeing?

Markus Mützel <mmuetzel>
Group administrator
Sat 08 Apr 2023 03:41:01 PM UTC, comment #27: 

I am wondering what the status of this bug is.

Being unable to handle non-UTF-8 input in regexp/regexprep can potentially break a lot of existing code. This bug deserves a higher priority. MATLAB's regexp/regexprep handle such input without any problem.

What's the downside of setting ISO-8859-1 as the fallback when the encoding cannot be detected? Throwing an error is quite an invasive solution to this issue.

I encountered this bug when running tests on Octave 6.4 via GitHub Actions (see the logs below). MATLAB and older versions of Octave do not have this issue:

https://github.com/fangq/jsonlab/actions/runs/4645644873/jobs/8221594452
https://github.com/fangq/jsonlab/actions/runs/4645854687/jobs/8221925994

Qianqian Fang <fangq>
Mon 04 Nov 2019 10:39:20 PM UTC, comment #26: 

@ comment #20:
textread.m invokes strread.m.
strread.m does a lot of byte counting behind the scenes, so I'm not surprised if it breaks. But it could also be that it's actually regexp() or regexprep() that broke; I use a pimped version of strread.m that can handle cuddling literals (textscan can't), and that version broke several months ago after some UTF-8(?) fixes in core - and that problem actually lies with some otherwise valid-looking regexp calls. I still want to isolate those regexp calls and report them in the bug tracker.

I can look at what the patch in comment #20 does to strread.m (in the past I did a lot of work on strread.m), but it is considered legacy.

Philip Nienhuis <philipnienhuis>
Group Member
Mon 04 Nov 2019 10:28:21 AM UTC, comment #25: 

And nope, I can't think of any actual (non-toy) use case for getting the code point values back from char strings. I'm just bothered by the asymmetry in operations on the primitive types. But you can't do round-trips between the various numeric primitives either, due to round-off or overflow issues. So it's hard to say that's a big deal.

Andrew Janke <apjanke>
Mon 04 Nov 2019 07:54:42 AM UTC, comment #24: 

No, I just mean if a user wanted to do custom M-code based comparisons on a character's code point value. isstrprop() and friends would probably be better ways to do it.

Andrew Janke <apjanke>
Mon 04 Nov 2019 07:27:34 AM UTC, comment #23: 

@apjanke: I'm not sure what you mean by your second point in comment #21. Are you referring to the isstrprop and related functions? They should be UTF-8 ready by now.
If you want to get a numeric Unicode code point outside the range of an 8-bit char, you could convert to, let's say, UTF-32:

double (unicode2native("∫", "utf-32le")) * 2.^((0:3).'*8)


Markus Mützel <mmuetzel>
Group administrator
Mon 04 Nov 2019 01:50:06 AM UTC, comment #22: 

Oh, wait. Looks like you already addressed that round-trip oddness in the email you sent out to the mailing list. Sorry.

Andrew Janke <apjanke>
Mon 04 Nov 2019 01:48:09 AM UTC, comment #21: 

That brings up an interesting edge case: under this behavior, `double(char)` and `char(double)` are no longer inverses of each other, so you can't round-trip a piece of data through those transformations. I'm not sure what practical implications that would have. Aside from that, it's now hard to get the Unicode code point value from an input character, if you wanted to do numeric comparisons/code block membership tests/whatever on it.

Andrew Janke <apjanke>
Sun 03 Nov 2019 01:31:28 PM UTC, comment #20: 

The attached patch wires in the validation of UTF-8 at a pretty low level. It applies on top of "bug57107_validate_u8.patch".
It breaks "strread" and "textread" (and possibly also other things). But I'm still waiting for feedback on the maintainers mailing list to see if it is worth looking into why.
Nevertheless, it demonstrates what could happen:

octave:1> char (181)
ans = µ
octave:2> double (ans)
ans =

   194   181

octave:3> char ([181 228])
ans = µä
octave:4> double (ans)
ans =

   194   181   195   164



(file #47784)

Markus Mützel <mmuetzel>
Group administrator
Sat 02 Nov 2019 01:58:35 PM UTC, comment #19: 

Thanks! That email looks to me like a good description of the issues we're facing.

Andrew Janke <apjanke>
Sat 02 Nov 2019 12:30:15 PM UTC, comment #18: 

@Andrew: I asked about the general issue with invalid UTF-8 on the maintainers mailing list here:
https://octave.1599824.n4.nabble.com/How-should-we-treat-invalid-UTF-8-td4694444.html

Markus Mützel <mmuetzel>
Group administrator
Sat 26 Oct 2019 11:15:09 AM UTC, comment #17: 

The attached patch is a lot cleaner at validating UTF-8 encoded strings.
It still isn't wired in to do anything meaningful, though, but it should be safe enough for testing.

The question remains: where do we want this conversion from invalid UTF-8 to valid UTF-8 to happen?
It might be surprising to a user if the string they read from a file weren't byte-identical to the content of the file.
At the same time, there are probably a lot of places inside Octave (not only regexp*) where we would need to check whether char arrays contain valid UTF-8 before using them safely.
On the other hand, there is nothing that would prevent a user from creating invalid UTF-8 manually (e.g., by assigning "a=char(181)"). So validating only the strings that are read from a file wouldn't suffice anyway.

Thus, the best option I see at the moment is to identify the critical places (e.g., just before passing strings to PCRE) and validate the strings there using the new function "validate_u8".

(file #47746)

Markus Mützel <mmuetzel>
Group administrator
Fri 25 Oct 2019 04:02:23 PM UTC, comment #16: 

I just checked in a patch for bug #55452. With it, I can do the following to read the file from comment #0:

f = fopen('ISO-8859.csv', 'r', 'n', 'iso-8859-1');
str = fgets(f);
str(end) = '';
fclose(f);

regexprep(str, '1', '2')


The output is:

ans =   2T2(°C)
>> double(str)
ans =

    32    32    49    84    49    40   194   176    67    41    32    32


Should we make this bug about choosing a fallback option for invalid UTF-8 so that specifying the input encoding isn't necessary for files encoded in ISO-8859-1?

Markus Mützel <mmuetzel>
Group administrator
Thu 24 Oct 2019 06:35:55 PM UTC, comment #15: 

We're using libiconv.
I didn't find an option that does this fallback conversion automatically, though.

And definitely don't use the diff I uploaded previously. It leaks and doesn't advance the pointers correctly. (But maybe it was good enough for demonstration purposes.)

Please, go ahead and ask the mailing list for feedback.

Markus Mützel <mmuetzel>
Group administrator
Thu 24 Oct 2019 06:30:29 PM UTC, comment #14: 

There are so many possible scenarios and use cases for text encoding. Maybe this is something we should send a survey email out to the octave-users mailing list about, so we can get an idea for what the user community would prefer or need?

Andrew Janke <apjanke>
Thu 24 Oct 2019 06:27:23 PM UTC, comment #13: 

Also, I think we'd want a library to handle this for us, rather than trying to write our own Unicode encoding/decoding routines, right?

Andrew Janke <apjanke>
Thu 24 Oct 2019 06:26:35 PM UTC, comment #12: 

That sounds like a pretty reasonable approach. I think it would provide "do what I want" behavior for most users without getting too fancy, and would provide decent Matlab compatibility.

Maybe we'd want to do a two-step fallback:

1. Default to UTF-8.
2. If encountering non-UTF-8 byte sequences,
  a) If the user's locale encoding is a non-Unicode encoding, fall back to it;
  b) Else, fall back to ISO-8859-1 as proposed.

I don't know whether that's actually viable for all multibyte encodings, though (e.g., Shift-JIS). And I'm pretty sure it's not what Matlab does. But it might be better behavior for, e.g., Eastern European, Arabic, or Thai users.

And we're only talking about what the default behavior should be when a file handle is opened without an encoding specified, right? I would expect that when using an explicitly requested encoding, invalid input would just raise an error. (Unless the user explicitly asked for a fallback behavior somehow.)

Andrew Janke <apjanke>
Thu 24 Oct 2019 05:56:13 PM UTC, comment #11: 

The attached diff isn't intended to be pushed. It is more of a proof of concept of what I was musing about:

> __utf8_with_fallback__ (["abc" 120 52 181 121 "ä"])
warning: implicit conversion from numeric to char
ans = abcx4µyä
>> double (ans)
ans =

    97    98    99   120    52   194   181   121   195   164


Note that the (invalid) 181 is converted to (valid) [194 181].

(file #47737)

Markus Mützel <mmuetzel>
Group administrator
Thu 24 Oct 2019 02:17:45 PM UTC, comment #10: 

After a little research, I don't think that we should sniff the encoding.
Instead we might want to select one of the fallback options for decoding invalid UTF-8 byte sequences [1].

I'd personally vote for the option:
"The Unicode code points U+0080–U+00FF with the same value as the byte, thus interpreting the bytes according to ISO-8859-1."

That also most closely matches what Matlab seems to be doing. And it would also solve the original report.
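
As an illustration of that fallback option (a sketch using existing Octave functions; the automatic fallback itself is not implemented): the invalid byte 181 would be read as the ISO-8859-1 character 'µ' (U+00B5) and re-encoded as the two UTF-8 bytes [194 181]:

native2unicode(uint8(181), 'iso-8859-1')   % ans = µ
double(ans)                                % ans = 194  181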

[1]: https://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences

Markus Mützel <mmuetzel>
Group administrator
Thu 24 Oct 2019 12:37:39 PM UTC, comment #9: 

Sniffing does have one major advantage: it can reliably distinguish between UTF-8 files and legacy code page files, which would be handy on Windows, where you might consider ISO-8859-1 to be the default encoding. But the world has largely gone UTF-8, so many (most?) of your input files are going to be UTF-8.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:28:24 PM UTC, comment #8: 


> So there are no "non-UTF-8 byte values", and byte values between 128 and 255 can be mapped directly to UTF-16 (in an effective no-op).


There are non-UTF-8 byte values; it's just that UTF-8 isn't involved, and the data happens to read as correct UCS-2 if you widen the bytes to 16 bits as unsigned ints (yeah, in an effective no-op).

> I was assuming from your comment #2 that the default encoding used by Matlab on non-Windows systems was UTF-8. But if I follow you correctly in your comment #4, it is ISO-8859-1?


Yes, I believe so. It's not in the doco, so one would need to actually test on Matlab to verify, which I'm unwilling to do for licensing reasons.

And if this is the case, we need to decide whether Octave should do the same thing for Matlab compatibility, or do something different, because IMHO that's a really bad default behavior. For example, if we did it that way (default ISO-8859-1 everywhere), it would probably break @mleitner's basic use case that he's concerned about here. I think it would be better for almost all users and scenarios if Octave acted like a normal Unix program and took the default encoding from the process's locale.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:18:14 PM UTC, comment #7: 


> First, think about the performance: even if I would like to just read in the first few bytes of a file, the fopen alone would have to read in quite a large chunk in order to detect the encoding, which I would never use. And how much would you read in, the whole file?


There would be little or no performance effect. In a modern OS, all file I/O is done in blocks or pages: when you read one byte, what actually happens is that the first 4K block is brought in from disk to cache and held there until you read more. And then I/O is usually further buffered by the next language layer. Sniffing would be done within the first block or buffer.

> And further, there is no reliable way to detect the encoding.


This is true. But there are some heuristics; ICU4C provides some which have an okay reputation. Sniffing would only be a convenience for casual users; the real answer to all of these scenarios is that you have to actually know and specify the file encoding to get correct, reliable behavior.

But you've got a good point: sniffing introduces variability and unpredictability into your code's behavior, and could well make Octave I/O both harder to use and harder for maintainers to debug user issues with. It could even introduce variability between different versions or builds of Octave, if they were built with different versions of the library that provides the sniffing algorithms.

> Actually, I have always been perfectly happy with the previous situation -- octave had no idea of encodings, it read the bytes as they came in the file and fed them to the terminal emulator, which cared about how they are displayed.


That scenario is fine, as long as your input files are in the same encoding as your terminal. (And you don't want to do pattern matching on non-basic-English character classes, or do text processing on non-UTF-8 input data, like the OP of this bug report does.) Being encoding-aware allows you to work with international data where your input files are not in your current locale's encoding, or are in multiple encodings. That is useful if you're working with, say, census data or energy data that comes from multiple countries or continents, or a spreadsheet that your Japanese colleague sent you.

If we make a good choice with the default encoding selection - e.g. taking it from your locale definition, assuming your locale is correctly configured - your scenario will continue to work with no code changes.

> Do you propose to make the "t" and "b" in the mode string of fopen have a meaning, while today (on Linux) they are irrelevant (is this what Matlab does)?


If we stay compatible with Matlab, the "b" and "t" modes would only have an effect on Windows, where the "t" mode enables translation between Windows CRLF line endings and Unix-style LF endings. "t" mode has no other effect, and "b" mode would keep the current behavior. On Linux (and in portable code), they would remain irrelevant.

> Please, can you point me to a write-up of what is planned in this regard?


The discussion has been over at https://savannah.gnu.org/bugs/index.php?55452 and on the octave-maintainers mailing list.

Andrew Janke <apjanke>
Thu 24 Oct 2019 12:15:14 PM UTC, comment #6: 

I also don't really like the idea of sniffing the file.

The reasoning behind all of this:
Let's assume a user is reading strings from a file. This in itself doesn't require any knowledge of the used encoding. But if the user wants to use these strings to open a file or folder from the file system or wants to place a legend or annotation in a graph, encoding is important.
To try and remove all of that conversion hassle from the user, we are trying to have all character arrays in Octave encoded consistently and only convert at the interfaces.
Some time ago it was decided that this consistent encoding should be UTF-8 (different from Matlab).

@Andrew:
I think we agree, but I can't explain myself well enough. I was assuming from your comment #2 that the default encoding used by Matlab on non-Windows systems was UTF-8. But if I follow you correctly in your comment #4, it is ISO-8859-1? So there are no "non-UTF-8 byte values", and byte values between 128 and 255 can be mapped directly to UTF-16 (in an effective no-op).

Markus Mützel <mmuetzel>
Group administrator
Thu 24 Oct 2019 06:58:43 AM UTC, comment #5: 

Please don't do sniffing, and even more definitely not by default. First, think about the performance: even if I would like to just read in the first few bytes of a file, the fopen alone would have to read in quite a large chunk in order to detect the encoding, which I would never use. And how much would you read in, the whole file? And further, there is no reliable way to detect the encoding. Yes, you could probably quite easily discern Latin-script languages written in UTF-8 or UTF-16, but for everything else you would also need knowledge of the language, which characters it uses and with what frequency, in order to distinguish between non-Latin-script languages written in two-byte encodings. And how would you distinguish, for instance, between German or a Nordic language written in ISO 8859-1 and an Eastern European language written in ISO 8859-2, again by the histogram of characters above 127? Further, it would break the principle of least surprise: what if my file consists of a list of given names of a sample of people taken in England? If fopen reads the first 1024 bytes to decide on the encoding, it will probably choose the default among any ISO-8859 or UTF-8 (as probably no byte will be above 127). However, later in the file an expatriate "Jürgen" might well appear, who is then misread. That alone would not be much of a problem, but the "Jürgen" could also appear in the first 1024 bytes, in which case it would be interpreted differently.

I am a late-comer to this issue of making Octave encoding-aware. Actually, I have always been perfectly happy with the previous situation -- octave had no idea of encodings, it read the bytes as they came in the file and fed them to the terminal emulator, which cared about how they are displayed. The only issue in this sense could have been that the number of bytes is not necessarily equal to the number of displayed characters. But I do not see that this would be a problem unless you do manual positioning of characters of a fixed-width font, say in a plot -- the much more frequent problem of, e.g., how large a string buffer to allocate is a no-brainer.

Please, can you point me to a write-up of what is planned in this regard? Do you propose to make the "t" and "b" in the mode string of fopen have a meaning, while today (on Linux) they are irrelevant (is this what Matlab does)? If "b" then keeps the current behaviour, I could live with that; I would only have to use it consistently where today I distinguish between "t" and "b" depending on whether the file will contain text or binary data.

Michael Leitner <mleitner>
Wed 23 Oct 2019 09:49:43 PM UTC, comment #4: 


> There is at least one modern OS that still uses 8-bit encodings by default: Windows 10 and its predecessors.


Good point. Windows is weird because it has both Unicode and legacy code page APIs. And both Octave and Matlab are Unicode-enabled. I guess this gets into the semantics of what the "default encoding" is. But you're right.

> But I now see that this bug is marked as affecting GNU/Linux.


My initial testing shows it affects Mac as well.

> Matlab's internal encoding is 16 bits wide (maybe UCS-2).


Yep, it's UCS-2. (Though it also generally passes through UTF-16 surrogate pair code units unmolested, so UTF-16 data will generally work too, as long as you're not trying to do character counts.)

> Maybe it reads the non-UTF-8 bytes as-is and they "happen" to map to the Unicode code points (for a Western-encoded file).


Nope. Matlab's fopen() opens files with an "encoding" attribute (see https://savannah.gnu.org/bugs/index.php?55452), and when you do text or char-oriented I/O (depending on what read/write function you call, and what you pass for the "precision" argument for low-level I/O functions), it transcodes the input to UCS-2/UTF-16.

It just so happens that for ISO-8859-1 in particular, the non-UTF-8 byte values between 128 and 255 map to the Unicode code points with the same values, which in UTF-16 are represented by code units with the same numeric values. So the transcoding operation there is a no-op, except for the bit width. But that won't work for Octave, because Octave's internal encoding is UTF-8.

> I am not sure whether we should do something similar and transcode from a default 8-bit encoding if we detect that a source contains invalid UTF-8.


I think Octave should do transcoding. I'm unsure about "detecting" that the source contains invalid UTF-8, if only for Matlab compatibility; I don't think they sniff the input contents to detect the encoding. But maybe that would be an advantage worth losing compatibility for? On Matlab, to be portable and properly internationalized, you pretty much have to explicitly force the encoding from your code when you do I/O. And that would still work on Octave in the face of sniffing for the default case.

Diagnostic: the 4-argout version of fopen returns the encoding. (Not supported in Octave. (Yet.))


f = fopen('foo.txt');
[a,b,c,d] = fopen(f);   % d is the file's encoding


Thought: Since Matlab is so Windows-focused, I wonder if it just opens all files as ISO-8859-x by default, regardless of OS?

Andrew Janke <apjanke>
Wed 23 Oct 2019 08:43:44 PM UTC, comment #3: 

There is at least one modern OS that still uses 8-bit encodings by default: Windows 10 and its predecessors.
On a Western locale, the default encoding might well be ISO-8859-1 (or ANSI/CP1252).

But I now see that this bug is marked as affecting GNU/Linux. So it will most probably be necessary to specify the encoding when fopen'ing a file for reading strings.

Matlab's internal encoding is 16 bits wide (maybe UCS-2). Maybe it reads the non-UTF-8 bytes as-is and they "happen" to map to the Unicode code points (for a Western-encoded file).
I am not sure whether we should do something similar and transcode from a default 8-bit encoding if we detect that a source contains invalid UTF-8.

Markus Mützel <mmuetzel>
Group administrator
Wed 23 Oct 2019 03:04:43 PM UTC, comment #2: 

Interesting. I wonder how Matlab decides what encoding the file is in? A file in a single-byte code page like that is valid under several different encodings.

The doco for fopen (https://www.mathworks.com/help/matlab/ref/fopen.html) says: "If you do not specify an encoding scheme, fopen opens files for processing using the default encoding for your system." But that can't be all that's going on here, because on modern systems, ISO-8859-x isn't the default encoding.

Andrew Janke <apjanke>
Wed 23 Oct 2019 10:38:09 AM UTC, comment #1: 

This is probably related to bug #55452 and bug #55826.
If those two were resolved, the use case in comment #0 would probably work seamlessly again.

Markus Mützel <mmuetzel>
Group administrator
Wed 23 Oct 2019 09:38:50 AM UTC, original submission:  

Consider the following text in the ISO-8859 code page, with a degree-Celsius symbol, attached as ISO-8859.csv. The Unix file command reports the file type as ISO-8859 text:


  1T1(°C)


and this script reading it:


f = fopen('ISO-8859.csv');
str = fgets(f);
str(end) = '';
fclose(f);

regexprep(str, '1', '2')


results in this error with Octave 6.0.0:


error: regexprep: the input string is invalid UTF-8


Both Octave 5.1.1 and MATLAB handle this transparently. I guess if dev does not, this will lead to quite a few error reports in the future.

The error is also triggered by commands such as strsplit and strtrim since they invoke regexp functions.
A more extensive test script uu.m is attached.



A.R. Burgers <arb>

 

Attached Files
file #47784:  bug57107_char.patch added by mmuetzel (9KiB - application/octet-stream)
file #47746:  bug57107_validate_u8.patch added by mmuetzel (4KiB - application/octet-stream)
file #47737:  bug57107_utf8_fallback.diff added by mmuetzel (2KiB - application/octet-stream)
file #47732:  ISO-8859.csv added by arb (12B - application/vnd.ms-excel)
file #47733:  uu.m added by arb (671B - text/plain)

 


10 latest changes:

Date        Changed by  Updated Field     Previous Value => Replaced by
2019-11-03  mmuetzel    Attached File     - => Added bug57107_char.patch, #47784
                        Summary           regexp functions fail on ISO-8859 input => regexp functions fail on ISO-8859-1 input
2019-10-26  mmuetzel    Attached File     - => Added bug57107_validate_u8.patch, #47746
                        Status            None => In Progress
                        Operating System  GNU/Linux => Any
2019-10-24  mmuetzel    Attached File     - => Added bug57107_utf8_fallback.diff, #47737
2019-10-23  mmuetzel    Dependencies      - => Depends on bugs #55826
                        Dependencies      - => Depends on bugs #55452
2019-10-23  arb         Attached File     - => Added ISO-8859.csv, #47732
                        Attached File     - => Added uu.m, #47733
