Tue 08 Apr 2014 03:23:57 PM UTC, comment #7:
The description of the problem here is wrong. It's not really about the signedness of char; it was a bug in fwrite. I've changed the description accordingly.
|
Thu 03 Apr 2014 11:58:44 PM UTC, comment #6:
Actually, I think the problem was a genuine bug in fwrite, not that fwrite couldn't or shouldn't work perfectly fine in this instance. I checked in the following change for it on stable:
http://hg.savannah.gnu.org/hgweb/octave/rev/aa861a98d84d
|
Thu 03 Apr 2014 07:24:25 PM UTC, comment #5:
Thanks for the comments, jwe.
You're right, fputs would be better for this case. The fprintf (fid, "%s", str) pattern appears in at least a handful of functions under scripts, so perhaps these should all be changed together as a cleanup changeset.
Then there are an additional handful of uses of fprintf (fid, "%s\n", str) that might also be done more efficiently as fputs (fid, [str "\n"]).
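For illustration, the kind of substitution such a cleanup changeset would make (a sketch; only the two calls quoted above are taken from this thread):

    % before: printf-style formatting just to write a string plus newline
    fprintf (fid, "%s\n", str);
    % after: fputs writes the same text without the format-string machinery
    fputs (fid, [str "\n"]);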
|
Thu 03 Apr 2014 06:39:45 PM UTC, comment #4:
Sorry to come to this discussion late.
Instead of fprintf (fid, "%s", str), I'd probably use fputs (fid, str) when there is no other data type conversion required.
Either way, fputs and fprintf are not exactly the same as fwrite, because fwrite will pass ASCII 0 but fputs and fprintf won't. I don't think that matters in this instance, but I'm not sure.
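A minimal sketch of that difference (the file name is made up; the fputs behavior is as described above):

    str = ["ab" char(0) "cd"];   % five chars, with an embedded ASCII 0
    fid = fopen ("/tmp/nul_test.bin", "w");
    fwrite (fid, str);           % all five bytes reach the file, NUL included
    fputs (fid, str);            % per the above, the NUL does not pass through
    fclose (fid);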
|
Fri 28 Mar 2014 03:44:26 AM UTC, comment #3:
Applied this change to the stable branch.
http://hg.savannah.gnu.org/hgweb/octave/rev/2633b5f3106a
There should be no noticeable changes with the current Octave docstrings, but help and related functions should now support UTF-8 characters in Texinfo docstrings.
|
Thu 27 Mar 2014 01:30:30 PM UTC, comment #2:
While writing to the continuing thread on the mailing list, I just realized an even better solution for this that is not a workaround: simply use fprintf when dealing with human-readable text, both for consistency and to let the appropriate library routine deal with the conversion.
So the diff should be roughly the following.
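A sketch of the change (the exact context of the fwrite call inside __makeinfo__ is assumed here):

    % current call in __makeinfo__ (from the original submission):
    fwrite (fid, text);
    % proposed replacement, letting the library routine handle the conversion:
    fprintf (fid, "%s", text);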
I'll test this later and commit (away from my dev box at the moment; if someone else is interested, feel free to take care of this). This seems safe enough that it should work on the stable branch.
|
Thu 27 Mar 2014 02:14:48 AM UTC, comment #1:
Confirmed already in the discussion on the mailing list. Also affects the development version. Thanks for transcribing all of this detail to the bug report.
Although, for this particular example, keep in mind that technically the preferred texinfo formatting would be:
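Presumably something like the @geq{} Texinfo macro in place of the literal ≥ character, as in this hypothetical docstring fragment:

    ## -*- texinfo -*-
    ## Return true when @var{x} @geq{} @var{y}.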
And then it's up to the texinfo program to convert the macros to the output format.
But yes, in general for other non-ASCII characters that would be useful to us, like °, ×, µ, or even é, this bug report still holds.
|
Wed 26 Mar 2014 09:53:21 PM UTC, original submission:
The __makeinfo__ function writes documentation strings to a temporary file using
fwrite (fid, text);
That function assumes by default that the documentation text is contained in an array of unsigned chars. That's fine for documentation written in 7-bit ASCII, but when the text contains 8-bit UTF-8 characters, those 8-bit bytes are replaced by null characters, as can be seen in the following simple example.
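A sketch of such an example (the file name is hypothetical; the ≥ character is spelled out as its UTF-8 bytes):

    str = ["x " char([226 137 165]) " y"];  % "x ≥ y"; 226 137 165 are the UTF-8 bytes of ≥
    fid = fopen ("/tmp/utf8_test.txt", "w");
    fwrite (fid, str);                      % the bytes above 127 come out as null characters
    fclose (fid);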
The net result of running that example is a file in which each byte of the UTF-8 character has been replaced by a null character.
If an additional argument "schar" is used for fwrite, all is well and the UTF-8 characters are written to the file without issue, which appears to prove that Octave strings are generally represented as an array of signed chars (as opposed to an array of unsigned chars, as assumed by the __makeinfo__ function).
Indeed, if a patch along the lines of the sketch below is applied, then the help text of a function whose docstring contains the ≥ character is rendered correctly.
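A sketch of such a patch (the surrounding context in __makeinfo__ is assumed; the substance is just the added "schar" argument):

    % current call in __makeinfo__:
    fwrite (fid, text);
    % patched call, writing the string data as signed chars:
    fwrite (fid, text, "schar");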
Without the above patch, the help output is truncated just before the UTF-8 symbol for "greater than or equal to". This is expected, because without the patch to the __makeinfo__ function, fwrite writes null characters in place of the UTF-8 bytes.
|