bug #65108: [troff] support construction of general file name request arguments

Submitter:  G. Branden Robinson <gbranden>
Submitted:  Tue 02 Jan 2024 10:59:42 PM UTC
   
 
Category:     Core            Severity:         1 - Wish
Item Group:   Feature change  Status:           Postponed
Privacy:      Public          Assigned to:      None
Open/Closed:  Open            Planned Release:  None

Thu 14 Nov 2024 07:10:21 PM UTC, comment #19: 

Post-1.24.

G. Branden Robinson <gbranden>
Group administrator
Tue 12 Nov 2024 08:15:43 PM UTC, comment #18: 

comment #17:

> Which email notifications I receive seems to be pretty erratic.


Savannah seems to be undergoing a lot of churn lately--which is overall a good thing, as useful features are being added.  But yeah, I did notice http://savannah.nongnu.org/support/?111148 yesterday myself.

Dave <barx>
Group Member
Tue 12 Nov 2024 06:48:19 PM UTC, comment #17: 


comment #16:

> comment #8:
> > A.  what to do about `\ ` in GNU soelim and troff.
>
> The soelim part is now covered by bug #66027.


...and the troff part by bug #66434, where the answer is basically "nothing"; I regard support for ".so foo\ bar.txt" in GNU soelim as a backward-compatibility measure.  It's not portable anywhere else I know of anyway.

> (This bug was modified to depend on 66027 the same day comment #8 and comment #9 were posted, but if there was an email notification of this, bug-groff did not archive it.)


Which email notifications I receive seems to be pretty erratic.

G. Branden Robinson <gbranden>
Group administrator
Tue 12 Nov 2024 06:29:57 PM UTC, comment #16: 

comment #8:

> A.  what to do about `\ ` in GNU soelim and troff.


The soelim part is now covered by bug #66027.  (This bug was modified to depend on 66027 the same day comment #8 and comment #9 were posted, but if there was an email notification of this, bug-groff did not archive it.)

Dave <barx>
Group Member
Tue 10 Sep 2024 09:21:47 PM UTC, comment #15: 

comment #14:

> So, ideally, we want GNU troff requests to be able to refer
> unambiguously to either one.


Yes.  And even more ideally, groff would look at its environment and make an educated guess of how to encode any non-ASCII characters it needs to pass to fopen().

There are plenty of ways this can go awry, of course: the user's environment was not necessarily the environment in which the requested file was created, for instance.  But it would be nice if, more often than not, this Just Worked without the user having to think about encodings and conversions.
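
Purely as a sketch of the idea (this is not anything groff does today), that "educated guess" might look something like the following, leaning on the POSIX nl_langinfo(3) and iconv(3) interfaces and assuming the incoming name was written in UTF-8:


#include <clocale>     // std::setlocale()
#include <iconv.h>     // iconv_open(), iconv(), iconv_close()
#include <langinfo.h>  // nl_langinfo()
#include <string>

// Sketch only: re-encode a UTF-8 file name into the code set the user's
// locale advertises before it is handed to fopen().  Falls back to the
// original bytes if the conversion isn't possible.
static std::string recode_for_locale(const std::string &utf8_name)
{
  std::setlocale(LC_CTYPE, "");              // consult the environment
  const char *codeset = nl_langinfo(CODESET);
  iconv_t cd = iconv_open(codeset, "UTF-8");
  if ((iconv_t)-1 == cd)
    return utf8_name;                        // unknown target code set; punt
  std::string in = utf8_name;
  char *inptr = &in[0];
  size_t inleft = in.size();
  std::string out(4 * in.size() + 4, '\0');  // generous output buffer
  char *outptr = &out[0];
  size_t outleft = out.size();
  size_t rc = iconv(cd, &inptr, &inleft, &outptr, &outleft);
  iconv_close(cd);
  if ((size_t)-1 == rc)
    return utf8_name;                        // unconvertible; punt
  out.resize(out.size() - outleft);
  return out;
}


And, as just noted, the current locale is only a guess about the encoding under which the file's name was actually created.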

> This, I'll quibble with.


I have no quibble with your quibble, but if it changes anything about the groff file-access situation, I can't identify what.

> * \000 itself won't work as "desired".  But this is not a
> practical problem, as 50+ years of Unix and C have led no one
> to expect that they can infix nulls in any file name anywhere.
>
> * The matter of other C0 controls (so, \001 to \037) is a vexing
> one.  I would strongly prefer to stay out of the morass altogether.


That's a reasonable restriction.  I'd guess the mitigating factor of the first point also largely applies to the second.  The Venn diagram of people putting C0 characters into filenames and people attempting to write system exploits is probably a single circle.

Dave <barx>
Group Member
Wed 04 Sep 2024 10:34:04 PM UTC, comment #14: 

comment #13:

> comment #12:
> > I feel like we're saying the same thing, or compatible things.
>
> Quite possibly.
>
> > A file named "résumé1.ms" might be stored on the file system
> > using either character encoding,
>
> ...or, as my example attempted to illustrate, two files might be stored, each using a different encoding.


Yes, a better point than I initially gave it credit for.  So, ideally, we want GNU troff requests to be able to refer unambiguously to either one.

> Similar to the contents of a file, a filename is just a string of bytes.  What characters those bytes mean is defined by the encoding.


This, I'll quibble with.  An encoding is simply a map between integers and abstract characters.  Nowadays, in the post-ISO 8859 watershed when encoding designers got more woke to the difficulties of large character sets and clashing cultural interpretations of certain symbols, these abstract characters tend to have names.  In the innocent days of USAS X3.4-1968, one simply printed a chart with numbered boxes and unnamed glyphs, implying that a rendering device should "make the characters look just like that!"

Importantly, what distinguishes ISO 10646 from Unicode is that the former is only a character encoding standard--the aforementioned mapping--whereas Unicode is a character set standard, the normative responsibilities of which have cast a surprisingly large penumbra when regarded from the perspective of more innocent 7- and 8-bit character days.
 

> A file can contain metadata to indicate its encoding; if not, there's often enough context for tools like preconv (or even the system's "file" command) to correctly guess it.


Right.  But a file name can't; not on POSIX systems.  There's no "resource fork" to indicate this.  The file system may impose an encoding (maybe), but as far as I know there's no portable way to query such information.
 

> The settings of one's terminal and LC_CTYPE environment variable affect how the string of bytes in a filename is interpreted.


Not always.  And there's the rub.


fopen(3):
       #include <stdio.h>

       FILE *fopen(const char *pathname, const char *mode);



$ sed -n '/^static void do_open/,/^}/p' src/roff/troff/input.cpp
static void do_open(bool append)
{
  symbol stream = get_name(true /* required */);
  if (!stream.is_null()) {
    symbol filename = get_long_name(true /* required */);
    if (!filename.is_null()) {
      errno = 0;
      FILE *fp = fopen(filename.contents(), append ? "a" : "w");
      if (0 /* nullptr */ == fp) {
        error("cannot open file '%1' for %2: %3",
              filename.contents(),
              append ? "appending" : "writing",
              strerror(errno));
        fp = (FILE *)stream_dictionary.remove(stream);
      }
      else
        fp = (FILE *)stream_dictionary.lookup(stream, fp);
      if (fp)
        fclose(fp);
    }
  }
  skip_line();
}


> There may not be enough context to guess.  There's no metadata (that I'm aware of, though I'd be happy to be wrong) to make the name's encoding definitive.


Precisely.

The way we're getting at file names is a C string with no implied encoding.

They're just bytes.  And GNU troff requests are not expressive enough, at present, to supply fopen() with a sequence of "just bytes".  Mostly, that's a good thing, because it keeps the formatter's own language more sane.  But we're limited to printable ASCII characters (with fuzz around the edges, like space 0x20 and delete 0x7F).  Tabs are right out.  Backslashes...should work?  Theoretically?  If doubled?  Do we need to double them again for C's sake, given that it's an escape character there too?  CSTR #54 offers no specification in this area.

We need an escape hatch, as Kernighan famously noted when critiquing Pascal's lack of them in CSTR #100.

That escape hatch is what I mean to provide, by repurposing GNU troff's Unicode special character escape sequence syntax.  That choice I knew would pinch a little when I made it, because it's not actually representing special characters here...or even, in this application, Unicode, due to the range limitation--and that pinch is something I'm feeling now while trying to reach a meeting of the minds with Deri over what we mean when we type these things in non-formatting contexts.
 

> > That's why I want to be able to support:
> >


> > $ grep -F .so résumé.ms
> > .so r\[u00E9]sum\[u00E9]1.ms
> > .so r\[u00E9]sum\[u00E9]2.ms
> > .so r\[u00E9]sum\[u00E9]3.ms


>
> Agreed, but I think it's ambiguous which of the two files I created in comment #11 a construction like this refers to.


My answer is straightforward.  I mean to apply a transformation to `filename.contents()` in the `do_open()` function above (actually via a helper function, because I'll need it for bug #64071 too) such that sequences matching `\[u0000]..\[u00FF]` map to C language octal escapes in the range \000 to \377.  That transformed string is what I would hand to fopen().

Some complications arise:

  • \000 itself won't work as "desired".  But this is not a practical problem, as 50+ years of Unix and C have led no one to expect that they can infix nulls in any file name anywhere.


  • The matter of other C0 controls (so, \001 to \037) is a vexing one.  I would strongly prefer to stay out of the morass altogether.  To see what I mean, and if you have an hour or so to spare, peruse Austin Group ticket 251.  This issue has received deep attention from experts.


Consequently my plan right now is to reject `\[u0000]` through `\[u001F]`, inclusive--meaning throw an error diagnostic and abort the request.
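
To make that concrete, here is a rough sketch of the transformation (illustrative only; the names are invented and this is not the code I intend to commit):


#include <cctype>   // isxdigit()
#include <cstdlib>  // strtol()
#include <string>

// Sketch: rewrite each "\[u00XX]" sequence in a file name argument as the
// single byte it encodes, rejecting \[u0000] through \[u001F] as described
// above.  Returns false if the request should be aborted with a diagnostic.
static bool decode_file_name_arg(const std::string &arg, std::string &result)
{
  result.clear();
  for (size_t i = 0; i < arg.size(); ) {
    if ((i + 8) <= arg.size()
        && 0 == arg.compare(i, 5, "\\[u00")
        && isxdigit((unsigned char)arg[i + 5])
        && isxdigit((unsigned char)arg[i + 6])
        && ']' == arg[i + 7]) {
      long byte = strtol(arg.substr(i + 5, 2).c_str(), 0 /* nullptr */, 16);
      if (byte < 0x20)
        return false;          // C0 control (or null): error, abort request
      result += (char)byte;    // becomes a raw byte handed to fopen()
      i += 8;
    }
    else
      result += arg[i++];      // anything else passes through unchanged
  }
  return true;
}


Run over ".so r\[u00E9]sum\[u00E9]1.ms", that yields an argument containing the single byte 0xE9 twice--the Latin-1 spelling--which fopen() takes as-is.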

> They both, from some viewpoint, have the base filename "résumé".


That viewpoint is not the one taken by fopen(), which sees only a sequence of 8-bit bytes, to which it ascribes no particular meaning.  From that stance, the Latin-1 vs. UTF-8 encodings of "résumé" plainly differ.

> They can both coexist on the same file system, even in the same directory.


Yes!  And that's why it's good that fopen() can tell them apart, and so can we, if we will meet it on its own terms!

G. Branden Robinson <gbranden>
Group administrator
Wed 04 Sep 2024 11:44:45 AM UTC, comment #13: 

comment #12:

> I feel like we're saying the same thing, or compatible things.


Quite possibly.

> A file named "résumé1.ms" might be stored on the file system
> using either character encoding,


...or, as my example attempted to illustrate, two files might be stored, each using a different encoding.

Similar to the contents of a file, a filename is just a string of bytes.  What characters those bytes mean is defined by the encoding.

A file can contain metadata to indicate its encoding; if not, there's often enough context for tools like preconv (or even the system's "file" command) to correctly guess it.

The settings of one's terminal and LC_CTYPE environment variable affect how the string of bytes in a filename is interpreted.  There may not be enough context to guess.  There's no metadata (that I'm aware of, though I'd be happy to be wrong) to make the name's encoding definitive.

> That's why I want to be able to support:
>


> $ grep -F .so résumé.ms
> .so r\[u00E9]sum\[u00E9]1.ms
> .so r\[u00E9]sum\[u00E9]2.ms
> .so r\[u00E9]sum\[u00E9]3.ms


Agreed, but I think it's ambiguous which of the two files I created in comment #11 a construction like this refers to.  They both, from some viewpoint, have the base filename "résumé".  They can both coexist on the same file system, even in the same directory.

Dave <barx>
Group Member
Wed 04 Sep 2024 08:20:18 AM UTC, comment #12: 

comment #11:

> original submission:
> > we have no way of knowing what the file system's character encoding is.
> > Might be ISO 8859-1, UTF-8, UTF-16BE/LE, or something else entirely.
>
> I'm not sure now if that's a meaningful question.  The file system seems to just store a string of bytes as the file name, and leave it up to the shell how to interpret that.


> $ mkdir foo
> $ cd foo
> $ echo résumé | iconv -tutf8 | xargs touch
> $ echo résumé | iconv -tlatin1 | xargs touch
> $ echo * | od -c
> 0000000   r 303 251   s   u   m 303 251       r 351   s   u   m 351  \n
> 0000020


> Then a UTF-8 shell produces:


> $ ls
>  résumé  'r'$'\351''sum'$'\351'


> and a Latin-1 shell produces:


> $ ls
> résumé  résumé


> That is, both filenames are valid (but different) strings of Latin-1 characters.  In UTF-8, one of them is a string of valid characters, and one has two invalid bytes in it.


It's also valid Latin-2, Latin-5, Latin-9, and KOI8-R, to name four other encodings supported by groff.
 

> This is an ext4 file system, but I would imagine any other Unix-based one would have to work the same in order to interact with shells consistently.


I feel like we're saying the same thing, or compatible things.

A file named "résumé1.ms" might be stored on the file system using either character encoding, or, on a Windows system, using UTF-16LE.  A groff user with a document that wants to `so` that file name:


$ grep -F .so résumé.ms
.so résumé1.ms
.so résumé2.ms
.so résumé3.ms


...is going to need either an encoding match between résumé.ms's contents and their file system, or some sophistication about character encodings.

That's why I want to be able to support:


$ grep -F .so résumé.ms
.so r\[u00E9]sum\[u00E9]1.ms
.so r\[u00E9]sum\[u00E9]2.ms
.so r\[u00E9]sum\[u00E9]3.ms


That way a person doesn't have to preconv their document.

Or, if they did preconv their document, this is what the program left them with, because that tool has no sense of context regarding requests that take file name arguments: `so`, `soquiet`, `mso`, `msoquiet`, `open`, `opena`, `psbb`, `cf`, `fp`, `hpf`, `hpfa`, `nx`, or `trf`.

I feel like we might be talking past each other...?

G. Branden Robinson <gbranden>
Group administrator
Wed 04 Sep 2024 03:49:05 AM UTC, comment #11: 

original submission:

> we have no way of knowing what the file system's character encoding is.
> Might be ISO 8859-1, UTF-8, UTF-16BE/LE, or something else entirely.


I'm not sure now if that's a meaningful question.  The file system seems to just store a string of bytes as the file name, and leave it up to the shell how to interpret that.

$ mkdir foo
$ cd foo
$ echo résumé | iconv -tutf8 | xargs touch
$ echo résumé | iconv -tlatin1 | xargs touch
$ echo * | od -c
0000000   r 303 251   s   u   m 303 251       r 351   s   u   m 351  \n
0000020

Then a UTF-8 shell produces:

$ ls
 résumé  'r'$'\351''sum'$'\351'

and a Latin-1 shell produces:

$ ls
résumé  résumé

That is, both filenames are valid (but different) strings of Latin-1 characters.  In UTF-8, one of them is a string of valid characters, and one has two invalid bytes in it.

This is an ext4 file system, but I would imagine any other Unix-based one would have to work the same in order to interact with shells consistently.

Dave <barx>
Group Member
Thu 08 Aug 2024 12:26:25 PM UTC, comment #10: 

comment #8:

> comment #7:
> > One additional comment on the proposal:
> >
> > comment #3:
> > > Only codes in the range 00-1F and 80-FF are accepted in
> > > [`\[u00XX]`] syntax; those in the range 20-7F are ignored with a
> > > diagnostic advising the user to deobfuscate their inputs.
> >
> > I realize there's no good reason for a user to type "\[u0045]" instead of "E"
>
> There may in fact be one.  It could be a means of obtaining an ordinary character (or the handful of special characters in Unicode Basic Latin) when said characters in their conventional forms are at that time subject to `tr` translation.


This was a bogus digression.  `tr` affects only characters that are sent to the output for transformation to glyphs, and only at the time that this happens.


$ cat EXPERIMENTS/tr-works-only-on-output.roff
.nf
.tr ab
.ds a aunt
\*a
.tr aa
\*a
.pl \n(nlu
$ nroff EXPERIMENTS/tr-works-only-on-output.roff
bunt
aunt


So, disregard.

G. Branden Robinson <gbranden>
Group administrator
Thu 25 Jul 2024 10:53:38 PM UTC, comment #9: 


comment #5:

> So I'm not sure whether or not you advocate retaining soelim's current escape mechanism.  Maybe you're not yet either.


It's a question that tempts me to dither.
 

> My suggestion in comment #2 that support for soelim's escapes might need to be dropped was based on my concern that one syntax requiring backslashes for spaces and one not would result in ambiguities in representing edge-case filenames, such as ones containing a backslash followed by a space.  But I think you've eliminated this possibility by not allowing a bare backslash to represent itself, requiring it be doubled if the filename contains a backslash.  (What miscreant named this file anyway?)


Right.

> But if soelim is changed to no longer recognize "\ ", then rule 5a is unnecessary and even a little counterintuitive.


Agreed.

G. Branden Robinson <gbranden>
Group administrator
Thu 25 Jul 2024 10:50:55 PM UTC, comment #8: 

comment #7:

> One additional comment on the proposal:
>
> comment #3:
> > Only codes in the range 00-1F and 80-FF are accepted in
> > [`\[u00XX]`] syntax; those in the range 20-7F are ignored with a
> > diagnostic advising the user to deobfuscate their inputs.
>
> I realize there's no good reason for a user to type "\[u0045]" instead of "E"


There may in fact be one.  It could be a means of obtaining an ordinary character (or the handful of special characters in Unicode Basic Latin) when said characters in their conventional forms are at that time subject to `tr` translation.

I don't know if this is feasible, as I still haven't mastered the character-to-glyph resolution process.  It's one of the more complex aspects of the formatter.
 

> ... but at the same time there seems no reason for groff to object to it.  It's ugly but not ambiguous or any harder to parse than the accepted ranges; if anything, a diagnostic seems to complicate the code, which could otherwise handle every \[u00XX] the same way.
>
> Even if you're wedded to the diagnostic, I'd say at least process the character.  Ignoring it seems needlessly punitive.


I have a very tall prescription pad.  But I'll hold my fire for now.  :)

> (Taking ticket out of "Need Info" assuming comment #5 addressed your questions; let me know if I've overlooked anything.)


That's fine.

I think this is just waiting on me now to start implementing and decide:

A.  what to do about `\ ` in GNU soelim and troff.
B.  whether to accept `\[u0021]` (or `\[u0020]`?) through `\[u007E]` (or `\[u007F]`?)
C.  if the answer to "B" is "yes", whether to warn about them

G. Branden Robinson <gbranden>
Group administrator
Sat 20 Jul 2024 08:20:06 PM UTC, comment #7: 

One additional comment on the proposal:

comment #3:

> Only codes in the range 00-1F and 80-FF are accepted in
> [`\[u00XX]`] syntax; those in the range 20-7F are ignored with a
> diagnostic advising the user to deobfuscate their inputs.


I realize there's no good reason for a user to type "\[u0045]" instead of "E"... but at the same time there seems no reason for groff to object to it.  It's ugly but not ambiguous or any harder to parse than the accepted ranges; if anything, a diagnostic seems to complicate the code, which could otherwise handle every \[u00XX] the same way.

Even if you're wedded to the diagnostic, I'd say at least process the character.  Ignoring it seems needlessly punitive.

(Taking ticket out of "Need Info" assuming comment #5 addressed your questions; let me know if I've overlooked anything.)

Dave <barx>
Group Member
Sat 20 Jul 2024 07:50:09 PM UTC, comment #6: 

comment #1:

> It would seem that AT&T troff users (and groff users to date)
> have been pretty conservative about the file names they pass to
> these requests.


In my experience, users who work a lot at the command line (which I bet covers almost all *roff users) tend to avoid filename characters that require escaping.  Users who interact with files exclusively through GUIs have no incentive to avoid certain characters in filenames; the only time they'll ever type a file's name is upon creating it, and even then no extra thought or effort is required to use characters that don't play well with shells.  So the above observation doesn't surprise me much.

Even I would name an mp3 of Tom Jones' most famous song its_not_unusual.mp3, and missing apostrophes drive me up the wall.

Dave <barx>
Group Member
Sat 20 Jul 2024 06:48:09 PM UTC, comment #5: 

Your plan looks solid!

I do have one question about two lines at opposite ends of comment #3 that seem to be in opposition.

> let's rough out a syntax that would work both for existing uses
> of `so` as soelim(1) understands it and for formatter syntax,


and

> Since backslash-space is apparently a GNU extension in the
> first place, we might consider dropping it.


So I'm not sure whether or not you advocate retaining soelim's current escape mechanism.  Maybe you're not yet either.

My suggestion in comment #2 that support for soelim's escapes might need to be dropped was based on my concern that one syntax requiring backslashes for spaces and one not would result in ambiguities in representing edge-case filenames, such as ones containing a backslash followed by a space.  But I think you've eliminated this possibility by not allowing a bare backslash to represent itself, requiring it be doubled if the filename contains a backslash.  (What miscreant named this file anyway?)

But if soelim is changed to no longer recognize "\ ", then rule 5a is unnecessary and even a little counterintuitive.

Dave <barx>
Group Member
Thu 18 Jul 2024 10:01:48 PM UTC, comment #4: 

I forgot case #1 for Solaris 10 troff soelim.


printf '.so foo bar file.troff\n' | soelim
foo: No such file or directory
.so foo
bar file.troff


So just no space-in-file-name support of any kind.

Also, I cheated here with an example that I don't plan to make work:


.so foo\u[0020]bar\u[0020]file.troff


Because of rule 5d:

> Only codes in the range 00-1F and 80-FF are accepted in this syntax; those in the range 20-7F are ignored with a diagnostic advising the user to deobfuscate their inputs.


...but it should get the idea across.

G. Branden Robinson <gbranden>
Group administrator
Thu 18 Jul 2024 09:54:05 PM UTC, comment #3: 

Well, let's rough out a syntax that would work both for existing uses of `so` as soelim(1) understands it and for formatter syntax, which interprets the `so` under slightly different rules (since it brings to bear the full power of the troff lexical analyzer).

1.  An argument of type `file` (as described in groff(7)) to a request consumes the rest of the line.
2.  Unescaped spaces can therefore populate the argument.
3.  A leading double quote is recognized and removed; a file name can thus start with spaces.
4.  Any other/remaining double quotes are not treated specially.
5.  Only the following escape sequences are recognized.

5a. `\ ` (backslash-space) represents a space.  It is not necessary in troff, but is recognized to avoid disrupting existing soelim(1) usage.
5b. `\"` ends the file name argument and starts a comment.
5c. `\\` represents a (single) literal backslash.  It is handled however the system's standard C library wants to handle it.
5d. `\[u00XX]` where each X is an uppercase hexadecimal digit encodes a character.  Only codes in the range 00-1F and 80-FF are accepted in this syntax; those in the range 20-7F are ignored with a diagnostic advising the user to deobfuscate their inputs.
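
For illustration, the rules above boil down to something like the following toy lexer (a sketch only; it assumes backslash is the escape character, that the request name and the space after it have already been consumed, and it leaves rule 5d's \[u00XX] decoding as a stub):


#include <string>

// Toy illustration of rules 1-5: gather a file name argument from the
// remainder of an input line.
static std::string gather_file_name_arg(const std::string &rest_of_line)
{
  std::string name;
  size_t i = 0;
  if (i < rest_of_line.size() && '"' == rest_of_line[i])
    i++;                                  // rule 3: strip one leading quote
  while (i < rest_of_line.size()) {
    char c = rest_of_line[i];
    if ('\\' == c && (i + 1) < rest_of_line.size()) {
      char next = rest_of_line[i + 1];
      if (' ' == next) {                  // rule 5a: escaped space
        name += ' ';
        i += 2;
        continue;
      }
      if ('"' == next)                    // rule 5b: rest of line is a comment
        break;
      if ('\\' == next) {                 // rule 5c: literal backslash
        name += '\\';
        i += 2;
        continue;
      }
      if ('[' == next) {
        // rule 5d: decode \[u00XX] here, honoring the range restrictions
      }
    }
    name += c;                            // rules 1, 2, and 4: everything
    i++;                                  // else, unescaped spaces and stray
  }                                       // quotes included, is part of the name
  return name;
}
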

How are these handled today?

Specimen:


$ cat EXPERIMENTS/extending-so-syntax.troff
.so foo bar file.troff
.so foo\ bar\ file.troff
.so "foo bar file.troff
.so foo.troff\" comment
.so foo\u[0020]bar\u[0020]file.troff


groff soelim:


$ soelim EXPERIMENTS/extending-so-syntax.troff
.lf 1 ./EXPERIMENTS/extending-so-syntax.troff
soelim:./EXPERIMENTS/extending-so-syntax.troff:1: error: can't open 'foo': No such file or directory
.so foo bar file.troff
soelim:./EXPERIMENTS/extending-so-syntax.troff:2: error: can't open 'foo bar file.troff': No such file or directory
.so foo\ bar\ file.troff
soelim:./EXPERIMENTS/extending-so-syntax.troff:3: error: can't open '"foo': No such file or directory
.so "foo bar file.troff
.so foo.troff\" comment
.so foo\u[0020]bar\u[0020]file.troff


DWB 3.3 soelim:

...never mind, DWB 3.3 troff has no soelim.  Wow!  Learned something new today.

Heirloom Doctools soelim:


$ ./bin/soelim ./extending-so-syntax.troff
foo: No such file or directory
.so foo
bar file.troff
foo\: No such file or directory
.so foo\
bar\ file.troff
"foo: No such file or directory
.so "foo
bar file.troff
foo.troff\": No such file or directory
.so foo.troff\"
comment
foo\u[0020]bar\u[0020]file.troff: No such file or directory
.so foo\u[0020]bar\u[0020]file.troff


Uh, that's a little hard to interpret.


$ printf '.so foo bar file.troff\n' | ./bin/soelim
foo: No such file or directory
.so foo
bar file.troff


Interesting that it transforms the input in this way, by adding a newline where it decided to stop lexing the file name.  I'm tempted to call that a bug.


0000000   .   s   o       f   o   o  \n   b   a   r       f   i   l   e
0000020   .   t   r   o   f   f  \n
0000026


The other cases:


$ printf '.so foo\\ bar\\ file.troff\n' | ./bin/soelim
foo\: No such file or directory
.so foo\
bar\ file.troff

$ printf '.so "foo bar file.troff\n' | ./bin/soelim
"foo: No such file or directory
.so "foo
bar file.troff

$ printf '.so "foo.troff\\"comment\n' | ./bin/soelim
"foo.troff\"comment: No such file or directory
.so "foo.troff\"comment

$ printf '.so foo\u[0020]bar\u[0020]file.troff\n' | ./bin/soelim
printf '.so foo\\u[0020]bar\\u[0020]file.troff\n' | ./bin/soelim
foo\u[0020]bar\u[0020]file.troff: No such file or directory
.so foo\u[0020]bar\u[0020]file.troff


There seem to be no further surprises here.

Unix V7 did not have soelim, either.

Let me check Solaris 10.


$ printf '.so foo\\ bar\\ file.troff\n' | soelim
foo\: No such file or directory
.so foo\
bar\ file.troff

$ printf '.so "foo bar file.troff\n' |soelim
"foo: No such file or directory
.so "foo
bar file.troff

$ printf '.so "foo.troff\\"comment\n' |soelim
"foo.troff\"comment: No such file or directory
.so "foo.troff\"comment

$ printf '.so foo\u[0020]bar\u[0020]file.troff\n' |soelim
foo\u[0020]bar\u[0020]file.troff: No such file or directory
.so foo\u[0020]bar\u[0020]file.troff


These look identical to Heirloom to me.  I guess we know now where Heirloom got its inspiration, and perhaps even code, for soelim from.

Since backslash-space is apparently a GNU extension in the first place, we might consider dropping it.  It wasn't portable, and even the rest of the groff ecosystem struggled to handle files with spaces in their names.

I further venture that this exact same syntax could be applied to the `sy`/`pso` problem in bug #62787 and to user-constructed diagnostic messages in bug #64071.

I highly value the prospect of having a parallel syntax for these 3 issues if we can get it.

For soelim(1) itself I would further add that this program will continue to recognize only backslash as an escape character, but GNU troff will recognize the configured escape character.

Thoughts?

G. Branden Robinson <gbranden>
Group administrator
Thu 18 Jul 2024 05:32:55 AM UTC, comment #2: 

comment #1:

> Decide what to do about spaces and tabs.  One can argue about tabs
> being reasonable in file names, but spaces are a fact of life.
>
> As it happens, all requests that take file arguments
> always do so as their last argument, and thus could reuse
> optional-strippable-leading-quote syntax.  This would be friendly
> to users, I think.  No new rule to remember for this argument type.


Friendly to some users, but less friendly to those who use soelim's syntax for handling spaces and backslashes in filenames.

I argued in bug #59442 that groff and soelim shouldn't use competing filename syntaxes, as this would not be user-friendly.  I still think that's true, but the right solution might be to make soelim also use the syntax you've outlined here, breaking back compatibility in exchange for a more scalable solution.

Dave <barx>
Group Member
Tue 09 Jul 2024 12:46:45 PM UTC, comment #1: 

This would affect the `\O` escape sequence and the `cf`, `fp`, `hpf`, `hpfa`, `mso`, `msoquiet`, `nx`, `open`, `opena`,  `psbb`, `so`, `soquiet`, and `trf` requests.

Right now at least some of these use the internal function `get_long_name()`, which is inappropriately loose--it's made for reading groff identifiers, which can contain slashes.

A new, more specialized function for reading file name arguments should:

1.  Reject leading slashes, either when not in unsafe mode, or altogether; and

2.  Reject an argument with any slashes in it in the case of `fp` at least, to avoid having to draw mysterious inferences as in the fix for bug #64577.
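
For illustration only, such a function's validation step might have roughly this shape (the name, signature, and flags are invented here, not actual groff internals; `unsafe_mode` stands in for the state set by groff's -U option):


#include <string>

// Hypothetical validation for a dedicated file name argument reader.
// `reject_any_slash` would be set by callers like the `fp` request handler.
static bool file_name_arg_is_acceptable(const std::string &name,
                                        bool unsafe_mode,
                                        bool reject_any_slash)
{
  if (name.empty())
    return false;
  if (!unsafe_mode && '/' == name[0])     // item 1: no leading slashes
    return false;                         // outside of unsafe mode
  if (reject_any_slash
      && name.find('/') != std::string::npos)
    return false;                         // item 2: `fp` tolerates no slashes
  return true;
}


(This implements the "when not in unsafe mode" alternative of item 1; the "altogether" alternative would simply drop the `unsafe_mode` test.)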

Decide what to do about spaces and tabs.  One can argue about tabs being reasonable in file names, but spaces are a fact of life.

As it happens, all requests that take file arguments always do so as their last argument, and thus could reuse optional-strippable-leading-quote syntax.  This would be friendly to users, I think.  No new rule to remember for this argument type.

We might have to punt on \O5, and have "pre-html.cpp" reject an input document that has spaces or tabs in its name.  I know I don't want to add much machinery just to support generalized file arguments in this one documentedly internal-use-only escape sequence.

It would seem that AT&T troff users (and groff users to date) have been pretty conservative about the file names they pass to these requests.

G. Branden Robinson <gbranden>
Group administrator
Tue 02 Jan 2024 10:59:42 PM UTC, original submission:  

See the related bug #64071, which aims to do something similar for `sy` and `pso` requests.


Quoting bug #59442, where I got carried away and brainstormed this.

*

[Something] I want to do is specialize the formatter's logic when
handling file name arguments given to requests.

Presently, GNU troff calls the same internal function to gather an
argument that is a file name as it does to gather a roff identifier.

Maybe that made sense in 1990, but it doesn't today.  File names can
contain spaces and non-ASCII characters (in whatever encoding the file
system happens to support).

Since these arguments are used mainly as-is, handed off to standard C
library functions like `fopen()`, I don't anticipate many problems here
(O Fortuna, seize my hostage).  The only exception to that I can think
of off the top of my head is the value of the `.F` register, which
interpolates a file name.  We will need some way to represent such
things in output.  At first blush, it seems to me that we can
interpolate spaces as-is (if you want the argument quoted, do that
yourself in context), and any unprintable non-Basic Latin bytes in
groff's \[u00xx] notation.

I say "\[u00xx]" instead of "\[uXXXX]" because we have no way of knowing
what the file system's character encoding is.  Might be ISO 8859-1,
UTF-8, UTF-16BE/LE, or something else entirely.
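
To sketch the interpolation direction (again, illustrative only, not a
committed design): copy printable ASCII, spaces included, through
untouched, and render every other byte in \[u00xx] form.


#include <cstdio>   // std::snprintf()
#include <string>

// Sketch: turn a raw file name (a byte string of unknown encoding) into a
// form safe to interpolate, e.g. as the value of the `.F` register.
static std::string interpolate_file_name(const std::string &name)
{
  std::string out;
  for (size_t i = 0; i < name.size(); i++) {
    unsigned char c = (unsigned char)name[i];
    if (c >= 0x20 && c < 0x7F)
      out += (char)c;                   // printable ASCII, spaces included
    else {
      char buf[sizeof "\\[u00XX]"];     // room for one escape plus NUL
      std::snprintf(buf, sizeof buf, "\\[u%04X]", (unsigned int)c);
      out += buf;
    }
  }
  return out;
}


A name containing the Latin-1 byte 0xE9 would thus interpolate as
"r\[u00E9]sum\[u00E9]".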

What would be affected by this:

Requests:

cf
fp (when invoked with a 3rd argument)
hpf
hpfa
lf (when invoked with a 2nd argument)
mso
msoquiet
nx (when invoked with a 2nd argument)
open
opena
psbb
so
soquiet
trf

Escape sequences:

\O5 (but since this is mainly used internally to manage temporary files
    by grohtml, maybe lazily postponing this in hope that my Grand Plan
    to revise grohtml to no longer use a dedicated preprocessor is a
    better idea)


Registers:

.F

[snip]

And probably step 1 would be a simple refactor to introduce file name
argument-gathering and -interpolating functions which initially behave
no differently than the status [quo], but simply wrap existing logic for
identifier gathering and whatever one-off thing the `.F` interpolator
does.

*

G. Branden Robinson <gbranden>
Group administrator

 


 

Depends on the following items: None found


 

Carbon-Copy List
  • -email is unavailable- added by barx (Posted a comment)
  • -email is unavailable- added by gbranden (Submitted the item)


    Follow 8 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2024-11-14  gbranden    Status          None => Postponed
    2024-11-11  gbranden    Dependencies    Removed dependency from bugs #65099 => -
    2024-07-26  gbranden    Dependencies    - => bugs #66027 is dependent
    2024-07-20  barx        Status          Need Info => None
                            Assigned to     barx => None
    2024-07-18  gbranden    Status          None => Need Info
                            Assigned to     None => barx
    2024-01-02  gbranden    Dependencies    - => bugs #65099 is dependent
