sr #110992: bash read multi-line string with Process Substitution

Submitter:  None
Submitted:  Tue 26 Dec 2023 05:36:32 AM UTC

Category:  None
Priority:  5 - Normal
Severity:  3 - Normal
Status:  Postponed
Privacy:  Public
Assigned to:  None
Open/Closed:  Open
Operating System:  None


Tue 09 Jan 2024 04:52:22 PM UTC, comment #7: 

It's difficult to separately profile constructs like that with a tool like gprof, which is best suited for profiling entire process executions.

When I profiled the original scripts (two different scripts to equalize the startup and memory allocation overheads), I found that the assignment timings were fairly consistent, but the read timings varied widely: from 20% faster to as much as 40% slower. The execution time was dominated by memory allocation.

When I changed to use a for (( ... )) loop for testing, the execution time was dominated by forking (as you'd expect), and the read version was clearly faster because it had less forking overhead. I'll look into that; I think it's the synchronous nature of command substitution that is causing it.

Anyway, I'm not going to implement this requested feature right now; I don't see it as being of enough general interest. I appreciate the discussion.

Chet Ramey <chet>
Group administrator
Fri 05 Jan 2024 04:27:37 AM UTC, comment #6: 

Probably worthwhile to profile this as well. When these benchmarks are iterated in a while loop, it looks like resources are not being released in the second timed loop. The second benchmark requires more time with every iteration...

while :; do
  echo -n '1)'
  time for a in {1..5000}; do var="$(echo)" ; done
  echo -n '2)'
  time for a in {1..5000}; do read -d "" var < <(echo); done
done

This feature request is for a read option to use redirected file input, with an appropriate exit status; i.e., end-of-file is expected and is not an error condition.

Ignoring the exit status is a workaround, as in the benchmark above; however, this masks detection of other errors, for example an attempt to assign to a readonly variable, or a command failure.
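
A minimal sketch of the assignment-error case, using an illustrative readonly variable rv: the assignment failure alone makes read return non-zero, independent of end-of-file, so discarding the status hides it.

readonly rv
read rv < <(echo data)    # the newline delimiter is found, so this is not an EOF failure
echo $?                   # non-zero anyway: readonly assignment error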

true/false is expected to trigger success/fail appropriately in the examples below, but success is impossible in the final example because of the read error on redirected input.

var=$(false) && echo success || echo fail
fail

var=$(true) && echo success || echo fail
success

read -d "" var < <(false) && echo success || echo fail
fail

read -d "" var < <(true) && echo success || echo fail
fail

In the final example, it is not really possible to test whether var is set correctly, yet it would be sufficient to identify whether an error status was returned by the function stack (here represented by true) and return an appropriate exit status for the entire read operation.

Maybe the exit status of the command being read will always be masked, e.g.

cat < <(false) && echo true || echo fail
true

in which case, I'm open to suggestions for the best-performing way to load a variable from command output while still getting the command's exit code. The goal here has been to do that without spawning a bunch of shells, since multiple $() and pipelines are notorious for degrading performance.

Anonymous
Wed 03 Jan 2024 09:09:14 PM UTC, comment #5: 

Thanks for the timing stats; I'll look at them with gprof.

If you want to ignore end of file as an error condition, then ignore the return status from read. You're not using -t or -u,
and since you control the variables you're using, you won't get a variable assignment error -- those are the other things that will cause read to return non-zero.
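
For instance, a minimal sketch of discarding the status, with some_command standing in for whatever produces the data:

read -d "" var < <(some_command) || true    # read returns non-zero at EOF; ignore it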

Are you worried about read errors like EINTR? Those are translated to EOF.

Chet Ramey <chet>
Group administrator
Fri 29 Dec 2023 07:17:16 PM UTC, comment #4: 

time for a in {1..5000} ; do var="$(echo)" ; done

real 0m4.359s
user 0m0.487s
sys 0m3.489s

time for a in {1..5000} ; do read var < <(echo) ; done

real 0m2.951s
user 0m0.890s
sys 0m3.034s

The scope of this feature request aligns with this part of the read documentation: "The exit status is zero, unless end-of-file is encountered." It would be desirable to read data from a file, so a -f option could be used in that case, designating that end-of-file is expected to terminate the read and is not an error.

Anonymous
Thu 28 Dec 2023 09:34:20 PM UTC, comment #3: 


> Follow-up Comment #2, sr #110992 (group bash):
>
> This inquiry was made after extensive benchmarking and performance tuning.


I'd like to see that data, or at least the test framework. It suggests
there are opportunities for optimizations in some of these operations.

>  As
> it turns out, the read builtin, loading the output a "Command Substitution"
> provides the greatest performance,


I don't think you meant command substitution here. Everything you show and
claim is superior uses process substitution.

> The convenience of loading shell variables with data stored as files
> is a secondary benefit of the requested feature.

Good, since that already exists.

> Imagine the scenario of dozens of data processing commands (primarily shell
> functions and c programs) to assign variables for data processing, on each of
> thousands of files, based on interactive user input.


Ah, interactive input. A new requirement, something you didn't mention
before, and didn't show in any of your examples. Is it relevant to the
example?

> read var < <(command)
>
> is far superior than
>
> var="$(command)"


This is a subjective statement without supporting data. There are too many
variables (hah), not the least the nature of `command'. The fact that
process substitutions run asynchronously, while command substitutions run
synchronously, is just the first difference.

 

> For this application, I minimize IO overhead by filtering data lines from each
> file into a variable (one fopen/fclose per file), then use "Here Strings" to
> input that data into "Command Substitution" and direct the output to read, for
> loading into variables (with no process substitution fork).


(Your read example uses process substitution.)

There is absolutely a process substitution fork. Process substitution
forks the shell to create a new process in which to run the command. That
process runs asynchronously. Depending on `command', it may fork more
times.

>
> read -d "" vars < <(command filename)
> read -d "" var1 < <(command <<<"$vars")
> read -d "" var2 < <(command <<<"$vars")
>
> is far superior than
>
> var1="$(command filename | command | command)"
> var2="$(command filename | command | command)"


These are not the same. Why would you claim they are? It's not a one-to-one
comparison.

> when there are many filenames and vars to process. The sub-shells and
> additional file open/close operations degrade performance.


Which you have apparently introduced?


> The commands in this application stack are optimized to signal with exit
> codes, so if a command fails due to bad data or file permissions, it would be
> desirable to branch the read appropriately.


So why not wait for them as I showed in the previous reply? If the command
fails, you're not interested in what `read' stores in the variable, right?
Run `wait $!' to get the exit status of the process substitution and react
appropriately. You can do all this today.

> Enabling read to load input up to the literal "end-of-transmission" (without
> additionally specifying an EOT character, and injecting it into the data
> stream) would simplify this performance based use of Process Substitution. A
> "read from stdin" command option (eg -f) to allow read to terminate without
> error, provides a means for error checking the operation, without the overhead
> of injecting a symbol for the actual end-of-file event, eg
>
> read -f var < <(command filename) || { logger -s "read var from command
> filename error" ; return 1 ;}


What is the actual error you want to check for? You want a read error, or
an error in the process substitution?


> When a dummy character is used as a read delimiter, it does not need removal,
> read disgards it; however typically an additional command must be added for
> evaluation in the Process Substitution step, to simulate an EOT, at the actual
> end of the output.
>
> The feature request is simply to enable read to operate on directed file input
> (name or fd), with proper exit codes.


What is a `proper exit code' in this scenario? Do you simply want to ignore
read returning non-zero at EOF and get the exit status of the process substitution?

> Mapfile might be made to work, however this application is working with
> variables, not arrays, so I didn't test it.


It doesn't return non-zero at EOF, if that's your concern.

> Regarding substitution, I am pretty sure process substitution (and pipes and
> loops) fork, but command substitution does not---that is why I optimized
> loading variables with read, to use command substitution.


I'm pretty sure you mean the opposite of this, since all your read examples
use process substitution, but process substitution and command substitution
both fork. And not that it's relevant, but what do you mean by `loops
fork'?

>
> Thanks for the eot=$'\004' tip! Nobody has been able to demonstrate generating
> characters in bash! Where in the man is that?


It's a form of quoting, documented in the QUOTING section.


> Can variables and/or decimals be
> used to designate characters? eg how would this be corrected to print "ABC"?
> for a in {101..103} ; do echo -n $"$a" ; done


No, you'd need an eval there, since quoting is done in the parser, and to
use the right form of quoting:

for a in 101 102 103
do
eval echo -n \$\'\\"$a"\'
done
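
Not something asked about above, but for comparison, a sketch of an eval-free route that relies on printf interpreting \nnn octal escapes in its format string:

for a in 101 102 103
do
printf "\\$a"     # format string becomes \101, \102, \103
done
printf '\n'       # output: ABC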

> I tried mapfile, to load an email header into var; an additional <CR> is
> added, I'm not sure why.


What does `email.txt' look like? And where is the additional newline
`added'?

>
> mapfile -d $'\004' var < <(awk 'NR==1,/^$/ {if (length) print}' <email.txt)


What's the purpose of using the delimiter here? What do you expect it to
do? Does `email.txt' end with that character? Does it delimit lines or
something?

The purpose of mapfile is to read a file with one line per array element.
You're defeating that by trying to read everything into var[0]. You'll
get the entire contents of the file/stdin, including all newlines, because
that's what you've told mapfile you want.

> Not sure if there is a side effect using mapfile to load a variable (vs
> array); but the extra <cr> added to var spoils the idea of using mapfile to
> load variables with command substitution (or file direction).


It's not clear why you're using \004 as the delimiter here, but if you use
newline as the delimiter, mapfile will remove it from each line if you tell
it to. Then instead of having a scalar variable with embedded newlines (but
not a trailing one? It's not clear), you have an array variable with one
line per array element that you can either manipulate individually or treat
as a unit.
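
A minimal sketch of that, reusing the awk filter from the example above and the -t option to strip the newline delimiters:

mapfile -t header < <(awk 'NR==1,/^$/ {if (length) print}' <email.txt)
echo "${header[0]}"              # work with individual lines
printf '%s\n' "${header[@]}"     # or treat the array as a unit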

> This request
> could equally apply to mapfile (exactly) loading a var, from a file (or
> stdin). Maybe I'm not doing it right?


It's not clear why you're not using the array variable to hold individual
lines. read and mapfile are not intended to do exactly the same thing.

> An option enabling read to use stdin (or
> a specified file) to load a variable seems an improvement, intuitive,
> non-conflicting, and useful, from this user's perspective.


In the end, you just want read to ignore EOF and return status 0, correct?
Otherwise, there's no difference from what read does now.

Chet Ramey <chet>
Group administrator
Thu 28 Dec 2023 01:16:28 AM UTC, comment #2: 

This inquiry was made after extensive benchmarking and performance tuning. As it turns out, the read builtin, loading the output of a "Command Substitution", provides the greatest performance for loading a shell variable with command output. The convenience of loading shell variables with data stored as files is a secondary benefit of the requested feature.

Imagine the scenario of dozens of data processing commands (primarily shell functions and c programs) to assign variables for data processing, on each of thousands of files, based on interactive user input.

read var < <(command)

is far superior to

var="$(command)"


For this application, I minimize IO overhead by filtering data lines from each file into a variable (one fopen/fclose per file), then use "Here Strings" to input that data into "Command Substitution" and direct the output to read, for loading into variables (with no process substitution fork).

read -d "" vars < <(command filename)
read -d "" var1 < <(command <<<"$vars")
read -d "" var2 < <(command <<<"$vars")

is far superior to

var1="$(command filename | command | command)"
var2="$(command filename | command | command)"

when there are many filenames and vars to process. The sub-shells and additional file open/close operations degrade performance.

The commands in this application stack are optimized to signal with exit codes, so if a command fails due to bad data or file permissions, it would be desirable to branch the read appropriately.

Enabling read to load input up to the literal "end-of-transmission" (without additionally specifying an EOT character and injecting it into the data stream) would simplify this performance-based use of Process Substitution. A "read from stdin" command option (e.g. -f) allowing read to terminate without error provides a means of error checking the operation, without the overhead of injecting a symbol for the actual end-of-file event, e.g.

read -f var < <(command filename) || { logger -s "read var from command filename error" ; return 1 ;}


When a dummy character is used as a read delimiter, it does not need removal; read discards it. However, typically an additional command must be added in the Process Substitution step to emit an EOT at the actual end of the output.
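
A minimal sketch of that pattern, with some_command as a placeholder for the actual pipeline:

eot=$'\004'
IFS= read -d "$eot" var < <(some_command; printf '%s' "$eot") && echo success || echo fail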

The feature request is simply to enable read to operate on directed file input (name or fd), with proper exit codes.

Mapfile might be made to work; however, this application is working with variables, not arrays, so I didn't test it.

Regarding substitution, I am pretty sure process substitution (and pipes and loops) fork, but command substitution does not---that is why I optimized loading variables with read, to use command substitution.

Thanks for the eot=$'\004' tip! Nobody has been able to demonstrate generating characters in bash! Where in the man is that? Can variables and/or decimals be used to designate characters? eg how would this be corrected to print "ABC"? for a in {101..103} ; do echo -n $"$a" ; done

I tried mapfile, to load an email header into var; an additional <CR> is added, I'm not sure why.

mapfile -d $'\004' var < <(awk 'NR==1,/^$/ {if (length) print}' <email.txt)


Not sure if there is a side effect of using mapfile to load a variable (vs. an array), but the extra <cr> added to var spoils the idea of using mapfile to load variables with command substitution (or file redirection). This request could equally apply to mapfile loading a var exactly from a file (or stdin). Maybe I'm not doing it right? An option enabling read to use stdin (or a specified file) to load a variable seems an improvement: intuitive, non-conflicting, and useful, from this user's perspective.

Anonymous
Wed 27 Dec 2023 09:15:10 PM UTC, comment #1: 

So you want the contents of a file read into a shell variable? Or the output of a command read into a shell variable?

And the usual command substitution idiom isn't sufficient? By this I mean adding a dummy character to the end of the command's output, then removing it with ${var%?} (or, if you're reading a named file, $(< filename)). Kind of like you use ${eot} in this case. You even get the exit status of the command substitution, though you have to save it yourself and run exit if you want to both preserve it and append the sentinel character.
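
Spelled out as a minimal sketch, with some_command as a stand-in for the command whose output and exit status you want:

var=$(some_command; rc=$?; printf x; exit "$rc")   # sentinel keeps trailing newlines, exit keeps the status
rc=$?
var=${var%?}                                       # strip the sentinel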

The reason to add the character is to prevent command substitution from removing trailing newlines. If you're sure there is only one, or if you only care about one, you can just add it back:

lines+=$'\n'

In addition to command substitution, you can use mapfile to read the entire contents of a file, or process substitution if you want to continue to use the same idiom, into an array variable.

I read the linked issue. It makes me think you might want to investigate your options a little more. For instance, what's wrong with

eot=$'\004'

to assign EOT to a variable?

It's not great for performance to run awk in a command substitution, right? And why did you conclude that process substitution is more efficient than a here-document? (That one I really don't understand, since process substitution forks and creates a new process.)

Anyway, if you're interested in the exit status of the process substitution, there's nothing stopping you from running

wait $!

to get it, as long as you're using read or some other shell builtin to collect its output, so the entire thing isn't run in a new process.
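
A minimal sketch of that, with some_command as a placeholder:

IFS= read -rd '' var < <(some_command)    # read collects the output in the current shell; its own EOF status is ignored
if wait $!; then                          # $! is the pid of the process substitution
    echo "some_command succeeded"
else
    echo "some_command failed with status $?" >&2
fi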

If it's not convenient to alter your command to append the right commands to do what you need, you might consider wrapping them into a shell function and running it in the process substitution.

What would the read builtin buy you in this case over assigning the output of command substitution to a variable or using mapfile to read lines into an array variable?

Chet Ramey <chet>
Group administrator
Tue 26 Dec 2023 05:36:32 AM UTC, original submission:  

Bash read multi-line string with Process Substitution feature request.

Per https://github.com/dylanaraps/pure-bash-bible/issues/144, a read builtin command option to support exiting with return code zero on end-of-file would streamline error checking, simplify regular use, improve performance, and would likely be easy to implement without conflicts. Perhaps -f, to set the delimiter to end-of-file and read stdin or a named file.

When Process Substitution is used to performance-optimize a shell extension app, the following is cumbersome:

eot="$(awk 'BEGIN{printf "%c",4}')"
IFS= read -d "$eot" lines < <(printf "one \ntwo \n${eot}") && echo success || echo fail


With the described feature, reading "$lines" from an arbitrary command is much simpler, especially when printing "$eot" is not convenient:

read -f - lines < <(ls *.data) && echo success || echo fail


Is this a reasonable extension of read command line options?

Anonymous

 
