The GNU Bourne-Again SHell - Support: sr #109300, Feature Request - JOBMAX variable

 
 

sr #109300: Feature Request - JOBMAX variable

Submitter:          None
Submitted:          Fri 21 Apr 2017 04:39:45 PM UTC
Category:           None
Priority:           5 - Normal
Severity:           1 - Wish
Status:             Wont Do
Privacy:            Public
Assigned to:        None
Originator Email:   -email is unavailable-
Open/Closed:        Open
Operating System:   None


Mon 24 Jul 2017 04:27:23 PM UTC, comment #4: 

Regarding comment #3: A drawback of using the "wait" command by itself is that it waits for all background processes to exit before completing. If background processes take varying amounts of time to complete, that can result in periods of fewer executing background processes than intended.
Chet's suggested "wait -n" bash extension (which I was previously unaware of) waits for any one background process to complete, and seems to allow a bash script to loosely emulate the behavior of the ksh JOBMAX at user-specified points, e.g., with a shell loop like:

JOBMAX=8
while read -r pathname
do
  command args "${pathname}" &
  while (( $(jobs -p | wc -l) >= JOBMAX )); do wait -n; done
done < <(find . -type f)

Regarding comment #1: ksh has/had the concept of job pools (though I admittedly never used them). If job pools are investigated for bash, it might be worth looking at the syntax & semantics of the ksh implementation.

Nathan T. Weeks <weeks>
Mon 10 Jul 2017 06:42:58 PM UTC, comment #3: 

I also think there is no need for this.

Example:

#!/bin/bash

N=8

PROCESS() {
    command args "$line"
}

i=0
while read -r line
do
    [[ -f "$line" ]] || continue
    ((i = i % N)); ((i++ == 0)) && wait
    PROCESS &
done < <(find . -type f)

It always produces 8 jobs at a time, and is many times faster than xargs or parallel, especially on SSDs.

nick <nick8484>
Wed 14 Jun 2017 02:32:11 PM UTC, comment #2: 

As George says, you can do this by adopting a different model. It can be as simple as using `wait -n' periodically to maintain a maximum number of outstanding background jobs.
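
For example, an illustrative fragment of that pattern (where "cmd", its input, and the limit of 8 are placeholders):

cmd "$input" &
while (( $(jobs -p | wc -l) >= 8 )); do wait -n; done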

Chet Ramey <chet>
Group administrator
Fri 02 Jun 2017 12:31:04 AM UTC, comment #1: 

Personally I think JOBMAX is a problematic way to approach this problem. Generally speaking, this is a problem for the shell: there are all kinds of configurations that must be set to accomplish different things (IFS, shopt, etc.), but there's little in the way of resources for limiting the scope of this configuration. You set it, it stays set until it's unset, and whatever code you run along the way inherits those settings.

In the case of JOBMAX, for instance, there's the danger (if you carelessly leave JOBMAX set) that you'll run a piece of code that will deadlock waiting for that (JOBMAX+1)th job to start. And there's nothing about JOBMAX that limits it to "just these jobs I'm running here" - if you have 4 background jobs running already, and you set JOBMAX to 4, you can't run another background job.


So basically I oppose the idea of this as a configuration thing that affects the whole shell process. It makes a bad problem (managing the scope of shell configuration directives) worse (by adding another such directive).


I think a better pattern would be to establish a "job pool" data structure to manage the parallel jobs, and set the options for the job pool (number of concurrent jobs, etc.) in that data structure. And then when you want to launch a job that should follow the rules of that job pool, you don't just kick it off as a background process, you allocate an entry in that job pool first - and that is the part that blocks until a job from the pool has ended.

I think a facility like that could be written as a shell script, but some level of integration into the shell itself would result in a cleaner, more reliable system. A shell function solution might look like this:

    $ # Assuming a "job pool" system implemented as shell functions:
    $ allocate-job-slot my-job-pool job-id-var || abort   # Returns when a slot is available, stores an identifier in $job-id-var
    $ cmd &                                               # We have a job slot, so run a job
    $ set-pid-for-job-slot my-job-pool $job-id-var $!     # Make sure the job slot system can track the job

With more complete shell integration it might look more like this:
    $ # Assuming a "job pool" system integrated into the shell itself with a hypothetical (and not really viable) new syntax for running a job in the pool:
    $ cmd &{my-job-pool}         # "Run this job in the background as part of my-job-pool"

...Or alternately, using the "coproc" syntax as a model:
    $ start-job --pidvar pid --pool_name my-job-pool { cmd; }   # PID of newly-launched job is stored in $pid

...Or more ksh-style and (gasp!) object oriented:
    $ my-job-pool.start-job --pidvar pid { cmd; }               # Can a loadable "built-in" provide this kind of functionality? Hmmm...

Introducing a syntax like this means that if someone decides later on that it should also support job pool control by load average or RAM use or something like that, the syntax doesn't change. Jobs managed by "my-job-pool" are still simply managed by "my-job-pool", and a change like that would be implemented in the configuration options used to create the job pool.


I figure full shell integration of something like this, with new syntax and all that, would be a hard sell. But I think JOBMAX is not the way to go. I'd love to see functionality like this in the shell. If I can implement it as a loadable built-in, I will (eventually...)
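
As a purely illustrative sketch of the shell-function variant (assuming bash 4.3 or later for `wait -n' and namerefs, dropping the job-id indirection from the example above, and requiring the pool name to be usable as a variable-name prefix), the pool functions could look something like:

    # create-job-pool POOL MAX -- record the concurrency limit for POOL
    create-job-pool() {
        declare -g "${1}_max=$2"
        declare -ga "${1}_pids"
    }

    # allocate-job-slot POOL -- block until POOL has fewer than MAX live jobs
    allocate-job-slot() {
        local -n _max="${1}_max" _pids="${1}_pids"
        local pid
        local -a live
        while :; do
            live=()
            for pid in "${_pids[@]}"; do      # prune jobs that have exited
                kill -0 "$pid" 2>/dev/null && live+=("$pid")
            done
            _pids=("${live[@]}")
            (( ${#_pids[@]} < _max )) && return 0
            wait -n                           # some background job finished
        done
    }

    # set-pid-for-job-slot POOL PID -- register a just-launched job with POOL
    set-pid-for-job-slot() {
        local -n _pids="${1}_pids"
        _pids+=("$2")
    }

    # Usage ("cmd" is a placeholder):
    create-job-pool my_job_pool 8
    while read -r pathname; do
        allocate-job-slot my_job_pool || break
        cmd "$pathname" &
        set-pid-for-job-slot my_job_pool $!
    done < <(find . -type f)
    wait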

George Caswell <tetsujin>
Fri 21 Apr 2017 04:39:45 PM UTC, original submission:  

When running time-intensive commands many times with different inputs, it is often useful to run multiple background jobs asynchronously to decrease the total time to complete all jobs.
In that case, it is generally desirable to limit the number of concurrently-executing background jobs to respect limits on the system resources consumed by each job (available processor cores, free memory, I/O or network bandwidth, etc.).

GNU Parallel was expressly designed for this purpose, and provides powerful functionality and control at the cost of installing another utility (in addition to its dependency Perl) and learning its (IMHO) sometimes baroque syntax.

Both GNU and BSD xargs provide a -P maxprocs option that can run multiple identical commands (differing only in, e.g., the -I REPLSTR substitutions) concurrently; however, those commands must adhere to a restrictive syntax.
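
For example (illustrative only; "command args" is a placeholder), the following runs up to 8 invocations at a time, one file per invocation:

find . -type f -print0 | xargs -0 -P 8 -I {} command args {}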

ksh has supported (since release ksh93t+ from 2010-03-05) an elegant mechanism for this: the JOBMAX variable, described in the ksh 93u+ 2012-08-01 man page thus:

    JOBMAX   This variable defines the maximum number running background jobs that can run at a time.  When this limit is reached, the shell will wait for a job to complete before starting a new job.

Setting this variable to an appropriate integer permits, e.g., a shell loop over numerous input files to generate a background job for each without oversubscribing the system.
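
For example, a hypothetical ksh93 loop (where "command args" is a placeholder; behavior per the man-page description quoted above):

JOBMAX=8                          # at most 8 background jobs at once
for pathname in ./input/*; do     # hypothetical input files
    command args "$pathname" &    # the shell waits here once 8 jobs are already running
done
wait                              # wait for the jobs still running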

It would be very useful to have this (or similar) capability incorporated into a future bash release.

Anonymous

 


 


 



Follows 1 latest change.

Date        Changed by  Updated Field  Previous Value  =>  Replaced by
2017-06-14  chet        Status         None            =>  Wont Do
