bug #42288: limit parallelism based on available memory

Submitter:  Dave Yost <yost>
Submitted:  Sun 04 May 2014 10:03:17 PM UTC

Severity:           3 - Normal
Item Group:         Enhancement
Status:             None
Privacy:            Public
Assigned to:        None
Open/Closed:        Open
Component Version:  None
Operating System:   None
Fixed Release:      None
Triage Status:      None


Sun 12 May 2019 08:21:43 PM UTC, comment #3: 

Limiting jobs based on available memory (like -l, but for memory) would be significantly less complicated than the original request, which was to express how much memory a recipe might consume and have make defer recipes that are too large, running recipes that fit instead.

However, even though a "memory limit" feature is more straightforward to implement, it still has many issues to resolve first: primarily, how do we measure memory used (or, probably more reasonably, memory still available)?  And how do we do it portably?
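
For concreteness, here is one non-portable way to answer the "memory still available" question: reading MemAvailable from /proc/meminfo.  This is a minimal sketch assuming Linux 3.14 or later (the helper name is mine, not anything in make); other systems need entirely different code, which is exactly the portability problem:

    #include <stdio.h>

    /* Return estimated available memory in kB, or -1 if unknown. */
    long mem_available_kb(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                break;
        fclose(f);
        return kb;
    }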

Paul D. Smith <psmith>
Group administrator
Sat 11 May 2019 11:39:54 PM UTC, comment #2: 

I support this request for improvement.
I have a large build in which only one subproject contains rules that need a lot of memory. To be able to build it, I have to size the number of jobs according to that subproject.
It would be nice to be able to block the launch of new jobs when the available physical memory is below a certain limit: like the -l flag, but for memory.

Alain D <alaind>
Sun 04 May 2014 11:12:16 PM UTC, comment #1: 

This is a tricky feature.  First, even defining "available memory" is difficult.  Is it physical memory only, not swap?  Is it unused memory, or total memory?

Second, determining the amount of system memory available is extremely system-specific: there's no portable function that does it.  On POSIX systems sysconf() gets you SOME information but it's not available everywhere.
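
As an illustration of how far sysconf() gets you, here is a sketch (the function name is hypothetical).  Note that _SC_PAGESIZE is POSIX, but _SC_AVPHYS_PAGES is a glibc/Solaris extension that other systems don't have:

    #include <unistd.h>

    /* Return available physical memory in bytes, or -1 if this
       system doesn't expose it through sysconf(). */
    long avail_phys_bytes(void)
    {
    #ifdef _SC_AVPHYS_PAGES
        long pages = sysconf(_SC_AVPHYS_PAGES);
        long psize = sysconf(_SC_PAGESIZE);
        if (pages > 0 && psize > 0)
            return pages * psize;
    #endif
        return -1;
    }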

Third, how will the amount of memory required by each target be specified?  Are you just going to say that the maximum amount per target is X and all targets are assumed to use the maximum?  It seems like that could result in a big loss of parallelism if most targets are smaller.

It might be more interesting if make provided a generic method for counting resources available and used, and let the caller provide the details.

You can imagine that today's parallelism feature is a simplified version of this: the user provides the amount of the resource (number of jobs that can be run in parallel), and the cost to run each target is always one.

But suppose we allowed targets to specify they cost two or more resource elements to run?  Maybe a linker runs in parallel itself and so requires multiple cores.

Then you can imagine that a resource could represent something other than a CPU; for example, memory.  Now you can define that certain targets cost more memory than others, and the person invoking make will provide the total amount of memory available.
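
As a rough sketch of that idea (hypothetical names, not make's actual internals): the user supplies a total per resource, each target declares a cost, and today's -jN is the special case where the total is N and every cost is 1.

    struct resource {
        const char *name;        /* e.g. "jobs" or "memory" */
        unsigned long total;     /* supplied by the user */
        unsigned long in_use;    /* sum of costs of running targets */
    };

    /* Start a target only if its declared cost still fits. */
    int try_acquire(struct resource *r, unsigned long cost)
    {
        if (r->in_use + cost > r->total)
            return 0;            /* defer this target for now */
        r->in_use += cost;
        return 1;
    }

    void release(struct resource *r, unsigned long cost)
    {
        r->in_use -= cost;
    }

(This is the single-process view; across recursive makes the counts would have to be coordinated through something like the jobserver, discussed below.)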

The big problem with this is deadlocks.  Suppose a target needs 5 job slots but can only get 2 because the others are in use elsewhere.  Then either the target keeps the 2 and waits for the rest, which reduces parallelism across the system, or it frees the 2 and tries for the entire 5 later, which means those jobs will probably tend to wait a lot.  Maybe that's not so bad.

Then if you introduce multiple resources (CPU and memory, for example), you have even bigger problems: what if you get all the CPU but not the memory?  Again you'll have to free everything you got and try again later.
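
One way out, continuing the sketch above, is to make acquisition all-or-nothing across every resource, so a target never sits on a partial allocation:

    /* Acquire every resource or none; the caller retries later.
       Reuses try_acquire()/release() from the earlier sketch. */
    int try_acquire_all(struct resource *rs, const unsigned long *costs, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            if (!try_acquire(&rs[i], costs[i])) {
                while (i-- > 0)      /* roll back what we did get */
                    release(&rs[i], costs[i]);
                return 0;
            }
        return 1;
    }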

It can be done, of course, but requires thought.

And there are some technical issues; for example, on UNIX-y systems, given today's implementation, the maximum number of "resource items" we can have is the number of bytes in a pipe, typically 4K.  That's probably enough for now (even if the resources represented memory, you'd have each item stand for a coarse unit like 100M or 1G or something) but maybe not forever.  We'd need multiple pipes, or else switch the POSIX-based systems to use POSIX semaphores (like Windows), or something.
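
To see why the pipe is the limit: in today's jobserver each resource unit is literally one byte sitting in a pipe, so the total count can never exceed the pipe's capacity.  A simplified sketch (real make also deals with EINTR, blocking vs. non-blocking reads, and so on):

    #include <unistd.h>

    /* Taking a unit means draining one byte from the pipe; this
       blocks until some other job returns one. */
    int acquire_token(int jobserver_rfd)
    {
        char token;
        return read(jobserver_rfd, &token, 1) == 1;
    }

    /* Returning the unit means writing the byte back. */
    void release_token(int jobserver_wfd)
    {
        char token = '+';
        while (write(jobserver_wfd, &token, 1) != 1)
            ;                    /* a unit must never be lost */
    }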

Paul D. Smith <psmith>
Group administrator
Sun 04 May 2014 10:03:17 PM UTC, original submission:  

In our Makefile, there is a set of parallelizable jobs that use a lot of memory.

It would be nice to run as many in parallel as possible without thrashing in virtual memory.

It would be nice if there were a command-line option to allow one to express this constraint.

The option might say how much memory the largest job in the set is expected to require.

Dave Yost <yost>

 


 


 

Carbon-Copy List
  • alaind (Posted a comment)
  • saturn (interested)
  • psmith (Posted a comment)
  • yost (Submitted the item)
