bug #42288: limit parallelism based on available memory

Submitted by:  Dave Yost <yost>
Submitted on:  Sun 04 May 2014 10:03:17 PM UTC  
 
Severity: 3 - Normal           Item Group: Enhancement
Status: None                   Privacy: Public
Assigned to: None              Open/Closed: Open
Component Version: None        Operating System: None
Fixed Release: None            Triage Status: None

 

Sun 04 May 2014 11:12:16 PM UTC, comment #1:

This is a tricky feature. First, even defining "available memory" is difficult. Is it physical memory only, not counting swap? Is it unused memory, or total memory?

Second, determining the amount of system memory available is extremely system-specific: there's no portable function that does it. On POSIX systems sysconf() gets you some of this information, but even that isn't available everywhere.
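Just to give a flavor of what's involved (this is not anything make does today), here's the kind of query you can make with sysconf(). Note that _SC_PHYS_PAGES and _SC_AVPHYS_PAGES are glibc extensions rather than standard POSIX, so even this much is not portable:

    /* Sketch: ask sysconf() about memory where the glibc extensions exist.
     * _SC_PHYS_PAGES and _SC_AVPHYS_PAGES are not required by POSIX, so this
     * works on GNU/Linux (and some BSDs) but not necessarily elsewhere. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

    #if defined(_SC_PHYS_PAGES) && defined(_SC_AVPHYS_PAGES)
        long phys  = sysconf(_SC_PHYS_PAGES);    /* total physical pages     */
        long avail = sysconf(_SC_AVPHYS_PAGES);  /* "available" pages; what
                                                    that means is the fuzzy
                                                    part discussed above     */
        if (pagesize > 0 && phys > 0 && avail > 0) {
            printf("total:     %lld MiB\n", (long long)phys  * pagesize >> 20);
            printf("available: %lld MiB\n", (long long)avail * pagesize >> 20);
            return 0;
        }
    #endif
        fprintf(stderr, "no portable way to report memory here\n");
        return 1;
    }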

Third, how will the amount of memory required by each target be specified? Are you just going to say that the maximum amount per target is X and all targets are assumed to use the maximum? It seems like that could result in a big loss of parallelism if most targets are smaller.

It might be more interesting if make provided a generic method for counting resources available and used, and let the caller provide the details.

You can imagine that today's parallelism feature is a simplified version of this: the user provides the amount of the resource (number of jobs that can be run in parallel), and the cost to run each target is always one.

But suppose we allowed targets to specify they cost two or more resource elements to run? Maybe a linker runs in parallel itself and so requires multiple cores.

Then you can imagine that a resource could represent something other than a CPU; for example, memory. Now you can define that certain targets cost more memory than others, and the person invoking make will provide the total amount of memory available.

The big problem with this is deadlock. Suppose a target needs 5 job slots but can only get 2 because the others are in use elsewhere. Then either the target keeps the 2 and waits for the rest, which reduces parallelism throughout the system, or it frees the 2 and retries for the full 5 later, which probably means those big jobs will tend to wait a lot. Maybe that's not so bad.
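To make the partial-acquisition problem concrete, here's a toy sketch (not make's actual implementation) of a pipe-backed token pool in the spirit of the current jobserver: the pipe is pre-loaded with one byte per resource unit, reading bytes acquires units, and writing them back releases them. The big job that wants 5 units only gets whatever happens to be left:

    /* Toy sketch of a pipe-backed resource pool in the spirit of the current
     * jobserver: the pipe is pre-loaded with one byte per unit, reading bytes
     * acquires units, writing them back releases them.  Illustration only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int pool[2];                   /* [0] = read end, [1] = write end */

    static int pool_init(unsigned total)
    {
        char tokens[total];
        if (pipe(pool) < 0)
            return -1;
        memset(tokens, '+', total);
        return write(pool[1], tokens, total) == (ssize_t)total ? 0 : -1;
    }

    /* Grab up to 'need' units without blocking; returns how many we got. */
    static unsigned pool_try_acquire(unsigned need)
    {
        char buf[need];
        int flags = fcntl(pool[0], F_GETFL);
        fcntl(pool[0], F_SETFL, flags | O_NONBLOCK);
        ssize_t got = read(pool[0], buf, need);
        fcntl(pool[0], F_SETFL, flags);
        return got > 0 ? (unsigned)got : 0;
    }

    static void pool_release(unsigned count)
    {
        if (count == 0)
            return;
        char buf[count];
        memset(buf, '+', count);
        write(pool[1], buf, count);
    }

    int main(void)
    {
        if (pool_init(8) < 0)                      /* 8 units in total      */
            return 1;

        unsigned elsewhere = pool_try_acquire(6);  /* other jobs hold 6     */
        unsigned got = pool_try_acquire(5);        /* a big job wants 5 ... */
        printf("wanted 5, got %u\n", got);         /* ... only 2 were left  */

        if (got < 5)
            pool_release(got);   /* option (b): give them back, retry later */
        pool_release(elsewhere);
        return 0;
    }

Either way the caller has to handle the shortfall, by holding the partial allocation and waiting, or by putting it back and retrying, which is exactly the trade-off described above.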

Then if you introduce multiple resources (CPU and memory, for example) you have even bigger problems: what if you get all the CPU you need but not the memory? Again, you'll have to free everything you've got and try again later.

It can be done, of course, but requires thought.

And there are some technical issues; for example, on UNIX-y systems, given today's implementation, the maximum number of "resource items" we can have is the number of bytes that fit in a pipe, typically 4K. That's probably enough for now (even if the resource represented memory, you'd make the count much more granular, like 100M or 1G or something), but maybe not forever. We'd need multiple pipes, or else switch the POSIX-based systems to use POSIX semaphores (like Windows does), or something.
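For comparison, a POSIX named semaphore gives the same take-one/put-one semantics without the pipe-capacity ceiling (the counter can go up to SEM_VALUE_MAX). A minimal sketch, with a made-up semaphore name:

    /* Sketch: a POSIX named semaphore as the shared slot counter instead of
     * bytes in a pipe; sem_wait() takes one unit, sem_post() returns it, and
     * the counter's ceiling is SEM_VALUE_MAX rather than the pipe size.
     * The semaphore name below is made up for the example. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create (or open) a semaphore pre-loaded with 8 units. */
        sem_t *slots = sem_open("/example-jobserver", O_CREAT, 0600, 8);
        if (slots == SEM_FAILED) {
            perror("sem_open");
            return 1;
        }

        sem_wait(slots);                   /* take one unit (blocks if none) */
        /* ... run one job ... */
        sem_post(slots);                   /* give the unit back             */

        sem_close(slots);
        sem_unlink("/example-jobserver");
        return 0;
    }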

Paul D. Smith <psmith>
Project Administrator
Sun 04 May 2014 10:03:17 PM UTC, original submission:

In our Makefile, there is a set of parallelizable jobs that use a lot of memory.

It would be nice to run as many in parallel as possible without thrashing in virtual memory.

It would be nice if there were a command-line option to allow one to express this constraint.

The option might say how much memory the largest job in the set is expected to require.

Dave Yost <yost>

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

