bug #18396: stack size setrlimit call interacts badly with Solaris/x86 kernel bug

Submitter:           Scott McPeak <smcpeak>
Submitted:           Tue 28 Nov 2006 08:16:46 PM UTC

Severity:            3 - Normal          Item Group:        Enhancement
Status:              Fixed               Privacy:           Public
Assigned to:         psmith              Open/Closed:       Closed
Component Version:   3.81                Operating System:  POSIX-Based
Fixed Release:       3.82                Triage Status:     None

Mon 02 Jan 2023 07:48:16 PM UTC, comment #7: 

Starting with GNU make 3.82, we reset the stack limit back to its original value before we exec the child.

Also note that if you are using posix_spawn() instead of fork/exec, we will never reset the stack limit since posix_spawn() doesn't provide a way to reset it.
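
For readers wondering what that fix looks like in practice, here is a minimal sketch of the save-and-restore pattern (illustrative only; the helper names are made up and this is not the actual GNU make source):

  /* Sketch of the save/restore pattern described above; not the actual
     GNU make source.  The helper names are made up for illustration. */
  #include <sys/types.h>
  #include <sys/resource.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static struct rlimit saved_stack_limit;

  /* Call this once at startup, before make raises its own stack limit. */
  static void save_original_stack_limit(void)
  {
      getrlimit(RLIMIT_STACK, &saved_stack_limit);
  }

  /* Fork a child, put the user's original limit back, then exec. */
  static pid_t run_child(char *const argv[])
  {
      pid_t pid = fork();
      if (pid == 0) {
          setrlimit(RLIMIT_STACK, &saved_stack_limit);
          execvp(argv[0], argv);
          _exit(127);               /* exec failed */
      }
      return pid;
  }

  int main(void)
  {
      /* Example: the child runs "ulimit -s", printing the soft stack
         limit it inherited. */
      char *argv[] = { "/bin/sh", "-c", "ulimit -s", NULL };
      save_original_stack_limit();
      /* ... make would raise its own stack limit and do its work here ... */
      run_child(argv);
      wait(NULL);
      return 0;
  }

Because the limits are restored between fork() and exec(), the child starts with exactly the limits that were in force when make itself was invoked.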

Paul D. Smith <psmith>
Group administrator
Fri 29 Jun 2012 08:55:00 AM UTC, comment #6: 

I've recently been caught out by this setrlimit call.  I believe what happened is that on Linux, the default stack size for threads created with pthread_create (in a child process) is calculated from the stack limit.


From the pthread_create man page:

       On Linux/x86-32, the default stack size for a new thread is 2 megabytes.
       Under the NPTL threading implementation, if the RLIMIT_STACK soft resource
       limit at the time the program started has any value other than "unlimited",
       then it determines the default stack size of new threads.

If we start a program that calls pthread_create from a shell that has a soft limit of 8192k and no hard limit, then the new thread will get an 8192k stack.  If the same program is started from make, then it will get a stack of 2048k, because make removes the soft limit.

I understand that calling pthread_create without setting the stack size is not recommended and that the defaulting mechanism is suspect. Still, it's surprising to find that make mucks with the limits.

A way out could be to actually restore the limits before the child is exec'ed, although that may not be legal, or it may run into problems if the stack is in fact deeper than the original limit.
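
A rough way to observe this on GNU/Linux (a hypothetical test program, not part of make): build it with "cc -pthread", then run it once from an interactive shell and once from a recipe under an affected make.

  /* Hypothetical test program (not part of make): print the inherited
     RLIMIT_STACK soft limit and the stack size actually given to a
     thread created with default attributes.  pthread_getattr_np() is a
     GNU extension. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <stdio.h>
  #include <sys/resource.h>

  static void *report(void *arg)
  {
      pthread_attr_t attr;
      void *addr = NULL;
      size_t size = 0;

      (void)arg;
      pthread_getattr_np(pthread_self(), &attr);  /* attributes of the running thread */
      pthread_attr_getstack(&attr, &addr, &size);
      printf("default thread stack: %zu KiB\n", size / 1024);
      pthread_attr_destroy(&attr);
      return NULL;
  }

  int main(void)
  {
      struct rlimit r;
      pthread_t t;

      getrlimit(RLIMIT_STACK, &r);
      printf("RLIMIT_STACK soft limit: %llu\n", (unsigned long long)r.rlim_cur);

      pthread_create(&t, NULL, report, NULL);     /* default attributes */
      pthread_join(t, NULL);
      return 0;
  }

Under a make that raises the soft limit to 'unlimited', the second line drops from the shell's 8192 KiB to the 2 MiB NPTL default described in the man page.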

Gabor Melis <melisgl>
Wed 10 Dec 2008 02:24:03 PM UTC, comment #5: 

I think this is not an enhancement request but rather a bug-fix request.

Anonymous
Fri 18 Jul 2008 07:07:06 PM UTC, comment #4: 

I experienced the converse effect: I could run a program (kpdf) from the shell, but it would crash if started from make.

I used a large hard limit (~1 GB) for the stack size and around 100 MB as the soft limit (I use it for a scientific app).  My machine has 2 GB of RAM.

kpdf fails to start a helper thread with this message:

  QThread::start: thread creation error: Cannot allocate memory

The problem seems to be that make raises the soft limit to the hard limit and runs its children with this setting.

One can certainly argue that kpdf behaves badly here; however, make should not change the environment the user specified.
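
A simple way to see what the children actually inherit (a hypothetical probe program, not part of make or kpdf): run it once from an interactive shell and once from a makefile recipe; under an affected make the printed soft limit jumps up to the hard limit.

  /* Hypothetical probe: print the stack limits this process inherited. */
  #include <stdio.h>
  #include <sys/resource.h>

  static void show(const char *label, rlim_t v)
  {
      if (v == RLIM_INFINITY)
          printf("%s: unlimited\n", label);
      else
          printf("%s: %llu KiB\n", label, (unsigned long long)v / 1024);
  }

  int main(void)
  {
      struct rlimit r;
      if (getrlimit(RLIMIT_STACK, &r) != 0)
          return 1;
      show("soft stack limit", r.rlim_cur);
      show("hard stack limit", r.rlim_max);
      return 0;
  }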

Anonymous
Wed 29 Nov 2006 05:56:12 PM UTC, comment #3: 

Regarding efficiency:

First, the main point of my report is not "alloca sucks" but rather
that setrlimit is an unexpected thing for make to do.

But having opened up the can of worms, let me play with them a little
(I don't really want to start a big argument, but I do want to defend
my previous statements):

In my experience, the dominant effect on program performance, after
choice of algorithm, is data locality: frequently re-using the same or
nearby memory locations.  Instruction count is at best a second-order
effect.  Andrew Appel has a lovely paper where he "proves" that (of
all things) garbage collection is faster than stack allocation, by
exclusively focusing on instruction count:

  http://citeseer.ist.psu.edu/appel87garbage.html
 
Of course, the analysis is fatally flawed by ignoring locality.

My point regarding alloca is that if I have a lot of data, it cannot
all be "hot" (high locality).  If I put cold data on the stack, the
stack as a whole has poorer locality, because the hot regions (stack
frames) are interspersed with cold regions (the big chunks of data).
So if I intend to allocate infrequently-accessed data, it is better to
put it on the heap where it won't interfere with the locality of stack
accesses.

You say that only "medium" sized data ends up on the stack, which
certainly sounds like a reasonable approach, but if there is enough of
it to warrant calling setrlimit, then I have to suspect that some of
it would be better placed elsewhere, from a strictly
performance-oriented point of view.

Regarding portability:

Indeed, there is no portable way to detect running out of stack space,
whether by alloca or not, though alloca obviously makes running out of
stack space more likely.  Of course, setrlimit is not all that
portable either, so I was assuming that nonportable solutions were on
the table.  Toward that end, an appropriate signal handler (which will
need to use nonportable tricks to distinguish stack overflow from
other types of segfaults), or the GCC -fstack-check argument (when
compiling with GCC, which is after all the common case) may be options.
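
For concreteness, here is a hedged sketch of the nonportable signal-handler idea (every name and the overflow heuristic are illustrative, and the details vary by platform): the handler runs on an alternate stack installed with sigaltstack(), and a fault address falling at or just below the region the stack may legally occupy is guessed to be stack exhaustion.

  /* Hedged sketch of the nonportable idea above; illustrative only. */
  #define _XOPEN_SOURCE 700
  #include <signal.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/resource.h>
  #include <unistd.h>

  static char altstack[64 * 1024];   /* the handler runs here, not on the
                                        exhausted program stack */
  static uintptr_t stack_top;        /* address of a local in main() */
  static uintptr_t stack_limit;      /* RLIMIT_STACK soft limit (assumed finite) */

  static void segv_handler(int sig, siginfo_t *info, void *ctx)
  {
      uintptr_t fault = (uintptr_t)info->si_addr;
      (void)sig; (void)ctx;
      /* Crude heuristic: the stack grows downward on x86, so a fault at
         or just below the lowest address the stack may occupy is taken
         to be stack exhaustion rather than a stray pointer. */
      if (fault <= stack_top && fault >= stack_top - stack_limit - 4096) {
          static const char msg[] = "fatal: out of stack space\n";
          write(STDERR_FILENO, msg, sizeof msg - 1);
      }
      _exit(2);
  }

  int main(void)
  {
      int probe;                     /* rough marker for the top of the stack */
      struct rlimit r;
      stack_t ss = { 0 };
      struct sigaction sa;

      stack_top = (uintptr_t)&probe;
      getrlimit(RLIMIT_STACK, &r);
      stack_limit = (uintptr_t)r.rlim_cur;

      ss.ss_sp = altstack;
      ss.ss_size = sizeof altstack;
      sigaltstack(&ss, NULL);

      memset(&sa, 0, sizeof sa);
      sa.sa_sigaction = segv_handler;
      sa.sa_flags = SA_SIGINFO | SA_ONSTACK;
      sigaction(SIGSEGV, &sa, NULL);

      /* ... the rest of the program (e.g. heavy alloca() use) ... */
      return 0;
  }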

Moreover, I would argue that the architectural decision to make heavy
reliance on alloca is what leads us into nonportable territory in the
first place.  It is not uncommon to find systems with ~10MB or less of
stack space.  The setrlimit call is a band-aid; in some circumstances
it will help, but that call is not an antidote to running out of stack
space.

Of course, the big advantage of alloca is that it is easy to program
with.  I would not advocate suddenly rewriting GNU make to use malloc
everywhere, since (as you say) that will likely introduce many more
serious bugs than this one in the short term.  (If it was written in
C++ there would be better options, but that ship too has sailed.)

So, at most, I would suggest doing some profiling to find the biggest
consumers of alloca space and change just them (hopefully only a
handful) to use [x]malloc instead.  The primary motivation would be
enhanced portability; the efficiency argument is just meant to allay
fears that this will make everything slower.

Regarding setrlimit itself:

Yes, I would consider this bug (and I do consider it a bug...) to be
fixed if 'make' would reliably return the resource limits to their
original settings before exec'ing its children.

Scott McPeak <smcpeak>
Wed 29 Nov 2006 02:28:44 AM UTC, comment #2: 

I wrote:

> if large amounts of memory are needed they are allocated on the stack


Of course I meant on the heap :-/.

Paul D. Smith <psmith>
Group administrator
Wed 29 Nov 2006 02:27:17 AM UTC, comment #1: 


> If 'make' needs to allocate a large amount (megabytes) of data,
> it would be better to do so on the heap, both from a
> portability standpoint (the stack size) and from a performance
> standpoint (it messes up the normally good locality of stack
> access).


> Alternatively, if it must allocate on the stack, then detecting
> and complaining about a too-low limit would be better in my
> opinion than silently changing it. It's easy to uncap the stack
> size explicitly in build scripts and whatnot when truly
> necessary.


Unfortunately, none of the above is true.  Make needs extra stack space because it makes extensive use of the alloca() function.  It does not allocate huge chunks of memory on the stack: if large amounts of memory are needed they are allocated on the stack.  alloca() is used for modest-sized memory chunks, to hold filenames and smaller strings (make is, at heart, a string manipulation program).  Using heap, which requires a system call to get more memory, for all of these small allocations would be much less efficient than using the stack.  Not to mention the overhead of having to allocate these memory segments to be used for a short time, then freeing them again, over and over.

However, because make is also very recursive, even though no single alloca() call is very large it's quite possible for the entirety of the alloca() invocations for a complex makefile to use quite a bit of stack.

Of course, there's no way to determine how much stack will be used a priori, since it depends entirely on the construction of the makefile, exactly which functions are invoked in which order, and even the current state of the build (whether various targets need to be rebuilt or not).  Further, there is no portable way that I'm aware of to determine how much stack is left before the program runs out.

Finally, there is no way to detect an out of stack error and exit gracefully with a warning as you suggest: the behavior of alloca() is undefined if you run out of stack space (it doesn't just return NULL as malloc() etc. do).

So.  In order to avoid the need for extra stack in GNU make, all the invocations of alloca() would need to be rewritten to use xmalloc(), and those functions would need to be changed to carefully free all that memory whenever the function exits, to avoid memory leaks... with what must certainly be a decrease in performance (although we can't be sure how much).
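
To illustrate the kind of rewrite being discussed (a hypothetical helper, not actual GNU make code), compare an alloca() version, whose buffer disappears automatically on return, with an xmalloc() version, where every exit path has to free the buffer explicitly:

  /* Hypothetical helper, not actual GNU make code: the same small
     string-building routine written with alloca() and with xmalloc(). */
  #include <alloca.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Like malloc(), but dies instead of returning NULL. */
  static void *xmalloc(size_t n)
  {
      void *p = malloc(n);
      if (p == NULL) {
          fputs("virtual memory exhausted\n", stderr);
          exit(2);
      }
      return p;
  }

  /* Before: the buffer is reclaimed automatically on every return. */
  static int has_prefix_alloca(const char *dir, const char *name,
                               const char *prefix)
  {
      char *path = alloca(strlen(dir) + strlen(name) + 2);
      sprintf(path, "%s/%s", dir, name);
      return strncmp(path, prefix, strlen(prefix)) == 0;
  }

  /* After: every exit path now has to free the buffer explicitly. */
  static int has_prefix_heap(const char *dir, const char *name,
                             const char *prefix)
  {
      char *path = xmalloc(strlen(dir) + strlen(name) + 2);
      int result;
      sprintf(path, "%s/%s", dir, name);
      result = strncmp(path, prefix, strlen(prefix)) == 0;
      free(path);
      return result;
  }

  int main(void)
  {
      printf("%d\n", has_prefix_alloca("/usr", "local", "/usr/"));
      printf("%d\n", has_prefix_heap("/usr", "local", "/usr/"));
      return 0;
  }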

On the other hand, as you point out, the problem on Solaris is a bug in the kernel which you can hardly blame GNU make for.

However, your point about programs invoked by make inheriting the setrlimit() setting is definitely something that seems problematic.  Perhaps GNU make could change the stack limit back to what it was after it forks but before it execs its child.  I wonder what happens if you change a limit to something too small for the current process's resources?

Paul D. Smith <psmith>
Group administrator
Tue 28 Nov 2006 08:16:46 PM UTC, original submission:  

I recently experienced the following mysterious phenomenon:

  • Running a program from the shell, it crashes (it dereferences NULL
    on purpose; that is all it does).

  • Invoking that same program from within a Makefile, it does not
    crash.  Instead, it happily reads/writes page 0.

After considerable investigation, I have discovered that the cause is
two interacting issues:

  • There is a kernel bug in Solaris/x86 that causes page 0 to be mapped
    if the stack limit is 'unlimited'; see
    http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6374692

  • GNU make was modified on 2001-09-06 (between 3.79.1 and 3.80) by
    Paul Eggert to "Get rid of any avoidable limit on stack size" (quote
    from ChangeLog).  See the 'setrlimit' call in main.c; a paraphrased
    sketch of that call follows below.
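
For reference, the effect of that change is roughly the following (a paraphrase of the idea, not the exact main.c source):

  /* Paraphrase of the main.c change (not the exact source): raise the
     stack soft limit to whatever the hard limit allows, which on this
     Solaris kernel can mean "unlimited". */
  #include <sys/resource.h>

  static void remove_stack_limit(void)
  {
      struct rlimit r;
      if (getrlimit(RLIMIT_STACK, &r) == 0 && r.rlim_cur != r.rlim_max) {
          r.rlim_cur = r.rlim_max;      /* soft limit := hard limit */
          setrlimit(RLIMIT_STACK, &r);  /* inherited by every child */
      }
  }

  int main(void)
  {
      remove_stack_limit();
      /* ... proceed as make's main() would ... */
      return 0;
  }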

Consequently, whether a program is running under 'make' greatly
affects how it behaves, as the child processes inherit the resource
limits as well.

While the kernel issue is clearly a bug, I think 'make' behavior is a
misfeature as well.  Generally one expects resource limits to not be
silently changed by shells and shell-like programs such as 'make'.
That 'make' does this is troubling; among other things, diagnosing the
consequences is difficult (I investigated many other possible causes
before finding it).  The Solaris kernel bug is just one way such a
silent change might be manifested.

If 'make' needs to allocate a large amount (megabytes) of data, it
would be better to do so on the heap, both from a portability
standpoint (the stack size) and from a performance standpoint (it
messes up the normally good locality of stack access).

Alternatively, if it must allocate on the stack, then detecting and
complaining about a too-low limit would be better in my opinion than
silently changing it.  It's easy to uncap the stack size explicitly
in build scripts and whatnot when truly necessary.

Output of uname -a:

  SunOS tainted.sf.coverity.com 5.10 Generic_118844-26 i86pc i386 i86pc Solaris

Output of make -v:

  GNU Make 3.81
  Copyright (C) 2006  Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.
  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
  PARTICULAR PURPOSE.

  This program built for i386-pc-solaris2.10

Scott McPeak <smcpeak>

 

5 latest changes:

    Date        Changed by  Updated Field  Previous Value  =>  Replaced by
    2023-01-02  psmith      Status         None                Fixed
                            Assigned to    None                psmith
                            Open/Closed    Open                Closed
                            Fixed Release  None                3.82
    2006-11-29  psmith      Item Group     Bug                 Enhancement
