bug #55797: patch-2.7.6 doesn't like unlimited RLIMIT_NOFILE.

Submitter:  Henrik Grubbström <grubba>
Submitted:  Thu 28 Feb 2019 02:55:00 PM UTC
   
 
Category:  None Severity:  3 - Normal
Item Group:  None Status:  Fixed
Privacy:  Public Assigned to:  None
Open/Closed:  Closed

Fri 28 Jun 2019 09:35:38 AM UTC, comment #11: 

Thanks, I have tried to improve the patch description based on the feedback from comment #10; see the git repository.

Closing cached file descriptors only on EMFILE / ENFILE errors is difficult: those failures can happen in several places in the code, and also inside library functions, which makes this pretty hard to get working reliably.

Andreas Gruenbacher <agruen>
Group administrator
Fri 28 Jun 2019 09:03:06 AM UTC, comment #10: 

The patch in file #47138 looks unnecessarily complicated to me.

It looks like it changes the cache hash table size to a fixed size of 8 (or rather 7 after making it a prime), and then disables use of the cache in the RLIM_INFINITY case?

If so, I have no problem believing that the patch solves the original issue, but...

Since the only use for the hash table is as a cache (holding probably only a few tens of entries), I don't see any reason not to just use a fixed size. The one exception might be a very low RLIMIT_NOFILE, but I believe that case would be better handled by emptying the cache on EMFILE and/or ENFILE.
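
Something like this minimal sketch of the fixed-size idea (purely illustrative, not the actual patch in file #47138; 7 is just an arbitrary small prime, and the identifiers are the ones from the quoted safe.c):

/* Sketch only: fixed-size dirfd cache, independent of RLIMIT_NOFILE.  */
static void init_dirfd_cache (void)
{
  /* The cache only ever holds a few tens of entries, so a small
     fixed table size should be enough.  */
  max_cached_fds = 7;

  cached_dirfds = hash_initialize (max_cached_fds,
                                   NULL,
                                   hash_cached_dirfd,
                                   compare_cached_dirfds,
                                   NULL);
  if (!cached_dirfds)
    xalloc_die ();
}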

Henrik Grubbström <grubba>
Thu 27 Jun 2019 03:32:50 PM UTC, comment #9: 

Could you guys please verify that the patch in file #47138 fixes the problem and report if it causes any other issues?  Thanks.

Andreas Gruenbacher <agruen>
Group administrator
Thu 27 Jun 2019 03:22:18 PM UTC, comment #8: 

I'm curious what the "Don't Crash" patch does if the limit is quite high. Does it still bog down the system as mentioned in the original report?

My fix is just two lines long and basically says "if the limit is between 1 and 1024, use it; otherwise use the default value of 8."

In other words, a limit of 0, infinite, or greater than 1024 is assumed to be outside the practical range we want to use.
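
A minimal sketch of that kind of check in init_dirfd_cache, reusing the nofile and max_cached_fds variables from safe.c (the exact bounds and placement here are illustrative, not the committed fix):

  max_cached_fds = 8;
  if (getrlimit (RLIMIT_NOFILE, &nofile) == 0
      && nofile.rlim_cur >= 1 && nofile.rlim_cur <= 1024)
    /* The limit is in a sane range, so derive the cache size from it.  */
    max_cached_fds = MAX (nofile.rlim_cur / 4, max_cached_fds);
  /* Otherwise (0, greater than 1024, or RLIM_INFINITY) keep the default of 8.  */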

Jesse <newguy>
Thu 27 Jun 2019 03:14:58 PM UTC, comment #7: 

File #47138 seems to work for me, but I've only tested it in the RLIMIT_NOFILE == 1024 and RLIMIT_NOFILE == RLIM_INFINITY cases.

Patch is potentially faster if it can keep more file descriptors open in parallel, so I don't want to introduce arbitrary limits.  (Even if '640K ought to be enough for anybody' ...)

I don't know what your current fix looks like.

Andreas Gruenbacher <agruen>
Group administrator
Thu 27 Jun 2019 02:54:23 PM UTC, comment #6: 

Has the "Don't Crash" patch been tested? It looks like, if there is no limit set, that it sets the max_cached_fds variable to whatever infinity is, which sounds like what is causing the lock-up in the first place. I think this seems dangerous if we don't know what infinity is defined as on each platform that uses GNU patch.

My current fix is smaller and should (hopefully) work regardless of what the limit is set to or what infinity is defined as on the host platform. I'd like to keep things simple, unless the "Don't Crash" patch offers an extra benefit I'm not aware of?

Jesse <newguy>
Thu 27 Jun 2019 09:17:01 AM UTC, comment #5: 

Does patch 0001-Don-t-crash-when-RLIMIT_NOFILE-is-set-to-RLIM_INFINI.patch help?

(file #47138)

Andreas Gruenbacher <agruen>
Group administrator
Wed 26 Jun 2019 08:55:30 PM UTC, comment #4: 

I have applied a fix for this that does a simple range check to avoid really big values when the RLIMIT_NOFILE value is unlimited. Hopefully we can get it tested and into the next version.

Jesse <newguy>
Mon 24 Jun 2019 08:29:02 PM UTC, comment #3: 

Thank you for running this test. This gives me a pretty good idea for fixing the issue.

Jesse <newguy>
Mon 24 Jun 2019 10:15:43 AM UTC, comment #2: 

As I don't have the gnu-patch source at hand, I opted for a trivial program instead:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
  struct rlimit nofile;

  getrlimit(RLIMIT_NOFILE, &nofile);

  fprintf(stderr, "NOFILE: %ld\n", (long) nofile.rlim_cur);

  exit(0);
}



$ gcc -m64 -o rlimit-test rlimit-test.c
$ ./rlimit-test
NOFILE: 1024
$ ulimit -n unlimited && echo OK.
-bash: ulimit: open files: cannot modify limit: Not owner
$ su
Password:
# ./rlimit-test
NOFILE: 1024
# ulimit -n unlimited && echo OK.
OK.
# ./rlimit-test
NOFILE: -3


Note that rlim_t is an unsigned type:

typedef unsigned long   rlim_t;
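
(For what it's worth, Solaris appears to define RLIM_INFINITY as (rlim_t)-3l, which would explain the -3 above. A small stand-alone sketch that tests for RLIM_INFINITY explicitly instead of interpreting the raw number:)

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main (void)
{
  struct rlimit nofile;

  if (getrlimit (RLIMIT_NOFILE, &nofile) != 0)
    return 1;

  if (nofile.rlim_cur == RLIM_INFINITY)
    /* Compare against RLIM_INFINITY rather than guessing its numeric value.  */
    fprintf (stderr, "NOFILE: unlimited\n");
  else
    fprintf (stderr, "NOFILE: %lu\n", (unsigned long) nofile.rlim_cur);

  return 0;
}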


Henrik Grubbström <grubba>
Fri 21 Jun 2019 09:30:51 PM UTC, comment #1: 

Thanks for reporting the issue. Could you please add a printf() statement to the safe.c file for me? I'd like to see what the value of RLIMIT_NOFILE is when it is unlimited. For instance, I'd like to know if "unlimited" means the value is set to zero or -1 or something else, or if the value is NULL.

Changing the function you quoted to look like the code below should do the trick.

  if (getrlimit (RLIMIT_NOFILE, &nofile) == 0)
    max_cached_fds = MAX (nofile.rlim_cur / 4, max_cached_fds);
  /* DEBUG: rlim_cur is an rlim_t, which may be wider than int, so cast it before printing.  */
  printf ("NOFILE limit: %ld\n", (long) nofile.rlim_cur);



Please post the value that gets printed and I should be able to add a check and fix from there.

Jesse <newguy>
Thu 28 Feb 2019 02:55:00 PM UTC, original submission:  

Observed on Solaris 11.4/amd64 with the bundled gnu-patch:

# pkg list gnu-patch
NAME (PUBLISHER)                                  VERSION                    IFO
text/gnu-patch                                    2.7.6.1-11.4.0.0.1.14.0    i--


This issue seems to have been introduced in v2.7.4, when the "safe" system call replacements were introduced.

From src/safe.c:

static void init_dirfd_cache (void)
{
  struct rlimit nofile;

  max_cached_fds = 8;
  if (getrlimit (RLIMIT_NOFILE, &nofile) == 0)
    max_cached_fds = MAX (nofile.rlim_cur / 4, max_cached_fds);

  cached_dirfds = hash_initialize (max_cached_fds,
                                   NULL,
                                   hash_cached_dirfd,
                                   compare_cached_dirfds,
                                   NULL);

  if (!cached_dirfds)
    xalloc_die ();
}


Having an unlimited RLIMIT_NOFILE causes patch to first spend 100% CPU trying to find a suitable prime for the hash table size, and then fail with

patch: **** out of memory
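
Rough arithmetic on why both symptoms follow, assuming a 64-bit rlim_t whose RLIM_INFINITY value is close to 2^64:

  nofile.rlim_cur = RLIM_INFINITY  ~ 1.8 * 10^19
  max_cached_fds  = rlim_cur / 4   ~ 4.6 * 10^18

hash_initialize (gnulib's hash module) is then asked for a table with on the order of 10^18 buckets: finding a prime of that magnitude, which as far as I can tell gnulib does by simple trial division, is what pegs the CPU, and the subsequent bucket-array allocation (on the order of tens of exabytes) is what fails and triggers the out-of-memory error.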


Work-around:

  Setting RLIMIT_NOFILE to e.g. 1024 (ulimit -n 1024) before running patch.

Henrik Grubbström <grubba>

 

    Follow 3 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2019-06-28  agruen      Status          None => Fixed
                            Open/Closed     Open => Closed
    2019-06-27  agruen      Attached File   - => Added 0001-Don-t-crash-when-RLIMIT_NOFILE-is-set-to-RLIM_INFINI.patch, #47138
