bug #20496: Failed to allocate -2147483648 bytes; memory exhausted

Submitter:         None
Submitted:         Mon 16 Jul 2007 11:00:20 AM UTC

Category:          Crash/Freeze/Infloop
Severity:          3 - Normal
Priority:          5 - Normal
Status:            Duplicate
Privacy:           Public
Assigned to:       None
Originator Name:   radio44
Originator Email:  -email is unavailable-
Open/Closed:       Closed
Release:           1.10.2
Operating System:  GNU/Linux
Reproducibility:   Every Time
Fixed Release:     None
Planned Release:   1.12
Regression:        None
Work Required:     None
Patch Included:    No


Wed 01 Aug 2007 06:40:25 PM UTC, comment #8: 

See also bug 20653, on a heuristic mechanism for determining when files aren't HTML.

Micah Cowan <micahcowan>
Tue 31 Jul 2007 11:30:55 PM UTC, comment #7: 

Marking as Closed/Duplicate, since this is now split into separate bug reports for related issues.

Micah Cowan <micahcowan>
Tue 31 Jul 2007 11:28:09 PM UTC, comment #6: 

I have split off the various issues related to this situation into separate bug reports.

  • For the immediate problem of attempting to slurp gigantic files (that are assumed to be HTML) into memory, bug 20647 has been filed. I'll implement a fix for this before the next release (1.11).
  • For the larger problem that we have to do such slurping in the first place, I've filed bug 20645. This may never be considered important enough to fix; at any rate, I don't expect to devote attention to it until sometime after the 1.12 release.
  • For the issue that the WMV file was detected as text/html in the first place: if we're ever able to address that, it would probably be via some sort of metadata database. We already have a report for this, bug 20387; I've added a comment to that report referring to this one. The metadata database is a feature I'll probably want implemented at some point, but not anytime in the near future; it's more of a "next gen" Wget feature, and bug 20387 is assigned to wget-2.0 to reflect that fact.
Micah Cowan <micahcowan>
Tue 31 Jul 2007 05:01:27 AM UTC, comment #5: 

The issue is that Wget considers c6/vo_imya_rodiny_1943.wmv to be an HTML file, and attempts to read its entire contents into memory for parsing as HTML.

In my mind, there are two issues involved here: one is that Wget considers the file to be HTML when it's actually a video; the other is that Wget needs to slurp the file's entire contents into memory before it can linearly parse it.

The slurp problem would be a straightforward, but involved, fix; we won't be doing this in time for 1.11. However, a stopgap fix could restrict the maximum size of a file to be slurped, refusing to read into memory any file that exceeds that limit (see the sketch below).
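
A minimal sketch of such a guard (the cap value, the function and macro names, and the error handling are illustrative assumptions, not Wget's actual code):

    /* Stopgap guard: refuse to slurp a file into memory for HTML
       parsing if it exceeds a fixed cap.  HTML_MAX_SLURP_SIZE and
       read_file_to_memory are hypothetical names for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    #define HTML_MAX_SLURP_SIZE (10L * 1024 * 1024)  /* assume a 10 MiB cap */

    static char *
    read_file_to_memory (const char *path, size_t *size_out)
    {
      struct stat st;
      FILE *fp;
      char *buf = NULL;

      if (stat (path, &st) != 0)
        return NULL;

      /* The guard itself: skip anything too large to plausibly be HTML. */
      if (st.st_size > HTML_MAX_SLURP_SIZE)
        {
          fprintf (stderr, "%s: too large to parse as HTML; skipping.\n", path);
          return NULL;
        }

      fp = fopen (path, "rb");
      if (!fp)
        return NULL;

      buf = malloc ((size_t) st.st_size + 1);
      if (buf != NULL && fread (buf, 1, (size_t) st.st_size, fp) == (size_t) st.st_size)
        {
          buf[st.st_size] = '\0';
          *size_out = (size_t) st.st_size;
        }
      else
        {
          free (buf);
          buf = NULL;
        }
      fclose (fp);
      return buf;
    }

    int
    main (int argc, char **argv)
    {
      size_t size;
      char *contents = argc > 1 ? read_file_to_memory (argv[1], &size) : NULL;
      if (contents)
        {
          printf ("slurped %zu bytes\n", size);
          free (contents);
        }
      return 0;
    }

The point of the cap is only to bound memory use until a streaming parser removes the need to slurp at all, which is the larger problem tracked separately.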

The first problem, though, I'm not sure how to resolve. The Content-Type of the response was text/html, but that Content-Type describes the body of the 416 Requested Range Not Satisfiable response, not the actual resource identified by the URL. Wget had nothing else to go on, however; we could expect it to interpret the response to the HEAD request, but that had no Content-Type, so the default would have been text/html anyway.

As a kludge, I suppose we could request the first byte, or something, to get the real Content-Type.
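
One way to picture that kludge: ask for only the first byte, so that the 206 Partial Content response carries the real Content-Type of the resource rather than that of an error body. A small standalone sketch (the helper name and request details are assumptions, not Wget's HTTP code):

    /* Build a probe request for just the first byte of the resource;
       the Content-Type on the resulting 206 response then describes
       the resource itself.  build_content_type_probe is a
       hypothetical helper, not part of Wget. */
    #include <stdio.h>

    static int
    build_content_type_probe (char *buf, size_t len,
                              const char *path, const char *host)
    {
      return snprintf (buf, len,
                       "GET %s HTTP/1.1\r\n"
                       "Host: %s\r\n"
                       "Range: bytes=0-0\r\n"   /* first byte only */
                       "Connection: close\r\n"
                       "\r\n",
                       path, host);
    }

    int
    main (void)
    {
      char req[512];
      if (build_content_type_probe (req, sizeof req,
                                    "/index.html", "example.com") > 0)
        fputs (req, stdout);
      return 0;
    }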

Micah Cowan <micahcowan>
Tue 31 Jul 2007 01:09:53 AM UTC, comment #4: 

The "Failed to allocate" message also points out the fact that we should be using a size_t, and appropriate format specifiers, in the memfatal function, rather than a long int.

Micah Cowan <micahcowan>
Thu 26 Jul 2007 03:32:29 AM UTC, comment #3: 

Verified this is still a problem for wget-1.11. Oddly enough, it seems to happen only when the file has already been completely downloaded ("Requested range not satisfiable"), and after Wget has already detected this. With a partial download, no matter what the size, it does not appear to happen.

Micah Cowan <micahcowan>
Tue 17 Jul 2007 10:57:16 AM UTC, comment #2: 

wget -m -c -N -d -v -nH -A.wmv -a ~/wget.log -P /wmvfiles  -np http://t2.rosfilm.net/c6/

The file size is 2.1 GB.

Anonymous
Mon 16 Jul 2007 07:28:28 PM UTC, comment #1: 

Hello,

Could you please provide the exact commandline invocation of wget that produces this problem, and the size of the file in question?

Thank you
-Micah

Micah Cowan <micahcowan>
Mon 16 Jul 2007 11:00:20 AM UTC, original submission:  

I want to mirror a file archive that contains very large files.

The error message appears when Wget tries to continue downloading a very large file that was already downloaded earlier. At that moment it checks the local file and loads it into RAM.
Watching with the top utility, I saw Wget consume all RAM, then all swap, and then abort with the error message:

     "Failed to allocate -2147483648 bytes; memory exhausted."

Why does it load the entire local file into memory just to check that it is fully downloaded?
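
The odd negative number is itself a clue: -2147483648 is exactly what a 2 GiB (2^31-byte) request looks like when printed through a signed 32-bit integer. A standalone demonstration of the wraparound (an illustration only, not Wget's code):

    /* Demonstrate where "-2147483648" comes from: a grow-by-doubling
       buffer eventually requests 2^31 bytes, and that value printed
       through a signed 32-bit integer wraps negative on ordinary
       two's-complement machines. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int
    main (void)
    {
      uint32_t size = 1024;

      /* Keep doubling, as a read-whole-file-into-memory loop might. */
      while (size < UINT32_C (2147483648))
        size *= 2;                     /* stops at 2^31 = 2147483648 */

      printf ("requested:        %" PRIu32 " bytes\n", size);
      printf ("as signed 32-bit: %" PRId32 " bytes\n", (int32_t) size);
      return 0;
    }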

Anonymous

 


Attached Files
file #13353:  wget.log added by None (16KiB - application/octet-stream)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by micahcowan (Posted a comment)

    8 latest changes:

    Date        Changed by  Updated Field    Previous Value    Replaced by
    2007-07-31  micahcowan  Status           Needs Discussion  Duplicate
    2007-07-31  micahcowan  Open/Closed      Open              Closed
    2007-07-31  micahcowan  Status           None              Needs Discussion
    2007-07-31  micahcowan  Planned Release  1.11              1.12
    2007-07-17  micahcowan  Status           Need Info         None
    2007-07-16  micahcowan  Status           None              Need Info
    2007-07-16  micahcowan  Planned Release  None              1.11
    2007-07-16  None        Attached File    -                 Added wget.log, #13353
