Tue 31 Jul 2007 11:28:09 PM UTC, comment #6:
I have split off the various issues related to this situation into separate bug reports.
- For the immediate problem of attempting to slurp gigantic files (that are assumed to be HTML) into memory, bug 20647 has been filed. I'll implement a fix for this before the next release (1.11).
- For the larger problem that we have to do such slurping in the first place, bug 20645 has been filed. This may never be considered important enough to fix; at any rate, I don't expect to devote attention to it until sometime after the 1.12 release.
- For the issue that the WMV file was detected as text/html in the first place, I think that if we're ever able to address it, it will be through some sort of metadata database. We already have a report for this in bug 20387; I've added a comment to that report referring to this one. The metadata database feature is something I will probably want implemented at some point, but not anytime soon; it's more of a "next gen" Wget feature, and bug 20387 is assigned to wget-2.0 to reflect that.
|
Tue 31 Jul 2007 05:01:27 AM UTC, comment #5:
The issue is that Wget considers c6/vo_imya_rodiny_1943.wmv to be an HTML file, and attempts to read its entire contents into memory for parsing as HTML.
In my mind, there are two issues involved here: one is that Wget considers the file to be HTML when it's actually a video file; the other is that Wget needs to slurp the file's entire contents into memory before it can parse it linearly.
The slurp problem would be a straightforward, but involved, fix. We won't be doing this in time for 1.11. However, a stopgap might be to restrict the maximum size of a file to be slurped, refusing to slurp any file that exceeds that limit.
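Roughly, such a check might look like the sketch below; the cap value, names, and overall structure are only illustrative, not actual Wget code.

/* Hypothetical stopgap sketch: refuse to load a local file into memory
   for HTML parsing if it is larger than a fixed limit. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

#define MAX_HTML_SLURP (10L * 1024 * 1024)   /* arbitrary 10 MiB cap */

/* Read FILE into a malloc'd buffer and set *LEN to its size, or
   return NULL if the file is missing, unreadable, or too large. */
static char *
slurp_file_capped (const char *file, size_t *len)
{
  struct stat st;
  if (stat (file, &st) != 0 || st.st_size > MAX_HTML_SLURP)
    return NULL;                /* too big: skip HTML parsing entirely */

  FILE *fp = fopen (file, "rb");
  if (!fp)
    return NULL;

  char *buf = malloc (st.st_size);
  if (buf && fread (buf, 1, st.st_size, fp) != (size_t) st.st_size)
    {
      free (buf);
      buf = NULL;
    }
  fclose (fp);

  if (buf)
    *len = st.st_size;
  return buf;
}

int
main (int argc, char **argv)
{
  size_t len;
  char *contents = argc > 1 ? slurp_file_capped (argv[1], &len) : NULL;
  if (!contents)
    {
      fprintf (stderr, "not slurping (missing or over the size cap)\n");
      return 1;
    }
  printf ("slurped %zu bytes\n", len);
  free (contents);
  return 0;
}

The point of the cap is that only reasonably small text/html documents are worth parsing for links; anything larger is almost certainly not a page we should be slurping anyway.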
The first problem, though, I'm not sure how to resolve. The Content-Type of the response was text/html, but that Content-Type refers to the 416 Requested Range Not Satisfiable body, not to the actual resource identified by the URL. However, Wget had nothing else to go on; we could have expected it to use the earlier HEAD response instead, but that had no Content-Type, so the default would still have been text/html.
As a kludge, I suppose we could request the first byte, or something, to get the real Content-Type.
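For illustration, here is a rough standalone sketch of that kludge, using POSIX sockets rather than Wget's own HTTP machinery; the host is a placeholder, and error handling and header parsing are deliberately minimal.

/* Probe the real Content-Type by requesting only the first byte. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Open a TCP connection to HOST:PORT; return the fd or -1 on failure. */
static int
open_conn (const char *host, const char *port)
{
  struct addrinfo hints, *res;
  memset (&hints, 0, sizeof hints);
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo (host, port, &hints, &res) != 0)
    return -1;
  int fd = socket (res->ai_family, res->ai_socktype, res->ai_protocol);
  if (fd >= 0 && connect (fd, res->ai_addr, res->ai_addrlen) != 0)
    {
      close (fd);
      fd = -1;
    }
  freeaddrinfo (res);
  return fd;
}

int
main (void)
{
  const char *host = "example.com";                  /* placeholder */
  const char *path = "/c6/vo_imya_rodiny_1943.wmv";

  int fd = open_conn (host, "80");
  if (fd < 0)
    return 1;

  /* Ask for the first byte only.  A 206 Partial Content response
     should carry the Content-Type of the resource itself, rather
     than that of an error body such as a 416 page. */
  char req[512];
  snprintf (req, sizeof req,
            "GET %s HTTP/1.1\r\nHost: %s\r\n"
            "Range: bytes=0-0\r\nConnection: close\r\n\r\n",
            path, host);
  if (write (fd, req, strlen (req)) < 0)
    {
      close (fd);
      return 1;
    }

  /* Read (part of) the response and print its Content-Type header.
     A real implementation would read until the end of the headers. */
  char buf[4096];
  ssize_t n = read (fd, buf, sizeof buf - 1);
  if (n > 0)
    {
      buf[n] = '\0';
      char *ct = strstr (buf, "Content-Type:");
      if (ct)
        {
          char *eol = strchr (ct, '\r');
          if (eol)
            *eol = '\0';
          puts (ct);
        }
    }
  close (fd);
  return 0;
}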
|
Thu 26 Jul 2007 03:32:29 AM UTC, comment #3:
Verified still a problem for wget-1.11. Oddly enough, it seems to happen only when the file has been completely downloaded ("Requested range not satisfiable"), and after Wget has already detected this. With a partial download, no matter the size, it does not appear to happen.
|
Mon 16 Jul 2007 11:00:20 AM UTC, original submission:
I want to mirror a file archive that contains very large files.
The error message appears when Wget tries to continue downloading a very large file that was already downloaded earlier. At that moment it checks the local file and loads it into RAM.
Using the top utility, I watched Wget consume all RAM, then all swap, and then abort with the error message:
"Failed to allocate -2147483648 bytes; memory exhausted."
Why does it load the whole local file into memory just to check that it is fully downloaded?
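For what it's worth, here is a small standalone program illustrating one possible reason the reported byte count is negative; this is an assumption on my part, not a confirmed diagnosis.

/* A size of 2 GiB or more does not fit in a signed 32-bit integer, so
   on common two's-complement platforms it wraps to a negative value
   when it passes through one. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int
main (void)
{
  uint64_t file_size = 2147483648ULL;        /* a 2 GiB local file */
  int32_t  as_32bit  = (int32_t) file_size;  /* wraps to -2147483648 */
  printf ("Failed to allocate %" PRId32 " bytes; memory exhausted.\n",
          as_32bit);
  return 0;
}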
|