Tue 06 Sep 2016 09:03:39 AM UTC, comment #14:
This patch looks great!
It's already been four years since it was proposed.
Any idea why it hasn't been approved yet?
|
Thu 02 Jul 2009 10:24:09 PM UTC, comment #13:
(There's also the fact that the current behavior makes --accept useless for filtering out which specific HTML files one wants to download: --accept's not just for filename extensions, after all.)
|
Thu 02 Jul 2009 09:56:49 PM UTC, comment #12:
I don't see what the point could be for either --accept or --content-type if recursion is off, especially as both options should have no effect whatsoever in that case (''nothing'' should, or does, prevent downloading explicitly specified URLs). The existing behavior for --accept can and does cause problems for some users, and is broken anyway, since it only looks at filename extensions: .htm and .html are traversed with no means of prevention, while .php and .asp are not. But this isn't really the place for this discussion; feel free to take it up on the mailing list (you might check the archives for previous discussions first).
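To make the extension-only limitation concrete, here is a minimal standalone sketch; the helper below is illustrative, not wget's actual acceptance code. A purely suffix-based test can never see that a .php URL serves an image, while .htm/.html names are special-cased for traversal regardless.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: the accept/reject decision is made from the
   file name alone, never from the content actually served. */
static bool
has_suffix (const char *name, const char *suffix)
{
  size_t ln = strlen (name), ls = strlen (suffix);
  return ln >= ls && strcmp (name + ln - ls, suffix) == 0;
}

int
main (void)
{
  printf ("%d\n", has_suffix ("photo.jpg", ".jpg"));    /* 1: accepted by -A .jpg */
  printf ("%d\n", has_suffix ("gallery.php", ".jpg"));  /* 0: rejected, even if it serves image/jpeg */
  return 0;
}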
|
Thu 02 Jul 2009 09:07:46 PM UTC, comment #11:
Surely that's what the recursive option is for? If recursion is off, then I agree that the --content-type option should function as you suggest and only retrieve files that match the content type; however, there is no way to recursively download content types other than HTML without also fetching and traversing the HTML pages that link to them.
|
Thu 02 Jul 2009 08:56:12 PM UTC, comment #10:
Actually, that behavior of the --accept option is annoying for many applications, and I'd just as soon not duplicate it. For instance, if what you want is to fetch all the images that are linked from one specific page, --accept still doesn't prevent the wasteful download of all kinds of pages we don't want.
If you really want HTML to be grabbed, then it should be requested IMO. We should have a separate option to fine-tune what gets deleted afterward (or, once we move to a streaming html parser, what never gets saved in the first place). There's already an issue filed for that option, IIRC.
|
Thu 02 Jul 2009 08:45:05 PM UTC, comment #9:
I've compiled patch file #16272 against the latest wget source but am having problems trying to use wget with the option "--content-type=image*", as you can see from the output:
…
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Not downloading, content type rejected.
From what I understand, wget isn't able to recursively crawl the website, which is HTML, to find images to download, because the option rejects anything that doesn't match. Should the --content-type option not function in the same way as the --accept option, which seems to download HTML files even when they are rejected, but does not save them to disk?
|
Mon 11 Aug 2008 06:33:42 AM UTC, comment #8:
Attaching a new patch which adds some tests. I'm not sure whether I'm using the test system correctly, though, and as far as I can see there's no way to verify that wget sends a HEAD request before doing anything else.
(file #16272)
|
Wed 06 Aug 2008 10:41:03 PM UTC, comment #7:
Should get a suite of tests.
|
Mon 28 Jul 2008 06:45:19 AM UTC, comment #6:
Attaching a new patch which sends a HEAD request first (which turned out to be a trivial change!) and fixes some logging and return-code quirks.
(file #16195)
|
Sat 26 Jul 2008 06:41:03 AM UTC, comment #5:
Ah, understood. This looks a bit hairy though and would be a bigger change. I'll look into it.
|
Sat 26 Jul 2008 06:29:48 AM UTC, comment #4:
If it wasn't included in the response header to GET, it certainly wouldn't be included in the response to HEAD.
No, I mean it should issue an explicit HEAD to learn the content type, and use that to decide whether to issue a GET at all. The send_head_first bool var is what controls this, I believe. Currently it sends HEAD first when it needs to do time-stamping, and in some situations involving Content-Disposition.
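A minimal standalone sketch of the decision being described; send_head_first is the real variable name per the above, but the option struct, its fields, and the helper are hypothetical stand-ins for illustration:

#include <stdbool.h>
#include <stdio.h>

struct options
{
  bool timestamping;          /* -N */
  bool content_disposition;   /* --content-disposition */
  const char *content_types;  /* proposed --content-type=LIST */
};

/* Hypothetical: decide whether a HEAD must precede the GET. */
static bool
should_send_head_first (const struct options *opt)
{
  /* Existing triggers: time-stamping and Content-Disposition. */
  if (opt->timestamping || opt->content_disposition)
    return true;
  /* Proposed trigger: we need Content-Type before committing to a GET. */
  if (opt->content_types != NULL)
    return true;
  return false;
}

int
main (void)
{
  struct options opt = { false, false, "image/*" };
  printf ("send HEAD first: %s\n",
          should_send_head_first (&opt) ? "yes" : "no");
  return 0;
}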
|
Sat 26 Jul 2008 06:20:15 AM UTC, comment #3:
> Probably needs to send HEAD first, though, to determine the content-type.
From reading the code, my understanding is that it does exactly that. In gethttp it reads the header of the response to figure out how to continue.
Do you mean it should issue an explicit HEAD if the content-type isn't included in the response header?
|
Sat 26 Jul 2008 05:57:37 AM UTC, comment #2:
Patch looks good. Probably needs to send HEAD first, though, to determine the content-type.
|
Sat 26 Jul 2008 05:42:46 AM UTC, comment #1:
Attaching a patch which adds --content-type=LIST and --content-type-exclude=LIST options, which allow specifying MIME-type patterns to filter what gets retrieved.
(file #16187)
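To illustrate the kind of matching that MIME-type pattern lists imply, here is a small self-contained sketch; the function name, the comma-separated list handling, and the stripping of "; charset=..." parameters are assumptions for illustration, not necessarily what the attached patch does:

#include <fnmatch.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Return true if CONTENT_TYPE matches any comma-separated shell-style
   pattern in PATTERNS, e.g. "image/*,text/css".  Header parameters
   such as "; charset=UTF-8" are ignored for the comparison. */
static bool
content_type_matches (const char *patterns, const char *content_type)
{
  char type[128], pats[256];

  snprintf (type, sizeof type, "%s", content_type);
  type[strcspn (type, ";")] = '\0';   /* drop any "; charset=..." */

  snprintf (pats, sizeof pats, "%s", patterns);
  for (char *p = strtok (pats, ","); p; p = strtok (NULL, ","))
    if (fnmatch (p, type, 0) == 0)
      return true;
  return false;
}

int
main (void)
{
  printf ("%d\n", content_type_matches ("image/*", "image/png"));                /* 1 */
  printf ("%d\n", content_type_matches ("image/*", "text/html; charset=UTF-8")); /* 0 */
  return 0;
}

Using fnmatch keeps the wildcard semantics consistent with the shell-style patterns users already know from -A.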
|
Fri 06 Jul 2007 10:56:51 PM UTC, original submission:
From TODO:
"Handle MIME types correctly. There should be an option to (not) retrieve files based on MIME types, e.g. `--accept-types=image/*'. This would work for FTP by translating file extensions to MIME types using mime.types."
I'm not convinced we should bother mapping them to file extensions; we already have -A...
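For what it's worth, the extension-to-MIME translation the TODO describes for FTP could look roughly like the standalone sketch below; the mime.types path, function name, and linear scan are illustrative assumptions:

#include <stdio.h>
#include <string.h>

/* Look up EXT (e.g. "png") in a mime.types file; on success copy the
   MIME type into TYPE (size N) and return 0, else return -1.  Each
   mime.types line is "type/subtype ext1 ext2 ...". */
static int
mime_type_for_extension (const char *path, const char *ext,
                         char *type, size_t n)
{
  FILE *fp = fopen (path, "r");
  if (!fp)
    return -1;

  char line[512];
  while (fgets (line, sizeof line, fp))
    {
      if (line[0] == '#' || line[0] == '\n')
        continue;                              /* comments, blanks */
      char *mime = strtok (line, " \t\n");
      if (!mime)
        continue;
      for (char *tok = strtok (NULL, " \t\n"); tok;
           tok = strtok (NULL, " \t\n"))
        if (strcmp (tok, ext) == 0)
          {
            snprintf (type, n, "%s", mime);
            fclose (fp);
            return 0;
          }
    }
  fclose (fp);
  return -1;
}

int
main (void)
{
  char type[128];
  if (mime_type_for_extension ("/etc/mime.types", "png",
                               type, sizeof type) == 0)
    printf ("png -> %s\n", type);
  return 0;
}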
|