Mon 10 Aug 2015 11:46:11 AM UTC, comment #7:
Fixed upstream with:
commit e4db00d74d7c8ade43e57f39344d8505d607308a
Author: Jookia <166291@gmail.com>
Date: Fri Jul 31 23:41:36 2015 +1000
Add option to write URL rejections to a tab-delimited CSV log.
* main.c: Add "--rejected-log" option.
* init.c: Add "rejectedlog" command.
* options.h: Add "rejected_log" parameter string.
* wget.texi: Add brief documentation on new --rejected-log option.
* recur.c: Optionally log details of URLs not traversed.
  Add reject_reason enum.
  (download_child_p -> download_child): Return a reject_reason.
  (descend_redirect_p -> descend_redirect): Return a reject_reason.
  (retrieve_tree): Support logging reasons for rejection.
  Add write_reject_log_header that writes a CSV format header to a file.
  Add write_reject_log_url that writes a url struct to a file in CSV format.
  Add write_reject_log_reason that writes the URL and parent URL as well as
  the rejection reason to a CSV file.
* Test--rejected-log.px: Add a basic test for the --rejected-log command.
* tests/Makefile.am: Run Test--rejected-log.px.
This allows you to figure out why URLs are being rejected, along with some context around each rejection. CSV is used as the output format since it can be easily parsed; it is delimited by tabs instead of commas so that all (quoted) URL characters can be used, and it includes column names which may be used for compatibility.
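For anyone trying this out, an invocation would look something like the following (the log filename here is arbitrary):

    wget --recursive --rejected-log=rejected.csv https://example.com/

As a rough illustration of the logging scheme described above, here is a minimal, self-contained sketch of a tab-delimited writer. The enum values, column set, and helper signatures are simplified assumptions for illustration, not the actual code in recur.c:

    #include <stdio.h>

    /* Illustrative rejection reasons; the real enum in recur.c differs. */
    typedef enum {
      WG_REJECT_BLACKLISTED,
      WG_REJECT_NOTHTTPS,
      WG_REJECT_ROBOTS
    } reject_reason;

    static const char *reject_reason_name (reject_reason r)
    {
      switch (r)
        {
        case WG_REJECT_BLACKLISTED: return "BLACKLISTED";
        case WG_REJECT_NOTHTTPS:    return "NOTHTTPS";
        case WG_REJECT_ROBOTS:      return "ROBOTS";
        }
      return "UNKNOWN";
    }

    /* Write the header row once, then one tab-delimited row per rejection.
       Quoting each URL field lets it contain any character except a quote. */
    static void write_reject_log_header (FILE *fp)
    {
      fputs ("REASON\tURL\tPARENT_URL\n", fp);
    }

    static void write_reject_log_reason (FILE *fp, reject_reason reason,
                                         const char *url, const char *parent)
    {
      fprintf (fp, "%s\t\"%s\"\t\"%s\"\n",
               reject_reason_name (reason), url, parent);
    }

The actual log written by the option carries more columns (presumably one per field of the url struct, for both the rejected URL and its parent), but the header-then-quoted-rows structure is the point of the format.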
|
Thu 07 May 2015 03:58:52 PM UTC, comment #5:
I've found myself in need of this feature. I'm trying to download a website recursively without pulling in every single ad and its HTML. I'd like to be able to find out which URLs were rejected and why, along with information about the domains involved (host, port, etc.).
I've patched my copy of Wget to dump all of this into a CSV file, which I can then tool through to get my desired results.
I've included a patch made in a few hours that does this.
(file #33955)
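Once the rejections are in a tab-delimited file, tooling through them is simple. A small sketch of the kind of filter I mean, assuming the reason is the first tab-separated column and the rejected URL is the second (the real column layout in my patch may differ):

    #include <stdio.h>
    #include <string.h>

    /* Print every rejected URL whose reason matches argv[1], e.g.
       "./filter BLACKLISTED < rejected.csv".  Assumes the reason is the
       first tab-delimited field and the URL is the second. */
    int main (int argc, char **argv)
    {
      char line[4096];

      if (argc < 2)
        {
          fprintf (stderr, "usage: %s REASON < rejected.csv\n", argv[0]);
          return 1;
        }

      while (fgets (line, sizeof line, stdin))
        {
          char *reason = strtok (line, "\t");
          char *url = strtok (NULL, "\t\n");

          if (reason && url && strcmp (reason, argv[1]) == 0)
            printf ("%s\n", url);
        }

      return 0;
    }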
|
Sun 08 Jul 2007 04:46:06 AM UTC, original submission:
It would be useful if wget would dump the list of links that it did not follow to a file. Possible uses:
- mirror a website and get a list of all links from that website to other websites
- download with options that restrict the files that are downloaded (such as --level, --no-parent, and --reject) and use the list of unfollowed links to see if there were interesting files that were skipped
|