Thu 09 Apr 2015 08:25:42 PM UTC, comment #6:
Crawl-delay is host/domain specific. Thus a wget -r 'domain1 domain2 domain3' can't simply wait 'crawl-delay' seconds after a download. We need some specific logic when dequeuing the next file. Also, how does --wait come into play? The user might be able to override crawl-delay for domain1, but not for domain2 and domain3.
Today, web servers often allow for 50+ parallel connections from one client - I really don't see the point in implementing crawl-delay.
I could change my mind if someone has a really good reason for it and comes up with a good algorithm / patch to handle all corner cases.
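For what it's worth, a minimal sketch of the per-host bookkeeping such logic would need (the type and helper below are illustrative assumptions, not wget internals): remember the parsed Crawl-delay and the last retrieval time per host, and only dequeue a URL whose host is ready again.

/* Sketch only: hypothetical per-host state, not wget's actual data structures. */
#include <time.h>

struct host_delay
{
  char  *host;         /* host the robots.txt applies to */
  double crawl_delay;  /* seconds from Crawl-delay, 0 if absent or overridden */
  time_t last_fetch;   /* time of the last retrieval from this host */
};

/* Nonzero if enough time has passed to dequeue another URL for this host. */
static int
host_ready (const struct host_delay *hd)
{
  if (hd->crawl_delay <= 0)
    return 1;
  return difftime (time (NULL), hd->last_fetch) >= hd->crawl_delay;
}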
|
Thu 09 Apr 2015 03:27:18 PM UTC, comment #5:
I have read the robots.txt spec thoroughly and found no way to set crawl-delay for a specific file. If someone could look into it, that would be nice.
Otherwise I think the best solution is to set --wait to the matching crawl-delay if the user hasn't set --wait already.
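A rough sketch of that fallback, assuming a flag that records whether --wait was given on the command line (the names below are illustrative, not wget's real option fields):

/* Sketch only: user_wait_set and crawl_delay are assumed inputs. */
static double
effective_wait (double user_wait, int user_wait_set, double crawl_delay)
{
  if (user_wait_set)
    return user_wait;     /* an explicit --wait always wins */
  if (crawl_delay > 0)
    return crawl_delay;   /* otherwise fall back to the robots.txt Crawl-delay */
  return 0;               /* no delay at all */
}

This keeps the precedence described above: an explicit --wait overrides the directive, and Crawl-delay only fills the gap when the user specified nothing.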
|
Tue 11 Dec 2012 02:52:56 PM UTC, comment #4:
An actual syntactic example of the crawl-delay directive used in conjunction with different files would be helpful. Thanks,
|
Wed 04 Jul 2012 09:30:09 PM UTC, comment #3:
Just a quick potential gotcha to mention.
Robots.txt can specify different directives for different directories.
Rather like disallow, crawl-delay can vary for different files.
I'd probably implement it by fetching the file, then sleeping for however long is specified for that specific file.
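A minimal sketch of that fetch-then-sleep approach, with a hypothetical lookup for whatever delay applies to the URL (both helpers below are stand-ins, not wget functions):

/* Sketch only: both helpers are hypothetical stand-ins. */
#include <unistd.h>

extern void   retrieve_one_url (const char *url);  /* download a single file */
extern double robots_delay_for (const char *url);  /* applicable Crawl-delay in seconds */

static void
fetch_then_sleep (const char *url)
{
  retrieve_one_url (url);
  double delay = robots_delay_for (url);
  if (delay > 0)
    sleep ((unsigned int) (delay + 0.5));  /* whole seconds are enough for this sketch */
}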
|
Fri 01 Oct 2010 03:22:53 PM UTC, comment #2:
It has the same effect when compliantly implemented.
Crawl-delay is a robots.txt directive that, when applied, instructs any bots with access to throttle their download frequency.
So I guess you could say it's a "default" --wait.
|
Fri 01 Oct 2010 06:05:21 AM UTC, comment #1:
Is the crawl-delay the same as the --wait or --waitretry command line arguments?
Thanks,
Raj Mohan
|
Wed 08 Sep 2010 09:22:49 PM UTC, original submission:
Have wget read and respect the crawl-delay directive in robots.txt
wget --mirror http://localhost
http://localhost/robots.txt:
User-agent: *
Crawl-delay: 10
expected:
Wget would wait 10 seconds between retrievals
actual:
wget downloaded like mad.
This bug has been CC'ed to Gentoo at http://bugs.gentoo.org/show_bug.cgi?id=336488