GNU Wget - Bugs: bug #30999, wget should respect robots.txt directive crawl-delay

bug #30999: wget should respect robots.txt directive crawl-delay

Submitter:  Raymond Jennings <shentino>
Submitted:  Wed 08 Sep 2010 09:22:49 PM UTC
   
 
Category:         Feature Request
Severity:         3 - Normal
Priority:         5 - Normal
Status:           In Progress
Privacy:          Public
Assigned to:      schubiger
Originator Name:  Shentino
Open/Closed:      Open
Release:          1.12
Operating System: None
Reproducibility:  Every Time
Fixed Release:    None
Planned Release:  None
Regression:       None
Work Required:    None
Patch Included:   None


Thu 09 Apr 2015 08:25:42 PM UTC, comment #6: 

Crawl-delay is host/domain specific. Thus a wget -r 'domain1 domain2 domain3' can't simply wait 'crawl-delay' seconds after a download. We need some specific logic when dequeuing the next file. Also, how does --wait come into play? The user might be able to override crawl-delay for domain1 but not for domain2 and domain3.

Today, web servers often allow for 50+ parallel connections from one client - I really don't see the point in implementing crawl-delay.

I could change my mind if someone has a really good reason for it and comes up with a good algorithm/patch to handle all the corner cases.

Tim Ruehsen <rockdaboot>
Group administrator
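
Below is a minimal sketch, in C, of the per-host dequeue bookkeeping described in comment #6. The names (host_delay, pick_delay) and the policy of honouring whichever of --wait and Crawl-delay is more polite are illustrative assumptions, not wget's actual internals or any agreed-upon behaviour.

/* Minimal sketch of per-host crawl-delay bookkeeping.  host_delay and
   pick_delay are illustrative names, not wget internals.  */
#include <stdio.h>
#include <time.h>

struct host_delay
{
  const char *host;    /* hostname the delay applies to */
  double crawl_delay;  /* Crawl-delay from that host's robots.txt, 0 if none */
  time_t last_fetch;   /* when we last downloaded from this host */
};

/* Seconds to wait before dequeuing the next URL for HOST.
   USER_WAIT is the --wait value, or 0 if the user did not set it.  */
static double
pick_delay (const struct host_delay *host, double user_wait, time_t now)
{
  /* One possible policy: honour whichever of --wait and Crawl-delay
     is more polite.  */
  double delay = host->crawl_delay;
  if (user_wait > delay)
    delay = user_wait;

  double elapsed = difftime (now, host->last_fetch);
  return elapsed >= delay ? 0.0 : delay - elapsed;
}

int
main (void)
{
  time_t now = time (NULL);
  struct host_delay hosts[] = {
    { "domain1", 10.0, now - 3 },  /* robots.txt says Crawl-delay: 10 */
    { "domain2",  0.0, now - 3 },  /* no Crawl-delay for this host */
  };

  for (size_t i = 0; i < sizeof hosts / sizeof hosts[0]; i++)
    printf ("%s: wait %.0f more second(s)\n",
            hosts[i].host, pick_delay (&hosts[i], 0.0, now));
  return 0;
}

With domain1 fetched 3 seconds ago and a Crawl-delay of 10, the sketch reports 7 more seconds to wait, while domain2 (no Crawl-delay, no --wait) can be dequeued immediately.
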
Thu 09 Apr 2015 03:27:18 PM UTC, comment #5: 

I have read the robots.txt spec thoroughly and found no way to set crawl-delay for a specific file. If someone could look into it, that would be nice.
Otherwise, I think the best solution is to set --wait to the matching crawl-delay if the user hasn't set --wait already.

Miquel Llobet <mllobet>
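
A minimal sketch of the fallback rule suggested in comment #5, in C. effective_wait() and its parameters are hypothetical; how wget would actually record whether --wait was given on the command line is not shown here.

/* Sketch: use robots.txt's Crawl-delay only when --wait was not given.
   effective_wait is a hypothetical helper, not wget's option handling.  */
#include <stdbool.h>
#include <stdio.h>

static double
effective_wait (bool user_set_wait, double user_wait, double crawl_delay)
{
  if (user_set_wait)
    return user_wait;                        /* --wait given explicitly: keep it */
  return crawl_delay > 0 ? crawl_delay : 0;  /* else fall back to robots.txt */
}

int
main (void)
{
  printf ("%.0f\n", effective_wait (false, 0, 10));  /* -> 10, Crawl-delay wins */
  printf ("%.0f\n", effective_wait (true, 2, 10));   /* -> 2, user's --wait wins */
  return 0;
}
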
Tue 11 Dec 2012 02:52:56 PM UTC, comment #4: 

An actual syntactic example of the crawl-delay directive used in conjunction with different files would be helpful. Thanks,

Steven Schubiger <schubiger>
Group Member
Wed 04 Jul 2012 09:30:09 PM UTC, comment #3: 

Just a quick potential gotcha to mention.

Robots.txt can specify different directives for different directories.

Much like Disallow, Crawl-delay can vary for different files.

I'd probably implement it by fetching the file, then sleeping for however long is specified for that specific file.

Raymond Jennings <shentino>
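
A rough sketch, in C, of the fetch-then-sleep loop proposed in comment #3. fetch() and delay_for_url() are hypothetical stand-ins, and whether robots.txt can actually scope Crawl-delay per file is questioned in comment #5.

/* Sketch of fetching each queued file and then sleeping for whatever
   Crawl-delay applies to it.  fetch and delay_for_url are placeholders.  */
#include <stdio.h>
#include <unistd.h>

/* Placeholder: download URL, return 0 on success.  */
static int
fetch (const char *url)
{
  printf ("GET %s\n", url);
  return 0;
}

/* Placeholder: Crawl-delay (seconds) applying to this URL, 0 if none.  */
static unsigned
delay_for_url (const char *url)
{
  (void) url;
  return 10;
}

int
main (void)
{
  const char *queue[] = { "http://localhost/a.html", "http://localhost/b.html" };

  for (size_t i = 0; i < sizeof queue / sizeof queue[0]; i++)
    {
      if (fetch (queue[i]) != 0)
        continue;
      unsigned delay = delay_for_url (queue[i]);
      if (delay)
        sleep (delay);  /* sleep for however long is specified for this file */
    }
  return 0;
}
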
Fri 01 Oct 2010 03:22:53 PM UTC, comment #2: 

It has the same effect when compliantly implemented.

Crawl-delay is a robots.txt directive that, when applied, instructs any bots with access to throttle their download frequency.

So I guess you could say it's a "default" --wait.

Raymond Jennings <shentino>
Fri 01 Oct 2010 06:05:21 AM UTC, comment #1: 

Is the crawl-delay the same as the --wait or --waitretry command line arguments?

Thanks,
Raj Mohan

Raj Mohan <rmohan>
Wed 08 Sep 2010 09:22:49 PM UTC, original submission:  

Have wget read and respect the crawl-delay directive in robots.txt

wget --mirror http://localhost

http://localhost/robots.txt:

User-agent: *
Crawl-delay: 10

expected:

Wget would wait 10 seconds between retrievals

actual:

wget downloaded like mad.

This bug has been CC'ed to Gentoo at http://bugs.gentoo.org/show_bug.cgi?id=336488.

Raymond Jennings <shentino>
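
For reference, a minimal sketch in C of pulling the Crawl-delay value out of a robots.txt body like the one in the original submission. parse_crawl_delay() is illustrative only and ignores User-agent grouping; it is not wget's actual robots.txt parser.

/* Sketch: extract the first Crawl-delay value from a robots.txt body.
   Illustrative only; ignores User-agent matching.  */
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

static double
parse_crawl_delay (const char *text)
{
  static const char key[] = "crawl-delay:";

  for (const char *p = text; *p; p++)
    if (strncasecmp (p, key, sizeof key - 1) == 0)
      return strtod (p + sizeof key - 1, NULL);
  return 0;  /* no Crawl-delay found */
}

int
main (void)
{
  const char *robots = "User-agent: *\nCrawl-delay: 10\n";
  printf ("Crawl-delay: %.0f seconds\n", parse_crawl_delay (robots));  /* -> 10 */
  return 0;
}

Until something along these lines is wired into the recursive downloader, the expected behaviour can be approximated by passing the delay explicitly, e.g. wget --wait=10 --mirror http://localhost.
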

 


Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • rockdaboot (Posted a comment)
  • mllobet (Posted a comment)
  • schubiger (Updated the item)
  • rmohan (Posted a comment)
  • shentino (Submitted the item)

    2 latest changes:

    Date        Changed by  Updated Field  Previous Value  =>  Replaced by
    2012-07-04  schubiger   Status         None            =>  In Progress
    2012-07-04  schubiger   Assigned to    None            =>  schubiger
