bug #45801: Allowing to configure which links the HTML engine follows

Submitted by: Oleksandr Gavenko <gavenkoa>
Submitted on: Thu 20 Aug 2015 09:29:00 PM UTC

Category: Feature Request
Severity: 3 - Normal
Priority: 5 - Normal
Status: None
Privacy: Public
Assigned to: None
Originator Name:
Open/Closed: Open
Release: None
Operating System: None
Reproducibility: None
Fixed Release: None
Planned Release: None
Regression: No
Work Required: None
Patch Included: None


Tue 03 Nov 2015 03:25:30 PM UTC, comment #1:

There are --accept-regex and --reject-regex.

For your example below, you could use:
wget -e robots=off -r --regex-type=pcre --accept-regex '(20151027/$|Scrolling_Survival_Turn_)' --reject-regex ";+" http://replays.wesnoth.org/1.12/

1. --reject-regex ";+" skips these 'sorting' URLs.
2. --accept-regex makes Wget look only into the subdirectory 20151027 and, from there, download only URLs containing 'Scrolling_Survival_Turn_'.

Note that --regex-type=pcre requires Wget to be compiled with PCRE support (just try it out); otherwise you can use POSIX regexes.
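
If your Wget lacks PCRE support, a minimal sketch of the same command with the default POSIX engine (the pattern above uses only POSIX ERE features, so nothing changes except dropping --regex-type):

wget -e robots=off -r --accept-regex '(20151027/$|Scrolling_Survival_Turn_)' --reject-regex ';+' http://replays.wesnoth.org/1.12/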

Tim Ruehsen <rockdaboot>
Project Administrator
Thu 20 Aug 2015 09:29:00 PM UTC, original submission:

From the info page (two paragraphs):

Note that these two options do not affect the downloading of HTML
files (as determined by a '.htm' or '.html' filename suffix). This
behavior may not be desirable for all users, and may be changed for
future versions of Wget.

Finally, it's worth noting that the accept/reject lists are matched
twice against downloaded files: once against the URL's filename
portion, to determine if the file should be downloaded in the first
place; then, after it has been accepted and successfully downloaded, the
local file's name is also checked against the accept/reject lists to see
if it should be removed. The rationale was that, since '.htm' and
'.html' files are always downloaded regardless of accept/reject rules,
they should be removed after being downloaded and scanned for links,
if they did match the accept/reject lists.
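
To illustrate the quoted behavior, a minimal sketch (reusing the site from this report): even when the accept list matches only the archives, Wget still downloads every index page in order to scan it for links, and deletes it afterwards because it fails the accept list:

wget -r -np -A '*.bz2' http://replays.wesnoth.org/1.12/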

So any URL found in an href="..." attribute is retrieved, even if it is useless.

As a result, the recursive download time increases dramatically.

For example, I tried to download specific game replays from http://replays.wesnoth.org/1.12/

This site lists a file hierarchy, and I hoped that this command would do the job I wanted:

wget -e 'robots=off' -nc -c -np -r -A 'Scrolling_Survival_Turn_1??_*.bz2' -A index.html http://replays.wesnoth.org/1.12/

But because every link is checked, and each page has service links for sorting the table data (which are useless to me), it takes too long to wait while Wget checks them all.

I solved my task with --level=1 and a custom scanner for the downloaded index.html files:

$ wget -e 'robots=off' -nc -c -np -A index.html -r --level=1 http://replays.wesnoth.org/1.12/

$ find . -type f -name index.html | while read f; do
      p=${f#./}                   # strip the leading "./"
      p=http://${p%index.html}    # rebuild the URL prefix from the local path
      command grep -o 'href="Scrolling_Survival_Turn_[5][0-5]_[^"]*\.bz2' "$f" |
          while read s; do
              s=${s#href=\"}      # drop the leading 'href="'
              wget "$p$s"
          done
  done

If there were options to limit which links to follow in an HTML page, writing custom scripts would be unnecessary.

It seems that instead of the literal "Directory-Based Limits" I need glob/regex matching against full URLs (not just directory or page names).
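
Newer Wget versions provide --accept-regex/--reject-regex (see comment #1 above), which match against the complete URL rather than only directory or page names; a minimal sketch for this task, with the 'Turn_1??' glob translated to a POSIX regex:

wget -e robots=off -np -r --accept-regex '(/$|Scrolling_Survival_Turn_1.._.*\.bz2$)' --reject-regex ';' http://replays.wesnoth.org/1.12/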

There is a lot of confusion around the -R/-A options, which are useless when you know exactly which type of links to follow.

Oleksandr Gavenko <gavenkoa>

 


Depends on the following items: None found

Items that depend on this one: None found

 
