bug #20808: -R should reject files _before_ downloading them

Submitter:  Micah Cowan <micahcowan>
Submitted:  Fri 17 Aug 2007 10:45:37 PM UTC
   
 
Category:  Program Logic
Severity:  3 - Normal
Priority:  5 - Normal
Status:  Duplicate
Privacy:  Public
Assigned to:  None
Originator Name:
Open/Closed:  Closed
Release:  1.10.2
Operating System:  GNU/Linux
Reproducibility:  None
Fixed Release:  None
Planned Release:  1.13
Regression:  None
Work Required:  1 - Days
Patch Included:  None

Thu 20 Aug 2015 08:56:38 PM UTC, comment #12: 

I am trying to retrieve specific replays from the saved-game storage at http://replays.wesnoth.org/1.12/

The site is just a plain directory/file listing.

Because the data is grouped per day over a two-year period, there are a lot of subdirectories.

I tried to get the interesting replays with (see http://forums.wesnoth.org/viewtopic.php?p=588686#p588686 ):

wget -e 'robots=off' -nc -c -np -A 'Scrolling_Survival_Turn_1??_*.bz2' -A index.html -r http://replays.wesnoth.org/1.12/

but each subdirectory page has links for sorting the table data (query-string URLs), and for each page (of which there are 2 years * 365 days) wget tries to download things that are then rejected.

It takes far too long to wait (even given that wget reuses connections) for wget to do this useless work.

I quickly solved the task by scanning the index.html files manually: first get them with wget (--level=1 does the job of limiting the amount of processing time):

$ wget -r -np -A index.html --level=1 http://replays.wesnoth.org/1.12/

and then retrieve the files of interest:

$ find . -type f -name index.html | while read f; do p=${f#./}; p=http://${p%index.html}; command grep -o 'href="Scrolling_Survival_Turn_[5-9]._[^"]*\.bz2' $f | while read s; do s=${s#href='"'}; wget $p$s; done; done

It would be nice to have the ability to list which links to follow when processing HTML files.
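A hedged sketch of an alternative, assuming a wget build recent enough to have --reject-regex (as I understand it, that option matches the complete URL, query string included, before anything is fetched; untested against this site):

wget -e 'robots=off' -nc -np -r --reject-regex '\?' -A 'Scrolling_Survival_Turn_1??_*.bz2,index.html' http://replays.wesnoth.org/1.12/

With the sort links filtered out by URL, only the per-day index pages and the matching .bz2 files should be fetched.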

Oleksandr Gavenko <gavenkoa>
Thu 14 Jul 2011 05:12:56 AM UTC, comment #11: 

If this is not a bug, this is at least a misfeature.
The argument about crawling more links is valid - even if the file to reject is removed later.

But still, there is a good reason not to fetch the files to be rejected in the first place. In my case: I want to mirror a site and do not want the logout page to be fetched at all.

My suggestion is either a switch "--reject-before-fetch" as a modifier for "-R", or "--reject-before-fetch=gif,zip,pdf" as a prefilter stage.
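Hypothetical usage, purely to illustrate the suggestion (no such option exists in wget today):

wget -r --reject-before-fetch -R 'logout*' https://wiki.example.org/
wget -r --reject-before-fetch=gif,zip,pdf https://wiki.example.org/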

good byte

p.s. thanks to Zenaan Harkness for the pointer to httrack :)

Martin Scheffler <the_bishop>
Thu 04 Nov 2010 05:11:07 PM UTC, comment #10: 

An alternative to wget, for those who need something 'soon', is:
  httrack

Zenaan Harkness <zenaan>
Fri 02 Oct 2009 07:26:02 PM UTC, comment #9: 

This will be covered by the fixes for bug 20364 and bug 22670.

Micah Cowan <micahcowan>
Mon 14 Sep 2009 10:03:49 AM UTC, comment #8: 

I am also now trying to mirror a TWiki, and we have exactly the same problem (I tried with Scientific Linux 4 and also with the most recent Fedora 11). This is a major issue for using wget to replicate a big wiki/TWiki.

I agree that, according to the documentation, this issue might just be a 'lacking feature' and not a bug. Still, wget would be much more useful if such an "--ignore" feature were implemented. Is there any chance of getting it in the near future?

The only alternatives I found to mirror a TWiki are this plugin (http://twiki.org/cgi-bin/view/Plugins/PublishContrib) or an rsync of the original directory, although both ways require direct access to the server hosting the wiki/TWiki.
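For the rsync route, a minimal sketch with hypothetical paths, assuming shell access to the host and a standard TWiki layout (topic texts under data/, attachments under pub/):

rsync -av user@twiki-host:/var/www/twiki/data/ ./twiki-mirror/data/
rsync -av user@twiki-host:/var/www/twiki/pub/  ./twiki-mirror/pub/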

Juan Lopez Perez <juanlope>
Fri 13 Feb 2009 05:47:41 PM UTC, comment #7: 

This is a major problem.  For example, wikis are very common today, and they have numerous links to different actions (print, edit, history, ...) which are not content that one would normally want to download.  These could be excluded with e.g. --reject="*\\?*", but due to this bug the entire deep structure (typically an order of magnitude or two larger than the content itself!) is fetched and only afterwards rejected.

Wget should definitely have an option not to spider specific URLs.  If full backward compatibility is required, this could be implemented in addition to the rejection policy, e.g. --ignore="*\\?*", or some other similar option.  Alternatively, there could be an option that changes the functionality of --reject.

Personally, I cannot see much point in the current download-spider-reject functionality.  If the rejected files are to be spidered, they could simply be deleted afterwards (by the script calling wget, or manually).  The whole point of --reject is to limit the spidering to places of interest.
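In other words, the current behaviour is roughly equivalent to this sketch (hypothetical wiki host), which the calling script could already do on its own:

wget -r https://wiki.example.org/bin/view/Main/
find wiki.example.org -type f -name '*\?*' -delete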

Anonymous
Wed 22 Aug 2007 11:41:18 PM UTC, comment #5: 

Note that, according to the Wget documentation, -R and -A do not affect the downloading of HTML files, since Wget still wants to follow any links contained in them; so this is not a bug.

It may be more appropriate to formulate this as a feature request, but in the meantime, I've forwarded the issue to the mailing list for further discussion.

Micah Cowan <micahcowan>
Wed 22 Aug 2007 11:28:09 PM UTC, comment #4: 

I would say it is a bug. If I explicitly want to reject that page,
then it shouldn't be downloaded. I don't care what's on the page.

Or, at least, shouldn't we have an option to really reject it?

Frank Liu <liug>
Wed 22 Aug 2007 11:26:15 PM UTC, comment #3: 

As Mauro Tortonesi suggested, this behavior applies only to HTML files: wget downloads rejected HTML files in order to parse them for other acceptable URLs. It does not do this for any other file type.

Here is a simplified test case:

As you can see, "test2.html" should be rejected, but wget downloads it anyway, and then removes it.
 
wget -erobots=off -R "test2.html" -r http://vm.openqnx.com/test.html 
 
--16:16:04--  http://vm.openqnx.com/test.html
Resolving vm.openqnx.com... 209.190.29.235
Connecting to vm.openqnx.com|209.190.29.235|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 55 [text/html]
Saving to: `vm.openqnx.com/test.html'
 
100%[=======================================>] 55          --.-K/s   in 0s     
 
16:16:04 (5.54 MB/s) - `vm.openqnx.com/test.html' saved [55/55]
 
--16:16:04--  http://vm.openqnx.com/test2.html
Connecting to vm.openqnx.com|209.190.29.235|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22 [text/html]
Saving to: `vm.openqnx.com/test2.html'
 
100%[=======================================>] 22          --.-K/s   in 0s     
 
16:16:05 (1.83 MB/s) - `vm.openqnx.com/test2.html' saved [22/22]
 
Removing vm.openqnx.com/test2.html since it should be rejected.
FINISHED --16:16:05--
Downloaded: 2 files, 77 in 0s (3.51 MB/s)
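(For reference, a minimal pair of pages along these lines would reproduce it; the actual test files are not attached here, so the contents below are only a guess:)

test.html:   <html><body>some filler text <a href="test2.html">link</a></body></html>
test2.html:  <html><body>nothing</body></html>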

Frank Liu <liug>
Wed 22 Aug 2007 04:41:36 PM UTC, comment #2: 

I just tried my test case in another environment and confirmed the bug.

Environment:
Solaris 10 with a fresh build of the latest wget (1.10.2) from http://ftp.gnu.org/pub/gnu/wget/, without any patches.

This means the bug has nothing to do with the OS or with Red Hat's patches.

Frank Liu <liug>
Mon 20 Aug 2007 11:48:06 PM UTC, comment #1: 

OK, I created a test case. Please verify whether you can reproduce it with your own version of wget on your own Unix system.
 
1) download my test script from http://vm.openqnx.com/test.zip
   unzip it and you will find a short shell script "test.sh".

2) "test.sh" has just a few lines, I can't just paste it here because some of the escape backslashes got dropped last time I pasted. You can review the script before running it.

3) run the script:
  ./test.sh 2>&1 | tee test.log

4) let the script run the mirror for 2 or 3 minutes.

5) take a look at "test.log" and search for "reject"; you will see the rejected files being downloaded first and then deleted.
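For example, something like this should list them (the message text matches the logs shown elsewhere in this report):

grep -n 'since it should be rejected' test.log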

Frank Liu <liug>
Fri 17 Aug 2007 10:45:37 PM UTC, original submission:  

In some (all?) cases, wget downloads "rejected" files and then deletes them afterwards. This is contrary to expectations, and I do not know the reason for this decision.

Before working on this bug, care should be taken to reproduce this on a canonical version, as this bug was reported against Fedora Core 6, and Red Hat has severely modified wget from 1.10.2 (mainly, by bringing in a large amount of code from 1.11 development).

From Frank Liu, who originally submitted this description in a comment on bug 20454:

I am using wget on a Fedora Core 6 box and am trying to mirror a TWiki site.


#!/bin/sh


URL=https://www.someremotesite.com/prefix/twiki/bin/view/Main/

INCLUDE="/prefix/twiki/bin/viewauth/Main,/prefix/twiki/bin/view/Main,/prefix/twiki/pub"
EXCLUDE="/prefix/twiki/bin/*"
REJECT="*\?rev*,*\?sortcol*,*\?raw*,*\?skin*,*\?template*"
wget --no-check-certificate -erobots=off \
  --user=myuser --password=mypass \
  -I "$INCLUDE" -X "$EXCLUDE" -R "$REJECT" -r "$URL" -k


Here is part of the log:

...
--15:05:48--  https://www.someremotesite.com/prefix/twiki/bin/view/Main/G1b3PerformanceResults?template=viewprint&rev=2&sortcol=0;table=15;up=0
Reusing existing connection to www.someremotesite.com:443.
HTTP request sent, awaiting response... 200 OK
Length: 65377 (64K) [text/html]
Saving to: `www.someremotesite.com/prefix/twiki/bin/view/Main/G1b3PerformanceResults?template=viewprint&rev=2&sortcol=0;table=15;up=0'

     0K .......... .......... .......... .......... ..........  78% 52.5K 0s
    50K .......... ...                                          100%  228K=1.0s

15:05:50 (63.0 KB/s) - `www.someremotesite.com/prefix/twiki/bin/view/Main/G1b3PerformanceResults?template=viewprint&rev=2&sortcol=0;table=15;up=0' saved [65377/65377]

Removing www.someremotesite.com/prefix/twiki/bin/view/Main/G1b3PerformanceResults?template=viewprint&rev=2&sortcol=0;table=15;up=0 since it should be rejected.
...


Micah Cowan <micahcowan>

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • gavenkoa (Posted a comment)
  • the_bishop (Posted a comment)
  • zenaan (Posted a comment)
  • juanlope (Posted a comment)
  • liug (Posted a comment)
  • micahcowan (Submitted the item)

    Follow 8 latest changes.

    Date        Changed by   Updated Field    Previous Value => Replaced by
    2010-11-04  zenaan       Carbon-Copy      - => Added zenaan
    2009-10-02  micahcowan   Planned Release  1.14 => 1.13
                             Status           Confirmed => Duplicate
                             Open/Closed      Open => Closed
    2008-08-21  micahcowan   Planned Release  1.15 => 1.14
    2007-08-22  micahcowan   Carbon-Copy      Removed -email is unavailable- => -
    2007-08-22  micahcowan   Status           None => Confirmed
    2007-08-17  micahcowan   Carbon-Copy      - => Added -email is unavailable-
