bug #20808: -R should reject files _before_ downloading them

Submitted by:  Micah Cowan <micahcowan>
Submitted on:  Fri 17 Aug 2007 10:45:37 PM UTC  
 
Category:         Program Logic
Severity:         3 - Normal
Priority:         5 - Normal
Status:           Duplicate
Privacy:          Public
Assigned to:      None
Originator Name:
Open/Closed:      Closed
Release:          1.10.2
Operating System: GNU/Linux
Reproducibility:  None
Fixed Release:    None
Planned Release:  1.13
Regression:       None
Work Required:    1 - Days
Patch Included:   None


Thu 14 Jul 2011 05:12:56 AM UTC, comment #10:

If this is not a bug, it is at least a misfeature.
The argument about crawling more links is valid, even if the rejected file is removed afterwards.

But there is still a good reason not to fetch the rejected files in the first place. In my case: I want to mirror a site and don't want the logout page to be fetched at all.

My suggestion is either a switch "--reject-before-fetch" as a modifier for "-R", or "--reject-before-fetch=gif,zip,pdf" as a prefilter stage.
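
For illustration only, invocations with the proposed option might look like the following; note that --reject-before-fetch does not exist in wget, and the URL and patterns here are placeholders:

# Hypothetical: neither form of --reject-before-fetch exists in wget today.
wget -r -R 'logout*' --reject-before-fetch http://wiki.example.org/    # as a modifier for -R
wget -r --reject-before-fetch=gif,zip,pdf http://wiki.example.org/     # as a standalone prefilter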

good byte

p.s. thanks to Zenaan Harkness for the pointer to httrack :)

Martin Scheffler <the_bishop>
Thu 04 Nov 2010 05:11:07 PM UTC, comment #9:

An alternative to wget, for those who need something 'soon', is:
httrack

Zenaan Harkness <zenaan>
Fri 02 Oct 2009 07:26:02 PM UTC, comment #8:

This will be covered by the fixes for bug 20364 and bug 22670.

Micah Cowan <micahcowan>
Project Administrator
Mon 14 Sep 2009 10:03:49 AM UTC, comment #7:

I'm also trying to mirror a Twiki now, and we have exactly the same problem (I tried with Scientific Linux 4 and also with the most recent Fedora 11). This is a major issue for using wget to replicate a big Wiki/Twiki.

I agree that, according to the documentation, this might just be a missing feature and not a bug. Still, wget would be much more useful if such an "--ignore" feature were implemented. Is there any chance of getting it in the near future?

The only alternatives I found for mirroring a Twiki are this plugin (http://twiki.org/cgi-bin/view/Plugins/PublishContrib) or an rsync of the original directory, although both require direct access to the server hosting the Wiki/Twiki.
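
For the rsync route, a minimal sketch might look like this (the host name and Twiki paths are assumptions, and shell access to the server is required):

# Hypothetical sketch: copy the Twiki data and pub directories over SSH.
# Replace the host and paths with those of the actual installation.
rsync -av user@twiki.example.org:/var/www/twiki/data/ ./twiki-backup/data/
rsync -av user@twiki.example.org:/var/www/twiki/pub/  ./twiki-backup/pub/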

Juan Lopez Perez <juanlope>
Fri 13 Feb 2009 05:47:41 PM UTC, comment #6:

This is a major problem. For example, wikis are very common today and they have numerous links to different actions (print, edit, history, ...), which are not content that one would normally want to download. These could be excluded with e.g. --reject="\\?", but due to this bug the entire deep structure (typically an order of magnitude or two larger than the content itself!) is fetched and only rejected afterwards.

Wget should definitely have an option not to spider specific URLs. If full backward compatibility is required, this could be implemented in addition to the rejection policy, e.g. --ignore="\\?", or some other similar option. Alternatively, there could be an option that changes the functionality of --reject.

Personally, I cannot see much point in the current download-spider-reject functionality. If the rejected files are to be spidered anyway, they could simply be deleted afterwards, either manually or by the script calling wget (see the sketch below). The whole point of --reject is to limit the spidering to places of interest.
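
A rough sketch of such a wrapper, assuming a hypothetical wiki URL; note that this does not avoid the wasted downloads, it only automates the cleanup:

# Hypothetical wrapper: mirror the wiki, then delete the action pages,
# whose local file names contain a query string (a literal '?').
wget -r -e robots=off http://wiki.example.org/
find wiki.example.org/ -name '*\?*' -delete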

Anonymous
Wed 22 Aug 2007 11:41:18 PM UTC, comment #5:

Note that, according to the Wget documentation, -R and -A do not affect the downloading of HTML files, since Wget will still want to try to follow any links contained in them; so this is not a bug.

It may be more appropriate to formulate this as a feature request; in the meantime, I've forwarded the issue to the mailing list for further discussion.

Micah Cowan <micahcowan>
Project Administrator
Wed 22 Aug 2007 11:28:09 PM UTC, comment #4:

I would say it is a bug. If I explicitly want to reject that page, then it shouldn't be downloaded. I don't care what's on the page.

Or at least we should have an option to really reject it?

Frank Liu <liug>
Wed 22 Aug 2007 11:26:15 PM UTC, comment #3:

As Mauro Tortonesi suggested, this behavior applies only to HTML files: wget downloads rejected HTML files in order to parse them for other acceptable URLs; it does not download rejected files of any other type.

Here is a simplified test case:

As you can see in the transcript below, "test2.html" should be rejected, but wget downloads it anyway and then removes it.

wget -erobots=off -R "test2.html" -r http://vm.openqnx.com/test.html

--16:16:04-- http://vm.openqnx.com/test.html
Resolving vm.openqnx.com... 209.190.29.235
Connecting to vm.openqnx.com|209.190.29.235|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 55 [text/html]
Saving to: `vm.openqnx.com/test.html'

100%[=======================================>] 55 --.-K/s in 0s

16:16:04 (5.54 MB/s) - `vm.openqnx.com/test.html' saved [55/55]

--16:16:04-- http://vm.openqnx.com/test2.html
Connecting to vm.openqnx.com|209.190.29.235|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22 [text/html]
Saving to: `vm.openqnx.com/test2.html'

100%[=======================================>] 22 --.-K/s in 0s

16:16:05 (1.83 MB/s) - `vm.openqnx.com/test2.html' saved [22/22]

Removing vm.openqnx.com/test2.html since it should be rejected.
FINISHED --16:16:05--
Downloaded: 2 files, 77 in 0s (3.51 MB/s)

Frank Liu <liug>
Wed 22 Aug 2007 04:41:36 PM UTC, comment #2:

I just tried my test case in another environment and confirmed the bug.

Environment:
Solaris 10, with a fresh build of the latest wget 1.10.2 from http://ftp.gnu.org/pub/gnu/wget/, without any patches.

This means the bug has nothing to do with the OS or with the Red Hat patches.

Frank Liu <liug>
Mon 20 Aug 2007 11:48:06 PM UTC, comment #1:

OK, I created a test case. Please verify whether you can reproduce it with your own version of wget on your own Unix system.

1) Download my test script from http://vm.openqnx.com/test.zip; unzip it and you will find a short shell script, "test.sh".

2) "test.sh" has just a few lines, I can't just paste it here because some of the escape backslashes got dropped last time I pasted. You can review the script before running it.

3) Run the script:
./test.sh 2>&1 | tee test.log

4) Let the script run the mirror for 2 or 3 minutes.

5) Take a look at "test.log" and search for "reject"; you will see the rejected files that are downloaded first and then deleted.
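
A rough sketch of what such a script might look like, as a hedged guess only; the start URL and reject patterns below are placeholders, not necessarily what is in test.zip:

#!/bin/sh
# Hypothetical reconstruction of a test.sh-style wrapper; the real script is
# only available from test.zip. The URL and reject patterns are assumptions.
wget -e robots=off -r -R 'tiki-editpage*,tiki-print*' http://wiki.example.org/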

Frank Liu <liug>
Fri 17 Aug 2007 10:45:37 PM UTC, original submission:

In some (all?) cases, wget downloads "rejected" files and then deletes them afterwards. This is contrary to expectations, and I do not know the reason for this decision.

Before working on this bug, care should be taken to reproduce it against a canonical version of wget, as this bug was reported against Fedora Core 6, and Red Hat has heavily modified wget relative to 1.10.2 (mainly by bringing in a large amount of code from 1.11 development).

From Frank Liu, who originally submitted this description in a comment on bug 20454:

I am using wget on a Fedora Core 6 box, trying to mirror a tikiwiki site.

Here is part of the log:

Micah Cowan <micahcowan>
Project Administrator

 


Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by the_bishop (Posted a comment)
  • -unavailable- added by zenaan (Posted a comment)
  • -unavailable- added by zenaan
  • -unavailable- added by juanlope (Posted a comment)
  • -unavailable- added by liug (Posted a comment)
  • -unavailable- added by micahcowan (Submitted the item)


    Follow 8 latest changes.

    Date                              Changed by   Updated field     Previous value => Replaced by
    Thu 04 Nov 2010 05:11:07 PM UTC   zenaan       Carbon-Copy       - => Added zenaan
    Fri 02 Oct 2009 07:26:02 PM UTC   micahcowan   Planned Release   1.14 => 1.13
    Fri 02 Oct 2009 07:26:02 PM UTC   micahcowan   Status            Confirmed => Duplicate
    Fri 02 Oct 2009 07:26:02 PM UTC   micahcowan   Open/Closed       Open => Closed
    Thu 21 Aug 2008 11:46:02 PM UTC   micahcowan   Planned Release   1.15 => 1.14
    Wed 22 Aug 2007 11:34:57 PM UTC   micahcowan   Carbon-Copy       Removed -unavailable- => -
    Wed 22 Aug 2007 05:51:51 PM UTC   micahcowan   Status            None => Confirmed
    Fri 17 Aug 2007 10:45:37 PM UTC   micahcowan   Carbon-Copy       - => Added -unavailable-
