GNU Wget - Bugs: bug #23348, Incorrect exit status on recursive download

 
 

bug #23348: Incorrect exit status on recursive download

Submitter:  None
Submitted:  Sat 24 May 2008 07:46:05 PM UTC
   
 
Category:          Program Logic
Severity:          3 - Normal
Priority:          7 - High
Status:            Fixed
Privacy:           Public
Assigned to:       micahcowan
Originator Name:   Cipriano Groenendal
Originator Email:  -email is unavailable-
Open/Closed:       Closed
Release:           1.10.2
Operating System:  GNU/Linux
Reproducibility:   Every Time
Fixed Release:     1.12
Planned Release:   1.12
Regression:        None
Work Required:     1 - Days
Patch Included:    None



Fri 28 Aug 2009 07:01:53 AM UTC, comment #8: 

Fixed in 62f8104d1814.

Micah Cowan <micahcowan>
Fri 12 Jun 2009 07:34:36 PM UTC, comment #7: 

Probably the lower-priority error conditions can wait for a later release, where perhaps they could also be communicated more granularly, via the SIDB, rather than a one-answer-fits-the-whole-session solution.

In the meantime, we must DTRT when an explicitly requested URL could not be fetched for any reason. We'll have to fudge for the case where that URL had already been blacklisted from a previous fetch, but that's not a particularly common case. This should be easy to implement.

Second priority would be to return an error for network failures. If the "normal" fetch couldn't happen, users need to know about it.

Micah Cowan <micahcowan>
Tue 10 Feb 2009 11:33:24 PM UTC, comment #6: 

Revised priorities:

  • write failure/system errors
  • any sort of error preventing retrieval of explicitly-requested URL (not found, auth required, forbidden)
  • network or lookup failure for an URL we didn't explicitly request, but on a host corresponding to an URL we requested.
  • network or lookup failure, but for a host that doesn't correspond to an explicitly-requested URL (when -H is in effect)
  • any other sort of error preventing retrieval of recursively-found links
  • robots/no-follow forbids


Note that distinguishing errors based on a "host corresponding to an URL we requested" would require significant adjustments, whereas "a URL we requested" is itself an easy thing to determine, since it will be the only URL passed to retrieve_tree() (things do get a little more complicated when that URL had already reached the blacklist, having been found via recursion).
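
As an illustration of that point only (standalone sketch code with made-up names and data; apart from RETROK and FWRITEERR nothing here is taken from Wget's source), ranking a failure of the explicitly requested URL above failures of recursively-found links could look roughly like this:

  /* Sketch only: fetches[0] stands for the single URL handed to
     retrieve_tree(); the rest were found via recursion.  */
  #include <stdio.h>

  typedef enum { RETROK = 0, FWRITEERR, HTTPERR } uerr_t;  /* hypothetical subset */

  struct fetch { const char *url; uerr_t status; };

  /* A write error is always fatal; otherwise the first failure wins,
     and since index 0 is the requested URL, its failure takes
     precedence over failures of recursively-found links.  */
  static uerr_t
  session_status (const struct fetch *fetches, size_t n)
  {
    uerr_t worst = RETROK;
    size_t i;
    for (i = 0; i < n; i++)
      {
        if (fetches[i].status == FWRITEERR)
          return FWRITEERR;
        if (worst == RETROK && fetches[i].status != RETROK)
          worst = fetches[i].status;
      }
    return worst;
  }

  int
  main (void)
  {
    struct fetch session[] = {
      { "http://www.example.org/nonexistant", HTTPERR },  /* the -r start URL */
      { "http://www.example.org/robots.txt",  RETROK  },
    };
    size_t n = sizeof session / sizeof session[0];
    printf ("wget would exit %d\n", session_status (session, n) != RETROK);
    return 0;
  }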

Micah Cowan <micahcowan>
Tue 10 Feb 2009 11:29:11 PM UTC, comment #5: 

Wget should definitely return an error upon any type of failure for any explicitly-specified URL arguments. And, as someone else mentioned, for conditions related to actual problems, such as disk or network errors. But then, what about network errors on non-explicitly-specified hosts when -H is in force?

Things we'd ideally distinguish in error codes, in rough order of priority:

  • write failure/system errors
  • network or lookup failure
  • any other sort of error preventing retrieval of explicitly-requested URL (not found, auth required, forbidden)
  • network or lookup failure, but for a host that doesn't correspond to an explicitly-requested URL (when -H is in effect)
  • any other sort of error preventing retrieval of recursively-found links
  • robots/no-follow forbids


Could potentially use a bitmask for these, since in the case of non-fatal errors, more than one might apply. Of course, certain errors are important enough that we wouldn't really care that much whether or not some of the lower-priority ones occurred. Using a bitmask would mean we could only distinguish at most 8 different types of error.
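
Just to illustrate the bitmask idea (hypothetical flag names; nothing below is from Wget's source), the flags might be accumulated over a session like this:

  /* Sketch only: one bit per error category; since the exit status is
     eight bits wide, at most eight categories fit.  */
  #include <stdio.h>

  enum
  {
    ERR_WRITE    = 1 << 0,  /* write failure / system errors                   */
    ERR_NET      = 1 << 1,  /* network or lookup failure                       */
    ERR_EXPLICIT = 1 << 2,  /* explicitly requested URL could not be retrieved */
    ERR_SPANHOST = 1 << 3,  /* failure on a host we never asked for (-H)       */
    ERR_RECURSE  = 1 << 4,  /* recursively-found link could not be retrieved   */
    ERR_ROBOTS   = 1 << 5   /* retrieval forbidden by robots.txt / nofollow    */
  };

  int
  main (void)
  {
    int session_errors = 0;

    /* Each non-fatal problem during the session just sets its bit...  */
    session_errors |= ERR_RECURSE;
    session_errors |= ERR_ROBOTS;

    /* ...and a calling script can test exactly the bits it cares about.  */
    if (session_errors & ERR_EXPLICIT)
      puts ("an explicitly requested URL failed");

    printf ("exit status would be %d\n", session_errors);
    return 0;
  }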

Micah Cowan <micahcowan>
Fri 23 Jan 2009 10:46:31 AM UTC, comment #4: 

Link to the discussion [1] in previous post :

http://osdir.com/ml/web.wget.general/2005-09/msg00091.html

Michel <mbriand>
Fri 23 Jan 2009 09:57:00 AM UTC, comment #3: 

Hi all,

This problem was already discussed on the Wget mailing list a few years ago [1]; I found it with a simple Google search.

My opinion is that wget should not return an error when exiting recursive mode, even if some files failed to download. The default semantics should be that individual file failures are acceptable.

This is because in -m or -r mode wget tries many URLs that the user never specified; for example, it will try "robots.txt". Why should it return an error when such an unspecified URL fails?

But, if everyone agrees, wget could gain a new option to change this behavior, covering clearly identified cases with new semantics.

In my view, one new behavior could be for wget to return an error on any authentication failure (HTTP 401). This is very important for scripts that use wget and need to be sure their authentication succeeded.

Obviously wget should also return an error if there is a problem on the local machine: disk full, socket error, and so on.

What do you think ?

Michel

Anonymous
Fri 01 Aug 2008 11:55:20 PM UTC, comment #2: 

The following patch should fix this bug, but I'm too tired to be sure :)

diff -r d27e06e0e404 src/recur.c
--- a/src/recur.c       Tue Jul 22 13:33:42 2008 -0700
+++ b/src/recur.c       Sat Aug 02 00:52:12 2008 +0100
@@ -447,7 +447,7 @@ retrieve_tree (const char *start_url)
   else if (status == FWRITEERR)
     return FWRITEERR;
   else
-    return RETROK;
+    return status;
 }
 
 /* Based on the context provided by retrieve_tree, decide whether a

Joao Fernando Ferreira <jff>
Sat 24 May 2008 08:59:19 PM UTC, comment #1: 

A nonzero exit status should probably apply only when no files were successfully downloaded. Some would like a notice when any of the recursively-found files fail, but that doesn't apply to all cases. Or maybe that's fine, too. IMO, having a single exit status for batched downloads is not terribly useful; providing status for each download individually would be more helpful.

Micah Cowan <micahcowan>
Sat 24 May 2008 07:46:05 PM UTC, original submission:  

When wget is run with the -r(ecursive) flag, it always exits with a status of 0, even if the initial file does not exist, unlike when run without this flag.


Examples:
[cipri@orion ~]$ wget http://www.example.org/nonexistant; echo $?
1


[cipri@orion ~]$ wget -r http://www.example.org/nonexistant; echo $?
0


Anonymous

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by mbriand (Posted a comment)
  • -email is unavailable- added by jff (Posted a comment)
  • -email is unavailable- added by micahcowan (Posted a comment)
  • -email is unavailable- added by None (Submitted the item)

    12 latest changes:

    Date        Changed by   Updated Field     Previous Value => Replaced by
    2009-08-28  micahcowan   Status            Confirmed => Fixed
                             Open/Closed       Open => Closed
                             Fixed Release     None => 1.12
    2009-07-06  micahcowan   Work Required     None => 1 - Days
    2008-11-04  micahcowan   Status            Needs Investigation => Confirmed
    2008-08-21  micahcowan   Planned Release   1.15 => 1.12
    2008-08-02  micahcowan   Status            None => Needs Investigation
                             Assigned to       None => micahcowan
    2008-08-02  micahcowan   Priority          4 => 7 - High
    2008-05-24  micahcowan   Category          None => Program Logic
                             Priority          5 - Normal => 4
                             Planned Release   None => 1.15
