bug #50935: TEXTHTML not properly set if page is already downloaded

Submitter: None
Submitted: Thu 04 May 2017 12:08:05 AM UTC
Category: Program Logic
Severity: 3 - Normal
Priority: 5 - Normal
Status: Confirmed
Privacy: Public
Assigned to: None
Originator Name:
Originator Email: -email is unavailable-
Open/Closed: Open
Release: trunk
Operating System: GNU/Linux
Reproducibility: Every Time
Fixed Release: None
Planned Release: None
Regression: None
Work Required: None
Patch Included: None


Sat 13 May 2017 03:24:19 PM UTC, comment #5: 


> As for making a HEAD request, how expensive is that?


Not expensive; the response contains only the HTTP headers, the body/payload is empty. And with -p this would be just one request/response cycle.
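
For a concrete illustration (a sketch only, not anything Wget does internally): with curl, the -I flag sends a HEAD request, and the complete response is just a handful of header lines, including the Content-Type:

curl -sI 'https://news.ycombinator.com/item?id=14245538'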

> Is using a heuristic, like checking whether it begins with "<!DOCTYPE html" or an html tag, too messy?


You can find a description of what to do here: https://www.w3.org/TR/2011/WD-html5-20110113/parsing.html#determining-the-character-encoding
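
As a rough sketch of such a heuristic (an illustration only, not code from Wget; the file name below is just what -x would have produced for the example URL), one could sniff the first bytes of an already-downloaded file:

head -c 1024 'news.ycombinator.com/item?id=14245538' | grep -qiE '<!DOCTYPE html|<html' && echo "treat as HTML"

Note that the real sniffing rules in the linked document also have to deal with a possible BOM and leading whitespace or comments, so this one-liner is only an approximation.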

Also see two related Wget2 issues that I opened due to your report here:
https://gitlab.com/gnuwget/wget2/issues/209
https://gitlab.com/gnuwget/wget2/issues/210

The 'xattr' feature could give us the MIME type of a downloaded document, but extended attributes are not supported on all file systems.
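
Purely hypothetically (user.mime_type is an invented attribute name here; Wget does not currently write it), such a stored MIME type could later be read back with the getfattr tool from the attr package:

getfattr -n user.mime_type 'news.ycombinator.com/item?id=14245538'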

> Anyways, is wget2 ready for daily use at all? Are there stable releases?


No releases yet, but it is pretty stable (automated CI testing on Debian, CentOS, Fedora, OSX, and Solaris; manual testing on Windows).
Not all features/options from Wget 1.x are implemented yet, though (but Wget2 already has many more features).

We badly need reports from testers, so if you can spare the time, give it a try and open as many issues as you like on https://gitlab.com/gnuwget/wget2/issues.

There is currently a lot of activity on fixing issues (three GSoC students alone are performing very well).

Tim Ruehsen <rockdaboot>
Group administrator
Fri 12 May 2017 03:35:12 PM UTC, comment #4: 

The problem with the replacement commands you recommended (replacing -pH with -pHE in the second) is that they redownload the files whose extensions they adjust.

As for making a HEAD request, how expensive is that?  It would be ideal for my use case if it didn't have to make any network requests at all for already-downloaded files.  Is using a heuristic, like checking whether it begins with "<!DOCTYPE html" or an html tag, too messy?

Anyways, is wget2 ready for daily use at all?  Are there stable releases?

Thanks for the quick response.

Anonymous
Fri 12 May 2017 08:02:24 AM UTC, comment #3: 

Sorry, my stupidity :-)
I stopped at the first command and everything was fine, so I didn't really check the next one :-(

You are right: if the file exists, the -p -nc combination says 'File ... already there; not retrieving.' and does nothing.

Instead it should read and parse that file (after checking that it really is HTML or CSS). Wget currently has no content heuristic, so it would have to make a HEAD request to check the Content-Type. What Wget actually does is look at the file name extension.
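
As an illustration of the HEAD-based check (a command-line sketch, not what Wget does today), the Content-Type can be obtained without downloading the body; --spider makes Wget send a HEAD request, and -S prints the server's headers, which Wget logs to stderr:

wget --spider -S 'https://news.ycombinator.com/item?id=14245538' 2>&1 | grep -i 'Content-Type:'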

So you can do the trick with


wget -xHE -nc 'https://news.ycombinator.com/item?id=14245538'
wget -pH -nc 'https://news.ycombinator.com/item?id=14245538'


I will add this issue as a reference in Wget2 development, where we will do it correctly (using a HEAD request).

Thanks for your report!

Tim Ruehsen <rockdaboot>
Group administrator
Thu 11 May 2017 03:36:27 PM UTC, comment #2: 

I'm not sure what you mean.  It does not currently do any parsing, but it should.  The -p option in the second call should make it parse the page for its page requisites, but it does not.

Anonymous
Wed 10 May 2017 02:19:22 PM UTC, comment #1: 

In your examples no HTML parsing is involved, so this report doesn't make any sense to me.
Could you please provide further information!?


Tim Ruehsen <rockdaboot>
Group administrator
Thu 04 May 2017 12:08:05 AM UTC, original submission:  

Running (for example):

wget -xH -nc 'https://news.ycombinator.com/item?id=14245538'
wget -pH -nc 'https://news.ycombinator.com/item?id=14245538'

results in wget not checking the resulting HTML file for links.  This is caused by wget saving the file without an .html suffix and then only checking the file name extension to determine whether it is an HTML file (this check even carries a "#### Bogusness alert." comment in the source).  This could possibly be fixed by checking the file for a "<!DOCTYPE html" header, or checking whether it begins with an "<html>" tag.
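
The mismatch is easy to see with the file utility (the exact saved path depends on the options used and may differ): it identifies the saved page as an HTML document even though the name carries no .html suffix, which is exactly the blind spot of the extension-only check:

file 'news.ycombinator.com/item?id=14245538'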

Anonymous

 
