GNU Wget - Bugs: bug #50516, domain.com vs www.domain.com site duplication

 
 

bug #50516: domain.com vs www.domain.com site duplication

Submitted by:  Ages Ayemtwo <ages2500>
Submitted on:  Sat 11 Mar 2017 08:01:57 PM UTC  
 
Category: Feature Request
Severity: 3 - Normal
Priority: 5 - Normal
Status: None
Privacy: Public
Assigned to: None
Originator Name:
Open/Closed: Open
Release: None
Operating System: None
Reproducibility: None
Fixed Release: None
Planned Release: None
Regression: None
Work Required: None
Patch Included: No


 

Mon 13 Mar 2017 09:53:08 AM UTC, comment #1:

Sounds like someone messed up his website setup(s) ;-)

Almost every (properly configured) site does it this way:
set up everything on http://www.domain.com and redirect domain.com to http://www.domain.com.
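
For what it's worth, a quick client-side check of that convention can be sketched with standard wget options (example.com here is only a stand-in for the real host):

# Request the bare domain but don't follow any redirect; -S prints the
# server's response headers. A site set up as described above should
# answer with a 301/302 whose Location header points at the www. host.
wget -S --max-redirect=0 -O /dev/null http://example.com/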

This kind of mess also isn't limited to two sites; it could involve 3, 4, ... N sites, possibly with complex relationships (owner, content, cookies, referer). It is not easily solvable from the client's side.

I would like to hear some more voices on this. If not, I'll close this as WONTFIX.

Tim Ruehsen <rockdaboot>
Project Administrator
Sat 11 Mar 2017 08:01:57 PM UTC, original submission:

When retrieving http://www.domain.com/, the site author may have linked a file to domain.com, without the www. prefix. The reverse also occurs.

Either scenario results in the website being downloaded twice, creating a haphazard mesh of file links between:

/domain.com/

and

/www.domain.com/

It also means that 404 pages will link to http://domain.com/ in the HTML files of one folder, and to http://www.domain.com/ in the other.

Even overlooking the local mess this creates, it puts extra strain on a large wget run by crawling and downloading nearly twice as much data as it needs to.

Restricting the crawl to -D www.domain.com runs the risk of missing data. To ensure I get all of the data from the domain in question, I use -D domain.com.
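
As a rough sketch of the invocations involved (example.com stands in for the real domain, and -H is assumed so that links crossing between the two hostnames are followed at all):

# Recursive retrieval limited to the domain and its subdomains: pages
# reached via example.com and via www.example.com are saved under two
# separate host directories, duplicating the site locally.
wget -r -H -D example.com http://www.example.com/

# Partial workaround: -nH (--no-host-directories) drops the host prefix,
# so both hostnames share a single local tree; the pages are still
# crawled twice, so the extra bandwidth cost remains.
wget -r -H -nH -D example.com http://www.example.com/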

It would be nice to have an extra flag that makes wget treat domain.com and http://www.domain.com content as the same site and store it in the same folder without duplication.

I am not requesting that this be the default behaviour, but rather an additional flag/feature that treats http://www.domain.com and domain.com as coming from the same domain.

The following URL will exhibit this behavior in wget:

Ages Ayemtwo <ages2500>

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by rockdaboot (Posted a comment)
  • -unavailable- added by ages2500 (Submitted the item)
