bug #62782: Can't resume mirroring a website using --convert-links with the -c option. How to do it?

Submitter:  elias tsolis <estatistics>
Submitted:  Mon 18 Jul 2022 02:47:10 PM UTC
   
 
Category:  Feature Request
Severity:  3 - Normal
Priority:  5 - Normal
Status:  None
Privacy:  Public
Assigned to:  None
Originator Name:  elias
Open/Closed:  Open
Release:  None
Operating System:  GNU/Linux
Reproducibility:  None
Fixed Release:  None
Planned Release:  None
Regression:  None
Work Required:  None
Patch Included:  None

Mon 09 Sep 2024 05:59:57 PM UTC, comment #2: 

The fact that wget can't properly resume website mirrors has bitten me as well, and I wonder what the (technical) reasons for that are.

From the documentation of the `-nc, --no-clobber` option:

> Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.


Re-crawling local files sounds like the perfect solution to resuming a partial archive.
I tried to get that to work, but it just doesn't seem to be supported for my use case.
What I tried is the following (the assembled command is sketched right after this list):

- I don't use `--mirror`, because that implies `--timestamping`, which needs to stay disabled so that wget does not issue a HEAD request for every file.
  I know that my partial archive is up to date and don't need wget to re-check.
  Instead, I use `--recursive` and `--level inf`.
- I use `--convert-links`, and I'm not sure why that would be incompatible with `-nc`.
  It doesn't seem to conflict with being able to just re-crawl the files from disk?
- I also use `--adjust-extension` (previously `--html-extension`) and `--restrict-file-names=windows`,
  since the thing I'm crawling is a PHP forum with links like `showthread.php?pid=5`,
  which then get converted to nice filenames like `showthread.php@pid=5.html`.
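
For reference, a rough sketch of the invocation the list above describes (the forum URL is a placeholder, since the site is not named, and the exact command line is not quoted in this comment):

wget --recursive --level inf \
     --no-clobber \
     --convert-links \
     --adjust-extension \
     --restrict-file-names=windows \
     https://forum.example.com/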

I understand that another challenge wget faces is that URLs may just redirect, and those need to be followed.
The filename on disk will be what any redirects ultimately led to.

However, let's assume some URL does redirect. That would imply that no file for the original URL would be on disk, correct?
Or approaching that logic backwards: if wget checks for the presence of a file (by following the renaming rules of `--adjust-extension` and `--restrict-file-names`) and it is found on disk, would that not imply that it was already crawled and did not redirect?
Of course, that logic only works if we assume the redirects never change,
but with timestamping turned off we're already in "potentially stale" territory anyway and fine with that.
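
As a purely illustrative bash sketch of the check described above (this is not wget code; the directory and link names are the placeholder/example ones from the bullets in this comment):

# Map a link to the local name that --adjust-extension and
# --restrict-file-names=windows would give it, and treat its presence on
# disk as "already crawled, no redirect left to follow".
host_dir='forum.example.com'               # placeholder per-host directory
link='showthread.php?pid=5'                # example link from the bullet above
local_file="$host_dir/${link//\?/@}.html"  # '?' becomes '@', '.html' is appended
if [ -f "$local_file" ]; then
    echo "present: parse $local_file from disk instead of re-fetching"
else
    echo "absent: fetch $link (it may redirect elsewhere)"
fi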

Considering all this, should it not be possible to resume downloading partial archives? Even without any link cache files?

Does anybody know if there are more good reasons why it doesn't work this way?
Or maybe whether this could be done but just requires someone to implement it?



Anonymous
Mon 18 Jul 2022 03:02:03 PM UTC, comment #1: 

It is a huge downside of wget that it can't resume the download of a mirrored website.

It appears that, for whatever reason, the wget developers refuse to implement an offline cache of the URLs / converted links that wget has already processed.


elias tsolis <estatistics>
Mon 18 Jul 2022 02:47:10 PM UTC, original submission:  


Using the command below I tried to mirror a website, but I had several interruptions (5-6). When I tried to restart the mirroring, I saw that wget was re-downloading all of the files (40000+ files).

wget --mirror --page-requisites --html-extension --convert-links -c www.literotica.com/ 2>&1 | tee -a wget_log

"
So what to do?

I have read in forums that the --convert-links and -nc options do not work together.

I set all the downloaded wget files (not the directories) to read-only on Linux. That didn't work: wget stopped at the first file (index.html). Probably --convert-links needs write access to the already-downloaded files.
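
(Presumably something along these lines; the exact command used to make the files read-only is not stated in the report:)

find www.literotica.com/ -type f -exec chmod a-w {} +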


Questions / Requests

1) Why doesn't wget keep a list of the links that have already been converted, so that there is no need to rewrite already-downloaded files?
2) Why isn't there an explicit option to not rewrite already-downloaded files even when --convert-links is used? What is the problem here? Why can't wget simply read the already-downloaded files, in whatever condition they are, without rewriting them?
- I suppose the conversion from "https://www.example.com" to "/home/" should be fairly simple (see the hypothetical sketch just below).
So why does wget fail to proceed when I set the already-downloaded files (not the directories) to read-only?
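
As a purely hypothetical illustration of the kind of rewrite imagined above (this is not how wget's --convert-links is implemented, and the host name is only the example one), turning absolute links to the mirrored host into relative links to the local copies could look like:

sed -i 's#https\?://www\.example\.com/#./#g' www.example.com/index.html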

What other options do I have to continue the mirroring with --convert-links without re-downloading all the files again?

I have seen this question asked so many times on Stack Overflow, and everyone says there is no solution.
Is it really so hard to implement a solution to this problem?

Sincerely,
Elias Estatisticseu







elias tsolis <estatistics>

 
