bug #64203: wget --warc-dedup has bogus behavior against duplicated digest

Submitter: None
Submitted: Wed 17 May 2023 01:03:48 AM UTC
Category: Program Logic
Severity: 3 - Normal
Priority: 5 - Normal
Status: None
Privacy: Public
Assigned to: None
Originator Name: plcp
Originator Email: -email is unavailable-
Open/Closed: Open
Release: trunk
Operating System: GNU/Linux
Reproducibility: Every Time
Fixed Release: None
Planned Release: None
Regression: None
Work Required: None
Patch Included: No


Wed 17 May 2023 01:03:48 AM UTC, original submission:  

When wget encounters several URLs urlA, urlB, urlC sharing the same digest hash1 in the .cdx file given to --warc-dedup, it only registers the last pair (urlC, hash1) as a potential candidate for deduplication.

When crawling, it then rejects (urlA, hash1) and (urlB, hash1) as candidates for a revisit record, even when they are found as-is during the crawl.
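A minimal C sketch of the problematic pattern, for illustration only (the data structures and function names below are hypothetical, not taken from wget's warc.c): when CDX entries are stored in a table keyed by the digest alone, each later URL with the same digest replaces the earlier one, and the crawl-time check then only accepts the last URL.

/* Illustrative sketch only -- not wget's actual data structures.
 * A table keyed by digest alone keeps a single URL per digest, so
 * loading a CDX file where urlA, urlB, urlC all share hash1 leaves
 * only the last pair (urlC, hash1) available for deduplication. */
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 16

struct cdx_entry {
  char digest[64];   /* key: payload digest only        */
  char url[256];     /* value: last URL seen for digest */
};

static struct cdx_entry table[MAX_ENTRIES];
static int n_entries;

/* Insert keyed by digest: a later URL with the same digest
 * overwrites the earlier one. */
static void
register_cdx_pair (const char *url, const char *digest)
{
  for (int i = 0; i < n_entries; i++)
    if (strcmp (table[i].digest, digest) == 0)
      {
        strcpy (table[i].url, url);   /* earlier URL is lost here */
        return;
      }
  if (n_entries < MAX_ENTRIES)
    {
      strcpy (table[n_entries].digest, digest);
      strcpy (table[n_entries].url, url);
      n_entries++;
    }
}

/* Crawl-time check: the stored URL must match the crawled URL,
 * so (urlA, hash1) and (urlB, hash1) are rejected. */
static int
is_dedup_candidate (const char *url, const char *digest)
{
  for (int i = 0; i < n_entries; i++)
    if (strcmp (table[i].digest, digest) == 0)
      return strcmp (table[i].url, url) == 0;
  return 0;
}

int
main (void)
{
  register_cdx_pair ("http://perdu.com", "hash1");
  register_cdx_pair ("https://perdu.com", "hash1");

  printf ("http : %d\n", is_dedup_candidate ("http://perdu.com", "hash1"));   /* prints 0 */
  printf ("https: %d\n", is_dedup_candidate ("https://perdu.com", "hash1"));  /* prints 1 */
  return 0;
}

This reproduces the behavior described above: the lookup for http://perdu.com fails even though the exact pair was present in the input CDX data.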

Steps to reproduce with a toy example. First, we build a CDX file:

% wget 'http://perdu.com' 'https://perdu.com' --delete-after --warc-file=testA --warc-cdx=on

We can visualize the two (url,digest) pairs sharing the same digest:

% cat testA.cdx

Then we can verify that deduplication works for the second pair:

% wget 'https://perdu.com' --delete-after --warc-file=testB --warc-dedup=testA.cdx

And we see that deduplication fails for the first pair, whose entry was superseded by the later https entry when the CDX file was loaded:

% wget 'http://perdu.com' --delete-after --warc-file=testC --warc-dedup=testA.cdx

We can confirm that only testB got its revisit record:

% zgrep revisit *.warc.gz

In practice, this means the files that are duplicated most often are precisely the ones most commonly NOT deduplicated. This is noticeable during repeated crawls of websites that mostly do not change: the first crawl downloads the whole website, and every subsequent crawl re-downloads only the files for which deduplication failed (typically fonts, http/https siblings, resources present at several URLs of the website, and so on).

AFAIK the current implementation uses the digest as the key of an in-memory hash table, instead of using the (url, digest) pair.
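A hedged sketch of the alternative keying, assuming the table is keyed on the full (url, digest) pair; the names (register_cdx_pair, is_dedup_candidate) and the list standing in for a hash table are illustrative placeholders, not wget's actual API:

/* Illustrative fix sketch only: key the in-memory table on the
 * (url, digest) pair so every URL sharing a digest stays a valid
 * revisit candidate.  Names are hypothetical, not wget's. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cdx_pair {
  char *url;
  char *digest;
  struct cdx_pair *next;
};

static struct cdx_pair *pairs;  /* simple linked list standing in for a hash table */

static void
register_cdx_pair (const char *url, const char *digest)
{
  struct cdx_pair *p = malloc (sizeof *p);
  p->url = strdup (url);
  p->digest = strdup (digest);
  p->next = pairs;
  pairs = p;                    /* nothing is overwritten: all pairs are kept */
}

/* Crawl-time check: a record is a revisit candidate iff the exact
 * (url, digest) pair was present in the input CDX file. */
static int
is_dedup_candidate (const char *url, const char *digest)
{
  for (struct cdx_pair *p = pairs; p; p = p->next)
    if (strcmp (p->digest, digest) == 0 && strcmp (p->url, url) == 0)
      return 1;
  return 0;
}

int
main (void)
{
  register_cdx_pair ("http://perdu.com", "hash1");
  register_cdx_pair ("https://perdu.com", "hash1");

  printf ("http : %d\n", is_dedup_candidate ("http://perdu.com", "hash1"));   /* prints 1 */
  printf ("https: %d\n", is_dedup_candidate ("https://perdu.com", "hash1"));  /* prints 1 */
  return 0;
}

With this keying, both the http and https pairs from testA.cdx would remain valid revisit candidates, so testC would also get its revisit record.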



 
