bug #60287: Windows recursive download escapes utf8 URLs twice

Submitter:  Cameron Tacklind <cinderblock>
Submitted:  Thu 25 Mar 2021 09:09:42 AM UTC
   
 
Category:  None
Severity:  3 - Normal
Priority:  5 - Normal
Status:  None
Privacy:  Public
Assigned to:  None
Open/Closed:  Open
Release:  1.20
Operating System:  Microsoft Windows
Reproducibility:  Every Time
Fixed Release:  None
Planned Release:  None
Regression:  None
Work Required:  None
Patch Included:  No


Mon 29 Mar 2021 05:05:16 PM UTC, comment #13: 

I thank you for the pointer to the encoding options that I had missed. That has at least allowed my wget downloads to work.

I maintain that this is a bug in wget's recursive download feature as it seems, at least to me, to be unexpected behavior.

I recognize that I am not as familiar with the HTTP & HTML standards, nor with all the real-world cases of HTTP servers running under various locales, as some people are. So maybe this is expected behavior, due to ambiguity in the specifications or to real-world limitations.

This is why I created a bug here in the bug tracking system and, for instance, not a question on Stack Overflow.

If this is not a bug, I expect the maintainers will explain the reasoning that we're missing.

Cameron Tacklind <cinderblock>
Mon 29 Mar 2021 05:19:46 AM UTC, comment #12: 

You are welcome to send patches that implement what you think is the correct behavior in Wget.  At the time, based on my study of the Wget sources and its basic design for fetching Web pages, my conclusion was that the only reliable way for Wget on Windows to deal with non-ASCII characters in URLs specified by Web pages is to provide Wget with the remote and local encodings, especially since UTF-8 support on Windows is rudimentary at best.  I thought I was doing fine by helping you and others deal with these situations, by explaining how to use those options to your benefit...

Eli Zaretskii <eliz>
Mon 29 Mar 2021 03:28:21 AM UTC, comment #11: 

Except a URI is always in a restricted character set, by design, to make all the encoding issues go away.

I hear the point about writing the file to disk, and making sure the path used on disk can be reliably generated from an arbitrary encoding scheme. But that should happen independently of concatenating the relative URI with the base URI, both of which are always drawn from a restricted set of octets, a subset of the printable ASCII characters.

So, while I agree that a conversion to the local charset needs to happen, that should only happen with regard to the file system file name, which is independent from the request line sent to the HTTP server.

The 404 is exactly the problem that I think is a bug. The downloaded HTML file has embedded <a> tags with `href` attributes that are never outside the printable ASCII range.

This 404 happens, as far as I can tell, because wget assumes the local character set is relevant, instead of doing what the HTML/HTTP standards specify (as far as I understand them): no character-encoding translation at all.
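
As an illustration of that merge step (a minimal Python sketch, not Wget's actual code; urljoin stands in here for the merge() seen in the debug traces below):

from urllib.parse import urljoin

base = 'http://example.com/wget-test.html'
rel = 'space-ok%20cyrillic-not%D0%B3.txt'
# Pure ASCII string manipulation; the percent-escapes pass through untouched.
print(urljoin(base, rel))
# -> http://example.com/space-ok%20cyrillic-not%D0%B3.txt

No charset needs to be known for this step; the charset only matters later, when choosing a file name for the result.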

Cameron Tacklind <cinderblock>
Sun 28 Mar 2021 06:57:02 AM UTC, comment #10: 

Without converting charsets, it would be difficult to rely on certain library functions and support certain features.

For example, locale-dependent C library functions work only with the locale's encoding, and will produce wrong results if presented with strings encoded differently.  The IRI support needs to work in UTF-8 internally.  And when writing Web pages to disk, Wget needs to encode the page name so that it would be acceptable as a file name by the local filesystem.

That is why conversion to the locale's charset is rather necessary. Using the original bytes might work for some operations, but not for others, so keeping the original bytes would need some logic for where they can and cannot be used, which is a complication.  It is better to convert once, and then forget about it.

The 404 error is most probably because Wget does attempt to convert encoding, but does it incorrectly when you don't tell it the actual encodings.  So the re-encoded URL is garbled.

Eli Zaretskii <eliz>
Sat 27 Mar 2021 08:19:35 PM UTC, comment #9: 

So is this a fundamental problem with HTML? That you can't declare in the HTML that the URL-encoded bytes of URLs in an a.href are to be interpreted with a particular charset?

Note the problem I'm seeing causes a 404 error before it even gets the file with the non-ascii file name.

Wget is reading a file from disk with only ASCII in it. The ASCII in the a.href of the downloaded file needs to be sent verbatim in the subsequent HTTP request line.

It seems to me that converting to the local character set should not happen at all.

Cameron Tacklind <cinderblock>
Sat 27 Mar 2021 06:43:55 AM UTC, comment #8: 


> Is this because wget first downloads the html file and then reads the contents off disk


No.  It's because Wget downloads the pages you told it to, and saves them as disk files.  Any links in the downloaded pages that lead to other pages produce additional disk files (e.g., if you told Wget to download recursively).

IOW, the file-name encoding issue happens when a Web page needs to be saved to a file for some reason.

> If the bytes were downloaded with the correct encoding, and written to the file system with the correct encoding, I would expect it to be able to parse the file with the correct encoding.


What is the "correct encoding", though?

> the file `wget-test.html` has no non-ascii characters in it


Of course, it doesn't: the non-ASCII characters appear when we decode the hex-encoded bytes.
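
For example, a quick Python sketch of that decoding (assuming the bytes are meant as UTF-8):

from urllib.parse import unquote

# %20 decodes to an ASCII space; %D0%B3 decodes to the Cyrillic letter 'г'.
print(unquote('space-ok%20cyrillic-not%D0%B3.txt'))
# -> space-ok cyrillic-notг.txt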


Eli Zaretskii <eliz>
Fri 26 Mar 2021 08:08:06 PM UTC, comment #7: 


> Not the local one.

Is this because wget first downloads the html file and then reads the contents off disk to parse and find links before initiating subsequent http requests?

> And not every page you download has these headers, so the remote one isn't always known, either.

On the server I control, I've set nginx to always add the full "Content-Type: text/html; charset=utf8" header.

> The browser just shows the page, it doesn't save it to a disk file.  So encoding of the page's name isn't an issue for the browser, as it is for Wget.

This confirms my assumption that wget reads the file off disk to parse it for links. For instance, if wget downloaded to memory, and parsed the HTML from memory, there couldn't possibly be encoding issues, because the filesystem isn't used?

If the bytes were downloaded with the correct encoding, and written to the file system with the correct encoding, I would expect it to be able to parse the file with the correct encoding.

But this makes me think about it more: the file `wget-test.html` has no non-ASCII characters in it:


$ file -bi wget-test.html
text/html; charset=us-ascii


Cameron Tacklind <cinderblock>
Fri 26 Mar 2021 07:48:56 PM UTC, comment #6: 


> Isn't the encoding specified in the HTTP header?


Not the local one.  (And not every page you download has these headers, so the remote one isn't always known, either.)

You must specify the local encoding, especially on MS-Windows, because Windows filesystems aren't agnostic about the encoding of file names: they don't allow arbitrary byte sequences to be part of a file name.  The file names are stored on disk in UTF-16, so the file I/O APIs on Windows must convert each file name to UTF-16, and for that they need to know the name's original encoding.
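
A minimal Python sketch of why the encoding must be known (illustrative only, using the bytes from this report):

raw = bytes.fromhex('d0b3')   # the percent-decoded bytes of %D0%B3
print(raw.decode('utf-8'))    # 'г'  (one Cyrillic letter)
print(raw.decode('cp1252'))   # 'Ð³' (two CP1252 characters)
# Both decodings are valid; the Windows file I/O APIs have to be told
# which interpretation to use when building the UTF-16 file name.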

> It feels like a bug because my browser handles the links just fine, without the charset specified by the server.


The browser just shows the page, it doesn't save it to a disk file.  So encoding of the page's name isn't an issue for the browser, as it is for Wget.

Eli Zaretskii <eliz>
Fri 26 Mar 2021 07:27:51 PM UTC, comment #5: 

Isn't the encoding specified in the HTTP header?


Content-Type: text/html; charset=utf8


Maybe I need the charset in the html?


<head><meta charset="UTF-8"></head>


Neither of these seem to get wget to behave as I expect.

It feels like a bug because my browser handles the links just fine, without the charset specified by the server.

Is the difference that browsers assume utf8 and wget assumes the local charset?

Cameron Tacklind <cinderblock>
Fri 26 Mar 2021 07:49:19 AM UTC, comment #4: 

Why does this feel like a bug to you?  How can Wget be expected to guess the correct encoding, if you don't tell it?

Eli Zaretskii <eliz>
Fri 26 Mar 2021 07:23:06 AM UTC, comment #3: 

Thank you. I had not tried those options.

Curiously, the only option that I needed was --local-encoding=utf8. The --remote-encoding option did not change the detected URI encoding from CP1252.

Without --local-encoding=utf8


Loaded example.com/wget-test.html (size 71).
URI encoding = 'CP1252'
example.com/wget-test.html: merge('http://example.com/wget-test.html', 'space-ok%20cyrillic-not%D0%B3.txt') -> http://example.com/space-ok%20cyrillic-not%D0%B3.txt
converted 'http://example.com/space-ok%20cyrillic-not%D0%B3.txt' (CP1252) -> 'http://example.com/space-ok cyrillic-notг.txt' (UTF-8)
appending 'http://example.com/space-ok%20cyrillic-not%C3%90%C2%B3.txt' to urlpos.


With --local-encoding=utf8


Loaded example.com/wget-test.html (size 71).
URI encoding = 'utf8'
example.com/wget-test.html: merge('http://example.com/wget-test.html', 'space-ok%20cyrillic-not%D0%B3.txt') -> http://example.com/space-ok%20cyrillic-not%D0%B3.txt
converted 'http://example.com/space-ok%20cyrillic-not%D0%B3.txt' (utf8) -> 'http://example.com/space-ok cyrillic-notг.txt' (UTF-8)
appending 'http://example.com/space-ok%20cyrillic-not%D0%B3.txt' to urlpos.
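
Both traces come from runs with Wget's debug output enabled; the invocations would presumably be:

wget --debug -r http://example.com/wget-test.html
wget --debug --local-encoding=utf8 -r http://example.com/wget-test.html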


Regardless, this still feels like a bug to me. But maybe the issue is just how wget implements the recursive download and isn't really fixable?

Cameron Tacklind <cinderblock>
Thu 25 Mar 2021 09:39:36 AM UTC, comment #2: 

What was the locale on the GNU/Linux machine, where this "just works"?  I'm guessing it was a UTF-8 locale, in which case I'd try the same with a different locale.

I think you must use --remote-encoding=UTF-8 (and perhaps also a suitable --local-encoding) to make this work correctly on MS-Windows.  Did you try that?

Eli Zaretskii <eliz>
Thu 25 Mar 2021 09:13:55 AM UTC, comment #1: 

Likely duplicate that is 10 years old:

https://savannah.gnu.org/bugs/?30330

Cameron Tacklind <cinderblock>
Thu 25 Mar 2021 09:09:42 AM UTC, original submission:  

Steps to reproduce:
1. On a web server, create an HTML file with the contents:

<a href="space-ok%20cyrillic-not%D0%B3.txt">target-with-other-char</a>

2. Download that file recursively: `wget -r http://example.com/wget-test.html`

On Linux, we get the expected (truncated) result:

...
2021-03-25 02:01:59 (4.51 MB/s) - ‘example.com/wget-test.html’ saved [71]

--2021-03-25 02:01:59--  http://example.com/space-ok%20cyrillic-not%D0%B3.txt
...

However, on Windows, the URL-encoded UTF-8 character is mangled and the download fails:

...
2021-03-25 02:02:29 (4.51 MB/s) - ‘example.com/wget-test.html’ saved [71]

--2021-03-25 02:02:29--  http://example.com/space-ok%20cyrillic-not%C3%90%C2%B3.txt
...

Note that the space (%20) is not mangled.
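
That pattern matches a double encoding: the percent-escapes are decoded to bytes, the bytes are misread in a single-byte Windows codepage, and the result is re-encoded as UTF-8. A minimal Python sketch (CP1252 is an assumption here, matching a common Windows locale):

raw = bytes.fromhex('d0b3')  # the bytes behind %D0%B3 (UTF-8 for 'г')
print(raw.decode('cp1252').encode('utf-8').hex())
# -> c390c2b3, i.e. %C3%90%C2%B3, the mangled escape seen above

# ASCII bytes such as 0x20 map to the same character in CP1252 and
# UTF-8, so the space round-trips unchanged.
print(b'\x20'.decode('cp1252').encode('utf-8').hex())  # -> 20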

Cameron Tacklind <cinderblock>

 



 
