GNU Wget - Bugs: bug #21714, File name too long
bug #21714: File name too long
Submitter:        John Doe <johndoe32102002>
Submitted:        Tue 04 Dec 2007 05:29:29 AM UTC
Category:         Program Logic              Severity:          3 - Normal
Priority:         7 - High                   Status:            Fixed
Privacy:          Public                     Assigned to:       gscrivano
Originator Name:  John                       Open/Closed:       Closed
Release:          1.10.2                     Operating System:  GNU/Linux
Reproducibility:  Every Time                 Fixed Release:     None
Planned Release:  1.12.x                     Regression:        Yes
Work Required:    0 - Hours                  Patch Included:    No
Discussion locked!
Wed 05 Apr 2017 08:20:53 AM UTC, comment #56:
frank jackosn <frank1982>
Wed 24 Aug 2016 09:25:15 AM UTC, comment #55: The previous two comments seem to be spam, probably triggered by "File name too long" in the subject. Can someone delete them?
Frank Heckenbach <frank>
Wed 24 Aug 2016 06:19:56 AM UTC, comment #54: I tried the Long Path Tool and it worked wonders for me, it's a great tool for copying and renaming file names with path too long.
Ronald Carl <ronald87>
Sat 27 Dec 2014 02:39:27 PM UTC, comment #53: Hello, Did you used long path tool when you faced this problem "File name too long". I did use long path tool when did i faced that type problem. Hope it will help you.
julias <julias4>
Sat 29 Sep 2012 02:21:24 PM UTC, comment #52: Thanks. I've verified that the current git version fixes the problem.
Frank Heckenbach <frank>
Sat 29 Sep 2012 11:49:26 AM UTC, comment #51: Applied as commit: 67e6027ea130d06aeff365adfbc83f34d019b968
Giuseppe Scrivano <gscrivano>
Fri 28 Sep 2012 04:35:09 PM UTC, comment #50: I am going to take a look at the patch in the next few days.
Giuseppe Scrivano <gscrivano>
Fri 28 Sep 2012 03:05:32 AM UTC, comment #49: What exactly are we supposed to do to get this patch finally included?
Frank Heckenbach <frank>
Mon 09 Jul 2012 06:43:07 PM UTC, comment #48: Hello,
Noël Köthe <nok>
Wed 30 May 2012 12:03:22 AM UTC, comment #47: Looks good to me.
Frank Heckenbach <frank>
Tue 29 May 2012 11:23:49 AM UTC, comment #46: Sorry for the delay - I have been away from my PC.
Tim Ruehsen <rockdaboot>
Fri 18 May 2012 07:43:17 PM UTC, comment #45: Two minor things about the new patch, both in get_max_length ():
Frank Heckenbach <frank>
Fri 18 May 2012 03:11:50 PM UTC, comment #44: Just accidentally named the patch 1.14.
Tim Ruehsen <rockdaboot>
Thu 29 Mar 2012 06:18:12 PM UTC, comment #43: Can this patch now finally be included, or is anything missing?
Frank Heckenbach <frank>
Sun 27 Nov 2011 11:29:55 AM UTC, comment #42: I wrote:
Frank Heckenbach <frank>
Sat 12 Nov 2011 04:40:54 AM UTC, comment #41:
Joseph Rios <alotau>
Fri 11 Nov 2011 11:50:39 PM UTC, comment #40:
Frank Heckenbach <frank>
Fri 11 Nov 2011 06:42:26 PM UTC, comment #39:
Giovanni <timido>
Thu 10 Nov 2011 09:33:29 PM UTC, comment #38: Are you sure you've applied the patch properly? There are some logprintf's that I had included that I don't see in your example output.
Joseph Rios <alotau>
Thu 10 Nov 2011 05:24:29 PM UTC, comment #37:
Giovanni <timido>
Thu 10 Nov 2011 01:12:15 AM UTC, comment #36: Giovanni <timido> wrote:
Frank Heckenbach <frank>
Wed 09 Nov 2011 06:39:24 PM UTC, comment #35: Doesn't seem to work even with the patch applied!
Giovanni <timido>
Mon 13 Jun 2011 04:27:50 AM UTC, comment #34: Rob Mangiafico wrote:
Frank Heckenbach <frank>
Fri 01 Apr 2011 02:15:26 AM UTC, comment #33: Is there an updated patch for version 1.12? I tried massaging the 1.11 patch, but now short filenames end up with random invalid characters appended.
Rob Mangiafico <robman>
Mon 02 Aug 2010 06:19:21 PM UTC, comment #32: Hello,
Giovanni <timido>
Thu 10 Sep 2009 06:19:32 AM UTC, comment #31:
Joseph Rios <alotau>
Fri 04 Sep 2009 03:17:03 AM UTC, comment #30: Some additional comments on the latest patch.
Micah Cowan <micahcowan>
Tue 01 Sep 2009 05:45:58 AM UTC, comment #29: You're right that the fallback behavior may not be necessary; but I'm still not crazy about running the risk of prematurely truncating the filename before we're certain it's too big (especially since get_name_max isn't necessarily checking the same file system). The chances of checking the right filesystem could be improved if we invoke pathconf on the directory argument to -P (or the current working directory, if -P wasn't used), though it still wouldn't be fool-proof (the fail cases, however, are quite tolerably rare).
Micah Cowan <micahcowan>
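As an illustration of the check described in comment #29, a minimal sketch follows: query the filesystem that will actually hold the download, falling back to the compile-time limit. The function name and the choice of directory are assumptions for illustration, not code from any attached patch.

    #include <limits.h>
    #include <unistd.h>

    /* Sketch only: ask the filesystem that will hold the download for
       its name limit.  `dir` would be the -P argument, or "." if -P
       was not given.  Falls back to the compile-time NAME_MAX when
       pathconf() cannot answer. */
    static long
    filename_limit (const char *dir)
    {
      long limit = pathconf (dir && *dir ? dir : ".", _PC_NAME_MAX);
      return limit > 0 ? limit : NAME_MAX;
    }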
Tue 01 Sep 2009 04:50:49 AM UTC, comment #28: Your suggestions make sense, but I wonder at the necessity. I think the patch fixes the bug as described by all who reported it. If NAME_MAX has issues on someone's system, then a wget that works for long file names may merely work around a problem that should be addressed elsewhere. I guess I'm saying that adding the additional fail-safe you describe (for the reasons you describe) seems like fixing a problem that isn't wget's to worry about. I understand, though, that being proactive is good too.
Joseph Rios <alotau>
Sun 30 Aug 2009 04:11:36 AM UTC, comment #27: So, what I liked about the original patch was that it waited until it was sure the name was going to be too long before it adjusted anything; what I like about the new patches is that they use {NAME_MAX} intelligently. But it would also be nice for them to fall back on the first patch's behavior, in the event that {NAME_MAX} is wrong or something.
Micah Cowan <micahcowan>
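A sketch of the two ideas in comment #27 combined: trust {NAME_MAX} up front, but still fall back to shortening when the OS rejects the name anyway. This is hypothetical glue code, not the patch.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: pre-truncate to the advertised limit, then, if open
       still fails with ENAMETOOLONG, keep shortening until it works
       or the name is exhausted. */
    static FILE *
    open_shortening (char *name, size_t limit)
    {
      size_t len = strlen (name);
      if (len > limit)
        name[len = limit] = '\0';
      for (;;)
        {
          FILE *fp = fopen (name, "wb");
          if (fp || errno != ENAMETOOLONG || len == 0)
            return fp;
          name[len = len / 2] = '\0';   /* fallback: halve and retry */
        }
    }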
Sun 05 Jul 2009 09:50:35 PM UTC, comment #26: Pursuing copyright assignment.
Micah Cowan <micahcowan>
Fri 06 Feb 2009 07:20:18 PM UTC, comment #25: I've cleaned the code to make it "GNU style". I'll consider this the final patch, and the bug fixed, unless I hear otherwise. I have placed TODO tags in the code where there should be a check for path length violations. I didn't do that check because I have no idea what to do if the path is too long; I don't see an obvious solution. In addition, that hasn't been the complaint I originally had, nor one any users have posted.
Joseph Rios <alotau>
Fri 06 Feb 2009 04:37:55 AM UTC, comment #24: I'm uploading a new patch. The major changes are in url_file_name() (of url.c) as discussed below. I also added a couple of functions to util.c: get_name_max() and get_path_max(). I think I wrote them in a way such that repeated calls won't generate additional system calls.
Joseph Rios <alotau>
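A sketch of the "compute once" behavior comment #24 describes: cache the first pathconf() result so repeated calls make no further system calls. Names and details are guesses; the actual get_name_max() in the patch may differ.

    #include <limits.h>
    #include <unistd.h>

    /* Sketch: remember the first pathconf() answer so that repeated
       calls cost nothing.  Not thread-safe, like much of wget at the
       time. */
    static long
    get_name_max (void)
    {
      static long cached = 0;
      if (cached == 0)
        {
          long limit = pathconf (".", _PC_NAME_MAX);
          cached = limit > 0 ? limit : NAME_MAX;
        }
      return cached;
    }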
Fri 06 Feb 2009 12:45:27 AM UTC, comment #23: Hi Joseph,
David Han <kayhadrin>
Thu 05 Feb 2009 07:45:06 PM UTC, comment #22: Hi Micah,
Joseph Rios <alotau>
Wed 12 Nov 2008 11:15:28 PM UTC, comment #21: Note that the current version of the patch doesn't DTRT when -E is given (that is, the final filename won't end in .html).
Micah Cowan <micahcowan>
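For illustration, one way to handle the -E case raised in comment #21: reserve room for the forced suffix before truncating, so the final name still ends in .html. This is a hypothetical helper, not the patch's code.

    #include <string.h>

    /* Sketch: trim the stem, never the suffix.  `buf` must hold at
       least name_max + 1 bytes. */
    static void
    append_suffix_truncating (char *buf, size_t name_max, const char *suffix)
    {
      size_t slen = strlen (suffix);
      if (slen >= name_max)
        return;                       /* degenerate limit; leave name alone */
      if (strlen (buf) + slen > name_max)
        buf[name_max - slen] = '\0';  /* make room for the suffix */
      strcat (buf, suffix);
    }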
Wed 29 Oct 2008 08:00:53 PM UTC, comment #20: Yes, url_file_name is the perfect place for this.
Micah Cowan <micahcowan>
Wed 29 Oct 2008 07:17:40 PM UTC, comment #19: It makes sense to do it as you describe. I was catching the error at the point the file failed to open, which is long after the path and filename had been determined. So any chomping at that point could dig into the path.
Joseph Rios <alotau>
Wed 29 Oct 2008 05:41:09 AM UTC, comment #18: Ah. Well I never actually meant to chomp the path: I meant a third of the filename.
Micah Cowan <micahcowan>
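A sketch of what comment #18 means by chomping a third of the filename: shorten only the component after the last '/', leaving the directory part intact. Illustrative only.

    #include <string.h>

    /* Sketch: cut the last third off the basename, leaving the
       directory components untouched. */
    static void
    chomp_filename_third (char *path)
    {
      char *base = strrchr (path, '/');
      base = base ? base + 1 : path;
      size_t len = strlen (base);
      if (len >= 3)
        base[len - len / 3] = '\0';
    }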
Wed 29 Oct 2008 02:07:13 AM UTC, comment #17: Hi Micah,
Joseph Rios <alotau>
Tue 28 Oct 2008 06:04:30 PM UTC, comment #16: Hi Joe, and thanks for the patches.
Micah Cowan <micahcowan>
Tue 28 Oct 2008 04:51:58 PM UTC, comment #15: I'm trying to attach the patches that worked for me (I've never uploaded anything here, so I hope it works). The only changes were to http.c, adding a new branch to an 'if' statement that checks whether creating a new file fails and why it fails. Added one line to http.h as well. Someone emailed me for this patch, so I thought it best to post it here. Hopefully anyone who tries it out can post a message here letting us know whether it works. And if Micah takes a look at it, he can let us know whether it seems reasonable and doesn't break anything else.
Joseph Rios <alotau>
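The shape of the check comment #15 describes, distinguishing "name too long" from other open failures, might look roughly like this. The helper is hypothetical; the real branch in http.c is more involved.

    #include <errno.h>
    #include <stdio.h>

    /* Sketch: open the output file, reporting whether a failure was
       specifically ENAMETOOLONG so the caller can shorten and retry. */
    static FILE *
    open_output (const char *filename, int *name_too_long)
    {
      FILE *fp = fopen (filename, "wb");
      *name_too_long = (fp == NULL && errno == ENAMETOOLONG);
      return fp;
    }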
Thu 09 Oct 2008 03:16:57 AM UTC, comment #14: Micah,
Joseph Rios <alotau>
Thu 09 Oct 2008 01:51:55 AM UTC, comment #13: My apologies for raising that subject in the wrong place.
David Han <kayhadrin>
Wed 08 Oct 2008 07:05:08 PM UTC, comment #12: Hi guys, an unrelated bug isn't really the place to discuss new feature ideas; that's what the mailing list is for.
Micah Cowan <micahcowan>
Wed 08 Oct 2008 03:34:18 PM UTC, comment #11: Maybe a flag to choose database saving isn't a bad idea. Linux and Mac usually come with sqlite installed (right?). It's server-less, has a C/command-line API, and saves the db as a single file wherever you like, so folks wouldn't necessarily have to have a mysql server running. I'm sure this will take quite a bit more thought, but a workaround like this would definitely solve the problem I've been having with long file names.
Joseph Rios <alotau>
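To give a feel for what comment #11 suggests, here is a rough sketch using the SQLite C API: store each page keyed by its full URL, so filesystem name limits never apply. The table layout is invented for illustration.

    #include <sqlite3.h>

    /* Sketch: insert a downloaded page into a `pages (url, body)`
       table.  Assumes the table already exists; error handling is
       minimal. */
    static int
    store_page (sqlite3 *db, const char *url, const void *body, int len)
    {
      sqlite3_stmt *stmt;
      int rc = sqlite3_prepare_v2 (db,
          "INSERT OR REPLACE INTO pages (url, body) VALUES (?1, ?2)",
          -1, &stmt, NULL);
      if (rc != SQLITE_OK)
        return rc;
      sqlite3_bind_text (stmt, 1, url, -1, SQLITE_STATIC);
      sqlite3_bind_blob (stmt, 2, body, len, SQLITE_STATIC);
      rc = sqlite3_step (stmt);
      sqlite3_finalize (stmt);
      return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }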
Wed 08 Oct 2008 03:44:16 AM UTC, comment #10: Does anyone know if there's a tool to save wget's output to a database instead?
David Han <kayhadrin>
Spam posted by anonymous
Wed 20 Aug 2008 08:28:04 PM UTC, comment #8: Using URLs as the basis of filenames still makes tons of sense when they're not too long, though. Throwing out the baby with the bathwater isn't what is needed. And while long URLs are not an extreme rarity, calling them common is a bit of a stretch (I personally have yet to run across one in my own use; that might change if I were downloading bug reports from Savannah, though: it tends to have really huge URLs).
Micah Cowan <micahcowan>
Wed 20 Aug 2008 04:21:38 PM UTC, comment #7: If the URLs are too long, relative links will still be broken when viewing the downloaded files in a browser, because the files simply won't be there. The attempt to save them using a too-long filename will fail.
John Nolan <fossjn>
Mon 21 Jul 2008 06:56:34 PM UTC, comment #6: That's all well and good; but using URLs as the basis for filenames has the advantage that relative URLs are not broken when you then view the downloaded files in a browser.
Micah Cowan <micahcowan>
Mon 21 Jul 2008 06:15:42 PM UTC, comment #5: According to the RFCs, there is no limit on the length of a URL; in theory, it can be arbitrarily long.
John Nolan <fossjn>
Mon 21 Apr 2008 06:56:47 PM UTC, comment #4: I'm unclear how truncating the URL would not also result in an artificial file name. Obviously, we'd "unique"-ize it, the same as we do for other filenames.
Micah Cowan <micahcowan>
Mon 21 Apr 2008 05:39:18 PM UTC, comment #3: I'd just like to add a suggestion based on my experience with this bug/issue. It seems that wget is actually able to download the "long file name" files with no problem, so the "truncate URL and try again" approach may be too harsh. Perhaps an option to "create an artificial local file name and try again" would be more appropriate; wget could then download to that artificial file name. That would solve the issue with my wget script. From my experience, the long file name is usually due to a long list of cgi script inputs in the URL, so truncation would likely be useless.
Joseph Rios <alotau>
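One possible form of the "artificial local file name" idea from comment #3: derive a short, stable name by hashing the full URL (FNV-1a here, purely for illustration; wget does not do this).

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: hash the URL and use the hex digest as a short,
       collision-resistant filename. */
    static void
    artificial_name (const char *url, char *out, size_t outsize)
    {
      uint64_t h = 14695981039346656037ULL;        /* FNV-1a offset basis */
      for (const unsigned char *p = (const unsigned char *) url; *p; p++)
        h = (h ^ *p) * 1099511628211ULL;           /* FNV-1a prime */
      snprintf (out, outsize, "%016llx.html", (unsigned long long) h);
    }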
Sun 13 Jan 2008 09:41:19 PM UTC, comment #2: This has been a problem for several users. It kind of sucks that there's no workaround.
Micah Cowan <micahcowan>
Tue 04 Dec 2007 05:42:57 AM UTC, comment #1: I'm not sure about the "should not exist logically in wget" bit; systems tend to have maximum filename lengths, and the file here exceeded it. Still, it'd be nice to have Wget handle it better, say by picking a shorter name.
Micah Cowan <micahcowan>
Tue 04 Dec 2007 05:29:29 AM UTC, original submission:
I am unable to save Java to my computer from wget because of this error:
John Doe <johndoe32102002>
Depends on the following items: None found
Items that depend on this one: None found
Follow 25 latest changes.
Date       | Changed by | Updated Field   | Previous Value    | => | Replaced by
-----------|------------|-----------------|-------------------|----|------------
2017-04-05 | rockdaboot | Discussion Lock | Unlocked          | => | Locked
2012-09-29 | gscrivano  | Status          | Needs Discussion  | => | Fixed
           |            | Assigned to     | None              | => | gscrivano
           |            | Open/Closed     | Open              | => | Closed
2012-09-28 | gscrivano  | Assigned to     | alotau            | => | None
2012-07-09 | nok        | Carbon-Copy     | -                 | => | Added -email is unavailable-
2012-05-29 | rockdaboot | Attached File   | -                 | => | Added 0001-filename-length-patch-v5-1.13.4.patch, #25949
2012-05-18 | rockdaboot | Attached File   | -                 | => | Added 0001-filename-length-patch-v4-1.14.patch, #25881
2011-11-27 | frank      | Attached File   | -                 | => | Added wget_filename_length.v3-1.13.patch, #24473
2011-06-13 | frank      | Attached File   | -                 | => | Added wget_filename_length.v2-1.12.patch, #23520
2011-01-30 | gsauthof   | Carbon-Copy     | -                 | => | Added gsauthof
2009-09-08 | micahcowan | Planned Release | 1.14              | => | 1.12.x
2009-09-01 | micahcowan | Status          | In Progress       | => | Needs Discussion
2009-08-30 | micahcowan | Status          | Awaiting Approval | => | In Progress
           |            | Planned Release | 1.12              | => | 1.14
2009-08-18 | micahcowan | Priority        | 6                 | => | 7 - High
           |            | Planned Release | 1.14              | => | 1.12
2009-07-05 | micahcowan | Status          | Ready For Test    | => | Awaiting Approval
           |            | Planned Release | 1.12              | => | 1.14
2009-02-06 | alotau     | Status          | In Progress       | => | Ready For Test
2009-02-06 | alotau     | Attached File   | -                 | => | Added wget_filename_length.v2.patch, #17414
2009-02-06 | alotau     | Attached File   | -                 | => | Added wget_file_length.patch, #17409
2008-11-04 | micahcowan | Assigned to     | None              | => | alotau
2008-10-29 | alotau     | Attached File   | -                 | => | Added wget.patch, #16746
2008-10-28 | micahcowan | Assigned to     | cy6erbr4in        | => | None