bug #45803: More URI filters (regex etc) from commandline, file, and program

Submitter:         grarpamp <grarpamp>
Submitted:         Fri 21 Aug 2015 06:14:15 AM UTC

Category:          Feature Request
Severity:          3 - Normal
Priority:          5 - Normal
Status:            None
Privacy:           Public
Assigned to:       None
Originator Name:   grarpamp
Open/Closed:       Open
Release:           1.16.3
Operating System:  None
Reproducibility:   None
Fixed Release:     None
Planned Release:   None
Regression:        None
Work Required:     None
Patch Included:    None

Wed 02 Sep 2015 06:11:38 AM UTC, comment #2: 

# Parallel

If wget only fetches things serially...
I deferred any parallelism to the sole filter program, in case it
wanted to spread out its decision process and recombine it into a
single logical answer.
If wget does fetching in parallel...
yes, it could spawn checks in parallel, but they would have to go to
the same program, not prog1 prog2 prog3, else there could be three
different results.

# CSV

"
WGET_FILTER_URI. This variable shall contain exactly what is passed
to the current commandline regexes today, ie:
 https://www.example.com/foo/bar?a=b&c=d#123
"

I wanted wget to have a base mode where each of the three methods
would be fed exactly the same string by default, so that the user
can test and swap regexes between them interchangeably.
Of course wget could feed other things (such as CSV) to the filter
program via enhanced modes, and the program could in turn do anything
it wants.


# Referer, etc

"
the following optional set
of variables should also be passed to the program if readily
implementable today (each of them can result in different serving
hierarchy contexts
"

What would WGET_FILTER_REFERER be used for? Yes, referers, time of
day, agent, dynamic pages, etc. can all serve up different content...
but those are typically logical differences within the same service
instance.

The variables I put in this section are the typical set used in
server-side configs to present entirely different physical
hierarchies of data or [virtual] server instances, and they apply to
both HTTP and FTP. (Wget is currently dumb about that regarding its
on-disk storage... it doesn't encode such info into the basedir
pathname and thus will clobber itself by physically merging multiple
contexts on disk during recursive spidering. That's a wget design
failure to fix.)

Thus, if logical things like the referer are felt to be needed, I'd
rather see the entire set of client request headers sent to the
server stuffed into this CSV you speak of, as WGET_FILTER_CLIREQ_CSV.

I also wanted the input-passing mechanism to be via environment
variables, since novice scripters and coders can use those but may
not yet know how to process standard input (or the filesystem), which
would prevent them from using --uri-filter-prog.
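
For illustration, a minimal environment-driven filter could be as
small as the following sketch, assuming the WGET_FILTER_URI variable
and the 0 = accept / 1 = reject exit convention from the original
submission below (the "/foo/" policy is made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *uri = getenv("WGET_FILTER_URI");

    if (uri == NULL) {
        fprintf(stderr, "WGET_FILTER_URI not set\n");
        return 3;   /* any status other than 0/1 would abort wget */
    }
    /* Made-up policy: accept only URIs containing "/foo/". */
    return strstr(uri, "/foo/") != NULL ? 0 : 1;
}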

I'm not keen on passing more things via the filesystem unless wget's
other metafile handling (such as cookies, logs, and even a future
"resume full prior state of crawl") is also cleaned up in the
process. By this I mean that there should probably be some control
flags such as --statefile-basedir and --statefile-basedir-auto that
will put all these statefiles under one dir (optionally auto mkstemp),
and under default filenames.

The filesystem is also slower, but could be useful in other ways.

It would be possible to support multiple input methods with:
--uri-filter-prog-type=env-basic:env-phys:stdin-req:fs-csv

env-basic: WGET_FILTER_URI
env-phys: my full set of vars
stdin-req: the entire client request via stdin
fs-csv: client request via filesystem
...: and other permutations


The idea was to keep it simple enough to get the three feature
enhancements out to people quickly.

For the first, I put setenv() and system() at utils.c:949
http://git.savannah.gnu.org/cgit/wget.git/tree/src/utils.c
rev: 52228516b5d00c1dcf3623c4e3250490d1eb1d60

I added exit status 2 to the spec as reserved. The program may
utilize it as an exit catchall for URIs that fall through its
explicit accept / reject checks, mapping to whatever the default
sense is, as set with
--uri-filter-prog-default=accept:reject, default reject.
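
To sketch what that wget-side hook might look like (the function and
parameter names here are illustrative, not actual wget internals;
this assumes the 0/1/2 exit convention above):

#include <stdbool.h>
#include <stdlib.h>
#include <sys/wait.h>

/* Hypothetical helper: returns true if the URI should be fetched. */
static bool uri_filter_prog_accepts(const char *prog, const char *uri,
                                    bool default_accept)
{
    int status;

    setenv("WGET_FILTER_URI", uri, 1);  /* 1 = overwrite */
    status = system(prog);
    if (status == -1 || !WIFEXITED(status))
        exit(1);                        /* could not run the filter */

    switch (WEXITSTATUS(status)) {
    case 0:  return true;               /* explicit accept */
    case 1:  return false;              /* explicit reject */
    case 2:  return default_accept;     /* reserved catchall, per
                                           --uri-filter-prog-default */
    default: exit(1);                   /* spec: any other status
                                           terminates wget */
    }
}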

WGET_FILTER_HOST should be as in the original, no DNS conversion.

Feel free to run with it as desired; it should be readily expandable
to anyone's needs.

grarpamp <grarpamp>
Fri 21 Aug 2015 03:49:35 PM UTC, comment #1: 

I like the idea of such a feature, but think it could be done differently, especially since we now have a tool to output all the information about a URL and its referrer as a CSV-formatted line.

I propose this: add a --uri-filter-prog=prog1,prog2,prog3 option whose programs are started when Wget starts. Each URL and its referrer are formatted as a CSV line and sent to each program, one after another; Wget waits for a response of 'YES' or 'NO' from each and filters based on those responses.
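
Under this proposal a long-running filter might look roughly like the
sketch below. It assumes one "url,referrer" CSV line per request on
stdin and one YES/NO reply per line on stdout; the exact CSV layout
is not specified here, and real CSV quoting (URLs can contain commas)
is ignored:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];

    while (fgets(line, sizeof line, stdin) != NULL) {
        /* Assume the URL is the first CSV field. */
        char *comma = strchr(line, ',');
        if (comma != NULL)
            *comma = '\0';
        /* Made-up policy: allow only URLs mentioning "/foo/". */
        puts(strstr(line, "/foo/") != NULL ? "YES" : "NO");
        fflush(stdout);  /* Wget is blocked waiting on this answer */
    }
    return 0;
}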

Jookia <jookia>
Fri 21 Aug 2015 06:14:15 AM UTC, original submission:  

Adding the regex accept / reject URI filter was cool and very useful,
however it now needs to handle multiple regexes.

The current options only support one expression each. And even though
that expression can be compacted by factoring out common elements,
it still becomes very long very fast, and it's also contextually
and programmatically unmanageable... and thus it's not as powerful
as it should be.

So while this is nice and typical ...

--accept-regex='^http://www\.gnu\.org/(foo|bar)/.*$'

This is also typical ... and the limitation of a single regex clearly
causes it to grow out of control into a visually useless and
unmanageable blob.

--accept-regex='^http://www\.gnu\.org/(foo/do(g|t)/rea(l|d/...|ping(\.jpg)?/...)|bar/(mon/[tmbpent?/...|red/[a-z0-9][^-]+-q))/.*$|^ftp://ftp\.gnu\.org/]....*$'
--reject-regex='... ad nauseum ...'

The current semantics ...
 - if both accept and reject are specified, in any order, the URI
   "must fall through all" to be fetched.
 - only the last of multiple --accept-regex are consulted.
 - only the last of multiple --reject-regex are consulted.
... and the inability to do anything other than "POSIX" regexp, are
simply too limiting for complex requirements.

Therefore wget needs to support multiple regex expressions, new
sources for those expressions, and general filter capability.

The easy-to-implement enhancements below will allow whatever script
or human is calling wget to effectively program and toggle (on or
off) various regexes, and to add all sorts of external intelligence
to fetching decisions. I have listed them in order of implementation
priority... 1, 2, 3.



1) Call an external program which returns 0 or 1 to signal acceptance
or rejection of each proposed URI. Wget shall wait for the program
to return. Any other exit status shall cause wget to terminate.
Since this is the most powerful and abstract method, yet potentially
slower than the others, it should be processed last, after all the
other filters, before the URI is passed to the network. [1] [2]

--uri-filter-prog=prog1

The full URI including protocol, FQDN or IP, path, and parameters
shall be passed to the program in the environment variable
WGET_FILTER_URI. This variable shall contain exactly what is passed
to the current commandline regexes today, ie:

 https://www.example.com/foo/bar?a=b&c=d#123
 ftp://[::1]/foo/bar


However, to support future flexibility, the following optional set
of variables should also be passed to the program if readily
implementable today (each of them can result in different serving
hierarchy contexts, and smart filter programs will utilize them
accordingly; a sketch using this set follows the list):

The full path to the directory into which wget will begin writing
 its output shall be passed in WGET_FILTER_BASEDIR, typically "pwd"
 or option --directory-prefix.
The protocol shall be passed in WGET_FILTER_PROTO, without "://".
Any URI-specified username and password strings shall be passed in
 WGET_FILTER_USER and WGET_FILTER_PASS; these two must be set but
 may be empty.
The FQDN or IP shall be passed in WGET_FILTER_HOST, without "[]".
The port shall be passed in WGET_FILTER_PORT, without ":"; the value
 shall be all numeric unless wget was unable to convert using
 /etc/services, in which case it will remain as found in the original
 URI or commandline.
The URI path and params shall be passed in WGET_FILTER_URIPATH; any
 leading slashes (/) shall be as found in the original URI or
 commandline and shall not be added or removed (examples of zero,
 one, and multiple slashes do exist in the wild).
If wget intends to write a pathname that does not match the original
 URI or commandline (such as the "index.html" in /pathname 302 to
 /pathname/, or /pathname/, or generated iteratives, or backup files,
 etc.), that pathname shall be passed in WGET_FILTER_FAKEPATH; this
 must be set but may be empty.
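
As a sketch of how a filter program might consume this extended set
(the variable names follow the spec above; the accept/reject policy
itself is entirely made up for illustration):

#include <stdlib.h>
#include <string.h>

static const char *env_or_empty(const char *name)
{
    const char *v = getenv(name);
    return v ? v : "";
}

int main(void)
{
    const char *proto = env_or_empty("WGET_FILTER_PROTO");
    const char *host  = env_or_empty("WGET_FILTER_HOST");
    const char *path  = env_or_empty("WGET_FILTER_URIPATH");

    if (strcmp(proto, "http") != 0 && strcmp(proto, "https") != 0)
        return 1;                  /* reject non-HTTP protocols */
    if (strcmp(host, "www.gnu.org") != 0)
        return 1;                  /* reject other hosts */
    if (strstr(path, "/private/") != NULL)
        return 1;                  /* reject one path subtree */
    return 0;                      /* accept everything else */
}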



2) Implement multiple regex files via the commandline, as in the
notion of "egrep -f FILE". They shall be re-read for each proposed
URI to permit dynamic editing on the fly, as may be needed during
long spidering / infinite-recursion operations, but may be preloaded
into wget for performance (a sketch of the per-URI check follows the
options). Follows the existing "must fall through all" semantic. [3]

--accept-regex-file=file1 \
--reject-regex-file=file2 \
--regex-file-mode=(dynamic|preload), default dynamic.
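
The per-URI "dynamic" mode might look roughly like the following
sketch (the helper name is hypothetical; error handling is minimal,
and a real implementation, or the "preload" mode, would cache the
compiled regexes rather than recompile per URI):

#include <regex.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Re-read FILE, compile each line as a POSIX ERE, and report
   whether any line matches URI. */
static bool regex_file_matches(const char *file, const char *uri)
{
    FILE *fp = fopen(file, "r");
    char line[1024];
    bool match = false;

    if (fp == NULL)
        return false;
    while (!match && fgets(line, sizeof line, fp)) {
        regex_t re;

        line[strcspn(line, "\n")] = '\0';   /* strip the newline */
        if (line[0] == '\0')
            continue;                       /* skip blank lines */
        if (regcomp(&re, line, REG_EXTENDED | REG_NOSUB) != 0)
            continue;                       /* skip bad expressions */
        match = regexec(&re, uri, 0, NULL, 0) == 0;
        regfree(&re);
    }
    fclose(fp);
    return match;
}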



3) Implement multiple regex strings on the commandline. Follows the
existing "must fall through all" semantic (one reading of which is
sketched after the options).

--accept-regex=regex1 \
--accept-regex=regex2 \
--reject-regex=regex3 \
--reject-regex=regex4 \
--......-regex=regexN [...]
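
One plausible reading of "must fall through all" with multiple
expressions, sketched with a hypothetical helper (whether multiple
accepts should be ANDed or ORed together is left open by this
request; AND is assumed here):

#include <regex.h>
#include <stdbool.h>
#include <stddef.h>

/* Accept iff the URI matches every compiled --accept-regex and
   none of the compiled --reject-regex. */
static bool uri_passes(const regex_t *accept, size_t n_accept,
                       const regex_t *reject, size_t n_reject,
                       const char *uri)
{
    size_t i;

    for (i = 0; i < n_accept; i++)
        if (regexec(&accept[i], uri, 0, NULL, 0) != 0)
            return false;               /* failed an accept check */
    for (i = 0; i < n_reject; i++)
        if (regexec(&reject[i], uri, 0, NULL, 0) == 0)
            return false;               /* hit a reject check */
    return true;                        /* fell through all filters */
}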



Notes:

[1] Since a single program can implement anything, it is not necessary
to support multiple programs and logic such as:

--uri-filter-prog=progN [...]
--filter-prog-op=(or|and), default logical OR of all returns.
--filter-prog-spawn=(serial|parallel), default serial.

[2] Future features may specify the order in which each filter
method is applied:

--filter-order=regex:regex-file:filter-prog, default as shown.

Or to skip "must fall through all" handling:

--filter-fast-accept=regex:regex-file, default as shown.

[3] Since the user can place nonmatching "comments" in such files,
only one accept and one reject file are needed; these other files
are not necessary:

--accept-regex-file=file3 \
--reject-regex-file=file4 \
--......-regex-file=fileN [...]

grarpamp <grarpamp>

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • jookia (posted a comment)
  • grarpamp (submitted the item)
