Sat Aug 1 21:37:19 2009, comment #1:
So, this appears to be a misunderstanding of what that value is intended to represent, combined with unavoidable shortcomings in how it is calculated.
That value does not represent how much data has been downloaded (that is what the left-hand column shows), but rather the download rate (per second) for just that single row. It is calculated by taking the number of bytes represented by the row and dividing it by the actual time it took to fetch those bytes.
This is susceptible to anomalies. Consider a case where 20k is downloaded, then nothing arrives for a second or two, and then another 80k arrives (with each row representing 50k). The first row will show a download rate of maybe 25k a second, but the data for the next row will by then already be sitting in the system buffers, so when Wget reads it, it obtains it virtually instantaneously. The bytes downloaded for row two will be 50k, but the time taken will be very close to zero, resulting in an abnormally high number. This is especially true of the very last row, which is likely to hold less than 50k; if ~12k is fetched instantaneously because it was already available, the displayed rate leaps to something huge.
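For illustration, here is a minimal sketch of the arithmetic described above, using a hypothetical row_rate_kbps() helper and the numbers from that scenario (this is not Wget's actual code):

#include <stdio.h>

/* Hypothetical helper, not Wget's actual code: the rate for one
   display row is the number of bytes attributed to that row divided
   by the wall-clock time spent fetching them. */
static double
row_rate_kbps (long bytes_in_row, double seconds_for_row)
{
  if (seconds_for_row < 1e-6)
    seconds_for_row = 1e-6;   /* guard against division by (near) zero */
  return (bytes_in_row / 1024.0) / seconds_for_row;
}

int
main (void)
{
  /* Row 1 spans the one-to-two-second stall: 50k over ~2 s -> ~25K/s. */
  printf ("row 1: %8.0f K/s\n", row_rate_kbps (50 * 1024, 2.0));

  /* Row 2 is read almost entirely out of data already sitting in the
     system buffers, so the measured time is nearly zero and the
     computed rate explodes. */
  printf ("row 2: %8.0f K/s\n", row_rate_kbps (50 * 1024, 0.001));

  return 0;
}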
A discussion should take place on what more appropriate download-rate calculation methods might exist.
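As one possible starting point for that discussion, the rate could be averaged over the last several (bytes, time) samples rather than over a single row, so that one near-instant read of buffered data cannot dominate the displayed figure. The sketch below is purely illustrative and makes no claim about how Wget is or should be implemented:

#include <stdio.h>

#define NSAMPLES 8   /* number of recent samples to average over */

/* Illustrative only: keep a small ring of recent (bytes, seconds)
   samples and report the rate over their sums. */
struct rate_window
{
  long   bytes[NSAMPLES];
  double secs[NSAMPLES];
  int    next;
};

static void
rate_window_add (struct rate_window *w, long bytes, double secs)
{
  w->bytes[w->next] = bytes;
  w->secs[w->next]  = secs;
  w->next = (w->next + 1) % NSAMPLES;
}

static double
rate_window_kbps (const struct rate_window *w)
{
  long   total_bytes = 0;
  double total_secs  = 0.0;
  int i;

  for (i = 0; i < NSAMPLES; i++)
    {
      total_bytes += w->bytes[i];
      total_secs  += w->secs[i];
    }
  if (total_secs < 1e-6)
    total_secs = 1e-6;
  return (total_bytes / 1024.0) / total_secs;
}

int
main (void)
{
  struct rate_window w = { {0}, {0}, 0 };

  /* The scenario above: a slow 50k row followed by a buffered 50k row. */
  rate_window_add (&w, 50 * 1024, 2.0);
  rate_window_add (&w, 50 * 1024, 0.001);
  printf ("smoothed: %.0f K/s\n", rate_window_kbps (&w));

  return 0;
}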