Fri 19 Sep 2008 03:03:42 PM UTC, comment #10:
Looks good!
|
Fri 19 Sep 2008 11:07:29 AM UTC, comment #9:
You're right (of course).
New patch attached.
(file #16524)
|
Fri 19 Sep 2008 03:54:24 AM UTC, comment #8:
The patch looks good. A few comments:
- I think the function dict_get_case_weight() could be used to simplify a small bit of logic.
- My thought was actually slightly different from what you implemented: I had the notion that the contents of the while loop your patch inserts into initialize_aggregate_info() would instead go into accumulate_aggregate_info(). Then initialize_aggregate_info() needs no additional data pass, because we reuse the one that is already happening (sketched below).
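A rough sketch of what I mean, folded into the pass that already exists. The struct and field names here (struct agr_var, iter->writer, iter->src, the MEDIAN function tag) only loosely follow aggregate.c, and the case functions are current names, so treat this as illustrative rather than final:

    static void
    accumulate_aggregate_info (struct agr_proc *agr, const struct ccase *input)
    {
      double weight = dict_get_case_weight (agr->dict, input, NULL);
      struct agr_var *iter;

      for (iter = agr->agr_vars; iter != NULL; iter = iter->next)
        if (iter->function == MEDIAN)
          {
            /* Append (subject value, weight) to this aggregate's sorting
               writer during the pass we already make, instead of
               re-reading the data in initialize_aggregate_info(). */
            struct ccase *c = case_create (casewriter_get_proto (iter->writer));
            *case_num_rw_idx (c, 0) = case_num (input, iter->src[0]);
            *case_num_rw_idx (c, 1) = weight;
            casewriter_write (iter->writer, c);
          }
        else
          {
            /* ... existing accumulation for the other functions ... */
          }
    }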
|
Thu 18 Sep 2008 04:47:54 AM UTC, comment #7:
Ben's suggestion turned out to be easier, so I've done it that way.
New patch attached.
(file #16513)
|
Tue 02 Sep 2008 05:30:15 AM UTC, comment #6:
>How about I make a translator which drops all but the subject
>variable and the weight variable?
That would definitely be an improvement in the situation I raised. OK.
|
Tue 02 Sep 2008 05:04:07 AM UTC, comment #5:
You're right. I hadn't considered that.
How about I make a translator which drops all but the subject variable and the weight variable?
The resulting reader can then be passed to sort_execute().
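Something like this, perhaps. The translator and prototype calls follow PSPP's casereader API, but the exact signatures have shifted between versions, and reader, ordering, and aux here stand for whatever is in scope at the call site, so take it as a sketch:

    struct median_aux
      {
        const struct dictionary *dict;  /* Source dictionary. */
        const struct variable *subject; /* Variable whose median we want. */
        const struct caseproto *proto;  /* Two numeric columns. */
      };

    /* Maps a full input case to a narrow two-column case holding only
       the subject value and the case weight. */
    static struct ccase *
    keep_subject_and_weight (struct ccase *input, void *aux_)
    {
      struct median_aux *aux = aux_;
      struct ccase *output = case_create (aux->proto);

      *case_num_rw_idx (output, 0) = case_num (input, aux->subject);
      *case_num_rw_idx (output, 1) = dict_get_case_weight (aux->dict, input, NULL);
      case_unref (input);
      return output;
    }

    /* At the call site: two numeric columns (width 0 means numeric),
       then wrap the reader and sort only the narrow cases. */
    struct caseproto *proto
      = caseproto_add_width (caseproto_add_width (caseproto_create (), 0), 0);
    struct casereader *narrow
      = casereader_translate_stateless (reader, proto, keep_subject_and_weight,
                                        NULL, &aux);
    struct casereader *sorted = sort_execute (narrow, ordering);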
|
Tue 02 Sep 2008 04:07:33 AM UTC, comment #4:
>Although there is opportunity to reduce the total number of
>passes, we can't get around the need to sort and iterate for each
>median being calculated.
I cannot dispute that.
>With this patch, the total number of sorts is N and the total
>number of passes is (N+1), where N is the number of medians to be
>calculated. With some rework, it could be got down to N sorts and
>N passes, but I'm not sure if it's worth the effort.
However, if I'm reading the code correctly, it sorts all of the data N times, whereas it only needs to sort a single column N times. That means it's doing a factor of M more work than necessary, where M is the number of variables. And there is a very good possibility that a single column could fit in memory even when the full data set cannot.
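To put a rough number on that: with, say, M = 20 variables, a million cases, and N = 5 medians, sorting full cases moves about 20 million values per sort and 100 million in total, while sorting only a (value, weight) pair per case moves about 2 million values per sort and 10 million in total. That is a tenfold difference in this made-up example, and the gap grows with M.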
You're right that it might not be worth it: I don't know whether anyone out there is doing lots of AGGREGATE with MEDIAN functions. If you want to commit it as-is, though, would you mind adding a comment about the kind of optimization that is possible?
|
Tue 02 Sep 2008 01:58:56 AM UTC, comment #3:
Although there is an opportunity to reduce the total number of passes, we can't get around the need to sort and iterate for each median being calculated.
With this patch, the total number of sorts is N and the total number of passes is N + 1, where N is the number of medians to be calculated. With some rework, it could be brought down to N sorts and N passes, but I'm not sure the effort is worth it.
|
Mon 01 Sep 2008 06:35:57 PM UTC, comment #2:
This looks to me like it implements the median correctly, but doesn't it add an additional data-reading pass for every median being calculated? I think this extra pass could be eliminated: instead of using sort_execute(), call sort_create_writer(), write each case to the writer in accumulate_aggregate_info(), then obtain the reader with casewriter_make_reader() in dump_aggregate_info().
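In outline, with v->writer and v->ordering as hypothetical fields on the per-aggregate struct, and current function names standing in for whatever aggregate.c ends up using:

    /* When setting up the aggregate: create a casewriter that sorts
       whatever is written to it by the subject column. */
    v->writer = sort_create_writer (v->ordering, proto);

    /* In accumulate_aggregate_info(), inside the pass that already
       happens: feed each case to the writer. */
    casewriter_write (v->writer, case_ref (c));

    /* In dump_aggregate_info(): turn the writer into a sorted reader
       and walk it to locate the weighted median. */
    struct casereader *sorted = casewriter_make_reader (v->writer);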
|
Sun 24 Aug 2008 07:32:07 AM UTC, comment #1:
The attached patch implements this.
It must be applied on top of file #16351.
(file #16352)
|
Sun 13 Feb 2005 11:47:59 PM UTC, original submission:
AGGREGATE should support the MEDIAN aggregation function.
|