PSPP - Patches: patch #5583, NPAR TESTS

 
 


patch #5583: NPAR TESTS

Submitter:    John Darrington <jmd>
Submitted:    Thu 23 Nov 2006 02:29:22 PM UTC
Category:     None
Item Group:   None
Status:       Done
Assigned to:  None
Open/Closed:  Closed


Tue 19 Dec 2006 07:05:36 PM UTC, comment #15: 


> Works for me, except for the non-portability of 'trunc'
> discussed earlier (it isn't in BSD's math.h).
> I think Ben mentioned fixing that in gnulib, though.


Yes.  I've been lazy about it though and haven't finished with that work yet.

Ben Pfaff <blp>
Group administrator
Tue 19 Dec 2006 05:32:27 PM UTC, comment #14: 

Works for me, except for the non-portability of 'trunc'
discussed earlier (it isn't in BSD's math.h).
I think Ben mentioned fixing that in gnulib, though.
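
In the meantime, here is a minimal fallback sketch (just an illustration, not the gnulib replacement Ben mentions): trunc rounds toward zero, which can be emulated with floor and ceil, both of which BSD's math.h does provide.

#include <math.h>
#include <stdio.h>

/* Round toward zero without relying on trunc() being declared. */
static double
trunc_replacement (double x)
{
  return x >= 0.0 ? floor (x) : ceil (x);
}

int
main (void)
{
  /* Prints "2 -2". */
  printf ("%g %g\n", trunc_replacement (2.7), trunc_replacement (-2.7));
  return 0;
}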

Jason H Stover <jstover>
Group Member
Tue 19 Dec 2006 11:47:16 AM UTC, comment #13: 

OK.  I've removed the asymptotic tests, and added an entry in the manual.

Attached is the final patch before I check this in.

(file #11558)

John Darrington <jmd>
Group administrator
Sat 16 Dec 2006 11:22:14 PM UTC, comment #12: 


>If you want to use it, try using gsl_cdf_binomial_Q along with
>gsl_cdf_binomial_P. And these functions are now in gsl 1.8, so
>they are no longer necessary in gslextras.


That's good to know.  I've filed a bug to remind us to get rid of gslextras when someone has time.

(I haven't been following the rest of the discussion here.)

Ben Pfaff <blp>
Group administrator
Sat 16 Dec 2006 10:27:00 PM UTC, comment #11: 

The patch works for me.

> > Pr ({less than or equal to 10 males} \cup {more than 10 males})
> >
> > ...which is 1.0 for a binomial random variable with 20 trials
> > and null hypothesis success probability of 0.5.
>
> Isn't it also 1.0 for ANY hypothesis ??


No. Here is one example: suppose the null hypothesis probability is 0.9 and the observed proportion is 0.5 with 20 trials. If the null hypothesis were that p >= 0.9, then the p-value would be
Pr (X <= 10) = 7.14e-6. If the null hypothesis is that p = 0.9, then the two-sided p-value is more difficult to compute. In the case of any symmetric distribution, the two-sided p-value is computed as Pr (X <= m - c) + Pr (X >= m + c), where m is the expected value of X under the null hypothesis and either m-c or m+c is the observed test statistic. But the distribution we are testing is not symmetric (if p = 0.9). In this case, the two-sided p-value would have to be expressed this way:

Pr (X <= test statistic) + Pr (X >= m + c)

where m is the expected value and c is chosen to make those two probabilities equal. But in the case of the unsymmetric and discrete binomial distribution with p = 0.9, the expected value of X is 18. Now there is no value c to make Pr (X >= 18 + c) equal to Pr (X <= 10), and we have run into a problem of p-values that Bayesians are always kvetching about: why are we using probabilities for extreme and unobserved data to falsify this null hypothesis? We would of course conclude that p is not 0.9, but the aforementioned criticism is still valid. At this point we should just think about cloning the other software, since the computations we are talking about don't make practical sense to a human who wants to compute a two-sided p-value of a test of "p = 0.9". p-values have their place, but there are examples of data (such as this case) in which their flaws show and can't be fixed. So my advice is just to imitate the other software.

The optimization you mention below is probably not necessary, except in extreme cases. Even then I don't think it would be necessary, since gsl uses a VERY precise approximation of the gamma function to compute that probability. If you want to use it, try using gsl_cdf_binomial_Q along with gsl_cdf_binomial_P. And these functions are now in gsl 1.8, so they are no longer necessary in gslextras.

I would use the exact result until the sample size grows enough to cause overflows. I'm guessing that other software reports its result as asymptotic because doing so is a legacy from the days of single-precision and expensive flops. They probably just never changed the documentation, only the code.
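
For reference, here is a minimal standalone sketch of those tail probabilities using the GSL 1.8 CDF functions named above (just an illustration, not the code in the patch):

#include <stdio.h>
#include <gsl/gsl_cdf.h>

/* Build with: gcc binomial-tails.c -lgsl -lgslcblas -lm */
int
main (void)
{
  unsigned int n = 20;      /* number of trials */
  unsigned int k = 10;      /* observed count in the first category */
  double p0 = 0.9;          /* success probability under the null */

  /* Lower tail Pr(X <= 10) for Binomial(20, 0.9): about 7.14e-6. */
  double lower = gsl_cdf_binomial_P (k, p0, n);

  /* Upper tail Pr(X > 10), its complement. */
  double upper = gsl_cdf_binomial_Q (k, p0, n);

  printf ("Pr(X <= %u) = %g\n", k, lower);
  printf ("Pr(X >  %u) = %g\n", k, upper);
  return 0;
}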

Jason H Stover <jstover>
Group Member
Sat 16 Dec 2006 01:27:42 AM UTC, comment #10: 


>   Pr ({less than or equal to 10 males} \cup {more than 10 males})


>    ...which is 1.0 for a binomial random variable with 20 trials
>    and null hypothesis success probability of 0.5.


Isn't it also 1.0 for ANY hypothesis ??  Some of the finer issues seem counter-intuitive to me.  This explains why I'm not a statistician.  However ....

A new patch is attached, which compiles properly against the latest HEAD.  In this patch, I've used gslextras_cdf_binomial_P.  I've also dropped the optimisation which sets m = MIN(n1, n2) and p = (m == n1) ? p : 1 - p, because this should be done inside the cdf function --- I don't know whether it actually is.

I've changed the code to make the way it works more obvious, and I've clamped the 2-tailed significance to 1.0 --- and now the exact tests give the same results as those of the Chicago Company.

The asymptotic tests, however, are interesting, and I have come to this conclusion about what SPSS actually does:

If n1 + n2 > 25, then SPSS labels its reported value as an asymptotic result, BUT this is a lie, and it's actually an exact result.  Probably there is some threshold above which they do actually calculate asymptotic results, but most likely it's well above 25.

So I suppose the question is, do we try to emulate these misleading results, or go with correct ones?

(file #11535)

John Darrington <jmd>
Group administrator
Fri 15 Dec 2006 10:12:38 PM UTC, comment #9: 

Your p-value of 1.0 below is fine, though it does point to a
quirk of p-values. The p-value of 1.0 means this: under the hypothesis that the distribution of the sexes is equal, the probability
of seeing a "more extreme" test statistic than 10 males/10 females is 1.0. Perhaps it should be

1 - Pr(10 males) = 1 - (20 choose 10) * (1/2)^20

...but that isn't how p-values are defined. They are defined to
include the observed value of the test statistic in their computation, so we have a p-value of

Pr ({less than or equal to 10 males} \cup {more than 10 males})

...which is 1.0 for a binomial random variable with 20 trials and
null hypothesis success probability of 0.5.
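
To make those numbers concrete (my arithmetic, writing X for the number of males in 20 trials):

Pr (X = 10)  = (20 choose 10) * (1/2)^20 = 184756 / 1048576 ~ 0.176
1 - Pr (X = 10)                                             ~ 0.824
Pr (X <= 10) = Pr (X >= 10)                                 ~ 0.588
Pr ({X <= 10} \cup {X >= 10}) = 0.588 + 0.588 - 0.176       = 1.000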

Jason H Stover <jstover>
Group Member
Fri 15 Dec 2006 10:05:48 PM UTC, comment #8: 

I'm sorry for not checking this before all the recent checkins, but after applying your patch to a recent checkout, I get an error that correlations.c is not found, though correlations.q is. If I type make again, correlations.c is built, but I get these linker errors:

src/language/command.c:297: undefined reference to `cmd_correlations'
src/language/liblanguage.a(command.o)(.rodata+0x4ac):src/language/command.c:299: undefined reference to `cmd_crosstabs'
src/language/liblanguage.a(command.o)(.rodata+0x4cc):src/language/command.c:299: undefined reference to `cmd_examine'
src/language/liblanguage.a(command.o)(.rodata+0x51c): In function `cmd_match_words':
src/language/command.c:330: undefined reference to `cmd_frequencies'
src/language/liblanguage.a(command.o)(.rodata+0x54c):src/language/command.c:333: undefined reference to `cmd_means'
src/language/liblanguage.a(command.o)(.rodata+0x56c):src/language/command.c:327: undefined reference to `cmd_npar_tests'
src/language/liblanguage.a(command.o)(.rodata+0x57c):src/language/command.c:327: undefined reference to `cmd_oneway'
src/language/liblanguage.a(command.o)(.rodata+0x58c):src/language/command.c:376: undefined reference to `cmd_correlations'
src/language/liblanguage.a(command.o)(.rodata+0x59c):src/language/command.c:380: undefined reference to `cmd_rank'
src/language/liblanguage.a(command.o)(.rodata+0x5ac):src/language/command.c:377: undefined reference to `cmd_regression'
src/language/liblanguage.a(command.o)(.rodata+0x60c): In function `count_matching_commands':
src/language/command.c:389: undefined reference to `cmd_t_test'

(Now that final exam week has ended, I can check these patches more quickly than before.)

Jason H Stover <jstover>
Group Member
Sun 10 Dec 2006 05:09:00 AM UTC, comment #7: 

Here's yet another version of the patch, resolving the conflicts against the recent variable.[ch] changes.

(file #11499)

John Darrington <jmd>
Group administrator
Sun 10 Dec 2006 02:59:07 AM UTC, comment #6: 

OK.  I think Jason's right.  The comparisons in Algorithms are simply
an optimisation to ensure that the cumulative distribution is
calculated from the end nearest to the target value.  That's a
separate issue from what's mentioned in the user documentation about
the direction of the test (which Algorithms doesn't mention at all).

I've attached a new patch against the latest HEAD.  This one then
gets all the 1-tailed exact tests correct (agrees with SPSS results).

Now I'm totally confused about the 2-tailed exact tests which are
reported when p == 0.5.

This code

DATA LIST LIST NOTABLE /x w .
BEGIN DATA.
1   10
2   10
END DATA.

WEIGHT BY w.

NPAR TESTS
/BINOMIAL(0.5) = x
.

When run by SPSS, this produces:


9.1 NPAR TESTS.  Binomial Test
+-+------#--------+--+--------------+----------+---------------------+
| |      #Category| N|Observed Prop.|Test Prop.|Exact Sig. (2-tailed)|
+-+------#--------+--+--------------+----------+---------------------+
|x|Group1#    1.00|10|          .500|      .500|                1.000|
| |Group2#    2.00|10|          .500|          |                     |
| |Total #        |20|          1.00|          |                     |
+-+------#--------+--+--------------+----------+---------------------+


which seems ridiculous to me.  If my null hypothesis is that the
distribution of sexes amongst the population is 0.5/0.5, and I
randomly select 20 people, and it turns out that 10 are male and 10
are female, SPSS is telling me that the probability of getting these
results, when my null hypothesis is false, is 1.0 !!!!


As it happens, my result using the formulae from Algorithms is 1.176,
which is meaningless.  All the other 2-tailed tests agree with SPSS's
results, so my guess is that SPSS simply clamps the p-value to 1.000
if it exceeds that.

Bizarre!
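
For what it's worth, here is a minimal standalone sketch (my reconstruction, not the patch itself) of how a value like 1.176 can arise and how clamping reproduces the 1.000 above: summing the two binomial tails counts Pr (X = 10) twice.

#include <stdio.h>
#include <gsl/gsl_cdf.h>

int
main (void)
{
  unsigned int n = 20, k = 10;
  double p0 = 0.5;

  double lower = gsl_cdf_binomial_P (k, p0, n);      /* Pr(X <= 10), about 0.588 */
  double upper = gsl_cdf_binomial_Q (k - 1, p0, n);  /* Pr(X >= 10), about 0.588 */
  double sig2 = lower + upper;                       /* about 1.176: Pr(X = 10) counted twice */

  if (sig2 > 1.0)
    sig2 = 1.0;                                      /* clamp, matching the SPSS output above */

  printf ("2-tailed exact sig. = %.3f\n", sig2);
  return 0;
}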

(file #11497)

John Darrington <jmd>
Group administrator
Thu 07 Dec 2006 05:28:37 PM UTC, comment #5: 

I checked a couple of the tests out. It looks like SPSS is doing
something odd. I attached a file with comments. I'm not sure what
to do about this. The SPSS output is misleading, though not necessarily "wrong", if we accept its inconsistent changes in what hypothesis it tests in different situations. But I don't want to mislead any users.




(file #11471)

Jason H Stover <jstover>
Group Member
Wed 29 Nov 2006 10:17:52 PM UTC, comment #4: 

Sorry about that.  I must have forgotten to cvs add it.

npar-binomial.sh attached

(file #11408)

John Darrington <jmd>
Group administrator
Wed 29 Nov 2006 07:19:01 PM UTC, comment #3: 

I didn't receive a copy of npar-binomial.sh in
the patch. Can you post it?

-Jason

Jason H Stover <jstover>
Group Member
Sat 25 Nov 2006 09:12:37 PM UTC, comment #2: 

I can check the computations of the significance values in the
next few days.

Jason H Stover <jstover>
Group Member
Sat 25 Nov 2006 01:36:38 AM UTC, comment #1: 

Have you checked out the algorithms described on spss.com against what SPSS Statistical Algorithms says?  The latter might be dated; who knows.

In value_dup, I'd prefer to calculate the size as MIN(width, sizeof *val).

In compare_var_index, please use less-than and greater-than to figure out the comparison result; if we ever change the `index' member to an unsigned type (which would make a lot of sense), we'll really appreciate not having to hunt down this bug.
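
Something like the following stand-in illustrates what I mean (the struct here is hypothetical, not the real PSPP variable type):

#include <stddef.h>
#include <stdlib.h>

/* Stand-in for the real variable structure; only the index matters here. */
struct var_stub { size_t index; };

/* Comparator suitable for qsort/bsearch.  Using < and > instead of
   subtraction stays correct even if the index member is (or becomes)
   unsigned, where a->index - b->index would wrap around instead of
   going negative. */
static int
compare_var_index (const void *a_, const void *b_)
{
  const struct var_stub *a = a_;
  const struct var_stub *b = b_;

  return a->index < b->index ? -1 : a->index > b->index;
}

int
main (void)
{
  struct var_stub vars[] = { { 3 }, { 0 }, { 2 } };
  qsort (vars, 3, sizeof *vars, compare_var_index);
  return 0;
}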

I didn't look very closely at the new files you introduced.  I'll assume that they do the right thing.

Ben Pfaff <blp>
Group administrator
Thu 23 Nov 2006 02:29:22 PM UTC, original submission:  

This patch introduces the NPAR TESTS command.

So far, the only supported subcommands are CHISQUARE and BINOMIAL, but the framework has been written so that others can be added without a lot of hacking to the parser.

Summary descriptives are supported, but percentiles are not, because I want to completely rewrite the percentiles subroutine that we have.

The CHISQUARE subcommand has some things in common with the FREQUENCIES command, so there's a bit of refactoring there.

I chose the BINOMIAL subcommand because I thought it would be easy.  However, despite many hours tearing my hair out, I've been unable to come up with significance values which agree with those produced by SPSS.  It seems to me that SPSS Statistical Algorithms, the texts it references, the SPSS v13 code and common sense all use different ideas of what the "significance" of a binomial test is.  In the test, I've inserted the values that SPSS v13 produces.  Consequently this test is failing at the moment.

I thought it would be nothing more than the cdf of the binomial distribution at the particular values, but it seems that it's something else at least in some cases ....


John Darrington <jmd>
Group administrator

 


Attached Files
file #11558:  npar5.patch added by jmd (140KiB - text/x-patch)
file #11535:  npar4.patch added by jmd (271KiB - text/x-patch)
file #11499:  npar3.patch added by jmd (102KiB - text/x-patch)
file #11497:  npar2.patch added by jmd (88KiB - text/x-patch)
file #11471:  binomial-notes.txt added by jstover (2KiB - text/plain - examination of the first two tests of the NPAR procedure in npar-binomial.sh)
file #11408:  npar-binomial.sh added by jmd (16KiB - application/x-shellscript)
file #11335:  npar.patch added by jmd (101KiB - text/x-patch)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • blp (Posted a comment)
  • jmd (Updated the item)
  • jstover (Posted a comment)


     

    Follow 20 latest changes.

    Date        Changed by  Updated Field  Previous Value         =>  Replaced by
    2006-12-20  jmd         Status         Works For Me           =>  Done
                            Assigned to    jmd                    =>  None
                            Open/Closed    Open                   =>  Closed
    2006-12-19  jstover     Assigned to    jstover                =>  jmd
    2006-12-19  jmd         Attached File  -                      =>  Added npar5.patch, #11558
                            Assigned to    jmd                    =>  jstover
    2006-12-16  jstover     Status         Ready For Test/Review  =>  Works For Me
                            Assigned to    jstover                =>  jmd
    2006-12-16  jmd         Attached File  -                      =>  Added npar4.patch, #11535
                            Status         Works For Me           =>  Ready For Test/Review
                            Assigned to    jmd                    =>  jstover
    2006-12-15  jstover     Assigned to    jstover                =>  jmd
    2006-12-11  jmd         Assigned to    jmd                    =>  jstover
    2006-12-10  jmd         Attached File  -                      =>  Added npar3.patch, #11499
    2006-12-10  jmd         Attached File  -                      =>  Added npar2.patch, #11497
    2006-12-07  jstover     Attached File  -                      =>  Added binomial-notes.txt, #11471
    2006-11-29  jmd         Attached File  -                      =>  Added npar-binomial.sh, #11408
    2006-11-25  blp         Assigned to    None                   =>  jmd
    2006-11-25  blp         Status         Ready For Test/Review  =>  Works For Me
    2006-11-23  jmd         Attached File  -                      =>  Added npar.patch, #11335
