GNU Octave - Bugs: bug #42742: polygcd fails valid test

Submitter:         Michael Godfrey <godfrey>
Submitted:         Sat 12 Jul 2014 01:04:56 PM UTC
Category:          Octave Function
Severity:          3 - Normal
Priority:          5 - Normal
Item Group:        Incorrect Result
Status:            Fixed
Assigned to:       None
Originator Name:   Godfrey
Open/Closed:       Closed
Release:           dev
Operating System:  Any
Fixed Release:     None
Planned Release:   None


Mon 06 Oct 2014 04:51:45 AM UTC, comment #10: 

I applied the patch here (http://hg.savannah.gnu.org/hgweb/octave/rev/5d3111977623).  Closing the report, but maybe make a note on the Wiki about investigating this next summer.

Rik <rik5>
Group administrator
Sun 05 Oct 2014 06:24:29 PM UTC, comment #9: 

OK. If Dan's hack works, it makes sense to apply it.

But, as Dan said, try to put it on the list for attention sometime soon.

Michael Godfrey <godfrey>
Group Member
Sun 05 Oct 2014 06:21:26 PM UTC, comment #8: 

While it is tempting to just think of a hack to prevent
r(1) == 0 from leading to an error, the comment in the doc:

     Caution: This is a numerically unstable algorithm and should not
     be used on large polynomials.

is worth noting.  It would be a good idea to review
some current literature and find a more robust algorithm.

Michael Godfrey <godfrey>
Group Member
Sun 05 Oct 2014 04:48:35 PM UTC, comment #7: 

Attached is a changeset to fix the test.  It is a simple change to the test itself to make it more suitable for the computation, i.e., scaling the magnitude of the random polynomial roots down by a factor of three or so.  Scaling by ten (the original) puts the coefficients way out of reasonable bounds: ten roots with a standard deviation of ten put the largest coefficient somewhere around 10^8, compared to the lead coefficient of 1.
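
Roughly, the change amounts to something like this (a sketch of the idea only; the exact factor used in the attached changeset may differ slightly):


%!test
%! for ii = 1:10
%!   p  = (unique (randn (10, 1)) * 3).';  # scale roots by 3 instead of 10
%!   p1 = p(3:end);
%!   p2 = p(1:end-2);
%!   assert (polygcd (poly (-p1), poly (-p2)), poly (- intersect (p1, p2)),
%!           sqrt (eps));
%! endfor
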

I systematically lowered the scaling factor and increased the number of trials to 100,000.  With a scaling factor of 5, the 100,000 trials consistently produced failures.  With a scaling factor of 4, the first run of 100,000 trials passed and a rare failure occurred only on the second attempt.

The track I initially followed with the computation of roots rather than using deconv (which uses filter) just didn't seem to work.  The existing routine has much better numerical accuracy than using the roots command, even though it too might not be the best solution.  I think it is a case of the roots command (which uses eig) not being very good.  Try some examples, e.g.,


octave:118> roots(poly([3 3 3 3]))
ans =

   3.0005 + 0.0000i
   3.0000 + 0.0005i
   3.0000 - 0.0005i
   2.9995 + 0.0000i


So, I suggest staying with the existing algorithm but adding the tweak to the test.  Also, consider looking into the behavior of roots/eig as a summer project.


(file #32226)

Dan Sebald <sebald>
Sun 05 Oct 2014 01:17:31 AM UTC, comment #6: 

I've looked into this issue a little more.  It doesn't seem to be a tolerance issue, but a case of deconvolution going unstable when the random polynomial coefficients are ill conditioned.

What I noticed is that on multiple passes through the loop the residues (the 'r') exhibit an error for seemingly no reason, and that error then gets amplified on successive passes, resulting in r exceeding the tolerance.  The error can sometimes be rather big.  By error, I mean that when printing out


if (length(a) <= length(b))
  cv = (conv(d,a) + r);
  cv - b
endif


I was seeing non-zero cv-b on occasion, and the deconv() help says that should be zero, by definition.

So, try this routine


for i=1:100
  a = rand(3,1);
  b = rand(15,1);
  y = conv(b,a);
  [b_est, r] = deconv(y,a);
  z = conv(b_est,a) + r - y;
  a
  roots(a)
  [b b_est]
  if any(z)
    z
    break;
  endif
endfor


and notice how the cases that fail have a(1) near zero and how the deconvolution results can go quickly out of bounds.  But, otherwise the results look pretty accurate.

Well, partly this isn't a surprise, and the reason has to do with 'a' being the feedback coefficients of the filter routine used in deconvolution:


    b = filter (y, a, x);


Above, 'a' determines the poles of the system, and if the poles are outside the unit circle, filtering will diverge.  So I've included the "roots" call in the chunk of code above, and one can see how the roots (poles) have magnitudes far outside the unit circle.

Now, there may be better algorithms for doing this sort of thing (deconvolution is often dodgy, and I did start looking at the roots approach earlier, which I'll think about some more), but as far as the algorithm as given goes, I think this is a case of the random choice of data exposing the algorithm's weakness.  There should be restrictions on p1 (and possibly p2) to keep deconvolution numerically stable.  For example, rather than being completely random, the polynomials could be {C_p} + {n_p}, where C_p are nice polynomial coefficients and n_p is added noise.
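
One hypothetical way to build such restricted test polynomials (a sketch only, not the change that was eventually applied): start from well-conditioned coefficients and perturb them with small noise, so the feedback polynomial that deconv hands to filter keeps its poles in a reasonable range.


# Hypothetical construction of a better-conditioned random test polynomial.
C_p = poly (1:8);                              # "nice" coefficients with known roots
n_p = 1e-6 * norm (C_p) * randn (size (C_p));  # small additive noise
p1  = C_p + n_p;                               # perturbed polynomial for a test
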

Dan Sebald <sebald>
Wed 13 Aug 2014 01:41:52 AM UTC, comment #5: 

The following works for me: it enlarges the tolerance based on the input, and all tests pass even when run repeatedly.


Change
tol = sqrt (eps);
to
tol = sqrt (eps (max (b)));


There is an intuitive logic to doing this, but I haven't been able to convince myself rigorously why it should be so.
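
A rough illustration of the scaling (assuming coefficients on the order of 1e8, which the failing tests can produce): eps (x) grows with x, so sqrt (eps (max (b))) loosens the tolerance as the coefficients grow.


sqrt (eps (1))     # about 1.5e-8
sqrt (eps (1e8))   # about 1.2e-4, a much looser tolerance for large coefficients
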


Rik <rik5>
Group administrator
Wed 06 Aug 2014 10:52:29 PM UTC, comment #4: 

Reiterating Rik's comment, I guess that is pretty much the same as what I pointed out.  The


nz = find (abs (r) > tol);


is the issue.  The vector r can have a very wide range in terms of magnitude.  Yes, perhaps applying norm in some way would do it.

An alternative test for the roots()-based approach might be to incorporate the magnitude of the root.  That is, the patch I submitted previously has this test:


        if (abs (ra (i_a) - rb (i_b)) < tol)


which tests if the roots of the different polynomials are the same.  That could be changed to something like:


        if (abs (ra (i_a) - rb (i_b)) < tol*norm ([ra(i_a) rb(i_b) 1]))


I put the '1' in there to prevent decreasing the tolerance to something potentially smaller than eps.  This works, but again, the tests fail because of something like the following:


octave-cli:39> test polygcd
  ***** assert (polygcd (poly (1:8), poly (3:12)), poly (3:8), sqrt (eps))
!!!!! test failed
assert (polygcd (poly (1:8), poly (3:12)),poly (3:8),sqrt (eps)) expected
       1     -33     445   -3135   12154  -24552   20160
but got
 Columns 1 through 6:

   1.0000e+00  -3.3000e+01   4.4500e+02  -3.1350e+03   1.2154e+04  -2.4552e+04

 Column 7:

   2.0160e+04
maximum absolute error 9.27736e-05 exceeds tolerance 1.49012e-08


The solution is good, but because the poly() routine multiplies all those roots together, the many small differences in the roots accumulate in a multiplicative way.  So, again, the tolerance of the results may need refinement, but in some sense I also wish the roots function produced more accurate results.
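
To put the failure above in perspective, a bit of arithmetic on the printed values (illustration only):


9.27736e-05 / 24552     # roughly 3.8e-9, well under sqrt (eps) ~ 1.5e-8


So relative to the largest coefficients the answer is accurate to better than sqrt (eps), even though the absolute test fails.
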


(file #31857)

Dan Sebald <sebald>
Wed 06 Aug 2014 09:24:49 PM UTC, comment #3: 

I've attached a patch that implements polygcd() in the way I described, that is, it uses the roots() function and finds common roots.  However, the first impression is that it too has the same sorts of tolerance problems.

I increased the polygcd tolerance (i.e., the input parameter) and then got the expected results, but the resulting polynomial coefficients don't pass the tolerance test; they are off by some factor slightly greater than 10*sqrt(eps).

I'm not sure about the numerical stability comment in the documentation for polygcd().  It seems to me the bigger issue is the way the tolerance is applied.  Looking at some of the tests in polygcd(), a coefficient like 1 is expected to be within 10*sqrt(eps), but a coefficient like 20,000 is also expected to be accurate to within 10*sqrt(eps).  With polynomial orders of, say, 8 it is easily possible to have numbers in the range of 1 to 2e4.  Nothing takes the magnitude of the numbers into account.  One would prefer a relative test, say 0.001%, rather than an absolute one in this case.


(file #31856)

Dan Sebald <sebald>
Wed 06 Aug 2014 08:44:53 AM UTC, comment #2: 

In the polygcd() routine is this line:


         r = r(nz(1):length(r));


Are the zeros of r always assured to be at the front of the array?  Perhaps so.

Anyway, is there another way of solving this instead of using deconvolution?  The documentation states:

"This is equivalent to the polynomial found by multiplying together all the common roots."

It is pretty straightforward to compute the roots of both of the polynomials using the roots() function.  Then if one were to put the roots in order there could be a fast method of finding common roots (as opposed to a factorial permutation comparison of all combinations of roots).  Here's an illustration.


octave-cli:112> p1 = [1 5 7 3];
octave-cli:114> p2 = [3 12 7];
octave-cli:126> r1 = sort(roots(poly(p1)))
r1 =

   1.00000000000000
   2.99999999999998
   5.00000000000003
   6.99999999999998

octave-cli:127> r2 = sort(roots(poly(p2)))
r2 =

    3.00000000000000
    7.00000000000000
   11.99999999999998

octave-cli:128> poly([r1(2) r1(4)])
ans =

    1.00000000000000   -9.99999999999996   20.99999999999983

octave-cli:129> poly([r2(1) r2(2)])
ans =

    1.00000000000000  -10.00000000000001   21.00000000000002

octave-cli:131> polygcd(poly(p1),poly(p2))
ans =

    1  -10   21


All that is needed is a short, simple loop inside a loop that picks the common roots within some tolerance.  (There is the intersect() command, but that might require exact equality; I'm not sure.)
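
A minimal sketch of such a loop (a hypothetical helper for illustration, not the attached patch):


function g = common_root_gcd (b, a, tol)
  ## Sketch: pair up roots of the two polynomials that agree to within
  ## tol and rebuild a polynomial from the shared roots.
  if (nargin < 3)
    tol = sqrt (eps);
  endif
  ra = sort (roots (b));
  rb = sort (roots (a));
  common = [];
  for i_a = 1:numel (ra)
    for i_b = 1:numel (rb)
      if (abs (ra(i_a) - rb(i_b)) < tol)
        common(end+1) = ra(i_a);  # record the shared root
        rb(i_b) = [];             # do not match the same root of a twice
        break;
      endif
    endfor
  endfor
  g = real (poly (common));       # poly ([]) returns 1, i.e., gcd = 1
endfunction
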

The question is whether the proposed approach is any more robust in the case of polynomial coefficients being large.

Dan Sebald <sebald>
Tue 05 Aug 2014 08:36:48 PM UTC, comment #1: 

The problem can be partially alleviated by using a larger tolerance.  The default is sqrt (eps), but I find that if I use 10 times that value the error frequency drops from an average of 16 per 1000 iterations to about 1 per 1000.

Test code I used was:


bad = 0;
tol = 10*sqrt (eps);
for ii=1:1000
  p = (unique (randn (10, 1)) * 10).';
  p1 = p(3:end);
  p2 = p(1:end-2);
  obs = polygcd (poly (-p1), poly (-p2), tol);
  exp = poly (- intersect (p1, p2));
  if (! size_equal (obs, exp))
    bad++;
  endif
endfor


One possibility is that the tolerance, which defaults to sqrt (eps), should be based on some characteristic of the input vectors to polygcd such as the norm.
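
A hypothetical version of that scaling (illustration only, not what polygcd currently does):


## Scale the default tolerance by the magnitude of the input coefficients.
tol = sqrt (eps) * max (norm (b, Inf), norm (a, Inf));
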

The relevant lines in polygcd are


[d, r] = deconv (b, a);
nz = find (abs (r) > tol);
if (isempty (nz))


In cases where it fails the remainder vector r is very nearly zero, i.e., the remainder polynomial is very small.  But there is one coefficient which just manages to exceed the tolerance so the algorithm continues for one more trip through the while loop and gets the wrong answer.

A sample failing set of variables p, p1, p2, obs, exp is attached to the report as polybad.var for those who want a test case.




(file #31842)

Rik <rik5>
Group administrator
Sat 12 Jul 2014 01:04:56 PM UTC, original submission:  

This was first reported on the Maintainers List, but it appears
to be a bug in polygcd.

On 07/07/2014 06:57 PM, Rik wrote:

> All,
>
> I'm getting occasional failures from the test code in polygcd.  It seems to
> happen about 10% of the time according to the following code:
>
> for i = 1:100
>    bm(i) = test ("polygcd");
> endfor
> sum (bm)
>
> The test that fails is
>
> %!test
> %! for ii=1:10
> %!   p  = (unique (randn (10, 1)) * 10).';
> %!   p1 = p(3:end);
> %!   p2 = p(1:end-2);
> %!   assert (polygcd (poly (-p1), poly (-p2)), poly (- intersect (p1, p2)),
> %!           sqrt (eps));
> %! endfor
>
> I tried a few different random seeds to see if I could fix the value to
> something that would always pass, but no luck.
>
> The simplest thing is to make this an %!xtest which can occasionally fail.
> But if someone understands polygcd and could suggest a way to modify the
> test that would be preferable.
>
> The error I get is a dimensional mismatch:
>
> !!!!! test failed
> ASSERT errors for:  assert (polygcd (poly (-p1), poly (-p2)),poly
> (-intersect (p1, p2)),sqrt (eps))
>
>    Location  |  Observed  |  Expected  |  Reason
>       .          O(1x1)       E(1x7)      Dimensions don't match
>
>
> --Rik

Here is where this goes wrong:

If the line  [d, r] = deconv (b, a);  in polygcd returns an r vector whose first element
is zero, then the line  a = r / r(1);  fails.  At that point a ends up as a = 1 and x = 1,
and the assert fails as above.  I do not right now see why a = 1 results from the divide
by zero, but this is the cause of the failure.
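
A contrived illustration of what the divide by zero does to a (not the actual failing data from the test):


r = [0 1e-16 2e-16];
a = r / r(1)        # element-wise division by zero gives [NaN Inf Inf]
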

The following script will, for me, reliably produce the failure as described above:

 for ii=1:1000
   p  = (unique (randn (10, 1)) * 10).';
   p1 = p(3:end);
   p2 = p(1:end-2);
   assert (polygcd (poly (-p1), poly (-p2)), poly (- intersect (p1, p2)), sqrt (eps));
 endfor

Obviously, the line  a = r / r(1);  should not be executed if r(1) = 0. But, it is
not clear to me right now what should be done instead.

I hope this helps.

Michael Godfrey <godfrey>
Group Member

 

Attached Files
file #32226:  octave-polygcd-2014oct05.patch added by sebald (1023B - application/octet-stream)
file #31857:  octave-polygcd_2014aug06_2.patch added by sebald (1KiB - application/octet-stream)
file #31856:  octave-polygcd_2014aug06.patch added by sebald (1KiB - application/octet-stream)
file #31842:  polybad.var added by rik5 (663B - application/octet-stream)

 



Follow 6 latest changes.

Date        Changed by  Updated Field   Previous Value => Replaced by
2014-10-06  rik5        Status          None => Fixed
                        Open/Closed     Open => Closed
2014-10-05  sebald      Attached File   - => Added octave-polygcd-2014oct05.patch, #32226
2014-08-06  sebald      Attached File   - => Added octave-polygcd_2014aug06_2.patch, #31857
2014-08-06  sebald      Attached File   - => Added octave-polygcd_2014aug06.patch, #31856
2014-08-05  rik5        Attached File   - => Added polybad.var, #31842
