bug #48307: sinc loses precision for large arguments

Submitter:  Colin Macdonald <cbm>
Submitted:  Fri 24 Jun 2016 05:49:33 PM UTC
Category:         Octave Function
Severity:         3 - Normal
Priority:         5 - Normal
Item Group:       Inaccurate Result
Status:           Invalid
Assigned to:      None
Open/Closed:      Closed
Release:          dev
Operating System: Any
Fixed Release:    None
Planned Release:  None


Fri 13 Apr 2018 05:10:58 PM UTC, comment #33: 


>> format long
>> sind(45)

ans =    7.071067811865475e-01

>>


You are only seeing what is displayed with "format short"!
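
The same thing holds for the sind(60) example:


>> sind(60)
ans =    8.660254037844386e-01
% (as printed with format long; the last digit may vary by one ulp)
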

Doug Stewart <dastew>
Fri 13 Apr 2018 04:51:43 PM UTC, comment #32: 

I think that there is a problem with the accuracy of the sine function in Octave:

>> sind(45)

ans =  0.70711

but the true value is sin(45 deg) = 1/sqrt(2) = 0.7071067812;

>> sind(60)

ans =  0.86603

but the value should be sqrt(3)/2 = 0.8660254038.

It looks as if sind loses accuracy at the 5th decimal place.  So the sine function first needs to be corrected.  It is possible that the code expands the sine function as a Taylor series and keeps only a few terms, whereas we would need to keep more terms (approx. 100) to get the value correct to 20 decimal places.  I have written code for the sine function in C++ and it gives the correct value to 20 decimal places.  Please correct me if I am wrong.  If this is indeed the reason why sinc() is inaccurate for large input values, I will start writing a sinc() in the .m language.

SUDHIR KUMAR SUMAN <sudhir27>
Wed 06 Jul 2016 06:38:01 AM UTC, comment #31: 


> So, for me the original report is not a bug and sin works in the right way. Of course, it is possible in principle to enhance the accuracy of sinc, but I do not see an easy way.


Please, the "original report" never said sin didn't work; that particular tangent was not my fault ;-)

> Are you aware of any other software working in pure double precision doing better for the original report?


Not that I found, see my review in comment #28.

Colin Macdonald <cbm>
Tue 05 Jul 2016 03:49:30 PM UTC, comment #30: 


> I looked more carefully at bessel functions: my example in #48316 is a bit flawed: it only uses integer inputs. The relative error is much larger for nearby non-integer inputs. Maybe my expectations really are out-of-line with common practice here! I apologize for the tone of my debate.


Well, I learned something.  It might actually be worthwhile fashioning this type of analysis into a symbolic package demo.  The comparison against known solutions is most insightful.  One could do a visual plot of the difference between the arbitrary-precision value from the symbolic library and the double result from the glibc library as x ranges over orders of magnitude, and a comparison against the glibc result for a known solution like -3*sqrt(3)/(20000002*pi).
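
Something along these lines, perhaps (a sketch, assuming the symbolic package is installed; at integer x the exact sin(pi*x) is zero, so the difference is just the library's error):


pkg load symbolic
x = 10 .^ (0:7);
ref = double (sin (sym (pi) * sym (x)));    % exactly zero at integer x
semilogy (0:7, abs (sin (pi * x) - ref))    % library error vs. order of magnitude of x
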

Dan Sebald <sebald>
Tue 05 Jul 2016 07:47:39 AM UTC, comment #29: 

@ Lachlan, comment #20

If you take x = 2^24 + 1/2 + 1/4, it has an exact binary representation.  But


> x = 2^24+1/2+1/4;
> sinc(x)
ans =    1.34157580003058e-08


while the exact result (sage) is


1.3415757952777031231285713571e-8


If we denote by Sin(x) the exact value of sin(x), the implementation of the sin function is correct, in the sense that (sin(x)-Sin(x))/Sin(x) is below machine precision for any x.  But here we expect


(sin(pi*x)-Sin(the_exact_pi*x))/Sin(the_exact_pi*x)


to be around machine precision.  So, for me the original report is not a bug and sin works in the right way.  Of course, it is possible in principle to enhance the accuracy of sinc, but I do not see an easy way.  Are you aware of any other software working in pure double precision doing better for the original report?
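
To make the gap concrete (a sketch; the reference is the sage value quoted above, truncated to double):


x = 2^24 + 1/2 + 1/4;
exact = 1.3415757952777031e-08;     % sage value from above
abs ((sinc (x) - exact) / exact)    % roughly 3.5e-9, far above eps
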

Marco Caliari <caliari>
Group Member
Tue 05 Jul 2016 07:27:35 AM UTC, comment #28: 

I looked more carefully at bessel functions: my example in #48316 is a bit flawed: it only uses integer inputs.  The relative error is much larger for nearby non-integer inputs.  Maybe my expectations really are out-of-line with common practice here!  I apologize for the tone of my debate.

My reading about other tools:

[Zhang and Jin 1996] (the standard special-function reference): for j0(x), they simply compute sin(x)/x in double precision.  SciPy uses this algorithm via the Fortran code "specfun".  For this reason, SciPy's j0(pi*x) agrees with Octave's sinc(x).

NR (Numerical Recipes): for jn(x), they call their regular besselj function and scale with sqrt(1/x)---the formula I mentioned below.

Matlab: seems to use sin(x)/x (because its results match Octave's)

Boost: uses sin(x)/x for j0 and otherwise uses besselj.



Colin Macdonald <cbm>
Tue 05 Jul 2016 05:44:47 AM UTC, comment #27: 

"which just shows that square-rooting algorithms are a little more numerically accurate. With a series computation, all those multiplications accumulate errors, or if we truncate the series then we are inherently tossing precision."

Eh, actually it's the limited precision of the input value: computing sqrt(3) directly is analogous to having an exact input value.

Dan Sebald <sebald>
Tue 05 Jul 2016 05:38:57 AM UTC, comment #26: 

Ah, so the "mp" of the mpmath library means "multi-precision", thanks.

I should have thought of the known solution for factor of pi/3 angles.  Octave produces:


>> sprintf("%0.100g", -3*sqrt(3)/(20000002*pi))
ans = -8.26993260433362135722798187502380340418994819629006087779998779296875e-08


which is quite an improvement, six digits in fact.  So that gets into the area of eps:


>> abs((-3*sqrt(3)/(20000002*pi)-(-0.000000082699326043336203093078665439078872301720741882921)) / -0.000000082699326043336203093078665439078872301720741882921)
ans =    1.6004e-16
>> eps
ans =    2.2204e-16


which just shows that square-rooting algorithms are a little more numerically accurate.  With a series computation, all those multiplications accumulate errors, or if we truncate the series then we are inherently tossing precision.

Well, getting somewhere, but I still don't know what can be done.  Bessel functions are an idea, but I'm sure they have their own numerical and speed issues.

Just curious: from what you know about the arbitrary-precision library, is it slow to compute trig functions out to sixteen digits compared to the floating-point library?

Dan Sebald <sebald>
Tue 05 Jul 2016 05:35:30 AM UTC, comment #25: 

Colin, comment #22: You have said that sinc doesn't have to be calculated in a particular way.  I agree.  My point is that it doesn't matter how sinc is calculated, because the problem is with the input argument.  That is what Dan confirmed with comment #21:

"So, yes, it seems that most of the loss of accuracy is a result of the fact the input (i.e., Octave's storage class) is double and not long double."

Lachlan Andrew <lachlan>
Tue 05 Jul 2016 04:24:20 AM UTC, comment #24: 


>  symbolic IS using long double


Symbolic is using arbitrary precision via the mpmath library.  I compared the results to Maple down below and they matched...


Here's something like what you want, although to me this just shows that two independent arbitrary precision calculators give the same result.

Symbolic:

>> sin(pi*x) / (pi*x)
ans = (sym)

    -3⋅√3
  ──────────
  20000002⋅π

>> digits(200)
>> vpa(ans)
ans = (sym)

  -0.000000082699326043336203093078665439078872301720741882921909627713940272260
  518294338639792193025104173122963842414014598385696672944197610371640960743486
  670650135932398542467726274361779425593210227572018156


Maple:

> x := 10000001/3;
                                x := 10000001/3

> A := sin(Pi*x) / (Pi*x);
                                          1/2
                                       3 3
                              A := - -----------
                                     20000002 Pi

> Digits := 200;
                                 Digits := 200

> evalf(A);
-0.826993260433362030930786654390788723017207418829219096277139402722605182\
    94338639792193025104173122963842414014598385696672944197610371640960743\

                                                                -7
    486670650135932398542467726274361779425593210227572018154 10


Colin Macdonald <cbm>
Tue 05 Jul 2016 04:15:59 AM UTC, comment #23: 

Here's one concrete approach: we could find a library for approximating the spherical Bessel functions jn(z).  And j0(pi*z) is sinc(z)...

(We could relate jn(x) to besselj(n+1/2,x)*sqrt(pi/(2*x)), but I don't think this would be much or any improvement over sin(x)/x.  Also, in my very limited tests, Octave's non-integer-order Bessel functions seem to have poorer relative error than the integer orders.)
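
For the record, here is that relation as a sketch (it is mathematically exact, so any difference from sinc(z) is down to besselj's accuracy):


z = 10000001 / 3;
t = pi * z;
sqrt (pi / (2*t)) * besselj (0.5, t)    % equals j0(pi*z) = sinc(z) in exact arithmetic
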

Colin Macdonald <cbm>
Tue 05 Jul 2016 04:14:16 AM UTC, comment #22: 


> the lack of precision in the input, which is insurmountable.


Careful here.  I've said several times that it is not a given that sinc must be computed using the formula sin(x)/x.  One could search for an asymptotic expansion for large x, for example, or maybe do a contour integral with quadrature in the complex plane.  Then we would use sin(x)/x for smallish x and switch to that something else for large x.  Although I have not yet found anyone who does this (Matlab and SciPy both seem to just evaluate sin(x)/x, as we do).

> So I'm assuming Maple is using long double math.


No, Maple is using arbitrary-precision arithmetic.  (Generally too slow for core Octave, although the Symbolic pkg will do this if a user needs it.)

Colin Macdonald <cbm>
Tue 05 Jul 2016 04:13:25 AM UTC, comment #21: 

Attached is a second version of the sinc tests, in which I pass the double value of the input, i.e.,


    double x = 10000001;
    x /= 3;


in addition to the long double version of the same number, into the long-double version of the tgamma-based routine.  So here is the contrast:


DOUBLE
sin(pi*x)/(pi*x)   = -8.26993260956150473721458331322065e-08
tgamma_sinc(x)     = -8.26993260666192699379488145923489e-08
LONG DOUBLE
sin(pi*xl)/(pi*xl) = -8.26993260433311404283465912825143e-08
tgamma_sincl(xl)   = -8.26993260433248344169467932760349e-08
tgamma_sincl(x)    = -8.26993260666192689621341857211990e-08


Octave produces the first result in the list above.  The second result is the custom double version of sinc, which improves on the library's estimate--note that it only cuts the "error" roughly in half, so the improvement isn't the orders of magnitude needed to achieve 3 times eps.  The fifth result is what happens when the double floating-point value is passed into the higher-resolution long-double routine--note how close it is to the custom double routine's result.

So, yes, it seems that most of the loss of accuracy is a result of the fact that the input (i.e., Octave's storage class) is double and not long double.  I don't know what can be done without a total overhaul of Octave's floating-point class.  Maybe symbolic can be made to work better for division problems where the dividend and divisor are integers...or maybe that is already happening (see below).

Let's go back to the original discrepancy that raised the issue.  It involved the difference between the non-symbolic and symbolic result:


>> x = sym(10000001)/3
x = (sym) 10000001/3
>> d = double(x)              % large non-integer
d =    3.3333e+06
>> P1 = double(sin(pi*x) / (pi*x))
P1 =   -8.2699e-08
>> P2 = double(sinc(x))
P2 =   -8.2699e-08

>> Q = sinc(d)             % this is the value we're checking
Q =   -8.2699e-08

>> (P1-Q)/P1               % relative errors
ans =   -6.3216e-10
>> (P2-Q)/P2
ans =   -6.3222e-10


The relative errors only suggest that the symbolic and non-symbolic values are noticeably different.  Why?  To find out, I ran the symbolic package and got a "sympy version too old" error.  I went to upgrade, but then SymPy wants a version 19 mpmath where I only have version 18.  I stopped there.  So, if someone with the symbolic package can repeat the above test, but print out the full length of the result with


sprintf("%0.100g", <both results>)


then we can determine how the symbolic result compares to the higher-resolution results.  Maybe it is the case that symbolic IS using long double because the computation is actually being done with the SymPy library.  If the symbolic library casts 10000001 and 3 to long doubles and then does the computation, it would compare to the high-resolution results of the C program.  All conjecture, so someone should explore that to answer one of the original issues.
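
That is, something like this (a sketch; assumes a working symbolic package):


pkg load symbolic
x = sym (10000001) / 3;
sprintf ("%0.100g", double (sinc (x)))    % symbolic route, rounded to double
sprintf ("%0.100g", sinc (double (x)))    % plain double route
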

Dan Sebald <sebald>
Tue 05 Jul 2016 02:46:12 AM UTC, comment #20: 

Marco, in comment #18 you suggested avoiding calculating  pi*x.  I don't think that will help in general.  The problem is the lack of precision in x, isn't it?

If you think this bug is worth further discussion, feel free to reopen it.  Your comment #8 convinced me that the problem is in the lack of precision in the input, which is insurmountable.

Lachlan Andrew <lachlan>
Mon 04 Jul 2016 04:58:54 PM UTC, comment #19: 

I've attached some C code that explores various Taylor series solutions for sin() with double and long double precision.  (Then, to get sinc(), do the division explicitly.)  I implemented the series with the first ten terms, because the 1/x! coefficient over/underflows for any higher terms.  Are ten terms way more than enough?  Don't know; one would have to test.  To extend the series to twenty terms I used the Gamma function (tgamma) to compute the factorial in floating-point form.  (One could pre-compute those Gamma values to speed up such an approach.)  Then, for the ultimate test, I created a long double version of the Gamma-coefficient series.

Now, how about the Taylor (power) series for sin(x)/x?  Well, on a piece of scratch paper, I get:


f    =  sin(x)/x
f'   =  cos(x)/x -   sin(x)/x^2
f''  = -sin(x)/x - 2 cos(x)/x^2 + 2 sin(x)/x^3
f''' = -cos(x)/x + 3 sin(x)/x^2 + 4 cos(x)/x^3 - 6 sin(x)/x^4


which just doesn't seem to lead anywhere convenient.  Expansion about x - x_0, where x_0 is some multiple of 2*pi, gets rid of the sine and cosine of course, but the formulas still aren't convenient, and I expect it can't be any more accurate or converge any faster because that 1/x term hangs around at the front.
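
For what it's worth, the plain Maclaurin series sinc(x) = sum_k (-1)^k (pi*x)^(2k) / (2k+1)! is also hopeless in double precision for large x; the terms grow enormous before they decay, so the alternating sum cancels catastrophically.  A sketch:


x = 20.25;  t = pi*x;  k = 0:40;
max (t .^ (2*k) ./ factorial (2*k + 1))    % largest term ~1e24, while sinc(20.25) ~ 1.1e-2
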

Anyway, the results I am seeing with the "custom" sin() function are:


sebald@ ~/octave/bug_48307 $ gcc -o sinc sinc.c -lm
sebald@ ~/octave/bug_48307 $ ./sinc
DOUBLE
M_PI = 3.141592653589793115997963468544185161590576171875
10000001/3 = 3333333.6666666665114462375640869140625
sin(pi*x)/(pi*x) = -8.26993260956150473721458331322065049562297645024955272674560546875e-08
table_sinc(x) = -8.26993260666192699379488145923489117450344565440900623798370361328125e-08
tgamma_sinc(x) = -8.26993260666192699379488145923489117450344565440900623798370361328125e-08
LONG DOUBLE
M_PIl = 3.14159265358979323851280895940618620443274267017841339111328125
10000001/3 = 3333333.666666666666742457891814410686492919921875
sin(pi*x)/(pi*x) = -8.2699326043331140428346591282514346722860854033143596097943373024463653564453125e-08
tgamma_sincl(x) = -8.2699326043324834416946793276034964538696858671329437129315920174121856689453125e-08


From that result I take away that we can in fact improve on the double implementation:


sin(pi*x)/(pi*x) = -8.269932609561[snip]e-08
table_sinc(x)    = -8.269932606661[snip]e-08


Is it an adequate improvement in exchange for speed?  Or do we need to go to a long double implementation


tgamma_sincl(x) = -8.2699326043324834416946793276034964538696858671329437129315920174121856689453125e-08


which looks to be along the lines of Maple.

So I'm assuming Maple is using long double math and Octave is using trig functions that toss out a bit of accuracy for increased speed.

Someone feel free to explore further to see just how many terms are needed, whether the algorithm is stable over various ranges, and so on.  Should we have a fix for all the trig functions, or just the sin() computation associated with sinc()?  Can we be content with just using the math library's sinl() routine?



(file #37708)

Dan Sebald <sebald>
Mon 04 Jul 2016 08:23:09 AM UTC, comment #18: 

Sorry.  Another possibility is to avoid the multiplication between pi and x.  On Wikipedia there is the infinite product formula for sinc, but it is prone to over/underflow.


Marco Caliari <caliari>
Group Member
Mon 04 Jul 2016 08:21:27 AM UTC, comment #17: 

@Lachlan

or to avoid the computation of pi*x.  For instance, on Wikipedia there is the infinite product formula for sinc.  I tried it, but it is prone to over/underflow.
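
A naive truncation shows the problem immediately (a sketch):


x = 10000001 / 3;
n = 1:100000;
prod (1 - (x ./ n) .^ 2)    % the partial products overflow to +/-Inf long before converging
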

Marco Caliari <caliari>
Group Member
Mon 04 Jul 2016 07:28:05 AM UTC, comment #16: 

As Marco pointed out in comment #8, the only way to improve accuracy substantially is to use long double for the argument.  Using long double internally will not help, and nor will options 1, 3 or 4.

Lachlan Andrew <lachlan>
Mon 04 Jul 2016 06:04:43 AM UTC, comment #15: 


>> Colin was initially assuming that the relative error should be of the order of eps.
>> However, that is only true of approximately linear functions.
>
>
>I'm not sure that is true.


I agree that the error might have a tighter bound than O(x * eps).  Take sin(x +- eps) and apply the angle-sum formulas, then use |sin(x)| <= 1 and from there find the error limit.  (One must take into account that this is floating point, so eps effectively scales with the magnitude of x itself.)  But regardless, the issue is that there isn't much we can do.

The reason for looking at sin(x) is that all the implementations we've compared calculate sinc as 1) compute sin(x*pi), 2) divide that quantity by x*pi.  So, it is either the sin() function or the division that is introducing the inaccuracy (and note that I compared against the C result, so it isn't Octave doing something strange).  I'm presuming it is not the division, because division is a fundamental math operation and has most likely received a lot of scrutiny, not least from the CPU designers when the division is done in hardware.  Plus, I looked at the C code for the library's sin() and it's a complex set of tweaks revolving around Taylor series expansion, chosen based upon the magnitude of the argument.  The designers of sin() did use a fairly low-order Taylor series expansion, maybe because they wanted to keep the function call efficient.

So, the alternatives are:

1) Improve sin(x) in the glibc library.  This could be a lot of work.  One has to become comfortable with that particular code in the library, figure out a way to improve accuracy, and test it over the various ranges.  After several weeks of work then comes the task of convincing the glibc maintainers that they should modify the code.  Because of how tricky the code is, and the fact that it appears to originate from a reputable author, I think they would be inclined to leave it as is.  I would first go to the maintainers with a good example of inaccuracy in sin() and ask whether, if someone were to improve the accuracy, the patch would be accepted.  I just found a post with a very good description of the sin() function

http://stackoverflow.com/a/2285277

and it sounds like I was looking at the correct code.

2) Change the Octave code to use "long double" trig functions.  That might improve the accuracy some.  It would slow things a bit since long double (128-bit) requires software for basic arithmetic.

3) Find a different library.  (See link above.)

4) As Lachlan suggested, a new routine for sinc() that is more accurate.  Is there one out there in some library Octave already links against?  Somewhere else?  Who knows?  Maybe if one writes out the Taylor series for sinc() it comes out to something that converges more quickly than the one for sin(), since sinc() does decay.  (I would attempt such a thing starting from the simple C code I gave below, because it takes just seconds to recompile the small C file, whereas in Octave it would take minutes.  Once satisfied, the code could be moved into Octave.)  The drawback is that this might be slower than glibc's sin() routine, and it doesn't fix the accuracy of Octave's sin() if there is a problem there.

Dan Sebald <sebald>
Mon 04 Jul 2016 05:49:50 AM UTC, comment #14: 

Sorry for speaking imprecisely.  Glibc's sin is a backwards stable algorithm: we got the exact answer of a nearby problem, measured relatively.

We don't have the exact x, so we don't expect the exact answer.
But we hope for a good answer in the backward-error sense: the answer we get (5.62e-10) is the exact answer of sin(x+delta), where delta is small relative to 10000000*pi.  This is what I take home from the examples in comments #8 and #5.


Colin Macdonald <cbm>
Mon 04 Jul 2016 04:58:06 AM UTC, comment #13: 

Colin, the initial bug report may not have been about sin, but sin is the cause of the loss of relative precision you see.

Besselj is an interesting example.  In bug #48316, you pointed out that Besselj is NaN after 1e10.  That seems to rule out its having low error.

You say that the example of comment #5 seems OK.  The error it has in sin(x*pi) is about 1e-10 (which is a relative error of infinity, but we'll let that slide).  If sin(x*pi) has an error of 1e-10 and we divide by (x*pi) to get sinc(x), then we divide both the value and the error by (x*pi), so the relative error is unchanged.  That means the errors observed in comment #5 are about the same as those in your original post.

My reading of Marco's comment #8 is that sinc is doing the best we can expect, given the limitations of the input.

Finally, the relative error of sin(10000000*pi) is not about eps.  The relative error is


approximate sin(10000000*pi)   non-zero
---------------------------- = -------- = inf
   true sin(10000000*pi)           0


Lachlan Andrew <lachlan>
Mon 04 Jul 2016 04:36:12 AM UTC, comment #12: 


> Colin was initially assuming that the relative error should be of the order of eps.
> However, that is only true of approximately linear functions.


I'm not sure that is true.

> With something like sin(),


But this bug is not about sin.  Sinc oscillates and decays.  Some other special functions oscillate and decay; for example, besselj does, and yet its approximation has low relative error.

But anyway, regarding sin(): I don't think there is anything wrong with sin in glibc (why are we even talking about that?!).  sin's implementation has excellent relative accuracy!  To me the example in comment #5 is fine.  I think Marco has already said so in comment #8, but just in case: note that the relative error is of size eps in the calculation of sin(10000000*pi).  This looks like standard numerical analysis to me.

Of course maybe it is inherently hard to compute sinc for some reason---I said initially that I'm no expert on special functions.

Colin Macdonald <cbm>
Tue 28 Jun 2016 12:46:11 AM UTC, comment #11: 

Dan, perhaps you missed the comments from Marco and me.  (I missed Marco's, because it was sent at about the same time.)

If the argument is double, then we can't get better results than we currently get, simply because the input doesn't have enough precision.  It is a garbage-in-garbage-out problem, which can't be fixed by a better library function.

Colin was initially assuming that the relative error should be of the order of eps.  However, that is only true of approximately linear functions.  With something like sin(), the absolute error for an argument x +/- eps is O(x * eps), but the value is O(1) (i.e., it doesn't get bigger as x does), and so the relative error scales up in proportion to x.
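
A quick sketch of that scaling: at integer x the true sin(pi*x) is exactly zero, so the computed value is pure absolute error, and it grows like x*eps:


x = 10 .^ (0:7);
abs (sin (pi * x)) ./ (x * eps)    % roughly constant, i.e., absolute error is O(x*eps)
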

I'm marking this as "invalid", and will close it in a few days unless someone convinces me that the above is wrong.

Lachlan Andrew <lachlan>
Mon 27 Jun 2016 05:37:40 PM UTC, comment #10: 

I've looked through the glibc library code, and, as I suspected, it is a big collection of files concerning sine and cosine.  These are more or less the base routines from which many of the other trig routines are derived (hyperbolic, arc, etc.).  So it is hard to pin down exactly which files and which routines are pertinent.  To really dig in, one would have to put debugging code in various spots and recompile the library.  I really don't want to do such a thing.

My best guess of where to look in the library for 64-bit double (BTW, there is 128-bit long double code in the library, e.g., sinl, apparently not used by Octave) would be in the library directory

glibc/sysdeps/ieee754/dbl-64

There is consideration for the various ranges of |x| for sin(x).  Ranges are x small on the order of 1.0 (e.g., up to 2.4), various quadrants and so on.  It uses a Taylor series polynomial approximation there.  Then there are ranges for |x| moderately large, up to 105414350, for which a reduce_sincos_1() is called to get some integer value that is passed into a slightly different version of the sin/cos functions that eventually call the Taylor series formula.  Then up to 281474976710656 is another range with a similar construct but reduce_sincos_2().  Then there is one last range for huge numbers that uses reduce_and_compute().

What's interesting is that those ranges roughly agree with the plot of https://savannah.gnu.org/bugs/?48307#comment5.  For the cases 1, 10, ..., 10000000, the error shows typical exponential-like growth.  But at 1e8 the estimate jumps around and gets much worse in some cases.  So I wonder if there is a transition there at the 105414350 boundary that the original programmers recognized.

In any case, it is a lot of work to pick that code apart, understand what it is doing, fix it, test it, and then (worst of all) convince some library maintainer that it should be changed.  The best I think we could do as far as an Octave bug goes is to make the case that glibc's sin/cos is not as accurate as other estimates and submit a bug report to GNU, and with luck get some feedback from someone familiar with the code.  Colin, if you could do some exploration similar to mine by plotting sin(pi*x) for x = 1, 10, 100, etc. in a software package that you know doesn't use glibc, maybe we could contrast the two.

Dan Sebald <sebald>
Mon 27 Jun 2016 08:11:54 AM UTC, comment #9: 

Dan, you're absolutely right that we shouldn't expand a series about 0 if we're dealing with things much greater than 2pi.

I also agree that reducing modulo 2*pi is much more problematic than reducing modulo 2.  There is also bug #45339 about sind etc., which should be able to reduce their arguments modulo 360 but don't.

However, we really only get benefits if the remainders are represented accurately.  Try


octave:1> x = double(10000001)/3
x =    3.3333e+06
octave:2> fprintf ("%.100g\n", mod (x, 2))
1.6666666665114462375640869140625


and you can see that the argument only has about 10 significant figures of accuracy.  The fact that sinc is accurate to 10 significant figures shows that it is working as expected, given the input.  This is independent of whether or not we take the mod explicitly -- sin automatically works on the modulus.

The patch I proposed works for things like  double(10000001)/4, but nothing can help fractions whose precision is actually lost.
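
A quick check of that case (a sketch; 10000001/4 is exactly representable):


x = double (10000001) / 4;          % = 2500000.25, exact in binary
sin (pi * mod (x, 2)) / (pi * x)    % reduced form: mod (x, 2) is exactly 0.25
sinc (x)                            % direct form loses several digits to pi*x rounding
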

I'm tempted to suggest we close this bug as "invalid".

Lachlan Andrew <lachlan>
Mon 27 Jun 2016 07:51:29 AM UTC, comment #8: 

wrt comment #5: pi*10000000 in double precision differs from the exact value by about 3e-8.  Even if sin were able to perform the modulo reduction exactly, evaluating sin(pi*10000000) would correspond to evaluating sin(3e-8).  The result we get, about 6e-10, is even better.  For comparison, both Matlab and gfortran give the same results.

Marco Caliari <caliari>
Group Member
Mon 27 Jun 2016 07:12:13 AM UTC, comment #7: 

I experimented with modulo arithmetic, but with 2*pi and the sin() function.  It didn't work so well.  Maybe this kind of approach works better when the base is an exact floating-point number, i.e., 2.  This result looks a little better:

>> x = double(10000001)/3

x =    3.3333e+06

>> sprintf("%0.100g", sin(pi*mod(x,2))/(pi*x))

ans = -8.26993260666192699379488145923489117450344565440900623798370361328125e-08

but it is still not nearly the same as the Maple 128-bit result or the gcc/GNU C 128-bit result.

In any case, even if this is a good solution for sinc() (we need to test), it doesn't fix anything for sin() (and probably other trig functions).  It is really the library where this needs to be improved.

It seems to me one needs to go back to the series computation (Taylor, Maclaurin) and use something other than expansion about 0, so that some convergence criterion can be used.  The plots in this paper:

http://pages.pacificcoast.net/~cazelais/187/maclaurin-sin.pdf

illustrate how more terms are required as x becomes large.  And no doubt any discrepancies in the numeric representation of large numbers adds to the inaccuracy.

Should we start digging through the library code?

Dan Sebald <sebald>
Mon 27 Jun 2016 12:37:02 AM UTC, comment #6: 

The attached patch should improve the accuracy for large arguments.  For example,


sinc (1e10)


gives 0.

The downside is that it is about 40% slower for scalars or 20% slower for large vectors.  I'm not sure how important that is.  It could be sped up slightly for large vectors of small arguments by only doing the reduction modulo 2 for large components.

I'd be inclined to have a separate sinc_robust to evaluate sinc accurately for large arguments, and keep the default sinc as something that is fast for typical arguments.
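
For reference, the core idea is just this minimal sketch (not the attached changeset itself): mod(x, 2) is exact for doubles, and sin(pi*mod(x,2)) equals sin(pi*x) mathematically, so the reduction costs no accuracy:


function y = sinc_reduced (x)
  y = sin (pi * mod (x, 2)) ./ (pi * x);    % reduce before multiplying by pi
  y(x == 0) = 1;                            % sinc(0) = 1 by definition
endfunction
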

(file #37591)

Lachlan Andrew <lachlan>
Sun 26 Jun 2016 05:52:00 PM UTC, comment #5: 

Experiment with this a little bit:


>> sprintf("%0.100g", 10000000)
ans = 10000000
>> x = [1 10 100 1000 10000 100000 1000000 10000000];
>> plot([0:7], abs(sin(pi*x)));


There's a definite loss of accuracy, and if I go to higher powers (e.g., [0:10]) the sin() function starts to fall apart.

Maybe it is a problem with the C math library Octave uses.

I've not seen the library code, but the sort of question to ask would be how the Taylor series is evaluated.  From what I recall about series expansion, it is done about some center point and is accurate within a certain range; for better accuracy one does an expansion about a different center...that sort of thing.  For trig functions it would make sense to bring the value, no matter how large it is, back within the canonical range of [0, 2*pi) or [-pi, pi).

Dan Sebald <sebald>
Sun 26 Jun 2016 05:01:45 PM UTC, comment #4: 

As for the comparison against Maple, that looks to be a double vs. long double numerical result, not any kind of programming flaw.  Here is a C program you may try, compiled with:


gcc evaluate.c -o evaluate -lm


The result I am seeing is:


sebald@ ~/octave/bug_48307 $ evaluate
DOUBLE
M_PI = 3.141592653589793115997963468544185161590576171875
10000001/3 = 3333333.6666666665114462375640869140625
sin(pi*x)/(pi*x) = -8.26993260956150473721458331322065049562297645024955272674560546875e-08
LONG DOUBLE
M_PIl = 3.14159265358979323851280895940618620443274267017841339111328125
10000001/3 = 3333333.666666666666742457891814410686492919921875
sin(pi*x)/(pi*x) = -8.2699326043331140428346591282514346722860854033143596097943373024463653564453125e-08


PI doesn't come out exactly as the C define has it, simply illustrating the finite precision of floating point.  But the double results match Octave's--so presumably Octave isn't doing something like computing pi in a strange way.  The long double result is getting closer to the Maple result.  Tell me what you get for these values.  I'm using a Xeon processor.  I don't know the details of Xeon 128-bit math, but it sounds like while 64-bit is a hardware implementation, 128-bit might be a software extension:

https://software.intel.com/en-us/articles/differences-in-floating-point-arithmetic-between-intel-xeon-processors-and-the-intel-xeon

Maybe your processor uses a software extension for 128-bit, and who knows whether the two methods are similar?

I've a feeling that the discrepancies you originally noted are an issue with symbolic representation and computation.  My guess is that something in Octave's symbolic representation isn't carried out as accurately as it could be.  E.g., maybe it does sin(pi*x) / (pi*x) as

sin((sym) pi * (sym) 10000001 / (sym) 3) / ((sym) pi * (sym) 10000001 / (sym) 3)

which comes out different than expected.  Speculation right now, but in any case, I'm inclined to think comparing against the Maple result isn't valid.

(file #37585)

Dan Sebald <sebald>
Sun 26 Jun 2016 07:05:18 AM UTC, comment #3: 

OK, let's continue along these lines, but let's step back a bit and just examine what is happening with floating point numbers in Octave.

To illustrate the finite precision of floating point numbers, here is 1/3 in Octave:


>> sprintf('%0.100g', double(1)/3)
ans = 0.333333333333333314829616256247390992939472198486328125


Yes, numerical accuracy seems to fall apart somewhere around the 16th significant digit.

Slightly odd is this result.  You indicated Maple's result is:

-0.82699326043336203093078665439080e-7

but when I display that value, I see:

>> Maple = -0.82699326043336203093078665439080e-7

Maple =   -8.2699e-08

>> sprintf('%0.100g', Maple)

ans = -8.2699326043336200337390017901795236099360408843494951725006103515625e-08

Why doesn't the Maple number turn out to be the same as Octave's?  One would think the number representation is the same for the two programs, as it is the same processor.  Maybe you should try this test on your system.  Also, compare the division and the sine on their own, i.e., compare

x = double(10000001) / 3;
sin(Pi*x)
1 / (Pi*x)

between Maple and Octave.  Maybe we can determine whether it is the division or the sinusoid that is problematic.

Dan Sebald <sebald>
Sun 26 Jun 2016 05:43:35 AM UTC, comment #2: 


>> sinc(x)
ans = (sym) 0
>> sinc(d)
ans =    4.5268e-17


>  You'd need to come up with a more concrete comparison to establish there is a numerical problem.


Without disagreeing with any of your points, I think it's very common practice for a special-function library to aim for 15 digits of *relative accuracy* (and to document when it cannot achieve that).  Here it cannot.  As x gets large, we lose relative accuracy, I think linearly in x.

I certainly agree it's good not to blindly assume symbolically computed results are correct.  Here is Maple (which doesn't seem to have a built-in sinc):


> x := 10000001/3;
                                x := 10000001/3

> Digits := 32;
                                 Digits := 32

> evalf ( sin(Pi*x) / (Pi*x) );
                                                         -7
                   -0.82699326043336203093078665439080 10

And pasting back into Octave:

>> Maple = -0.82699326043336203093078665439080e-7
Maple =   -8.2699e-08
>> Q = sinc(10000001/3)
Q =   -8.2699e-08
>> Q - Maple
ans =   -5.2279e-17
>> (Maple - Q) / Maple
ans =   -6.3216e-10

1.  These results match SymPy's (Symbolic pkg).
2.  Again, we see that the absolute error is fine (< eps), but that the relative error is approx "2e6*eps".

Colin Macdonald <cbm>
Fri 24 Jun 2016 09:44:05 PM UTC, comment #1: 

One would have to look more closely at the trigonometric functions to see if there is really an estimation problem here, or whether this is simply a matter of numerical inaccuracy.  A couple of things to keep in mind.  One is that the comparison is between a numeric computation and a symbolic computation.  It's difficult to say which is correct.  We do know that sinc(x), where x is a whole number, should be zero except for x = 0.  What type of values do you see for, say,

x = sym(10000001)
d = double(x)
sinc(x)
sinc(d)

?

A second thing to keep in mind is that a double is not a real number but a floating-point number, so there is always going to be inaccuracy that way (i.e., double(1/3) is not necessarily the same value as the rational number 1/3).

I suppose a third point is that in your first example the values converge to zero, and dividing by a small number, i.e., in (P1-Q)/P1, will tend to amplify inaccuracies, while in the second example the divisor is a large number, relatively speaking.

A lot of questions here.  You'd need to come up with a more concrete comparison to establish there is a numerical problem.

Dan Sebald <sebald>
Fri 24 Jun 2016 05:49:33 PM UTC, original submission:  

For large arguments, the relative error in "sinc" gets progressively worse.


>> x = sym(10000001)/3
x = (sym) 10000001/3
>> d = double(x)              % large non-integer
d =    3.3333e+06
>> P1 = double(sin(pi*x) / (pi*x))
P1 =   -8.2699e-08
>> P2 = double(sinc(x))
P2 =   -8.2699e-08

>> Q = sinc(d)             % this is the value we're checking
Q =   -8.2699e-08

>> (P1-Q)/P1               % relative errors
ans =   -6.3216e-10
>> (P2-Q)/P2
ans =   -6.3222e-10

I'd rather see e.g., 3*eps.

That's all on the real line, where the behaviour is decay.  I haven't looked very carefully in the complex plane; here's one example:

>> x = sym(10+20i); d = double(x)
d =  10 + 20i
>> Q = sinc(d)
Q =  1.2343e+25 + 6.1713e+24i
>> P2 = double(sinc(x))
P2 =  1.2343e+25 + 6.1713e+24i
>> abs((Q - P2)/P2)        % 12*eps, this is not too bad.
ans =    2.6456e-15


The implementation is from the definition: sin(pi*x)/(pi*x).  I don't know enough to say whether there is an asymptotic series, or some other tool from approximation theory, that would do better.

#GSoC2017: fix our special functions?

Colin Macdonald <cbm>

Attached Files
file #43917:  sin.cc added by sudhir27 (1KiB - text/x-c++src)
file #37708:  sinc.c added by sebald (2KiB - text/x-c)
file #37591:  bug_48307_sinc.cset added by lachlan (682B - text/x-diff)
file #37585:  evaluate.c added by sebald (586B - text/x-c)



Follow 7 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2018-04-13  sudhir27    Attached File   - => Added sin.cc, #43917
    2016-07-04  sebald      Attached File   - => Added sinc.c, #37708
    2016-07-03  lachlan     Open/Closed     Open => Closed
    2016-06-28  lachlan     Status          Patch Submitted => Invalid
    2016-06-27  lachlan     Attached File   - => Added bug_48307_sinc.cset, #37591
                            Status          None => Patch Submitted
    2016-06-26  sebald      Attached File   - => Added evaluate.c, #37585
