GNU Octave - Bugs: bug #60539: Slow performance of betaincinv.m

Submitter:  Rik <rik5>
Submitted:  Thu 06 May 2021 04:42:50 PM UTC
   
 
Category:          Octave Function
Severity:          3 - Normal
Priority:          5 - Normal
Item Group:        Performance
Status:            Fixed
Assigned to:       None
Open/Closed:       Closed
Release:           dev
Operating System:  Any
Fixed Release:     None
Planned Release:   None


Fri 20 Aug 2021 12:34:46 AM UTC, comment #23: 

I benchmarked the new function and found it is ~3.2X faster than the original.  I updated Michael's code for various Octave coding conventions and checked it in here: http://hg.savannah.gnu.org/hgweb/octave/rev/26bb2cbf6da2.
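
For reference, the benchmark pattern is the one used in comment #2 below (absolute timings will of course vary by machine):


y = rand (1e6, 1);
tic; betaincinv (y, 1, 3); toc
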

Although I slightly changed the structure, I maintained the distinction between "step" and "dx" that was the subject of much discussion in this report.

Marking as Fixed and closing report.

Rik <rik5>
Group administrator
Sat 29 May 2021 09:18:16 AM UTC, comment #22: 

Re comment 20:

I repeat what I said in comment 12: there is a difference between step and df.  step is the update to x that should be taken in order to bring betainc(x) to the correct result; df is the update that has actually been applied, which differs from step because the x that should be chosen is in general not representable in finite-precision arithmetic.  We had identified a problematic behaviour, caused by betainc(x) being constant over a range of different x: in my first implementation this led to the algorithm updating x again and again by small increments on the order of eps (always of the same length) until betainc(x) finally made its jump, at which point the algorithm would have had to reverse direction, and so it stopped.  The solution was to stop already when the step length no longer decreases (it no longer has to be zero or to reverse).

However, if you condition on the behaviour of step, the following problem could appear: step (the desired update) could decrease slowly while the actual update stays the same, and then we would have the old problem again.  I now think this should not happen, because the initial point is chosen so that during convergence the derivative always decreases; thus, for a constant deficiency in function value (due to betainc's step-like behaviour), step should actually increase (the derivative is computed directly in terms of exponentials and logarithms, which should be accurate to an eps).  But it would rely on extremely small differences of finite-precision floating-point values, which I am personally quite uncomfortable with.  If you do it, please add a comment in the code that a more defensive (and in my view also more obvious) coding would be to test on df, the actual change in x, should problems with the stopping condition ever reappear.

Other than that, fine, please check it in.  It would also be nice to add a few tests along the lines of the previous bug #60528 with really extreme parameters (a/b close to 0 or on the order of 100, y close to 0 and 1, "upper"/"lower"), always testing whether betaincinv(betainc(x))==x holds up to an eps; a sketch of such a round-trip check follows below.
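
One possible shape for such a round-trip test (the parameter grid and the tolerance are illustrative choices, not values from this report; for truly extreme parameters the tolerance has to reflect how flat betainc becomes near the endpoints):


as = [0.5 2 4];
bs = [0.5 2 4];
x  = [0.05 0.25 0.5 0.75];
for a = as
  for b = bs
    xr = betaincinv (betainc (x, a, b), a, b);
    assert (xr, x, 1e-10);                               # lower tail
    xu = betaincinv (betainc (x, a, b, "upper"), a, b, "upper");
    assert (xu, x, 1e-10);                               # upper tail
  endfor
endfor
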

Michael Leitner <mleitner>
Sat 29 May 2021 08:28:09 AM UTC, comment #21: 

Re comment 19:

Do you imply that there is supposed to be a general semantic difference between singleton expansion and broadcasting, or do you just talk about the specific singleton expansion as implemented by common_size()? At least in the documentation, there is no distinction: according to Sect. 19.2, broadcasting is the one that happens per dimension, which is exactly what bsxfun does (and the "sx" obviously stands for singleton expansion). Expanding only arguments that have length one in all dimensions is a behaviour that is particular to common_size(), I think. And I want to repeat that "If the inputs cannot be brought to a common size" in the documentation of common_size() is not really correct.

I can accept that broadcasting is done only for specific functions (and I would propose to correspondingly add "certain" before "functions" in the second sentence of https://octave.org/doc/v6.2.0/Broadcasting.html) -- I see that in order to have broadcasting work in general for internal functions, one would have to introduce an additional layer that does just this expansion, which then is not faster than if the expansion was done by the user -- the nice thing about automatic broadcasting is that I can do (1:N)+(1:N)' and only one N^2-size matrix (namely for the result) has to be allocated, which is much more performant.

I would think Matlab conformity is no issue here, as this is only about reproducing all that Matlab can compute, while reasonable output where Matlab errors out is specifically allowed, isn't it?  But of course it can stay as it is.  The only problem is that bsxfun works only for binary functions: I can, e.g., compute bsxfun(@poisspdf,[1 2]',[5 5]) (which does not happen automatically), but I cannot do the equivalent for binopdf, which takes three arguments, and I see no way to do this without manually expanding the arguments (a sketch of such manual expansion follows below).  That is, a variadic bsxfun-generalization could be nice.
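
For reference, a minimal sketch of such manual expansion for a three-argument function, here using betainc since it is in core (the values are illustrative): adding a zero array of the common broadcast shape expands every argument explicitly.


x = 0.5;
a = [1 2];                      # 1x2
b = [3; 4];                     # 2x1
z = zeros (size (x + a + b));   # 2x2, the common broadcast shape
betainc (x + z, a + z, b + z)   # all three inputs are now 2x2
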

Michael Leitner <mleitner>
Fri 28 May 2021 10:50:14 PM UTC, comment #20: 

I agree that a stopping tolerance of eps can underperform and that the algorithm is capable of better results.

I prefer using the code I suggested


    step = -F (x(todo), a(todo), b(todo), y(todo)) ./ ...
       JF (x(todo), a(todo), b(todo), Bln(todo));
    x(todo) += step;
    ind = abs (step - old_step) > 0;
    todo = todo(ind);
    old_step = step(ind);


over that from newbeta2.m


    step = -F (x(todo), a(todo), b(todo), y(todo)) ./ ...
       JF (x(todo), a(todo), b(todo), Bln(todo));
    df_new = (x(todo) + step) - x(todo);
    x(todo) += step;
    ind = abs (df_new) < abs (df);
    todo = todo(ind);
    df = df_new(ind);


as it doesn't require calculation of df_new (2 indexing operations, 1 addition, 1 subtraction).

If you agree, I'll go ahead and finish this up as a changeset and check it in under your name since you developed the critical algorithm around choosing the initial starting point.

Rik <rik5>
Group administrator
Fri 28 May 2021 10:33:49 PM UTC, comment #19: 

Tackling the second question first.  The singleton expansion which is implemented by the common_size() function is different and distinct from "broadcasting".  It's helpful to look in the Octave manual at Section 19.2 Broadcasting which also documents that broadcasting is only implemented for a select number of operations.  The full list is


plus
minus
times
rdivide
ldivide
power
lt
le
eq
gt
ge
ne
and
or
atan2
hypot
max
min
mod
rem
xor


Matlab actually followed after Octave and Python in implementing this feature, and they too only do it for a select few functions.  The list is available in the documentation of bsxfun at https://www.mathworks.com/help/matlab/ref/bsxfun.html.

It's unlikely that Octave would choose to make all functions broadcast, as that would be both a big conceptual change and also a large divergence from Matlab compatibility.

For reference, the following code emits an error in Matlab because the input sizes don't match.


x=0.5;
a=[1 1];
b=[1;1];
betainc (x, a, b)


but


betainc (x, 1, b)


does work and returns a 2-element column vector because variable 'b' is a column vector.
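
For what it's worth, a sketch of a workaround for the mismatched case above (not something discussed in this report): wrap betainc in an anonymous function that captures the scalar x, and let bsxfun expand a and b per dimension.


x = 0.5;
a = [1 1];
b = [1; 1];
bsxfun (@(aa, bb) betainc (x, aa, bb), a, b)   # 2x2 result
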

Rik <rik5>
Group administrator
Fri 28 May 2021 03:56:10 PM UTC, comment #18: 

To my point about common_size and singleton expansion in comment 6 that you, Rik, answered in comment 7: I think you are not right. Consider the following:


x=0.5;
a=[1 1];
b=[1;1];


I can compute x+a+b without problems, and it gives me a 2x2-matrix (and that's what I understand as singleton expansion -- you can do element-wise arithmetic operations with operands that for a given dimension have either all the same size or some of them have size one, in which case they will be expanded in this dimension). However, betaincinv(x,a,b) refuses. Note that it is the same for betainc. And this is due to the use of common_size().

According to its documentation, common_size gives ERR=1 "if the inputs cannot be brought to a common size". However, in the simplest example common_size([1 1],[1;1]) I would say they can -- it seems common_size really expands only scalar arguments, like another sentence in its documentation implies. But this is different from singleton expansion as implemented in the interpreter, which acts on a per-dimension basis.

So what is that, a documentation bug of common_size or rather an implementation bug?  As it is used in betainc and betaincinv, it looks like an implementation bug, since a and b could be expanded without a problem.  I cannot judge whether there are places in the code where common_size is used and where it is really necessary that expansion happens only for scalar arguments.
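
An illustration of the distinction (the err value follows from common_size expanding only genuinely scalar inputs):


x = 0.5;  a = [1 1];  b = [1; 1];
x + a + b                           # 2x2: per-dimension broadcasting works
[err, xx, aa, bb] = common_size (x, a, b);
err                                 # 1: a and b do not have equal sizes
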

Michael Leitner <mleitner>
Fri 28 May 2021 09:55:12 AM UTC, comment #17: 

I reported the problem with betainc as bug #60682.

You did understand correctly what I had written, but I was in error.  The case with lost accuracy would have been for alpha=1/4, but in that case I see that the initial value is so good that it does not need to iterate at all (after the first step it has converged, no matter what the stopping criterion).  However, where your stopping criterion is unnecessarily inaccurate is for small y: with alpha=2 and beta=1, betainc is just x^2, so betaincinv is just sqrt(y).  My stopping criteria -- both the original one and the one I suggested in comment #15, attached here as newbeta2.m -- give the correct result 1e-20 for y=1e-40, while your implementation gives 2.22e-16.  Yes, on an absolute level this is correct to eps(1) (it practically is eps(1)), but on a relative level it is inaccurate by a factor of 10000.


y=1e-40;
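## For reference (sketch): with a=2 and b=1, betainc(x,2,1) is x.^2, so the
## exact inverse of y is sqrt(y).
xref = sqrt (y)                 # 1e-20 for y = 1e-40
betaincinv (y, 2, 1)            # value to compare against xref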


(file #51491)

Michael Leitner <mleitner>
Thu 27 May 2021 11:14:34 PM UTC, comment #16: 

For reference, since we are getting different results, I attach newbeta.m and biinv3.m.  The first file is your code with the only addition being the reporting of each iteration in the newton_method subfunction.  The second file contains my change to eliminate the df/df_new variables and to change the stopping criterion to eps.  Hopefully the issue then becomes reproducible on your machine with badx = 5.717842473940138e-07.

Your tests indicate a problem with betainc.  Can you file a new bug report about that function and that specific issue?

Since it is uncertain when the issue above will be fixed, I think we need to code defensively in betaincinv.m.  My suggestion of eps (or maybe eps/2) works.  I also implemented your suggestion of checking whether the delta in the variable "step" falls to zero.  I did this by storing the last round's step values in the variable old_step and then calculating


    ind = abs (step - old_step) > 0;


This is attached as biinv4.m.  It works, but it doesn't seem to result in any extra precision for the test case I mentioned.  I ran biinv3.m and then biinv4.m and got


iter: 0, x: 0.25
iter: 1, # todo: 1, x: 0.0759261969940135, max (df): 0.174073803005986
iter: 2, # todo: 1, x: 0.0347141003450014, max (df): 0.0412120966490121
iter: 3, # todo: 1, x: 0.0167224581112431, max (df): 0.0179916422337583
iter: 4, # todo: 1, x: 0.00821961591239634, max (df): 0.00850284219884681
iter: 5, # todo: 1, x: 0.0040791709326122, max (df): 0.00414044497978414
iter: 6, # todo: 1, x: 0.00203830950551201, max (df): 0.00204086142710019
iter: 7, # todo: 1, x: 0.00103118309152024, max (df): 0.00100712641399177
iter: 8, # todo: 1, x: 0.000542869669144377, max (df): 0.000488313422375863
iter: 9, # todo: 1, x: 0.000324036344981909, max (df): 0.000218833324162468
iter: 10, # todo: 1, x: 0.000250279883120058, max (df): 7.3756461861851e-05
iter: 11, # todo: 1, x: 0.00023942338681535, max (df): 1.08564963047085e-05
iter: 12, # todo: 1, x: 0.000239177433992639, max (df): 2.45952822710538e-07
iter: 13, # todo: 1, x: 0.000239177307623453, max (df): 1.26369186577854e-10
ans = 2.391773076234193e-04

iter: 0, x: 0.25
iter: 1, # todo: 1, x: 0.0759261969940135, max (df): 0.174073803005986
iter: 2, # todo: 1, x: 0.0347141003450014, max (df): 0.0412120966490121
iter: 3, # todo: 1, x: 0.0167224581112431, max (df): 0.0179916422337583
iter: 4, # todo: 1, x: 0.00821961591239634, max (df): 0.00850284219884681
iter: 5, # todo: 1, x: 0.0040791709326122, max (df): 0.00414044497978414
iter: 6, # todo: 1, x: 0.00203830950551201, max (df): 0.00204086142710019
iter: 7, # todo: 1, x: 0.00103118309152024, max (df): 0.00100712641399177
iter: 8, # todo: 1, x: 0.000542869669144377, max (df): 0.000488313422375863
iter: 9, # todo: 1, x: 0.000324036344981909, max (df): 0.000218833324162468
iter: 10, # todo: 1, x: 0.000250279883120058, max (df): 7.3756461861851e-05
iter: 11, # todo: 1, x: 0.00023942338681535, max (df): 1.08564963047085e-05
iter: 12, # todo: 1, x: 0.000239177433992639, max (df): 2.45952822710538e-07
iter: 13, # todo: 1, x: 0.000239177307623453, max (df): 1.26369186577854e-10
iter: 14, # todo: 1, x: 0.000239177307623419, max (df): 3.31805682026064e-17
iter: 15, # todo: 1, x: 0.000239177307623419, max (df): 2.21499120177644e-20
ans = 2.391773076234193e-04


The answers are identical, but the second stopping criterion took 2 more iterations to get there.

I'm not sure I correctly understand the test case you wanted, but I tried x=1e-4, A = 4, B = 4 with your original code (newbeta) and with mine (biinv3).


format long
newbeta (1e-4, 4, 4)
ans = 4.218405600470292e-02
biinv3 (1e-4, 4, 4)
ans = 4.218405600470292e-02


The answers are identical.

Finally, it was just a thought, but I agree it doesn't seem worth trying to implement any bisection step.

(file #51486, file #51487)

Rik <rik5>
Group administrator
Thu 27 May 2021 04:24:35 PM UTC, comment #15: 

The behaviour you show in comment 13 would be unexpected. However, I cannot reproduce it (using your biinv.m and the literal definition bady = 5.717842473940138e-07) -- it exits after iter 14, with the same x (same according to the displayed accuracy). However, if I go to the next-highest double number bady1=bady+6e-23 (and display x to more digits) I do see three additional steps where max (df) stays 5.4e-20 and x increases each time.

The point seems to be that betainc has a peculiar behaviour that I would actually classify as a bug: consider the following code


x=0.00023917730762341909-(0:10)'*.5e-19;
printf("%.17e\n",x)
printf("%.17e\n",betainc(x,2,4))


You will see that all these x are different (eps(0.000236) is 2.7e-20, less than the step in x). However, betainc outputs only two different values for y on these x (with a step of 20 eps between them), even though double precision would without problems have allowed a distinct value for each one.

So what happened was the following: you started with some y that for the initial 14 iterations gave good convergence (as expected), but at that point gave an x that was still slightly too large.  It computed (by the Newton algorithm, with an explicit expression for the derivative that should be accurate to the last eps) the necessary step size.  If betainc had worked correctly, it would have signalled that the new y is below the target value, the step direction would have been reversed, and the loop would have been aborted.  However, in my case betainc did not update the emitted y over three iterations, so the algorithm performed the step again and again (with the same length, as the supposed error was always the same), until betainc finally did give a lower value, at which point it aborted.  And in your case it was just particularly bad luck that you had some 14 different x for which betainc always gave the same y.

So I would say what we have to do here is correct the stopping criterion of betainc, so that it really gives the best answer (closest representable number to the actual true result). Then such behaviour could not happen.

If you say tol=eps(class(y)), you get a guaranteed absolute accuracy of 2.2e-16 (for doubles), irrespective of y.  That's not bad, and considering how the present betainc behaves, perhaps not more than can be expected, but the algorithm itself could in principle be much more accurate, specifically for small y and thus small x -- e.g., for y=1e-4 and alpha=4, your code would only give four digits of accuracy, it seems to me.  Note also that this would defeat the purpose of the option "upper", where the only sense I see in providing that possibility is accuracy for large y and beta>1.  But yes, your code (after wrapping step in abs() in the condition) would work.

Another stopping criterion that should give answers about as accurate as the present one, but that is efficient also for the deficient betainc, would be to break if the absolute value of step has not decreased any more.  Due to how I choose the starting points, the step length has to strictly decrease (in infinite-precision arithmetic), and if it does not, then it was either zero (in which case we can safely stop) or we have hit the above problem.  Could you try that?

Yes, it would be possible to decide whether the minimum is to the left or to the right; this should depend on which of alpha or beta is smaller or larger than one.  And yes, if your y is close to zero and your alpha is large, or if your y is close to one and your beta is large, you will have quite a number of steps where the Newton algorithm converges slowly, because the derivative changes so strongly.  In such a case even bisection would converge faster.  But the problem is that in general you do not know when to switch from bisection to Newton (which converges much faster around the eventual solution).  So it is not worth the effort, as one bisection step takes as long as a Newton step (I am certain it is the computation of betainc that carries the highest computational cost, not the derivative).  And further, we would lose the certainty that we are on the correct side of the eventual solution, and we would again have to check whether a step takes us out of the interval (0,1) -- note that this was the initial bug here.

Michael Leitner <mleitner>
Thu 27 May 2021 02:59:58 PM UTC, comment #14: 

Is there a way to be even smarter about the initial guess?  I changed the reporting code to show the initial guess X0 on entrance to the subfunction newton_method.  I labeled this as iter #0.  I then tried the difficult value 1 - 1e-6.  Results are below.


xtst = 1 - 1e-6
xtst = 0.999999000000000
biinv3 (xtst, 2, 4)
iter: 0, x: 0.25
iter: 1, # todo: 1, x: 0.549999525925926, max (df): 0.299999525925926
iter: 2, # todo: 1, x: 0.680907792380916, max (df): 0.13090826645499
iter: 3, # todo: 1, x: 0.768155355469287, max (df): 0.0872475630883709
iter: 4, # todo: 1, x: 0.829610060942496, max (df): 0.0614547054732095
iter: 5, # todo: 1, x: 0.873945144205887, max (df): 0.0443350832633909
iter: 6, # todo: 1, x: 0.906339380982877, max (df): 0.0323942367769896
iter: 7, # todo: 1, x: 0.93017133347546, max (df): 0.0238319524925828
iter: 8, # todo: 1, x: 0.94773273239768, max (df): 0.0175613989222206
iter: 9, # todo: 1, x: 0.960574192526131, max (df): 0.0128414601284503
iter: 10, # todo: 1, x: 0.969662183290227, max (df): 0.00908799076409684
iter: 11, # todo: 1, x: 0.975447395458917, max (df): 0.00578521216868909
iter: 12, # todo: 1, x: 0.97815327971624, max (df): 0.00270588425732304
iter: 13, # todo: 1, x: 0.978737006399498, max (df): 0.000583726683258333
iter: 14, # todo: 1, x: 0.97876173884375, max (df): 2.47324442522867e-05
iter: 15, # todo: 1, x: 0.978761781799343, max (df): 4.29555923286543e-08
ans = 0.978761781799343


The initial guess is set to the point of inflection.  I plotted the function F(2,4) to see the shape (inflection is at 0.25 as promised) and the minimum is off to the right.  Is it possible to know, in advance, whether the minimum is left or right of the inflection point and thus perform an initial bisection step?

In this case, the algorithm starts at 0.25 and has to climb all the way to 0.978, which requires 15 iterations.  Just to test my idea, I set a breakpoint in the newton_method subfunction and set the initial guess to the bisection point between 0.25 and 1, i.e., 0.625.  When I then use 'dbcont' to continue execution the convergence is faster, but by only 1 iteration, so perhaps it is not worth it given the extra complexity that would be required.


iter: 0, x: 0.625
iter: 1, # todo: 1, x: 0.729998482962963, max (df): 0.104998482962963
iter: 2, # todo: 1, x: 0.802488599632329, max (df): 0.0724901166693664
iter: 3, # todo: 1, x: 0.854288974392737, max (df): 0.0518003747604075
iter: 4, # todo: 1, x: 0.891940465698917, max (df): 0.0376514913061801
iter: 5, # todo: 1, x: 0.919565498856449, max (df): 0.0276250331575323
iter: 6, # todo: 1, x: 0.939921418392225, max (df): 0.0203559195357758
iter: 7, # todo: 1, x: 0.954887758751933, max (df): 0.0149663403597078
iter: 8, # todo: 1, x: 0.965702041241923, max (df): 0.0108142824899903
iter: 9, # todo: 1, x: 0.973054155601817, max (df): 0.00735211435989399
iter: 10, # todo: 1, x: 0.977201545731428, max (df): 0.00414739012961038
iter: 11, # todo: 1, x: 0.978609883222818, max (df): 0.00140833749138997
iter: 12, # todo: 1, x: 0.978760183041749, max (df): 0.000150299818931775
iter: 13, # todo: 1, x: 0.978761781620095, max (df): 1.59857834568065e-06
iter: 14, # todo: 1, x: 0.978761781799482, max (df): 1.79387267097812e-10
ans = 0.978761781799482




Rik <rik5>
Group administrator
Thu 27 May 2021 02:43:19 PM UTC, comment #13: 

Agree with most of this.

Specifically, Octave is for numerical computation, not pure math.  If F(x) and F(x+delta) yield the same answer there is no point distinguishing between them.  This concept still trips up a lot of users who expect either infinite precision (pure math) or exact Matlab-equivalency.

Second, I agree that the algorithm should be agnostic about singles/doubles and perform all operations with the desired precision.  This is how other Octave functions that support mixed inputs behave.

I do think the df_new calculation can be simplified or eliminated.  The calculation


df_new = (x + step) - x;


will be 0 if step is less than eps (x), but otherwise will be equal to step.  In this case, x is restricted to the range (0, 1) so using a tolerance of eps (class (x)) as a check on step is a sufficient stopping condition.

The existing code is


    step = -F (x(todo), a(todo), b(todo), y(todo)) ./ ...
       JF (x(todo), a(todo), b(todo), Bln(todo));
    df_new = (x(todo) + step) - x(todo);
    x(todo) += step;
    ind = df_new .* df > 0;
    todo = todo(ind);
    df = df_new(ind);


and I propose


    step = -F (x(todo), a(todo), b(todo), y(todo)) ./ ...
       JF (x(todo), a(todo), b(todo), Bln(todo));
    x(todo) += step;
    ind = (step > tol);
    todo = todo(ind);


which gets rid of both the df and df_new variables.  It also seems to have a performance benefit for certain values.  Here is a run of the current code against a particularly hard value.


octave:60> badx
badx = 5.717842473940138e-07
octave:61> newbeta (badx, 2, 4)
iter: 1, # todo: 1, x: 0.0759261969940135, max (df): 0.174073803005986
iter: 2, # todo: 1, x: 0.0347141003450014, max (df): 0.0412120966490121
iter: 3, # todo: 1, x: 0.0167224581112431, max (df): 0.0179916422337583
iter: 4, # todo: 1, x: 0.00821961591239634, max (df): 0.00850284219884681
iter: 5, # todo: 1, x: 0.0040791709326122, max (df): 0.00414044497978414
iter: 6, # todo: 1, x: 0.00203830950551201, max (df): 0.00204086142710019
iter: 7, # todo: 1, x: 0.00103118309152024, max (df): 0.00100712641399177
iter: 8, # todo: 1, x: 0.000542869669144377, max (df): 0.000488313422375863
iter: 9, # todo: 1, x: 0.000324036344981909, max (df): 0.000218833324162468
iter: 10, # todo: 1, x: 0.000250279883120058, max (df): 7.3756461861851e-05
iter: 11, # todo: 1, x: 0.00023942338681535, max (df): 1.08564963047085e-05
iter: 12, # todo: 1, x: 0.000239177433992639, max (df): 2.45952822710551e-07
iter: 13, # todo: 1, x: 0.000239177307623453, max (df): 1.26369186572188e-10
iter: 14, # todo: 1, x: 0.000239177307623419, max (df): 3.31765864780564e-17
iter: 15, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 16, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 17, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 18, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 19, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 20, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 21, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 22, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 23, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 24, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 25, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 26, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 27, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
iter: 28, # todo: 1, x: 0.000239177307623419, max (df): 2.71050543121376e-20
ans = 2.391773076234194e-04


The algorithm spends iterations 14 - 28 changing the guess by an amount less than eps ("double").  When I use the criterion I suggested, the algorithm stops after the 13th iteration.


octave:62> biinv3 (badx, 2, 4)
iter: 1, # todo: 1, x: 0.0759261969940135, max (df): 0.174073803005986
iter: 2, # todo: 1, x: 0.0347141003450014, max (df): 0.0412120966490121
iter: 3, # todo: 1, x: 0.0167224581112431, max (df): 0.0179916422337583
iter: 4, # todo: 1, x: 0.00821961591239634, max (df): 0.00850284219884681
iter: 5, # todo: 1, x: 0.0040791709326122, max (df): 0.00414044497978414
iter: 6, # todo: 1, x: 0.00203830950551201, max (df): 0.00204086142710019
iter: 7, # todo: 1, x: 0.00103118309152024, max (df): 0.00100712641399177
iter: 8, # todo: 1, x: 0.000542869669144377, max (df): 0.000488313422375863
iter: 9, # todo: 1, x: 0.000324036344981909, max (df): 0.000218833324162468
iter: 10, # todo: 1, x: 0.000250279883120058, max (df): 7.3756461861851e-05
iter: 11, # todo: 1, x: 0.00023942338681535, max (df): 1.08564963047085e-05
iter: 12, # todo: 1, x: 0.000239177433992639, max (df): 2.45952822710538e-07
iter: 13, # todo: 1, x: 0.000239177307623453, max (df): 1.26369186577854e-10
ans = 2.391773076234193e-04


Rik <rik5>
Group administrator
Wed 26 May 2021 11:11:33 AM UTC, comment #12: 

betaincinv.m, both in the new and old versions, just inverts betainc.m; that is, it finds an x for a given y so that the given implementation of betainc in Octave returns y for that x.  Thus, for judging whether betaincinv is working well, you should not compare against the previous implementation, Matlab, or Wolfram Alpha, but test what betainc returns on betaincinv's results.

What I see here is that if I say


x=betaincinv (single (1-1e-6), 2, 4)


I get x=0.97864, and this is a single, as required. If I then say


betainc(x,2,4)==single(1-1e-6)


I get logical true, meaning that the betaincinv really returns the best result it could give, a single value for which betainc returns exactly the y (in single precision) that was initially provided -- it is as accurate as in principle can be expected (at least for this specific input). Note that this is not hard in this case, as with the high beta of four betainc is very flat around 1, and you have to go quite far away from 0.97864 so that the result of betainc changes -- if you take Matlab's value


betainc(single(0.9786913),2,4)==single(1-1e-6)


you get true as well.  If beta were smaller than 1, it would be the other way round; then it would be possible that there is just no x for which betainc returns the given y (of course, this pertains both to single and double precision).  In any case, that's the test you would have to do: take any initial y, test whether the resulting x fulfills betainc(x)==y, and if not, check whether adding or subtracting eps(x) to x gives a closer y (a sketch of this check follows below).  If my reasoning in the implementation was correct, that can only happen if betainc is very inaccurate (specifically, non-monotonic).
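
A sketch of that check (hypothetical helper name; scalar inputs with the resulting x strictly inside (0,1) are assumed):


function ok = check_betaincinv (y, a, b)
  x  = betaincinv (y, a, b);
  e0 = abs (betainc (x, a, b) - y);
  em = abs (betainc (x - eps (x), a, b) - y);   # one spacing below x
  ep = abs (betainc (x + eps (x), a, b) - y);   # one spacing above x
  ok = (e0 <= em) && (e0 <= ep);                # no neighbour does better
endfunction

## e.g. check_betaincinv (single (1 - 1e-6), 2, 4)
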

I don't know whether the exact agreement between Matlab and the old algorithm was just a fluke, whether both were modelled after a common ancestor, or whether (God forbid!) one actually copied from the other with their incompatible licences (we are forbidden from looking into Matlab's source code to decide that).

Of course, you can get more accurate single results if you do the computation in double and at the end convert to single. I actually intended to put this up for discussion when I posted the new algorithm, but: that is something the user can always do, just give double instead of single arguments to betaincinv. On the other hand, there is a reason for computing in single -- it uses half of the (temporary) memory. That's why I thought it was intended as it was (namely to compute in single if any input argument was single). In particular, also the old algorithm did the computation in single precision in this case. You could force everything to double and convert at the end, but as I said, the users can do that themselves, and they lose the option to do the computation in memory-tight situations.

As to df_new: yes, in pen-and-paper arithmetic df_new is equal to step.  In finite-precision arithmetic it is not necessarily equal (in fact, it is quite unlikely to be).  The point is that the break condition is that a given point does not get updated in a given step (then it won't be updated in the next and all following steps, so you have to break there), or that the step direction reverses (which can only happen due to numerical inaccuracies of betainc).  For the former case, it can be that the computed step is smaller than eps(x), in which case x stays the same and df_new is zero, but step is not.  That's why I do it like that.  Indeed, there is perhaps a possibility for more efficiency here, in that x+step is computed twice.  One could say the following


df_new = -x(todo);     # remember the old x (negated)
x(todo) += step;       # apply the update
df_new += x(todo);     # actual change in x, after rounding


which may or may not be faster (depending on cache misses and so on).

Michael Leitner <mleitner>
Wed 26 May 2021 03:24:37 AM UTC, comment #11: 

Doubles are much less of a problem, although the new algorithm produces slightly different results from either Matlab or the previous algorithm.

Also, the algorithm definitely still needs to be accurate for single values since they are an accepted input.  The old algorithm produces 0.9786913 for the example code which is the same as Matlab.  It feels like a regression when the old code was able to handle this input.

Rik <rik5>
Group administrator
Wed 26 May 2021 01:16:21 AM UTC, comment #10: 

If you do it in double, the results agree with Wolfram Alpha quite well:

https://www.wolframalpha.com/input/?i=Beta+inverse+cumulative+distribution+function+a%3D2%2C+b%3D4+at+x%3D0.99999900000

0.9787617818

In Octave I got ans = 0.978761781799343

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 25 May 2021 11:44:42 PM UTC, comment #9: 

This seems much faster, but I have some concerns about accuracy.

First, I modified the newton_method subfunction to report on its progress.  The modified code is attached as biinv.m (BetaIncINV.m).


  iter = 1;
  while (length (todo) > 0)
    printf ("iter: %d, # todo: %d, x: %.15g, max (df): %.15g\n", iter++, numel (todo), x, max (abs (df(:))));


I then ran some difficult values through it and checked the results against Matlab.  For example,


biinv (single (1-1e-6), 2, 4)
iter: 1, # todo: 1, x: 0.549999475479126, max (df): 0.299999475479126
iter: 2, # todo: 1, x: 0.680907785892487, max (df): 0.130908310413361
iter: 3, # todo: 1, x: 0.76815539598465, max (df): 0.0872476100921631
iter: 4, # todo: 1, x: 0.829610109329224, max (df): 0.061454713344574
iter: 5, # todo: 1, x: 0.873944878578186, max (df): 0.0443347692489624
iter: 6, # todo: 1, x: 0.906338155269623, max (df): 0.0323932766914368
iter: 7, # todo: 1, x: 0.930169761180878, max (df): 0.0238316059112549
iter: 8, # todo: 1, x: 0.947727501392365, max (df): 0.0175577402114868
iter: 9, # todo: 1, x: 0.960563063621521, max (df): 0.0128355622291565
iter: 10, # todo: 1, x: 0.969668209552765, max (df): 0.0091051459312439
iter: 11, # todo: 1, x: 0.975395321846008, max (df): 0.00572711229324341
iter: 12, # todo: 1, x: 0.97806191444397, max (df): 0.00266659259796143
iter: 13, # todo: 1, x: 0.978639125823975, max (df): 0.000577211380004883
ans = 0.9786391


The result in Matlab is 0.9786913.  The last 3 digits are different.

Also, could this code be shortened?


    step = -F (x(todo), a(todo), b(todo), y(todo)) ./ ...
       JF (x(todo), a(todo), b(todo), Bln(todo));
    df_new = (x(todo) + step) - x(todo);
    x(todo) += step;


It seems like df_new is just equal to the existing variable step.


df_new = (x + step) - x
       =  x + step - x
       =  x - x + step
       = step




(file #51480)

Rik <rik5>
Group administrator
Fri 14 May 2021 03:40:10 PM UTC, comment #8: 

I corrected the documentation issue on the stable branch in this cset: http://hg.savannah.gnu.org/hgweb/octave/rev/2a1f57067fbf.

Rik <rik5>
Group administrator
Thu 13 May 2021 09:15:37 PM UTC, comment #7: 

Thanks, I will take a look.

The common_size () function does singleton (scalar) expansion.  Either the inputs y,a,b actually have to have the same size OR some of them can be scalars and they will be expanded to match whichever input is actually a vector or matrix.

The non-TeX documentation is missing the normalization by "beta (a, b)".  That could be fixed separately on the stable branch since it is just a documentation change.

Rik <rik5>
Group administrator
Thu 13 May 2021 08:13:26 PM UTC, comment #6: 

I attach a modified version of betaincinv.m, where more (programmer's) effort is put into generating a good initial point for the Newton iterations, which should then converge quickly and monotonically, without any need for bisection, and I modified the Newton subroutine itself so that it stops whenever the step length is zero or reverses (which with the correct initial points can only happen due to finite-precision arithmetic).

In my tests it is faster by about a factor of four.

There are some things left to do: first, in the ifnottex block of the function's definition the division by beta(a,b) is missing (this pertains also to the documentation of betainc.m). Further, I do not see the reason why y, a and b are tested for common size -- why aren't the rules of standard singleton expansion followed here?

(file #51431)

Michael Leitner <mleitner>
Mon 10 May 2021 10:51:37 PM UTC, comment #5: 

After some testing I find that Newton's method doesn't converge well for some values of X, A, and B.  I suppose this is why the two-step process was used, where the initial 10 bisection steps placed the trial position x0 in a region where Newton's method converges quadratically.

My first thought is to go back to the two-step approach, but rather than a fixed 10 iterations of bisection, use some other threshold such as closeness of the objective function to 0.  Once the estimate is reasonably close, switch over to Newton's method, which in my testing did converge quickly when in range of the solution.

Alternatively, a more complex solver might be written along the lines of fzero(), which decides on a step-by-step basis whether to pursue a Newton step or a bisection step (a minimal sketch of that idea is below).
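
A minimal sketch of such a hybrid (scalar inputs; assumes f is increasing on [lo, hi] with f(lo) < 0 < f(hi), and f/fp are handles for the objective and its derivative -- this is not the solution that was eventually adopted):


function x = safeguarded_newton (f, fp, lo, hi, maxit = 40)
  x = (lo + hi) / 2;
  for it = 1:maxit
    fx = f (x);
    if (fx > 0)
      hi = x;                          # shrink the bracket around the root
    else
      lo = x;
    endif
    xn = x - fx / fp (x);              # Newton proposal
    if (! isfinite (xn) || xn <= lo || xn >= hi)
      xn = (lo + hi) / 2;              # proposal left the bracket: bisect
    endif
    if (abs (xn - x) <= eps (abs (x)))
      x = xn;                          # no further representable progress
      break;
    endif
    x = xn;
  endfor
endfunction

## example: invert betainc(.,2,4) at y = 1e-6 (the Beta(2,4) density is 20*t.*(1-t).^3):
## safeguarded_newton (@(t) betainc (t, 2, 4) - 1e-6, @(t) 20*t.*(1-t).^3, 0, 1)
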

Rik <rik5>
Group administrator
Fri 07 May 2021 05:20:20 PM UTC, comment #4: 

Initially your fix for x<0 and x>1 was not there.  I think the purpose of the bisection was to give a quite good starting point for the Newton iteration, so that in most cases it converged and did not go beyond the possible range.  Now, with the fix, I agree that it should not be necessary any more.

On another note: yes, a small F' can be a problem.  Specifically, if alpha and beta are both larger than 1, then F' is zero at both 0 and 1.  If during the Newton iteration x should at any point become smaller than zero, your fix would set it to eps, where the derivative is practically zero, so it would perform a practically infinite step in the positive direction; your other fix would then set it to 1-eps, where the derivative is again practically zero, and you go back into the negative range.  In this case the Newton iteration would never converge, which we have to prevent.  That's why I suggested starting at the inflection point: you are then guaranteed to converge monotonically outwards towards the solution (and wouldn't even need the fix).

If either alpha or beta (or both) are smaller than 1, then I would say yes, start at 0.5 with Newton's method, with a meaningful stopping criterion. I see no reason why bisection would be needed at all.

Michael Leitner <mleitner>
Fri 07 May 2021 04:50:57 PM UTC, comment #3: 

Of course, I haven't looked at the value of the derivative of the objective function for various input values.  It may be that for some values F' is very small, even 0, and so bisection would converge faster than Newton's Method.

It still seems like we might want to use a stopping criterion for the bisection rather than always executing 10 rounds.

Rik <rik5>
Group administrator
Fri 07 May 2021 04:25:44 PM UTC, comment #2: 

I instrumented the code to see where some of the hot spots were.  I then ran with a random 1 million point sample.


y = betaincinv (rand (1e6,1), 1, 3);


In the newton_method() function I put separate timings around the range-checking code and around the actual objective function calls


    tic;
    x(x(todo) < 0) = eps;
    x(x(todo) > 1) = 1-eps;
    bm1(it) = toc;
    tic;
    res(todo) = -F(x(todo), a(todo), b(todo), y(todo)) ...
                ./ JF (x(todo), a(todo), b(todo));
    bm2(it) = toc;


The bisection() and newton_method() functions are called twice for the lower tail and upper tail.  The results were


bm_bisect: 1.36663
bm1: 0.0612214
bm2: 1.36306
bm_bisect: 1.50331
bm1: 0.0524464
bm2: 1.32451


Lots to optimize, or not, here.  First, performance of the code to do range checking is immaterial (25X smaller) compared to objective function evaluations.

Second, the code performs 10 rounds of bisection followed by 20 rounds of Newton's method.  The bisection starts with initial values of 0 and 1 so that the center of the first round will be 0.5.  Given that Newton's method will converge faster than bisection, I think we could just skip this step entirely and start with an initial guess of 0.5 for the solution.  Handily, the bisection function accepts a parameter for the number of iterations.  I set that to 1 and then re-ran the function.  The calculated output differed slightly from the original run, but the maximum deviation was 2 eps, which I think is acceptable.


Rik <rik5>
Group administrator
Thu 06 May 2021 05:29:12 PM UTC, comment #1: 

If both alpha and beta are greater than or equal to one, then betainc(x,alpha,beta) has a point of inflection at x=(alpha-1)/(alpha+beta-2), where it changes from positive to negative curvature.  Thus, in these cases you can skip the initial ten bisection steps and start directly at the inflection point (if alpha==beta==1 you can take x=0.5), and you are guaranteed that the points visited by the Newton method will converge monotonically and never stray beyond 0 or 1 (so the fixes in bug #60528 wouldn't even be necessary -- they also cost performance).  You could special-case that.

If one or the other is smaller than one, the derivative diverges at the corresponding end, but the curvature is either everywhere positive or everywhere negative.  There should be some way to get safely onto the outer side of the final value and again go monotonically inwards by pure Newton steps, but I do not see it at the moment.  If both are smaller than one, you would first have to figure out on which side the final value will be (this is easy: compute the inflection point as above, compute betainc at this point, and see whether it is higher or lower than the target y), and then continue as above.
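
A sketch of these rules for scalar parameters with alpha, beta >= 1 (the comparison at the end is the side test just mentioned; the example values are taken from comment #14 above):


a = 2;  b = 4;  y = 1 - 1e-6;
if (a == 1 && b == 1)
  x0 = 0.5;                    # betainc is linear in x in this case
else
  x0 = (a - 1) / (a + b - 2);  # point of inflection of betainc(x,a,b), here 0.25
endif
betainc (x0, a, b) < y         # true: the solution lies to the right of x0
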

Of course, yes, the stopping criterion for the Newton steps should also be fixed.

Michael Leitner <mleitner>
Thu 06 May 2021 04:42:50 PM UTC, original submission:  

betaincinv.m uses a subfunction newton_method() to perform a Newton's method search.  The search stops either when a relative tolerance test is met or when the number of iterations hits 20.

The stopping criteria should be modified to halt early if no further progress is being made on the optimization.

Sample Code


betaincinv (1e-6, 1, 3)


If you add debugging printf statements to the newton_method subfunction (see the sketch below), you will find that this gets close to the final value in just 2 iterations, but then makes no further progress for 18 iterations.  It would be good to detect that and quit.
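
The kind of instrumentation meant here (assuming the loop variables of the existing newton_method subfunction as quoted in comment #2 above: it, todo, x, res) is a single line such as


printf ("iter %2d: #todo = %d, max|res| = %.3g\n",
        it, numel (todo), max (abs (res(todo))));
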

Rik <rik5>
Group administrator

 


Attached Files
file #51491:  newbeta2.m added by mleitner (9KiB - text/x-matlab)
file #51486:  newbeta.m added by rik5 (9KiB - text/x-matlab)
file #51487:  biinv3.m added by rik5 (9KiB - text/x-matlab)
file #51484:  F(2,4).png added by rik5 (26KiB - image/png)
file #51480:  biinv.m added by rik5 (9KiB - text/x-matlab)
file #51431:  betaincinv.m added by mleitner (9KiB - text/x-matlab)

 


 


    Follow 9 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2021-08-20  rik5        Open/Closed     Open => Closed
    2021-08-20  rik5        Status          Patch Submitted => Fixed
    2021-05-28  mleitner    Attached File   Added newbeta2.m, #51491
    2021-05-27  rik5        Attached File   Added newbeta.m, #51486
                            Attached File   Added biinv3.m, #51487
    2021-05-27  rik5        Attached File   Added F(2,4).png, #51484
    2021-05-25  rik5        Attached File   Added biinv.m, #51480
    2021-05-14  rik5        Status          Confirmed => Patch Submitted
    2021-05-13  mleitner    Attached File   Added betaincinv.m, #51431
