bug #61143: sum of single precision numbers hits precision limit due to naive summation

Submitter:  None
Submitted:  Sat 11 Sep 2021 02:49:20 PM UTC

Category:  Octave Function
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Incorrect Result
Status:  Confirmed
Assigned to:  None
Originator Name:
Originator Email:  -email is unavailable-
Open/Closed:  Open
Release:  dev
Operating System:  Any
Fixed Release:  None
Planned Release:  None


Wed 01 Mar 2023 05:45:36 PM UTC, comment #34: 

This issue came up again when discussing single precision results in mean over in bug #63848. That bug will likely just focus on making mean avoid the single precision issue by processing in double.

Noting from comment #25, however, that Matlab implemented more accurate block-based summation across all data types in 2020 after doing so for single in 2017 (not that we know their block count or routine), it has been suggested that we may want to implement @mleitner's proposed summation algorithm, or something similar, in Octave, as there are now certain summation/mean/etc. cases where Matlab produces accurate sums and Octave does not.

Retitling to focus on sum, leaving single in the title even though an improved sum algorithm would apply to all data types. Reverting to Confirmed, as there isn't currently any active progress on turning @mleitner's code into a patch.

Nicholas Jankowski <nrjank>
Group Member
Tue 05 Apr 2022 07:23:01 AM UTC, comment #33: 

Yes, this is far from a patch, as I am far too unfamiliar with the internals of Octave to touch something like the fundamental internal sum algorithm. It is just meant to show that you can essentially solve all problems with summation (yes, also for pathological inputs) without losing efficiency, so that if somebody comes around with a) the necessary knowledge of the Octave internals and b) the motivation to do this, then this implementation can serve as a guide. Of course, if any user should really have a problem where a robust and efficient summation algorithm is needed, this one can be compiled as an external oct-file and used directly.

But to go back to the very original problem (which still makes up the title, namely the precision of summing singles): as a quick solution I propose switching the default summation algorithm for singles to "double" -- this would be very little effort, is at least as fast, and is more accurate. Note that we never specified which algorithm we use, so we never guaranteed exactly which result would be returned; thus nobody can complain about better accuracy. I think this would save some hours of head-scratching on the user side.

By the way: mean has an optional argument outtype. According to the documentation, this determines the precision of the output, but does it also determine the precision of the summation? If yes, I would suggest changing the documentation; if not, I would suggest either doing the summation in the output precision, or better, always doing it at least in double. The original report seems to indicate that even when requesting double output from mean, the summation is still done in single.
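
For illustration, a minimal check along the lines of the original report (the figures are those described in the report, not re-measured here):


x = single (100*ones (100e+6, 1));
mean (x)            % saturates when accumulated in single
mean (x, "double")  % per the original report, still wrong -- outtype seemingly
                    % only affects the class of the result
mean (double (x))   % correct: 100
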

Michael Leitner <mleitner>
Tue 05 Apr 2022 02:34:36 AM UTC, comment #32: 

Michael's implementation was never worked up into a patch against the Octave codebase, so no, no patch has been accepted. I'm changing the status from Patch Submitted to In Progress, as it was never really submitted either.

Nicholas Jankowski <nrjank>
Group Member
Tue 05 Apr 2022 02:17:40 AM UTC, comment #31: 

Is this patch accepted?

Anonymous
Tue 21 Sep 2021 09:48:42 AM UTC, comment #30: 

I have attached an implementation of pair-wise summation. In order to allow simple testing, I did not write it as a patch against the internal sum, but as an external compiled oct-file. You just have to call


mkoctfile sum_pair3.cc -o sum_pair3.oct


to use it as sum_pair3(). Again for simplicity, it assumes the input is of class double, does no input validation, and sums over all elements (it has no notion of multi-dimensional arrays).

Some remarks: on the architecture where I implemented this (a few-years-old Intel notebook, running 32-bit Octave on 32-bit Debian), the internal sum seems to use 80-bit extended precision internally. You can test that for your own architecture by doing


delta=1e-5;sum([1 delta -1])-delta


for different delta. The returned discrepancy is indicative of the precision used -- on my computer it is always on the order of about 5e-20, where the 63-bit mantissa of 80-bit extended precision would lead to 2^-63=1.1e-19.

So the internal sum has a bit more than three decimal digits of headroom to guard against cancellation. For random input values, the accumulated error performs a Brownian motion and thus scales proportionally to eps*sqrt(N). This means that you have to go to about N=1e7 before you see it. But again, this holds only for non-systematic inputs; significant deficiencies can be exposed by special inputs, see below.

My implementation of pair-wise summation conceptually corresponds to padding the input to a power of two, and then having


function Sum = pairwise_sum (input)
  ## conceptual version; assumes the length of input is a power of two
  if (length (input) > 1)
    Sum = pairwise_sum (input(1:end/2)) + pairwise_sum (input(end/2+1:end));
  else
    Sum = input;
  endif
endfunction


Of course it is implemented more efficiently. It also emits the time taken for the summation, because calling oct-files seemingly has an overhead proportional to the length of the input -- I suspect this is because the input is copied. Thus, a plain tic/toc will give you longer timings than what is really due to the summing (and if this should eventually replace the internal sum, there will be no copying). In any case, contrary to expectation, the time spent on summing in my implementation is less than for the naive sum, about 60% of it. And just as for the internal sum, I used extended precision for the running totals, which has no effect on the timings compared to using doubles.

And as to accuracy: repeatedly summing the same number (with low-order bits set) is the most critical case for naive summing, see


N=10000000;b=pi*ones(N,1);sum(b)-pi*N


Characteristically, the error goes with the square of N in this case and easily reaches 1000 eps. On the other hand, sum_pair3() is exact.

And this happens not only for such a constructed example: for instance, sin() evaluated at equidistant points over a full period should sum exactly to zero.


N=1000000;x=(0.5:N)/N*2*pi;d=sin(x);[sum(d) sum_pair3(d)]


shows here an error of 1.9e-12. One might attribute this to inaccuracies in sin() itself, but the pair-wise sum indeed gives zero, so this is the error of the naive summation.

In conclusion: this is meant to show that pair-wise summation can practically be implemented in compiled code, is as efficient as (actually, in my implementation more efficient than) the naive sum as implemented now, and solves the accuracy issues. This is for summing double inputs; in my view, summing single-precision inputs (which this bug report was initially about) should be done by the identical algorithm (also in extended precision) -- there is no argument against doing that, as far as I can see.

(file #51950)

Michael Leitner <mleitner>
Sun 19 Sep 2021 08:13:29 PM UTC, comment #29: 

Your recursivesum is a variant of what is otherwise called pairwise summation (e.g. comments #10, #12 and #13) -- it divides into four parts instead of two per recursion, and it divides into at most 4^7 parts altogether.

It is a much better algorithm than what Matlab seemingly uses, but as summing a vector is a very basic operation, it should be done in compiled code. I will see whether I can put something together to see how it compares in terms of efficiency.

Michael Leitner <mleitner>
Sat 18 Sep 2021 10:20:43 PM UTC, comment #28: 

I'm not convinced that a change to sum() is necessary, but would like your thoughts on this approach. Is this all that it takes?!


function s = recursivesum (x, depth=0)
  if (length(x) <= 3 || depth >= 7)
    s = sum(x);
  else
    s1 = recursivesum (x(1:4:end), depth+1);
    s2 = recursivesum (x(2:4:end), depth+1);
    s3 = recursivesum (x(3:4:end), depth+1);
    s4 = recursivesum (x(4:4:end), depth+1);
    s = s1+s2+s3+s4;
  end
endfunction


Results:

octave:28> clear all; t = ones(1,100e6,"single"); s = recursivesum(t), class(s)
s =    1e+08
ans = single

octave:29> clear all; t = ones(1,100e6,"single"); s = sum(t), class(s)
s = 16777216
ans = single


This was based on the paper (http://eprints.maths.manchester.ac.uk/2704/1/paper.pdf) linked to from the Matlab link.

Anonymous
Sat 18 Sep 2021 08:33:11 PM UTC, comment #27: 

I think it isn't necessarily a power of two, but yes, that's what they say. It also doesn't look very scientific to me.

Michael Leitner <mleitner>
Sat 18 Sep 2021 11:37:44 AM UTC, comment #26: 

Do I understand correctly from comment #25 that the only thing Matlab does to ensure accuracy is to unroll the summation loop by a factor of 2^k for some small k?
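
If so, in Octave terms I read that as something like the following sketch (just my interpretation, with k = 2, i.e. four independent strided accumulators):


x = rand (1, 1000003);                       % example data; length need not divide evenly
m = 2^2;                                     % unroll factor 2^k, here k = 2
n = m * floor (numel (x) / m);
partial = sum (reshape (x(1:n), m, []), 2);  % m strided partial sums
total = sum (partial) + sum (x(n+1:end));    % combine, plus the leftover tail
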

Anonymous
Fri 17 Sep 2021 05:45:45 PM UTC, comment #25: 
Guillaume <gyom>
Tue 14 Sep 2021 08:27:32 AM UTC, comment #24: 

Yes, there is no "one size fits all", because there are problems so ill-conditioned that you really would have to do infinite-precision arithmetic (which is what the human mind does) to decide which code branches to use. However, the question is rather whether the one size we hand out by default should be the simplest conceivable (in the simile with garments, a plain sheet) or whether we should use an algorithm that fits all problems at least as well as the plain sheet, and some much better. And I would say that pairwise summation is such a solution.

So, what are the advantages of staying with the present algorithm? Just the absence of effort in implementing it?

And in order to have something concrete to discuss, I suggest the following accuracy aim: the error of summing a vector a of length N should be bounded by a constant times eps(sum(abs(a)))*sqrt(N).
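
As an illustration of how that aim could be checked (taking sum with "extra" as the reference, which is itself an assumption about its accuracy):


N = 1e7;
a = randn (N, 1);
ref = sum (a, "extra");                 % reference value
err = abs (sum (a) - ref);
bound = eps (sum (abs (a))) * sqrt (N);
err / bound                             % should stay O(1) if the aim is met
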

Michael Leitner <mleitner>
Mon 13 Sep 2021 09:26:32 AM UTC, comment #23: 

It is numerically unstable and results in completely wrong answers
(deviation from correct answer >> sqrt(N)*eps)

Dmitri A. Sergatskov <dasergatskov>
Mon 13 Sep 2021 09:07:12 AM UTC, comment #22: 

Regarding comment #21.  Please define what exactly is "broken" (for almost 30 years) about the current default Octave summation algorithm (recursive summation)?

Kai Torben Ohlhus <siko1056>
Group Member
Mon 13 Sep 2021 08:39:41 AM UTC, comment #21: 

I disagree with Kai. In Matlab's case the issue is precision;
in Octave's case the issue is a broken algorithm.



octave:2> (1e7 / 9999961 -1 )/ eps("single")
ans = 32.716                       # error in matlab's case
octave:3> x=single(0.1*ones(1e+8,1));
octave:4> sum(x)
ans = 2097152
octave:5> (1e7 / 2097152 -1 )/ eps("single")
ans = 3.1611e+07                   # error in Octave case


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 13 Sep 2021 06:30:10 AM UTC, comment #20: 

Interestingly, the topic of having a more precise summation pops up every few years (last time in 2019 for bug #56884). However, due to the rare occurrences in practice among users aware of numerical computations, it gets lost in the sand after a hot discussion 😇

Personally, I tend to agree with comment #18. Leave things as they are, improve and better document the algorithm used for the "extra" option, and close this either as a duplicate of bug #56884 or as "Won't fix".

https://octave.org/doc/v6.3.0/XREFsum.html

Exceptional (large / ill-conditioned input) cases require special compensated summation algorithms (of which a lot exist in the research literature). To my knowledge a "one size fits all" numerical algorithm is not available today 😇

Like jwe said in comment #7, mostly the bar for when things "break" is only pushed a little higher, at higher computational cost.

For example, Matlab R2021a seems to accumulate single precision in double precision (nobody knows what Matlab does 🤫), as the OP's example works.  However, just change to another pathological example


>> x=single(0.1*ones(1e+8,1));
>> sum(x)

ans =

  single

     9999961


and you can argue that you paid hundreds of dollars for a wrong result (without even getting a simple warning), or that you got what you can get from numerical computations. The interpretation depends on your point of view.  Is it a bug or is it a feature?

Regarding comment #19: if nothing is said about the summation algorithm, as in Octave (perhaps Matlab too), most likely the good old recursive (or naive) sum is used.  With this default, I know what I get and what to expect.  This gives speed from processor and compiler internals and IEEE-754 arithmetic, which has well-studied and known limitations that a user aware of numerical computations can deal with 🙂

However, a "heuristic black box sum" (perhaps Matlab's) that covers some exceptional, rare input cases "well enough" (see example above) still fails on other pathological (mostly constructed) examples (as above in Matlab), submitted by users who don't know or care about their input data.

A user who needs 100% summation precision must look at symbolic computation tools.  Numerical tools like Octave/Matlab offer speed, with known precision limitations.

Finally, some pointers to the relevant code in Octave:

https://hg.savannah.gnu.org/hgweb/octave/file/aedebbc6b765/libinterp/corefcn/data.cc#l2904

https://hg.savannah.gnu.org/hgweb/octave/annotate/aedebbc6b765/liboctave/operators/mx-inlines.cc#l1688

The "extra" summation algorithm for double, implemented in 2009 (https://hg.savannah.gnu.org/hgweb/octave/rev/192d94cff6c1), is

Ogita, Rump, Oishi: "Accurate Sum and Dot Product"
SIAM J. Sci. Comput., 26(6), 1955–1988, 2005.
https://doi.org/10.1137/030601818
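
For orientation, a minimal m-file sketch of the cascaded "Sum2" scheme from that paper (not the actual C++ implementation in mx-inlines.cc):


function res = sum2 (p)
  ## recursive summation where the rounding error of every addition is
  ## recovered exactly (TwoSum) and the accumulated errors are added back
  s = p(1);
  sigma = 0;
  for i = 2:numel (p)
    x = s + p(i);
    z = x - s;
    e = (s - (x - z)) + (p(i) - z);   # exact error of the addition s + p(i)
    s = x;
    sigma += e;
  endfor
  res = s + sigma;
endfunction
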

If you make changes to the default (recursive) summation algorithm: please study that algorithm well, perform lots of tests, and document for future programmers any assumptions (for optimization, correctness, etc.) you made.

Kai Torben Ohlhus <siko1056>
Group Member
Sun 12 Sep 2021 08:27:14 PM UTC, comment #19: 

Sorry, I only just realized that while I was writing what eventually became comment #13, a large number of other comments were made. Thus, I mentioned points that had already been introduced earlier in the discussion.

Re comment #7: Of course no reordering can solve


sum ([flintmax, 1])


However, this is because flintmax+1 cannot be represented. The initial report here was about summing up many values of equal magnitude. If you do this naively, then at some point you are doing flintmax+1, but there is no need to do it like that. Kahan summation effectively corresponds to summing in quadruple precision, and indeed, this too only works until it doesn't. For pairwise summation, it is different: with this algorithm, you can sum realmax (that is, more than 1e308) ones together and get the correct result, and the failure is then again due to the inability to represent the result.
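
(For reference, a minimal sketch of what Kahan summation means here, in m-code rather than Octave's internal C++:)


function s = kahan_sum (x)
  ## compensated summation: carry along the rounding error of each addition
  s = 0;
  c = 0;                    # compensation for lost low-order bits
  for k = 1:numel (x)
    y = x(k) - c;
    t = s + y;              # low-order bits of y may be lost here
    c = (t - s) - y;        # recover them for the next step
    s = t;
  endfor
endfunction
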

Re comment #11: Yes, we should be very concerned, but summing singles in double precision does not cost anything in efficiency, and summing doubles by pairwise summation should also cost very little.

And re comment #18: the arguments are not really true. First, as I argued above, it is not so much a question of representation as of the summation algorithm. And the algorithm is not specified, neither in Octave nor (I think) in Matlab, so the only reasons to use the naive algorithm would be less code and more efficiency (where the latter point is seemingly not valid). No, as was shown in comment #5, summing 100e6 singles does not need to overflow (in fact, overflow means something different). And no, it would not be a bug if the summation of singles were performed in double precision and therefore gave the arithmetically correct result, because, again, it is not specified which algorithm is used. And I am very, very certain that users on the whole would lose less time trying to understand the results of their code if those results were closer to the arithmetically exact values rather than farther away.

In conclusion, I would argue for using more accurate algorithms by default if they do not cost noticeably in efficiency (and they are also not so hard to implement), in particular for things like mean, where you cannot specify the algorithm (and likely also won't be able to in the future, as it already has an optional string argument, which makes another optional string argument very impractical).

Michael Leitner <mleitner>
Sun 12 Sep 2021 01:14:05 AM UTC, comment #18: 

As this report has become a discussion, may I respectfully offer this input, echoing comment #7. There is no substitute for knowing the hardware representation for floating point types when using numerical software like Octave. This isn't something that can be worked around. Summing up 100e6 singles can and will cause it to overflow as the OP saw, and that is exactly per spec. That sort of error is predictable, easy to understand, easy to avoid as one gains experience with floating point. If the software does automatic upcasting to double, it will cause more confusion for those who do expect errors, and those bugs will be insidious and more difficult to track down. I recommend that we keep the current behavior, and anyone getting unexpected results in single or any of the integer types can always manually invoke sum (foo, "extra") or sum (double (foo)).

Anonymous
Sat 11 Sep 2021 10:05:19 PM UTC, comment #17: 

Yes. I agree with all those points.


octave:1> x=(100*ones(100e+6,1));
octave:2> tic; sum(x); toc
Elapsed time is 0.0735009 seconds.
octave:3> tic; sum(x,"extra"); toc
Elapsed time is 0.116231 seconds.


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 09:59:49 PM UTC, comment #16: 

The previous comment was for single x, wasn't it? Because in that case "extra"=="double" according to the documentation, which is essentially as fast as native summation, and in that case I am with you. But for double x, "extra" seems to imply Kahan summation, which is noticeably slower. Such effects as reported can also happen for doubles (only for much larger arguments, of course), and here I would propose pairwise summation as the default.

Michael Leitner <mleitner>
Sat 11 Sep 2021 09:33:53 PM UTC, comment #15: 

More:


octave:10> tic; sum(x, "double"); toc
Elapsed time is 0.0703928 seconds.
octave:11> tic; sum(x, "double"); toc
Elapsed time is 0.070359 seconds.
octave:12> tic; sum(x,"extra"); toc
Elapsed time is 0.0702491 seconds.
octave:13> tic; sum(x); toc
Elapsed time is 0.0703919 seconds.


So it looks to me that the easiest fix is to make "extra"
the default algorithm for the calculation and set the output
type according to the explicit parameter.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 09:19:47 PM UTC, comment #14: 

Here are some benchmarks on a Ryzen computer (with AVX optimization):


octave:1> x=single(100*ones(100e+6,1));
octave:2> sum(x)
ans = 2.1475e+09
octave:3> sum(x,"extra")
ans = 1.0000e+10
octave:4> tic; sum(x); toc
Elapsed time is 0.068001 seconds.
octave:5> tic; sum(double(x)); toc
Elapsed time is 0.140945 seconds.
octave:6> tic; sum(x,"extra"); toc
Elapsed time is 0.0682781 seconds.
octave:7> tic; sum(double(x), "extra"); toc
Elapsed time is 0.182113 seconds.


The issue is that it is currently impossible to pass the "extra" parameter to functions that use sum() internally.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 08:58:06 PM UTC, comment #13: 

The issue is clear. However, reports like these raise the question of whether it wouldn't be better if single values were by default summed in double precision (and converted back at the end). This uses neither more memory nor more bandwidth, and according to my tests it is perhaps even a bit faster. It will obviously also be more correct in the general case, and as yet the summation algorithm is not specified in the documentation, so we would be free to change it.

Further, what about summation of double values: as summing with "extra" takes about twice as long as the default algorithm, I suspect that it uses Kahan summation. This factor of two is significant, so we would not want to use Kahan summation by default. However, pairwise summation should be only insignificantly slower than naive summation, consumes only O(log(N)) additional memory, and would in general also be much more accurate. Julia seems to do it like that: https://github.com/JuliaLang/julia/pull/4039

So the question is whether we should accept a very small increase in computation time (for pairwise summation of doubles, and perhaps no increase at all for double-precision summation of singles) and get much better accuracy by default, thus not tripping up users who perhaps do not yet know What Every Computer Scientist Should Know About Floating-Point Arithmetic. The option "native" would remain, and for doubles it could imply "naive".

Of course this would apply also for cumsum, prod, cumprod, mean, var...

Michael Leitner <mleitner>
Sat 11 Sep 2021 08:55:48 PM UTC, comment #12: 

From the wiki page

<<<
Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational costβ€”it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

>>>


and

<<<<
Pairwise summation is the default summation algorithm in NumPy[8] and the Julia technical-computing language,[9] where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case).

Other software implementations include the HPCsharp library[10] for the C Sharp language and the standard library summation[11] in D.

>>>>


Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 08:50:22 PM UTC, comment #11: 

Should we be concerned about efficiency?  Maybe it's best to keep special algorithms for these operations in separate functions?

John W. Eaton <jwe>
Group administrator
Sat 11 Sep 2021 08:41:00 PM UTC, comment #10: 

Some of the summation algorithms described here:

https://en.wikipedia.org/wiki/Kahan_summation_algorithm

https://en.wikipedia.org/wiki/Pairwise_summation looks interesting.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 08:27:30 PM UTC, comment #9: 

Should we make all internal sum() calls be sum(..., "double") in mean() and the like?
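
Something along these lines, as a hypothetical sketch of the idea (mean_sketch is not an existing function):


function m = mean_sketch (x)
  ## hypothetical: accumulate in double regardless of the input class,
  ## then cast the result back to the class of the input
  m = sum (x(:), "double") / numel (x);
  if (isa (x, "single"))
    m = single (m);
  endif
endfunction
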

I do not see in Matlab's docs whether they do internal calculations in
native or double precision.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 08:26:08 PM UTC, comment #8: 

Although I don't know that there is a fix for the sum function, maybe our computation of mean could be better?  I don't know.

John W. Eaton <jwe>
Group administrator
Sat 11 Sep 2021 08:21:20 PM UTC, comment #7: 

Can reordering ever help this case?


sum ([flintmax, 1])


Performing the summation using double can also help, but only until it doesn't.  I don't think there is a simple "fix" for this problem except to understand that numerical computations using a floating point representation are not exact.  You kind of have to know what you are doing.  I don't see a way around that.

John W. Eaton <jwe>
Group administrator
Sat 11 Sep 2021 08:15:20 PM UTC, comment #6: 

That does not really save anything wrt doing median(double(x))...

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 08:02:42 PM UTC, comment #5: 

OK. Thanks. So a more sophisticated summation algorithm could help. For example, the following modification of the OP's test case works fine:



x=single(100*ones(100e+6,1));
xx=reshape(x, 1e4, 1e4);
sum(sum(xx))
mean(mean(xx))


WWMD?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 06:34:06 PM UTC, comment #4: 

Dmitri: It's the point at which the following summation saturates:


s = single (0);
for i = 1:100e6
  t = s + 100;
  if (t == s)
    break;
  else
    s = t;
  end
end
i, s, t
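
The loop stops once adding 100 can no longer change s; for single precision that happens around s = 2^31, since


eps (single (2^31))
ans = 256


so 100 is then less than half a spacing and rounds away, which matches the wrong sums of about 2.1475e+09 reported above.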


John W. Eaton <jwe>
Group administrator
Sat 11 Sep 2021 05:04:40 PM UTC, comment #3: 

I do not understand why and how int32 comes into play here.
The maximum IEEE 754 single precision value is about 3.4e38.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 04:53:53 PM UTC, comment #2: 

You are asking that Octave detect and warn about things like


flintmax('single') + 1 == flintmax('single')


John W. Eaton <jwe>
Group administrator
Sat 11 Sep 2021 03:22:51 PM UTC, comment #1: 

Confirmed on Linux, version 6.3.1.
Also noticed that 2.1475e+09 == 2^31 - 1, which is the maximum signed int32.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 11 Sep 2021 02:49:20 PM UTC, original submission:  

I use single precision to avoid running out of memory, and the input data is a 2-byte integer wave file, so there is no need for double precision.
The mean function gives an incorrect answer, and I think this is because the sum function also gives a wrong answer.


>> x=single(100*ones(100e+6,1));  % 100 million samples
>> sum(x)                         % should be 1.0000e+10
ans = 2.1475e+09                  % wrong answer
>> x=single(100*ones(50e+6,1));   % 50 million samples
>> sum(x)
ans = 2.1475e+09                  % same wrong answer

% try with shorter vectors, OK first at 6e+5 samples
>> x=single(100*ones(6e+5,1));
>> sum(x)
ans = 6.0000e+07                  % correct answer

sum(x,'double') or sum(double(x)) solves the problem
mean(x,'double') does not solve this.
mean(double(x)) does solve this.


I think that the functions should return the correct answer, and if that is not possible, emit an error message.


Anonymous

 


Attached Files
file #51950:  sum_pair3.cc added by mleitner (1KiB - text/x-c++src)

 

Depends on the following items: bug #56884

Items that depend on this one: None found

 


Follow 9 latest changes.

Date        Changed by  Updated Field     Previous Value => Replaced by
2023-03-01  nrjank      Status            In Progress => Confirmed
                        Summary           "Functions sum and mean returns wrong answer for single precision input" => "sum of single precision numbers hits precision limit due to naive summation"
2022-04-05  nrjank      Status            Patch Submitted => In Progress
2021-09-22  siko1056    Status            Need Info => Patch Submitted
                        Release           6.2.0 => dev
                        Operating System  Microsoft Windows => Any
2021-09-21  mleitner    Attached File     - => Added sum_pair3.cc, #51950
2021-09-13  siko1056    Dependencies      - => Depends on bugs #56884
2021-09-13  siko1056    Status            None => Need Info
