bug #54572: int64 does not saturate correctly in negative direction

Submitter:  Dan Sebald <sebald>
Submitted:  Sun 26 Aug 2018 06:27:09 AM UTC

Category:  Libraries
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Incorrect Result
Status:  Fixed
Assigned to:  None
Open/Closed:  Closed
Release:  dev
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Thu 06 Sep 2018 03:37:25 PM UTC, comment #46: 

The title issue, "int64 does not saturate correctly in negative direction," has now been addressed.  Closing this report.

Rik <rik5>
Group administrator
Sat 01 Sep 2018 03:03:47 AM UTC, comment #45: 

One last comment.  I wondered if "goto" is something an optimizing compiler will not remove and whether it retains an extra jump, e.g.,

https://stackoverflow.com/a/41609491

Given that the code the current goto skips is rather short, applying the saturation code in all cases doesn't add too much extra code space.  The numbers are below, and there seems to be no difference one way or the other:


50 TRIALS

64-BIT SIGNED MULT
       CURRENT_MULT   CURRENT_NOGOTO_MULT   D_FLOAT_MULT

tgood     14.580           14.792             14.340
tover     12.584           12.188             11.520

32-BIT SIGNED MULT
       CURRENT_MULT   CURRENT_NOGOTO_MULT   D_FLOAT_MULT

tgood     7.4480           7.4520             7.4160
tover     5.8120           5.8160             5.9000

16-BIT SIGNED MULT
       CURRENT_MULT   CURRENT_NOGOTO_MULT   D_FLOAT_MULT

tgood     4.3440           4.3440             4.4000
tover     3.7720           3.7760             5.2840

8-BIT SIGNED MULT
       CURRENT_MULT   CURRENT_NOGOTO_MULT   D_FLOAT_MULT

tgood     2.7200           2.6200             1.9640
tover     2.5840           2.7320             3.7680


I'm attaching the diff FWIW.

... Close this bug report.

(file #44921)

Dan Sebald <sebald>
Sat 01 Sep 2018 01:49:56 AM UTC, comment #44: 

I looked into the performance of multiplication yesterday and a bit today.  The division-based algorithms I tried simply slowed things down, basically by a factor of two, on my processor.  Division has always required extra steps and repetitive bit shifting; it simply costs too much computation.  For the sake of the record, I'm attaching the algorithms I tried as a diff, but it isn't useful for anything.

The last alternative in that code (if one looks) is the use of __builtin_mul_overflow().  The existing algorithm and the builtin optimization are actually quite close.  I took a second look at the current algorithm, and it is not using a post-operation on signed arithmetic.  So, it really isn't relying on any signed behavior, unlike what the add/sub fast algorithms were doing.  It's quite nice really, as it is just doing the arithmetic as (upper_x * 2^32 + lower_x) * lower_y with some observations about binary arithmetic as regards overflow.

I was a bit concerned the routine is doing:


  // Essentially, what we do is compute sign, multiply absolute values
  // (as above) and impose the sign.

  uint64_t usx = octave_int_abs (x);
  uint64_t usy = octave_int_abs (y);


and that absolute value might turn MIN_VAL into MAX_VAL (i.e., -N becomes N-1 rather than N for two's complement), but I couldn't seem to find any aberrant result around that value.

Out of curiosity, I wondered how efficient the use of LONG_DOUBLE is, so I activated it with the following mod:

/* #  undef OCTAVE_ENSURE_LONG_DOUBLE_OPERATIONS_ARE_NOT_TRUNCATED */
#  define OCTAVE_INT_USE_LONG_DOUBLE

Here are timing comparisons between the two: the current word-wise decomposition of multiplication versus using LONG DOUBLE (regardless of whether the criteria are met for the computations on my system to be correct):


50 TRIALS

64-BIT SIGNED MULT
       CURRENT_MULT   D_FLOAT_MULT

tgood     16.784        16.632
tover     13.592        13.296

32-BIT SIGNED MULT
       CURRENT_MULT   D_FLOAT_MULT

tgood      7.4400       7.4120
tover      6.3520       6.4160

16-BIT SIGNED MULT
       CURRENT_MULT   D_FLOAT_MULT

tgood     4.3400        4.4000
tover     3.7720        5.2840

8-BIT SIGNED MULT
       CURRENT_MULT   D_FLOAT_MULT

tgood     2.6280        1.9560
tover     2.5840        3.7680



OBSERVATIONS

I tried making the test as apples-to-apples as possible.  Multiplication in both cases becomes slower for larger numbers.  The current algorithm can skip one of the integer multiply/accumulate steps when the numbers are small.  The floating-point implementation probably behaves similarly for internal (microcode) reasons, in that it can terminate the accumulation when it runs out of bits.  For that reason, I chose a fairly big number that doesn't overflow:


w = int64(ones(5000));
x = w; x(:) = int64(sqrt(double(intmax('int64'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int64');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int32(ones(5000));
x = w; x(:) = int32(sqrt(double(intmax('int32'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int32');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int16(ones(5000));
x = w; x(:) = int16(sqrt(double(intmax('int16'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int16');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int8(ones(5000));
x = w; x(:) = int8(sqrt(double(intmax('int8'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int8');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z


Judging from the numbers, for int32 and int64, the two approaches are almost a dead heat.  However, for the smaller non-natural CPU widths, the use of floating point has an obvious speed-up.  Could it be that the optimizing compiler is able to make use of CPU features, such as being able to do two multiplies at once for shorter data widths?

On the other hand, in the overflow case the floating point becomes much worse for the smaller data widths.  This is probably because the current approach can test early for certain overflow, whereas the firmware floating-point approach perhaps can't.  Say it IS doing two multiplies at once; in that case it has to carry both multiplies through all the way rather than quickly test beforehand whether none, one, or both multiplies overflow.  In some sense, the overflow condition isn't super important; if the user's mults are all overflowing, there's a bigger problem.

Summarizing, the advantage of OCTAVE_INT_USE_LONG_DOUBLE appears to be only for 8-bit multiplies on my system.  Otherwise it is an even comparison.  I've created a patch that removes the OCTAVE_INT_USE_LONG_DOUBLE scenario in case you feel the benefit of double-float doesn't outweigh the complexity and the CPU requirements on ALU result bit width.  The current integer-based routine looks nearly as fast as can be done.  Plus, maybe a bit of tweaking can allow the 8-bit version of the integer routine to be optimized for the ALU.  (There is always the SSE approach, too, for those really in need of efficiency.)


(file #44919, file #44920)

Dan Sebald <sebald>
Thu 30 Aug 2018 08:52:59 PM UTC, comment #43: 

Dan, I don't think I made any real changes to mul_internal.  If there are possible problems with that function, then yes, let's fix them.


John W. Eaton <jwe>
Group administrator
Thu 30 Aug 2018 08:02:43 PM UTC, comment #42: 

@JWE, I see all the changes you've made to get rid of the OCTAVE_HAVE_FAST_INT_OPS and to generally clean up the style.  I've recompiled the code.  No BIST errors, and the timing looks about the same, maybe even a little faster by a few percent in some cases.  (Could be the use of ternary conditionals.)

So, I'd consider this bug report complete.  However, I saw the following change for multiplication overflow and it caught my eye:


template <>
int64_t
octave_int_arith_base<int64_t, true>::mul_internal (int64_t x, int64_t y)
{
  // The signed case is far worse.  The problem is that even if neither
  // integer fits into signed 32-bit range, the result may still be OK.
  // Uh oh.


Looking at the changes there, it seems to me this is also using a post-int-overflow operation, in fact a rather complex one at that.  I wonder if we could use the same "check-ahead-of-time" approach that is done with the add() and sub() routines.  If x or y equals 0, then the result is 0.  So, assuming the x != 0 and y != 0 case, can't we just use a simple


if (MAX_INT / x < y)
  u = MAX_INT;
else
  u = x * y;


It uses a division, but given how complex the current approach seems, the division might still be faster.

We'd have to account for the sign of x and the sign of y, which will have a nuanced MIN_INT/MAX_INT, maybe rounding up/down, kind of thing, but that should be straightforward (see the sketch below).  Do you want to investigate, since you seem on a roll with the oct-inttypes code?  :-)  Or should I have a try?
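
A minimal sketch of that sign-aware, check-ahead multiply, just to show the shape of it (sat_mul is a hypothetical name, assuming two's complement; not tested against the Octave sources):


#include <cstdint>
#include <limits>

// Hypothetical check-ahead saturating multiply.  Every division below has a
// nonzero divisor and never evaluates MIN / -1, so nothing here overflows.
static int64_t
sat_mul (int64_t x, int64_t y)
{
  const int64_t maxv = std::numeric_limits<int64_t>::max ();
  const int64_t minv = std::numeric_limits<int64_t>::min ();

  if (x == 0 || y == 0)
    return 0;

  if (x > 0)
    return (y > 0) ? ((x > maxv / y) ? maxv : x * y)   // + * + overflows high
                   : ((y < minv / x) ? minv : x * y);  // + * - overflows low
  else
    return (y > 0) ? ((x < minv / y) ? minv : x * y)   // - * + overflows low
                   : ((x < maxv / y) ? maxv : x * y);  // - * - overflows high
}


Truncation toward zero in C++ integer division happens to make the comparisons come out right in all four sign quadrants, so no extra rounding adjustment is needed.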

Dan Sebald <sebald>
Thu 30 Aug 2018 07:14:45 PM UTC, comment #41: 

BTW, since this is C++11 now, I wondered if the following would work, given that there is now more support for all integer types (http://www.cplusplus.com/reference/cstdlib/abs/):


--- a/liboctave/util/oct-inttypes.h
+++ b/liboctave/util/oct-inttypes.h
@@ -552,19 +552,8 @@ public:
       }
     return y;
 #else
-    // -INT_MAX is safe because C++ actually allows only three implementations
-    // of integers: sign & magnitude, ones complement and twos complement.
-    // The first test will, with modest optimizations, evaluate at compile
-    // time, and maybe eliminate the branch completely.
-    T y;
-    if (octave_int_base<T>::min_val () < -octave_int_base<T>::max_val ()
-        && x == octave_int_base<T>::min_val ())
-      {
-        y = octave_int_base<T>::max_val ();
-      }
-    else
-      y = (x < 0) ? -x : x;
-    return y;
+    // C++11 has int, long int, and long long int versions of abs().
+    return abs (x);


However, this change causes Octave to hang on abs(intX(#)).

Nonetheless, it seems to me that testing


    if (octave_int_base<T>::min_val () < -octave_int_base<T>::max_val ()


is redundant regardless of whether it gets optimized out because in 1's-comp, 2's-comp, sign/mag it seems that assigning


  y = octave_int_base<T>::max_val ();


is fine whenever x == min_val().  Not a critical observation.
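
A sketch of that simplification (sat_abs is a hypothetical name; the only assumption is min_val() <= -max_val(), which holds in all three representations):


#include <limits>

// Sketch: saturating absolute value without the representation test.
// In two's complement, abs(min) saturates to max; in ones' complement and
// sign/magnitude, min == -max, so returning max there is exact anyway.
template <typename T>
static T
sat_abs (T x)
{
  if (x == std::numeric_limits<T>::min ())
    return std::numeric_limits<T>::max ();
  return (x < 0) ? -x : x;
}
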

Dan Sebald <sebald>
Thu 30 Aug 2018 07:02:39 PM UTC, comment #40: 

I reran the tests with 50 trials rather than 10 to reduce the variance a bit.  Numbers are:


50 TRIALS

64-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS   NEW_CLEAN_INT

tgood     13.428         13.324        13.260          13.236
tover     16.724         16.568        16.740          16.220
to/tg     1.2455         1.2435        1.2624          1.2254

32-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS   NEW_CLEAN_INT

tgood     6.9160         6.9680        6.8960          6.8280
tover     8.6240         8.5640        8.5920          8.5840
to/tg     1.2470         1.2290        1.2459          1.2572

16-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS   NEW_CLEAN_INT

tgood     4.0600         4.4840        4.0240          3.9560
tover     4.9520         4.9840        7.1400          4.9120
to/tg     1.2197         1.1115        1.7744          1.2417

8-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS   NEW_CLEAN_INT

tgood     2.0240         2.7080        1.8520          1.6560
tover     3.4320         2.7280        3.8160          1.8720
to/tg     1.6957         1.0074        2.0605          1.1304


I thought I had discovered a way to shave a machine instruction or two off the "straightforward" version by using the following construct:


    // We shall carefully avoid anything that may overflow.
    T u = x + y;

    if (y < 0)
      {
        if (u - octave_int_base<T>::min_val () < 0)
          u = octave_int_base<T>::min_val ();
      }
    else
      {
        if (u - octave_int_base<T>::max_val () > 0)
          u = octave_int_base<T>::max_val ();
      }

    return u;


(that's what the NEW_CLEAN_INT column in the table is), but it does not work; it produces the wrong saturation value.  The idea of the above construct is that we can allow the wrap to take place, then unwrap by subtracting the MIN or MAX value appropriately and check whether it ends up in the improper region.  That concept is fine in low-level machine code (say, if programming a DSP or something), but it is not reliable in C/C++ because C/C++ does not define how signed integers wrap.  Here's a nice brief post on the topic

https://stackoverflow.com/a/18195756

that mentions compiler authors often utilize this undefinedness for optimization.

So, my adjustment above is no good for C++, generally speaking.  And I suspect that is what the original issue for this bug report was too: once the overflow of a signed integer occurs, we can't expect the result to be anything in particular, even if there is a limited set of outcomes, be it 1's complement, 2's complement, or sign/magnitude.  The optimizing compiler essentially assumes no overflow occurred.  That is, even though this change to fix the original issue:

http://hg.savannah.gnu.org/hgweb/octave/rev/26c41d8bf170

works, it is still relying on a value of u, post-overflow.  In other words, it is probably the following that originally failed


         + __signbit (~u);


because of optimization.  But in the future it could possibly be its replacement


            ? (u < 0


that fails for overflowed u because of optimization.

This is another argument for simply avoiding the fancier 1's-comp, 2's-comp, sign/mag construct and not working on overflowed integer values.  One can't make any case to a compiler author about what the result should be, because it is undefined.  (A well-defined variant of the unwrap idea is sketched below.)
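
For what it's worth, the unwrap-and-test idea can be expressed legally by doing the wrap in unsigned arithmetic, where it is defined modulo 2^N.  A hedged sketch (sat_add_wrap is a hypothetical name; the final unsigned-to-signed conversion is implementation-defined before C++20, though it is modulo 2^N on GCC and Clang):


#include <limits>
#include <type_traits>

// Sketch: wrap in unsigned arithmetic (well defined), then test signs.
// Overflow occurred iff x and y share a sign that the wrapped sum lacks.
template <typename T>
static T
sat_add_wrap (T x, T y)
{
  typedef typename std::make_unsigned<T>::type UT;
  T u = static_cast<T> (static_cast<UT> (x) + static_cast<UT> (y));
  if (((u ^ x) & (u ^ y)) < 0)
    u = (u < 0) ? std::numeric_limits<T>::max ()
                : std::numeric_limits<T>::min ();
  return u;
}
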

Dan Sebald <sebald>
Thu 30 Aug 2018 03:41:48 AM UTC, comment #39: 

Fine by me.  One can always revert the change if someone on iOS or some other compiler notices a slowdown, but I suspect that with better compilers these days and new SIMD processor features the cleanest basic C++ code wins out.

Dan Sebald <sebald>
Thu 30 Aug 2018 03:23:41 AM UTC, comment #38: 

Any objection to eliminating the OCTAVE_HAVE_FAST_INT_OPS cases then?  At this point, it seems to just obscure for no good reason.  I'd rather have clearer, simple, standard-conforming code unless there is a GOOD reason to do otherwise.

John W. Eaton <jwe>
Group administrator
Thu 30 Aug 2018 03:13:31 AM UTC, comment #37: 

Oh, yes, the change in comment #34 is fine for consistency's sake.  There was no failure because, following the logic through, one sees they are equivalent.  Based on the previous test results, the GCC optimizer probably figures that out and basically skips the 1's complement.

Dan Sebald <sebald>
Thu 30 Aug 2018 03:07:20 AM UTC, comment #36: 

Ah, so that was a reason for some of this, i.e., the 53-bit/64-bit discrepancy.  Since 2008, I'm guessing, you introduced templates, which may have obviated that reason.

In any case, I've done some tests here with various builds: with and without OCTAVE_HAVE_FAST_INT_OPS, and a version that uses GCC's __builtin_add_overflow().  I've tested the following on each build:


w = int64(ones(5000));
x = w;
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int64');
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int32(ones(5000));
x = w;
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int32');
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int16(ones(5000));
x = w;
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int16');
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z

w = int8(ones(5000));
x = w;
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int8');
y = w;
start = cputime; for i=[1:10]; z = x + y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z


And here is the result:


64-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     2.5840         2.6480         2.6320
tover     3.2720         3.3240         3.2840
to/tg     1.2663         1.2553         1.2477

32-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     1.2960         1.3880         1.3000
tover     1.6920         1.7080         1.7240
to/tg     1.3056         1.2305         1.3262

16-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     0.73200        0.81200        0.73600
tover     0.98000        1.0720         1.4160
to/tg     1.3388         1.3202         1.9239

8-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     0.42000        0.43600        0.38400
tover     0.70000        0.70000        0.77200
to/tg     1.6667         1.6055         2.0104


That aberration of 1.41 seconds for GCC_BUILTINS and 16-bit adds is not due to a system stall.  It actually is some inefficiency in the builtin routine for that particular data width.  I should point out that I used the generic __builtin_add_overflow() builtin, not the ones specialized to long long etc.; I couldn't figure out a way for the template T to be "long long" or "int64" mapped to the appropriate specialization.  It's interesting to see the CPU features at work as well, with the decreasing times at reduced data widths (Xeon/x86).

CONCLUSION: The GCC compiler seems to be so good at optimizing now that OCTAVE_HAVE_FAST_INT_OPS and __builtin_add_overflow() don't seem necessary; at least not for GCC.  I could run without the -O2 flag and redo the tests, but I'm not interested enough.
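
For reference, GCC's __builtin_add_overflow is type-generic: it deduces the operand types, so a template can call it directly without mapping T onto the long long specializations.  A minimal sketch (sat_add_builtin is a hypothetical name):


#include <limits>

// Sketch: saturating add via the type-generic GCC/Clang builtin.  When the
// builtin reports overflow, both operands necessarily have the same sign,
// so the sign of x picks the saturation bound.
template <typename T>
static T
sat_add_builtin (T x, T y)
{
  T u;
  if (__builtin_add_overflow (x, y, &u))
    u = (x < 0) ? std::numeric_limits<T>::min ()
                : std::numeric_limits<T>::max ();
  return u;
}
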

--

The following is the same test but using "tic/toc", pretty much the same result; no sense looking at it:


w = int64(ones(5000));
x = w;
y = w;
tic; for i=[1:10]; x + y; endfor; toc
x = w; x(:) = intmax('int64');
y = w;
tic; for i=[1:10]; x + y; endfor; toc
clear w x y z

w = int32(ones(5000));
x = w;
y = w;
tic; for i=[1:10]; x + y; endfor; toc
x = w; x(:) = intmax('int32');
y = w;
tic; for i=[1:10]; x + y; endfor; toc
clear w x y z

w = int16(ones(5000));
x = w;
y = w;
tic; for i=[1:10]; x + y; endfor; toc
x = w; x(:) = intmax('int16');
y = w;
tic; for i=[1:10]; x + y; endfor; toc
clear w x y z

w = int8(ones(5000));
x = w;
y = w;
tic; for i=[1:10]; x + y; endfor; toc
x = w; x(:) = intmax('int8');
y = w;
tic; for i=[1:10]; x + y; endfor; toc
clear w x y z



64-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     2.60085        2.60146       2.61581
tover     3.30584        3.28829       3.2446

32-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     1.34028        1.36915       1.3331
tover     1.70602        1.71057       1.66417

16-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     0.781169       0.800373      0.768512
tover     0.996119       0.9839        1.37523

8-BIT ADD
       WITH_FAST_INT   NO_FAST_INT   GCC_BUILTINS

tgood     0.430963       0.435878      0.403436
tover     0.739937       0.737365      0.766774


Dan Sebald <sebald>
Thu 30 Aug 2018 02:01:55 AM UTC, comment #35: 

If you want some insight into what led to these Jaroslavimizations, start here:

http://lists.gnu.org/archive/html/octave-maintainers/2008-09/msg00080.html

John W. Eaton <jwe>
Group administrator
Thu 30 Aug 2018 01:49:24 AM UTC, comment #34: 

Dan, I carelessly thought that the add and sub cases should be flipped.  Now I see that the variable UY is computed differently in the two functions.  So if I understand correctly now, the attached change is needed to make the functions consistent.

Apparently there is no test that applies to this case?  Or at least I didn't see a failure after my change.

(file #44898)

John W. Eaton <jwe>
Group administrator
Wed 29 Aug 2018 11:15:03 PM UTC, comment #33: 

I'm not even sure the OCTAVE_HAVE_FAST_INT_OPS version is that much more efficient than the alternative.  If I had the energy, I'd compile both versions and look at the assembly, but just counting operations...

IN THE NO OVERFLOW CASE


#if defined (OCTAVE_HAVE_FAST_INT_OPS)
    // The typecasts do nothing, but they are here to prevent an optimizing
    // compiler from interfering.  Also, the signed operations on small types
    // actually return int.
    T u = static_cast<UT> (x) - static_cast<UT> (y); [1 op]
    T ux = u ^ x;                                    [1 op]
    T uy = u ^ ~y;                                   [2 op]
    if ((ux & uy) < 0)                               [1 op ... comparing against 0 is a JMPZ or something]
      {
        u = (__signbit (~u)
             ? octave_int_base<T>::min_val ()
             : octave_int_base<T>::max_val ());
      }
    return u;


5 operations and a few save/loads/jmps unless optimization is good.


    // We shall carefully avoid anything that may overflow.
    T u;
    if (y < 0)                                        [1 op]
      {
        if (x > octave_int_base<T>::max_val () + y)   [2 op]
          {
            u = octave_int_base<T>::max_val ();
          }
        else
          u = x - y;                                  [1 op]
      }
    else
      {
        if (x < octave_int_base<T>::min_val () + y)
          {
            u = octave_int_base<T>::min_val ();
          }
        else
          u = x - y;
      }

    return u;


4 operations and a few load/save/jmp.

On operations alone, it looks like the "slower" routine has fewer of them.  Granted, in the second case we are always loading that min_val and there are probably one or two more jumps, but this is so close that I'd think it depends on the compiler and architecture pipelining as to which is faster.


IN THE OVERFLOW CASE


#if defined (OCTAVE_HAVE_FAST_INT_OPS)
    // The typecasts do nothing, but they are here to prevent an optimizing
    // compiler from interfering.  Also, the signed operations on small types
    // actually return int.
    T u = static_cast<UT> (x) - static_cast<UT> (y); [1 op]
    T ux = u ^ x;                                    [1 op]
    T uy = u ^ ~y;                                   [2 op]
    if ((ux & uy) < 0)                               [1 op]
      {
        u = (__signbit (~u)                          [2 op]
             ? octave_int_base<T>::min_val ()
             : octave_int_base<T>::max_val ());
      }
    return u;


7 operations and a few save/loads/jmps unless optimization is good.


    // We shall carefully avoid anything that may overflow.
    T u;
    if (y < 0)                                        [1 op]
      {
        if (x > octave_int_base<T>::max_val () + y)   [2 op]
          {
            u = octave_int_base<T>::max_val ();
          }
        else
          u = x - y;
      }
    else
      {
        if (x < octave_int_base<T>::min_val () + y)
          {
            u = octave_int_base<T>::min_val ();
          }
        else
          u = x - y;
      }

    return u;


3 operations and a few load/save/jmp.

The supposedly slower method is a clear winner in this case.  The OCTAVE_HAVE_FAST_INT_OPS version is, more than anything, conserving code space because of reduced jumps.

I'd advocate replacing that OCTAVE_HAVE_FAST_INT_OPS group with the GCC __builtin_ssubll_overflow() class of routines to get a certain speed-up.

Dan Sebald <sebald>
Wed 29 Aug 2018 10:19:31 PM UTC, comment #32: 

@JWE: You modified both the add() and sub() routines to use sign comparison rather than adding a value.  It looks to me that the purpose of that sign test code is exactly the same in both cases.  So rather than this (the __signbit(~u)):


  // This is very similar to addition.
  static T
  sub (T x, T y)
  {
#if defined (OCTAVE_HAVE_FAST_INT_OPS)
    // The typecasts do nothing, but they are here to prevent an optimizing
    // compiler from interfering.  Also, the signed operations on small types
    // actually return int.
    T u = static_cast<UT> (x) - static_cast<UT> (y);
    T ux = u ^ x;
    T uy = u ^ ~y;
    if ((ux & uy) < 0)
      {
        u = (__signbit (~u)
             ? octave_int_base<T>::min_val ()
             : octave_int_base<T>::max_val ());
      }


which uses the 1's complement, how about making them consistent?  I.e., __signbit(~u) = !__signbit(u) = !(u < 0), so switch around the ternary operator arguments and get


        u = (u < 0
             ? octave_int_base<T>::max_val ()
             : octave_int_base<T>::min_val ());


the same as for add().  In some sense, that last portion of the routine is independent of the modulo arithmetic that took place before it.

Dan Sebald <sebald>
Wed 29 Aug 2018 09:44:17 PM UTC, comment #31: 

Works here.

About this __signbit() function: wouldn't it have a better chance of optimization as an "inline"?  Or am I missing something, in that the instantiation for the int64_t integer type already uses "inline"?

Also, in C/C++ doesn't the standard ensure the following


  // Returns 1 for negative number, 0 otherwise.
  static T
  __signbit (T x)
  {
    return (x < 0);
  }


is sufficient (or perhaps a cast to T is needed).  Sure, the optimizing compiler will probably figure that out and produce the same code, but still.

Also, the signum routine:

  static T
  signum (T x)
  {
    // With modest optimizations, this will compile without a jump.
    return ((x > 0) ? 1 : 0) - __signbit (x);
  }

Why even call the __signbit routine?  And garden-variety optimization would still do two comparisons with the above, x > 0 and x < 0, because of the subtraction operator.  How about:


    return ((x < 0) ? -1 : (x > 0));


which on average might cut the comparisons to 1.5.  Or, one could make the argument that the more prevalent case is someone having mostly x > 0, so "return ((x > 0) ? 1 : (x == 0) ? 0 : -1)" would work.  The thing about avoiding addition and subtraction is that sometimes the processor has an instruction that loads a value into a register where that value is encoded as part of the instruction, rather than coming from a separate register that would need an additional load.

Here's a changeset you can pick and choose from or disregard.

(file #44897)

Dan Sebald <sebald>
Wed 29 Aug 2018 09:39:51 PM UTC, comment #30: 

The tests pass now where they failed before (gcc with -O2).  Marking as fixed and closing report.

Rik <rik5>
Group administrator
Wed 29 Aug 2018 07:19:31 PM UTC, comment #29: 

I pushed this patch on stable and merged with default:

http://hg.savannah.gnu.org/hgweb/octave/rev/26c41d8bf170

Since I put this change on stable, I also grafted Rik's new tests from default to stable.

John W. Eaton <jwe>
Group administrator
Wed 29 Aug 2018 06:20:59 PM UTC, comment #28: 

Looks good Rik.  There may be a few other corner cases to add, however, such as in this case:


octave:4> intmin ('int64') + intmin ('int64')
ans = -9223372036854775808


The reason is that in all the fast algorithms (be it the __builtin_X group or the whiz-bang version), the direction of wrap has to be deduced from the operation result.  For example, in the patch I had something like:


    if (__builtin_saddll_overflow (x, y, &u))
      {
        if (u < 0)
          u = octave_int_base<T>::max_val ();


being careful with the "< 0" comparison, not "<= 0".  Think about the integer representation


[-N ... -1][0 ... N-1]


If x = -N and y = -N, the wrapped addition results in u = 0, since -2N is congruent to 0 modulo 2N; that is exactly why the comparison must be "< 0" and not "<= 0".  (Subtraction, for that matter, also gives u = 0 there, though legitimately.)

In the case of unsigned, I suppose it's more straightforward.  Since there are no negative numbers, addition can only overflow past the maximum and subtraction can only underflow past zero.
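
A tiny stand-alone illustration of the signed corner case (assumes nothing beyond the C++ standard library):


#include <cstdint>
#include <cstdio>

int main ()
{
  // intmin('int8') + intmin('int8'): (-128) + (-128) = -256, which is
  // congruent to 0 modulo 2^8, so the wrapped sum is exactly 0.  A "<= 0"
  // test would saturate this to intmax instead of intmin.
  uint8_t u = static_cast<uint8_t> (INT8_MIN) + static_cast<uint8_t> (INT8_MIN);
  std::printf ("%d\n", static_cast<int8_t> (u));  // prints 0
}
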

Dan Sebald <sebald>
Wed 29 Aug 2018 04:28:19 PM UTC, comment #27: 

Test-driven development is not a bad thing, so I created a new file "integer.tst" in the test/ directory to cover issues related to the integer classes.  Currently it has tests only for the saturation mechanics and they correctly fail for int64 when testing the lower bound.  See https://hg.savannah.gnu.org/hgweb/octave/rev/4530c5824bbe.

Rik <rik5>
Group administrator
Wed 29 Aug 2018 01:43:53 AM UTC, comment #26: 

OK...

Here's a little patch to proof-of-concept the use of the GNU-lib builtins.  It only works for int64 because I hard-coded the use of __builtin_saddll_overflow(), but if you could figure out a way to use the template T to substitute the proper builtin routine (conditioned on the GCC compiler being used), there couldn't be any faster routine.

It works, and I think I have the < 0 (versus <= 0) correct, but there should be plenty of corner cases added to the BIST:


octave:1> intmin ('int64')
ans = -9223372036854775808
octave:2> intmin ('int64') - 1
ans = -9223372036854775808
octave:3> intmin ('int64') + 1
ans = -9223372036854775807
octave:4> intmin ('int64') + intmin ('int64')
ans = -9223372036854775808
octave:5> intmin ('int64') - intmin ('int64')
ans = 0
octave:6> intmin ('int64') - intmax ('int64')
ans = -9223372036854775808
octave:7> intmax ('int64') + intmax ('int64')
ans = 9223372036854775807
octave:8> intmax ('int64') - intmin ('int64')
ans = 9223372036854775807


BTW, something else that would be nice is a special type/class that exists only in the intermediate world of the interpreter; call it a "long_double".  Rather than ASCII numbers being immediately converted to double, they could have the significand stored exactly in a 64-bit integer while the exponent is stored in a double as 1eX.  So if the user types


octave:7> int64(-9223372036854775806)
ans = -9223372036854775808


the number inside the function argument list would be treated as

(long_double) -9223372036854775806 x 1e0

and with the proper construct the int64() routine could convert the long_double back to an int64 without loss of resolution.  It would only be when an assignment is made that the long_double gets converted to double.  (I.e., the user would never be able to create or see any type of class called "long_double".)
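
Purely to illustrate the idea, the stored form might look something like this (entirely hypothetical; no such class exists in Octave):


#include <cstdint>

// Hypothetical interpreter-internal literal: the typed digits kept exactly
// in a 64-bit integer, with a separate power-of-ten exponent.
struct long_double_literal
{
  int64_t significand;  // e.g., -9223372036854775806, exact
  int exponent;         // power of ten: value = significand * 10^exponent
};
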

(file #44893)

Dan Sebald <sebald>
Tue 28 Aug 2018 10:33:55 PM UTC, comment #25: 

I also agree that it should be sufficient to check x < 0.

I'll work on a patch.

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 09:38:18 PM UTC, comment #24: 

Yes, all of this seems high on the complexity scale for very little affect.

Rik <rik5>
Group administrator
Tue 28 Aug 2018 09:26:19 PM UTC, comment #23: 

Even the __signbit() routine leaves me wondering:


  return static_cast<uint64_t> (x) >> std::numeric_limits<int64_t>::digits;


It's bit-shifting by a quantity called digits; for a signed type, std::numeric_limits<T>::digits is the number of value bits excluding the sign bit (63 for int64_t), so the shift leaves just the sign bit.  Why not just (x < 0) if it is trying to extract the sign bit?  Was this code supposed to account for the case of defining int64_max to be 2^52 - 1 and int64_min to be -2^52?  That is, to ensure all data types fit within an IEEE double significand?

Dan Sebald <sebald>
Tue 28 Aug 2018 09:01:02 PM UTC, comment #22: 

The second sample code from comment #16 works for me regardless of optimization level.  I'm attaching a modified version of the original called int-overflow.rik.cc.  It fails for me at the addition operator.


~/code/cppsrc: g++ -O2 --std=c++11 int-overflow.rik.cc
~/code/cppsrc: a.out
-9223372036854775808 + -2
branch 1: -9223372036854775808 + -2
TMP START
9223372036854775807 + 1
9223372036854775807
TMP END
9223372036854775807
~/code/cppsrc: g++ -O1 --std=c++11 int-overflow.rik.cc -o a.out2
~/code/cppsrc: a.out2
-9223372036854775808 + -2
branch 1: -9223372036854775808 + -2
TMP START
9223372036854775807 + 1
-9223372036854775808
TMP END
-9223372036854775808




(file #44892)

Rik <rik5>
Group administrator
Tue 28 Aug 2018 09:00:53 PM UTC, comment #21: 

@Rik, am I understanding correctly that you are seeing a __signbit() value of 128?  In the printout I ran, it is producing a value of 1, i.e., adding 1 to the max value so it wraps to the min value.

Dan Sebald <sebald>
Tue 28 Aug 2018 08:56:50 PM UTC, comment #20: 

Ah, instead of


      u = octave_int_base<T>::max_val () + __signbit (~u);


I think we can write:


      u = (__signbit (~u)
           ? std::numeric_limits<T>::min ()
           : std::numeric_limits<T>::max ());


because it seems as though the intent was to either set the result to the max value for the type (signbit is 0) or, if signbit is 1, add 1 to the max value and wrap around to the min value.  Why not set the min and max directly, simply by checking the sign?

Also, I think octave_int_base<T>::max_val () was the wrong thing here anyway because, if I'm reading the code correctly, that returns an octave_int<T> type, so wouldn't the operator + applied to that value send us back through all the octave_int code just to add the signbit (0 or 1) value?

The original code seems like it tried to be a bit too tricky for simply setting the max or min value if an overflow condition is detected.

I'm hereby coining a new term that may be used to describe this type of problem in Octave:  Jaroslavimization.  :-)

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 08:52:49 PM UTC, comment #19: 

The example of Comment #16 isn't failing for me:


@linux ~/octave/bug/54572 $ gcc -O2 -std=c++11 int-overflow-2.cc -lstdc++
@linux ~/octave/bug/54572 $ ./a.out
-9223372036854775808 + -2
u: 9223372036854775806
ux: -2
uy: -9223372036854775808
(ux & uy) < 0:
(~u): -9223372036854775807
__signbit (~u): 1
u: -9223372036854775808
-9223372036854775808


What compiler command are you using?  What output are you seeing when it fails?

Dan Sebald <sebald>
Tue 28 Aug 2018 08:41:30 PM UTC, comment #18: 

I ran the code through the debugger and printed out the intermediate values.


(gdb) p x
$13 = -9223372036854775808
(gdb) p y
$14 = <optimized out>
(gdb) p ux
$15 = -2
(gdb) p uy
$16 = -9223372036854775808
(gdb) p (ux & uy)
$17 = -9223372036854775808
(gdb) p __signbit (~u)
$18 = 128


The code path that is taken ends up going through the if at the very bottom


    if ((ux & uy) < 0)
      {
        u = octave_int_base<T>::max_val () + __signbit (~u);
      }


I compiled again with -g -O1 and verified that the output is correct.  When I run it under gdb the values are identical.  This points to the actual '+' operator on the line above.


(gdb) p x
$1 = -9223372036854775808
(gdb) p y
$2 = <optimized out>
(gdb) p ux
$3 = -2
(gdb) p uy
$4 = -9223372036854775808
(gdb) n
136             u = octave_int_base<T>::max_val () + __signbit (~u);
(gdb) p ux & uy
$5 = -9223372036854775808
(gdb) p __signbit (~u)
$6 = 128




Rik <rik5>
Group administrator
Tue 28 Aug 2018 08:35:39 PM UTC, comment #17: 

OK, interesting...

How much work would it be to make ints use native arithmetic?  If gcc/g++ is used, then the


__builtin_sub_overflow()


family of routines could be used (very fast).  If it isn't the gcc/g++ compiler, then Octave could use slower custom equivalents that check before the operation whether it will overflow.

Dan Sebald <sebald>
Tue 28 Aug 2018 08:35:19 PM UTC, comment #16: 

Here's a much simpler version.  No doubles anywhere.  Same trouble with -O2 but not with -O1.

(file #44891)

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 08:15:17 PM UTC, comment #15: 

Oh, I see.  Yeah, I suspect it is that float-53/int-64 issue again.  Recall, before, that issue manifested as a float index mapped to a negative integer (or something like that).  Here, I would guess that the arithmetic with


  // Compute proper thresholds.
  static const S thmin = compute_threshold (static_cast<S> (min_val ()),
                                            min_val ());
  static const S thmax = compute_threshold (static_cast<S> (max_val ()),
                                            max_val ());
[snip]
  compute_threshold (S val, T orig_val)
  {
    val = std::round (val); // Fool optimizations (maybe redundant)
    // If val is even, but orig_val is odd, we're one unit off.
    if (orig_val % 2 && val / 2 == std::round (val / 2))
      // FIXME: is this always correct?
      val *= (static_cast<S> (1) - (std::numeric_limits<S>::epsilon () / 2));
    return val;
  }


is producing a threshold "thmin" that is one resolution bit on the negative side of LINT_MIN or std::numeric_limits<int64_t>::min.  No time to investigate right now.

However, I'm not certain of the above guess because the decrement operator doesn't appear to compute a limit and it fails:


octave:8> intmin ('int64') - 1
I am in INT_DOUBLE_BINOP_DECL (-, int64)
x = -9223372036854775808
(-y) = -1
MIN THRESH: -9.22337e+18
MAX THRESH: 9.22337e+18
ans = 9223372036854775807
octave:9> x = intmin('int64')
x = -9223372036854775808
octave:10> --x
ans = 9223372036854775807


Dan Sebald <sebald>
Tue 28 Aug 2018 08:12:45 PM UTC, comment #14: 

jwe's test case fails for -O2, but works for other optimization levels.  I also verified that OCTAVE_HAVE_FAST_INT_OPS must be defined to show the bug.  There are two places where OCTAVE_HAVE_FAST_INT_OPS is checked.  I ruled out one with testing.  This leaves this bit of code as the problem:


  static T
  add (T x, T y)
  {
#if defined (OCTAVE_HAVE_FAST_INT_OPS)
    // The typecasts do nothing, but they are here to prevent an optimizing
    // compiler from interfering.  Also, the signed operations on small types
    // actually return int.
    T u = static_cast<UT> (x) + static_cast<UT> (y);
    T ux = u ^ x;
    T uy = u ^ y;
    if ((ux & uy) < 0)
      {
        u = octave_int_base<T>::max_val () + __signbit (~u);
      }
    return u;
#else


The comment sounds interesting.  It specifically says that the casts are there to prevent an optimizing compiler from interfering, which it seems like the compiler is doing anyway.


Rik <rik5>
Group administrator
Tue 28 Aug 2018 07:49:07 PM UTC, comment #13: 

I'm attaching a stripped-down version of the code from oct-inttypes.h and oct-inttypes.cc.  I see the problem with GCC 8.2 using -O2 (-O1 is not enough for me to trigger the bug) when OCTAVE_HAVE_FAST_INT_OPS is defined.  I could be wrong, but I suspect a bug in Octave, not the compiler.



(file #44890)

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 07:09:52 PM UTC, comment #12: 

Producing a short stand-alone C program might be difficult, at least for me since I don't know very well how the math operators are constructed.

FWIW, I'm attaching a sample program that goes down the path of attempting to uncover a problem and/or testing how robust the GNU library hardware-overflow-check routines are to this issue.  It doesn't uncover anything really, and that may not be a surprise since those routines could be something much different from typical compiler behavior.  The program output here is:


@linux ~/octave/bug/54572 $ gcc -O2 -std=c++11 overflow_check.cc -lstdc++
@linux ~/octave/bug/54572 $ ./a.out
32-bit decrement: -2147483648 -> 2147483647
64-bit decrement: -9223372036854775808 -> 9223372036854775807
32-bit subtract: -2147483648 - 1 = 2147483647 (overflow)
32-bit subtract: -2147483647 - 1 = -2147483648 (no overflow)
64-bit subtract: -9223372036854775808 - 1 = 9223372036854775807 (overflow)
64-bit subtract: -9223372036854775807 - 1 = -9223372036854775808 (no overflow)
@linux ~/octave/bug/54572 $ gcc --versiongcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


The problem, from the standpoint of my comprehension, is that integer arithmetic typically wraps, and I don't know how that is being addressed in the software.  One method is to check beforehand whether the operation will saturate, by making sure the difference between the saturation limit and the first operand is greater than the second operand (or something like that).  Another way might be some type of C signal that catches overflows; if that is what is being used, I could certainly imagine a C++ optimizing compiler bug that fails to throw that signal.

(file #44889)

Dan Sebald <sebald>
Tue 28 Aug 2018 04:14:10 PM UTC, comment #11: 

If somebody can make a C++ test case I could try to file a bug report on Fedora...

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 28 Aug 2018 03:58:44 PM UTC, comment #10: 

This looks to be compiler-dependent, as well as compiler-option-dependent.  I have a development version of Octave compiled with gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10) and the option '-O2'.  This version exhibits the bad behavior.  I have another tree where I have compiled the same code with the same compiler, but with the options '-ggdb3 -O0' (for debugging).  This version works.  This may be a nasty upstream bug where certain optimizations are not being done correctly.

Rik <rik5>
Group administrator
Tue 28 Aug 2018 07:25:54 AM UTC, comment #9: 


octave:4> __octave_config_info__ ("CC")
ans = gcc
octave:5> exit
@linux ~/ $ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609


I wonder if this is just that floating point 53-bit mantissa (significand) issue when converting to int64:


octave:14> int64(-9223372036854775808)
ans = -9223372036854775808
octave:15> int64(-9223372036854775807)
ans = -9223372036854775808
octave:16> int64(-9223372036854775806)
ans = -9223372036854775808
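
(For reference: a double has a 53-bit significand, and for magnitudes in [2^62, 2^63) adjacent doubles are 2^10 = 1024 apart.  So -9223372036854775807 = -(2^63 - 1) is not representable; the nearest double is -2^63 itself, which is why all three literals above come back as intmin.)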


Then again, maybe not:


octave:45> x = intmin("int64")
x = -9223372036854775808
octave:46> --x
ans = 9223372036854775807
octave:47> class(x)
ans = int64


Dan Sebald <sebald>
Tue 28 Aug 2018 06:55:26 AM UTC, comment #8: 

So with a clang-compiled binary I get correct (?) results:


octave:4> (intmin('int64') - 1)
ans = -9223372036854775808
octave:5> (intmin('int64') - 2)
ans = -9223372036854775808
octave:6> (intmin('int64') - 3)
ans = -9223372036854775808
octave:7> intmin ('int64') - int64 (1)
ans = -9223372036854775808
octave:8> intmin ('int64') - int64 (2)
ans = -9223372036854775808
octave:9> intmin ('int64') - int64 (3)
ans = -9223372036854775808
octave:10> __octave_config_info__ ("hg_id")
ans = 8b548f2f8086
octave:11>
octave:11> __octave_config_info__ ("CC")
ans = clang


clang version 6.0.1 (tags/RELEASE_601/final)

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 28 Aug 2018 06:48:27 AM UTC, comment #7: 

Hold on, wrong type; I chose uint64, not int64...  OK, that's better:


octave:2> intmin ('int64') - 1
I am in INT_DOUBLE_BINOP_DECL (-, int64)
x = -9223372036854775808
(-y) = -1
ans = 9223372036854775807
octave:3> intmin ('int64') - int64 (1)
ans = 9223372036854775807


but notice that the variation you inquired about doesn't go into that routine.  So this is happening somewhere else, maybe after the macro instantiation?

Dan Sebald <sebald>
Tue 28 Aug 2018 06:46:49 AM UTC, comment #6: 

I get the same results as Dan


octave:1> (intmin('int64') - 1)
ans = 9223372036854775807
octave:2> (intmin('int64') - 2)
ans = 9223372036854775807
octave:3> (intmin('int64') - 3)
ans = 9223372036854775807
octave:4> intmin ('int64') - int64 (1)
ans = 9223372036854775807
octave:5> intmin ('int64') - int64 (2)
ans = 9223372036854775807
octave:6> intmin ('int64') - int64 (3)
ans = 9223372036854775807

octave:7> __octave_config_info__ ("hg_id")
ans = 8b548f2f8086


gcc version 8.1.1 20180712 (Red Hat 8.1.1-5) (GCC)

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 28 Aug 2018 06:34:43 AM UTC, comment #5: 

I'll compile the latest code while typing...

Yes, it may be the non-symmetry of ints that is at issue.  However, this fails as well:

octave:6> int64 (-2) - intmax ('int64')
ans = 9223372036854775807
octave:5> int64 (-1) - intmax ('int64')
ans = -9223372036854775808

so it wouldn't seem that in both cases (-y) is negating the higher-magnitude extremum.  If that were the case, it would be a compiler bug.

... OK, recompiled, and I added the following:


diff --git a/liboctave/util/oct-inttypes.cc b/liboctave/util/oct-inttypes.cc
--- a/liboctave/util/oct-inttypes.cc
+++ b/liboctave/util/oct-inttypes.cc
@@ -28,6 +28,7 @@ along with Octave; see the file COPYING.
 #include "fpucw-wrappers.h"
 #include "lo-error.h"
 #include "oct-inttypes.h"
+#include <iostream>

 template <typename T>
 const octave_int<T> octave_int<T>::zero (static_cast<T> (0));
@@ -480,6 +481,9 @@ DOUBLE_INT_BINOP_DECL (+, int64)

 INT_DOUBLE_BINOP_DECL (-, uint64)
 {
+std::cerr << "I AM HERE\n";
+std::cerr << "x = " << x << "\n";
+std::cerr << "(-y) = " << (-y) << "\n";
   return x + (-y);
 }


However, I'm not seeing the output when testing:


octave:3> intmin ('int64') - int64 (1)
ans = 9223372036854775807
octave:4> intmin ('int64') - int64 (2)
ans = 9223372036854775807
octave:5> intmin ('int64') - int64 (3)
ans = 9223372036854775807


Strange.  I will investigate...

Dan Sebald <sebald>
Tue 28 Aug 2018 06:01:59 AM UTC, comment #4: 

Hmm, it seems to work for me:


octave:1> (intmin('int64') - 1)
ans = -9223372036854775808
octave:2> (intmin('int64') - 2)
ans = -9223372036854775808
octave:3> (intmin('int64') - 3)
ans = -9223372036854775808


What happens for you with the following?


intmin ('int64') - int64 (1)
intmin ('int64') - int64 (2)
intmin ('int64') - int64 (3)


The code for the mixed int64/double operations is in oct-inttypes.cc.  The code for the int64 - double is just


INT_DOUBLE_BINOP_DECL (-, int64)
{
  return x + (-y);
}


in which X is an octave_int64 object and Y is a double value.  The operator + function tries to be careful about valid ranges, but is there trouble here because the range of signed integers is not symmetric?  Or is there something else going on?  What code path is taken on your system?

John W. Eaton <jwe>
Group administrator
Tue 28 Aug 2018 04:49:07 AM UTC, comment #3: 

Err, multiplication is fine (I subtracted when I should have added 5):


octave:3> (intmin('int32') + 5) * 10
ans = -2147483648
octave:4> (intmin('int64') + 5) * 10
ans = -9223372036854775808


So it is just the original example that is the issue.

Dan Sebald <sebald>
Tue 28 Aug 2018 03:59:49 AM UTC, comment #2: 

Multiplication does the same:


octave:1> (intmin('int32') - 5) * 10
ans = -2147483648
octave:2> (intmin('int64') - 5) * 10
ans = 9223372036854775807


This is probably a one- or two-line change somewhere, but the problem is finding where that is.  grep'ing for INT64_MAX and variants doesn't seem to lead anywhere.  Also, I recall JWE making all these int class arithmetic operations consistent many years ago, so I searched the bug reports and change history for things like int64, wrap, overflow, etc., and again that didn't lead anywhere.

If you have the debugger working on your system, perhaps you could step into one of these commands and follow where limit saturation is done.

Dan Sebald <sebald>
Mon 27 Aug 2018 04:54:11 PM UTC, comment #1: 

Oh boy, that's a big fail.  I suppose it's because we don't have full BIST coverage.  Marking as confirmed.

Rik <rik5>
Group administrator
Sun 26 Aug 2018 06:27:09 AM UTC, original submission:  

This seems odd:


octave:169> intmin('int64') - 0
ans = -9223372036854775808
octave:170> intmin('int64') - 1
ans = 9223372036854775807
octave:171> intmin('int64') - 2
ans = 9223372036854775807
octave:172> intmin('int64') - 3
ans = 9223372036854775807


However, int32 behaves as expected:


octave:177> intmin('int32') - 0
ans = -2147483648
octave:178> intmin('int32') - 1
ans = -2147483648
octave:179> intmin('int32') - 2
ans = -2147483648
octave:180> intmin('int32') - 3
ans = -2147483648


Dan Sebald <sebald>

 


Attached Files
file #44898:  diffs.txt added by jwe (1KiB - text/plain)
file #44892:  int-overflow.rik.cc added by rik5 (7KiB - text/x-c++src)
file #44891:  int-overflow-2.cc added by jwe (1004B - text/x-c++src)
file #44890:  int-overflow.cc added by jwe (7KiB - text/x-c++src)
file #44889:  overflow_check.cc added by sebald (2KiB - text/x-c++src)

 



17 latest changes:

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2018-09-06  rik5        Status          Confirmed => Fixed
                            Open/Closed     Open => Closed
    2018-09-01  sebald      Attached File   Added octave-int_remove_goto-djs2018aug31.patch, #44921
    2018-09-01  sebald      Attached File   Added simpler_int_mult-djs2018aug31.diff, #44919
                            Attached File   Added octave-remove_int_use_long_double-djs2018aug31.patch, #44920
    2018-08-30  jwe         Attached File   Added diffs.txt, #44898
                            Status          Fixed => Confirmed
                            Open/Closed     Closed => Open
    2018-08-29  sebald      Attached File   Added octave-inline_some_int_arithmetic-djs2018aug29.patch, #44897
    2018-08-29  rik5        Status          Confirmed => Fixed
                            Open/Closed     Open => Closed
    2018-08-29  sebald      Attached File   Added int64_overflow_proof_of_concept.diff, #44893
    2018-08-28  rik5        Attached File   Added int-overflow.rik.cc, #44892
    2018-08-28  jwe         Attached File   Added int-overflow-2.cc, #44891
    2018-08-28  jwe         Attached File   Added int-overflow.cc, #44890
    2018-08-28  sebald      Attached File   Added overflow_check.cc, #44889
    2018-08-27  rik5        Status          None => Confirmed