bug #57591: Segmentation faults when running the test suite

Submitter:  Markus Mützel <mmuetzel>
Submitted:  Mon 13 Jan 2020 03:34:54 PM UTC

Category:  Test Suite
Severity:  4 - Important
Priority:  5 - Normal
Item Group:  Segfault, Bus Error, etc.
Status:  Confirmed
Assigned to:  None
Originator Name:
Open/Closed:  Open
Release:  dev
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Thu 02 Jun 2022 10:13:14 AM UTC, comment #179: 

It has been some time since the test suite last crashed at the test for gmres.m. But it happened again here on the clang-debian builder:
http://buildbot.octave.org:8010/#/builders/32/builds/1365/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Fri 24 Dec 2021 10:30:34 AM UTC, comment #178: 

The crash occurred on a Windows runner on GitHub:
https://github.com/gnu-octave/octave/runs/4625249100?check_suite_focus=true

The error is a little bit more verbose on that platform:

  sparse\etreeplot.m ............................................. pass    2/2
Magick: caught exception 0xC0000005 "Access violation"...octave: No error
  sparse\gmres.m .................................................
Error: Process completed with exit code -1073741819.


Something that we do with the GraphicsMagick library might be causing the intermittent segmentation faults. Not sure how or why that library would be used at all in the tests in sparse/gmres.m though...

Maybe we aren't cleaning up correctly in a prior test and something in this test is triggering it?

Markus Mützel <mmuetzel>
Group administrator
Thu 02 Dec 2021 08:19:17 PM UTC, comment #177: 
Markus Mützel <mmuetzel>
Group administrator
Thu 02 Dec 2021 07:57:04 PM UTC, comment #176: 

The whole lifetime thingy might be a red herring. And I'm not sure if I correctly understand:
https://en.cppreference.com/w/cpp/language/lifetime

> [...] the following uses of the glvalue expression that identifies that object are undefined, unless the object is being constructed or destructed (separate set of rules applies):
>
> 1. Lvalue to rvalue conversion (e.g. function call to a function that takes a value).


Perusing the code that is called in the tests for gmres:
https://hg.savannah.gnu.org/hgweb/octave/file/b8c7c550e86f/libinterp/corefcn/cellfun.cc#l2469

            retcell = do_cellslices_nda (x.array_value (),
                                         lb, ub, dim);


Does x.array_value () constitute a glvalue expression? Is calling a function with that lvalue causing undefined behavior?
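
For illustration, a minimal standalone sketch of the rule in question (hypothetical types, not Octave's actual classes). Passing a temporary to a function is well-defined, because the temporary lives until the end of the full-expression; what the quoted rule forbids is using a glvalue that refers to an object whose lifetime has already ended, e.g. via a pointer kept past the statement:

#include <iostream>
#include <vector>

struct Value
{
  // Stand-in for octave_value: returns the data by value, as a temporary.
  std::vector<double> array_value () const { return {1.0, 2.0, 3.0}; }
};

// A callee that takes its argument by const reference.
double sum (const std::vector<double>& v)
{
  double s = 0.0;
  for (double d : v)
    s += d;
  return s;
}

int main ()
{
  Value x;

  // Well-defined: the temporary returned by array_value () lives until the
  // end of the full-expression, i.e. past the call to sum.
  std::cout << sum (x.array_value ()) << '\n';

  // Dangerous: the temporary is destroyed at the end of this statement,
  // so p dangles afterwards.
  const double* p = x.array_value ().data ();
  (void) p;  // holding the dangling pointer is harmless; dereferencing it is not

  return 0;
}

Under that reading, passing `x.array_value ()` straight into `do_cellslices_nda` should by itself be fine; it would only go wrong if a reference into the temporary were retained beyond the call.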

Markus Mützel <mmuetzel>
Group administrator
Thu 02 Dec 2021 07:00:15 PM UTC, comment #175: 

Another crash during the tests of gmres:
http://buildbot.octave.org:8010/#/builders/36/builds/285/steps/7/logs/stdio

Looks like that wasn't it either...

Markus Mützel <mmuetzel>
Group administrator
Tue 30 Nov 2021 10:04:41 AM UTC, comment #174: 

I pushed a change to stable that might fix a possible issue with the lifetime of a temporary variable in mgorth:
https://hg.savannah.gnu.org/hgweb/octave/rev/4f8284dee449

Maybe that fixes the intermittent crashes during the tests of `sparse/gmres.m`.

The scope of temporary variables is sometimes hard to predict (at least for me). And I'm not sure if this really was an issue. But the syntax looked slightly suspicious.
Imho, it is often easier (and cleaner) to use an explicit variable (with defined scope) to avoid potential lifetime issues.
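
For illustration, a hedged sketch of the two patterns (hypothetical names, not the actual mgorth code):

#include <vector>

// Stand-in for any call chain that yields a temporary.
std::vector<double> make_data () { return {1.0, 2.0, 3.0}; }

void suspicious ()
{
  // The temporary returned by make_data () is destroyed at the end of this
  // statement; p dangles from then on.
  const double* p = make_data ().data ();
  (void) p;
}

void explicit_scope ()
{
  // Named variable with a defined scope: data (and therefore p) stays valid
  // until the end of the enclosing block.
  std::vector<double> data = make_data ();
  const double* p = data.data ();
  (void) p;
}

int main ()
{
  suspicious ();
  explicit_scope ();
  return 0;
}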

Let's see if this makes a difference at all.
Marking as ready for test (again).

Markus Mützel <mmuetzel>
Group administrator
Sat 30 Oct 2021 08:31:59 PM UTC, comment #173: 

Assuming the crash occurred during the tests for gmres, I can't see where that would plot anything. But I might be missing something...

Markus Mützel <mmuetzel>
Group administrator
Sat 30 Oct 2021 08:26:07 PM UTC, comment #172: 

Those happen during plots. A problem with xvfb-run again?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 30 Oct 2021 08:05:21 PM UTC, comment #171: 

It looks like this still occurs occasionally on jwe's new buildbots:
http://buildbot.octave.org:8010/#/builders/32/builds/960/steps/7/logs/stdio

  sparse/etreeplot.m ............................................. pass    2/2
fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 78967 Segmentation fault      /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build/../src/test
make[3]: *** [Makefile:32234: check-local] Error 139
  sparse/gmres.m .................................................make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build'
make[2]: *** [Makefile:28334: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build'
make[1]: *** [Makefile:28036: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-1/clang-debian/build'
make: *** [Makefile:28336: check] Error 2
program finished with exit code 2


Markus Mützel <mmuetzel>
Group administrator
Fri 30 Jul 2021 06:29:55 AM UTC, comment #170: 

Also re-appeared on jwe's buildbot's:
http://buildbot.octave.org:8010/#/builders/31/builds/776/steps/7/logs/stdio


  sparse/etreeplot.m ............................................. pass    2/2
fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 4069113 Segmentation fault      /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build/../src/test
make[3]: *** [Makefile:32312: check-local] Error 139
  sparse/gmres.m .................................................make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build'
make[2]: *** [Makefile:28396: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build'
make[1]: *** [Makefile:28098: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/gcc-lto-debian/build'
make: *** [Makefile:28398: check] Error 2
program finished with exit code 2


Markus Mützel <mmuetzel>
Group administrator
Sun 18 Jul 2021 05:51:11 PM UTC, comment #169: 

Here is what I see for blas:


devnull> ldd /usr/local/octave/dev/libexec/octave/7.0.0/exec/x86_64-pc-linux-gnu/octave-gui | grep blas
        libblas.so.3 => /usr/lib/x86_64-linux-gnu/libblas.so.3 (0x00007fda3f660000)
        libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 (0x00007fda3bd40000)

devnull> ls -l /usr/lib/x86_64-linux-gnu/libblas.so.3
lrwxrwxrwx 1 root root 47 Jan 11  2019 /usr/lib/x86_64-linux-gnu/libblas.so.3 -> /etc/alternatives/libblas.so.3-x86_64-linux-gnu

devnull> ls -l /etc/alternatives/libblas.so.3-x86_64-linux-gnu
lrwxrwxrwx 1 root root 54 May 20  2020 /etc/alternatives/libblas.so.3-x86_64-linux-gnu -> /usr/lib/x86_64-linux-gnu/openblas-serial/libblas.so.3

devnull> ls -l /usr/lib/x86_64-linux-gnu/openblas-serial/libblas.so.3
-rw-r--r-- 1 root root 317712 Apr 18 04:36 /usr/lib/x86_64-linux-gnu/openblas-serial/libblas.so.3

devnull> ls -l /usr/lib/x86_64-linux-gnu/libopenblas.so.0
lrwxrwxrwx 1 root root 51 Nov  4  2019 /usr/lib/x86_64-linux-gnu/libopenblas.so.0 -> /etc/alternatives/libopenblas.so.0-x86_64-linux-gnu

devnull> ls -l /etc/alternatives/libopenblas.so.0-x86_64-linux-gnu
lrwxrwxrwx 1 root root 59 Nov  4  2019 /etc/alternatives/libopenblas.so.0-x86_64-linux-gnu -> /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so.0

devnull> ls -l /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so.0
lrwxrwxrwx 1 root root 23 Apr 18 04:36 /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so.0 -> libopenblasp-r0.3.13.so


so there do appear to be two different BLAS libraries involved.  I've been configuring with --with-blas=-lblas.  I thought that would do the right thing with the Debian alternatives mechanism.

For lapack I see


devnull> ldd /usr/local/octave/dev/libexec/octave/7.0.0/exec/x86_64-pc-linux-gnu/octave-gui | grep lapack
        liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 (0x00007fdd8c978000)

devnull> ls -l /usr/lib/x86_64-linux-gnu/liblapack.so.3
lrwxrwxrwx 1 root root 49 Jan 11  2019 /usr/lib/x86_64-linux-gnu/liblapack.so.3 -> /etc/alternatives/liblapack.so.3-x86_64-linux-gnu

devnull> ls -l /etc/alternatives/liblapack.so.3-x86_64-linux-gnu
lrwxrwxrwx 1 root root 56 May 20  2020 /etc/alternatives/liblapack.so.3-x86_64-linux-gnu -> /usr/lib/x86_64-linux-gnu/openblas-serial/liblapack.so.3

devnull> ls -l /usr/lib/x86_64-linux-gnu/openblas-serial/liblapack.so.3
-rw-r--r-- 1 root root 6920304 Apr 18 04:36 /usr/lib/x86_64-linux-gnu/openblas-serial/liblapack.so.3


John W. Eaton <jwe>
Group administrator
Sun 18 Jul 2021 03:26:41 AM UTC, comment #168: 

I noticed calls to openblas, blas, and lapack. Could this be a problem?

What do you get for

ldd src/.libs/octave-cli  | grep blas

and

ldd src/.libs/octave-cli  | grep lapack

?

I have:


ldd src/.libs/octave-cli  | grep blas
        libopenblaso.so.0 => /lib64/libopenblaso.so.0 (0x00007fa9ef9c7000)
 ldd src/.libs/octave-cli  | grep lapack
(returns nothing)


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sun 18 Jul 2021 12:10:01 AM UTC, comment #167: 

I'm reopening, at least temporarily, because after all this time I finally encountered a crash in the gmres.m tests while running "make check" from the command line, and I happened to have core files enabled:


  ...
  sparse/gmres.m .................................................fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 2189756 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /home/jwe/build/octave/test/mex /home/jwe/src/octave/test/fntests.m /home/jwe/src/octave/test
make[3]: *** [Makefile:32312: check-local] Error 139
make[3]: Leaving directory '/net/devnull/scratch/jwe/build/octave'
make[2]: *** [Makefile:28396: check-am] Error 2
make[2]: Leaving directory '/net/devnull/scratch/jwe/build/octave'
make[1]: *** [Makefile:28098: check-recursive] Error 1
make[1]: Leaving directory '/net/devnull/scratch/jwe/build/octave'
make: *** [Makefile:28398: check] Error 2


If this were a problem with the Linux memory manager I would expect the process to be killed, not to stop with a segfault.

I have the following OpenBLAS packages installed on the system where the crash happened:


ii  libopenblas-dev:amd64         0.3.13+ds-3  amd64        Optimized BLAS (linear algebra) library (dev, meta)
ii  libopenblas-pthread-dev:amd64 0.3.13+ds-3  amd64        Optimized BLAS (linear algebra) library (dev, pthread)
ii  libopenblas0:amd64            0.3.13+ds-3  amd64        Optimized BLAS (linear algebra) library (meta)
ii  libopenblas0-pthread:amd64    0.3.13+ds-3  amd64        Optimized BLAS (linear algebra) library (shared lib, pthread)
ii  libopenblas0-serial:amd64     0.3.13+ds-3  amd64        Optimized BLAS (linear algebra) library (shared lib, serial)


These should be the same on all of my buildbot systems.

Here is the top of the call stack at the point of the crash:


(gdb) where
#0  0x00007fd339cf804c in dgemm_incopy_PILEDRIVER () at /usr/lib/x86_64-linux-gnu/libopenblas.so.0
#1  0x00007fd33ac45830 in gotoblas () at /usr/lib/x86_64-linux-gnu/libopenblas.so.0
#2  0x0000000000000001 in  ()
#3  0x00007fd338b7b345 in dgemm_tn () at /usr/lib/x86_64-linux-gnu/libopenblas.so.0
#4  0x00007fd33c2c5908 in dgemm_ () at /usr/lib/x86_64-linux-gnu/libblas.so.3
#5  0x00007fd33cb99f1c in dlarfb_ () at /usr/lib/x86_64-linux-gnu/liblapack.so.3
#6  0x00007fd33cbd7909 in dormqr_ () at /usr/lib/x86_64-linux-gnu/liblapack.so.3
#7  0x00007fd33cbd560a in dormbr_ () at /usr/lib/x86_64-linux-gnu/liblapack.so.3
#8  0x00007fd33cb1e778 in dgelsd_ () at /usr/lib/x86_64-linux-gnu/liblapack.so.3
#9  0x00007fd33ff37b7e in Matrix::lssolve(Matrix const&, long&, long&, double&) const
    (this=0x7fd2f3fe1fb0, b=<optimized out>, info=@0x7fd2f3fe1f30: 0, rank=@0x7fd2f3fe1e68: 0, rcon=@0x7fd2f3fe1f38: -1)
    at /home/jwe/src/octave/liboctave/array/dMatrix.cc:2051
#10 0x00007fd33ff3829b in Matrix::solve(MatrixType&, Matrix const&, long&, double&, void (*)(double), bool, blas_trans_type) const
    (this=this@entry=0x7fd2f3fe1fb0, mattype=..., b=
    ..., info=@0x7fd2f3fe1f30: 0, rcon=@0x7fd2f3fe1f38: -1, sing_handler=0x7fd3418606f0 <solve_singularity_warning(double)>, singular_fallback=true, transt=blas_no_trans) at /home/jwe/src/octave/liboctave/array/dMatrix.cc:1625
#11 0x00007fd341862093 in xleftdiv(Matrix const&, Matrix const&, MatrixType&, blas_trans_type) (a=..., b=..., typ=..., transt=transt@entry=blas_no_trans)
    at /home/jwe/src/octave/libinterp/corefcn/xdiv.cc:353
#12 0x00007fd340ebc7df in oct_binop_ldiv(octave_base_value const&, octave_base_value const&) (a1=<optimized out>, a2=...)
    at /home/jwe/src/octave/libinterp/operators/op-m-m.cc:91
#13 0x00007fd341210f34 in octave::binary_op(octave::type_info&, octave_value::binary_op, octave_value const&, octave_value const&)
    (ti=..., op=octave_value::op_ldiv, v1=..., v2=...) at /home/jwe/src/octave/libinterp/octave-value/ov.h:1417
#14 0x00007fd3412b4413 in octave::tree_binary_expression::evaluate(octave::tree_evaluator&, int) (this=0x7fd1bd2b2e40, tw=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-binop.cc:140
#15 0x00007fd3412b02aa in octave::tree_simple_assignment::evaluate(octave::tree_evaluator&, int) (this=0x7fd1bd2b2ed0, tw=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-assign.cc:101
#16 0x00007fd3412cd89d in octave::tree_evaluator::visit_statement(octave::tree_statement&) (this=0x7fd2e8005878, stmt=<optimized out>)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:3772
#17 0x00007fd3412bbc04 in octave::tree_statement::accept(octave::tree_walker&) (tw=..., this=0x7fd1bd2b2f10)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-stmt.h:124
#18 octave::tree_evaluator::visit_statement_list(octave::tree_statement_list&) (this=0x7fd2e8005878, lst=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:3857
#19 0x00007fd3412ccc5f in octave::tree_statement_list::accept(octave::tree_walker&) (tw=..., this=<optimized out>)
--Type <RET> for more, q to quit, c to continue without paging--
    at /home/jwe/src/octave/libinterp/parse-tree/pt-stmt.h:201
#20 octave::tree_evaluator::visit_while_command(octave::tree_while_command&) (this=0x7fd2e8005878, cmd=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:4190
#21 0x00007fd3412cd806 in octave::tree_evaluator::visit_statement(octave::tree_statement&) (this=0x7fd2e8005878, stmt=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:3747
#22 0x00007fd3412bbc04 in octave::tree_statement::accept(octave::tree_walker&) (tw=..., this=0x7fd1bd2ba5f0)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-stmt.h:124
#23 octave::tree_evaluator::visit_statement_list(octave::tree_statement_list&) (this=0x7fd2e8005878, lst=...)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:3857
#24 0x00007fd3412c4ef7 in octave::tree_statement_list::accept(octave::tree_walker&) (tw=..., this=0x7fd1bd25fc30)
    at /home/jwe/src/octave/libinterp/parse-tree/pt-stmt.h:201
#25 octave::tree_evaluator::execute_user_function(octave_user_function&, int, octave_value_list const&) (this=this@entry=0x7fd2e8005878, user_function=
    ..., nargout=nargout@entry=2, xargs=...) at /home/jwe/src/octave/libinterp/parse-tree/pt-eval.cc:3503


The Matrix::lssolve stack frame shows


#9  0x00007fd33ff37b7e in Matrix::lssolve (this=0x7fd2f3fe1fb0, b=..., info=@0x7fd2f3fe1f30: 0, rank=@0x7fd2f3fe1e68: 0, rcon=@0x7fd2f3fe1f38: -1)
    at /home/jwe/src/octave/liboctave/array/dMatrix.cc:2051
2051                  F77_XFCN (dgelsd, DGELSD, (m, n, nrhs, tmp_data, m, pretval,
(gdb) list
2046                  rcon = octave::numeric_limits<double>::NaN ();
2047                  retval = Matrix (n, b_nc, octave::numeric_limits<double>::NaN ());
2048                }
2049              else
2050                {
2051                  F77_XFCN (dgelsd, DGELSD, (m, n, nrhs, tmp_data, m, pretval,
2052                                             maxmn, ps, rcon, tmp_rank,
2053                                             work.fortran_vec (), lwork,
2054                                             piwork, tmp_info));
2055
(gdb) p m
$1 = 76
(gdb) p n
$2 = 75
(gdb) p nrhs
$3 = 1
(gdb) p tmp_data
$4 = <optimized out>
(gdb) p m
$5 = 76
(gdb) p maxmn
$6 = 76
(gdb) p ps
$7 = <optimized out>
(gdb) p rcon
$8 = (double &) @0x7fd2f3fe1f38: -1
(gdb) p lwork
$9 = 6601
(gdb) p piwork
$10 = <optimized out>


In lssolve, I see that we want to query the LAPACK functions for the workspace requirements, but there are comments there about the calculation being broken in some versions of LAPACK, so we also compute the sizes ourselves.  Are the formulas correct?  I can't tell from the info on the stack after the crash whether we end up using our own calculated values or the ones that LAPACK provides.  Our calculation is complicated, so it seems quite possible that it is incorrect, or that the requirements may have changed since this code was written.

If it is not a problem with the workspace size that results in a memory fault, then the problem begins to look more like a bug in DGELSD or OpenBLAS.
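
For reference, a minimal sketch of the LAPACK workspace-query convention that the code could double-check its formulas against: calling DGELSD with lwork = -1 makes it return the optimal WORK size in work[0] and the required IWORK size in iwork[0]. The extern "C" prototype below is the conventional Fortran binding, not Octave's F77_XFCN wrapper, and lssolve_sketch is a hypothetical caller with all error handling omitted:

#include <algorithm>
#include <vector>

// Conventional C binding of LAPACK's DGELSD (all arguments passed by pointer).
extern "C" void dgelsd_ (const int* m, const int* n, const int* nrhs,
                         double* a, const int* lda, double* b, const int* ldb,
                         double* s, const double* rcond, int* rank,
                         double* work, const int* lwork, int* iwork, int* info);

// Minimum-norm least-squares solve via DGELSD, asking LAPACK itself for the
// workspace sizes.  The caller provides a (m x n, column-major), b (ldb x nrhs,
// overwritten with the solution), and s (min (m, n) singular values).
void lssolve_sketch (int m, int n, int nrhs, double* a, double* b, double* s)
{
  const int lda = m;
  const int ldb = std::max (m, n);
  const double rcond = -1.0;
  int rank = 0;
  int info = 0;

  // Workspace query: with lwork = -1, DGELSD does no actual work; it only
  // reports the optimal WORK size and the required IWORK size.
  double work_size = 0.0;
  int iwork_size = 0;
  int lwork = -1;
  dgelsd_ (&m, &n, &nrhs, a, &lda, b, &ldb, s, &rcond, &rank,
           &work_size, &lwork, &iwork_size, &info);

  // Allocate what LAPACK asked for, then do the real call.
  lwork = static_cast<int> (work_size);
  std::vector<double> work (lwork);
  std::vector<int> iwork (std::max (iwork_size, 1));
  dgelsd_ (&m, &n, &nrhs, a, &lda, b, &ldb, s, &rcond, &rank,
           work.data (), &lwork, iwork.data (), &info);
}

If the comments in dMatrix.cc are right that this query is broken in some LAPACK versions, the hand-computed fallback can't simply be dropped; but comparing the two values at run time might reveal whether the hand formulas have drifted from the current requirements.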

This crash occurred after my changes here:

http://hg.savannah.gnu.org/hgweb/octave/rev/3ab696e02f55

but I don't think those have anything to do with this crash as they are completely unrelated to matrix calculations.

With some effort, I might be able to use gdb to extract the data necessary to recreate the call to lssolve that resulted in the crash, but if this is a random memory issue the crash may still not be easily reproducible.

John W. Eaton <jwe>
Group administrator
Thu 15 Jul 2021 06:38:17 AM UTC, comment #166: 

This report is already quite long. For the sake of a better overview, I'm closing it as fixed, because the last recurring crash discussed here is most likely fixed now.

If it turns out that this is not the case, we can re-open.

If other segmentation faults should occur while running the test suite in the future, we can open a new report. (E.g., for the crash with OpenBLAS on CentOS Stream?)

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Jul 2021 06:30:35 AM UTC, comment #165: 

Cross-referencing a thread on Discourse that is relevant for comment #163:
https://octave.discourse.group/t/bists-with-high-memory-consumption/1275

Markus Mützel <mmuetzel>
Group administrator
Sun 04 Jul 2021 09:56:07 AM UTC, comment #164: 

IIUC, that test requires 800 MB of (contiguous) free memory for the double vector `resvec` (because `dim=100` and `maxit=1e6`, i.e. 100 * 1e6 elements * 8 bytes/double = 800 MB). That seems a lot.

I pushed a change here that reduces the size of the `resvec` vector to 80 MB (which is still quite a lot, but less than before):
https://hg.savannah.gnu.org/hgweb/octave/rev/b2455f0a8297

The test still passes for me with `maxit` set to 1e5. But I don't know whether there was some reason that required `maxit` to be as high in the first place. So, I made that change on the default branch.
If that change is safe, we could maybe graft it to stable.

Markus Mützel <mmuetzel>
Group administrator
Sun 20 Jun 2021 09:28:34 AM UTC, comment #163: 

The Windows 32bit runners on GitHub are failing this test with the following error:

>>>>> processing C:\msys64\mingw32\share\octave\7.0.0\m\sparse\gmres.m
***** test
 dim = 100;
 A = spdiags ([[1./(2:2:2*(dim-1)) 0]; 1./(1:2:2*dim-1); ...
 [0 1./(2:2:2*(dim-1))]]', -1:1, dim, dim);
 A = A'*A;
 b = rand (dim, 1);
 [x, resvec] = gmres (@(x) A*x, b, dim, 1e-10, dim,...
                      @(x) x./diag (A), [], []);
 assert (x, A\b, 1e-9*norm (x, Inf));
 [x, flag] = gmres (@(x) A*x, b, dim, 1e-10, 1e6,...
                    @(x) diag (diag (A)) \ x, [], []);
 assert (x, A\b, 1e-9*norm (x, Inf));
 [x, flag] = gmres (@(x) A*x, b, dim, 1e-10, 1e6,...
                    @(x) x ./ diag (A), [], []);
 assert (x, A\b, 1e-7*norm (x, Inf));
!!!!! test failed
out of memory or dimension too large for Octave's index type


So, it could be the Linux OOM killer at work here: on Linux, where memory is overcommitted, the large allocation can succeed and the process only dies when the pages are actually touched, whereas the 32-bit Windows runner fails cleanly at allocation time.

Markus Mützel <mmuetzel>
Group administrator
Fri 18 Jun 2021 03:52:04 PM UTC, comment #162: 

Again with sparse/gmres.m as the last test in the log (on gcc-debian):
http://buildbot.octave.org:8010/#/builders/33/builds/676/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Tue 08 Jun 2021 08:35:39 AM UTC, comment #161: 
Markus Mützel <mmuetzel>
Group administrator
Tue 08 Jun 2021 08:01:44 AM UTC, comment #160: 

Since some of the traces contained Java functions, could it be related to this bug that is fixed in newer OpenBLAS versions?
https://fossies.org/linux/OpenBLAS/Changelog.txt

> Version 0.3.11
> 17-Oct-2020
>
> * Reduced the default BLAS3_MEM_ALLOC_THRESHOLD (used as an upper
>   limit for placing temporary arrays on the stack) to be compatible
>   with a stack size of 1mb (as imposed by the JAVA runtime library)


(Tbh, I don't know what that exactly means.)

Which version of OpenBLAS is running on the buildbot builders?
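
One way to answer that from code, assuming the loaded library really is OpenBLAS and exports its (non-standard) openblas_get_config entry point, is a sketch like this, linked with -lopenblas:

#include <cstdio>

// OpenBLAS-specific entry point; not part of the standard BLAS interface.
extern "C" const char* openblas_get_config (void);

int main ()
{
  // Recent OpenBLAS releases include the version number in this string,
  // e.g. "OpenBLAS 0.3.x ... MAX_THREADS=...".
  std::printf ("%s\n", openblas_get_config ());
  return 0;
}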

Markus Mützel <mmuetzel>
Group administrator
Tue 08 Jun 2021 07:35:45 AM UTC, comment #159: 

Another one with "sparse/gmres.m" as the last line in the logs. This time on stable-gcc-lto-debian:
http://buildbot.octave.org:8010/#/builders/35/builds/231/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Sat 15 May 2021 02:54:03 AM UTC, comment #158: 

Another one:

http://buildbot.octave.org:8010/#/builders/33/builds/600

John, what happens if you do

coredumpctl -1 info

as either root or buildbot user?

I see the following:


[buildbotu@i7 ~]$ coredumpctl -1 info
Hint: You are currently not seeing messages from other users and the system.
      Users in groups 'adm', 'systemd-journal', 'wheel' can see all messages.
      Pass -q to turn off this notice.
           PID: 2498206 (lt-octave-gui)
           UID: 1002 (buildbotu)
           GID: 1002 (buildbotu)
        Signal: 11 (SEGV)
     Timestamp: Tue 2021-04-13 06:58:00 EDT (1 months 1 days ago)
  Command Line: /home/buildbotu/fc25-x86_64/clang-fedora/build/src/.libs/lt-octave-gui --no-init-path --path=/home/buildbotu/fc25-x86_64/clang-fedora/build/>
    Executable: /home/buildbotu/fc25-x86_64/clang-fedora/build/src/.libs/lt-octave-gui
 Control Group: /user.slice/user-1002.slice/user@1002.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-d9a5af14-5055-4776-940e-f8398b363930.scope
          Unit: user@1002.service
     User Unit: vte-spawn-d9a5af14-5055-4776-940e-f8398b363930.scope
         Slice: user-1002.slice
     Owner UID: 1002 (buildbotu)
       Boot ID: 402f28283d4d4ecab8f0a2fac6460af6
    Machine ID: 02cd525d327143c0a4c9a1f2d70e2b16
      Hostname: i7
       Storage: /var/lib/systemd/coredump/core.lt-octave-gui.1002.402f28283d4d4ecab8f0a2fac6460af6.2498206.1618311480000000.zst (missing)
       Message: Process 2498206 (lt-octave-gui) of user 1002 dumped core.

                Stack trace of thread 2498441:
                #0  0x00007f1fc926e74b n/a (/home/buildbotu/fc25-x86_64/clang-fedora/build/libinterp/.libs/liboctinterp.so.8.0.0 + 0x74474b)


It would have been more verbose if the core were still there, but it got cleaned off after a few days...

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 13 May 2021 07:13:35 PM UTC, comment #157: 

And if I build w/o java it still crashes:


May 13 15:09:49 ryzen systemd-coredump[1661457]: Process 1659460 (lt-octave-gui) of user 1001 dumped core.

                                                 Stack trace of thread 1659599:
                                                 #0  0x00007f0496a5a82a dgemm_ (libopenblas.so.0)
                                                 #1  0x00007f049d8d1714 n/a (/home/dima/src/octave/gcc_debug/liboctave/.libs/liboctave.so.8.0.1)
                                                 #2  0x00007f049d8d17b5 n/a (/home/dima/src/octave/gcc_debug/liboctave/.libs/liboctave.so.8.0.1)
                                                 #3  0x00007f04a204d6bb n/a (/home/dima/src/octave/gcc_debug/libinterp/.libs/liboctinterp.so.8.0.1)


Dmitri.
--


Dmitri A. Sergatskov <dasergatskov>
Thu 13 May 2021 06:48:28 PM UTC, comment #156: 

Here is a coredump (in parts; the whole core is 2.5 GB) and some info:



bug-52851/bug-52851.tst ........................................ pass    4/4
  bug-53027/bug-53027.tst ........................................ pass    5/5
  bug-53468/bug-53468.tst ........................................fatal: caught signal Segmentation fault -- stopping myself...
/bin/sh: line 1: 1501145 Segmentation fault      (core dumped) /bin/sh ../run-octave --no-init-file --silent --no-history -p /home/dima/src/octave/gcc_debug/test/mex /home/dima/src/octave/gcc_debug/../test/fntests.m /home/dima/src/octave/gcc_debug/../test
make[3]: *** [Makefile:31684: check-local] Error 139


From journalctl:



May 13 14:38:11 ryzen systemd-coredump[1503420]: Process 1501145 (lt-octave-gui) of user 1001 dumped core.

                                                 Stack trace of thread 1501286:
                                                 #0  0x00007f3f77db17fa pthread_sigmask (libpthread.so.0)
                                                 #1  0x00007f3f19fb09ec _ZN12PosixSignals15chained_handlerEiP9siginfo_tPv.part.6 (libjvm.so)
                                                 #2  0x00007f3f19fb153e JVM_handle_linux_signal (libjvm.so)
                                                 #3  0x00007f3f77db4b20 __restore_rt (libpthread.so.0)
                                                 #4  0x00007f3f7a05b82a dgemm_ (libopenblas.so.0)
                                                 #5  0x00007f3f80ed2714 n/a (/home/dima/src/octave/gcc_debug/liboctave/.libs/liboctave.so.8.0.1)
                                                 #6  0x00007f3f80ed27b5 n/a (/home/dima/src/octave/gcc_debug/liboctave/.libs/liboctave.so.8.0.1)
                                                 #7  0x00007f3f856686b7 n/a (/home/dima/src/octave/gcc_debug/libinterp/.libs/liboctinterp.so.8.0.1)
                                                 #8  0x00007f3f855f7b6d n/a (/home/dima/src/octave/gcc_debug/libinterp/.libs/liboctinterp.so.8.0.1)
                                                 #9  0x00007f3f85644d20 n/a (/home/dima/src/octave/gcc_debug/libinterp/.libs/liboctinterp.so.8.0.1)

<...deleted...>


This is with 7f5bd197fea6 (stable) on CentOS Stream
(approx. RHEL 8.4)

Dmitri.
--



Dmitri A. Sergatskov <dasergatskov>
Thu 13 May 2021 05:50:52 PM UTC, comment #155: 

It is a long shot, but which openblas library is on those Debian systems? In Fedora, openblas is serial, openblasp is pthread, and openblaso is OpenMP. I have seen occasional segfaults in odd places during make check on AMD/Ryzen systems when I used the default openblas library (none of them happens during individual tests). Lately it is on "make check" at bug-53468 (???).

It is not happening on Intel CPUs (Fedora's buildbots are on i7-2600 at the moment, but used to be on an AMD FX-8350).
The crash does not happen if I use the OpenMP interface
(I have a vague recollection that OpenMP is recommended for multithreaded applications so the threads don't clash).
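
As a diagnostic, the pthread build can also be pinned to one thread at run time, assuming the OpenBLAS-specific openblas_set_num_threads / openblas_get_num_threads extensions are exported (a sketch, linked with -lopenblas):

#include <cstdio>

// OpenBLAS-specific extensions; not part of the standard BLAS interface.
extern "C" void openblas_set_num_threads (int n);
extern "C" int openblas_get_num_threads (void);

int main ()
{
  // Pin the BLAS to a single thread.  If the intermittent segfaults stop,
  // that points at a clash between OpenBLAS's own threads and the ones the
  // host application (or the JVM) creates.
  openblas_set_num_threads (1);
  std::printf ("BLAS threads now: %d\n", openblas_get_num_threads ());
  return 0;
}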

Dmitri.
--


Dmitri A. Sergatskov <dasergatskov>
Thu 13 May 2021 05:02:03 PM UTC, comment #154: 

This time on gcc-lto-debian. Again with sparse/gmres.m as the last test in the logs:
http://buildbot.octave.org:8010/#/builders/31/builds/593/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Wed 12 May 2021 01:37:59 PM UTC, comment #153: 

Another one on gcc-debian with sparse/gmres.m as the last test in the logs:
http://buildbot.octave.org:8010/#/builders/33/builds/584/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Fri 16 Apr 2021 04:10:01 PM UTC, comment #152: 


$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Mesa/X.org (0xffffffff)
    Device: llvmpipe (LLVM 11.0.0, 256 bits) (0xffffffff)
    Version: 20.2.6
    Accelerated: no
    Video memory: 7929MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 4.5
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.2
OpenGL vendor string: Mesa/X.org
OpenGL renderer string: llvmpipe (LLVM 11.0.0, 256 bits)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 20.2.6
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 3.1 Mesa 20.2.6
OpenGL shading language version string: 1.40
OpenGL context flags: (none)

OpenGL ES profile version string: OpenGL ES 3.2 Mesa 20.2.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20


My Ubuntu lives in a VM. Since it's already using software graphics, setting LIBGL_ALWAYS_SOFTWARE=1 doesn't change anything.

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Apr 2021 10:49:22 PM UTC, comment #151: 

I have the same hg id. "make check" passes, with a failure in memory.m
(which may be due to ASan?). But no crash.

what is output of "glxinfo -B"?

Do you get a crash with  LIBGL_ALWAYS_SOFTWARE=1 ?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 15 Apr 2021 08:10:13 PM UTC, comment #150: 

Creating the axes before the plot avoids the crash:

axes
plot([0 1])


Maybe a threading issue?

Fwiw, all of this is with hg id 0ff064f09927 with the CLI and qt graphics toolkit.

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Apr 2021 08:01:46 PM UTC, comment #149: 

It's not necessary to print. `plot([0 1])` already crashes with the same stack.

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Apr 2021 07:10:24 PM UTC, comment #148: 

Do you get a crash if you run octave and do a plot and print to png?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 15 Apr 2021 07:08:16 PM UTC, comment #147: 

I cannot reproduce it on Fedora 34.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 15 Apr 2021 06:06:48 PM UTC, comment #146: 

@Dmitri: Still the same crash on the default branch.

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Apr 2021 05:58:52 PM UTC, comment #145: 

ASAN segfaults on exit. You should run
make as:


ASAN_OPTIONS="leak_check_at_exit=0" make


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 15 Apr 2021 05:54:35 PM UTC, comment #144: 

It might also be worth noting that initially the segfaults seemed to be "all over the place" and more frequent. But recently the number of segmentation faults seems to have gone down. And the only(?) one that still comes up once in a while is the one that seems to be related to the tests in "gmres.m". At least, that function shows up predominantly as the last one in the logs.

I built with ASan flags following the instructions on this page:
http://wiki.octave.org/Finding_Memory_Leaks

Compilation stopped when the images for the manual were built:

fatal: caught signal Aborted -- stopping myself...
/bin/bash: line 1: 501190 Aborted                 (core dumped) /bin/bash run-octave --norc --silent --no-history --path /home/osboxes/Documents/Repositories/Octave/octave-2/.build/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'voronoi', 'png');"
make[2]: *** [Makefile:31791: doc/interpreter/voronoi.png] Error 134
make[2]: *** Waiting for unfinished jobs....


Tail of asan log:

=================================================================
==507762==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6140000f7fe0 at pc 0x7fb9958216aa bp 0x7ffcc41c3cc0 sp 0x7ffcc41c3468
READ of size 68 at 0x6140000f7fe0 thread T0
    #0 0x7fb9958216a9 in __interceptor_memcpy (/usr/lib/x86_64-linux-gnu/libasan.so.6+0x3a6a9)
    #1 0x7fb9798488a7  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5cd8a7)
    #2 0x7fb979848bcb  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5cdbcb)
    #3 0x7fb979847da4  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5ccda4)
    #4 0x7fb97984560a  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5ca60a)
    #5 0x7fb979845d9b  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5cad9b)
    #6 0x7fb97989a297  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x61f297)
    #7 0x7fb97989a6b2  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x61f6b2)
    #8 0x7fb979852b79  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5d7b79)
    #9 0x7fb97984c706  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x5d1706)
    #10 0x7fb979913322  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x698322)
    #11 0x7fb9793bd647  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x142647)
    #12 0x7fb979565b46  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x2eab46)
    #13 0x7fb979564e57  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x2e9e57)
    #14 0x7fb979608970  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x38d970)
    #15 0x7fb994d1a230 in octave::opengl_functions::glDisable(unsigned int) (/home/osboxes/Documents/Repositories/Octave/octave-2/.build/libgui/.libs/liboctgui.so.6+0x4c8230)
    #16 0x7fb993740d88 in octave::opengl_renderer::set_linestyle(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, double) ../libinterp/corefcn/gl-render.cc:4325
    #17 0x7fb99370ce70 in octave::opengl_renderer::draw_axes_boxes(axes::properties const&) ../libinterp/corefcn/gl-render.cc:1531
    #18 0x7fb99371bcb2 in octave::opengl_renderer::draw_axes(axes::properties const&) ../libinterp/corefcn/gl-render.cc:2346
    #19 0x7fb9936ffba8 in octave::opengl_renderer::draw(graphics_object const&, bool) ../libinterp/corefcn/gl-render.cc:724
    #20 0x7fb99373f6b2 in octave::opengl_renderer::draw(Matrix const&, bool) ../libinterp/corefcn/gl-render.cc:4146
    #21 0x7fb9937025af in octave::opengl_renderer::draw_figure(figure::properties const&) ../libinterp/corefcn/gl-render.cc:791
    #22 0x7fb9936ffa41 in octave::opengl_renderer::draw(graphics_object const&, bool) ../libinterp/corefcn/gl-render.cc:722
    #23 0x7fb994d16be3 in QtHandles::GLCanvas::do_getPixels(octave_handle const&) ../libgui/graphics/GLCanvas.cc:125
    #24 0x7fb994d19b8c in QtHandles::Canvas::getPixels() ../libgui/graphics/Canvas.h:89
    #25 0x7fb994d06b42 in QtHandles::Figure::slotGetPixels() ../libgui/graphics/Figure.cc:342
    #26 0x7fb994dc0564 in QtHandles::Figure::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) libgui/graphics/moc-Figure.cc:105
    #27 0x7fb98e25b650 in QObject::event(QEvent*) (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x2d7650)
    #28 0x7fb98ece5012 in QApplicationPrivate::notify_helper(QObject*, QEvent*) (/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5+0x16b012)
    #29 0x7fb994f59807 in octave::octave_qapplication::notify(QObject*, QEvent*) ../libgui/src/octave-qobject.cc:133
    #30 0x7fb98e22f1c9 in QCoreApplication::notifyInternal2(QObject*, QEvent*) (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x2ab1c9)
    #31 0x7fb98e231bc0 in QCoreApplicationPrivate::sendPostedEvents(QObject*, int, QThreadData*) (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x2adbc0)
    #32 0x7fb98e2871c6  (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x3031c6)
    #33 0x7fb98ba3962a in g_main_context_dispatch (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5362a)
    #34 0x7fb98ba398d7  (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x538d7)
    #35 0x7fb98ba399a2 in g_main_context_iteration (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x539a2)
    #36 0x7fb98e286842 in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x302842)
    #37 0x7fb98e22da4a in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x2a9a4a)
    #38 0x7fb98e235fc5 in QCoreApplication::exec() (/usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0x2b1fc5)
    #39 0x7fb994f5ba12 in octave::base_qobject::exec() ../libgui/src/octave-qobject.cc:345
    #40 0x7fb994f7ebf3 in octave::qt_application::execute() ../libgui/src/qt-application.cc:73
    #41 0x556641034d9a in main ../src/main-gui.cc:106
    #42 0x7fb98fc50cb1 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x28cb1)
    #43 0x55664103452d in _start (/home/osboxes/Documents/Repositories/Octave/octave-2/.build/src/.libs/octave-gui+0x252d)

0x6140000f7fe0 is located 0 bytes to the right of 416-byte region [0x6140000f7e40,0x6140000f7fe0)
allocated by thread T0 here:
    #0 0x7fb995897517 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.6+0xb0517)
    #1 0x7fb979899ded  (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so+0x61eded)

SUMMARY: AddressSanitizer: heap-buffer-overflow (/usr/lib/x86_64-linux-gnu/libasan.so.6+0x3a6a9) in __interceptor_memcpy
Shadow bytes around the buggy address:
  0x0c2880016fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c2880016fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
  0x0c2880016fc0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c2880016fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c2880016fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c2880016ff0: 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa fa
  0x0c2880017000: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c2880017010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c2880017020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c2880017030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x0c2880017040: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==507762==ABORTING


This looks like it is something different though. Maybe a mesa bug. (Or is it?)

Anyway, I built again with the same flags on the stable branch.
That succeeded without issues.
Running `test gmres` works fine. Also `test eigs` (the test immediately before gmres in the log) followed by `test gmres` doesn't report any issues.
Running `make check` takes a lot of time (so I believe the flags were correctly picked up).
It finally failed with the following (after passing sparse/gmres.m):

  strings/strtrunc.m .............................................fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 664216 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /home/osboxes/Documents/Repositories/Octave/octave-stable/.build/test/mex /home/osboxes/Documents/Repositories/Octave/octave-stable/.build/../test/fntests.m /home/osboxes/Documents/Repositories/Octave/octave-stable/.build/../test


The asan log was not very informative afaict. Maybe it capped out at approx. 620 MiB. The core dump from that crash is empty.

I still have both builds and the core dump of the crash on the default branch lying around. If someone has hints on what I should try with them, please let me know.

Markus Mützel <mmuetzel>
Group administrator
Thu 15 Apr 2021 04:53:50 PM UTC, comment #143: 

The bugs reported so far on this page are listed below. All occurred under Debian, none under Fedora.

clang-5.0-debian
special-matrix/hadamard.m
strings/hex2dec.m
statistics/range.m

clang-debian
plot/draw/hist.m
polynomial/polyval.m
optimization/lsqnonneg.m
set/ismember.m
_run_test_suite_.m before "print_test_file_name" and after "print_pass_fail" ?

stable-clang-debian
io.tst
specfun/factorial.m

gcc-7-debian
plot/draw/trisurf.m

stable-gcc-7-debian
sparse/gmres.m

gcc-lto-debian
java/javaaddpath.m
sparse/gmres.m
sparse/gmres.m
sparse/gmres.m

stable-gcc-lto-debian
struct.tst

gcc-debian
sparse/gmres.m
sparse/gmres.m
sparse/gmres.m

Hg200 <hg200>
Thu 15 Apr 2021 12:09:46 PM UTC, comment #142: 

My buildbot systems are still not capturing core files, and running the test suite (repeatedly) after a crash like this has never reproduced the problem.  So even if I am able to notice the crash and prevent the build tree from being cleaned, I don't know how to obtain the info for debugging.

John W. Eaton <jwe>
Group administrator
Thu 15 Apr 2021 11:12:27 AM UTC, comment #141: 
Markus Mützel <mmuetzel>
Group administrator
Wed 03 Feb 2021 06:33:17 PM UTC, comment #140: 

I suppose it could be configured that way, but on my system that directory is empty.

Also, as you say, it doesn't really solve the problem to just capture the core file.

John W. Eaton <jwe>
Group administrator
Wed 03 Feb 2021 06:04:57 PM UTC, comment #139: 

Just fyi:

On Fedora, coredumps are saved by systemd-coredump to /var/lib/systemd/coredump/

The trick is to preserve the actual executable that caused the dump
(so you can get an actual trace).
If I am quick, I can stop the buildbots before a new build overwrites the
last one.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 03 Feb 2021 05:46:11 PM UTC, comment #138: 

Unfortunately, I still haven't figured out the proper way to configure my buildbot systems so that they capture core files and failed builds.  Running "make check" again in the directory containing the failed build seems to always succeed.

John W. Eaton <jwe>
Group administrator
Wed 03 Feb 2021 04:57:37 PM UTC, comment #137: 
Markus Mützel <mmuetzel>
Group administrator
Thu 12 Nov 2020 08:46:24 AM UTC, comment #136: 

Another one with the test for "sparse/gmres.m" as the last line in the log before the crash:
http://buildbot.octave.org:8010/#/builders/33/builds/233/steps/7/logs/stdio (gcc-debian)

Markus Mützel <mmuetzel>
Group administrator
Wed 28 Oct 2020 10:47:59 AM UTC, comment #135: 

I'm not sure if I correctly understand the backtrace. It looks like output from gdb ("Python Exception <class 'gdb.error'> There is no member named _M_dataplus.") somehow made it into Octave's output buffer (in thread 22)...
Is that expected?

IIRC, there was a report in the past where Java didn't play well with gdb. Could this be one of those cases again?

Markus Mützel <mmuetzel>
Group administrator
Tue 27 Oct 2020 06:30:45 PM UTC, comment #134: 

Looking at the dump, Thread #1 is:


Thread 1 (Thread 0x7f32cc109700 (LWP 13095)):
#0  0x00007f32d8f5d30b in  ()
#1  0x00007f32d8f5c754 in  ()
#2  0x00000180cc0f1f50 in  ()
#3  0x00007f32bc5592a0 in  ()
#4  0x00007f32cc0f1f30 in  ()
#5  0x0000000000000180 in  ()
#6  0x00007f32cc0f2140 in  ()
#7  0x00000000ffffffff in  ()
#8  0x00007f32cc0f2020 in  ()
#9  0x00007f32d8f5b22c in  ()
#10 0x00007f32cc0f1fb0 in  ()
#11 0x00007f32f6babca9 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#12 0x00007f32f66aa509 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#13 0x00007f32f6720c56 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#14 0x00007f32f672359c in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#15 0x00007f32f402c98a in Java_sun_awt_X11_XToolkit_waitForEvents () at /usr/lib/jvm/java-11-openjdk-amd64/lib/libawt_xawt.so
#16 0x00007f32e03f7a81 in  ()
#17 0x00007f32cc0f24c0 in  ()
#18 0x00007f32f49bfff0 in  ()
#19 0x0000000000000000 in  ()


So is it something to do with Java?  Does it ever crash if the build process is configured with --disable-java?

Rik <rik5>
Group administrator
Tue 27 Oct 2020 05:26:51 PM UTC, comment #133: 

I disabled the buildbot jobs for the build mentioned in comment #132 so that it would not be wiped out by a later execution of the buildbot stable-clang-debian build.

I didn't find a core file there.  Maybe I missed it?  Anyway, I was able to execute the tests in a loop until I triggered a segfault.  The output from make at the time of the segfault was


sparse/spconvert.m .............................................fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 4174536 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/../src/test
make[3]: *** [Makefile:31664: check-local] Error 139
make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make[2]: *** [Makefile:27801: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make[1]: *** [Makefile:27503: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make: *** [Makefile:27803: check] Error 2


The full stack trace for all threads is attached.  A simple "where" command in gdb shows the process was in thread 1 at the time of the fault.  The interpreter appears to be attempting to execute a shell command with "system".  There is no direct call to system in the tests for spconvert.m, so maybe it is from a call to "print_usage" when it is calling makeinfo to format the help text?


(file #50140)

John W. Eaton <jwe>
Group administrator
Sun 25 Oct 2020 02:41:39 AM UTC, comment #132: 

And now it is
jwe-debian-x86_64-0/stable-clang-debian/

http://buildbot.octave.org:8010/#/builders/36/builds/83/steps/7/logs/stdio

Dmitri A. Sergatskov <dasergatskov>
Fri 23 Oct 2020 02:24:34 PM UTC, comment #131: 

My buildbot systems all have the following packages installed:


ii  libsuitesparse-dev:amd64    1:5.8.1+dfsg-2 amd64
ii  libsuitesparseconfig5:amd64 1:5.8.1+dfsg-2 amd64


John W. Eaton <jwe>
Group administrator
Thu 22 Oct 2020 09:03:07 PM UTC, comment #130: 

For the record, on Fedora buildbot
suitesparse.x86_64                          5.4.0-5.fc33

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 22 Oct 2020 08:53:15 PM UTC, comment #129: 

That's interesting.  We should probably wait a few more days to see if this establishes itself as a true pattern.

Rik <rik5>
Group administrator
Thu 22 Oct 2020 04:37:57 PM UTC, comment #128: 

The point where the test suite crashes on the buildbots seems to have become less random over the last few days. The three most recent crashes all had "sparse/gmres.m" as the last test in the log:
http://buildbot.octave.org:8010/#/builders/31/builds/161/steps/7/logs/stdio (gcc-lto-debian)
http://buildbot.octave.org:8010/#/builders/33/builds/186/steps/7/logs/stdio (gcc-debian)
http://buildbot.octave.org:8010/#/builders/31/builds/176/steps/7/logs/stdio (gcc-lto-debian)

Markus Mützel <mmuetzel>
Group administrator
Mon 12 Oct 2020 09:29:22 PM UTC, comment #127: 

http://buildbot.octave.org:8010/#/builders/32/builds/124/steps/7/logs/stdio

"clang-debian" again. Seems to be crashed just before "print_test_file_name" and after "print_pass_fail" this time.

Hg200 <hg200>
Sat 03 Oct 2020 10:31:30 AM UTC, comment #126: 

Another one:
http://buildbot.octave.org:8010/#/builders/32/builds/113/steps/7/logs/stdio

"clang-debian" with "set/ismember.m" last in the log.

Markus Mützel <mmuetzel>
Group administrator
Sat 03 Oct 2020 12:18:07 AM UTC, comment #125: 

clang-debian/113


  profiler/profshow.m ............................................ pass    4/4
  set/intersect.m ................................................ pass   28/28
fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 941528 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/../src/test
make[3]: *** [Makefile:31841: check-local] Error 139
  set/ismember.m .................................................make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make[2]: *** [Makefile:27961: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make[1]: *** [Makefile:27663: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make: *** [Makefile:27963: check] Error 2
program finished with exit code 2
elapsedTime=380.436578


Dmitri A. Sergatskov <dasergatskov>
Wed 30 Sep 2020 11:02:13 AM UTC, comment #124: 

The next one:
http://buildbot.octave.org:8010/#/builders/35/builds/52/steps/7/logs/stdio

"stable-gcc-lto-debian" with "struct.tst" last in the log.

Markus Mützel <mmuetzel>
Group administrator
Tue 29 Sep 2020 05:57:16 PM UTC, comment #123: 

On Fedora, there is a service that stores coredumps in its own directory
(and writes a trace to syslog).
I assume there is something similar available for Debian, e.g.:
https://manpages.debian.org/stretch/systemd-coredump/coredumpctl.1.en.html

You'd still need to act fast if you want to get the trace,
since the executable gets wiped out too...

Sincerely,

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 29 Sep 2020 05:46:05 PM UTC, comment #122: 

Unfortunately, the buildbot directories are currently reinitialized at each run, so the generated core files have disappeared by the time I'm able to check for them.  Maybe I can fix the buildbot config to preserve these failures somehow.

John W. Eaton <jwe>
Group administrator
Tue 29 Sep 2020 06:35:46 AM UTC, comment #121: 

And another one:
http://buildbot.octave.org:8010/#/builders/31/builds/96/steps/7/logs/stdio

"gcc-lto-debian" with "java/javaaddpath.m" last in the log.

Markus Mützel <mmuetzel>
Group administrator
Mon 28 Sep 2020 07:10:51 AM UTC, comment #120: 

Another one here:
http://buildbot.octave.org:8010/#/builders/32/builds/84/steps/7/logs/stdio

That one was for "clang-debian" with `optimization/lsqnonneg.m` as the last test in the log.

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Sep 2020 06:47:10 AM UTC, comment #119: 

jwe: Thanks for checking.
I guess that probably proves that the segfaults are not caused by bug #58790.

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Sep 2020 01:19:18 AM UTC, comment #118: 

Markus: I see 2 in /proc/sys/vm/overcommit_memory on all my buildbot workers, so I think that setting is already active.

John W. Eaton <jwe>
Group administrator
Mon 21 Sep 2020 07:32:06 PM UTC, comment #117: 


for i in $(find . -type f); do
  grep -q Segmentation "$i" && awk '/\./ && /Leaving/ && /directory/' "$i" && ls "$i"
done


Hg200 <hg200>
Mon 21 Sep 2020 06:27:48 PM UTC, comment #116: 

The faults reported on this page since July are

clang-5.0-debian
  special-matrix/hadamard.m
  strings/hex2dec.m
  statistics/range.m

clang-debian
  plot/draw/hist.m
  polynomial/polyval.m

stable-clang-debian
  io.tst

gcc-7-debian
  plot/draw/trisurf.m

stable-gcc-7-debian
  sparse/gmres.m

All of them happened under Debian, none under Fedora. Most of them were clang, but not all. No repetition in an .m file yet.

Provided all log files are stored in the same folder, we could download them raw with wget and grep for "segfault" and, where found, print the "Leaving directory" line. But since it is uncompressed, that looks like 15 gigs or so since the beginning of July? Hmm, a bit too much to download.

Hg200 <hg200>
Mon 21 Sep 2020 06:01:37 PM UTC, comment #115: 

@jwe: Not sure if this applies to all distributions:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-captun

IIUC, something similar to `echo 2 > /proc/sys/vm/overcommit_memory` has immediate effect. The change from comment #90 only takes effect after the next reboot.

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 05:37:16 PM UTC, comment #114: 

Markus: I didn't restart the systems.  I assumed that the change I made using sysctl took effect immediately.  Is that not correct?  One of the systems was last rebooted 53 days ago.  The others have been up for more than 100 days.  I can restart all of them if necessary.


John W. Eaton <jwe>
Group administrator
Mon 21 Sep 2020 05:33:31 PM UTC, comment #113: 

For the Fortran libraries we use (or the libraries like BLAS and Lapack that have interfaces defined by their Fortran heritage) the important thing is not pointer size (those libraries don't use pointers like we think of in C++ or even modern Fortran) but the size of the integers that appear in their public interfaces for things like array dimensions and pivot vectors (which sort of serve the purpose of pointers, but are offsets into specific arrays).

On 64-bit systems, Octave now uses 64-bit integers for array dimensions and indexing by default.  But most Linux distributions supply BLAS and Lapack libraries (and other libraries that depend on them) compiled to use 32-bit integers.  That's a Fortran legacy thing, where INTEGER and REAL (i.e., single precision floating point numbers) typically occupy the same amount of storage.

As already noted, we also currently handle the case of Octave built with 64-bit dimensions and indexing and calling Fortran libraries that are compiled to use 32-bit integers for dimensions and indexing.

The current assumption is that all Fortran libraries that Octave uses will use the same convention, either 32-bit or 64-bit integers for dimensions and indexing.

I see no point in attempting to handle a mixture (some libraries using 32-bit and others using 64-bit integers) because these libraries also depend on each other.  It seems to me that mixing them arbitrarily is likely to lead to more confusion than just requiring that they all use the same convention.

Note also that (at least as I understood the state of things the last time I looked at this problem in detail) we don't really have a check to ensure that all the libraries that Octave depends on actually use the same convention.  We attempt to test the integer size of the BLAS library in the configure script.  That test requires executing a program, so guesses (or configure options) are used when cross compiling.  Whatever is determined for the BLAS library is assumed for all the rest.

John W. Eaton <jwe>
Group administrator
Mon 21 Sep 2020 04:53:41 PM UTC, comment #112: 

Thanks, Rik. You did a much better job in explaining the situation.

The second snippet (the one with reference BLAS) is the "preferred" configuration (if there is such a thing).
I'd call the first an "experimental" configuration. But not because you are using OpenBLAS but because it is built with 64bit pointers.


Back on topic:
There was another segfault on the buildbots. This time "stable-gcc-debian" with "miscellaneous/ls.m" as the last test in the log.

@jwe: Did you restart the workers after your change in comment #90?

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 04:42:03 PM UTC, comment #111: 

comment #109:

> You can build Octave with 64bit indexing inside Octave but still use BLAS/LAPACK (and related) libraries that use 32bit indexing.


I was not able to get the configure report to put out this:


  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  yes


When I used the reference blas and lapack implementations, I got instead:


  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  no


Perhaps that's intended, and if not, perhaps it's due to how the reference blas implementation is built on NixOS:

https://github.com/NixOS/nixpkgs/blob/8a5eb89b0f70999c08ce9ce6df89238671e186dc/pkgs/build-support/alternatives/blas/default.nix

I'm building up a benchmark table where every BLAS implementation is tested against Octave 6.0.90 and 5.2.0, and I'll report the results (probably tomorrow) in NixOS' Discourse thread. The benchmark I'm using is this: https://openbenchmarking.org/test/system/octave-benchmark .

Doron Behar <doronbehar>
Mon 21 Sep 2020 04:03:08 PM UTC, comment #110: 

Summarizing the experience of Octave developers to date which Markus relayed:

1) Building Octave with 64-bit pointers is routine and is the default in the configure script (you have to go out of your way to use 32-bit pointers).

2) Building Fortran libraries with 64-bit pointers is problematic (not always, but enough of the time to not make it a default).  This isn't code that Octave controls so if there are issues you will need to go to the authors of the particular packages.  What has been discovered is that you need absolute consistency between all of the libraries for this to have even a chance of working.  If one library is built with 64-bit pointers, they all need to be built that way.

3) Performance and correctness are in pseudo-opposition.  The tradeoffs a library coder makes for performance may cause slight deviations from reference behavior.  The choice of which feature to prioritize is left to the user.  If they want performance they can install ATLAS or OpenBLAS.  If they are worried about conformance to the standard, they can install the reference BLAS.  Octave has chosen to leave the decision to users.  I think distributions should as well and make various BLAS libraries available for selection by the user.

4) Intel MKL BLAS is definitely high performance, but definitely has bugs reported against it on the Octave bug tracker.  I see that you plan not to use it for other reasons (non-free software), but that will make your life easier in this case.

Rik <rik5>
Group administrator
Mon 21 Sep 2020 03:57:43 PM UTC, comment #109: 

After reading the discourse message:
There might be a misunderstanding. You can build Octave with 64-bit indexing internally but still use BLAS/LAPACK (and related) libraries that use 32-bit indexing.
Octave itself is careful enough to not call those libraries with too large indices.

From the download page of the Octave Windows builds:
https://www.gnu.org/software/octave/download#ms-windows

> Unless your computer has more than ~32GB of memory and you need to solve linear algebra problems with arrays containing more than ~2 billion elements, this version will offer no advantage over the recommended Windows-64 version above.
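 
The arithmetic behind that rule of thumb (a back-of-envelope check, not from the download page): 2^31 - 1 is the largest count a 32-bit signed integer can index, and that many doubles already occupy about 16 GiB, so holding just two operands of that size needs roughly 32 GB:


#include <cstdint>
#include <iostream>

int main ()
{
  const int64_t max32 = INT32_MAX;               // 2147483647 elements
  const double gib = 1024.0 * 1024.0 * 1024.0;
  std::cout << max32 << " doubles = "
            << (max32 * 8.0) / gib << " GiB\n";  // ~16 GiB per operand
  return 0;
}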

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 02:56:42 PM UTC, comment #108: 

OK, thanks for explaining this, Markus. I'm currently consulting with my distro about the choice of the BLAS implementation at https://discourse.nixos.org/t/openblas-vs-reference-blas-implementation/9086 . TBH, I feel rather confident about putting 64-bit indexing into production, since we use Nix and the ecosystem makes it easy to debug issues when they occur. 64-bit is the future, and Nix is there too.

For the record, Intel MKL BLAS is not free software and hence it cannot be used for packaging Octave for NixOS.

Thanks a lot for your help, and the work on Octave 6 !

Doron Behar <doronbehar>
Mon 21 Sep 2020 11:19:40 AM UTC, comment #107: 

Aaaah. I misunderstood your "you" as referring to "me".

After reading the link from SuiteSparse: I don't know if Octave will work correctly with Intel MKL BLAS (which they recommend). I believe there have been reports about buggy behavior.

From an Octave point of view: Use either the reference BLAS (probably not the best performance) or OpenBLAS with optimizations for your processor (to have better performance).
Only use 64bit indexing in the Fortran libraries if you really need it.

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 11:12:44 AM UTC, comment #106: 

If you really need 64bit indices, this is a (probably incomplete) list of libraries you need to match:
BLAS
LAPACK
OpenBLAS
ARPACK
qrupdate
all SuiteSparse libraries
SUNDIALS
...

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 11:12:30 AM UTC, comment #105: 

comment #103:

> I never recommended using OpenBLAS. In fact, I asked whether you could reproduce with the reference BLAS libraries.
>
> If you ask for my recommendation, I'd advise against going down that rabbit hole of 64-bit indices in BLAS/LAPACK. Instead stick to 32-bit indices for BLAS/LAPACK and all related numeric libraries.


Yes you do: https://octave.org/doc/v5.2.0/External-Packages.html#External-Packages quoting:

> Basic Linear Algebra Subroutine library. Accelerated BLAS libraries such as OpenBLAS (https://www.openblas.net/) or ATLAS (http://math-atlas.sourceforge.net) are recommended for best performance. The reference implementation (http://www.netlib.org/blas) is slow, unmaintained, and suffers from certain bugs in corner case inputs.


The tests that previously failed succeed when I use the same OpenBLAS for Octave, qrupdate, SuiteSparse, and ARPACK, though I'm worried about SuiteSparse's degraded-performance warning...

Doron Behar <doronbehar>
Mon 21 Sep 2020 11:08:02 AM UTC, comment #104: 

In case I was unclear in my previous comment: If you use OpenBLAS, make sure its indices match the other libraries. I'd recommend building it (and all other related libraries) for 32-bit indices.

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 11:03:49 AM UTC, comment #103: 

I never recommended using OpenBLAS. In fact, I asked whether you could reproduce with the reference BLAS libraries.

If you ask for my recommendation, I'd advise against going down that rabbit hole of 64-bit indices in BLAS/LAPACK. Instead stick to 32-bit indices for BLAS/LAPACK and all related numeric libraries.

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 10:57:04 AM UTC, comment #102: 

Markus: after investigating, I learned that SuiteSparse and ARPACK are also not using OpenBLAS but the reference BLAS implementation, which is compiled on NixOS without 64-bit support, at least for now.

Moreover, you recommend using openblas, but suitesparse recommends not using it, see:

https://github.com/DrTimothyAldenDavis/SuiteSparse#about-the-blas-and-lapack-libraries

I can attempt to make Octave use a 64-bit build of the reference BLAS implementation, and make SuiteSparse use the same 64-bit reference BLAS as well. Alternatively, I can make SuiteSparse use OpenBLAS. Either way, I would be going against one of your recommendations. What should I do?

Please advise, and thanks for your help.

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:09:38 AM UTC, comment #101: 

Thanks for the advice, Markus. I'll look into it!

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:08:36 AM UTC, comment #100: 

The output of the `ldd` command is:


        linux-vdso.so.1 (0x00007ffea5fa3000)
        liboctinterp.so.8 => not found
        liboctave.so.8 => not found
        libstdc++.so.6 => /nix/store/z5g0y84g2iknwwgfhw9wslbbzgw1w22k-gfortran-9.3.0-lib/lib/libstdc++.so.6 (0x00007f67b47b1000)
        libm.so.6 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libm.so.6 (0x00007f67b4670000)
        libgomp.so.1 => /nix/store/z5g0y84g2iknwwgfhw9wslbbzgw1w22k-gfortran-9.3.0-lib/lib/libgomp.so.1 (0x00007f67b4638000)
        libgcc_s.so.1 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libgcc_s.so.1 (0x00007f67b461c000)
        libpthread.so.0 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libpthread.so.0 (0x00007f67b45fb000)
        libc.so.6 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libc.so.6 (0x00007f67b443c000)
        /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/ld-linux-x86-64.so.2 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib64/ld-linux-x86-64.so.2 (0x00007f67b4994000)
        libdl.so.2 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libdl.so.2 (0x00007f67b4437000)


I think the fact that the liboctinterp and liboctave shared objects are not found is due to something in our build sandbox - my working directory when I entered the sandbox after the build failed is not the same as the working directory of the real builder.

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:06:53 AM UTC, comment #99: 

If you are using OpenBLAS with 64bit indices, take care to also build all dependent numeric libraries with 64bit indices.
In the case of chol, a potentially incompatible library might be qrupdate.

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 09:53:52 AM UTC, comment #98: 

For reference, here's the files for the openblas package on NixOS:


result
├── bin
├── include
│   ├── cblas.h
│   ├── f77blas.h
│   ├── lapacke_config.h
│   ├── lapacke.h
│   ├── lapacke_mangling.h
│   ├── lapacke_utils.h
│   ├── lapack.h
│   └── openblas_config.h
└── lib
    ├── cmake
    │   └── openblas
    │       ├── OpenBLASConfig.cmake
    │       └── OpenBLASConfigVersion.cmake
    ├── libblas.so -> libopenblasp-r0.3.10.so
    ├── libblas.so.3 -> libopenblasp-r0.3.10.so
    ├── libcblas.so -> libopenblasp-r0.3.10.so
    ├── libcblas.so.3 -> libopenblasp-r0.3.10.so
    ├── liblapacke.so -> libopenblasp-r0.3.10.so
    ├── liblapacke.so.3 -> libopenblasp-r0.3.10.so
    ├── liblapack.so -> libopenblasp-r0.3.10.so
    ├── liblapack.so.3 -> libopenblasp-r0.3.10.so
    ├── libopenblasp-r0.3.10.so
    ├── libopenblas.so -> libopenblasp-r0.3.10.so
    ├── libopenblas.so.0 -> libopenblasp-r0.3.10.so
    └── pkgconfig
        ├── blas.pc
        ├── cblas.pc
        ├── lapack.pc
        └── openblas.pc

6 directories, 25 files


Doron Behar <doronbehar>
Mon 21 Sep 2020 09:49:44 AM UTC, comment #97: 

Thanks for the prompt replies. Here's how my distro, NixOS, is building OpenBLAS:

https://github.com/doronbehar/nixpkgs/blob/pkg/octave/pkgs/development/libraries/science/math/openblas/default.nix

As can be seen in the `postInstall` attribute, the `liblapack.so` and `libblas.so` libraries are linked to the same shared object. You probably both understand this subject better than I do, but I think that if there's an incompatibility issue here, it's between OpenBLAS and itself, or between Octave and it.

NixOS builds every package in a sandbox. Hence, no other BLAS/LAPACK libraries can be used unless they were declared in the inputs of the build "recipe". Please rest assured that my build wasn't influenced by a mixture of BLAS implementations, if that was your concern.

I've re-initiated the build of Octave, so in a while I should be able to tell you, Dmitri, the output of `ldd src/.libs/octave-cli`.

P.S. The Nix expression for Octave which I'm testing is here:

https://github.com/doronbehar/nixpkgs/blob/pkg/octave/pkgs/development/interpreters/octave/default.nix

Doron Behar <doronbehar>
Mon 21 Sep 2020 09:32:11 AM UTC, comment #96: 

Most likely you have mis-matched blas libraries.
What is the output of
ldd src/.libs/octave-cli
?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 21 Sep 2020 09:29:12 AM UTC, comment #95: 

Could you re-run the tests for the build with 64bit OpenBLAS?
Does it segfault again at the same test?

I'm not sure if we are running continuous tests for that configuration. AFAICT, 64bit BLAS/LAPACK libraries are still quite rare.
It might be that we are doing something wrong in Octave. But it might also be an error in OpenBLAS. It might not be very well tested with 64-bit indices...

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 09:20:45 AM UTC, comment #94: 

With the reference BLAS and LAPACK, the tests did not fail, but prior to building I did not succeed in getting this configure report:


  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  yes


Instead I got:


  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  no


Doron Behar <doronbehar>
Mon 21 Sep 2020 09:17:29 AM UTC, comment #93: 

Is this segmentation fault reproducible?

Does the same happen if you used reference BLAS and LAPACK instead of OpenBLAS?

Markus Mützel <mmuetzel>
Group administrator
Mon 21 Sep 2020 09:05:03 AM UTC, comment #92: 

I experience tests failing with gcc and the RC 6.0.90. The failure happens, I think, at an earlier stage of the tests. Here's my full build log:

https://gist.github.com/doronbehar/eb3111a3bf11ac753bf380da5fbe88b9

I'm using openblas' lapack and blas implementation, and in order for them to be detected I use:


  F77_INTEGER_8_FLAG = "-fdefault-integer-8";


Otherwise BLAS and LAPACK are not detected; I'm not sure if that's relevant.

Doron Behar <doronbehar>
Wed 16 Sep 2020 01:51:27 PM UTC, comment #91: 

Thanks.
I admit that it is quite unlikely that this could be caused by overcommitting memory in that case.
But we'll know for sure when the next crash occurs.

Markus Mützel <mmuetzel>
Group administrator
Wed 16 Sep 2020 01:46:58 PM UTC, comment #90: 

Markus, all four of my buildbot worker systems have 30GB swap space.  Rarely is more than a few hundred MB of that used, as far as I can tell by watching the systems with top from time to time.

Three of the systems have 32GB RAM, the other (oldest one) has 16GB.

I executed


sudo sysctl vm.overcommit_memory=2


on all four systems.

John W. Eaton <jwe>
Group administrator
Wed 16 Sep 2020 08:12:40 AM UTC, comment #89: 

@jwe: To test whether these random crashes are caused by bug #58790, could you increase the swap size or (for some time) disable memory overcommit on the workers?

See e.g. here for how that could be done on Linux:
https://serverfault.com/a/142003

Markus Mützel <mmuetzel>
Group administrator
Tue 15 Sep 2020 06:23:08 AM UTC, comment #88: 

Another one for clang-debian:
http://buildbot.octave.org:8010/#/builders/32/builds/52/steps/7/logs/stdio

Last test appearing in the log: polynomial/polyval.m

Markus Mützel <mmuetzel>
Group administrator
Fri 11 Sep 2020 07:13:22 AM UTC, comment #87: 

Another one for gcc-lto-debian:
http://buildbot.octave.org:8010/#/builders/31/builds/39/steps/7/logs/stdio

Last test in the log was sparse/gmres.m.

I wonder if this is bug #58790, i.e. Octave is killed by the kernel because available memory on the system was low.

Markus Mützel <mmuetzel>
Group administrator
Sun 06 Sep 2020 12:06:09 PM UTC, comment #86: 

Another segmentation fault while running the test suite:
http://buildbot.octave.org:8010/#/builders/36/builds/15/steps/7/logs/stdio

Looks like this was for stable-clang-debian during io.tst.

Markus Mützel <mmuetzel>
Group administrator
Thu 27 Aug 2020 03:38:03 PM UTC, comment #85: 

AFAICT, the old builders will disappear from the waterfall view by themselves when they have been inactive for a while.
All stable builders disappeared for me from time to time when there was little action on the stable branch.
If you scroll very far down (and I mean veeeery far), I guess the epfl builders will eventually pop in.

Markus Mützel <mmuetzel>
Group administrator
Thu 27 Aug 2020 03:27:20 PM UTC, comment #84: 

The buildbot systems are currently using

gcc version 10.2.0 (Debian 10.2.0-5)
clang version 9.0.1-13

If someone wants to set up other builders to test older compiler versions or libraries, then I'd be glad to add them to our master config file.  Maybe it would be best to use older systems that are intentionally not upgraded or to use some container or VM to fix the version?  I think Kai is working on a docker image that could be used for this purpose?

John W. Eaton <jwe>
Group administrator
Thu 27 Aug 2020 03:22:15 PM UTC, comment #83: 

I don't plan to delete history, but I don't really want them displayed by default.  The old entries take up horizontal space and add clutter in the waterfall display.  But now I see there is a "show old builders" option in the waterfall display settings.  The option is disabled by default, but toggling it removes the old builders from the display.  The reversed sense of this setting is a known bug.  I will see about changing the default in the master.cfg file, at least for the waterfall display page.

John W. Eaton <jwe>
Group administrator
Thu 27 Aug 2020 02:40:49 PM UTC, comment #82: 

Please, don't remove the history of the deprecated builders (unless they would cause problems otherwise).

It sometimes served as a kind of "lazy-man's-bisect" for me. (Sometimes also months back.)

Markus Mützel <mmuetzel>
Group administrator
Thu 27 Aug 2020 02:34:36 PM UTC, comment #81: 


comment #79:

> [...] but they still appear on the buildbot web display.  I'm not sure how to remove them.


You have to remove them from the state.sqlite database with SQL statements (while the Buildbot Master is not running).  Or remove that file entirely to forget all history.  In any case make a backup before working on that file.


# sqlite3 state.sqlite
SQLite version 3.26.0 2018-12-01 12:34:55
Enter ".help" for usage hints.
sqlite> .tables
build_properties       change_users           patches
builder_masters        changes                scheduler_changes
builders               changesource_masters   scheduler_masters
builders_tags          changesources          schedulers
buildrequest_claims    configured_workers     sourcestamps
buildrequests          connected_workers      steps
builds                 logchunks              tags
buildset_properties    logs                   users
buildset_sourcestamps  masters                users_info
buildsets              migrate_version        workers
change_files           object_state
change_properties      objects


sqlite> SELECT * FROM workers;
1|ubuntu-1804-worker-01|{"admin": null, "host": null, "access_uri": null, "version": "2020.08.12"}|0|0


sqlite> SELECT * FROM builders;
1|octave-stable||097d6ab83a4824a0a88317812b687c5e919fc4db
2|octave-mxe-stable-w64-64||eaf8b4220ced7c8ec9bc8f95edc55307a214ee23
3|octave-mxe-stable-w64||5e82c51d06a1833b0065bb0281c1ed231b688796
4|octave-mxe-stable-w32||2dc32fc9b95a4f0839a75aff983d6a552e92625d
5|octave-stable-doxygen||1bf254e1654404c338635dadaec105637dc3e932


Kai Torben Ohlhus <siko1056>
Group Member
Thu 27 Aug 2020 02:22:31 PM UTC, comment #80: 

Thank you for that update.

Just out of interest: Which versions are clang and gcc that are used currently?

I think it's a good thing that the old builders are still "there". They'll disappear eventually from the waterfall view unless one scrolls down to a time when they were still active.

Markus Mützel <mmuetzel>
Group administrator
Thu 27 Aug 2020 02:15:23 PM UTC, comment #79: 

Yes, I rearranged the jobs on my buildbot systems in an attempt to balance the loads.

I also upgraded those systems to the latest Debian testing packages and lost some old compilers (Clang 4 & 5 and GCC 7) so those builds are no longer active but they still appear on the buildbot web display.  I'm not sure how to remove them.

John W. Eaton <jwe>
Group administrator
Thu 27 Aug 2020 08:03:38 AM UTC, comment #78: 
Markus Mützel <mmuetzel>
Group administrator
Mon 24 Aug 2020 09:57:23 AM UTC, comment #77: 

The buildbots still seem to crash randomly during the BISTs:
http://buildbot.octave.org:8010/#/builders/19/builds/1756/steps/7/logs/stdio (gcc-7-debian)
http://buildbot.octave.org:8010/#/builders/12/builds/1898/steps/7/logs/stdio (clang-5.0-debian)

Both crashes occurred roughly at the same time. (But on different workers. Or have they been re-assigned recently?)

Markus Mützel <mmuetzel>
Group administrator
Mon 24 Aug 2020 09:45:21 AM UTC, comment #76: 

Like we agreed in the online meetings a few weeks back, this bug shouldn't block the RC of Octave 6.
Lowering severity.

Markus Mützel <mmuetzel>
Group administrator
Wed 19 Aug 2020 04:08:07 PM UTC, comment #75: 

Reverting status to "Confirmed" after jwe's comment #74.
I guess we'll see with time if the build bots are still occasionally crashing while running the test suite.

The changes in comment #72 are very likely to have contributed to fixing bug #56952 though.

Markus Mützel <mmuetzel>
Group administrator
Wed 19 Aug 2020 03:48:21 PM UTC, comment #74: 

No, the test suite runs in one copy of Octave, so you are right, this change is unlikely to have an effect on those crashes.

But for creating the figures when building Octave, we do run multiple independent scripts, so I think the patch does help to avoid those problems.

John W. Eaton <jwe>
Group administrator
Wed 19 Aug 2020 11:12:46 AM UTC, comment #73: 

If I correctly understand jwe's fix, it changes the order of tasks done when closing Octave.
Does running the test suite with "make check" open and close Octave repeatedly?
I never verified that. But I assumed it would be comparable to running "__run_test_suite__" at the Octave prompt.

Markus Mützel <mmuetzel>
Group administrator
Sun 16 Aug 2020 03:58:36 AM UTC, comment #72: 

OK, I pushed my changes to stable and merged with default:

http://hg.savannah.gnu.org/hgweb/octave/rev/d075c2f26d1d

John W. Eaton <jwe>
Group administrator
Sat 15 Aug 2020 11:07:48 PM UTC, comment #71: 

It looks like with Rik's patch (short-circuiting the processing of txt files) it now crashes with both gcc and clang.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Sat 15 Aug 2020 07:31:00 PM UTC, comment #70: 

Thanks for the patch

1.) I checked whether my segfault occurs at the same position as Dmitri's (see comment #55). Result: Yes, it does.

2.) I applied JWE's patch (comment #69) and made a clang build. Result: no segfault with unset DISPLAY. Then stopped with lldb at "octave::graphics_toolkit::close at graphics-toolkit.h:279". One further step goes to "gnuplot_graphics_toolkit::close at _init_gnuplot_.cc:151". Looks good.

Hg200 <hg200>
Sat 15 Aug 2020 04:52:21 PM UTC, comment #69: 

What appears to be happening is that the _gnuplot_init_.oct file (the one that defines the gnuplot graphics toolkit) is being closed before the toolkit is unloaded.  So when that happens, the pointer to the toolkit object is invalid.  The same problem exists with fltk.  It doesn't happen with qt because that is not dynamically loaded (with dlopen).  The same sequence of events happens when Octave is compiled with GCC, but for whatever reason the crash doesn't happen there.  So, it seems that we need to either prevent the toolkit's .oct file from being dlclosed (I thought the mlock in the init function would do that!) or ensure that when it is dlclosed, it is also removed/unregistered from the toolkit list.
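 
A toy model of that ordering problem (invented names, not the real gtk_manager code):


#include <map>
#include <memory>
#include <string>

struct graphics_toolkit
{
  virtual ~graphics_toolkit () = default;
  virtual void close () = 0;
};

class toolkit_manager
{
public:
  void register_toolkit (const std::string& name,
                         std::shared_ptr<graphics_toolkit> tk)
  {
    m_toolkits[name] = std::move (tk);
  }

  // Must run from the module's unload hook *before* dlclose: once the
  // entry is gone, unload_all_toolkits can no longer touch stale code.
  void unregister_toolkit (const std::string& name)
  {
    m_toolkits.erase (name);
  }

  void unload_all_toolkits ()
  {
    for (auto& entry : m_toolkits)
      entry.second->close ();   // crashes here if the module that
                                // implements close() was already unmapped
    m_toolkits.clear ();
  }

private:
  std::map<std::string, std::shared_ptr<graphics_toolkit>> m_toolkits;
};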

I'm attaching a possible change to consider.  I'm not sure it is the best solution, but it should at least avoid the crash.

(file #49678)

John W. Eaton <jwe>
Group administrator
Sat 15 Aug 2020 09:33:22 AM UTC, comment #68: 

It looks like this report is getting sidetracked again. It originally was about segfaults when running the test suite.
The errors on creating the graphics for the manual are probably better tracked in bug #56952.

At the moment, it's probable (but not entirely certain) that the errors here are related to graphics.

Markus Mützel <mmuetzel>
Group administrator
Fri 14 Aug 2020 11:51:27 PM UTC, comment #67: 

Build with CLANG:


unset DISPLAY
./run-octave --eval "figure (1,\"visible\",\"off\")"
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
fatal: caught signal Segmentation fault -- stopping myself...
Segmentation fault (core dumped)


Build with GCC - no segfault:


unset DISPLAY
./run-octave --eval "figure (1,\"visible\",\"off\")"
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features


Hg200 <hg200>
Fri 14 Aug 2020 10:31:33 PM UTC, comment #66: 

Hmm - OK. From two clang builds with DISPLAY="" one did catch a segfault. An "incremental make" seems to reproduce the segfault consistently. It is in the .txt file section as already reported below. Rik's delay also does not help here. I am on default.

Hg200 <hg200>
Fri 14 Aug 2020 10:13:08 PM UTC, comment #65: 

I'm working on stable.  It looks like a delay didn't help though.

Rik <rik5>
Group administrator
Fri 14 Aug 2020 10:12:35 PM UTC, comment #64: 

I can also reproduce the crash with clang 10 (on Fedora 32).
Fedora's buildbots have DISPLAY set, so there is not crash there.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:52:15 PM UTC, comment #63: 

Are you working off stable or default?
In any case here is the output on 45a9dcee45db+ (stable)


rm -f src/octave-gui-6.0.1 && \
cd src && ln -s octave-gui octave-gui-6.0.1
rm -f doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi && /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t && mv doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi
fatal: caught signal Segmentation fault -- stopping myself...
/bin/sh: line 1: 2062846 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t
make[2]: *** [Makefile:31022: doc/interpreter/plot-axesproperties.texi] Error 139
make[2]: Leaving directory '/home/dima/src/octave/clang_debug'
make[1]: *** [Makefile:27468: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/clang_debug'
make: *** [Makefile:11093: all] Error 2


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:41:22 PM UTC, comment #62: 

@Dmitri: Since you have a repeatable segfault, could you try the attached patch?


diff -r 45a9dcee45db doc/interpreter/genpropdoc.m
--- a/doc/interpreter/genpropdoc.m        Fri Aug 14 13:37:07 2020 -0700
+++ b/doc/interpreter/genpropdoc.m        Fri Aug 14 14:38:09 2020 -0700
@@ -1911,7 +1911,8 @@ function s = getstructure (objname, base
   endif

   if (isfigure (hf))
-    close (hf)
+    close (hf);
+    pause (0.5);
   endif

 endfunction


I also attached it to this bug report.  This is obviously not determining the root cause, but it might be good enough for the documentation.


(file #49674)

Rik <rik5>
Group administrator
Fri 14 Aug 2020 09:19:58 PM UTC, comment #61: 

The build bots show that the segfault has now shifted from the "txt" images to the calling of genpropdoc.m.  I think it is significant that genpropdoc creates graphics figures and objects in order to query their default properties.  I went ahead and merged the change I made to the image generation files on stable to default since it seems to have improved things.

Rik <rik5>
Group administrator
Fri 14 Aug 2020 09:19:16 PM UTC, comment #60: 

It is critical that you do not have DISPLAY set when you do make.
It finishes OK in my case when DISPLAY is set to the default.

So

DISPLAY="" make -j1 V=1

should do it.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:17:38 PM UTC, comment #59: 

clang-devel-9.0.1-2.fc31.x86_64
clang-libs-9.0.1-2.fc31.x86_64
clang-tools-extra-9.0.1-2.fc31.x86_64
clang-9.0.1-2.fc31.x86_64


Hg200 <hg200>
Fri 14 Aug 2020 09:13:35 PM UTC, comment #58: 

What's clang version?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:11:50 PM UTC, comment #57: 

I still can't reproduce. The switches are:


./configure CC=clang CXX=clang++
make V=1 -j12
C compiler:                    clang  -pthread  -Wall -W -Wshadow -Wformat -Wpointer-arith -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wcast-align -Wcast-qual -g -O2
C++ compiler:                  clang++  -pthread  -Wall -W -Wshadow -Woverloaded-virtual -Wold-style-cast -Wformat -Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual -g -O2


;-(((

Hg200 <hg200>
Fri 14 Aug 2020 09:07:09 PM UTC, comment #56: 

It looks like processing of TEXI files triggers it now.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:01:29 PM UTC, comment #55: 

It still crashes on my workstation:


make[2]: Entering directory '/home/dima/src/octave/clang_debug'
/bin/sh config.status oct-conf-post.h-tmp oct-conf-post.h
config.status: creating oct-conf-post.h-tmp
config.status: executing oct-conf-post.h commands
/bin/sh config.status liboctave/mk-version-h.sh-tmp liboctave/mk-version-h.sh
config.status: creating liboctave/mk-version-h.sh-tmp
config.status: executing liboctave/mk-version-h.sh commands
/bin/sh config.status libinterp/corefcn/mk-mxarray-h.sh-tmp libinterp/corefcn/mk-mxarray-h.sh
config.status: creating libinterp/corefcn/mk-mxarray-h.sh-tmp
config.status: executing libinterp/corefcn/mk-mxarray-h.sh commands
/bin/sh config.status build-aux/subst-config-vals.sh-tmp build-aux/subst-config-vals.sh
config.status: creating build-aux/subst-config-vals.sh-tmp
config.status: executing build-aux/subst-config-vals.sh commands
/bin/sh config.status liboctave/external/mk-f77-def.sh-tmp liboctave/external/mk-f77-def.sh
config.status: creating liboctave/external/mk-f77-def.sh-tmp
config.status: executing liboctave/external/mk-f77-def.sh commands
rm -f doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi && /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t && mv doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi
fatal: caught signal Segmentation fault -- stopping myself...
/bin/sh: line 1: 2052762 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t
make[2]: *** [Makefile:31022: doc/interpreter/plot-axesproperties.texi] Error 139
make[2]: Leaving directory '/home/dima/src/octave/clang_debug'
make[1]: *** [Makefile:27468: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/clang_debug'
make: *** [Makefile:11093: all] Error 2


And the backtrace:


(gdb) thread apply all bt

Thread 2 (Thread 0x7f4753af6700 (LWP 2052933)):
#0  0x00007f487422e4dc in sigtimedwait () from /lib64/libc.so.6
#1  0x00007f48745ca95c in sigwait () from /lib64/libpthread.so.0
#2  0x00007f487bfdd3cf in signal_watcher (arg=0x7f487d7fbbe0 <octave::generic_sig_handler(int)>) at ../liboctave/wrappers/signal-wrappers.c:697
#3  0x00007f48745c02de in start_thread () from /lib64/libpthread.so.0
#4  0x00007f48742f1e83 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f487e074940 (LWP 2052762)):
#0  0x00007f487d67abce in octave::graphics_toolkit::close (this=0x15d3410) at ../libinterp/corefcn/graphics-toolkit.h:279
#1  0x00007f487d676f8a in octave::gtk_manager::unload_all_toolkits (this=0x10ae3c0) at ../libinterp/corefcn/gtk-manager.h:107
#2  0x00007f487d671cab in octave::interpreter::shutdown (this=0x10ad150) at ../libinterp/corefcn/interpreter.cc:902
#3  0x00007f487cb6d975 in octave::cli_application::execute (this=0x7ffcb8683990) at ../libinterp/octave.cc:381
#4  0x0000000000401839 in main (argc=15, argv=0x7ffcb8683cb8) at ../src/main-cli.cc:95


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 08:41:40 PM UTC, comment #54: 

Interesting clues.  Just for fun, I'm testing the idea of short-circuiting the building of "txt" images for the documentation in this changeset (https://hg.savannah.gnu.org/hgweb/octave/rev/45a9dcee45db).  I now need to wait for the buildbots to notice this, unless someone has the permissions and knows how to kick off a manual build of stable-clang-4.0-debian and stable-clang-5.0-debian.

Rik <rik5>
Group administrator
Fri 14 Aug 2020 08:04:03 PM UTC, comment #53: 

Actually serial make crashes as well.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 07:57:32 PM UTC, comment #52: 

I can reproduce this on local computer (with stable) with clang 9
if I do
DISPLAY="" make -j32 V=1


Here is backtrace


(gdb) thread apply all bt

Thread 2 (Thread 0x7f50b4e98700 (LWP 1921806)):
#0  0x00007f51cd5d04dc in sigtimedwait () from /lib64/libc.so.6
#1  0x00007f51cd96c95c in sigwait () from /lib64/libpthread.so.0
#2  0x00007f51d537f3cf in signal_watcher (arg=0x7f51d6b9dbe0 <octave::generic_sig_handler(int)>) at ../liboctave/wrappers/signal-wrappers.c:697
#3  0x00007f51cd9622de in start_thread () from /lib64/libpthread.so.0
#4  0x00007f51cd693e83 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f51d7416940 (LWP 1919268)):
#0  0x00007f51d6a1cbce in octave::graphics_toolkit::close (this=0x1b5c3e0) at ../libinterp/corefcn/graphics-toolkit.h:279
#1  0x00007f51d6a18f8a in octave::gtk_manager::unload_all_toolkits (this=0x16a33c0) at ../libinterp/corefcn/gtk-manager.h:107
#2  0x00007f51d6a13cab in octave::interpreter::shutdown (this=0x16a2150) at ../libinterp/corefcn/interpreter.cc:902
#3  0x00007f51d5f0f975 in octave::cli_application::execute (this=0x7ffc924ac400) at ../libinterp/octave.cc:381
#4  0x0000000000401839 in main (argc=15, argv=0x7ffc924ac728) at ../src/main-cli.cc:95
(gdb)


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 07:09:04 PM UTC, comment #51: 

What's interesting is that the failures all seem to be with files which are not related to actual plotting.  It seems that it is the generation of images in the ".txt" format which is failing, but looking at the m-files in doc/interpreter one sees


  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);


and then


## generate something for the texinfo @image command to process
function image_as_txt (d, nm)
  fid = fopen (fullfile (d, [nm ".txt"]), "wt");
  fputs (fid, "\n");
  fputs (fid, "+---------------------------------+\n");
  fputs (fid, "| Image unavailable in text mode. |\n");
  fputs (fid, "+---------------------------------+\n");
  fclose (fid);
endfunction


So, no real plotting is being done, and it may be the speed with which the graphics system is set up and torn down that is the problem.

Taking plotimages.m as representative, the function begins


function plotimages (d, nm, typ)
  set_graphics_toolkit ();
  set_print_size ();
  hide_output ();
  outfile = fullfile (d, [nm "." typ]);
  if (strcmp (typ, "png"))
    set (groot, "defaulttextfontname", "*");
  endif
  if (strcmp (typ, "eps"))
    d_typ = "-depsc2";
  else
    d_typ = ["-d", typ];
  endif

  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);


and then ends with


  hide_output ();
endfunction


Shooting in the dark, what if we move the test for the "txt" format to the top of the file with this code


function plotimages (d, nm, typ)

  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);
    return;
  endif

  set_graphics_toolkit ();
  set_print_size ();


The graphics system will never get invoked.

Rik <rik5>
Group administrator
Fri 14 Aug 2020 03:01:52 PM UTC, comment #50: 

Huh, since my changes related to bug #58814, the clang builds performed by buildbot on my Debian systems seem to all be failing when generating graphics for the manual.  Maybe it is related to those systems not running the builds in a framebuffer context?  I will try to take a look at that.

John W. Eaton <jwe>
Group administrator
Sat 04 Jul 2020 06:45:34 AM UTC, comment #49: 

this one crashed yesterday and then again 4 hours ago with a core dump:

http://buildbot.octave.org:8010/#/builders/12/builds/1847


Hg200 <hg200>
Fri 26 Jun 2020 03:28:02 PM UTC, comment #48: 

.profile is for non-interactive logins.
You can put some dummy variable there as a check.
Make sure you are using the correct home directory
(/var/lib/buildbot as far as I can tell).

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 26 Jun 2020 03:23:17 PM UTC, comment #47: 

The shell commands appear to be run using /bin/sh in non-interactive mode so startup files like .profile are not executed, at least as far as I can tell.

John W. Eaton <jwe>
Group administrator
Fri 26 Jun 2020 03:02:19 PM UTC, comment #46: 

You should be able to add ulimit to .profile in the buildbot home directory.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 26 Jun 2020 01:51:01 PM UTC, comment #45: 

The following build on one of my buildbot systems failed:

http://buildbot.octave.org:8010/#/builders/22/builds/516

I've been repeatedly running the test suite using this build for the last 8 hours or so and it hasn't failed once.  I'm using


while true ; do
  if nice -n 19 xvfb-run -a -s 'screen 0 640x480x24' make V=1 check ; then
    echo "OK $?"
  else
    echo "NOT OK: $?"
    break
  fi
done


I said earlier that I would set the default ulimit for the buildbots so that we would generate core files, but I'm not sure of the best way to do that.  If I understand correctly, buildbot starts new shells to do each shell command step and I'd rather not have to add ulimit commands to each one.  So it seems that a change like this should be made on the worker systems instead but I don't know what startup file to add the ulimit command to on the build worker system.  Any ideas?

John W. Eaton <jwe>
Group administrator
Thu 25 Jun 2020 01:37:18 PM UTC, comment #44: 

Yes, we are storing the graphics_object and it uses a shared_ptr to hold the base_graphics_object that contains the actual data.  But it does not provide copy-on-write semantics, so although the interpreter thread won't delete the underlying base_graphics_object while the GUI thread holds the reference, it can still change the contents unexpectedly while we are doing something with the data unless we are locking correctly.

I don't know for sure that is the problem here.  Can we guarantee that we are getting the locking right?  To me, that seems harder than implementing copy-on-write semantics for objects, but I'm also not sure what is appropriate for these objects.
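 
For illustration, a minimal sketch of what copy-on-write across the two threads could look like (assumed names, not the existing graphics_object API):


#include <memory>
#include <mutex>

template <typename T>
class cow_ptr
{
public:
  explicit cow_ptr (T v) : m_rep (std::make_shared<T> (std::move (v))) { }

  // Cheap snapshot: copying the shared_ptr under the lock is enough, and
  // the snapshot can then be read without holding any lock.
  std::shared_ptr<const T> read () const
  {
    std::lock_guard<std::mutex> lock (m_mutex);
    return m_rep;
  }

  // Writers never mutate in place: they copy, modify the copy, then swap
  // it in, so concurrent readers keep seeing their old snapshot.
  template <typename F>
  void modify (F f)
  {
    std::lock_guard<std::mutex> lock (m_mutex);
    auto copy = std::make_shared<T> (*m_rep);
    f (*copy);
    m_rep = std::move (copy);
  }

private:
  mutable std::mutex m_mutex;
  std::shared_ptr<T> m_rep;
};


The GUI thread would then work on the snapshot returned by read (), while the interpreter thread funnels all changes through modify ().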

Also, if the real problem in this case is a crash in Mesa, then are there differences in Mesa versions between the systems where the crashes happen frequently vs. those where they are rare (or maybe never happen)?

John W. Eaton <jwe>
Group administrator
Thu 25 Jun 2020 10:58:16 AM UTC, comment #43: 

@John: The GUI Object (a Figure here) already stores a reference to the underlying graphics_object, see this excerpt from Object.h:


    // Store the graphics object directly so that it will exist when
    // we need it.  Previously, it was possible for the graphics
    // toolkit to get a handle to a figure, then have the interpreter
    // thread delete the corresponding object before the graphics
    // toolkit (GUI) thread had a chance to display it.  It should be OK
    // to store this object and use it in both threads (graphics_object
    // uses a std::shared_ptr) provided that we protect access with
    // mutex locks.
    graphics_object m_go;


After this addition, we should have changed all the logic and removed m_handle since m_go lets us access the object directly.

Anyway, is it just me, or is what we see in the backtrace a crash in Mesa, not in Octave?


Pantxo Diribarne <pantxo>
Group Member
Wed 24 Jun 2020 09:01:34 PM UTC, comment #42: 

In answer to the question in comment #37, no, there is no special code that I know of to detect whether we are using a real display or some framebuffer thing.

What I was thinking might be happening is that the GUI thread accesses a graphics object (which belongs to the interpreter) and uses it without acquiring and holding a lock so that the interpreter could modify or delete the graphics object while the GUI thread is using it.  Unlike octave_value objects, the graphics objects do not have copy-on-write semantics, so it seems there could be trouble.

Here is QtHandles::Figure::slotGetPixels, which is one of the functions that shows up in the stack trace shown in comment #41:


  uint8NDArray
  Figure::slotGetPixels (void)
  {
    uint8NDArray retval;
    Canvas *canvas = m_container->canvas (m_handle);

    if (canvas)
      {
        gh_manager& gh_mgr = m_interpreter.get_gh_manager ();

        gh_mgr.process_events ();
        octave::autolock guard (gh_mgr.graphics_lock ());
        retval = canvas->getPixels ();
      }

    return retval;
  }


If I understand correctly, m_handle is the figure number for the current figure and is used to find the graphics object for the figure object.  Is it possible that processing events could invalidate m_handle?  It doesn't seem like that's what's happening here because if it were, then I would expect the call to gh_mgr.get_object in GLCanvas::do_getPixels to fail.

But this is the kind of thing that looks suspect to me.  It seems like the way the GUI thread uses graphics handles and objects that belong to the interpreter thread is not clearly defined.

If the GUI thread stores a handle to a graphics object (or a graphics object itself) then it seems to me that it should somehow grab a reference to it in a way that can either be checked later to ensure that it remains valid, or that will prevent it from being modified/deleted until the GUI thread no longer needs it.
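 
A minimal sketch of that "grab a checkable reference" idea (stand-in types, not the actual Octave classes):


#include <iostream>
#include <memory>

// Stand-in for the interpreter-owned object (name is illustrative).
struct base_graphics_object { int value = 42; };

// The GUI side keeps only a weak reference.  Promoting it either yields
// a shared_ptr that pins the object alive for the duration of the call,
// or fails cleanly if the interpreter thread already deleted the object.
struct gui_object
{
  std::weak_ptr<base_graphics_object> m_go_ref;

  int read_value () const
  {
    if (auto go = m_go_ref.lock ())   // still alive: pinned until return
      return go->value;
    return -1;                        // deleted: detected, not dereferenced
  }
};

int main ()
{
  auto go = std::make_shared<base_graphics_object> ();
  gui_object ui { go };
  std::cout << ui.read_value () << "\n";  // 42
  go.reset ();                            // interpreter deletes the object
  std::cout << ui.read_value () << "\n";  // -1, instead of a crash
  return 0;
}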


John W. Eaton <jwe>
Group administrator
Sat 20 Jun 2020 06:34:52 PM UTC, comment #41: 

I built with address sanitizer flags enabled and ran "make check". During the fixed tests in publish/publish.tst, I got a heap-buffer-overflow.
I was able to reproduce it twice (out of two runs) when running the complete test suite.

The attached log contains the backtrace and other info from the address sanitizer.

I'm not sure if this is related or something different. But it might be a graphics/threading issue afaict.

Wrt what Pantxo wrote on the maintainer's mailing list [1]: I came across this blog post [2]. It looks like the general idea could be applied cross-platform.
Would that be helpful?

[1]: https://lists.gnu.org/archive/html/octave-maintainers/2020-06/msg00067.html

[2]: https://devblogs.microsoft.com/oldnewthing/20130712-00/?p=3823

(file #49330)

Markus Mützel <mmuetzel>
Group administrator
Thu 11 Jun 2020 02:05:29 PM UTC, comment #40: 

RE: comment #38, I'll try to make that change soon.  We should be testing full Qt builds.  I could also set up some separate builds to continue testing with gnuplot, but that's a lower priority for me.

John W. Eaton <jwe>
Group administrator
Wed 10 Jun 2020 02:44:46 PM UTC, comment #39: 

Low prio and JFYI: I have spent a considerable amount of time trying to force a segfault on Fedora, either with gcc or with clang. E.g., I ran the test suite in a forever loop for over a day. I also had no luck with "nice -19", which is the adjustment on the build bots.

Fedora Core 31
gcc version 9.3.1
clang version 9.0.1
Target: x86_64

Hg200 <hg200>
Wed 10 Jun 2020 02:09:00 PM UTC, comment #38: 

I think it would be interesting to have some (or all) of the Debian buildbots run make through xvfb as well, so the documentation will be built with Qt graphics. Then we will see if there are any crashes.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Wed 10 Jun 2020 01:53:25 PM UTC, comment #37: 

I haven't found a "no-extras" buildbot that crashed with that error so far.

The only one I found with a crash was this one here:
http://buildbot.octave.org:8010/#/builders/7/builds/1615/steps/6/logs/stdio

Please help improve Octave by contributing tests for these files
(see the list in the file /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/test/fntests.log).
double free or corruption (out)
fatal: caught signal Aborted -- stopping myself...
/bin/bash: line 1: 3679392 Aborted                 /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/../src/test
make[3]: *** [Makefile:31583: check-local] Error 134
make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make[2]: *** [Makefile:27717: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make[1]: *** [Makefile:27419: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make: *** [Makefile:27719: check] Error 2
program finished with exit code 2
elapsedTime=784.988686


But that one looks different and has probably been fixed in the meantime.

Also I haven't found a Fedora buildbot that crashed with that signature. (The ones mentioned in some of the comments here are due to bug #55225).

If I correctly understood yesterday, the Debian workers use Xvfb for plotting with the "qt" graphics toolkit while running the test suite.
Could that be related? Does Octave use a different code path if the framebuffer is virtual?

Markus Mützel <mmuetzel>
Group administrator
Sun 07 Jun 2020 05:12:42 PM UTC, comment #36: 

I've never worked with coredumps. So I can't judge how useful that would be.

Markus Mützel <mmuetzel>
Group administrator
Thu 04 Jun 2020 03:13:36 PM UTC, comment #35: 

The buildbots don't explicitly enable core dumps, so it depends on the prevailing system settings.  We could change that.

John W. Eaton <jwe>
Group administrator
Thu 04 Jun 2020 02:16:44 PM UTC, comment #34: 

And another one for "clang-5.0-debian" (with "plot/util/saveas.m" as the last test in the output):
http://buildbot.octave.org:8010/#/builders/12/builds/1797/steps/7/logs/stdio

Do the buildbots store core dumps?

Markus Mützel <mmuetzel>
Group administrator
Wed 03 Jun 2020 05:29:57 PM UTC, comment #33: 

Thanks for the hint. I'll try to produce a core dump. But the crashes are very rare for me. I've seen maybe one every few weeks or months...

While some of the crashes on the build bots occurred while plotting tests were running, they generally seem to be in more random places.
I've probably missed a bunch of them (see bug #58393). But here is a list of crashes, with the last test appearing in the log, in no particular order, grouped by builder:

clang-4.0-debian:
  miscellaneous/copyfile.m
  plot/appearance/title.m
  plot/appearance/legend.m
  sparse/sprand.m
  linear-algebra/bandwidth.m
  optimization/optimget.m

clang-5.0-debian:
  special-matrix/hadamard.m
  miscellaneous/tar.m
  statistics/movmedian.m
  plot/appearance/annotation.m

gcc-7-debian:
  miscellaneous/isfile.m

gcc-7-lto-debian:
  java/usejava.m

At first glance, it doesn't look like these functions have anything special in common. I'm not sure if it has anything to do with graphics.

Markus Mützel <mmuetzel>
Group administrator
Wed 03 Jun 2020 02:58:07 PM UTC, comment #32: 

Try enabling core dumps.  In bash, it is "ulimit -c unlimited".  Then you can run gdb on the core dump and the executable file that produced it.

If these are happening in the Qt graphics code, then I suspect more threading issues, probably because we are not using mutexes appropriately when accessing graphics objects.

John W. Eaton <jwe>
Group administrator
Wed 03 Jun 2020 12:12:29 PM UTC, comment #31: 

And here (clang-4.0-debian):
http://buildbot.octave.org:8010/#/builders/10/builds/1548/steps/7/logs/stdio

I tried to reproduce by running the test suite in gdb ("make check RUN_OCTAVE_OPTIONS=-g") repeatedly. But it looks like the crash never occurs in the debugger (or I was just unlucky).
However, my gcc is version 9.3.0. But even with that version I've seen the occasional crash when running the test suite. Unfortunately, those happened when I wasn't running in a debugger.
Is there any way to get additional information after a program has crashed on Ubuntu 20.04 (like the system log on Windows)?

Also, looking at which buildbots failed, I'm starting to doubt whether the crashes depend on the compiler or its version.

Markus Mützel <mmuetzel>
Group administrator
Wed 03 Jun 2020 11:51:13 AM UTC, comment #30: 
Markus Mützel <mmuetzel>
Group administrator
Sat 30 May 2020 02:38:01 PM UTC, comment #29: 

And another one from a clang buildbot (clang-4.0-debian):
http://buildbot.octave.org:8010/#/builders/10/builds/1539/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Tue 26 May 2020 09:37:20 AM UTC, comment #28: 

This time it was a builder that uses gcc (gcc-7-debian) that crashed while running the test suite:
http://buildbot.octave.org:8010/#/builders/19/builds/1623/steps/6/logs/stdio

Markus Mützel <mmuetzel>
Group administrator
Fri 15 May 2020 07:47:07 AM UTC, comment #27: 

The other bug is probably bug #56952.

Markus Mützel <mmuetzel>
Group administrator
Fri 15 May 2020 07:44:47 AM UTC, comment #26: 

Another recent segfault during the test suite with clang on Debian:
http://buildbot.octave.org:8010/#/builders/10/builds/1488/steps/6/logs/stdio

Randomly checking a bunch of build logs, it looks like the error here occurs mostly (or exclusively) with the clang buildbots on Debian. (They would probably be easier to spot if tests with a segfault were marked as failed, see comment #13.)

The failing buildbots on Fedora might be caused by something similar. But those are probably a different issue.

Changing the bug title again to something more appropriate. Sorry for the distraction.

Markus Mützel <mmuetzel>
Group administrator
Thu 14 May 2020 10:20:22 PM UTC, comment #25: 

I was trying to point out the difference in comment #18 between a bug in the test suite and a bug building documentation.

This report is about the test suite, and I think we can make more progress here because it should be possible to build an image with debugging symbols and then run 'make check' until a segfault is captured.

Rik <rik5>
Group administrator
Thu 14 May 2020 07:00:02 PM UTC, comment #24: 

I just realized I was confused by all those coredumps. Those happen during building the docs, not the test suite. I think we have a docs-build crash bug somewhere as well.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:59:05 PM UTC, comment #23: 

Is it just me or does it seem like this bug has now been completely hijacked by a different bug?

The original bug was about segmentation faults running the test suite with clang builds. The recent comments seem to be about segmentation faults when building the doc images.

These are separate issues, please keep that in mind and do with this report whatever you think is best.

Mike Miller <mtmiller>
Group Member
Thu 14 May 2020 06:57:20 PM UTC, comment #22: 

This is the trace for that crash:


May 14 12:12:09 i7 systemd-coredump[528353]: Process 527783 (lt-octave-gui) of user 1002 dumped core.

                                             Stack trace of thread 527783:
                                             #0  0x00007f63e16c3ef9 _ZN11QMetaObject12invokeMethodEP7QObjectPKcN2Qt14ConnectionTypeE22QGenericReturnArgument16QGenericAr>
                                             #1  0x00007f63e40a36a0 n/a (/home/buildbotu/fc25-x86_64/gcc-fedora/build/libgui/.libs/liboctgui.so.6.0.0 + 0x1826a0)
May 14 12:12:09 i7 systemd[1]: systemd-coredump@3-528352-0.service: Succeeded.


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:51:07 PM UTC, comment #21: 

Buildbot jobs run with V=1, so here is the latest failure:
http://buildbot.octave.org:8010/#/builders/11/builds/1521/steps/5/logs/stdio

But it is not 100% reproducible. Sometimes it is etex, sometimes it
is GraphicsMagick, sometimes it is postscript.

Dmitri.
--
 


Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:38:53 PM UTC, comment #20: 

Can you run with V=1 to maybe show exactly what command is being executed and failing?

John W. Eaton <jwe>
Group administrator
Thu 14 May 2020 06:28:38 PM UTC, comment #19: 

It always looks to me like there is a race condition in parallel make (Octave starts writing an output file to disk, but etex
already uses it to compile some document, etc.). It never happens with serial make.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 03:43:39 PM UTC, comment #18: 

Does this problem go away if 'make -j1' is used?  I seem to remember that this is caused by high load on the system.

If it does go away, then this might be something to do with the run-octave script and competition for some shared resource.

But that is likely to be a different error than the one from the test suite.

Rik <rik5>
Group administrator
Thu 14 May 2020 03:29:41 PM UTC, comment #17: 

When compiled with debug flags (-O0 -ggdb3) the failure rate is much lower, and the one failure I got did not generate a crash dump:


 GEN      doc/interpreter/splinefit6.png
  MAKEINFO ../doc/interpreter/octave.info
  TEXI2DVI doc/interpreter/octave.dvi
  MAKEINFO doc/interpreter/octave.html/.octave-html-stamp
/usr/bin/texi2dvi: etex exited with bad status, quitting.
make[2]: *** [Makefile:31088: doc/interpreter/octave.dvi] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory '/home/dima/src/octave/gcc_debug'
make[1]: *** [Makefile:27415: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/gcc_debug'
make: *** [Makefile:11050: all] Error 2


Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 03:06:51 PM UTC, comment #16: 

OK, I updated buildbot to Fedora 32.
gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)

clang version 10.0.0 (Fedora 10.0.0-1.fc32)

It still crashes the same way (running a build of dev as a user):


 GEN      doc/interpreter/interpderiv2.txt
fatal: caught signal Segmentation fault -- stopping myself...
fatal: caught signal Segmentation fault -- stopping myself...
fatal: caught signal Segmentation fault -- stopping myself...
  GEN      doc/interpreter/plot.txt
  GEN      doc/interpreter/hist.txt
  GEN      doc/interpreter/errorbar.txt
  GEN      doc/interpreter/polar.txt
  GEN      doc/interpreter/mesh.txt
/bin/sh: line 1: 139633 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'delaunay', 'txt');"
make[2]: *** [Makefile:31213: doc/interpreter/delaunay.txt] Error 139
make[2]: *** Waiting for unfinished jobs....
/bin/sh: line 1: 139587 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'convhull', 'txt');"
make[2]: *** [Makefile:31211: doc/interpreter/convhull.txt] Error 139
/bin/sh: line 1: 139580 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'griddata', 'txt');"
make[2]: *** [Makefile:31209: doc/interpreter/griddata.txt] Error 139
make[2]: Leaving directory '/home/dima/src/octave/gcc_def'
make[1]: *** [Makefile:27415: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/gcc_def'
make: *** [Makefile:11050: all] Error 2


This is a trace I got in the system logs:

May 14 10:57:55 i7 systemd-coredump[140692]: Process 139633 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139633:
                                             #0  0x00007fac4c40d750 n/a (n/a + 0x0)
                                             #1  0x00007fac8845749e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd-coredump[140696]: Process 139587 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139587:
                                             #0  0x0000000001e6c948 n/a (n/a + 0x0)
                                             #1  0x00007f1a0ae0049e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd[1]: systemd-coredump@0-140691-0.service: Succeeded.
May 14 10:57:55 i7 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@0-140691-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 10:57:55 i7 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@1-140694-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 10:57:55 i7 systemd[1]: systemd-coredump@1-140694-0.service: Succeeded.
May 14 10:57:55 i7 systemd-coredump[140697]: Process 139580 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139580:
                                             #0  0x00007f2a1fa7fdd2 _ZN7QObject7connectEPKS_PKcS1_S3_N2Qt14ConnectionTypeE (libQt5Core.so.5 + 0x275dd2)
                                             #1  0x00007f2a2244249e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd[1]: systemd-coredump@2-140695-0.service: Succeeded.


I will try a debug build to see if I get more info.

Dmitri.
--



Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 02:39:44 PM UTC, comment #15: 

This is going to be really hard to debug unless we can get a stacktrace.

For starters, does someone have a machine set up which mimics one of the failing buildbot configurations (such as gcc on Fedora or an older version of clang)?  Can you get semi-repeatable crashes on that machine?

Rik <rik5>
Group administrator
Thu 14 May 2020 08:48:14 AM UTC, comment #14: 

Changing the bug title to reflect that the segfaults occur not only with clang but also, rarely, on Ubuntu.

Markus Mützel <mmuetzel>
Group administrator
Thu 14 May 2020 08:41:47 AM UTC, comment #13: 

The errors might be more prevalent than the green markings in the buildbot's waterfall view suggest.
If a test aborts the run with a segmentation fault, the overall step is still marked as "green".
I wrote about this a while back on the mailing list:
https://octave.1599824.n4.nabble.com/buildbots-False-pass-results-for-segmentation-fault-in-test-td4695266.html
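
One plausible mechanism (a sketch only; I have not verified it against our actual buildbot configuration): if the test step pipes Octave's output through another process, the shell reports the exit status of the last command in the pipeline, so the segfault's status 139 is silently discarded:

    # hypothetical test step: the pipeline's exit status is tee's, not octave's
    ./run-octave --norc --eval "__run_test_suite__ ()" 2>&1 | tee test.log
    echo $?          # prints 0 even if octave was killed by SIGSEGV
    set -o pipefail  # in bash, this would propagate the 139 instead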

Maybe someone with more expertise in setting up buildbot could take a look?

That won't solve the actual issue. But it might make it easier to judge the impact and prevalence.

Markus Mützel <mmuetzel>
Group administrator
Wed 13 May 2020 08:09:41 PM UTC, comment #12: 

On the buildbot (still fedora 31):

gcc -v
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 9.3.1 20200408 (Red Hat 9.3.1-2) (GCC)
[dima@i7 ~]$ cat /etc/redhat-release
Fedora release 31 (Thirty One)

clang -v
clang version 9.0.1 (Fedora 9.0.1-2.fc31)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-redhat-linux/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/9
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-redhat-linux/9
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64

On CentOS
gcc version 8.3.1 20190507
clang version 8.0.1 (Red Hat 8.0.1-1.module_el8.1.0+215+a01033fb)

The problem did not manifest itself with the 5.x release (at least not as obviously and reproducibly as it does now).

I think many people do not see the problem very often because they do incremental rebuilds that do not regenerate the docs.
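
The failing step can be forced without a full rebuild by deleting the generated files and remaking just those targets (a sketch based on the Makefile targets shown in the log above):

    rm -f doc/interpreter/convhull.txt doc/interpreter/griddata.txt doc/interpreter/delaunay.txt
    make doc/interpreter/convhull.txt doc/interpreter/griddata.txt doc/interpreter/delaunay.txt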

Dmitri.
--


Dmitri A. Sergatskov <dasergatskov>
Wed 13 May 2020 07:58:55 PM UTC, comment #11: 

The previous problem seemed to be with old versions of the clang compiler (4 & 5).  It might be the same situation with gcc on Fedora: the compiler there may be old or broken in some manner.  What were the version numbers for gcc and Fedora?

Rik <rik5>
Group administrator
Wed 13 May 2020 06:25:08 PM UTC, comment #10: 

It fails with gcc on Fedora/Centos as well.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 20 Jan 2020 07:59:31 AM UTC, comment #9: 

Thanks for testing again.

There seem to be occasional segfaults also for the gcc-7-lto-debian buildbot:
http://buildbot.octave.org:8010/#/builders/24/builds/1141/steps/6/logs/stdio

This might be related or it might be caused by something different.

Markus Mützel <mmuetzel>
Group administrator
Fri 17 Jan 2020 10:03:36 PM UTC, comment #8: 

I ran the test suite twice while compiling other projects in the background, and it made no difference for me.

Mike Miller <mtmiller>
Group Member
Fri 17 Jan 2020 06:23:28 AM UTC, comment #7: 

@Mike: Thanks for your tests. The segfaults occur only intermittently on the buildbots. It might be that they only happen when the machine is under heavy load, e.g. when an mxe job is running on the same machine at the same time.

Could you please try to stress your machine while you are running the test suite and check if it still doesn't segfault?
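
For example (a sketch, assuming the stress-ng tool is installed; any CPU-heavy parallel build in the background should work just as well):

    # saturate all cores while the test suite runs
    stress-ng --cpu $(nproc) --timeout 60m &
    make check
    kill %1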

Markus Mützel <mmuetzel>
Group administrator
Thu 16 Jan 2020 10:53:06 PM UTC, comment #6: 

I further built the default branch with Clang 6 and Clang 10 (Git snapshot), and the full test suite runs fine with them. I don't have any older versions of Clang readily available without setting up a container or VM.

So if this buildbot segmentation fault is real, it seems to have been fixed in Clang 6 and later.

I don't think there's any particular reason why those buildbots use Clang 4 and 5, probably just old configurations that haven't been updated.

Mike Miller <mtmiller>
Group Member
Thu 16 Jan 2020 08:10:44 AM UTC, comment #5: 

That wasn't clear to me either.
Would it make sense to update the buildbots to use newer clang versions? Or is there a particular reason they run with clang 4 and clang 5?

Markus Mützel <mmuetzel>
Group administrator
Wed 15 Jan 2020 09:10:31 PM UTC, comment #4: 

Ok, that wasn't clear in this report. If this error only occurs with older versions of Clang, and never with newer versions, then it's probably not worth fixing, right?

Mike Miller <mtmiller>
Group Member
Wed 15 Jan 2020 09:00:19 PM UTC, comment #3: 

Clang 8 is reasonably new; the fault is with clang 4 and 5.
Fedora has clang 9, and it is also fine.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 15 Jan 2020 08:55:34 PM UTC, comment #2: 

I am not able to reproduce this segmentation fault on my system (Debian) with Clang version 8 (with or without xvfb-run). The full test suite runs for me with only 2 test failures in publish.tst, exactly the same results as with GCC.

Mike Miller <mtmiller>
Group Member
Tue 14 Jan 2020 04:22:52 PM UTC, comment #1: 

Maybe we need to compile a version with debugging symbols and run "__run_test_suite__" manually under a debugger so that a backtrace can be obtained.
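
A minimal sketch of that workflow (assuming run-octave's -g option still wraps the invocation in gdb; flags and paths may need adjusting):

    # build with debug symbols and without optimization
    ./configure CFLAGS="-ggdb3 -O0" CXXFLAGS="-ggdb3 -O0" FFLAGS="-ggdb3 -O0"
    make
    # start Octave under gdb via the build-tree wrapper script
    ./run-octave -g --norc --eval "__run_test_suite__ ()"
    # at the gdb prompt: run; after a crash: thread apply all bt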

This could be a problem with clang, but more likely it is something generic that we are doing incorrectly that only occasionally surfaces for certain combinations of compiler, libraries, and machine.

Rik <rik5>
Group administrator
Mon 13 Jan 2020 03:34:54 PM UTC, original submission:  

The test suite repeatedly fails to complete on the clang buildbots:
http://buildbot.octave.org:8010/#/builders/12/builds/1525/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1322/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/12/builds/1522/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1317/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1315/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/12/builds/1516/steps/6/logs/stdio
And several more.

The segmentation fault seems to occur in random tests. It happens with both the clang 4.0 and the clang 5.0 buildbots.

This should ideally be fixed before releasing Octave 6.1.
If we don't aim to support clang, the priority can probably be lowered.

Markus Mützel <mmuetzel>
Group administrator

 


Attached Files
file #49678:  shutdown-diffs.txt added by jwe (2KiB - text/plain)
file #49674:  57591.genpropdoc.diff added by rik5 (353B - text/x-patch)
file #49330:  asan_publish.log added by mmuetzel (7KiB - application/octet-stream)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by siko1056 (Posted a comment)
  • -email is unavailable- added by pantxo (Posted a comment)
  • -email is unavailable- added by hg200 (Posted a comment)
  • -email is unavailable- added by jwe (Posted a comment)
  • -email is unavailable- added by dasergatskov (Posted a comment)
  • -email is unavailable- added by rik5 (Posted a comment)
  • -email is unavailable- added by mmuetzel (Submitted the item)


    Follow 21 latest changes.

    Date        Changed by   Updated Field   Previous Value => Replaced by
    2021-12-02  mmuetzel     Status          Ready For Test => Confirmed
    2021-11-30  mmuetzel     Status          Confirmed => Ready For Test
    2021-07-30  mmuetzel     Status          Fixed => Confirmed
    2021-07-18  jwe          Open/Closed     Closed => Open
    2021-07-15  mmuetzel     Status          Confirmed => Fixed
                             Open/Closed     Open => Closed
    2021-05-12  mmuetzel     Summary         Segmentation faults when running the test suite (mostly with clang) => Segmentation faults when running the test suite
    2020-10-27  jwe          Attached File   - => Added stable-clang-debian-stack-trace.txt, #50140
    2020-09-29  doronbehar   Carbon-Copy     Removed 111330 => -
    2020-08-24  mmuetzel     Severity        5 - Blocker => 4 - Important
    2020-08-19  mmuetzel     Status          Ready For Test => Confirmed
    2020-08-16  jwe          Status          Patch Submitted => Ready For Test
    2020-08-15  jwe          Attached File   - => Added shutdown-diffs.txt, #49678
                             Status          Confirmed => Patch Submitted
    2020-08-14  rik5         Attached File   - => Added 57591.genpropdoc.diff, #49674
    2020-06-20  mmuetzel     Attached File   - => Added asan_publish.log, #49330
                             Status          None => Confirmed
    2020-05-15  mmuetzel     Status          Confirmed => None
                             Summary         Segmentation faults when running the test suite (more prevalent on Fedora) => Segmentation faults when running the test suite (mostly with clang)
    2020-05-14  rik5         Status          None => Confirmed
    2020-05-14  mmuetzel     Summary         Segmentation faults with clang when running the test suite => Segmentation faults when running the test suite (more prevalent on Fedora)
