bug #57591: Segmentation faults when running the test suite (mostly with clang)

Submitted by: Markus Mützel <mmuetzel>
Submitted on: Mon 13 Jan 2020 03:34:54 PM UTC

Category: Test Suite
Severity: 4 - Important
Priority: 5 - Normal
Item Group: Segfault, Bus Error, etc.
Status: Confirmed
Assigned to: None
Originator Name:
Open/Closed: Open
Release: dev
Operating System: GNU/Linux


Thu 12 Nov 2020 08:46:24 AM UTC, comment #136: 

Another one with the test for "sparse/gmres.m" as the last line in the log before the crash:
http://buildbot.octave.org:8010/#/builders/33/builds/233/steps/7/logs/stdio (gcc-debian)

Markus Mützel <mmuetzel>
Project Member
Wed 28 Oct 2020 10:47:59 AM UTC, comment #135: 

I'm not sure if I correctly understand the backtrace. It looks like somehow an output from gdb ("Python Exception <class 'gdb.error'> There is no member named _M_dataplus.") made it into Octave's output buffer (in thread 22)...
Is that expected?

IIRC, there was a report in the past where Java didn't play well with gdb. Could this be one of those cases again?

Markus Mützel <mmuetzel>
Project Member
Tue 27 Oct 2020 06:30:45 PM UTC, comment #134: 

Looking at the dump, Thread #1 is:

Thread 1 (Thread 0x7f32cc109700 (LWP 13095)):
#0  0x00007f32d8f5d30b in  ()
#1  0x00007f32d8f5c754 in  ()
#2  0x00000180cc0f1f50 in  ()
#3  0x00007f32bc5592a0 in  ()
#4  0x00007f32cc0f1f30 in  ()
#5  0x0000000000000180 in  ()
#6  0x00007f32cc0f2140 in  ()
#7  0x00000000ffffffff in  ()
#8  0x00007f32cc0f2020 in  ()
#9  0x00007f32d8f5b22c in  ()
#10 0x00007f32cc0f1fb0 in  ()
#11 0x00007f32f6babca9 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#12 0x00007f32f66aa509 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#13 0x00007f32f6720c56 in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#14 0x00007f32f672359c in  () at /usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so
#15 0x00007f32f402c98a in Java_sun_awt_X11_XToolkit_waitForEvents () at /usr/lib/jvm/java-11-openjdk-amd64/lib/libawt_xawt.so
#16 0x00007f32e03f7a81 in  ()
#17 0x00007f32cc0f24c0 in  ()
#18 0x00007f32f49bfff0 in  ()
#19 0x0000000000000000 in  ()

So is it something to do with Java?  Does it ever crash if the build process is configured with --disable-java?
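
For reference, that amounts to rebuilding with the standard configure option, e.g. (a sketch, assuming the usual build layout):

./configure --disable-java
make && make check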

Rik <rik5>
Project Administrator
Tue 27 Oct 2020 05:26:51 PM UTC, comment #133: 

I disabled the buildbot jobs for the build mentioned in comment #132 so that it would not be wiped out by a later execution of the buildbot stable-clang-debian build.

I didn't find a core file there.  Maybe I missed it?  Anyway, I was able to execute the tests in a loop until I triggered a segfault.  The output from make at the time of the segfault was

sparse/spconvert.m .............................................fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 4174536 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian/build/../src/test
make[3]: *** [Makefile:31664: check-local] Error 139
make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make[2]: *** [Makefile:27801: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make[1]: *** [Makefile:27503: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/stable-clang-debian-save/build'
make: *** [Makefile:27803: check] Error 2

The full stack trace for all threads is attached.  A simple "where" command in gdb shows the process was in thread 1 at the time of the fault.  The interpreter appears to be attempting to execute a shell command with "system".  There is no direct call to system in the tests for spconvert.m, so maybe it is from a call to "print_usage" when it is calling makeinfo to format the help text?
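
For reference, the kind of gdb invocation used to pull such a trace from the core file (a sketch; the paths assume the libtool-built binary under src/.libs rather than the run-octave wrapper):

gdb src/.libs/octave-cli core
(gdb) where                  # backtrace of the faulting thread
(gdb) thread apply all bt    # full trace for all threads, as attached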

(file #50140)

John W. Eaton <jwe>
Project Administrator
Sun 25 Oct 2020 02:41:39 AM UTC, comment #132: 

And now it is
jwe-debian-x86_64-0/stable-clang-debian/

http://buildbot.octave.org:8010/#/builders/36/builds/83/steps/7/logs/stdio

Dmitri A. Sergatskov <dasergatskov>
Fri 23 Oct 2020 02:24:34 PM UTC, comment #131: 

My buildbot systems all have the following packages installed:

ii  libsuitesparse-dev:amd64    1:5.8.1+dfsg-2 amd64
ii  libsuitesparseconfig5:amd64 1:5.8.1+dfsg-2 amd64

John W. Eaton <jwe>
Project Administrator
Thu 22 Oct 2020 09:03:07 PM UTC, comment #130: 

For the record, on Fedora buildbot
suitesparse.x86_64                          5.4.0-5.fc33

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 22 Oct 2020 08:53:15 PM UTC, comment #129: 

That's interesting.  We should probably wait a few more days to see if this establishes itself as a true pattern.

Rik <rik5>
Project Administrator
Thu 22 Oct 2020 04:37:57 PM UTC, comment #128: 

The point where the test suite crashes on the buildbots seems to have become less random over the last few days. The three most recent crashes all had "sparse/gmres.m" as the last test in the log:
http://buildbot.octave.org:8010/#/builders/31/builds/161/steps/7/logs/stdio (gcc-lto-debian)
http://buildbot.octave.org:8010/#/builders/33/builds/186/steps/7/logs/stdio (gcc-debian)
http://buildbot.octave.org:8010/#/builders/31/builds/176/steps/7/logs/stdio (gcc-lto-debian)

Markus Mützel <mmuetzel>
Project Member
Mon 12 Oct 2020 09:29:22 PM UTC, comment #127: 

http://buildbot.octave.org:8010/#/builders/32/builds/124/steps/7/logs/stdio

"clang-debian" again. Seems to be crashed just before "print_test_file_name" and after "print_pass_fail" this time.

Hg200 <hg200>
Sat 03 Oct 2020 10:31:30 AM UTC, comment #126: 

Another one:
http://buildbot.octave.org:8010/#/builders/32/builds/113/steps/7/logs/stdio

"clang-debian" with "set/ismember.m" last in the log.

Markus Mützel <mmuetzel>
Project Member
Sat 03 Oct 2020 12:18:07 AM UTC, comment #125: 

clang-debian/113

  profiler/profshow.m ............................................ pass    4/4
  set/intersect.m ................................................ pass   28/28
fatal: caught signal Segmentation fault -- stopping myself...
/bin/bash: line 1: 941528 Segmentation fault      (core dumped) /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build/../src/test
make[3]: *** [Makefile:31841: check-local] Error 139
  set/ismember.m .................................................make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make[2]: *** [Makefile:27961: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make[1]: *** [Makefile:27663: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-0/clang-debian/build'
make: *** [Makefile:27963: check] Error 2
program finished with exit code 2
elapsedTime=380.436578

Dmitri A. Sergatskov <dasergatskov>
Wed 30 Sep 2020 11:02:13 AM UTC, comment #124: 

The next one:
http://buildbot.octave.org:8010/#/builders/35/builds/52/steps/7/logs/stdio

"stable-gcc-lto-debian" with "struct.tst" last in the log.

Markus Mützel <mmuetzel>
Project Member
Tue 29 Sep 2020 05:57:16 PM UTC, comment #123: 

On Fedora there is a service that stores coredumps in its own directory
(and writes a trace to syslog).
I assume there is something similar available for Debian, e.g.:
https://manpages.debian.org/stretch/systemd-coredump/coredumpctl.1.en.html

You'd still need to act fast if you want to get the trace
since the executable gets wiped out too...

Sincerely,

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 29 Sep 2020 05:46:05 PM UTC, comment #122: 

Unfortunately, the buildbot directories are currently reinitialized at each run, so the generated core files have disappeared by the time I'm able to check for them.  Maybe I can fix the buildbot config to preserve these failures somehow.

John W. Eaton <jwe>
Project Administrator
Tue 29 Sep 2020 06:35:46 AM UTC, comment #121: 

And another one:
http://buildbot.octave.org:8010/#/builders/31/builds/96/steps/7/logs/stdio

"gcc-lto-debian" with "java/javaaddpath.m" last in the log.

Markus Mützel <mmuetzel>
Project Member
Mon 28 Sep 2020 07:10:51 AM UTC, comment #120: 

Another one here:
http://buildbot.octave.org:8010/#/builders/32/builds/84/steps/7/logs/stdio

That one was for "clang-debian" with `optimization/lsqnonneg.m` as the last test in the log.

Markus Mützel <mmuetzel>
Project Member
Tue 22 Sep 2020 06:47:10 AM UTC, comment #119: 

jwe: Thanks for checking.
I guess that probably proves that the segfaults are not caused by bug #58790.

Markus Mützel <mmuetzel>
Project Member
Tue 22 Sep 2020 01:19:18 AM UTC, comment #118: 

Markus: I see 2 in /proc/sys/vm/overcommit_memory on all my buildbot workers so I think that setting is already active.

John W. Eaton <jwe>
Project Administrator
Mon 21 Sep 2020 07:32:06 PM UTC, comment #117: 

for i in $(find . -type f); do grep -q Segmentation "$i" && awk '/\./ && /Leaving/ && /directory/' "$i" && ls "$i"; done

Hg200 <hg200>
Mon 21 Sep 2020 06:27:48 PM UTC, comment #116: 

The faults reported here since July are

clang-5.0-debian
  special-matrix/hadamard.m
  strings/hex2dec.m
  statistics/range.m

clang-debian
  plot/draw/hist.m
  polynomial/polyval.m

stable-clang-debian
  io.tst

gcc-7-debian
  plot/draw/trisurf.m

stable-gcc-7-debian
  sparse/gmres.m

All of them happened under Debian. None under Fedora. Most of them were clang, but not all. No repetition in an .m file yet.

Provided all log files are stored in the same folder, we could download them all in raw form with wget, grep for "segfault", and, where found, return the "Leaving directory" line. But since the logs are uncompressed, that looks like 15 gigs or so since the beginning of July? Hmm, a bit too much to download.
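
A sketch of the scan stage, assuming the raw logs had first been downloaded into a local logs/ directory (the exact raw-download URL scheme isn't shown in this thread):

grep -rl "Segmentation fault" logs/ | while read -r f; do
  echo "== $f"
  grep "Leaving directory" "$f" | head -n 1
done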

Hg200 <hg200>
Mon 21 Sep 2020 06:01:37 PM UTC, comment #115: 

@jwe: Not sure if this applies to all distributions:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-captun

IIUC, something similar to `echo 2 > /proc/sys/vm/overcommit_memory` has immediate effect. The change from comment #90 only has effect after the next reboot.
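
For reference, the two mechanisms side by side (run as root; the first writes the running kernel's setting directly, the second is what gets applied again at boot):

echo 2 > /proc/sys/vm/overcommit_memory               # running kernel
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf   # persists across reboots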

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 05:37:16 PM UTC, comment #114: 

Markus: I didn't restart the systems.  I assumed that the change I made using sysctl took effect immediately.  Is that not correct?  One of the systems was last rebooted 53 days ago.  The others have been up for more than 100 days.  I can restart all of them if necessary.

John W. Eaton <jwe>
Project Administrator
Mon 21 Sep 2020 05:33:31 PM UTC, comment #113: 

For the Fortran libraries we use (or the libraries like BLAS and Lapack that have interfaces defined by their Fortran heritage) the important thing is not pointer size (those libraries don't use pointers like we think of in C++ or even modern Fortran) but the size of the integers that appear in their public interfaces for things like array dimensions and pivot vectors (which sort of serve the purpose of pointers, but are offsets into specific arrays).

On 64-bit systems, Octave now uses 64-bit integers for array dimensions and indexing by default.  But most Linux distributions supply BLAS and Lapack libraries (and other libraries that depend on them) compiled to use 32-bit integers.  That's a Fortran legacy thing, where INTEGER and REAL (i.e., single precision floating point numbers) typically occupy the same amount of storage.

As already noted, we also currently handle the case of Octave built with 64-bit dimensions and indexing and calling Fortran libraries that are compiled to use 32-bit integers for dimensions and indexing.

The current assumption is that all Fortran libraries that Octave uses will use the same convention, either 32-bit or 64-bit integers for dimensions and indexing.

I see no point in attempting to handle a mixture (some libraries using 32-bit and others using 64-bit integers) because these libraries also depend on each other.  It seems to me that mixing them arbitrarily is likely to lead to more confusion than just requiring that they all use the same convention.

Note also that (at least as I understood the state of things the last time I looked at this problem in detail) we don't really have a check to ensure that all the libraries that Octave depends on actually use the same convention.  We attempt to test the integer size of the BLAS library in the configure script.  That test requires executing a program, so guesses (or configure options) are used when cross compiling.  Whatever is determined for the BLAS library is assumed for all the rest.

John W. Eaton <jwe>
Project Administrator
Mon 21 Sep 2020 04:53:41 PM UTC, comment #112: 

Thanks, Rik. You did a much better job in explaining the situation.

The second snippet (the one with reference BLAS) is the "preferred" configuration (if there is such a thing).
I'd call the first an "experimental" configuration, not because you are using OpenBLAS but because it is built with 64bit pointers.

Back on topic:
There was another segfault on the buildbots. This time "stable-gcc-debian" with "miscellaneous/ls.m" as the last test in the log.

@jwe: Did you restart the workers after your change in comment #90?

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 04:42:03 PM UTC, comment #111: 

comment #109:

> You can build Octave with 64bit indexing inside Octave but still use BLAS/LAPACK (and related) libraries that use 32bit indexing.


I was not able to get the configure report to output this:

  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  yes

When I used the reference blas and lapack implementations, I got instead:

  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  no

Perhaps that's intended, and if not, perhaps it's due to how the reference blas implementation is built on NixOS:

https://github.com/NixOS/nixpkgs/blob/8a5eb89b0f70999c08ce9ce6df89238671e186dc/pkgs/build-support/alternatives/blas/default.nix

I'm building up a benchmark table where every blas implementation is tested against Octave 6.0.90 and 5.2.0, and I'll report the results (probably tomorrow) in NixOS' discourse thread. The benchmark I'm using is this: https://openbenchmarking.org/test/system/octave-benchmark .

Doron Behar <doronbehar>
Mon 21 Sep 2020 04:03:08 PM UTC, comment #110: 

Summarizing the experience of Octave developers to date which Markus relayed:

1) Building Octave with 64-bit pointers is routine and is the default in the configure script (you have to go out of your way to use 32-bit pointers).

2) Building Fortran libraries with 64-bit pointers is problematic (not always, but enough of the time to not make it a default).  This isn't code that Octave controls so if there are issues you will need to go to the authors of the particular packages.  What has been discovered is that you need absolute consistency between all of the libraries for this to have even a chance of working.  If one library is built with 64-bit pointers, they all need to be built that way.

3) Performance and correctness are in pseudo-opposition.  The tradeoffs a library coder makes for performance may cause slight deviations from reference behavior.  The choice of which feature to prioritize is left to the user.  If they want performance they can install ATLAS or OpenBLAS.  If they are worried about conformance to standard they can install a reference BLAS.  Octave has chosen to leave the decision to users.  I think distributions should as well and make various BLAS libraries available for selection by the user.

4) Intel MKL BLAS is definitely high performance, but definitely has bugs reported against it on the Octave bug tracker.  I see that you plan not to use it for other reasons (non-free software), but that will make your life easier in this case.

Rik <rik5>
Project Administrator
Mon 21 Sep 2020 03:57:43 PM UTC, comment #109: 

After reading the discourse message:
There might be a misunderstanding. You can build Octave with 64bit indexing inside Octave but still use BLAS/LAPACK (and related) libraries that use 32bit indexing.
Octave itself is careful enough to not call those libraries with too large indices.
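
A rough illustration of the kind of guard that implies (illustrative only, not Octave's actual code):

#include <cstdint>
#include <limits>
#include <stdexcept>

// Narrow a 64-bit dimension to the 32-bit integer type expected by a
// 32-bit BLAS/LAPACK interface, refusing values that do not fit.
inline std::int32_t to_blas_int (std::int64_t n)
{
  if (n < std::numeric_limits<std::int32_t>::min ()
      || n > std::numeric_limits<std::int32_t>::max ())
    throw std::range_error ("dimension too large for 32-bit BLAS/LAPACK");

  return static_cast<std::int32_t> (n);
}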

From the download page of the Octave Windows builds:
https://www.gnu.org/software/octave/download#ms-windows

> Unless your computer has more than ~32GB of memory and you need to solve linear algebra problems with arrays containing more than ~2 billion elements, this version will offer no advantage over the recommended Windows-64 version above.

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 02:56:42 PM UTC, comment #108: 

OK, thanks for explaining this, Markus. I'm currently consulting with my distro about the choice of the blas implementation at https://discourse.nixos.org/t/openblas-vs-reference-blas-implementation/9086 . TBH, since we use Nix, I feel rather confident about putting 64bit indexing into production, as the ecosystem makes it easy to debug why issues occur. 64 bit is the future, and Nix is there too.

For the record, Intel MKL BLAS is not free software and hence it cannot be used for packaging Octave for NixOS.

Thanks a lot for your help, and the work on Octave 6 !

Doron Behar <doronbehar>
Mon 21 Sep 2020 11:19:40 AM UTC, comment #107: 

Aaaah. I misunderstood your "you" as referring to me.

After reading the link from SuiteSparse: I don't know if Octave will work correctly with Intel MKL BLAS (which they recommend). I believe there have been reports about buggy behavior.

From an Octave point of view: Use either the reference BLAS (probably not the best performance) or OpenBLAS with optimizations for your processor (to have better performance).
Only use 64bit indexing in the Fortran libraries if you really need it.

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 11:12:44 AM UTC, comment #106: 

If you really need 64bit indices, this is a (probably incomplete) list of libraries you need to match:
BLAS
LAPACK
OpenBLAS
ARPACK
qrupdate
all SuiteSparse libraries
SUNDIALS
...

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 11:12:30 AM UTC, comment #105: 

comment #103:

> I never recommended using OpenBLAS. In fact, I asked whether you could reproduce with the reference BLAS libraries.
>
> If you ask for my recommendation, I'd advise against going down that rabbit hole of 64bit indices in BLAS/LAPACK. Instead stick to 32bit indices for BLAS/LAPACK and all related numeric libraries.


Yes you do: https://octave.org/doc/v5.2.0/External-Packages.html#External-Packages quoting:

> Basic Linear Algebra Subroutine library. Accelerated BLAS libraries such as OpenBLAS (https://www.openblas.net/) or ATLAS (http://math-atlas.sourceforge.net) are recommended for best performance. The reference implementation (http://www.netlib.org/blas) is slow, unmaintained, and suffers from certain bugs in corner case inputs.


The tests that previously failed succeed when I use the same openblas for octave, qrupdate, suitesparse and arpack, though I'm worried about the degraded performance warning from suitesparse.

Doron Behar <doronbehar>
Mon 21 Sep 2020 11:08:02 AM UTC, comment #104: 

In case I was unclear in my previous comment: If you use OpenBLAS, make sure its indices match the other libraries. I'd recommend building it (and all other related libraries) with 32bit indices.

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 11:03:49 AM UTC, comment #103: 

I never recommended using OpenBLAS. In fact, I asked whether you could reproduce with the reference BLAS libraries.

If you ask for my recommendation, I'd advice against going down that rabbit hole of 64bit indices in BLAS/LAPACK. Instead stick to 32bit indices for BLAS/LAPACK and all related numeric libraries.

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 10:57:04 AM UTC, comment #102: 

Markus: After investigating, I learned that suitesparse and arpack are also not using openblas but the reference blas implementation, which is compiled on NixOS without 64 bit support, at least for now.

Moreover, you recommend using openblas, but suitesparse recommends not using it, see:

https://github.com/DrTimothyAldenDavis/SuiteSparse#about-the-blas-and-lapack-libraries

I can attempt to make octave use a 64 bit build of the reference blas implementation, and make suitesparse use the same 64 bit reference blas as well. Alternatively I can also make suitesparse use openblas. Doing either will be against either of your recommendations. What should I do?

Please, and thanks for your help.

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:09:38 AM UTC, comment #101: 

Thanks for the advice, Markus, I'll look into it!

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:08:36 AM UTC, comment #100: 

The output of the `ldd` command is:

        linux-vdso.so.1 (0x00007ffea5fa3000)
        liboctinterp.so.8 => not found
        liboctave.so.8 => not found
        libstdc++.so.6 => /nix/store/z5g0y84g2iknwwgfhw9wslbbzgw1w22k-gfortran-9.3.0-lib/lib/libstdc++.so.6 (0x00007f67b47b1000)
        libm.so.6 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libm.so.6 (0x00007f67b4670000)
        libgomp.so.1 => /nix/store/z5g0y84g2iknwwgfhw9wslbbzgw1w22k-gfortran-9.3.0-lib/lib/libgomp.so.1 (0x00007f67b4638000)
        libgcc_s.so.1 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libgcc_s.so.1 (0x00007f67b461c000)
        libpthread.so.0 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libpthread.so.0 (0x00007f67b45fb000)
        libc.so.6 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libc.so.6 (0x00007f67b443c000)
        /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/ld-linux-x86-64.so.2 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib64/ld-linux-x86-64.so.2 (0x00007f67b4994000)
        libdl.so.2 => /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31/lib/libdl.so.2 (0x00007f67b4437000)

I think the liboctinterp and liboctave shared objects are not found due to something in our build sandbox: the working directory I had when I entered the sandbox after the build failed is not the same as the working directory of the real builder.

Doron Behar <doronbehar>
Mon 21 Sep 2020 10:06:53 AM UTC, comment #99: 

If you are using OpenBLAS with 64bit indices, take care to also build all dependent numeric libraries with 64bit indices.
In the case of chol, a potentially incompatible library might be qrupdate.

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 09:53:52 AM UTC, comment #98: 

For reference, here's the files for the openblas package on NixOS:

result
├── bin
├── include
│   ├── cblas.h
│   ├── f77blas.h
│   ├── lapacke_config.h
│   ├── lapacke.h
│   ├── lapacke_mangling.h
│   ├── lapacke_utils.h
│   ├── lapack.h
│   └── openblas_config.h
└── lib
    ├── cmake
    │   └── openblas
    │       ├── OpenBLASConfig.cmake
    │       └── OpenBLASConfigVersion.cmake
    ├── libblas.so -> libopenblasp-r0.3.10.so
    ├── libblas.so.3 -> libopenblasp-r0.3.10.so
    ├── libcblas.so -> libopenblasp-r0.3.10.so
    ├── libcblas.so.3 -> libopenblasp-r0.3.10.so
    ├── liblapacke.so -> libopenblasp-r0.3.10.so
    ├── liblapacke.so.3 -> libopenblasp-r0.3.10.so
    ├── liblapack.so -> libopenblasp-r0.3.10.so
    ├── liblapack.so.3 -> libopenblasp-r0.3.10.so
    ├── libopenblasp-r0.3.10.so
    ├── libopenblas.so -> libopenblasp-r0.3.10.so
    ├── libopenblas.so.0 -> libopenblasp-r0.3.10.so
    └── pkgconfig
        ├── blas.pc
        ├── cblas.pc
        ├── lapack.pc
        └── openblas.pc

6 directories, 25 files

Doron Behar <doronbehar>
Mon 21 Sep 2020 09:49:44 AM UTC, comment #97: 

Thanks for the prompt replies. Here's how my disto, NixOS, is building openblas:

https://github.com/doronbehar/nixpkgs/blob/pkg/octave/pkgs/development/libraries/science/math/openblas/default.nix

As can be seen in the `postInstall` attribute, the `liblapack.so` and `libblas.so` libraries are linked to the same shared object. You probably both understand this subject better than I do, but I think that if there's an incompatibility issue here, it's between openblas and itself, or between octave and it.

NixOS is building every package in a sandbox. Hence, no other blas / lapack libraries are potentially used if they were not declared in the inputs of the build "recipe". Please rest assured that my build wasn't influenced by a mixture of blas implementations, if that was your concern.

I've re-initiated the build of octave, so in a while I should be able to tell you, Dmitri, the output of `ldd src/.libs/octave-cli`.

P.S The Nix Expression for Octave which I'm testing is here:

https://github.com/doronbehar/nixpkgs/blob/pkg/octave/pkgs/development/interpreters/octave/default.nix

Doron Behar <doronbehar>
Mon 21 Sep 2020 09:32:11 AM UTC, comment #96: 

Most likely you have mis-matched blas libraries.
What is an output of
ldd src/.libs/octave-cli
?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 21 Sep 2020 09:29:12 AM UTC, comment #95: 

Could you re-run the tests for the build with 64bit OpenBLAS?
Does it segfault again at the same test?

I'm not sure if we are running continuous tests for that configuration. AFAICT, 64bit BLAS/LAPACK libraries are still quite rare.
It might be that we are doing something wrong in Octave. But it might also be an error in OpenBLAS. It might not be very well tested with 64bit indices...
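
E.g., a quick way to re-run a single suspect file from the build tree ("chol" below is only a placeholder for whichever test actually crashed):

./run-octave --norc --silent --eval "test chol"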

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 09:20:45 AM UTC, comment #94: 

With the reference blas and lapack, the tests did not fail, but I did not succeed in reaching this configure report, prior to building:

  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  yes

Instead I got:

  64-bit array dims and indexing:       yes
  64-bit BLAS array dims and indexing:  no

Doron Behar <doronbehar>
Mon 21 Sep 2020 09:17:29 AM UTC, comment #93: 

Is this segmentation fault reproducible?

Does the same happen if you use reference BLAS and LAPACK instead of OpenBLAS?

Markus Mützel <mmuetzel>
Project Member
Mon 21 Sep 2020 09:05:03 AM UTC, comment #92: 

I'm experiencing test failures with gcc and the RC 6.0.90. The failure happens, I think, at an earlier stage of the tests. Here's my full build log:

https://gist.github.com/doronbehar/eb3111a3bf11ac753bf380da5fbe88b9

I'm using openblas' lapack and blas implementation, and in order for them to be detected I use:

  F77_INTEGER_8_FLAG = "-fdefault-integer-8";

Otherwise blas and lapack are not detected, not sure if that's relevant.

Doron Behar <doronbehar>
Wed 16 Sep 2020 01:51:27 PM UTC, comment #91: 

Thanks.
I admit that it is quite unlikely that this could be caused by overcommitting memory in that case.
But we'll know for sure when the next crash occurs.

Markus Mützel <mmuetzel>
Project Member
Wed 16 Sep 2020 01:46:58 PM UTC, comment #90: 

Markus, all four of my buildbot worker systems have 30GB swap space.  Rarely is more than a few hundred MB of that used, as far as I can tell by watching the systems with top from time to time.

Three of the systems have 32GB RAM, the other (oldest one) has 16GB.

I executed

sudo sysctl vm.overcommit_memory=2

on all four systems.

John W. Eaton <jwe>
Project Administrator
Wed 16 Sep 2020 08:12:40 AM UTC, comment #89: 

@jwe: To test whether these random crashes are caused by bug #58790, could you increase the swap size or (for some time) disable memory overcommit on the workers?

See e.g. here for how that could be done on Linux:
https://serverfault.com/a/142003

Markus Mützel <mmuetzel>
Project Member
Tue 15 Sep 2020 06:23:08 AM UTC, comment #88: 

Another one for clang-debian:
http://buildbot.octave.org:8010/#/builders/32/builds/52/steps/7/logs/stdio

Last test appearing in the log: polynomial/polyval.m

Markus Mützel <mmuetzel>
Project Member
Fri 11 Sep 2020 07:13:22 AM UTC, comment #87: 

Another one for gcc-lto-debian:
http://buildbot.octave.org:8010/#/builders/31/builds/39/steps/7/logs/stdio

Last test in the log was sparse/gmres.m.

I wonder if this is bug #58790, i.e. Octave is killed by the kernel because available memory on the system was low.

Markus Mützel <mmuetzel>
Project Member
Sun 06 Sep 2020 12:06:09 PM UTC, comment #86: 

Another segmentation fault while running the test suite:
http://buildbot.octave.org:8010/#/builders/36/builds/15/steps/7/logs/stdio

Looks like this was for stable-clang-debian during io.tst.

Markus Mützel <mmuetzel>
Project Member
Thu 27 Aug 2020 03:38:03 PM UTC, comment #85: 

AFAICT, the old builders will disappear from the waterfall view by themselves when they have been inactive for a while.
All stable builders disappeared for me from time to time when there was little action on the stable branch.
If you scroll very far down (and I mean veeeery far), I guess the epfl builders will eventually pop in.

Markus Mützel <mmuetzel>
Project Member
Thu 27 Aug 2020 03:27:20 PM UTC, comment #84: 

The buildbot systems are currently using

gcc version 10.2.0 (Debian 10.2.0-5)
clang version 9.0.1-13

If someone wants to set up other builders to test older compiler versions or libraries, then I'd be glad to add them to our master config file.  Maybe it would be best to use older systems that are intentionally not upgraded or to use some container or VM to fix the version?  I think Kai is working on a docker image that could be used for this purpose?

John W. Eaton <jwe>
Project Administrator
Thu 27 Aug 2020 03:22:15 PM UTC, comment #83: 

I don't plan to delete history, but I don't really want them displayed by default.  The old entries take up horizontal space and add clutter in the waterfall display.  But now I see there is a "show old builders" option in the waterfall display settings.  The option is disabled by default, but toggling it removes the old builders from the display.  The reversed sense of this setting is a known bug.  I will see about changing the default in the master.cfg file, at least for the waterfall display page.

John W. Eaton <jwe>
Project Administrator
Thu 27 Aug 2020 02:40:49 PM UTC, comment #82: 

Please, don't remove the history of the deprecated builders (unless they would cause problems otherwise).

It sometimes served as a kind of "lazy-man's-bisect" for me. (Sometimes also months back.)

Markus Mützel <mmuetzel>
Project Member
Thu 27 Aug 2020 02:34:36 PM UTC, comment #81: 

comment #79:

> [...] but they still appear on the buildbot web display.  I'm not sure how to remove them.


You have to remove them from the state.sqlite database with SQL statements (while the Buildbot Master is not running).  Or remove that file entirely to forget all history.  In any case make a backup before working on that file.

# sqlite3 state.sqlite
SQLite version 3.26.0 2018-12-01 12:34:55
Enter ".help" for usage hints.
sqlite> .tables
build_properties       change_users           patches
builder_masters        changes                scheduler_changes
builders               changesource_masters   scheduler_masters
builders_tags          changesources          schedulers
buildrequest_claims    configured_workers     sourcestamps
buildrequests          connected_workers      steps
builds                 logchunks              tags
buildset_properties    logs                   users
buildset_sourcestamps  masters                users_info
buildsets              migrate_version        workers
change_files           object_state
change_properties      objects


sqlite> SELECT * FROM workers;
1|ubuntu-1804-worker-01|{"admin": null, "host": null, "access_uri": null, "version": "2020.08.12"}|0|0


sqlite> SELECT * FROM builders;
1|octave-stable||097d6ab83a4824a0a88317812b687c5e919fc4db
2|octave-mxe-stable-w64-64||eaf8b4220ced7c8ec9bc8f95edc55307a214ee23
3|octave-mxe-stable-w64||5e82c51d06a1833b0065bb0281c1ed231b688796
4|octave-mxe-stable-w32||2dc32fc9b95a4f0839a75aff983d6a552e92625d
5|octave-stable-doxygen||1bf254e1654404c338635dadaec105637dc3e932
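
A hypothetical cleanup statement for one such row (with state.sqlite backed up first, as noted above; related tables such as builder_masters and builds may also reference the row and need the same treatment):

sqlite> DELETE FROM builders WHERE name = 'octave-stable-doxygen';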

Kai Torben Ohlhus <siko1056>
Project Member
Thu 27 Aug 2020 02:22:31 PM UTC, comment #80: 

Thank you for that update.

Just out of interest: Which versions are clang and gcc that are used currently?

I think it's a good thing that the old builders are still "there". They'll disappear eventually from the waterfall view unless one scrolls down to a time when they were still active.

Markus Mützel <mmuetzel>
Project Member
Thu 27 Aug 2020 02:15:23 PM UTC, comment #79: 

Yes, I rearranged the jobs on my buildbot systems in an attempt to balance the loads.

I also upgraded those systems to the latest Debian testing packages and lost some old compilers (Clang 4 & 5 and GCC 7) so those builds are no longer active but they still appear on the buildbot web display.  I'm not sure how to remove them.

John W. Eaton <jwe>
Project Administrator
Thu 27 Aug 2020 08:03:38 AM UTC, comment #78: 
Markus Mützel <mmuetzel>
Project Member
Mon 24 Aug 2020 09:57:23 AM UTC, comment #77: 

The buildbots still seem to crash randomly during the BISTs:
http://buildbot.octave.org:8010/#/builders/19/builds/1756/steps/7/logs/stdio (gcc-7-debian)
http://buildbot.octave.org:8010/#/builders/12/builds/1898/steps/7/logs/stdio (clang-5.0-debian)

Both crashes occurred roughly at the same time. (But on different workers. Or have they been re-assigned recently?)

Markus Mützel <mmuetzel>
Project Member
Mon 24 Aug 2020 09:45:21 AM UTC, comment #76: 

As we agreed in the online meetings a few weeks back, this bug shouldn't block the RC of Octave 6.
Lowering severity.

Markus Mützel <mmuetzel>
Project Member
Wed 19 Aug 2020 04:08:07 PM UTC, comment #75: 

Reverting status to "Confirmed" after jwe's comment #74.
I guess we'll see with time if the build bots are still occasionally crashing while running the test suite.

The changes in comment #72 are very likely to have contributed to fixing bug #56952 though.

Markus Mützel <mmuetzel>
Project Member
Wed 19 Aug 2020 03:48:21 PM UTC, comment #74: 

No, the test suite runs in one copy of Octave, so you are right, this change is unlikely to have an effect on those crashes.

But for creating the figures when building Octave, we do run multiple independent scripts, so I think the patch does help to avoid those problems.

John W. Eaton <jwe>
Project Administrator
Wed 19 Aug 2020 11:12:46 AM UTC, comment #73: 

If I correctly understand jwe's fix, it changes the order of tasks done when closing Octave.
Does running the test suite with "make check" open and close Octave repeatedly?
I never verified that. But I assumed it would be comparable to running "__run_test_suite__" at the Octave prompt.

Markus Mützel <mmuetzel>
Project Member
Sun 16 Aug 2020 03:58:36 AM UTC, comment #72: 

OK, I pushed my changes to stable and merged with default:

http://hg.savannah.gnu.org/hgweb/octave/rev/d075c2f26d1d

John W. Eaton <jwe>
Project Administrator
Sat 15 Aug 2020 11:07:48 PM UTC, comment #71: 

It looks like Rik's patch (shortcut processing of txt files) made it to crash with both gcc and clang.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Sat 15 Aug 2020 07:31:00 PM UTC, comment #70: 

Thanks for the patch

1.) I checked whether my segfault occurs at the same position as Dmitri's (see comment #55). Result: Yes, it does.

2.) I applied JWE's patch (comment #69) and made a clang build. Result: no segfault with unset DISPLAY. Then stopped with lldb at "octave::graphics_toolkit::close at graphics-toolkit.h:279". One further step goes to "gnuplot_graphics_toolkit::close at _init_gnuplot_.cc:151". Looks good.

Hg200 <hg200>
Sat 15 Aug 2020 04:52:21 PM UTC, comment #69: 

What appears to be happening is that the _init_gnuplot_.oct file (the one that defines the gnuplot graphics toolkit) is being closed before the toolkit is unloaded.  So when that happens, the pointer to the toolkit object is invalid.  The same problem exists with fltk.  It doesn't happen with qt because that is not dynamically loaded (with dlopen).  The same sequence of events happens when Octave is compiled with GCC, but for whatever reason, the crash isn't happening there.  So, it seems that we need to either prevent the toolkit's .oct file from being dlclosed (I thought the mlock in the init function would do that!) or we need to ensure that when it is dlclosed, it is also removed/unregistered from the toolkit list.
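
For context, a rough sketch of the registration pattern being discussed, reconstructed from memory rather than quoted from the actual _init_gnuplot_.cc source (names and details may differ):

// The toolkit's .oct init function registers the toolkit object with the
// gtk_manager and calls mlock (), which is expected to pin the .oct file
// in memory so that it is not dlclosed while the toolkit is registered.
DEFMETHOD_DLD (__init_gnuplot__, interp, , , "Register the gnuplot toolkit.")
{
  static bool toolkit_loaded = false;

  if (! toolkit_loaded)
    {
      mlock ();

      octave::gtk_manager& gtk_mgr = interp.get_gtk_manager ();
      graphics_toolkit tk (new gnuplot_graphics_toolkit (interp));
      gtk_mgr.load_toolkit (tk);

      toolkit_loaded = true;
    }

  return octave_value_list ();
}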

I'm attaching a possible change to consider.  I'm not sure it is the best solution, but it should at least avoid the crash.

(file #49678)

John W. Eaton <jwe>
Project Administrator
Sat 15 Aug 2020 09:33:22 AM UTC, comment #68: 

It looks like this report is getting sidetracked again. It originally was about segfaults when running the test suite.
The errors on creating the graphics for the manual are probably better tracked in bug #56952.

At the moment, it's probable (but not entirely certain) that the errors here are related to graphics.

Markus Mützel <mmuetzel>
Project Member
Fri 14 Aug 2020 11:51:27 PM UTC, comment #67: 

Build with CLANG:

unset DISPLAY
./run-octave --eval "figure (1,\"visible\",\"off\")"
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
fatal: caught signal Segmentation fault -- stopping myself...
Segmentation fault (core dumped)

Build with GCC - no segfault:

unset DISPLAY
./run-octave --eval "figure (1,\"visible\",\"off\")"
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features

Hg200 <hg200>
Fri 14 Aug 2020 10:31:33 PM UTC, comment #66: 

Hmm, OK. From two clang builds with DISPLAY="" one did catch a segfault. An "incremental make" seems to reproduce the segfault consistently. It is in the .txt file section as already reported below. Rik's delay also does not help here. I am on default.

Hg200 <hg200>
Fri 14 Aug 2020 10:13:08 PM UTC, comment #65: 

I'm working on stable.  It looks like a delay didn't help though.

Rik <rik5>
Project Administrator
Fri 14 Aug 2020 10:12:35 PM UTC, comment #64: 

I can also reproduce the crash with clang 10 (on Fedora 32).
Fedora's buildbots have DISPLAY set, so there is no crash there.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:52:15 PM UTC, comment #63: 

Are you working off stable or default?
In any case here is the output on 45a9dcee45db+ (stable)

rm -f src/octave-gui-6.0.1 && \
cd src && ln -s octave-gui octave-gui-6.0.1
rm -f doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi && /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t && mv doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi
fatal: caught signal Segmentation fault -- stopping myself...
/bin/sh: line 1: 2062846 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t
make[2]: *** [Makefile:31022: doc/interpreter/plot-axesproperties.texi] Error 139
make[2]: Leaving directory '/home/dima/src/octave/clang_debug'
make[1]: *** [Makefile:27468: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/clang_debug'
make: *** [Makefile:11093: all] Error 2

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:41:22 PM UTC, comment #62: 

@Dmitri: Since you have a repeatable segfault, could you try the attached patch?

diff -r 45a9dcee45db doc/interpreter/genpropdoc.m
--- a/doc/interpreter/genpropdoc.m        Fri Aug 14 13:37:07 2020 -0700
+++ b/doc/interpreter/genpropdoc.m        Fri Aug 14 14:38:09 2020 -0700
@@ -1911,7 +1911,8 @@ function s = getstructure (objname, base
   endif

   if (isfigure (hf))
-    close (hf)
+    close (hf);
+    pause (0.5);
   endif

endfunction

I also attached it to this bug report.  This is obviously not determining the root cause, but it might be good enough for the documentation.

(file #49674)

Rik <rik5>
Project Administrator
Fri 14 Aug 2020 09:19:58 PM UTC, comment #61: 

The build bots show that the segfault has now shifted from the "txt" images to the calling of genpropdoc.m.  I think it is significant that genpropdoc creates graphics figures and objects in order to query their default properties.  I went ahead and merged the change I made to the image generation files on stable to default since it seems to have improved things.

Rik <rik5>
Project Administrator
Fri 14 Aug 2020 09:19:16 PM UTC, comment #60: 

It is critical that you do not have DISPLAY set when you do make.
It finishes OK in my case when DISPLAY is set to the default.

So

DISPLAY="" make -j1 V=1

should do it.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:17:38 PM UTC, comment #59: 

clang-devel-9.0.1-2.fc31.x86_64
clang-libs-9.0.1-2.fc31.x86_64
clang-tools-extra-9.0.1-2.fc31.x86_64
clang-9.0.1-2.fc31.x86_64

Hg200 <hg200>
Fri 14 Aug 2020 09:13:35 PM UTC, comment #58: 

What's clang version?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:11:50 PM UTC, comment #57: 

I still can't reproduce. The switches are:

./configure CC=clang CXX=clang++
make V=1 -j12
C compiler:                    clang  -pthread  -Wall -W -Wshadow -Wformat -Wpointer-arith -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wcast-align -Wcast-qual -g -O2
C++ compiler:                  clang++  -pthread  -Wall -W -Wshadow -Woverloaded-virtual -Wold-style-cast -Wformat -Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual -g -O2

;-(((

Hg200 <hg200>
Fri 14 Aug 2020 09:07:09 PM UTC, comment #56: 

It looks like processing of TEXI files triggers it now.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 09:01:29 PM UTC, comment #55: 

It still crashes on my workstation:

make[2]: Entering directory '/home/dima/src/octave/clang_debug'
/bin/sh config.status oct-conf-post.h-tmp oct-conf-post.h
config.status: creating oct-conf-post.h-tmp
config.status: executing oct-conf-post.h commands
/bin/sh config.status liboctave/mk-version-h.sh-tmp liboctave/mk-version-h.sh
config.status: creating liboctave/mk-version-h.sh-tmp
config.status: executing liboctave/mk-version-h.sh commands
/bin/sh config.status libinterp/corefcn/mk-mxarray-h.sh-tmp libinterp/corefcn/mk-mxarray-h.sh
config.status: creating libinterp/corefcn/mk-mxarray-h.sh-tmp
config.status: executing libinterp/corefcn/mk-mxarray-h.sh commands
/bin/sh config.status build-aux/subst-config-vals.sh-tmp build-aux/subst-config-vals.sh
config.status: creating build-aux/subst-config-vals.sh-tmp
config.status: executing build-aux/subst-config-vals.sh commands
/bin/sh config.status liboctave/external/mk-f77-def.sh-tmp liboctave/external/mk-f77-def.sh
config.status: creating liboctave/external/mk-f77-def.sh-tmp
config.status: executing liboctave/external/mk-f77-def.sh commands
rm -f doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi && /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t && mv doc/interpreter/plot-axesproperties.texi-t doc/interpreter/plot-axesproperties.texi
fatal: caught signal Segmentation fault -- stopping myself...
/bin/sh: line 1: 2052762 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path ../doc/interpreter --eval "genpropdoc ('axes');" > doc/interpreter/plot-axesproperties.texi-t
make[2]: *** [Makefile:31022: doc/interpreter/plot-axesproperties.texi] Error 139
make[2]: Leaving directory '/home/dima/src/octave/clang_debug'
make[1]: *** [Makefile:27468: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/clang_debug'
make: *** [Makefile:11093: all] Error 2

And the backtrace:

(gdb) thread apply all bt

Thread 2 (Thread 0x7f4753af6700 (LWP 2052933)):
#0  0x00007f487422e4dc in sigtimedwait () from /lib64/libc.so.6
#1  0x00007f48745ca95c in sigwait () from /lib64/libpthread.so.0
#2  0x00007f487bfdd3cf in signal_watcher (arg=0x7f487d7fbbe0 <octave::generic_sig_handler(int)>) at ../liboctave/wrappers/signal-wrappers.c:697
#3  0x00007f48745c02de in start_thread () from /lib64/libpthread.so.0
#4  0x00007f48742f1e83 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f487e074940 (LWP 2052762)):
#0  0x00007f487d67abce in octave::graphics_toolkit::close (this=0x15d3410) at ../libinterp/corefcn/graphics-toolkit.h:279
#1  0x00007f487d676f8a in octave::gtk_manager::unload_all_toolkits (this=0x10ae3c0) at ../libinterp/corefcn/gtk-manager.h:107
#2  0x00007f487d671cab in octave::interpreter::shutdown (this=0x10ad150) at ../libinterp/corefcn/interpreter.cc:902
#3  0x00007f487cb6d975 in octave::cli_application::execute (this=0x7ffcb8683990) at ../libinterp/octave.cc:381
#4  0x0000000000401839 in main (argc=15, argv=0x7ffcb8683cb8) at ../src/main-cli.cc:95

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 08:41:40 PM UTC, comment #54: 

Interesting clues.  Just for fun, I'm testing the idea of short-circuiting the building of "txt" images for the documentation in this changeset (https://hg.savannah.gnu.org/hgweb/octave/rev/45a9dcee45db).  I now need to wait for the buildbots to notice this, unless someone has the permissions and knows how to kick off a manual build of stable-clang-4.0-debian and stable-clang-5.0-debian.

Rik <rik5>
Project Administrator
Fri 14 Aug 2020 08:04:03 PM UTC, comment #53: 

Actually serial make crashes as well.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 07:57:32 PM UTC, comment #52: 

I can reproduce this on my local computer (with stable) with clang 9
if I do
DISPLAY="" make -j32 V=1

Here is the backtrace:

(gdb) thread apply all bt

Thread 2 (Thread 0x7f50b4e98700 (LWP 1921806)):
#0  0x00007f51cd5d04dc in sigtimedwait () from /lib64/libc.so.6
#1  0x00007f51cd96c95c in sigwait () from /lib64/libpthread.so.0
#2  0x00007f51d537f3cf in signal_watcher (arg=0x7f51d6b9dbe0 <octave::generic_sig_handler(int)>) at ../liboctave/wrappers/signal-wrappers.c:697
#3  0x00007f51cd9622de in start_thread () from /lib64/libpthread.so.0
#4  0x00007f51cd693e83 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f51d7416940 (LWP 1919268)):
#0  0x00007f51d6a1cbce in octave::graphics_toolkit::close (this=0x1b5c3e0) at ../libinterp/corefcn/graphics-toolkit.h:279
#1  0x00007f51d6a18f8a in octave::gtk_manager::unload_all_toolkits (this=0x16a33c0) at ../libinterp/corefcn/gtk-manager.h:107
#2  0x00007f51d6a13cab in octave::interpreter::shutdown (this=0x16a2150) at ../libinterp/corefcn/interpreter.cc:902
#3  0x00007f51d5f0f975 in octave::cli_application::execute (this=0x7ffc924ac400) at ../libinterp/octave.cc:381
#4  0x0000000000401839 in main (argc=15, argv=0x7ffc924ac728) at ../src/main-cli.cc:95
(gdb)

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 14 Aug 2020 07:09:04 PM UTC, comment #51: 

What's interesting is that the failures all seem to be with files which are not related to actual plotting.  It seems that it is the generation of images in the ".txt" format which are failing, but looking at the m-files in doc/interpreter one sees

  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);

and then

## generate something for the texinfo @image command to process
function image_as_txt (d, nm)
  fid = fopen (fullfile (d, [nm ".txt"]), "wt");
  fputs (fid, "\n");
  fputs (fid, "+---------------------------------+\n");
  fputs (fid, "| Image unavailable in text mode. |\n");
  fputs (fid, "+---------------------------------+\n");
  fclose (fid);
endfunction

So, no real plotting is being done, and it may be the speed with which the graphics system is set up and torn down that is the problem.

Taking plotimages.m as representative, the function begins

function plotimages (d, nm, typ)
  set_graphics_toolkit ();
  set_print_size ();
  hide_output ();
  outfile = fullfile (d, [nm "." typ]);
  if (strcmp (typ, "png"))
    set (groot, "defaulttextfontname", "*");
  endif
  if (strcmp (typ, "eps"))
    d_typ = "-depsc2";
  else
    d_typ = ["-d", typ];
  endif

  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);

and then ends with

  hide_output ();
endfunction

Shooting in the dark, what if we move the test for the "txt" format to the top of the file with this code

function plotimages (d, nm, typ)

  if (strcmp (typ , "txt"))
    image_as_txt (d, nm);
    return;
  endif

  set_graphics_toolkit ();
  set_print_size ();

The graphics system will never get invoked.

Rik <rik5>
Project Administrator
Fri 14 Aug 2020 03:01:52 PM UTC, comment #50: 

Huh, since my changes related to bug #58814, the clang builds performed by buildbot on my Debian systems seem to all be failing when generating graphics for the manual.  Maybe it is related to those systems not running the builds in a framebuffer context?  I will try to take a look at that.

John W. Eaton <jwe>
Project Administrator
Sat 04 Jul 2020 06:45:34 AM UTC, comment #49: 

This one crashed yesterday and then again 4 hours ago with a core dump:

http://buildbot.octave.org:8010/#/builders/12/builds/1847

Hg200 <hg200>
Fri 26 Jun 2020 03:28:02 PM UTC, comment #48: 

.profile is for non-interactive logins.
You can put some dummy variable there to check.
Make sure you are using the correct home directory
(/var/lib/buildbot as far as I can tell).
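
A sketch of both suggestions (assuming that home directory; the marker variable name is made up for illustration):

echo 'ulimit -c unlimited' >> /var/lib/buildbot/.profile
echo 'export BB_PROFILE_SOURCED=1' >> /var/lib/buildbot/.profile   # dummy marker to verify the file is sourced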

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 26 Jun 2020 03:23:17 PM UTC, comment #47: 

The shell commands appear to be run using /bin/sh in non-interactive mode so startup files like .profile are not executed, at least as far as I can tell.

John W. Eaton <jwe>
Project Administrator
Fri 26 Jun 2020 03:02:19 PM UTC, comment #46: 

You should be able to add ulimit to .profile in buildbot home directory.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Fri 26 Jun 2020 01:51:01 PM UTC, comment #45: 

The following build on one of my buildbot systems failed:

http://buildbot.octave.org:8010/#/builders/22/builds/516

I've been repeatedly running the test suite using this build for the last 8 hours or so and it hasn't failed once.  I'm using

while true ; do
  if nice -n 19 xvfb-run -a -s '-screen 0 640x480x24' make V=1 check ; then
    echo "OK $?"
  else
    echo "NOT OK: $?"
    break
  fi
done

I said earlier that I would set the default ulimit for the buildbots so that we would generate core files, but I'm not sure of the best way to do that.  If I understand correctly, buildbot starts new shells to do each shell command step and I'd rather not have to add ulimit commands to each one.  So it seems that a change like this should be made on the worker systems instead but I don't know what startup file to add the ulimit command to on the build worker system.  Any ideas?

John W. Eaton <jwe>
Project Administrator
Thu 25 Jun 2020 01:37:18 PM UTC, comment #44: 

Yes, we are storing the graphics_object and it uses a shared_ptr to hold the base_graphics_object that contains the actual data.  But it does not provide copy-on-write semantics, so although the interpreter thread won't delete the underlying base_graphics_object while the GUI thread holds the reference, it can still change the contents unexpectedly while we are doing something with the data unless we are locking correctly.

I don't know for sure that is the problem here.  Can we guarantee that we are getting the locking right?  To me, that seems harder than implementing copy-on-write semantics for objects, but I'm also not sure what is appropriate for these objects.
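
For illustration, the generic shape of copy-on-write around a shared_ptr (purely a sketch, not Octave's classes; the use_count check would itself need external locking to be fully thread-safe):

#include <memory>

// Readers share one representation; a writer detaches (deep-copies)
// first if anyone else still holds a reference.
template <typename T>
class cow_wrapper
{
public:
  explicit cow_wrapper (T val) : m_rep (std::make_shared<T> (std::move (val))) { }

  const T& read (void) const { return *m_rep; }

  T& write (void)
  {
    if (m_rep.use_count () > 1)
      m_rep = std::make_shared<T> (*m_rep);  // detach before mutating

    return *m_rep;
  }

private:
  std::shared_ptr<T> m_rep;
};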

Also, if the real problem in this case is a crash in Mesa, then are there differences in versions between the systems where the crashes happen frequently vs. those where it is rare (or maybe never happens)?

John W. Eaton <jwe>
Project Administrator
Thu 25 Jun 2020 10:58:16 AM UTC, comment #43: 

@John: The GUI Object (a Figure here) already stores a reference to the underlying graphics_object, see this excerpt from Object.h:

    // Store the graphics object directly so that it will exist when
    // we need it.  Previously, it was possible for the graphics
    // toolkit to get a handle to a figure, then have the interpreter
    // thread delete the corresponding object before the graphics
    // toolkit (GUI) thread had a chance to display it.  It should be OK
    // to store this object and use it in both threads (graphics_object
    // uses a std::shared_ptr) provided that we protect access with
    // mutex locks.
    graphics_object m_go;

After this addition, we should have changed all the logic and removed m_handle since m_go lets us access the object directly.

Anyway, is it just me, or is what we see in the backtrace a crash in Mesa, not in Octave?

Pantxo Diribarne <pantxo>
Project Member
Wed 24 Jun 2020 09:01:34 PM UTC, comment #42: 

In answer to the question in comment #37, no, there is no special code that I know of to detect whether we are using a real display or some framebuffer thing.

What I was thinking might be happening is that the GUI thread accesses a graphics object (which belongs to the interpreter) and uses it without acquiring and holding a lock so that the interpreter could modify or delete the graphics object while the GUI thread is using it.  Unlike octave_value objects, the graphics objects do not have copy-on-write semantics, so it seems there could be trouble.

Here is QtHandles::Figure::slotGetPixels, which is one of the functions that shows up in the stack trace shown in comment #41:

  uint8NDArray
  Figure::slotGetPixels (void)
  {
    uint8NDArray retval;
    Canvas *canvas = m_container->canvas (m_handle);

    if (canvas)
      {
        gh_manager& gh_mgr = m_interpreter.get_gh_manager ();

        gh_mgr.process_events ();
        octave::autolock guard (gh_mgr.graphics_lock ());
        retval = canvas->getPixels ();
      }

    return retval;
  }

If I understand correctly, m_handle is the figure number for the current figure and is used to find the graphics object for the figure object.  Is it possible that processing events could invalidate m_handle?  It doesn't seem like that's what's happening here because if it were, then I would expect the call to gh_mgr.get_object in GLCanvas::do_getPixels to fail.

But this is the kind of thing that looks suspect to me.  It seems like the way the GUI thread is using of graphics handles and objects that belong to the interpreter thread is not clearly defined.

If the GUI thread stores a handle to a graphics object (or a graphics object itself) then it seems to me that it should somehow grab a reference to it in a way that can either be checked later to ensure that it remains valid, or that will prevent it from being modified/deleted until the GUI thread no longer needs it.

John W. Eaton <jwe>
Project Administrator
Sat 20 Jun 2020 06:34:52 PM UTC, comment #41: 

I built with address sanitizer flags enabled and ran "make check". During the fixed tests in publish/publish.tst, I got a heap-buffer-overflow.
I was able to reproduce it twice (out of two tests) when running the complete test suite.
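
For reference, one way to set up such a build (a sketch assuming clang; the exact flags may vary):

./configure CC=clang CXX=clang++ \
  CFLAGS="-g -O1 -fno-omit-frame-pointer -fsanitize=address" \
  CXXFLAGS="-g -O1 -fno-omit-frame-pointer -fsanitize=address" \
  LDFLAGS="-fsanitize=address"
make && make check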

The attached log contains the backtrace and other info from the address sanitizer.

I'm not sure if this is related or something different. But it might be a graphics/threading issue afaict.

Wrt what Pantxo wrote on the maintainers' mailing list [1]: I came across this blog post [2]. It looks like the general idea could be applied cross-platform.
Would that be helpful?

[1]: https://lists.gnu.org/archive/html/octave-maintainers/2020-06/msg00067.html

[2]: https://devblogs.microsoft.com/oldnewthing/20130712-00/?p=3823

(file #49330)

Markus Mützel <mmuetzel>
Project Member
Thu 11 Jun 2020 02:05:29 PM UTC, comment #40: 

RE: comment #38, I'll try to make that change soon.  We should be testing full Qt builds.  I could also set up some separate builds to continue testing with gnuplot, but that's a lower priority for me.

John W. Eaton <jwe>
Project Administrator
Wed 10 Jun 2020 02:44:46 PM UTC, comment #39: 

Low prio and JFYI: I have spent a considerable amount of time trying to force a segfault on Fedora, either with gcc or with clang. E.g., I ran the test suite in a forever loop for over a day. I also had no luck with "nice -19", which is the adjustment on the build bots.

Fedora Core 31
gcc version 9.3.1
clang version 9.0.1
Target: x86_64

Hg200 <hg200>
Wed 10 Jun 2020 02:09:00 PM UTC, comment #38: 

I think it would be interesting to have some (or all) of the Debian buildbots run make through xvfb as well, so the documentation will be built with qt graphics. Then we will see if there are any crashes.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Wed 10 Jun 2020 01:53:25 PM UTC, comment #37: 

I haven't found a "no-extras" buildbot that crashed with that error so far.

The only one I found with a crash was this one here:
http://buildbot.octave.org:8010/#/builders/7/builds/1615/steps/6/logs/stdio

Please help improve Octave by contributing tests for these files
(see the list in the file /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/test/fntests.log).
double free or corruption (out)
fatal: caught signal Aborted -- stopping myself...
/bin/bash: line 1: 3679392 Aborted                 /bin/bash ../run-octave --no-init-file --silent --no-history -p /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/test/mex /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/../src/test/fntests.m /scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build/../src/test
make[3]: *** [Makefile:31583: check-local] Error 134
make[3]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make[2]: *** [Makefile:27717: check-am] Error 2
make[2]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make[1]: *** [Makefile:27419: check-recursive] Error 1
make[1]: Leaving directory '/scratch/buildbot/workers/jwe-debian-x86_64-2/no-extras-debian/build'
make: *** [Makefile:27719: check] Error 2
program finished with exit code 2
elapsedTime=784.988686

But that one looks different and has probably been fixed in the meantime.

Also I haven't found a Fedora buildbot that crashed with that signature. (The ones mentioned in some of the comments here are due to bug #55225).

If I correctly understood yesterday, the Debian workers use Xvfb for plotting with the "qt" graphics toolkit while running the test suite.
Could that be related? Does Octave use a different code path if the framebuffer is virtual?

Markus Mützel <mmuetzel>
Project Member
Sun 07 Jun 2020 05:12:42 PM UTC, comment #36: 

I've never worked with core dumps, so I can't judge how useful that would be.

Markus Mützel <mmuetzel>
Project Member
Thu 04 Jun 2020 03:13:36 PM UTC, comment #35: 

The buildbots don't explicitly enable core dumps, so it depends on the prevailing system settings.  We could change that.

John W. Eaton <jwe>
Project Administrator
Thu 04 Jun 2020 02:16:44 PM UTC, comment #34: 

And another one for "clang-5.0-debian" (with "plot/util/saveas.m" as the last test in the output):
http://buildbot.octave.org:8010/#/builders/12/builds/1797/steps/7/logs/stdio

Do the buildbots store core dumps?

Markus Mützel <mmuetzel>
Project Member
Wed 03 Jun 2020 05:29:57 PM UTC, comment #33: 

Thanks for the hint. I'll try to produce a core dump. But the crashes are very rare for me. I've seen maybe one every few weeks or months...

While some of the crashes on the buildbots occurred while plotting tests were running, they generally seem to happen in more random places.
I've probably missed a bunch of them (see bug #58393). But here is a list of crashes, grouped by builder, showing the last test that appeared in the log before the crash (in no particular order):

clang-4.0-debian:
  miscellaneous/copyfile.m
  plot/appearance/title.m
  plot/appearance/legend.m
  sparse/sprand.m
  linear-algebra/bandwidth.m
  optimization/optimget.m

clang-5.0-debian:
  special-matrix/hadamard.m
  miscellaneous/tar.m
  statistics/movmedian.m
  plot/appearance/annotation.m

gcc-7-debian:
  miscellaneous/isfile.m

gcc-7-lto-debian:
  java/usejava.m

At first glance, it doesn't look like these functions have anything special in common. I'm not sure if it has anything to do with graphics.

Markus Mützel <mmuetzel>
Project Member
Wed 03 Jun 2020 02:58:07 PM UTC, comment #32: 

Try enabling core dumps.  In bash, it is "ulimit -c unlimited".  Then you can run gdb on the core dump and the executable file that produced it.

If these are happening in the Qt graphics code, then I suspect more threading issues, probably because we are not using mutexes appropriately when accessing graphics objects.
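For anyone who wants to see the suspected failure mode in isolation, here is a small self-contained sketch (plain C++ with std::thread, not Octave code; all names are made up for illustration) of one thread deleting an object while another reads it.  Removing either lock_guard line turns it into exactly the kind of use-after-free that would segfault at a random test:

  #include <iostream>
  #include <memory>
  #include <mutex>
  #include <thread>

  struct graphics_obj { int width = 640; };

  std::mutex graphics_lock;   // plays the role of gh_manager::graphics_lock
  std::unique_ptr<graphics_obj> obj = std::make_unique<graphics_obj> ();

  void interpreter_thread ()
  {
    std::lock_guard<std::mutex> guard (graphics_lock);
    obj.reset ();             // the interpreter deletes the figure
  }

  void gui_thread ()
  {
    std::lock_guard<std::mutex> guard (graphics_lock);
    if (obj)                  // validity must be re-checked under the lock
      std::cout << obj->width << std::endl;
  }

  int main ()
  {
    std::thread t1 (interpreter_thread);
    std::thread t2 (gui_thread);
    t1.join ();
    t2.join ();
    return 0;
  }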

John W. Eaton <jwe>
Project Administrator
Wed 03 Jun 2020 12:12:29 PM UTC, comment #31: 

And here (clang-4.0-debian):
http://buildbot.octave.org:8010/#/builders/10/builds/1548/steps/7/logs/stdio

I tried to reproduce by repeatedly running the test suite in gdb with "make check RUN_OCTAVE_OPTIONS=-g". But it looks like the crash never occurs in the debugger (or I was just unlucky).
However, my gcc is version 9.3.0. But even with that version I've seen the occasional crash when running the test suite. Unfortunately, these happened when I wasn't running in a debugger.
Is there any way to get additional information when a program crashes on Ubuntu 20.04 (like the system log on Windows)?

Also, looking at which buildbots failed, I'm starting to doubt whether the crashes depend on the compiler or its version.

Markus Mützel <mmuetzel>
Project Member
Wed 03 Jun 2020 11:51:13 AM UTC, comment #30: 
Markus Mützel <mmuetzel>
Project Member
Sat 30 May 2020 02:38:01 PM UTC, comment #29: 

And another one from a clang buildbot (clang-4.0-debian):
http://buildbot.octave.org:8010/#/builders/10/builds/1539/steps/7/logs/stdio

Markus Mützel <mmuetzel>
Project Member
Tue 26 May 2020 09:37:20 AM UTC, comment #28: 

This time it was a builder that uses gcc (gcc-7-debian) that crashed while running the test suite:
http://buildbot.octave.org:8010/#/builders/19/builds/1623/steps/6/logs/stdio

Markus Mützel <mmuetzel>
Project Member
Fri 15 May 2020 07:47:07 AM UTC, comment #27: 

The other bug is probably bug #56952.

Markus Mützel <mmuetzel>
Project Member
Fri 15 May 2020 07:44:47 AM UTC, comment #26: 

Another recent segfault during the test suite with clang on Debian:
http://buildbot.octave.org:8010/#/builders/10/builds/1488/steps/6/logs/stdio

Randomly checking a bunch of build logs, it looks like the error here occurs mostly (or exclusively) with the clang buildbots on Debian. (They would probably be easier to spot if tests with a segfault were marked as failed, see comment #13.)

The failures on the Fedora buildbots might be caused by something similar, but they are probably a different issue.

Changing the bug title again to something more appropriate. Sorry for the distraction.

Markus Mützel <mmuetzel>
Project Member
Thu 14 May 2020 10:20:22 PM UTC, comment #25: 

In comment #18, I was trying to point out the difference between a bug in the test suite and a bug in building the documentation.

This report is about the test suite, and I think we can make more progress here because it should be possible to build an image with debugging symbols and then run 'make check' until a segfault is captured.

Rik <rik5>
Project Administrator
Thu 14 May 2020 07:00:02 PM UTC, comment #24: 

I just realized I was confused by all those core dumps. Those happen while building the docs, not while running the test suite. I think we have a docs-build crash bug somewhere as well.

Dmitri.

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:59:05 PM UTC, comment #23: 

Is it just me or does it seem like this bug has now been completely hijacked by a different bug?

The original bug was about segmentation faults running the test suite with clang builds. The recent comments seem to be about segmentation faults when building the doc images.

These are separate issues, please keep that in mind and do with this report whatever you think is best.

Mike Miller <mtmiller>
Project Administrator
Thu 14 May 2020 06:57:20 PM UTC, comment #22: 

This is the trace for that crash:

May 14 12:12:09 i7 systemd-coredump[528353]: Process 527783 (lt-octave-gui) of user 1002 dumped core.

                                             Stack trace of thread 527783:
                                             #0  0x00007f63e16c3ef9 _ZN11QMetaObject12invokeMethodEP7QObjectPKcN2Qt14ConnectionTypeE22QGenericReturnArgument16QGenericAr>
                                             #1  0x00007f63e40a36a0 n/a (/home/buildbotu/fc25-x86_64/gcc-fedora/build/libgui/.libs/liboctgui.so.6.0.0 + 0x1826a0)
May 14 12:12:09 i7 systemd[1]: systemd-coredump@3-528352-0.service: Succeeded.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:51:07 PM UTC, comment #21: 

The buildbot jobs run with V=1, so see the latest failure:
http://buildbot.octave.org:8010/#/builders/11/builds/1521/steps/5/logs/stdio

But it is not 100% reproducible. Sometimes it is etex, sometimes it is
GraphicsMagick, sometimes it is postscript.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 06:38:53 PM UTC, comment #20: 

Can you run with V=1 to maybe show exactly what command is being executed and failing?

John W. Eaton <jwe>
Project Administrator
Thu 14 May 2020 06:28:38 PM UTC, comment #19: 

It always looks to me like there is a race condition in parallel make (Octave starts writing an output file to disk, but etex is already using it to compile some document, etc.). It never happens with serial make.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 03:43:39 PM UTC, comment #18: 

Does this problem go away if 'make -j1' is used?  I seem to remember that this is caused by high load on the system.

If it does go away, then this might be something to do with the run-octave script and competition for some shared resource.

But that is likely to be a different error than the one from the test suite.

Rik <rik5>
Project Administrator
Thu 14 May 2020 03:29:41 PM UTC, comment #17: 

When compiled with debug flags (-O0 -ggdb3), the failure rate is much lower, and the one failure I got did not generate a crash dump:

 GEN      doc/interpreter/splinefit6.png
  MAKEINFO ../doc/interpreter/octave.info
  TEXI2DVI doc/interpreter/octave.dvi
  MAKEINFO doc/interpreter/octave.html/.octave-html-stamp
/usr/bin/texi2dvi: etex exited with bad status, quitting.
make[2]: *** [Makefile:31088: doc/interpreter/octave.dvi] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory '/home/dima/src/octave/gcc_debug'
make[1]: *** [Makefile:27415: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/gcc_debug'
make: *** [Makefile:11050: all] Error 2

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 03:06:51 PM UTC, comment #16: 

OK, I updated buildbot to Fedora 32.
gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)

clang version 10.0.0 (Fedora 10.0.0-1.fc32)

It still crashes the same way (running a build of dev as a normal user):


GEN      doc/interpreter/interpderiv2.txt
fatal: caught signal Segmentation fault -- stopping myself...
fatal: caught signal Segmentation fault -- stopping myself...
fatal: caught signal Segmentation fault -- stopping myself...
  GEN      doc/interpreter/plot.txt
  GEN      doc/interpreter/hist.txt
  GEN      doc/interpreter/errorbar.txt
  GEN      doc/interpreter/polar.txt
  GEN      doc/interpreter/mesh.txt
/bin/sh: line 1: 139633 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'delaunay', 'txt');"
make[2]: *** [Makefile:31213: doc/interpreter/delaunay.txt] Error 139
make[2]: *** Waiting for unfinished jobs....
/bin/sh: line 1: 139587 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'convhull', 'txt');"
make[2]: *** [Makefile:31211: doc/interpreter/convhull.txt] Error 139
/bin/sh: line 1: 139580 Segmentation fault      (core dumped) /bin/sh run-octave --norc --silent --no-history --path /home/dima/src/octave/gcc_def/../doc/interpreter/ --eval "geometryimages ('doc/interpreter/', 'griddata', 'txt');"
make[2]: *** [Makefile:31209: doc/interpreter/griddata.txt] Error 139
make[2]: Leaving directory '/home/dima/src/octave/gcc_def'
make[1]: *** [Makefile:27415: all-recursive] Error 1
make[1]: Leaving directory '/home/dima/src/octave/gcc_def'
make: *** [Makefile:11050: all] Error 2

This is a trace I got in the system logs:

May 14 10:57:55 i7 systemd-coredump[140692]: Process 139633 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139633:
                                             #0  0x00007fac4c40d750 n/a (n/a + 0x0)
                                             #1  0x00007fac8845749e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd-coredump[140696]: Process 139587 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139587:
                                             #0  0x0000000001e6c948 n/a (n/a + 0x0)
                                             #1  0x00007f1a0ae0049e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd[1]: systemd-coredump@0-140691-0.service: Succeeded.
May 14 10:57:55 i7 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@0-140691-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 10:57:55 i7 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@1-140694-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 10:57:55 i7 systemd[1]: systemd-coredump@1-140694-0.service: Succeeded.
May 14 10:57:55 i7 systemd-coredump[140697]: Process 139580 (lt-octave-gui) of user 1001 dumped core.

                                             Stack trace of thread 139580:
                                             #0  0x00007f2a1fa7fdd2 _ZN7QObject7connectEPKS_PKcS1_S3_N2Qt14ConnectionTypeE (libQt5Core.so.5 + 0x275dd2)
                                             #1  0x00007f2a2244249e n/a (/home/dima/src/octave/gcc_def/libgui/.libs/liboctgui.so.6.0.0 + 0x18249e)
May 14 10:57:55 i7 systemd[1]: systemd-coredump@2-140695-0.service: Succeeded.

I will try a debug build to see if I get more info.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 14 May 2020 02:39:44 PM UTC, comment #15: 

This is going to be really hard to debug unless we can get a stacktrace.

For starters, does someone have a machine set up which mimics one of the failing buildbot configurations (such as gcc on Fedora or an older version of clang)?  Can you get semi-repeatable crashes on that machine?

Rik <rik5>
Project Administrator
Thu 14 May 2020 08:48:14 AM UTC, comment #14: 

Changing the bug title to reflect that the segfaults do not occur only with clang and that they also happen, rarely, on Ubuntu.

Markus Mützel <mmuetzel>
Project Member
Thu 14 May 2020 08:41:47 AM UTC, comment #13: 

The errors might be more prevalent than the current green markings in the buildbot's waterfall view suggest.
If a test interrupts the buildbot with a segmentation fault, the overall run is still marked as "green".
I wrote about this a while back on the mailing list:
https://octave.1599824.n4.nabble.com/buildbots-False-pass-results-for-segmentation-fault-in-test-td4695266.html

Maybe someone with more expertise in setting up buildbot could take a look?

That won't solve the actual issue. But it might make it easier to judge the impact and prevalence.

Markus Mützel <mmuetzel>
Project Member
Wed 13 May 2020 08:09:41 PM UTC, comment #12: 

On the buildbot (still fedora 31):

gcc -v
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 9.3.1 20200408 (Red Hat 9.3.1-2) (GCC)
[dima@i7 ~]$ cat /etc/redhat-release
Fedora release 31 (Thirty One)

clang -v
clang version 9.0.1 (Fedora 9.0.1-2.fc31)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-redhat-linux/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/9
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-redhat-linux/9
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64

On CentOS
gcc version 8.3.1 20190507
clang version 8.0.1 (Red Hat 8.0.1-1.module_el8.1.0+215+a01033fb)

The problem did not manifest itself with the 5.x release (at least not as obviously and reproducibly as it does now).

I think many people do not see the problem very often because they do incremental rebuilds that do not re-make the docs.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 13 May 2020 07:58:55 PM UTC, comment #11: 

The previous problem seemed to be with old versions of the clang compiler (4 & 5).  It might be the same case with gcc on Fedora: the compiler there may be old or bad in some manner.  What were the version numbers for gcc and Fedora?

Rik <rik5>
Project Administrator
Wed 13 May 2020 06:25:08 PM UTC, comment #10: 

It fails with gcc on Fedora/Centos as well.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 20 Jan 2020 07:59:31 AM UTC, comment #9: 

Thanks for testing again.

There seem to be occasional segfaults also for the gcc-7-lto-debian buildbot:
http://buildbot.octave.org:8010/#/builders/24/builds/1141/steps/6/logs/stdio

This might be related or it might be caused by something different.

Markus Mützel <mmuetzel>
Project Member
Fri 17 Jan 2020 10:03:36 PM UTC, comment #8: 

I ran the test suite twice while compiling other projects in the background, and saw no difference.

Mike Miller <mtmiller>
Project Administrator
Fri 17 Jan 2020 06:23:28 AM UTC, comment #7: 

@Mike: Thanks for your tests. The segfaults occur only intermittently on the buildbots. It might be that they only occur when the machine is under heavy load, e.g. when an mxe job is running on the same machine at the same time.

Could you please try to stress your machine while you are running the test suite and check if it still doesn't segfault?

Markus Mützel <mmuetzel>
Project Member
Thu 16 Jan 2020 10:53:06 PM UTC, comment #6: 

I further built the default branch with Clang 6 and Clang 10 (Git snapshot), and the full test suite runs fine with them. I don't have any older versions of Clang readily available without setting up a container or VM.

So if this buildbot segmentation fault is real, it seems to have been fixed with Clang 6 and later.

I don't think there's any particular reason why those buildbots use Clang 4 and 5, probably just old configurations that haven't been updated.

Mike Miller <mtmiller>
Project Administrator
Thu 16 Jan 2020 08:10:44 AM UTC, comment #5: 

That wasn't clear to me either.
Would it make sense to update the buildbots to use newer clang versions? Or is there a particular reason they run with clang 4 and clang 5?

Markus Mützel <mmuetzel>
Project Member
Wed 15 Jan 2020 09:10:31 PM UTC, comment #4: 

Ok, that wasn't clear in this report. If this error only occurs with older versions of Clang, and never with newer versions, then it's probably not worth fixing, right?

Mike Miller <mtmiller>
Project Administrator
Wed 15 Jan 2020 09:00:19 PM UTC, comment #3: 

Clang 8 is reasonably new; the fault is with clang 4 and 5.
Fedora has clang 9 and it is also fine.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 15 Jan 2020 08:55:34 PM UTC, comment #2: 

I am not able to reproduce this segmentation fault on my system (Debian) with Clang version 8 (with or without xvfb-run). The full test suite runs for me with only 2 test failures in publish.tst, exactly the same results as with GCC.

Mike Miller <mtmiller>
Project Administrator
Tue 14 Jan 2020 04:22:52 PM UTC, comment #1: 

Maybe we need to compile a version with debugging symbols and run "__run_test_suite__" manually under a debugger so that a backtrace can be obtained.

This could be a problem with clang, but more likely it is something generic that we are doing incorrectly that only occasionally surfaces for certain combinations of compiler, libraries, and machine.

Rik <rik5>
Project Administrator
Mon 13 Jan 2020 03:34:54 PM UTC, original submission:  

The test suite repeatedly fails to complete on the clang buildbots:
http://buildbot.octave.org:8010/#/builders/12/builds/1525/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1322/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/12/builds/1522/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1317/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/10/builds/1315/steps/6/logs/stdio
http://buildbot.octave.org:8010/#/builders/12/builds/1516/steps/6/logs/stdio
And several more.

The segmentation fault seems to occur at random tests. It occurs with both the clang 4.0 and the clang 5.0 buildbots.

This should ideally be fixed before releasing Octave 6.1.
If we don't aim to support clang, the priority can probably be lowered.

Markus Mützel <mmuetzel>
Project Member

 


Attached Files
file #49678:  shutdown-diffs.txt added by jwe (2KiB - text/plain)
file #49674:  57591.genpropdoc.diff added by rik5 (353B - text/x-patch)
file #49330:  asan_publish.log added by mmuetzel (7KiB - application/octet-stream)

 


14 latest changes:

    Date        Changed by  Updated field   Previous value => Replaced by
    2020-10-27  jwe         Attached File   - => Added stable-clang-debian-stack-trace.txt, #50140
    2020-09-29  doronbehar  Carbon-Copy     Removed 111330 => -
    2020-08-24  mmuetzel    Severity        5 - Blocker => 4 - Important
    2020-08-19  mmuetzel    Status          Ready For Test => Confirmed
    2020-08-16  jwe         Status          Patch Submitted => Ready For Test
    2020-08-15  jwe         Attached File   - => Added shutdown-diffs.txt, #49678
                            Status          Confirmed => Patch Submitted
    2020-08-14  rik5        Attached File   - => Added 57591.genpropdoc.diff, #49674
    2020-06-20  mmuetzel    Attached File   - => Added asan_publish.log, #49330
                            Status          None => Confirmed
    2020-05-15  mmuetzel    Status          Confirmed => None
                            Summary         Segmentation faults when running the test suite (more prevalent on Fedora) => Segmentation faults when running the test suite (mostly with clang)
    2020-05-14  rik5        Status          None => Confirmed
    2020-05-14  mmuetzel    Summary         Segmentation faults with clang when running the test suite => Segmentation faults when running the test suite (more prevalent on Fedora)
