Oops, I forgot that mlock doesn't take a function name but must be called from inside the function you wish to lock.
Closing this report as fixed.
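For reference, a minimal sketch of the pattern being described (the function name below is made up): mlock () takes no arguments and locks whatever function is currently executing, so the call has to sit inside the function itself:

  function __init_mytoolkit__ ()
    ## Lock this function itself so that "clear" cannot unload it
    ## while the toolkit is still in use.
    mlock ();
    ## ... toolkit initialization would go here ...
  endfunction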
|
JWE:
Your fix works, nice, thanks.
As to mlock:
(with a cross-build from yesterday evening)
|
JWE wrote:
As a quick test, you can also take one of the builds you have that shows the
crash and do
and see if that works. It should be essentially the same as my change.
|
As a quick test, you can also take one of the builds you have that shows the crash and do
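(The suggested command did not survive here. Judging from the follow-up at the top of the thread, it presumably named the function directly, something like

  mlock ("__init_qt__")

which, as noted there, does not actually work because mlock must be called from inside the function that is to be locked.)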
|
Thanks,
I'll give it a try. Will take some time (make dist, cross-build, install ...).
|
My test shows that locking __init_qt__ also avoids the crash, so I pushed this changeset:
http://hg.savannah.gnu.org/hgweb/octave/rev/8953dd219f4c
|
Philip: For testing, I backed out the following changesets along with a few updates to account for conflicts due to other unrelated changes since these were made:
332be8be16eb
4b661d535ddb
553278e5aac7
14e844f1459a
But now I'm wondering whether we just need to mlock __init_qt__.oct when __init_qt__ is called. We already do that for __init_gnuplot__ and __init_fltk__.
I'll try that before pushing a changeset.
See also bug #55254.
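A quick way to check that existing behavior, assuming the gnuplot toolkit is available (a sketch, not taken from the report): after selecting a toolkit, its init function should report as locked:

  graphics_toolkit ("gnuplot");        # loads and runs __init_gnuplot__
  mislocked ("__init_gnuplot__")       # should return true if it locked itself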
|
@JWE: is that this one: 25931:332be8be16eb ?
According to the log that cset dates from Sep 28, while a Windows binary from Oct 14 still runs __run_test_suite__ fine.
Depends a bit on when that cset was actually pushed.
I tested another binary from Sep 30, it doesn't segfault on __run_test_suite__ either.
But maybe there's some other magic involved.
|
It looks like dynamically loading the Qt graphics code on Windows is the problem. I should have a changeset ready sometime later today.
|
@Markus comment #9:
No, sorry, I usually build Windows binaries after merging default with my local adaptations so I expect the HGID to be that of those merges. I usually do that in the evenings (Central Eur. time).
I'll try to check if I can pinpoint it a little more precisely based on m-file csets (changes) around Nov. 14.
|
A simple script which demonstrates a crash which may be related:
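(The script itself is not reproduced here. Given the later findings about the dynamically loaded Qt toolkit being unloaded, it was presumably something along these lines; this reconstruction is only a guess:)

  graphics_toolkit ("qt");
  plot (1:10);
  clear -f            # hypothetical trigger: clears dynamically loaded functions
  close all;
  plot (1:10);        # may crash if the toolkit code was unloaded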
|
@Philip: Do you know which hg id the October 14 build was based on?
|
@jwe, comment #4: This cset also causes Octave to crash (or hang, I don't remember) when building the doc on Mac OS (see http://octave.1599824.n4.nabble.com/building-on-macOS-td4689952.html). So if you think it is possible to back it out until we understand what is going wrong, then it would probably be good to do that.
|
Based on my archive of Windows binaries, all I can see is that the change happened between Oct 14 (still good) and Oct 25 (segfaults).
IIRC the long interval between these builds is due to a vacation :-)
|
The 2 fails are because I am using fltk:
I also see the same regression on Linux. So all in all: Everything looks good (apart from this crash).
|
To be honest, I am just guessing as to what is causing the crash.
But it looks like this might "solve" the issue. Running the test suite with fltk does NOT segfault for me:
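(The exact invocation is not shown here; presumably something along the lines of the following, where switching the toolkit before running the suite is the point being tested:)

  graphics_toolkit ("fltk");
  __run_test_suite__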
(I see 2 fails and 1 regression.)
|
Markus, thanks for doing the bisecting.
Ah, so I'd bet that the first bad changeset is the one in which I made Octave dynamically load the qt graphics toolkit. If undoing that change does fix the problem, then maybe I should just remove it for the release?
|
Octave is still crashing at the fixed tests for me with hg id 8c3e727c44b5.
Is this a blocker for Octave 5?
What can I do to help solve this?
Attached is the gdb backtrace of that segmentation fault. The top of that backtrace:
So together with the window of changesets, this could be an issue with how we initialize the qt graphics toolkit.
The last test appearing in the command window is that for bug #55308.
(file #45956)
|
Just as an info point, if I change __run_test_suite__ to run the fixed tests before the integrated tests, it does not crash.
|
On my 64-bit Windows 7 Professional box and Windows 7 Enterprise box at work, __run_test_suite__ segfaults in the fixed tests section, and fntests.log always ends with the colormap test.
Whether it is consistent only on my boxes and ends elsewhere on other people's boxes, I can't say. What might matter there is that I have several binary patches in my cross-builds (h5read, uitable, matrix right division, shortcut keys for variable editor).
The individual tests run fine (cd-ing to the fixed tests dir and running them one by one using "test <blabla>.tst").
I also tried running just the fixed tests:
__run_test_suite__ ({}, {'/path/to/fixed/tests'}, {})
and also then I saw no segfault.
Echo of screen messages attached.
Because of that I didn't pay much attention, but I agree with Markus that __run_test_suite__ shouldn't end in a segfault.
Maybe the number of tests, or the number of some type(s) of tests, exhausts some resource in Octave somewhere?
(file #45482)
|
Executing "__run_test_suite__" with Octave from the default branch on Windows causes it to crash with a segmentation fault. See the attached backtraces.
The crash happens at the fixed tests towards the end of the test suite at different tests that seem to be unrelated. But the backtraces look similar.
The last version that completes the test suite without crashing is hg id a0079f6f8c4 with 466c405ee09b and ac0b3c09c3db grafted on top.
I am having trouble cross-compiling a lot of the subsequent nodes for Windows. The next version I managed to cross-compile is hg id 14e844f1459a. That version already shows the segfault.
So this leaves a window of ~25 changesets starting with:
And ending with:
|