bug #58926: Octave gives wrong results with intel-mkl when diagonalizing large matrices

Submitter:  Archisman Panigrahi <apandada1>
Submitted:  Mon 10 Aug 2020 06:01:06 AM UTC
   
 
Category:  Octave Function
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Incorrect Result
Status:  Invalid / Not an Octave Bug
Assigned to:  None
Originator Name:
Open/Closed:  Closed
Release:  5.2.0
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Sat 03 Feb 2024 08:49:34 PM UTC, comment #35: 

I'm able to compile GNU Octave 8.4.0 from source directly against Intel MKL 2024.0.0 without any segfault from __run_test_suite__.  No LD_PRELOAD workaround is necessary.  It gives the correct results for the issue mentioned in this bug report, at what appears to be full MKL speed.  More details in https://askubuntu.com/questions/891189/octave-4-2-1-and-intel-mkl/1502160#1502160

Henry Shu <mlguru>
Tue 06 Sep 2022 01:31:02 PM UTC, comment #34: 

Closing this old bug report for Octave 5.2.0; it was caused by using an unsupported linear algebra library with incompatible settings. If documentation is still needed, it can be added to wiki.octave.org.

Arun Giridhar <arungiridhar>
Group Member
Thu 13 Aug 2020 05:28:09 PM UTC, comment #33: 

Building OpenBLAS from source is definitely recommended. It will use very good CPU-probing during configuration to pick the most appropriate compiler flags for your architecture. You can continue to use precompiled Octave with that self-compiled OpenBLAS if you want, particularly if you will spend the majority of the time executing linear algebra code and not interpreted code like I/O or drawing graphs. You are of course welcome to build Octave from source too but it's not essential based on what you've described.

OMP_NUM_THREADS controls the number of parallel threads used for large matrix operations. It's a tuning parameter that you can experiment with after you have built OpenBLAS. Common values are the number of physical cores or the number of hardware threads. Your Intel i3 likely has 2 cores and 4 threads (sometimes called "logical cores").
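
If you want to check what you are currently running with, here is a minimal sketch from inside Octave (assuming an OpenMP-threaded BLAS such as OpenBLAS; nproc and getenv are Octave built-ins):

nproc ("all")                  % logical CPUs visible to Octave
getenv ("OMP_NUM_THREADS")     % empty string if the variable is not set
n = 500;  C = sin ((1:n)' + (1:n).^2);
tic; max (real (eig (C))), toc % repeat after restarting Octave with
                               % OMP_NUM_THREADS=2 and then =4

Compare the eig() timings between runs; whichever setting is consistently fastest on your i3 is the one to keep.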

The objective of this experiment is not to tune one user's setup alone. It's to quantifiably compare OpenBLAS (which is fully supported by Octave and is cross-platform and cross-architecture) and MKL (which is not supported, is heavily Intel-specific, and is nonfree in the GNU classification) as a basis for future recommendations. The same thinking is also influencing HPC decisions in academia, with Epyc Rome being seriously considered as a replacement for older Xeons.

Anonymous
Thu 13 Aug 2020 04:27:10 PM UTC, comment #32: 

Re comment #31, I have an entry-level Intel i3 (4 cores) CPU.
I am using precompiled OpenBLAS libraries from the Ubuntu repositories. Will compiling OpenBLAS on my machine significantly improve results?

There are many threads about this on the internet; some advise compiling OpenBLAS and then compiling Octave against it, while others advise compiling OpenBLAS, making it the default BLAS, and letting the precompiled Octave from the Ubuntu repositories pick it up automatically.

Can someone suggest which of these two methods will give better results? Also, what kind of flags should be used when compiling OpenBLAS?

@Dmitri: Is OMP_NUM_THREADS somehow related to OpenBLAS? What is the best practice for specifying the number of threads? (= number of cores?)

Anyway, here are my results with the precompiled OpenBLAS from default repositories.


octave:1> format long g
octave:2> clear all; n = 500; C = sin((1:n)' + (1:n).^2); for i = 100:-1:1, tic; val(i) = max(eig(C)); t(i) = toc; end; clear i; assert(range(val)==0); assert(all(abs(val - 16.914886497930) <= 1e-12)); [min(t), mean(t), median(t), max(t), range(t), std(t)]
ans =

 Columns 1 through 3:

     0.417322158813477     0.441691470146179     0.441818952560425

 Columns 4 through 6:

     0.512679100036621    0.0953569412231445    0.0211517261113866


Here are the results with MKL and the environment variable workaround, for the same code.


ans =

 Columns 1 through 3:

     0.136284112930298     0.148864336013794      0.13891065120697

 Columns 4 through 6:

     0.232420921325684    0.0961368083953857    0.0211357759595586




Archisman Panigrahi <apandada1>
Thu 13 Aug 2020 03:33:59 PM UTC, comment #31: 

@Dmitri: Completely agree. I'm on Ryzen too and my OpenBLAS results were much faster than OP's posted benchmarks. (Yours are slightly faster, being 16 threads to my 12.) That's what prompted my question to OP about whether he was using precompiled OpenBLAS with untuned flags. If so, using a properly built OpenBLAS would be very competitive with MKL, as far as Octave goes anyway.


octave:7> format long g
octave:8> format compact
octave:9> clear all; n = 500; C = sin((1:n)' + (1:n).^2); for i = 100:-1:1, tic; val(i) = max(eig(C)); t(i) = toc; end; clear i; assert(range(val)==0); assert(all(abs(val - 16.914886497930) <= 1e-12)); [min(t), mean(t), median(t), max(t), range(t), std(t)]
ans =
 Columns 1 through 4:
      0.1320030689239502       0.133814971446991       0.133613109588623      0.1365499496459961
 Columns 5 and 6:
    0.004546880722045898   0.0008096358540892868


Anonymous
Thu 13 Aug 2020 03:01:01 PM UTC, comment #30: 

Speaking of benchmarks, on Ryzen I got the best results with the OpenMP
interface and by limiting the number of threads to the number of CPU cores
(16 in my case):

 OMP_NUM_THREADS=16 LD_PRELOAD=./libopenblas.so octave -q -f
octave:1> ii=1:500;
octave:2> c = sin(ii' + ii.^2);
octave:3> whos c
Variables in the current scope:

   Attr Name        Size                     Bytes  Class
   ==== ====        ====                     =====  =====
        c         500x500                  2000000  double

Total is 250000 elements using 2000000 bytes

octave:4> tic; g = eig(c); toc
Elapsed time is 0.118513 seconds.
octave:5> max(real(g))
ans =  16.915

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 13 Aug 2020 02:20:57 PM UTC, comment #29: 

https://gist.github.com/N0rbert/cfda101b8f0aa326df1edb6beee0076d

I see the comparison above has now expanded to include R, which, like Octave, requires the MKL_THREADING_LAYER workaround, and that Scilab, like Octave, has more test failures with MKL than without.

@OP: This really ought to be documented better by the MKL team at Intel. Please do keep up your efforts there.

@OP: In your timing comparisons of OpenBLAS and MKL, are you using precompiled OpenBLAS from a repository, or OpenBLAS that was configured and built on your specific machine? That makes a big difference because of "-march=native"-style compiler flags.

Anonymous
Thu 13 Aug 2020 12:58:10 AM UTC, comment #28: 

@OP: You have a typo between m and M in line 50 of your test script. Could you confirm that Scilab passes the assertion after fixing it?

Anonymous
Wed 12 Aug 2020 07:23:39 PM UTC, comment #27: 

Re comment #21,
Further mentioning "For example, if you are using bash, this can be achieved by adding the line

export MKL_THREADING_LAYER=gnu

to the .bashrc file" would really help a lot of users. Many beginner-friendly GNU/Linux distributions still use bash by default.

Archisman Panigrahi <apandada1>
Wed 12 Aug 2020 07:06:37 PM UTC, comment #26: 


comment #22:

> RE comment #15, how do Scilab and Fortran + MKL produce the correct result?  Do they not also require setting the MKL_THREADING_LAYER environment variable to get the correct results?
>


I have verified that Scilab produces the correct result without setting the environment variable, using the code in this script by Norbert: https://gist.github.com/N0rbert/cfda101b8f0aa326df1edb6beee0076d

I copied the lines containing the Scilab code and ran them separately to ensure that the environment variable was not set before running Scilab.

Archisman Panigrahi <apandada1>
Wed 12 Aug 2020 05:50:17 PM UTC, comment #25: 

Unfortunately the fntests.log files don't seem to give any reason for the crash. The assertion failure between real and complex is pretty blatant though, even with the MKL_THREADING_LAYER workaround. That is a no-go for anyone who wants to use Octave with MKL, at least the versions that OP downloaded from the distro repositories.

Overall, I don't see how we can do better in this case than documenting the problem, suggesting the MKL_THREADING_LAYER workaround, and asking the user to test the output. At least future users trying the same combination can find it either in the manual or in this bug report.

@OP: I saw in your linked forum posts that you asked the MKL team to document it as well. Do feel free to try it again in future as new versions of MKL are released, or if you want to build Octave from source yourself and link with IOMP, etc. But that will have to be on your own. If you find anything that works, please consider a wiki HOWTO entry.

Anonymous
Wed 12 Aug 2020 01:11:51 PM UTC, comment #24: 

I am uploading the results of

__run_test_suite__

with and without the command export MKL_THREADING_LAYER=gnu. In both cases Octave eventually shows a segmentation fault (core dumped).

(file #49655, file #49656)

Archisman Panigrahi <apandada1>
Tue 11 Aug 2020 06:20:33 PM UTC, comment #23: 

@John: Latest doc version attached incorporating your feedback.

I dug deeper into the question of how other programs achieve correct results with MKL. I cannot speak specifically for Scilab, but one way to get correct results with MKL is apparently to use IOMP instead of GOMP throughout. There was some discussion elsewhere [1] that using Clang/LLVM instead of GCC makes the build link against IOMP instead of GOMP, if that is what the user wants. The consensus was that using GCC with "-fopenmp" automatically pulls in GOMP, and that linking with Clang overrides the GOMP calls with IOMP. Presumably someone who compiles Octave with Clang and uses IOMP instead of GOMP throughout would get fully functional MKL, but there doesn't seem to be any actual evidence yet one way or the other. Since it was unproven, I left it out of the Known Bugs section as a possible workaround. In any case it won't help OP, who is using a pre-compiled Octave 5.2.0 binary from the Ubuntu repository.

[1]: https://stackoverflow.com/questions/25986091/telling-gcc-to-not-link-libgomp-so-it-links-libiomp5-instead

(file #49654)

Anonymous
Tue 11 Aug 2020 05:58:46 PM UTC, comment #22: 

RE comment #15, how do Scilab and Fortran + MKL produce the correct result?  Do they not also require setting the MKL_THREADING_LAYER environment variable to get the correct results?

John W. Eaton <jwe>
Group administrator
Tue 11 Aug 2020 05:56:57 PM UTC, comment #21: 


export VAR=val


is not universal syntax and .bashrc won't work for people who don't use bash, so maybe we could just say "can be achieved by setting the environment variable MKL_THREADING_LAYER to "gnu" before starting Octave"?


John W. Eaton <jwe>
Group administrator
Tue 11 Aug 2020 05:24:43 PM UTC, comment #20: 

Oops. Had a typo in the bugfix (model instead of layer), now fixed. Use this version to review instead.


(file #49653)

Anonymous
Tue 11 Aug 2020 03:32:28 PM UTC, comment #19: 

On second thought, I removed the part about Ctrl-C with external libraries. Only the Intel MKL workaround is documented. Updated "hg diff bugs.txi" attached.

(file #49652)

Anonymous
Tue 11 Aug 2020 03:11:06 PM UTC, comment #18: 

I have taken the liberty of adding the known problems with external libraries as a new section in doc/interpreter/bugs.txi. Attached is the output of "hg diff bugs.txi". Please provide feedback on whether this wording is appropriate. Happy to edit as required.


(file #49651)

Anonymous
Tue 11 Aug 2020 01:06:29 PM UTC, comment #17: 

@Dmitri: It's to protect our own users who might encounter bugs with MKL and think that Octave is at fault when it really isn't. If you look at the links earlier in the thread, there's a constant theme of language like "BLAS is innocent, Octave is suspect", etc., before people realize that most of these programs are perfectly fine by themselves but can't work together in some situations. We also have a section already in the manual (Appendix D) for exactly this sort of user experience:


This section describes known problems that affect users of Octave. Most of these are not Octave bugs per se—if they were, we would fix them. But the result for a user may be like the result of a bug.


It looks like a good place to list known problems with existing libraries outside Octave's control and how to work around them. The clash between IOMP and GOMP is one such problem.

Anonymous
Tue 11 Aug 2020 12:55:58 PM UTC, comment #16: 

Octave also core dumps with MKL. MKL is broken and should not be used with Octave at the moment.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 11 Aug 2020 12:50:12 PM UTC, comment #15: 

comment #14:

> So at the moment, MKL is broken. Not the first time; I am pretty sure not the last time. Why would we need to document this in the manual?
>
> Dmitri.
> --
>


@Dmitri

Octave + BLAS/OpenBLAS --> produces correct result
Octave + MKL --> produces incorrect result
Scilab + MKL --> produces correct result (Someone confirmed in a comment in https://askubuntu.com/a/1265802/124466)
Fortran + MKL --> produces correct result (Someone confirmed in https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/GNU-Octave-gives-different-results-with-MKL/m-p/1198890/highlight/true#M29885)


Archisman Panigrahi <apandada1>
Tue 11 Aug 2020 12:34:28 PM UTC, comment #14: 

So at the moment, MKL is broken. Not the first time; I am pretty sure not the last time. Why would we need to document this in the manual?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 11 Aug 2020 08:23:05 AM UTC, comment #13: 

https://octave.org/doc/v5.2.0/Trouble.html

@OP and @Kai: I recommend that the problem be documented in the Octave manual rather than the wiki. Would the above location be suitable? The problem occurs outside Octave's code base, so perhaps calling it "Known Incompatibility with MKL" would be better than "Known Bugs". I don't know enough about MKL to say whether it affects only Debian-based systems.

@OP: regarding the test suite failing, could you include the excerpts from test/fntests.log showing where Octave crashed when used with MKL? Please do this for both cases, without and with the export technique. We cannot promise that Octave can be made bulletproof against external library behavior, but we can certainly document it and list workarounds.

Anonymous
Tue 11 Aug 2020 07:23:37 AM UTC, comment #12: 

comment #10:

> Regarding comment #6: Thank you for sharing your knowledge.
>
> Regarding comment #8: Thank you for testing the workaround of comment #6.
>
> I agree that Octave should better inform Intel MKL users.  Do you have suggestions where to add this information?  If you know a suitable wiki location, please go ahead and add this information (everybody is an author) 😉


Maybe a new page titled "Using Octave with Intel MKL in Debian based systems" can be created?

>
> On the other hand, I do not think that this is a severe bug.  A workaround exists and Octave does not recommend or rely on the Intel MKL.  Boldly speaking, it was your decision to use the Intel MKL with Octave (at your own risk).  Same goes for self-compiled libraries, etc.  It is impossible to account for all possible library and parameter combinations.
>

I completely agree with you that it is my responsibility when I use a non-free external library like MKL, but it should still be well documented, as MKL is quite popular.

> A general recommendation is to run the Octave test suite when changing from default packages (reference BLAS, OpenBLAS)
>
> +verbatim-
> _run_test_suite_
> -verbatim-
>
> This takes only about 5 minutes and should reveal such bugs.


Although Octave executed my program correctly, it fails the test suite even after using

export MKL_THREADING_LAYER=gnu


Here is the output.


octave:1> __run_test_suite__

Integrated test scripts:

  liboctave/array/Array.cc-tst ................................... PASS   21/21
  liboctave/array/CMatrix.cc-tst ................................. PASS    9/11
                                                                   FAIL    2
  liboctave/array/CSparse.cc-tst ................................. PASS   10/10
  liboctave/array/Sparse.cc-tst .................................. PASS  107/107
  liboctave/array/dMatrix.cc-tst ................................. PASS   10/10
  liboctave/array/dSparse.cc-tst ................................. PASS   12/12
  liboctave/array/fCMatrix.cc-tst ................................fatal: caught signal Segmentation fault -- stopping myself...
Segmentation fault (core dumped)


Without the export command, the output is


Integrated test scripts:

  liboctave/array/Array.cc-tst ................................... PASS   21/21
  liboctave/array/CMatrix.cc-tst .................................fatal: caught signal Segmentation fault -- stopping myself...
Segmentation fault (core dumped)


Archisman Panigrahi <apandada1>
Tue 11 Aug 2020 06:58:32 AM UTC, comment #11: 

Sorry for the bad markup in comment #10:


__run_test_suite__


Kai Torben Ohlhus <siko1056>
Group Member
Tue 11 Aug 2020 06:56:41 AM UTC, comment #10: 

Regarding comment #6: Thank you for sharing your knowledge.

Regarding comment #8: Thank you for testing the workaround of comment #6.

I agree that Octave should better inform Intel MKL users.  Do you have suggestions where to add this information?  If you know a suitable wiki location, please go ahead and add this information (everybody is an author) 😉

On the other hand, I do not think that this is a severe bug.  A workaround exists and Octave does not recommend or rely on the Intel MKL.  Boldly speaking, it was your decision to use the Intel MKL with Octave (at your own risk).  Same goes for self-compiled libraries, etc.  It is impossible to account for all possible library and parameter combinations.

A general recommendation is to run the Octave test suite when changing from default packages (reference BLAS, OpenBLAS)

+verbatim-
_run_test_suite_
-verbatim-

This takes only about 5 minutes and should reveal such bugs.

Kai Torben Ohlhus <siko1056>
Group Member
Tue 11 Aug 2020 06:06:27 AM UTC, comment #9: 


comment #7:

> BTW, independent of the BLAS and OMP question, you can speed up the rest of your code with vectorization:
>


> octave:21> clear a b c d
> octave:22> tic; for a = 1:500, for b = 1:500, c(a,b) = sin(a + b^2); endfor; endfor; toc
> Elapsed time is 0.85335 seconds.
> octave:23> tic; d = sin((1:500)' + (1:500).^2); toc
> Elapsed time is 0.00294185 seconds.
> octave:24> assert(all(all(c==d)))
> octave:25>


>
> It's roughly 290 times faster to avoid the for-loops in this case (853 ms vs 3 ms).


Thank you, I did not know about this technique.

Archisman Panigrahi <apandada1>
Tue 11 Aug 2020 06:04:00 AM UTC, comment #8: 


comment #6:

> @OP: That is enough information to localize that the bug is not within Octave, since openblas gives the correct answer.
>
> If you type


> export MKL_THREADING_LAYER=gnu


> at the bash prompt before invoking Octave, does the error go away? If so, that would confirm that the error is between IOMP and GOMP. The MKL was given a Gnu compatibility mode to solve that problem.
>

I confirm that this workaround works.

However, users who are using MKL with Octave should somehow be notified about this bug. This issue only pops up when the matrices are sufficiently big, so it may go unnoticed (having used Octave for several years, I have always believed its results, and so would most other users). I only noticed the issue after plotting a graph of something, which seemed to be wrong.

This issue and its workaround should be mentioned in the Octave Wiki. Right now, it can only be found in some Debian bug reports (it is not mentioned on MKL's website at all, and I could not find it anywhere else on the web), so it is hard to find. And users normally would not double-check results produced by Octave.
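
A quick sanity check like the following would catch the problem immediately (just a sketch; the expected value is the one I reported as correct in the original submission), but most users would have no reason to run something like this after installing MKL:

n = 500;  C = sin ((1:n)' + (1:n).^2);
lambda = max (real (eig (C)))
assert (abs (lambda - 16.915) < 1e-3)   % errors out loudly if the BLAS returns garbage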

I feel the severity of this bug should be increased from Normal to Very High (it is not exactly an Octave bug, but it ends up affecting Octave severely; Scilab remains unaffected by this issue. Also, I don't know whether I am allowed to change the severity of the bug myself; I could not find how to do it.), and this bug report should not be closed until this issue is mentioned in the Octave Wiki.

> If that workaround works, add the "export" line to your .bashrc file and Octave will work with MKL without surprises.


Yes, this works.

Archisman Panigrahi <apandada1>
Mon 10 Aug 2020 07:10:41 PM UTC, comment #7: 

BTW, independent of the BLAS and OMP question, you can speed up the rest of your code with vectorization:


octave:21> clear a b c d
octave:22> tic; for a = 1:500, for b = 1:500, c(a,b) = sin(a + b^2); endfor; endfor; toc
Elapsed time is 0.85335 seconds.
octave:23> tic; d = sin((1:500)' + (1:500).^2); toc
Elapsed time is 0.00294185 seconds.
octave:24> assert(all(all(c==d)))
octave:25>


It's roughly 290 times faster to avoid the for-loops in this case (853 ms vs 3 ms).

Anonymous
Mon 10 Aug 2020 06:59:35 PM UTC, comment #6: 

@OP: That is enough information to establish that the bug is not within Octave, since OpenBLAS gives the correct answer.

If you type

export MKL_THREADING_LAYER=gnu

at the bash prompt before invoking Octave, does the error go away? If so, that would confirm that the error is between IOMP and GOMP. MKL was given a GNU compatibility mode to solve exactly that problem.

If that workaround works, add the "export" line to your .bashrc file and Octave will work with MKL without surprises.
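
If it does work, you can also confirm from inside a fresh Octave session that the variable is visible and the result is back to normal (a quick sketch, assuming your attached testmkl.m is on the path):

getenv ("MKL_THREADING_LAYER")   % should return "gnu" if the export took effect
testmkl                          % should now print roughly 16.915 instead of huge random values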

Anonymous
Mon 10 Aug 2020 04:25:57 PM UTC, comment #5: 

comment #3:

> Can you test it using a different linear algebra library like openblas, atlas, or even the reference blas? If it works with that and not on MKL then we localize one end of the problem.
>

I have tested it with OpenBLAS (which is installed as a dependency of the default Octave installation in Ubuntu), and there is no error (ans = 16.915).

> Can you compute the same thing using MKL and a front end other than Octave like Python / Anaconda or even just C++ or Fortran? If that works / doesn't work we'll know whether the problem is in Octave or in MKL.


I initially posted this on AskUbuntu and someone confirmed the bug (last comment of https://askubuntu.com/a/1265802/124466). He also mentioned that the same code produces correct results with Scilab (mentioned in the comment). He made a gist (https://gist.github.com/N0rbert/cfda101b8f0aa326df1edb6beee0076d) containing the Octave and Scilab code.

I don't have much experience doing linear algebra in C or Fortran (I can produce the matrix, but I don't know how to use the Eigen library to diagonalize it). I am trying to write a Python program with NumPy. Meanwhile, if someone can post Python code based on the following algorithm, I will be happy to test it.

Algorithm: Create a 500x500 matrix A with
 A(i,j) = sin(i + j*j)

Diagonalize A, and find the maximum among the real parts of all eigenvalues of A.

The correct answer is approximately 16.915

Archisman Panigrahi <apandada1>
Mon 10 Aug 2020 03:38:35 PM UTC, comment #4: 

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921193#12

The above discussion and the link from the OP both point to a clash between libgomp (GNU) and libiomp (Intel). Some workarounds suggested were to set MKL_THREADING_LAYER=gnu or =tbb before running Octave with MKL.

@OP: will that workaround work for you?

Anonymous
Mon 10 Aug 2020 03:21:26 PM UTC, comment #3: 

Can you test it using a different linear algebra library like OpenBLAS, ATLAS, or even the reference BLAS? If it works with that and not with MKL, then we localize one end of the problem.

Can you compute the same thing using MKL and a front end other than Octave, like Python/Anaconda or even just C++ or Fortran? Depending on whether that works or not, we'll know whether the problem is in Octave or in MKL.

Anonymous
Mon 10 Aug 2020 06:07:28 AM UTC, comment #2: 

(I duplicated the original post in the last comment by mistake; now I cannot find how to delete it.)

Although the output changes on every run, maybe I should still add an example output. Here is one.

octave:1> testmkl
ans =  157135.35198

Archisman Panigrahi <apandada1>
Mon 10 Aug 2020 06:03:42 AM UTC, comment #1: 


original submission:

> I installed octave (5.2.0) and intel-mkl (2020.0.166) from the default repositories of Ubuntu 20.04
>
> When I try to diagonalize a large matrix, Octave gives wrong results with MKL (it is much faster, though).
>
> This is the code (https://pastebin.pl/view/611d6fe3) which gives wrong results with MKL.
>
> ```
> for a = 1:500
>         for b = 1:500
>                 c(a,b) = sin(a + b^2);
>         endfor
> endfor

> g = eig(c);
> max(real(g))
> ```
>
> The correct result is `ans = 16.915` (I checked this before installing intel-mkl, and also in another Ubuntu 18.04 installation in the same computer, where intel-mkl was not installed).
>
> With MKL in Ubuntu 20.04, I get random numbers of order 10^5 - 10^6, which change every time the code is run.
>
> I don't know whether this is related to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921207
>
>

Archisman Panigrahi <apandada1>
Mon 10 Aug 2020 06:01:06 AM UTC, original submission:  

I installed octave (5.2.0) and intel-mkl (2020.0.166) from the default repositories of Ubuntu 20.04

When I try to diagonalize a large matrix, Octave gives wrong results with MKL (it is much faster, though).

This is the code (https://pastebin.pl/view/611d6fe3) which gives wrong results with MKL.

```
for a = 1:500
        for b = 1:500
                c(a,b) = sin(a + b^2);
        endfor
endfor
 
g = eig(c);
max(real(g))
```

The correct result is `ans = 16.915` (I checked this before installing intel-mkl, and also in another Ubuntu 18.04 installation in the same computer, where intel-mkl was not installed).

With MKL in Ubuntu 20.04, I get random numbers of order 10^5 - 10^6, which change every time the code is run.

I don't know whether this is related to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921207


Archisman Panigrahi <apandada1>

 


Attached Files
file #49655:  fntests_with_export.log added by apandada1 (1KiB - text/x-log - fntest result with and without export MKL_THREADING_LAYER=gnu)
file #49656:  fntests_without_export.log added by apandada1 (1KiB - text/x-log - fntest result with and without export MKL_THREADING_LAYER=gnu)
file #49654:  bugs.txi.diff added by None (2KiB - text/x-patch)
file #49653:  bugs.txi.diff added by None (2KiB - text/x-patch)
file #49652:  bugs.txi.diff added by None (2KiB - text/x-patch)
file #49651:  bugs.txi.diff added by None (3KiB - text/x-patch)
file #49648:  testmkl.m added by apandada1 (96B - text/x-objcsrc - code which produces wrong results with MKL)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • mlguru (Posted a comment)
  • arungiridhar (Posted a comment)
  • jwe (Posted a comment)
  • dasergatskov (Posted a comment)
  • siko1056 (Updated the item)
  • apandada1 (Submitted the item)


    11 latest changes:

    Date        Changed by     Updated Field    Previous Value => Replaced by
    2022-09-06  arungiridhar   Status           Need Info => Invalid / Not an Octave Bug
                               Open/Closed      Open => Closed
    2020-08-12  apandada1      Attached File    - => Added fntests_with_export.log, #49655
                               Attached File    - => Added fntests_without_export.log, #49656
    2020-08-11  None           Attached File    - => Added bugs.txi.diff, #49654
    2020-08-11  None           Attached File    - => Added bugs.txi.diff, #49653
    2020-08-11  None           Attached File    - => Added bugs.txi.diff, #49652
    2020-08-11  None           Attached File    - => Added bugs.txi.diff, #49651
    2020-08-11  siko1056       Item Group       None => Incorrect Result
    2020-08-11  siko1056       Status           None => Need Info
    2020-08-10  apandada1      Attached File    - => Added testmkl.m, #49648
