GNU Octave - Bugs: bug #66882, Convolution code path improvements

Submitter: None
Submitted: Sat 08 Mar 2025 12:20:38 AM UTC

Category: Octave Function
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Performance
Status: Ready For Test
Assigned to: None
Originator Name:
Originator Email: -email is unavailable-
Open/Closed: * Open
Release: * 9.2.0
Operating System: * GNU/Linux
Fixed Release: 11.1.0 (current default)
Planned Release: 11.1.0 (current default)
* Mandatory Fields


Wed 23 Apr 2025 06:49:52 AM UTC, comment #76: 

Nice. I didn't think of casting the address of that value and dereferencing the result. (Maybe, too late in the evening.)

I followed up with another change that moves these wrappers a bit closer to the style that is used elsewhere in Octave:
https://hg.savannah.gnu.org/hgweb/octave/rev/0ac3eb06cf0c

Markus Mützel <mmuetzel>
Group administrator
Wed 23 Apr 2025 01:17:47 AM UTC, comment #75: 

That seems to fix it. I tested it locally and pushed it with a commit message: https://hg.octave.org/octave/rev/1a7b38bd6843

Let's see if CI finds warnings.

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 11:50:28 PM UTC, comment #74: 

I did a recast and it worked. See attached.
(file conv_recast.diff)

(Also removed redundancies)

Dmitri.
--


(file #57161)

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 10:53:49 PM UTC, comment #73: 

Apparently, the "mixed" complex*real wrappers aren't used anywhere either.

See the attached wip patch that builds for me locally with GCC and Clang. (I haven't removed these "mixed" wrappers yet. Just commented them out.)
I tried to static_cast the complex scalar, but apparently that isn't allowed. It doesn't work with reinterpret_cast either. (Maybe because of a potentially different memory alignment?)
I resorted to a memcpy to "convert" from the C++ to the C type.
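
For illustration, a minimal sketch of that memcpy approach (hypothetical helper name; the actual change is in the attached patch):

// "Convert" a C++ complex scalar to the C complex type expected by the
// Fortran prototype by copying the bits: the two types have the same
// object representation in practice, but no cast between them is allowed.
// (Needs <complex> and <cstring>.)
static F77_DBLE_CMPLX
to_f77_dble_cmplx (const std::complex<double>& z)
{
  F77_DBLE_CMPLX c;
  static_assert (sizeof (c) == sizeof (z), "unexpected complex layout");
  std::memcpy (&c, &z, sizeof (c));
  return c;
}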

(file #57160)

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 10:49:10 PM UTC, comment #72: 

It looks like making the specialization functions "static" triggered the warnings.

Also, I just realized that the mixed-type (complex * real) wrappers are redundant and can be removed as well, since the templated version does pretty much the same thing.

(file das_conv_WIP3.diff)
The same as WIP_2, but with some redundancies removed.
Still no style changes per JWE request.

Dmitri.
--


(file #57159)

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 09:58:43 PM UTC, comment #71: 

Before the switch (fortran):

octave:1> A=ones(1e5,1);
octave:2> B=A';
octave:3> tic; conv2(complex(B),(B)); toc
Elapsed time is 1.03716 seconds.
octave:4> tic; conv2((B),(B)); toc
Elapsed time is 0.952221 seconds.
octave:5> tic; conv2(complex(B),complex(B)); toc
Elapsed time is 1.05226 seconds.
octave:6>

After applying the das_convn_WIP_2.diff patch:

octave:1> A=ones(1e5,1);
octave:2> B=A';
octave:3> tic; conv2((B),(B)); toc
Elapsed time is 0.966276 seconds.
octave:4> tic; conv2(complex(B),complex(B)); toc
Elapsed time is 1.06502 seconds.
octave:5>


Also, yes, the generic specialization can be deleted.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 09:37:49 PM UTC, comment #70: 

I see (both with gcc and clang):

octave:1> A=ones(1e5,1);
octave:2> B=A';
octave:3> tic; conv2(B,B); toc
Elapsed time is 0.955019 seconds.
octave:4> tic; conv2(complex(B),complex(B)); toc
Elapsed time is 8.06255 seconds.
octave:5> tic; conv2(complex(A),complex(A)); toc
Elapsed time is 7.91182 seconds.

I also tried to remove the fall-back (loops) specialization, and I see:

../liboctave/numeric/oct-convn.cc:77:1: note: candidate function not viable: no known conversion from 'std::complex<double>' to 'const F77_DBLE_CMPLX' (aka 'const _Complex double') for 2nd argument
   77 | blas_axpy (const F77_INT& n, const F77_DBLE_CMPLX& alpha,
      | ^                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~

(and many others like that).

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 09:36:48 PM UTC, comment #69: 

Oops. Meant to write that Clang might be handling `std::complex<float>` differently from `float _Complex` (and similarly for the double precision complex types).

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 09:34:58 PM UTC, comment #68: 

It might also be that Clang handles `std::complex<float>` differently from `double _Complex` in that respect. But GCC does not.

In principle, they are different types. So, Clang might not be wrong here...

Maybe, we should use `FloatComplex *` and `Complex *` in the function declarations of the C++ wrapper functions and reinterpret_cast these pointers to `F77_CMPLX *` and `F77_DBLE_CMPLX *` when calling the ?axpy Fortran functions?

That is a bit risky if the alignment of these types were indeed different. But that is not a new "risk" introduced by this change: these pointers have always been converted between each other one way or the other, and we haven't had issues so far...
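
A sketch of what such a wrapper might look like (untested; assuming the ?axpy prototypes take F77_DBLE_CMPLX arguments):

static void
blas_axpy (const F77_INT& n, const Complex& alpha, const Complex *x,
           const F77_INT& incx, Complex *y, const F77_INT& incy)
{
  // Reinterpret the C++ complex pointers as the C complex type that the
  // Fortran prototype expects; the scalar goes through its address.
  F77_FUNC (zaxpy, ZAXPY)
    (n, *reinterpret_cast<const F77_DBLE_CMPLX *> (&alpha),
     reinterpret_cast<const F77_DBLE_CMPLX *> (x), incx,
     reinterpret_cast<F77_DBLE_CMPLX *> (y), incy);
}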

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 09:33:04 PM UTC, comment #67: 

Deleting the generic function causes a bunch of compilation errors (attached), essentially that std::complex<double> cannot be converted to F77_DBLE_CMPLX for the appropriate overload, and the attempted conversion to type double also fails.

Should we use std::complex instead of F77_DBLE_CMPLX?

(file #57158)

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 09:29:14 PM UTC, comment #66: 

Yes. Anything with complex inputs is going to the generic version instead of the type-specific version.


octave:1> clear; A = ones(2,2) + i; B = ones(2,2) - i; tic; C = convn (A, B); toc
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!
../liboctave/numeric/oct-convn.cc:131: Generic version! Should not get here!


Diff:

diff --git a/liboctave/numeric/oct-convn.cc b/liboctave/numeric/oct-convn.cc
--- a/liboctave/numeric/oct-convn.cc
+++ b/liboctave/numeric/oct-convn.cc
@@ -28,6 +28,7 @@
 #endif

 #include <algorithm>
+#include <iostream>

 #include "Array.h"
 #include "CColVector.h"
@@ -127,6 +128,7 @@ static inline void
 blas_axpy (const F77_INT& n, const T& alpha, const T *x,
            const F77_INT& incx, T *y, const F77_INT& incy)
 {
+  std::cout << __FILE__ << ':' << __LINE__ << ": Generic version! Should not get here!\n";
   for (F77_INT i = 0; i < n; i++)
     y[i * incy] += alpha * x[i * incx];
 }


Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 09:17:10 PM UTC, comment #65: 

I'm seeing these warnings on the buildbot that is using Clang. But not on the one that's using GCC.

I'm wondering whether this is an issue that existed before. Maybe, we are only getting the warning now that the compiler knows that these functions don't have external linkage and aren't used anywhere else. Or is this warning false?

If you add different output on `std::cout` in these functions and in the generic template, which text do you see when you call `conv2` with different input arguments in Octave? Does it make a difference if you revert some of the recent changes?
Why do we need the generic template in the first place?
If we need the generic template, shouldn't the other functions be template specializations (instead of overloads)?

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 08:56:28 PM UTC, comment #64: 

After the latest patches, I see:

../liboctave/numeric/oct-convn.cc:77:1: warning: unused function 'blas_axpy' [-Wunused-function]
   77 | blas_axpy (const F77_INT& n, const F77_DBLE_CMPLX& alpha,
      | ^~~~~~~~~
../liboctave/numeric/oct-convn.cc:86:1: warning: unused function 'blas_axpy' [-Wunused-function]
   86 | blas_axpy (const F77_INT& n, const F77_CMPLX& alpha,
      | ^~~~~~~~~
../liboctave/numeric/oct-convn.cc:97:1: warning: unused function 'blas_axpy' [-Wunused-function]
   97 | blas_axpy (const F77_INT& n, const F77_DBLE_CMPLX& alpha, const double *x,
      | ^~~~~~~~~
../liboctave/numeric/oct-convn.cc:111:1: warning: unused function 'blas_axpy' [-Wunused-function]
  111 | blas_axpy (const F77_INT& n, const F77_CMPLX& alpha, const float *x,
      | ^~~~~~~~~


Which kind of suggests that there is some signature type mismatch and it now instantiates to the generic type?! I am getting lost tracking all of those down...

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 02:46:46 PM UTC, comment #63: 

Sorry it turned out to be such a hassle. I did some profiling.
For your `A = rand (4e3, 3e3, 30); B = [0 -1 0; -1 5 -1; 0 -1 0]; tic; C = convn (A, B); toc` benchmark, the BLAS axpy contribution is quite small and most of the time is spent in the Octave Array functions.
(file prof.png)

This is with AMD uProf:
https://www.amd.com/en/developer/uprof.html

Dmitri.
--



Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 02:23:48 PM UTC, comment #62: 

comment #54:

> Could you pre-load netlib BLAS and run the code?
> On my machine I see almost no difference (2 sec vs. 1.8 sec with OpenBLAS).
>
> Dmitri.
> --
>


This turned out to be hopeless. No matter what I did, it only linked against OpenBLAS, not reference BLAS. These were the things I did in sequence:

  • I tried to install reference BLAS from the distro but it said that would remove OpenBLAS, so I didn't proceed with that.


  • I built reference BLAS and LAPACK myself from source. It built only static libraries and LD_PRELOAD gave "couldn't read non-ELF format" errors, so I copied them to the `/usr/lib` directory under the names `librefblas.a` and `libreflapack.a` to not conflict with anything else.


  • Then I built a clone of Octave passing `--with-blas="refblas"` and `--with-lapack="reflapack"` to point to those static libraries. It correctly created the "-lrefblas" and "-lreflapack" flags during configuration.


  • Finally I added "-lreflapack -lrefblas" to the LIBS environment variable.


Even after all that, rebuilding from scratch each time, Octave only ever used OpenBLAS:


octave:1> version -blas
ans = OpenBLAS (config: OpenBLAS 0.3.24 NO_AFFINITY USE_OPENMP ZEN MAX_THREADS=32)

octave:2> ver
----------------------------------------------------------------------
GNU Octave Version: 11.0.0 (hg id: b97dd00210b0)


Since I couldn't get it to switch libraries without making disruptive system changes, I tried one further test:


r = ones (1, 1e5); tic; x = conv2 (r, r); toc


With OMP_NUM_THREADS = 1:

octave:4> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 1.67704 seconds.
octave:5> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 1.72117 seconds.
octave:6> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 1.67366 seconds.


With OMP_NUM_THREADS = 6:

octave:4> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 0.491193 seconds.
octave:5> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 0.501442 seconds.
octave:6> r = ones (1, 1e5); tic; x = conv2 (r, r); toc
Elapsed time is 0.503529 seconds.


Passing it a column vector instead of a row vector is marginally faster because it avoids the need to transpose/permute the input:

With OMP_NUM_THREADS = 1:

octave:1> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 1.6625 seconds.
octave:2> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 1.66298 seconds.
octave:3> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 1.67316 seconds.


With OMP_NUM_THREADS = 6:

octave:1> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 0.48233 seconds.
octave:2> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 0.482407 seconds.
octave:3> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 0.483342 seconds.
octave:4> r = ones (1e5, 1); tic; x = conv2 (r, r); toc
Elapsed time is 0.482564 seconds.


Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 12:03:37 PM UTC, comment #61: 

That CI configuration built and passed all tests with those changes:
https://github.com/gnu-octave/octave/actions/runs/14592840848/job/40931797005

Summary:

  PASS                            19822
  FAIL                                0
  XFAIL (reported bug)               57
  SKIP (missing feature)             40
  SKIP (run-time condition)          73


Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 10:40:15 AM UTC, comment #60: 

Ah. These functions are currently only called with an increment of 1. So, the issue doesn't manifest.
We should still do that correctly in general imho. So, I went ahead and pushed a change that addresses these two issues to the default branch:
https://hg.savannah.gnu.org/hgweb/octave/rev/945caea50fc0

We might still want to make these new wrapper functions static to their compilation unit. But I didn't make that modification yet.

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 10:26:18 AM UTC, comment #59: 

Looking at that part of the code for longer, I'm not sure if it does "the right thing"™.

If I understand correctly, the step size (incx) is already taken into account when converting from `c` to `cx`. But the `?axpy` functions are still called with an increment of `incx` instead of an increment of `1`.
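
In code terms (a sketch based on the wrapper quoted in comment #58 below; the real functions are in oct-convn.cc), the suspected fix is:

  // x is packed into the contiguous temporary cx with the stride incx
  // already applied ...
  OCTAVE_LOCAL_BUFFER (F77_DBLE_CMPLX, cx, n);
  for (F77_INT i = 0; i < n; i++)
    cx[i] = F77_DBLE_CMPLX (x[i * incx]);

  // ... so the BLAS call should step through cx with an increment of 1:
  F77_FUNC (zaxpy, ZAXPY) (n, alpha, cx, 1, y, incy);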

Are there any tests that check if these "mixed" real/complex overloads are working correctly?

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 10:14:08 AM UTC, comment #58: 

Using the Fortran types is probably better. If I recall correctly, the C++ standard makes some guarantees about the complex types. But there might still be some differences between the C and C++ types on some platforms (e.g., alignment, calling convention, ...).

I'm currently not set up to reproduce this locally. The error happened in a CI run. So, I can't readily test if this helps.

Something like the following (untested) would probably work:

diff --git a/liboctave/numeric/oct-convn.cc b/liboctave/numeric/oct-convn.cc
--- a/liboctave/numeric/oct-convn.cc
+++ b/liboctave/numeric/oct-convn.cc
@@ -98,12 +98,12 @@ blas_axpy (const F77_INT& n, const F77_D
            const F77_INT& incx, F77_DBLE_CMPLX *y, const F77_INT& incy)
 {
   // Create a temporary complex array from x
-  std::vector<F77_DBLE_CMPLX> cx(n);
+  OCTAVE_LOCAL_BUFFER (F77_DBLE_CMPLX, cx, n);
   for (F77_INT i = 0; i < n; i++)
-    cx[i] = F77_DBLE_CMPLX(x[i * incx]);
+    cx[i] = F77_DBLE_CMPLX (x[i * incx]);

   // Use zaxpy with the complex temporary
-  F77_FUNC (zaxpy, ZAXPY) (n, alpha, cx.data (), incx, y, incy);
+  F77_FUNC (zaxpy, ZAXPY) (n, alpha, cx, incx, y, incy);
 }

 // complex<float> * float  - by promoting to complex
@@ -112,12 +112,12 @@ blas_axpy (const F77_INT& n, const F77_C
            const F77_INT& incx, F77_CMPLX *y, const F77_INT& incy)
 {
   // Create a temporary complex array from x
-  std::vector<F77_CMPLX> cx(n);
+  OCTAVE_LOCAL_BUFFER (F77_CMPLX, cx, n);
   for (F77_INT i = 0; i < n; i++)
-    cx[i] = F77_CMPLX(x[i * incx]);
+    cx[i] = F77_CMPLX (x[i * incx]);

   // Use caxpy with the complex temporary
-  F77_FUNC (caxpy, CAXPY) (n, alpha, cx.data (), incx, y, incy);
+  F77_FUNC (caxpy, CAXPY) (n, alpha, cx, incx, y, incy);
 }

 // Generic fallback for types without BLAS support


Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 08:41:44 AM UTC, comment #57: 

@mmuetzel Does this make a difference? (Untested):

Change

  // Create a temporary complex array from x
  std::vector<F77_DBLE_CMPLX> cx(n);
  for (F77_INT i = 0; i < n; i++)
    cx[i] = F77_DBLE_CMPLX(x[i * incx]);


To

  // Create a temporary complex array from x
  std::vector<std::complex<double>> cx(n);
  for (F77_INT i = 0; i < n; i++)
    cx[i] = std::complex<double> (x[i * incx], 0.0);


That was Dmitri's original code but I tried using the Fortran double complex where possible per comment #46.

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 08:23:57 AM UTC, comment #56: 

Maybe, try using `OCTAVE_LOCAL_BUFFER` for the temporary buffer instead of the std::vector.

Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 08:21:05 AM UTC, comment #55: 

It looks like this has led to compilation errors on Ubuntu 22.04 using Clang 14 (with `clang++ -stdlib=libc++`):
https://github.com/gnu-octave/octave/actions/runs/14583052256/job/40903502575#step:10:6159

libtool: compile:  clang++ -stdlib=libc++ -std=gnu++17 -DHAVE_CONFIG_H -I. -I.. -DOCTAVE_DLL -DEXTERNAL_DLL -Iliboctave -I../liboctave -I../liboctave/array -Iliboctave/numeric -I../liboctave/numeric -Iliboctave/operators -I../liboctave/operators -I../liboctave/system -I../liboctave/util -I../liboctave/wrappers -I../liboctave/external/Faddeeva -I/usr/include/hdf5/serial -I/usr/include/suitesparse -fPIC -pthread -Wall -W -Wshadow -Woverloaded-virtual -Wold-style-cast -Wformat -Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual -fvisibility=hidden -g -O2 -MT liboctave/numeric/libnumeric_la-oct-convn.lo -MD -MP -MF liboctave/numeric/.deps/libnumeric_la-oct-convn.Tpo -c ../liboctave/numeric/oct-convn.cc  -fPIC -DPIC -o liboctave/numeric/.libs/libnumeric_la-oct-convn.o
../liboctave/numeric/oct-convn.cc:101:31: error: implicit instantiation of undefined template 'std::vector<_Complex double>'
  std::vector<F77_DBLE_CMPLX> cx(n);
                              ^
/usr/lib/llvm-14/bin/../include/c++/v1/iosfwd:260:28: note: template is declared here
class _LIBCPP_TEMPLATE_VIS vector;
                           ^
../liboctave/numeric/oct-convn.cc:115:26: error: implicit instantiation of undefined template 'std::vector<_Complex float>'
  std::vector<F77_CMPLX> cx(n);
                         ^
/usr/lib/llvm-14/bin/../include/c++/v1/iosfwd:260:28: note: template is declared here
class _LIBCPP_TEMPLATE_VIS vector;
                           ^
2 errors generated.


Markus Mützel <mmuetzel>
Group administrator
Tue 22 Apr 2025 01:39:30 AM UTC, comment #54: 

Could you pre-load netlib BLAS and run the code?
On my machine I see almost no difference (2 sec vs. 1.8 sec with OpenBLAS).

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Tue 22 Apr 2025 12:47:32 AM UTC, comment #53: 

Sorry, typo in the previous comment: it's 4000 x 3000 x 30, not 4000 x 3000 x 3. Essentially the same code as in comment #49, but with 3e3 and 4e3 swapped; the rest is the same.

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 12:44:45 AM UTC, comment #52: 

I found the test wasn't focused on the loops. Because the size of A was 3000 x 4000 x 3, it was being permuted to 4000 x 3000 x 3, and moving some 3 GB around in memory was taking half the time.

If I pass it 4000 x 3000 x 3 instead, so that permutation time is taken out of the equation, it's as follows:


ijk  1.72041
ikj  1.32809
jik  1.71654
jki  1.31012
kij  0.942958
kji  0.942854


Still no need to change the loop structure though.

@Dmitri: forgot to mention, I use clang and the polymorphic allocator configuration option. Not sure if it makes a difference.

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 12:24:21 AM UTC, comment #51: 

Didn't think mine was fast. It's a 2019 model Ryzen. Those runtimes were on the default branch (not the bytecode interpreter). I am using the jemalloc library, and I build with -O3 except for debug builds. If it helps, my OpenBLAS is a self-compiled 0.3.23; I didn't upgrade after that.

Arun Giridhar <arungiridhar>
Group Member
Tue 22 Apr 2025 12:06:34 AM UTC, comment #50: 

Arun, thanks for the fixes. How do you get these fantastic times on the conv test? I thought we had (almost) identical CPUs, and my times are ~5.5 sec.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 21 Apr 2025 11:26:08 PM UTC, comment #49: 

To test the optimal sequence of loops inside `convolve_2d`, I tried all 6 permutations of the following three loops in the "full" part of the code:


  for (F77_INT k = 0; k < na; k++)
    for (F77_INT j = 0; j < nb; j++)
      for (F77_INT i = 0; i < mb; i++)


Test code (apply a 3x3 sharpening kernel to 30 different 12-megapixel images):

  A = ones (3e3, 4e3, 30);
  B = [0 -1 0; -1 5 -1; 0 -1 0];
  tic;  C = convn (A, B); toc


Runtime in seconds (best of 5 runs; `ijk` means i is outermost loop, k is innermost, etc):

  kji 3.09367   <----- current
  kij 3.08693   <----- better by some 7 milliseconds
  jki 3.45857
  jik 3.87881
  ikj 3.47818
  ijk 3.88147


So there's a very minor 7 millisecond speedup (0.23%) from changing the current kji loop order to kij. This is probably not enough to warrant a change.

Arun Giridhar <arungiridhar>
Group Member
Mon 21 Apr 2025 11:04:28 PM UTC, comment #48: 

Pushed to https://hg.savannah.gnu.org/hgweb/octave/rev/b97dd00210b0

Ready for test (that part alone).

Pending changes: Rik expressed an interest in streamlining the different code paths to simplify the code and eliminate numerical differences.

Arun Giridhar <arungiridhar>
Group Member
Mon 21 Apr 2025 10:18:24 PM UTC, comment #47: 

Updated patch attached with changes from comment #46. Passes 'make check' etc.

(file #57154)

Arun Giridhar <arungiridhar>
Group Member
Sun 20 Apr 2025 05:31:14 AM UTC, comment #46: 

This looks like a good update and I just have a few minor style points that should be addressed before we close this issue.

For portability and to be consistent with other code in Octave you should not assume that Fortran functions have an underscore appended to their names or that the names are lower case.  Even if these functions are currently not used anywhere else in Octave, we should declare all BLAS functions together.  So please add the *axpy functions to the numeric/lo-blas-proto.h file using the conventions for declarations of Fortran functions used there.  Please use the F77_REAL, F77_DBLE, F77_CMPLX, and F77_DBLE_CMPLX macros when declaring Fortran real, double, complex, and double complex variables.  When passing complex arguments to Fortran code, use the appropriate F77_CMPLX_ARG, F77_CONST_CMPLX_ARG, F77_DBLE_CMPLX_ARG, or F77_CONST_DBLE_CMPLX_ARG macros.  See liboctave/util/f77-fcn.h for the definitions and additional info.

Also note that you can use "const F77_INT&" to pass scalar arguments instead of "const F77_INT*" and "&var" in the caller.  This allows passing literal constants and not having to use the address-of operator when passing scalar variables to these functions.  Unless there is a strong argument to do otherwise, I'd also recommend using the same convention for the new "axpy" wrapper functions.

Finally, include the lo-blas-proto.h file in oct-convn.cc and also use the F77_FUNC macro to call the BLAS *axpy functions as in other parts of Octave that call Fortran functions. In some Octave code, you'll see that we use F77_XFCN instead of F77_FUNC.  You can write either of the following forms


F77_XFCN (f, F, (arg1, arg2, ...));
F77_FUNC (f, F) (arg1, arg2, ...);


but the latter should be used in new code. 
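
As a concrete sketch of those conventions (assuming the style of the existing declarations in lo-blas-proto.h; names and parameter spellings here are illustrative), the daxpy declaration and a call site might look like:

extern "C"
{
  F77_RET_T
  F77_FUNC (daxpy, DAXPY) (const F77_INT& n, const F77_DBLE& alpha,
                           const F77_DBLE *x, const F77_INT& incx,
                           F77_DBLE *y, const F77_INT& incy);
}

// ... and then in oct-convn.cc:
F77_FUNC (daxpy, DAXPY) (n, alpha, x, incx, y, incy);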

John W. Eaton <jwe>
Group administrator
Sat 19 Apr 2025 09:54:26 PM UTC, comment #45: 

For ease of testing, here is Dmitri's patch from comment #31 with the six Fortran files removed and the changes to module.mk all rolled into one bigger diff.

(file #57150)

Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 12:22:45 PM UTC, comment #44: 

After testing enough times, they look pretty indistinguishable. The earlier slowdown was likely caused by repeated testing triggering the CPU's thermal throttling. When I randomized the testing order and allowed the CPU fans to settle down between tests, the difference disappeared.

I have not seen the construction "&a[idx + p*q]" before. Usually, if some function needed an address, the calling location would do pointer arithmetic like "base + delta * sizeof type" or something, but since &a[i] is equivalent to a + i, I don't think it would make a difference.

Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 12:16:10 PM UTC, comment #43: 

I copied the loop pattern verbatim from Fortran. I kind of expected a performance hit since array memory access patterns differ between C and Fortran, but I was not able to make a benchmark that would demonstrate this; so far, whatever I tried performs pretty much the same (and is limited by AXPY performance).

Dmitri.
--
 


Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 12:12:04 PM UTC, comment #42: 

As I mentioned somewhere on octave.discourse, axpy performance in OpenBLAS is bad. See e.g.
https://github.com/OpenMathLib/OpenBLAS/issues/5230

So, when benchmarking, be aware of that.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 12:05:50 PM UTC, comment #41: 

Yes, I only removed the trailing backslash from the line preceding the "*conv2.f" section, but the diff clumps everything together.

I am seeing a performance slowdown in the C++ version though. Let me dig deeper.

Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 11:54:24 AM UTC, comment #40: 

Sorry for the noise -- I just misread your diff. I thought you removed some of them (but they were added back in).

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 11:48:20 AM UTC, comment #39: 

I would not recommend changing those without more study in a separate discussion. One or more of those dot.f Fortran files are used in these places:

  • libinterp/corefcn/dot.cc
  • libinterp/corefcn/interpreter.cc
  • liboctave/array/CMatrix.cc
  • liboctave/array/dMatrix.cc
  • liboctave/array/fCMatrix.cc
  • liboctave/array/fMatrix.cc
  • liboctave/array/CRowVector.cc
  • liboctave/array/dRowVector.cc
  • liboctave/array/fCRowVector.cc
  • liboctave/array/fRowVector.cc


Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 11:20:41 AM UTC, comment #38: 


comment #37:

> I am not sure about "*dot*.f" files. Are they used somewhere else?
>


Apparently not. It compiles fine for me w/o them.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 11:11:41 AM UTC, comment #37: 

I am not sure about "*dot*.f" files. Are they used somewhere else?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 11:03:22 AM UTC, comment #36: 

This additional diff removes them from the build system:


diff --git a/liboctave/external/blas-xtra/module.mk b/liboctave/external/blas-xtra/module.mk
--- a/liboctave/external/blas-xtra/module.mk
+++ b/liboctave/external/blas-xtra/module.mk
@@ -16,13 +16,7 @@ EXTERNAL_SOURCES += \
   %reldir%/xsnrm2.f \
   %reldir%/xscnrm2.f \
   %reldir%/xcdotc.f \
-  %reldir%/xcdotu.f \
-  %reldir%/cconv2.f \
-  %reldir%/csconv2.f \
-  %reldir%/dconv2.f \
-  %reldir%/sconv2.f \
-  %reldir%/zconv2.f \
-  %reldir%/zdconv2.f
+  %reldir%/xcdotu.f

 XERBLA_SRC = \
   %reldir%/xerbla.cc


Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 10:47:11 AM UTC, comment #35: 

Yes, they would need to also be removed from the mk file where they are listed. I'll add that to this discussion in the next hour. (Not at a computer yet).

Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 10:26:38 AM UTC, comment #34: 

I think removing the files would also need some change to the build system; that is why I did not do it.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 10:24:58 AM UTC, comment #33: 

Yes, the 6 files can be removed.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Sat 19 Apr 2025 10:03:18 AM UTC, comment #32: 

Retitled the bug since the row vector speed has been addressed already.

@Dmitri: your patch works for me too. Do you want to `hg remove` the six Fortran files as well?

Arun Giridhar <arungiridhar>
Group Member
Sat 19 Apr 2025 12:25:17 AM UTC, comment #31: 

The attached (file das_convn_WIP.diff) eliminates the need for the Fortran *conv2.f files.
It passes "make check", and all the benchmarks I tried are pretty much the same as with the original (Fortran) version.

Dmitri.
--


(file #57147)

Dmitri A. Sergatskov <dasergatskov>
Wed 16 Apr 2025 05:55:16 PM UTC, comment #30: 


comment #29:

>
> comment #27:
> > @Dmitri: I don't think there is any good reason for this to be in Fortran other than inertia.  The code worked, it already existed, why bother to port it to C++?
> >
>
> Reduce complexity.
>
> (file das_oct-convn.diff) is really added.

> Dmitri.
> --
>


Sorry, this didn't come across correctly.  I was trying to be rhetorical.  I believe we should absolutely move this to C++.

>
> (file #57143)

Rik <rik5>
Group administrator
Wed 16 Apr 2025 05:22:49 PM UTC, comment #29: 


comment #27:

> @Dmitri: I don't think there is any good reason for this to be in Fortran other than inertia.  The code worked, it already existed, why bother to port it to C++?
>


Reduce complexity.

(file das_oct-convn.diff) is really attached this time.
 
Dmitri.
--


(file #57143)

Dmitri A. Sergatskov <dasergatskov>
Wed 16 Apr 2025 05:19:08 PM UTC, comment #28: 

For a test, I made the following change (file das_oct-convn.diff).
For the benchmarks in comment #7, I see slight improvements for x1 and x2 and the same numbers for x3 and x4. Still the same large-ish difference between "full" and "valid". But that may be an issue with "*axpy" from BLAS. BTW, "*axpy" from OpenBLAS (0.3.29) appears to be completely broken on macOS (the timing is 40x longer than with Apple vecLib), and I suspect it is not too good on x86_64 either (timings are the same with OpenBLAS and NETLIB; I will try to get some other BLAS there).

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 16 Apr 2025 04:56:33 PM UTC, comment #27: 

@Dmitri: I don't think there is any good reason for this to be in Fortran other than inertia.  The code worked, it already existed, why bother to port it to C++?

But, given that it doesn't work as well as we want, that it probably needs to be re-written, we probably should take the opportunity to move it to C++.  The various Fortran functions are trivial loops so porting would be very easy.  As a bonus, we wouldn't need to do all of the weird forward Fortran declarations in the C++ code.

Rik <rik5>
Group administrator
Wed 16 Apr 2025 03:58:53 PM UTC, comment #26: 


comment #2:

>  And that C++ function is a wrapper to Fortran code.
>


Why are we doing Fortran here? I profiled some "convX" benchmarks and it is all in "*axpy". Is it so much faster to call it from Fortran than from C++?

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Mon 14 Apr 2025 11:31:39 PM UTC, comment #25: 

If we're interested in performance, another thing to improve would be the results when SHAPE = "same".  Currently the code performs the full convolution, and then selects out of the result the portion which is the size of A.  This is fine if A is large, and B is a small kernel such that the extra terms calculated scale with size (B).  But if B is large then this could be a lot.

Imagine doing a convolution on a 9 mega-pixel image (3e3 elements on a side).  If the kernel B is 100 terms then this is an extra 3e8 elements to calculate.  At least, I think that is approximately correct.

The code in oct-convn.cc occurs after the convolution and is


  if (ct == convn_same)
    {
      // Pick the relevant part.
      Array<idx_vector> sidx (dim_vector (nd, 1));

      for (int i = 0; i < nd; i++)
        sidx(i) = idx_vector::make_range (bdims(i)/2, 1, adims(i));
      c = c.index (sidx);
    }


Rik <rik5>
Group administrator
Mon 14 Apr 2025 10:57:30 PM UTC, comment #24: 

The m-file conv.m was calling conv2() which in turn called convn().  There really isn't any reason not to go direct (small performance gain) so I made that change here https://hg.savannah.gnu.org/hgweb/octave/rev/101d5affa7d1.

Rik <rik5>
Group administrator
Mon 14 Apr 2025 10:27:50 PM UTC, comment #23: 

I checked in this changeset (https://hg.savannah.gnu.org/hgweb/octave/rev/0dd8ee4934f9) which permutes the inputs based on the largest dimensions of the matrix A.  This yields repeatably good performance.

Rik <rik5>
Group administrator
Sun 13 Apr 2025 02:43:34 AM UTC, comment #22: 

I don't have Matlab myself.

While experimenting with various input sizes and evaluation sequences, I found a roadblock. The result C is always of the same type as A (class T) not that of B (class R). So changing the convolve function to do BA instead of AB also needed the <T, R> template to change to <R, T> but then the data type of C no longer agrees with that of the first argument (T != R). The only way around looked like a duplication of a lot of code but with C of type R instead of T. Is there a better way?
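
For reference, the template in question has roughly this shape (simplified here; the full signature appears in the commit message in comment #19):

template <typename T, typename R>
MArray<T>
convolve (const MArray<T>& a, const MArray<R>& b, convn_type ct);
// The result type follows T, the first argument, so computing conv (B, A)
// by swapping the arguments would yield an MArray<R> instead.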

Arun Giridhar <arungiridhar>
Group Member
Sun 13 Apr 2025 12:29:24 AM UTC, comment #21: 

That's interesting.  In pure math the convolution commutes:


conv (A,B) === conv (B,A)


I'm glad that this statement holds in Octave up to numerical precision.

It seems like it should be possible to understand the best ordering.  Do you have access to Matlab?  I wonder if they also have a significant difference between A,B and B,A ordering.  If not, then they have figured out whatever trick is required.


Rik <rik5>
Group administrator
Sat 12 Apr 2025 04:12:51 PM UTC, comment #20: 

Thanks for pushing, and that's a better author summary as well.

I was investigating this comment in the code:

  // The 2nd array is assumed to be the smaller one.

since the code was not using a size assumption anywhere, even in the calculation of cdims, which is protected by `std::max` from becoming negative.

While testing out that assumption with this code:

  A = randn (20, 1000, 3);
  B = randn (1000, 30, 3);
  tic; C = convn (A, B); toc
  tic; D = convn (B, A); toc
  [max(abs(C(:) - D(:))), max(abs(C(:))), max(abs(D(:)))]


I noticed that the results C and D are the same to within numerical precision, so the comment is misleading, but the execution time is very different:

  Elapsed time is 0.721468 seconds.
  Elapsed time is 2.63246 seconds.
  ans =
     9.0949e-13   2.0310e+02   2.0310e+02


In this first case, A is the bigger array and convn (A, B) is 3.6 times faster than convn (B, A).

Seemingly minor changes in the input sizes can cause the order to flip:

  A = randn (20, 1000, 3);
  B = randn (1000, 10, 3);
  tic; C = convn (A, B); toc
  tic; D = convn (B, A); toc
  [max(abs(C(:) - D(:))), max(abs(C(:))), max(abs(D(:)))]

causes

Elapsed time is 1.04298 seconds.
Elapsed time is 0.241561 seconds.
ans =
   3.1264e-13   1.2193e+02   1.2193e+02


In this second case, A is the bigger array but convn (A, B) is 4.3 times slower than convn (B, A).

There are too many moving parts to eliminate that performance difference and always pick the fastest sequence, so it's up to the user to try out both AB and BA for their inputs.

In any case, I'll eliminate that comment, which is definitely misleading.

Arun Giridhar <arungiridhar>
Group Member
Sat 12 Apr 2025 01:09:00 AM UTC, comment #19: 

A team effort.  The Mercurial '-u' option for 'user' requires just a string (no special format).  I set the user to "Arun Giridhar <arungiridhar@gmail.com> and Rik <rik@octave.org>" and then removed the attribution line in the commit message summary.  Also, just an FYI, the Octave commit message standard is to reference the function changed in parentheses after the file name.  I changed the commit message ever so slightly to


* oct-convn.cc (convolve (MArray<T>&, MArray<R>&, convn_type): Check dimensions
of output.  If largest dimensions are not leading dimensions then permute inputs
and output to improve performance in subsequent Fortran code.


I checked it in here  https://hg.savannah.gnu.org/hgweb/octave/rev/ca60785bf141

It passes 'make check', but I suppose we should leave the bug report as "Ready for Test" for a little bit.


Rik <rik5>
Group administrator
Fri 11 Apr 2025 10:29:25 PM UTC, comment #18: 

For some reason I thought that since Array is based on std::vector, growing it incrementally was always possible even without a prespecified size. I must have misunderstood the limits of that assumption.

For the patch, I realized that for the square input case as well as for tall and skinny inputs, it's double the memory usage to do the permutations for no benefit. That is, even if we hold a few milliseconds of delay to be negligible, the transient memory usage might or might not be, and we would be penalizing users who have already oriented their inputs for efficiency so that we can accommodate users who have not. I've added a boolean flag that only does the permutation if required. The additional code is just one variable and one if-else, so hopefully it's OK.
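
A minimal sketch of that guard (hypothetical variable names; the actual change is in the attached patch):

// Permute only when some trailing dimension exceeds the leading one;
// square and tall-and-skinny inputs are convolved directly.
bool needs_permute = false;
for (int i = 1; i < nd; i++)
  if (adims(i) > adims(0))
    {
      needs_permute = true;
      break;
    }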

Updated patch attached with that change and NEWS.11.md update. I've listed you as coauthor. If all looks good, I can push to default.

(file #57134)

Arun Giridhar <arungiridhar>
Group Member
Fri 11 Apr 2025 09:10:38 PM UTC, comment #17: 

Sorry, I didn't mean to make extra work for you.  I could have pointed out the difference.  In the first case, you created an Array which is 0x0, and assigning to the Array (order(i) = x) does not increase the size of the Array as it does in the Octave language.  Very probably you were writing into memory that wasn't yours.  Probably running under ASAN would catch this.  Given that the number of dimensions is small, some of the time you would get lucky and some of the time you wouldn't.  I initialized the Array with the proper size and then everything works.

I don't think it is worth implementing special cases as milliseconds are not really an issue.

It's worth adding a note to NEWS.11.md about the performance improvement here when you convert this to a changeset.


Rik <rik5>
Group administrator
Fri 11 Apr 2025 08:41:54 PM UTC, comment #16: 

After adding and removing one line at a time from your patch to my previous attempt, I see the reason for the previous error in `make check` now. I was doing this:

  Array<octave_idx_type> order;

instead of this:

  Array<octave_idx_type> order (dim_vector (1, nd));

Evidently that breaks `make check` in a weird way like in comment #14. Guessing it created a column vector for permutation? Anyway it works now with your patch.

The speedup depends on the aspect ratio of the input: the shorter and wider it is, the more the permutation helps. For a width:height ratio of 10:1, I am getting a 10% to 30% speedup depending on the size. For a ratio of 1000:1, it is about 8 times faster. The most extreme speedup would be for the row inputs again (100x faster or more).

For square inputs, it is marginally slower (extra time for permutation probably) but the difference is negligible (milliseconds) compared to the time for the convolution (seconds). If we want to, we can use the permutation technique only for inputs that are wide by some margin (e.g. only if "columns - rows >= 10" or something), and leave it unpermuted for others, but the difference didn't seem to be much at least on my machine.

Btw, a comment needs to be corrected. You wrote "bubble sort" but that algorithm is actually "selection sort". (Bubble sort orders consecutive elements j and j+1 while selection sort orders i and j so that after every j-loop, elements 1..i are sorted). I think just calling it "sort" is sufficient.

Arun Giridhar <arungiridhar>
Group Member
Fri 11 Apr 2025 06:59:21 PM UTC, comment #15: 

@Arun: Attached is a re-vamped patch that passes all tests.

It is true that this approach is going to use a lot of memory since the inputs A and B have to be copied and permuted.  And a temporary copy of C will also need to be created.  I had originally only anticipated the extra memory for C.  It probably is still okay given that the convolution is pretty slow and maybe isn't going to be used on exceedingly large arrays.

For benchmarking, I created two 3-D arrays wider than they are tall.


x = rand (1e2, 1e3, 3);
y = rand (1e2, 1e3, 3);
save -binary convn.var


I load the variables for each test configuration so I am not introducing different data and possibly skewing the benchmark.

For the current code


tic; z = convn (x,y); toc
Elapsed time is 28.9076 seconds.


For the patched code


tic; z = convn (x,y); toc
Elapsed time is 25.4617 seconds.


The savings is about 11%, which is measurable but not really tremendous.  On the other hand, there is no extreme outlier.  Using your benchmark from comment #7:


time_row_conv = 0.9016
time_col_conv = 0.9269
time_row_conv2 = 0.9394
time_col_conv2 = 0.9406


The "time_row_conv2" performance is now about the same as the rest.


(file #57133)

Rik <rik5>
Group administrator
Fri 11 Apr 2025 01:26:29 AM UTC, comment #14: 

OK I tried the attached patch in oct-convn.cc to permute the inputs in descending order of size before the convolution. But I managed to get this new kind of failure in `make check` that I've never seen before:


  libinterp/corefcn/conv2.cc-tst .................................invalid warning state:


Right after that, "make check" ends with "Octave successfully built" but no message about pass/fail/xfail or fntests.log or anything. In effect "make check" is cut short.

Anyway, please look at the attached patch and let me know how to do it correctly.

(file #57127)

Arun Giridhar <arungiridhar>
Group Member
Thu 10 Apr 2025 06:45:27 PM UTC, comment #13: 

One of the ideas in modern programming is to have a User Champion to present their perspective in strategy sessions about features.  My conception of the typical Octave user is an engineer or scientist just trying to get their code to work to solve the problem in their own domain of knowledge.  They are not reading the documentation in-depth (just enough to call the function) and they are not computer scientists who have an intuition about what works best.

With that image in mind, I think documentation changes have little effect.  And, I think we just need to make the code run well no matter how they have structured it.  For that reason, I would make an effort to handle wide matrices.  I don't like the multiplication of effort to 6 files if the change is done in Fortran.  Also, there are fewer Fortran programmers than there are C++ programmers.  The maintenance burden will be easier if it is done in oct-convn.cc.

Given that memory keeps falling in price, I'm okay with making the time/memory tradeoff.

Rik <rik5>
Group administrator
Thu 10 Apr 2025 05:50:19 PM UTC, comment #12: 

@mmuetzel: Thanks, it does help but that's also what I was afraid of: that any speedup for matrices that does not create an explicit transpose (and therefore use more memory) would require six different Fortran files to be touched instead of a single location in oct-convn.cc.

As of now, the choices seem to be these:

  1. Make no changes to the code. Add a performance note to the doc that the best performance will happen when the first dimension is the biggest, so the user can e.g. call `permute` on their inputs if they need the speed and permute back afterwards, if they have the memory to spare.

  2. Make a minimal change to oct-convn.cc as in comment #10 that applies only to row vectors, on the grounds that something is better than nothing. This is more speed with no extra memory, but only for the limited scope of row vectors.

  3. Make a more intrusive change to oct-convn.cc that explicitly constructs the transpose for wide matrices before and after the call to the corresponding Fortran file. This is more speed for all wide matrices, at the expense of memory.

  4. Make changes to all 6 Fortran files, possibly once for outer transpose and once for inner transpose, with essentially duplicated code except for the BLAS function name and the order of the loops. This is more speed for all wide matrices without extra memory usage, at the expense of increased maintenance effort.


I'm honestly not sure which is the best path for this; I myself am inclined towards (1) and (2) (minimal changes, plus advising the user in a performance note). Any thoughts?

Arun Giridhar <arungiridhar>
Group Member
Thu 10 Apr 2025 04:03:55 PM UTC, comment #11: 

If you'd like to avoid doing the transpose, you might need to change the Fortran implementation which is eventually called by convolve_2d<T, R>.

You can find the implementation, e.g., for double precision (non-complex) floating point input in liboctave/external/blas-xtra/dconv2.f.
Let's first look at the subroutine dconv2o. Looking at the loops from the outermost to the innermost, they loop across columns of a and b, and then across rows of b. Inside the innermost loop, it calls the BLAS function daxpy:
https://www.netlib.org/lapack/explore-html-3.6.1/de/da4/group__double__blas__level1_ga8f99d6a644d3396aa32db472e0cfc91c.html

Currently, it uses increments of 1 for the input matrix a and for the result c.

You could probably write an alternative subroutine (e.g., dconv2ot in the same file) that does the same thing but in transposed order. For that, you might need to reorder the loops and use "na" as increment instead of 1 when calling "daxpy". (That happens to work already with row vectors as the input, because "na" is 1 for them.)

After you have done that, you'd also need to implement the equivalent for the dconv2i subroutine. And equivalently for the sconv2*, cconv2*, and zconv2* subroutines.

With that, you could add an additional argument to the template specializations that are defined with the macro "FORWARD_IMPL" in oct-convn.cc. That new argument could select whether you'd like to call the original or your new (transposed order) Fortran implementation.
Or have it select the more performant Fortran implementation based on whether the inputs are tall or skinny without an additional argument.

I haven't put too much thought into it. I hope that still helps a bit.

Markus Mützel <mmuetzel>
Group administrator
Wed 09 Apr 2025 10:56:42 PM UTC, comment #10: 

This change passes `make check` and gives the speed benefit, but only for row vectors.


diff --git a/liboctave/numeric/oct-convn.cc b/liboctave/numeric/oct-convn.cc
--- a/liboctave/numeric/oct-convn.cc
+++ b/liboctave/numeric/oct-convn.cc
@@ -118,7 +118,12 @@ void convolve_nd (const T *a, const dim_
       F77_INT bd0 = to_f77_int (bd(0));
       F77_INT bd1 = to_f77_int (bd(1));

-      convolve_2d<T, R> (a, ad0, ad1, b, bd0, bd1, c, inner);
+      // If convolving two row vectors, do it in transposed order for speed,
+      // but without actually taking the transpose. See bug #66882.
+      if (ad0 == 1 && bd0 == 1)
+        convolve_2d<T, R> (a, ad1, ad0, b, bd1, bd0, c, inner);
+      else
+        convolve_2d<T, R> (a, ad0, ad1, b, bd0, bd1, c, inner);
     }
   else
     {


Is there a way to do this for short and wide matrices, not just row vectors, without taking the transpose?

(Also, I tried taking the transpose but it said that transpose is not a member of class T -- not sure how to work around that?).

Arun Giridhar <arungiridhar>
Group Member
Tue 11 Mar 2025 11:00:27 AM UTC, comment #9: 

Cross-linking some related topics:


Arun Giridhar <arungiridhar>
Group Member
Tue 11 Mar 2025 04:16:39 AM UTC, comment #8: 

Adding jwe to the CC list.

This is a great analysis.  I would want to put any fix deep enough such that not just users of the Octave interpreter, but users of liboctave would also get the benefit of any fix.

The only two locations that qualify are oct-convn.cc and the Fortran code.  I have a slight leaning towards modifying the C++ since Fortran seems so old and foreign that I tend to leave it alone.  Still, it would be worth understanding to what extent we could avoid creating a copy of the matrix that is going to be transposed.

Rik <rik5>
Group administrator
Mon 10 Mar 2025 10:57:16 PM UTC, comment #7: 

I made a simpler and faster test for myself:

r = ones (1, 5e4);
c = r';

tic;  x1 = conv  (r, r);  time_row_conv  = toc
tic;  x2 = conv  (c, c);  time_col_conv  = toc
tic;  x3 = conv2 (r, r);  time_row_conv2 = toc
tic;  x4 = conv2 (c, c);  time_col_conv2 = toc

assert (x1, x2'); assert (x1, x3);  assert (x1, x4');
assert (x2, x3'); assert (x2, x4);  assert (x3, x4');


Unpatched baseline:

time_row_conv = 0.195361852645874
time_col_conv = 0.1820180416107178
time_row_conv2 = 23.99320006370544
time_col_conv2 = 0.1834909915924072


The place where the actual calculation is passed from C++ to Fortran is in oct-convn.cc, which calls Fortran functions like dconv2.f and its friends. Those in turn ultimately call BLAS routines daxpy and friends.

There are two reasons for a speed difference:

  • One is the number of calls made to daxpy: it is called once for each column, and the whole column is passed in one go. For tall and skinny matrices this means very few calls, while for the short and wide case it means lots of calls (50K times more), each passing only a single element.


  • The second reason is a magnifier of the first: when daxpy is passed a lot of data in one call, it uses multiple cores, but when passed only scalars or small vectors, it stays in single-core mode. (This effect shows up clearly when using cputime instead of tic/toc).


Between the two effects, it ends up using 24 seconds instead of 0.18 seconds.

Here is some performance hackery: I check in dconv2.f whether the input is tall and skinny or short and wide, and vectorize over the longer dimension in the call to daxpy.

diff -r 1c0c32d8aadf liboctave/external/blas-xtra/dconv2.f
--- a/liboctave/external/blas-xtra/dconv2.f     Thu Mar 06 10:48:15 2025 -0500
+++ b/liboctave/external/blas-xtra/dconv2.f     Mon Mar 10 18:41:58 2025 -0400
@@ -39,13 +39,31 @@ c
       double precision c(ma+mb-1,na+nb-1)
       integer i,j,k
       external daxpy
-      do k = 1,na
-        do j = 1,nb
-          do i = 1,mb
-            call daxpy(ma,b(i,j),a(1,k),1,c(i,j+k-1),1)
+      if (ma + mb >= na + nb) then
+c
+c       Tall and skinny matrices: vectorize on ma.
+c
+        do k = 1,na
+          do j = 1,nb
+            do i = 1,mb
+              call daxpy(ma,b(i,j),a(1,k),1,c(i,j+k-1),1)
+            end do
           end do
         end do
-      end do
+      else
+c
+c       Short and wide matrices: vectorize on na.
+c       This currently fails "make check" because it's a hack for vectors,
+c       not for arrays with more dimensions.
+c
+        do k = 1,ma
+          do j = 1,nb
+            do i = 1,mb
+              call daxpy(na,b(i,j),a(k,1),1,c(i,j+k-1),1)
+            end do
+          end do
+        end do
+      end if
       end subroutine

       subroutine dconv2i(ma,na,a,mb,nb,b,c)


Patched:

time_row_conv = 0.1779379844665527
time_col_conv = 0.1760709285736084
time_row_conv2 = 0.1752481460571289
time_col_conv2 = 0.1754329204559326


NOTE: This hack fails "make check" because it currently only works for vectors but not for n-dimensional arrays. It's just a test hack for now.

If this sort of hackery is the right thing to do, then it would need to be done for the "outer" function in multiple Fortran files in the blas-xtra directory.

It would be much cleaner to write this check once in oct-convn.cc before it calls the corresponding Fortran function, but at the expense of constructing temporary transposed matrices. Time-memory tradeoff. (But can it be done without constructing the transpose?)

The bigger question is whether this sort of performance improvement needs to be done at all. I do not know the typical length of arrays being convolved together, so feel free to weigh in about whether the test above is representative of real use cases or not. Maybe having the user transpose it manually is the best option.

Arun Giridhar <arungiridhar>
Group Member
Mon 10 Mar 2025 10:37:23 AM UTC, comment #6: 

The value is 1e38 so 1e23 is 'epsilon' rounding error.

Better test without big values.

P = ones (1, 20000);
Q = P':
N = 1000;
tic
for ii = 1:N
    X = conv2 (P, P);
endfor
toc
tic
for ii = 1:N
    Y = conv2 (Q, Q);
endfor
toc


Anonymous
Sun 09 Mar 2025 11:42:08 PM UTC, comment #5: 

Okay, if polynomial exponentiation is what is desired then that is fine.  But convolution of a column vector and convolution of a row vector appear to be different operations, and therefore there should be no expectation that they take the same amount of time.

Try this code


p = ones (1, 10);
x1 = 1;
for ii = 1:40
  x1 = conv2 (x1, p);
end

p = ones (10, 1);
x2 = 1;

for ii = 1:40
  x2 = conv2 (x2, p);
end

d = x1(:) - x2;
max (abs (d))


For me, I get


ans = 9.4447e+22


so the results are very different.

Rik <rik5>
Group administrator
Sun 09 Mar 2025 07:12:59 PM UTC, comment #4: 

The OP's code is polynomial exponentiation. Changing x to y is a different computation.

Anonymous
Sun 09 Mar 2025 04:32:18 AM UTC, comment #3: 

Actually, the trouble seems to be in your benchmarking code.  You initialize the variable x to 1 at the start, but then use it as the output variable of the call to conv2, so the polynomial keeps growing.  There is a lot of locking/unlocking of memory and new/delete calls.

The only problem example was the third one.  If I re-write that to


p = ones (1, 10);
x = 1;
tic
for ii = 1:10000
  y = conv2 (x, p);
end
toc


then the elapsed time on my machine is 0.0271108 seconds.


Rik <rik5>
Group administrator
Sun 09 Mar 2025 01:27:40 AM UTC, comment #2: 

As Hendrik wrote, conv is a wrapper to conv2.  And, conv2 is a wrapper for convn.  And that C++ function is a wrapper to Fortran code.

What you are likely seeing is that Fortran stores matrices in column-major order.  That means that when incrementing a memory address the next matrix location fetched is one row down.  All modern processors fetch a block of memory into a cache line.  That means that the CPU is very likely to have all the row data for a given column.  However, when the for loop runs over the columns of a matrix the blocks of memory can be very far apart.  This means the data isn't in the cache and has to be fetched from DRAM over a slow memory bus.  The conv() function always makes column vectors out of its inputs so it is not a problem.
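
The effect is easy to see from the index arithmetic. A small illustration (plain C++, not Octave code; needs <cstddef> for std::size_t):

// Element (i, j) of an m-by-n column-major matrix lives at offset i + j*m,
// so walking down a column touches adjacent addresses (stride 1) while
// walking along a row jumps m elements at a time (stride m).
inline double
col_major_at (const double *a, std::size_t m, std::size_t i, std::size_t j)
{
  return a[i + j * m];
}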

The question is whether convn, which is meant for N-dimensional objects, should have special code to detect vector inputs and re-orient them to column vectors.

Changing the Item Group to Performance and marking as In Progress.


Rik <rik5>
Group administrator
Sat 08 Mar 2025 03:54:26 AM UTC, comment #1: 

Looking at the implementation of conv will help:

type conv


As one can see, conv ALWAYS uses conv2 (which covers general 2D convolution, whereas conv is a special use case for vectors only).

Note that conv converts vectors into column vectors before calling conv2.


Performance depends heavily on the processor, cache size, compiler, optimization flags, the specific underlying library implementation (for conv2 I think BLAS is used), etc., so it is largely outside the control of Octave.

So any statement in the Octave help about specific performance comparisons is not really useful.


Hendrik K <koerhen>
Sat 08 Mar 2025 12:20:38 AM UTC, original submission:  

It is very confusing whether to use conv or conv2. Somebody said "conv2 is faster than conv" for some reason, but it was actually much slower than conv; however, making the input a column vector suddenly makes it much faster than conv. This should be added to "help conv" and "help conv2".

Row vector and conv = Elapsed time is 3.202 seconds.

p = ones (1, 10);
x = 1;
tic
for ii = 1:10000
  x = conv (x, p);
end
toc


Column vector and conv = Elapsed time is 3.0865 seconds.

p = ones (10, 1);
x = 1;
tic
for ii = 1:10000
  x = conv (x, p);
end
toc


Row vector and conv2 = Elapsed time is 71.853 seconds.

p = ones (1, 10);
x = 1;
tic
for ii = 1:10000
  x = conv2 (x, p);
end
toc


Column vector and conv2 = Elapsed time is 1.0920 seconds.

p = ones (10, 1);
x = 1;
tic
for ii = 1:10000
  x = conv2 (x, p);
end
toc


Please, this should be added to "help conv" and "help conv2".
 
"For speed, use conv2 instead of conv, and also change row vector to column vector. Using conv2 is 3 times faster than conv for column vector but 25 times slower than conv for row vector."

Anonymous

 


Attached Files
file #57161:  conv_recast.diff added by dasergatskov (3KiB - text/x-patch)
file #57159:  das_conv_WIP3.diff added by dasergatskov (7KiB - text/x-patch)
file #57158:  errors.txt added by arungiridhar (20KiB - text/plain)
file #57156:  prof.png added by dasergatskov (129KiB - image/png)
file #57154:  convn_3.txt added by arungiridhar (26KiB - text/plain)
file #57150:  das_convn_WIP_2.diff added by arungiridhar (25KiB - text/x-patch)
file #57147:  das_convn_WIP.diff added by dasergatskov (8KiB - application/octet-stream)
file #57143:  das_oct-convn.diff added by dasergatskov (5KiB - application/octet-stream)
file #57134:  conv2_patch.txt added by arungiridhar (5KiB - text/plain)
file #57133:  bug66882.diff added by rik5 (3KiB - text/x-patch)
file #57127:  conv_v1.txt added by arungiridhar (2KiB - text/plain)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by jwe (Posted a comment)
  • -email is unavailable- added by dasergatskov (Posted a comment)
  • -email is unavailable- added by mmuetzel (Posted a comment)
  • -email is unavailable- added by rik5
  • -email is unavailable- added by arungiridhar (Posted a comment)
  • -email is unavailable- added by rik5 (Posted a comment)
  • -email is unavailable- added by koerhen (Posted a comment)

    The 23 latest changes:

    Date        Changed by     Updated Field    Previous Value => Replaced by
    2025-04-22  dasergatskov   Attached File    Added conv_recast.diff, #57161
    2025-04-22  mmuetzel       Attached File    Added bug66882-cxx-blas-wrappers.patch, #57160
    2025-04-22  dasergatskov   Attached File    Added das_conv_WIP3.diff, #57159
    2025-04-22  arungiridhar   Attached File    Added errors.txt, #57158
    2025-04-22  dasergatskov   Attached File    Added prof.png, #57156
    2025-04-21  arungiridhar   Status           Patch Submitted => Ready For Test
    2025-04-21  arungiridhar   Attached File    Added convn_3.txt, #57154
    2025-04-19  arungiridhar   Attached File    Added das_convn_WIP_2.diff, #57150
    2025-04-19  arungiridhar   Status           Ready For Test => Patch Submitted
    2025-04-19  arungiridhar   Summary          conv2 is slower for row vectors than column vectors => Convolution code path improvements
    2025-04-19  dasergatskov   Attached File    Added das_convn_WIP.diff, #57147
    2025-04-16  dasergatskov   Attached File    Added das_oct-convn.diff, #57143
    2025-04-12  rik5           Status           In Progress => Ready For Test
    2025-04-12  rik5           Fixed Release    None => 11.1.0 (current default)
    2025-04-11  arungiridhar   Attached File    Added conv2_patch.txt, #57134
    2025-04-11  rik5           Attached File    Added bug66882.diff, #57133
    2025-04-11  arungiridhar   Attached File    Added conv_v1.txt, #57127
    2025-03-11  rik5           Carbon-Copy      Added jwe
    2025-03-10  arungiridhar   Summary          help conv and help conv2 should say to use column vector => conv2 is slower for row vectors than column vectors
    2025-03-09  rik5           Category         Documentation => Octave Function
    2025-03-09  rik5           Item Group       Documentation => Performance
    2025-03-09  rik5           Status           None => In Progress
    2025-03-09  rik5           Planned Release  None => 11.1.0 (current default)
