bug #60818: delaunayn - 2D code path vectorization doesn't match nD algorithm

Submitted by:  Nicholas Jankowski <nrjank>
Submitted on:  Thu 24 Jun 2021 06:38:13 PM UTC  
 
Category:  Octave Function            Severity:  3 - Normal
Priority:  5 - Normal                 Item Group:  None
Status:  In Progress                  Assigned to:  None
Originator Name:  Nicholas Jankowski  Open/Closed:  Open
Release:  dev                         Operating System:  Any


Thu 22 Jul 2021 04:56:23 PM UTC, comment #41: 

In process.  Got delayed and took a vacation with the family :)

I got some odd timing results after stitching the two different approaches together, and found an error or two, so I'm rerunning them.  It's on the to-do list.

Nicholas Jankowski <nrjank>
Thu 22 Jul 2021 04:36:29 PM UTC, comment #40: 

@nrjank: Did you get around to running the performance tests and adding some self-tests?

Markus Mützel <mmuetzel>
Project Member
Sun 04 Jul 2021 10:57:53 AM UTC, comment #39: 

Didn't look at memory, but in fact over 50x faster for the 3D 1000 pt case.  LU itself goes back to being the slowest part; I don't see any way around that for a general nD solution.

Absent other minor tweaks, this is largely complete, I think.  I'll run some timing tests as a function of the number of points and dimensions to see if a few more of the higher dimensions should use Laplace expansion.

The end of the function also notes that some delaunayn-specific self-tests should be added.  Will look at adding a few of those too.

Nicholas Jankowski <nrjank>
Sat 03 Jul 2021 10:55:41 AM UTC, comment #38: 

Fixed the slow line by changing it to

[~, rev_sort] = sort (reordered_tri_idx);
vol = prod (reshape (R(rev_sort), dim, nt), 1).';

I have not checked the speed, but I expect it to be faster and use less memory.  I also added the missing abs() to the 3D volume calculation.
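To see why the sort-based gather works, here is a tiny illustration with made-up numbers (not from the patch): reordered_tri_idx labels each LU pivot with its simplex, in permuted order, and sorting those labels groups the pivots simplex by simplex so that the reshape/prod gives one product per simplex.

R_demo   = [10 20 30 40 50 60].';         # stand-in pivots
labels   = [2 1 3 1 3 2];                 # stand-in reordered_tri_idx (nt = 3, dim = 2)
[~, rev] = sort (labels);                 # rev gathers the pivot positions simplex by simplex
prod (reshape (R_demo(rev), 2, 3), 1).'   # per-simplex products: [800; 600; 1500]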

Attached file

(file #51642)

Anonymous
Fri 02 Jul 2021 02:52:07 PM UTC, comment #37: 

ok. file attached makes a few changes.

1: Replaced both l and q with ~ in the lu output.  Just leaving q out issues a warning and significantly alters the output, but leaving the placeholder in is fine.  Memory use is a bit lower now, but it's still big.

2: Both code paths now start with the same dimension-independent edg_vec as currently set in the LU path, and end with the same dimension-independent volume / prod(edge_lengths) < tol check.  The edge_lengths reshapes/permutes for that are pretty low impact.

3: It would be a nice-to-have, but not necessary, if we could use the reshaped/permuted edg_vec in the first place and go from that to eqs.  It would remove two separate reshapes, but compared to other things they are low impact.

4: What has an oddly large impact is getting the volumes back out of the R's.  The

prod(reshape(kron(R,ones(1,nt))(reordered_tri_idx.'==(1:nt)),dim,nt),1)

process is oddly time consuming.  Splitting it apart, kron(R,ones(1,nt)) for the 1000 pt 3D case takes a few seconds, and then the (reordered_tri_idx.'==(1:nt)) takes about the same amount of time, too.

I was hoping to get that to work so we'd have a consistent < tol test between code paths (which kicked this off in the first place).  If that's an issue, maybe we could go back to the idea of just testing R/min(edg_vec) for each tri, but that may have the same kron/reordering issue.  Will look more at that.
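One possible alternative (just a sketch, not in the attached file): let accumarray group the pivots per simplex instead of the kron/logical comparison.  Assuming R and reordered_tri_idx as above,

vol = accumarray (reordered_tri_idx(:), R(:), [nt, 1], @prod);   # per-simplex |det| products

would give the per-simplex products in a single pass.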

This version also still just uses the Laplace expansion for 2D and 3D.  Once we're happy with the nD code path, it's trivial for me to move higher dimensions over to it if that turns out to be better.  Faster code vs. less/simpler code?

(file #51640)

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 03:05:29 PM UTC, comment #36: 

Now, looking at the geometric alternative of doing the volume comparison for the LU code, I'm realizing we may be calculating volume wrong in the Laplace expansion code.  The simple orthogonal cases may have just masked it.

The volume of a triangle, tetrahedron, etc., is 1/n! times the volume of the parallelogram, parallelepiped, etc., defined by the vector product, but that requires co-terminated vectors (p21, p31, p41, etc.), not (p21, p32, p43) like we have there.  I was just following the 2D code, which used p12 and p23, since they're co-terminated except for a sign.  But using p12, p23, p34, ... for higher dims uses vectors that are not part of the parallelepiped to calculate its volume, which I suspect is very wrong.

It's a trivial switch in the Laplace expansion determinant code.  Will make that change and update the patch after checking the effect of leaving q out of (or replacing it with ~ in) the LU call.
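For the 3D case, the co-terminated version would be a minimal sketch like the following (assuming pts and T as in delaunayn):

p21 = pts(T(:,2),:) - pts(T(:,1),:);   # edge vectors all sharing vertex 1
p31 = pts(T(:,3),:) - pts(T(:,1),:);
p41 = pts(T(:,4),:) - pts(T(:,1),:);
vol = abs (dot (p21, cross (p31, p41, 2), 2)) / 6;   # 1/3! * |det| per tetrahedron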

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 02:21:28 PM UTC, comment #35: 

Regarding the ability to drop q: it appears that the behavior of lu changes based on whether or not q is requested as an output?

"When called with two or three output arguments and a sparse input matrix, lu does not attempt to perform sparsity preserving column permutations. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P * A * Q = L * U."

I don't quite understand the internals enough to know if that would be a problem for subsequent operations.

Does calling the function like [~,u,p,~] = lu (eqs, "vector") work, and does it make a difference?

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 02:12:54 PM UTC, comment #34: 

Dmitri, according to the Octave help

"If A is full then subroutines from LAPACK are used, and if A is sparse then UMFPACK is used."

https://octave.sourceforge.io/octave/function/lu.html

I'm not sure if this makes any particular difference?

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 11:12:27 AM UTC, comment #33: 

Sorry, sloppy patch again.  I've been playing with this one so much, I guess I should have kept it in a git repo.

Yes, as written it should say

if (any(nd == [3 4]))

for the currently written 2D and 3D code paths.  (I had removed the 4 for timing the LU path for 3D.)

I hadn't yet added the higher-dimension expansions to this version, but if we do, then higher nd's would go in as well.

q was used up until the last patch which fixed the ordering issue.

In the last patch I had changed
[l,u,p,q] to [~,u,p,q]

We can do that with q now too.

This removed the storage requirement for L, but I think this doesn't stop Octave from building L during the LU; it just discards it, correct?  So I don't think it'll actually make much difference in speed or memory.

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 07:16:10 AM UTC, comment #32: 

I opened bug #60867 for `whos` not showing the sparse attribute.

Markus Mützel <mmuetzel>
Project Member
Thu 01 Jul 2021 07:07:17 AM UTC, comment #31: 

Sorry. Please ignore my last comment. I was thinking that sparse matrices would be marked somehow in the output of `whos`. But that is not the case...

Markus Mützel <mmuetzel>
Project Member
Thu 01 Jul 2021 07:04:53 AM UTC, comment #30: 

I would have thought `eqs` was a sparse (block diagonal) matrix. But it looks like it is a full matrix in comment #26.
That could also be the reason why the LU decomposition is unexpectedly slow.
I wonder which command/assignment causes the conversion from sparse to full.
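One quick way to narrow that down (just a debugging suggestion, not part of any patch) would be to assert sparsity after each assignment to eqs inside delaunayn.m:

assert (issparse (eqs), "eqs lost sparsity here");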

Markus Mützel <mmuetzel>
Project Member
Thu 01 Jul 2021 03:43:43 AM UTC, comment #29: 

Sorry, my mistake: nd = dim + 1, so I should have suggested

if (any(nd == [3 4]))

and possibly

if (any(nd == [3 4 5 6 7]))

if we want to use the faster code wherever repeated determinants are likely faster.

Anonymous
Thu 01 Jul 2021 03:26:22 AM UTC, comment #28: 

A question with the patch: should

if (any(nd == [3]))

be

if (nd == 2 || nd == 3)

Regarding the variables at the end

  • l and q are never used
  • u could be overwritten rather than using a new variable R
  • eqs is not needed after the input to lu
  • edgvec is not used after it enters eqs, but if the check is changed this may also change
Anonymous
Thu 01 Jul 2021 03:21:13 AM UTC, comment #27: 

l is not needed. I edited it out in the last patch (v5).

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 03:08:31 AM UTC, comment #26: 

I added "whos" at the end of the function. This is what I see for 6D 1000 pt case:

octave:2> whos
Variables visible from the current scope:

variables in scope: top scope

   Attr Name        Size                     Bytes  Class
   ==== ====        ====                     =====  =====
        p        1000x1                       8000  double
        q        1000x1                       8000  double
        r        1000x1                       8000  double
        s        1000x1                       8000  double
        x        1000x1                       8000  double
        y        1000x1                       8000  double
        z        1000x1                       8000  double

Total is 7000 elements using 56000 bytes

octave:3> tic; delaunayn([x,y,z,p,q,r]); toc
Variables visible from the current scope:

variables in scope: delaunayn: /home/dima/scratch/delaunayn.m

   Attr Name                 Size                     Bytes  Class
   ==== ====                 ====                     =====  =====
        R              3654654x1                   58474480  double
    f   T               608426x7                   34071856  double
        edgvec         3654654x6                  175423392  double
        eqs            3654654x3654654            380084024  double
        idx                  1x683                     5464  double
        l              3654654x3654654            233897864  double
        nd                   1x1                          8  double
        nt                   1x1                          8  double
        p              3654654x1                   29237232  double
    f   pts               1000x6                      48000  double
        q              3654654x1                   29237232  double
        reorderdtriidx       1x3654654             29237232  double
        tol                  1x1                          8  double
        u              3654654x3654654            233897864  double
    f   varargin             0x0                          0  cell

Total is 40069528391356 elements using 1203614664 bytes

Elapsed time is 6.89901 seconds.

Do we actually need the "l" matrix?  I do not see it being used anywhere in the code.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 01 Jul 2021 02:58:20 AM UTC, comment #25: 

Yup, that fixes it.  Revised patch attached with that fix and a couple of other tiny tweaks.

(file #51634)

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 02:37:01 AM UTC, comment #24: 

To fix the ordering issue change the line

reorderdtriidx = kron (1:nt, ones (1,nd-1))(p)(q);

to

reorderdtriidx = kron (1:nt, ones (1,nd-1))(p);

Please check this; I normally use lu on full matrices, so I'm not completely across the two permutation outputs for sparse matrices.  This change fixed the example below.

Anonymous
Thu 01 Jul 2021 02:04:51 AM UTC, comment #23: 

I'll step through a single large test and watch the memory at each step.

Right now I'm trying to step through and figure out why the LU method is getting different triangle volumes than the other methods, related to comment #20.  I think one of the ordering/reordering steps might be off.

simplex volumes before removal:

old code:

vol =

        0
   0.5000
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000

LU code:
Compressed Column Sparse (rows = 1, cols = 15, nnz = 13 [87%])

  (1, 1) -> 1
  (1, 3) -> 0.5000
  (1, 4) -> 1
  (1, 5) -> 0.5000
  (1, 6) -> 1
  (1, 7) -> 0.5000
  (1, 8) -> 0.2500
  (1, 10) -> 0.5000
  (1, 11) -> 1
  (1, 12) -> 0.2500
  (1, 13) -> 0.2500
  (1, 14) -> 0.5000
  (1, 15) -> 1

(2 and 9 are missing, so zero.  The fact that we're getting 1s and 0.25s is what makes me think it's an ordering issue.)

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 01:56:58 AM UTC, comment #22: 

Could we work out which line is causing the memory issue?  I cannot see by inspection why it is so high.  I thought the line with logical might be making a full array, but calling logical on a sparse array keeps it sparse.

The reason for

(nd - 1) * nt

is the same as the reason for using (both are the number of rows or columns of the matrix)

max (size (A))

in the rank check.  I set up the test by working out which rows are causing the matrix to be singular rather than working from a geometry perspective.  It is normally done using a singular value decomposition or a permuted QR factorization, so I am probably stretching the applicability of the check with an LU decomposition.  Feel free to change the check to what you think is better.

Anonymous
Thu 01 Jul 2021 01:40:10 AM UTC, comment #21: 

Perhaps I misunderstood the code, but it looks to me like Octave has its own C++ code for LU rather than using LAPACK's.

http://www.netlib.org/lapack/explore-3.1.1-html/dgetrf.f.html

appears to be a recent addition to LAPACK, so perhaps this is the reason for that.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 01 Jul 2021 01:17:20 AM UTC, comment #20: 

Stepping through the nD code using a 3D example, it seems to be missing one of the zero-volume tets:

Matlab:

A = [0 0 0;
1 0 0;
0 1 0;
0 0 1;
1 1 0;
1 0 1;
0 1 1;
1 1 1;
0.5 0.5 0.5];

B = delaunay(A)
B =
     3     4     1     9
     3     7     9     5
     8     9     7     5
     8     6     9     5
     8     6     7     9
     9     7     4     6
     3     7     4     9
     3     9     2     5
     9     6     2     5
     9     4     2     6
     9     4     1     2
     3     9     1     2

size(B)
ans =
    12     4

Octave old code:

>> A = [0 0 0; 1 0 0; 0 1 0; 0 0 1; 1 1 0; 1 0 1; 0 1 1; 1 1 1; 0.5 0.5 0.5]
A =

        0        0        0
   1.0000        0        0
        0   1.0000        0
        0        0   1.0000
   1.0000   1.0000        0
   1.0000        0   1.0000
        0   1.0000   1.0000
   1.0000   1.0000   1.0000
   0.5000   0.5000   0.5000

>> B = delaunayn(A)
B =

   5   9   2   1
   5   9   3   1
   6   9   2   1
   6   9   4   1
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

>> size(B)
ans =

   12    4

new code, using LU decomp for 3D:

>> B = delaunayn(A)
B =

   5   3   2   1
   5   9   3   1
   6   4   2   1
   6   9   2   1
   6   9   4   1
   6   5   8   2
   6   5   9   2
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

>> size(B)
ans =

   13    4

>> p12 = A(B(:,1),:)-A(B(:,2),:); p23 = A(B(:,2),:)-A(B(:,3),:); p34 = A(B(:,3),:)-A(B(:,4),:);
>> vol = dot (p12, cross (p23, p34, 2), 2)
vol =

        0
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000

So, it left in the three zero-volume tets but dropped two others.  Will look through to see if it's something obvious.

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 01:01:47 AM UTC, comment #19: 

Regarding sliver identification, a few references:

"Generating Well-Shaped Delaunay Meshes in 3D" - Li & Teng [1]
- defines (for 3D) small as V/L^3 < tol1, where L is the smallest edge, AND R/L < tol2, where R is the circumradius of the tetrahedron
As explained in his thesis [2]: "Many types of tetrahedra can have a small value of V/L^3, but only slivers simultaneously have small R/L ratio."  Much of his work then goes into bounding tol1 and tol2, in addition to modifying the Delaunay process in the first place to avoid them.

[1] http://www.cs.iit.edu/~xli/paper/Conf/sliver-SODA01.pdf
[2] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.7943&rep=rep1&type=pdf

"An experimental study of sliver exudation" - Edelsbrunner & Guoy [3]
Slivers have circumradius/min(L) approaching the regular-tetrahedron minimum of sqrt(6)/4, but also a 'small' minimum dihedral angle (between faces).  (Of course, calculating dihedral angles is a much larger set of vector products than we're already doing; see [4].)

[3] https://www.ljll.math.upmc.fr/~frey/papers/meshing/Edelsbrunner%20H.,%20An%20experimental%20study%20of%20sliver%20exudation.pdf
[4] https://math.stackexchange.com/q/315171

At the moment the current patch uses V/L^n < 1000*eps for the explicit calcs, so the direct solvers are doing part of the first criterion.

The nD portion is doing something like:
nth_root(vol) / (ndims * num_triangles) < 100*eps

(Allowing for equal R's, that's something like R^n = vol of a simplex.)  I'm still not following what the (ndims * num_triangles) part is intended to do, and why the number of triangles should affect the size check.  If we could make this more of a relative size check (maybe doing an R / min(L) < tol^(1/n) check for each simplex?) it would be more consistent with the other check.
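A rough sketch of that kind of per-simplex relative check (hypothetical variable names, not code from any attached patch), assuming R_per_tri holds the LU pivots reshaped to ndim-by-nt in original simplex order and min_edge holds each simplex's shortest edge length:

rel = min (abs (R_per_tri), [], 1) ./ min_edge;   # smallest pivot relative to shortest edge, per simplex
idx = find (rel < tol ^ (1 / ndim));              # ndim = spatial dimension; tol^(1/n) as suggested above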

Nicholas Jankowski <nrjank>
Thu 01 Jul 2021 12:18:25 AM UTC, comment #18: 

Looking at timing, it appears that Octave's lu calls are just very expensive computationally and in memory:

using the profiler to look at a 3D case:

Laplace Expansion:
>> rand("state", [1:625]');
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:10000, delaunayn([x,y,z]);endfor, profile off; profshow

   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__            13.772      78.08        10000
   1     delaunayn             1.899      10.77        10000
  13         cross             1.245       7.06        10000
  12      binary -             0.088       0.50        60000
  19           cat             0.088       0.50        10000
  18     binary .*             0.087       0.50        80000
  22          sqrt             0.080       0.45        30000
  21         sumsq             0.070       0.40        30000
   6           isa             0.044       0.25        10000
  20           dot             0.035       0.20        10000
   3      binary <             0.030       0.17        50000
   9          size             0.024       0.14        30000
   7           eps             0.023       0.13        10000
   2        nargin             0.022       0.13        50001
  15         ndims             0.022       0.12        30000
  10     binary ==             0.019       0.11        30000
  16          ones             0.018       0.10        10000
  23     binary ./             0.017       0.10        10000
  11           any             0.017       0.10        10000
  24           abs             0.013       0.07        10000


using the LU decomposition:

>> clear all
>> rand("state", [1:625]');
>> x = rand(100,1)*4-2; y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2; q = rand(100,1)*4-2; r = rand(100,1)*4-2; s = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:10000, delaunayn([x,y,z]); endfor, profile off; profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            14.207      40.10        10000
   5 __delaunayn__            12.965      36.59        10000
   1     delaunayn             4.423      12.48        10000
  27        unique             0.783       2.21        10000
  15          kron             0.753       2.13        30000
  22       logical             0.576       1.63        10000
  17         speye             0.368       1.04        10000
  16        sparse             0.218       0.62        20000
  24          diag             0.184       0.52        10000
   3      binary <             0.182       0.51        30000
  25           abs             0.095       0.27        10000
  13      binary -             0.078       0.22        70000
  12    postfix .'             0.078       0.22        20000
  26           max             0.073       0.21        10000
  21          true             0.055       0.16        30000
   7           eps             0.051       0.14        20000
   6           isa             0.038       0.11        10000
  36         zeros             0.036       0.10        10000
  30         false             0.028       0.08        20001
  14          ones             0.028       0.08        20000

So at least in 3D, for 100 points the LU approach is spending just as much time in lu as in __delaunayn__, plus ~1/2 of __delaunayn__ again on other ops within delaunayn.  The Laplace expansion avoids most other function calls except for cross, spending a total of ~1/3 of the time spent in __delaunayn__.

jumping out to 6D:

LU:

>> clear all
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2;q = rand(100,1)*4-2; r = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:100, delaunayn([x,y,z,p,q,r]);endfor, profile off; profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            26.310      47.92          100
   5 __delaunayn__            12.579      22.91          100
   1     delaunayn             9.997      18.21          100
  15          kron             2.593       4.72          300
  22       logical             1.388       2.53          100
  12    postfix .'             0.857       1.56          200
...


Laplace expansion:

>> clear all
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2;q = rand(100,1)*4-2; r = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:100, delaunayn([x,y,z,p,q,r]);endfor, profile off; profshow
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            14.085      57.97          100
  22         binary .*             1.971       8.11        87600
  17             cross             1.690       6.95        12000
  15 delaunayn>detvec4             1.341       5.52         3000
  12          binary -             1.335       5.49        44100
  14 delaunayn>detvec5             0.910       3.75          600
  24               dot             0.869       3.57        12000
...

not much improvement.

upping to 1000pts:

3D, 1000pts

LU:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            28.536      55.89         1000
   5 __delaunayn__            17.235      33.76         1000
   1     delaunayn             2.665       5.22         1000
  15          kron             0.811       1.59         3000
  22       logical             0.674       1.32         1000


Laplace Exp:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            17.201      94.61         1000
   1         delaunayn             0.443       2.43         1000
  14             cross             0.143       0.79         1000
  12          binary -             0.081       0.44         6000



6D, 1000pts:

LU decomp:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            97.561      40.34           10
   5 __delaunayn__            85.050      35.17           10
   1     delaunayn            35.671      14.75           10
  15          kron             9.233       3.82           30
  22       logical             5.001       2.07           10
  12    postfix .'             3.478       1.44           20

(peak mem about 3GB, total time ~241s)

Laplace expansion:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            85.856      35.19           10
  22         binary .*            55.332      22.68         8820
  12          binary -            30.297      12.42         4410
  15 delaunayn>detvec4            20.128       8.25          300
  23               cat            10.416       4.27         1200
  24               dot            10.393       4.26         1200

(peak mem about 700MB, total time ~244s)

So here, at the cost of 3x the memory, LU decomp breaks even at 6D.

upping to 10000 points. 

3D, 10kpts:
LU:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            41.780      47.47          100
   5 __delaunayn__            27.882      31.68          100
   1     delaunayn            10.659      12.11          100
  15          kron             2.891       3.29          300
  22       logical             1.455       1.65          100


Laplace expansion:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__            27.508      91.32          100
   1     delaunayn             1.558       5.17          100
  12      binary -             0.665       2.21          600
  19           cat             0.099       0.33          100
  22          sqrt             0.076       0.25          300
  18     binary .*             0.073       0.24          800

6D:

Laplace expansion:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__           195.219      44.54            1
  22         binary .*            87.128      19.88          876
  12          binary -            46.551      10.62          441
  15 delaunayn>detvec4            29.981       6.84           30
  24               dot            16.814       3.84          120

Peak memory usage by Octave in the first few minutes hit about 3 GB, then 7 GB toward the end.

LU:
Peak memory usage by Octave in the first few minutes hit about 3 GB, then 13 GB toward the end, and then it hit an out-of-memory limit.

incomplete profshow:
>> profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__           198.244      54.59            1
   1     delaunayn           132.159      36.39            1
  15          kron            13.536       3.73            2
  22       logical             8.555       2.36            1
  12    postfix .'             5.262       1.45            2
  13      binary -             3.891       1.07            5
  16        sparse             1.474       0.41            2
  23       profile             0.011       0.00            1
  24     binary !=             0.002       0.00            1

I think it crashed out during the LU decomp, which is just showing up as delaunayn time.  Not sure how well that time can be trusted, since when I was hitting 97%+ memory usage I saw disk usage spike as it started trying to swap to disk.

So in summary, it seems that with Octave's lu the LU approach starts to be faster at higher dimensions for larger arrays, but at a big memory cost.

Nicholas Jankowski <nrjank>
Wed 30 Jun 2021 09:13:35 AM UTC, comment #17: 

Thanks.  I filed a separate bug report for that.
https://savannah.gnu.org/bugs/index.php?60859

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 30 Jun 2021 08:51:07 AM UTC, comment #16: 

I think the issue mentioned in comment #15 is a bug with sparse.  A simple example:

a=sparse(1);
a(1)=single(20);

To fix it, change the line with the issue to

eqs(logical (kron (speye (nt, nt), true (nd-1))))=double(edgvec.'(:));

Anonymous
Wed 30 Jun 2021 08:25:40 AM UTC, comment #15: 

I am somewhat lost with all the changes being made, and perhaps this is irrelevant, but the LU code from comment #4 cannot handle single-precision input:

 octave -q -f
warning: function /home/dima/scratch/delaunayn.m shadows a core library function
octave:1> x = single(rand(1000,1)*4-2);
octave:2> y = single(rand(1000,1)*4-2);
octave:3> z = single(rand(1000,1)*4-2);
octave:4> p = single(rand(1000,1)*4-2);
octave:5> n = 10; tic; for idx = 1:n, delaunayn([x,y,z]); endfor; toc
error: operator =: no conversion for assignment of 'float matrix' to indexed 'sparse matrix'
error: called from
    delaunayn at line 114 column 54
octave:6> n = 10; tic; for idx = 1:n, delaunayn([x,y]); endfor; toc
Elapsed time is 0.0229518 seconds.
octave:7>

Dmitri A. Sergatskov <dasergatskov>
Wed 30 Jun 2021 04:42:15 AM UTC, comment #14: 

Attached is a mildly corrected patch.  The 2D and 3D cases didn't need the /2 and /6 (I forgot they were dropped with the volume/volume calculation), and I had left a few typos in both the 2D and 3D versions.  Otherwise it's still the same.

I'll reexamine the timing tomorrow.  When I ran the tests where I manually wrote out the explicit determinants, I still had the incorrect (and maybe faster) tol checks in there.  I'm wondering if a correct tol check would make it and the LU approach comparable in speed.  I also expected to see the LU decomp do better after ~5 or 6 dimensions, but the numbers below were still showing the explicit determinants to be about 2x faster.

The easiest question: is the small-simplex test necessary?  For Matlab compatibility it is, as they throw out zero-volume simplexes, with floating-point error making 'zero volume' mean less than some arbitrary distance from zero.  Maybe we could add an optional flag to disable it, but it needs to be there by default.

I think one simple case needing small-simplex removal is a cube with a center point:

A = [0 0 0;
     1 0 0;
     0 1 0;
     0 0 1;
     1 1 0;
     1 0 1;
     0 1 1;
     1 1 1;
     0.5 0.5 0.5];

delaunayn(A), size(ans)
ans =

   5   9   2   1
   5   9   3   1
   6   9   2   1
   6   9   4   1
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

ans =

   12    4

Before small simplexes were removed, however:

T =

   5   3   2   1
   5   9   2   1
   5   9   3   1
   6   4   2   1
   6   9   2   1
   6   9   4   1
   6   5   8   2
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

debug> size(T)
ans =

   15    4

with the volumes:

vol =

        0
   0.5000
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000

It apparently created three zero-volume tetrahedra from the cube faces:

>> A([5 3 2 1], :)
ans =

   1   1   0
   0   1   0
   1   0   0
   0   0   0

>> A([6 4 2 1], :)
ans =

   1   0   1
   0   0   1
   1   0   0
   0   0   0

>> A([6 5 8 2], :)
ans =

   1   0   1
   1   1   0
   1   1   1
   1   0   0

(file #51627)

Nicholas Jankowski <nrjank>
Wed 30 Jun 2021 01:02:30 AM UTC, comment #13: 

Looking at how expensive the small-simplex check is (and it is not vital to the method), would it be a good idea to have another input from the user, defaulting to true, and only do that part of the algorithm if the input is true?

Anonymous
Wed 30 Jun 2021 12:39:09 AM UTC, comment #12: 

Theoretical basis of the eps(max(R)) check:
The check I proposed is a way of estimating the rank or finding the rows which cause the matrix to be singular or nearly singular. The rank function default tolerance is

tol = max (size (A)) * sigma(1) * eps;

where sigma(1) is a norm of the matrix found from the singular value decomposition, so max(R) is an estimate of the matrix norm.  For this application, as long as this estimate is within a few orders of magnitude of the true matrix norm, I don't imagine any problems.  We could use a 1, inf, Frobenius, or max norm at a small additional computational expense.  I don't see the need for making the tolerance a function of every simplex; a deliberate difference in simplex size of over 10 orders of magnitude would be required to make any difference.  This method could also be applied using QR or SVD decompositions, but it would be more expensive.  In summary, the tolerance check I suggested is based on checking which rows are causing the matrix to be nearly singular rather than on any geometrical arguments.

Suggestions

  • If idx is not empty, we should display a warning to the user.  One of the main uses for Delaunay triangulation is mesh generation, and removing simplexes could cause issues.  A warning would potentially get the user to check their points and fix issues with the point locations.  An example would be a nearly duplicated point: it would be better to remove the nearly duplicated point and re-mesh than to remove some simplexes.
  • The comment in the latest patch about the determinant should say that it will find the determinant of one simplex; finding all of them is more complicated (see the end of comment #1).
  • Do we even need to remove zero-volume simplexes?  They do cause problems when solving partial differential equations on a mesh containing them, but changing the location of the points typically allows the Delaunay algorithm to find a mesh without low-volume simplexes.

Can we check the time data with what we should expect:

  • LU: I expect nt*(nd-1)^3 plus some overhead
  • Leibniz rule repeated 2x2 det: nt*((nd-1)!)

So around six dimensions, the LU method should become faster and it should become much faster for higher dimensions.
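A quick back-of-the-envelope comparison of those two per-simplex cost estimates (illustrative only):

nd = 3:9;                          # nd = dim + 1
lu_cost  = (nd - 1).^3;            # block-LU estimate
det_cost = factorial (nd - 1);     # repeated-determinant (Leibniz) estimate
[nd; lu_cost; det_cost]            # the factorial overtakes the cubic near dim = 6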

Anonymous
Tue 29 Jun 2021 07:09:24 PM UTC, comment #11: 

Oops, bad headers in that patch.  Correct one attached.

(file #51626)

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 07:06:09 PM UTC, comment #10: 

Attached is a version of the patch that removes some unneeded assignments from 2D (no real need for ptsz and separate p1, p2, etc.) and adds a similar path for 3D.  The 3D tol check matches the 2D one, calculating the volume of an equivalent right-angled tetrahedron, and I tweaked a few comments.
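A rough sketch of the kind of 3D check described (not the exact patch code), assuming p12, p23, p34 are the per-simplex edge vectors: compare the simplex volume to that of a right-angled tetrahedron with the same edge lengths, so the ratio is dimensionless and the 1/6 factors cancel.

vol     = dot (p12, cross (p23, p34, 2), 2);
ref_vol = sqrt (sumsq (p12, 2) .* sumsq (p23, 2) .* sumsq (p34, 2));   # product of the three edge lengths
idx     = find (abs (vol) ./ ref_vol < tol);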

see attached.

(file #51625)

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 06:18:22 PM UTC, comment #9: 

Based on the timing, and the likelihood that 2D and 3D will be the most heavily used, I'd recommend we keep both 2D and 3D broken out as vector-based determinants (with cross() and dot(cross(...)), respectively).

Then we just need to make sure the < tol check is consistent.

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 05:05:46 PM UTC, comment #8: 

That is a pity. It could probably have saved a lot of work, had we found bug #53942 before. Sorry for not noticing earlier.
There are probably a lot of other good patches on the tracker that went unnoticed for years.

With respect to the patch here. I haven't tested it as of now.
But I am wondering why it is possible to check all simplices against the same `eps(max(R))`. The previous algorithm used a reference that was different for each simplex.
But maybe I am missing something obvious.

The patch did apply for me. I amended it with a few comments and fixed what I think was a minor inaccuracy in the 2-D case. See the attachment.
Is that all OK?

We might want to tweak the introductory comment a bit more once I better understand the part about `eps(max(R))`.

If the number of simplices is very high, would it make sense to break the calculation into subsets of these simplices and use the new algorithm for each subset? Or is this overkill?

(file #51624)

Markus Mützel <mmuetzel>
Project Member
Tue 29 Jun 2021 04:21:16 PM UTC, comment #7: 

The patch in comment #4 applies cleanly to default on my machine, and delaunayn appears to run fine.

Adding to the timing comparisons on Discourse:

rand("state", [1:625]');
x = rand(1000,1)*4-2; y = rand(1000,1)*4-2; z = rand(1000,1)*4-2; p = rand(1000,1)*4-2;  q = rand(1000,1)*4-2;r = rand(1000,1)*4-2; s = rand(1000,1)*4-2;

n = 1000; tic; for idx = 1:n, delaunayn([x,y]);endfor;toc

and adjusting dimension and n as practical:

dim    n        existing_loop (s)   expanded_det (s)   factor   block-LU (s)   factor
2D     10000    7.91032             7.69139            1.028    7.85065        1.007
3D     1000     35.6323             1.77464            20.07    3.74609        9.511
4D     100      11.4891             0.686602           16.73    2.03804        5.637
5D     100      39.3493             3.55978            11.05    11.7757        3.341
6D     100      155.825             27.9706            5.571    63.8424        2.440
7D     10       54.9079             16.8202            3.264    32.2846        1.700

I'm surprised to see the significant advantage of the explicit determinants over the block-LU approach.  I note, though, that the `<tau` check is still different, and may be much simpler, in that approach.  I don't know if this accounts for some of the difference.

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 02:52:25 PM UTC, comment #6: 

For the record, I thought I had searched for delaunay bugs/patches when I started this.  Apologies for missing bug #53942.

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 09:42:05 AM UTC, comment #5: 

For the record, bug #53942, file #44397 also proposed the approach with the LU decomposition of the sparse block-diagonal matrix.

A.R. Burgers <arb>
Tue 29 Jun 2021 08:08:29 AM UTC, comment #4: 

As requested, a patch file and an m-file.  The last few patches I sent have not worked; I completely removed my local copy, re-downloaded the latest version, and hope it works this time.

(file #51621, file #51622)

Anonymous
Tue 29 Jun 2021 06:38:46 AM UTC, comment #3: 

@Anonymous from comment #1: That looks great.
Could you please prepare a changeset on top of the current default branch?
If that is not possible, could you provide a modified version of `delaunayn` that implements the algorithm you describe?

Markus Mützel <mmuetzel>
Project Member
Tue 29 Jun 2021 03:34:31 AM UTC, comment #2: 

Hah, that's great.  I had started looking at LU decomp for just that reason, but stopped short of setting up the block-diagonal matrix to make it all work in parallel.

I will look at it and compare with some of the more brute-force stuff I played with, just breaking out the determinants (see the Discourse discussion https://octave.discourse.group/t/delaunayn-trivial-triangle-removal-criteria, or the attached modified delaunayn.m).

We were still unsure of the validity of the arbitrary relative-volume check, and I was looking for something in the literature to justify one approach or another.

(file #51620)

Nicholas Jankowski <nrjank>
Tue 29 Jun 2021 02:43:20 AM UTC, comment #1: 

I have worked out a way to speed up the check for sliver simplexes without using loops, for any number of dimensions.  It appears to be faster than the existing method for all but the 2D case and can remove the same simplexes.  The check is as follows:

edgvec=pts(T(:,2:end).'(:),:)-kron(pts(T(:,1),:),ones(nd-1,1));
eqs=sparse((nd-1)*nt,(nd-1)*nt);
eqs(logical(kron(speye(nt,nt),true(nd-1))))=edgvec.'(:);
[l u p q]=lu(eqs,"vector");
R=abs(diag(u));
reorderdtriidx=kron(1:nt,ones(1,nd-1))(p)(q);
idx=unique(reorderdtriidx(R<100*(nd-1)*nt*eps(max(R))));

Step by step, the check works as follows:

  • Calculate the edge vectors of the simplex

    edgvec=pts(T(:,2:end).'(:),:)-kron(pts(T(:,1),:),ones(nd-1,1));

  • Place the edge vectors into a block diagonal matrix where each block is for each simplex

    eqs=sparse((nd-1)*nt,(nd-1)*nt);
    eqs(logical(kron(speye(nt,nt),true(nd-1))))=edgvec.'(:);

  • Find the LU factorization and extract the diagonal.  The operations used in the LU factorization do not change the determinant (other than the sign, and we only care about the absolute value), and the product of the diagonal of u from the LU factorization equals the determinant.

    [l u p q]=lu(eqs,"vector");
    R=abs(diag(u));

  • The LU factorization reorders the matrix, so we reorder the indexes of the simplex blocks to match

    reorderdtriidx=kron(1:nt,ones(1,nd-1))(p)(q);

  • Perform the check; the 100 is arbitrary, and eps(max(R)) could be replaced by a matrix norm:

    idx=unique(reorderdtriidx(R<100*(nd-1)*nt*eps(max(R))));

Using

x=rand(10,3);
x=[x;x-1000*eps];
delaunayn(x);

I get the same indexes as the existing method with the 100* factor; without it, I get a subset of those indexes.

This method is similar to a method I use to find which rows of a matrix are causing it to be singular. I normally use a QR factorization but the QR factorization was significantly slower than the existing method.

If you want to use the existing method, the determinant of a particular triangle (the first one in this case) is

prod(R(reorderdtriidx==1))

or for all determinants in a vector

absdet=prod(reshape(kron(R,ones(1,nt))(reorderdtriidx.'==(1:nt)),nd-1,nt),1)

where R=abs(diag(u)) from the lu factorization.
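As a hedged sanity check of the determinant-from-LU idea on a single block (names local to this snippet): with P*A*Q = L*U and a unit-diagonal L, |det(A)| equals the product of |diag(U)|.

A = rand (3);                             # one simplex's edge-vector block
[l, u, p, q] = lu (sparse (A), "vector");
abs (prod (diag (u))) - abs (det (A))     # ~0 up to rounding error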

Anonymous
Thu 24 Jun 2021 06:38:13 PM UTC, original submission:  

As discussed in some depth on the Octave Discourse [1], delaunayn.m has a longstanding FIXME to vectorize the check for trivially small simplexes; it currently loops over every shape.  Bug #53689 separated out at least the 2D case and vectorized it, showing a significant speedup.

Looking at doing the same for at least 3D if not nD, we noticed the 2D code path does not actually reproduce the algorithm used by the for loop.  Vectorizing higher dimensions requires choosing which algorithm to implement, and deciding whether the 2D case needs to be made to match the >2D one.

'Brief' summary of the differences:
The original code admits the evaluation criterion is arbitrary.  It compares each volume to 'a reference volume' and discards volumes that are too small; the 'reference volume', or its equivalent, is what differs between the code paths.

The looped nD code takes the shape-defining edge vectors, orthogonalizes them to create a resultant vector, and a matrix division of the volume by that resultant vector produces another vector (in 2D, this new vector times the first vector has the same volume as the simplex).  The components of the new vector are compared to 'tol' (= 1000*eps), and if all components are smaller than tol, that shape is deemed trivially small and discarded.

2D path: the simplex volume is divided by the length of each triangle edge, producing a resultant length (as if it were a rectangle), and that length is compared to tol.  If all such lengths are < tol, that simplex is discarded.

So the main difference is that 'tol' is compared either to the calculated vector length (2D) or to its component lengths (nD).  A test pushing a 2D example through both code paths shows very different comparisons being made to 'tol'.

The benefit of the nD case is that for any dimension, tol is compared to 1D lengths.  If the 2D approach were applied to nD cases, you would compare tol to lengths, areas, volumes, etc., with increasing dimension.

To the 2D code path's advantage, using rdivide instead of mrdivide is much easier to vectorize and could be extended to >2D fairly easily.

I'd recommend trying to stick with the nD case for dimensional consistency, unless a better argument can be made, although this will make even the 2D case much harder to vectorize.  That said, the code even admits the check is arbitrarily decided, and maybe we can come up with a more self-consistent 'arbitrary' measure.

[1] https://octave.discourse.group/t/delaunayn-trivial-triangle-removal-criteria

(Added Discourse discussion users and the bug #53689 author to notifications.)

Nicholas Jankowski <nrjank>

 

Attached Files
file #51642:  delaunayn.m added by None (7KiB - text/plain)
file #51640:  delaunayn.m added by nrjank (7KiB - text/plain - less redundant code between code paths and consistent <tol checks, but the still-very-slow vol check scales very badly for the nD case)
file #51634:  bug60818-delaunayn-v5.patch added by nrjank (4KiB - application/octet-stream - fix tri ordering)
file #51627:  bug60818-delaunayn-v4.patch added by nrjank (4KiB - application/octet-stream - minor fixes to 2D and 3D)
file #51626:  bug60818-delaunayn-v3.patch added by nrjank (4KiB - application/octet-stream - fixed patch header)
file #51625:  bug60818-delaunayn-v3.patch added by nrjank (4KiB - application/octet-stream)
file #51624:  bug60818-delaunayn-v2.patch added by mmuetzel (3KiB - application/octet-stream)
file #51622:  delaunayn@bug@60818.patch added by None (1KiB - application/octet-stream)
file #51621:  delaunayn.m added by None (5KiB - text/plain)
file #51620:  delaunayn.m added by nrjank (10KiB - text/plain - modified delaunayn.m with broken-out vectorized determinants up to 7D; still inconsistent volume check method)

 

Depends on the following items: None found

Items that depend on this one: None found

 


