GNU Octave - Bugs

bug #60818: delaunayn - low performance loop used for >3D code path

Submitter:  Nicholas Jankowski <nrjank>
Submitted:  Thu 24 Jun 2021 06:38:13 PM UTC
   
 
Category:          Octave Function
Severity:          3 - Normal
Priority:          5 - Normal
Item Group:        None
Status:            In Progress
Assigned to:       None
Originator Name:   Nicholas Jankowski
Open/Closed:       Open
Release:           dev
Operating System:  Any
Fixed Release:     None
Planned Release:   None


Tue 25 Oct 2022 08:40:33 PM UTC, comment #55: 

No comments or concerns about the comment #54 patch for a month. The title now reflects the goal of future performance improvement, so changing the status back to In Progress to reflect that.

Nicholas Jankowski <nrjank>
Group Member
Wed 28 Sep 2022 07:24:02 PM UTC, comment #54: 

Realized I let this sit for some time. I've refreshed the patch, made a few minor tweaks and format corrections, added some input validation and BISTs, and updated NEWS.md.8. It passes all tests and a make check, so it doesn't seem to break the few other functions that call it.

Again, this takes care of the initial bug and the low-hanging fruit for performance improvement. Since it's more than just a bugfix, it was pushed to default for v8 as https://hg.savannah.gnu.org/hgweb/octave/rev/bf8f33249e86

Will retitle and point bug #53942 here for future work on >3D path performance and memory-use improvement.

Marking as Ready for Test.

Nicholas Jankowski <nrjank>
Group Member
Sat 20 Nov 2021 06:28:25 AM UTC, comment #53: 

OK, I never figured out how to get around the high-nD memory issue, so I'm uploading a new patch. Hoping this might be sufficient to push for now (maybe for 7?), tabling some of the performance improvement for a later task.

1 - Fixes all code paths so they use the same volume comparison for trivial-simplex removal (the main topic of this bug).

2 - In the process, simplifies the n-D for loop by pulling as much of the calculation outside of the loop as possible (the actual det call is the only thing left inside). This speeds up the loop about 2-4x for the 3D and 4D cases over the current delaunayn.

3 - Pulled just the 3D case out into its own explicit path like 2D. Figured 2D and 3D are likely the most used (e.g., by the interpolant and grid functions) and would benefit the most, and there had been no question about which det approach made sense for 3D. Pulling it out is about 10-20x faster than leaving it in the loop, and 20-40x faster than the old loop; a sketch of the general shape of such a check is below.
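For orientation only (not the patch code), the explicit 3D path amounts to something like the following, assuming pts and T as inside delaunayn.m and the 1e3*eps relative-volume cutoff discussed elsewhere in this report:

e1 = pts(T(:,2),:) - pts(T(:,1),:);     # co-terminated edge vectors (see comment #36)
e2 = pts(T(:,3),:) - pts(T(:,1),:);
e3 = pts(T(:,4),:) - pts(T(:,1),:);
vol = abs (dot (e1, cross (e2, e3, 2), 2));                 # 6 * simplex volume
edge_len = sqrt ([sumsq(e1, 2), sumsq(e2, 2), sumsq(e3, 2)]);
idx = find (vol ./ prod (edge_len, 2) < 1e3 * eps);         # relative-volume test
T(idx,:) = [];                                              # drop the trivial simplices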

I also added a couple of BISTs for 3D and 4D, including a simple 3D case that requires trivial-simplex removal, and added more explanatory comments to the code.

This makes a decent improvement over the current version. Dealing with the memory impact of vectorizing the nD part can be a separate performance-improvement task.

(file #52303)

Nicholas Jankowski <nrjank>
Group Member
Tue 07 Sep 2021 09:32:20 PM UTC, comment #52: 

Minor patch update, fixing an error introduced by some of the simplex-check indexing when _delaunayn_ returns T = 0. It now just bypasses the simplex checking in that case. No other significant changes yet.

(file #51877)

Nicholas Jankowski <nrjank>
Group Member
Mon 09 Aug 2021 03:44:37 PM UTC, comment #51: 

Regarding R, that was the original way of looking for small elements, but it was switched to a volume calculation to make for a consistent check between the two code paths. Getting R is the expensive part (it is essentially a single vector of the components of the simplex volumes before running prod(R) to get each volume); the overhead for the reshape and prod() is fairly trivial after getting R. Note that in the latest version, both code paths start with the same edge_vectors form and end with the same volume check.

R could easily be formed in pieces. Each simplex has a square edge-vector matrix that is then used to get the volume; the old loop used det(x). The LU decomposition turns x into R, reducing det(x) into the product of the main diagonal of u, but the block-diagonal form makes each simplex independent within the LU, which lets the whole thing be done in one step. Arbitrarily picking a portion of the simplex set, you could easily run LU on each piece to build R.
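As a quick stand-alone check of that relationship (not part of the patch, just one dense block):

X = rand (4);                        # one simplex's square edge-vector block
[~, u] = lu (X);
assert (prod (abs (diag (u))), abs (det (X)), -1e-10)   # |det(X)| from the U diagonal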

Now, how to decide on a memory-based partition, and how that relates to input geometry - I guess we could come up with a simplex count. At 7D, going from 10 > 100 > 1000 input points increases the simplex count from 8 > 68E3 > 3E6. The latter takes 4.3 MB in storage, but the process peaked near 12 GB (it switched to disk swapping) with LU and around 3800 MB with Laplace.

Nicholas Jankowski <nrjank>
Group Member
Mon 09 Aug 2021 03:06:59 PM UTC, comment #50: 

Patch of the latest version attached.

I had already made some changes to drop memory use a bit: used ~ on the lu outputs for all but u and p, then 'reused' eqs instead of u, and got rid of the one reordered-tri-index variable. I think it dropped memory usage by ~10%, but the big memory jumps still come when forming eqs and running lu.



(file #51753)

Nicholas Jankowski <nrjank>
Group Member
Mon 09 Aug 2021 10:55:47 AM UTC, comment #49: 

Could you please attach a patch for the latest version? I find it easier to see what has changed with patches.

If we would like to implement that, we should probably not simply halve/quarter/... the input size of `lu`. Instead, we should probably partition such that the size of `eqs` doesn't exceed a limit we'd still need to determine.
E.g. if we should decide that the "payload" of `eqs` shouldn't exceed 256 MiB with double precision floating point numbers:

max_num_simplices_per_loop = max (1, floor (2^28/8/nd^2));

Or whatever we deem reasonable.
I'm not sure how much memory `lu` needs for a given input size. So that estimate could be quite off from a reasonable value.
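For illustration, a rough sketch of what such a partitioned loop could look like (assuming pts, T, nt and nd as in delaunayn.m; the chunk size, the variable names, and applying the comment #1 style threshold per chunk rather than globally are all placeholders, not a worked-out design):

dim = nd - 1;
chunk = max (1, floor (2^28 / 8 / dim^2));     # simplices per pass, ~256 MiB of payload
idx = [];
for first = 1:chunk:nt
  last = min (first + chunk - 1, nt);
  k = last - first + 1;
  ev = pts(T(first:last,2:end).'(:),:) - kron (pts(T(first:last,1),:), ones (dim, 1));
  eqs = sparse (dim*k, dim*k);
  eqs(logical (kron (speye (k), true (dim)))) = ev.'(:);
  [~, u, p] = lu (eqs, "vector");
  R = full (abs (diag (u)));
  tri = kron (first:last, ones (1, dim))(p);   # map pivot rows back to their simplices
  bad = tri(R < 100 * dim * k * eps (max (R)));
  idx = [idx, bad(:).'];                       # global indices of simplices to remove
endfor
idx = unique (idx);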

Looking at the last patch, the limit that we use for deciding which simplices to eliminate is:

R < 100 * (nd - 1) * nt * eps (max (R))

The vector `R` should be small compared to the matrix `eqs`. Would it be possible to collect the elements of `R` in that loop?

Since it looks like we are never using `q` (is that still true in the latest version?), you could probably write:

[~, u, p, ~] = lu (eqs, "vector");

That might also save some memory.

Markus Mützel <mmuetzel>
Group administrator
Sun 08 Aug 2021 10:11:13 PM UTC, comment #48: 

We could arbitrarily split up the vector equations by simplexes, but in what way? Half? Quarter? When we set out, I was wondering if there was a way to loop over the dimensions. With the volume calculation being the main task, I figured some aspect of iterating over increasing volume projections would work; that's not something I could see with the LU decomp, though.

I'm still not sure why the method does so poorly at low point counts, but other than that, with the tweaked version that removes the calls to 'dot' and 'cross' (which are fairly quick anyway), the Laplace method is faster with lower memory use up to and including 6D. So if LU is only used for degree > 6, maybe it's natural to partition by degree number.

Nicholas Jankowski <nrjank>
Group Member
Wed 04 Aug 2021 01:48:43 AM UTC, comment #47: 

Separate from partitioning:
Peeking into _delaunayn_ and looking through the qhull options, it seems that qhull can compute facet areas with a couple of options, but there's no general way to go from facet area to 'n-volume'. In 3D we could probably take 'facet area' x opposite-point distance to get the volume, but it's not clear how to expand that to nD.
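(For what it's worth, a quick stand-alone illustration of the 3D version of that idea; the example tetrahedron is made up and this says nothing about what qhull actually exposes:)

A = [0 0 0; 1 0 0; 0 1 0; 0.2 0.3 0.7];               # one example tetrahedron
nrml = cross (A(2,:) - A(1,:), A(3,:) - A(1,:));       # normal of the base facet
facet_area = norm (nrml) / 2;
h = abs (dot (nrml / norm (nrml), A(4,:) - A(1,:)));   # distance of the opposite point
facet_area * h / 3                                     # volume from facet area x height
abs (det (A(2:4,:) - A([1 1 1],:))) / 6                # same volume from the determinant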

lu is already compiled, so there's no 'moving it into an oct file' for speed, and that doesn't affect memory. We could evaluate the initial 'FIXME' suggestion of moving the check right into _delaunayn_.cc, and compare the compiled vectorization and original loop options (after fixing the volume check). I'm not set up to work on C code, though, so I'd just be an eager spectator.

Nicholas Jankowski <nrjank>
Group Member
Sun 01 Aug 2021 06:45:07 PM UTC, comment #46: 

A good question. In the extreme, each simplex's block of the eqs diagonal could be independently calculated; that's what the for loop did. And I'm pretty sure the blocks would independently diagonalize (since you can use matrix row operations instead of the LU decomposition to get the diagonal, and the row ops to get row echelon form are block-independent), but I'm not sure how to partition that in any non-arbitrary way.

I'm half convinced this whole problem stems from not making use of what's already going on in the initial Delaunay calculation, and that we're doubling effort for an after-the-fact check.

Nicholas Jankowski <nrjank>
Group Member
Sun 01 Aug 2021 09:17:19 AM UTC, comment #45: 

To reduce the memory footprint, could we split the LU factorization into multiple sub-sets and solve those separately using a loop?
I'd guess that would come at a performance penalty. But it might still be faster than the Laplace expansion if the subsets can be reasonably large.

Markus Mützel <mmuetzel>
Group administrator
Sun 01 Aug 2021 03:35:44 AM UTC, comment #44: 

So, all that said:

The biggest memory jump seems to happen inside lu. There's no real way to work around that.

Laplace does scale worse than LU at high dimension and high point counts, but it seems Laplace still edges it out up to about 5D (ignoring the small 10-pt case), after which the Laplace scaling gets pretty poor.

Just based on timing, I'd recommend using the Laplace expansion up to 5D, then switching to the LU approach. Unfortunately, that will run into memory issues for >5D. I don't really know any way around that for those running larger, higher-dimension triangulations; personally, I don't know anyone running such high-dimension triangulations.

Attached is the m-file that would do that. I haven't worked up a patch yet, as this should still get some BISTs added to it.



(file #51724)

Nicholas Jankowski <nrjank>
Group Member
Sun 01 Aug 2021 03:15:41 AM UTC, comment #43: 

Oops, I botched the memory table formats; corrected versions:


old loop
est max memory
dim 10 100 1000 10000
2d              55
3d         50   71
4d         57   153
5d         98   655
6d         322  3500
7d         1572 OOM

LU
est max memory
dim 10  100  1000  10000
2d                 64
3d           66    252
4d      54   123   933
5d      63   528   5880
6d      140  3100  OOM
7d      453  13000 OOM

laplace
est max memory
dim 10  100  1000  10000
2d                 55
3d                 78
4d                 233
5d           85    1100
6d           590   8400
7d      105  3800  OOM

memory_lu/memory_laplace
dim  10  100   1000  10000
2d                   1.16
3d                   3.23
4d                   4.00
5d             6.21  5.34
6d             5.25
7d       4.31  3.42


Nicholas Jankowski <nrjank>
Group Member
Sun 01 Aug 2021 03:06:49 AM UTC, comment #42: 

OK, I ran a bunch of speed and memory checks for different dimensions (up to 7D) and numbers of points (10-10000). There were definite points at high dimension where the LU approach was faster than the direct Laplace expansion. However, the memory footprint is 3-6x higher.

Looking at the code to see where the memory increases come in, for one example, 4D and 10k points (peak mem: LU 919 MB, Laplace 233 MB):

after running main _delaunayn_: working mem 66MB, peak 224MB
after building eqs:  working 185MB, peak 587MB
after running LU:  working 250MB, peak 919MB

So both building eqs and running LU are significant memory users, doubling and quadrupling the other method's usage. While my machine hit memory limits just running _delaunayn_ for 7D @ 10k pts, I also ran out of memory with the LU algorithm at 6D, 10k pts.

Timing:
I ran _delaunayn_ through the tests without the simplex volume check to get a baseline, then ran the old loop and the two vectorized approaches. I did enough loops and tic/tocs to get a good average, then figured out the average time per operation and subtracted out _delaunayn_'s time to compare just the checks.
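Roughly the kind of harness behind the numbers below (a sketch only; the dimension, point count and repetition count here are arbitrary):

rand ("state", [1:625]');
pts = rand (1000, 4) * 4 - 2;          # e.g. 1000 points in 4-D
nrep = 20;
tic;
for k = 1:nrep
  __delaunayn__ (pts);                 # baseline: the qhull call only
endfor
t_base = toc / nrep;
tic;
for k = 1:nrep
  delaunayn (pts);                     # full function, including the simplex check
endfor
t_check = toc / nrep - t_base          # time attributed to the check itself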

Here was the original loop (I didn't take the time to get the really low memory numbers):


old loop
est max memory
dim   10    100    1000   10000
2d                        55
3d                 50     71
4d                 57     153
5d                 98     655
6d                 322    3500
7d                 1572   OOM

old loop
check time, ms
dim  10        100        1000      10000
2d   0.412164  0.462299   0.44162   0.9745
3d   1.341657  38.14379   435.1904  4755.529
4d   1.309238  128.04121  1950.719  22246.64
5d   1.371815  426.4956   8552.84   121692.96
6d   0.861909  1427.137   46292.5   667997
7d   0.646422  4765.347   221883.5  OOM


the LU algorithm:


LU
est max memory
dim   10    100    1000    10000
2d                         64
3d                 66      252
4d          54     123     933
5d          63     528     5880
6d          140    3100    OOM
7d          453    13000   OOM

lu
check time, ms
dim  10      100     1000     10000
2d   0.4971  0.9715  6.6968   69.4045
3d   0.4556  2.3249  36.732   636.099
4d   0.5227  15.078  336.72   4516.34
5d   0.5603  96.882  2586.3   35837.96
6d   0.5480  494.23   18004   OOM
7d   0.5369  2471.3  205843   OOM

lu
check/loop
dim  10    100   1000   10000
2d   1.20  2.10  15.16  71.22
3d   0.33  0.06  0.084  0.133
4d   0.39  0.11  0.172  0.203
5d   0.40  0.22  0.302  0.294
6d   0.63  0.34  0.388  OOM
7d   0.83  0.51  0.927  OOM


and the laplace expansion:

laplace
est max memory
dim   10    100    1000   10000
2d                        55
3d                        78
4d                        233
5d                 85     1100
6d                 590    8400
7d          105    3800   OOM

laplace
check time, ms
dim  10      100     1000      10000
2d   0.2019  0.2276  0.10983   9.0344
3d   0.3230  0.3180  1.2734    33.169
4d   0.9225  1.1493  28.029    533.24
5d   3.9620  13.853  418.83    9204.96
6d   23.342  298.25  18801.6   278337
7d   164.25  7893.5  656063.5  OOM

laplace
check/loop
dim  10     100    1000   10000
2d   0.489  0.490  0.248  -0.28
3d   0.240  0.008  0.002  0.006
4d   0.704  0.008  0.014  0.023
5d   2.888  0.032  0.048  0.074
6d   27.08  0.208  0.406  0.413
7d   254.0  1.656  2.956  OOM


ok, so comparing the two:


memory_lu/memory_laplace
dim  10  100  1000  10000
2d                  1.16
3d                  3.23
4d                  4.00
5d            6.21  5.34
6d            5.25
7d            4.31  3.42

time_lu / time_laplace
dim  10     100   1000   10000
2d   2.462  4.26  60.97  7.68
3d   1.410  7.30  28.84  19.1
4d   0.566  13.1  12.01  8.46
5d   0.141  6.99  6.175  3.89
6d   0.023  1.65  0.957  inf
7d   0.003  0.31  0.313  OOM


Nicholas Jankowski <nrjank>
Group Member
Thu 22 Jul 2021 04:56:23 PM UTC, comment #41: 

In process. I got delayed and took a vacation with the family :)

I got some odd timing results after stitching the two different approaches together, and found an error or two, so I'm rerunning them.
It's on the to-do list.

Nicholas Jankowski <nrjank>
Group Member
Thu 22 Jul 2021 04:36:29 PM UTC, comment #40: 

@nrjank: Did you come around to running the performance tests and adding some self tests?

Markus Mützel <mmuetzel>
Group administrator
Sun 04 Jul 2021 10:57:53 AM UTC, comment #39: 

I didn't look at memory, but it is in fact over 50x faster for the 3D 1000-pt case. LU itself goes back to being the slowest part; I don't see any way around that for a general nD solution.

Absent other minor tweaks, this is largely complete, I think. I'll run some timing tests as a function of #pts and ndim to see if maybe a few higher dimensions should use the Laplace expansion.

I think the end of the function also says it should have some delaunayn-specific self-tests added. I will look at adding a few of those too.

Nicholas Jankowski <nrjank>
Group Member
Sat 03 Jul 2021 10:55:41 AM UTC, comment #38: 

Fixed the slow line by changing it to


[~, rev_sort] = sort (reordered_tri_idx);
vol = prod (reshape (R(rev_sort), dim, nt), 1).';


I have not checked the speed, but I expect it to be faster with less memory used. I also placed the missing abs() on the 3D volume calculation.

Attached file

(file #51642)

Anonymous
Fri 02 Jul 2021 02:52:07 PM UTC, comment #37: 

OK, the attached file makes a few changes.

1: Replaced both l and q with ~ in the lu output. Just leaving q out issues a warning and significantly alters the output, but leaving the placeholder in is fine. A bit less memory now, but it's still big.

2: Both code paths now start with the same dimension-independent edg_vec as currently set in the LU path, and end with the same dimension-independent volume/prod(edge_lengths) < tol check. The edge_lengths reshapes/permutes for that are pretty low impact.

3: It would be a nice-to-have, but not necessary, if we could use the reshaped/permuted edg_vec in the first place and go from that to eqs. It would remove two separate reshapes, but compared to other things they are low impact.

4: What has an oddly large impact is getting the volumes back out of the R's. The

prod(reshape(kron(R,ones(1,nt))(reordered_tri_idx.'==(1:nt)),dim,nt),1)

process is oddly time consuming. Splitting it apart, kron(R,ones(1,nt)) for the 1000-pt 3D case takes a few seconds, and then the (reordered_tri_idx.'==(1:nt)) takes about the same amount of time, too.

I was hoping to get that to work so we'd have a consistent < tol test between code paths (which kicked this off in the first place). If that's an issue, maybe we could go back to the idea of just testing R/min(edg_vec) for each tri, but that may have the same kron/reorder issue. Will look more at that.
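For reference, one way to avoid the kron / index-mask construction entirely would be to group R by simplex with accumarray (a sketch, assuming reordered_tri_idx, R, and nt as above; this is not what the attached file does, and whether it is actually faster would need measuring):

vol = accumarray (reordered_tri_idx(:), full (R(:)), [nt, 1], @prod);   # per-simplex |det|

Here @prod is applied group-wise, so each simplex gets the product of its own diagonal entries.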

This version also still just uses the Laplace expansion for 2D and 3D. Once we're happy with the nD code path, it's trivial for me to move the higher dimensions over to it if that turns out to be better. Faster code vs. less/simpler code?


(file #51640)

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 03:05:29 PM UTC, comment #36: 

And now, looking at the geometric alternative of doing the volume comparison for the LU code, I'm realizing we may be calculating the volume wrong in the Laplace expansion code. The simple orthogonal cases may have just masked it.

The volume of a triangle, tetrahedron, etc., is 1/n! * the volume of the parallelogram/parallelepiped, etc., defined by the vector product, but that requires using co-terminated vectors (p21, p31, p41, etc.), not (p21, p32, p43) like we have there. I was just following the 2D code, which used p12 and p23, since they're co-terminated except for a sign. But using p12, p23, p34, ... for higher dimensions uses vectors not part of the parallelepiped to calculate its volume, which I suspect is very wrong.

It's a trivial switch in the Laplace expansion determinant code. I will make that change and update the patch after checking the effect of leaving q out or changing it to ~ in the LU code.

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 02:21:28 PM UTC, comment #35: 

regarding the ability to drop q - it appears that the behavior of LU changes based on whether or not q is requested as an output?

"When called with two or three output arguments and a sparse input matrix, lu does not attempt to perform sparsity preserving column permutations. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P A Q = L * U."

I don't quite understand the internals enough to know if that would be a problem for subsequent operations.

does calling the function like [~,u,p,~] = lu (eqs, "vector") work, and does it make a difference?

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 02:12:54 PM UTC, comment #34: 

Dmitri, according to the Octave help

"If A is full then subroutines from LAPACK are used, and if A is sparse then UMFPACK is used."

https://octave.sourceforge.io/octave/function/lu.html

I'm not sure if this makes any particular difference?

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 11:12:27 AM UTC, comment #33: 

Sorry, sloppy patch again. I've been playing with this one so much, I guess I should have made a git repo for it.

 Yes, as written it should say


if (any(nd == [3 4]))


for the currently written 2D and 3D code paths (I had removed the 4 for timing the LU path for 3D).

I hadn't yet added the higher-dimension expansions to this version, but if we do, then higher nd's would go in as well.

q was used up until the last patch which fixed the ordering issue.

In the last patch I had changed
[l,u,p,q] to [~,u,p,q]

I can do that with q now too.

This removed the storage requirement for L, but I think this doesn't stop Octave from building L during the LU, it just discards it, correct? So I don't think it'll actually make much difference in speed or memory?

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 07:16:10 AM UTC, comment #32: 

I opened bug #60867 for `whos` not showing the sparse attribute.

Markus Mützel <mmuetzel>
Group administrator
Thu 01 Jul 2021 07:07:17 AM UTC, comment #31: 

Sorry. Please ignore my last comment. I was thinking that sparse matrices would be marked somehow in the output of `whos`. But that is not the case...

Markus Mützel <mmuetzel>
Group administrator
Thu 01 Jul 2021 07:04:53 AM UTC, comment #30: 

I would have thought `eqs` was a sparse (block diagonal) matrix. But it looks like it is a full matrix in comment #26.
That could also be the reason why the LU decomposition is unexpectedly slow.
I wonder which command/assignment causes the conversion from sparse to full.

Markus Mützel <mmuetzel>
Group administrator
Thu 01 Jul 2021 03:43:43 AM UTC, comment #29: 

Sorry for my mistake: nd = dim+1, so I should suggest

if (any(nd == [3 4]))

and possibly

if (any(nd == [3 4 5 6 7]))

if we want faster code in the dimensions where repeated determinants are likely faster.

Anonymous
Thu 01 Jul 2021 03:26:22 AM UTC, comment #28: 

A question with the patch: should

if (any(nd == [3]))

be

if (nd == 2 || nd == 3)

Regarding the variables at the end

  • l and q are never used
  • u could be overwritten rather than using a new variable R
  • eqs is not needed after the input to lu
  • edgvec is not used after it enters eqs but if the check is changed this may also change
Anonymous
Thu 01 Jul 2021 03:21:13 AM UTC, comment #27: 

l is not needed. I edited it out in the last patch (v5).

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 03:08:31 AM UTC, comment #26: 

I added "whos" at the end of the function. This is what I see for 6D 1000 pt case:


octave:2> whos
Variables visible from the current scope:

variables in scope: top scope

   Attr Name        Size                     Bytes  Class
   ==== ====        ====                     =====  =====
        p        1000x1                       8000  double
        q        1000x1                       8000  double
        r        1000x1                       8000  double
        s        1000x1                       8000  double
        x        1000x1                       8000  double
        y        1000x1                       8000  double
        z        1000x1                       8000  double

Total is 7000 elements using 56000 bytes

octave:3> tic; delaunayn([x,y,z,p,q,r]); toc
Variables visible from the current scope:

variables in scope: delaunayn: /home/dima/scratch/delaunayn.m

   Attr Name                 Size                     Bytes  Class
   ==== ====                 ====                     =====  =====
        R              3654654x1                   58474480  double
    f   T               608426x7                   34071856  double
        edgvec         3654654x6                  175423392  double
        eqs            3654654x3654654            380084024  double
        idx                  1x683                     5464  double
        l              3654654x3654654            233897864  double
        nd                   1x1                          8  double
        nt                   1x1                          8  double
        p              3654654x1                   29237232  double
    f   pts               1000x6                      48000  double
        q              3654654x1                   29237232  double
        reorderdtriidx       1x3654654             29237232  double
        tol                  1x1                          8  double
        u              3654654x3654654            233897864  double
    f   varargin             0x0                          0  cell

Total is 40069528391356 elements using 1203614664 bytes

Elapsed time is 6.89901 seconds.


Do we actually need "l" matrix? I do not see it being used
anywhere in the code.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 01 Jul 2021 02:58:20 AM UTC, comment #25: 

yup, that fixes it.  revised patch attached with that fix and a couple other tiny tweaks.

(file #51634)

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 02:37:01 AM UTC, comment #24: 

To fix the ordering issue change the line

reorderdtriidx = kron (1:nt, ones (1,nd-1))(p)(q);

to

reorderdtriidx = kron (1:nt, ones (1,nd-1))(p);

Please check this I normally use lu on full matrixes so not completely across the two outputs of sparse matrices. This change fixed the example below.

Anonymous
Thu 01 Jul 2021 02:04:51 AM UTC, comment #23: 

I'll step through a single large test and watch the memory at each step.

Right now I'm trying to step through and figure out why the LU method is getting different triangle volumes than the other methods, related to comment #20. I think one of the ordering/reordering steps might be off.

simplex volumes before removal:

old code:

vol =

        0
   0.5000
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000

LU code:
Compressed Column Sparse (rows = 1, cols = 15, nnz = 13 [87%])

  (1, 1) -> 1
  (1, 3) -> 0.5000
  (1, 4) -> 1
  (1, 5) -> 0.5000
  (1, 6) -> 1
  (1, 7) -> 0.5000
  (1, 8) -> 0.2500
  (1, 10) -> 0.5000
  (1, 11) -> 1
  (1, 12) -> 0.2500
  (1, 13) -> 0.2500
  (1, 14) -> 0.5000
  (1, 15) -> 1


(2 and 9 are missing, so zero. The fact that we're getting 1s and 0.25s is what makes me think it's an ordering issue.)


Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 01:56:58 AM UTC, comment #22: 

Could we work out which line is causing the memory issue? I visually cannot see why it is so high. I thought the line with logical might be making a full array, but calling logical on a sparse array keeps it sparse.

The reason for

(nd - 1) * nt

is the same as the reason for

max (size (A))

in the rank check (both are the number of rows or columns of the matrix). I set up the test by working out which rows are causing the matrix to be singular rather than working from a geometry perspective. It is normally done using a singular value decomposition or a permuted QR factorization, so I am probably stretching the applicability of the check with an LU decomposition. Feel free to change the check to what you think is better.

Anonymous
Thu 01 Jul 2021 01:40:10 AM UTC, comment #21: 

Perhaps I misunderstood the code, but it looks to me like Octave has its own C++ code for LU rather than using LAPACK's.

http://www.netlib.org/lapack/explore-3.1.1-html/dgetrf.f.html

appears to be a recent addition to LAPACK, so perhaps this is the reason for that.

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Thu 01 Jul 2021 01:17:20 AM UTC, comment #20: 

stepping through the nD code, using a 3D example, it seems to be missing one of the zero volume tets:

Matlab:

A = [0 0 0;
1 0 0;
0 1 0;
0 0 1;
1 1 0;
1 0 1;
0 1 1;
1 1 1;
0.5 0.5 0.5];

B = delaunay(A)
B =
     3     4     1     9
     3     7     9     5
     8     9     7     5
     8     6     9     5
     8     6     7     9
     9     7     4     6
     3     7     4     9
     3     9     2     5
     9     6     2     5
     9     4     2     6
     9     4     1     2
     3     9     1     2

size(B)
ans =
    12     4


Octave old code:

>> A = [0 0 0; 1 0 0; 0 1 0; 0 0 1; 1 1 0; 1 0 1; 0 1 1; 1 1 1; 0.5 0.5 0.5]
A =

        0        0        0
   1.0000        0        0
        0   1.0000        0
        0        0   1.0000
   1.0000   1.0000        0
   1.0000        0   1.0000
        0   1.0000   1.0000
   1.0000   1.0000   1.0000
   0.5000   0.5000   0.5000

>> B = delaunayn(A)
B =

   5   9   2   1
   5   9   3   1
   6   9   2   1
   6   9   4   1
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

>> size(B)
ans =

   12    4


new code, using LU decomp for 3D:


>> B = delaunayn(A)
B =

   5   3   2   1
   5   9   3   1
   6   4   2   1
   6   9   2   1
   6   9   4   1
   6   5   8   2
   6   5   9   2
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

>> size(B)
ans =

   13    4

>> p12 = A(B(:,1),:)-A(B(:,2),:); p23 = A(B(:,2),:)-A(B(:,3),:); p34 = A(B(:,3),:)-A(B(:,4),:);
>> vol = dot (p12, cross (p23, p34, 2), 2)
vol =

        0
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000


So, it left in the three zero-volume tets but dropped 2 others. I will look through to see if it's something obvious.

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 01:01:47 AM UTC, comment #19: 

Regarding sliver identification, a few references:

"Generating Well-Shaped Delaunay Meshes in 3D" - Li & Teng [1]
- defines (for 3D) small as V/L^3 < tol1, where L is the smallest edge, AND R/L < tol2, where R is the circumradius of the tetrahedron.
Explained in his thesis [2]: "Many types of tetrahedra can have a small value of V/L^3, but only slivers simultaneously have small R/L ratio". Much of his work then goes into bounding tol1 and tol2, in addition to modifying the Delaunay process in the first place to avoid them.

[1] http://www.cs.iit.edu/~xli/paper/Conf/sliver-SODA01.pdf
[2] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.7943&rep=rep1&type=pdf

"An experimental study of sliver exudation" - Edelsbrunner & Guoy [3]
Slivers have circumradius/min(L) approaching the right-tetrahedral minimum of sqrt(6)/4, but also a 'small' minimum dihedral angle (between faces). (Of course, calculating dihedral angles is a much larger set of vector products than we're already doing; see [4].)

[3] https://www.ljll.math.upmc.fr/~frey/papers/meshing/Edelsbrunner%20H.,%20An%20experimental%20study%20of%20sliver%20exudation.pdf
[4] https://math.stackexchange.com/q/315171
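For concreteness, a rough stand-alone sketch of computing those two ratios for a single 3D simplex (the example vertices and the use of nchoosek/sumsq are mine, not taken from either paper):

A = [1 0 0; 0 1 0; -1 0 0; 0 -1 0.01];              # a nearly flat, sliver-like tetrahedron
E = A(2:4,:) - A([1 1 1],:);                        # co-terminated edge vectors
V = abs (det (E)) / 6;                              # volume
pairs = nchoosek (1:4, 2);
L = min (sqrt (sumsq (A(pairs(:,1),:) - A(pairs(:,2),:), 2)));   # shortest edge length
c = (2 * E) \ (sumsq (A(2:4,:), 2) - sumsq (A(1,:), 2));         # circumcenter
Rc = norm (c.' - A(1,:));                           # circumradius
[V / L^3, Rc / L]                                   # the two ratios described above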



At the moment the current patch uses V/L^n < 1000*eps for the explicit calcs, so the direct solvers are doing part of the first criterion.

The nD portion is doing something like:
nth_root(vol) / (ndims * num_triangles) < 100*eps

(Allowing for equal R's, that's something like R^n = the volume of a simplex.) I'm still not following what the (ndims*num_triangles) part is intending, and why the number of triangles should affect the size check. If we could make this more of a relative size check (maybe doing R / min(L) < tol^(1/n) for each simplex?) it would be more consistent with the other check.

Nicholas Jankowski <nrjank>
Group Member
Thu 01 Jul 2021 12:18:25 AM UTC, comment #18: 

looking at timing. It appears that Octave LU calls are just very computational and memory expensive:

using the profiler to look at a 3D case:


Laplace Expansion:
>> rand("state", [1:625]');
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:10000, delaunayn([x,y,z]);endfor, profile off; profshow

   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__            13.772      78.08        10000
   1     delaunayn             1.899      10.77        10000
  13         cross             1.245       7.06        10000
  12      binary -             0.088       0.50        60000
  19           cat             0.088       0.50        10000
  18     binary .*             0.087       0.50        80000
  22          sqrt             0.080       0.45        30000
  21         sumsq             0.070       0.40        30000
   6           isa             0.044       0.25        10000
  20           dot             0.035       0.20        10000
   3      binary <             0.030       0.17        50000
   9          size             0.024       0.14        30000
   7           eps             0.023       0.13        10000
   2        nargin             0.022       0.13        50001
  15         ndims             0.022       0.12        30000
  10     binary ==             0.019       0.11        30000
  16          ones             0.018       0.10        10000
  23     binary ./             0.017       0.10        10000
  11           any             0.017       0.10        10000
  24           abs             0.013       0.07        10000


using the LU decomposition:

>> clear all
>> rand("state", [1:625]');
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2;
q = rand(100,1)*4-2; r = rand(100,1)*4-2; s = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:10000, delaunayn([x,y,z]);endfor, profile off; profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            14.207      40.10        10000
   5 __delaunayn__            12.965      36.59        10000
   1     delaunayn             4.423      12.48        10000
  27        unique             0.783       2.21        10000
  15          kron             0.753       2.13        30000
  22       logical             0.576       1.63        10000
  17         speye             0.368       1.04        10000
  16        sparse             0.218       0.62        20000
  24          diag             0.184       0.52        10000
   3      binary <             0.182       0.51        30000
  25           abs             0.095       0.27        10000
  13      binary -             0.078       0.22        70000
  12    postfix .'             0.078       0.22        20000
  26           max             0.073       0.21        10000
  21          true             0.055       0.16        30000
   7           eps             0.051       0.14        20000
   6           isa             0.038       0.11        10000
  36         zeros             0.036       0.10        10000
  30         false             0.028       0.08        20001
  14          ones             0.028       0.08        20000


So at least in 3D, for 100 points, the LU approach is spending just as much time in lu as in _delaunayn_, plus ~1/2 of _delaunayn_ again on other ops within delaunayn. The Laplace expansion avoids most other function calls except for cross, spending a total of ~1/3 of the time spent in _delaunayn_.

Jumping out to 6D:


LU:

>> clear all
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2;q = rand(100,1)*4-2; r = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:100, delaunayn([x,y,z,p,q,r]);endfor, profile off; profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            26.310      47.92          100
   5 __delaunayn__            12.579      22.91          100
   1     delaunayn             9.997      18.21          100
  15          kron             2.593       4.72          300
  22       logical             1.388       2.53          100
  12    postfix .'             0.857       1.56          200
...


Laplace expansion:

>> clear all
>> x = rand(100,1)*4-2;y = rand(100,1)*4-2; z = rand(100,1)*4-2; p = rand(100,1)*4-2;q = rand(100,1)*4-2; r = rand(100,1)*4-2;
>> profile clear; profile on; for idx = 1:100, delaunayn([x,y,z,p,q,r]);endfor, profile off; profshow
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            14.085      57.97          100
  22         binary .*             1.971       8.11        87600
  17             cross             1.690       6.95        12000
  15 delaunayn>detvec4             1.341       5.52         3000
  12          binary -             1.335       5.49        44100
  14 delaunayn>detvec5             0.910       3.75          600
  24               dot             0.869       3.57        12000
...


not much improvement.


upping to 1000pts:


3D, 1000pts

LU:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            28.536      55.89         1000
   5 __delaunayn__            17.235      33.76         1000
   1     delaunayn             2.665       5.22         1000
  15          kron             0.811       1.59         3000
  22       logical             0.674       1.32         1000


Laplace Exp:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            17.201      94.61         1000
   1         delaunayn             0.443       2.43         1000
  14             cross             0.143       0.79         1000
  12          binary -             0.081       0.44         6000



6D, 1000pts:

LU decomp:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            97.561      40.34           10
   5 __delaunayn__            85.050      35.17           10
   1     delaunayn            35.671      14.75           10
  15          kron             9.233       3.82           30
  22       logical             5.001       2.07           10
  12    postfix .'             3.478       1.44           20

(peak mem about 3GB, total time ~241s)

Laplace expansion:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__            85.856      35.19           10
  22         binary .*            55.332      22.68         8820
  12          binary -            30.297      12.42         4410
  15 delaunayn>detvec4            20.128       8.25          300
  23               cat            10.416       4.27         1200
  24               dot            10.393       4.26         1200

(peak mem about 700MB, total time ~244s)

so here at the cost of 3x the memory, LU decomp breaks even at 6D.


upping to 10000 points. 


3D, 10kpts:
LU:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
  23            lu            41.780      47.47          100
   5 __delaunayn__            27.882      31.68          100
   1     delaunayn            10.659      12.11          100
  15          kron             2.891       3.29          300
  22       logical             1.455       1.65          100


Laplace expansion:
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__            27.508      91.32          100
   1     delaunayn             1.558       5.17          100
  12      binary -             0.665       2.21          600
  19           cat             0.099       0.33          100
  22          sqrt             0.076       0.25          300
  18     binary .*             0.073       0.24          800

6D:

Laplace expansion:
   #          Function Attr     Time (s)   Time (%)        Calls
----------------------------------------------------------------
   5     __delaunayn__           195.219      44.54            1
  22         binary .*            87.128      19.88          876
  12          binary -            46.551      10.62          441
  15 delaunayn>detvec4            29.981       6.84           30
  24               dot            16.814       3.84          120

peak memory usage by Octave in the first few minutes hit about 3GB, then 7GB toward the end

LU:
peak memory usage by Octave in the first few minutes hit about 3GB, then 13GB toward the end, then
hit an out of memory limit.

incomplete profshow:
>> profshow
   #      Function Attr     Time (s)   Time (%)        Calls
------------------------------------------------------------
   5 __delaunayn__           198.244      54.59            1
   1     delaunayn           132.159      36.39            1
  15          kron            13.536       3.73            2
  22       logical             8.555       2.36            1
  12    postfix .'             5.262       1.45            2
  13      binary -             3.891       1.07            5
  16        sparse             1.474       0.41            2
  23       profile             0.011       0.00            1
  24     binary !=             0.002       0.00            1


I think it crashed out during the LU decomp, which is just showing up as delaunayn time. I'm not sure how well that time can be trusted, since at the times I was hitting 97%+ memory usage I saw the disk usage spike as it started trying to swap to disk.

So in summary, it seems like with Octave's LU it starts to be faster at higher dimensions for larger arrays, but at a big memory cost.

Nicholas Jankowski <nrjank>
Group Member
Wed 30 Jun 2021 09:13:35 AM UTC, comment #17: 

Thanks. I filled a separate bug report for that.
https://savannah.gnu.org/bugs/index.php?60859

Dmitri.
--

Dmitri A. Sergatskov <dasergatskov>
Wed 30 Jun 2021 08:51:07 AM UTC, comment #16: 

I think the issue mentioned in comment #15 is a bug with sparse. Simple example:

a=sparse(1);
a(1)=single(20);

To fix change the line with an issue to

eqs(logical (kron (speye (nt, nt), true (nd-1))))=double(edgvec.'(:));


Anonymous
Wed 30 Jun 2021 08:25:40 AM UTC, comment #15: 

I am somewhat lost with all the changes being made, and perhaps it is irrelevant, but the LU code from comment #4 cannot handle single-precision input:


 octave -q -f
warning: function /home/dima/scratch/delaunayn.m shadows a core library function
octave:1> x = single(rand(1000,1)*4-2);
octave:2> y = single(rand(1000,1)*4-2);
octave:3> z = single(rand(1000,1)*4-2);
octave:4> p = single(rand(1000,1)*4-2);
octave:5> n = 10; tic; for idx = 1:n, delaunayn([x,y,z]); endfor; toc
error: operator =: no conversion for assignment of 'float matrix' to indexed 'sparse matrix'
error: called from
    delaunayn at line 114 column 54
octave:6> n = 10; tic; for idx = 1:n, delaunayn([x,y]); endfor; toc
Elapsed time is 0.0229518 seconds.
octave:7>


Dmitri A. Sergatskov <dasergatskov>
Wed 30 Jun 2021 04:42:15 AM UTC, comment #14: 

Attached is a mildly corrected patch - the 2D and 3D cases didn't need the /2 and /6 (I forgot those factors were dropped with the volume/volume calculation), and I had left a few typos in both the 2D and 3D versions. Otherwise it's still the same.
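(A quick illustration of why those /2 and /6 factors drop out of a ratio test; X here is just a random stand-in for one simplex's edge-vector matrix:)

ndim = 3;
X = rand (ndim);                                  # edge vectors of one 3-D simplex, as rows
edge_len = sqrt (sumsq (X, 2));
vol     = abs (det (X)) / factorial (ndim);       # simplex volume
vol_ref = prod (edge_len) / factorial (ndim);     # right-angle reference simplex volume
assert (vol / vol_ref, abs (det (X)) / prod (edge_len), -10*eps)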

I'll reexamine the timing tomorrow. When I ran the tests where I manually wrote out the explicit determinants, I still had the incorrect (and maybe faster) tol checks in there. I'm wondering if a correct tol check would make it and the LU approach comparable in speed. I also expected to see the LU decomp do better after ~5 or 6 dimensions, but the numbers below were showing the explicit version to still be ~2x faster.

The easiest question - is the small-simplex test necessary? For Matlab compatibility it is, as they throw out zero-volume simplexes, with floating-point error making 'zero volume' mean less than some arbitrary distance from zero. Maybe we could make an optional flag to disable it, but it needs to be there by default.

I think one simple case needing small-simplex removal is a simple cube with a center point:


A = [0 0 0;
     1 0 0;
     0 1 0;
     0 0 1;
     1 1 0;
     1 0 1;
     0 1 1;
     1 1 1;
     0.5 0.5 0.5];

delaunayn(A), size(ans)
ans =

   5   9   2   1
   5   9   3   1
   6   9   2   1
   6   9   4   1
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

ans =

   12    4


the size before small simplexes were removed, however:

T =

   5   3   2   1
   5   9   2   1
   5   9   3   1
   6   4   2   1
   6   9   2   1
   6   9   4   1
   6   5   8   2
   6   5   9   2
   6   5   8   9
   7   9   3   1
   7   9   4   1
   7   5   9   3
   7   5   8   9
   7   6   9   4
   7   6   8   9

debug> size(T)
ans =

   15    4


with the volumes:

vol =

        0
   0.5000
  -0.5000
        0
  -0.5000
   0.5000
        0
   0.5000
   0.5000
   0.5000
  -0.5000
  -0.5000
  -0.5000
   0.5000
   0.5000


it apparently created three zero volume tetrahedra from the cube faces:

>> A([5 3 2 1], :)
ans =

   1   1   0
   0   1   0
   1   0   0
   0   0   0

>> A([6 4 2 1], :)
ans =

   1   0   1
   0   0   1
   1   0   0
   0   0   0

>> A([6 5 8 2], :)
ans =

   1   0   1
   1   1   0
   1   1   1
   1   0   0


(file #51627)

Nicholas Jankowski <nrjank>
Group Member
Wed 30 Jun 2021 01:02:30 AM UTC, comment #13: 

Looking at how expensive the small simplex check is (and it is not vital to the method) would it be a good idea to have another input from the user which could default to true and only do that part of the algorithm if the input is true?

Anonymous
Wed 30 Jun 2021 12:39:09 AM UTC, comment #12: 

Theoretical basis of the check eps(max(R))
The check I proposed is a way of estimating the rank or finding the rows which cause the matrix to be singular or nearly singular. The rank function default tolerance is

tol = max (size (A)) * sigma(1) * eps;

where sigma(1) is a norm of the matrix found from the singular value decomposition. So max(R) is an estimate of the matrix norm. For this application as long as this estimate is within a few orders of magnitude of the true matrix norm I don't imagine any problems. We could use a 1, inf, Frobenius, or max norm at a small additional computational expense. I don't see the need for making the tolerance a function of every simplex, a deliberate difference in the simplex size of over 10 orders of magnitude would be required to make any difference. This method could also be applied using QR or SVD decompositions but it would be more expensive. In summary, the tolerance check I suggested is based on checking which rows are causing the matrix to be nearly singular rather than any geometrical arguments.

Suggestions

  • If idx is not empty we should display a warning to the user. One of the main uses for Delaunay is mesh generation, removing simplexes could cause issues. A warning would potentially get the user to check their points and fix issues with the point locations. An example would be a nearly duplicated point and it would be better to remove the nearly duplicated point and re-mesh than to remove some simplexes.
  • The comment in the latest patch about the determinant should say that it will find the determinant of one simplex; finding all of them is more complicated, see the end of comment #1.
  • Do we even need to remove zero volume simplexes, they do cause problems when solving partial differential equations on a mesh containing them but changing the location of the points typically allows the Delaunay algorithm to find a mesh without low volume simplexes.


Can we check the time data with what we should expect:

  • LU: I expect nt*(nd-1)^3 plus some overhead
  • Leibniz rule repeated 2x2 det: nt*((nd-1)!)

So around six dimensions, the LU method should become faster and it should become much faster for higher dimensions.
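As a rough sanity check on those estimates (constants and the qhull call itself ignored):

nd = 3:9;                          # number of simplex vertices (dimension + 1)
lu_ops  = (nd - 1).^3;             # ~ per-simplex cost of the LU route
det_ops = factorial (nd - 1);      # ~ per-simplex cost of the Leibniz expansion
[nd; lu_ops; det_ops]              # the factorial overtakes the cube around nd - 1 = 6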

Anonymous
Tue 29 Jun 2021 07:09:24 PM UTC, comment #11: 

Oops, bad headers in that patch; the correct one is attached.

(file #51626)

Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 07:06:09 PM UTC, comment #10: 

Attached is a version of the patch that removes some unneeded assignments from 2D (no real need for ptsz and separate p1, p2, etc.) and adds a similar path for 3D. The 3D tol check matches 2D by calculating the volume of an equivalent right-angled tetrahedron, and I tweaked a few comments.

see attached.

(file #51625)

Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 06:18:22 PM UTC, comment #9: 

Based on the timing, and the likelihood that 2D and 3D will be the most heavily used, I'd recommend we keep both 2D and 3D broken out as vector-based determinants (with cross() and dot(cross(...)), respectively).

Then we just need to make sure the < tol check is consistent.

Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 05:05:46 PM UTC, comment #8: 

That is a pity. It could probably have saved a lot of work, had we found bug #53942 before. Sorry for not noticing earlier.
There are probably a lot of other good patches on the tracker that went unnoticed for years.

With respect to the patch here. I haven't tested it as of now.
But I am wondering why it is possible to check all simplices against the same `eps(max(R))`. The previous algorithm used a reference that was different for each simplex.
But maybe I am missing something obvious.

The patch did apply for me. I amended it with a few comments and fixed what I think was a minor inaccuracy in the 2-D case. See the attachment.
Is that all OK?

We might want to tweak the introductory comment a bit more once I better understand the part about `eps(max(R))`.

If the number of simplices is very high, would it make sense to break the calculation into subsets of these simplices and use the new algorithm for each subset? Or is this overkill?


(file #51624)

Markus Mützel <mmuetzel>
Group administrator
Tue 29 Jun 2021 04:21:16 PM UTC, comment #7: 

The patch in comment #4 applies cleanly to default on my machine, and delaunayn appears to run fine.

adding to the timing comparisons on discourse:


rand("state", [1:625]');
x = rand(1000,1)*4-2; y = rand(1000,1)*4-2; z = rand(1000,1)*4-2; p = rand(1000,1)*4-2;  q = rand(1000,1)*4-2;r = rand(1000,1)*4-2; s = rand(1000,1)*4-2;

n = 1000; tic; for idx = 1:n, delaunayn([x,y]);endfor;toc


and adjusting dimension and n as practical:


dim   n      existing_loop (s)   expanded_det (s)   factor   block-LU (s)   factor
2D    10000  7.91032             7.69139            1.028    7.85065        1.007
3D    1000   35.6323             1.77464            20.07    3.74609        9.511
4D    100    11.4891             0.686602           16.73    2.03804        5.637
5D    100    39.3493             3.55978            11.05    11.7757        3.341
6D    100    155.825             27.9706            5.571    63.8424        2.440
7D    10     54.9079             16.8202            3.264    32.2846        1.700


I'm surprised to see the significant advantage of the explicit determinant over the block-LU approach. I note, though, that the `<tol` check is still different and may be much simpler in that approach; I don't know if this accounts for some of the difference.


Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 02:52:25 PM UTC, comment #6: 

For the record, I thought I had searched for delaunay bugs/patches when I started this. Apologies for missing bug #53942.

Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 09:42:05 AM UTC, comment #5: 

For the record, bug #53942, file #44397 also proposed the approach with the LU decomposition of the sparse block-diagonal matrix.

A.R. Burgers <arb>
Tue 29 Jun 2021 08:08:29 AM UTC, comment #4: 

As requested, a patch file and an m-file. The last few patches I have sent have not worked, so I completely removed my local copy, redownloaded the latest version, and hope it works this time.

(file #51621, file #51622)

Anonymous
Tue 29 Jun 2021 06:38:46 AM UTC, comment #3: 

@Anonymous from comment #1: That looks great.
Could you please prepare a changeset on top of the current default branch?
If that is not possible, could you provide a modified version of `delaunayn` that implements the algorithm you describe?

Markus Mützel <mmuetzel>
Group administrator
Tue 29 Jun 2021 03:34:31 AM UTC, comment #2: 

Hah, that's great. I had started looking at the LU decomp for just that reason but stopped short of setting up the block-diagonal matrix to make it all work in parallel.

I will look at it and compare with some of the more brute-force stuff I played with just breaking out the determinants (see the Discourse discussion https://octave.discourse.group/t/delaunayn-trivial-triangle-removal-criteria, or the attached modified delaunayn.m).

We were still unsure of the validity of the arbitrary relative-volume check, and I was looking for something in the literature to justify one approach or another.

(file #51620)

Nicholas Jankowski <nrjank>
Group Member
Tue 29 Jun 2021 02:43:20 AM UTC, comment #1: 

I have worked out a way to speed up the check for sliver simplexes without using loops for any number of dimensions. It appears to be faster than the existing method for all but the 2d case and can remove the same simplexes. The check is as follows:


edgvec=pts(T(:,2:end).'(:),:)-kron(pts(T(:,1),:),ones(nd-1,1));
eqs=sparse((nd-1)*nt,(nd-1)*nt);
eqs(logical(kron(speye(nt,nt),true(nd-1))))=edgvec.'(:);
[l u p q]=lu(eqs,"vector");
R=abs(diag(u));
reorderdtriidx=kron(1:nt,ones(1,nd-1))(p)(q);
idx=unique(reorderdtriidx(R<100*(nd-1)*nt*eps(max(R))));


The check is as follows:

  • Calculate the edge vectors of the simplex


edgvec=pts(T(:,2:end).'(:),:)-kron(pts(T(:,1),:),ones(nd-1,1));

  • Place the edge vectors into a block diagonal matrix where each block is for each simplex


eqs=sparse((nd-1)*nt,(nd-1)*nt);
eqs(logical(kron(speye(nt,nt),true(nd-1))))=edgvec.'(:);

  • Find the LU factorization and extract the diagonal; the operations used in the LU factorization do not change the determinant (other than the sign, and we only care about the absolute value), and the product of the diagonal of u from the LU factorization equals the determinant.


[l u p q]=lu(eqs,"vector");
R=abs(diag(u));

  • The lu factorization reorders the matrix so we can reorder the indexes to the simplex blocks to match


reorderdtriidx=kron(1:nt,ones(1,nd-1))(p)(q);

  • Perform the check, the 100 is arbitrary and eps(max(R)) could be replaced by a matrix norm. With


idx=unique(reorderdtriidx(R<100*(nd-1)*nt*eps(max(R))));

using

x=rand(10,3);
x=[x;x-1000*eps];
delaunayn(x);

I get the same indexes as the existing method with the 100*, without it I get a subset of the indexes.

This method is similar to a method I use to find which rows of a matrix are causing it to be singular. I normally use a QR factorization but the QR factorization was significantly slower than the existing method.

If you want to use the existing method, the determinant of a particular triangle (the first one in this case) is

prod(R(reorderdtriidx==1))

or for all determinants in a vector

absdet=prod(reshape(kron(R,ones(1,nt))(reorderdtriidx.'==(1:nt)),nd-1,nt),1)

where R=abs(diag(u)) from the lu factorization.
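A self-contained sanity check of that recipe on random edge-vector blocks (nt, d and ev below are made up for illustration, not taken from an actual triangulation):

nt = 5;  d = 3;
ev = rand (nt*d, d);                                # d edge vectors per simplex, stacked
eqs = sparse (nt*d, nt*d);
eqs(logical (kron (speye (nt), true (d)))) = ev.'(:);
[~, u, p] = lu (eqs, "vector");
R = full (abs (diag (u)));
tri = kron (1:nt, ones (1, d))(p);
absdet = prod (reshape (kron (R, ones (1, nt))(tri.' == (1:nt)), d, nt), 1);
ref = arrayfun (@(i) abs (det (ev((i-1)*d+1 : i*d, :))), 1:nt);
assert (absdet, ref, -1e-8)                         # per-simplex |det| matches a direct loop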

Anonymous
Thu 24 Jun 2021 06:38:13 PM UTC, original submission:  

As discussed in some depth on the Octave Discourse [1], delaunayn.m has a longstanding FIXME to vectorize the check for trivially small simplexes. It currently loops over every shape. Bug #53689 separated out at least the 2D case and vectorized it, showing a significant speedup.

Looking at doing the same for at least 3D if not nD, we noticed the 2D code path does not actually reproduce the algorithm used by the for loop. Vectorizing higher dimensions requires choosing which algorithm to implement, and deciding whether the 2D case needs to be made to match >2D.

'Brief' summary of the differences:
The original code admits the evaluation criterion is arbitrary. It compares each volume to 'a reference volume' and discards volumes that are too small; the choice of 'reference volume', or its equivalent, is where the code paths differ.

The looped nD code takes the shape-defining edge vectors, orthogonalizes them to create a resultant vector, and a matrix division of the volume by that resultant vector produces another vector (in 2D this new vector times the first vector has the same volume as the simplex). The components of the new vector are compared to 'tol' (= 1000*eps), and if all components are smaller than tol, that shape is deemed trivially small and discarded.

2D path: the simplex volume is divided by the length of each triangle edge, producing a resultant length (as if it were a rectangle), and the length of that result is compared to tol. If all such lengths are < tol, that volume is discarded.

So the main difference is that 'tol' is compared either to the calculated vector length (2D) or to its component lengths (nD). A test pushing a 2D example through both code paths shows very different comparisons are made to 'tol'.

The benefit of the nD case is that, for any dimension, tol is compared to 1D lengths. If the 2D approach were applied to nD cases, you would compare tol to lengths, areas, volumes, etc., with increasing dimension.

To the 2D code path's advantage, using rdivide instead of mrdivide is much easier to vectorize and could be extended to >2D fairly easily.

I'd recommend trying to stick with the nD case for dimensional consistency, unless a better argument can be made, although this will make even the 2D case much harder to vectorize.  That said, the code even admits the check is arbitrarily decided, and maybe we can come up with a more self-consistent 'arbitrary' measure.

[1] https://octave.discourse.group/t/delaunayn-trivial-triangle-removal-criteria

(Added the Discourse discussion users and the bug #53689 author to the notification list.)

Nicholas Jankowski <nrjank>
Group Member

 


Attached Files
file #52907:  delaunayn.m added by nrjank (7KiB - text/plain - delaunayn.m file from v8 patch)
file #52303:  bug60818-delaunayn-v8.patch added by nrjank (6KiB - application/octet-stream - patch for comment #53)
file #51877:  bug60818-delaunayn-v7.patch added by nrjank (7KiB - application/octet-stream - minor update for T=0 error)
file #51753:  bug60818-delaunayn-v6.patch added by nrjank (6KiB - application/octet-stream - patch for latest version (comment #49))
file #51751:  delaunayn.m added by nrjank (8KiB - text/plain - laplace speed increase by removing calls to cross)
file #51724:  delaunayn.m added by nrjank (8KiB - text/plain - comment #44, combined laplace expansion up to 5D, LU after.)
file #51642:  delaunayn.m added by None (7KiB - text/plain)
file #51640:  delaunayn.m added by nrjank (7KiB - text/plain - less redundant code between codepaths and has consistent <tol checks , but still very slow vol check scales very badly for nD case)
file #51634:  bug60818-delaunayn-v5.patch added by nrjank (4KiB - application/octet-stream - fix tri ordering)
file #51627:  bug60818-delaunayn-v4.patch added by nrjank (4KiB - application/octet-stream - minor fixes to 2D and 3D)
file #51626:  bug60818-delaunayn-v3.patch added by nrjank (4KiB - application/octet-stream - fixed patch header)
file #51625:  bug60818-delaunayn-v3.patch added by nrjank (4KiB - application/octet-stream)
file #51624:  bug60818-delaunayn-v2.patch added by mmuetzel (3KiB - application/octet-stream)
file #51622:  delaunayn@bug@60818.patch added by None (1KiB - application/octet-stream)
file #51621:  delaunayn.m added by None (5KiB - text/plain)
file #51620:  delaunayn.m added by nrjank (10KiB - text/plain - modified delaunay.m with broken out vectorized determinants up to 7D. still inconsistent volume check method)

 


    Follow 25 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2022-10-25  nrjank      Status          Ready For Test => In Progress
    2022-10-25  nrjank      Dependencies    - => bugs #53397 is dependent
    2022-09-28  nrjank      Dependencies    - => bugs #53942 is dependent
    2022-09-28  nrjank      Status          Patch Submitted => Ready For Test
                            Summary         delaunayn - 2D code path vectorization doesn't match nD algorithm => delaunayn - low performance loop used for >3D code path
    2022-02-22  nrjank      Attached File   - => Added delaunayn.m, #52907
    2021-11-20  nrjank      Attached File   - => Added bug60818-delaunayn-v8.patch, #52303
                            Status          In Progress => Patch Submitted
    2021-09-07  nrjank      Attached File   - => Added bug60818-delaunayn-v7.patch, #51877
    2021-08-09  nrjank      Attached File   - => Added bug60818-delaunayn-v6.patch, #51753
    2021-08-08  nrjank      Attached File   - => Added delaunayn.m, #51751
    2021-08-01  nrjank      Attached File   - => Added delaunayn.m, #51724
    2021-07-03  None        Attached File   - => Added delaunayn.m, #51642
    2021-07-02  nrjank      Attached File   - => Added delaunayn.m, #51640
    2021-07-01  nrjank      Attached File   - => Added bug60818-delaunayn-v5.patch, #51634
    2021-06-30  nrjank      Attached File   - => Added bug60818-delaunayn-v4.patch, #51627
    2021-06-29  nrjank      Attached File   - => Added bug60818-delaunayn-v3.patch, #51626
    2021-06-29  nrjank      Attached File   - => Added bug60818-delaunayn-v3.patch, #51625
    2021-06-29  mmuetzel    Attached File   - => Added bug60818-delaunayn-v2.patch, #51624
                            Status          None => In Progress
    2021-06-29  None        Attached File   - => Added delaunayn.m, #51621
                            Attached File   - => Added delaunayn@bug@60818.patch, #51622
    2021-06-29  nrjank      Attached File   - => Added delaunayn.m, #51620
    2021-06-24  nrjank      Carbon-Copy     - => Added mmuetzel
                            Carbon-Copy     - => Added arb
