bug #61129: Performance of factor(). Proposed patch attached.

Submitted by:  None
Submitted on:  Wed 08 Sep 2021 07:32:42 PM UTC  
 
Category:  Octave Function        Severity:  3 - Normal
Priority:  5 - Normal             Item Group:  Performance
Status:  Patch Submitted          Assigned to:  None
Originator Name:                  Originator Email:  -email is unavailable-
Open/Closed:  Open                Release:  dev
Operating System:  Any


Mon 27 Sep 2021 11:08:52 PM UTC, comment #13: 

No further feedback for a week; requesting testing from the devs. Patch 6 is reattached to this comment, along with an updated factor.m.

(file #51980, file #51981)

Anonymous
Sat 18 Sep 2021 12:41:40 PM UTC, comment #12: 

Hey! No! Stop! Patch 5 turns out to be 1.5% slower than Patch 4, against all expectations. The culprit was the single line "largeprimes = primes(sqrt(q))((length(smallprimes)+1):end);", which tried to avoid redoing the smallprimes division, but the indexing turns out to be slower than not filtering at all. That was the one change from Patch 4 to Patch 5 that I hadn't tested beforehand, and it caused a 1.5% slowdown (several seconds over the whole test data set, verified multiple times from both directions). The slowdown was particularly evident for large primes and for products of large primes.

I've now removed that indexing; simply letting those primes go through the division step again is faster here. The result is 0.6% faster than Patch 4, thanks to using "foo = foo(ii)" instead of "foo(~ii) = []", which is now the only difference from Patch 4.

Patch 6 attached. Use Patch 6, not Patch 5.

(file #51938)

Anonymous
Sat 18 Sep 2021 11:27:57 AM UTC, comment #11: 

Thanks for the improvements in comment #10. I profiled the variants "foo(~ii) = []" and "foo = foo(ii)" over a few million trials: the second takes 3.7 milliseconds where the first takes 4.3 milliseconds, so I've adopted the second variant. The suggestion to use length(smallprimes) instead of max(smallprimes) is a good one -- I've made that change.
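For anyone who wants to reproduce the comparison, a toy benchmark along these lines should do (illustration only -- this is not the exact profiling script behind the numbers above, and the test value and list size are arbitrary):

## Toy comparison of in-place deletion vs. logical indexing; the variable
## names follow the "foo"/"ii" shorthand used in this thread.
foo0 = uint64 (primes (1e6));
q    = uint64 (429632517266727264);
ii   = (mod (q, foo0) == 0);     ## logical mask of the actual factors

tic;
for k = 1:1000
  foo = foo0;
  foo(~ii) = [];                 ## variant A: delete the non-factors in place
endfor
toc

tic;
for k = 1:1000
  foo = foo0;
  foo = foo(ii);                 ## variant B: keep only the factors
endfor
toc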

It's true that calling factor() in a loop for, say, every number from 1 to 1e9 could be a little slower, but as you say the best approach there is to build a list of primes once and not call factor() at all. It would be like a prime sieve, except that instead of removing all multiples of a prime, one would mark that prime as a factor of each of its multiples (a sketch of that idea follows the changeset below). The emphasis on larger n in factor() is deliberate, to keep the code scalable over the whole range, following the recent extension a month ago to handle uint64 inputs beyond flintmax(). That extension was made in this changeset:

changeset:   29983:ecbcc4647dbe
user:        Rik <rik@octave.org>
date:        Tue Aug 17 16:28:36 2021 -0700
summary:     factor.m: Overhaul function to support inputs > flintmax.
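
For what it's worth, the marking idea mentioned above might look roughly like this (an illustrative sketch only, not part of any attached patch; "spf" stands for smallest prime factor and is a name made up for this example):

## Sieve that records, for every n <= N, its smallest prime factor, so that
## all numbers up to N can be factorized without ever calling factor().
N = 1e6;
spf = zeros (1, N);             ## spf(n) = smallest prime factor of n
for p = 2:N
  if (spf(p) == 0)              ## p has not been marked, so it is prime
    m = p:p:N;
    m = m(spf(m) == 0);         ## mark only multiples not yet claimed
    spf(m) = p;
  endif
endfor

## Read off the factorization of one n by following spf repeatedly.
n = 720720;
f = [];
while (n > 1)
  f(end+1) = spf(n);
  n /= spf(n);
endwhile
f                               ## 2 2 2 2 3 3 5 7 11 13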

Until now I had been using the calculator program that came with my Linux installation (GNOME Calculator), which factorizes a number on typing Ctrl-F, but that is only usable interactively, not programmatically. With the current series of improvements in this thread, Octave's interpreted factor() is nearly as fast as that compiled calculator code, which is both nice and programmatically accessible.

Patch 5 attached.

(file #51937)

Anonymous
Fri 17 Sep 2021 08:53:00 PM UTC, comment #10: 

I would not say that it is always a bad idea to put a switch into a for-loop. Sometimes it is even the best option, for instance here, where one wants to execute the same code with different prime lists without calling a function. I agree that your solution with a function is definitely more elegant, and in a compiled language the compiler would take care of making it efficient (for example by inlining). Octave is not compiled, and function calls really do cost more than just the execution of the code they contain; that's why I deliberately avoided this solution in patch2. However, I did not test it, and it is only two function calls, which probably do not make much difference -- in any case not when you factorize numbers beyond flintmax.

But this is my other point: I think you put far too much weight on the behaviour at such large inputs. I think most users who have to factorize a given very large number once (like the case you describe in comment #6) would just go to Wolfram Alpha. I was not even aware that factor.m works at all with uint64 input beyond flintmax (I think quite a number of functions in Octave fail either silently or noticeably in that case). Yes, factorizing a single small number once returns immediately, both with the original version and with any of the patches. For me, however, the main issue is when factor.m is called repeatedly in a loop with different inputs of reasonable magnitude -- but of course in that situation it would clearly be better to compute a large list of primes once and reuse it.

And since we are talking about good and bad programming style, the only comment I still have on your patch4 concerns the note starting in line 69: indeed, I would call the commented-out command in line 70 an example of an Octave anti-pattern -- it has to make a copy of largeprimes, resize it, and overwrite largeprimes. The command in line 72 is slightly better, as it generates the new largeprimes directly from the index vector. However, both are quite inefficient, as the comparison has to be done for every prime and the copying is unnecessary. The most obvious solution would be to do

largeprimes = primes (sqrt (q))(1+length (smallprimes):end);

in line 68 -- in this case there is no copying: the result of primes() never enters the scope but stays in memory, and the resulting largeprimes is a reference to this object, starting at some later element. In all probability it won't make much difference, as these first elements are immediately thrown out in reducefactors(), but at least I would expect it not to slow down the execution. And the corresponding line

divisors (mod (q, divisors) ~= 0) = []; # throw out non-factors

is the next Octave anti-pattern; I suggest changing it to

divisors = divisors (mod (q, divisors) == 0); # keep only factors

Michael Leitner <mleitner>
Thu 16 Sep 2021 09:25:28 PM UTC, comment #9: 

Attaching full time performance graphs for Base and all four patches submitted so far.

Patch4 is by far the most scalable and fastest across the entire range of 0 <= n <= intmax ("uint64").

I could not get a plot legend to render reliably, so here is the key:

Base: thick black solid line with circles
Patch 1: thin blue dashed line
Patch 2: thin red dotted line with + signs
Patch 3: thin green dash-dot line with asterisks
Patch 4: thick magenta solid line with circles


Anonymous
Thu 16 Sep 2021 05:03:06 PM UTC, comment #8: 

I'm attaching Patch 4 and an updated factor.m to this comment. This version is about 4x faster than the previous ones: it changes the order of the divisions and uses the same type everywhere, eliminating repeated type conversions. It consistently pulls out factors in ascending order, making the final sort unnecessary. I also moved the division routine into an internal function that is simply called twice -- once for smallprimes and once for largeprimes -- without a loop in the main function body.
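For readers without the patch in front of them, the shape of that internal helper is roughly the following (a simplified sketch, not the actual code in Patch 4; the name reducefactors is the one used in the patch, the body here is illustrative):

## Divide out of q every prime in "divisors" as often as possible, returning
## the factors found (in ascending order) and the reduced remainder.
function [f, q] = reducefactors (q, divisors)
  f = cast ([], class (q));
  divisors = cast (divisors, class (q));        ## one type everywhere, no repeated conversions
  divisors = divisors(mod (q, divisors) == 0);  ## keep only primes that actually divide q
  for p = divisors
    while (mod (q, p) == 0)
      f(end+1) = p;
      q /= p;                                   ## exact for integer types when p divides q
    endwhile
  endfor
endfunction

The main body then amounts to something like [f1, q] = reducefactors (q, smallprimes) followed by [f2, q] = reducefactors (q, largeprimes).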

Regarding the structure of Patch 2: it's generally a bad idea to put an if(i) or switch(i) statement inside a deterministic "for i" loop. That is the for-case anti-pattern, also called the loop-switch sequence (https://en.wikipedia.org/wiki/Loop-switch_sequence), especially since the "for i = 1:2" loop in Patch 2 is fully deterministic and does not depend on external input. I deliberately avoided that structure in Patch 3 and Patch 4.
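In miniature, the pattern in question looks like this (a hypothetical, self-contained example for illustration, not code taken from Patch 2 or any other attachment):

## For-case antipattern: the branch inside the loop depends only on the loop
## index, so the "loop" is really just two distinct blocks in disguise.
q = uint64 (123456789012);
smallprimes = uint64 ([2 3 5 7 11 13 17 19 23 29]);
f = uint64 ([]);
for i = 1:2
  if (i == 1)
    divisors = smallprimes;                          ## round 1: tiny hard-coded list
  else
    divisors = uint64 (primes (sqrt (double (q))));  ## round 2: full sieve on the reduced q
  endif
  for p = divisors
    while (mod (q, p) == 0)
      f(end+1) = p;
      q /= p;
    endwhile
  endfor
endfor
if (q > 1)
  f(end+1) = q;   ## leftover factor larger than sqrt of the reduced q
endif
f

Unrolled, the same work is just two consecutive calls to a shared helper, which is how Patch 4 is structured.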

Regarding the repeated division by smallprimes in the second round of Patch 3: I had already tried filtering them out before making Patch 3, but that test-and-filter was slower than simply dividing again, so I removed it before attaching Patch 3. In Patch 4 I've added comments recording what has been tested and what future programmers might want to tweak, particularly around persistent variables and parameter ranges.

Attachments: Patch4 and factor.m with Patch4 applied.

Passes all Octave checks in "make check".

(file #51925, file #51926)

Anonymous
Thu 16 Sep 2021 06:48:05 AM UTC, comment #7: 

Yes, as I wrote below, by using in the first round a primes list whose length depends on the input, you can cut down the average execution time. This makes the code more complex, but yes, a simple if won't cost much time.

However, from a general code-readability point of view I would still say it is much better to see this as two rounds of the same algorithm: the first with a small list of primes to reduce q, and the second with the full list of all possible primes. So the approach of patch2 is to be preferred from my point of view, while in your patches the two rounds are distinct pieces of code that even use different operations (mod as opposed to rem, and testing single primes as opposed to the full list). And note that in your patch3 you test the first primes in both rounds.

You can modify patch2 to make it as efficient as patch3 (or even slightly more efficient, as every prime is then tested only once) by inserting your q >= 20e9 distinction into the i==1 case, exporting the length of the resulting smallprimes list in Nsmall, and using the part (Nsmall:end) of the full primes list in the else branch. Casting the smallprimes list to the class of the input is a good catch, and of course you can gain additional efficiency for small inputs by using a longer hard-coded list of primes. A hundred entries is not excessive, I would say; indeed, primes.m itself hard-codes these first hundred primes for efficiency.

Michael Leitner <mleitner>
Wed 15 Sep 2021 12:27:49 AM UTC, comment #6: 

Patch 2 is good at the low end, 1 <= n <= 1e6 (execution times of 0.16 milliseconds or less).

Patch 1 scales much better for n >= 10e9 (execution times of 0.3 milliseconds and up).

I've made a new Patch 3 that combines aspects of Patch 1 and Patch 2, switching between the two algorithms based on the input value. Feel free to improve it.

At the low end (all values from 1 to 1e6), the Base version averages 154 microseconds per factorization, Patch 3 takes 164 microseconds, and Patch 2 takes 123 microseconds.

As n increases, Patch 3 takes over, reducing 33-second runtimes under the Base version to 0.3 seconds; that 100-fold saving is absolutely worth the 10-microsecond penalty at the low end.

For background, what motivated me to speed up factor() was trying to factorize 429632517266727264, the number of solutions to 4-coloring a certain graph. The Base version of factor() took 10.85 seconds; Patch 1 and Patch 2 both finish it in 43 milliseconds, and now Patch 3 does as well. The rationale is therefore to emphasize speeding up execution for n >= 1e14, because smaller numbers really don't take much time to factorize (sub-millisecond up to 1e9 or 1e10).

I did stratified sampling with 1600 data points spread over 16 orders of magnitude, 100 points per decade. Raw data attached. Timing information is attached as two graphs plotting the median execution time for each order of magnitude. In both graphs, the black circles are Base, the blue dots are Patch 1, the red plus signs are Patch 2, and the green asterisks are Patch 3.
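The sampling scheme described above can be sketched as follows (a hypothetical reconstruction for illustration, not the exact benchmark script; only one implementation is timed here, and the top decades take a long time with the unpatched factor()):

## 100 random uint64 values per decade, 16 decades, median time per decade.
nper = 100;
t = zeros (16, nper);
for d = 1:16
  lo = 10^(d - 1);
  hi = 10^d;
  n = uint64 (lo + (hi - lo) .* rand (1, nper));  ## stratified sample for this decade
  for k = 1:nper
    tic;
    factor (n(k));
    t(d, k) = toc;
  endfor
endfor
median (t, 2)   ## one median execution time per order of magnitude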

Regarding the failed assertions for Patch 2: it returns doubles in some cases, so the product loses precision even when the prime factors themselves are smaller than flintmax:

octave:29> foo = uint64 (33333333333333338)
foo = 33333333333333338


octave:32> myfactor(foo)
ans =
                2               29  574712643678161

octave:33> class(myfactor(foo))
ans = uint64
octave:34> prod(myfactor(foo),"native")
ans = 33333333333333338
octave:35> prod(myfactor(foo),"native") == foo
ans = 1


octave:36> myfactor2(foo)
ans =
                       2                      29         574712643678161

octave:37> class(myfactor2(foo))
ans = double
octave:38> prod(myfactor2(foo),"native")
ans = 3.333333333333334e+16
octave:39> prod(myfactor2(foo),"native") == foo
ans = 0


octave:40> prod(uint64(myfactor2(foo)),"native") == foo   ## extra cast to uint64 fixes it
ans = 1

(file #51913, file #51914, file #51915, file #51916)

Anonymous
Tue 14 Sep 2021 07:54:29 AM UTC, comment #5: 

With respect to performance: I disagree. First, I would not quote three significant digits as a general performance increase when you test with a very specific sample.

Yes, at large (to be specific: extremely large) inputs your solution will be faster, because your approach uses a larger list of primes in the first round, which will sometimes reduce q so that the prime sieve in the second round only has to run to a significantly smaller size, while my approach always uses just ten primes. Of course you are free to enlarge the hard-coded list, and you could even adopt your approach of using a list whose length depends on the input. However, this needs more operations (in your solution, a rational power), and it is questionable whether this is beneficial in the typical use case. Indeed, one of the two distinguishing points of my patch is to use a hard-coded list rather than computing a prime sieve. And I would say that in the typical use case this is beneficial:

tic;for i=1:10000 factor(i);end;toc

takes 3.9 seconds with the existing factor, 5.5 seconds with your patch, and 3.3 seconds with mine (and if we extend the hard-coded list to 100 entries, it drops to 2.8 seconds).

Now to accuracy: in fact, the second distinguishing point of my patch was to use the same code for both runs (first with the small list of primes, then with the full list computed by the prime sieve according to the reduced q) -- it iteratively takes the list of primes p that fulfill rem (q, p) == 0 and divides by their product. In your case, the first run steps through the small list one by one and divides only by the first power of a prime pp if it fulfills mod (q, pp) == 0 (which, for positive q and pp, should be equivalent to rem, I think).

Now, the input 15999999832000000434 that you mention (for those who want to try it: you really have to compute it starting from the square of the uint64, as it is beyond flintmax, and I know of no way to write uint64 literals in Octave) factorizes as 2*3*2666666638666666739. Indeed, the order of divisions differs between the three versions, but as I understand it, if at least one operand of an operation is a uint, the result will be a uint, and if it is exactly representable (which should be the case for divisions by actual divisors), it should be the arithmetically correct result. Thus, I do not understand how the three versions can differ. I cannot do the calculation myself, as my machine does not have enough memory to compute the prime sieve. Can you tell me the respective results?
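
The two points about uint arithmetic can at least be checked directly at the prompt, without running the full factorization (illustration only; the factor 2666666638666666739 is the one quoted above):

## For nonnegative integer operands rem and mod agree, and exact uint64
## divisions stay exact even beyond flintmax.
q  = uint64 (3999999979)^2 - 7                      ## 15999999832000000434
isequal (rem (q, uint64 (3)), mod (q, uint64 (3)))  ## 1: rem == mod for nonnegative q
q2 = q / uint64 (2)                                 ## 7999999916000000217, still uint64
q3 = q2 / uint64 (3)                                ## 2666666638666666739
prod ([2 3 q3], "native") == q                      ## 1: the product is exact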

Michael Leitner <mleitner>
Sun 12 Sep 2021 11:28:02 PM UTC, comment #4: 

Patch1 from comment #1 is faster than Patch2 from comment #3 by a factor of 2.63x.

Patch1 is also faster than Base by a factor of 6.76x.

Benchmark results:

Input value                 Base time        Patch1 time      Patch2 time
-------------------------------------------------------------------------
15999999832000000431        78.796292         0.057512        22.501813
15999999832000000432        80.945415         0.033662         0.888980
15999999832000000433        81.235536         0.003839        13.643711
15999999832000000434        80.566282        29.735831        30.042800
15999999832000000435        80.369279         0.162935        32.149949
15999999832000000436        80.402935         0.576786        36.607844
15999999832000000437        80.473461         0.003428        43.228909
15999999832000000438        79.911322         5.508081        55.743914
15999999832000000439        80.216414         0.013743        25.881780
15999999832000000440        80.265364         0.003458         0.423624
15999999832000000441        80.262461        80.191746        80.550838
15999999832000000442        80.185449         1.285730        55.418297
15999999832000000443        79.920971         0.255986         0.251054
15999999832000000444        79.780322        36.028808        36.207622
15999999832000000445        79.810045         2.573832        31.841480
15999999832000000446        79.860735         0.460476         9.549184
15999999832000000447        79.950774         0.107162        80.169429
15999999832000000448        79.845526         0.033626         7.154385
15999999832000000449        79.772000        11.908558        11.971591
15999999832000000450        79.687228         0.005464         0.815687
15999999832000000451        79.847437        79.790655        80.241356

Times:
ans = 1682.105248451233    ## Base
ans =  248.741317987442    ## Patch1
ans =  655.2842469215393   ## Patch2

Also, Patch2 fails the assertion for the input value 15999999832000000434, which has a prime factor larger than flintmax(). Patch1 and Base both pass the assertions.

Benchmark code:

p = uint64 (3999999979) ^ 2; ## large prime^2, slow to factorize
lo = p-10;
hi = p+10;

pos = 0;
for i = lo:hi
  pos += 1;

  ## base factor()
  tic
  f = factor (i);
  t0(pos) = toc;
  p0(pos) = prod(f, "native");

  ## Patch1
  tic
  f = myfactor (i);
  t1(pos) = toc;
  p1(pos) = prod(f, "native");

  ## Patch2
  tic
  f = myfactor2 (i);
  t2(pos) = toc;
  p2(pos) = prod(f, "native");

  fprintf (1, "%u\t%f\t%f\t%f\n", i, t0(pos), t1(pos), t2(pos));
end
sum(t0)
sum(t1)
sum(t2)

assert (all(p0 == (lo:hi)))
assert (all(p1 == (lo:hi)))
assert (all(p2 == (lo:hi)))

Anonymous
Sat 11 Sep 2021 09:52:00 PM UTC, comment #3: 

I would say yes, the performance is worth the additional code.

Consider also my attached version: it is about the same amount of additional code, potentially slightly faster (only one invocation of primes), and perhaps logically easier to understand, as the same algorithm is used in two stages.

(file #51893)

Michael Leitner <mleitner>
Thu 09 Sep 2021 01:07:13 AM UTC, comment #2: 

Sorry for a typo in the previous comment. For the assert statement, please use this instead:

  assert (prod(f,"native") == p+i)  ## check for correctness

The patch is still the same as in comment #1.

Anonymous
Thu 09 Sep 2021 01:01:50 AM UTC, comment #1: 

After some more hours of testing, the patch is improved (improved patch attached). Please test it and check for correctness.

Updated code to test performance with and without patch:

p = uint64 (3999999979) ^ 2; ## large prime^2 near 2^64
pos = 0;
t = [];
for i = -100:+100
  tic
  f = factor (p+i);
  t(++pos) = toc;
  assert (prod(f) == i)  ## check for correctness
  disp([i t(pos)])
end
sum(t)  ## performance metric

Results:

With patch: about 44 minutes.
Without patch: longer than 2 hours.

Please comment if this is an acceptable approach.

(file #51879)

Anonymous
Wed 08 Sep 2021 07:32:42 PM UTC, original submission:  

The function factor() could be much faster for certain large inputs. This was discovered in the following real-world case:

T = uint64(429632517266727264)
tic; factor(T), toc
tic; factor(T / 2^5), toc
tic; factor(T / 2^5 / 3^3), toc

The execution times are, respectively:

Elapsed time is 9.79291 seconds.
Elapsed time is 0.869242 seconds.
Elapsed time is 0.112411 seconds.

A more pathological case is this:

T = uint64(6) ^ 62
tic; factor(T), toc
tic; factor(T / 2^5), toc
tic; factor(T / 2^10), toc

which takes respectively

Elapsed time is 42.4696 seconds.
Elapsed time is 6.06825 seconds.
Elapsed time is 0.526102 seconds.

The bottleneck seems to be the computation of all prime numbers up to sqrt(n) inside scripts/specfun/factor.m, even when n is divisible by small numbers like 2 or 3. This suggests a special-case workaround: repeatedly divide n by 2, then repeatedly by 3, and pass the resulting number, which has neither 2 nor 3 as a factor, to the existing code. This yields a 10x to 80x speedup at the cost of a few extra lines of code. It can be extended to other small primes like 5 and 7, but with diminishing returns. Question to the developers: is this performance gain worth the extra code? If so, a proposed patch to factor.m is attached.
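
In outline, the proposed special case looks like this (a sketch of the approach described above, not the attached patch itself; factor_sketch is just a name for the example, and input validation plus the trivial n <= 1 case are omitted):

## Strip out factors of a few small primes first, then hand the much smaller
## remainder to the existing sieve-based code.
function f = factor_sketch (n)
  f = cast ([], class (n));
  for p = cast ([2 3 5 7], class (n))
    while (mod (n, p) == 0)
      f(end+1) = p;
      n /= p;
    endwhile
  endfor
  if (n > 1)
    f = [f, factor(n)];   ## existing factor() now only sieves up to sqrt of the remainder
  endif
endfunction

For the first example above, this strips out 2^5 and 3^3 and hands 497259857947601 to factor(), so the sieve only needs primes up to roughly 2.2e7 instead of about 6.6e8.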

Anonymous

 


Attached Files
file #51980:  patch6.patch added by None (5KiB - text/x-patch)
file #51981:  factor.m added by None (8KiB - text/x-objcsrc)
file #51938:  patch6.patch added by None (5KiB - text/x-patch)
file #51937:  patch5.patch added by None (5KiB - text/x-patch)
file #51931:  factorperflinear2.png added by None (66KiB - image/png)
file #51932:  factorperflog2.png added by None (126KiB - image/png)
file #51925:  factor.m added by None (8KiB - text/x-objcsrc)
file #51926:  patch4 added by None (5KiB - application/octet-stream)
file #51913:  patch3 added by None (1KiB - application/octet-stream)
file #51914:  factorout added by None (76KiB - application/octet-stream)
file #51915:  factorperflog.png added by None (91KiB - image/png)
file #51916:  factorperflinear.png added by None (56KiB - image/png)
file #51893:  patch2 added by mleitner (1KiB - application/octet-stream)
file #51879:  factor.m.patch added by None (1000B - text/x-patch)
file #51878:  factor.m.patch added by None (970B - text/x-patch)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by siko1056 (Updated the item)
  • -email is unavailable- added by mleitner (Updated the item)

16 latest changes:

    Date Changed by Updated Field Previous Value => Replaced by
    2021-09-27 None Attached File- => Added patch6.patch, #51980
        Attached File- => Added factor.m, #51981
    2021-09-18 None Attached File- => Added patch6.patch, #51938
    2021-09-18 None Attached File- => Added patch5.patch, #51937
    2021-09-16 None Attached File- => Added factorperflinear2.png, #51931
        Attached File- => Added factorperflog2.png, #51932
    2021-09-16 None Attached File- => Added factor.m, #51925
        Attached File- => Added patch4, #51926
    2021-09-15 None Attached File- => Added patch3, #51913
        Attached File- => Added factorout, #51914
        Attached File- => Added factorperflog.png, #51915
        Attached File- => Added factorperflinear.png, #51916
    2021-09-13 siko1056 StatusNone => Patch Submitted
    2021-09-11 mleitner Attached File- => Added patch2, #51893
    2021-09-09 None Attached File- => Added factor.m.patch, #51879
    2021-09-08 None Attached File- => Added factor.m.patch, #51878
