bug #52809: interpreter performance is slow on development branch

Submitted by:  Rik <rik5>
Submitted on:  Thu 04 Jan 2018 06:31:00 PM UTC  
 
Category:  Performance
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Regression
Status:  Ready For Test
Assigned to:  None
Originator Name:
Open/Closed:  Open
Release:  dev
Operating System:  Any


Wed 10 Jan 2018 05:47:42 PM UTC, comment #23:

A new bug report related to the cputime() operating-system interface is here (I noted there that it is low priority and tentative):

https://savannah.gnu.org/bugs/index.php?52858

Dan Sebald <sebald>
Wed 10 Jan 2018 04:44:21 PM UTC, comment #22:

Yes, move this to a new bug report.

For reference, I used these two loops while testing performance:

bm_for_loop.m

bm_for_loop2.m

The first one uses cputime, the second uses wall time.

Results:

As you can see, wall time was less than cputime, which doesn't make sense to me, but maybe that is my misunderstanding.

Rik <rik5>
Project Administrator
Wed 10 Jan 2018 02:05:54 PM UTC, comment #21:

There is a clock_gettime replacement in gnulib, so we should use a wrapper for that function if we decide to change. On systems where clock_gettime is missing, gnulib calls gettimeofday.

I don't think filling in two values in a struct is a big issue, so I don't see a problem with having a C++ class to encapsulate the results. The overhead of calling a DEFUN-defined function in Octave is always going to be much larger than that, no matter what we do.

But anyway, yes, any problems with cputime should be a different bug report.

John W. Eaton <jwe>
Project Administrator
Wed 10 Jan 2018 05:39:17 AM UTC, comment #20:

Well, cputime() is too important a function to leave to the vagaries of old Unix time management and system interfaces.

It sounds to me like the issue is that some people may not be getting good time resolution with cputime() because their system's support for the routine we are using is lacking. Some of these older routines have a reference called CLOCK_PROCESS_CPUTIME_ID, which I've grepped for but don't see in Octave's source code. Just the fact that there is such a setting suggests that, yeah, different systems could have different resolution... an undesirable result from Octave's cputime() perspective.

Here is a good discussion on various time routines in Unix:

https://stackoverflow.com/questions/12392278/measure-time-in-linux-time-vs-clock-vs-getrusage-vs-clock-gettime-vs-gettimeof

and it sounds as though the latest Unix routine is this guaranteed-to-be-high-resolution clock_gettime:

https://linux.die.net/man/3/clock_gettime

That routine takes a selectable clock ID, of which we'd be interested in CLOCK_PROCESS_CPUTIME_ID. It returns the information in a struct timespec, with tv_sec and tv_nsec members.

I grepped for tv_sec and tv_nsec and I see such elements in the Octave code, quite a lot actually. However, the usage looks to be date/time-related code. From what I looked at yesterday regarding cputime(), the impression I got was that getrusage() is the means of determining cputime(), and getrusage() is one of the routines whose resolution can vary from system to system.

So, perhaps we need to switch the cputime() routine to using the clock_gettime() C function (and keep any C++ objects to a minimum, just relay the system-function output to the OV output).
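
Roughly what I have in mind, as a minimal standalone sketch for a POSIX system (not Octave code; names and error handling are just illustrative):

#include <cstdio>
#include <ctime>

// Sketch: per-process CPU time via the POSIX high-resolution clock.
// CLOCK_PROCESS_CPUTIME_ID counts CPU time consumed by this process.
// (Older glibc may require linking with -lrt.)
int main ()
{
  struct timespec ts;

  if (clock_gettime (CLOCK_PROCESS_CPUTIME_ID, &ts) != 0)
    return 1;

  double cpu_seconds = ts.tv_sec + ts.tv_nsec / 1.0e9;

  std::printf ("cpu time: %.9f s\n", cpu_seconds);

  return 0;
}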

What do people think? Should we open a different bug report for this?

Dan Sebald <sebald>
Tue 09 Jan 2018 06:30:50 PM UTC, comment #19:

It is the underlying Linux system that is the problem.

Michael Godfrey <godfrey>
Project Member
Tue 09 Jan 2018 06:06:13 PM UTC, comment #18:

Re comment #16: what are your memories of the flaky cputime() function? It does appear there is a lot of overhead associated with the Octave cputime() routine, considering that we are looking for as accurate a number as possible. That is, there is use of a fairly large object

which in turn eventually uses

I.e., the Unix system routine getrusage(). Of course, the Unix getrusage() is needed for Octave's getrusage() routine. However, I wonder if for cputime() the code should try to be as minimal and direct as possible--maybe use some static object as opposed to a stack-based one. For example, there is

http://www.tutorialspoint.com/unix_system_calls/times.htm

which seems to provide the necessary info for cputime(). Perhaps the larger getrusage() is a less efficient methodology and may have some peculiarities.
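
For reference, the getrusage()-based measurement that cputime() apparently relies on boils down to roughly this standalone sketch (illustrative only, not the actual Octave code):

#include <cstdio>
#include <sys/resource.h>

// Sketch: per-process CPU time from getrusage().  The resolution of the
// ru_utime/ru_stime timevals is system-dependent, which may explain the
// differences people are seeing.
int main ()
{
  struct rusage ru;

  if (getrusage (RUSAGE_SELF, &ru) != 0)
    return 1;

  double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1.0e6;
  double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1.0e6;

  std::printf ("user %.6f s, system %.6f s, total %.6f s\n",
               user, sys, user + sys);

  return 0;
}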

I don't know, there is a lot going on there for me to fully understand.

Dan Sebald <sebald>
Tue 09 Jan 2018 05:34:17 PM UTC, comment #17:

#15 Really? I'm seeing millisecond resolution here. I think that is typical these days. Each invocation of cputime seems to take about 4 ms.

Dan Sebald <sebald>
Tue 09 Jan 2018 05:30:26 PM UTC, comment #16:

cputime seems always to have been flaky. But clock time is (nearly)
always an overestimate of the CPU used. So, with care, either is as
good as it gets...

Michael Godfrey <godfrey>
Project Member
Tue 09 Jan 2018 04:53:42 PM UTC, comment #15:

I was getting unreliable results with cputime, which is why I preferred tic/toc. Possibly just my hardware setup, but cputime didn't appear to have as high-resolution a timer as the straight wall clock from tic/toc.

Rik <rik5>
Project Administrator
Tue 09 Jan 2018 04:28:24 PM UTC, comment #14:

Yes, cputime is what I prefer. tic/toc was the original poster's choice. I did run the tic/toc numbers a second time to confirm nothing happened system-wise. Plus, there are a lot of cores on this system, so I typically don't have issues unless some system resource is utilized (as opposed to simple looping). I can use cputime() from now on.

Dan Sebald <sebald>
Tue 09 Jan 2018 03:57:15 PM UTC, comment #13:

Just to be clear: tic/toc measures elapsed (wall-clock) time;
cputime provides [total, user, system].

It might be a bit better here to use cputime, just in case
something else is going on in the system.

Michael Godfrey <godfrey>
Project Member
Tue 09 Jan 2018 07:35:08 AM UTC, comment #12:

I'll add another column to the previous times I posted:

Observations, going from first patch to second patch:

1) The empty-command-list case shows the same times.

2) The case of a constant within the loop shows 10% improvement.

3) The case of assigning a scalar value to a, i.e., a = b (b = 1), shows a 12% decrease in CPU time, but there is a peculiar increase in the time at the extreme in which the inner loop is dominant. If there is anything to investigate, it is this result.

4) The case in which there is a large matrix being assigned to 'a' shows a logical progression of times from small to large as the outer loop becomes more dominant.

5) Furthermore, note that the times required for large matrix assignment are now greater than the times for scalar assignment by about 10%. So, that's a good result in line with the notion that large matrix transfer requires more CPU than just transferring a single value.

In summary, 10-12% improvement in the CPU consumption, but something strange with the evaluation of "a = b" in the inner loop when b is scalar 1 at the extreme of inner loop dominance.

Dan Sebald <sebald>
Tue 09 Jan 2018 02:58:43 AM UTC, comment #11:

I checked in some changes that improve things for me:

http://hg.savannah.gnu.org/hgweb/octave/rev/8f2c479eb125
http://hg.savannah.gnu.org/hgweb/octave/rev/07876b7127bf
http://hg.savannah.gnu.org/hgweb/octave/rev/dbec1e04f499

The second change involves the evaluator, but it is not a really important or significant change as far as performance goes.

With these changes, things are better for Rik's example but not back to where we were with 4.2.1. However, beginning just before I started refactoring the interpreter, I started periodically timing "make check" on my build system and now my timings are about the same as before, and that is with a number of new tests that were not present back then. There is still some room for improvement, but things don't look nearly so bad as they did before my most recent changes.

John W. Eaton <jwe>
Project Administrator
Sat 06 Jan 2018 05:43:27 PM UTC, comment #10:

Oh, right.

Here is Rik's loop5.m:

tic; for i=1:1000; for j=1:1000; a = 1.0; end; end; toc

Fedora: time is 0.169811 seconds.
dev: time is 0.968607 seconds.

Typical value for several repeats.

Sorry for the previous one. Done in a hurry...
Michael

Michael Godfrey <godfrey>
Project Member
Sat 06 Jan 2018 05:13:44 PM UTC, comment #9:

I don't think there was a problem with an empty loop body. Could you try again with Rik's original test code?

John W. Eaton <jwe>
Project Administrator
Sat 06 Jan 2018 05:06:08 PM UTC, comment #8:

John,
I now get for:
t0=tic; for i=1:1000; for j=1:1000; end; end; t1=toc(t0); t1

4.2.1: t1 = 0.071527
dev:    t1 = 0.077129

Linux pbdsl4 4.14.11-300.fc27.x86_64

definitely better.

Michael

Michael Godfrey <godfrey>
Project Member
Fri 05 Jan 2018 09:24:52 PM UTC, comment #7:

Here's a breakdown of the results after applying the patch:

In the following there's nothing in the statement list, so it is no surprise that there's no change, except when the evaluation of the inner for-loop comes into play, for which there seems to be a 40% improvement.

In the following there are no variables, just the constant. There is a 60% improvement when the evaluation of the inner for-loop isn't a factor (probably because there is no longer anything done to save variable memory), while again a 40% improvement when the inner for-loop evaluation comes into play.

The next case is when there is some variable evaluation. Now there is only 50% improvement when the inner for-loop evaluation is not dominant. (Still about 40% improvement when it is dominant.)

And the next is again with variable evaluation, but this time a matrix assignment. This is pretty much the same improvement, relatively, as in the previous example. But compare carefully the first columns of the previous example and this example, and then the second columns. (I believe I have those numbers correct; at least I double-checked.) Before the patch, the previous example (the scalar assignment) used slightly less CPU than this example (the matrix assignment). One would think that is logical: more memory movement, more CPU usage (although small compared to the evaluator). However, after the patch, this relationship has reversed: the scalar assignment appears to take a fraction of a percent to one or two percent more CPU, which is counter-intuitive. I do see, though, that the patch breaks things up into scalar and "multi" cases, so that might explain the difference.

In summary:

1) About 40% reduction in CPU usage related to memory management of evaluation.
2) About 60% reduction in CPU usage related to the need to do memory management during evaluation.
3) Some peculiar but not too significant difference having to deal with scalar/matrix memory storage during looping.

Dan Sebald <sebald>
Fri 05 Jan 2018 08:37:15 AM UTC, comment #6:

I'm attaching a rough patch that improves performance quite a bit but does not quite get back to what it was before my big evaluator refactoring. I can still see some possible areas for improvement, but the gains may be smaller.

This patch isn't ready to be pushed to savannah.

(file #42845)

John W. Eaton <jwe>
Project Administrator
Fri 05 Jan 2018 01:16:08 AM UTC, comment #5:

I'm pretty sure that a significant portion of this problem is caused by always creating an octave_value_list object even when only one value is requested or produced. In the past, we had two methods for computing values, rvalue (produce an octave_value_list) and rvalue1 (produce a single octave_value object). The binary expression evaluator used rvalue1 to compute values from the operand expressions. But when I refactored the evaluator I changed the expression evaluation methods to always generate an octave value list even when only one value is needed. So in those cases, we are creating an octave_value_list object with one value, pushing it on a stack, and then extracting it. I'm sure even I can do better than that now that I see it's likely an issue...
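
The kind of overhead I mean, reduced to a toy standalone example (these are stand-in types, not the real octave_value/octave_value_list classes; the point is just the extra container created and unpacked on the single-value path):

#include <vector>

// Stand-in for a single evaluated value.
struct value { double v; };

// "rvalue"-style path: the result is wrapped in a one-element list,
// pushed somewhere, and then extracted again.
std::vector<value> eval_as_list (const value& a, const value& b)
{
  value result { a.v + b.v };
  return std::vector<value> { result };   // extra container + copy
}

// "rvalue1"-style path: the single result is returned directly.
value eval_single (const value& a, const value& b)
{
  return value { a.v + b.v };              // no container involved
}

int main ()
{
  value a { 1.0 }, b { 2.0 };

  value r1 = eval_as_list (a, b)[0];   // wrap, then unwrap
  value r2 = eval_single (a, b);       // direct

  return (r1.v == r2.v) ? 0 : 1;
}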

John W. Eaton <jwe>
Project Administrator
Thu 04 Jan 2018 11:40:27 PM UTC, comment #4:

>>> Is it not possible to get a trace on just the a = a + b + 123.0 sequence?
>>
>> I haven't done any real profiling, but from my tests it looks like the
>> evaluation of the expression is the real problem. Comparing
>>
>> t0=tic; for i=1:1000; for j=1:1000; end; end; t1=toc(t0); t1
>>
>> in 4.2.1 and the current dev version I see nearly identical results. Both
>> were built with GCC 7.2.0 on a Debian system.
>>
>> jwe
>
> I filed a bug report (https://savannah.gnu.org/bugs/index.php?52809) to
> keep track of this. Quoting from the write-up there:


Interesting. It's not necessarily the expression A = B + C portion that contributes, but any kind of expression that appears in the list used by tree_evaluator::visit_statement_list. (I looked at this last night, so at least I'm now a bit familiar.)

Consider the various limits again with the "no-command" case and a really simple no-variable command:

What I take away from this result is that when the outer for-loop gets to large iteration counts, the inner for-loop evaluation begins to dominate. That is, the evaluation of the for-loop itself. With this simple example, I'm guessing the inner for-loop, though it has many iterations, has an empty statement_list to visit, i.e., it checks the list, finds nothing, and continues on to the next iteration.

Question: Is whatever is inside a for loop re-tokenized, etc. with every pass? Or is that tokenization done for the contents of the inner loop just once and then cached?

The loop path eventually goes through tree_evaluator::visit_statement_list, which as one might guess goes through a list of all the lexical line statements. By testing a few things here and there I conclude that not much of what is in this part of the parser contributes a whole lot to the overall time. There is some strange FIXME comment about having to do the following:

in order to keep the reference count down to avoid generating extra copies, but there is no "result_values" in the routine anywhere, so maybe this was just copied from somewhere else and is a conceptual placeholder. In other words, it was never implemented.

Nonetheless, to test this we can try something like

There is no change to ref-count involved in the above, so that comment shouldn't be relevant. But there is already a 5x increase in CPU usage (probably the evaluation routine). How about if we compare small-memory versus large memory?:

SMALL MEMORY

Another factor of 3 increase (for most) already, just for "a=b" versus "1". Certainly parsing and tokenizing can't be that much worse. So, the fact that variables are involved could be important.

LARGE MEMORY

There's a tiny percentage increase for assigning large amounts of memory. That suggests that any sort of reference-counting issue has little influence on the CPU consumption.

Well, the place that the actual "a + b + 123.0" would be evaluated is

tree_evaluator::visit_statement (tree_statement& stmt)

and in turn

tree_evaluator::evaluate (tree_decl_elt *elt)

One thing to try is commenting out some aspects of these routines and seeing how much that decreases the looping usage. That would hint at where the most savings could be, though skipping some things might cause internal errors.

Dan Sebald <sebald>
Thu 04 Jan 2018 09:21:05 PM UTC, comment #3:

Copying objects that use shared_ptr also seems to be an issue, but probably doesn't explain everything. There are definitely a lot of places where we could improve performance.

John W. Eaton <jwe>
Project Administrator
Thu 04 Jan 2018 08:14:36 PM UTC, comment #2:

There might be a lot of stat() calls checking whether any function has changed. That seemed to be a possibility from one bug report.

I usually run with "--no-gui-libs" because I don't use the GUI and because I don't want to worry about interactions between the Octave thread and the GUI thread. That's an indication that the cause is not a slowdown waiting for data to be synchronized across octave_link.

It doesn't fully explain it, but I would also suggest adding a move constructor for the dim_vector class, because we create a lot of temporary dimension vectors.
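
The general pattern would look something like this toy sketch (not the actual dim_vector representation, just an illustration of a move constructor stealing a temporary's heap storage instead of copying it):

#include <algorithm>
#include <cstddef>

// Toy dimension-vector class with heap-allocated storage.
// (Assignment operators omitted for brevity.)
class dims
{
public:
  explicit dims (std::size_t n) : m_len (n), m_data (new int [n] ()) { }

  // Copy constructor: allocates and copies; this is the cost paid for
  // every temporary today.
  dims (const dims& other)
    : m_len (other.m_len), m_data (new int [other.m_len])
  {
    std::copy (other.m_data, other.m_data + m_len, m_data);
  }

  // Move constructor: steal the buffer from a temporary, no allocation.
  dims (dims&& other) noexcept
    : m_len (other.m_len), m_data (other.m_data)
  {
    other.m_len = 0;
    other.m_data = nullptr;
  }

  ~dims () { delete [] m_data; }

private:
  std::size_t m_len;
  int *m_data;
};

dims make_dims () { return dims (2); }   // returns a temporary

int main ()
{
  dims d = make_dims ();   // moved (or elided) instead of deep-copied
  return 0;
}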

Rik <rik5>
Project Administrator
Thu 04 Jan 2018 07:47:35 PM UTC, comment #1:

Well, this is bad. I'll see if I can track down what part of the interpreter is chewing up all that time.

John W. Eaton <jwe>
Project Administrator
Thu 04 Jan 2018 06:31:00 PM UTC, original submission:

The evaluation of statements seems to be much slower on the development branch than in previous versions.

A sample script testing just a for loop is:

On my machine, results are roughly equivalent between 4.2.1 and the development branch.

But if there is even a single variable assignment, the results are far worse.

Results:

Benchmarking scripts are attached.

Rik <rik5>
Project Administrator

 


Attached Files
file #42845:  evaluator-diffs.txt added by jwe (10KiB - text/plain)
file #42831:  bm_for_loop4.m added by rik5 (47B - text/x-matlab)
file #42832:  bm_for_loop5.m added by rik5 (56B - text/x-matlab)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • godfrey (Posted a comment)
  • sebald (Posted a comment)
  • jwe (Posted a comment)
  • rik5 (Submitted the item)


Follow 5 latest changes.

Date        Changed by  Updated Field   Previous Value => Replaced by
2018-01-09  jwe         Status          Patch Submitted => Ready For Test
2018-01-05  jwe         Attached File   - => Added evaluator-diffs.txt, #42845
                        Status          Confirmed => Patch Submitted
2018-01-04  rik5        Attached File   - => Added bm_for_loop4.m, #42831
                        Attached File   - => Added bm_for_loop5.m, #42832
