bug #52809: interpreter performance is slow on development branch

Submitter:  Rik <rik5>
Submitted:  Thu 04 Jan 2018 06:31:00 PM UTC
   
 
Category:  Performance        Severity:  3 - Normal
Priority:  5 - Normal         Item Group:  Regression
Status:  Fixed                Assigned to:  None
Originator Name:              Open/Closed:  Closed
Release:  dev                 Operating System:  Any
Fixed Release:  None          Planned Release:  None


Thu 13 Dec 2018 10:47:16 PM UTC, comment #25: 

On my system with bm_for_loop.m I see the following results:

4.2.2:  2.5 seconds

5.0.0:  3.5 seconds

With my call stack refactoring branch (jwe-call-stack-refactor bookmark at http://hg.octave.org/octave-jwe) I see around 2.5 seconds again, and I think there is still room for improvement.

I'll post more about the call stack refactoring changes to the maintainers list soon.

John W. Eaton <jwe>
Group administrator
Thu 13 Dec 2018 08:21:49 PM UTC, comment #24: 

For the reference code here, the interpreter is now 2X slower than 4.2.1 rather than 6X slower.  That's probably as good as we can do for now.  I think the next step will be profiling Octave to figure out where we are losing performance.

Closing report.

Rik <rik5>
Group administrator
Wed 10 Jan 2018 05:47:42 PM UTC, comment #23: 

A new bug report about the cputime() operating-system interface is here (I noted there that it is low priority and tentative):

https://savannah.gnu.org/bugs/index.php?52858

Dan Sebald <sebald>
Wed 10 Jan 2018 04:44:21 PM UTC, comment #22: 

Yes, move this to a new bug report.

For reference, I used these two loops while testing performance:

bm_for_loop.m


a = 1; b = 1; t0=cputime; for i=1:1000; for j=1:1000; a = a + b + 123.0; end; end; t1=cputime; t1 - t0


bm_for_loop2.m


a = 1; b = 1; t0=tic; for i=1:1000; for j=1:1000; a = a + b + 123.0; end; end; t1=toc(t0); t1


The first one uses cputime, the second uses wall time.

Results:


octave:1> bm_for_loop
ans =  4.6840
octave:2> bm_for_loop
ans =  4.6640
octave:3> bm_for_loop
ans =  4.6240
octave:4> bm_for_loop2
t1 =  4.4830
octave:5> bm_for_loop2
t1 =  4.5830
octave:6> bm_for_loop2
t1 =  4.4962


As you can see, the wall time was less than the CPU time, which doesn't make sense to me, but maybe that is my misunderstanding.


Rik <rik5>
Group administrator
Wed 10 Jan 2018 02:05:54 PM UTC, comment #21: 

There is a clock_gettime replacement in gnulib, so we should use a wrapper for that function if we decide to change.  On systems where clock_gettime is missing, gnulib calls gettimeofday.

I don't think filling in two values in a struct is a big issue, so I don't see a problem with having a C++ class to encapsulate the results.  The overhead of calling a DEFUN-defined function in Octave is always going to be much larger than that, no matter what we do.

But anyway, yes, any problems with cputime should be a different bug report.

John W. Eaton <jwe>
Group administrator
Wed 10 Jan 2018 05:39:17 AM UTC, comment #20: 

Well, cputime() is too important a function to leave to the vagaries of old Unix time management and system interfaces.

It sounds to me like the issue is that some people may not be getting good time resolution from cputime() because their system's support for the underlying routine is lacking.  Some of these older routines have a clock ID called CLOCK_PROCESS_CPUTIME_ID, which I've grepped for but don't see in Octave's source code.  Just the fact that such a setting exists suggests that, yes, different hardware could have different resolution, which is an undesirable outcome from the perspective of Octave's cputime().

Here is a good discussion on various time routines in Unix:

https://stackoverflow.com/questions/12392278/measure-time-in-linux-time-vs-clock-vs-getrusage-vs-clock-gettime-vs-gettimeof

and it sounds as though the most modern Unix routine is clock_gettime, which is guaranteed to be high resolution:

https://linux.die.net/man/3/clock_gettime

That routine takes a selectable clock ID; the one we'd be interested in is CLOCK_PROCESS_CPUTIME_ID.  It returns the information in a


struct timespec {
        time_t   tv_sec;        /* seconds */
        long     tv_nsec;       /* nanoseconds */
};


Grepping for tv_sec and tv_nsec, I see such members in the Octave code, quite a lot actually.  However, the usage looks to be in date/time-related code.  From what I looked at yesterday regarding cputime(), the impression I got was that getrusage() is the means of determining cputime(), and getrusage() is one of the routines whose resolution can vary from system to system.

So perhaps we need to switch the cputime() routine to the clock_gettime() C function (and keep any C++ objects to a minimum; just relay the system-function output to the octave_value output).
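
For illustration, here is a minimal standalone sketch (plain C++ against the POSIX interface, assuming the system defines CLOCK_PROCESS_CPUTIME_ID; this is not a patch for Octave) of reading per-process CPU time with clock_gettime():


#include <iostream>
#include <time.h>

int
main ()
{
  struct timespec ts;

  // User + system CPU time consumed by the calling process.
  if (clock_gettime (CLOCK_PROCESS_CPUTIME_ID, &ts) != 0)
    {
      std::cerr << "clock_gettime failed\n";
      return 1;
    }

  double cpu = ts.tv_sec + ts.tv_nsec / 1.0e9;
  std::cout << "process CPU time: " << cpu << " s\n";

  return 0;
}
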

What do people think?  Should we open a different bug report for this?

Dan Sebald <sebald>
Tue 09 Jan 2018 06:30:50 PM UTC, comment #19: 

It is the underlying Linux system that is the problem.

Michael Godfrey <godfrey>
Group Member
Tue 09 Jan 2018 06:06:13 PM UTC, comment #18: 

Re comment #16: what are your memories of the flaky cputime() function?  There does appear to be a lot of overhead associated with Octave's cputime() routine, considering that we are looking for as accurate a number as possible.  That is, it uses a fairly large object


  octave::sys::cpu_time cpu_tm;

  double usr = cpu_tm.user ();
  double sys = cpu_tm.system ();

  return ovl (usr + sys, usr, sys);


which in turn eventually uses


int
octave_cpu_time (time_t *usr_sec, time_t *sys_sec,
                 long *usr_usec, long *sys_usec)
{
  struct rusage ru;

  int status = getrusage (RUSAGE_SELF, &ru);

  if (status < 0)
    {
      *usr_sec = 0;
      *sys_sec = 0;

      *usr_usec = 0;
      *sys_usec = 0;
    }
  else
    {
      *usr_sec = ru.ru_utime.tv_sec;
      *usr_usec = ru.ru_utime.tv_usec;

      *sys_sec = ru.ru_stime.tv_sec;
      *sys_usec = ru.ru_stime.tv_usec;
    }

  return status;
}


i.e., the Unix system routine getrusage().  Of course, the Unix getrusage() is needed for Octave's getrusage() routine.  However, I wonder whether for cputime() the code should try to be as minimal and direct as possible, perhaps using a static object rather than a stack-based one.  For example, there is

http://www.tutorialspoint.com/unix_system_calls/times.htm

which seems to provide the necessary information for cputime().  Perhaps the heavier getrusage() is a less efficient approach and has some peculiarities of its own.
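
Here is a minimal sketch (POSIX only, not proposed Octave code) of what obtaining user and system CPU time from times() would look like.  Note that its resolution is limited to the clock-tick rate reported by sysconf (_SC_CLK_TCK), often 100 Hz, so it may actually be coarser than the microsecond fields getrusage() provides:


#include <iostream>
#include <sys/times.h>
#include <unistd.h>

int
main ()
{
  struct tms t;

  if (times (&t) == static_cast<clock_t> (-1))
    {
      std::cerr << "times failed\n";
      return 1;
    }

  double ticks = static_cast<double> (sysconf (_SC_CLK_TCK));

  double usr = t.tms_utime / ticks;   // user CPU time, seconds
  double sys = t.tms_stime / ticks;   // system CPU time, seconds

  std::cout << "user: " << usr << " s, system: " << sys << " s\n";

  return 0;
}
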

I don't know, there is a lot going on there for me to fully understand.

Dan Sebald <sebald>
Tue 09 Jan 2018 05:34:17 PM UTC, comment #17: 

Re comment #15: really?  I'm seeing millisecond resolution here, and I think that is typical these days.  Each invocation of cputime seems to take about 4 ms.

Dan Sebald <sebald>
Tue 09 Jan 2018 05:30:26 PM UTC, comment #16: 

cputime seems always to have been flaky.  But clock time is (nearly) always an overestimate of the CPU time used.  So, with care, either is as good as it gets...

Michael Godfrey <godfrey>
Group Member
Tue 09 Jan 2018 04:53:42 PM UTC, comment #15: 

I was getting unreliable results with cputime, which is why I preferred tic/toc.  It is possibly just my hardware setup, but cputime didn't appear to have as high-resolution a timer as the straight wall clock from tic/toc.

Rik <rik5>
Group administrator
Tue 09 Jan 2018 04:28:24 PM UTC, comment #14: 

Yes, cputime is what I prefer; tic/toc was the original poster's choice.  I did run the tic/toc numbers a second time to confirm nothing happened system-wise.  Plus, there are a lot of cores on this system, so I typically don't have issues unless some shared system resource is being used (as opposed to simple looping).  I can use cputime() from now on.

Dan Sebald <sebald>
Tue 09 Jan 2018 03:57:15 PM UTC, comment #13: 

Just to be clear: tic/toc measures elapsed (wall-clock) time, while cputime provides [total, user, system] CPU time.

It might be a bit better here to use cputime, just in case something else is going on in the system.

Michael Godfrey <godfrey>
Group Member
Tue 09 Jan 2018 07:35:08 AM UTC, comment #12: 

I'll add another column to the previous times I posted:


octave:1> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH      AFTER SECOND PATCH
t1 =  0.18503    t1 =  0.18796    t1 =  0.18483
t1 =  0.18548    t1 =  0.18670    t1 =  0.18388
t1 =  0.18704    t1 =  0.18585    t1 =  0.18441
t1 =  0.19305    t1 =  0.18805    t1 =  0.18753
t1 =  0.24489    t1 =  0.21950    t1 =  0.22058
t1 =  0.75854    t1 =  0.53110    t1 =  0.54419
t1 =  5.8472     t1 =  3.5870     t1 =  3.7401



octave:2> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; 1; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH       AFTER SECOND PATCH
t1 =  1.0119     t1 =  0.43711     t1 =  0.39240
t1 =  1.0167     t1 =  0.43716     t1 =  0.39317
t1 =  1.0118     t1 =  0.43754     t1 =  0.39049
t1 =  1.0166     t1 =  0.44116     t1 =  0.39428
t1 =  1.0708     t1 =  0.47755     t1 =  0.43239
t1 =  1.6114     t1 =  0.83763     t1 =  0.77224
t1 =  6.9960     t1 =  4.3774      t1 =  4.1176



octave:3> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH      AFTER SECOND PATCH
t1 =  3.1692     t1 =  1.7681     t1 =  1.8105
t1 =  3.1592     t1 =  1.8832     t1 =  1.6622
t1 =  3.1674     t1 =  1.8906     t1 =  1.6655
t1 =  3.1748     t1 =  1.8875     t1 =  1.6682
t1 =  3.2181     t1 =  1.9335     t1 =  1.7094
t1 =  3.7610     t1 =  2.2881     t1 =  2.0454
t1 =  9.0173     t1 =  5.7604     t1 =  5.2106



octave:4> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = ones(10000); t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH      AFTER SECOND PATCH
t1 =  3.2118     t1 =  1.6255     t1 =  1.6648
t1 =  3.2141     t1 =  1.7589     t1 =  1.8495
t1 =  3.2061     t1 =  1.7608     t1 =  1.8414
t1 =  3.2156     t1 =  1.7673     t1 =  1.8461
t1 =  3.2567     t1 =  1.8127     t1 =  1.8916
t1 =  3.7914     t1 =  2.1563     t1 =  2.2721
t1 =  9.1665     t1 =  5.7264     t1 =  5.8135


Observations, going from the first patch to the second patch:

1) The empty command-list case shows the same times.

2) The case of a constant within the loop shows a 10% improvement.

3) The case of assigning a scalar value to a, i.e., a = b (with b = 1), shows a 12% decrease in CPU time, but there is a peculiar increase in the time at the extreme in which the inner loop is dominant.  If there is anything to investigate, it is this result.

4) The case in which a large matrix is assigned to 'a' shows a logical progression of times from small to large as the outer loop becomes more dominant.

5) Furthermore, note that the times required for large matrix assignment are now greater than the times for scalar assignment by about 10%.  That is a good result, in line with the notion that a large matrix transfer requires more CPU than transferring a single value.

In summary, there is a 10-12% improvement in CPU consumption, but something strange happens with the evaluation of "a = b" in the inner loop when b is the scalar 1 at the extreme of inner-loop dominance.

Dan Sebald <sebald>
Tue 09 Jan 2018 02:58:43 AM UTC, comment #11: 

I checked in some changes that improve things for me:

  http://hg.savannah.gnu.org/hgweb/octave/rev/8f2c479eb125
  http://hg.savannah.gnu.org/hgweb/octave/rev/07876b7127bf
  http://hg.savannah.gnu.org/hgweb/octave/rev/dbec1e04f499

The second change involves the evaluator, but it is not really an important or significant change as far as performance goes.

With these changes, things are better for Rik's example but not back to where we were with 4.2.1.  However, just before I started refactoring the interpreter I began periodically timing "make check" on my build system, and my timings are now about the same as they were back then, even with a number of new tests that were not present at the time.  There is still some room for improvement, but things don't look nearly as bad as they did before my most recent changes.

John W. Eaton <jwe>
Group administrator
Sat 06 Jan 2018 05:43:27 PM UTC, comment #10: 

Oh, right.

Here is Rik's loop5.m:

tic; for i=1:1000; for j=1:1000; a = 1.0; end; end; toc

Fedora: time is 0.169811 seconds.
dev:    time is 0.968607 seconds.

These are typical values over several repeats.

Sorry for the previous one; done in a hurry...
Michael

Michael Godfrey <godfrey>
Group Member
Sat 06 Jan 2018 05:13:44 PM UTC, comment #9: 

I don't think there was a problem with an empty loop body.  Could you try again with Rik's original test code?



John W. Eaton <jwe>
Group administrator
Sat 06 Jan 2018 05:06:08 PM UTC, comment #8: 

John,
For

t0=tic; for i=1:1000; for j=1:1000; end; end; t1=toc(t0); t1

I now get:

4.2.1: t1 =  0.071527
dev:   t1 =  0.077129

Linux pbdsl4 4.14.11-300.fc27.x86_64

Definitely better.

Michael

Michael Godfrey <godfrey>
Group Member
Fri 05 Jan 2018 09:24:52 PM UTC, comment #7: 

Here's a breakdown of the results after applying the patch:

In the following there's nothing in the statement list, so it is no surprise that there's no change, except when the evaluation of the inner for-loop comes into play, for which there seems to be about a 40% improvement.


octave:1> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH
t1 =  0.18503    t1 =  0.18796
t1 =  0.18548    t1 =  0.18670
t1 =  0.18704    t1 =  0.18585
t1 =  0.19305    t1 =  0.18805
t1 =  0.24489    t1 =  0.21950
t1 =  0.75854    t1 =  0.53110
t1 =  5.8472     t1 =  3.5870


In the following there are no variables, just a constant.  There is a 60% improvement when the evaluation of the inner for-loop isn't a factor (probably because there is no longer anything done to save variable memory), and again about a 40% improvement when the inner for-loop evaluation comes into play.


octave:2> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; 1; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH
t1 =  1.0119     t1 =  0.43711
t1 =  1.0167     t1 =  0.43716
t1 =  1.0118     t1 =  0.43754
t1 =  1.0166     t1 =  0.44116
t1 =  1.0708     t1 =  0.47755
t1 =  1.6114     t1 =  0.83763
t1 =  6.9960     t1 =  4.3774


The next case involves some variable evaluation.  Now there is only a 50% improvement when the inner for-loop evaluation is not dominant.  (Still about a 40% improvement when it is dominant.)


octave:3> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH
t1 =  3.1692     t1 =  1.7681
t1 =  3.1592     t1 =  1.8832
t1 =  3.1674     t1 =  1.8906
t1 =  3.1748     t1 =  1.8875
t1 =  3.2181     t1 =  1.9335
t1 =  3.7610     t1 =  2.2881
t1 =  9.0173     t1 =  5.7604


And the next is again variable evaluation, but this time a matrix assignment.  The improvement is pretty much the same, relatively, as in the previous example.  But compare carefully the first column of the previous example with the first column of this one, and then the second columns of each.  (I believe I have those numbers correct; at least I double-checked.)  Before the patch, the previous example (the scalar assignment) used slightly less CPU than this example (the matrix assignment).  One would think that is logical: more memory movement, more CPU usage (although small compared to the evaluator).  However, after the patch this relationship has reversed: the scalar assignment appears to take a fraction of a percent to one or two percent more CPU, which is counter-intuitive.  I do see, though, that the patch breaks the assignment up into scalar and "multi" cases, so that might explain the difference.


octave:4> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = ones(10000); t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end

BEFORE PATCH     AFTER PATCH
t1 =  3.2118     t1 =  1.6255
t1 =  3.2141     t1 =  1.7589
t1 =  3.2061     t1 =  1.7608
t1 =  3.2156     t1 =  1.7673
t1 =  3.2567     t1 =  1.8127
t1 =  3.7914     t1 =  2.1563
t1 =  9.1665     t1 =  5.7264


In summary:

1) About a 40% reduction in CPU usage related to the memory management of evaluation.
2) About a 60% reduction in CPU usage related to the need to do memory management during evaluation.
3) Some peculiar but not very significant difference having to do with scalar/matrix memory storage during looping.

Dan Sebald <sebald>
Fri 05 Jan 2018 08:37:15 AM UTC, comment #6: 

I'm attaching a rough patch that improves performance quite a bit but does not quite get back to what it was before my big evaluator refactoring.  I can still see some possible areas for improvement, but the gains may be smaller.

This patch isn't ready to be pushed to Savannah.

(file #42845)

John W. Eaton <jwe>
Group administrator
Fri 05 Jan 2018 01:16:08 AM UTC, comment #5: 

I'm pretty sure that a significant portion of this problem is caused by always creating an octave_value_list object even when only one value is requested or produced.  In the past, we had two methods for computing values, rvalue (produce an octave_value_list) and rvalue1 (produce a single octave_value object).  The binary expression evaluator used rvalue1 to compute values from the operand expressions.  But when I refactored the evaluator, I changed the expression evaluation methods to always generate an octave_value_list even when only one value is needed.  So in those cases, we are creating an octave_value_list object with one value, pushing it on a stack, and then extracting it.  I'm sure even I can do better than that now that I see it's likely an issue...
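
To illustrate the kind of overhead being described, here is a toy sketch (hypothetical value and value_list types, not Octave's real octave_value and octave_value_list classes) contrasting a "list of results" interface with a single-result interface:


#include <vector>

struct value
{
  double data = 0;   // stand-in for an octave_value payload
};

typedef std::vector<value> value_list;   // stand-in for octave_value_list

// rvalue-style interface: always builds a list, even for one result.
value_list
eval_as_list (const value& operand)
{
  value_list result;
  result.push_back (operand);   // allocation plus copy on every call
  return result;
}

// rvalue1-style interface: returns the single result directly.
value
eval_single (const value& operand)
{
  return operand;               // no container overhead
}

int
main ()
{
  value a;

  // In a tight interpreter loop, the difference between these two call
  // styles is paid on every expression evaluation.
  value via_list = eval_as_list (a).front ();
  value direct = eval_single (a);

  return (via_list.data == direct.data) ? 0 : 1;
}
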

John W. Eaton <jwe>
Group administrator
Thu 04 Jan 2018 11:40:27 PM UTC, comment #4: 


>>> Is it not possible to get a trace on just  the a = a + b + 123.0 sequence?
>>
>> I haven't done any real profiling, but from my tests it looks like the
>> evaluation of the expression is the real problem.  Comparing
>>
>>    t0=tic; for i=1:1000; for j=1:1000; end; end; t1=toc(t0); t1
>>
>> in 4.2.1 and the current dev version I see nearly identical results. Both
>> were built with GCC 7.2.0 on a Debian system.
>>
>> jwe
>
> I filed a bug report (https://savannah.gnu.org/bugs/index.php?52809) to
> keep track of this.  Quoting from the write-up there:


Interesting.  It's not necessarily the A = B + C expression itself that contributes, but any kind of expression that appears in the list used by tree_evaluator::visit_statement_list.  (I looked at this last night, so at least I'm now a bit familiar with it.)

Consider the various limits again, first with no command in the inner loop and then with a really simple no-variable command:


octave:1> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; end; end; t1=toc(t0); t1
> end
t1 =  0.18503
t1 =  0.18548
t1 =  0.18704
t1 =  0.19305
t1 =  0.24489
t1 =  0.75854
t1 =  5.8472


What I take away from this result is that when the outer for-loop gets to large iteration counts, the evaluation of the inner for-loop itself begins to dominate.  With this simple example, I'm guessing the inner for-loop, though it runs many iterations, has an empty statement_list to visit: it checks the list, finds nothing, and continues on to the next iteration.
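
As a concrete picture of where that time could go, here is a toy tree-walking loop evaluator (hypothetical code, not Octave's actual tree_evaluator) in which even an empty loop body still pays for a statement-list visit on every iteration:


#include <memory>
#include <vector>

struct statement
{
  virtual ~statement () = default;
  virtual void accept () const = 0;   // stand-in for visiting one statement
};

typedef std::vector<std::unique_ptr<statement>> statement_list;

void
visit_statement_list (const statement_list& lst)
{
  // With an empty body the loop below does nothing, but the call and
  // the list check still happen once per pass of the enclosing loop.
  for (const auto& stmt : lst)
    stmt->accept ();
}

void
eval_for_loop (long n, const statement_list& body)
{
  for (long i = 0; i < n; i++)
    visit_statement_list (body);   // per-iteration dispatch cost
}

int
main ()
{
  statement_list empty_body;
  eval_for_loop (1000L * 1000L, empty_body);
  return 0;
}
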

Question: Is whatever is inside a for loop re-tokenized, etc. with every pass?  Or is that tokenization done for the contents of the inner loop just once and then cached?

The loop path eventually goes through tree_evaluator::visit_statement_list, which, as one might guess, walks a list of all the lexical line statements.  By testing a few things here and there, I conclude that not much of what is in this part of the parser contributes a whole lot to the overall time.  There is a strange FIXME comment about having to do the following:


    // FIXME: commented out along with else clause below.
    // static octave_value_list empty_list;
[snip]
                //              result_values = empty_list;


in order to keep the reference count down and avoid generating extra copies, but there is no "result_values" anywhere in the routine, so maybe this was just copied from somewhere else and is a conceptual placeholder.  In other words, it was never implemented.

Nonetheless, to test this we can try something like


octave:2> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; 1; end; end; t1=toc(t0); t1
> end
t1 =  1.0119
t1 =  1.0167
t1 =  1.0118
t1 =  1.0166
t1 =  1.0708
t1 =  1.6114
t1 =  6.9960


There is no reference-count change involved in the above, so that comment shouldn't be relevant.  But there is already a 5x increase in CPU usage (probably in the evaluation routine).  How about comparing small memory versus large memory?

SMALL MEMORY

octave:3> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = 1; t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end
t1 =  3.1692
t1 =  3.1592
t1 =  3.1674
t1 =  3.1748
t1 =  3.2181
t1 =  3.7610
t1 =  9.0173


Another factor-of-3 increase (for most rows) already, just for "a=b" versus "1".  Certainly parsing and tokenizing can't be that much worse, so the fact that variables are involved could be important.

LARGE MEMORY

octave:4> for lim_p = 0:6
>   lim1 = 10^lim_p;
>   lim2 = 10^(6-lim_p);
>   a = 1; b = ones(10000); t0=tic; for i=1:lim1; for j=1:lim2; a=b; end; end; t1=toc(t0); t1
> end
t1 =  3.2118
t1 =  3.2141
t1 =  3.2061
t1 =  3.2156
t1 =  3.2567
t1 =  3.7914
t1 =  9.1665


There's only a tiny percentage increase for assigning large amounts of memory.  That suggests that reference-counting issues have little influence on the CPU consumption.

Well, the place where the actual "a + b + 123.0" would be evaluated is

tree_evaluator::visit_statement (tree_statement& stmt)

and in turn

tree_evaluator::evaluate (tree_decl_elt *elt)

One thing to try is commenting out some aspects of these routines and seeing how much that decreases the looping cost.  That would hint at where the most savings could be, though skipping some things might cause internal errors.

Dan Sebald <sebald>
Thu 04 Jan 2018 09:21:05 PM UTC, comment #3: 

Copying objects that use shared_ptr also seems to be an issue, but probably doesn't explain everything.  There are definitely a lot of places where we could improve performance.
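
As a general C++ illustration (not Octave-specific code), copying an object that holds a std::shared_ptr does an atomic reference-count increment on every copy and a decrement on every destruction, whereas passing by const reference avoids that traffic entirely:


#include <memory>

struct node { int payload = 0; };

struct holder
{
  std::shared_ptr<node> rep = std::make_shared<node> ();
};

// Copying the argument bumps the shared_ptr refcount on entry and
// drops it again on exit: two atomic operations per call.
int
by_value (holder h)
{
  return h.rep->payload;
}

// No reference-count traffic at all.
int
by_const_ref (const holder& h)
{
  return h.rep->payload;
}

int
main ()
{
  holder h;
  return by_value (h) + by_const_ref (h);
}
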

John W. Eaton <jwe>
Group administrator
Thu 04 Jan 2018 08:14:36 PM UTC, comment #2: 

There might be a lot of stat() calls checking whether any function has changed.  That seemed to be a possibility from one bug report. 

I usually run with "--no-gui-libs" because I don't use the GUI and because I don't want to worry about interactions between the Octave thread and the GUI thread.  That's an indication that the cause is not a slowdown from waiting for data to be synchronized across octave_link.

It doesn't fully explain it, but I would also suggest adding a move constructor to the dim_vector class, because we create a lot of temporary dimension vectors.
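
As a sketch of the idea, using a hypothetical dims class rather than the real dim_vector, a move constructor lets a temporary hand over its buffer instead of being deep-copied:


#include <algorithm>
#include <cstddef>
#include <utility>

class dims
{
public:
  explicit dims (std::size_t n) : m_len (n), m_elem (new long[n] ()) { }

  // Copy constructor: allocate a new buffer and copy every element.
  dims (const dims& d) : m_len (d.m_len), m_elem (new long[d.m_len])
  {
    std::copy (d.m_elem, d.m_elem + d.m_len, m_elem);
  }

  // Move constructor: take ownership of the source's buffer; no
  // allocation and no element copies.
  dims (dims&& d) noexcept : m_len (d.m_len), m_elem (d.m_elem)
  {
    d.m_len = 0;
    d.m_elem = nullptr;
  }

  ~dims () { delete [] m_elem; }

  dims& operator = (const dims&) = delete;

private:
  std::size_t m_len;
  long *m_elem;
};

int
main ()
{
  dims a (4);
  dims b (std::move (a));   // cheap: pointer transfer, no allocation
  dims c (b);               // expensive: allocation plus element copy
  return 0;
}
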

Rik <rik5>
Group administrator
Thu 04 Jan 2018 07:47:35 PM UTC, comment #1: 

Well, this is bad.  I'll see if I can track down what part of the interpreter is chewing up all that time.

John W. Eaton <jwe>
Group administrator
Thu 04 Jan 2018 06:31:00 PM UTC, original submission:  

The evaluation of statements seems to be much slower on the development branch than in previous versions.

A sample script testing just a for loop is:


tic; for i=1:1000; for j=1:1000; end; end; toc


On my machine, results are roughly equivalent between 4.2.1 and the development branch.


4.2.1 : 0.109 seconds
dev.  : 0.121 seconds


But if there is even a single variable assignment, the results are far worse.


tic; for i=1:1000; for j=1:1000; a = 1.0; end; end; toc


Results:


4.2.1 : 0.218 seconds
dev.  : 1.28 seconds


Benchmarking scripts are attached.

Rik <rik5>
Group administrator

 

Attached Files
file #42845:  evaluator-diffs.txt added by jwe (10KiB - text/plain)
file #42831:  bm_for_loop4.m added by rik5 (47B - text/x-matlab)
file #42832:  bm_for_loop5.m added by rik5 (56B - text/x-matlab)

 

    Follow 7 latest changes.

Date         Changed by   Updated Field    Previous Value => Replaced by
2018-12-13   rik5         Status           Ready For Test => Fixed
                          Open/Closed      Open => Closed
2018-01-09   jwe          Status           Patch Submitted => Ready For Test
2018-01-05   jwe          Attached File    Added evaluator-diffs.txt, #42845
                          Status           Confirmed => Patch Submitted
2018-01-04   rik5         Attached File    Added bm_for_loop4.m, #42831
                          Attached File    Added bm_for_loop5.m, #42832
