bug #54100: fread using SKIP larger than zero is extremely slow

Submitter:  None
Submitted:  Mon 11 Jun 2018 06:41:30 PM UTC

Category:  Octave Function
Severity:  3 - Normal
Priority:  3 - Low
Item Group:  Performance
Status:  In Progress
Assigned to:  None
Originator Name:  Gisle J Torvetjonn
Originator Email:  -email is unavailable-
Open/Closed:  Open
Release:  dev
Operating System:  Any
Fixed Release:  None
Planned Release:  None


Sun 29 Jul 2018 06:59:05 PM UTC, comment #21: 

OK.  What I'll do is add the overflow check for the FIXME (I know it is being caught) and still leave the FIXME comment in place with a reference to related bug reports, in particular the one I created for the octave_idx_type overflow check:

https://savannah.gnu.org/bugs/?54405

That's a bigger issue.  I suppose there is no convenient way of having a macro or other object implement the overflow check for this fread because it is kind of doing its own memory allocation and management of smaller hunks.

Dan Sebald <sebald>
Sun 29 Jul 2018 05:12:30 PM UTC, comment #20: 

Recall that until recently, octave_idx_type was declared as int, which on most platforms is just 32 bits long.  With one bit reserved for the sign, that is 2^31 elements.  It's actually reasonably easy to overflow either the index type (a > 2GB text file of chars) or memory (a 16 GB data file of doubles).  Although we now default to a 64-bit index type, it may not be available on all platforms (some smaller ARM RISC chips, for example) or the user may have configured Octave with --disable-64.  Hence, I think the tests for overflow need to remain, but they probably should be updated as you suggest so that the tests themselves are accurate and don't overflow.
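
For reference, a tiny standalone program (not Octave code; the names here are just stand-ins) showing how quickly a 32-bit index type runs out:


// Stand-alone illustration: a 32-bit index type overflows for quite
// ordinary matrix sizes, while a 64-bit one does not.
#include <cstdint>
#include <iostream>
#include <limits>

int main ()
{
  std::cout << "32-bit max: " << std::numeric_limits<std::int32_t>::max () << '\n'   // 2147483647
            << "64-bit max: " << std::numeric_limits<std::int64_t>::max () << '\n';  // 9223372036854775807

  std::int64_t nr = 65536, nc = 65536;   // a [65536 x 65536] char matrix
  std::cout << "65536 * 65536 = " << nr * nc
            << "  (overflows a 32-bit index: " << std::boolalpha
            << (nr * nc > std::numeric_limits<std::int32_t>::max ()) << ")\n";
  return 0;
}
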

Rik <rik5>
Group administrator
Sun 29 Jul 2018 09:47:25 AM UTC, comment #19: 

How about we address this while looking at this stream::read() function?


    // FIXME: Ensure that this does not overflow.
    //        Maybe try comparing nr * nc computed in double with
    //        std::numeric_limits<octave_idx_type>::max ();
    octave_idx_type elts_to_read = nr * nc;


Now, there's no need to check that nr or nc is too big, because that is done already in


  static octave_idx_type
  get_size (double d, const std::string& who)
[snip]
        if (d > std::numeric_limits<octave_idx_type>::max ())
          ::error ("%s: dimension too large for Octave's index type",
                   who.c_str ());


However, I get the point that the FIXME is raising.  An octave_idx_type is being used to hold the size "elts_to_read = nr * nc" and that number could potentially be larger than std::numeric_limits<octave_idx_type>::max.  How about addressing this as follows:


    // Check for overflow.
    if (nr > 0 && nc > (std::numeric_limits<octave_idx_type>::max () / nr))
      error ("fread: dimension too large for Octave's index type");


(See https://stackoverflow.com/questions/199333/how-to-detect-integer-overflow.)
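
A minimal standalone sketch of that division-based pre-check (checked_elts and idx_t are illustrative names, not Octave's actual code; it assumes nr and nc are already non-negative, which negative or Inf dimensions would break, as the transcripts below show):


// Sketch only: test before multiplying so nr * nc itself never overflows.
// idx_t stands in for octave_idx_type.
#include <cstdint>
#include <limits>
#include <stdexcept>

using idx_t = std::int64_t;

idx_t checked_elts (idx_t nr, idx_t nc)
{
  if (nr > 0 && nc > std::numeric_limits<idx_t>::max () / nr)
    throw std::overflow_error ("dimension too large for Octave's index type");

  return nr * nc;   // safe: the product fits in idx_t
}
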

One related item.  std::numeric_limits<octave_idx_type>::max is (as printed out using std::cerr):

OIT MAX: 9223372036854775807

That's 9.2e18, or 2^63 - 1.  So it is already within a factor of two of 2^64, the addressable memory limit...and that is on the order of 10^19 "elements".  First, given the practically unachievable memory size, does it make sense for this error message:


octave:26> fid = fopen("zeros1000by61.dat","r"); tic; xt = fread (fid, [1 10000000000], 'single'); toc; fclose(fid);
error: out of memory or dimension too large for Octave's index type


to include the phrase "or dimension too large for Octave's index type"?  In reality, judging from pt-eval.cc, the std::bad_alloc is most likely only indicating "insufficient memory".  (It could be some obscure thing with user-account security; I don't know.)  Here is the code:


        catch (const std::bad_alloc&)
          {
            // FIXME: We want to use error_with_id here so that give users
            // control over this error message but error_with_id will
            // require some memory allocations.  Is there anything we can
            // do to make those more likely to succeed?

            error_with_id ("Octave:bad-alloc",
                           "out of memory or dimension too large for Octave's index type");


I'm careful to say "insufficient memory", as opposed to "out of memory" or "no memory left" in regard to the FIXME comment just above.  This FIXME is making it sound as though there is no memory left, but there probably is plenty of memory left for error messages, just not enough for some huge vector.  So, I'm not sure what the concern is.

Anyway, getting back on point, the "or dimension too large" seems a red herring.  Can it be dropped?  Or maybe it is meant to be a suggestion, like "out of memory (possibly dimension too large)"?  I don't know; is there currently a custom "new" installed that can more specifically state that D1 x D2 x ... x DN grows outside the index-type max?  And couldn't an error be thrown prior to attempting the new?

Something else is odd here:


octave:9> fid = fopen("zeros1000by61.dat","r"); tic; xt = fread (fid, [9223372036854775807, 9223372036854775807], 'char'); toc; fclose(fid);
OIT MAX: 9223372036854775807
nr: -9223372036854775808  nc: -9223372036854775808
(std::numeric_limits<octave_idx_type>::max () / nr): 0
nr * nc: 0
Elapsed time is 0.000128984 seconds.


The std::cerr << nc may be misinterpreting the sign of octave_idx_type, but the overflow test I put in place (see above) doesn't fail...that's odd, and nr*nc comes out to be zero.  It should either saturate at the max octave_idx_type or wrap around.

Trying to get away from possible sign/unsigned issues for nr and nc, let's divide by 3:


octave:10> fid = fopen("zeros1000by61.dat","r"); tic; xt = fread (fid, [9223372036854775807/3, 9223372036854775807/3], 'char'); toc; fclose(fid);
OIT MAX: 9223372036854775807
nr: 3074457345618258432  nc: 3074457345618258432
(std::numeric_limits<octave_idx_type>::max () / nr): 3
nr * nc: -8198552921648660480
input_buf_size: 1048576
error: out of memory or dimension too large for Octave's index type


Oh, I confirmed that the overflow check I put in place is correctly flagging the error, but somehow, somewhere, that longer error message appears instead of "fread: dimension too large for Octave's index type".  Is it because of the interpreter's try/catch?

Dan Sebald <sebald>
Tue 24 Jul 2018 06:35:47 PM UTC, comment #18: 

Probably need to add some extra BIST tests to either fread or io.tst for all of these corner cases.

Rik <rik5>
Group administrator
Tue 24 Jul 2018 05:52:54 AM UTC, comment #17: 

OK, I see what you are saying.  Just some small tweaks are needed then, but it's the kind of thing that needs me to stare at the details of the integer math; so when I find some free time in the next day or two I'll fix those cases.  (I fixed the first failure, then found an issue when the number of elements to read is not a multiple of block_size, e.g., a block_size of 2 but 3 elements to read--there's a last partial block.)
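
The bookkeeping in question is basically this (illustrative variable names, not the patch itself):


// Sketch only: with nel elements requested in groups of block_size,
// the last group may be partial and has to be read separately.
#include <iostream>

int main ()
{
  long nel = 3, block_size = 2;

  long full_blocks = nel / block_size;   // 1 complete block of 2 elements
  long last_block  = nel % block_size;   // 1 element left in a partial block

  std::cout << full_blocks << " full block(s) + "
            << last_block << " element(s) in a trailing partial block\n";
  return 0;
}
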

Dan Sebald <sebald>
Mon 23 Jul 2018 11:04:56 PM UTC, comment #16: 

I think this is a good start.  The improvements in speed are impressive.  For testing, try


cd test
test io.tst


In this case, at least one of the tests is failing because while the block_size is correct, it doesn't read N_blocks.



block_size: 2
N_block: 262144
***** test
 [id, msg] = tmpfile ();
 if (id < 0)
   __printf_assert__ ("tmpfile failed: %s\n", msg);
 else
   unwind_protect
     fwrite (id, char (0:15));
     frewind (id);
     [data, count] = fread (id, inf, "2*uint8", 2);
     assert (data, [0; 1; 4; 5; 8; 9; 12; 13]);
     assert (count, 8);
   unwind_protect_cleanup
     fclose (id);
   end_unwind_protect
 endif
!!!!! test failed
ASSERT errors for:  assert (data,[0; 1; 4; 5; 8; 9; 12; 13])

  Location  |  Observed  |  Expected  |  Reason
     .          O(4x1)       E(8x1)      Dimensions don't match
octave:7>  [id, msg] = tmpfile ();
octave:8>     fwrite (id, char (0:15));
octave:9>      frewind (id);
octave:10>      [data, count] = fread (id, inf, "2*uint8", 2);
while
block_size: 2
N_block: 262144
octave:11> count
count =  4
octave:12> data
data =

   0
   1
   4
   5



Rik <rik5>
Group administrator
Mon 23 Jul 2018 09:08:24 AM UTC, comment #15: 

Here's the latest development result (similar to the result you listed):


octave:21> fid = fopen("zeros1000by61.dat","r");
octave:22> tic; xt = fread (fid, [1000, 60], 'single'); toc
Elapsed time is 0.0011611 seconds.
octave:23> fclose(fid);
octave:24> fid = fopen("zeros1000by61.dat","r");
octave:25> tic; xt = fread (fid, [1000, 60], 'single', 4); toc
Elapsed time is 0.0222042 seconds.


I'm attaching a very crude implementation of the post-read large buffer condensation method.  It may crash for the non-skip case right now, but I'm just trying to illustrate the improvement in speed for the skip case:


octave:1> fid = fopen("ramp1000by61.dat","r");
octave:2> tic; xt = fread (fid, [1000, 60], 'single', 4); toc
block_size: 1
N_block: 60000
Elapsed time is 0.00137806 seconds.
octave:3> xt(1,:)
ans =

 Columns 1 through 16:

    1    3    5    7    9   11   13   15   17   19   21   23   25   27   29   31

 Columns 17 through 31:

   33   35   37   39   41   43   45   47   49   51   53   55   57   59   61

octave:4> size(xt)
ans =

   1000     31


(file #44607)

Dan Sebald <sebald>
Mon 23 Jul 2018 06:49:38 AM UTC, comment #14: 

OK, significant improvement.  However, I see your point.  Actually, looking at this again, I bet the CPU consumer is not the read/skip, but the high recurrence of calling "new".  When there are skip bytes, those small hunks are all allocated dynamically.  I'm pretty certain that is a significant factor because there is a lot of OS overhead in finding and keeping track of the memory.

Thinking about this, the current setup is optimal when SKIP is very much larger than the block size.  But if SKIP is on the order of the block size, it might not be all that difficult to read in large quantities.  Mainly it would involve doing a little extra math in computing the byte count to read, and then tossing out the skipped bytes right before


            input_buf_list.push_back (input_buf);


which is quite easy because the buffer can be condensed in place with no new memory.  I'll have a look sometime this week.
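
Roughly, something like this (an untested sketch with made-up names, not the eventual patch): read data+skip bytes per block into one large buffer, then slide the kept bytes together before pushing the buffer onto the list.


// Sketch only: pack the kept bytes to the front of the buffer.  No new
// allocation is needed because the destination never overtakes the source.
#include <cstddef>
#include <cstring>

std::size_t condense_in_place (char *buf, std::size_t n_blocks,
                               std::size_t block_bytes, std::size_t skip)
{
  for (std::size_t i = 1; i < n_blocks; i++)      // block 0 is already in place
    std::memmove (buf + i * block_bytes,          // packed position
                  buf + i * (block_bytes + skip), // position as read from disk
                  block_bytes);

  return n_blocks * block_bytes;                  // bytes kept
}
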

Dan Sebald <sebald>
Mon 23 Jul 2018 05:16:11 AM UTC, comment #13: 

I used arithmetic and the number of bytes read to eliminate one of the calls to tellg.  I also removed the unnecessary check on (! is) at the end of the skip block code, as suggested in comment #6.  See https://hg.savannah.gnu.org/hgweb/octave/rev/0812413a0bb7.
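
The idea, in sketch form (the real change is in the changeset linked above; the names here are illustrative), is to remember the position once and advance it by the bytes actually read and skipped:


// Sketch only: track the stream position arithmetically instead of
// calling tellg () on every iteration.
#include <istream>

std::streamoff read_blocks (std::istream& is, char *buf,
                            std::streamsize block_bytes,
                            std::streamsize skip, int n_blocks)
{
  std::streamoff pos = is.tellg ();   // one call, outside the loop

  for (int i = 0; i < n_blocks && is; i++)
    {
      is.read (buf + i * block_bytes, block_bytes);
      pos += is.gcount ();            // bytes actually read
      is.seekg (skip, std::ios::cur);
      pos += skip;                    // no tellg () needed here
    }

  return pos;                         // current position by bookkeeping
}
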

Altogether it produced a 40% savings.  Test results are now


skip = 0: Elapsed time is 0.00130677 seconds.
skip = 4: Elapsed time is 0.033772 seconds.


So using skip is now 26X slower than not using skip, but it started out at 182X.

The next step would be harder, so I am going to stop.  The problem now is that when skip != 0 the stream::read function is called in units of block_size, which for 'single' is just 4 bytes.  When skip is zero, the function reads in very large blocks, which is much more efficient.



Rik <rik5>
Group administrator
Mon 23 Jul 2018 03:41:57 AM UTC, comment #12: 

I took the next suggestion and replaced the C functions tell and seek with the C++ stream functions tellg and seekg.  This resulted in another 2.5X speedup.  See https://hg.savannah.gnu.org/hgweb/octave/rev/db326f3aacf4.


Rik <rik5>
Group administrator
Fri 20 Jul 2018 06:02:16 PM UTC, comment #11: 

OK, well


                else
                  seek (skip, SEEK_CUR);


probably isn't as fast as I imagined.  You'd think the C library would know enough to simply advance its buffer pointer rather than actually seeking around on the disk.  (But the library may just be using some low-level I/O.)  In any case, a good test would be to take

skip != 0

out of the test and then feed in a value of


tic; xt = fread (fid, [1000, 60], 'single', 0); toc


I suppose it could be the use of


                off_t orig_pos = tell ();


as well.  That position could be computed (kind of a pain), but I think the better way to go is to simply let the C library deal with running out of data (some error flags should be set, just check those after the fact).  That would avoid the use of tell().

It would be good to know exactly what routine is causing the slowdown, the tell() or the seek().  If it were a simple case of reading X bytes, that's fine, but that means creating a memory space to hold those bytes.

Dan Sebald <sebald>
Fri 20 Jul 2018 05:26:49 PM UTC, comment #10: 

I took the first of the most obvious suggestions, calculating the size of the file just once and storing eof_pos in a variable.  This produces a 2.5X speed-up, and no regressions during testing, so I committed that change here (https://hg.savannah.gnu.org/hgweb/octave/rev/336267b16a3d).

Test results were


-------------------------------------------------------------------------
Before
-------------------------------------------------------------------------
octave:18> tic; xt = fread (fid, [1000, 60], 'single', 0); toc
Elapsed time is 0.00138497 seconds.

octave:21> tic; xt = fread (fid, [1000, 60], 'single', 4); toc
Elapsed time is 0.252382 seconds.

-------------------------------------------------------------------------
After
-------------------------------------------------------------------------
octave:27> tic; xt = fread (fid, [1000, 60], 'single'); toc
Elapsed time is 0.00123215 seconds.

octave:34> tic; xt = fread (fid, [1000, 60], 'single', 4); toc
Elapsed time is 0.110174 seconds.


It is still the case that turning on 'skip' is approximately 100X slower (0.110 vs. 0.0012).  Maybe the next low-hanging optimization would be to just read SKIP bytes from 'is' and discard them rather than seeking.  That would require benchmarking.
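
For the record, the read-and-discard variant could be as simple as std::istream::ignore, which consumes the bytes from the already-buffered stream instead of seeking (sketch only; whether it actually beats seekg is exactly what would need benchmarking):


// Sketch only: discard SKIP bytes by reading them, then report whether
// the stream ran out of data in the process.
#include <istream>

bool skip_bytes (std::istream& is, std::streamsize skip)
{
  if (skip > 0)
    is.ignore (skip);                  // reads and discards up to `skip` bytes

  return ! is.eof () && ! is.fail ();  // false once EOF was hit
}
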

Rik <rik5>
Group administrator
Thu 14 Jun 2018 07:38:38 AM UTC, comment #9: 

There are about 20 tests involving fread() in the file io.tst, and of those maybe a half dozen involve the SKIP parameter:


test/io.tst:%!       x(i) = fread (id, [1, 1], type_list{i});
test/io.tst:%!   s_out = fread (id, numel (s_in), sprintf ("%s=>%s", cls, cls));
test/io.tst:%!   m_out = fread (id, numel (m_in), sprintf ("%s=>%s", cls, cls));
test/io.tst:%! y = fread (id, Inf, "uchar=>char");
test/io.tst:%!error <Invalid call to fread> fread ()
test/io.tst:%!error <Invalid call to fread> fread (1, 2, "char", 1, "native", 2)
test/io.tst:%!error fread ("foo")
test/io.tst:%! [data, count] = fread (id);
test/io.tst:%! [data, count] = fread (id, 'int16');
test/io.tst:%! [data, count] = fread (id, [10, 2], 'int16');
test/io.tst:%! [data, count] = fread (id, [2, 10], 'int16');
test/io.tst:%! [data, count] = fread (id, inf, "2*uint8", 2);
test/io.tst:%! [data, count] = fread (id, 3, "2*uint8", 3);
test/io.tst:%! [data, count] = fread (id, 3, "2*uint8", 3);
test/io.tst:%! [data, count] = fread (id, 3, "2*uint8", 3);
test/io.tst:%! [data, count] = fread (id, 3, "2*uint8", 3);
test/io.tst:%! [data, count] = fread (id, [1, Inf], "4*uint16", 3);
test/io.tst:%! [data, count] = fread (id, [3, Inf], "4*uint16", 3);
test/io.tst:%! [data, count] = fread (id, [2, 3], "char");


They mostly seem to require a prior fwrite() to generate a file.

These tests can be run via


octave:7> [p, n, xf, xb, sk, rtsk, rgrs] = test ("/home/linux/octave/octave/octave/test/io.tst")
p =  157
n =  157
xf = 0
xb = 0
sk = 0
rtsk = 0
rgrs = 0


However, this test takes quite a long time to run, so it isn't suitable for the write-and-debug stage.

Dan Sebald <sebald>
Wed 13 Jun 2018 08:29:41 PM UTC, comment #8: 

References are always valid.  The "if (! is)" check tests whether the stream state is still good.

Coprocessor?  We are talking about an integer division here...

But in any case, I'm pretty sure that any compiler we use is going to move an expression out of the loop if possible, so I don't think that's a serious issue, though defining a separate variable with a meaningful name might be useful.

I apologize if this code is convoluted, but it was the best I could do the last time I looked at speeding it up and making it do all the correct things with regard to skipping bytes, reading different types of values, and transforming them to the final requested types.

Since compatibility is also important here, maybe it would be best to start with trying to write some more comprehensive tests before overhauling the code for speed?

John W. Eaton <jwe>
Group administrator
Wed 13 Jun 2018 07:57:53 PM UTC, comment #7: 

Avoid putting computations in loop limits, e.g.:


for (int i = 0; i < input_buf_elts/block_size; i++)


if the variables do not change within the loop (especially divisions, given that they are either implemented in software or use a coprocessor, which means more clock cycles).  Instead, pre-compute the value and store it in a stack variable.
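
In other words, something like this (the same loop as in comment #5, with the invariants hoisted; an optimizing compiler will usually do this anyway):


// Sketch only: both loop bounds are invariant, so compute them once.
void compact (char *buf, int input_buf_elts, int block_size,
              int input_elt_size, int skip)
{
  const int n_blocks    = input_buf_elts / block_size;   // hoisted division
  const int block_bytes = input_elt_size * block_size;   // hoisted multiply

  for (int i = 0; i < n_blocks; i++)
    for (int j = 0; j < block_bytes; j++)
      buf[i*block_bytes + j] = buf[i*(block_bytes + skip) + j];
}
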

Dan Sebald <sebald>
Wed 13 Jun 2018 05:59:25 PM UTC, comment #6: 

You could print out the value of input_buf_size to confirm, i.e.,


std::cerr << "IBS: " << input_buf_size << "\n";


then run an example using a non-zero skip value.  But, yeah, I would think the skip always has to be done once per, let's say, "read group" or "read vector".

True about the cost of function calls, especially since it appears seek() is an added C++ layer.  The routine is not using the C++ stream library's seekg() method directly.  Instead, there seems to be a virtual octave::base_stream class function seek().

Perhaps that is why there's so much activity going on with the stream's integrity:


        while (is ...


"is" is a reference, not a pointer.  (Does it make sense to check if a reference is non-null?)  I don't think the C++ library would delete an existing stream as a result of a read() or seekg() member call, so why keep checking in the loop?  I'd think that just once prior is fine.  In any case, the lines with an asterisk below seem superfluous:


            if (is && skip != 0 && nel == block_size)
              {
...
*               if (! is)
*                 break;
              }


given that there is a check on "is" at the top of the loop, which is the next step in program flow.

Your prescription sounds good.  Eliminate the skip==0 check and just make it generic where the skip value happens to be 0.

There's an added benefit to not using such small atomic reads, which is that the routine is using "new" within the loop to create a list of data chunks.  It must then, at the end of all those reads, construct a valid octave_value data object with the routine


        retval = finalize_read (input_buf_list, input_buf_elts, count,
                                nr, nc, input_type, output_type, ffmt);


I think that could be a very large number of hunks, which means inefficient calls to "new" all the time, plus the list management


input_buf_list.push_back (input_buf);


A better way to do these dynamic memory kinds of things is to start with a small buffer or "work space", and when that memory area runs out, create one twice the size, copy the old data into the new area, and free the old one.  That way the size of the workspace remains on the order of the amount of data being read.  It's actually fairly efficient.  Just as you're suggesting to make the processing into a streamlined operation at the CPU level, block copies of data are really fast; just set up the CPU for a tight loop that copies data from one place to another and let it chug away.  (That would be a separate changeset.)
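
A bare-bones sketch of that grow-by-doubling scheme (std::vector<char> would give the same amortized behavior for free, but spelling it out shows the idea; the names are illustrative):


// Sketch only: append chunks into one workspace that doubles when full,
// so the number of allocations grows only logarithmically with the data.
#include <cstddef>
#include <cstring>

struct workspace
{
  char *data = nullptr;
  std::size_t size = 0;        // bytes in use
  std::size_t capacity = 0;    // bytes allocated
};

void append (workspace& w, const char *chunk, std::size_t n)
{
  if (w.size + n > w.capacity)
    {
      std::size_t new_cap = w.capacity ? 2 * w.capacity : 4096;
      while (new_cap < w.size + n)
        new_cap *= 2;

      char *bigger = new char[new_cap];
      std::memcpy (bigger, w.data, w.size);   // one fast block copy
      delete [] w.data;
      w.data = bigger;
      w.capacity = new_cap;
    }

  std::memcpy (w.data + w.size, chunk, n);
  w.size += n;
}
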

Dan Sebald <sebald>
Wed 13 Jun 2018 02:27:53 PM UTC, comment #5: 

I just now learned about the possibility of reading blocks of elements before some bytes are skipped.  Thus, block_size is not necessarily equal to one.  It seems the code snippet should therefore read


for (int i = 0; i < input_buf_elts/block_size; i++)
  for (int j = 0; j < input_elt_size*block_size; j++)
    input_buf[i*(input_elt_size*block_size) + j] = input_buf[i*(input_elt_size*block_size + skip) + j];


Michael Leitner <mleitner>
Wed 13 Jun 2018 08:55:49 AM UTC, comment #4: 

Of course the actual reading of the data is buffered.  The converse would imply that for every double you need to initiate a new communication with the hard disk, and that would probably correspond to a slowdown by a factor of 10^5 rather than the factor of 100 (on my computer) seen in the present case.

I am getting lost in this convoluted code, but it seems that it could be simplified considerably: the reading is done in line 6623, where a buffer of size input_buf_size is read into a newly allocated char buffer, which is then pushed into a list. The size of this buffer is just input_buf_elts (line 6594), which for skip==0 is comfortably large (line 6583) or even the whole file (line 6585), but otherwise it is an ominous block_size (line 6588). However, as the skipping is indeed done between the reads (line 6655), this has to mean that block_size is necessarily equal to 1. Or am I misunderstanding something?

Of course your very last suggestion in comment #1 would be an improvement: as it is, for skip>0 you do two tells and three seeks per read element. It would be better to move this out of the loop, initially do one seek to the end, get the position, compute the number of elements to be read, seek back to the beginning, read all the elements in a for loop, and finally position the file pointer. Then you have just one seek per read element, and you do not need to rely on flags set by the library.
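
In sketch form (illustrative names; the real code also has to handle block_size and the "N*type" conversions), that up-front computation would look like:


// Sketch only: one trip to the end of the stream tells us how many
// (element + skip) groups remain, so the per-element tells/seeks go away.
#include <istream>

long count_readable_elts (std::istream& is, long elt_bytes, long skip)
{
  std::streamoff start = is.tellg ();
  is.seekg (0, std::ios::end);
  std::streamoff end = is.tellg ();
  is.seekg (start, std::ios::beg);      // back to where we were

  std::streamoff remaining = end - start;

  // Each element is followed by `skip` bytes, except possibly the last one.
  return (remaining + skip) / (elt_bytes + skip);
}
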

However, you still have one read and one seek per read element. So even if they are buffered, function calls always cost you. Going from one read, three seeks, and two tells to one read and one seek would give you a factor three and not more, I would guess, reducing the present factor 100 to 33.

So my suggestion would be to use large buffers as with skip==0 (increased by the skipped bytes) whenever skip is not too large, and after line 6623 insert the lines


for (int i = 0; i < input_buf_elts; i++)
  for (int j = 0; j < input_elt_size; j++)
    input_buf[i*input_elt_size + j] = input_buf[i*(input_elt_size + skip) + j];


delete the whole section at lines 6635-6659, and insert a last line to position the file pointer correctly (if that should be necessary).  Then reading a file of a given size to the end should take the same time whether skip==0 or not.

Michael Leitner <mleitner>
Tue 12 Jun 2018 08:16:13 PM UTC, comment #3: 

You may be right, meaning that this is one of those situations where it would involve implementing the two approaches and comparing.  It might even be system dependent.

What you describe, i.e., first reading in a block of data, is certainly something I've done in other applications and it is efficient.  It's especially useful when having to access the data more than once, say for filtering a big block of data, level crossing, etc.  I often do this in a "ping-pong" buffer kind of design.

If something more sophisticated isn't needed though, keep in mind that the C library is most likely already doing just what you describe.  It's buffered I/O, so behind the curtain it is reading in a block of data and keeping a file pointer into that buffer (not fetching every new bit of data from disk with every fread()).  Doing that buffering a second time just adds a little more, but it is only a small addition because the C routines are optimized to bring in a block of bytes.  That's why I think "seek (skip, SEEK_CUR);" is probably pretty efficient; it's just advancing the pointer in the C library.  Whereas "seek (0, SEEK_END);" is inefficient because it means jumping to the end of the file, streaming a whole new bunch of data from disk into the cache.

So in summary, it depends but either approach will be much better than moving the file pointer to the EOF and back before every block read.

Dan Sebald <sebald>
Tue 12 Jun 2018 07:51:40 AM UTC, comment #2: 

Wouldn't it be even better to just do a large binary read into an existing buffer buf1 and then pick out the values that you need?  That is, in C it would look something like


char *buf1;    /* raw bytes as read from the file */
double *buf2;  /* extracted output values */

for (int i = 0; i < N; i++)
  buf2[i] = *(double *) (buf1 + (sizeof (double) + skip) * i);


for the case of output in doubles.  Of course, for very large skip values one should fall back to the present implementation, and for very large N you would want to do this piecemeal in order not to explode your memory requirements.  If you think that this copying step is inefficient, remember that in the present case you even have some function calls per value, and that whenever you have a non-starred precision argument (other than double) you have to do the casting and copying in any case.
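
A rough piecemeal version of the same idea (sketch only, with made-up names; a trailing element whose skip bytes are cut off at EOF is simply dropped here):


// Sketch only: read a bounded chunk of raw bytes, pick the doubles out
// with a strided copy, and repeat until the requested count is reached.
#include <algorithm>
#include <cstring>
#include <istream>
#include <vector>

std::size_t read_strided_doubles (std::istream& is, double *out,
                                  std::size_t n, std::size_t skip)
{
  const std::size_t stride = sizeof (double) + skip;
  const std::size_t chunk_elts = 4096;             // bounds memory use
  std::vector<char> raw (chunk_elts * stride);

  std::size_t done = 0;
  while (done < n && is)
    {
      std::size_t want = std::min (chunk_elts, n - done);
      is.read (raw.data (), want * stride);
      std::size_t got = is.gcount () / stride;     // whole elements only

      for (std::size_t i = 0; i < got; i++)
        std::memcpy (out + done + i, raw.data () + i * stride,
                     sizeof (double));             // memcpy avoids unaligned loads

      done += got;
      if (got < want)
        break;                                     // hit EOF
    }

  return done;
}
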

Michael Leitner <mleitner>
Mon 11 Jun 2018 08:48:56 PM UTC, comment #1: 

Without even testing, I think there is a clear source of slow down.  The code that stands out to me is in libinterp/corefcn/oct-stream.cc, around lines 6635-6659:


            if (is && skip != 0 && nel == block_size)
              {
                // Seek to skip.
                // If skip would move past EOF, position at EOF.

                off_t orig_pos = tell ();

                seek (0, SEEK_END);

                off_t eof_pos = tell ();

                // Is it possible for this to fail to return us to
                // the original position?
                seek (orig_pos, SEEK_SET);

                off_t remaining = eof_pos - orig_pos;

                if (remaining < skip)
                  seek (0, SEEK_END);
                else
                  seek (skip, SEEK_CUR);

                if (! is)
                  break;
              }


The above is inside a while loop that is reading in blocks of data.  The seek (skip, SEEK_CUR) is probably OK, but the stuff preceding it that checks the end-of-file position is most likely disrupting the cache and slowing things down.

If possible, the context might allow checking the EOF location just once before the loop starts.  But another option might be to use the stream's existing features.  I'm pretty sure that there is some way to simply do the seek and then inquire about the status of the input stream, i.e., whether it has advanced past the end of the file.

Furthermore, the read routine will set some flags if it runs out of data to read at the EOF:

http://www.cplusplus.com/reference/istream/istream/read/
"If the input sequence runs out of characters to extract (i.e., the end-of-file is reached) before n characters have been successfully read, the array pointed to by s contains all the characters read until that point, and both the eofbit and failbit flags are set for the stream."

so just check those flags.  Upon getting those error flags, THEN retroactively compute how many fields were actually read.  Even if that number isn't readily available in the stream library, simply keep track of "cpos = tell();" prior to every file read.  In that case, one would know the position of the start of the last read, and the EOF position can be obtained easily.

In summary, the way to speed this up is to not compute how many bytes to read before reading.  Instead, just let the C library do its thing.  (If one wants to compute how many bytes to read, do the whole computation beforehand, i.e., compute something like N_blocks, the number of full blocks to read, and N_leftover, the number of bytes left over for the last, non-full block.)
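
Something along these lines (sketch only, illustrative names): no pre-read EOF probing, just read and look at the flags and gcount() afterwards.


// Sketch only: read a block and let the stream flags report a short read.
#include <istream>

std::streamsize read_block (std::istream& is, char *buf,
                            std::streamsize block_bytes,
                            std::streamsize skip)
{
  is.read (buf, block_bytes);            // may stop short at EOF
  std::streamsize got = is.gcount ();    // how much actually arrived

  if (is.eof () || is.fail ())
    return got;                          // caller sees the short count

  if (skip > 0)
    is.seekg (skip, std::ios::cur);      // only skip after a full read

  return got;
}
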

Dan Sebald <sebald>
Mon 11 Jun 2018 06:41:30 PM UTC, original submission:  

data = single(fread(fid,[Var.no+1 nl],'single',0));
is quite speedy while
data = fread(fid,[Var.no nl],'single',4);
is very very very slow.

Reducing Var.no and increasing the number of skips increases the speed

data = fread(fid,[2 nl],'single',4*(Var.no-1));
takes 23.6 sec while
data = fread(fid,[1 nl],'single',4*(Var.no));
takes 11.6 sec while
data = single(fread(fid,[Var.no+1 nl],'single',0));
takes 0.2 sec.

(Var.no=60)

The code works fine in Matlab.
The code is intended for reading a double column from a mixed dataset.  The first row is 'double' and the rest is single.

Anonymous

 

Attached Files
file #44359:  LTSpiceRaw_Fa.m added by None (5KiB - application/octet-stream)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by rik5 (Posted a comment)
  • -email is unavailable- added by jwe (Posted a comment)
  • -email is unavailable- added by mleitner (Posted a comment)
  • -email is unavailable- added by sebald (Posted a comment)
  • -email is unavailable- added by None (Submitted the item)

    Follow the 7 latest changes.

    Date        Changed by  Updated Field      Previous Value      =>  Replaced by
    2018-07-23  rik5        Status             Confirmed               In Progress
    2018-07-23  sebald      Attached File      -                       Added octave-fread_crude_large_hunk_prototype-djs2018jul22.diff, #44607
    2018-07-23  rik5        Priority           5 - Normal              3 - Low
    2018-07-20  rik5        Status             None                    Confirmed
                            Release            4.4.0                   dev
                            Operating System   Microsoft Windows       Any
    2018-06-11  None        Attached File      -                       Added LTSpiceRaw_Fa.m, #44359
