bug #54100: fread using SKIP larger than zero is extremely slow

Submitted by:  None
Submitted on:  Mon 11 Jun 2018 06:41:30 PM UTC
Category:  Octave Function
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Performance
Status:  None
Assigned to:  None
Originator Name:  Gisle J Torvetjonn
Originator Email:  -email is unavailable-
Open/Closed:  Open
Release:  4.4.0
Operating System:  Microsoft Windows


Thu 14 Jun 2018 07:38:38 AM UTC, comment #9:

There are about 20 tests involving fread() in the file io.tst, and of those maybe a half dozen involve the SKIP parameter. They mostly seem to require a prior fwrite() to generate a file.

These tests can be run via Octave's test suite, but that takes quite a while to run, so it isn't suitable for the write-and-debug stage.

Dan Sebald <sebald>
Wed 13 Jun 2018 08:29:41 PM UTC, comment #8:

References are always valid. The "if (! is)" check tests whether the stream state is still good.
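
For illustration, a minimal standalone example of what that kind of check does on a standard stream (the names here are not from oct-stream.cc):

#include <iostream>
#include <istream>

// Attempt a read and then test the stream itself.  "! is" invokes the
// stream's operator!, which reports whether failbit or badbit is set
// after the last operation; it says nothing about the reference "is",
// which is always valid.
void check_stream (std::istream& is)
{
  char buf[16];
  is.read (buf, sizeof (buf));

  if (! is)
    std::cerr << "stream state is no longer good (eof/fail/bad)\n";
}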

Coprocessor? We are talking about an integer division here...

But in any case, I'm pretty sure that any compiler we use is going to move an expression out of the loop if possible, so I don't think that's a serious issue, though defining a separate variable with a meaningful name might be useful.

I apologize if this code is convoluted, but it was the best I could do the last time I looked at speeding it up and making it do all the correct things with regard to skipping bytes, reading different types of values, and transforming them to the final requested types.

Since compatibility is also important here, maybe it would be best to start with trying to write some more comprehensive tests before overhauling the code for speed?

John W. Eaton <jwe>
Project Administrator
Wed 13 Jun 2018 07:57:53 PM UTC, comment #7:

Avoid putting computations in loop limits when the variables do not change within the loop, especially divisions, since they are either implemented algorithmically or use a coprocessor, which means more clock cycles. Instead, pre-compute the value and store it in some stack variable.
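
A generic illustration of the point (not the actual oct-stream.cc code; the names are made up):

#include <cstddef>

void process_element (std::size_t i)
{
  // placeholder for the per-element work
  (void) i;
}

void consume (std::size_t nbytes, std::size_t elt_size)
{
  // Putting the division in the loop condition, as in
  //
  //   for (std::size_t i = 0; i < nbytes / elt_size; i++)
  //     process_element (i);
  //
  // risks re-evaluating it on every iteration.  Pre-compute the limit
  // once and keep it in a local (stack) variable instead:

  std::size_t count = nbytes / elt_size;

  for (std::size_t i = 0; i < count; i++)
    process_element (i);
}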

Dan Sebald <sebald>
Wed 13 Jun 2018 05:59:25 PM UTC, comment #6:

You could print out the value of input_buf_size to confirm, then run an example using a non-zero skip value. But, yeah, I would think the skip always has to be done in one, let's say, "read group" or "read vector".

True about the cost of function calls, especially since it appears seek() is an added C++ layer. The routine is not using the C++ stream library's seekg() method directly. Instead, there seems to be a virtual octave::base_stream class function seek().

Perhaps that is why there's so much activity going on with the stream's integrity. "is" is a reference, not a pointer (does it make sense to check whether a reference is non-null?), and I don't think the C++ library would delete an existing stream as a result of a read() or seekg() member call, so why keep checking in the loop? I'd think that checking just once beforehand is fine. In any case, the repeated checks seem superfluous if there is a check at the top of the loop on "is", which is the next step in program flow.

Your prescription sounds good. Eliminate the skip==0 special case and just make the code generic, treating a skip value of 0 like any other.

There's an added benefit to not using such small atomic reads: the routine is using "new" within the loop to create a list of data chunks, and at the end of all those reads it must construct a valid octave_value data object from that list. With tiny reads that could be a very large number of chunks, which means inefficient calls to "new" all the time, plus the list management.

A better way to do these dynamic-memory kinds of things is to start with a small buffer or "work space", and when that memory area runs out, create one twice the size, copy the old data into the new one and free the old one. That way the size of the workspace remains on the order of the amount of data being read. It's actually fairly efficient. Just as you're suggesting to make the processing into a streamlined operation at the CPU level, block copies of data are really fast; just set up the CPU for a tight loop that copies data from one place to another and let it chug away. (That would be a separate changeset.)
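
A minimal sketch of that growth strategy in isolation (plain C++ streams; none of the names come from the Octave sources):

#include <cstddef>
#include <istream>
#include <vector>

// Read an entire stream into one contiguous workspace.  Whenever the
// workspace fills up, grow it to twice its size (std::vector::resize
// copies the old contents and releases the old storage), so the
// buffer stays on the order of the amount of data actually read.
std::vector<char> slurp (std::istream& is)
{
  std::vector<char> buf (4096);   // start with a small workspace
  std::size_t used = 0;

  while (is)
    {
      if (used == buf.size ())
        buf.resize (2 * buf.size ());   // double when out of room

      is.read (buf.data () + used,
               static_cast<std::streamsize> (buf.size () - used));
      used += static_cast<std::size_t> (is.gcount ());
    }

  buf.resize (used);   // trim to what was actually read
  return buf;
}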

Dan Sebald <sebald>
Wed 13 Jun 2018 02:27:53 PM UTC, comment #5:

I just now learned about the possibility to read blocks of elements before some bytes are skipped. Thus, block_size is not necessarily equal to one, and the code snippet I suggested should therefore be adjusted accordingly.

Michael Leitner <mleitner>
Wed 13 Jun 2018 08:55:49 AM UTC, comment #4:

Of course the actual reading of the data is buffered. The converse would imply that for every double you need to initiate a new communication with the hard disk, which would probably correspond to a slow-down by a factor of 10^5 rather than the factor of 100 (on my computer) seen in the present case.

I am getting lost in this convoluted code, but it seems that it could be simplified considerably: the reading is done in line 6623, where a buffer of size input_buf_size is read into a newly allocated char buffer, which is then pushed into a list. The size of this buffer is just input_buf_elts (line 6594), which for skip==0 is comfortably large (line 6583) or even the whole file (line 6585), but otherwise it is an ominous block_size (line 6588). However, as the skipping is indeed done between the reads (line 6655), this has to mean that block_size is necessarily equal to 1. Or am I misunderstanding something?

Of course your very last suggestion in comment #1 would be an improvement: as it is, for skip>0 you do two tells and three seeks per read element. It would be better to move this out of the loop, initially do one seek to the end, get the position, compute the number of elements to be read, seek back to the beginning, read all the elements in a for loop, and finally position the file pointer. Then you have just one seek per read element, and you do not need to rely on flags set by the library.
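
A rough sketch of that flow against a plain std::istream, assuming single-precision elements (the element type, the handling of a trailing partial stride, and error handling are placeholders, not the oct-stream.cc code):

#include <cstddef>
#include <istream>
#include <vector>

// One seek to the end up front to learn the file size, compute how
// many elements fit, seek back, and then do exactly one read and one
// seek per element.  A trailing element that is not followed by a
// full skip is ignored here for brevity.
std::vector<float> read_with_skip (std::istream& is, std::streamoff skip)
{
  is.seekg (0, std::ios::end);
  std::streamoff file_size = is.tellg ();

  std::streamoff stride = static_cast<std::streamoff> (sizeof (float)) + skip;
  std::streamoff count = file_size / stride;

  is.seekg (0, std::ios::beg);

  std::vector<float> out (static_cast<std::size_t> (count));

  for (std::streamoff i = 0; i < count; i++)
    {
      is.read (reinterpret_cast<char *> (&out[static_cast<std::size_t> (i)]),
               sizeof (float));
      is.seekg (skip, std::ios::cur);   // the only per-element seek
    }

  return out;
}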

However, you still have one read and one seek per read element. So even if they are buffered, function calls always cost you. Going from one read, three seeks, and two tells to one read and one seek would give you a factor three and not more, I would guess, reducing the present factor 100 to 33.

So my suggestion would be to use large buffers as with skip==0 (increased by the skipped bytes) whenever skip is not too large, insert a line after line 6623 that discards the skipped bytes from the buffer that was just read, delete the whole section 6635-6659, and insert a last line to position the file pointer correctly (if that should be necessary). Then reading a file of given size to the end should take the same time whether skip==0 or not.

Michael Leitner <mleitner>
Tue 12 Jun 2018 08:16:13 PM UTC, comment #3:

You may be right, meaning that this is one of those situations where it would involve implementing the two approaches and comparing. It might even be system dependent.

What you describe, i.e., first reading in a block of data, is certainly something I've done in other applications and it is efficient. It's especially useful when having to access the data more than once, say for filtering a big block of data, level crossing, etc. I often do this in a "ping-pong" buffer kind of design.

If something more sophisticated isn't needed though, keep in mind that the C library is most likely already doing just what you describe. It's buffered I/O, so behind the curtain it is reading in a block of data and keeping a file pointer into that buffer (not fetching every new bit of data from disk with every fread()). Doing that buffering a second time just adds a little more, but it is only a small addition because the C routines are optimized to bring in a block of bytes. That's why I think "seek (skip, SEEK_CUR);" is probably pretty efficient; it's just advancing the pointer in the C library. Whereas "seek (0, SEEK_END);" is inefficient because it means jumping to the end of the file, streaming a whole new bunch of data from disk into the cache.

So in summary, it depends, but either approach will be much better than moving the file pointer to the EOF and back before every block read.

Dan Sebald <sebald>
Tue 12 Jun 2018 07:51:40 AM UTC, comment #2:

Wouldn't it be even better to just do a large binary read into an existing buffer buf1 and then pick the values that you need? In C, for the case of output in doubles, it would look somehow like the sketch below. Of course, for very large skip values one should fall back to the present implementation, and for very large N you would want to do this piecemeal in order not to explode your memory requirements. If you think that this step of copying is inefficient, remember that in the present case you even have some function calls per value, and that whenever you have a non-starred precision argument (other than double) you have to do the casting and copying in any case.
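
A rough sketch of that idea, reading N single-precision values separated by skip bytes and converting them to doubles (buf1, N, and skip are illustrative names; error handling and a partial last element are glossed over):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One large binary read into buf1, then pick out the N single-precision
   values that sit stride bytes apart and cast them to double.  Error
   handling, a partial last element, and freeing of the result are left
   out for brevity.  */
double *read_strided (FILE *fid, size_t N, size_t skip)
{
  size_t stride = sizeof (float) + skip;
  char *buf1 = (char *) malloc (N * stride);
  double *out = (double *) malloc (N * sizeof (double));

  size_t got = fread (buf1, 1, N * stride, fid);   /* one big read */
  size_t n = got / stride;                         /* complete elements */

  for (size_t i = 0; i < n; i++)
    {
      float v;
      memcpy (&v, buf1 + i * stride, sizeof (float));
      out[i] = v;                                  /* cast to double */
    }

  free (buf1);
  return out;
}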

Michael Leitner <mleitner>
Mon 11 Jun 2018 08:48:56 PM UTC, comment #1:

Without even testing, I think there is a clear source of slowdown. The code that stands out to me is in libinterp/corefcn/oct-stream.cc, around lines 6635-6659.

That code is inside a while loop that is reading in blocks of data. The seek (skip, SEEK_CUR) is probably OK, but the stuff preceding it that checks the end-of-file position is most likely disrupting the cache and slowing things down.

If possible, the context might allow checking the EOF location just once before the loop starts. But another option might be to use the stream's existing features. I'm pretty sure that there is some way to simply do the seek and then inquire the status of the input stream, i.e., whether it has advanced past the end of the file.

Furthermore, the read routine will set some flags if it runs out of data to read at the EOF:

http://www.cplusplus.com/reference/istream/istream/read/
"If the input sequence runs out of characters to extract (i.e., the end-of-file is reached) before n characters have been successfully read, the array pointed to by s contains all the characters read until that point, and both the eofbit and failbit flags are set for the stream."

So just check those flags. Upon getting those error flags, THEN retroactively compute how many fields were actually read. Even if that number isn't readily available from the stream library, simply keep track of "cpos = tell();" prior to every file read. In that case one knows the position of the start of the last read, and the EOF position can be gotten easily.
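
A small standalone illustration of that pattern (names and the element type are placeholders):

#include <cstddef>
#include <istream>

// Try to read up to "count" floats in one go, and let the stream's
// flags tell us afterwards whether the EOF was hit; only then compute
// how many complete fields actually arrived.  gcount() gives the byte
// count directly; recording tell() just before the read and comparing
// it with the EOF position would work as well.
std::size_t read_fields (std::istream& is, float *buf, std::size_t count)
{
  is.read (reinterpret_cast<char *> (buf),
           static_cast<std::streamsize> (count * sizeof (float)));

  if (is.eof () || is.fail ())
    {
      // Short read: eofbit and failbit are set (see the istream::read
      // reference quoted above).
      std::size_t got = static_cast<std::size_t> (is.gcount ());
      return got / sizeof (float);
    }

  return count;
}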

In summary, the way to speed this up is to not compute how many bytes to read before reading; instead, just let the C library do its thing. (If one does want to compute how many bytes to read, do the whole computation beforehand, i.e., compute something like N_blocks, the number of full blocks to read, and N_leftover, the number of bytes left over for the last, non-full block.)

Dan Sebald <sebald>
Mon 11 Jun 2018 06:41:30 PM UTC, original submission:

data = single(fread(fid,[Var.no+1 nl],'single',0));
is quite speedy while
data = fread(fid,[Var.no nl],'single',4);
is very very very slow.

Reducing Var.no and increasing the number of skips increases the speed:

data = fread(fid,[2 nl],'single',4*(Var.no-1));
takes 23.6 sec while
data = fread(fid,[1 nl],'single',4*(Var.no));
takes 11.6 sec while
data = single(fread(fid,[Var.no+1 nl],'single',0));
takes 0.2 sec.

(Var.no=60)

The code works fine in Matlab.
The code is intended for reading a double column in a mixed dataset. The first row is 'double' and the rest is single.

Anonymous

 

Attached Files
file #44359:  LTSpiceRaw_Fa.m added by None (5KiB - application/octet-stream)

 

