bug #38905: Memory Leak in openmpi_ext

Submitter:  Sukanta Basu <basu0009>
Submitted:  Mon 06 May 2013 01:51:12 PM UTC
   
 
Category:  Octave Package
Severity:  3 - Normal
Priority:  5 - Normal
Item Group:  Performance
Status:  Fixed
Assigned to:  None
Originator Name:  Sukanta
Open/Closed:  Closed
Release:  3.6.4
Operating System:  GNU/Linux
Fixed Release:  None
Planned Release:  None


Wed 16 Nov 2016 10:03:06 PM UTC, comment #4: 

Hi,

This bug seems to be solved in the MPI package,
as discussed in this thread:

http://octave.1599824.n4.nabble.com/ltfat-1-4-2-released-tp4657394p4660716.html

So, even if it does not make much difference, I am changing its
status from 'Wont Fix' to 'Fixed'.

c.

Carlo de Falco <cdf>
Group Member
Wed 16 Nov 2016 07:42:27 PM UTC, comment #3: 

The openmpi_ext package no longer exists. If this problem still occurs with the current mpi package and a current version of Octave, this bug can be reopened and updated, or a new one filed.

Mike Miller <mtmiller>
Group Member
Mon 20 May 2013 09:25:56 AM UTC, comment #2: 

Riccardo just posted this comment:


Carlo,
I do not think this is a bug.
It is just that ... since openmpi_ext is a prototype, it needs more functions (like MPI_Isend and MPI_Irecv).
When the master has lots of slaves "blocked", memory will become exhausted by all the buffered matrices.
One solution would be to implement non-blocking messages and to "probe" for them.
Michael Creel polls for messages to gather as many as possible. Clearly this might be improved.
What do you think? Am I clear?
Very best,
Riccardo
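
For context, here is a minimal sketch of the non-blocking send/probe pattern Riccardo describes, written against the standard MPI C API (MPI_Isend, MPI_Iprobe and MPI_Wait are exactly the calls that openmpi_ext did not wrap at the time, which is his point). This is illustrative only, not code from the package:

#include <mpi.h>
#include <vector>

int main (int argc, char **argv)
{
  MPI_Init (&argc, &argv);

  int rank;
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  std::vector<double> buf (1000);
  MPI_Request req;

  if (rank == 0)
    {
      // Post the send and return immediately; buf must stay alive
      // until MPI_Wait confirms the request has completed.
      MPI_Isend (buf.data (), (int) buf.size (), MPI_DOUBLE, 1, 0,
                 MPI_COMM_WORLD, &req);
      // ... other work can overlap with the transfer here ...
      MPI_Wait (&req, MPI_STATUS_IGNORE);  // after this, buf may be reused
    }
  else if (rank == 1)
    {
      // "Probe" for a pending message without blocking.
      int ready = 0;
      MPI_Status status;
      while (! ready)
        MPI_Iprobe (0, 0, MPI_COMM_WORLD, &ready, &status);
      MPI_Recv (buf.data (), (int) buf.size (), MPI_DOUBLE, 0, 0,
                MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

  MPI_Finalize ();
  return 0;
}

The key difference from the blocking pattern discussed below is that the sender's buffer must be kept alive until MPI_Wait reports completion, whereas a blocking MPI_Send may release its buffer as soon as it returns.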



Riccardo,

Unfortunately, replying to messages on this tracker by hitting
"reply" in your mail client does not work; you need to click on
the link that takes you to the tracker and write the answer
there directly.

I am not completely sure I understand why you
expect blocking communications to use more memory
than non-blocking ones.

If I understand correctly, when a blocking send/receive is
issued, the program waits until the data transmission is
complete before proceeding to the next instruction.

Therefore, upon exit from the MPI_{Send,Recv} function there is
no need to keep any of the buffers used for communication, and
they should be freed.

And that is in fact how the code in openmpi_ext is implemented:
all local memory is allocated via the OCTAVE_LOCAL_BUFFER macro
and should therefore be released upon exit.
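
To make that concrete, here is a minimal, hypothetical oct-file-style sketch of the pattern just described (the helper name recv_row_vector is invented for illustration; this is not the actual openmpi_ext source):

#include <mpi.h>
#include <octave/oct.h>

// Receive n doubles from `source` into a scoped buffer and return
// them as an Octave row vector.  MPI_Recv blocks until the transfer
// is complete, and the buffer created by OCTAVE_LOCAL_BUFFER is
// released automatically when this function returns, so nothing
// should remain allocated afterwards.
static octave_value
recv_row_vector (int source, int tag, octave_idx_type n)
{
  OCTAVE_LOCAL_BUFFER (double, buf, n);

  MPI_Recv (buf, (int) n, MPI_DOUBLE, source, tag,
            MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  RowVector result (n);
  for (octave_idx_type i = 0; i < n; i++)
    result(i) = buf[i];

  return octave_value (result);  // buf goes out of scope here
}

If the package follows this pattern throughout, any leak would have to come from somewhere other than these communication buffers.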

So I don't see why you expect that the master may have "lots of
slaves blocked"; as far as I understand, each process should be
able to have only one pending communication at a time ...

Can you explain in more detail what you think is actually happening?

c.

Carlo de Falco <cdf>
Group Member
Fri 17 May 2013 08:06:49 PM UTC, comment #1: 

Riccardo,

I am not sure I understand all of your comments about this bug on the mailing list.
Do you think this is not a bug in openmpi_ext but rather incorrect usage? Or is it a bug in Octave itself?

c.

Carlo de Falco <cdf>
Group Member
Mon 06 May 2013 01:51:12 PM UTC, original submission:  

Hi,

I am using Octave 3.6.4 on Ubuntu 12.10 and 13.04. I have written code (~5000 lines) for solving a fluid dynamics problem using Octave and the openmpi_ext toolbox (version 1.1.0). I am noticing a strange problem: memory usage slowly increases with each iteration. I have seen this leak on all the platforms I have access to: Ubuntu (12.04, 12.10, and 13.04) and RedHat systems. The leak persists across all recent versions of Open MPI (1.6.2, 1.6.4, 1.7.1).

Since my original code is too complicated for others to debug, I created two sample scripts for testing (see attached). The speedtest.m file was originally written by Dr. Jeremy Kepner (MatlabMPI); I modified it to work with openmpi_ext.

When I ran this code with valgrind:

valgrind --leak-check=yes -v --log-file=Valgrind.out mpirun -np 2 octave -q --eval speedtest &

The summary of valgrind is:
==24981==    definitely lost: 42,031 bytes in 28 blocks
==24981==    indirectly lost: 25,802 bytes in 76 blocks
==24981==      possibly lost: 0 bytes in 0 blocks
==24981==    still reachable: 124,717 bytes in 603 blocks
==24981==         suppressed: 0 bytes in 0 blocks

The loss monotonically increases with the number of processors. The other script (OCTLES_TEST.m) shows a similar issue.

I also ran valgrind with massif:

valgrind --tool=massif --time-unit=ms  mpirun -np 4 octave -q --eval OCTLES_TEST &

The output of ms_print is attached. In general, the memory consumption increases with time (some fluctuations are noticeable). This should not be the case.
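
Note that, placed in front of mpirun, valgrind instruments the mpirun launcher itself rather than the Octave ranks it spawns (unless --trace-children=yes is given); to profile each rank separately, the usual approach is to put valgrind after mpirun, for example:

mpirun -np 2 valgrind --leak-check=yes --log-file=Valgrind.%p.out octave -q --eval speedtest

where %p in --log-file yields one log per process.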

I would appreciate it if you could help me identify the memory leak(s) in openmpi_ext.

Sukanta Basu <basu0009>

 

Attached Files
file #28028:  speedtest.m added by basu0009 (4KiB - text/x-objcsrc)
file #28029:  OCTLES_TEST.m added by basu0009 (737B - text/x-objcsrc)
file #28030:  Valgrind.out added by basu0009 (41KiB - application/octet-stream)
file #28031:  MSPRINT added by basu0009 (85KiB - application/octet-stream)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by mtmiller (Posted a comment)
  • -email is unavailable- added by cdf (Posted a comment)
  • -email is unavailable- added by cdf
  • -email is unavailable- added by basu0009 (Submitted the item)


    Follow 9 latest changes.

    Date        Changed by  Updated Field   Previous Value => Replaced by
    2016-11-16  cdf         Status          Wont Fix => Fixed
    2016-11-16  mtmiller    Status          Need Info => Wont Fix
    2016-11-16  mtmiller    Open/Closed     Open => Closed
    2013-05-20  cdf         Status          None => Need Info
    2013-05-17  cdf         Carbon-Copy     - => Added riccardo corradini <riccardocorradini@yahoo.it>
    2013-05-06  basu0009    Attached File   - => Added speedtest.m, #28028
    2013-05-06  basu0009    Attached File   - => Added OCTLES_TEST.m, #28029
    2013-05-06  basu0009    Attached File   - => Added Valgrind.out, #28030
    2013-05-06  basu0009    Attached File   - => Added MSPRINT, #28031
