Spyder debugger stuck in SciPy SLSQP - debugging

When I run the script in Spyder without the debugger I have no problem, but when I try to reach a breakpoint with Debug file (Ctrl+F5) it stops here, before the breakpoint:
ipdb> > c:\programdata\anaconda3\lib\site-packages\scipy\optimize\slsqp.py(313)_minimize_slsqp()
311 # meq, mieq: number of equality and inequality constraints
312 meq = sum(map(len, [atleast_1d(c['fun'](x, *c['args']))
1-> 313 for c in cons['eq']]))
314 mieq = sum(map(len, [atleast_1d(c['fun'](x, *c['args']))
315 for c in cons['ineq']]))
I don't have any breakpoint there, and I don't get any error message. When I try to continue to the next breakpoint with Ctrl+F12 I end up at exactly the same place, which is probably due to my iteration. What's going wrong?
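For context, the line where the debugger pauses is where SciPy counts the equality and inequality constraints by calling each constraint function once. A minimal, purely illustrative constrained call that passes through that code path might look like the sketch below (the objective and constraint are made up for the example):
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return np.sum(x ** 2)

# one equality constraint: x[0] + x[1] == 1 (illustrative)
cons = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1.0},)

res = minimize(objective, x0=[0.5, 0.5], method='SLSQP', constraints=cons)
print(res.x)
When stepping through such a call, the debugger will show SciPy's own frames (such as slsqp.py above) in addition to the user code.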

Related

Suppressing ipdb output in the Spyder IPython interpreter

I have reason to believe that my IPython interpreter is causing my kernel to die and restart, similar to the issues logged in this link and that link.
The latter link indicates that the error is caused by the debugger writing step-by-step ipdb output into the interpreter. One user reported that the behavior stopped when he (and I quote)
disabled logging to console before running in debug mode
How does one "disable logging to console" in the Spyder IDE/IPython? I really need to do this so I can at least step through my code...
EDIT
I would like to suppress this kind of output
ipdb> > d:\temp\other const models\plaxis\output\plotparfile.py(16)PlotParFile()
14 with open(filename,'r') as fid:
15 lines = fid.readlines()
---> 16 fid.close()
17 #split first line get header and pop it out
18 header = lines[0].split()
> d:\temp\other const models\plaxis\output\plotparfile.py(18)PlotParFile()
16 fid.close()
17 #split first line get header and pop it out
---> 18 header = lines[0].split()
19 lines.pop(0)
20
(Spyder developer here) That output is generated automatically and its purpose is to tell you where you are in your code while debugging.
Right now there are no options in Spyder to deactivate it. Besides, I really doubt that output could be the cause of any kernel failures.

Using ipdb to debug Python code in one cell (Jupyter or IPython)

I'm using a Jupyter (or IPython) notebook with Firefox, and want to debug some Python code in a cell. I am using 'import ipdb; ipdb.set_trace()' as a kind of breakpoint; for example, my cell has the following code:
a=4
import ipdb; ipdb.set_trace()
b=5
print a
print b
which after execution with Shift+Enter gives me this error:
--------------------------------------------------------------------------
MultipleInstanceError Traceback (most recent call last)
<ipython-input-1-f2b356251c56> in <module>()
1 a=4
----> 2 import ipdb; ipdb.set_trace()
3 b=5
4 print a
5 print b
/home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__init__.py in <module>()
14 # You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
15
---> 16 from ipdb.__main__ import set_trace, post_mortem, pm, run, runcall, runeval, launch_ipdb_on_exception
17
18 pm # please pyflakes
/home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__main__.py in <module>()
71 # the instance method will create a new one without loading the config.
72 # i.e: if we are in an embed instance we do not want to load the config.
---> 73 ipapp = TerminalIPythonApp.instance()
74 shell = get_ipython()
75 def_colors = shell.colors
/home/nnn/anaconda/lib/python2.7/site-packages/traitlets/config/configurable.pyc in instance(cls, *args, **kwargs)
413 raise MultipleInstanceError(
414 'Multiple incompatible subclass instances of '
--> 415 '%s are being created.' % cls.__name__
416 )
417
MultipleInstanceError: Multiple incompatible subclass instances of TerminalIPythonApp are being created.
The same error appears if I use this code not in the Jupyter notebook in the browser, but in the Jupyter QtConsole.
What does this error mean, and what can I do to avoid it?
Is it possible to debug code in the cell step by step, using the next, continue, etc. commands of the pdb debugger?
I had this problem too, and it seems to be related to the versions of Jupyter and ipdb.
The solution is to use this instead of the ipdb library's set_trace call:
from IPython.core.debugger import Tracer
Tracer()() #this one triggers the debugger
Source: http://devmartin.com/blog/2014/10/trigger-ipdb-within-ipython-notebook/
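A slightly longer sketch of the same pattern, for older IPython versions where Tracer is still available (the function name and values are just illustrations):
from IPython.core.debugger import Tracer

debug_here = Tracer()    # create the tracer once

def halve(x):            # illustrative function
    debug_here()         # drops into ipdb when this line is reached
    return x / 2

halve(10)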
Tracer() is deprecated.
Use:
from IPython.core.debugger import set_trace
and then place set_trace() wherever a breakpoint is needed.
from IPython.core.debugger import set_trace

def add_to_life_universe_everything(x):
    answer = 42
    set_trace()
    answer += x
    return answer

add_to_life_universe_everything(12)
This works fine and brings us a little bit more comfort (e.g. syntax highlighting) than just using the built-in pdb.
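For comparison, a sketch of the same breakpoint placed with the built-in pdb (no IPython syntax highlighting):
import pdb

def add_to_life_universe_everything(x):
    answer = 42
    pdb.set_trace()   # same placement, but using the standard-library debugger
    answer += x
    return answer

add_to_life_universe_everything(12)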
If using Jupyter Notebook
begin your cell with the magic command "%%debug".
An ipdb prompt will then be shown at the bottom of the cell to help you navigate through the debugging session. The following commands should get you started:
n - execute the current line and go to the next line.
c - continue execution until the next breakpoint.
Make sure you restart the kernel each time you decide to debug, so that all variables are freshly assigned. You can check the value of each variable at the ipdb prompt, and you will see that a variable is undefined until you execute the line that assigns a value to it.
%%debug
from pdb import set_trace as bp

def function_xyz():
    print('before breakpoint')
    bp()  # This is a breakpoint.
    print('after breakpoint')

function_xyz()
My version of Jupyter is 5.0.0 and my corresponding IPython version is 6.1.0. I am using:
import IPython.core.debugger
dbg = IPython.core.debugger.Pdb()
dbg.set_trace()
Tracer is listed as deprecated.
Update:
I tried using the method from another answer https://stackoverflow.com/a/43086430/8019692 below but got an error:
MultipleInstanceError: Multiple incompatible subclass instances of TerminalIPythonApp are being created.
I prefer my method to the %%debug magic since I can set breakpoints in functions defined in other cells and run the function in another cell. Jupyter/IPython drops into the debugger in my function where the breakpoint is set, and I can use the usual pdb commands. To each his own...
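A minimal sketch of that workflow (the function name and the split into cells are illustrative):
# Cell 1: create the debugger object and define a function containing a breakpoint
import IPython.core.debugger

dbg = IPython.core.debugger.Pdb()

def compute_answer(x):
    dbg.set_trace()    # execution pauses here whenever the function is called
    return x + 42

# Cell 2: calling the function from a different cell drops into the ipdb prompt
compute_answer(0)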
@lugger1, the accepted answer is deprecated.

debugging postgres (and external .so libraries) with kdbg (KDE debugger in Linux)

I would like to debug a user-defined function (prepareTheOutputRecord) implemented in C/C++ as part of a PostgreSQL extension. Here's how I achieve this with gdb:
The function prepareTheOutputRecord resides in the libMyExtenstion.so file in the lib directory of the PostgreSQL server.
I start the psql shell and retrieve the PID of the backend process:
postgres=# SELECT pg_backend_pid();
pg_backend_pid
----------------
4120
(1 row)
Run gdb, attaching to that PID:
gdb -p 4120
Now search the .so file for the exact (mangled) name of the function:
nm -as libMyExtenstion.so | grep prepareTheOutputRecord
00000000002633fe t _ZN6libafd6LIBAFD22prepareTheOutputRecordEP20FunctionCallInfoData
Set a breakpoint in gdb and run the program:
(gdb) b _ZN6libafd6LIBAFD22prepareTheOutputRecordEP20FunctionCallInfoData
Function "_ZN6libafd6LIBAFD22prepareTheOutputRecordEP20FunctionCallInfoData" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (_ZN6libafd6LIBAFD22prepareTheOutputRecordEP20FunctionCallInfoData) pending.
(gdb) c
Execute the SQL in psql. At a certain point the breakpoint is hit in gdb:
Breakpoint 1, prepareTheOutputRecord (this=0x1116410, fcinfo=0x7fff3a41e150)
at ../Dir/file.cpp:1736
1736 funcctx = SRF_PERCALL_SETUP();
(gdb)
Continue debugging the code.
I want to do exactly the same in kdbg. For that I loaded the postgres executable, attached to the process, loaded the cpp file, set the breakpoint at the function with the mouse, and continued the execution of the postgres process, but the breakpoint was never hit. I repeated the same with the .so file (instead of the postgres executable) without any success. I even tried to set the breakpoint on _ZN6libafd6LIBAFD22prepareTheOutputRecordEP20FunctionCallInfoData directly (without the mouse clicks), but the program does not stop in kdbg.
I believe the problem was that kdbg was not run as root (or as the postgres user). Due to the wrong permissions the symbols were not loaded, and therefore no breakpoint was shown (nor allowed to be placed at a function).

GDB autoinitialising variables

I am trying to put together an example about memory management in C++. I want to show people that there is always something already sitting in memory (even if you do not write anything to it).
My problem is that gdb seems to clear exactly these values for debugging purposes...
Breakpoint 1, main (argc=1, argv=0x7fffffffe8f8) at dangling.cpp:6
6 int *test=new int;
(gdb) n
8 *test=10;
(gdb) p *test
$1 = 0
(gdb) n
10 delete test;
(gdb) p *test
$2 = 10
(gdb) n
12 std::cout<<*test<<std::endl;
(gdb) p *test
$3 = 0
(gdb)
Is there a way to tell gdb not to do that? I would like to see the real value in memory instead of the 0 of $1 and $3.
gdb seems to clear exactly these values for debugging purposes.
GDB does nothing of the sort.
I would like to see the real value in the memory instead of the 0 of $1 and $3
You are seeing the real value in memory (which happens to be 0).
Your problem is that default heap allocation returns you "clean" memory. It's only on subsequent re-allocations that you'll likely see "dirty" memory.

Linpack sometimes starting, sometimes not, but nothing changed

I installed Linpack on a 2-node cluster with Xeon processors. Sometimes when I start Linpack with this command:
mpiexec -np 28 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
Linpack starts and prints the output; sometimes I only see the MPI rank mappings printed and then nothing follows. To me this seems like random behaviour, because I don't change anything between the calls and, as already mentioned, Linpack sometimes starts and sometimes doesn't.
In top I can see that xhpl_intel64 processes have been created and they are heavily using the CPU, but when watching the traffic between the nodes, iftop tells me that nothing is sent.
I am using MPICH2 as the MPI implementation. This is my HPL.dat:
# cat HPL.dat
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
10000 Ns
1 # of NBs
250 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
2 Ps
14 Qs
16.0 threshold
1 # of panel fact
2 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
4 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
1 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
1 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
Edit 2:
I now just let the program run for a while, and after 30 minutes it tells me:
# mpiexec -np 32 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
(node-0:0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
(node-1:16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31)
Assertion failed in file ../../socksm.c at line 2577: (it_plfd->revents & 0x008) == 0
internal ABORT - process 0
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
Is this an MPI problem?
Do you know what type of problem this could be?
I figured out what the problem was: MPICH2 uses different random ports each time it starts, and if these are blocked your application won't start up correctly.
The solution for MPICH2 is to set the environment variable MPICH_PORT_RANGE to START:END, like this:
export MPICH_PORT_RANGE=50000:51000
Best,
heinrich

Resources