TensorFlow debugger (tfdbg) breakpoint - debugging

I am new to TensorFlow and am trying to run a basic program, but I get an error while feeding data into the model. I would like to debug it to pinpoint what is causing the error, and to track the data in a node of the graph. When I start tfdbg it throws the error "OutOfRangeError RandomShuffleQueue".
Is there any way to debug it from the start? Or can we put a breakpoint in the program? I understand that a TensorFlow program has a graph structure and might not execute like a normal Python script.
Any help is appreciated.
To make it short: can we put a breakpoint in a TensorFlow program at a desired location to see the tensor values?
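There is no classic line-by-line breakpoint, but as a rough sketch (assuming the TF 1.x Session API; train_op and my_feed_dict below are hypothetical names), the session can be wrapped in tfdbg's CLI wrapper so that every sess.run() call drops into an interactive prompt where tensor values in the graph can be listed and printed. The OutOfRangeError from a RandomShuffleQueue usually just means the input queue ran out of elements (for example, queue runners were never started or the file list is empty), so the input pipeline is worth checking as well.

# Sketch only: assumes TensorFlow 1.x; train_op and my_feed_dict are placeholders.
import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
# Wrap the session so every sess.run() pauses in the tfdbg CLI,
# where individual tensors in the graph can be listed and printed.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
# Optionally break only when some tensor contains NaN or Inf:
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

sess.run(train_op, feed_dict=my_feed_dict)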

Related

PyTorch: W ParallelNative.cpp:206

I'm trying to use a pre-trained model on my image set by following the tutorial here:
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
However, I always get this "error" when I run my code, and the console locks up:
[W ParallelNative.cpp:206] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
Thank you in advance for your help,
I have the same problem.
Mac. Python 3.6 (also reproduces on 3.8). PyTorch 1.7.
It seems that with this warning, dataloaders don't (or can't) use parallel computing.
You can suppress the warning (this will not fix the problem) in two ways:
If you can access your dataloaders, set num_workers=0 when creating a DataLoader.
Set the environment variable export OMP_NUM_THREADS=1.
Again, both workarounds kill parallel computing and may slow down data loading (and therefore training); see the sketch below. I look forward to efficient solutions or a patch in PyTorch 1.7.
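A minimal sketch of both workarounds (the dataset object is a placeholder for whatever Dataset the tutorial builds):

# Sketch only: `dataset` stands in for your own torch.utils.data.Dataset.
import os
os.environ["OMP_NUM_THREADS"] = "1"      # workaround 2: set before any parallel torch work starts

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    num_workers=0)       # workaround 1: no worker processes
for images, labels in loader:
    pass                                 # training step goes here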

CasADi IVP integrator RHSFUNC FAIL

I've been using CasADi to solve an IVP, but for some combinations of parameters the solver doesn't terminate. Sending a keyboard interrupt returns the error message in the title ("CV_RHSFUNC_FAIL"). Adjusting the maximum number of steps that the solver is allowed to take makes the function return a different error (CV_TOO_MUCH_WORK) for all parameter combinations. I can obtain the solution to the IVP for these parameter sets using SciPy's odeint function.
Ideally I would like a solution like this:
try:
    CasadiSolver()
except:
    ScipySolver()
However, because the CasADi solver runs forever instead of returning an error, I cannot implement this solution. Has anyone seen this error before and knows what it means? If so, could they tell me how to stop the CasADi IVP solver from running forever and make it return an error instead?
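For reference, a hedged sketch of how this is usually handled: give the CVODES integrator a finite max_num_steps so it fails fast instead of grinding forever, catch the resulting exception, and fall back to SciPy. The names x, ode_expr, rhs_numpy, x0 and tgrid are placeholders for the actual problem, and whether the failure surfaces as a Python RuntimeError can depend on the CasADi version.

# Sketch only: x, ode_expr, rhs_numpy, x0 and tgrid are placeholders.
import casadi as ca
from scipy.integrate import odeint

dae = {"x": x, "ode": ode_expr}              # symbolic CasADi ODE right-hand side
opts = {"max_num_steps": 10000}              # make CVODES give up (CV_TOO_MUCH_WORK) instead of hanging
F = ca.integrator("F", "cvodes", dae, opts)

try:
    xf = F(x0=x0)["xf"]                      # CasADi usually raises RuntimeError when CVODES fails
except RuntimeError:
    xf = odeint(rhs_numpy, x0, tgrid)[-1]    # fall back to SciPy's LSODA-based odeint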

Veins Add Polygon to SUMO GUI

I am simulating a scenario where I want to add and/or delete polygons dynamically. However, when I try to add a polygon, the system generates the error below:
<!> ASSERT: Condition 'result == RTYPE_OK' does not hold in function 'query' at veins/modules/mobility/traci/TraCIConnection.cc:119 -- in module (TraCIDemo11p) RSUExampleScenario.node[1].appl (id=14), at t=1.1s, event #12
I debugged the code and see that the TraCIConnection does not return RTYPE_OK. If I remove the assert statement, the code works fine. However, I want to understand the logic behind this.
I also see that the SUMO console gives an error message. The code that I used to add the polygon is:
traci->addPolygon(polyId, polyType, color, filled, layer, points);
SUMO: 0.32, OMNeT++: 5.4.1, Veins: 4.7
Any suggestion is appreciated. I am a beginner with GUI-related things; sorry if the question does not make sense. Thanks.
Most likely SUMO refuses to add the polygon you requested. Maybe the ID you chose already exists in the simulation.
To find out why SUMO complains, you can change its source code to include debug output -- or you can run SUMO in a debugger.
To run SUMO in a debugger, the simplest solution is to switch from TraCIScenarioManagerLaunchd to TraCIScenarioManager (probably by changing veins/nodes/Scenario.ned) and launch SUMO in a debugger manually (e.g. by running lldb sumo -- --remote-port 9999 -c erlangen.sumo.cfg).
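Another way to see SUMO's actual complaint (a hypothetical diagnostic, independent of Veins) is to drive a running SUMO instance from SUMO's own Python TraCI client, which reports a refusal as a TraCIException containing the reason text; the port, ID and geometry below are made-up values:

# Sketch only: start SUMO with --remote-port 9999 first; all values are placeholders.
import traci

traci.init(port=9999)
print(traci.polygon.getIDList())                 # is the chosen polyId already taken?
traci.polygon.add("poly0",                       # polygonID
                  [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)],  # shape as (x, y) points
                  (255, 0, 0, 255),              # RGBA color
                  fill=True, polygonType="building", layer=1)
traci.close()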

How to run PATCHMATCH source code

I would like to inpaint one image with a corrupted part using the patches of another image. In the link below, the authors of the PATCHMATCH algorithm have provided the source code. The PATCHMATCH algorithm tries to find the nearest patch in the second image to fill the corrupted patches in the first image.
main page of the paper with source code:
http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/index.php
and the link of source code:
http://www.cs.princeton.edu/gfx/pubs/Barnes_2009_PAR/patchmatch-2.1.zip
My problem is that I followed the instructions in the readme.txt file and ran build_unix.sh in a terminal on Ubuntu 14.04 LTS, but got an error which says:
Unknown MEX argument '-inline'.
Unknown MEX argument '-inline'.
I removed -inline, executed the bash file in the terminal again, and fortunately it worked and printed this message:
MEX completed successfully.
MEX completed successfully.
Now my question is how to use this code to inpaint one partially corrupted image by using another image that comes from the same class (e.g. class).
I would really appreciate it if you could help me with this problem. I really need to get this source code running.
I will provide you with more information if needed.
Update: I have come up with an inpainting algorithm using convolutional neural networks, whose result you can see below. I'd like to run the PATCHMATCH algorithm to compare my result against it. The CNN only gets the corrupted image as input.

(Re)starting MATLAB after an error, from the error location

I'm debugging a MATLAB script that takes ~10 minutes to run. Towards the end of the script I do some I/O and simple calculations with my results, and I keep running into errors. Is there a way to restart MATLAB from a certain spot in a script after it exits with an error? The data is still in the workspace, so I could just comment out all of the code up to the error point, but I'm wondering if anyone knows a better way to do this without rerunning the entire script (the ultra-lazy/inefficient way).
Thanks,
Colorado
Yes, use dbstop. Type dbstop if error and then run your script. The minute it hits an error, it will create a breakpoint there and you're in the workspace of the script, which means you can debug the error, save data; anything you want! Here's a snippet from the documentation for dbstop if error (there are other ways to use dbstop, so do check it out):
dbstop if error
Stops execution when any MATLAB program file you subsequently run produces a run-time error, putting MATLAB in debug mode, paused at the line that generated the error. The errors that stop execution do not include run-time errors that are detected within a try...catch block. You cannot resume execution after an uncaught run-time error. Use dbquit to exit from debug mode.
Double percent signs enable 'cell mode', which lets you run little blocks of code in steps. Sounds like just what you're looking for.
