How to debug MODX: finding which snippets, processors, etc. were called

I need to find where certain things happen during execution. Say I'm looking for the line that drops the user's authorization. It may be a processor like "security/logout", but not necessarily.
How should I debug MODX Revolution? debug_backtrace() gives me 30+ MB of text, which isn't realistic to read.
How can I quickly watch all the user code stored in the database as it executes?

You can either turn on the debug option in System Settings, or hack the core a bit:
http://www.virtudraft.com/blog/unofficial-parser-timer-for-modx-revolution.html
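
If you just need to eyeball all the user code stored in the database (your third question), one option is to dump it and grep it for the suspect call. A minimal Python sketch, assuming a stock MODX Revolution schema with the default modx_ table prefix and the pymysql driver (verify table and column names against your install; the credentials are placeholders):

import pymysql  # assumed MySQL driver; any client library works the same way

# Code columns per table in a stock MODX Revolution schema (an assumption; verify).
TABLES = {'modx_site_snippets': 'snippet', 'modx_site_plugins': 'plugincode'}

conn = pymysql.connect(host='localhost', user='modx', password='...', database='modx')
with conn.cursor() as cur:
    for table, column in TABLES.items():
        cur.execute('SELECT name, {} FROM {}'.format(column, table))
        for name, code in cur.fetchall():
            if 'logout' in (code or '').lower():  # hunt for whatever drops the session
                print('{}: {}'.format(table, name))

From there you can open the matching elements in the manager and add logging around the suspect lines.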

Related

How to debug potential CPU/RAM errors in Bash script on Linux

I have a relatively simple bash script that reads from a set of static input files, stores the input in bash variables and then does a bunch of processing over said input by calling out to external scripts (e.g. written in Python, Go, other bash scripts etc.) and using the intermediate results.
Lately I have been experiencing an intermittent problem where a single character seems to be getting altered somewhere during processing, which then causes subsequent errors. Specifically, a lot of the processing I'm doing involves slicing up a list of comma-separated records, and one of the values on each line is a unix timestamp, e.g. 1354245000.
What seems to be happening is that occasionally one of these values will get altered slightly, so I end up with a timestamp like 13542458=2 or 13542458>2 or 13542458;2 coming out of one of the intermediate scripts. This then subsequently gets fed into another script, which throws an exception when it tries to parse the value to an integer.
In the title of this question, I've suggested that this might be a potential CPU/RAM error. I know it's generally folly to blame low-level things like hardware or compilers, but the nature of this particular error makes me think it may be possible, for the following reasons:
The input files are the same on each invocation of the script, and the script only fails on some invocations.
I cannot think of any sources of randomness in the source code prior to where the script is breaking. It's basically just slicing and dicing csv input.
I cannot think of any sources of concurrency in the source code -- even the Go scripts aren't actually written to run anything concurrently.
This problem has only arisen in the last week or so. Prior to this time, this error would never occur.
While I haven't documented every erroneous character, they seem to often be quite close in the ASCII table to numeric values (=, >, ; etc.); in fact each one is a single bit flip away from a digit (see the check after this list). That said, the Hamming distance between two characters far apart in the table can also be small if a high-order bit changes.
The script often breaks at a different stage on different runs. i.e. I have a number of separate Python scripts, and sometimes it'll make it past one script and then the error will be induced in another. Other times it'll be induced on an earlier script.
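Worth noting: each of the three corruptions above is exactly one bit away from an ASCII digit, which fits a flipped-bit hypothesis. A quick Python check (the digit pairings are my guess at the original characters):

# '=', '>' and ';' each differ from a plausible original digit by one bit (0x08).
for digit, corrupt in [('5', '='), ('6', '>'), ('3', ';')]:
    diff = ord(digit) ^ ord(corrupt)
    assert bin(diff).count('1') == 1
    print('%s -> %s : XOR = 0x%02x (single bit)' % (digit, corrupt, diff))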
What I'd like to know is, is there any methodical way to either confirm or rule out a hardware error for this problem? Or if it is a hardware problem, is it possibly undetectable by the operating system?
A bit of further info on the machine:
Linux 64-bit, Ubuntu 12.04
Intel i7 processor
16GB DDR3 RAM
I'm hoping someone can either point me to a reliable way to verify whether the hardware is to blame or otherwise a sound reason as to what else might be the cause.
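In the meantime, one thing I can do in software is sandwich a cheap validator between pipeline stages so the first stage that emits a corrupt record identifies itself. A minimal Python sketch (the timestamp column index is hypothetical):

#!/usr/bin/env python
# Pass-through filter: copies stdin to stdout, aborts loudly on a corrupt record.
import sys

TS_FIELD = 2  # hypothetical index of the timestamp column; adjust to the real data

for lineno, line in enumerate(sys.stdin, 1):
    fields = line.rstrip('\n').split(',')
    if len(fields) <= TS_FIELD or not fields[TS_FIELD].isdigit():
        sys.exit('line %d: bad timestamp record %r' % (lineno, line))
    sys.stdout.write(line)

Chained as stage1 | python check_ts.py | stage2, the first failing stage becomes obvious.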
Try booting into Memtest to check your memory.
While it is highly unlikely that it will be hardware, if you have exhausted your standard software debugging as suggested by @OliCharlesworth, here is an outline of a hardware error investigation:
(1) Check your log area for any `MCE` logs (machine check exceptions). If you find any, either in your log area (syslog) or sometimes in the present working directory or in /, you have a hardware failure.
(2) Check your log area for disk errors, e.g.:
smartd[3963]: Device: /dev/sda [SAT], 34 Currently unreadable (pending) sectors
(3) Check your drive integrity, e.g. (as root):
smartctl -a /dev/sda
If there is any abnormality, run:
smartctl -t short /dev/sda (change drive as required)
(4) Download, install and boot to [memtest86](http://www.memtest86.com/download.htm) and run the complete test.
If your CPU/motherboard has thrown no MCEs, you have no disk errors, your drive tests OK with smartctl, and memtest86 reports no memory errors, then go back to software debugging. While other hardware errors can still be present (bad capacitors, etc.), at this point the likelihood is software. Good luck.
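
Steps (1) and (2) are easy to automate. A minimal sketch, assuming a Debian/Ubuntu-style /var/log/syslog readable by your user (the path and patterns are illustrative; adjust for your distro):

# Scan syslog for machine-check and disk-error lines.
import re

suspect = re.compile(r'mce|machine check|unreadable \(pending\) sectors|i/o error',
                     re.IGNORECASE)

with open('/var/log/syslog', errors='replace') as log:
    for line in log:
        if suspect.search(line):
            print(line.rstrip())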

How to check Matplotlib's speed in Xcode and increase performance?

I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably be "Why are you doing Python stuff in Xcode? Just man up and use vim." I like the organizing ability and the built-in version control; it makes elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py' does all the import stuff for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to deal with all of it at once. Python pauses itself and waits for the system to do its thing, and once the filesystem has been found it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login so it reconnects as soon as it notices it's not connected.)
2) I'm not sure how much Xcode is using on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory-conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that would also take time to open up. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime

start = datetime.datetime.now()
# ... your plotting code ...
elapsed = datetime.datetime.now() - start
print(elapsed.total_seconds())  # timedelta.total_seconds() requires Python >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
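
On the plt.show() point: a non-interactive backend writes figures straight to disk and never opens a window, which is often the biggest saving. A minimal sketch (Agg and savefig are standard matplotlib; the data and filename are placeholders):

import matplotlib
matplotlib.use('Agg')  # select a non-GUI backend before pyplot is imported

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 1, 7])
fig.savefig('figure.png', dpi=150)  # write to disk instead of popping up a window
plt.close(fig)                      # release the figure's memory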

Why does CUDA code run so much faster in NVIDIA Visual Profiler?

A piece of code that takes well over 1 minute on the command line was done in a matter of seconds in NVIDIA Visual Profiler (running the same .exe). So the natural question is: why? Is there something wrong with the command line, or does Visual Profiler do something different and not really execute everything as on the command line?
I'm using CUBLAS, Thrust and cuRAND.
Incidentally, there's been a noticeable slowdown in compiled code on my machine very recently, even old code that previously ran quickly, hence I'm getting suspicious.
Update:
I have checked that the calculated output on command line and Visual Profiler is identical - i.e. all required code has been run in both cases.
GPU-shark indicated that my performance state was unchanged at P0 when I switched from command line to Visual Profiler.
However, GPU usage was reported at 0.0% when run with Visual Profiler, but went as high as 98% when run from the command line.
Moreover, far less memory is used with Visual Profiler. When run from the command line, Task Manager indicates usage of 650-700MB of memory (spiking at the first cudaFree(0) call). In Visual Profiler that figure goes down to ~100MB.
This is an old question, but I've just finished chasing the same issue (though the cause may not be the same).
Namely: my app achieved between 900 and 1100 frames (synchronous launches) per second when running under NVVP, but only around 100-120 when running outside the profiler.
The cause appears to be a status message I was printing to the console via cout. I had intended for this to only happen about once every 100-200 frames. Instead, it was printing the status message for every frame, and the console IO became the bottleneck.
By only printing the status message every 100 frames (though the optimal number here would depend on your application), the frame rate jumped back up to match what I was seeing in NVVP. Of course, this could also be handled in a separate CPU thread if that sort of overhead is unacceptable in your circumstances.
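The shape of that fix, as a language-agnostic sketch in Python (the render loop and REPORT_EVERY are stand-ins for the real application):

import sys

REPORT_EVERY = 100  # tune per application; the point is "rarely", not "every frame"

for frame in range(10000):  # stand-in for the real render loop
    # ... do the per-frame work here ...
    if frame % REPORT_EVERY == 0:
        print('frame %d' % frame, file=sys.stderr)  # console IO is the expensive part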
NVVP has to redirect stdout to its own internal buffer in order to capture the application's output (which it shows in its console tab), and its buffering mechanism apparently has far less overhead than letting the operating system handle the output directly; it looks like NVVP buffers everything and displays it from a separate thread, or saves output until some threshold is reached before appending it to its console tab.
So, my advice would be to disable any console IO, and see if or how that affects things.
(It didn't help that VS2012 refused to profile my CUDA app. It would have been nice to see that 80% of the execution time was spent on console IO.)
Hope this helps!
This should not happen. I've never seen anything like it; it's probably something in your setup.
It could be that some JIT-compile step is skipped by the profiler. This could explain the difference in memory usage. Try creating a fat binary?

Slow printing in VFP using dot matrix printer (LQ-1170)

I know that this is an old topic for VFP programmers. Still, I want to ask for advice that can improve the printing time in my particular case.
I was recently asked to change a report written in VFP. It uses commands such as FPUTS, etc. The user prints this report on a dot matrix printer and, of course, has no problem. But the user asked for an additional column and some complex calculations in the report. We tried to avoid changing the paper size, so my initial solution was to rework the report using the Report Designer and set the page orientation to landscape. The result prints very slowly; when I open the print queue, it even shows an error printing status!
I've tried installing the printer driver on my local PC (the machine where I compiled the exe) and selecting this printer, both with 'save printer environment' checked and unchecked. Still the same result.
Any suggestions? Other tricks for my case are welcome. Thanks in advance.
Sometimes (and I'm not sure if it's your case), when creating a report in VFP, it saves the printer environment based on the computer used to develop it (i.e. your machine). To check, and since all reports are nothing but renamed .DBF tables, try the following. Open the report as a table:
USE YourReport.frx (you have to explicitly include the .frx extension)
BROWSE
The first row in the report is your environment information, which includes paper size, orientation and even printer information. Double-click the column "Expr". The only thing you probably need in this column is
ORIENTATION=1 (or 0)
It may have other stuff and look something like:
DRIVER=winspool
DEVICE=\some\printershare
OUTPUT=IP_192.168.1.22
ORIENTATION=1
PAPERSIZE=1
SCALE=100
ASCII=0
COPIES=1
DEFAULTSOURCE=15
PRINTQUALITY=600
COLOR=2
DUPLEX=2
YRESOLUTION=600
TTOPTION=3
COLLATE=0
You can remove the rest of it. Next, close this column and tab over about 10 more columns to "Tag" and "Tag2". They are also MEMO-type fields. Open them up and remove ALL data from these two columns, but ONLY FOR THE FIRST ROW. If you open them, you will see more embedded stuff about the printer; just remove it completely. Do NOT do a global replace to blank for all rows, as that will kill the report content; clear ONLY the first row.
All this being said, I can't guarantee it, but this may be the culprit... then again, doing direct output on old dot-matrix printers might actually be faster than all the fancy rendering the printer drivers are doing.
Trying to print a report from the Report Designer through the Windows driver to a dot matrix printer will never be acceptably quick.
That's why they originally did the report using commands.
Your choices are either change the printer to a laser printer (probably not possible I'm guessing) or change the report back to the old style.
It's not hard to print fast on a dot matrix printer with VFP reports; you should do it in raw mode, though not via ?? or ???, but rather through API calls like this:
--- RawPrint VCX ---
http://www.universalthread.com/ViewPageNewDownload.aspx?ID=9556
You can also use a wrapper with kind-of "formats" support; it's commercial software, but it's worth it if you need to do a lot of reports with this kind of printer:
--- DosPrint 4 ---
http://www.victorespina.com.ve/hs/es/index.php/DOSPrint4_%28VFP%29
(Disclaimer: the developer of DosPrint 4 is a friend of mine, and I worked with him testing and supporting the previous version, DosPrint 3, on the Spanish MS-VFP newsgroups and http://Portalfox.com)
Try using the Microsoft Generic / Text Only printer driver.

Is there a way to track the values of certain variables after the end of a program in Visual Studio?

I have found myself several times needing to know the last values set on a range of variables in a certain portion of code or method, or maybe dispersed around the program.
Does anyone know a way to select variables and see the last value set on them after the program ends running, in a window maybe?
There isn't anything I know of that will record every value ever assigned to every variable in your program in case you want to look at it later. It is possible with VS2010's historical debugging abilities to look at "values from the past" although I haven't used this yet, so I don't know if that ability extends "beyond death" of the process.
You may also be able to use tracepoints (VS2008 and later). These are like breakpoints, but instead of stopping execution they simply print information to the debug output. So you could add a tracepoint for a variable so that each time it is changed its value is reported (basically the same as printing the values out in your code, but you don't have to change your code to enable them, and can add them while your code is executing).
Two simple approaches that will work in pretty much any dev environment are:
Write the values to an application log each time they change, then read the last reported entries (see the sketch after this list). If you realise you need 5 values from all around the program, simply printing them to the debug output will only take a few seconds to add to your program. (If you can't do this easily, then you're not encapsulating your data very well.)
Put a breakpoint on the destructor of the class you're interested in, or at the start of the shutdown process just before you destroy the objects, or the last line of code in your program (for statics) (etc) and just use the debugger to drill down into the data.
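
As an illustration of the first approach (a generic Python sketch, not specific to Visual Studio), a property setter can log every assignment, so the last entry in the log is the final value:

import logging

logging.basicConfig(filename='app.log', level=logging.DEBUG,
                    format='%(asctime)s %(message)s')

class Tracked(object):
    """Wraps a value and logs every assignment to it."""
    def __init__(self, name, initial=None):
        self._name = name
        self._value = initial

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        logging.debug('%s = %r', self._name, new)  # the log's last line is the final value
        self._value = new

counter = Tracked('counter')
for i in range(3):
    counter.value = i  # each assignment lands in app.log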
