Different ways to debug a web2py application

As I am new to web2py, I wonder what are the ways available for debugging a web2py application. So far, I've come across the following scenarios:
When a runtime error occurs in a web2py app, an error ticket is generated, and the ticket normally contains useful information.
However, sometimes only a plain error message appears on the page, for example 'bad request', and that's it. What is the best way in this case to track down what went wrong? Logging? If so, how do we do it properly?
Sometimes no obvious error message is shown at all, but the app doesn't behave as expected. Usually I use a debugger with breakpoints to investigate. Any other suggestions?
Any experience/insight is extremely welcome.

You can detect errors at your model or controller layer by adding unit tests. That will help narrow your debugging efforts, especially when the error ticket system breaks down. Unfortunately the web2py documentation doesn't stress the importance of unit tests enough. You can run doctests on your controllers with
python web2py.py -T <application_name>
Since the models run for every controller, this will at least catch syntax errors at the model layer.
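For illustration, a minimal sketch of what such a controller doctest can look like (the action name and return value are placeholders, not from the original answer):

# in a hypothetical controllers/default.py; web2py collects the docstring
# tests when run with -T
def index():
    """
    Smoke test: the action should return a dict for the view.
    >>> isinstance(index(), dict)
    True
    """
    return dict(message='hello')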

The latest version has an integrated debugger. You can set breakpoints on your code and step through it.

The other suggestions are good. I would also suggest the Wing IDE debugger. It isn't very expensive, and works well with Python generally and web2py specifically.
Wing has a capability to do remote debugging -- very useful when you're working through production-style deployment with remote app servers. That capability saved my bacon any number of times.

As @Derek pointed out, there is an integrated debugger for web2py.
You can set a breakpoint from the integrated web2py editor (by clicking 'toggle breakpoint') or set one manually as described in the link above.
Once you hit the breakpoint, you can open http://localhost:8000/admin/debug/interact (if running locally) to evaluate any expression at that point.
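On the logging part of the original question: a minimal sketch using Python's standard logging module from inside a controller (the logger name, file path and messages are only illustrative, not a web2py-mandated setup):

import logging

# configure once, e.g. in a model file; web2py injects 'request' into the
# controller environment, so it needs no import below
logger = logging.getLogger('web2py.app.myapp')
if not logger.handlers:
    handler = logging.FileHandler('/tmp/myapp.log')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

def index():
    # record enough context to reconstruct what led to a 'bad request'
    logger.debug('index called, args=%s vars=%s', request.args, request.vars)
    return dict()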

Related

CICS/COBOL Abend ASRA in debugger only

I have an issue I don't seem to find a solution for.
One of the transactions gives ABEND ASRA when used in debug mode.
When I compile the COBOL program without the debug option and run it, it works fine.
The error looks almost exactly like this one, except that I am using COBOL V4:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM96501
Now the question would be: why is it abending in debugger and not without debugger?
I am using the CICS debugger (DTCN transaction); the program starts normally, I can step through with F2 and so on, and then at some point it abends.
Please note that it is extremely difficult to say where it abends as the program is really big.
This happens only to this program at the moment; others run fine under the debugger. I placed a breakpoint before my modifications, and the abend occurs in some other area.
Another weird thing is that the abend is not consistent: if I step through a big portion of the code in small steps (F2 and small breakpoints), it sometimes executes to the end without abending.
Due to the nature of the issue, I can not post much information.
I was hoping someone has encountered similar issues and can tell me where to look.
Thank you!
The issue was solved by deleting my debug tool profile from the system and then logging in to the debugger (DTCN) again so that it creates a new profile (the profile consisted of three files: TOOLTEMP.PDTOOLS.{userid}.DBGTOOL.*). After this the issue was gone. I asked the guys how this happened; they told me it was because I had modified the program between two debugging sessions without closing CICS. It is a malfunction that can be avoided by closing CICS while compiling programs it uses (I am not sure exactly why, and neither are they).
Hope this helps if you face a similar issue with DTCN debugging.

Paradox (ObjectPal) Application causing General Protection Violations sporadically, looking for the Reason

We have a pretty big application based on Paradox/ObjectPal. Since we moved the database from file-based Paradox tables to MS SQL 2008 Express Edition, we encounter lots of general protection violations (GPVs), which appear sporadically. These errors seem to occur only with the Paradox runtime, not with the development edition, which makes debugging impossible. We have done a lot to minimize the GPVs and it looks like it is getting better; still, here and there an annoying GPV crashes the whole application.
So what I'm looking for is some kind of debugger/logger for Windows that shows which operations/methods cause these errors: like the Windows event log, but with more detail that could hint at what and where to look. I'm not sure such a tool even exists.
I can think of two things you might try.
(1) Check with these guys
http://pnews.thedbcommunity.com/cgi-bin/dnewsweb.exe
on the subject of GPVs (GPFs) with the runtime but not with the development platform. I'm sure your question has come up there already.
Try searching the newsgroups first, but if that fails, your question probably belongs under "pnews.paradox-development".
(2) Add logging code to the application itself. Add a library object to encapsulate an event log file, with a custom method to report an event.
Begin with a call from the open() and close() events of each design object (form, script, report, etc). Then add a call to the action() method of any suspicious objects to detect and log specific actions.
This is tedious, I know, because you have to add the library to the Var() and Open() methods of every design object in the application. But if it is done correctly, the operation of your application becomes amazingly transparent.
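ObjectPal aside, the pattern itself is small; here is a rough Python sketch of the shape of such an event-log helper (all names invented for illustration):

import datetime

class EventLog:
    """Append-only event log shared by all design objects."""
    def __init__(self, path):
        self.path = path

    def report(self, source, event, detail=''):
        # one timestamped line per event, flushed immediately so a crash
        # does not lose the last entries
        with open(self.path, 'a') as f:
            f.write('%s %s %s %s\n' % (
                datetime.datetime.now().isoformat(), source, event, detail))

log = EventLog('app_events.log')
log.report('CustomerForm', 'open')
log.report('CustomerForm', 'action', 'DataPostRecord')
log.report('CustomerForm', 'close')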

Why Are Automated GUI Tools So Fragile?

For about a year and a half, I've been working with SilkTest, a GUI automation tool for both desktop and web applications. It simulates mouse and keyboard inputs, which eventually simulate end-user behaviour. However, I find it a bit flaky: Button.Click() or DialogBox.Close() method calls that work just fine nine times in a row seem to fail on the tenth call, only to work again on the eleventh. Normally I would just chalk this up to a quirk of SilkTest (or the application under test, or the OS, or what have you), but then I see that there are similar issues with other GUI automation tools like Selenium:
Selenium Click() fails with Anchor Elements
Selenium Click() fails clicking button object
I know that for desktop apps, each GUI control/dialog has a tag element associated with it (at least in Windows-based GUIs), and that for web pages there is the Document Object Model hierarchy of page elements. My guess is that these tools sometimes run into issues navigating these hierarchies and finding unique elements and controls. But what is going on here? SilkTest is a relatively old commercial software package, while Selenium is relatively new, open source and constantly evolving. The fact that they both can have similar problems raises a couple of flags with me.
Also, is this the case with other GUI test tools? Or have I just had a somewhat unusual experience?
There are two things you are talking about here. The first is finding the object in the application under test that you want to automate. Your description of how SilkTest (and other tools) do this is quite accurate: as long as there is something the automation software can use to identify the control, you are fine.
The second is why the automation itself fails randomly. Since the tool has not reported that it could not find the control, it must believe it sent the appropriate action (e.g. a Click or a Type) to the application. This could mean the application was not ready to accept the action you sent, much like you clicking on something "before it was ready"; in that case the application may either buffer the input or discard it.
So, how do you fix this? One way is to use the capabilities of the tool to work out when the application is ready for input rather than sending it a stream of input blindly. SilkTest has capabilities that allow you to do this (as does TestPartner). I cannot comment on Selenium as it is something I have not used.
A simple way of testing this is to insert a pause of a couple of seconds before the offending action and then run it in a loop to see whether that solves the problem; if it does, then synchronisation is your problem. If it does not fix the issue, then something else is going on and you should contact the vendor of the testing tool.
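To make that concrete in Selenium terms (a tool the answer above explicitly does not cover, so treat this as an illustrative sketch; the URL and element id are placeholders), the robust fix is to wait for the control to become actionable rather than pausing blindly:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('http://example.com/form')  # placeholder URL

# instead of a blind time.sleep(2) before clicking, block until the
# control is actually clickable (up to 10 seconds), then act
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'submit')))  # placeholder id
button.click()

driver.quit()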
Remember that applications are getting more and more complex (multi-threading, communications, and so on); any of these can cause the automatic synchronisation to fail, which in turn causes actions to fail.
Hope that helps.

Visual VoiceXML/VXML development tool?

Does anyone know of any tools out there that will let me run and debug a VXML application visually? There are a ton of VXML development tools, but they all require you to build your application within them.
I have an existing application that uses JSPs to generate VXML, and I'm looking for a way to navigate through and debug the rendered VXML in much the same way that Firebug allows one to do this with HTML. I have some proxy-like tools that let me inspect the rendered code as it is sent to the VXML browser, but there's a ton of JS, which makes traversing the code by hand rather difficult.
Has anyone worked with a product that allows for this?
Thanks!
IVR Avenger
There is the JigSaw Test suite, which has a free trial license and is reasonably priced.
There is IBM's debugger, part of the WebSphere Voice Toolkit.
Many other products have debuggers; a very good summary is here.
Disclaimer: I am the development manager for Voiyager (www.voiyager.com), a VoiceXML testing tool. It doesn't meet your criteria, nor do I believe it is the type of tool you want, but I thought it was worth mentioning.
As far as I know, there isn't such a tool for VoiceXML. In fact there are very few VoiceXML tools on the market, and hardly any of them do testing or analysis. The vendors that created development tools have all been acquired by other companies. Some of them did offer various forms of debugging, but these were specific to their tool set or stayed at the dialog (caller input) level. From your question, I'm assuming you need much lower-level debugging capabilities.
I think the alternative paths are minimal and somewhat difficult. I believe your primary goal is to debug or rewrite an existing application, but you haven't provided any specific challenges beyond the JavaScript. Some thoughts or approaches that may help:
Isolate the JavaScript and place the code into a unit test harness. That will go a long way to understanding the logic of the application. Any encapsulation of the JavaScript you perform will probably go a long way towards better code maintainability.
Attempt to run the VoiceXML through a translation layer to HTML so you could use Firebug. The largest challenge would be caller input (i.e. processing the SRGS grammars). You could probably cheat by having the form accept a JSON string that populates the field values. There are tools on the market to test grammars. Depending on the nature of your problems, you could keep this simple and light and attempt it over just the trouble areas.
Plumb the application with a lot of logging. This can be done through the VoiceXML log element, or by pushing the variable space back to the server. By adding intermediate forms, you may be able to provide a dump from each via the VoiceXML data element.
See if your application will run in one of the open source VoiceXML browsers (not sure of the state of the open source browsers as we've built and bought for our various product lines). If you can get it mostly working, you can use the development debugger to provide some ability to step through the logic. However, it is probably one of the more difficult paths as you'll really need to understand the browser to know when and where to stick your breakpoints and to figure out how to expose the data you want.
Good luck on the challenge. If you find another approach, I would be interested in seeing it posted.
An alternative debug environment is to use something like Asterisk with a VoiceXML browser plugin, such as the one from http://www.voiceglue.org/ or, with a limited licence, i6net.
You can keep all the pieces separate (the dynamic HTML and VXML application in PHP/JSP/J2EE, TTS processing, and optional ASR processing) as separate virtual machines with something like VirtualBox. If the logic can be kept the same, then it is just a matter of changing the UI based on the channel.
A softphone is all you need to call a minimal Asterisk machine that has the VoiceXML browser with the URL of the VXML in its call plan.
I just used Zend Framework, since PHP is used in this environment, and changed the view suffixes (phtml vs. vxml) based on the user-agent string.
Flite for TTS is fine for debugging; when your app is ready you can record phrases instead, and there was a page on the Ubuntu forums with directions for increasing Flite quality with some additional sound files.
Have you tried Eclipse VTP or InVision Studio?
Eclipse VTP
This is an Eclipse plugin, but I find it a little user-unfriendly (from a Japanese viewpoint).
InVision Studio (requires creating a user account)
This is Convergys's IVR tool. It has a mode for editing standard VXML (unfortunately, it's not an exact match).
For just debugging vxml, I use Nuance Cafe's VoiceXML checker. It doesn't give you a visual tree or anything, but it's pretty good at spotting syntax errors and is free. I think they might also have more advanced debugging tools if you look into it, but I haven't had the need. (Note: I have no association with them)
http://cafe.bevocal.com/tools/vxmlchecker/vxmlchecker.jsp
I'm looking into the same problem, and most of the links above are down. I found a paper proposing an open source solution that works as a plugin for Asterisk (https://www.researchgate.net/publication/228873959_Open_Source_VoiceXML_Interpreter_over_Asterisk_for_Use_in_IVR_Applications) and is available at https://sourceforge.net/projects/voxy/
I would like to know whether there are current options for creating a VXML structure graphically, like the one in the image below.

"Works on my machine" - How to fix non-reproducible bugs?

Very occasionally, despite all testing efforts, I get hit with a bug report from a customer that I simply can't reproduce in the office.
(Apologies to Jeff for the 'borrowing' of the badge)
I have a few "tools" that I can use to try to locate and fix these, but it always feels a bit like I'm knife-and-forking it:
Asking the customer for more and more context (systeminfo, etc.)
Log files from our application
Ad-hoc tests with the customer to attempt to change the behaviour
Providing customer with a new build with additional diagnostics
Thinking about the problem in the bath...
Site visit (assuming customer is somewhere warm and sunny)
Are there set procedures, or other techniques, that anyone uses to resolve problems like this?
One of the attributes of good debuggers, I think, is that they always have a lot of weapons in their toolkit. They never seem to get "stuck" for too long, and there is always something else for them to try. Some of the things I've been known to do:
ask for memory dumps
install a remote debugger on a client machine
add tracing code to builds
add logging code for debugging purposes
add performance counters
add configuration parameters to various bits of suspicious code so I can turn on and off features
rewrite and refactor suspicious code
try to replicate the issue locally on a different OS or machine
use debugging tools such as application verifier
use 3rd party load generation tools
write simulation tools in-house for load generation when the above failed
use tools like Glowcode to analyse memory leaks and performance issues
reinstall the client machine from scratch
get registry dumps and apply them locally
use registry and file watcher tools
Eventually, I find the bug just gives up out of some kind of awe at my persistence. Or the client realises that it's probably a machine or client side install or configuration issue.
Extensive logging usually helps.
The easiest way is always to see the customer in action (assuming it's readily reproducible by the customer). Oftentimes, problems arise due to issues with the customer's computer environment, conflicts with other programs, etc.; these are details you will not be able to catch on your dev rig. So a site visit might be useful, but if that's not convenient, tools like RealVNC can also let you see the customer 'do their thing'.
(watching the customer in action also allows you to catch them out in any WTF moments that they might have)
Now, if the problem is intermittent, then things get somewhat more complicated. The best way to get around this problem would be to log useful information in places where you guess problems could occur and perhaps use a tool like Splunk to index the log files during analysis. A diagnostic build (i.e. with extra logging) might be useful in this case.
I'm just in the middle of implementing an automated error reporting system that sends information back to me (currently via email, although you could use a web service) from any exception encountered by the app.
That way I get (nearly) all the information that I would get if I were sitting in front of VS2008, and it really helps me work out what the problem is.
The customers are also usually (sorta) impressed that I know about their problem as soon as they encounter it!
Also, if you use the Application.ThreadException error handler you can send back info on unexpected exceptions too!
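The hooks above are .NET-specific; as a language-neutral illustration of the same idea, here is a small Python sketch that catches otherwise-unhandled exceptions and records the full traceback (the file name is arbitrary, and the email/web-service step is left as a comment):

import logging
import sys
import traceback

logging.basicConfig(filename='crash.log', level=logging.ERROR)

def report_unhandled(exc_type, exc_value, exc_tb):
    # capture what you would normally only see in the debugger:
    # exception type, message and full stack trace
    text = ''.join(traceback.format_exception(exc_type, exc_value, exc_tb))
    logging.error('Unhandled exception:\n%s', text)
    # a real reporter would now email this or POST it to a web service

sys.excepthook = report_unhandled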
We use all the methods you mention, progressively, starting with the easiest and proceeding to the hardest.
However, you forget that sometimes the hardware is at fault. For example, memory can malfunction, and then some computation-intensive code will behave strangely, throwing exceptions with weird diagnostics. Of course it works on your machine, since you don't have faulty hardware.
Experience is needed to identify such errors and to insist that the customer try installing the program on another machine or run a hardware check. One thing that helps greatly is good error handling: when your code throws an exception, it should provide details, not just indicate that something is bad. With good error reporting it is easier to identify suspicious issues related to faulty hardware.
I think one of the most important things is the ability to ask sensible questions around what the customer has reported... More often than not they leave something out because they don't see it as relevant, when it is actually key.
Telepathy would also be useful...
We've had good success using EurekaLog with it posting directly to FogBugz. This gets us a bug report containing a call stack, along with related system info (other processes running, memory, network details etc) and a screen shot. Occasionally customers enter further info too, which is helpful. It's certainly, in most cases, made it much easier and quicker to fix bugs.
One technique I've found useful is building an application with an integrated "diagnostic" mode (enabled by a command line switch when you launch the app). That certainly avoids the need to create custom builds with additional logging.
Otherwise, it sounds like what you're doing is as good an approach as any.
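A minimal sketch of that kind of command-line diagnostic switch in Python (the flag name and log file are illustrative):

import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument('--diagnostic', action='store_true',
                    help='enable verbose diagnostic logging')
args = parser.parse_args()

# the normal build stays quiet; the same binary becomes verbose on demand
logging.basicConfig(
    filename='app.log',
    level=logging.DEBUG if args.diagnostic else logging.WARNING)
logging.debug('diagnostic mode enabled')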
Copilot (assuming customer is somewhere cold and rainy :)
The usual procedure for this is to expect something like this will happen and add a ton of logging information. Of course you don't enable it from the beginning, but only when this happens.
Usually customers don't like to have to install a new version or some diagnostic tools. It is not their job to do your debugging. And visiting a client for cases like these is rarely an option. You must involve the client as little as possible. Changing a switch and sending you a log file is OK - anything more than this is too much.
I like the alternative of thinking about the problem in the bath. I would start by trying to find the differences between my machine and the client's configuration.
As a software engineer doing webstuff (booking/shop/member systems etc) the most important thing for us is to get as much information from the customer as possible.
Going from:
"it's broke!"
to:
"it's broke! & here are screenshots of every option I picked whilst generating this particular report"
reduces the amount of time it takes us to reproduce and fix an issue no end.
It may be obvious, but it takes a fair amount of chasing to get this kind of information from our customers sometimes! But it's worth it just for those moments you find they're not actually doing what they say they are.
I had these problems too. My solution was to add lots of logging and give the customer a debug build with all possible debug information, then make sure Dr. Watson (this was on Windows NT) created a memory dump with enough information.
After loading the memory dump in the debugger I could find out where and why it crashed.
EDIT: Oh, this obviously only works if the application terminates violently...
I think following the trail of actions the user took can lead us to the reason for the failure (or selective failures). But most of the time users are at a loss to describe their interactions with the application precisely, so automatic screenshot capture helps (if it is a desktop app; for a .NET app you can check Jeff's UnhandledExceptionHandler). Logging all the important actions that change the state of objects can also help us understand it.
I don't have this problem very often, but if I did, I would use a screen sharing or recorded application to watch the user in action without having to go there (unless, as you said, it's warm and sunny and the company pays the trip).
I have recently been investigating such an issue myself. Over the course of my career I have learnt that, while computer systems may be complex, they are predictable, so have faith that you can find the problem. My approach to these kinds of issues is twofold:
1) Gather as much detailed information as possible from the customer about their failure and analyse it meticulously for patterns. Gather multiple sets of data for multiple failure occurrences to build up a clearer picture.
2) Try and reproduce the failure in house. Continue to make your system more and more similar to the customers system until you can reproduce it, the system is identical or it becomes impractical to make it more similar.
While doing this consider:
1) What differences exist between this system and other, working systems.
2) What has recently changed in your product or the customer's configuration that could have caused the problem to start occurring.
Regards
Depending on the issue, you could get WinDbg dumps; they normally give a pretty good idea of what is going on. We have diagnosed quite a few problems that weren't crashes from minidumps.
For .NET apps we also use Trace.WriteLine, so we can get the user to fire up DbgView and send us the output.
It's a very complicated issue. I had been thinking about writing a procedure for this, and I put together the following one for non-reproducible bugs; it might be helpful.
When the bug occurred, several factors might have contributed. I am sure all bugs are reproducible, so I always keep an eye out for this kind of issue.
Gather information from the customer:
Get the system information.
Find out what else the customer did before the issue.
Note the time period in which it occurs: is it rare or frequent?
Note what happened right after the issue (is it always the same or different?).
Find the factors behind the bug (as a developer):
Find the exact position where the issue happened.
Find all the system factors at that time.
Check for memory leaks, user errors, or wrong conditions in the code.
List all the factors that may cause the issue, how each factor affects it, and what data those factors hold.
Check whether any memory issues occurred.
Check that the customer is running the same, current code as yours.
Check all logs from at least the last month and note any abnormal operations.
Just a short anecdote (hence 'community wiki'): Last week I thought it was a clever idea in a Django app to import the module pprint for pretty printing Python data only if DEBUG was True:
if settings.DEBUG:
    from pprint import pprint
Then I used the pprint call here and there as a debugging statement:
pprint(somevar) # show somevar on the console
After finishing the work, I tested the app with DEBUG=False. You can guess what happened: the site broke with HTTP 500 errors all over the place, and I did not know why, because there is no traceback when DEBUG is False. I was puzzled that the errors magically disappeared when I switched back to debug mode.
It took me 1-2 hours of putting print statements all over the code to find that the code crashes at exactly the above pprint() line. Then it took me another half an hour to convince myself to stop banging my head on the table.
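For the record, one straightforward fix (not spelled out in the original post) is to make the import unconditional and gate only the output:

from pprint import pprint          # the import itself is cheap and safe
from django.conf import settings

def debug_pprint(value):
    # prints only while DEBUG is on; calls become harmless no-ops otherwise
    if settings.DEBUG:
        pprint(value)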
Now comes the moral of the story:
Not everything that looks like a clever idea at first sight is so clever in the end.
An important thing to look at when debugging these errors is every configuration option and platform switch your code makes on its own. This can be quite a lot more than just user preferences. Document it well if you make an assumption about the user's platform (e.g., if you test on Win/Mac/Linux only, will your code crash on BSD or Solaris?).
Cheers,
However tough a non-reproducible problem is, we can still take a structured, strategic approach to solving it, and I can say from experience that it requires out-of-the-box thinking in about half of the cases. Generally speaking, you can categorize problems into different types, which helps identify which tool to use. For example, for a non-reproducible application crash or a memory issue, you can use profilers and nail the issue down to the particular functionality.
Also, one of the most important approaches is information-rich logging. I use a lot of enums to describe the state of the process, depending on the scenario in question. For example, I used states like Initiated, Triggered, Running, Waiting and Repaired to describe schedule states and saved them to the DB at different stages.
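For instance, a small sketch of such a state enum in Python (the state names come from the answer; everything else is illustrative):

from enum import Enum

class ScheduleState(Enum):
    INITIATED = 'Initiated'
    TRIGGERED = 'Triggered'
    RUNNING = 'Running'
    WAITING = 'Waiting'
    REPAIRED = 'Repaired'

# log or persist the symbolic name rather than a bare integer
current = ScheduleState.RUNNING
print('schedule state: %s' % current.value)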
Not mentioned yet, but "directed code review" is one good solution, especially if you didn't do a proper review (at least 1 hour per 100 lines of code) before release.
I have also seen impressive demos of AppSight Suite, which is basically an advanced environment monitoring and logging tool. It allows the customer to record what happens on his machine in an extensive but fairly compact log file which you can then replay.
As many have mentioned: extensive logging, and asking the client for the log files when something goes wrong. In addition, as I have worked more with web apps, I also provide detailed but succinct deployment documentation (e.g., deployment steps, environmental resources that need to be set up, etc.).
Here are common problems I've seen that lead to the types of problem you are describing:
Environment not set up properly (e.g., missing environment variables, data sources etc).
Application not fully deployed (e.g., database schema not deployed).
Difference in operating system configuration (default character encoding being the most common culprit for me).
Most of the time, these issues can be identified through the log content.
You can use tools like Microsoft SharedView or TeamViewer to connect to remote PC and inspect problem directly on site. Of course, you'll need cooperation with customer.
