Preventing debug code going into production using Progress 4GL? - debugging

How would you prevent chunks of debugging code from accidentally leaking into the production environment when using Progress 4GL?

I usually just publish a special event: debug-message. In my dev environment there's a menu item in the application which will fire up a window that subscribes to debug-message anywhere and displays any messages generated. So I can insert debug-messages into my code and then open the window if I want to see the messages. If I forget to tidy up the debug code, the live users don't see any messages, although I can still open the debug window to see what's going on.
(The WebSpeed version of this would just write the output to an OS file.)
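A minimal sketch of the pattern (the procedure layout and message text here are illustrative, not from the original application):

/* anywhere in application code: fire the event; harmless if nobody subscribes */
PUBLISH "debug-message" ("checkpoint reached in order entry").

/* in the debug window: listen for the event from anywhere */
SUBSCRIBE TO "debug-message" ANYWHERE.

/* internal procedure in the debug window, run once per published message */
PROCEDURE debug-message:
    DEFINE INPUT PARAMETER p-msg AS CHARACTER NO-UNDO.
    /* a real window would append to an editor widget; MESSAGE keeps the sketch small */
    MESSAGE p-msg.
END PROCEDURE.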

If your test and production databases have different names, you could use this code:
IF DBNAME = "TESTDB" THEN
DO:
    /* <debug code> */
END.

Similar to my other answer about assertions, you can set up an include file that contains the debug flag definition on development sites and is empty on production sites. On development sites you just define the value so that your debugging code is included in your program.
Because the code is wrapped in a preprocessor conditional, the compiler omits the debug code altogether when you compile onto a production site.
&IF DEFINED( DEBUGALERT ) <> 0 &THEN
&ENDIF
You would then put &GLOBAL-DEFINE DEBUGALERT in versions of the code you want to contain the debug code. Not defining DEBUGALERT causes the compiler to omit the code.
/* debug.i - omit the following line on production */
&GLOBAL-DEFINE DEBUGALERT

/* test.p */
{debug.i}

DEF VAR h_ct AS INT NO-UNDO.

DO h_ct = 1 TO 10:
&IF DEFINED( DEBUGALERT ) <> 0 &THEN
    MESSAGE "debug message" h_ct.
    /* <debug code goes here> */
&ENDIF
END.

This solution is based on the assumption that the development environment has a unique PROPATH entry that is not present in other environments, and that code is recompiled when it is moved over:
&IF DEFINED(DEBUGGING) = 0 &THEN
    &IF PROPATH MATCHES '*development*' &THEN
        &GLOBAL-DEFINE DEBUGGING TRUE
    &ELSE
        &GLOBAL-DEFINE DEBUGGING FALSE
        &MESSAGE Remove debugging: search for DEBUG within the code.
    &ENDIF
&ENDIF
&IF DEFINED(DEBUGGING_STARTED) = 0 &THEN
    &GLOBAL-DEFINE DEBUGGING_STARTED TRUE
    IF {&DEBUGGING} THEN
    DO:
&ELSE
    END.
    &UNDEFINE DEBUGGING_STARTED
&ENDIF
Usage
Save the file as "debug" (without an extension) in a directory on the PROPATH, then:
{debug}
/* some debugging code here */
{debug/}
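To see why this works, here is roughly what the pair of includes expands to (a sketch; which branch you get depends on the PROPATH test above):

/* on a development PROPATH: */
IF TRUE THEN
DO:
    /* some debugging code here */
END.

/* on any other site: */
IF FALSE THEN
DO:
    /* some debugging code here */
END.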

Related

Is there a way in an AutoIt script to find out if the current file was included or is running on its own?

I mean something like get_included_files() in PHP, Error().stack in JavaScript, or the $BASH_SOURCE array in Bash.
The only things I've found in the macros reference ( https://www.autoitscript.com/autoit3/docs/macros.htm ) are @ScriptFullPath and @ScriptName.
Check if @ScriptName matches the name of the current script. If it doesn't, your script has been included in something else. I use this method to run a unit test if an included script is run as a stand-alone script. I add the following code to the end of included_script.au3:
; unit test code
If @ScriptName == "included_script.au3" Then
    MsgBox(0, "Unit Test", "Running unit test...", 3)
    test()
    Exit
EndIf

Func test()
    ; no test defined
EndFunc
When included in a "main.au3" file, @ScriptName will be set to "main.au3" and test() will not run.
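For example, a hypothetical main.au3 hosting the script (the message text is illustrative):

; main.au3 (hypothetical host script)
#include "included_script.au3"
; @ScriptName is "main.au3" here, so the unit-test block in the include did not run
MsgBox(0, "Main", "Main script running; unit test was skipped")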
You can implement the functionality yourself using a global variable, say $includeDepth. $includeDepth should be incremented by 1 before any #include and decremented by 1 after it. If $includeDepth is 0, the code is not running as part of an #include. For example:
Global $includeDepth

$includeDepth += 1
#include <MyScript1.au3>
#include <MyScript2.au3>
$includeDepth -= 1

; Declarations, functions, and initializations to be run in both "include" and "non-include" modes

If $includeDepth <= 0 Then
    ; Code for when in "non-include" mode
EndIf
You need to be very consistent about this usage, though. As long as library #includes don't include your own scripts (or scripts where this check is done), there is no need to modify them; from that point of view, it's also not necessary to increment/decrement $includeDepth around library #includes. However, it doesn't hurt, and it reinforces the practice.

Under what conditions does the stringizing preprocessor operator add an _T?

The following code doesn't compile in my VC++ 2010 project:
#define MY_MAJOR_VERSION 20
#define OLESTR_(str) L##str
#define MOLE( STR ) OLESTR_(#STR)
#define MAKE_STR(STR) MOLE(STR)

REGMAP_ENTRY(MAKE_STR(VERSION), MAKE_STR(MY_MAJOR_VERSION))
VERSION is NOT a macro definition, just text. In the end, I want:
REGMAP_ENTRY(L"VERSION", L"20")
but what I get, when I compile in Debug mode is:
REGMAP_ENTRY(L"VERSION", LL"20")
I'm thinking it's a project setting because I've used that in debug mode in other situations, but never with this problem. Is there a VC++ 2010 setting that would cause the stringizing operator to insert an L or _T?
For me, this (note that I changed MAKE_STR to MAKE_OLESTR - I assume that was a typo in the code posted in the question):
#define MY_MAJOR_VERSION 20
#define OLESTR_(str) L##str
#define MOLE( STR ) OLESTR_(#STR)
#define MAKE_OLESTR(STR) MOLE(STR)
REGMAP_ENTRY(MAKE_OLESTR(VERSION), MAKE_OLESTR(MY_MAJOR_VERSION))
preprocesses to (as shown by cl /E test.c):
REGMAP_ENTRY(L"VERSION", L"20")
which seems to be what you want.
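For reference, here is the expansion step by step (standard C preprocessor behaviour, nothing VC++-specific):

/* MAKE_OLESTR(MY_MAJOR_VERSION)
     -> MOLE(20)          arguments are macro-expanded before substitution
     -> OLESTR_("20")     #STR stringizes the already-expanded argument
     -> L"20"             L ## "20" pastes the wide prefix onto the literal

   The indirection through MAKE_OLESTR/MOLE exists because # suppresses
   expansion of its operand; stringizing MY_MAJOR_VERSION directly would
   yield "MY_MAJOR_VERSION" instead of "20". */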
You may want to post something that can be reproduced with a command-line compile.

ruby win32 File Readonly flags

I was playing with some Ruby the other day and wrote the following code:
File.open(my_file, "w+") do |fh|
  begin
    fh.readonly = true # <-- EACCES raised here
  ensure
    fh.close
  end
end
This does not work: it throws Errno::EACCES because the file is read-only. If I change the open mode to "r", it works just fine. To me this is counterintuitive, because I thought opening with "r" meant I'd only be able to read the file, not change its attributes.
I am using win32-file (0.6.6) with Ruby 1.8.7 (not upgradable for the current project). Is this normal behaviour, a quirk of the win32-file gem, or just a bug that I can code around?
I would have expected that to set the read-only bit I'd have to open with "w+", which seems much more sensible.
A bit more info: this test was performed on Windows Server 2003 64-bit, just in case that makes a difference.
Try opening the file with both read and write permissions. Note that the valid Ruby mode string for that is "r+", not "rw+":
File.open(my_file, "r+") do |fh|
  begin
    fh.readonly = true
  ensure
    fh.close
  end
end
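If the gem keeps misbehaving, a gem-free workaround may be worth a try: on Windows, core Ruby's File.chmod maps the write-permission bits onto the read-only file attribute. A sketch (my_file as in the question):

# write the file first, then toggle the read-only attribute with core Ruby
File.open(my_file, "w+") { |fh| fh.write("contents") }
File.chmod(0444, my_file) # clear write bits => read-only attribute set
File.chmod(0644, my_file) # restore write bit => read-only attribute cleared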
I eventually found out what this was: another process was holding an exclusive lock on the directory at the filesystem level. It didn't show up in Process Explorer, but I noticed something in my logs referencing that directory. I stopped the service and, bam, it worked.

Uncaught Throw generated by JLink or UseFrontEnd

This example routine generates two Throw::nocatch warning messages in the kernel window. Can they be handled somehow?
The example consists of this code in a file "test.m" created in C:\Temp:
Needs["JLink`"];
$FrontEndLaunchCommand = "Mathematica.exe";
UseFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
Then these commands pasted and run at the Windows Command Prompt:
PATH=C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
start MathKernel -noprompt -initfile "C:\Temp\test.m"
Addendum
The reason for using UseFrontEnd as opposed to UsingFrontEnd is that an interactive front end may be required to preserve output and messages from notebooks that are usually run interactively. For example, with C:\Temp\test.m modified like so:
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[10];
CloseFrontEnd[];
and a notebook C:\Temp\run.nb created with a single cell containing:
x1 = 0; While[x1 < 1000000,
If[Mod[x1, 100000] == 0,
Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]];
this code, launched from a Windows Command Prompt, will run interactively and save its output. This is not possible to achieve using UsingFrontEnd or MathKernel -script "C:\Temp\test.m".
During the initialization, the kernel code is in a mode which prevents aborts.
Throw/Catch are implemented with Abort, therefore they do not work during initialization.
A simple example that shows the problem is to put this in your test.m file:
Catch[Throw[test]];
Similarly, functions like TimeConstrained, MemoryConstrained, Break, the Trace family, Abort and those that depend upon it (like certain data paclets) will have problems like this during initialization.
A possible solution to your problem might be to consider the -script option:
math.exe -script test.m
Also, note that in version 8 there is a documented function called UsingFrontEnd, which does what UseFrontEnd did, but is auto-configured, so this:
Needs["JLink`"];
UsingFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
should be all you need in your test.m file.
See also: Mathematica Scripts
Addendum
One possible solution using -script and UsingFrontEnd is the 'run.m' script included below. This does require setting up a 'Test' kernel in the kernel configuration options (basically a clone of the 'Local' kernel settings).
The script includes two utility functions, NotebookEvaluatingQ and NotebookPauseForEvaluation, which help the script to wait for the client notebook to finish evaluating before saving it. The upside of this approach is that all the evaluation control code is in the 'run.m' script, so the client notebook does not need to have a NotebookSave[EvaluationNotebook[]] statement at the end.
NotebookPauseForEvaluation[nb_] := Module[{}, While[NotebookEvaluatingQ[nb], Pause[.25]]]

NotebookEvaluatingQ[nb_] := Module[{},
  SelectionMove[nb, All, Notebook];
  Or @@ Map["Evaluating" /. # &, Developer`CellInformation[nb]]
]
UsingFrontEnd[
  nb = NotebookOpen["c:\\users\\arnoudb\\run.nb"];
  SetOptions[nb, Evaluator -> "Test"];
  SelectionMove[nb, All, Notebook];
  SelectionEvaluate[nb];
  NotebookPauseForEvaluation[nb];
  NotebookSave[nb];
]
I hope this is useful in some way to you. It could use a few more improvements, like resetting the notebook's kernel to its original setting and closing the notebook after saving it, but this code should work for this particular purpose.
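Those improvements might look something like this (a sketch; "Local" is assumed to be the notebook's original evaluator):

SetOptions[nb, Evaluator -> "Local"]; (* restore the original kernel setting *)
NotebookClose[nb];                    (* close the notebook once it is saved *)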
On a side note, I tried one other approach, using this:
UsingFrontEnd[ NotebookEvaluate[ "c:\\users\\arnoudb\\run.nb", InsertResults->True ] ]
But this kicks the kernel terminal session into a dialog mode, which seems like a bug to me (I'll check into this and get it reported if it is a valid issue).

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happened in the call stack, which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls; see traceback() for interactive and non-interactive R sessions.
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
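For instance, a minimal tryCatch() sketch (the file name and fallback value are illustrative):

result <- tryCatch(
  read.csv("data.csv"),  # any step that might fail goes here
  error = function(e) {
    message("step failed: ", conditionMessage(e))
    NULL  # fallback value so the script can decide how to continue
  }
)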
Doing options(error = traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it has the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this is forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10, 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- "f <- function(a, b) {
  if (a > b) {
    a
  } else {
    b
  }
}"

eval(parse(text = x))  # Normally you'd use source() to read a file...

findLineNum("<text>#3")  # <text> is a dummy filename used by parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R. It should be, as it is in every other language.
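For illustration, once the option is set, the error message carries a file#line suffix (the script name and line number below are made up):

options(show.error.locations = TRUE)
source("script.R")
# Error in f(x) : object 'foo' not found (from script.R#12)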
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow, and how I was able to access the memory dump and error log after a non-interactive R failure.
1. I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
setwd('~/myUsername/directoryForDump'); # Set working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE); # First dump to file; this dump is not accessible by the R session.
sink(file="error.log"); # Specify sink file to redirect all output.
dump.frames(); # Dump again to be able to retrieve error message and write to error log; this dump is accessible by the R session since not dumped to file.
cat(attr(last.dump,"error.message")); # Print error message to file, along with simplified stack trace.
cat('\nTraceback:');
cat('\n');
traceback(2); # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
sink();
q()}))
2. Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options(keep.source=TRUE), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
3. From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1, placed at the top of myMainScript.R, sets the error handling option for the non-interactive R session.
4. Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
5. An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
6. At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
7. At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First set options(show.error.locations = TRUE), then run traceback(). The error line number will be displayed after #.
