Uncaught Throw generated by JLink or UseFrontEnd - wolfram-mathematica

This example routine generates two Throw::nocatch warning messages in the kernel window. Can they be handled somehow?
The example consists of this code in a file "test.m" created in C:\Temp:
Needs["JLink`"];
$FrontEndLaunchCommand = "Mathematica.exe";
UseFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
Then these commands are pasted and run at the Windows Command Prompt:
PATH = C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
start MathKernel -noprompt -initfile "C:\Temp\test.m"
Addendum
The reason for using UseFrontEnd as opposed to UsingFrontEnd is that an interactive front end may be required to preserve output and messages from notebooks that are usually run interactively. For example, with C:\Temp\test.m modified like so:
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[10];
CloseFrontEnd[];
and a notebook C:\Temp\run.nb created with a single cell containing:
x1 = 0; While[x1 < 1000000,
If[Mod[x1, 100000] == 0,
Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]];
this code, launched from a Windows Command Prompt, will run interactively and save its output. This is not possible to achieve using UsingFrontEnd or MathKernel -script "C:\Temp\test.m".

During initialization, the kernel is in a mode which prevents aborts.
Throw/Catch are implemented with Abort, therefore they do not work during initialization.
A simple example that shows the problem is to put this in your test.m file:
Catch[Throw[test]];
Similarly, functions like TimeConstrained, MemoryConstrained, Break, the Trace family, Abort and those that depend upon it (like certain data paclets) will have problems like this during initialization.
A possible solution to your problem might be to consider the -script option:
math.exe -script test.m
Also, note that in version 8 there is a documented function called UsingFrontEnd, which does what UseFrontEnd did, but is auto-configured, so this:
Needs["JLink`"];
UsingFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
should be all you need in your test.m file.
See also: Mathematica Scripts
Addendum
One possible way to combine the -script option with UsingFrontEnd is to use the 'run.m' script
included below. This does require setting up a 'Test' kernel in the kernel configuration options (basically a clone of the 'Local' kernel settings).
The script includes two utility functions, NotebookEvaluatingQ and NotebookPauseForEvaluation, which help the script to wait for the client notebook to finish evaluating before saving it. The upside of this approach is that all the evaluation control code is in the 'run.m' script, so the client notebook does not need to have a NotebookSave[EvaluationNotebook[]] statement at the end.
NotebookPauseForEvaluation[nb_] := Module[{},While[NotebookEvaluatingQ[nb],Pause[.25]]]
NotebookEvaluatingQ[nb_]:=Module[{},
SelectionMove[nb,All,Notebook];
Or @@ Map["Evaluating" /. # &, Developer`CellInformation[nb]]
]
UsingFrontEnd[
nb = NotebookOpen["c:\\users\\arnoudb\\run.nb"];
SetOptions[nb,Evaluator->"Test"];
SelectionMove[nb,All,Notebook];
SelectionEvaluate[nb];
NotebookPauseForEvaluation[nb];
NotebookSave[nb];
]
I hope this is useful in some way to you. It could use a few more improvements, like resetting the notebook's kernel to its original setting and closing the notebook after saving it, but this code should work for this particular purpose.
On a side note, I tried one other approach, using this:
UsingFrontEnd[ NotebookEvaluate[ "c:\\users\\arnoudb\\run.nb", InsertResults->True ] ]
But this is kicking the kernel terminal session into a dialog mode, which seems like a bug
to me (I'll check into this and get this reported if this is a valid issue).

Related

Grub throws "can't find command `['." when adding conditional to grub.cfg

It was my understanding that grub supports a small subset of bash. The documentation doesn't go into much detail, other than saying that it "supports conditionals", etc.
I am trying to run a simple if.
grub> if [ "${myvar}" = "fred" ]; then
> echo "test"
> fi
error: can't find command `['.
Anybody have an idea? I am using grub2-efi 2.00.
You are missing a grub2 module in order to run if tests.
I'm running Gentoo on a PowerPC system (PPC64 G5 machine) and doing a default grub-mkconfig then booting from it gives me the error in your question.
Since bash has that syntax support, I figured it was simply a grub module that needed to be added (I had been doing work with grub modules recently).
tl;dr: You need to load the appropriate grub module and then the error goes away.
The first step is to find out what modules you have. For me, it's whatever is available in my /boot/grub/powerpc-ieee1275/ folder.
There are also modules in /usr/lib/grub/powerpc-ieee1275/.
I wrote up a list of modules I thought I needed:
normal
eval
read
test
test_blockarg
trig
true
I then added them to my /etc/default/grub file:
GRUB_PRELOAD_MODULES="normal eval read test test_blockarg trig true"
I did not find an entry for GRUB_PRELOAD_MODULES in the config file, so I had to do some searching to find out how to add one. I want these modules to be added every time I generate the grub config file, which means putting them in the 00_header portion of grub.
Then I recreated the configuration file:
grub-mkconfig -o /boot/grub/grub.cfg
The modules were in the header and things worked perfectly on reboot.
If I had to guess: you probably only need the test module to enable if statements.
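If you just want to test this interactively before regenerating grub.cfg, loading the module by hand at the grub prompt should be enough. A minimal sketch, assuming the test module is available in your modules folder:
grub> insmod test
grub> if [ "${myvar}" = "fred" ]; then
> echo "test"
> fi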

Why isn't my GNAT's standout file descriptor working?

As part of a little project, I'm writing a shell in Ada. As such, when I was investigating the system calls, I learned that there are three ways to do it.
The POSIX system calls, which are probably the least reliable.
Passing the arguments along to C's system(), which I didn't really want to do, since this was about writing the emulator in Ada and not C.
Using GNAT's runtime libraries.
I chose to go for the last option, considering this to be the most "Ada-like" of the choices. I found a code snippet on RosettaCode here. I copied and pasted it and compiled it after changing the "cmd.exe" to "ls" and removing the second argument definition. However, nothing happens when I run the executable. The shell just goes right back to the prompt. I have tested this on two different computers, one running Fedora 21, the other Debian Jessie. Here's what I've done to test it:
Checked whether the lack of an arguments string caused it
Checked whether any of the file descriptors in GNAT's libraries are mis-named
Redirected both stderr and stdin to stdout, just to see if GNAT was dumping them to the wrong FD anyway
Looked thoroughly through the System.OS_Lib library file, and found no apparent cause
Googled it, but GNAT's own page on the GCC website is very poorly documented
For now I'm using the C.Interface system in the preparation of my shell, but I'm dissatisfied with this. I'm new to Ada and have only been tinkering with it for a month or so now, so if there's some kind of Ada wisdom here that would help I'm not in on it.
UPDATE: I have tried running it with absolute path, both to /usr/bin and /bin locations, and it doesn't work. Interestingly, the result code returned by the operating system is 1, but I don't know what that means. A quick search suggests that it's for "all general errors", and another site suggests that it's for "incorrect functions".
I had to tweak the RosettaCode example a little to run /bin/ls on Debian Linux, but it does run as expected...
with Ada.Text_IO; use Ada.Text_IO;
with Gnat.OS_Lib; use Gnat.OS_Lib;
procedure Execute_Synchronously is
Result : Integer;
Arguments : Argument_List :=
( 1=> new String'("-al")
);
begin
Spawn
( Program_Name => "/bin/ls",
Args => Arguments,
Output_File_Descriptor => Standout,
Return_Code => Result
);
for Index in Arguments'Range loop
Free (Arguments (Index));
end loop;
end Execute_Synchronously;
Changes :
My Gnat (FSF Gnat 4.9.2 from Debian Jessie) warned about System.OS_Lib, recommending Gnat.OS_Lib instead (which simply renames System.OS_Lib... why???)
System.OS_Lib comments:
-- Note: this package is in the System hierarchy so that it can be directly
-- be used by other predefined packages. User access to this package is via
-- a renaming of this package in GNAT.OS_Lib (file g-os_lib.ads).
Program name including path.
Arguments. The first time I ran it, it displayed the details of "ls" itself, because it was given its own name as the first argument, so I deleted that to see the current directory instead.
Notes :
The best information on the available subprograms and their arguments is usually in the package specs themselves in the "adainclude" folder: this is /usr/lib/gcc/x86_64-linux-gnu/4.9/adainclude on my Debian installation; locate system.ads will find yours. The specific files are: s-os_lib.ads for System.OS_Lib, which exports Spawn and Standout, and a-textio.ads for Ada.Text_IO.
Standout is not the preferred way of accessing Standard Output: it's a file descriptor (an integer); the preferred way would be the Standard_Output function from Ada.Text_IO, which returns a File. However, there doesn't seem to be an overload of Spawn that takes a File (nor would I expect one in this low-level library), so the lower-level file descriptor is used here.
Absent a shell, you'll need to search the PATH yourself or specify a full path for the desired executable:
Spawn (
Program_Name => "/bin/ls",
…
);
I have tried running it with absolute path…neither /usr/bin nor /bin locations work.
Use which to determine the full path to the executable:
$ which ls
/bin/ls

Reopening closed file: Lua

I have a file called backup.lua, which the program should write to every so often in order to backup its status, in case of a failure.
The problem is that the program writes the backup.lua file completely fine first-time round, but any other times it refuses to write to the file.
I tried removing the file while the program was still open but Windows told me that the file was in use by 'CrysisWarsDedicatedServer.exe', which is the program. I have told the host Lua function to close the backup.lua file, so why isn't it letting me modify the file at will after it has been closed?
I can't find anything on the internet (Google actually tried to correct my search) and the secondary programmer on the project doesn't know either.
So I'm wondering if any of you folks know what we are doing wrong here?
Host function code:
function ServerBackup(todo)
local write, read;
if todo=="write" then
write = true;
else
read = true;
end
if (write) then
local source = io.open(Root().."Mods/Infinity/System/Read/backup.lua", "w");
System.Log(TeamInstantAction:GetTeamScore(2).." for 2, and for 1: "..TeamInstantAction:GetTeamScore(1))
System.LogAlways("[System] Backing up serverdata to file 'backup.lua'");
source:write("--[[ The server is dependent on this file; editing it will lead to serious problems.If there is a problem with this file, please re-write it by accessing the backup system ingame.--]]");
source:write("Backup = {};Backup.Time = '"..os.date("%H:%M").."';Backup.Date = '"..os.date("%d/%m/%Y").."';");
source:write(XFormat("TeamInstantAction:SetTeamScore(2, %d);TeamInstantAction:SetTeamScore(1, %d);TeamInstantAction:UpdateScores();",TeamInstantAction:GetTeamScore(2), TeamInstantAction:GetTeamScore(1) ));
source:close();
for i,player in pairs(g_gameRules.game:GetPlayers() or {}) do
if (IsModerator(player)) then
CMPlayer(player, "[!backup] Completed server backup.");
end
end
end
--local source = io.open(Root().."Mods/Infinity/System/Read/backup.lua", "r"); Can the file be open here and by the Lua scriptloader too?
if (read) then
System.LogAlways("[System] Restoring serverdata from file 'backup.lua'");
--source:close();
Backup = {};
Script.LoadScript(Root().."Mods/Infinity/System/Read/backup.lua");
if not Backup or #Backup < 1 then
System.LogAlways("[System] Error restoring serverdata from file 'backup.lua'");
end
end
end
Thanks all :).
Edit:
Although the file is now written to the disk fine, the system fails to read the dumped file.
So, now the problem is that the "LoadScript" function isn't doing what you expect:
Because I'm psychic, I have divined that you're writing a Crysis plugin, and are attempting to use its LoadScript API call.
(Please don't assume everyone here would guess this, or be bothered to look for it. It's vital information that must form part of your question.)
The script you're writing attempts to set Backup, but your script, as written, does not separate lines with newline characters. As the first line is a comment, the entire script will be ignored.
Basically, the script you've written looks like this, which is all treated as a comment:
--[[ comment ]]--Backup="Hello!"
You need to write a "\n" after the comment (and, I'd recommend, in other places too) to make it like this. In fact, you don't really need block comments at all.
-- comment
Backup="Hello!"
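For illustration, here is a minimal sketch of how the write section of ServerBackup could look with line comments and explicit newlines (everything else, including XFormat and the TeamInstantAction calls, is taken from the original script):
-- write each statement on its own line so the loaded script is not swallowed by a comment
source:write("-- The server is dependent on this file; editing it will lead to serious problems.\n")
source:write("Backup = {};\n")
source:write("Backup.Time = '"..os.date("%H:%M").."';\n")
source:write("Backup.Date = '"..os.date("%d/%m/%Y").."';\n")
source:write(XFormat("TeamInstantAction:SetTeamScore(2, %d);\nTeamInstantAction:SetTeamScore(1, %d);\nTeamInstantAction:UpdateScores();\n",
    TeamInstantAction:GetTeamScore(2), TeamInstantAction:GetTeamScore(1)))
source:close()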

startup script in freebsd is not running

I have been trying to run a shell script at boot time on FreeBSD. I have read all similar questions on Stack Overflow and tried their suggestions, but nothing has worked. This is the dummy sample script that I tried:
#!/bin/sh
. /etc/rc.subr
name="dummy"
start_cmd="${name}_start"
stop_cmd=":"
dummy_start()
{
echo "Nothing started."
}
load_rc_config $name
run_rc_command "$1"
Saved with the name dummy.
Permissions are -r-xr-xr-x.
In the rc.conf file I set dummy_enable="YES".
The problem is that when I rebooted my system to test, the dummy file was not there, so the script did not execute. What else do I need to do to run my dummy script?
SRC:http://www.freebsd.org/doc/en/articles/rc-scripting/article.html#rc-flags
You need to add rcvar="dummy_enable" to your script. At least for FreeBSD 9.1.
Call your script with parameter rcvar to get the enabled status:
# /etc/rc.d/dummy rcvar
# dummy
#
dummy_enable="YES"
# (default: "")
And finally start it with parameter start - this won't start the service/script unless dummy_enable is set in /etc/rc.conf (or /etc/rc.conf.local, or /etc/defaults/rc.conf)
# /etc/rc.d/dummy start
Nothing started.
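Putting it together, a minimal sketch of the corrected script, which is just the dummy example from the question with the rcvar line added:
#!/bin/sh

. /etc/rc.subr

name="dummy"
rcvar="dummy_enable"

start_cmd="${name}_start"
stop_cmd=":"

dummy_start()
{
    echo "Nothing started."
}

load_rc_config $name
run_rc_command "$1"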
One possible explanation is that rcorder(8) says:
Within each file, a block containing a series of "REQUIRE", "PROVIDE",
"BEFORE" and "KEYWORD" lines must appear.
Though elsewhere I recall that if a file doesn't have "REQUIRE", "PROVIDE" or "BEFORE", then it will be placed arbitrarily in the dependency ordering. And it could be that the arbitrary placement differs between the first pass (scripts run up to $early_late_divider) and the second pass (those run after $early_late_divider).
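For reference, that rcorder metadata is just a block of comment lines near the top of the script; a minimal sketch (the REQUIRE and KEYWORD values here are only illustrative, pick whatever ordering your script actually needs):
# PROVIDE: dummy
# REQUIRE: LOGIN
# KEYWORD: nojail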
OTOH, is this a stock FreeBSD, or some variant? I recall reading that FreeNAS saves its configuration somewhere else and recreates its system files on every boot. And, quite possibly that /etc is actually on a ramdisk.
Also, /usr/local/etc/rc.d doesn't come into existence until the first port installing an rc file is installed.

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls, see traceback() for interactive and non-interactive R sessions
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
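To illustrate that last point, a minimal tryCatch() sketch; the file name here is made up, and the same pattern applies to a database connection or any other fragile step:
result <- tryCatch({
    readLines("config.txt")   # any step that might fail
}, error = function(e) {
    message("Step failed: ", conditionMessage(e))
    NULL                      # fall back to a sentinel value instead of aborting
})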
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it shows the line number, prefixed by #. But it's hit or miss: many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3")   # <text> is a dummy filename used by parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R? It should be, as it is in every other language.
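As a rough illustration of what this gives you (the file name and line number are hypothetical; the location is only shown when source references are available, e.g. for code read with source()):
options(show.error.locations = TRUE)
source("script.R")
## Error in f(x) (from script.R#2) : object 'y' not found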
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow, and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
setwd('~/myUsername/directoryForDump'); # Set working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE); # First dump to file; this dump is not accessible by the R session.
sink(file="error.log"); # Specify sink file to redirect all output.
dump.frames(); # Dump again to be able to retrieve error message and write to error log; this dump is accessible by the R session since not dumped to file.
cat(attr(last.dump,"error.message")); # Print error message to file, along with simplified stack trace.
cat('\nTraceback:');
cat('\n');
traceback(2); # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
sink();
q()}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First, set options(show.error.locations = TRUE) and then call traceback(). The error line number will be displayed after the #.
