I am using Inno Download Plugin to download a bunch of files for my installation. These files are listed in "files.json".
FileList := TStringList.Create;
FileList.LoadFromFile(ExpandConstant('{tmp}\files.json'));
for i := 0 to FileList.Count-1 do
begin
  fileName := ExtractFileName(FileList[i]);
  StringChangeEx(FileList[i], '\', '/', True);
  // Add each file to the download queue
  idpAddFile("www.myapiaddress.com/files/" + FileList[i], ExpandConstant('{tmp}\install\') + fileName);
  Log(FileList[i]);
end;
What gives me a headache is the line StringChangeEx(FileList[i], '\', '/', True). As soon as I put that in, idp.iss stops compiling, giving me the error Variable expected on the line:
procedure idpAddFile(url, filename: String); external 'idpAddFile#files:idp.dll cdecl';
The installer compiles fine if I remove StringChangeEx from my script entirely. It does not help to place it in a different location...
Any idea what might cause this problem?
The StringChangeEx function needs a string variable as its first argument (it's declared as var). You are passing it a string value only.
Your code would work were FileList a string array. But it's not; it's a class with an array-like default string property. Using a property value is equivalent to using a function/method return value. A property is just syntactic sugar for getter and setter methods.
You will have to copy the value to a string variable and back.
S := FileList[i];
StringChangeEx(S, '\', '/', True);
FileList[i] := S;
Though you actually do not need to copy it back in your case.
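For your loop that means, with S declared as a String, the body could be reduced to something like this minimal sketch (reusing your own variable names and URL):
S := FileList[i];
StringChangeEx(S, '\', '/', True);
// Add each file to the download queue, using the converted path
idpAddFile('www.myapiaddress.com/files/' + S, ExpandConstant('{tmp}\install\') + fileName);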
Regarding the reason why the error refers to the idp.iss file: there seems to be a bug in Inno Setup that makes it report the error as if it had occurred on the very first line of the Pascal Script code (which in your case is the very first real code in the included idp.iss). I've posted a bug report, and it has already been fixed.
I'm new to Golang, but one of its apparent strengths is in creating command-line tools so I thought a fun little learning exercise would be to recreate the 'touch' command on Windows. I wrote a program to create a new file at either a specified filepath, or in the current folder. This has gone absolutely grand, and I can make a new file no problem:
func create_empty_file(file_name string) bool {
filehandle, err := os.Create(file_name)
if err != nil {
return false
}
defer filehandle.Close()
return true
}
I know that this part is working because I can watch the file being made and open it after the program is complete. My problem here is that I would like to open this file in VSCode after it is created, and there is clearly something that I don't understand about os/exec. When I try to run this:
command_string := strings.Join([]string{"code", full_filepath}, " ")
run_command := exec.Command(command_string)
run_err := run_command.Run()
And I print out the contents of run_err (obviously checking for nil beforehand), I get this:
.Run() failed with error: exec: "code C:\\Go Code\\failed_successfully.go": file does not exist
If I copy and paste "code C:\\Go Code\\failed_successfully.go" into my command prompt, it opens the .go file in VSCode without issue, so clearly there is something about this call that I'm missing/don't understand.
I thought maybe it was trying to open the file before it had been created, so I looked up how to check if a file exists yet and then wrote a short function using Ticks which checks every few milliseconds if the file exists yet and returns true when it finds it. Otherwise, it runs for some specified number of seconds and then returns false. I still get the same error, so I am assuming that this is not the issue.
The last thing I did was use strings.Replace() to replace all of the backslashes with forward slashes, which had no effect.
Any advice on how to achieve what I want here would be much appreciated!
exec.Command does not parse the input string, splitting it on spaces and so on. Instead, pass the arguments individually to exec.Command.
That is:
runCmd := exec.Command("code", full_filepath)
Currently, you're trying to find a command called code C:\\Go Code\\failed_successfully.go -- rather than one simply called code and calling it with an argument.
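Putting it together, here is a minimal, self-contained sketch of the corrected call; the file path is a placeholder and the file is assumed to already exist:
package main

import (
	"log"
	"os/exec"
)

func main() {
	full_filepath := `C:\Go Code\failed_successfully.go` // placeholder path

	// Pass the program name and its argument separately; exec.Command does
	// not split a single string on spaces the way a shell would.
	runCmd := exec.Command("code", full_filepath)
	if err := runCmd.Run(); err != nil {
		log.Fatalf(".Run() failed with error: %v", err)
	}
}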
I get an error:
unit1.pas(91,31) Error: Incompatible type for arg no. 1: Got "File Of Byte", expected "AnsiString"
My code:
var
  f : file of byte;
...
AssignFile(f, FileName);
Reset(f);
try
  TotalBytes := FileSize(f); // line 93
finally
  CloseFile(f);
end;
Can someone help me?
As @Abelisto said, in Lazarus there are two FileSize functions, one in the System unit and one in the Lazarus unit fileutil.
The former takes a File as parameter, whereas the latter takes a string.
So if your code has fileutil in a uses clause, the one from that unit takes precedence over the one in System. That explains the error message.
You will have to fully qualify the call, so instead of a plain FileSize(f), use System.FileSize(f), or, alternatively, use FileSize(FileName) or fileutil.FileSize(FileName).
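For illustration, a minimal sketch showing both qualified variants; the file name is a placeholder, and the FileUtil unit is assumed to be available (it ships with Lazarus):
program FileSizeDemo;

{$mode objfpc}{$H+}

uses
  FileUtil;

var
  f : file of byte;
  FileName : String;
  TotalBytes : Int64;

begin
  FileName := 'somefile.dat';  // placeholder: an existing file
  // Variant 1: System.FileSize works on an open (non-text) file variable.
  AssignFile(f, FileName);
  Reset(f);
  try
    TotalBytes := System.FileSize(f);
    writeln('System.FileSize:  ', TotalBytes);
  finally
    CloseFile(f);
  end;
  // Variant 2: fileutil.FileSize takes the file name as a string.
  TotalBytes := FileUtil.FileSize(FileName);
  writeln('FileUtil.FileSize: ', TotalBytes);
end.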
Line 91 appears to be
Reset(f);
so it is not clear why you include the comment about line 93.
However, if you are getting an error on Reset(f), the cause must be something you have not told us in your question. To see why, please follow the steps below carefully.
Note: The reason for basing the call to FileSize in my code on a copy of the compiled EXE is that the file is guaranteed to exist but is not the EXE itself; when the EXE is running it cannot be opened in a shareable mode, so attempting to call Reset on it would fail.
Compile (but do not run yet) the console app below.
Copy the resulting EXE to a file in the same directory, but with the extension '.BU' rather than '.EXE'. The reason is that attempting Reset on the EXE itself would result in a RunError(5), which means "Access denied", because when the EXE is opened by the OS it isn't opened in a shareable mode.
Now run the app. It should correctly report the size of the .BU file.
Assuming the EXE works as predicted, you need to identify where your error is coming from. My first guess would be that the instance of FileSize isn't the one in the System unit - my code calls System.FileSize to ensure that the correct instance of FileSize gets invoked. You can check that by changing your code to TotalBytes := System.FileSize(... - if the error goes away, you've found the cause.
Code:
program Files2;

{$mode objfpc}{$H+}

uses
  SysUtils;

var
  TotalBytes : Int64;
  f : file of byte;
  FileName : String;

begin
  FileName := ChangeFileExt(ParamStr(0), '.BU'); // derive the .BU file name from this app's name
  AssignFile(f, FileName);
  Reset(f);
  try
    TotalBytes := System.FileSize(f);
    writeln('Size of ', FileName, ' = ', TotalBytes, ' bytes');
    readln;
  finally
    CloseFile(f);
  end;
end.
I'm writing a Mac OS program, and I have the following lines:
os.execute("cd ~/testdir")
configfile = io.open("configfile.cfg", "w")
configfile:write("hello")
configfile:close()
The problem is, it only creates the configfile in the script's current directory instead of the folder I have just cd'd into. I realised this is because I'm using a console command to change directory, then direct Lua code to write the file. To combat this I changed the code to this:
configfile = io.open("~/testdir/configfile.cfg", "w")
However I get the following result:
lua: ifontinst.lua:22: attempt to index global 'configfile' (a nil value)
stack traceback:
ifontinst.lua:22: in main chunk
My question is, what's the correct way to use io.open to create a file in a folder I have just created in the user's home directory?
I appreciate I'm making a rookie mistake here, so I apologise if you waste your time on me.
You have a problem with the ~ symbol. In your os.execute("cd ~/testdir") it is the shell that interprets the symbol and replaces it with your home path. However, in io.open("~/testdir/configfile.cfg", "w") it is Lua that receives the string, and Lua doesn't interpret this symbol, so your program tries to open a file in the wrong folder. One simple solution is to call os.getenv("HOME") and concatenate that with the rest of your file path:
configfile = io.open(os.getenv("HOME").."/testdir/configfile.cfg", "w")
In order to improve error messages, I suggest you wrap io.open() in an assert() call:
configfile = assert( io.open(os.getenv("HOME").."/testdir/configfile.cfg", "w") )
I saw this question here: How to get an output of an Exec'ed program in Inno Setup?
But I can't get it to work myself; the commented-out code shows my attempts to make this work, but I resorted to a bat file because I couldn't make my redirection work. CacheInstanceName and CacheInstanceDir are global variables defined elsewhere:
function CheckCacheExists(): Integer;
var
  args: String;
  buffer: String;
  ResultCode: Integer;
begin
  // args := 'qlist ' + CacheInstanceName + ExpandConstant(' nodisplay > {tmp}\appcheck.txt');
  // MsgBox(args, mbInformation, MB_OK);
  // Exec(CacheInstanceDir + '\bin\ccontrol.exe', 'qlist ' + CacheInstanceName + ExpandConstant(' nodisplay > "{tmp}\appcheck.txt"'), '', SW_SHOW,
  ExtractTemporaryFile('checkup.BAT');
  Exec(ExpandConstant('{tmp}\checkup.BAT'), CacheInstanceDir + ' ' +
    CacheInstanceName + ' ' + ExpandConstant('{tmp}'), '', SW_SHOW,
    ewWaitUntilTerminated, ResultCode);
  LoadStringFromFile(ExpandConstant('{tmp}\appcheck.txt'), buffer);
  if Pos('^', buffer) = 0 then
  begin
    Result := 0
  end
  else
  begin
    Result := 1
  end
end;
What am I doing wrong?
The output redirection syntax is a feature of the command prompt, not the core Windows APIs. Therefore if you want to redirect output then you need to invoke the command via {cmd} /c actual-command-line > output-file. Don't forget to include quotes where appropriate, as {tmp} (and other constants) may contain spaces.
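For illustration, a hedged sketch of such a redirected call, reusing the ccontrol.exe command line from your commented-out attempt (the exact quoting may need adjusting for your paths):
// Run ccontrol.exe through cmd.exe so that the > redirection works.
// The doubled outer quotes let cmd.exe cope with both the quoted program
// path and the quoted redirect target.
Exec(ExpandConstant('{cmd}'),
  '/c ""' + CacheInstanceDir + '\bin\ccontrol.exe" qlist ' + CacheInstanceName +
  ' nodisplay > "' + ExpandConstant('{tmp}\appcheck.txt') + '""',
  '', SW_HIDE, ewWaitUntilTerminated, ResultCode);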
However, you should strongly consider rewriting whatever is in that batch file into actual code. Anything you can do in a batch file you can do either directly in the Inno script or in a DLL that you call from the script. And this permits you greater control over error checking and the format of whatever data you want to retrieve.
Try running the command directly on your command line with the arguments from your args string to see what the result is; that may give an indication of the problem.
Also, check that the file you are trying to redirect your output to is not in use by another process. I have found that when this occurs, the actual command may execute successfully, with the Exec call returning True, but the ResultCode indicates an error and no output is written to the file used in the redirect. In this particular case of the file being in use, SysErrorMessage(ResultCode) returns simply Incorrect function. However, testing directly on the command line as I mentioned first reports that the file is in use by another process.
If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls, see traceback() for interactive and non-interactive R sessions
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
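For example, a minimal sketch of that pattern; connect_to_database() here is a hypothetical stand-in for whatever call you are worried about:
result <- tryCatch(
  {
    connect_to_database()   # hypothetical risky call
  },
  error = function(e) {
    message("Failed: ", conditionMessage(e))
    NULL                    # fall back to a safe value
  },
  finally = {
    message("Attempt finished.")
  }
)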
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it has the line number, prefixed by #. But it's hit or miss, many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those...
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R. It should be, as it is in every other language.
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
  setwd('~/myUsername/directoryForDump');  # Set working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
  dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE);  # First dump to file; this dump is not accessible by the R session.
  sink(file="error.log");  # Specify sink file to redirect all output.
  dump.frames();  # Dump again to be able to retrieve error message and write to error log; this dump is accessible by the R session since not dumped to file.
  cat(attr(last.dump,"error.message"));  # Print error message to file, along with simplified stack trace.
  cat('\nTraceback:');
  cat('\n');
  traceback(2);  # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
  sink();
  q()
}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file that you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitates speedy post-mortem debugging of what caused the error.
First set options(show.error.locations = TRUE) and then call traceback(). The error line number will be displayed after #.
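A minimal sketch of that combination; myScript.R is a placeholder for your own script:
options(show.error.locations = TRUE)  # errors now report the source location after # (when source references are available)
source("myScript.R")                  # placeholder: the script that raises an error
traceback()                           # inspect the call stack for the failing call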