Read statement fortran [duplicate] - gcc

I have a Fortran program that starts with opening and reading data from a .txt file.
At the end of the program a new file is written, which replaces the old file (that was originally imported).
However, it can occur that the file that needs to be opened does not exist; in that case the variables that should be imported from the .txt file should be 0.
I tried to do this with the code below; however, it does not work, and the script aborts when the file history.txt does not exist.
How can I have the script set default values for my variables when the history.txt file does not exist?
OPEN(UNIT=in_his,FILE="C:\temp\history.txt",ACTION="read")
if (stat .ne. 0) then   ! In case history.txt cannot be opened (iteration 1)
    write(*,*) "history.txt cannot be opened"
    KAPPAI=0
    KAPPASH=0
    go to 99
end if
read (in_his, *) a, b
KAPPAI=a
KAPPASH=b
write (*, *) "KAPPAI=", a, "KAPPASH=", b
99 close(in_his)
The file that is imported is pretty simple and looks like:
9.900000000000006E-003 3.960000000000003E-003

I would use IOSTAT, as stated by @Fortranner. I would also set defaults before trying to open the file, and I tend not to use gotos. As in:
program test
    implicit none
    integer :: in_his, stat
    real :: KAPPAI, KAPPASH
    in_his = 7
    KAPPAI = 0
    KAPPASH = 0
    OPEN(UNIT=in_his, FILE="history.txt", ACTION='read', IOSTAT=stat, STATUS='OLD')
    if (stat .ne. 0) then
        write(*,*) "history.txt cannot be opened"
        stop 1
    end if
    read (in_his, *) KAPPAI, KAPPASH
    close(in_his)
    write (*, *) "KAPPAI=", KAPPAI, "KAPPASH=", KAPPASH
end program test

Another way is to use an inquire statement and check for the existence of the file before you try to open it. This would set a logical variable that could be used in an IF statement to handle the two cases: 1) open file and read values, or 2) set default values w/o opening the file. Or set the default values first, then have the IF statement only handle the case of opening the file and reading the values.
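For illustration, a minimal sketch of that INQUIRE approach might look like the following (the unit number 20 and the defaults-first structure are my own choices, not taken from the original code):
logical :: file_exists
real :: KAPPAI, KAPPASH
KAPPAI = 0.0                                    ! defaults used when the file is missing
KAPPASH = 0.0
inquire(FILE="C:\temp\history.txt", EXIST=file_exists)
if (file_exists) then
    open(UNIT=20, FILE="C:\temp\history.txt", ACTION="read", STATUS="old")
    read (20, *) KAPPAI, KAPPASH
    close (20)
end if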

Set iostat in the open statement and handle the case where it is nonzero.

There are two ways to do this. One is using the IOSTAT specifier in the OPEN statement, as Fortranner and Timothy Brown suggested. The other is to use the ERR specifier in the OPEN statement, which lets you specify a label to which the program will transfer control in the event of an error:
OPEN(UNIT=in_his,FILE="C:\temp\history.txt",ACTION="read",STATUS='OLD',ERR=101)
...
101 CONTINUE
The label must be in the same scoping unit as the OPEN statement.
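Put together, a self-contained sketch of the ERR= variant could look like this (an illustration reusing the question's variables and setting defaults on failure, not the original code):
OPEN(UNIT=in_his, FILE="C:\temp\history.txt", ACTION="read", STATUS="OLD", ERR=101)
read (in_his, *) KAPPAI, KAPPASH
close (in_his)
go to 102
101 KAPPAI = 0        ! reached only if the OPEN fails
KAPPASH = 0
102 continue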

Related

HWUT - selectively printing from read buffer into .exe file in OUT folder

I am receiving data from a serial port. I use HWUT for comparing my test results. The content of the receive buffer cannot be used directly for comparison of the GOOD and OUT results, because the OUT file will always contain unnecessary command prompts, enters and other stuff. I am looking for a way to select what gets written from the read buffer into the OUT file. Below is an example:
←[36m
A> target cmd
←[36m
{t=3883.744541 s} Received data
A> result : 1
bytes read 518Closing serial port...OK
And I would like the OUT file to contain only 'result : 1'.
When I checked the code, messages.py seems to be printing to stdout, but I am not sure whether that is what gets written into the OUT file. How can this be achieved?
Anything that you print to 'stdout' should appear in the "OUT/*" files. If it does not, then this would have nothing to do with reception via the serial line(s). Here is what I would do to analyze it:
In your connector application there must be something like
receive_n = receive(.., &buffer[0], Size);
buffer[receive_n] = '\0'; /* terminating zero */
printf("%s", &buffer[0]);
If this is so, then
Write in parallel into a log file.
static FILE *log_fh = NULL;
if (log_fh == NULL) log_fh = fopen("tmp.log", "wb");  /* open the log once */
...
printf("%s", &buffer[0]);
fwrite((void*)buffer, 1, receive_n, log_fh);
Compare 'tmp.log' with the file in OUT.
If there is a difference, HWUT is to blame.
Check the output before you write it.
if( my_condition(buffer, receive_n) ) printf("%s", &buffer[0]);
HWUT has an internal infrastructure to post-process test output, but it is not documented and therefore, at the time of this writing, not reliable.
Edit the file "hwut-info.dat" in your TEST directory.
These R my Tests on Something Important (Title)
-------------------------------------------------------
--not *.exe
bash execute-this.sh
-------------------------------------------------------
The --not *.exe makes sure that HWUT will not execute the *.exe files which you compiled. The bash execute-this.sh line lets HWUT consider the file execute-this.sh as a test application and call it with 'bash'.
Inside execute-this.sh you might want to build your application, execute it, and filter the output, e.g.
#!/usr/bin/env bash
make my-test.exe
./my-test.exe | awk ' /^A>/ '
which will print only those lines which start with 'A>'. grep and awk are your friends, here. You might want to familiarize yourself with these two.
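For instance, the equivalent filter with grep would be:
./my-test.exe | grep '^A>'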
Alternatively, you may filter directly in your connector application.

File not being created in Ruby script

I am trying to open a non-existent file and write to it; however, when I run the script, no file is created.
Here is the line of code:
File.open("valid_policies.txt", 'a+').write(policy_number.to_s + "\n")
Instead of using .write try this instead:
File.open("valid_policies.txt", 'a+') {|f| f.write(policy_number.to_s + "\n") }
You're using:
File.open("valid_policies.txt", 'a+').write(policy_number.to_s + "\n")
That's a non-block form of open which doesn't automatically close the file. That means the data is most likely not being written to the file but is sitting in the IO buffer waiting to be flushed/synced. You could add a close but that only propagates non-idiomatic code.
Instead you can use:
File.write("valid_policies.txt", policy_number.to_s + "\n")
File.write automatically creates the file, writes to it, and then closes it. It will overwrite existing files, though.
If you aren't sure whether the file exists and want to create it if it doesn't, or append to it, then you use File.open with the a mode instead of a+. From the mode documentation:
"a" Write-only, each write call appends data at end of file.
Creates a new file for writing if file does not exist.
Using a+ will work but it unnecessarily opens the file for reading also. Don't do that unless you're sure that's what you have to do.
If I needed to append I'd use:
File.open('valid_policies.txt', 'a') do |fa|
fa.puts policy_number
end
That's idiomatic Ruby. puts will automatically "stringify" policy_number if it has a to_s method, which it should have since you're already calling it, and it'll also automatically add the trailing "\n" if it doesn't exist at the end of the string. Also, using the block form of open will automatically close the file when the block exits, which is smart house-keeping.

Is there any method to detect whether STDIN has been redirected within VBscript?

I'm trying to process/filter input within a VBScript, but only if the input has been piped into the script. I don't want the script processing user/keyboard input. I'd like to code it as something like this:
stdin_is_tty = ...
if not stdin_is_tty then
...
input = WScript.StdIn.ReadAll
end if
Otherwise, the script will hang, waiting on user input when it executes WScript.StdIn.ReadAll (or even earlier if I test the stream with WScript.StdIn.AtEndOfStream).
In C#, I'd use:
stdin_is_tty = not System.Console.IsInputRedirected // .NET 4.5+
The accepted answer for the question "How to detect if Console.In (stdin) has been redirected?" shows how to build that result using Win32 calls via P/Invoke for versions of .NET earlier than 4.5, but I don't know of any way to translate that method into VBScript.
I've constructed a clumsy, partial solution using SendKeys to send an end-of-stream sequence into the script's keyboard buffer. But the solution leaves keys in the buffer if STDIN is redirected, which I can't clean up unless I know that STDIN was redirected... so, same problem.
I'd prefer to keep the script in one packaged piece, so I'd rather avoid a separate wrapping script or anything not available on a generic Windows 7+ installation.
Any brilliant ideas or workarounds?
EDIT: added copy of initial solution
I've added a copy of my improved initial solution here (admittedly, a "hack"), which now cleans up after itself but still has several negatives:
input = ""
stdin_is_tty = False
test_string_length = 5 ' arbitrary N (coder determined to minimize collision with possible inputs)
sendkey_string = ""
test_string = ""
for i = 1 to test_string_size
sendkey_string = sendkey_string & "{TAB}"
test_string = test_string & CHR(9)
next
sendkey_string = sendkey_string & "{ENTER}"
wsh.sendkeys sendkey_string ' send keyboard string signal to self
set stdin = WScript.StdIn
do while not stdin.AtEndOfStream
input = input & stdin.ReadLine
if input = test_string then
stdin_is_tty = True
else
input = input & stdin.ReadAll
end if
exit do
loop
stdin.Close
if not stdin_is_tty then
set stdin = fso.OpenTextFile( "CON:", 1 )
text = stdin.ReadLine
stdin.Close
end if
This solution suffers from three problems:
it leaves a visible trace at the command line (though now just a single blank line, which is low visibility)
possible collision of the test string (a set series of N [coder-determined] TABs followed by a NEWLINE) with the first line of any redirected input, causing a false-positive redirection determination. Since the number of TABs can be modified, this possibility can be made arbitrarily low by the coder.
a race condition: if another window receives focus before the SendKeys portion is executed, the wrong window will receive the code string, leading to a false-negative redirection determination. My estimate is that the probability of this circumstance occurring is very low.
In short, no, but ...
I've tested everything I could think of and have not found a reasonable way to do it.
None of the properties/methods exposed by the TextStream wrappers retrieved with WScript.StdIn or fso.GetStdStream give enough information to determine if the input is redirected/piped.
Trying to obtain information from the behaviour/environment of a spawned process (how to create the executable is another story) is also unlikely to be useful, because
WshShell.Execute always spawns the process with its input and output handles redirected
WshShell.Run creates a new process that does not inherit the handles of the current one
Shell.Application.ShellExecute has the same problem as WshShell.Run
So, none of these methods allow the spawned process to inherit the handles of the current process to check if they are redirected or not.
Using WMI to retrieve information from the running process does not return anything usable (well, the HandleCount property for the process differs when there is a redirection, but it is not reliable)
So, not being able to determine from vbs code if there is a redirection, the remaining options are
Don't detect it: If the piped input must be present, behave like the more command and in all cases try to retrieve it.
Indicate it: If the piped input is not always required, use an argument to determine whether the stdin stream needs to be read.
In my case, I usually use a single slash / as the argument (for consistency with some of the findstr arguments, which also use a slash to indicate stdin input). Then, in the .vbs code:
If WScript.Arguments.Named.Exists("") Then
' here the stdin read part
End If
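With that convention, a piped invocation passes the slash and an interactive one omits it; for example (myScript.vbs is just a placeholder name):
type inputfile.txt | cscript //nologo myScript.vbs /
cscript //nologo myScript.vbs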
Check before: Determine if there is redirection before starting the script. A wrapper .cmd is needed, but with some tricks both files (.cmd and .vbs) can be combined into one
To be saved as .cmd
<?xml : version="1.0" encoding="UTF-8" ?> ^<!------------------------- cmd ----
#echo off
setlocal enableextensions disabledelayedexpansion
timeout 1 >nul 2>nul && set "arg=" || set "arg=/"
endlocal & cscript //nologo "%~f0?.wsf" //job:mainJob %arg% %*
exit /b
---------------------------------------------------------------------- wsf --->
<package>
<job id="mainJob">
<script language="VBScript"><![CDATA[
If WScript.Arguments.Named.Exists("") Then
Do Until WScript.StdIn.AtEndOfStream
WScript.StdOut.WriteLine WScript.StdIn.ReadLine
Loop
Else
WScript.StdOut.WriteLine "Input is not redirected"
End If
]]></script>
</job>
</package>
It is a .wsf file stored inside a .cmd. The batch part determines whether the input is redirected (the timeout command fails to get a console handle on redirected input) and passes the argument to the script part.
Then, the process can be invoked as
< inputfile.txt scriptwrapper.cmd input redirected
type inputfile.txt | scriptwrapper.cmd input piped
scriptwrapper.cmd no redirection
While this is a convenient way to handle it, the invocation of the .wsf part from the .cmd, while stable and working without problems, relies on an undocumented behaviour of the script host / cmd combination.
Of course you can do the same but with two separate files. Not as clean, but the behaviour is documented.

Reopening closed file: Lua

I have a file called backup.lua, which the program should write to every so often in order to backup its status, in case of a failure.
The problem is that the program writes the backup.lua file completely fine the first time round, but on any subsequent attempt it refuses to write to the file.
I tried removing the file while the program was still open but Windows told me that the file was in use by 'CrysisWarsDedicatedServer.exe', which is the program. I have told the host Lua function to close the backup.lua file, so why isn't it letting me modify the file at will after it has been closed?
I can't find anything on the internet (Google actually tried to correct my search) and the secondary programmer on the project doesn't know either.
So I'm wondering if any of you folks know what we are doing wrong here?
Host function code:
function ServerBackup(todo)
    local write, read;
    if todo=="write" then
        write = true;
    else
        read = true;
    end
    if (write) then
        local source = io.open(Root().."Mods/Infinity/System/Read/backup.lua", "w");
        System.Log(TeamInstantAction:GetTeamScore(2).." for 2, and for 1: "..TeamInstantAction:GetTeamScore(1))
        System.LogAlways("[System] Backing up serverdata to file 'backup.lua'");
        source:write("--[[ The server is dependent on this file; editing it will lead to serious problems.If there is a problem with this file, please re-write it by accessing the backup system ingame.--]]");
        source:write("Backup = {};Backup.Time = '"..os.date("%H:%M").."';Backup.Date = '"..os.date("%d/%m/%Y").."';");
        source:write(XFormat("TeamInstantAction:SetTeamScore(2, %d);TeamInstantAction:SetTeamScore(1, %d);TeamInstantAction:UpdateScores();",TeamInstantAction:GetTeamScore(2), TeamInstantAction:GetTeamScore(1) ));
        source:close();
        for i,player in pairs(g_gameRules.game:GetPlayers() or {}) do
            if (IsModerator(player)) then
                CMPlayer(player, "[!backup] Completed server backup.");
            end
        end
    end
    --local source = io.open(Root().."Mods/Infinity/System/Read/backup.lua", "r"); Can the file be open here and by the Lua scriptloader too?
    if (read) then
        System.LogAlways("[System] Restoring serverdata from file 'backup.lua'");
        --source:close();
        Backup = {};
        Script.LoadScript(Root().."Mods/Infinity/System/Read/backup.lua");
        if not Backup or #Backup < 1 then
            System.LogAlways("[System] Error restoring serverdata from file 'backup.lua'");
        end
    end
end
Thanks all :).
Edit:
Although the file is now written to the disk fine, the system fails to read the dumped file.
So, now the problem is that the "LoadScript" function isn't doing what you expect:
Because I'm psychic, I have divined that you're writing a Crysis plugin and are attempting to use its LoadScript API call.
(Please don't assume everyone here would guess this, or be bothered to look for it. It's vital information that must form part of your questions.)
The script you're writing attempts to set Backup, but your script, as written, does not separate lines with newline characters. As the first line is a comment, the entire script will be ignored.
Basically, the script you've written looks like this, which is all treated as a comment:
--[[ comment ]]--Backup="Hello!"
You need to write a "\n" after the comment (and, I'd recommend, in other places too) to make it look like this. In fact, you don't really need block comments at all:
-- comment
Backup="Hello!"

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls, see traceback() for interactive and non-interactive R sessions
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
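For example, a minimal tryCatch() sketch, with readLines() standing in for whatever risky step (such as a database connection) you want to guard, might look like this:
result <- tryCatch(
    readLines("config.txt"),                  # placeholder for the step that may fail
    error = function(e) {
        message("step failed: ", conditionMessage(e))
        NULL                                  # fall back to a default value
    }
)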
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it has the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10, 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R; it should be, as it is in every other language.
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
    setwd('~/myUsername/directoryForDump');  # Set working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
    dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE);  # First dump to file; this dump is not accessible by the R session.
    sink(file="error.log");  # Specify sink file to redirect all output.
    dump.frames();  # Dump again to be able to retrieve error message and write to error log; this dump is accessible by the R session since not dumped to file.
    cat(attr(last.dump,"error.message"));  # Print error message to file, along with simplified stack trace.
    cat('\nTraceback:');
    cat('\n');
    traceback(2);  # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
    sink();
    q()
}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First, options(show.error.locations = TRUE), and then traceback(). The error line number will be displayed after the # character.
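A minimal sketch of that combination (the script name is illustrative, and keep.source = TRUE is needed so source references are retained):
options(show.error.locations = TRUE)
source("script.R", keep.source = TRUE)   # on error, the message ends with "at script.R#<line>"
traceback()                              # call stack of the most recent error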
