Passing arguments to interactive fortran program - bash

I have a Fortran program (which I cannot modify) that requires several inputs from the user on the command line when it is run. The program takes quite a while to run, and I would like to retain use of the terminal by running it in the background; however, this is not possible due to its interactive nature.
Is there a way, using a bash script or some other method, that I can pass arguments to the program without directly interacting with it via the command line?
I'm not sure if this is possible; I tried searching for it but came up empty, though I'm not exactly sure what to search for.
Thank you!
P.S. I am working on a Unix system where I cannot install anything not already present.

You can pipe it in:
$ cat delme.f90
program delme
read(*, *) i, j, k
write(*, *) i, j, k
end program delme
$ echo "1 2 3" | ./delme
1 2 3
$ echo "45 46 47" > delme.input
$ ./delme < delme.input
45 46 47
$ ./delme << EOF
> 3 2 1
> EOF
3 2 1
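To get the terminal back while the program runs, the input redirection can be combined with backgrounding. A minimal sketch, where delme.input and delme.log are just placeholder names:
$ nohup ./delme < delme.input > delme.log 2>&1 &
nohup keeps the job running even if you close the terminal, and you can watch the output as it arrives with tail -f delme.log.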

Related

How does bash handle control characters?

I'm writing a shell that tries to simulate Bash's behaviour. The problem is that I've read that when Bash reads user input, it uses non-canonical mode and turns off ECHO, something like this:
(*conf)->newterm.c_lflag &= ~(ICANON | ECHO);
But if it turns ECHO off, how is SIGINT still displayed as ^C on the terminal? When I try this, I get the character � which is -1 (if you try to print it).
If you pay close attention to Bash's behaviour, control characters are never displayed except for ctrl-c, i.e. SIGINT. My only theory is that Bash hard-codes ^C and just prints it to the screen when SIGINT is detected. Am I right in saying this? If not, how does Bash or Zsh display ^C with ECHO and ICANON turned off? I've looked everywhere and I can't seem to find an answer to this...
Thank you for your time and your explanation !
EDIT:
Here is what I do. First I initialize my shell like this:
tcgetattr(STDIN_FILENO, &(*conf)->oldterm);
(*conf)->newterm = (*conf)->oldterm;
(*conf)->newterm.c_lflag &= ~(ICANON | ECHO);
tcsetattr(STDIN_FILENO, TCSANOW, &(*conf)->newterm); // Apply immediately
Then when I press ctrl-c, I get a newline and ^C is not displayed. This is because disabling ECHO effectively disables ECHOCTL too. Now, what I want is this: if I press ctrl-c, ^C is displayed, but I also don't get ^[[A when I press the up arrow key. I'm just unsure how to configure termios to do that.
I believe the ^C is actually being generated by the terminal device interface (tty), as opposed to Bash or Z Shell. For example, after stty -echoctl or stty -echo, ^C is no longer displayed. As another example, a simple C program that sleep(1)s in an infinite loop with SIGINT set to SIG_IGN (and with stty echo) still displays ^C.
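A quick way to see this from the shell (a sketch; stty flag support varies slightly between systems, but echoctl is widely available):
$ stty -echoctl   # stop the tty from echoing control characters as ^X
$ sleep 5         # press ctrl-c here: SIGINT is still delivered, but no "^C" appears
$ stty echoctl    # restore the default behaviour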
This example works for me, the most relevant lines being 17, 19, 20, 36, and 37 (compiled with gcc -Wall, on macOS and Linux, in Bash and Z Shell, with Terminal [Apple], Terminator, and GNOME Terminal, with and without GNU Screen and/or tmux, and with TERMs of xterm-256color and screen-256color):
 1  #include <errno.h>
 2  #include <signal.h>
 3  #include <stdio.h>
 4  #include <termios.h>
 5  #include <unistd.h>
 6
 7  void sh( int _sig )
 8  {
 9      puts("SIGINT");
10  }
11
12  int main()
13  {
14      struct termios t;
15      void (*s)(int);
16
17      if ( tcgetattr(fileno(stdin),&t) )
18          return(1);
19      t.c_lflag &= ~ECHO;
20      if ( tcsetattr(fileno(stdin),TCSANOW,&t) )
21          return(2);
22      if ( ( s = signal(SIGINT,&sh) ) == SIG_ERR )
23          return(3);
24
25      /**
26      ** While the following `sleep` is running, [ctrl]+c will
27      ** trigger a SIGINT, which will lead to a `sh(SIGINT)` call,
28      ** which will lead to "SIGINT\n" being written to STDOUT,
29      ** and `sleep` will be interrupted, returning sleep seconds
30      ** remaining. "^C" should not be printed to the terminal.
31      **/
32      puts("Tap [ctrl]+c within 60 seconds and expect \"SIGINT\"...");
33      if ( sleep(60) == 0 || errno != EINTR )
34          return(4);
35
36      t.c_lflag |= ECHO;
37      if ( tcsetattr(fileno(stdin),TCSANOW,&t) )
38          return(5);
39      if ( signal(SIGINT,s) == SIG_ERR )
40          return(6);
41
42      /**
43      ** With the default SIGINT handler restored, the following will
44      ** only return if the time fully elapses. And, with the ECHO
45      ** bit restored, "^C" should be printed to the terminal (unless
46      ** such echo'ing was otherwise disabled).
47      **/
48      puts("Tap [ctrl]+c within 60 seconds, expect \"^C\", and expect a 130 return (128+SIGINT)...");
49      sleep(86400);
50
51      return(7);
52  }
P.S. Keep in mind that [ctrl]-c (generally) triggers a SIGINT, that a SIGINT can be generated without [ctrl]-c, and that, for example, a kill -INT "$pid" should not trigger a tty-driven ^C output.
If Bash does use some non-standard input mode, it's probably just to support line editing, not part of its core functionality as a shell. I think a traditional Unix shell would just read lines of input, execute them, and then prompt again. An interactive shell would catch SIGINT so that you wouldn't be logged out when you hit ^C, but that's about it. Quite a few things you might imagine are handled in the shell actually aren't; they're in the kernel, in terminal drivers, input drivers, and so forth.
Why are you trying to emulate Bash? Is this for fun and learning? If so, start by looking at the source code of older, simpler shells. You could certainly look at the old BSD csh code, or the original Bourne shell code; these shells capture the essence of what it is to be a shell, and there have been even simpler shells if you look for them. Later shells complicate things by adding line editing; if you want to understand those, you can get the GNU Readline library and add it to your own app, or to your shell for that matter.

How to break shell script if a script it calls produces an error

I'm currently debugging a shell script, which acts as a master-script in a data pipeline. In order to run the pipeline, you feed a bunch of arguments into the shell script. From there, the shell script sequentially calls 6 different scripts [4 in R, 2 in Python], writes out stuff to log files, and so on. Basically, my idea is to use this script to automate a data pipeline that takes a long time to run.
Right now, if any of the individual R or Python scripts break within the shell script, it just jumps to the next script that it's supposed to call. However, running script 03.py requires the data input to scripts 01.R and 02.R to be fully run and processed, otherwise 03 will produce erroneous output data which will then be written out and further processed in later scripts.
What I want to do is:
1. Break the overall shell script if there's an error in any of the R scripts
2. Output a message telling me where this error happened [line of individual R / python script]
Here's a sample of the master.sh shell script which calls the individual scripts.
#############
# STEP 2 : RUNNING SCRIPTS
#############
# A - 01.R
#################################################################
# log_file - this needs to be reassigned for every individual script
log_file=01.log
current_time=$(date)
echo "Current time: $current_time"
echo "Now running script 01. Log file output being written to $log_file_dir$log_file."
Rscript 01.R -f $input_file -s $sql_db > $log_file_dir$log_file
# current time/date
current_time=$(date)
echo "Current time: $current_time"
# B - 02.R
#################################################################
log_file=02.log
current_time=$(date)
echo "Current time: $current_time"
echo "Now running script 02. Log file output being written to $log_file_dir$log_file"
Rscript 02.R -f $input_file -s $sql_db > $log_file_dir$log_file
# PRINT OUT TIMINGS
current_time=$(date)
echo "Current time: $current_time"
This sequence is repeated throughout the master.sh script until script 06.R, after which it collates some data retrieved from output files and log files, and prints them to stdout.
Here's some sample output that gets printed by my current master.sh, which shows how the script just keeps moving even though 01.R has produced an error.
file: test-data/minisample.txt
There are a total of 101 elements in file.
Using the main database.
Writing log-files to this directory: log_files/minisample/.
Writing output-csv with classifications to output/minisample.csv.
Current time: Wed Nov 14 18:19:53 UTC 2018
Now running script 01. Log file output being written to log_files/minisample/01.log.
Loading required package: stringi
Loading required package: dplyr
Attaching package: ‘dplyr’
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
Loading required package: RMySQL
Loading required package: DBI
Loading required package: methods
Loading required package: hms
Error: The following 2 arguments need to be provided:
-f <input file>.csv
-s <MySQL db name>
Execution halted
Current time: Wed Nov 14 18:19:54 UTC 2018
./master.sh: line 95: -1: substring expression < 0
./master.sh: line 100: -1: substring expression < 0
./master.sh: line 104: -1: substring expression < 0
Total time taken to run script 01.R:
Average time taken per user to run script 01.R:
Total time taken to run pipeline so far [01/06]:
Average time taken per user to run pipeline so far [01/06]:
Current time: Wed Nov 14 18:19:54 UTC 2018
Now running script 02. Log file output being written to log_files/minisample/02.log
Seeing as the R script 01.R produces an error, I want the script master.sh to stop. But how?
Any help would be greatly appreciated, thanks in advance!
As another user mentioned, simply running set -e will make your script terminate on the first error. However, if you want more control, you can also check the exit status with ${?} or simply $?, assuming your program returns an exit code of 0 on success and non-zero otherwise.
#!/bin/bash
url=https://nosuchaddress1234.com/nosuchpage.html
error_file=errorFile.txt
wget ${url} 2> ${error_file}
exit_status=${?}
if [ ${exit_status} -ne 0 ]; then
    echo -n "wget ${url} "
    if [ ${exit_status} -eq 4 ]; then
        echo "- Network failure."
    elif [ ${exit_status} -eq 8 ]; then
        echo "- Server issued an error response."
    else
        echo "- Other error"
    fi
    echo "See ${error_file} for more details"
    exit ${exit_status}
fi
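Applied to the master.sh from the question, the same pattern might look like the sketch below, reusing the question's variable names (assumed to be set earlier in the script):
Rscript 01.R -f "$input_file" -s "$sql_db" > "$log_file_dir$log_file"
exit_status=$?
if [ $exit_status -ne 0 ]; then
    echo "01.R failed with exit status $exit_status; see $log_file_dir$log_file" >&2
    exit $exit_status
fi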
I like to put some boilerplate at the top of most scripts like this -
trap 'echo >&2 "ERROR in $0 at line $LINENO, Aborting"; exit $LINENO;' ERR
set -u
While coding and debugging, I usually add
set -x
And a lot of trace "comments" with colons -
: this will parse its args but only show under set -x
Then the trick is to make sure any errors you know are ok are handled.
Conditionals consume the errors, so those are safe.
if grep foo nonexistantfile
then : do the success stuff
else : if you *want* a failout here, just call false
    false here will abort # args don't matter :)
fi
By the same token, if you just want to catch and ignore a known possible error -
ls $mightNotExist ||: # || says "do on fail"; : is an alias for "true"
Just always check for your likely errors; then the only thing that will crash your script is a genuine, unexpected failure.

Is there a way to flush stdout on process termination for parallel processes

I'm running several independent programs on a single machine in parallel.
The processes (say 100) are all relatively short (<5 minutes) and their output is limited to a few hundred lines (~kilobytes).
Usually the output in a terminal then becomes mangled because the processes write directly to the same buffer. I would like these outputs to be un-mangled so that it's easier to debug specific processes. I could write the outputs to temporary files, but I would like to limit disk I/O and would prefer another method if possible; temporary files would also require cleanup and probably wouldn't really improve code readability.
Is there any shell-native method that gives each PID its own buffer, which is then flushed to stdout/stderr when the process terminates? Do you see any other way to do this?
Update
I ended up using the tail -n 1000000 trick from @Gem's comment. Since the commands I'm using are long (covering multiple lines) and I was already using subshells ( ... ) &, it was a minimal change from ( ... ) & to ( ... ) 2>&1 | tail -n 1000000 &.
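The trick works because tail has to read all of its input before it can know which lines are the last N, so each subshell's output is collected in tail's buffer and written out in one piece when the subshell exits. A minimal sketch with placeholder commands:
for i in 1 2 3; do
    ( echo "job $i: step 1"; sleep 1; echo "job $i: step 2" ) 2>&1 | tail -n 1000000 &
done
wait   # each job's lines come out together instead of interleaved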
You can do that with GNU Parallel. Use -k to keep the output in order and ::: to separate the arguments you want passed to your program.
Here we run five instances of echo in parallel:
parallel -k echo {} ::: {0..4}
0
1
2
3
4
Now add in --tag to tag your output lines with the filenames or parameters you are using:
parallel --tag -k 'echo "Line 1, param {}"; echo "Line 2, param {}"' ::: {1..4}
1 Line 1, param 1
1 Line 2, param 1
2 Line 1, param 2
2 Line 2, param 2
3 Line 1, param 3
3 Line 2, param 3
4 Line 1, param 4
4 Line 2, param 4
You should notice that each line is tagged on the left side with the parameters and that the two lines from each job are kept together.
You can now specify how your output is organised; a short example follows the list below.
Use --group to group output by job
Use --line-buffer to buffer a line at a time
Use --ungroup if you want output all mixed up, but as soon as available
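For the scenario in the question (roughly 100 short jobs), a sketch might look like this, where ./myjob is a hypothetical stand-in for the real command:
# --group (the default) holds each job's output until the job finishes,
# so lines from different jobs never interleave
parallel --group ./myjob {} ::: {1..100}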
Sounds like you just want syslog, or rather logger, its command-line interface. Example:
echo "Something happened!" | logger -i -p local0.notice
If you insist on getting the output to stderr too, use --stderr. rsyslog will handle buffering, atomic writes, etc., and is presumably pretty good at optimizing disk I/O. You could also easily configure rsyslog to route the log facility (i.e. local0 or whatever you choose to use) wherever you want, such as to a tmpfs, a dedicated disk, or even over TCP. See /etc/rsyslog.conf.
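For the parallel-jobs scenario in the question, each job could pipe through its own logger invocation; -i logs the PID and -t sets a tag, so every line remains attributable to its process. A sketch with placeholder commands:
for i in 1 2 3; do
    ( echo "starting job $i"; echo "finished job $i" ) | logger -i -t "job$i" -p local0.notice &
done
wait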

What is wrong with neo4j-shell traversal with command (trav -c)

I am trying to traverse the graph nodes and execute some command for each node, like this:
neo4j-sh (0)$ trav -d 2 -c "ls $i"
But I always get the error:
Thread[...] already has a transaction bound
What is wrong? Is it a Neo4j bug?
It's a bug, and a sign that no one has used this command for at least 2 years :)
You can run the equivalent Cypher:
WITH {self} as n
MATCH (n)-[*2]-(m)
RETURN m;

Linpack sometimes starting, sometimes not, but nothing changed

I installed Linpack on a 2-node cluster with Xeon processors. When I start Linpack with this command:
mpiexec -np 28 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
Linpack sometimes starts and prints its output, and sometimes I only see the MPI rank mappings printed and then nothing more. To me this seems like random behaviour, because I don't change anything between the calls; as mentioned, Linpack sometimes starts and sometimes doesn't.
In top I can see that xhpl_intel64 processes have been created and are heavily using the CPU, but when I watch the traffic between the nodes, iftop tells me that nothing is being sent.
I am using MPICH2 as MPI implementation. This is my HPL.dat:
# cat HPL.dat
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
10000 Ns
1 # of NBs
250 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
2 Ps
14 Qs
16.0 threshold
1 # of panel fact
2 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
4 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
1 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
1 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
Edit 2:
I just let the program run for a while, and after 30 minutes it tells me:
# mpiexec -np 32 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
(node-0:0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
(node-1:16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31)
Assertion failed in file ../../socksm.c at line 2577: (it_plfd->revents & 0x008) == 0
internal ABORT - process 0
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
Is this an MPI problem? Do you know what type of problem this could be?
I figured out what the problem was: MPICH2 uses different random ports each time it starts, and if these are blocked your application won't start up correctly.
The solution for MPICH2 is to set the environment variable MPICH_PORT_RANGE to START:END, like this:
export MPICH_PORT_RANGE=50000:51000
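With the range pinned down, the firewall can then be opened for exactly those ports. An iptables sketch, assuming iptables is the firewall in use and the rule is added on both nodes:
iptables -A INPUT -p tcp --dport 50000:51000 -j ACCEPT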
