Can I create one instance of Julia and use it to run multiple Julia scripts from a bash script?
#!/bin/bash
# desired behavior (pseudocode, not a real API):
J=getjuliainstance()
J.run(temp.jl)
J.run(j1.jl)
J.run(j2.jl)
J.run(j3.jl)
J.exit()
I could run all of them from inside a master Julia script, but that is not the intent.
The point is to pay Julia's startup cost once, on the first script, so that the runtimes of the subsequent scripts can be timed consistently.
Is there any way to spawn a single process and reuse it to launch scripts? From a shell script only, please!
One solution that works (and allows a tail -f on the log):
julia <pipe 2>&1 | tee submission.log > /dev/null &
You can try named pipes:
$ mkfifo pipe # create named pipe
$ sleep 10000 > pipe & # keep pipe alive
[1] 11521
$ julia -i <pipe & # make Julia read from pipe
[2] 11546
$ echo "1+2" >pipe
$ 3
$ echo "rand(10)" >pipe
$ 10-element Array{Float64,1}:
0.938396
0.690747
0.615235
0.298277
0.780966
0.775423
0.197329
0.136582
0.302169
0.607562
$
You can send any command to Julia using echo.
Note that Julia shares the terminal's stdout here, so when it prints something it lands on top of your prompt and you have to press Enter to get the prompt back.
Stop Julia by writing echo "exit()" >pipe. If you want to execute a whole file this way, use the include function, e.g. echo 'include("myscript.jl")' >pipe.
EDIT: it seems you do not even have to use -i if you run Julia this way.
EDIT 2: I did not notice that you actually want to use only one bash script (not an interactive session). In that case it should be even simpler to use named pipes.
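For example, here is a minimal non-interactive sketch along those lines (temp.jl, j1.jl, etc. are the hypothetical script names from the question):
#!/bin/bash
# Sketch: drive a single Julia process from one bash script via a named pipe.
mkfifo pipe
sleep 10000 > pipe &                    # dummy writer keeps the pipe open
holder=$!
julia < pipe > submission.log 2>&1 &    # one Julia instance, started once
julia_pid=$!
echo 'include("temp.jl")' > pipe        # warm-up: pay the startup cost here
for f in j1.jl j2.jl j3.jl; do
    echo "@time include(\"$f\")" > pipe # later scripts are timed consistently
done
echo 'exit()' > pipe
wait "$julia_pid"                       # let Julia finish before cleanup
kill "$holder" 2>/dev/null
rm pipe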
I will first write the sequence of commands I want to run:
echo <some_input> | <some_command> > temp.dat & pid=$!   # feed input, capture output, save the PID
sleep 5
kill -INT "$pid"                                         # send SIGINT after 5 seconds
The above works perfectly when I run the commands one by one from the bash shell, and the contents of temp.dat are exactly what I want. But when I create a bash script containing the same set of commands, I get nothing in temp.dat.
Now, I'll mention why I'm writing those commands in such a way:
<some_command> asks for input, which is why I'm piping <some_input> into it.
I want the output of that command in a separate file, which is why I've redirected the output.
I want to kill the command by sending SIGINT signal after some time.
I've tried running an interactive shell by putting #!/bin/bash -i on the first line of the script, but that doesn't help.
Any alternate method to achieve the same results will be appreciated.
Update: <some_command> also invokes a Python script, but I don't think that would cause it to behave differently.
Update 2: the Python script turned out to be the sole cause of the different behavior.
One likely cause here is that your Python process may not be flushing stdout within the allowed five seconds of runtime.
export PYTHONUNBUFFERED=1
...will cause output to be written promptly, rather than waiting for process exit, file close, or the buffer to fill enough to justify the overhead of a flush operation.
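In the context of the question's script, that looks like the following sketch (<some_input> and <some_command> are the question's placeholders):
#!/bin/bash
# Sketch: disable Python's stdout buffering before launching the pipeline,
# so temp.dat receives output before the SIGINT arrives.
export PYTHONUNBUFFERED=1               # same effect as invoking python with -u
echo <some_input> | <some_command> > temp.dat & pid=$!
sleep 5
kill -INT "$pid"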
Will this work for you?
read -p "Input Data : " inputdata ; echo "$inputdata" > temp.data ; sleep 5 ; exit
and obviously
#!/usr/bin/env bash
read -p "Input Data : " inputdata
echo "$inputdata" > temp.data
sleep 5
should work as a script
and adapted to suit your case :D
#!/usr/bin/env bash
read -p "Input Data : " inputdata
<your code here, e.g. echo "$inputdata"> > temp.data
sleep 5
I have a simple Bash script:
#!/usr/bin/env bash
read X
echo "X=$X"
When I execute it with ./myscript.sh it works. But when I execute it with cat myscript.sh | bash it actually puts echo "X=$X" into $X.
So this script prints hello world when executed with cat myscript.sh | bash:
#!/usr/bin/env bash
read X
hello world
echo "$X"
What's the benefit of executing a script with cat myscript.sh | bash? Why doesn't it do the same thing as when I execute it with ./myscript.sh?
How can I make Bash read all the lines from stdin first and only execute them once stdin has reached the end, instead of executing line by line?
Instead of just running
read X
...replace it with...
read X </dev/tty || {
X="some default because we can't read from the TTY here"
}
...if you want to read from the console. Of course, this only works if you have a /dev/tty, but if you wanted to do something robust, you wouldn't be piping from curl into a shell. :)
Another alternative, of course, is to pass in your value of X on the command line.
curl https://some.place/with-untrusted-code-only-idiots-will-run-without-reading \
| bash -s "value of X here"
...and refer to "$1" in your script when you want X.
(By the way, I sure hope you're at least using SSL for this, rather than advising people to run code they download over plain HTTP with no out-of-band validation step. Lots of people do it, sure, but that's making sites they download from -- like rvm.io -- big targets. Big, easy-to-man-in-the-middle-or-DNS-hijack targets).
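For completeness, a sketch of what the receiving script might look like (hypothetical; it combines both approaches):
#!/usr/bin/env bash
# Sketch: prefer the positional argument passed via `bash -s "value"`,
# and only fall back to prompting on the TTY when no argument was given.
if [ $# -ge 1 ]; then
    X=$1
else
    read -r -p "X: " X </dev/tty || X="some default"
fi
echo "X=$X"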
When you cat a script to bash, the code to execute is coming from standard input.
Where does read read from? That's right: also standard input. This is why you can cat input to programs that take standard input (like sed, awk, etc.).
So you are not running "a script" per se when you do this. You are running a series of input lines.
Where would you like read to read its data from in this setup?
You can arrange that manually (if you can define such a place). Alternatively, you can stop running your script like this.
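A minimal sketch of "defining such a place", by keeping the code and the data on separate streams:
# code comes from the file argument, data comes from stdin -- read gets the data
echo 'hello world' | bash myscript.sh
# code comes from stdin, data comes from the terminal -- this is what the
# `read X </dev/tty` change in the other answer arranges
cat myscript.sh | bash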
I am trying to build a shell script. One of the commands used in this script uses the read command and demands a parameter to complete its execution. I want to pass the same argument every time; can I automate this?
In short: how do I automate the read command from a shell script?
For various reasons I cannot share the actual script.
If read is reading from standard input, you can just redirect from a file containing the necessary data:
$ cat foo.txt
a
b
$ someScript.sh < foo.txt
or pipe the data from another command:
$ printf 'a\nb\n' | someScript.sh
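If there is only a line or two of input, a here-string or here-document avoids the temporary file (someScript.sh is the same hypothetical script):
$ someScript.sh <<< 'a'
$ someScript.sh <<EOF
a
b
EOF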
I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:
sleep 1000 > myfile &
It works for a thousand seconds, but I really don't want to build some complicated sleep/loop construct. This post suggested that cat with no arguments is effectively an infinite sleep. So I tried this:
cat > myfile &
It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file did not stay open. Any other ideas?
Rather than using a background process, you can also just use bash to open one of its file descriptors:
exec 5>myfile
(The special use of exec here allows changing the current file descriptor redirections - see man bash for details). This will open file descriptor 5 to "myfile" (use >> if you don't want to empty the file).
You can later close the file again with:
exec 5>&-
One possible downside of this is that the FD gets inherited by every program the shell runs in the meantime. Mostly this is harmless - e.g. your greps and seds will generally ignore the extra FD - but it can be annoying in some cases, especially if you spawn long-running processes (because they will then keep the FD open).
Note: If you are using a newer version of bash (>4.1) you can use a slightly different syntax:
exec {fd}>myfile
This allocates a new file descriptor, and puts it in the variable fd. This can help ensure that scripts don't accidentally overwrite each other's file descriptors. To close the file later, use
exec {fd}>&-
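Putting it together, a minimal sketch of using the held descriptor between open and close:
#!/usr/bin/env bash
# Sketch: hold myfile open across several writes, then close it.
exec 5>myfile             # open (use 5>>myfile to append instead)
echo "first line" >&5     # write through the held descriptor
echo "second line" >&5
exec 5>&-                 # close; myfile now contains both lines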
The reason cat > myfile & works interactively is that cat copies its standard input to the file.
If you launch it with an ampersand (in the background), it won't get ANY input, including end-of-file, which means it will wait forever and print nothing to the output file.
You can get an equivalent effect, WITHOUT depending on standard input (that dependency is what makes it fail in your script), with this command:
tail -f /dev/null > myfile &
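A sketch of using this in a script, with a quick check that the file really is held open:
#!/usr/bin/env bash
# Sketch: tail -f /dev/null never exits and never reads stdin,
# so it holds myfile open even when backgrounded from a script.
tail -f /dev/null > myfile &
holder=$!
lsof -p "$holder" | grep myfile   # confirm the descriptor is open
# ... do work while myfile stays open ...
kill "$holder"                    # release the file when done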
On the cat > myfile & issue (working in a terminal vs. failing in a script): in a non-interactive shell, the stdin of a command backgrounded with & is implicitly redirected from /dev/null.
So cat > myfile & in a script effectively becomes cat </dev/null > myfile &, which makes cat terminate immediately.
See the POSIX standard on the Shell Command Language & Asynchronous Lists:
The standard input for an asynchronous list, before any explicit redirections are
performed, shall be considered to be assigned to a file that has the same
properties as /dev/null. If it is an interactive shell, this need not happen.
In all cases, explicit redirection of standard input shall override this activity.
# some tests
sh -c 'sleep 10 & lsof -p ${!}'        # non-interactive: the job's stdin is /dev/null
sh -c 'sleep 10 0<&0 & lsof -p ${!}'   # explicit redirection overrides the implicit one
sh -ic 'sleep 10 & lsof -p ${!}'       # interactive shell: stdin stays on the terminal

# fix in a script (diff):
- cat > myfile &
+ cat 0<&0 > myfile &
tail -f myfile
This 'follows' the file, keeping it open and printing any new content appended to it. If you don't want to see the output of tail, redirect its output to /dev/null or similar:
tail -f myfile > /dev/null
You may want to use the --retry option, depending on your specific case. See man tail for more information.