My bash script looks like:
read -p "Do you wish to continue?" yn
# further actions ...
And I just want to interact with this script using Node.js / child_process.
How can I detect that it's waiting for the user input?
var spawn = require('child_process').spawn;
var proc = spawn('./script.sh');
proc.stdout.on("data", function(data) {
    console.log("Data from bash");
});
proc.stdin.on("data", function(data) {
    console.log("Data from bash"); // doesn't work :/
});
Thank you!
From the bash man page:
read -p prompt
Display prompt on standard error, without a trailing newline, before
attempting to read any input. The prompt is displayed only if input is
coming from a terminal.
And I don't think there is any way in Node.js to detect that the script is waiting for input. The problem is actually that bash detects a non-terminal and suppresses the prompt on standard error. And even then you would have to read from stderr, not stdin, to detect any waiting states.
In the end, as Antoine pointed out, you might have to use tools like empty or Expect to wrap your shell scripts and trick Bash into thinking it is attached to a terminal.
Btw.: proc.stdin.write("yes\n") works fine. Thus you can drive the script, but you won't get any prompts on proc.stderr and will not know when the script actually reads the input. You can also proc.stdin.write the input immediately, even if the script is not yet at the read -p statement; the input is buffered until the script eats it up.
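For illustration, a minimal sketch of driving the script this way (assuming ./script.sh is the script from the question):
var spawn = require('child_process').spawn;
var proc = spawn('./script.sh');
// the prompt would arrive on stderr, but only when bash sees a terminal
proc.stderr.on('data', function (data) {
    console.log('stderr from bash: ' + data);
});
proc.stdout.on('data', function (data) {
    console.log('stdout from bash: ' + data);
});
// answer the prompt up front; bash buffers it until the read consumes it
proc.stdin.write('yes\n');
proc.stdin.end();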
Have you tried using "expect"? http://en.wikipedia.org/wiki/Expect
It's a tool that expects some text (matched with regular expressions) and can answer the bash script automatically.
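A minimal sketch, assuming the prompt text from the script above (the answer "y" is illustrative):
expect <<'EOF'
spawn ./script.sh
expect "Do you wish to continue?"
send "y\r"
expect eof
EOF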
I'm working on a script that requires you to press Ctrl+D when you complete your entries. I'd like to send that keystroke so I can script my work rather than having to redo it.
You're probably talking about the "end of transmission" character, which is used to indicate the end of user input. If that's the case then you can always pipe data into your script. That is, instead of this:
$ test_script.sh
My input!
^D
You'd write that data to a file:
$ cat > input
My input!
^D
Then pipe that into the script:
$ test_script.sh < input
No ^D is required because once that file is fully read the script is signalled accordingly. The < shell operator switches STDIN to read from a file instead of the terminal. Likewise, > can be used to capture the output of a program and save it to a file, as done in the second step here, though you can use any tool you'd like to create or edit that input file.
This works with pretty much any scripting language, from Python, Perl, Ruby to Node.js as well as bash and other shells.
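If you don't need to keep the input around, you can also skip the intermediate file and pipe the data straight in, along the lines of:
printf 'My input!\n' | test_script.sh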
I don't think that running a process in the foreground is in any way useful, so I'd like to run all processes in the background. Is that possible?
Also tell me if there are any problems associated with doing so.
You can adapt the code from this question: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
Basically this uses the DEBUG trap to run a command before whatever you've typed on the command line. So, this:
preexec () { :; }

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return                      # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # don't cause a preexec for $PROMPT_COMMAND
    local this_command=$(HISTTIMEFORMAT= history 1)
    preexec "$this_command" &
}
trap 'preexec_invoke_exec' DEBUG
Runs the command, but with & afterwards, backgrounding the process.
Note that this will have other rather weird effects on your terminal, and anything supposed to run in the foreground (command line browsers, mail readers, interactive commands, anything requiring input, etc.) will have issues.
You can try this out by just typing bash, which will execute another shell. Paste the above code, and if things start getting weird, just exit out of the shell and things will reset.
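If things do get weird, you can also clear the hook in the same shell instead of exiting:
trap - DEBUG   # removes the DEBUG trap and restores normal behaviour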
Do you mean a bash script? Just add & at the end. Example:
$ ./myscript &
While it might be possible to do something clever like @pgl suggested, it's not a good idea. Processes running in the background don't show you their output in a useful way. So, if all processes are automatically sent to the background, your terminal will be flooded with their various standard output and standard error messages, you will have no way of knowing what came from what, your terminal will be next to useless, and confusion will ensue.
So, yes, there is a very good reason to keep processes in the foreground: to see what they're doing and be able to control them easily. To give an even more concrete example, any program that requires you to interact with it can't be run in the background. This includes things that ask Continue [Y/N]? or things like sudo that ask for your password. If you just blindly make everything run in the background, such commands will just silently hang.
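If you do want a long-running job out of the way without losing its output, the usual compromise is to redirect it to a log file, for example:
./myscript > myscript.log 2>&1 &   # stdout and stderr both go to the log
tail -f myscript.log               # watch the output whenever you want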
What I would like to do is:
run a ruby script...
that executes a shell command
and redirects it to a named pipe accessible outside the script
from the system shell, read from that pipe
That is, have the Ruby script capture some command output and redirect it in such a way that something outside the script can connect to it and read it?
I want to mention that the script cannot simply start and exit, since it's a REPL. The idea is that using the REPL you would be able to run a command and redirect its output elsewhere to consume it.
Using abort with an exit message will pass the message to STDERR (and the script will fail with exit code 1). You can pass the shell command's output along in this way.
This is possibly not the only (or best) way, but it has worked for me in the past.
[edit]
You can also redirect the output to a file (using standard methods), and read that file outside the ruby script.
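For the named-pipe part specifically, here is a rough sketch from the shell side (the script name is hypothetical):
mkfifo /tmp/repl_out                  # create the named pipe once
ruby my_repl.rb > /tmp/repl_out &     # the script's output goes into the pipe
cat /tmp/repl_out                     # another shell process reads from it
Note that writes to a FIFO block until a reader opens the other end, so the reader has to show up eventually.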
require 'open3'
stdout, stderr, status = Open3.capture3(commandline)
stdout.chomp # here you should get the command's output
In case someone wants to use it this way: you can get the output via stdout.chomp.
I'm writing an Expect script and am having trouble dealing with the shell prompt (on Linux). My Expect script spawns rlogin and the remote system is using ksh. The prompt on the remote system contains the current directory followed by " > " (space greater-than space). A script snippet might be:
send "some command here\r"
expect " > "
This works for simple commands, but things start to go wrong when the command I'm sending exceeds the width of the terminal (or more precisely, what ksh thinks is the width of the terminal). In that case, ksh does some weird horizontal scrolling of the interactive command line, which seems to rewrite the prompt and stick an extra " > " in the output. Naturally this causes the Expect script to get confused and out of sync when there appears to be more than one prompt in the output after executing a command (my script contains several send/expect pairs).
I've tried changing PS1 on the remote system to something more distinctive like "prompt> " but a similar problem arises which indicates to me that's not the right way to solve this.
What I'm thinking might help is the ability for the script to tell Expect that "I know I'm properly synchronised with the remote system at this point, so flush the input buffer now." The expect statement has the -notransfer flag which doesn't discard the input buffer even if the pattern does match, so I think I need the opposite of that.
Are there any other useful techniques that I can use to make the remote shell behave more predictably? I understand that Expect goes through a lot of work to make sure that the spawned session appears to be interactive to the remote system, but I'd rather that some of the more annoying interactive features (such as the horizontal scrolling of ksh) be turned off.
If you want to throw away all output Expect has seen so far, try
expect -re $
This is a regexp match on $, which anchors at the end of the input buffer, so it simply skips everything received so far. More details are in the Expect man page.
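For example, a sketch of using that as a flush between commands (the commands themselves are illustrative):
send "some command here\r"
expect " > "        ;# synchronise on the prompt
expect -re $        ;# discard whatever else is already buffered
send "next command\r"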
You could try "set -o multiline" or COLUMNS=1000000 (or some other suitably large value).
I have had difficulty with ksh and Expect in the past. My solution was to use something other than ksh for a login shell.
If you can change the remote login to other than ksh (using the chsh command or editing /etc/passwd) then you might try this with /bin/sh as the shell.
Another alternative is to tell ksh that the terminal is a dumb terminal, disallowing it from doing any special processing.
$ export TERM=""
might do the trick.
My $SHELL is tcsh. I want to run a C shell script that will call a program many times with some arguments changed each time. The program I need to call is in Fortran, and I do not want to edit it. The program only takes input once it is running, not on the command line. Upon calling the program in the script, the program takes control (this is where I am stuck: the script will not execute anything further until the program process stops). At this point I need to pass it some variables, and after several iterations I will need to Ctrl+C out of the program and continue with the script.
How can this be done?
To add to what @Toybuilder said, you can use a "here document". I.e. your script could have
./myfortranprogram << EOF
first line of input
second line of input
EOF
Everything between the "<<EOF" and the "EOF" will be fed to the program's standard input (does Fortran still use "read (5,*)" to read from standard input?)
And because I think @ephemient's comment deserves to be in the answer:
Some more tips: <<'EOF' prevents interpolation in the here-doc body; <<-EOF removes all leading tabs (so you can indent the here-doc to match its surroundings), and EOF can be replaced by any token. An empty token (<<"") indicates a here-doc that stops at the first empty line.
I'm not sure how portable those ones are, or if they're just tcsh extensions - I've only used the <<EOF type "here document" myself.
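For illustration, a quick bash sketch of the interpolation difference: the first here-doc prints "Hello, world", the second prints the literal text "Hello, $name".
name=world
cat <<EOF
Hello, $name
EOF
cat <<'EOF'
Hello, $name
EOF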
What you want to use is Expect.
Uhm, can you feed your Fortran code with a redirection? You can create a temporary file with your inputs, and then pipe it in with the stdin redirect (<).
This is a job for the unix program expect, which can nicely and easily interactively command programs and respond to their prompts.
I was sent here after being told my question was close to being a duplicate of this one.
FWIW, I had a similar problem with a csh (C shell) script.
This bit of code was allowing custom_command to execute without receiving ANY input arguments:
foreach f ($forecastTimes)
custom_command << EOF
arg1=x$f;2
arg2=ya
arg3=z,z$f
run
exit
  EOF
end
It didn't work the first time I tried it. After backspacing out all of the white space in that section of the code, I removed the space between the "<<" and the "EOF", and I also backspaced the closing "EOF" all the way to the left margin (the terminator has to start at the beginning of the line). After that it worked:
foreach f ($forecastTimes)
custom_command <<EOF
arg1=x$f;2
arg2=ya
arg3=z,z$f
run
exit
EOF
end
Not a tcsh user, but if the program runs and then reads its commands via stdin, you can use shell redirection < to feed it the required commands. If you run it in the background with & it will not block the script when it is executed. Then you can sleep for a bit, use whatever tools you have (ps, grep, awk, etc.) to discover the program's PID, and use kill to send it SIGINT, which is what Ctrl-C sends.
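A rough POSIX sh sketch of that idea (the program and file names are hypothetical; plain csh lacks $!, which is why the ps/grep dance is suggested above):
./fortran_prog < inputs.txt &   # stdin comes from a file, job runs in background
pid=$!                          # PID of the background job
sleep 30                        # let it do a few iterations
kill -INT "$pid"                # SIGINT is what Ctrl-C delivers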