I want to write a background shell script (it could be Python or any other language, actually) very much like script.
Its main purpose is to run in the background after being invoked and listen to the input (not the output, the PS1 shell prompt string, or keystrokes like "End", "Ctrl", arrow keys, etc.) that the user enters into the shell from which the script was originally invoked, and record it.
The dumbest approach would be to use script and then try to subtract PS1 and the outputs from the generated file, but this would be error-prone and there are surely better ways to do it.
I've read about pam_tty_audit, but I'm not sure whether one can easily filter out just the user input, and I'm afraid it won't work on all Linux distributions.
What else can I look into to accomplish that?
A quick example illustrating what I seek:
The user would input this to the shell:
$ ./myscript
MyScript started in the background
$ echo "foo"
$ sudo apt install netcat
[sudo] password for user: MYPASSWORD
...
Do you want to continue? [Y/n] NO
...
$ exit
MyScript exited
And my script would render this output file:
#!/bin/bash
eval "echo \"foo\""
echo "NO" | echo "MYPASSWORD" | eval "sudo apt install netcat"
Note: Capturing the password typed for sudo is not really something I want to do; the above example is just the first one that came to mind.
Related
I have a script that calls an application that requires user input, e.g. an app that requires the user to type in 'Y' or 'N'.
How can I get the shell script not to ask the user for the input but rather use the value from a predefined variable in the script?
In my case there will be two questions that require input.
You can pipe in whatever text you'd like on stdin and it will be just the same as having the user type it themselves. For example, to simulate typing "Y" just use:
echo "Y" | myapp
or using a shell variable:
echo $ANSWER | myapp
There is also a unix command called "yes" that outputs a continuous stream of "y" for apps that ask lots of questions that you just want to answer in the affirmative.
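A minimal sketch of that, assuming a hypothetical installer script that asks several yes/no questions on stdin:
# Answer "y" to every question the installer asks (some_installer.sh is a placeholder name).
yes | sh ./some_installer.sh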
If the app reads from stdin (as opposed to from /dev/tty, as e.g. the passwd program does), then multiline input is the perfect candidate for a here-document.
#!/bin/sh
the_app [app options here] <<EOF
Yes
No
Maybe
Do it with $SHELL
Quit
EOF
As you can see, here-documents even allow parameter substitution. If you don't want this, use <<'EOF'.
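For comparison, quoting the delimiter suppresses the substitution, so the app would literally receive the text $SHELL (same hypothetical the_app as above):
the_app <<'EOF'
Do it with $SHELL
EOF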
For more complicated situations there is the expect command; your system should have it. I haven't used it much myself, but I suspect it's what you're looking for.
$ man expect
http://oreilly.com/catalog/expect/chapter/ch03.html
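As a rough sketch of how that might look for a yes/no prompt like the one in the original question (the_app and the prompt text are placeholders; untested):
#!/bin/sh
# expect matches the program's prompt and sends the answer on its behalf.
expect <<'EOF'
spawn ./the_app
expect "continue?"
send "Y\r"
expect eof
EOF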
I prefer this way: if you want multiple inputs, you put in multiple echo statements, like so:
{ echo Y; echo Y; } | sh install.sh >> install.out
In the example above I am feeding two inputs into the install.sh script. Then, at the end, I am redirecting the script's output to a log file to be archived and viewed later.
Content of my remote file:
#!/bin/sh
read foo && echo "$foo"
What I’m doing locally in my terminal:
$ curl -fLSs http://example.com/my-remote-file.sh | sh
What I’m expecting:
The downloaded script should prompt the user to enter something and wait for the users input.
What actually happens:
The downloaded script skips the read command and continues executing.
Question:
How can I prompt the user to input something from a script that is downloaded via curl? I know that sh <(curl -fLSs http://example.com/my-remote-file.sh) works, but process substitution is not POSIX-compliant and I'd like to stay POSIX-compliant.
The problem is that stdin is reading from curl, rather than the keyboard. You need to change your script to instruct it to read specifically from the terminal, like this:
read input < /dev/tty
echo $input
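Putting that together, the remote file from the question becomes something like this (same hypothetical URL, still invoked with curl -fLSs http://example.com/my-remote-file.sh | sh):
#!/bin/sh
# stdin is the pipe carrying the script from curl, so read the answer from the controlling terminal instead.
read foo < /dev/tty && echo "$foo"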
I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. I have to call them in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way of starting an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to take over stdin completely as well, making it impossible for the user to respond to prompts for input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$#"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX was over 2 MB everywhere I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
I know I can run this command to spawn a background process and get the PID:
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
and to run a command under different user:
su - $USER -c "$COMMAND"
I don't want the script to run as root and I can't quite figure out how to combine the two and get the PID of the spawned process.
Thanks!
I think you want the runuser command. General syntax:
runuser -l userNameHere -c 'command'
I suspect that if you set your $SCRIPT variable to the above (with appropriate changes), your first command will do what you want.
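A minimal sketch of that suggestion (the user name and script path are placeholders, and this assumes you are root, since runuser needs root); note that $! here gives you the PID of the runuser process itself rather than of the command it starts:
# Background the whole runuser invocation and capture its PID, as in the first command above.
PID=$(runuser -l someuser -c '/path/to/script.sh' > /dev/null 2>&1 & echo $!)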
To elaborate, see the following: stackoverflow.com/questions/9119885/…
See particularly the following quote from Chris Dodd:
Unfortunately there's no easy way to do this prior to bash version 4, when $BASHPID was
introduced. One thing you can do is to write a tiny program that prints its parent PID:...
If you have bash 4 and BASHPID, see $$ in a script vs $$ in a subshell
I don't have version 4, so I can't provide an example of its usage.
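For readers who do have bash 4, a minimal illustration of the difference would be something like this (untested sketch):
# Inside a subshell, $$ still reports the PID of the script itself,
# while $BASHPID reports the PID of the subshell.
( echo "script: $$  subshell: $BASHPID" )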
Or write a tiny C program which execvs its arguments and make it setuid to USER.
Or even make a setuid shell script (not generally recommended). Hopefully the USER is fixed; if not, get the source for runuser; this is essentially what runuser (not a POSIX command) does.
PID=`su - $USER -c "$SCRIPT > /dev/null 2>&1 & echo $!"`
The problems with your use of su (above) include:
the $! is being executed in the context of the -c sub-shell of su, not the current shell where PID is,
you're requesting that your SCRIPT be run as a login shell, so you don't even know if USER's shell supports $!,
you have no control over the parent-child process chain that su (and the user's shell) create.
IOW, when you use
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
there's only one program involved, bash, and two (maybe three?) processes that you pretty much have complete control over. When you throw su into the mix, that changes things much more than is apparent on the surface -- bash and su support similar arguments, right?!?
For obvious reasons, su does mucho magic to protect itself and its children's environment from attacks; it doesn't even like being put in the background....
It's kind of late, but here is a two-liner that will work; it seems to need to be two lines so that it doesn't wait for $SCRIPT to complete:
su $USER -c "$SCRIPT >> $LogOrNull 2>&1 & echo \$! > /some/writeable/path"
PID="$(cat /some/writeable/path)"
/some/writeable/path will need to be writeable by $USER, and the user running these commands will need read access to it.
I know how to run shell scripts pretty easily.
I would have my file say:
#!/bin/zsh
python somefile.py
but the file, somefile.py in this case, requires an input. Example:
What is the password?
Can you write a script which will enter that password, or pause while it waits for input?
My overall goal is to run a tunneling Python script to build a connection and watch a port, pull some data through the tunnel, and then close the Python script.
Ideally, I want this shell script to open somefile.py in an alternate terminal, as I don't know if I can just nohup it until it is no longer needed and then kill the process.
First things first: can you have a script which will do something like:
#!/bin/zsh
python somefile.py
echo admin12345
or something similar to auto-enter the info?
Assuming the python script reads from stdin, just do "echo admin12345 | somefile.py".
Usually, however, that's not the case, and scripts that read passwords will want to read from a terminal, not just any stdin.
In that case, look into "expect".
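For this particular case, a rough expect sketch (the prompt text and password are taken straight from the question; untested):
#!/bin/zsh
# Spawn the Python script, wait for its password prompt, type the password, then wait for it to finish.
expect -c '
spawn python somefile.py
expect "What is the password?"
send "admin12345\r"
expect eof
'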
It worked for me with java and python examples:
#!/bin/bash
echo "1234" | python somefile.py
Just make your script executable with chmod +x yourscript.sh, and run it with ./yourscript.sh.