I want to call a program that prints a welcome message whenever any SSH user logs in. I did this by editing the /etc/ssh/sshrc file:
#!/bin/bash
# the first field of $SSH_CONNECTION is the client's IP address
ip=`echo "$SSH_CONNECTION" | cut -d " " -f 1`
echo "$USER logged in from $ip"
For simplicity, I replaced the program call with a simple echo command in this example.
The problem is, I learned SCP is sensitive to any script that prints to stdout in .bashrc or, apparently, sshrc. My SCP commands failed silently. This was confirmed here: https://stackoverflow.com/a/12442753/2887850
Lots of solutions offered quick ways to check if the user is in an interactive terminal:
if [[ $- != *i* ]]; then return; fi
Fails because [ is not linked.
case $- in *i*) ...
Fails because in is not recognized?
Use tty program (same as above)
tty gave me a bizarre error code when executed from sshrc
While all of those solutions could work in a normal BASH environment, none of them work in the sshrc file. I believe that is because PATH (and, I suspect, a few other things) isn't actually available when executing from sshrc, despite specifying BASH with a shebang. I'm not really sure why this is the case, but this link is what tipped me off to the fact that sshrc runs in a limited environment.
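One way to see exactly what sshrc gets is to dump the environment from inside it with a throwaway diagnostic; export -p is a shell builtin, so it works even if PATH is unusable:

export -p > /tmp/sshrc_env.$$    # temporary line in sshrc: list every exported variable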
So the question becomes: is there a way to detect interactive terminal in the limited environment that sshrc executes in?
Use test to check $SSH_TTY (final solution in this link):
test -z "$SSH_TTY" || echo "$USER logged in from $ip"
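Putting it together, the whole /etc/ssh/sshrc reduces to a sketch like this (the cut call is swapped for parameter expansion so nothing outside the restricted environment is needed; $SSH_TTY is only set when a pseudo-terminal was allocated):

#!/bin/bash
# $SSH_TTY is empty for non-interactive sessions such as scp, so we stay silent then
ip=${SSH_CONNECTION%% *}    # first field of $SSH_CONNECTION, without calling cut
test -z "$SSH_TTY" || echo "$USER logged in from $ip"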
Related
I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session with a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information (by quitting vim or pressing return on a prompt), the script should continue to run as normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"    # the remote script now comes in as arguments instead of stdin
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
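For completeness, a hypothetical invocation of the fixed wrapper (assuming it is saved as remote-run and made executable; the script to run is now passed as a single argument rather than on stdin):

# the $'...' quoting turns the \n escapes into real newlines
./remote-run $'echo hello\nvim\necho goodbye'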
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp, then ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command line buffer size is usually quite large (getconf ARG_MAX reported more than 2 MB where I looked), and that got me thinking about how I could use it while mitigating the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
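For instance, with a throwaway two-line script (the host name somehost is a placeholder):

cat > my_script <<'EOF'
#!/bin/bash
echo "running on $(hostname) as $(whoami), args: $*"
EOF
ssh -t somehost /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" one two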
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
I am trying to log in to a remote server (Box1) and read a file on it. That file contains the details of another server (Box2); based on those details I have to come back to the local server and ssh to Box2 for some data crunching, and so on.
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
But I am not getting the values of server_node1 and server_node2 on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took the path out (on the assumption that /home/rakesh is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command; but in short, unless you absolutely require the local shell to be able to modify the commands you send, probably put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference: if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see the variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they stay private, this avoids copying them into the environment of every child process), but even then they are only visible to children, by the design of the U*x process architecture.
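A quick terminal illustration of that PERL5LIB example (the directory name is made up):

PERL5LIB=/opt/mylibs      # assigned but not exported: private to this shell
perl -e 'print $ENV{PERL5LIB} // "unset", "\n"'    # prints "unset"
export PERL5LIB           # now copied into the environment of children
perl -e 'print $ENV{PERL5LIB}, "\n"'               # prints "/opt/mylibs"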
You should get the file back from box1 and box2 with scp:
scp box1.com:/home/rakesh/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
and the same with box2:
scp box2.com:/home/rakesh/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could create a file on the remote system (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rt9419/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile stored in a variable, but this is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
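The local side could then look like this sketch (file locations are illustrative):

# copy the generated file back and source it in the current shell
scp box2.com:export_settings /tmp/export_settings
source /tmp/export_settings    # defines (and exports) server_node2 here
echo "$server_node2"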
Is it possible to identify, if a Linux shell script is executed by a user or a cronjob?
If yes, how can I identify/check whether the shell script is executed by a cronjob?
I want to implement a feature in my script that returns a different message when it is executed by a cronjob than when it is executed by a user. Like this for example:
if [[ "$type" == "cron" ]]; then
echo "This was executed by a cronjob. It's an automated task.";
else
USERNAME="$(whoami)"
echo "This was executed by a user. Hi ${USERNAME}, how are you?";
fi
One option is to test whether the script is attached to a tty.
#!/bin/sh
if [ -t 0 ]; then
    echo "I'm on a TTY, this is interactive."
else
    logger "My output may get emailed, or may not. Let's log things instead."
fi
Note that jobs fired by at(1) are also run without a tty, though not specifically by cron.
Note also that this is POSIX, not Linux- (or bash-) specific.
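You can see both branches without involving cron (assuming the snippet above is saved as check.sh):

./check.sh               # stdin is your terminal: prints the interactive message
./check.sh < /dev/null   # stdin is not a tty: the message goes to syslog via logger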
I've been looking for a way to log some more detailed information about the history of commands. My main purpose is to have a rough log of commands that were issued in order to build rough server timelines when debugging issues with our application. It is not for highly detailed auditing purposes. I came across this post which suggested an excellent way to modify PROMPT_COMMAND to augment the history log with additional information about each command. It suggests adding the following to the ~/.bashrc file:
export PROMPT_COMMAND='hpwd=$(history 1); hpwd="${hpwd# *[0-9]* }"; if [[ ${hpwd%% *} == "cd" ]]; then cwd=$OLDPWD; else cwd=$PWD; fi; hpwd="${hpwd% ### *} ### $cwd"; history -s "$hpwd"'
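For readability, here is the same assignment spread over several lines with comments; it is functionally identical, since PROMPT_COMMAND may contain newlines:

export PROMPT_COMMAND='
  hpwd=$(history 1)                 # last history entry, e.g. "  42  cd /tmp"
  hpwd="${hpwd# *[0-9]* }"          # strip the leading history number
  if [[ ${hpwd%% *} == "cd" ]]; then
    cwd=$OLDPWD                     # cd already ran, so record where it was issued
  else
    cwd=$PWD
  fi
  hpwd="${hpwd% ### *} ### $cwd"    # drop any old annotation, append " ### <dir>"
  history -s "$hpwd"                # overwrite the entry with the annotated version
'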
This works awesome, except that it only happens when the PS1 prompt is issued. Is there a way to enhance this to work with non-interactive shells (I think that's the correct term)?
For example, I would like:
ssh host "ls | grep home"
To create an entry for ls | grep home on host as well, but since this isn't done through a PS1 prompt, the linked solution falls short.
I have looked into auditd a little. It is a great utility, but the level of detail was way more than I need. I could have parsed the logs pretty easily, but pipes, redirects, and loops become a nightmare to rebuild sanely into something as tidy as what history already reports.
A simple wrapper around ssh would seem like a straightforward way to achieve this.
shout () {
    local host
    host=$1
    shift
    ssh "$host" <<____HERE
        echo "$@" >>\$HOME/.shout-history
        bash -c "$@"
____HERE
}
Or if you want the wrapper to run locally,
shout () {
    local host
    host=$1
    shift
    echo "$@" >>"$HOME"/.shout-history
    ssh "$host" "$@"
}
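Usage is the same for either variant (somehost is a placeholder); the first variant logs on the remote side, the second locally:

shout somehost 'ls | grep home'
cat ~/.shout-history     # with the local variant, the log lives on this machine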
I called this shout in opposition to ssh which ought to be, you know, quiet. See also this. Of course, if you are admin, you could simply move /usr/bin/ssh to someplace obscure and force your users to run a /usr/local/bin/ssh with contents similar to the above. It's easy enough to bypass by a knowledgeable user, but if you're really draconian, there are ways to make it harder.
If you are the admin of the remote host, you could force all users to run /usr/local/bin/shout as their shell, for example, and populate it with something more or less similar.
#!/bin/bash
echo "$#" >>/home/root/im.in.ur.sh.reading.ur.seekrit.cmds.lol
exec /bin/bash -c "$#"
Just make sure the transcript file is world writable but not world readable.
I have a script which executes a git-pull when I log in. The problem is, if I su to a different user and preserve my environment with an su -lp, the script gets run again and usually gets messed up for various reasons because I'm the wrong user. Is there a way to determine in a shell script whether or not I'm currently SUing? I'm looking for a way that doesn't involve hard coding my username into the script, which is my current solution. I use Bash and ZSH as shells.
You could use the output of the who command with the id command:
WHO=`who am i | sed -e 's/ .*//'`
ID_WHO=`id -u "$WHO"`
ID=`id -u`
if [[ "$ID" = "$ID_WHO" ]]
then
    echo "Not su"
else
    echo "Is su"
fi
if test "$(id -u)" = "0";
: # commands executed for root
else
: # commands executed for non root
fi
If you are changing user identities with a setuid executable, your real and effective user IDs will be different. But if you use su (or sudo), they'll both be set to the new user. This means that commands that call getuid() or geteuid() won't be useful.
A better method is to check who owns the terminal the script is being run on. This obviously won't work if the process has detached from its terminal, but unless the script is being run by a daemon, this is unlikely. Try stat -c %U $(tty). I believe who am i will do the same thing on most Unix-like OSes as well.
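A sketch of that terminal-ownership check (GNU stat; it will fail with an error if there is no controlling terminal):

# compare the owner of the controlling terminal with the current user
tty_owner=$(stat -c %U "$(tty)")
if [ "$tty_owner" != "$(id -un)" ]; then
    echo "Terminal owned by $tty_owner: looks like an su'd session"
else
    echo "Not su"
fi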
You can use "$UID" environment variable.
If its value is ZERO, then the user has SUDOed.. Bcos root as $UID==0
Well... on Linux, if I su to another user, the su process shows up in the new user's process list.
sudo... doesn't leave such pleasant things for you.
I'm using zsh... but I don't think anything in this is shell specific.
If
ps | grep " su$"
returns anything, then you're running in an su'd shell.
Note: there is a space before su$ to exclude commands merely ending in su. It doesn't guard against a custom program/script called su, though.
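In script form, the same check might look like this (kept as a sketch, with the caveats above):

# default ps output lists only processes on this terminal;
# the leading space keeps commands merely ending in "su" from matching
if ps | grep -q " su$"; then
    echo "Running in an su'd shell"
fi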