My /etc/bash.bashrc contains this code by default (Git for Windows)
# If started from sshd, make sure profile is sourced
if [[ -n "$SSH_CONNECTION" ]] && [[ "$PATH" != *:/usr/bin* ]]; then
. /etc/profile
fi
I know it is documented, but I still don't understand what it means. I put an echo "here" inside the if to see if it ever runs, but I can't get it to trigger. What exactly does this code mean, and what even is an sshd? Or did I accidentally just type "d" on my keyboard and it's a typo?
https://www.ssh.com/ssh/sshd/
sshd is the OpenSSH server process. It listens to incoming connections using the SSH protocol and acts as the server for the protocol. It handles user authentication, encryption, terminal connections, file transfers, and tunneling.
That code checks if the $SSH_CONNECTION environment variable is set to see if the shell was started by sshd. If so, and $PATH does not contain /usr/bin, then it executes the commands in /etc/profile in the current shell context.
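A quick way to see the variable for yourself (this assumes you can actually ssh into the machine, e.g. with Windows' optional OpenSSH server installed; in a plain local Git Bash window it will be empty, which is why your echo never fires):
# In a shell started by sshd, SSH_CONNECTION holds "client_ip client_port server_ip server_port"
echo "$SSH_CONNECTION"
# Empty when the shell was started locally, so the if branch is skipped
[[ -n "$SSH_CONNECTION" ]] && echo "started from sshd" || echo "not an ssh session"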
Related
I want to call a program when any SSH user logs in that prints a welcome message. I did this by editing the /etc/ssh/sshrc file:
#!/bin/bash
ip=`echo $SSH_CONNECTION | cut -d " " -f 1`
echo $USER logged in from $ip
For simplicity, I replaced the program call with a simple echo command in the example.
The problem is, I learned SCP is sensitive to any script that prints to stdout in .bashrc or, apparently, sshrc. My SCP commands failed silently. This was confirmed here: https://stackoverflow.com/a/12442753/2887850
Lots of solutions offered quick ways to check if the user is in an interactive terminal:
if [[ $- != *i* ]]; then return; fi (link)
Fails because [ is not linked
case $- in *i* (link)
Fails because in is not recognized?
Use the tty program (same link as above)
tty gave me a bizarre error code when executed from sshrc
While all of those solutions could work in a normal bash environment, none of them work in the sshrc file. I believe that is because PATH (and, I suspect, a few other things) isn't actually available when executing from sshrc, despite specifying bash with a shebang. I'm not really sure why this is the case, but this link is what tipped me off to the fact that sshrc runs in a limited environment.
So the question becomes: is there a way to detect interactive terminal in the limited environment that sshrc executes in?
Use test to check $SSH_TTY (final solution in this link):
test -z "$SSH_TTY" || echo "$USER logged in from $ip"
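A minimal sketch of /etc/ssh/sshrc built around that guard (the welcome message stands in for the real program call):
# /etc/ssh/sshrc -- run by sshd for every connection, including scp/sftp.
# SSH_TTY is only set when a tty was allocated, so non-interactive transfers stay silent.
if [ -n "$SSH_TTY" ]; then
    ip=$(echo "$SSH_CONNECTION" | cut -d " " -f 1)
    echo "$USER logged in from $ip"
fi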
Is there a way for a shell script to determine whether it is called from a terminal in a virtual machine running Ubuntu, or from a Windows 10 terminal using the bash command (the Ubuntu app installed in W10)?
I am working in both environments and have a lot of useful shell scripts to make my work more efficient on the virtual machine, e.g. opening specific URLs or running sets of commands. I would like them to work on the Windows side as well. However, my scripts set up directories, which will have to be different on the Windows side.
I have installed the Ubuntu app from the Windows Store, which allows me to open a bash window and source the files. I could just check if ~ returns an empty string, but is there a more robust way of doing it?
I am running Windows 10, version 17763 and using Ubuntu 18.04 LTS.
E.g.
C:\.sourceThis.sh
#!/bin/bash
myDir="/home/user/stuff"
cdMySub() {
cd "$myDir/$1"
}
I can run this in a Windows terminal by
C:\> bash
USER#XXXX:/mnt/c/$ source ./.sourceThis.sh
USER#XXXX:/mnt/c/$ cdMySub someSubDirectoryName
-bash: cd: /home/user/stuff/someSubDirectoryName: No such file or directory
USER#XXXX:/mnt/c/$ #Fail!
but it does not work, since the Ubuntu file system layout is different from the Windows one.
I would like to change .sourceThis.sh to something like
...
if [[ "Something that detects virtual machine" ]] ; then
myDir="/home/user/stuff"
elif [[ "Something that detects 'bash' from Windows prompt" ]] ; then
myDir="/mnt/c/user/stuff"
fi
so that the outcome is instead
C:\> bash
USER#XXXX:/mnt/c/$ source ./.sourceThis.sh
USER#XXXX:/mnt/c/$ cdMySub someSubDirectoryName
USER#XXXX:/mnt/c/user/stuff/someSubDirectoryName$ #Yay, success!
EDIT:
I cannot just check for the validity of the default directory, since the scripts create the directory if it does not exist. I want it to point to another default path instead.
I use different user names, so I could check that the output from ~ is the "Windows or VM user".
USER#XXXX:/mnt/c$ echo ~
/home/USER
Thus,
tmpHome=~
if [[ "${tmpHome##*/}" == "USER" ]] ; then
# Windows user
elif [[ "${tmpHome##*/}" == "VM" ]] ; then
# VM user
fi
works for my specific user. However, I suspect I will want to use this with different users (e.g. share it with a colleague), which demands a more robust way.
I am not too experienced with Linux. I do not know how to navigate the world of users, processes and tasks, which I suspect could give the answer.
I have used this for a long time successfully:
if [[ "$(uname -r)" == *Microsoft ]]; then
do stuff
fi
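One caveat worth verifying on your own systems: newer WSL 2 kernels report a lowercase "microsoft" in uname -r (e.g. ...-microsoft-standard-WSL2), so a case-insensitive match is a bit more future-proof:
if [[ "$(uname -r)" == *[Mm]icrosoft* ]]; then
    # running under WSL, either capitalisation
    do stuff
fi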
You could always use an if condition checking whether the path exists, and run the script from there:
if [[ -f /home/user/stuff ]]; then
script if running on linux
else
script if running on windows
fi
Here the -f flag is a bash condition that checks whether a file exists at the specified path, returning true if it does. (Since /home/user/stuff in the question is a directory, -d is the more appropriate test for that particular path.) You can add other validations to check whether the file also exists when running on Windows and whatnot.
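Applied to the paths from the question, a sketch could look like this (it assumes the VM-only directory does not also exist under WSL, which you should confirm first):
if [[ -d /home/user/stuff ]]; then
    myDir="/home/user/stuff"    # Linux VM layout
else
    myDir="/mnt/c/user/stuff"   # otherwise assume the WSL layout
fi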
Bash provides information about the system that is running it in the MACHTYPE, HOSTTYPE, and OSTYPE built-in variables.
Example values for a physical Linux system are:
MACHTYPE=x86_64-redhat-linux-gnu
HOSTTYPE=x86_64
OSTYPE=linux-gnu
Example values for a WSL Linux system are:
MACHTYPE=x86_64-pc-linux-gnu
HOSTTYPE=x86_64
OSTYPE=linux-gnu
One possible way to check if the system is WSL Linux is:
if [[ $MACHTYPE == *-pc-* ]]; then
...
fi
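Applied to the question's script, that check could be used like this. Note the *-pc-* pattern matches the example values above; an Ubuntu VM may also report a -pc- MACHTYPE, so verify the values on both of your machines before relying on it:
if [[ $MACHTYPE == *-pc-* ]]; then
    myDir="/mnt/c/user/stuff"   # WSL
else
    myDir="/home/user/stuff"    # virtual machine
fi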
@Dexirian and @Michael Hoffman suggested a method that worked!
For me uname -r returns x.x.x-17763-Microsoft from the Windows prompt and x.x.x-xx-generic on my virtual machine.
Thus,
if [[ "$(uname -r)" =~ "Microsoft" ]] ; then
myDir="/mnt/c/user/stuff"
elseif [[ "$(uname -r)" =~ "generic" ]] ; then
myDir="/home/user/stuff"
fi
works like a charm!
I have a situation where I have a host machine where I need certain applications installed with side-by-side versions. Obviously only one can be added to the exported PATH to run by default. As an example, we might say it's Python 2.7, Python 3.5, and Python 3.7.
I need to be able to establish an SSH connection to the host, where each connection can set the correct path for the specific version that is required. Is there an easy way to do this? The key here is that each connection cannot affect either the host itself or other connections. Someone running on the host itself shouldn't break because the path was updated by a remote connection.
For the case of multiple (Python and other) hierarchies, and assuming that the tools are invoked by the tool name (python ...), prepending the preferred path to the system PATH provides a way to specify per-instance tool settings, without side effects between jobs.
ssh ... 'PATH=/path/to/python3.1/bin:$PATH command'
Depending on the number of tools and the complexity of the setup, you might want to implement this as a wrapper:
ssh ... '/path/to/run-with-pkgs python-3.2 pkg2 -- command'
with the wrapper script sourcing the various per-package config scripts. Something along the lines of:
run-with-pkgs
#! /bin/bash
# Source the setup script for each named package until "--" is reached,
# then replace this process with the remaining arguments as the command.
while [ $# -gt 0 ] && [ "$1" != "--" ] ; do
    source "/path/to/setup.d/$1.sh"
    shift
done
if [ "$1" = "--" ] ; then
    shift
    exec "$@"
fi
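A hypothetical per-package setup fragment the wrapper would source, plus a matching invocation (all paths, package names, and the script name are illustrative):
# /path/to/setup.d/python-3.2.sh
export PATH="/opt/python-3.2/bin:$PATH"
# invocation from the client:
ssh ... '/path/to/run-with-pkgs python-3.2 -- python script.py'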
I am facing an issue using environment variables in my service script.
In my service script, I am using an environment variable, INSTALL_DIR, whose value may vary on different systems. I have to get the installation directory from $INSTALL_DIR and then start the service. When I run the service script, the environment variable is not sourced at all.
Is it possible to source the installation directory from the INSTALL_DIR environment variable? Another option I can think of is dynamically creating the service script using the INSTALL_DIR environment variable.
echo "INSTALL DIR: ${INSTALL_DIR}"
name=`basename $0`
pid_file="/var/run/$name.pid"
get_pid() {
cat "$pid_file"
}
is_running() {
[ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}
Start()
{
echo "Starting Application"
if is_running; then
echo "[`get_pid`] Already Started"
else
if [ -z "$user" ]; then
nohup $INSTALL_DIR/bin/application 2>&1 &
else
nohup sudo -u "$user" $cmd 1> $INSTALL_DIR/bin/application 2>&1 &
fi
echo $! > "$pid_file"
if ! is_running; then
echo "Unable to start, see logs"
exit 1
fi
echo "[`get_pid`] Started"
fi
}
I am trying to run the application using the following command:
service application start
In my service script ... I have to get the installation directory from $INSTALL_DIR and then start the service.
Your question isn't really about shell scripting, but about your system's startup. Unfortunately that process varies by Linux distribution, and tends to be poorly documented.
For example, man service says, "service runs a System V init script or upstart job in as predictable an environment as possible, removing most environment variables and with the current working directory set to /", but man upstart says:
$ man -k upstart
upstart: nothing appropriate.
Not only that, but the service manpage specifically lists the environment variables a script will start with. Needless to say, yours isn't among them.
The traditional approach to parameterizing startup scripts is to put the information in a known file, normally in /etc, and reference that file in the script. In your case, you could do something like:
INSTALL_DIR=$(cat /etc/my-install-dir.cfg)
and then proceed accordingly.
There might be ways to coerce your startup to support other environment variables. But, sooner or later, the information you need has to be stored somewhere on the filesystem. It seems to me the simplest approach is to reserve a filename to hold that information, and read that file directly.
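A minimal sketch of that pattern, with an illustrative installation path:
# one-time setup, run by whoever installs the application:
echo "/opt/myapp" > /etc/my-install-dir.cfg
# in the init script:
INSTALL_DIR=$(cat /etc/my-install-dir.cfg)
"$INSTALL_DIR/bin/application" &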
Use the code below in your script.
if [[ -z "${INSTALL_DIR}" ]]; then
echo "INSTALL_DIR is undefined"
else
INSTALL_DIR=<<your installation directory>>
fi
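Equivalently, bash's default-value expansion does this in one line (the fallback path is illustrative):
# use the environment value if it is set and non-empty, otherwise fall back to a default
INSTALL_DIR="${INSTALL_DIR:-/opt/myapp}"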
I am trying to log in to one of the remote servers (Box1) and read a file on that remote server (Box1).
That file contains the other server's (Box2) details; based on those details, I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on...
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
but I am not getting the value of "server_node1" and "server_node2" on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
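If you want to keep the existence check from your attempt, it can stay on the remote side while the assignment still happens locally; a sketch using the paths from your question:
server_node1=$(ssh box1.com 'if [ -f /home/rakesh/tomar.log ]; then cat /home/rakesh/tomar.log; else echo "LOG file not found" >&2; fi')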
A few notes and amplifications:
The remote shell will run in your home directory, so I took the path out (on the assumption that /home/rakesh is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command. In short, unless you absolutely require the local shell to be able to modify the commands you send, probably put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see this variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they are private, this saves the amount of memory which needs to be copied between processes), but that still only makes them visible to children, by the design of the U*x process architecture.
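As a quick illustration of that difference (the directory and script name are just placeholders):
PERL5LIB=/opt/perl/lib   # set but not exported: visible only to the current shell
perl myscript.pl         # this child process does NOT see PERL5LIB
export PERL5LIB          # now marked for export to child processes
perl myscript.pl         # this child process sees PERL5LIB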
You should get the file back from box1 and box2 with scp:
scp box1.com:/home/rakesh/tomar.log ~/tomar1.log
# then you can cat!
export server_node1=`cat ~/tomar1.log`
The same with box2:
scp box2.com:/home/rakesh/tomar.log ~/tomar2.log
# then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could, on the remote system, create a file (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rakesh/tomar.log)'" >> export_settings
(which makes me wonder why you want the whole content of your logfile to be stored in a variable, but that is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
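To complete the picture, a sketch of the local side (the export_settings file name follows the example above; adjust paths as needed):
scp box2.com:export_settings ./export_settings
source ./export_settings    # defines and exports server_node2 in the current shell
echo "$server_node2"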