if statement with variable comparison not working in bash shell - bash

I'm trying to write a very simple script to check whether iptables are already updated for Synergy to work. The current script is:
if [[ $SYNERGY = "yes" ]]
then
echo "Synergy is active"
else
sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
export SYNERGY=yes
fi
But it does not work (I'm always asked for the sudo password each time I open a new terminal).
I also tried this modified version, but the result is the same:
syn="yes"
if [ "$SYNERGY" = "$syn" ]
then
echo "Synergy is active"
else
sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
export SYNERGY=yes
fi
Where is the issue?

If you are expecting this to be run from one terminal/shell session and to affect other, unrelated terminals/shell sessions, then the issue is that that isn't how export works.
export sets the variable in the environment of the current process so that any processes spawned from this process also have it in their environment. Notice how I said "spawned from"? It only applies to processes that process spawns. Unrelated processes aren't affected.
If you want something globally checkable then you either need a flag/lock/state file of some sort or an actual runtime check of the iptables configuration.
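For example, a minimal sketch of that runtime check (assuming the rule is inserted exactly as in the question; iptables -C exits successfully if a matching rule already exists, though the check itself still needs root):
#!/bin/bash
# Sketch: query iptables directly instead of relying on an exported variable
if sudo iptables -C INPUT -p tcp --dport 24800 -j ACCEPT 2>/dev/null
then
    echo "Synergy is active"
else
    sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
fi
This avoids re-inserting the rule from every new terminal, but a state file (or persisting the rule, as in the answer below) is the way to avoid the sudo prompt entirely.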

Just to help those who have the same question, this is how I managed to persist firewall settings:
sudo apt-get install iptables-persistent
and then the rules specified in the files rules.v4 and rules.v6 in /etc/iptables are automatically loaded at startup.
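For reference, a hedged one-liner to populate that file with whatever rules are currently loaded (so the Synergy rule survives a reboot):
# Dump the active IPv4 rules into the file iptables-persistent loads at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'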

Related

How to ignore reboot prompt in a ShellScript

I am trying to create a Shellscript with the following commands.
#!/bin/bash
ipa-client-install --uninstall
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
My problem is that the ipa-client-install --uninstall command prompts for a reboot at the end with the default value being no.
Here is the output.
Client uninstall complete. The original nsswitch.conf configuration
has been restored. You may need to restart services or reboot the
machine. Do you want to reboot the machine? [no]:
How can I suppress the reboot dialog and just accept the default "no"?
How can I check to see if ipa-client-install is installed before attempting to remove it?
I am new to Shellscripting, so I am struggling a bit :-)
Please be safe.
You can use a pipe to take care of the prompt issue, and rpm -q will help you check whether the package is installed.
Your final script would look like this:
#!/bin/bash
if rpm -q ipa-client-install
then
echo no | ipa-client-install --uninstall
else
echo "Package not found"
fi
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
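If rpm -q doesn't find a package under that exact name (the package that ships ipa-client-install may be named differently on your distribution), a hedged alternative is to test for the command itself:
#!/bin/bash
# Sketch: check for the command rather than guessing the package name
if command -v ipa-client-install >/dev/null 2>&1
then
    echo no | ipa-client-install --uninstall
else
    echo "ipa-client-install not found"
fi
/usr/local/sbin/new-clone.sh -i aws -s aws-dev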

How To Obtain Remote Client SSH IP From Sudo'd Script

I'm trying to create a small setup script which requires the user to enter their own IP address... I'm trying to make the default IP address be that of the connected SSH session.
When I run ${SSH_CLIENT%% *} in the command line, it shows 99.99.99.99: command not found and when I do echo ${SSH_CLIENT%% *} it shows just 99.99.99.99 (where 99.99.99.99 is my actual WAN IP)...
However, the following code (when run in a bash script) shows
[6/9] Enter your public IP address. (eg. ) :
declare CONNECTED_IP=${SSH_CLIENT%% *}
read -e -p "[6/9] Enter your public IP address. (eg. $CONNECTED_IP) : " ipAddress
[ -z "$ipAddress" ] && ipAddress=$CONNECTED_IP
Update 1
Upon further review, it does work... except for when I run the script with sudo. Could someone explain this behavior to me and is there a work-around?
Update 2
So, it's been explained to me that this behavior is due to the fact that environment variables are not preserved when switching users (sudo) and that I could use the -E flag when executing the script to preserve the environment.
Unfortunately though, this is a script that will be shared and I therefore cannot ensure that the user executes the script with the -E flag. Furthermore, I don't even know if they'll use sudo at all.
That said, I'm looking for a consistent way to obtain the IP address of the user connected via SSH.
The reason the variable doesn't show when using sudo is that sudo doesn't automatically preserve the environment when you switch to another user. You can use the -E sudo option to preserve the current environment.
$ sudo -E ./script.sh
This should show the IP as expected.
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve
their existing environment variables. The security policy may
return an error if the user does not have permission to preserve
the environment.
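For the "consistent way" asked about in Update 2, one possible sketch (not part of this answer; it assumes the script runs from an interactive SSH session with a controlling tty) is to fall back to the login origin reported by who am i when SSH_CLIENT has been stripped by sudo:
#!/bin/bash
# Sketch: use SSH_CLIENT when present, otherwise parse the origin shown in
# parentheses by `who am i` (which survives sudo because the tty is kept)
CONNECTED_IP=${SSH_CLIENT%% *}
if [ -z "$CONNECTED_IP" ]; then
    CONNECTED_IP=$(who am i | sed -n 's/.*(\([^)]*\)).*/\1/p')
fi
read -e -p "[6/9] Enter your public IP address. (eg. $CONNECTED_IP) : " ipAddress
[ -z "$ipAddress" ] && ipAddress=$CONNECTED_IP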

getting multiple issues while creating a script to update hostnames in /etc/hosts file?

We have around 3000 VMs and 450 physical servers, all Linux based (a few of them Ubuntu starting from 9.x, a few of them SUSE starting from 8.x, and the majority RHEL from 4.x up to 7.4). On all of them I need to add a few hostname entries with IP details to their respective /etc/hosts files.
I have different users on each server with full sudoers access which I can use.
Hence I've created a CSV file in hostname, username, password format, which contains the details required to log in. The filename is "hostname_logins.csv".
I need to upload a file (i.e. hostname_list) to each of these servers and then append those same details to each server's hosts file.
I'll be running this script from one RHEL 6 server. (All of the other hosts are resolvable and reachable from this server; I've confirmed that already.)
The script is working, but it asks me to accept the host key once and asked for the password twice; the third time it did not ask for a password and seemingly worked automatically. I need to ensure it does not ask to accept the host key or for passwords:
#!/bin/bash
runing_ssh()
{
while read hostname_login user_name user_password
do ssh -vveS -ttq rishee:rishee@192.168.1.105 "sudo -S -ttq < ./.pwtmp cp -p /etc/hosts /etc/hosts.$(date +%Y-%m-%d_%H:%M:%S).bkp && sudo -S bash -c 'cat ./hostname_list >> /etc/hosts' && rm -f ./.pwtmp ./hostname_list"
done < hostname_logins.csv
}
while read hostname_login user_name user_password
do echo $user_password > ./.pwtmp
cat ./.pwtmp
scp -p ./.pwtmp ./hostname_list $user_name@$hostname_login:
runing_ssh
done < hostname_logins.csv
I need to make this as a single script which will work on all these servers. thanks in advance.
You are executing the original copy from /tmp with sudo, but nothing else.
while read hostname_login user_name user_password
do echo $myPW >.pwtmp
scp -p ./.pwtmp ./hostname_list $user_name:$user_password@$hostname_login:
ssh -etS $user_name:$user_password@$hostname_login "sudo -S <.pwtmp cp -p /etc/hosts /etc/hosts.bkp && sudo -S <.pwtmp cat ./hostname_list >> /etc/hosts && rm -f ./.pwtmp ./hostname_list"
done < hostname_logins.csv
I dropped the explicit send to /tmp and the cp back to your home dir, and defaulted the location (to $user_name's home dir) by not passing anything to scp after the colon. Fix that if it doesn't work for you.
I created a password file for improved security and code reuse, and sent it along with the hosts list. I added a sudo -S to each relevant command, reading from the password file.
That [bash -c ...] syntax doesn't work on my implementation, so I took it out.
Hope that helps.
Update
Added -t to ssh call. Try that.
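To deal with the host-key and password prompts specifically, a possible sketch (not part of this answer; it assumes sshpass is installed on the controlling RHEL 6 server, that the CSV really is comma-separated, and that the passwords contain no quotes or shell metacharacters) would be:
#!/bin/bash
# Sketch: suppress the host-key prompt and feed passwords non-interactively
while IFS=, read -r hostname_login user_name user_password
do
    sshpass -p "$user_password" scp -o StrictHostKeyChecking=no ./hostname_list "$user_name@$hostname_login:"
    sshpass -p "$user_password" ssh -o StrictHostKeyChecking=no "$user_name@$hostname_login" \
        "echo '$user_password' | sudo -S sh -c 'cp -p /etc/hosts /etc/hosts.bkp && cat ./hostname_list >> /etc/hosts && rm -f ./hostname_list'"
done < hostname_logins.csv
Keep in mind that passing passwords on the command line and storing them in a CSV is a security trade-off; key-based authentication would be the cleaner long-term fix.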

Is there a way to write the ip-address of an interface to a file when it comes up?

I'm looking for a way to trigger writing the IP address of a host to the file /etc/environment once the networking is up.
Right now, all my IP's are static. I'd like in future for them to be DHCP as well.
For example: when eth0 comes up and is assigned its IP configured from 10-eth0.network, the IP is written to /etc/environment in some form like
private_ipv4=x.x.x.x
public_ipv4=y.y.y.y
I'll consider other options like perhaps a script that can run from a systemd service that can do the same. I don't mind if it requires configuration. For example, to tell it which interface and possibly network prefix is considered public vs private.
Depending on your distro, you might be able to have dhclient do the writing.
See: dhclient(8), dhclient-script(8), and dhclient.conf(5)
You can write the script /etc/dhclient-exit-hooks, test for the BOUND condition and write what you want.
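A minimal sketch of such a hook, assuming ISC dhclient (which exposes $reason and $new_ip_address to its hook scripts; adjust the variable name and target file as needed):
# /etc/dhclient-exit-hooks (sketch): record the leased address once bound
case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        echo "private_ipv4=$new_ip_address" > /etc/environment
        ;;
esac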
I've found a solution through this link which works.
It seems that coreos-cloud-init writes COREOS_PUBLIC_IPV4 and COREOS_PRIVATE_IPV4 to /etc/environment if these variables are in the environment before cloud-init runs. This script achieves the same thing and can simply be copied to /usr/share/oem/cloud-config.yml
#!/bin/sh
workdir=$(mktemp --directory)
trap "rm --force --recursive ${workdir}" SIGINT SIGTERM EXIT
cat << EOF >"${workdir}/cloud-config.yml"
#cloud-config
coreos:
  etcd:
    discovery:
    addr: \$public_ipv4:4001
    peer-addr: \$private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
EOF
get_ipv4() {
  IFACE="${1}"
  local ip
  while [ -z "${ip}" ]; do
    ip=$(ip -4 -o addr show dev "${IFACE}" scope global | gawk '{split ($4, out, "/"); print out[1]}')
    sleep .1
  done
  echo "${ip}"
}
export COREOS_PUBLIC_IPV4=$(get_ipv4 eth0)
export COREOS_PRIVATE_IPV4=$(get_ipv4 eth1)
coreos-cloudinit --from-file="${workdir}/cloud-config.yml"

How can I ssh directly to a particular directory?

I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right on the directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash then the connection will be closed and control returned to your local machine.
If you don't add bash --login, it will not use your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
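A possible refinement of this approach (my own sketch, not part of the original suggestion) is to guard the cd so it only fires for SSH logins and leaves local shells alone:
# In ~/.bashrc on the remote host (sketch): only change directory for SSH sessions
if [ -n "$SSH_CONNECTION" ]; then
    cd /some/directory/somewhere/named/Foo
fi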
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
    RequestTTY yes
    RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands by && will make the next command run only when the previous one was successful (as opposed to using ;, which executes commands sequentially). This is particularly useful when you need to cd into a directory before performing a command in it.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the following command.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command on a remote host, inside a specific directory, from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so that no pseudo-terminal has to be allocated) and by passing the option -T to explicitly tell the ssh command that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH
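For illustration (using the directory from the question), setting CDPATH on the remote host lets you cd into Foo from anywhere:
# In the remote shell; cd searches these prefixes in addition to the current dir
export CDPATH=.:/some/directory/somewhere/named
cd Foo    # resolves to /some/directory/somewhere/named/Foo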
Going one step further with the -t idea, I keep a set of scripts calling the one below to go to specific places in my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I write it here as it may be useful for some people. In my solution you have to enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh to the same directory you have been in your last login.
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
cd $(cat ~/.bash_lastpwd)
fi
This way you save your current path on every logout, and .bashrc puts you back into that directory after login.
PS: You can tweak it further, e.g. by using the SSH_CLIENT variable to decide whether to go into that directory or not, so you can differentiate between local logins and ssh, or even between different ssh clients.
Another way of going directly to a directory after logging in is to create an alias. Once you are logged in to your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (this works from any part of the system). If it is not in your bashrc, the alias will only work for the current session, so you can also add it to bashrc to use it in the future.
$ myfolder => takes you to that folder
I know this has been answered ages ago, but I found the question while trying to incorporate an ssh login in a bash script that, once logged in, runs a few commands, logs back out, and continues with the bash script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is to do this:
#!/bin/bash
sshpass -p "password" ssh user@server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this, you can connect by specifying both the user and the host:
ssh -t <user>@<host domain / IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append further commands to be executed on every login by adding them inside the double quotes, with a ; before each command.
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
SSH itself only provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is - by default - your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username
