OS X Bash script doesn't work but individual commands do

I'm trying to turn the instructions on this page about connecting to a SoftEther VPN on OS X into a bash script, but I'm running into some issues.
When I run each of these commands individually at the command line, I'm able to initiate the connection to the VPN just fine and set up the routing appropriately, but when I put it into a script, it doesn't work.
Here is the script in question:
#!/bin/bash
# current default gateway, saved before the default route is replaced below
GATEWAY=`route -n get default | grep gateway | awk '{print $2}'`
VPN_IP=130.158.6.123/32
VPN_GATEWAY=192.168.0.1
vpnclient start
vpncmd localhost /CLIENT /CMD AccountConnect HomeVPN;
ipconfig set tap0 DHCP;
ifconfig tap0 down; ifconfig tap0 up
echo "waiting for dhcp to get us an address..."
sleep 15
# send all traffic through the VPN, but keep the VPN server itself reachable via the old gateway
route delete default;
route -n add $VPN_IP $GATEWAY;
route add default $VPN_GATEWAY;
Upon testing, I have confirmed that GATEWAY gets the correct value and all the other variables are set correctly. The script seems to do everything correctly up until the part where it starts changing the routes. At first I thought it was because the interface hadn't had enough time to get an IP address, so I put a pretty long wait time in to make sure it had an IP before it started trying to change routes.
Any thoughts as to why this doesn't work when put into script form?

Just a guess: sudo doesn't work well in shell scripts, as it's an interactive tool and needs to prompt for a password. You might consider removing the sudo commands and running the entire script using sudo.
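A minimal sketch of that suggestion, assuming the script above is saved as vpn-connect.sh (the filename and the root check are illustrative, not from the original post): drop any per-command sudo and instead refuse to run unless the whole script already has root privileges.
#!/bin/bash
# bail out early unless the script was started as root (e.g. via: sudo ./vpn-connect.sh)
if [ "$(id -u)" -ne 0 ]; then
    echo "Please run this script as root, e.g.: sudo $0" >&2
    exit 1
fi
# ...the vpnclient/route commands from the script above go here...
Invoked as sudo ./vpn-connect.sh, every command in the script then runs with root privileges and no interactive password prompt interrupts it mid-run.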

Related

Bash script doesn't function as intended on usb connection

I have written a bash script which puts adb into TCP/IP mode and connects my device to my laptop for wireless debugging. This is the script at /bin/device_added.sh:
#!/bin/bash
adb shell ip -f inet addr show 2> /tmp/scripts.log
ip=$(adb shell ip -f inet addr show | egrep -o '192.*/' | sed 's/.$//')
adb tcpip 5555
adb connect $ip:5555
echo "USB device added at $(date)" >>/tmp/scripts.log
After configuring permissions with chmod, this works flawlessly on its own. But I want this script to be triggered whenever I plug in a USB device. I followed this answer to try to make this work. I created an 80-test.rules file at /etc/udev/rules.d and added this:
SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", RUN+="/bin/device_added.sh"
and reloaded the rules file using: sudo udevadm control --reload
Whenever I plug in the USB cable, the script gets run (the date gets logged in scripts.log), but my device doesn't get connected. What am I doing wrong? Why does the script work properly when I run it manually but not when it is triggered through udev?
Edit: On the basis of @markp-fuso's and @Charles Duffy's comments, I tried logging the error to the /tmp/scripts.log file. It turns out I am getting this error:
line 3: adb: command not found
Now the strange part is, I got this error earlier but solved it by placing the shell command before the tcpip command (at least that worked when I ran the script directly). How am I supposed to deal with this error now?
Update:
As @markp-fuso pointed out, the problem was that environment variables weren't accessible to that script. Hence I stored adb's location in a variable in the script and then used that variable throughout. My script now:
#!/bin/bash
adb=/home/pranil/Android/Sdk/platform-tools/adb
$adb shell ip -f inet addr show 2> /tmp/scripts.log
ip=$($adb shell ip -f inet addr show | egrep -o '192.*/' | sed 's/.$//')
$adb tcpip 5555
$adb connect $ip:5555
echo "USB device added at $(date)" >>/tmp/scripts.log
This solved the error I was getting in the logs, but adb still doesn't connect on the required port. I have no idea where I am going wrong now. One more thing: after my script runs, the offline emulator is no longer shown in the output of the adb devices command.
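For reference, an equivalent sketch of the same fix: instead of hard-coding the binary on every line, the platform-tools directory (path taken from the script above) can be appended to PATH at the top of the script, since udev hands the script only a minimal environment.
#!/bin/bash
# udev runs this with a minimal environment, so make adb findable explicitly
export PATH="$PATH:/home/pranil/Android/Sdk/platform-tools"
# ...the adb commands from the original script, unchanged...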

How To Obtain Remote Client SSH IP From Sudo'd Script

I'm trying to create a small setup script which requires the user to enter their own IP address... I'm trying to make the default IP address be that of the connected SSH session.
When I run ${SSH_CLIENT%% *} in the command line, it shows 99.99.99.99: command not found and when I do echo ${SSH_CLIENT%% *} it shows just 99.99.99.99 (where 99.99.99.99 is my actual WAN IP)...
However, the following code (when run in a bash script) shows
[6/9] Enter your public IP address. (eg. ) :
declare CONNECTED_IP=${SSH_CLIENT%% *}
read -e -p "[6/9] Enter your public IP address. (eg. $CONNECTED_IP) : " ipAddress
[ -z "$ipAddress" ] && ipAddress=$CONNECTED_IP
Update 1
Upon further review, it does work... except when I run the script with sudo. Could someone explain this behavior to me, and is there a work-around?
Update 2
So, it's been explained to me that this behavior is due to the fact that environment variables are not preserved when switching users (sudo) and that I could use the -E flag when executing the script to preserve the environment.
Unfortunately though, this is a script that will be shared and I therefore cannot ensure that the user executes the script with the -E flag. Furthermore, I don't even know if they'll use sudo at all.
That said, I'm looking for a consistent way to obtain the IP address of the user connected via SSH.
The reason the variable doesn't show when using sudo is that sudo doesn't automatically preserve the environment when you switch to another user. You can use the -E sudo option to preserve the current environment.
$ sudo -E ./script.sh
This should show the IP as expected.
-E, --preserve-env
    Indicates to the security policy that the user wishes to preserve
    their existing environment variables. The security policy may
    return an error if the user does not have permission to preserve
    the environment.
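If requiring users to remember -E is the concern, one hedged workaround (not from this thread, and assuming users may start the script unprivileged) is to have the script elevate itself with sudo -E before SSH_CLIENT is lost:
#!/bin/bash
# if not already root, re-run this same script under sudo -E so SSH_CLIENT survives
if [ "$(id -u)" -ne 0 ]; then
    exec sudo -E "$0" "$@"
fi

CONNECTED_IP=${SSH_CLIENT%% *}
read -e -p "[6/9] Enter your public IP address. (eg. $CONNECTED_IP) : " ipAddress
[ -z "$ipAddress" ] && ipAddress=$CONNECTED_IP
Note this only covers the case where the user starts the script without sudo; if they have already wrapped it in a plain sudo call, SSH_CLIENT is stripped before the script ever runs.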

where to change the default location of the .Xauthority file when logging in via ssh -X as a specific user

I need to change the .Xauthority file location for a group of users to be $HOME/tmp/.Xauthority rather than the default $HOME/.Xauthority.
I already tried what I could gather from several sources, like:
I set the environment variable in several files (/etc/.profile, .profile, .bashrc, etc.) with the following: XAUTHORITY=$HOME/tmp/.Xauthority
With the result of:
Any login attempt with a user in the sshx group (ssh -X server) results in a timeout locking $HOME/.Xauthority. It is as if nothing had changed. The interesting part is that if I echo $XAUTHORITY it shows $HOME/tmp/.Xauthority. xauth works as well, just not at login time.
Therefore the change I need must happen somewhere before ssh -X runs, or while the X connection is being established. Where do I have to change it so that it affects only a group of users? I do not want root, or users outside the sshX group, to be affected, since they may not have the directory.
The way I do it is to set XAUTHORITY=/tmp/Xauthority-username in ~/.ssh/environment, but that requires changing /etc/ssh/sshd_config to say PermitUserEnvironment yes.
I use /tmp because that keeps it local to each machine. With home directories on NFS, that becomes a bottleneck and causes race conditions where starting several apps simultaneously on multiple remote hosts can cause some to fail.
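Concretely, that answer amounts to two small changes on the server (Xauthority-username stands for the per-user naming scheme described above):
# ~/.ssh/environment (one variable per line; read by sshd at login)
XAUTHORITY=/tmp/Xauthority-username

# /etc/ssh/sshd_config (restart sshd afterwards)
PermitUserEnvironment yes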
I came up with a partial solution, but I now have .Xauthority relocated to ~/tmp/.Xauthority, which is great progress for now. (Ubuntu Server is the target OS.)
All the settings stay the same; only one file needs to be created, ~/.ssh/rc, which is run upon connection with ssh -X servername:
if read proto cookie && [ -n "$DISPLAY" ]; then
    if [ `echo $DISPLAY | cut -c1-10` = 'localhost:' ]; then
        # X11UseLocalhost=yes
        echo add unix:`echo $DISPLAY | cut -c11-` $proto $cookie
    else
        # X11UseLocalhost=no
        echo add $DISPLAY $proto $cookie
    fi | xauth -q -f ~/tmp/.Xauthority -
fi
This runs xauth and creates the file in the location you want; it also adds the entries to the .Xauthority file needed for proper authentication.
Now you need to modify ~/.profile, because when the shell is loaded it needs to know where the .Xauthority file is found. Therefore we add one line at the very top:
export XAUTHORITY=~/tmp/.Xauthority
This enables me to connect via ssh -X servername to a shell and start any X app. Let's try it by starting xeyes or xclock.
Cool, but another issue came up before this can be called fully solved, and I have no solution for it yet. If you try to start the X app directly from the remote, like:
x@y:~$ ssh -X servername xeyes
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
Error: Can't open display: localhost:11.0
This is an interesting error; if you google it there are a lot of answers, but the situation itself suggests that something is different when bash is loaded compared to when it is skipped. The only thing I can think of is the line in .profile which sets the XAUTHORITY variable, but how do I set it without loading a shell? Why does it work for a user which has the .Xauthority file in the default location (~/.Xauthority)?

I'm trying to get the MAC address as a variable in Linux, but it rarely works

I'm using the following code to get the MAC address of eth0 into a variable for use in a filename, but it rarely ever works. It isn't that it NEVER works, it is just unpredictable.
ntpdate -b 0.centos.pool.ntp.org
DATE=$(date +%s)
MAC=$(ifconfig eth0 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}' | sed 's/://g')
cp logfile logfilecp-$MAC-$DATE
Now, it seems to work less frequently if I use the ntpdate line, but regardless, it is wholly unpredictable. Anyone have any idea what I could do to make this work better? I end up with a filename like
logfile--1375195808.bz2
New info
I've got the script set up to run as a cron job (crontab -e). I notice that when it runs as a cron job, it doesn't get the MAC, but when I run it manually with ./runscript.bash it does get the MAC. Hopefully someone knows what might be causing this.
Thanks.
Try an easier method to get your MAC address than going through ifconfig, i.e.
cat /sys/class/net/eth0/address
I've tested it in a shell (not through a script) and it works like a charm:
TEST=`cat /sys/class/net/eth0/address`
touch /tmp/blabla-$TEST
EDIT for your second problem
In your cron script, add the full path of the binaries you're using (i.e. /sbin/ifconfig), or use my method as above :)
ip addr | grep link/ether | awk '{print $2}'
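Putting the two suggestions together, a minimal sketch of a cron-safe version of the original script (the ntpdate path is an assumption for CentOS; adjust it to wherever the binary actually lives on the box):
#!/bin/bash
# cron starts with a very small PATH, so set it (or use absolute paths throughout)
PATH=/usr/sbin:/usr/bin:/sbin:/bin

/usr/sbin/ntpdate -b 0.centos.pool.ntp.org
DATE=$(date +%s)
# read the MAC from sysfs instead of parsing ifconfig output
MAC=$(tr -d ':' < /sys/class/net/eth0/address)
cp logfile "logfilecp-$MAC-$DATE"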

Bash eval inside quotes

I have a bash script which takes one parameter and does something like this:
ssh -t someserver "setenv DISPLAY $1; /usr/bin/someprogram"
How can I force bash to substitute in the $1 instead of passing the literal characters "$1" as the display variable?
Based on your comment on sehe's answer, it sounds like you just want the remote command to use the local X display — so that the program is running on your remote server (someserver) but being displayed on the machine you ran the ssh command on.
This can be done by just passing -X, e.g.
ssh -X someserver /usr/bin/someprogram
For some reason, this doesn't work with a few programs, for example evince. I'm not really sure why. I'm pretty sure that evince is the only app I've had trouble forwarding back over an SSH connection.
If this isn't what you're aiming to do, please explain.
Edit: Are you aware of
ssh -X ...
ssh -Y ...
which already support X forwarding out of the box? Also look at
xhost +
in case you need to increase permissions to 'guests'.
If you want to forward non-standard X display address, you could always use
DISPLAY=localhost:3 ssh -XCt user@remote xterm
Bonus: to make ssh background after authentication, add '-f'
Locally? That should already work as shown. Remotely? Escape the $: \$
However, I'm not sure where the command would be taking its arguments ($1) from.
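A small sketch illustrating that answer, using the same command as in the question shown both ways:
# local expansion: $1 is the wrapper script's own first argument,
# substituted by the local shell before ssh sends the command
ssh -t someserver "setenv DISPLAY $1; /usr/bin/someprogram"

# remote expansion: the escaped \$1 survives the local shell,
# so the literal $1 reaches the remote side instead
ssh -t someserver "setenv DISPLAY \$1; /usr/bin/someprogram"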
