Verify IP address? - bash

I need to write a script that verifies whether a destination IP address (read from the keyboard) responds.
I suppose I have to use ping, but I'm not sure how.

To learn about the ping command, you may execute the command
man ping
or the command
ping --help
After executing the ping command in your script, you will need to process its return value and/or its output. Honza gave you a few commands that may be what you need, although you may want to do it differently for various reasons.
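As a hedged sketch (variable names are illustrative; -c and -W are the packet-count and reply-timeout options of Linux iputils ping), the core of such a script might look like:

read -p "Enter destination IP: " ip
# ping exits 0 if the host replied, non-zero otherwise
if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    echo "$ip is responding"
else
    echo "$ip is not responding"
fi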

Related

How to send input to a console/CLI program running on remote host using bash?

I have a script that I normally launch using the following syntax:
ssh -Yq user@host "xterm -e '. /home/user/bin/prog1 $arg1;prog2'"
(note: I've removed some of the complexities of the command, so please excuse if there are any syntax errors in the ssh command; it should not be relevant to the question)
This launches an xterm window that runs prog1, and after completion runs prog2. prog2 is a console-style program that performs some setup, then several seconds later waits for user input.
Is there a way, via a bash script (preferably without downloading external packages), to send data to prog2 running on $host?
I've looked into << (here-documents) and expect, but they're way over my head. My intuition is that there's probably a straightforward way of doing this, but I can't figure out what terms to search for. I also understand that I can remotely send keystrokes to a host using xdotool or something similar, but I'm hesitant to request a new package installation unless I know that's the only reasonable solution.
Thanks!
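One hedged sketch of a package-free approach, assuming prog2 reads line-based input from stdin (all paths and names here are illustrative): have prog2 read from a named pipe on the remote host, kept open by tail -f so prog2 never sees EOF, then write to that pipe from a later ssh call.

# on the remote host, start prog2 reading from a named pipe
ssh user@host 'mkfifo -m 600 /tmp/prog2.in; tail -f /tmp/prog2.in | prog2'
# later, from the local machine, send it a line of input
ssh user@host 'printf "some input\n" >> /tmp/prog2.in'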

Bash: Remote SSH ncat command requires sigint to exit

I have a bit of an interesting situation that I can't figure out.
I need the ability to issue a command remotely over SSH, which then pipes the results into an ncat tunnel back to the original server.
My commands are a little more complex than this (involving innobackupex, a MySQL backup utility), but this minimal example shows the same issue:
ncat -l 9970 &
ssh db-202 "echo 'testing' | ncat $BACKUPSERVER 9970"
... other commands
The issue is that after the echo command completes (or whatever is run), the script just hangs and the later commands don't run. I need to send a ctrl-c (SIGINT) to continue. Obviously this isn't ideal in the context of a bash script, where lots of things may need to happen automatically after this command completes.
Not sure if it's relevant, but sending a SIGINT doesn't always properly terminate the ncat tunnel on both sides. Sometimes I need to kill -9 it, probably due to the SIGINT messing something up.
Can anybody explain this behavior and how I can get around it? Or a better way to do what I want?
Thanks!
Finally figured it out.
I added the --recv-only and --send-only arguments to each side of the tunnel.
According to the manpage:
--send-only Only send data, ignoring received; quit on EOF
This may explain it, but I thought that any EOF would terminate the tunnel. Maybe without these arguments, both sides need to send an EOF?
Anyway, here are the commands that work.
ncat --recv-only -l 9970 &
ssh db-202 "echo 'testing' | ncat --send-only $BACKUPSERVER 9970"
Hopefully this helps someone in the future.
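For reference, a minimal end-to-end sketch of the fixed pattern (the hostname and output file are assumptions):

BACKUPSERVER=backup01.example.com         # assumed address of this machine
ncat --recv-only -l 9970 > backup.out &   # listener exits on EOF from the sender
listener=$!
ssh db-202 "echo 'testing' | ncat --send-only $BACKUPSERVER 9970"
wait "$listener"                          # returns as soon as the transfer ends
# ... other commands now run without a manual ctrl-c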

Why does ack over ssh not work?

I've got a simple bash script to remove some folders on a remote server over ssh. It basically does this:
THE_HOST=12.34.56.78
ssh me@$THE_HOST "rm /the/file/path/thefile.zip"
This works perfectly well. Before I do this I often search the contents of the files in a folder for a string using ack:
ack thestring /the/folder/path/
This works perfectly when I ssh into the server and run it, but it doesn't work as a one-shot command:
ssh me@$THE_HOST "ack thestring /the/folder/path/"
This seems to freeze or run forever: I get no output and the command never ends. Does anybody know why this doesn't work for ack?
It could be that ack behaves differently when it is not run in a terminal. Try using ssh's -t argument:
ssh -t me@$THE_HOST "ack thestring /the/folder/path/"
When ack detects that stdin is not a terminal (a tty device), it attempts to read the text to search from stdin instead of from the given file/folder. That's what happens when you run it through ssh: stdin is connected to the ssh connection, which does not look like a terminal (tty) to ack.
The -t argument to ssh allocates a tty and connects it to the stdin/stdout of the program you run; ack then thinks it is running in a terminal and uses the file/folder argument for searching.
See http://github.com/beyondgrep/ack2/issues/659
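You can watch this tty detection happen from the shell itself; a quick hedged illustration using bash's -t test:

ssh me@$THE_HOST '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
# prints: stdin is not a tty
ssh -t me@$THE_HOST '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
# prints: stdin is a tty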

How can I run two commands at exactly the same time on two different unix servers?

My requirement is that I have to reboot two servers at the same time (exactly the same timestamp). So my plan is to create two shell scripts that will ssh to the servers and trigger the reboot. My doubts are:
How can I run the same shell script on two servers at the same time (same timestamp)?
Even if I run Script1 & Script2, this will not ensure that the reboot is issued at the same time; a minor time difference will remain.
If you are doing it remotely, you could use a terminal emulator with broadcast input, so that what you type is sent to all open sessions. On Linux, tmux is one such emulator.
The other easy way is to write a shell script that waits for the same timestamp on both machines and then reboots both.
First, make sure both machines' clocks are aligned (use the best implementation of http://en.wikipedia.org/wiki/Network_Time_Protocol and your system's related utilities).
Then,
If you need this just once: on each server, do a
echo /path/to/your/script | at ....
(.... being when you want it; see man at).
If you need to do it several times: use crontab instead of at
(see man cron and man crontab)
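A hedged sketch combining the two steps (hostnames and the time are illustrative assumptions; both clocks must already be NTP-synchronized):

WHEN='23:30'                              # the agreed reboot instant
for h in server1 server2; do
    ssh "root@$h" "echo /sbin/reboot | at $WHEN"
done
# at(1) on each server now fires the reboot at the same local timestamp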

Spawn subshell for SSH and continue with program flow

I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to university and take classes at two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called, and that my script knows which one based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2          # always do this, regardless of where I am
if I'm at home:
    start tweetdeck, thunderbird, skype
else if I'm at campusA:
    activate the login script   # I need to log in on a web form before I get internet access.
                                # I have written a script to automate this.
                                # Wait for this script to finish before doing anything else.
    myProg2 &                   # I want myProg2 running in the background until I shut down my computer.
else if I'm at campusB:
    ssh username@domain         # this is the problematic line
    myProg2 &                   # I want myProg2 running in the background until I shut down my computer.
    start tweetdeck, thunderbird
close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh to username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I never exit the ssh command. I can't ^C out of it, which means the rest of the script never executes. I have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshells, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this? I'd appreciate any help; I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script.
Furthermore, backgrounding the ssh call (ssh username@domain &) doesn't seem to do any good (can anyone explain why?).
Thank you in advance
EDIT
I must clarify that the ssh connection has to be active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to remain active.
I had a script that looped over 6 servers, calling each via ssh in the background. In one part of the script there was a misbehaving vendor application that didn't 'let go' of the connection properly (other parts of the script using ssh in the background worked fine).
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(A teammate found this on the web, and we had spent so much time on it that I never went back to read the article that suggested it. The man page on our system gave no hint that such a thing was possible.)
Hope this helps.
You may want to try to double background myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
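A related hedged alternative in plain bash is to detach the job from the shell's job table with disown, so it survives the shell exiting (assuming myProg2 is on the PATH):

myProg2 </dev/null >/dev/null 2>&1 &   # cut the job off from the tty's stdio
disown %+                              # forget the job so no SIGHUP is sent to it at exit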
Running ssh with a pseudo-tty in a background shell
In addition to @shellter's answer, I would like to add a precision.
Where @shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I run:
man -Pcol\ -b ssh | grep -A3 '^ *-t '
I can read:
-t      Force pseudo-tty allocation. This can be used to execute arbitrary
        screen-based programs on a remote machine, which can be very useful,
        e.g. when implementing menu services. Multiple -t options force tty
        allocation, even if ssh has no local tty.
Yes: Multiple -t options force tty allocation, even if ssh has no local tty.
This means: if you remotely run a tool that requires access to a pseudo-terminal (a pty, like /dev/pts/0), you can run it by using the -t switch.
But this only works if ssh is itself run from a shell console (i.e. has its own pty). If you plan to run it in a shell session without a console, like a background script, you can use multiple -t options to force pseudo-tty allocation from ssh.
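For example (a hedged one-liner; top stands in for any screen-based program), from a cron job or other tty-less context:

ssh -tt username@domain top   # -tt = two -t options: allocate a tty even without a local one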
Multiple ssh shells over one ssh connection
In addition to the answers from @tommy and @geekosaur, a few precisions:
@tommy points to a very interesting feature of ssh. I'm not sure it has a lot to do with the answer, but when talking about long-lived connections, this feature has to be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over that single connection:
-L lets you forward connections made to a local port through the tunnel to a host/port reachable from the remote side. The full syntax is -L localip:localport:distip:distport, where localip can be specified to permit other hosts on the same local network to access the same TCP bind, and distip can be any host reachable from the distant network (not only localhost). Example: -L 192.168.1.31:8443:google.com:443 permits any host on the local network to reach google through your host via http://192.168.1.31:8443.
-R does the same in the reverse direction.
-M tells ssh to open a local unix socket so that subsequent ssh sessions can bind to the same connection. Simply open two terminal windows. First, in both windows, run ssh somewhere, then run netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP): you will see two TCP connections.
Then close all your ssh sessions. In the first terminal, run ssh -M somewhere, and in the second, simply ssh somewhere. In the second terminal you may see:
$ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now run netstat -tan | grep 192.168.1.31:22 (in either of the two open ssh sessions), you will see that there is only one TCP connection.
This kind of feature can be used in combination with -L and maybe some sleep 86399...
To work around a TCP-killer router that closes every TCP connection inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
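A built-in alternative worth knowing (a standard OpenSSH client option, not something from the post above) is to let ssh itself send keep-alive probes over the encrypted channel:

ssh -o ServerAliveInterval=60 somewhere   # ssh probes the server every 60 seconds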
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes, but they don't return control to the calling shell until they finish. If you want to fork a sub-shell to do the work for you in the background, you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in order relative to each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well. It makes the command "immune" to hangup signals.
nohup ssh username@domain (try with and without a & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public-key authentication, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up; see the links below and the short sketch after them. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
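As a minimal hedged sketch of the usual setup (the resources above cover the details and caveats; use -t rsa on older OpenSSH releases):

ssh-keygen -t ed25519                         # generate a key pair under ~/.ssh
ssh-copy-id username@domain                   # append the public key to the remote authorized_keys
ssh -o 'BatchMode=yes' username@domain true   # should now succeed without a password prompt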
If you do go this route, I also suggest running ssh in "batch mode", which disables password querying and automatically disconnects from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
Persistence
Then if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
The problem with & is that ssh loses access to its standard input (the terminal), so when it goes to read something to send to the other side it either gets an error and exits, or is killed by the system with SIGTTIN which will implicitly suspend it. The -n and -f options are used to deal with this: -n tells it not to use standard input, -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! > ~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
    set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
    if [ $# = 1 ]; then
        kill $1 >/dev/null 2>&1
    fi
    rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. Not sure why & wouldn't work, but I guess it's being interpreted as part of the command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd
screen -d -m -S sessionName cmd &
# reconnect with
screen -r sessionName
