I've been experimenting with Pis for a little while now and am close to completing my first project. I have all the bits working, but I'm struggling to put them together into an automated process.
Basically, I have a Pi set up to run an fbi slideshow from a specific folder, and I want it to constantly look for a pre-established Wifi network; when it finds this network, it needs to run an update script. I've got these two bits working.
From here, I want the Pi to boot straight into fbi while running the checking script in the background. If the checking script finds the Wifi network, it should run as normal (preferably without ending the slideshow), and when it's done, fbi should have an updated selection of images to display (if a restart of fbi is necessary, then so be it).
I'm coming up short on achieving this. I can run one script or the other, and if I automate the checking script with rc.local (cron hasn't worked, though I dare say I'm doing something wrong), it just gets stuck in a checking loop before login, which makes sense given the script.
Here's the monitoring script:
#!/bin/bash
while true ; do
    if ifconfig wlan0 | grep -q "inet addr:" ; then
        echo "Wifi connected!"
        echo "Initiating Grive sync!"
        (cd /home/pi/images/; ./grive -s Pi_Test -V)
        sleep 60
    else
        echo "Wifi disconnected! Attempting to reconnect now."
        ifup --force wlan0
        sleep 10
    fi
done
and in case it's relevant, here's the command that runs the fbi slideshow:
fbi -noverbose -a -t 10 -u /home/pi/images/Pi_Test/*.jpg
I do not have a Pi, but I have used cron on my VPS running CentOS; the overall procedure should be similar.
To have a script executed by cron, you need to:
Edit /etc/cron.allow (if your system uses it)
Add your user id to this file so that you are allowed to use crontab.
crontab -e
Use this command to add the rules you want to fire.
From your description, it seems you already know the syntax for adding rules to the cron table.
After that, you can use crontab -l to verify your change.
As for getting stuck before login, that is very likely due to the while loop. You can get rid of the while and the sleep, because cron will invoke your script periodically for you.
The following, therefore, should not suffer from the stuck issue:
if ifconfig wlan0 | grep -q "inet addr:" ; then
    echo "Wifi connected!"
    echo "Initiating Grive sync!"
    (cd /home/pi/images/; ./grive -s Pi_Test -V)
else
    echo "Wifi disconnected! Attempting to reconnect now."
    ifup --force wlan0
fi
The trick, instead, is to have a line similar to this in your cron table:
*/1 * * * * /home/David_Legassick/test.sh
The */1 asks cron to call your script test.sh every minute (a plain * in the minute field means the same thing).
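Putting it together, a minimal sketch of a complete entry (the script path and log file are my assumptions, not from the question); redirecting output to a log also gives you somewhere to read those echo messages, since cron jobs have no terminal:
* * * * * /home/pi/wifi_check.sh >> /home/pi/wifi_check.log 2>&1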
Due to some issues I won't elaborate on here, I made a bash script which pings google every 10 minutes; if there is a response, it keeps the loop running, and if not, the PC restarts. After a lot of hurdles I managed to write the script and make it start on bootup. The issue is that I want to see the results in a terminal, i.e. I want to keep monitoring it, but the terminal does not open on bootup. It does open if I run the script manually as ./net.sh.
The script is running on startup; that much I know, because I use another script to open an application and it works flawlessly.
My system information
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial
The contents of my net.sh bash script are
#!/bin/bash
xfce4-terminal &
sleep 30
while true
do
    if ping -c1 google.com; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
I put the scripts in /usr/bin and added the startup calls to /etc/rc.local.
So I did some further research, and with help from reddit I realized that the reason I couldn't get it to show in a terminal was that the script was starting on bootup, but I needed it to start after user login. So I added the script to Startup Applications (which can be found by searching the start menu, if that's what it's called). It was still giving issues, so I divided the script into two parts.
I put net.sh in Startup Applications and had it launch my main script, which I named net_loop.sh.
This is how the net.sh script looks
#!/bin/bash
sleep 20
xfce4-terminal -e /usr/bin/net_loop.sh
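As a side note, xfce4-terminal also has a --hold option that keeps the window open after the child command exits; a possible variant for debugging:
xfce4-terminal --hold -e /usr/bin/net_loop.sh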
And the net_loop.sh
#!/bin/bash
while true
do
    if ping -c1 google.com; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
The result: the output of net_loop.sh now appears in its own terminal window.
Note: I used help from this thread
If a one-minute interval is usable, why not use cron to start your script?
$> crontab -e
or
$> sudo crontab -e
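For example, a sketch of a root crontab entry that replaces the while/sleep loop with a ten-minute schedule (the script name, path, and log file are placeholders of mine); the script itself would then contain only the single ping check and the systemctl reboot, without the surrounding loop:
*/10 * * * * /usr/bin/net_check.sh >> /var/log/net_check.log 2>&1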
I'm trying to write a bash script.
The script should check whether the MC server is running; if it has crashed or stopped, the script should start the server again automatically.
I'll use crontab to run the script every minute. I think I could even run it every second without stressing the CPU too much. I would also like to know when the server was restarted, so I'm going to print the date to the "RestartLog" file.
This is what I have so far:
#!/bin/sh
ps auxw | grep start.sh | grep -v grep > /dev/null
if [ $? != 0 ]
then
cd /home/minecraft/minecraft/ && ./start.sh && echo "Server restarted on: $(date)" >> /home/minecraft/minecraft/RestartLog.txt > /dev/null
fi
I've just started learning Bash, and I'm not sure if this is the right way to do it.
The use of cron is possible, and there are other (better) solutions (monit, supervisord, etc.). But that is not the question; you asked for "the right way". The right way is difficult to define, but understanding the limits and problems in your code may help you.
Execution under normal cron happens at most once per minute. That means your minecraft server may be down for up to 59 seconds before it is restarted.
#!/bin/sh
You should have the #! at the very beginning of the first line. I don't know if this is a cut/paste problem, but it is rather important. Also, you might want to use #!/bin/bash instead of #!/bin/sh to actually use bash.
ps auxw | grep start.sh | grep -v grep > /dev/null
Some may suggest using ps -ef, but that is a question of taste. You may even use ps -ef | grep [s]tart.sh to avoid the second grep. The main problem with this line, however, is that you are parsing the process list for a fairly generic start.sh. That may be OK if you have a dedicated server for this, but if there are more users on the machine, you run the risk that someone else runs a start.sh for something completely different.
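One way to tighten the match is pgrep -f, which matches against the full command line; a sketch (it assumes the pattern actually appears in the server's command line, so adjust it to what your process list shows):
pgrep -f 'minecraft/start.sh' > /dev/null   # exit status 0 if a matching process exists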
if [ $? != 0 ]
then
There was already a comment about the use of $? and clean code.
cd /home/minecraft/minecraft/ && ./start.sh && echo "Server restarted on: $(date)" >> /home/minecraft/minecraft/RestartLog.txt > /dev/null
It is a good idea to keep a log of the restarts. In this line, you make the execution of ./start.sh dependent on the cd succeeding. Also, the echo only gets executed after ./start.sh exits.
So that leaves me with a question: does start.sh keep running as long as the server runs (in that case the ps test is OK, but the && echo makes no sense), or does start.sh exit while leaving the minecraft server in the background (in that case the ps|grep won't work correctly, but it makes sense to write the log record only if start.sh exits correctly)?
fi
(no remarks for the fi)
If start.sh blocks until the server exits/crashes, you'd be better off simply restarting it in an infinite loop without involving cron. Simply type in a console (or put into another script):
#!/bin/bash
cd /home/minecraft/minecraft/
while sleep 3; do
    echo "$(date) server (re)start" >> restart.log
    ./start.sh   # blocks until server crashes
done
But if it doesn't block (i.e. if start.sh starts the server and then returns while the server keeps running), you would need a different check to verify that the server is actually still running, other than ps | grep start.sh.
PS: To kill the infinite loop you have to press Ctrl+C twice: once to stop ./start.sh and once to exit the sleep.
You can use monit for this task; see the documentation. It is available in most linux distributions and has a straightforward config. You can find some examples in this post.
For your app it will look something like
check process minecraftserver
    matching "start.sh"
    start program = "/home/minecraft/minecraft/start.sh"
    stop program = "/home/minecraft/minecraft/stop.sh"
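After adding a stanza like this to your monit configuration, reload monit and verify with its standard commands:
sudo monit reload    # re-read the configuration
sudo monit status    # show the state of monitored processes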
I wrote this answer because sometimes the most efficient solution already exists and you don't have to code anything. Also, follow the suggestion of William Pursell and use the init system of your OS (systemd, upstart, SysV init, etc.) to host your scripts.
Find more:
Shell Script For Process Monitoring
I use GMediaRenderer to send audio via UPNP from a Raspberry Pi. Occasionally, for reasons unknown, I have to SSH into my Pi and run sudo service gmediarenderer restart to get it working properly again. I'd like to add a command to crontab or similar that periodically checks whether the service is running properly. I already have a crontab entry that checks whether the service is running and starts it if it isn't. The trouble I'm having is that sometimes, even though the service is running, it doesn't appear to be communicating with UPNP control points. Executing the restart command brings it back, so I assume the service has crashed without closing down.
Does anyone know how to programmatically check (preferably using a bash script) whether the GMediaRenderer service is up and running?
I have found a solution to this. The command gssdp-discover returns a list of active renderers. I set up a sudo crontab job to run a bash script every minute that checks whether a particular renderer is present, and restarts gmediarenderer if it isn't found.
The following command will list your active renderers:
gssdp-discover -i wlan0 --timeout=3
Change wlan0 above to match your network connection. In my case, the renderer I'm interested in is listed as urn:av-openhome-org:service:Info:1 (run the command with and without the renderer active, and look for the entry that only appears when it's running). So my bash script contains the following:
if gssdp-discover -i wlan0 --timeout=3 --target=urn:av-openhome-org:service:Info:1 | grep -q available; then
    echo "OpenHome renderer is already running"
else
    echo "restarting gmediarenderer"
    /etc/init.d/gmediarenderer stop
    /etc/init.d/gmediarenderer start
fi
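For completeness, the crontab entry driving this could look like the following (the script path and log file are placeholders of mine):
* * * * * /home/pi/check_renderer.sh >> /var/log/renderer_check.log 2>&1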
I have the following in my crontab to sync folders between my servers; since they run behind round-robin DNS, it syncs every minute.
* * * * * rsync -ar --delete -e ssh user@skynet.rizzler.se:/home/user/folder1/ /home/user/folder1/ >/dev/null 2>&1
* * * * * rsync -ar --delete -e ssh user@skynet.rizzler.se:/home/user/folder2/ /home/user/folder2/ >/dev/null 2>&1
This rsync works for small files, but the data is getting bigger, so I am wondering how I can get this into a script, put that script into my crontab, and have the script check each time it runs whether it is already running. If it is, it should do nothing; if not, it should go ahead and start the rsync.
Anyone know how I can do this?
You could write the script's PID into a file with echo $$ > /var/run/rsync_job.pid
Then, when the script starts, you could first check whether that file exists; if it does, read the PID from it and see whether that process is still running:
PID=$(cat /var/run/rsync_job.pid)   # read the saved PID
if ps -p "$PID" &>/dev/null; then
    # it's still running
    exit
fi
As swornabsent noted in the comments, you can use file-based locks, though that SO link identifies some good reasons not to use it. See this SO question about quick-and-dirty locking in shell scripts, which has a great answer about advantages to directory-based locks instead.
To identify whether a process is running, you may also choose to use pgrep, though that may not be a good idea in the general case because of the window between the check and the moment the new process starts up.
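Another option worth mentioning: util-linux ships flock(1), which wraps a command in a file lock, and with -n it simply exits if another instance already holds the lock. A sketch applied to the first crontab line (the lock file path is arbitrary):
* * * * * flock -n /tmp/rsync_folder1.lock rsync -ar --delete -e ssh user@skynet.rizzler.se:/home/user/folder1/ /home/user/folder1/ >/dev/null 2>&1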
I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to university and take classes on two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called, and that my script knows which one I'm connected to based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2             # always do this, regardless of where I am
if Im at home:
    start tweetdeck, thunderbird, skype
else if Im at campusA:
    activate the login script # I need to log in on a webform before I get internet access.
                              # I have written a script to automate this.
                              # Wait for this script to finish before doing anything else.
    myProg2 &                 # I want myProg2 running in the background until I shut down my computer.
else if Im at campusB:
    ssh username@domain       # this is the problematic line
    myProg2 &                 # I want myProg2 running in the background until I shut down my computer.
    start tweetdeck, thunderbird
    close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh as username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I don't exit the ssh command. I can't ^C out of it, which means the rest of the script is never executed. I have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshells, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this? I'd appreciate any help; I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script.
Further, ampersanding the ssh call (ssh username@domain &) doesn't seem to do any good (can anyone explain why?).
Thank you in advance
EDIT
I must clarify that the ssh connection has to be active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to remain active.
I had a script that looped over 6 servers, calling ssh in the background. In one part of the script there was a misbehaving vendor application that didn't 'let go' of the connection properly (other parts of the script using ssh in the background worked fine).
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(A teammate found this on the web, and we had spent so much time on it that I never went back to read the article that suggested it. The man page on our system gave no hint that such a thing was possible.)
Hope this helps.
You may want to try to double background myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
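A third option, using only util-linux's setsid, runs the program in a new session so that it has no controlling terminal; a minimal sketch, with redirections to detach it fully:
setsid myProg2 > /dev/null 2>&1 < /dev/null &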
Keeping an ssh with a pseudo-tty in a background shell
In addition to @shellter's answer, I would like to add some precision.
@shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I hit:
man -Pcol\ -b ssh| grep -A3 '^ *-t '
I could read:
-t Force pseudo-tty allocation. This can be used to execute arbi‐
trary screen-based programs on a remote machine, which can be
very useful, e.g. when implementing menu services. Multiple -t
options force tty allocation, even if ssh has no local tty.
Yes: multiple -t options force tty allocation, even if ssh has no local tty.
This means that if you remotely run a tool that requires access to a pseudo terminal (a pty, like /dev/pts/0), you can run it by using the -t switch.
But this works directly only when ssh is run from a shell console (i.e. has its own pty). If you plan to run it in a session without a console, such as a background script, use multiple -t options to force pseudo-tty allocation from ssh.
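For instance, a background invocation with forced pty allocation might look like this (hostname and command are placeholders):
ssh -t -t user@host 'remote-command' &   # double -t allocates a remote pty even without a local one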
Multiple ssh shells over one ssh connection
In addition to the answers from @tommy and @geekosaur, a few points of precision:
@tommy points to a very interesting feature of ssh. I'm not sure it bears directly on the answer, but when talking about long-lived connections, this feature should be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over that one connection:
-L lets you forward TCP connections from your local machine/network to a destination reachable from the remote side. (The full syntax is -L localip:localport:distip:distport, where localip can be specified so that other hosts on the same local network can reach the same TCP bind, and distip can be any host on the distant network, not only localhost.) Sample: -L 192.168.1.31:8443:google.com:443 permits any host on the local network to reach google through your host: http://192.168.1.31:8443
-R does the same in the reverse direction.
-M tells ssh to open a local unix socket to which subsequent ssh sessions can bind. To see this, open two terminal windows and run ssh somewhere in both, then run netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP): you will see one TCP connection per session. Now close all your ssh sessions, run ssh -M somewhere in the first terminal, and simply ssh somewhere in the second. You may see in the second terminal:
$ ssh somewhere
+ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now run netstat -tan | grep 192.168.1.31:22 (from either of the two open ssh sessions), you will see that there is only one TCP connection.
This kind of feature can be used in combination with -L and maybe some sleep 86399...
For example, to work around a TCP-killing router that closes every TCP connection inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes but don't return control to the calling shell until they finish. If you want a sub-shell to do the work in the background, you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in order relative to each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well, which makes the command "immune" to hangup signals.
nohup ssh username@domain (try with and without the & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public-key cryptography, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
If you do go this route, I also suggest running ssh in "batch mode" which will disable password querying and will automatically disconnect from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
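The usual key setup with the standard OpenSSH tools looks something like this (a sketch; the key type is my choice, and the resources above cover the details):
ssh-keygen -t ed25519                        # generate a key pair; accept the defaults
ssh-copy-id username@domain                  # install the public key on the server
ssh -o 'BatchMode=yes' username@domain true  # verify that passwordless login works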
Persistence
Then if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
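Alternatively, OpenSSH's own keep-alive option can serve the same purpose without the remote loop (a possible alternative using a standard OpenSSH option):
ssh -o 'BatchMode=yes' -o 'ServerAliveInterval=60' username@domain   # send a keep-alive probe every 60 seconds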
The problem with & is that ssh loses access to its standard input (the terminal), so when it tries to read something to send to the other side, it either gets an error and exits, or receives SIGTTIN, which implicitly suspends it. The -n and -f options exist to deal with this: -n tells ssh not to use standard input, and -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! > ~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
if [ $# = 1 ]; then
kill $1 >/dev/null 2>&1
fi
rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos.)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. Not sure why & wouldn't work, but I guess it's interpreting it as a command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd     # start cmd in a detached screen session
screen -d -m -S sessionName cmd &   # or background the screen invocation itself
# reconnect with
screen -r sessionName