rc.local python script unrelated error: "getrandom indicates that the ..." - shell

rc.local
/home/pi/home/gate/run.sh &
exit 0
/home/pi/home/gate/run.sh
#!/bin/sh
cd /home/pi/home/gate
export CREDENTIALS=credentials.json
sudo -E -H -u pi sh -c 'python3 localclient.py > log.txt 2>&1'
All I get in log.txt (update: plus some of my debug prints; initially I forgot to set PYTHONUNBUFFERED=1) is:
getrandom indicates that the entropy pool has not been initialized. Rather than continue with poor entropy, this process will block until entropy is available.
Also, I'm using sudo because su - pi didn't work (I got import errors, maybe because it didn't set $HOME?).
If I run run.sh manually (that is, not automatically on startup), it works.
Also, the process uses all of the CPU (while blocking until more entropy is available?) and the Pi gets hot.
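For reference, a minimal sketch of run.sh with the unbuffered-output change mentioned in the update (everything else is unchanged from the question):
#!/bin/sh
cd /home/pi/home/gate
export CREDENTIALS=credentials.json
# Force Python to flush its prints immediately so they show up in log.txt
export PYTHONUNBUFFERED=1
sudo -E -H -u pi sh -c 'python3 localclient.py > log.txt 2>&1'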

Related

Executing notify-send from fish script as cronjob

I am trying to call notify-send from a fish script as a cronjob. Even though the script is being called by cron, the notification does not pop up on my display. I am not sure where it is failing: whether notify-send is being executed at all, whether it is a shell problem, or something else. Executing the script in the terminal produces the expected result (i.e. a popup window).
in crontab -e -u $USER:
SHELL=/bin/fish
* * * * * memcheck >> /tmp/cron.memcheck.log
Running tail --follow /tmp/cron.memcheck.log shows that the script is being called, since it is echoing the debug output into the log file, but it fails to launch notify-send.
This is my (noobish) script:
# Defined in /home/mio/.config/fish/functions/memcheck.fish # line 2
function memcheck
    set MEM_USED (free | string replace '3914132' '' | string match 'Mem: [ ]{1,}[0-9]{1,}' --regex | string match '\d+$' --regex)
    #echo $MEM_USED
    set MEM_CAP 3914132
    set MEM_FREE (math $MEM_CAP - $MEM_USED)
    echo $MEM_FREE
    if test $MEM_FREE -lt 8700700
        echo "WARNING: memory usage out of control. 21:10"
        set DISPLAY :0.0
        echo $DISPLAY
        echo $USER
        /usr/bin/notify-send "Memory Usage" $MEM_FREE --urgency=critical
    end
end
I've read that in some instances notify-send cannot find the display and that setting $DISPLAY to :0.0 might do the trick. If I echo $DISPLAY in my terminal I get :0.0. Also, echoing $USER gives me my user name, which I expected since I ran crontab -u mio -e and didn't edit /etc/crontab directly. Thanks for your time.
if I echo $DISPLAY in my terminal I get :0.0
Yes, but your cronjob doesn't run in your terminal.
In Unix, environment variables are passed from parent processes to their children when they're started.
The fish inside your terminal is a child of that terminal, which has $DISPLAY set to contact X.
But your cronjobs are run by your cron daemon, which is typically a child of your init process, which in turn doesn't have any parent. So it inherits the environment of init.
Set $DISPLAY in your script. This isn't pretty (and I can't say I like the approach of having a cronjob that sends notifications to begin with), but it should work, at least if you have the typical setup with one X server.
Note that fish is entirely irrelevant in this case - it would happen no matter what you picked as shell.
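For example (a sketch, assuming the usual single local X server on :0.0): you can either export the variable inside the fish function with set -x DISPLAY :0.0, or set it at the top of the crontab so every job inherits it:
SHELL=/bin/fish
DISPLAY=:0.0
* * * * * memcheck >> /tmp/cron.memcheck.log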
Some plausible alternatives (though I've not looked into them far):
Run a watch job in a terminal or via your DE's autostart mechanism. This just reruns things every X seconds, but has $DISPLAY
Use systemd's timer stuff, in particular as a user. There's a command to "upload" an environment variable to systemd, so it can then use it in timers.
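If I remember correctly, the "upload" command referred to here is import-environment; run once from a terminal inside the X session, it lets the user-level systemd instance (and hence user timers and services) see the display:
systemctl --user import-environment DISPLAY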

How can I make chromium run on startup using Raspberry Pi 3?

I have a Raspberry Pi 3 Model B with the Raspbian Jessie operating system.
I'm trying to open Chromium on startup.
I wrote a simple script:
#!/bin/bash
/usr/bin/chromium-browser --noerrdialogs --disable-session-crashed-bubble --disable-infobars --kiosk http://www.google.com
exit 0
I can run the script manually and it works perfectly.
I have read about a lot of different ways to run this script on startup.
I have tried:
adding the line @reboot path/to/my/script to the crontab -e file, with no success.
Also I have tried to edit the /etc/rc.local file by adding this line:
#!/bin/sh -e
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
/home/pi/Desktop/script1.sh& <-------- THIS LINE
fi
exit 0
I have checked that the script is executable and rc.local too:
rwxrwxrwx 1 pi pi script1.sh
rwxr-xr-x 1 root root rc.local
I can see the script1.sh task in my Task Manager (it runs as root) but nothing happens.
The default user is pi and I log in as the pi user and not as root; maybe this is the problem?
Can someone explain to me what the problem is and why I can see the script only in the Task Manager? What should I do?
TNX!
UPDATE
I have changed rc.local to be like this:
#!/bin/sh -e
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
su - pi -c "/usr/bin/chromium-browser --noerrdialogs --disable-session-crashed-bubble --disable-infobars --kiosk http://www.google.com &"
fi
exit 0
still does not work for me :|
Check out the verified answer on this question...
Running Shell Script after boot on Raspberry PI
Looks like you need to run the script as the user pi.
su - pi -c "/usr/bin/chromium-browser --noerrdialogs --disable-session-crashed-bubble --disable-infobars --kiosk http://www.google.com &"
EDIT: I missed the & at the end of the command.
I did a small hack...
I added the line @lxterminal to the end of this file:
nano .config/lxsession/LXDE-pi/autostart
It will auto-start terminal on boot.
Then I edited the .bashrc file (sudo nano .bashrc).
At the end of the file, I added the path to my script.
./home/pi/Desktop/script.sh
It means that:
The terminal will open every time you boot your Raspberry Pi (first command).
Every time that terminal runs, my script will run also (second command)
It does work for me.
TNX for the help :)
Adding the shell script path directly to ~/.config/lxsession/LXDE-pi/autostart (not to ~/.bashrc) works better.
That way it does not execute the command on every terminal session (including SSH).
Try this in the autostart file instead:
@sh /home/pi/Desktop/script.sh &
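For illustration, a typical LXDE-pi autostart file with that line appended might look something like this (the existing entries are assumptions; they vary between Raspbian images):
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@sh /home/pi/Desktop/script.sh &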

Problems with Raspbian autostart via /etc/init.d

(Sorry for bad English, I'm German)
I'm trying (without success) to make my own program start automatically after booting (on a Raspberry Pi with Raspbian).
This is my script (note: you have to run this program with root privileges; note #2: there must be an empty file called "/home/testLog.txt" with write privileges for every user):
rm /etc/init.d/RMStart
echo "
#! /bin/sh
### BEGIN INIT INFO
# Provides: bla1
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: bla2
# Description: bla3
### END INIT INFO
#Switch case for the first parameter
case \"\$1\" in
start)
echo \"Start\" >> /home/testLog.txt
echo runlevel >> /home/testLog.txt
;;
stop)
echo \"Stop\" >> /home/testLog.txt
echo runlevel >> /home/testLog.txt
;;
restart)
echo \"Restart\" >> /home/testLog.txt
echo runlevel >> /home/testLog.txt
;;
*)
echo \"something else\" >> /home/testLog.txt
;;
esac
exit 0
" >> /etc/init.d/RMStart
chmod +x /etc/init.d/RMStart
update-rc.d RMStart remove #Remove older versions of this program ... in theory
update-rc.d RMStart defaults #Install new version of this program ... in theory
I've rebooted the Raspberry Pi, but the file /home/testLog.txt is still empty.
However, if I run the command: "/etc/init.d/RMStart" or "/etc/init.d/RMStart start" there is a new entry in /home/testLog.txt.
I would be thankful if anyone knows why the file /home/testLog.txt is still empty and how I could fix that.
Update:
I've tried a new installation script:
#RMS install script
chmod +x botComp.sh
rm /home/pi/RMS
pkill RMS
./botComp.sh
cp RMS /home/pi
chmod +x /home/pi/RMS
rm /etc/init.d/startRMS
sudo echo "#!/bin/sh
### BEGIN INIT INFO
# Provides: fqew
# Required-Start:
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 6
# Short-Description: sfwef
# Description: gfewf
### END INIT INFO
# Actions
case \"\$1\" in
start)
# START
su pi sh -c \" /home/pi/RMS \"
;;
stop)
# STOP
;;
restart)
# RESTART
;;
esac
exit 0 " >> /etc/init.d/startRMS
chmod +x /etc/init.d/startRMS
update-rc.d startRMS remove
update-rc.d startRMS defaults
The only difference I can see is the name of the script (/etc/init.d/startRMS instead of /etc/init.d/RMStart).
The script works, RMS is running.
It's not really a problem, but the script outputs:
insserv: script RMStart: service F already provided!
insserv: script RMStart: service F already provided!
I've added the line system("runlevel >> /home/pi/runlevelLog.txt"); in the program (RMS), but the content of /home/pi/runlevelLog.txt is: "unknown".
Does RMS start at runlevel 3? How can I verify this? (I think runlevel 3 is ideal, because RMS needs a network connection.) Thank you for your help.
Is /etc/init.d/RMStart definitely being executed on reboot? Use ls -lu to check the last time the file was accessed, wait a minute before rebooting, and repeat the command once you're back up. If the access time hasn't moved on, then your script isn't being run, which would explain the empty file, as your script looks OK.
You should also double-check that update-rc.d has created symbolic links to your script in the appropriate runlevel directories, e.g. does /etc/rc2.d/ contain a link for RMStart?
Another sanity check would be running your script via the symbolic link in that directory rather than from /etc/init.d: does running it that way generate output in /home/testLog.txt?
Let me know what you find and we'll take it from there.
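A quick way to check both points (a sketch; the exact link names, such as S01RMStart, depend on the sequence number update-rc.d assigned):
# list any symlinks update-rc.d created for the script
ls -l /etc/rc?.d/ | grep RMStart
# check when the init script itself was last read
ls -lu /etc/init.d/RMStart
# run it through one of the links, the way init would
/etc/rc2.d/S01RMStart start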
EDIT: attempting to replicate..
Well, I managed to find my Pi; the good news is that neither of us is going mad, because it worked perfectly the first time, as we both believed it should.
I took a copy of your file, and I wrote a quick script (x) to check the exit codes from update-rc.d just to make 100% sure it wasn't complaining about anything.
Hopefully you can follow what I did in the screenshot above - I replicated the steps you followed almost exactly, with a bit of extra checking along the way. The script certainly works as designed when called directly.
I then rebooted immediately and checked testLog.txt as soon as the system was up. You can see two entries in the file, which is the expected behaviour, as init would have run /etc/rc6.d/K01RMStart as the system went down for reboot, and /etc/rc5.d/S01RMStart as it came up again.
Unfortunately this doesn't help you much.....
The only significant difference between our tests was that I ran everything as root rather than using sudo. This shouldn't make a difference, but the next logical thing for you to try is probably to copy my test exactly and see if it works for you.
Not that this should be at all significant but I'm running Raspbian 8 (kernel 4.1.13+).
EDIT2: awesome... great stuff. I'd still like to know what the problem was, but the main thing is that it's working.
System V based distributions will usually start in either runlevel 3 or runlevel 5, the difference being that level 5 starts the GUI whereas level 3 starts in text mode; their default runlevel is controlled by a line in /etc/inittab.
Debian (Raspbian) distros are a bit different (https://www.debian.org/doc/debian-policy/ch-opersys.html#s-sysvinit). They make no distinction between runlevels 2-5 and leave it up to the user to configure them to suit their requirements by adding services using the mechanism that's been causing us pain for the last 24 hours. They always start in level 5 unless an "init=" kernel boot parameter has been set, which you can do either at boot time or with a tool like bum or raspi-config.
The command runlevel will tell you the current level on Raspbian.
You can change the runlevel on the fly with telinit new_runlevel if you need to, but whatever you do, don't set the default runlevel to 0 :-)
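For example (the output shown is only illustrative; the previous and current levels will differ per system):
$ runlevel
N 5
$ sudo telinit 3   # switch to runlevel 3 on the fly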

Bash file locking including flock for subprocesses

I am trying to work through securing my scripts from parallel execution by incorporating flock. I have read a number of threads here and came across a reference to this: http://www.kfirlavi.com/blog/2012/11/06/elegant-locking-of-bash-program/ which incorporates many of the examples presented in the other threads.
My scripts will eventually run on Ubuntu (>14), OS X 10.7 and 10.11.4. I am mainly testing on OS X 10.11.4 and have installed flock via homebrew.
When I run the script below, locks are being created, but I think I am forking the subscripts, and it is these scripts that I am trying to ensure never run more than one instance each.
#!/bin/bash
#----------------------------------------------------------------
set -vx
set -euo pipefail
set -o errexit
IFS=$'\n\t'

readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=200

subprocess1="/bash$/subprocess1.sh"
subprocess2="/bash$/subprocess2.sh"

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

eexit() {
    local error_str="$*"
    echo "$error_str"
    exit 1
}

main() {
    lock $PROGNAME \
        || eexit "Only one instance of $PROGNAME can run at one time."
    ## My child scripts
    sh "$subprocess1"   # wait for it to finish, then run
    sh "$subprocess2"
}

main
$subprocess1 is a script that loads ncftpget and logs into a remote server to grab some files. Once finished, the connection closes. I want to run subprocess1 every 15 minutes via cron. I have done so with success, but sometimes there are many files to grab and the job takes longer than 15 minutes. It is rare, but it does happen. In such a case, I want to ensure a second instance of $subprocess1 can't be started. For clarity, a small example of such a subscript is:
#!/bin/bash

remoteftp="someftp.ftp"
ncftplog="somelog.log"
localdir="some/local/dir"

ncftpget -R -T -f "$remoteftp" -d "$ncftplog" "$localdir" "*.files"
EXIT_V="$?"

case $EXIT_V in
    0) O="Success!";;
    1) O="Could not connect to remote host.";;
    2) O="Could not connect to remote host - timed out.";;
    3) O="Transfer failed.";;
    4) O="Transfer failed - timed out.";;
    5) O="Directory change failed.";;
    6) O="Directory change failed - timed out.";;
    7) O="Malformed URL.";;
    8) O="Usage error.";;
    9) O="Error in login configuration file.";;
    10) O="Library initialization failed.";;
    11) O="Session initialization failed.";;
esac

if [ "$EXIT_V" = 0 ]; then
    echo "$O"
else
    echo "There has been an error: $O"
    echo "Exiting now..."
    exit
fi
echo "Goodbye"
and an example of subprocess2 is:
#!/bin/bash
...preamble script setup items etc and then:
java -jar /some/javaprog.java
When I execute the parent script with "sh lock.sh", it progresses through the script without error and exits. The first issue I have is that if I load up the script again I get an error that indicates only one instance of lock.sh can run. What should I have added to the script so that it indicates the processes have not completed yet (rather than merely exiting and giving back the prompt)?
However, if subprocess1 was already running on its own, lock.sh would load up a second instance of subprocess1 because it was not locked. How would one go about locking the child scripts, and ideally ensuring that forked processes were taken care of as well? If someone had run subprocess1 at the terminal, or there was a runaway instance, and cron then loaded lock.sh, I would want it to fail when trying to load its own instances of subprocess1 and subprocess2, not merely exit as it does when cron tries to load two lock.sh instances.
My main concern is loading multiple instances of ncftpget, which is called by subprocess1, and further of a third script I hope to incorporate, "subprocess2", which launches a Java program that deals with the downloaded files. Neither ncftpget nor the Java program can have parallel processes without breaking many things, but I'm at a loss on how to control them adequately.
I thought I could use something similar to this in the main() function of lock.sh:
#This is where I try to lock the subscript
pidfile="${subprocess1}"
# lock it
exec 200>$pidfile
flock -n 200 || exit 1
pid=$$
echo $pid 1>&200
but am not sure how to incorporate it.
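One way to do this (a sketch, not from the original thread; the lock-file names are made up) is to skip the fd/exec bookkeeping for the children and launch each one through the flock utility itself, which holds the lock for exactly as long as the command runs:
main() {
    lock $PROGNAME \
        || eexit "Only one instance of $PROGNAME can run at one time."

    # Each child gets its own lock file; -n makes the call fail immediately
    # if another instance (e.g. one started manually) already holds the lock.
    flock -n /tmp/subprocess1.lock sh "$subprocess1" \
        || eexit "subprocess1 is already running."
    flock -n /tmp/subprocess2.lock sh "$subprocess2" \
        || eexit "subprocess2 is already running."
}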

Execute bash script from url via cron

I am trying to get a cloud server (built from an image I have saved) to execute a script from a URL upon startup, but the script is not executing properly.
I used one of the answers from Execute bash script from URL to configure a curl script, and am executing that script via the @reboot directive in crontab (Ubuntu 14.04). My setup looks like this:
The script contains these commands:
user@cloud-server-01:~$ cat startup.sh
#! /bin/sh
/usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt | bash /dev/stdin
I call the script via crontab:
user@cloud-server-01:~$ crontab -l
@reboot /home/user/startup.sh > startup.log 2>&1 &
If I manually execute the script from the command line using exactly the same command, it works fine. However, executing by crontab on startup, it seems to hang, and I see the following processes running:
user@cloud-server-01:~$ ps ux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1287 0.0 0.1 4444 632 ? S 19:17 0:00 /bin/sh /home/user/startup.sh
user 1290 0.0 0.7 89536 3536 ? S 19:17 0:00 /usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt
user 1291 0.0 0.2 12632 1196 ? S 19:17 0:00 bash /dev/stdin
Am I missing something obvious in why the cron execution isn't giving me the same results as my command line?
EDIT:
Thanks Olof for the redirect on my troubleshooting. In fact, curl is executing, and if I wait long enough (several minutes) it appears to operate as desired. I suspect the problem is that the network interface and/or URL is not available when curl is initially called, and while it may poll for a connection, it probably backs off its polling interval. So the question now becomes, "How do I check whether I have a connection to this URL before calling curl?"
This is not a bash problem; your curl command is still running so bash is still running, waiting for curl to close the pipe that the bash shell is reading from.
To troubleshoot your curl invocation I would run it first without piping to bash to check that I get the output I expected.
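For instance, something along these lines (just a sketch using the URL from the question) shows whether the fetch itself returns the expected script text:
/usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt | head -n 20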
The hint in Olof's answer got me there, but I'm posting the full result here for completeness:
Because of a cloud provider's script which takes 20-40 seconds to run following reboot, my desired connection IP wasn't available when cron first ran. It would either time out, or connect after a significant delay. I have modified my connection script to poll the connection until it is available before calling curl:
#! /bin/bash

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOST_IP=192.168.100.59

check_online() {
    IS_ONLINE=$(netcat -z -w 5 $HOST_IP 80 && echo 1 || echo 0)
}

# Initial check to see if we're online
check_online

# Loop while we're not online.
while [ $IS_ONLINE -eq 0 ]; do
    # We're offline. Sleep for a bit, then check again
    sleep 5
    check_online
done

# Run remote script
bash <(curl -s http://${HOST_IP}/user/startup.sh.txt)
