Init infinite loop on bootup (shell/OpenWrt)

I've been trying to create an infinite loop in OpenWrt, and I've succeeded:
#!/bin/sh /etc/rc.common

while true
do
    # Code to run
    sleep 15
done
This code works like a charm if I execute it as ./script. However, I want it to start on its own when I turn on my router. I've placed the script in /etc/init.d and made it executable with chmod +x script.
Regardless, the program doesn't start running at all. My guess is that I shouldn't execute this script itself at boot, but rather have another script that calls this one. I haven't been able to work this out.
Any help would be appreciated.

Since I have messed with OpenWrt init scripts in previous projects, I would like to contribute to Rich Alloway's answer (for the ones who are likely to drop here from a Google search). His answer only covers the "traditional SysV style init scripts", as mentioned on the Init Scripts page he linked.
There is a newer process management daemon, procd, which you will find in current OpenWrt versions. Sadly its documentation has not been completed yet; see Procd Init Scripts.
There are minor differences, as pointed out in the documentation:
procd expects services to run in the foreground
the shebang line is the same: #!/bin/sh /etc/rc.common
explicit opt-in to procd: USE_PROCD=1
start_service() instead of start()
A simple init script for procd would look like:
#!/bin/sh /etc/rc.common
# START sets the run order of your script; make it high so it runs
# after the other init scripts.
START=100
USE_PROCD=1

start_service() {
    procd_open_instance
    procd_set_param command /path/to/your/command -with -some -args
    procd_close_instance
}
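Once this is saved as, say, /etc/init.d/myscript (a hypothetical name), the usual rc.common verbs apply:
chmod +x /etc/init.d/myscript
/etc/init.d/myscript enable    # create the rc.d symlink so it starts at boot
/etc/init.d/myscript start     # start it right away under procd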
I posted a blog post about it a while ago that might help.

You need to have a file in /etc/rc.d/ with an Sxx prefix in order for the system to execute the script at boot time. This is usually accomplished by having the script in /etc/init.d and a symlink in /etc/rc.d pointing to the script.
The S indicates that the script should run at startup while the xx dictates the order that the script will run. Scripts are executed in naturally increasing order: S10boot runs before S40network and S50cron runs before S50dropbear.
Keep in mind that the system may not continue to boot with the script that you have shown here!
/etc/init.d/rcS calls each script sequentially and waits for the current one to exit before calling the next script. Since your script is an infinite loop, it will never exit and rcS may not complete the boot process.
Including /etc/rc.common will be more useful if you use functions in your script like start(), stop(), restart(), etc., and add START and STOP variables which describe when the script should be executed during boot/shutdown.
Your script can then be used to enable and disable itself at boot time by creating or removing the symlink: /etc/init.d/myscript enable (and /etc/init.d/myscript disable to remove it).
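A minimal sketch of that shape (assuming the loop itself lives in a separate, hypothetical file /usr/bin/myloop.sh):
#!/bin/sh /etc/rc.common
# Traditional SysV-style OpenWrt init script.
START=99
STOP=10

start() {
    # Run the loop in the background so rcS can finish booting.
    /usr/bin/myloop.sh &
    echo $! > /var/run/myloop.pid
}

stop() {
    [ -f /var/run/myloop.pid ] && kill "$(cat /var/run/myloop.pid)"
    rm -f /var/run/myloop.pid
}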
See also OpenWRT Boot Process and Init Scripts
-Rich Alloway (RogueWave)

Related

How can I make a local Git hook run a Windows executable and wait for it to return?

I'm working in a Windows environment. I have a Git repository and am writing a custom pre-commit hook. I am much more comfortable writing a quick and dirty console application in C# than trying to figure out Perl syntax, so that's the route I'm going.
My .git/hooks/pre-commit file looks like this:
#!/bin/sh
start MyHelperApp.exe
And this works, somewhat. As you can see, I have a compiled helper application in the root of the repo directory (it is .gitignore'd), and it does indeed launch my application successfully when I call git commit. However, it doesn't wait for the process to finish, nor does it seem to care what the return code of the process is. I assume this is because start is asynchronous and returns a 0 exit code every time.
I have reason to suspect that the start command being called here is not the native Windows start command, because I tried changing it to start /wait MyHelperApp.exe and this had no effect. Also, trying to call MyHelperApp.exe directly gives a "command not found" error, and so does changing start to call. I suspect that start is an emulated bash command, and the bash version is running instead of the Windows one.
Anyway, my helper app does return different exit codes depending on different conditions, so it would be great if those could be used. (Pre-commit hooks fail if a program in the script returns any exit code besides zero.) How might I go about utilizing this?
Call the executable directly, don't use start.
Also trying to call MyHelperApp.exe directly gives a "command not found" error
If the PATH variable doesn't contain a . entry, bash won't look in the current directory to find executables. Call ./MyHelperApp.exe to make it explicit that it should be run from the current directory.
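A minimal hook along these lines (assuming the helper sits in the repository root, as described in the question):
#!/bin/sh
# Run the helper synchronously; the hook exits with the helper's
# exit code, so any non-zero status aborts the commit.
./MyHelperApp.exe
exit $?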

Creating a startup daemon for a shell script in FreeBSD

I am trying to create a file in rc.d/ that will start up a /bin/sh script that I have written. I am following some examples found here:
http://www.freebsd.org/doc/en/articles/rc-scripting/article.html#rc-flags
#!/bin/sh -x
# PROVIDE: copyfiles
. /etc/rc.subr
name=copyfiles
rcvar=copyfiles_enable
pidfile="/var/run/${name}.pid"
command="/var/etc/copy_dat_files.sh -f /var/etc/copydatafiles.conf"
command_args="&"
load_rc_config $name
run_rc_command "$1"
It seems like I am having a problem with the pidfile. Does my script need to be the one that creates the pid file, or does it get created automatically? I have tried both ways, and whether or not I make my script create a pid file, I get an error that the pid file is not readable.
If my script is supposed to make it, what is the proper way to create the pid file?
Thanks
Look at the existing daemons for example (such as /etc/rc.d/mountd). Then look at the subroutines in /etc/rc.subr -- there is code in there to check the PID-file, but nothing creates it.
In other words, you can declare in the daemon-starting script what the PID-file is, but creating it is up to the daemon. Speaking of daemons, you may wish to use the daemon(8) utility if your daemon is, in fact, a shell script; it will take care of the PID-file creation for you. (If the daemon is written in C, you can/should use the daemon(3) function.)
BTW, in my own opinion, daemons, when opening their PID-files for creation, should also lock them (with flock(3), fcntl(2), or lockf(3)). That way, if an instance crashes (or is killed) without removing the PID-file, the next instance will have no problem determining that the file is stale.
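A sketch of the rc script from the question rewritten to use daemon(8), which creates the PID file itself (command and paths taken from the question):
#!/bin/sh
# PROVIDE: copyfiles
. /etc/rc.subr

name=copyfiles
rcvar=copyfiles_enable
pidfile="/var/run/${name}.pid"

# daemon(8) forks the script into the background and writes its PID
# to the file given with -p.
command="/usr/sbin/daemon"
command_args="-p ${pidfile} /var/etc/copy_dat_files.sh -f /var/etc/copydatafiles.conf"

load_rc_config $name
run_rc_command "$1"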
In general, a daemon is supposed to create and clean up its own PID file.
From a shell script you can give the following command to create it:
echo $$ >/var/run/${name}.pid
Do not forget to remove the file before exiting the script. Write a cleanup() function that does that, and let trap call that function when certain signals occur. Also call cleanup just before exiting the script.
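A sketch of that pattern (the PID-file path follows the question):
#!/bin/sh
pidfile="/var/run/copyfiles.pid"

cleanup() {
    rm -f "$pidfile"
}
# Remove the PID file on normal exit and on common termination signals.
trap cleanup EXIT INT TERM

echo $$ > "$pidfile"

# ... the actual work of the script goes here ...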

Ensuring Programs Run In Ordered Sequence

This is my situation:
I want to run some Python scripts one after another, starting with scriptA.py. When scriptA.py finishes, scriptB.py should run, followed by scriptC.py. After these scripts have run in order, I need to run an rsync command.
I plan to create a bash script like this:
#!/bin/sh
python scriptA.py
python scriptB.py
python scriptC.py
rsync blablabla
Is this the best solution for performance and stability?
To run a command only after the previous command has completed successfully, you can use a logical AND:
python scriptA.py && python scriptB.py && python scriptC.py && rsync blablabla
Because the whole statement will be true only if all are true, bash "short-circuits" and only starts the next statement when the preceding one has completed successfully; if one fails, it stops and doesn't start the next command.
Is that the behavior you're looking for?
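If you would rather keep the wrapper-script form from the question, set -e gives the same stop-on-first-failure behaviour:
#!/bin/sh
# Exit immediately if any command exits with a non-zero status.
set -e
python scriptA.py
python scriptB.py
python scriptC.py
rsync blablabla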
If you have some experience with Python, it will almost certainly be better to write a Python script that imports and executes the relevant functions from the other scripts. That way you will be able to use Python's exception handling. Also, you can run the rsync from within Python.

How to test things in crontab

This keeps happening to me all the time:
1) I write a script (ruby, shell, etc.).
2) I run it, and it works.
3) I put it in crontab so it runs in a few minutes, to confirm that it runs from there.
4) It doesn't, with no error trace, and I'm back to step 2 or 3 a thousand times.
When my ruby script fails in crontab, I can't really know why it fails, because when I pipe the output like this:
ruby script.rb >& /path/to/output
I sort of get the output of the script, but I don't get any of the errors from it, and I don't get the errors coming from bash (like if ruby is not found or the file isn't there).
I have no idea what environment variables are set, or whether that's the problem. It turns out that to run a ruby script from crontab you have to export a ton of environment variables.
Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?
When debugging, I have to reset the timer and go back to waiting. Very time consuming.
How can I test things in crontab better, or avoid these problems altogether?
"Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?"
Yes:
bash -li -c /path/to/script
From the man page:
[vindaloo:pgl]:~/p/test $ man bash | grep -A2 -m1 -- -i
       -i        If the -i option is present, the shell is interactive.
       -l        Make bash act as if it had been invoked as a login shell
                 (see INVOCATION below).
G'day,
One of the basic problems with cron is that it sets up only a minimal environment for your job. In fact, you only get four environment variables set, and they are:
SHELL - set to /bin/sh
LOGNAME - set to your userid as found in /etc/passwd
HOME - set to your home dir. as found in /etc/passwd
PATH - set to "/usr/bin:/bin"
That's it.
However, what you can do is take a snapshot of the environment you want and save that to a file.
Now have your cronjob run a trivial wrapper script that sources this env file and then executes your Ruby script, as sketched below.
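A sketch of that setup (file locations are illustrative, and it assumes the captured values need no shell quoting):
# Once, from a normal interactive shell: snapshot the environment.
env > "$HOME/.cron_env"
The wrapper that the crontab entry actually runs:
#!/bin/sh
set -a                  # auto-export every variable the env file sets
. "$HOME/.cron_env"
set +a
exec ruby /path/to/script.rb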
BTW, having a wrapper source a common env file is an excellent way to enforce a consistent environment for multiple cronjobs. It also enforces the DRY principle, because it gives you just one place to update things as required, instead of having to search through a bunch of scripts for a specific string if, say, a logging location changes or a different utility is now being used, e.g. gnutar instead of vanilla tar.
Actually, this technique is used very successfully with The Build Monkey, which implements continuous integration for a major software project common to several major world airlines: 3,500 kSLOC checked out and built several times a day, with over 8,000 regression tests run once a day.
HTH
'Avahappy,
Run a 'set' command from inside the ruby script, fire it from crontab, and you'll see exactly what's set and what's not.
To find out the environment in which cron runs jobs, add this cron job:
{ echo "\nenv\n" && env|sort ; echo "\nset\n" && set; } | /usr/bin/mailx -s 'my env' you@example.com
Or send the output to a file instead of email.
You could write a wrapper script, called for example rbcron, which looks something like:
#!/bin/bash
RUBY=ruby
export VAR1=foo
export VAR2=bar
export VAR3=baz

# "$@" passes each argument through unchanged; 2>&1 folds ruby's
# stderr into stdout.
$RUBY "$@" 2>&1
This will redirect standard error from ruby to standard output. Then you run rbcron in your cron job, and the standard output contains both out and err of ruby, as well as the "bash" errors coming from rbcron itself. In your cron entry, redirect with > /path/to/output 2>&1 to make both output and error messages go to /path/to/output.
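A crontab entry using the wrapper might then look like this (the schedule and paths are illustrative):
*/5 * * * * /path/to/rbcron /path/to/script.rb > /path/to/output 2>&1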
If you really want to run it as yourself, you may want to invoke ruby from a shell script that sources your .profile/.bashrc etc. That way it'll pull in your environment.
However, the downside is that it's not isolated from your environment, and if you change that, you may find your cron jobs suddenly stop working.

Run a list of bash scripts consecutively

I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want this to keep working even if I add and remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
    "$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
    # Run only regular files that are executable.
    [ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && "$SCRIPT"
done
If your system has run-parts, then that will take care of this for you. You can name your scripts "10script", "20anotherscript", etc., and they will be run in order, in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems run-parts is a script; on mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
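On systems with the Debian-style run-parts, you can preview what would be executed without running anything:
# Print the scripts that would run, in order, without executing them.
run-parts --test /etc/cron.daily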
