Ensuring Programs Run In Ordered Sequence - bash

This is my situation:
I want to run Python scripts sequentially, starting with scriptA.py. When scriptA.py finishes, scriptB.py should run, followed by scriptC.py. After these scripts have run in order, I need to run an rsync command.
I plan to create a bash script like this:
#!/bin/sh
python scriptA.py
python scriptB.py
python scriptC.py
rsync blablabla
Is this the best solution for performance and stability?

To run a command only after the previous command has completed successfully, you can use a logical AND:
python scriptA.py && python scriptB.py && python scriptC.py && rsync blablabla
Because the whole statement is true only if every command succeeds, bash "short-circuits": it starts the next command only when the preceding one has completed successfully, and if one fails, it stops and doesn't start the next command.
Is that the behavior you're looking for?
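A tiny sketch of the short-circuit behaviour, with echo and false standing in for the real scripts:

```shell
#!/bin/sh
# Each command runs only if the previous one exited with status 0.
echo "step 1" && echo "step 2" && false && echo "never reached"
echo "chain exit status: $?"   # 1, because 'false' broke the chain
```

The last echo in the chain never runs, and the chain's exit status is that of the failing command.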

If you have some experience with Python, it will almost certainly be better to write a Python script that imports and executes the relevant functions from the other scripts. That way you can use Python's exception handling. You can also run the rsync from within Python.
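One way to sketch that idea, using subprocess with placeholder commands rather than direct imports (the command lists below are stand-ins; swap in your real scripts and rsync invocation):

```python
import subprocess
import sys

def run_in_order(commands):
    """Run each command in sequence; stop and return the failing
    exit status as soon as one fails, otherwise return 0."""
    for cmd in commands:
        status = subprocess.call(cmd)
        if status != 0:
            return status
    return 0

# Placeholder commands standing in for scriptA.py, scriptB.py,
# scriptC.py and the final rsync.
status = run_in_order([
    [sys.executable, "-c", "print('scriptA done')"],
    [sys.executable, "-c", "print('scriptB done')"],
    [sys.executable, "-c", "print('scriptC done')"],
])
print("pipeline exit status:", status)
```

The advantage over a plain `&&` chain is that you can catch exceptions, log which step failed, and retry or clean up before exiting.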

Related

Perl script is running slow when calling bash

I am running a perl script that calls another script via the command line, using backticks, but it runs extremely slow.
for my $results (@$data){
`/home/data/push_to_web $results->{file}`;
}
If I run the same command via bash, /home/data/push_to_web book.txt, the same script runs extremely fast. If I build a bash file that contains
/home/data/push_to_web book_one.txt
/home/data/push_to_web book_two.txt
/home/data/push_to_web book_three.txt
the code executes extremely fast. Is there any secret to speeding up perl when it shells out to another script?
Your perl script fires up a new bash shell for each element in the array, whereas running the commands in bash from a file doesn't have to create any new shells.
Depending on how many files you have, and what's in your bash startup files, this could add a significant overhead.
One option would be to build a list of semicolon-separated commands in the for loop, and then run one system command at the end to execute them all in one bash process.
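The batching idea can be sketched in shell terms (placeholder echo commands standing in for the real push_to_web calls): instead of spawning one shell per command, join the commands with semicolons and hand them to a single shell.

```shell
#!/bin/sh
# One shell process executes all three commands,
# instead of starting three separate shells.
cmds="echo book_one; echo book_two; echo book_three"
sh -c "$cmds"
```

With hundreds of files, the saved shell startups add up quickly.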

Programmatically/script to run zsh command

As part of a bigger script I'm using print -z ls to have zsh's input buffer show the ls command. This requires me to manually press enter to actually execute the command. Is there a way to have ZSH execute the command?
To clarify, the objective is to have a command run, keep it in history, and in case another command is running it shouldn't run in parallel or something like that.
The solution I've found is:
python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n')"
I'm not sure why, but sometimes you might need to run the command twice for the actual command to be executed. In my case this happens because I send a process to the background, although that still doesn't make much sense: the background process sends a signal back to the original shell (triggering a trap), and the trap is what actually calls this code.
In case anyone is interested, this was my goal:
https://gist.github.com/alexmipego/89c59a5e3abe34faeaee0b07b23b56eb

How to write a wrapper script in unix that calls other shell scripts sequentially?

I am trying to write a wrapper script that calls other shell scripts in a sequential manner. There are 3 shell scripts that pick up .csv files of a particular pattern from a specified location and process them. I need to run them sequentially by calling them from one wrapper script.
Let's consider 3 scripts
a.ksh, b.ksh and c.ksh that run sequentially in the same order.
The requirement is that the script should fail if a.ksh fails but continue if b.ksh fails.
Please suggest.
Thanks in advance!
Something like:
./a.ksh && { ./b.ksh; ./c.ksh; }
Note that a bare ./a.ksh && ./b.ksh; ./c.ksh would still run c.ksh even when a.ksh fails, because the semicolon separates c.ksh from the whole && pair. Grouping b and c in braces makes both depend on a.ksh succeeding, while c.ksh still runs even if b.ksh fails.
I haven't tried this out. Do test with sample scripts that fail/pass before using.
See: http://www.gnu.org/software/bash/manual/bashref.html#Lists

Init infinite loop on bootup (shell/Openwrt)

I've been trying to create an infinite loop in OpenWRT, and I've succeeded:
#!/bin/sh /etc/rc.common
while [ true ]
do
# Code to run
sleep 15
done
This code works like a charm if I execute it as ./script. However, I want it to start on its own when I turn on my router. I've placed the script in /etc/init.d and made it executable with chmod +x script.
Regardless, the program doesn't start running at all. My guess is that I shouldn't execute this script on boot up but have a script that calls this other script. I haven't been able to work this out.
Any help would be appreciated.
Having messed with OpenWRT init scripts in previous projects, I would like to contribute to Rich Alloway's answer (for those who land here from a Google search). His answer only covers traditional SysV-style init scripts, as mentioned on the page he linked, Init Scripts.
There is a newer process-management daemon, procd, which you may find in your OpenWRT version. Sadly its documentation is not complete yet: Procd Init Scripts.
There are minor differences, as pointed out in the documentation:
procd expects services to run in the foreground
The shebang line stays #!/bin/sh /etc/rc.common
You explicitly opt in to procd with USE_PROCD=1
start_service() instead of start()
A simple init script for procd would look like:
#!/bin/sh /etc/rc.common
# START sets the run order of your script; make it high so it runs after the other init scripts
START=100
USE_PROCD=1
start_service() {
    procd_open_instance
    procd_set_param command /path/to/your/command -with -args -here
    procd_close_instance
}
I posted a blog post about this a while ago that might help.
You need to have a file in /etc/rc.d/ with an Sxx prefix in order for the system to execute the script at boot time. This is usually accomplished by having the script in /etc/init.d and a symlink in /etc/rc.d pointing to the script.
The S indicates that the script should run at startup while the xx dictates the order that the script will run. Scripts are executed in naturally increasing order: S10boot runs before S40network and S50cron runs before S50dropbear.
Keep in mind that the system may not continue to boot with the script that you have shown here!
/etc/init.d/rcS calls each script sequentially and waits for the current one to exit before calling the next script. Since your script is an infinite loop, it will never exit and rcS may not complete the boot process.
Including /etc/rc.common will be more useful if you use functions in your script like start(), stop(), restart(), etc and add START and STOP variables which describe when the script should be executed during boot/shutdown.
Your script can then be used to enable and disable itself at boot time by creating or removing the symlink: /etc/init.d/myscript enable
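For completeness, a traditional SysV-style counterpart might look like the sketch below (untested on a live router; the loop-script path is a placeholder). The key point from the answer above is backgrounding the loop so rcS can finish booting:

```shell
#!/bin/sh /etc/rc.common
START=99

start() {
    # '&' is essential: it backgrounds the infinite loop
    # so rcS can continue with the remaining init scripts.
    /usr/bin/myloop.sh &   # placeholder path to the looping script
}

stop() {
    kill "$(pgrep -f myloop.sh)" 2>/dev/null
}
```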
See also OpenWRT Boot Process and Init Scripts
-Rich Alloway (RogueWave)

How does Docker run a command without invoking a command shell?

I'm learning about Docker at the moment, and going through the Dockerfile reference, specifically the RUN instruction. There are two forms of RUN - the shell form, which runs the command in a shell, and the exec form, which "does not invoke a command shell" (quoted from the Note section).
If I understood the documentation correctly, my question is: if, and how, can Docker run a command without a shell?
Note that the answers to "Can a command be executed without a shell?" don't actually answer this question.
If I understand your question correctly, you're asking how something can be run (specifically in the context of docker) without invoking a command shell.
On Linux, things are usually run using the exec family of system calls.
You pass it the path to the executable you want to run and the arguments that need to be passed to it via an execl call for example.
This is actually what your shell (sh, bash, ksh, zsh) does under the hood anyway. You can observe this yourself if you run something like strace -f bash -c "cat /tmp/foo"
In the output of that command you'll see something like this:
execve("/bin/cat", ["cat", "/tmp/foo"], [/* 66 vars */]) = 0
What's really going on is that bash looks up cat in $PATH, finds that cat is an executable binary at /bin/cat, and then simply invokes it via execve with the correct arguments, as you can see above.
You can trivially write a C program that does the same thing as well.
This is what such a program would look like:
#include <unistd.h>

int main(void) {
    /* Replace the current process image with /bin/cat, no shell involved. */
    execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
    return 0;  /* reached only if execl fails */
}
Every language provides its own way of interfacing with these system calls. C does, Python does, and Go, which is what Docker is mostly written in, does as well. A RUN instruction in the Dockerfile likely translates to one of these exec calls when you run docker build. You can run strace -f on a docker build and grep for exec calls in the log to see how the magic happens.
The only difference between running something via a shell and directly is that you lose out on all the fancy stuff your shell will do for you, such as variable expansion, executable searching etc.
A program can execute another program without a shell: you just create a new process and load an executable into it, so you don't need the shell for that. The shell is needed for a user to start a program because it is the user interface to the system. Also, a program cannot run a shell built-in like cd or a shell alias without a shell, because there is no executable to be found (there are alternative ways, though not as simple).
In general, docker run starts a container with its default process, while docker exec allows you to run any process you want inside the container.
For example, running the microsoft/iis container with docker run microsoft/iis will run the default process, which is PowerShell.
But you can run cmd with docker exec -i my_container cmd
See this answer for more details.