I have a program running on a little Raspberry Pi. Is there any way for me to write a bash script that constantly checks whether the program is running? If the program crashes, the script should do something (e-mail me, for example).
The simplest implementation would look like this:
#!/bin/bash
# Re-run the program every time it exits, passing along this script's arguments.
while true; do
    program "$@"
done
Basically, the script runs program, and then immediately re-runs it if it exits. It's simple, efficient, and robust.
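If you also want to be notified when the program dies, as the question asks, you can act on each exit before restarting. A minimal sketch, assuming a working mail(1) setup; the address and the 5-second pause are placeholders:

#!/bin/bash
while true; do
    program "$@"
    status=$?
    # Hypothetical notification step; swap in whatever alerting you actually use.
    echo "program exited with status $status at $(date)" \
        | mail -s "program crashed" you@example.com
    sleep 5   # brief pause so a fast crash loop does not flood your inbox
done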
If you want the script to watch something that is already running, and restart it, then you have a more difficult task. You have to grep the output of ps, or some such, and that means patterns, text manipulation, and possibly some magic. It also means regular polling, which is inefficient, and/or means there will be a noticeable gap between one process exiting and the new one starting.
Alternatively, you could have a "program.pid" file somewhere, which makes life easier, but you still need to check that the process with the given PID is the program it ought to be, and it's still all about polling.
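A rough sketch of that PID-file approach, assuming the program writes its PID to /var/run/program.pid; the path, polling interval, and notification command are all placeholders:

#!/bin/bash
PIDFILE=/var/run/program.pid
while true; do
    pid=$(cat "$PIDFILE" 2>/dev/null)
    # kill -0 only checks that some process with that PID exists, so also
    # confirm that it really is our program and not a recycled PID.
    if [ -z "$pid" ] || ! kill -0 "$pid" 2>/dev/null \
            || ! grep -q program "/proc/$pid/comm" 2>/dev/null; then
        echo "program appears to be down" | mail -s "program down" you@example.com
    fi
    sleep 60   # polling interval
done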
Aside: you might like to consider setting your program up as a system service. I'm not sure what Raspbian uses, but both Upstart and systemd can handle services that must be restarted when they die.
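For example, with systemd a minimal unit might look like the sketch below; the unit name, binary path, and restart delay are assumptions, and Upstart uses a different syntax:

# /etc/systemd/system/myprogram.service  (hypothetical name and location)
[Unit]
Description=My long-running program

[Service]
ExecStart=/usr/local/bin/program
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After installing the unit, enabling and starting it with systemctl makes systemd restart the program whenever it dies.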
I apologize for the somewhat simple question (I am new to UNIX programming). I am using an HPC system and would like to parallelize a task in a for loop (in this case, a simple unzip to be applied to several files).
How can I do this? I thought that by requesting multiple cores the parallelization would happen automatically, but the loop actually runs sequentially.
Thank you very much!
for i in *.zip; do unzip "$i" -d "${i%%.zip}"; done
In bash it would look something like:
for item in bunch-of-items; do
    (
        the loop body
        is here
    ) &
done
Where the parentheses group the commands, and the whole loop body is put in the background.
If the rest of your program needs all the background jobs to complete, use the wait command.
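Applied to the unzip loop from the question, that pattern might look like the sketch below. Note that it launches one unzip per zip file all at once, which may be more jobs than the cores you requested:

#!/bin/bash
for i in *.zip; do
    (
        unzip "$i" -d "${i%%.zip}"
    ) &
done
wait   # block here until every background unzip has finished

If the number of files is large, a tool that caps concurrency, such as xargs -P or GNU parallel, is usually a better fit on a shared HPC node.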
I've written a few dozen Bash scripts for my Mac over the years, although probably 80% of the Bash code I have is in .bash_profile. Lately I've been doing things I used to do with Bash by using Python instead.
So, given languages like Python or Ruby (or even PHP), with the exception of login scripts such as .bash_profile (which may not be an exception), are there any tasks that Bash can do that generic scripting languages cannot?
Bash is old-school UNIX: pulling little utilities together to achieve a greater goal, mostly by using pipes and plumbing output from one command to the next.
There is definitely a lot to be said for having the skills involved in this style of seat-of-the-pants programming. Too many people head off and write a self-contained program to achieve something that can be done with a few command-line inputs.
So, in answer to your question: yes. Working in Bash teaches you to understand the multitude of Bash scripts out there, and it can do most things on a UNIX box in close to the most efficient way. Bash is here to stay.
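A small illustration of that style: counting the most frequent client addresses in a web-server log (the log path is hypothetical, and each line is assumed to start with the client IP) is just a matter of plumbing a few utilities together.

awk '{print $1}' /var/log/access.log | sort | uniq -c | sort -rn | head -10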
Well, first off, Bash is itself a shell, so it comes with built-in features like job control (suspend, etc.), file handle/terminal redirection (2>&1 and friends) and terminal control (like being able to display the current path in the title bar, etc.). Other languages that don't have a built-in shell with access to termcap don't have those abilities. Pipe redirection is hard to get right (Python's subprocess.Popen has a bunch of limitations due to threads and potential deadlocks, for example, while Bash has access to tee etc.).
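For instance, combining redirection, pipes, and tee is a one-liner in Bash; the script name and log file here are assumptions:

# Run a build script, merge stderr into stdout, keep a copy of all output
# in a log file, and still see it on the terminal as it scrolls by.
./build.sh 2>&1 | tee build.log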
No. Bash is written in C, and the programs it runs are written in other languages (which are usually either C or implemented in C). Thus, everything that Bash does can be — and already is — done by other programming languages.
Is there a way for a bash script (or whatever language) to determine whether it's being run headless? I want to know whether a user can provide input or not. If they can, I am going to ask them something.
From man bash:
An interactive shell is one started without non-option arguments and without the -c option whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state.
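Note that the $- test describes the shell itself (useful in .bashrc, for example); a script runs in a non-interactive shell, so what it usually wants to know is whether stdin is attached to a terminal. A small sketch, with the prompt and default purely as illustration:

#!/bin/bash
if [ -t 0 ]; then
    # stdin is a terminal, so a user is there to type an answer.
    read -r -p "Continue? [y/N] " answer
else
    # Headless / redirected input: fall back to a default instead of prompting.
    answer=N
fi
echo "answer: $answer"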
I have a script that transfers some files via ssh. I usually start the script and, once I'm sure it is running okay, I suspend it with CTRL-Z and then make it run in the background with bg.
> ./download-script.sh
Downloading...
Got file foobar.txt
Got file baz.txt
Downloading bash.txt (42%)
[1]+  Stopped                 ./download-script.sh
> bg
[1]+ ./download-script.sh &
>
How is this safe? It seems like the server sending the file doesn't know to wait for my process to come back online, does it?
What if I waited for an hour and then resumed the script in the background, would it continue where it left off?
My example uses an ssh file transfer, but this is also a concern for me whenever my script is interacting with almost any resource.
I/O buffers will help it withstand a little delay (i.e., it will not barf if you suspend the script/command for just a few seconds at most). But after more than a few seconds I think you would probably run into other problems: TCP/UDP timeouts between origin and destination, I/O timeouts (e.g. taking too long to enter a password), etc.
If you have only "local" things and no timeouts built into the commands you use, it is fine. For example, if you do:
tar cvf something.tar /path/to/something
and then Ctrl-Z it, and later bg it (to wake it up in the background) or fg it (to wake it up in the foreground): it will work, even if you wait a long time.
However, the longer you wait, the greater the chance that one of the files being tarred gets modified in the meantime...
Or your shell could have a TMOUT timeout set that ends the session first.
Or any other interruption, really: a power cut, your cat stomping on Ctrl+D and exiting the shell, etc.
In other words: you can, unless something on the other end relies on the transfer being "fast".
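If the suspend window itself worries you, one way to sidestep it entirely, assuming the script only writes progress to stdout/stderr, is to start it in the background from the outset and just watch its log:

./download-script.sh > download.log 2>&1 &   # never suspended, so no timeout window
tail -f download.log                          # Ctrl-C stops tail, not the script

If you might close the terminal altogether, run it under nohup (or disown it) so it is not killed by SIGHUP.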
I am using the terminal program called screen, which can create several "virtual terminals" in a single "real" terminal (the words "virtual" and "real" here are quite relative; the "real" terminal can just as well be a konsole tab, not necessarily tty1-tty6). The problem is that I cannot create more than 40 windows inside a single screen. When I try to create more, screen says "No more windows." After some googling I found that this is controlled by something called MAXWIN, but I didn't find any information on how to modify this MAXWIN. How can I increase the maximum number of windows inside a single screen?
I use Debian 6 "squeeze".
PS: I understand that I can run several screens in several "real" (in the above sense) terminals, but this makes it harder to use multiple-display mode (screen -x).
That's a compile-time option. Using only stock packages from upstream, it can't be changed. If you are willing to compile screen yourself, you can do it: look in the config.h.in file. Near the top you will find # define MAXWIN 40. Change that to your new limit.
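A rough sketch of such a rebuild on Debian, assuming build tools and deb-src entries are in place; the new limit of 100 is arbitrary:

sudo apt-get build-dep screen      # install screen's build dependencies
apt-get source screen              # fetch the packaged source
cd screen-*/
# Raise the limit quoted above.
sed -i 's/# define MAXWIN 40/# define MAXWIN 100/' config.h.in
./configure && make
sudo make install                  # or rebuild the package with dpkg-buildpackage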