Let’s say I have two Bash scripts:
prog1.sh and prog2.sh
I know I can run these two scripts in parallel via:
prog1.sh & prog2.sh
However, let’s say these two scripts operate in two different directories, so I’d like them to run in two different terminals; otherwise, I'll run into an issue with concurrency.
My question is, how can I run these (or more generally, an arbitrary collection of scripts) simultaneously?
I tried answers at:
Run different bash scripts, started by one bash startscript, in different terminal tabs
https://unix.stackexchange.com/questions/582092/how-can-i-run-multiple-bash-scripts-simultaneously-in-a-terminal-window?newreg=2529ef31224a4e44ae7d374f8809eef9
and others.
In your "master" script, you can have something like
workdir=( "pathToWorkDir1" "pathToWorkDir2" "pathToWorkDir3" ... )
progs=( "prog1.sh" "prog2.sh" "prog3.sh" ... )
for i in "${!progs[@]}"
do
    ( cd "${workdir[i]}" && "./${progs[i]}" ) &
done
wait
If you don't want to "wait" before exiting the master script, you will need to add nohup before the "./${progs[i]}", to ensure the scripts survive independently of the master script.
If the paths to the work directories are relative to the start directory, you need the round brackets: they run each cd in a subshell, so the master script's own working directory never changes. If the paths are absolute, you don't strictly need them.
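For example, a detached variant (with no trailing wait) might be sketched like this, reusing the arrays above; the nohup-${i}.out log names are just an illustration:
for i in "${!progs[@]}"
do
    ( cd "${workdir[i]}" && nohup "./${progs[i]}" > "nohup-${i}.out" 2>&1 & )
done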
I have 3 scripts for example: first.ksh, second.ksh, third.ksh.
I run all of those scripts one by one manually: when the first is done I run the second, and then the third. Those scripts take time to run, and running them manually is time-consuming because it requires me to be in front of the computer.
How can I write a script that runs those scripts one by one, each starting automatically after the previous one finishes?
Assuming the scripts are in the current working directory,
./first.ksh ; ./second.ksh ; ./third.ksh
where ; is the separator for a "sequential list".
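If a later script should run only when the earlier ones succeed, you can use && (an AND list) instead of ;:
./first.ksh && ./second.ksh && ./third.ksh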
The topic you're looking for is "shell programming". There are many Unix shells, and they share basic features defined by POSIX and the Single UNIX Specification. Most of them have additional features, as well as online documentation.
I would like some insight on how to get started, or what general direction to look in, when trying to make a script or makefile that will run 3 make commands at once that take in the same input. These three commands all ask for the same input but output different Excel files, because each manipulates the pulled data in a different way. Therefore, if I were able to create a script or makefile that ran all three commands when given the input one time, it would SAVE ME A TON OF TIME.
This is all being done in PuTTY, pretty much (in terms of the commands).
Thanks,
NP
You want to use a shell script.
For instance, you can create run.sh with:
#!/bin/bash
make FLAG1=ON "$@"
make FLAG2=ON "$@"
make FLAG3=ON "$@"
Make it executable (chmod +x run.sh) and run ./run.sh MYCOMMONFLAG1=ON MYCOMMONFLAG2=OFF...
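If the make targets actually prompt for the input on stdin, rather than taking it as variables, a sketch like this could read the answer once and feed it to each run (this assumes a single line of input; adjust for what your targets expect):
#!/bin/bash
# read the shared input once, then replay it into each make run
read -r -p "Enter input: " answer
make FLAG1=ON "$@" <<< "$answer"
make FLAG2=ON "$@" <<< "$answer"
make FLAG3=ON "$@" <<< "$answer"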
I have a program that is a compiled binary that calls a whole bunch (~300) of child bash scripts. I would just like a way to time each of those child processes to find out which ones take the longest. The problem comes from the fact that I cannot change the compiled binary (by adding for example time foo.sh >> bar_log.txt). I can change the child scripts, so in theory I could move them to a different location and replace them with scripts that just call the ones from the new location with the time command, but again -- there are 300+ with unique names and whatnot. I was wondering if there was a way to call the original binary and get a log of the times to execute the child processes.
Thanks in advance.
EDIT: I also have limited access to additional programs and cannot download/install new ones. For example, I do not have atop.
The environment variable ENV (for POSIX shells; BASH_ENV for bash when it is not started under the name sh) is read even by noninteractive shells as the location of a script to run on startup.
Thus, you can create a file that initializes tracing:
if [ -n "$BASH_VERSION" ]; then
    # SECONDS is the time since the start of this individual script
    PS4=':$BASH_SOURCE:$LINENO:$SECONDS+'
else
    # note that calling $(date) slows your code substantially;
    # nothing to do about it without shell extensions, however.
    PS4=':${0}:$(date)+'
fi
set -x
...and set both ENV and BASH_ENV to point to that file.
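For instance (a sketch; the script path, binary name, and log file are placeholders, and this assumes the child scripts inherit the parent's stderr):
export ENV=/path/to/trace-init.sh
export BASH_ENV=/path/to/trace-init.sh
./your_binary 2> trace.log    # the xtrace output from each child script arrives on stderr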
I have searched the forum but couldn't find an answer. Can we define a variable that only increments on every cron job run?
For example:
I have a script that runs every 5 minutes, so I need a variable that increments based on the cron runs.
Say the job ran every 5 minutes for half an hour, so the script got executed 6 times; my counter variable should now be 6.
I'm expecting a bash/shell solution.
Apologies if this is a duplicate question.
I tried:
((count+1))
You can do it this way:
create two scripts: counter.sh and increment_counter.sh
add execution of increment_counter.sh in your cron job
add ". /path/to/counter.sh" to /etc/profile or /etc/bash.bashrc or wherever you need it
counter.sh
declare -i COUNTER
COUNTER=1
export COUNTER
increment_counter.sh
#!/bin/bash
echo "COUNTER=\$COUNTER+1" >> /path/to/counter.sh
The shell that you've run the command in has exited; any variables it has set have gone away. You can't use variables for this purpose.
What you need is some sort of permanent data store. This could be a database, or a remote network service, or a variety of things, but by far the simplest solution is to store the value in a file somewhere on disk. Read the file in when the script starts and write out the incremented value afterwards.
You should think about what to do if the file is missing and what happens if multiple copies of the script are run at the same time, and decide whether those are situations you care about at all. If they are, you'll need to add appropriate error handling and locking, respectively, in your script.
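A minimal sketch of that approach, assuming Linux's flock(1) is available for the locking (the file paths are placeholders):
#!/bin/bash
countfile=/var/tmp/myjob.count
exec 9> "${countfile}.lock"                        # open a lock file on fd 9
flock 9                                            # serialize concurrent runs
count=$(cat "$countfile" 2>/dev/null || echo 0)    # a missing file counts as 0
count=$((count + 1))
printf '%s\n' "$count" > "$countfile"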
Wouldn't this be a better solution?
...to define a file under /tmp, such that a command like:
echo -n "." >> "$MyCounterFilename"
tracks the number of times something is invoked. In my particular application:
#!/bin/bash
xterm [ Options ] -T "$(cat "$MyCounterFilename" | wc -c)" &
echo -n "." >> "$MyCounterFilename"
I had to modify the way xterm is invoked for my purposes, and I found that when many of these are open concurrently, you waste less time if each one's number tells you exactly what is running in it (without having to cycle Alt+Tab and visually inspect everything).
NOTE: /etc/profile, or better ~/.profile or ~/.bash_profile, only needs an environment variable defined that contains the full path to your counter file.
Anyway, if you don't like the idea above, you could experiment to determine (a) when /etc/profile is first executed after the machine is powered on and the system boots, and (b) whether /etc/profile is executed at all, and how many times (each time an xterm is opened, for instance), and then do the same sort of testing for the less general per-user files.
I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want this script to keep working even if I add or remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
    "$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
    [ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && "$SCRIPT"
done
If your system has run-parts then that will take care of it for you. You can name your scripts like "10script", "20anotherscript" and they will be run in order in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems it's a script. On mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
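For example, on Debian-style systems (where run-parts supports a --test flag; the backup directory below is hypothetical):
run-parts --test /etc/cron.daily       # list what would run, in order, without running it
run-parts /usr/local/lib/backup.d      # run every eligible script in the directory, one at a time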