I have searched the forum and couldn't find an answer to this: can we define a variable that increments on every cron job run?
For example:
I have a script that runs every 5 minutes, so I need a variable that increments based on the cron run.
Say the job has been running every 5 minutes for half an hour, so the script has been executed 6 times and my counter variable should now be 6.
I'm expecting a bash/shell solution.
Apologies if this is a duplicate question.
I tried:
((count+1))
You can do it this way:
Create two scripts: counter.sh and increment_counter.sh.
Add execution of increment_counter.sh to your cron job.
Add . /path/to/counter.sh to /etc/profile or /etc/bash.bashrc or wherever you need it.
counter.sh
declare -i COUNTER
COUNTER=1
export COUNTER
increment_counter.sh
#!/bin/bash
echo "COUNTER=\$COUNTER+1" >> /path/to/counter.sh
The shell that you've run the command in has exited; any variables it has set have gone away. You can't use variables for this purpose.
What you need is some sort of permanent data store. This could be a database, or a remote network service, or a variety of things, but by far the simplest solution is to store the value in a file somewhere on disk. Read the file in when the script starts and write out the incremented value afterwards.
You should think about what to do if the file is missing and what happens if multiple copies of the script are run at the same time, and decide whether those are situations you care about at all. If they are, you'll need to add appropriate error handling and locking, respectively, in your script.
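For example, a minimal sketch of the file-based approach (the counter location and the flock-based locking are my own choices, not from the question):
#!/bin/bash
# Persist the counter in a file and guard it with flock so overlapping runs don't race.
counter_file=/var/tmp/myjob.counter      # hypothetical location
(
    flock -x 200                         # serialize access via fd 200
    count=$(cat "$counter_file" 2>/dev/null || echo 0)
    count=$((count + 1))
    echo "$count" > "$counter_file"
    echo "This is run number $count"
) 200>"$counter_file.lock"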
Wouldn't this be a better solution?
...define a file under /tmp, such that a command like:
echo -n "." >> $MyCounterFilename
tracks the number of times something is invoked. In my particular case:
#!/bin/bash
xterm [ Options ] -T "$(cat $MyCounterFilename | wc -c )" &
echo -n "." >> $MyCounterFilename
I had to modify the way xterm is invoked for my purposes, and I found that with many of them open concurrently you waste less time if you know exactly what is running in each one by its number (without having to cycle Alt+Tab and eyeball everything).
NOTE: /etc/profile, or better yet ~/.profile or ~/.bash_profile, only needs an environment variable defined that contains the full path to your counter file.
Anyway, if you don't like the idea above, you could run some experiments to determine a) when /etc/profile is first executed after the machine is powered on and the system boots, and b) whether /etc/profile is executed at all, and how many times (each time an xterm is opened, for instance), and then do the same sort of testing for the files that are less general than the /etc one.
I have a problem automating the generation of a CSV file. The bash code that produces the CSV works in parallel on 3 cores to reduce the run time; initially several CSV files are produced, which are subsequently combined into a single CSV file. The core of the code is this loop:
...
waitevery=3
for j in `seq 1 24`; do
    if ((j==1)); then
        printf '%s\n' A B C D E | paste -sd ',' >> code${namefile}01${rr}.csv
    fi
    j=$(printf "%02d" $j)
    ../src/thunderstorm --mask-file=mask.grib const_${namefile}$j${rr}.grib surf_${namefile}$j${rr}.grib ua_${namefile}$j${rr}.grib hl_const.grib out &
    if ! ((c % waitevery)); then
        wait
    fi
    c=$((c+1))
done
...
where ../src/thunderstorm is an .F90 program which produces the second and subsequent files.
If I run this code manually it produces the right CSV file, but if I run it from a scheduled crontab entry it generates a CSV file containing only the header A B C D E.
Any suggestions?
Thanks!
cron runs your script in an environment that often does not match your expectations.
Check that the PATH is correct and that the script is called from the correct location: ../src is obviously relative, but relative to what?
I find cron scripts to be much more reliable when they use full paths for input, output and programs.
As @umläute points out, cron runs your scripts but does not run the typical initializations that you get when you open a terminal session. This means you should make no assumptions about your environment.
For scripts that may be invoked from the shell and may be invoked from cron I usually add at the beginning something like this:
BIN_DIR=/home/myhome/bin
PATH=$PATH:$BIN_DIR
Also, make sure you do not use relative paths to executables like ../src/thunderstorm. The working directory of the script invoked by cron may not be what you think. You may use $BIN_DIR/../src/thunderstorm. If you want to save typing, add the relevant directories to the PATH.
The same logic goes for all other shell variables.
Doing a good initialization at the beginning of your script will allow you to run it from the shell for testing (or manual execution) and then run it as a cron job too.
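A minimal sketch of such an initialization (all directory names here are assumptions, not taken from the question):
#!/bin/bash
# Cron-safe preamble: explicit PATH and a known working directory.
BIN_DIR=/home/myhome/bin
SRC_DIR=/home/myhome/src
DATA_DIR=/home/myhome/data
PATH=$PATH:$BIN_DIR
export PATH
cd "$DATA_DIR" || exit 1    # do not rely on the working directory cron gives you
# From here on, call executables by absolute path, e.g. "$SRC_DIR/thunderstorm" --mask-file="$DATA_DIR/mask.grib"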
I have a program that is a compiled binary that calls a whole bunch (~300) of child bash scripts. I would just like a way to time each of those child processes to find out which ones take the longest. The problem comes from the fact that I cannot change the compiled binary (by adding for example time foo.sh >> bar_log.txt). I can change the child scripts, so in theory I could move them to a different location and replace them with scripts that just call the ones from the new location with the time command, but again -- there are 300+ with unique names and whatnot. I was wondering if there was a way to call the original binary and get a log of the times to execute the child processes.
Thanks in advance.
EDIT: I also have limited access to additional programs and cannot download/install new ones. For example, I do not have atop.
The environment variable ENV (for POSIX shells; BASH_ENV for bash when not started under the name sh) is parsed even by noninteractive shells as the location of a script to run on startup.
Thus, you can create a file that initializes tracing:
if [ -n "$BASH_VERSION" ]; then
# SECONDS is time since the start of this individual script
PS4=':$BASH_SOURCE:$LINENO:$SECONDS+'
else
# note that calling $(date) slows your code substantially
# nothing to do about it without shell extensions, however.
PS4=':${0}:$(date)+'
fi
set -x
...and set both ENV and BASH_ENV to point to that file.
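For instance, assuming the tracing file above is saved as /usr/local/etc/trace.sh (a made-up path), the compiled binary could be launched like this; the set -x output from every child script then lands on the binary's inherited stderr:
export ENV=/usr/local/etc/trace.sh
export BASH_ENV=/usr/local/etc/trace.sh
/path/to/compiled_binary 2> /tmp/trace.log    # hypothetical binary; trace output collects in the log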
This question has been posted here many times, but it never seems to answer my question.
I have two scripts. The first one contains one or multiple variables, the second script needs those variables. The second script also needs to be able to change the variables in the first script.
I'm not interested in sourcing (where the first script containing the variables runs the second script) or exporting (using environment variables). I just simply want to make sure that the second script can read and change (get and set) the variables available in the first script.
(PS. If I misunderstood how sourcing or exporting works and it applies to my scenario, please let me know. I'm not completely closed to those methods; after what I've read, I just don't think they will do what I want.)
Environment variables are per process. One process can not modify the variables in another. What you're asking for is not possible.
The usual workaround for scripts is sourcing, which works by running both scripts in the same shell process, but you say you don't want to do that.
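For reference, a minimal sketch of what sourcing looks like (file names made up), in case it does fit after all:
## vars.sh -- holds the shared variables
count=0
name="example"
## worker.sh
#!/bin/bash
. /path/to/vars.sh        # vars.sh runs in this same shell process, so its variables are visible here
echo "count is $count"
count=$((count + 1))      # this only changes this process's copy; persisting the new value still means writing it back to a file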
I've also given this some thought. I would use files as variables. For example, in script 1 you would write the variable values to files:
echo $varnum1 > /home/username/scriptdir/vars/varnum1
echo $varnum2 > /home/username/scriptdir/vars/varnum2
And in script 2 you would read the values from the files back into variables:
varnum1=$(cat /home/username/scriptdir/vars/varnum1)
varnum2=$(cat /home/username/scriptdir/vars/varnum2)
Both scripts can read or write the variables at any given time. Theoretically two scripts can try to access the same file at the same time; I'm not sure exactly what would happen, but since each file only contains one value, the time to read or write should be extremely short.
To reduce those times even further you can use a ramdisk.
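On most Linux systems /dev/shm is already a RAM-backed tmpfs, so a sketch of that idea (the directory name is my own) is simply:
mkdir -p /dev/shm/scriptvars
echo $varnum1 > /dev/shm/scriptvars/varnum1          # write: never touches the disk
varnum1=$(cat /dev/shm/scriptvars/varnum1)           # read it back in the other script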
I think this is much better than scripts editing each other (yuck!). Live editing of scripts can mess them up, and it only takes effect once the script is started again after the edit.
Good luck!
So after a long search on the web and a lot of trying, I finally found some kind of a solution. Actually, it's quite simple.
There are some prerequisites though.
The variable you want to set already has to exist in the file you're trying to set it in (I'm guessing the variable can be created as well when it doesn't exist yet, but that's not what I'm going for here).
The file you're trying to set the variable in has to exist (obviously. I'm guessing again this can be done as well, but again, not what I'm going for).
Write
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' FILENAME
So, for example, setting the variable called Var1 to the value 5 in the file test.ini:
sudo sed -i 's/^\(Var1=\).*/\15/' test.ini
Read
sudo grep -Po '(?<=VARNAME=).*' FILENAME
So, for example, reading the variable called Var1 from the file test.ini:
sudo grep -Po '(?<=Var1=).*' test.ini
Just to be sure
I've noticed some issues when the script that sets variables is run from a different folder than the one where the script (and its file) is located.
To make sure this always goes right, you can do one of two things:
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' `dirname $0`/FILENAME
So basically, just put `dirname $0`/ (including the backticks) in front of the filename.
The other option is to put `dirname $0`/ into a variable (again including the backticks), which would look like this:
my_dir=`dirname $0`
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' $my_dir/FILENAME
So basically, if you've got a file named test.ini, which contains this line: Var1= (In my tests, the variable can start empty, and you will still be able to set it. Mileage may vary.), you will be able to set and get the value for Var1
I can confirm that this works (for me), but since you all, with way more experience in scripting than me, didn't come up with this, I'm guessing it's not a great way to do it.
Also, I couldn't tell you the first thing about what's happening in those commands above; I only know they work.
So if I'm doing something stupid, or if you can explain to me what's happening in the commands above, please let me know. I'm very curious to find out what you think of this solution.
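As an illustration only (my own wrapper, not part of the answer above), the two commands can be bundled into get/set helpers; no sudo is needed if the file is writable by the user running the script:
#!/bin/bash
# Hypothetical helpers around the sed/grep commands above; the file lives next to the script.
CFG="$(dirname "$0")/test.ini"
set_var() {    # usage: set_var NAME VALUE (VALUE must not contain / or &)
    sed -i "s/^\($1=\).*/\1$2/" "$CFG"
}
get_var() {    # usage: get_var NAME
    grep -Po "(?<=$1=).*" "$CFG"
}
set_var Var1 5
echo "Var1 is now $(get_var Var1)"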
I have multiple remote sites which run a bash script, initiated by cron (running VERY frequently -- every 10 minutes or less), and one of its jobs is to sync a "scripts" directory. The idea is for me to be able to edit the scripts in one location (a server in a data center) rather than having to log into each remote site and make any edits manually. The question is, what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)
I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? pros? cons? Other options??
if [ -e ${newScriptPath} ]; then
    echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi
One problem I see is that it's possible that if I use "1 minute" (which is "at's" smallest increment), and the script ends, and cron initiates the next job before "at" replaces the script, it could try to replace it during the next run of the script....
Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested with) forces bash to read the entire script into memory, so if it changes during execution there won't be any effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so if anything gets added to the end of the file it won't be executed:
#!/bin/bash
{
# Actual script contents go here
exit
}
Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.
Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.
So I ended up using the "at" utility, but only if the file changed. I have a ".cur" and ".new" version of the script on the local machine. If the MD5 is the same on both, I do nothing. If they are different, I wait until after the main script completes, then force copy the ".new" to the ".cur" in a different script.
I create the same lock file (name) for update_script.sh, so another instance of the first script won't run while I'm changing it.
The relevant part of the main script:
file1=`md5sum < script_cur.sh`    # MD5 of the current script's contents
file2=`md5sum < script_new.sh`    # MD5 of the newly synced copy
if [ "$file1" == "$file2" ] ; then
echo "Files have the same content"
else
echo "Files are different, scheduling update_script.sh at command"
at -f update_script.sh now + 1 minute
fi
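A sketch of the shared lock mentioned above (the lock path and the mkdir-as-lock technique are my own choices, not from the original post):
# Put this at the top of both the main script and update_script.sh; whichever starts
# second backs off. mkdir is atomic, so there is no check-then-create race.
LOCKDIR=/tmp/main_script.lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Another instance holds the lock; exiting." >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT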
I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want this script to keep working even if I add and remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
    $script
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
    [ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && $SCRIPT
done
If your system has run-parts, then that will take care of it for you. You can name your scripts like "10script", "20anotherscript" and they will be run in order, in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems run-parts is a script; on mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
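For example (the --test and --report options are the Debian/GNU ones; other systems' run-parts may differ, so check your man page):
run-parts --test /etc/cron.daily      # list what would be run, without running anything
run-parts --report /etc/cron.daily    # run each eligible script in turn, in lexical order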