bash script rsync itself from remote host - how to?

I have multiple remote sites which run a bash script, initiated by cron (running VERY frequently -- every 10 minutes or less), and one of its jobs is to sync a "scripts" directory. The idea is for me to be able to edit the scripts in one location (a server in a data center) rather than having to log into each remote site and do the edits manually. The question is, what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)
I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? pros? cons? Other options??
if [ -e ${newScriptPath} ]; then
    echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi
One problem I see: since "1 minute" is "at's" smallest increment, if the script ends and cron starts the next job before "at" has replaced the script, the replacement could happen while the next run of the script is already executing....

Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested with) forces bash to read the entire script into memory, so if it changes during execution there won't be any effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so if anything gets added to the end of the file it won't be executed:
#!/bin/bash
{
    # Actual script contents go here

    exit
}
Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.
Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.

So I ended up using the "at" utility, but only if the file changed. I have a ".cur" and ".new" version of the script on the local machine. If the MD5 is the same on both, I do nothing. If they are different, I wait until after the main script completes, then force copy the ".new" to the ".cur" in a different script.
I create the same lock file (name) for the update_script so another instance of the first script won't run while I'm changing it.
The relevant part of the main script:
# compare checksums of the current and the new copy of the script
file1=$(md5sum script_cur.sh | awk '{print $1}')
file2=$(md5sum script_new.sh | awk '{print $1}')
if [ "$file1" == "$file2" ]; then
    echo "Files have the same content"
else
    echo "Files are different, scheduling update_script.sh at command"
    at -f update_script.sh now + 1 minute
fi
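For reference, here is a minimal sketch of what update_script.sh might look like, assuming a mkdir-style lock shared with the main script and the .cur/.new naming described above (the paths and lock name are illustrative, not taken from the original post):
#!/bin/bash
# Hypothetical update_script.sh: grab the same lock the main script uses,
# so the main script won't start while the swap is in progress.
LOCKDIR=/tmp/main_script.lock                 # illustrative lock location
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "main script still running, will try again next cycle" >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT
cp -f /path/to/script_new.sh /path/to/script_cur.sh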

Related

Define an increment variable in a shell script that increments on every cron job

I have searched the forum but couldn't find one. Can we define a variable that only increments on every cron job run?
For example:
I have a script that runs every 5 minutes, so I need a variable that increments based on the cron run.
Say the job ran every 5 minutes for half an hour, so the script got executed 6 times; my counter variable should then be 6.
I'm expecting this in bash/shell.
Apologies if this is a duplicate question.
Tried:
((count+1))
You can do it this way:
create two scripts: counter.sh and increment_counter.sh
add execution of increment_counter.sh in your cron job
add . /path/to/counter.sh into /etc/profile or /etc/bash.bashrc or wherever you need
counter.sh
declare -i COUNTER
COUNTER=1
export COUNTER
increment_counter.sh
#!/bin/bash
echo "COUNTER=\$COUNTER+1" >> /path/to/counter.sh
The shell that you've run the command in has exited; any variables it has set have gone away. You can't use variables for this purpose.
What you need is some sort of permanent data store. This could be a database, or a remote network service, or a variety of things, but by far the simplest solution is to store the value in a file somewhere on disk. Read the file in when the script starts and write out the incremented value afterwards.
You should think about what to do if the file is missing and what happens if multiple copies of the script are run at the same time, and decide whether those are situations you care about at all. If they are, you'll need to add appropriate error handling and locking, respectively, in your script.
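A rough sketch of that file-based approach, with flock guarding against overlapping cron runs (the file locations are illustrative):
#!/bin/bash
# Persist the counter in a file; take an exclusive lock so two cron runs
# cannot read and write the file at the same time.
COUNTER_FILE=/var/tmp/myjob.counter
(
    flock -x 9                                   # wait for the lock on FD 9
    count=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
    count=$((count + 1))
    echo "$count" > "$COUNTER_FILE"
    echo "cron run number $count"
) 9>"$COUNTER_FILE.lock"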
Wouldn't this be a better solution?
...to define a file under /tmp, such that a command like:
echo -n "." > $MyCounterFilename
Tracks the number of times something is invoked, in my particular case of app.:
#!/bin/bash
xterm [ Options ] -T "$(cat $MyCounterFilename | wc -c )" &
echo -n "." > $MyCounterFilename
Because I had to modify the way xterm is invoked for my purposes, and I found that, having opened many of these concurrently, one wastes less time when one knows exactly what is running in each by its number (without having to cycle Alt+Tab and eyeball everything).
NOTE: /etc/profile, or better ~/.profile or ~/.bash_profile, only needs an environment variable defined containing the full path to your counter file.
Anyway, if you don't like the idea above, you could experiment to determine a) when /etc/profile is first executed after the machine is powered on and the system boots, and b) whether /etc/profile is executed at all, and how many times (each time we open an xterm, for instance), and then do the same sort of testing for the files that are less general than the /etc one.

bash file: cronjob vs manual, why are they different?

I have a bash script that runs every five minutes. Among other things it runs PHP scripts that read existing files, and at the end it sends an email. When run manually, it does the whole job. When the cron job runs, it only partially completes the task. The code below:
DIR="/somedir/"
php ${DIR}client.php $DIR
cat ${DIR}alert_list.txt | uniq | while read alert;
do
if [ -s ${DIR}alerts/$alert.txt ]; then
# send the email.
echo "Sending email for..."$alert >> ${DIR}email.txt
DETAILFILE="tools/"$alert
DETAILFILEP=${DETAILFILE}".txt"
php ${DIR}email.php $alert
fi
done
echo 'search completed.'
in 'cronjob mode' it never gets to the 'do' statement. In manual mode it does everything.
Any thoughts?
Thanks a lot!
I found the issue within the PHP scripts. Relative paths to files located elsewhere don't resolve when the script runs automatically. Apparently it runs from a different working directory, so it could not progress because the input files created by the initial PHP script were missing.
Thanks.
The difference between the manual run and the cron run is that under cron the .bash* files are not sourced, so some of the required settings (e.g. PATH) may be different.
Also (replying to your previous comment), the working directory under cron is $HOME, so the needed files are not picked up, whereas a manual run picks them up from the directory you run in.
Hope this helps.
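A common fix, sketched here with an illustrative PATH, is to have the script pin down its own working directory and environment at the top so cron and manual runs behave the same:
#!/bin/bash
# Work from the directory the script lives in, so relative paths resolve
# the same under cron as in a manual run; set PATH explicitly because
# cron provides only a minimal environment.
cd "$(dirname "$0")" || exit 1
export PATH=/usr/local/bin:/usr/bin:/bin
DIR="/somedir/"
php ${DIR}client.php $DIR
# ... rest of the script as above ...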

Slow load time of bash in cygwin

At the moment bash takes about 2 seconds to load. I have run bash with the -x flag, and from the output it seems as though PATH is being set up many times in Cygwin. The funny thing is that I use the same file in a Linux environment and it works fine, without the reload problem. Could the following cause the problem?
if [ `uname -o` = "Cygwin" ]; then
....
fi
As you've noted in your answer, the problem is Cygwin's bash-completion package. The quick and easy fix is to disable bash-completion, and the correct way to do that is to run Cygwin's setup.exe (download it again if you need to) and select to uninstall that package.
The longer solution is to work through the files in /etc/bash_completion.d and disable the ones you don't need. On my system, the biggest culprits for slowing down Bash's load time (mailman, shadow, dsniff and e2fsprogs) all did exactly nothing, since the tools they were created to complete weren't installed.
If you rename a file in /etc/bash_completion.d to have a .bak extension, it'll stop that script being loaded. Having disabled all but a select 37 scripts on one of my systems in that manner, I've cut the average time for bash_completion to load by 95% (6.5 seconds to 0.3 seconds).
In my case the culprit was the Windows domain controller.
I did this to find the issue:
I started with a plain Windows cmd.exe and then typed this:
c:\cygwin\bin\strace.exe c:\cygwin\bin\bash
In my case, I noticed the following sequence:
218 12134 [main] bash 11304 transport_layer_pipes::connect: Try to connect to named pipe: \\.\pipe\cygwin-c5e39b7a9d22bafb-lpc
45 12179 [main] bash 11304 transport_layer_pipes::connect: Error opening the pipe (2)
39 12218 [main] bash 11304 client_request::make_request: cygserver un-available
1404719 1416937 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#1>
495 1417432 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#2>
380 1417812 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#3>
etc...
The key thing was identifying the client_request::make_request: cygserver un-available line. You can see how, after that, Cygwin tries to fetch every single group from Windows, and execution times go crazy.
A quick google revealed what a cygserver is:
https://cygwin.com/cygwin-ug-net/using-cygserver.html
Cygserver is a program which is designed to run as a background service. It provides Cygwin applications with services which require security arbitration or which need to persist while no other Cygwin application is running.
The solution was to run cygserver-config and then net start cygserver to start the Windows service. Cygwin startup times dropped significantly after that.
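For reference, the commands involved are roughly the following (run from a Cygwin terminal with administrative rights):
cygserver-config             # answer the prompts to install the service
net start cygserver          # or: cygrunsrv -S cygserver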
All of the answers refer to older versions of bash_completion, and are irrelevant for recent bash_completion.
Modern bash_completion moved most of the completion files to /usr/share/bash-completion/completions by default, check the path on your system by running
# pkg-config --variable=completionsdir bash-completion
/usr/share/bash-completion/completions
There are many files in there, one for each command, but that is not a problem, since they are loaded on demand the first time you use completion with each command. The old /etc/bash_completion.d is still supported for compatibility, and all files from there are loaded when bash_completion starts.
# pkg-config --variable=compatdir bash-completion
/etc/bash_completion.d
Use this script to check if there are any stale files left in the old dir.
#!/bin/sh
COMPLETIONS_DIR="$(pkg-config --variable=completionsdir bash-completion)"
COMPAT_DIR="$(pkg-config --variable=compatdir bash-completion)"
for file in "${COMPLETIONS_DIR}"/*; do
file="${COMPAT_DIR}/${file#${COMPLETIONS_DIR}/}"
[ -f "$file" ] && printf '%s\n' $file
done
It prints the list of files in compat dir that are also present in the newer (on-demand) completions dir. Unless you have specific reasons to keep some of them, review, backup and remove all of those files.
As a result, the compat dir should be mostly empty.
Now, for the most interesting part - checking why bash startup is slow.
If you just run bash, it will start a non-login, interactive shell - on Cygwin this sources /etc/bash.bashrc and then ~/.bashrc. This most likely doesn't include bash completion, unless you source it from one of the rc files. If you run bash -l (bash --login), start Cygwin Terminal (depends on your cygwin.bat), or log in via SSH, it will start a login, interactive shell - which will source /etc/profile, ~/.bash_profile, and the aforementioned rc files. The /etc/profile script itself sources all executable .sh files in /etc/profile.d.
You can check how long each file takes to source. Find this code in /etc/profile:
for file in /etc/profile.d/*.$1; do
  [ -e "${file}" ] && . "${file}"
done
Back it up, then replace it with this:
for file in /etc/profile.d/*.$1; do
  TIMEFORMAT="%3lR ${file}"
  [ -e "${file}" ] && time . "${file}"
done
Start bash and you will see how long each file took. Investigate files that take a significant amount of time. In my case, it was bash_completion.sh and fzf.sh (fzf is fuzzy finder, a really nice complement to bash_completion). Now the choice is to disable it or investigate further. Since I wanted to keep using fzf shortcuts in bash, I investigated, found the source of the slowdown, optimized it, and submitted my patch to fzf's repo (hopefully it will be accepted).
Now for the biggest time spender - bash_completion.sh. Basically that script sources /usr/share/bash-completion/bash_completion. I backed up that file, then edited it. Near the end there is a for loop that sources all the files in the compat dir - /etc/bash_completion.d. Again, I added TIMEFORMAT and time, and saw which script was causing the slow start. It was zzz-fzf (the fzf package). I investigated and found a subshell ($()) being executed multiple times in a for loop; I rewrote that part without using a subshell, making the script run quickly. I already submitted my patch to fzf's repo.
The biggest reason for all these slowdowns is that fork is not supported by the Windows process model; Cygwin does a great job emulating it, but it's painfully slow compared to a real UNIX. A subshell or a pipeline that does very little work by itself spends most of its execution time fork-ing. E.g. compare the execution times of time echo msg (0.000s on my Cygwin) vs time echo $(echo msg) (0.042s on my Cygwin) - day and night. The echo command itself takes no appreciable time to execute, but creating a subshell is very expensive. On my Linux system, these commands take 0.000s and 0.001s respectively. Many packages Cygwin has are developed by people who use Linux or another UNIX, and they run on Cygwin unmodified. So naturally these devs feel free to use subshells, pipelines and other features wherever convenient, since they don't feel any significant performance hit on their system, but on Cygwin those shell scripts might run tens or hundreds of times slower.
Bottom line, if a shell script works slowly in Cygwin - try to locate the source of fork calls and rewrite the script to eliminate them as much as possible.
E.g. cmd="$(printf "$1" "$2")" (uses one fork for subshell) can be replaced with printf -v cmd "$1" "$2".
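In the same spirit, a few other common rewrites (illustrative only, assuming bash 4 or later) that stay in the current shell instead of forking:
name=${path##*/}      # instead of name=$(basename "$path")
dir=${path%/*}        # instead of dir=$(dirname "$path")
upper=${s^^}          # instead of upper=$(echo "$s" | tr a-z A-Z)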
Boy, this came out really long. Any people still reading up to here are real heroes. Thanks :)
I know this is an old thread, but after a fresh install of Cygwin this week I'm still having this problem.
Instead of handpicking all of the bash_completion files, I used this line to implement #me_and's approach for anything that isn't installed on my machine. This significantly reduced the startup time of bash for me.
In /etc/bash_completion.d, execute the following:
for i in $(ls|grep -v /); do type $i >/dev/null 2>&1 || mv $i $i.bak; done
New answer for an old thread, relating to the PATH of the original question.
Most of the other answers deal with the bash startup. If you're seeing a slow load time when you run bash -i within the shell, those may apply.
In my case, bash -i ran fast, but anytime I opened a new shell (be it in a terminal or in xterm), it took a really long time. If bash -l is taking a long time, it means it's the login time.
There are some approaches at the Cygwin FAQ at https://cygwin.com/faq/faq.html#faq.using.startup-slow but they didn't work for me.
The original poster asked about the PATH, which he diagnosed using bash -x. I too found that although bash -i was fast, bash -xl was slow and showed a lot of information about the PATH.
There was such a ridiculously long Windows PATH that the login process kept on running programs and searching the entire PATH for the right program.
My solution: Edit the Windows PATH to remove anything superfluous. I'm not sure which piece I removed that did the trick, but the login shell startup went from 6 seconds to under 1 second.
YMMV.
My answer is the same as npe's above. But since I just joined, I cannot comment or even upvote it! I hope this post doesn't get deleted, because it offers reassurance for anyone looking for an answer to the same problem.
npe's solution worked for me. There's only one caveat - I had to close all Cygwin processes before I got the best out of it. That includes running Cygwin services, like sshd, and the ssh-agent that I start from my login scripts. Before that, the window for the Cygwin terminal would appear instantly but hang for several seconds before presenting the prompt, and it hung for several seconds upon closing the window. After I killed all processes and started the cygserver service (btw I prefer the Cygwin way - 'cygrunsrv -S cygserver' - over 'net start cygserver'; I don't know if it makes any practical difference), it starts immediately. So thanks to npe again!
I'm on a corporate network with a pretty complicated setup, and it seems that really kills cygwin startup times. Related to npe's answer, I also had to follow some of the steps laid out here: https://cygwin.com/faq/faq.html#faq.using.startup-slow
Another cause on AD client systems is slow DC replies, commonly observed in configurations with remote DC access. The Cygwin DLL queries information about every group you're in to populate the local cache on startup. You may speed up this process a little by caching your own information in local files. Run these commands in a Cygwin terminal with write access to /etc:
getent passwd $(id -u) > /etc/passwd
getent group $(id -G) > /etc/group
Also, set /etc/nsswitch.conf as follows:
passwd: files db
group: files db
This will limit the need for Cygwin to contact the AD domain controller (DC) while still allowing for additional information to be retrieved from DC, such as when listing remote directories.
After doing that plus starting the cygserver my cygwin startup time dropped significantly.
As someone mentioned above, one possible issue is that the PATH environment variable contains too many entries, and Cygwin will search all of them. I prefer to edit /etc/profile directly and just overwrite the PATH variable with the Cygwin-related paths, e.g. PATH="/usr/local/bin:/usr/bin". Add additional paths if you want.
I wrote a Bash function named 'minimizecompletion' for inactivating completion scripts that are not needed.
Completion scripts can add more than one completion specification, or have completion specifications for shell builtins, therefore it is not sufficient to compare script names with executable files found in $PATH.
My solution is to remove all loaded completion specifications, load a completion script, and check whether it has added new completion specifications. Depending on this, it is inactivated by adding .bak to the script file name, or activated by removing .bak. Doing this for all 182 scripts in /etc/bash_completion.d results in 36 active and 146 inactive completion scripts, reducing the Bash start time by 50% (but it should be clear this depends on installed packages).
The function also checks inactivated completion scripts, so it can activate them when they are needed for newly installed Cygwin packages. All changes can be undone with the argument -a, which activates all scripts.
# Enable or disable global completion scripts for speeding up Bash start.
#
# Script files in directory '/etc/bash_completion.d' are inactivated
# by adding the suffix '.bak' to the file name; they are activated by
# removing the suffix '.bak'. After processing, all completion scripts
# are reloaded by calling '/etc/bash_completion'.
#
# usage:  [-a]
#         -a  activate all completion scripts
# output: statistic about total number of completion scripts, number of
#         activated, and number of inactivated completion scripts; the
#         statistic for active and inactive completion scripts can be
#         wrong when 'mv' errors occur
# return: 0   all scripts are checked and completion loading was
#             successful; this does not mean that every call of 'mv'
#             for adding or removing the suffix was successful
#         66  the completion directory or loading script is missing
#
minimizecompletion() {
    local arg_activate_all=${1-}
    local completion_load=/etc/bash_completion
    local completion_dir=/etc/bash_completion.d
    (
        # Needed for executing completion scripts.
        #
        local UNAME='Cygwin'
        local USERLAND='Cygwin'
        shopt -s extglob progcomp
        have() {
            unset -v have
            local PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"
            type -- "$1" &>/dev/null && have='yes'
        }

        # Print initial statistic.
        #
        printf 'Completion scripts status:\n'
        printf '  total:       0\n'
        printf '  active:      0\n'
        printf '  inactive:    0\n'
        printf 'Completion scripts changed:\n'
        printf '  activated:   0\n'
        printf '  inactivated: 0\n'

        # Test the effect of execution for every completion script by
        # checking the number of completion specifications after execution.
        # The completion scripts are renamed depending on the result to
        # activate or inactivate them.
        #
        local completions total=0 active=0 inactive=0 activated=0 inactivated=0
        while IFS= read -r -d '' f; do
            ((++total))
            if [[ $arg_activate_all == -a ]]; then
                [[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
                ((++active))
            else
                complete -r
                source -- "$f"
                completions=$(complete | wc -l)
                if (( $completions > 0 )); then
                    [[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
                    ((++active))
                else
                    [[ $f != *.bak ]] && mv -- "$f" "$f.bak" && ((++inactivated))
                    ((++inactive))
                fi
            fi

            # Update statistic.
            #
            printf '\r\e[6A\e[15C%s' "$total"
            printf '\r\e[1B\e[15C%s' "$active"
            printf '\r\e[1B\e[15C%s' "$inactive"
            printf '\r\e[2B\e[15C%s' "$activated"
            printf '\r\e[1B\e[15C%s' "$inactivated"
            printf '\r\e[1B'
        done < <(find "$completion_dir" -maxdepth 1 -type f -print0)

        if [[ $arg_activate_all != -a ]]; then
            printf '\nYou can activate all scripts with %s.\n' "'$FUNCNAME -a'"
        fi

        if ! [[ -f $completion_load && -r $completion_load ]]; then
            printf 'Cannot reload completions, missing %s.\n' \
                "'$completion_load'" >&2
            return 66
        fi
    )
    complete -r
    source -- "$completion_load"
}
This is an example output and the resulting times:
$ minimizecompletion -a
Completion scripts status:
total: 182
active: 182
inactive: 0
Completion scripts changed:
activated: 146
inactivated: 0
$ time bash -lic exit
logout
real 0m0.798s
user 0m0.263s
sys 0m0.341s
$ time minimizecompletion
Completion scripts status:
total: 182
active: 36
inactive: 146
Completion scripts changed:
activated: 0
inactivated: 146
You can activate all scripts with 'minimizecompletion -a'.
real 0m17.101s
user 0m1.841s
sys 0m6.260s
$ time bash -lic exit
logout
real 0m0.422s
user 0m0.092s
sys 0m0.154s

Disown, nohup or & on Mac OS zsh… not working as hoped

Hi. I'm new to the shell and am working on my first kludged together script. I've read all over the intertube and SO and there are many, MANY places where disown, nohup, & and return are explained but something isn't working for me.
I want a simpler timer. The script asks for user input for the hours, mins., etc., then:
echo "No problem, see you then…"
sleep $[a*3600+b*60+c]
At this point (either on the first or second lines, not sure) I want the script OR the specific command in the script to become a background process. Maybe a daemon? So that the timer will still go off on schedule even if
that terminal window is shut
the terminal app is quit completely
the computer is put to sleep (I realize I probably need some different code still to wake the mac itself)
Also after the "No problem" line I want a return command so that the existing shell window is still useful in the meantime.
The terminal-notifier command (the timer wakeup) is getting called immediately under certain usage of the above (I can't remember which right now), then a second notification at the right time. Using the return command anywhere basically seems to quit the script.
One thing I'm not clear on is whether/how disown, nohup, etc. are applicable to a command process vs. a script process, i.e., will any of them work properly on only a command inside a script (and if not, how to initialize a script as a background process that still asks for input).
Maybe I should use some alternative to sleep?
It isn't necessary to use a separate script or have the script run itself in order to get part of it to run in the background.
A much simpler way is to place the portions that you want to be backgrounded (the sleep and following command) inside of parentheses, and put an ampersand after them.
So the end of the script would look like:
(
    sleep $time
    # Do whatever
) &
This will cause that portion of the code to be run inside a subshell that is placed into the background; since there's no code after it, the first shell will immediately exit, returning control to your interactive shell.
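Since the question also asks about surviving a closed terminal window, here is a hedged sketch combining this with disown (the a/b/c variables and the terminal-notifier call are taken from the question; the rest is illustrative):
echo "No problem, see you then…"
(
    sleep $((a * 3600 + b * 60 + c))
    terminal-notifier -message "Time's up"    # or whatever wakeup command you use
) &
disown    # detach the job so closing the terminal won't send it SIGHUP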
When your script is run, it is actually run by starting a new shell to execute it. In order for you to get your script into the background, you would need to send that shell into the background, which you can't do because you would need to communicate with its parent shell.
What you can do is have your script call itself with a special argument to indicate that it should do the work:
#!/bin/zsh
if [ "$1" != '--run' ] ; then
    echo sending to background
    $0 --run "$@" &
    exit
fi
sleep 1
echo backgrounded "$@"
This script first checks to see if its first argument is --run. If it is not, it calls itself ($0) with that argument plus all the arguments it received ("$@") in the background, and exits. You can use a similar method, performing the test when you want to enter the background, and possibly sending just the data you will need instead of every argument. For example, to send just the number of seconds:
$0 --run $[a*3600+b*60+c] &

Run a list of bash scripts consecutively

I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want this script to keep working as I add and remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
    "$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
[ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && $SCRIPT
done
If your system has run-parts then that will take care of it for you. You can name your scripts like "10script", "20anotherscript" and they will be run in order in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems it's a script. On mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
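As a quick sanity check (assuming a Debian-style run-parts), you can preview what would be executed without actually running anything:
run-parts --test /etc/cron.daily
This prints the scripts run-parts would run, in order, which also tells you whether your file names pass its naming rules.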
