Use of the 'hash' command - bash

I'm working on a small app based on ffmpeg, and I read a tutorial written for Ubuntu that advises running the hash command on the produced executable.
I'm curious about that command: have you ever used it? For what purpose?
When I run it in my source folder (once the app is compiled), I get this:
$ hash
hits command
1 /usr/bin/strip
1 /usr/local/bin/ffmpeg
1 /usr/bin/svn
4 /usr/local/bin/brew
2 /usr/bin/git
1 /bin/rm
1 /bin/cat
1 /usr/bin/ld
1 /bin/sh
4 /usr/bin/man
5 /usr/bin/make
4 /usr/bin/otool
15 /bin/ls
6 /usr/bin/open
2 /usr/bin/clear
Looks like a summary of my bash_history…
When I run it on an executable file, very few lines are displayed, and nothing seems to change in that application:
$ md5 ffserver
MD5 (ffserver) = 2beac612e5efd6ee4a827ae0893ee338
$ hash ffserver
$ md5 ffserver
MD5 (ffserver) = 2beac612e5efd6ee4a827ae0893ee338
When I look for the man, it just says it's a builtin function. Really useful :)
It does work (let's say it exists) on both Linux and Mac OS X.

hash isn't actually your history; it is a bash(1) shell built-in that maintains a hash table of recently executed programs:
Bash uses a hash table to remember the full pathnames of executable files (see hash under SHELL BUILTIN COMMANDS below). A full search of the directories in PATH is performed only if the command is not found in the hash table.
(From bash(1).)
The guide you found may have suggested running it just to see which ffmpeg command would be executed by the next step; perhaps there is an ffmpeg program supplied by the distribution packaging, and they wanted to make sure the newly built one would be executed instead of the distro-supplied one if you just typed ffmpeg at the shell.
That seems a stretch, though, because it would also require having the directory containing the new ffmpeg in the PATH before the distro-provided version, and there's no guarantee of that.
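As a quick illustration of the cache in action (the path shown will vary by system):

```shell
hash -r             # start with an empty table
ls / >/dev/null     # first use walks PATH and caches the result
hash                # now lists something like:  1  /bin/ls
hash -d ls          # forget the cached entry for ls only
hash -r             # or flush the whole table, forcing fresh PATH searches
```

Flushing with `hash -r` (or re-hashing one name with `hash ffmpeg`) is what you'd do after installing a new version of a command into a different directory, so the shell doesn't keep running the stale cached path.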

If you use commands that might not be installed on the system, check for their availability and tell the user what's missing. (From Scripting with style.)
Example:
NEEDED_COMMANDS="sed awk lsof who"
missing_counter=0
for needed_command in $NEEDED_COMMANDS; do
  if ! hash "$needed_command" >/dev/null 2>&1; then
    printf "Command not found in PATH: %s\n" "$needed_command" >&2
    ((missing_counter++))
  fi
done

if ((missing_counter > 0)); then
  printf "Minimum %d commands are missing in PATH, aborting\n" "$missing_counter" >&2
  exit 1
fi
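The snippet above relies on bash. A sketch of a POSIX-portable variant uses `command -v` instead of `hash` (the command list here is just illustrative):

```shell
#!/bin/sh
# POSIX-portable availability check using command -v instead of the hash builtin
missing_counter=0
for needed_command in sed awk ls; do
  if ! command -v "$needed_command" >/dev/null 2>&1; then
    printf "Command not found in PATH: %s\n" "$needed_command" >&2
    missing_counter=$((missing_counter + 1))
  fi
done
if [ "$missing_counter" -gt 0 ]; then
  printf "Minimum %d commands are missing in PATH, aborting\n" "$missing_counter" >&2
  exit 1
fi
```

`command -v` is specified by POSIX, so this runs under dash, ksh, and other minimal shells where `hash`'s behavior differs.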


Bash differences in changes to running scripts

I'm scratching my head about two seemingly different behaviors of bash when editing a running script.
This is not a place to discuss WHY one would do this (you probably shouldn't). I would only like to try to understand what happens and why.
Example A:
$ echo "echo 'echo hi' >> script.sh" > script.sh
$ cat script.sh
echo 'echo hi' >> script.sh
$ chmod +x script.sh
$ ./script.sh
hi
$ cat script.sh
echo 'echo hi' >> script.sh
echo hi
The script edits itself, and the change (extra echo line) is directly executed. Multiple executions lead to more lines of "hi".
Example B:
Create a script infLoop.sh and run it.
$ cat infLoop.sh
while true
do
x=1
echo $x
done
$ ./infLoop.sh
1
1
1
...
Now open a second shell and edit the file changing the value of x. E.g. like this:
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ cat infLoop.sh
while true
do
x=2
echo $x
done
However, we observe that the output in the first terminal is still 1. Doing the same with only one terminal, interrupting infLoop.sh through Ctrl+Z, editing, and then continuing it via fg yields the same result.
The Question
Why does the change in example A have an immediate effect but the change in example B not?
PS: I know there are questions out there showing similar examples but none of those I saw have answers explaining the difference between the scenarios.
There are actually two different reasons that example B is different, either one of which is enough to prevent the change from taking effect. They're due to some subtleties of how sed and bash interact with files (and how unix-like OSes treat files), and might well be different with slightly different programs, etc.
Overall, I'd say this is a good example of how hard it is to understand & predict what'll happen if you modify a file while also running it (or reading etc from it), and therefore why it's a bad idea to do things like this. Basically, it's the computer equivalent of sawing off the branch you're standing on.
Reason 1: Despite the option's name, sed --in-place does not actually modify the existing file in place. What it actually does is create a new file with a temporary name; when it's finished, it deletes the original and renames the new file into its place. The new file has the same name, but it's not actually the same file. You can tell by looking at the file's inode number with ls -li:
$ ls -li infLoop.sh
88 -rwxr-xr-x 1 pi pi 39 Aug 4 22:04 infLoop.sh
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ ls -li infLoop.sh
4073 -rwxr-xr-x 1 pi pi 39 Aug 4 22:05 infLoop.sh
But bash still has the old file open (strictly speaking, it has an open file handle pointing to the old file), so it's going to continue getting the old contents no matter what changed in the new file.
Note that this doesn't apply to all programs that edit files. vim, for example, will rewrite the contents of existing files (unless file permissions forbid it, in which case it switches to the delete&replace method). Appending with >> will always append to the existing file rather than creating a new one.
(BTW, if it seems weird that bash could have a file open after it's been "deleted", that's just part of how unix-like OSes treat their files. Files are not truly deleted until their last directory entry is removed and the last open file handle referring to them is closed. Some programs actually take advantage of this for security by opening(/creating) a file and then immediately "deleting" it, so that the open file handle is the only way to reach the file.)
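A small demonstration of that last point (file name is arbitrary):

```shell
cd "$(mktemp -d)"
echo 'still here' > f
exec 3< f        # open a read handle on f
rm f             # remove its only directory entry; f is now "deleted"
cat <&3          # prints "still here" -- the data lives until fd 3 closes
exec 3<&-        # close the handle; now the blocks are really freed
```

This is exactly the situation bash is in after sed's delete-and-rename: its open handle still points at the old, unlinked file.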
Reason 2: Even if you used a tool that actually modified the existing file in place, bash still wouldn't see the change. The reason for this is that bash reads from the file (parsing as it goes) until it has something it can execute, runs that, then goes back and reads more until it has another executable chunk, etc. It does not go back and re-read the same chunk to see if it's changed, so it'll only ever notice changes in parts of the file it hasn't read yet.
In example B, it has to read and parse the entire while true ... done loop before it can start executing it. Therefore, changes in that part (or possibly before it) will not be noticed once the loop has been read & started executing. Changes in the file after the loop would be noticed after the loop exited (if it ever did).
See my answer to this question (and Don Hatch's comment on it) for more info about this.
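The read-as-you-go behavior can be made visible with a small experiment (file names are illustrative): append a line while the script is blocked in sleep, and bash picks it up, because it hadn't read that far yet.

```shell
cd "$(mktemp -d)"
printf 'sleep 1\n' > demo.sh
bash demo.sh > out.txt &        # bash reads the first line and starts sleeping
sleep 0.2
echo 'echo appended' >> demo.sh # append past the current read position
wait
cat out.txt                     # prints "appended"
```

Had the appended line been placed before the point bash had already read, it would have been ignored, as in example B.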

Unix side-by-side difference with a remote hidden file

I am working on a battery of automatic tests which executes on 2 Unix virtual machines running ksh. Those VMs are independent and have practically the same .profile file. I would like to study their differences by launching:
tkdiff /usr/system/.profile system@{external_IP}:/usr/system/.profile
on the first VM but it doesn't work.
I suppose that directly accessing a hidden file is not possible. Is there a solution to my problem, or maybe an alternative?
If you want to compare different files on two remote machines, I suggest the following procedure:
1. Compare checksums:
First compare the checksums. Use sum, md5sum or sha256sum to compute a hash of the file. If the hashes are the same, the probability that the files are identical is extremely high. You can increase that confidence further by checking the total number of characters, lines and words in the file using wc.
$ file="/usr/system/.profile"
$ md5sum "$file" && wc "$file"
$ ssh user@host "md5sum '$file' && wc '$file'"
2. Run a simple diff:
Run a simple diff using the classic command-line tools. They understand the POSIX convention of treating - as standard input. This way you can do:
$ ssh user@host "cat -- '$file'" | diff "$file" -
Note: with old versions of tkdiff or new versions of svn/git, things can get tricky here due to bugs in tkdiff. It will quickly throw errors of the form svn [XXXX] file .... is not a working copy or file xxxx is not part of a revision control system if one of the files is under version control or you end up in a directory under version control. Stick to diff!
You are using the filename convention "user@host:/path/to/file" for the second argument to tkdiff.
That naming convention is not native to ksh; rather, it is understood by some programs like scp (which can be interactive, e.g. asking for a password for the remote system or other authentication-related questions).
But the tkdiff man page does not mention built-in support for the userid@host:/path/to/file convention, and neither is such support built into ksh.
So you may need two steps: first use scp or similar to copy the remote file locally, then run tkdiff with the local file as one argument and the just-copied file as the other; or arrange to mount part of the other VM's filesystem locally and then use tkdiff with appropriate arguments.
Obviously, both files need to be readable by your userid, or by the user specified in userid@host:/path/to/file, for this to work.
You can run the comparison directly over ssh, using process substitution to feed each remote file (read with cat) to a local tool:
tkdiff <(ssh system@{external_IP}1 'cat /usr/system/.profile') <(ssh system@{external_IP}2 'cat /usr/system/.profile')
In your case, to compare against the local .profile file:
tkdiff /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')
Have you tried the plain diff command line (with the -b and -B options to ignore whitespace changes and blank lines)?
diff -b -B /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')

Slow load time of bash in cygwin

At the moment bash takes about 2 seconds to load. I have run bash with the -x flag, and from the output it seems as though PATH is being loaded many times in Cygwin. The funny thing is that I use the same file in a Linux environment, where it works fine, without the reload problem. Could the following cause the problem?
if [ `uname -o` = "Cygwin" ]; then
....
fi
As you've noted in your answer, the problem is Cygwin's bash-completion package. The quick and easy fix is to disable bash-completion, and the correct way to do that is to run Cygwin's setup.exe (download it again if you need to) and select to uninstall that package.
The longer solution is to work through the files in /etc/bash_completion.d and disable the ones you don't need. On my system, the biggest culprits for slowing down Bash's load time (mailman, shadow, dsniff and e2fsprogs) all did exactly nothing, since the tools they were created to complete weren't installed.
If you rename a file in /etc/bash_completion.d to have a .bak extension, it'll stop that script being loaded. Having disabled all but a select 37 scripts on one of my systems in that manner, I've cut the average time for bash_completion to load by 95% (6.5 seconds to 0.3 seconds).
In my case, the culprit was the Windows domain controller.
I did this to find the issue:
I started a plain Windows cmd.exe and then typed this:
c:\cygwin\bin\strace.exe c:\cygwin\bin\bash
In my case, I noticed a following sequence:
218 12134 [main] bash 11304 transport_layer_pipes::connect: Try to connect to named pipe: \\.\pipe\cygwin-c5e39b7a9d22bafb-lpc
45 12179 [main] bash 11304 transport_layer_pipes::connect: Error opening the pipe (2)
39 12218 [main] bash 11304 client_request::make_request: cygserver un-available
1404719 1416937 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#1>
495 1417432 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#2>
380 1417812 [main] bash 11304 pwdgrp::fetch_account_from_windows: line: <CENSORED_GROUP_ID_#3>
etc...
The key thing was identifying the client_request::make_request: cygserver un-available line. You can see, how after that, cygwin tries to fetch every single group from windows, and execution times go crazy.
A quick google revealed what a cygserver is:
https://cygwin.com/cygwin-ug-net/using-cygserver.html
Cygserver is a program which is designed to run as a background
service. It provides Cygwin applications with services which require
security arbitration or which need to persist while no other cygwin
application is running.
The solution was, to run the cygserver-config and then net start cygserver to start the Windows service. Cygwin startup times dropped significantly after that.
All of the answers refer to older versions of bash_completion, and are irrelevant for recent bash_completion.
Modern bash_completion moved most of the completion files to /usr/share/bash-completion/completions by default, check the path on your system by running
# pkg-config --variable=completionsdir bash-completion
/usr/share/bash-completion/completions
There are many files in there, one for each command, but that is not a problem, since they are loaded on demand the first time you use completion with each command. The old /etc/bash_completion.d is still supported for compatibility, and all files from there are loaded when bash_completion starts.
# pkg-config --variable=compatdir bash-completion
/etc/bash_completion.d
Use this script to check if there are any stale files left in the old dir.
#!/bin/sh
COMPLETIONS_DIR="$(pkg-config --variable=completionsdir bash-completion)"
COMPAT_DIR="$(pkg-config --variable=compatdir bash-completion)"

for file in "${COMPLETIONS_DIR}"/*; do
  file="${COMPAT_DIR}/${file#${COMPLETIONS_DIR}/}"
  [ -f "$file" ] && printf '%s\n' "$file"
done
It prints the list of files in compat dir that are also present in the newer (on-demand) completions dir. Unless you have specific reasons to keep some of them, review, backup and remove all of those files.
As a result, the compat dir should be mostly empty.
Now, for the most interesting part - checking why bash startup is slow.
If you just run bash, it will start a non-login, interactive shell, which on Cygwin sources /etc/bash.bashrc and then ~/.bashrc. This most likely doesn't include bash completion, unless you source it from one of those rc files. If you run bash -l (bash --login), start Cygwin Terminal (depending on your cygwin.bat), or log in via SSH, it will start a login, interactive shell, which sources /etc/profile, ~/.bash_profile, and the aforementioned rc files. The /etc/profile script itself sources all executable .sh files in /etc/profile.d.
You can check how long each file takes to source. Find this code in /etc/profile:
for file in /etc/profile.d/*.$1; do
  [ -e "${file}" ] && . "${file}"
done
Back it up, then replace it with this:
for file in /etc/profile.d/*.$1; do
  TIMEFORMAT="%3lR ${file}"
  [ -e "${file}" ] && time . "${file}"
done
Start bash and you will see how long each file took. Investigate files that take a significant amount of time. In my case, it was bash_completion.sh and fzf.sh (fzf is fuzzy finder, a really nice complement to bash_completion). Now the choice is to disable it or investigate further. Since I wanted to keep using fzf shortcuts in bash, I investigated, found the source of the slowdown, optimized it, and submitted my patch to fzf's repo (hopefully it will be accepted).
Now for the biggest time spender: bash_completion.sh. Basically, that script sources /usr/share/bash-completion/bash_completion. I backed up that file, then edited it. Near the end there is a for loop that sources all the files in the compat dir, /etc/bash_completion.d. Again, I added TIMEFORMAT and time, and saw which script was causing the slow startup. It was zzz-fzf (from the fzf package). I investigated and found a subshell ($()) being executed multiple times in a for loop, rewrote that part without using a subshell, making the script run quickly. I have already submitted my patch to fzf's repo.
The biggest reason for all these slowdowns is that fork is not supported by the Windows process model. Cygwin does a great job emulating it, but it's painfully slow compared to a real UNIX. A subshell or a pipeline that does very little work by itself spends most of its execution time forking. E.g. compare the execution times of time echo msg (0.000s on my Cygwin) vs time echo $(echo msg) (0.042s on my Cygwin): day and night. The echo command itself takes no appreciable time to execute, but creating a subshell is very expensive. On my Linux system, these commands take 0.000s and 0.001s respectively. Many packages Cygwin ships are developed by people who use Linux or another UNIX, where they run unmodified. So naturally these devs feel free to use subshells, pipelines and other features wherever convenient, since they don't feel any significant performance hit on their systems, but on Cygwin those shell scripts might run tens or hundreds of times slower.
Bottom line, if a shell script works slowly in Cygwin - try to locate the source of fork calls and rewrite the script to eliminate them as much as possible.
E.g. cmd="$(printf "$1" "$2")" (which forks a subshell) can be replaced with printf -v cmd "$1" "$2".
Boy, this came out really long. Any people still reading up to here are real heroes. Thanks :)
I know this is an old thread, but after a fresh install of Cygwin this week I'm still having this problem.
Instead of handpicking all of the bash_completion files, I used this line to implement @me_and's approach for anything that isn't installed on my machine. This significantly reduced the startup time of bash for me.
In /etc/bash_completion.d, execute the following:
for i in $(ls|grep -v /); do type $i >/dev/null 2>&1 || mv $i $i.bak; done
New answer for an old thread, relating to the PATH of the original question.
Most of the other answers deal with the bash startup. If you're seeing a slow load time when you run bash -i within the shell, those may apply.
In my case, bash -i ran fast, but anytime I opened a new shell (be it in a terminal or in xterm), it took a really long time. If bash -l is taking a long time, it means it's the login time.
There are some approaches at the Cygwin FAQ at https://cygwin.com/faq/faq.html#faq.using.startup-slow but they didn't work for me.
The original poster asked about the PATH, which he diagnosed using bash -x. I too found that although bash -i was fast, bash -xl was slow and showed a lot of information about the PATH.
There was such a ridiculously long Windows PATH that the login process kept on running programs and searching the entire PATH for the right program.
My solution: Edit the Windows PATH to remove anything superfluous. I'm not sure which piece I removed that did the trick, but the login shell startup went from 6 seconds to under 1 second.
YMMV.
My answer is the same as npe's above, but since I just joined, I cannot comment or even upvote it! I hope this post doesn't get deleted, because it offers reassurance for anyone looking for an answer to the same problem.
npe's solution worked for me. There's only one caveat: I had to close all Cygwin processes before I got the best out of it. That includes running Cygwin services, like sshd, and the ssh-agent that I start from my login scripts. Before that, the Cygwin terminal window would appear instantly but hang for several seconds before presenting the prompt, and it hung for several seconds upon closing the window. After I killed all processes and started the cygserver service (btw I prefer the Cygwin way, 'cygrunsrv -S cygserver', to 'net start cygserver'; I don't know if it makes any practical difference) it starts immediately. So thanks to npe again!
I'm on a corporate network with a pretty complicated setup, and it seems that really kills cygwin startup times. Related to npe's answer, I also had to follow some of the steps laid out here: https://cygwin.com/faq/faq.html#faq.using.startup-slow
Another cause, on AD client systems, is slow DC replies, commonly observed in configurations with remote DC access. The Cygwin DLL queries information about every group you're in to populate the local cache on startup. You may speed this process up a little by caching your own information in local files. Run these commands in a Cygwin terminal with write access to /etc:
getent passwd $(id -u) > /etc/passwd
getent group $(id -G) > /etc/group
Also, set /etc/nsswitch.conf as follows:
passwd: files db
group: files db
This will limit the need for Cygwin to contact the AD domain controller (DC) while still allowing for additional information to be retrieved from DC, such as when listing remote directories.
After doing that plus starting the cygserver my cygwin startup time dropped significantly.
As someone mentioned above, one possible issue is that the PATH environment variable contains too many paths, and Cygwin will search all of them. I prefer to edit /etc/profile directly and overwrite the PATH variable with only Cygwin-related paths, e.g. PATH="/usr/local/bin:/usr/bin". Add additional paths as you need them.
I wrote a Bash function named 'minimizecompletion' for deactivating completion scripts that are not needed.
Completion scripts can add more than one completion specification, or have completion specifications for shell builtins, so it is not sufficient to compare script names against executable files found in $PATH.
My solution is to remove all loaded completion specifications, load a completion script, and check whether it has added new completion specifications. Depending on the result, the script is deactivated by appending .bak to its file name, or activated by removing .bak. Doing this for all 182 scripts in /etc/bash_completion.d results in 36 active and 146 inactive completion scripts, reducing the Bash start time by 50% (but it should be clear this depends on installed packages).
The function also checks deactivated completion scripts, so it can reactivate them when they are needed for newly installed Cygwin packages. All changes can be undone with argument -a, which activates all scripts.
# Enable or disable global completion scripts for speeding up Bash start.
#
# Script files in directory '/etc/bash_completion.d' are inactivated
# by adding the suffix '.bak' to the file name; they are activated by
# removing the suffix '.bak'. After processing all completion scripts
# are reloaded by calling '/etc/bash_completion'
#
# usage: [-a]
# -a activate all completion scripts
# output: statistic about total number of completion scripts, number of
# activated, and number of inactivated completion scripts; the
# statistic for active and inactive completion scripts can be
# wrong when 'mv' errors occur
# return: 0 all scripts are checked and completion loading was
# successful; this does not mean that every call of 'mv'
# for adding or removing the suffix was successful
# 66 the completion directory or loading script is missing
#
minimizecompletion() {
    local arg_activate_all=${1-}
    local completion_load=/etc/bash_completion
    local completion_dir=/etc/bash_completion.d
    (
        # Needed for executing completion scripts.
        #
        local UNAME='Cygwin'
        local USERLAND='Cygwin'
        shopt -s extglob progcomp
        have() {
            unset -v have
            local PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"
            type -- "$1" &>/dev/null && have='yes'
        }

        # Print initial statistic.
        #
        printf 'Completion scripts status:\n'
        printf '  total:        0\n'
        printf '  active:       0\n'
        printf '  inactive:     0\n'
        printf 'Completion scripts changed:\n'
        printf '  activated:    0\n'
        printf '  inactivated:  0\n'

        # Test the effect of execution for every completion script by
        # checking the number of completion specifications after execution.
        # The completion scripts are renamed depending on the result to
        # activate or inactivate them.
        #
        local completions total=0 active=0 inactive=0 activated=0 inactivated=0
        while IFS= read -r -d '' f; do
            ((++total))
            if [[ $arg_activate_all == -a ]]; then
                [[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
                ((++active))
            else
                complete -r
                source -- "$f"
                completions=$(complete | wc -l)
                if (( $completions > 0 )); then
                    [[ $f == *.bak ]] && mv -- "$f" "${f%.bak}" && ((++activated))
                    ((++active))
                else
                    [[ $f != *.bak ]] && mv -- "$f" "$f.bak" && ((++inactivated))
                    ((++inactive))
                fi
            fi

            # Update statistic.
            #
            printf '\r\e[6A\e[15C%s' "$total"
            printf '\r\e[1B\e[15C%s' "$active"
            printf '\r\e[1B\e[15C%s' "$inactive"
            printf '\r\e[2B\e[15C%s' "$activated"
            printf '\r\e[1B\e[15C%s' "$inactivated"
            printf '\r\e[1B'
        done < <(find "$completion_dir" -maxdepth 1 -type f -print0)

        if [[ $arg_activate_all != -a ]]; then
            printf '\nYou can activate all scripts with %s.\n' "'$FUNCNAME -a'"
        fi

        if ! [[ -f $completion_load && -r $completion_load ]]; then
            printf 'Cannot reload completions, missing %s.\n' \
                "'$completion_load'" >&2
            return 66
        fi
    )
    complete -r
    source -- "$completion_load"
}
This is an example output and the resulting times:
$ minimizecompletion -a
Completion scripts status:
total: 182
active: 182
inactive: 0
Completion scripts changed:
activated: 146
inactivated: 0
$ time bash -lic exit
logout
real 0m0.798s
user 0m0.263s
sys 0m0.341s
$ time minimizecompletion
Completion scripts status:
total: 182
active: 36
inactive: 146
Completion scripts changed:
activated: 0
inactivated: 146
You can activate all scripts with 'minimizecompletion -a'.
real 0m17.101s
user 0m1.841s
sys 0m6.260s
$ time bash -lic exit
logout
real 0m0.422s
user 0m0.092s
sys 0m0.154s

bug? in codesign --remove-signature feature

I would like to remove the digital signature from a Mac app that has been signed with codesign. There is an undocumented option to codesign, --remove-signature, which by its name seems to be what I need. However, I can't get it to work. I realize it is undocumented, but I could really use the functionality. Maybe I'm doing something wrong?
codesign -s MyIdentity foo.app
works normally, signing the app
codesign --remove-signature foo.app
does disk activity for several seconds, then says
foo.app: invalid format for signature
and foo.app has grown to 1.9 GB!!! (Specifically, it is the executable in foo.app/Contents/Resources/MacOS that grows, from 1.1 MB to 1.9 GB.)
The same thing happens when I try to sign/unsign a binary support tool instead of a .app.
Any ideas?
Background: This is my own app; I'm not trying to defeat copy protection or anything like that.
I would like to distribute a signed app so that each update to the app won't need user approval to read/write the app's entries in the Keychain. However, some people need to modify the app by adding their own folder to /Resources. If they do that, the signature becomes invalid, and the app can't use its own Keychain entries.
The app can easily detect if this situation has happened. If the app could then remove its signature, everything would be fine. Those people who make this modification would need to give the modified, now-unsigned app permission to use the Keychain, but that's fine with me.
A bit late, but I've updated a public-domain tool called unsign which modifies executables to clear out signatures.
https://github.com/steakknife/unsign
I ran into this issue today. I can confirm that the --remove-signature option to Apple's codesign is (and remains, six years after the OP asked this question) seriously buggy.
For a little background, Xcode (and Apple's command line developer tools) include the codesign utility, but there is not included a tool for removing signatures. However, as this is something that needs to be done in certain situations pretty frequently, there is included a completely undocumented option:
codesign --remove-signature which (one assumes, given lack of documentation) should, well, be fairly self-explanatory but unfortunately, it rarely works as intended without some effort. So I ended up writing a script that should take care of the OP's problem, mine, and similar. If enough people find it here and find it useful, let me know and I'll put it on GitHub or something.
#!/bin/sh
# codesign_remove_for_real -- working `codesign --remove-signature`
# (c) 2018 G. Nixon. BSD 2-clause minus retain/reproduce license requirements.

total_size(){
  # Why is it so damn hard to get a decent recursive file size total in the shell?
  #  - Darwin `du` doesn't do *bytes* (or anything less than 512B blocks)
  #  - `find` size options are completely non-standardized and don't total up
  #  - `stat` is not in POSIX at all, and its options are all over the map...
  #  - ... etc.
  # So: here we just use `find` for a recursive list of *files*, then wc -c
  # and total it all up. Which sucks, because we have to read in every bit
  # of every file. But it's the only truly portable solution, I think.
  find "$@" -type f -print0 | xargs -0n1 cat | wc -c | tr -d '[:space:]'
}

# Get an accurate byte count before we touch anything. Zero would be bad.
size_total=$(total_size "$@") && [ "$size_total" -gt 0 ] || exit 1

recursively_repeat_remove_signature(){
  # `codesign --remove-signature` randomly fails in a few ways.
  # If you're lucky, you'll get an error like:
  #   [...]/codesign_allocate: can't write output file: [...] (Invalid argument)
  #   [...] the codesign_allocate helper tool cannot be found or used
  # or something to that effect, in which case it will return non-zero.
  # So we'll try it (suppressing stderr), and if it fails we'll just try again.
  codesign --remove-signature --deep "$@" 2>/dev/null ||
    recursively_repeat_remove_signature "$@"

  # Unfortunately, the other very common way it fails is to do something? that
  # hugely increases the binary size(s) by a seemingly arbitrary amount and
  # then exits 0. `codesign -v` will tell you that there's no signature, but
  # there are other telltale signs it's not completely removed. For example,
  # if you try stripping an executable after this, you'll get something like
  #   strip: changes being made to the file will invalidate the code signature
  # So, the solution (well, my solution) is to do a file size check; once
  # we're finally getting the same result, we've probably been successful.
  # We could of course also use checksums, but it's much faster this way.
  [ "$size_total" = "$(total_size "$@")" ] ||
    recursively_repeat_remove_signature "$@"

  # Finally, remove any leftover _CodeSignature directories.
  find "$@" -type d -name _CodeSignature -print0 | xargs -0n1 rm -rf
}

signature_info(){
  # Get some info on code signatures. Not really required for anything here.
  for info in "-dr-" "-vv"; do codesign $info "$@"; done # "-dvvvv"
}

# If we want to be "verbose", check the signature before. Un/comment out:
# echo >&2; echo "Current Signature State:" >&2; echo >&2; signature_info "$@"

# First remove any extended attributes and/or ACLs (which are common,
# and tend to interfere with the process here), then run our repeat scheme.
xattr -rc "$@" && chmod -RN "$@" && recursively_repeat_remove_signature "$@"

# Done!
# That's it; at this point, the executable or bundle(s) should successfully
# have truly been stripped of any code-signing. To test, one could
# try re-signing with an ad-hoc signature, then removing it again:
# (un/comment out below, as you see fit)
# echo >&2 && echo "Testing..." >&2; codesign -vvvvs - "$@" &&
#   signature_info "$@" && recursively_repeat_remove_signature "$@"

# And of course, while it sometimes returns false positives, let's at least:
codesign -dvvvv "$@" || echo "Signature successfully removed!" >&2 && exit 0
Here's the source for codesign which lists all options, including those not covered by the command-line -h and man page.
Also, here is Apple's tech note on recent changes in how code-signing works
I agree that there's something strange going on when you run --remove-signature.
However, instead of trying to un-code-sign, you should change the way your users put extra files into the app. Designate a certain path, usually
~/Library/Application Support/Name_Of_Your_App/
or maybe
~/Library/Application Support/Name_Of_Your_App/Resources/
and ask the user to put extra files there. Then, in your code, always check that directory in addition to the files in Resources when you need to read a file.
On a second reading of this question, another thought: perhaps a better approach to accomplish what the ultimate goal of the question is would be not to remove the signatures, but to have users (via a script/transparently) re-sign the app after modification, using an ad-hoc signature. That is, codesign -fs - [app], I believe. See https://apple.stackexchange.com/questions/105588/anyone-with-experience-in-hacking-the-codesigning-on-os-x

How can I create a temp file with a specific extension in bash?

I'm writing a shell script, and I need to create a temporary file with a certain extension.
I've tried
tempname=`basename $0`
TMPPS=`mktemp /tmp/${tempname}.XXXXXX.ps` || exit 1
and
tempname=`basename $0`
TMPPS=`mktemp -t ${tempname}` || exit 1
neither work, as the first creates a file name with a literal "XXXXXX" and the second doesn't give an option for an extension.
I need the extension so that preview won't refuse to open the file.
Edit: I ended up going with a combination of pid and mktemp in what I hope is secure:
tempname=`basename $0`
TMPTMP=`mktemp -t ${tempname}` || exit 1
TMPPS="$TMPTMP.$$.ps"
mv $TMPTMP $TMPPS || exit 1
It is vulnerable to a denial of service attack, but I don't think anything more severe.
Recent versions of mktemp offer --suffix:
--suffix=SUFF
append SUFF to TEMPLATE. SUFF must not contain slash. This option is implied if TEMPLATE does not end in X.
$ mktemp /tmp/banana.XXXXXXXXXXXXXXXXXXXXXXX.mp3
/tmp/banana.gZHvMJfDHc2CTilINNuq2P0.mp3
I believe this requires coreutils >= 8 or so.
If you create a temp file without a suffix (older mktemp versions) and rename it to append one, the least you can do is check whether the target file already exists. That doesn't protect you from race conditions, but it does protect you when such a file has already been there for a while.
How about this one:
echo $(mktemp $TMPDIR/$(uuidgen).txt)
All of these solutions except --suffix (which isn't always available) have a race condition. You can eliminate the race condition by using mktemp -d to create a directory, then putting your file in there.
mydir=`mktemp -d`
touch "$mydir"/myfile.ps
macOS Sierra 10.12 doesn't have the --suffix option, so I suggest a workaround:
tempname=`basename $0`
TMPPS_PREFIX=$(mktemp "${TMPDIR:-/tmp/}${tempname}.XXXXXX")
TMPPS=$(mktemp "${TMPPS_PREFIX}.ps")
rm ${TMPPS_PREFIX}
echo "Your temp file: ${TMPPS}"
Here's a more portable solution (POSIX-compatible):
temp=$(mktemp -u).ps
: >"$temp"
The first line runs mktemp without creating the file, then sets temp to the generated filename with .ps appended. The second line then creates it; touch "$temp" can be used instead if you prefer it.
EDIT: Note that the file doesn't get mktemp's restrictive permissions, since it is created by shell redirection under your current umask. If you need it to be unreadable by other users, use chmod (or a restrictive umask) to set that manually.
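For instance, creating it under a restrictive umask (in a subshell, so the rest of the script keeps its normal umask) approximates mktemp's owner-only permissions:

```shell
temp=$(mktemp -u).ps        # note: -u still leaves the race window discussed above
(umask 077; : >"$temp")     # create the file readable/writable by the owner only
ls -l "$temp"               # mode is -rw-------
rm -f "$temp"
```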
For macOS -
brew install coreutils
Then -
gmktemp --suffix=.ps
On macOS 13, mktemp still doesn't support --suffix, so I rename the file after creating it; that seems to work fine.
$ tmp=`mktemp -t prefix`
$ mv $tmp $tmp.txt
$ ls $tmp.txt
/var/folders/..../T/prefix.xxxx.txt
macOS Ventura 13.1 still doesn't support --suffix, but based on Roman Chernyatchik's answer I'm using an inline solution that I think is the simplest possible:
mktemp "$(mktemp -t $tempname).ps"
