How does CMake detect changed files - makefile

I have a C/C++ CMake project which works fine. However, I sometimes (re)build on a remote cluster whose clock is slightly off. This machine runs Linux and I'm building using make. I'm wondering if there is some make/CMake way to change how changes to the files are detected, e.g. via MD5 or diff rather than timestamps. Otherwise I guess I'd either have to endure the constant make clean / make -j cycle, or have to change my local time every time I'm working with that particular server.
I was poking through the CMake documentation to see if there is a flag which would change this behaviour, but found none. Also, how would this work on platforms which have no RTC (e.g. a Raspberry Pi)?

Right, so knowing that CMake/make does not do what I want, and not wanting the hassle of synchronizing my machine's clock to the target, I came up with the following:
#!/bin/bash
touch src_hash.md5
echo -n make "$@" > mymake.sh
find `pwd`/../src `pwd`/../include -print0 |
while IFS= read -r -d $'\0' f; do
    if [[ ! -d "$f" ]]; then
        MD5=`md5sum "$f" | awk -v fn="$f" '{ print "\"" fn "\" " $1; }'`
        echo $MD5 >> src_hash.md5.new
        OLDMD5=`grep -e "^\"$f\"" src_hash.md5`
        if [[ "$OLDMD5" == "" ]]; then
            echo "$MD5 -- [a new file]"
            continue # a new file, make can handle that well on its own
        fi
        HASH=`echo $MD5 | awk '{ print $2; }'`
        OLDHASH=`echo $OLDMD5 | awk '{ print $2; }'`
        if [[ "$HASH" != "$OLDHASH" ]]; then
            echo "$MD5 -- changed from $OLDHASH"
            echo -n " \"--what-if=${f}\"" >> mymake.sh
            # this is running elsewhere, can't pass stuff via variables
        fi
    fi
done
touch src_hash.md5.new
mv src_hash.md5.new src_hash.md5
echo using: `cat mymake.sh`
echo >> mymake.sh # add a newline
chmod +x mymake.sh
./mymake.sh
rm -f mymake.sh
This keeps a list of source-file hashes in src_hash.md5, and each time it runs it compares the current files to those hashes (updating the list accordingly).
At the end, it calls make, passing along any arguments you give the script (such as -j). It makes use of the --what-if= switch, which tells make to act as if the given file had changed - that way the dependencies of build targets on sources/headers are handled elegantly.
You might also want to pass the paths to the source/include directories as arguments, so that they aren't hardcoded inside.
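Stripped of the make integration, the core idea (keep a manifest of hashes, report files whose hash no longer matches) can be sketched in isolation. The function and file names below are invented for the demo, and unlike the full script it does not quote-protect filenames with spaces:

```shell
#!/bin/bash
# Sketch of the manifest idea: store "hash filename" lines, report files
# whose hash changed since the last run. Names are illustrative only.
detect_changes() {
    local manifest=$1; shift
    local f hash old
    touch "$manifest"
    rm -f "$manifest.new"
    for f in "$@"; do
        hash=$(md5sum "$f" | awk '{ print $1 }')
        old=$(awk -v fn="$f" '$2 == fn { print $1 }' "$manifest")
        if [[ -n "$old" && "$hash" != "$old" ]]; then
            echo "$f changed"   # candidate for --what-if
        fi
        printf '%s %s\n' "$hash" "$f" >> "$manifest.new"
    done
    mv "$manifest.new" "$manifest"
}
```

On the first run every file is new and nothing is reported; on later runs only files whose contents (not timestamps) differ show up.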
Here is one more iteration of the script, using touch to change and restore the file timestamps, for situations when make is extra stubborn about not rebuilding anything:
#!/bin/bash
if [[ ! -d ../src ]]; then
    >&2 echo "error: ../src is not a directory or does not exist"
    exit -1
fi
if [[ ! -d ../include ]]; then
    >&2 echo "error: ../include is not a directory or does not exist"
    exit -1
fi
echo "Scanning for changed files in ../src and ../include"
touch src_hash.md5 # in case this runs for the first time
rm -f mymaketouch.sh
rm -f mymakerestore.sh
touch mymaketouch.sh
touch mymakerestore.sh
echo -n make "$@" > mymake.sh
CWD="`pwd`"
find ../src ../include -print0 |
while IFS= read -r -d $'\0' f; do
    if [[ ! -d "$f" ]]; then
        fl=`readlink -f "$CWD/$f"`
        MD5=`md5sum "$fl" | awk -v fn="$fl" '{ print "\"" fn "\" " $1; }'`
        HASH=`echo $MD5 | awk '{ print $2; }'`
        echo $MD5 >> src_hash.md5.new
        OLDMD5=`grep -e "^\"$fl\"" src_hash.md5`
        OLDHASH=`echo $OLDMD5 | awk '{ print $2; }'`
        if [[ "$OLDMD5" == "" ]]; then
            echo "$f $HASH -- [a new file]"
            continue # a new file, make can handle that well on its own
        fi
        if [[ "$HASH" != "$OLDHASH" ]]; then
            echo "$f $HASH -- changed from $OLDHASH"
            echo "touch -m \"$fl\"" >> mymaketouch.sh # will touch it and change the modification time
            stat "$fl" -c "touch -m -d \"%y\" \"%n\"" >> mymakerestore.sh # will restore it later on so that we do not run into problems when copying newer files from a different system
            echo -n " \"--what-if=$fl\"" >> mymake.sh
            # this is running elsewhere, can't pass stuff via variables
        fi
    fi
done
echo using: `cat mymake.sh`
echo >> mymake.sh # add a newline
echo 'exit $?' >> mymake.sh
chmod +x mymaketouch.sh
chmod +x mymakerestore.sh
chmod +x mymake.sh
control_c() # run if the user hits ctrl+c
{
    echo -en "\nrestoring modification times\n"
    ./mymakerestore.sh
    rm -f mymaketouch.sh
    rm -f mymakerestore.sh
    rm -f mymake.sh
    rm -f src_hash.md5.new
    exit -1
}
trap control_c SIGINT
./mymaketouch.sh
./mymake.sh
RETVAL=$?
./mymakerestore.sh
rm -f mymaketouch.sh
rm -f mymakerestore.sh
rm -f mymake.sh
touch src_hash.md5.new # in case there was nothing new
mv src_hash.md5.new src_hash.md5
# do it now in case someone hits ctrl+c mid-build and not all files are built
exit $RETVAL
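The touch-and-restore trick in that version boils down to a few lines (GNU stat and touch assumed; the file name is just for the demo):

```shell
#!/bin/bash
# Record a file's mtime, bump it so make would consider the file changed,
# then restore the recorded timestamp afterwards (GNU coreutils).
cd "$(mktemp -d)"
echo data > example.txt
touch -m -d "2001-02-03 04:05:06" example.txt   # give it a known old mtime
saved=$(stat -c %y example.txt)                 # remember it, as the script does
touch -m example.txt                            # now the file looks freshly modified
touch -m -d "$saved" example.txt                # put the old timestamp back
stat -c %y example.txt                          # back to the 2001 date
```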
Or even run the hashing in parallel, in case you are building a large project:
#!/bin/bash
if [[ ! -d ../src ]]; then
    >&2 echo "error: ../src is not a directory or does not exist"
    exit -1
fi
if [[ ! -d ../include ]]; then
    >&2 echo "error: ../include is not a directory or does not exist"
    exit -1
fi
echo "Scanning for changed files in ../src and ../include"
touch src_hash.md5 # in case this runs for the first time
rm -f mymaketouch.sh
rm -f mymakerestore.sh
touch mymaketouch.sh
touch mymakerestore.sh
echo -n make "$@" > mymake.sh
CWD="`pwd`"
rm -f src_hash.md5.new # will use ">>", make sure to remove the file
find ../src ../include -print0 |
while IFS= read -r -d $'\0' f; do
    if [[ ! -d "$f" ]]; then
        fl="$CWD/$f"
        (echo `md5sum "$f" | awk -v fn="$fl" '{ print "\"" fn "\" " $1; }'` ) & # parallel, echo is atomic (http://stackoverflow.com/questions/9926616/is-echo-atomic-when-writing-single-lines)
        # run in parallel (remove the ampersand if you run into trouble)
    fi
done >> src_hash.md5.new # >> is atomic but > wouldn't be
# this is fast
cat src_hash.md5 > src_hash.md5.diff
echo separator >> src_hash.md5.diff
cat src_hash.md5.new >> src_hash.md5.diff
# make a compound file for awk (could also read the other file in awk but this seems simpler right now)
# print a verbose diff for the user only
cat src_hash.md5.diff | awk 'BEGIN { FS="\""; had_sep = 0; }
{
    if(!had_sep && $1 == "separator")
        had_sep = 1;
    else {
        sub(/[[:space:]]/, "", $3);
        if(!had_sep)
            old_hashes[$2] = $3;
        else {
            f = $2;
            if((idx = index(f, "../")) != 0)
                f = substr(f, idx, length(f) - idx + 1);
            if($2 in old_hashes) {
                if(old_hashes[$2] != $3)
                    print "\"" f "\" " $3 " -- changed from " old_hashes[$2];
            } else
                print "\"" f "\" -- a new file " $3;
        }
    }
}'
# run again, handling the files that changed
cat src_hash.md5.diff | awk 'BEGIN { FS="\""; had_sep = 0; }
{
    if(!had_sep && $1 == "separator")
        had_sep = 1;
    else {
        sub(/[[:space:]]/, "", $3);
        if(!had_sep)
            old_hashes[$2] = $3;
        else {
            if($2 in old_hashes) {
                if(old_hashes[$2] != $3)
                    printf("%s%c", $2, 0); # use \0 as a line separator for the loop below
            }
        }
    }
}' |
while IFS= read -r -d $'\0' fl; do
    echo "touch -m \"$fl\"" >> mymaketouch.sh # will touch it and change the modification time
    stat "$fl" -c "touch -m -d \"%y\" \"%n\"" >> mymakerestore.sh # will restore it later on so that we do not run into problems when copying newer files from a different system
    echo -n " \"--what-if=$fl\"" >> mymake.sh
    # this is running elsewhere, can't pass stuff via variables
done
rm -f src_hash.md5.diff
echo using: `cat mymake.sh`
echo >> mymake.sh # add a newline
echo 'exit $?' >> mymake.sh
chmod +x mymaketouch.sh
chmod +x mymakerestore.sh
chmod +x mymake.sh
control_c() # run if the user hits ctrl+c
{
    echo -en "\nrestoring modification times\n"
    ./mymakerestore.sh
    rm -f mymaketouch.sh
    rm -f mymakerestore.sh
    rm -f mymake.sh
    rm -f src_hash.md5.new
    exit -1
}
trap control_c SIGINT
./mymaketouch.sh
./mymake.sh
RETVAL=$?
./mymakerestore.sh
rm -f mymaketouch.sh
rm -f mymakerestore.sh
rm -f mymake.sh
touch src_hash.md5.new # in case there was nothing new
mv src_hash.md5.new src_hash.md5
# do it now in case someone hits ctrl+c mid-build and not all files are built
exit $RETVAL


Delete empty files - Improve performance of logic

I need to find & remove empty files. The definition of an empty file in my use case is a file which has zero lines.
I did try testing whether the file is empty; however, this behaves strangely, in that even though the file is empty it isn't detected as such.
Hence, the best thing I could write up is the script below, which is way too slow given it has to test several hundred thousand files:
#!/bin/bash
LOOKUP_DIR="/path/to/source/directory"
cd ${LOOKUP_DIR} || { echo "cd failed"; exit 0; }
for fname in $(realpath */*)
do
if [[ $(wc -l "${fname}" | awk '{print $1}') -eq 0 ]]
then
echo "${fname}" is empty
rm -f "${fname}"
fi
done
Is there a better way to do what I'm after, or alternatively, can the above logic be re-written in a way that brings better performance?
Your script is slow because wc reads every file to the end, which is not needed for your purpose. This might be what you're looking for:
#!/bin/bash
lookup_dir='/path/to/source/directory'
cd "$lookup_dir" || exit
for file in *; do
    if [[ -f "$file" && -r "$file" && ! -L "$file" ]]; then
        read < "$file" || echo rm -f -- "$file"
    fi
done
Drop the echo after making sure it works as intended.
Another version, calling the rm only once, could be:
#!/bin/bash
lookup_dir='/path/to/source/directory'
cd "$lookup_dir" || exit
for file in *; do
    if [[ -f "$file" && -r "$file" && ! -L "$file" ]]; then
        read < "$file" || files_to_be_deleted+=("$file")
    fi
done
rm -f -- "${files_to_be_deleted[@]}"
Explanation:
The core logic is in the line
read < "$file" || rm -f -- "$file"
The read < "$file" command attempts to read a line from the $file. If it succeeds, that is, a line is read, then the rm command on the right-hand side of the || won't be executed (that's how the || works). If it fails then the rm command will be executed. In any case, at most one line will be read. This has great advantage over the wc command because wc would read the whole file.
if ! read < "$file"; then rm -f -- "$file"; fi
could be used instead. The two lines are equivalent.
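A quick demonstration of that behaviour (file names invented for the demo):

```shell
#!/bin/bash
# read succeeds only if it can pull a line from the file, so its exit
# status doubles as an "is this file non-empty?" test.
cd "$(mktemp -d)"
: > empty.txt                   # zero bytes
printf 'one line\n' > full.txt  # one complete line
check() {
    if read -r < "$1"; then
        echo "$1: has a line"
    else
        echo "$1: empty"
    fi
}
check empty.txt   # -> empty.txt: empty
check full.txt    # -> full.txt: has a line
```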
To check whether "$fname" is a file and whether or not it is empty, use [ -s "$fname" ]:
#!/usr/bin/env sh
LOOKUP_DIR="/path/to/source/directory"
for fname in "$LOOKUP_DIR"/*/*; do
    if ! [ -s "$fname" ]; then
        echo "${fname}" is empty
        # remove echo when output is what you want
        echo rm -f "${fname}"
    fi
done
See: help test:
File operators:
...
-s FILE True if file exists and is not empty.
Yet another method
wc -l ~/tmp/* 2>/dev/null | awk '$1 == 0 {print $2}' | xargs echo rm
This will break if any of your files have whitespace in the name.
To work around that, with awk still
wc -l ~/tmp/* 2>/dev/null \
| awk 'sub(/^[[:blank:]]+0[[:blank:]]+/, "")' \
| xargs echo rm
This works because the sub function returns the number of substitutions made, which can be treated as a boolean zero/not-zero condition.
Remove the echo to actually delete the files.
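To see the sub-as-condition trick on its own (the sample lines mimic wc -l output, including a filename with a space):

```shell
# sub() returns the number of substitutions it made, and awk prints a
# line when the pattern-as-expression is non-zero - so only lines whose
# leading "   0 " count was stripped survive, filename intact.
printf '      0 empty file.txt\n      3 notes.txt\n' |
awk 'sub(/^[[:blank:]]+0[[:blank:]]+/, "")'
# -> empty file.txt
```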

Looping through each file in directory - bash

I'm trying to perform a certain operation on each file in a directory, but there is a problem with the order it goes through them; it should do one file at a time. The long line (unzipping, grepping, zipping) works fine on a single file outside the script, so there is a problem with the loop. Any ideas?
The script should grep through each zipped file and look for word1 or word2. If at least one of them exists, then:
unzip file
grep word1 and word2 and save it to file_done
remove unzipped file
zip file_done to /donefiles/ with original name
remove file_done from original directory
#!/bin/bash
for file in *.gz; do
counter=$(zgrep -c 'word1\|word2' $file)
if [[ $counter -gt 0 ]]; then
echo $counter
for file in *.gz; do
filenoext=${file::-3}
filedone=${filenoext}_done
echo $file
echo $filenoext
echo $filedone
gunzip $file | grep 'word1\|word2' $filenoext > $filedone | rm -f $filenoext | gzip -f -c $filedone > /donefiles/$file | rm -f $filedone
done
else
echo "nothing to do here"
fi
done
The code snippet you've provided has a few problems, e.g. an unneeded nested for loop and an erroneous pipeline
(the whole line gunzip $file | grep 'word1\|word2' $filenoext > $filedone | rm -f $filenoext | gzip...).
Note also that your code will work correctly only if the *.gz files don't have spaces (or special characters) in their names.
Also, zgrep -c 'word1\|word2' will match substrings too, e.g. strings like line_starts_withword1_orword2_.
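The substring issue is easy to reproduce, and \b word boundaries (supported by GNU grep's -E) avoid it:

```shell
# 'password1' contains word1 as a substring, so the plain pattern
# matches it; anchoring with \b matches whole words only.
cd "$(mktemp -d)"
printf 'password1\nword1\n' > sample.txt
grep -c -E 'word1|word2' sample.txt          # -> 2 (substring matches too)
grep -c -E '\bword1\b|\bword2\b' sample.txt  # -> 1 (whole word only)
```

Note that grep -c counts matching lines, not individual occurrences.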
Here is the working version of the script:
#!/bin/bash
for file in *.gz; do
    counter=$(zgrep -c -E 'word1|word2' $file) # counter = number of lines in $file matching word1/word2
    if [[ $counter -gt 0 ]]; then
        name=$(basename $file .gz)
        zcat $file | grep -E 'word1|word2' > ${name}_done
        gzip -f -c ${name}_done > /donefiles/$file
        rm -f ${name}_done
    else
        echo 'nothing to do here'
    fi
done
What we can improve here is:
since we're unzipping the file anyway to check for the presence of word1|word2, we may unzip into a temp file and avoid double-unzipping
we don't need to count how many occurrences of word1 or word2 are inside the file; we may just check for their presence
${name}_done can be a temp file that is cleaned up automatically
we can use a while loop to handle file names with spaces
#!/bin/bash
tmp=`mktemp /tmp/gzip_demo.XXXXXX` # create temp file for us
trap "rm -f \"$tmp\"" EXIT INT TERM QUIT HUP # clean up $tmp upon exit or termination
find . -maxdepth 1 -mindepth 1 -type f -name '*.gz' | while read f; do
    # quotes around $f are now required in case of spaces in it
    s=$(basename "$f") # short name w/o dir
    gunzip -f -c "$f" | grep -P '\b(word1|word2)\b' > "$tmp"
    [ -s "$tmp" ] && gzip -f -c "$tmp" > "/donefiles/$s" # create archive if anything is found
done
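The mktemp + trap pair is worth noting on its own: the temp file disappears no matter how the script exits. Running the pattern in a child shell makes that visible:

```shell
#!/bin/bash
# Run a child shell that registers an EXIT trap for its temp file;
# by the time the child has exited, the file is already gone.
tmpname=$(bash -c '
    tmp=$(mktemp /tmp/gzip_demo.XXXXXX)
    trap "rm -f \"$tmp\"" EXIT INT TERM QUIT HUP
    echo "$tmp"    # hand the name back so we can check afterwards
')
[ ! -e "$tmpname" ] && echo "temp file was cleaned up"
```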
It looks like you have an inner loop inside the outer one:
#!/bin/bash
for file in *.gz; do
    counter=$(zgrep -c 'word1\|word2' $file)
    if [[ $counter -gt 0 ]]; then
        echo $counter
        for file in *.gz; do #<<< HERE
            filenoext=${file::-3}
            filedone=${filenoext}_done
            echo $file
            echo $filenoext
            echo $filedone
            gunzip $file | grep 'word1\|word2' $filenoext > $filedone | rm -f $filenoext | gzip -f -c $filedone > /donefiles/$file | rm -f $filedone
        done
    else
        echo "nothing to do here"
    fi
done
The inner loop goes through all the files in the directory again whenever one of them contains word1 or word2. You probably want this:
#!/bin/bash
for file in *.gz; do
    counter=$(zgrep -c 'word1\|word2' $file)
    if [[ $counter -gt 0 ]]; then
        echo $counter
        filenoext=${file::-3}
        filedone=${filenoext}_done
        echo $file
        echo $filenoext
        echo $filedone
        gunzip $file | grep 'word1\|word2' $filenoext > $filedone | rm -f $filenoext | gzip -f -c $filedone > /donefiles/$file | rm -f $filedone
    else
        echo "nothing to do here"
    fi
done

Bash script functions overflowing into others

Morning,
I'm trying to consolidate a number of smaller scripts into a single large bash script where everything is called via functions.
Most functions work fine (e.g. script.sh update); however, running script.sh status, for example, will start giving errors related to the docker() function.
I've corrected all the errors I can via shellcheck and tried adding return to each function, but it's still pulling in the wrong functions.
Here is the script in full:
#!/bin/bash
# variables and arguments
main() {
export XZ_OPT=-e9
distro=$(awk -F'"' '/^NAME/ {print $2}' /etc/os-release)
username=$(grep home /etc/passwd | sed 1q | cut -f1 -d:)
directory_home="/home/$username"
directory_script="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
rclone_command="rclone --config=$directory_script/rclone.conf"
docker_restart=("flexget" "cbreader" "syncthing")
args "$@"
}
args() {
action=$1
case "$action" in
archive) archive ;;
borg) borg ;;
docker) docker ;;
logger) logger ;;
magnet) magnet ;;
payslip) payslip ;;
permissions) permissions ;;
rclone) rclone_mount ;;
sshfs) sshfs_mount ;;
status) status ;;
sync) sync ;;
update) update ;;
*) echo "$0" && available_options ;;
esac
}
# functions
function available_options() {
sed -n '/^\tcase/,/\tesac$/p' "$0" | cut -f1 -d")" | sed '1d;$d' | sort | tr -d "*" | xargs
return
}
function plural() {
if (("$1">1))
then
echo s
fi
return
}
function dir_find() {
find "$directory_home" -maxdepth 3 -mount -type d -name "$1"
return
}
function domain_find() {
file_config_traefik="$(dir_find config)/traefik/traefik.toml"
awk -F'"' '/domain/ {print $2}' "$file_config_traefik"
return
}
function git_config() {
git config --global user.email "$username@$(domain_find)"
git config --global user.name "$username"
git config pack.windowMemory 10m
git config pack.packSizeLimit 20m
return
}
function delete_docker_env() {
if [[ -f "$directory_script/.env" ]]
then
echo Deleting existing env file
rm "$directory_script/.env"
fi
return
}
function delete_docker_compose() {
if [[ -f "$directory_script/docker-compose.yml" ]]
then
echo Deleting existing env file
rm "$directory_script/docker-compose.yml"
fi
return
}
function write_docker_env() {
{
printf "NAME=%s\\n" "$username"
printf "PASS=%s\\n" "$docker_password"
printf "DOMAIN=%s\\n" "$(domain_find)"
printf "PUID=%s\\n" "$(id -u)"
printf "PGID=%s\\n" "$(id -g)"
printf "TZ=%s\\n" "$(cat /etc/timezone)"
printf "HOMEDIR=%s\\n" "$directory_home"
printf "CONFDIR=%s\\n" "$(dir_find config)"
printf "DOWNDIR=%s\\n" "$(dir_find downloads)"
printf "POOLDIR=%s\\n" "$(dir_find media)"
printf "SAVEDIR=%s\\n" "$(dir_find saves)"
printf "SYNCDIR=%s\\n" "$(dir_find vault)"
printf "WORKDIR=%s\\n" "$(dir_find paperwork)"
printf "RCLONE_REMOTE_MEDIA=%s\\n" "$(rclone_remote media)"
printf "RCLONE_REMOTE_SAVES=%s\\n" "$(rclone_remote saves)"
printf "RCLONE_REMOTE_WORK=%s\\n" "$(rclone_remote work)"
} > "$directory_script/.env"
return
}
function payslip_config_write() {
{
printf "[retriever]\\n"
printf "type = SimpleIMAPSSLRetriever\\n"
printf "server = imap.yandex.com\\n"
printf "username = %s\\n" "$payslip_username"
printf "port = 993\\n"
printf "password = %s\\n\\n" "$payslip_password"
printf "[destination]\\n"
printf "type = Maildir\\n"
printf "path = %s/\\n" "$directory_temp"
} > getmailrc
return
}
function payslip_decrypt() {
cd "$(dir_find paperwork)" || exit
for i in *pdf
do
fileProtected=0
qpdf "$i" --check || fileProtected=1
if [ $fileProtected == 1 ]
then
qpdf --password=$payslip_encryption --decrypt "$i" "decrypt-$i" && rm "$i"
fi
done
return
}
function rclone_remote() {
$rclone_command listremotes | grep "$1"
return
}
function check_running_as_root() {
if [ "$EUID" -ne 0 ]
then
echo "Please run as root"
exit 0
fi
return
}
function include_credentials() {
source "$directory_script/credentials.db"
return
}
function archive() {
rclone_remote=$(rclone_remote backups)
working_directory=$(dir_find backups)/archives
echo "$working_directory"
if [ -z "$*" ]
then
echo Creating archives...
# build folder array?
cd "$(mktemp -d)" || exit
for i in "config" "vault"
do
tar -cJf "backup-$i-$(date +%Y-%m-%d-%H%M).tar.xz" --ignore-failed-read "$HOME/$i"
done
echo "Sending via rclone..."
for i in *
do
du -h "$i"
$rclone_command move "$i" "$rclone_remote"/archives/
done
echo Cleaning up...
rm -r "$PWD"
echo Done!
else
echo Creating single archive...
cd "$(mktemp -d)" || exit
tar -cJf "backup-$1-$(date +%Y-%m-%d-%H%M).tar.xz" --ignore-failed-read "$directory_home/$1"
echo "Sending via rclone..."
for i in *
do
du -h "$i" && $rclone_command move "$i" "$rclone_remote"/archives/
done
echo Cleaning up...
rm -r "$PWD"
echo Done!
fi
return
}
function update-arch() {
if [ -x "$(command -v yay)" ]
then
yay -Syu --noconfirm
else
pacman -Syu --noconfirm
fi
return
}
function update-debian() {
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get dist-upgrade -y
apt-get autoremove --purge -y
apt-get clean
if [ -x "$(command -v youtube-dl)" ]
then
youtube-dl -U
fi
if [ -x "$(command -v rclone)" ]
then
curl --silent "https://rclone.org/install.sh" | bash
fi
return
}
function update-remaining() {
if [ -f "$directory_home/.config/retroarch/lrcm/lrcm" ]
then
"$directory_home/.config/retroarch/lrcm/lrcm" update
fi
find "$(dir_find config)" -maxdepth 2 -name ".git" -type d | sed 's/\/.git//' | xargs -P10 -I{} git -C {} pull
if [ -x "$(command -v we-get)" ]
then
pip3 install --upgrade git+https://github.com/rachmadaniHaryono/we-get
fi
if [ -x "$(command -v plowmod)" ]
then
su -c "plowmod -u" -s /bin/sh "$username"
chown -R "$username":"$username" "$directory_home/.config/plowshare"
fi
return
}
function borg() {
# https://opensource.com/article/17/10/backing-your-machines-borg
working_directory=$(dir_find backups)/borg
echo "$working_directory"
return
}
function docker() {
delete_docker_env
# delete_docker_compose
include_credentials
# update submodules
git pull --recurse-submodules
# write compose file
# {
# printf "nope"
# } > docker-compose.yml
# write env file
write_docker_env
# clean up existing stuff
echo Cleaning up existing docker files
for i in volume image system network
do
docker "$i" prune -f
done
docker system prune -af
# make network, if not existing
if ! printf "$(docker network ls)" | grep -q "proxy"
then
echo Creating docker network
docker network create proxy
fi
# start containers
echo Starting docker containers
docker-compose up -d --remove-orphans
delete_docker_env
return
}
function logger() {
git_config
git_directory="$(dir_find logger)"
file_git_log="$git_directory/media.log"
log_command="git --git-dir=$git_directory/.git --work-tree=$git_directory"
log_remote=$(rclone_remote media)
if [ ! -e "$git_directory" ]
then
mkdir "$git_directory" # make log directory
fi
if [ ! -e "$git_directory/.git" ]
then
$log_command init # initialise git repo
fi
if [ -e "$file_git_log.xz" ]
then
xz -d "$file_git_log.xz" # if xz archive exists, decompress
fi
if [ -e "$file_git_log" ]
then
rm "$file_git_log"
fi
$rclone_command ls "$log_remote" | sort -k2 > "$file_git_log" # create log
$rclone_command size "$log_remote" >> "$file_git_log" # append size
$log_command add "$file_git_log" # add log file
$log_command commit -m "Update: $(date +%Y-%m-%d)" # commit to repo, datestamped
if [ -e "$file_git_log.xz" ]
then
rm "$file_git_log.xz"
fi
xz "$file_git_log" # compress log
$log_command gc --aggressive --prune # compress repo
return
}
function magnet() {
if [ ! -f "$(dir_find vault)/*.magnet" ]
then
echo No magnet files found
exit 0
fi
mag2tor_script_path="$(dir_find config)/magnet2torrent/Magnet_To_Torrent2.py"
if [ ! -f "$mag2tor_script_path" ]
then
echo script not found, downloading
git clone "https://github.com/danfolkes/Magnet2Torrent.git" "$(dir_find config)/magnet2torrent"
fi
sshfs_mount
cd "$(dir_find vault)" || exit
for i in *.magnet
do
magnet_source="$(cat "$i")"
python "$mag2tor_script_path" -m "$magnet_source" -o "$(dir_find downloads)/remote/watch/"
rm "$i"
done
return
}
function payslip() {
# depends on: getmail4 mpack qpdf
directory_temp="$(mktemp -d)"
include_credentials
cd "$directory_temp" || exit
mkdir {cur,new,tmp}
payslip_config_write
getmail --getmaildir "$directory_temp"
cd new || exit
grep "$payslip_sender" ./* | cut -f1 -d: | uniq | xargs munpack -f
mv "*.pdf" "$(dir_find paperwork)/"
payslip_decrypt
rm -r "$directory_temp"
return
}
function permissions() {
check_running_as_root
chown "$username":"$username" "$directory_script/rclone.conf"
return
}
function rclone_mount() {
echo rclone mount checker
for i in backups media paperwork pictures saves
do
mount_point="$directory_home/$i"
if [[ -f "$mount_point/.mountcheck" ]]
then
echo "$i" still mounted
else
echo "$i" not mounted
echo force unmounting
fusermount -uz "$mount_point"
echo sleeping
sleep 5
echo mounting
$rclone_command mount "drive-$i": "/home/peter/$i" --vfs-cache-mode minimal --allow-other --allow-non-empty --daemon --log-file "$(dir-find config)/logs/rclone-$i.log" # --allow-other requires user_allow_other in /etc/fuse.conf
echo restarting docker containers
for j in "${docker_restart[@]}"
do
docker restart "$j"
done
fi
done
return
}
function sshfs_mount() {
include_credentials
echo sshfs mount checker
seedbox_host="$seedbox_username.seedbox.io"
seedbox_mount="$(dir_find downloads)/remote"
if [[ -d "$seedbox_mount/files" ]]
then
echo "sshfs mount exists"
else
echo "sshfs mount missing, mounting"
printf "%s" "$seedbox_password" | sshfs "$seedbox_username@$seedbox_host":/ "$seedbox_mount" -o password_stdin -o allow_other
fi
return
}
function status() {
status_filename=$(dir_find blog)/status.md
status_timestamp="$(date +%Y-%m-%d) at $(date +%H:%M)"
status_uptime=$(( $(cut -f1 -d. </proc/uptime) / 86400 ))
status_cpuavgs=$(cut -d" " -f1-3 < /proc/loadavg)
status_users=$(uptime | grep -oP '.{3}user' | sed 's/\user//g' | xargs)
status_ram=$(printf "%.0f" "$(free | awk '/Mem/ {print $3/$2 * 100.0}')")
status_swap=$(printf "%.0f" "$(free | awk '/Swap/ {print $3/$2 * 100.0}')")
status_rootuse=$(df / | awk 'END{print $5}')
status_dluse=$(df | awk '/downloads/ {print $5}')
status_dockers=$(docker ps -q | wc -l)/$(docker ps -aq | wc -l)
status_packages=$(dpkg -l | grep ^ii -c)
status_ifdata=$(vnstat -i eth0 -m --oneline | cut -f11 -d\;)
{
printf -- "---\\nlayout: page\\ntitle: Server Status\\ndescription: A (hopefully) recently generated server status page\\npermalink: /status/\\n---\\n\\n"
printf "*Generated on %s*\\n\\n" "$status_timestamp"
printf "* Uptime: %s" "$status_uptime"
printf " Day%s\\n" "$(plural "$status_uptime")"
printf "* CPU Load: %s\\n" "$status_cpuavgs"
printf "* Users: %s\\n" "$status_users"
printf "* RAM Usage: %s%%\\n" "$status_ram"
printf "* Swap Usage: %s%%\\n" "$status_swap"
printf "* Root Usage: %s\\n" "$status_rootuse"
printf "* Downloads Usage: %s\\n" "$status_dluse"
printf "* [Dockers](https://github.com/breadcat/Dockerfiles): %s\\n" "$status_dockers"
printf "* Packages: %s\\n" "$status_packages"
printf "* Monthly Data: %s\\n\\n" "$status_ifdata"
printf "Hardware specifications themselves are covered on the [hardware page](/hardware/#server).\\n"
} > "$status_filename"
return
}
function sync() {
source=$(rclone_remote gdrive | sed 1q)
dest=$(rclone_remote gdrive | sed -n 2p)
echo Syncing "$source" to "$dest"
$rclone_command sync "$source" "$dest" --drive-server-side-across-configs --verbose --log-file "$(dir_find config)/logs/rclone-sync-$(date +%Y-%m-%d-%H%M).log"
return
}
function update() {
check_running_as_root
if [[ $distro =~ "Debian" ]]
then
update-debian
elif [[ $distro =~ "Arch" ]]
then
update-arch
else
echo "Who knows what you're running"
fi
update-remaining
return
}
main "$@"
I believe you have a namespace problem.
You define a docker() function that does all sorts of things.
Then inside docker() you call $(docker network ls), which just calls the same function recursively; likewise, inside status you call $(docker ps -aq | wc -l).
There is only one namespace: after you define a function named docker (docker() {}), anywhere you call docker you will call that function.
You can use command, e.g. echo() { printf "I AM NOT ECHO\n"; }; echo 123; command echo 123 - the first echo 123 will execute the function if it exists; the second will instead look for an echo executable in PATH and execute it.
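That behaviour is easy to demonstrate (the shadowed command here is date, picked arbitrarily):

```shell
# A function shadows the executable of the same name; `command`
# skips function lookup and runs whatever is found in PATH.
date() { echo "not the real date"; }
via_function=$(date)             # the function wins
via_command=$(command date +%Y)  # the real date binary runs
unset -f date                    # remove the shadow again
echo "$via_function / $via_command"
```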
However, I'd suggest simply using a unique prefix that won't interfere with anything. Declaring a function named docker hides the real command.
blabla_status() {} # instead of status()
blabla_docker() {} # instead of docker()
# etc..
# then later in main()
case "$1" in
    docker|status) blabla_"$1" ;;
    *) echo "Unknown function" >&2 ;;
esac

Simulating the find command: why is my code not recursing correctly?

My assignment is to write a Unix shell script that asks the user for the name of a directory, and then works exactly like find.
Here is what I have so far:
#!/bin/bash
dir_lister()
{
cd "$1"
echo "$1"
list=$(ls -l ${1})
nolines=$(echo "$list" | awk 'END{printf "%d",NF}')
if [ $nolines -eq 2 ]
then
echo "$1"
return
fi
filelist=$(echo "$list" | grep ^-.*)
dirlist=$(echo "$list" | grep ^d.*)
filename=$(echo "$filelist"| awk '{printf "%s\n",$NF}')
present=$(pwd)
echo "$filename"| awk -v pres=$present '{printf "%s/%s\n",pres,$0}'
dirlist2=$(echo "$dirlist" | awk '{printf "%s\n",$NF}')
echo "$dirlist2" | while IFS= read -r line;
do
nextCall=$(echo "$present/$line");
dir_lister $nextCall;
cd ".."
done
cd ".."
}
read -p "Enter the name of the direcotry: " dName
dir_lister $dName
The problem is, after a depth of three directories, this script gets into an infinite loop, and I don't see why.
EDIT:
Here is the code I came up with after looking at your answer; it still doesn't go more than one directory deep:
#!/bin/bash
shopt -s dotglob # don't miss "hidden files"
shopt -s nullglob # don't fail on empty directories
list_directory()
{
cd "$2"
cd "$1"
##echo -e "I am called \t $1 \t $2"
for fileName in "$1/"*
do
##echo -e "hello \t $fileName"
if [ -d "$fileName" ];
then
echo "$fileName"
list_directory $fileName $2
else
echo "$fileName"
fi
done
}
read -p "Enter the direcotory Name: " dirName
var=$(pwd)
list_directory $dirName $var
Okay, that is completely the wrong way to list files in a directory (see ParsingLs). I'll give you the pieces and you should be able to put them together into a working script.
Put this at the top of your script:
shopt -s dotglob # don't miss "hidden files"
shopt -s nullglob # don't fail on empty directories
Then you can easily loop over directory contents with:
for file in "$directory/"* ; do
#...
done
Test if you have a directory:
if [ -d "$file" ] ; then
# "$file" is a directory, recurse...
fi
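Putting those pieces together, one possible shape for the recursion (a sketch, not necessarily how your assignment wants it structured) is:

```shell
#!/bin/bash
shopt -s dotglob   # don't miss "hidden files"
shopt -s nullglob  # don't fail on empty directories

# Print every entry under $1, recursing into subdirectories.
# No ls parsing and no cd, so there is no working-directory state to corrupt.
list_recursive() {
    local entry
    for entry in "$1"/*; do
        echo "$entry"
        if [ -d "$entry" ] && [ ! -L "$entry" ]; then  # skip symlinks to avoid loops
            list_recursive "$entry"
        fi
    done
}
```

Called as list_recursive "$dName" after your read -p prompt.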

Why is my awk print not showing up on the terminal

I have the following script which does a "which -a" on a command and then an "ls -l" to let me know whether it's a link or not, e.g. for grep, since I have GNU commands installed (Mac with iTerm).
#!/usr/bin/env bash
which -a $1 | xargs -I{} ls -l "{}" \
| awk '{for (i = 1; i < 9; i++) $i = ""; sub(/^ */, ""); print}'
When I run the script as "test grep" I receive no output, but when I run it via "bash -x test grep" I receive the following:
bash -x test grep
+ which -a grep
+ xargs '-I{}' ls -l '{}'
+ awk '{for (i = 1; i < 9; i++) $i = ""; sub(/^ */, ""); print}'
/usr/local/bin/grep -> ../Cellar/grep/3.1/bin/grep
/usr/bin/grep
The last 2 lines are what I'm looking to display. Thought this would be easier to do ;-) .. I also tried appending the following pipe, thinking printf would fix the issue:
| while read path
do
printf "%s\n" "$path"
done
Thanks and .. Is there a better way to get what I need?
The problem is that you named your script test.
If you want to run a command that's not in your PATH, you need to specify the directory it's in, e.g. ./test.
You're not getting an error for trying to run test because there is a built-in bash command called test that is used instead. For extra confusion, the standard test produces no output.
In conclusion:
Use ./ to run scripts in the current directory.
Never call your test programs test.
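You can see the collision directly:

```shell
# `type` shows what the shell will actually run for a name; for "test"
# it is the builtin, which succeeds or fails without printing anything.
type -t test                # -> builtin
test "hello" = "hello" && echo "the builtin ran, silently"
```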
Thanks for the advice about never naming a script "test" .. old habits are hard to break (I came from a non-Unix background).
I ended up with the following:
for i in $(which -a $1)
do
    stat $i | awk 'NR==1{$1 = ""; sub(/^ */, ""); print}'
done
or simpler
for i in $(which -a $1)
do
    stat -c %N "$i"
done
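For a quick look at what those print, with a throwaway symlink (GNU stat assumed):

```shell
# stat -c %N shows "name -> target" for symlinks; readlink prints
# just the target.
cd "$(mktemp -d)"
echo real > target.sh
ln -s target.sh link.sh
stat -c %N link.sh   # e.g. 'link.sh' -> 'target.sh'
readlink link.sh     # -> target.sh
```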
Consider the following shell function:
cmdsrc() {
    local cmd_file cmd_file_realpath
    case $(type -t -- "$1") in
        file)
            cmd_file=$(type -P -- "$1")
            if [[ -L "$cmd_file" ]]; then
                echo "$cmd_file is a symlink" >&2
            elif [[ -f "$cmd_file" ]]; then
                echo "$cmd_file is a regular file" >&2
            else
                echo "$cmd_file is not a symlink or a regular file" >&2
            fi
            cmd_file_realpath=$(readlink -- "$cmd_file") || return
            if [[ $cmd_file_realpath != "$cmd_file" ]]; then
                echo "...the real location of the executable is $cmd_file_realpath" >&2
            fi
            ;;
        *) echo "$1 is not a file at all: $(type -- "$1")" >&2 ;;
    esac
}
...used as such:
$ cmdsrc apt
/usr/bin/apt is a symlink
...the real location of the executable is /System/Library/Frameworks/JavaVM.framework/Versions/A/Commands/apt
$ cmdsrc ls
/bin/ls is a regular file
$ cmdsrc alias
alias is not a file at all: alias is a shell builtin
Took some suggestions and came up with the following:
prt-underline is just a fancy printf function. I decided not to go with readlink, since the ultimate command resolution may be unfamiliar to me and I only deal with regular files .. so it doesn't handle every situation, but in the end it gives me the output I was looking for. Thanks for all the help.
llt ()
{
    case $(type -t -- "$1") in
        function)
            prt-underline "Function"
            declare -f "$1"
            ;;
        alias)
            prt-underline "Alias"
            alias "$1" | awk '{sub(/^alias /, ""); print}'
            ;;
        keyword)
            prt-underline "Reserved Keyword"
            ;;
        builtin)
            prt-underline "Builtin Command"
            ;;
        *)
            ;;
    esac
    if which "$1" &> /dev/null; then
        prt-underline "File"
        for i in $(which -a $1); do
            stat "$i" | awk 'NR==1{sub(/^ File: /, ""); print}'
        done
    fi
}
