I have a directory containing a large number of sub-directories. I need to loop over all sub-directories and save each name (without the path!) as a distinct variable:
for d in ${output}/*/
do
dir_name=${d%*/}
echo ${dir_name}
done
The problem with the current version is that it gives me the full path of the directory instead. Here is the result of the echo:
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig992
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig993
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig994
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig995
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig996
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig997
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig998
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig999
With dir_name=${d%*/}, you remove only the trailing /. You will want to remove everything up to the last / as well. Or try basename, which is perhaps a better option.
As in:
for d in /var/*/ ; do
dir_name=${d%/}
base=$(basename "$d")
echo "$d $dir_name ${dir_name##*/} $base"
done
which produces:
/var/adm/ /var/adm adm adm
/var/cache/ /var/cache cache cache
/var/db/ /var/db db db
/var/empty/ /var/empty empty empty
/var/games/ /var/games games games
/var/heimdal/ /var/heimdal heimdal heimdal
/var/kerberos/ /var/kerberos kerberos kerberos
/var/lib/ /var/lib lib lib
/var/lock/ /var/lock lock lock
/var/log/ /var/log log log
/var/mail/ /var/mail mail mail
/var/man/ /var/man man man
/var/named/ /var/named named named
/var/netatalk/ /var/netatalk netatalk netatalk
/var/run/ /var/run run run
/var/slapt-get/ /var/slapt-get slapt-get slapt-get
/var/spool/ /var/spool spool spool
/var/state/ /var/state state state
/var/tmp/ /var/tmp tmp tmp
/var/www/ /var/www www www
/var/yp/ /var/yp yp yp
(on my system).
Can you cd to that parent directory?
cd "${output}"/
lst=( */ )
for d in "${lst[@]}"; do echo "${d%/}"; done
If that's not an option, then you can strip it each time.
lst=( "${output}"/*/ )
for d in "${lst[@]}"; do dir="${d%/}"; echo "${dir##*/}"; done
As a hybrid, you can sometimes use a trick of changing directory inside a subshell, as the cd is local to the subshell and "goes away" when it ends, but so do any assignments.
cd /tmp
( cd "${output}"/; lst=( */ ); for d in "${lst[@]}"; do echo "${d%/}"; done )
# in /tmp here, lst array does not exist any more...
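For the original loop, the pieces combine like this. A minimal sketch, using a hypothetical demo tree standing in for ${output} from the question:

```shell
# demo tree standing in for ${output} from the question
output=/tmp/name_demo
mkdir -p "$output"/7000_CNE_lig992 "$output"/7000_CNE_lig993

for d in "$output"/*/ ; do
    d=${d%/}           # drop the trailing slash
    echo "${d##*/}"    # drop everything up to the last slash
done
# prints:
# 7000_CNE_lig992
# 7000_CNE_lig993
```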
I want to create a script which can run k apply -Rf ./service-token-auth for each of the logical groups here, mainly all of the graphql-* and data-service-* folders.
Is this something that would be quite easy to implement?
$ ls
README.md data-service-notifications orchestration-workflows-service
argo-cd data-service-reports postgresql-operator
argocd data-service-user prometheus
azure-identities diagnostic-tools pushgateway
azure-nginx-ingress gloo-gateway reloader
azure-private-dns graphql-gateway service-auth
azure-rbacs graphql-service-applications service-b2c-gateway
azure-secrets graphql-service-clients service-dast-auth
blackbox-exporter graphql-service-findings service-dast-ml
cadence graphql-service-logging service-mesh
data-service-application graphql-service-user service-token-auth
data-service-clients kube-state-metrics strimzi-kafka
data-service-findings kuberhealthy tartarus
data-service-logging kubernetes-reflector whs-opa
You can iterate over files in bash.
First make sure that it only hits the folders that you want:
for i in graphql-* data-service-*; do echo $i; done
Then execute:
for i in graphql-* data-service-*; do k apply -Rf ./$i; done
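One caveat, sketched below with hypothetical directory names: if a pattern matches nothing, bash passes it through literally, so a [ -d ... ] guard (or shopt -s nullglob) keeps the loop from running k apply on a non-existent ./graphql-* path:

```shell
# hypothetical checkout with only two of the folders present
mkdir -p /tmp/kapply_demo/graphql-gateway /tmp/kapply_demo/data-service-user
cd /tmp/kapply_demo

for i in graphql-* data-service-*; do
    [ -d "$i" ] || continue          # skip unmatched, literal patterns
    echo "would run: k apply -Rf ./$i"
done
# prints:
# would run: k apply -Rf ./graphql-gateway
# would run: k apply -Rf ./data-service-user
```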
I have a bash script running on Ubuntu 18.04, scheduled via a systemd timer.
#!/bin/bash
backupdb(){
/usr/bin/mysqldump -u backupuser -pbackuppassword --add-locks --extended-insert --hex-blob $1 > /opt/mysqlbackup/$1.sql
/bin/gzip -c /opt/mysqlbackup/$1.sql > /opt/mysqlbackup/$1-$(date +%A).sql.gz
rm -rf /opt/mysqlbackup/$1.sql
echo `date "+%h %d %H:%M:%S"`": " $1 "- Size:" `/usr/bin/stat -c%s "${1}-$(date +%A).sql.gz"` >> /opt/mysqlbackup/backupsql.log
}
# List of databases to backup
backupdb cardb
backupdb bikedb
When I run this script interactively, the backup log gets 2 entries:
Jun 16 20:15:03: cardb - Size: 200345
Jun 16 20:15:12: bikedb - Size: 150123
However, when this is run as a systemd timer service, the log still gets 2 entries, but no file size is given in the log file. Not 0, it's simply blank. The backup file, cardb.sql.gz, is created and is non-zero; I can unzip it and it does contain a valid SQL file.
I can't figure out why this is happening.
You need to specify the absolute path of your file.
Without specifying the absolute path you are making the assumption that the systemd timer is running your script from the same directory you tested it from. To remedy this, you can either use the absolute path or change directories before accessing your file.
echo `date "+%h %d %H:%M:%S"`": " $1 "- Size:" `/usr/bin/stat -c%s "/opt/mysqlbackup/${1}-$(date +%A).sql.gz"` >> /opt/mysqlbackup/backupsql.log
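The effect is easy to reproduce outside systemd. A sketch with a hypothetical file (stat -c%s is the GNU coreutils form used above; systemd services run from / by default unless WorkingDirectory= is set):

```shell
mkdir -p /tmp/backup_demo
printf 'hello' > /tmp/backup_demo/cardb-demo.sql.gz    # 5 bytes

cd /tmp/backup_demo
stat -c%s "cardb-demo.sql.gz"                    # 5 - relative path, right cwd
cd /
stat -c%s "cardb-demo.sql.gz" 2>/dev/null \
    || echo "relative path fails from /"         # this is what systemd sees
stat -c%s "/tmp/backup_demo/cardb-demo.sql.gz"   # 5 - absolute path always works
```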
I want to run rsync with an update of the distant tree. I'd like my command to recursively create missing leaf folders, e.g.:
Before:
Source
A/A/C_file
A/B/C_file
A/C/C_file
B/A/C_file
B/B/C_file
B/C/C_file
distant
A/A/C_file
A/C/C_file
B/A/C_file
B/C/C_file
After the rsync command "rsync -atvrz source/dir/ distant/dir":
Distant :
A/A/C_file
A/B/C_file
A/C/C_file
B/A/C_file
B/B/C_file
B/C/C_file
The --relative solution doesn't work for me, because it creates the new path inside distant: "distant/dir/source/dir".
It seems to work once the user ownership is harmonized.
So a chown -R user:user /distant solved the issue.
The following bash script checks whether the current pwd is mounted via sshfs:
if (! mountpoint -q $PWD ); then
# not mounted
else
# mounted
fi
I would like Vim to do the same on the newly opened buffer: if the current directory is on a networked filesystem (meaning it is mounted), Vim should execute the set complete-=i command, only in the current split if possible.
To check whether the current buffer's directory is mounted:
:call system('mountpoint -q ' . shellescape(expand('%:h')))
:let isMountpoint = (v:shell_error == 0)
To hook this into buffer reads, invoke this through :autocmd BufRead * ...
The 'complete' option is indeed buffer-local, so with :setlocal complete-=i, you can achieve that, too.
Now, you just need to combine the pieces:
:autocmd BufRead * call system('mountpoint -q ' . shellescape(expand('%:h'))) | if v:shell_error == 0 | setlocal complete-=i | endif
I'm using code like following to monitor the whole file system:
fanotify_mark(fd,
FAN_MARK_ADD | FAN_MARK_MOUNT,
FAN_OPEN | FAN_EVENT_ON_CHILD,
AT_FDCWD, "/"
)
But I need to write some tests, so I want to monitor just a specific dir, let's say "/tmp/test_dir". The problem is that when I change the code this way:
fanotify_mark(fd,
FAN_MARK_ADD,
FAN_OPEN | FAN_EVENT_ON_CHILD,
AT_FDCWD, "/tmp/test_dir"
)
fanotify only watches events on "/tmp/test_dir", ignoring whatever happens in deeper folders.
For instance: if I open "/tmp/test_dir/aa/bb/cc/test_file.txt", fanotify detects nothing.
Am I missing some flag?
Problem solved.
fanotify isn't recursive. It only behaves recursively when marking whole mounts. I did the following test:
mkdir /tmp/parent
mkdir -p /tmp/other/aa/bb/cc/dd
touch /tmp/other/aa/bb/cc/dd/test.txt
mount --bind /tmp/other /tmp/parent
then in code:
fanotify_mark(fd,
FAN_MARK_ADD | FAN_MARK_MOUNT,
FAN_OPEN | FAN_EVENT_ON_CHILD,
AT_FDCWD, "/tmp/parent"
)
and that's it: fanotify now fires events for the test.txt file.
With fanotify, you either monitor the entire mount point containing the specified path (using FAN_MARK_MOUNT), or you monitor the files directly inside a directory but not its sub-directories (without FAN_MARK_MOUNT). You can set separate marks on each sub-directory to achieve recursion. See https://stackoverflow.com/a/20965660/2706918
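To see what per-directory marking implies, a sketch in shell with a hypothetical tree: every directory that find lists below would need its own fanotify_mark() call, and sub-directories created later would need marks added as they appear:

```shell
mkdir -p /tmp/fan_demo/aa/bb/cc      # hypothetical watch root

# without FAN_MARK_MOUNT a mark covers one directory only, so a
# recursive watch needs one mark per directory in the tree:
find /tmp/fan_demo -type d
# prints:
# /tmp/fan_demo
# /tmp/fan_demo/aa
# /tmp/fan_demo/aa/bb
# /tmp/fan_demo/aa/bb/cc
```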