ls time with second or microsecond accuracy? - ftp

When I run ls -l x.txt in lftp, I see output like this:
-rw-r--r-- 1 ftp ftp 1238835 Mar 09 12:45 x.txt
But if I want to know the second, or even the microsecond, of the modification time, is that possible with lftp? Or does an alternative FTP program provide such high-resolution time info?
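For what it's worth, the FTP protocol itself exposes modification times with at least second precision through the MDTM command (RFC 3659), and some servers append fractional seconds; from inside lftp you could probe it with the raw-command escape (the reply format below is from the RFC, not a guaranteed server response):
quote MDTM x.txt
# typical reply: 213 YYYYMMDDHHMMSS, optionally followed by .sss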

Related

How to sync the modification date of folders within two directories that are the same?

I have a Dropbox folder on one computer with all the original modification dates. Recently, after transferring my data onto another computer, some of the folders' "Date Modified" dates were changed to today due to a .DS_Store issue. I am trying to write a script that would take the original modification date of a folder, find the corresponding folder on my new computer, and change its date using touch. The idea is to use stat and touch -mt to do this. Does anyone have any suggestions or better thoughts? Thanks.
Use one folder as the reference for another with --reference=SOURCE:
$ cd "$(mktemp --directory)"
$ touch -m -t 200112311259 ./first
$ touch -m -t 200201010000 ./second
$ ls -l | sed "s/${USER}/user/g"
total 0
-rw-r--r-- 1 user user 0 Dec 31 2001 first
-rw-r--r-- 1 user user 0 Jan 1 2002 second
$ touch -m --reference=./first ./second
$ ls -l | sed "s/${USER}/user/g"
total 0
-rw-r--r-- 1 user user 0 Dec 31 2001 first
-rw-r--r-- 1 user user 0 Dec 31 2001 second
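To apply this across two whole trees, you could walk the source tree and reference each directory in turn; a minimal sketch, assuming both trees share the same layout (SRC and DST are illustrative paths):
SRC=/path/to/original/Dropbox
DST=/path/to/new/Dropbox
find "$SRC" -type d | while IFS= read -r dir; do
    rel=${dir#"$SRC"}    # path relative to the source root
    [ -d "$DST$rel" ] && touch -m --reference="$dir" "$DST$rel"
done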

cd command fails when directory is extracted from windows file

I have a text file from Windows that contains lots of directories that I need to extract.
I extracted one directory and tried to cd to it in a shell script, but the cd command failed with cd: /VAR/GPIO/: No such file or directory.
I have confirmed that the directory exists on my local PC and that the path is correct (though it is relative). I have also searched a lot; it seems some special Windows characters exist in the extracted file. I tried to inspect them with cat -A check and the result is ^[[m^[[K^[[m^[[KVAR/GPIO/$
I don't even know what ^[[m or ^[[K mean.
Could you please help me with this problem? I use Cygwin on Windows 7 64-bit.
Below is my related code for review:
templt_dir=$(cat temp | grep -m 1 "$templt_name" |head -1 | sed -n "s#$templt_name##p" | sed -n "s#\".*##p")
echo $templt_dir ###comment: this outputs /VAR/GPIO/, which is correct!
cd $templt_dir ###comment: this is where the cd error occurs
cat temp | grep -m 1 "$templt_name" |head -1 | sed -n "s#$templt_name##p" | sed -n "s#\".*##p" > check ###comment, for problem checking
Below is the content of the check file:
$ cat -A check
^[[m^[[K^[[m^[[KVAR/GPIO/$
To confirm that my directory is correct, below are the results of ls -l on /VAR:
$ ls VAR -l
total 80K
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:11 Analog/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:37 Communication/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:10 GPIO/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:11 HumanInterface/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:11 Memory/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:11 PWM/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:10 Security/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:11 System/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 25 16:25 Timers/
drwxrwx---+ 1 Administrators Domain Users 0 Jun 24 11:10 UniversalDevice/
The error message cd: /VAR/GPIO/: No such file or directory indicates that the name stored in $templt_dir doesn't exist. This is because the string contains non-printing ANSI escape sequences, which you need to remove from it.
I found the following sed substitution in this Unix and Linux answer:
sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g"
which you should include in your pipe command:
templt_dir=$(grep -m 1 "$templt_name" temp | sed -n "s#$templt_name##p; s#\".*##p" | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g")
Note: I concatenated your two sed substitutions into the one command and I removed the unnecessary cat. I also removed the redundant head -1 since grep -m 1 should only output one line. You can probably combine all the sed substitutions into one: sed -r "s#$templt_name##; s#\".*##; s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" (the -n sed option and p sed command can be left out if there's only one line being processed, but I can't test this without having the original file).
Other ways of using sed to strip ANSI escape sequences are listed at Remove color codes (special characters) with sed.
However, a better long-term fix would be to modify the process which creates the text file listing the directories to not include ANSI Escape codes in its output.
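As an illustration, if the listing was produced with a colourized grep (a common source of exactly these ^[[m^[[K sequences), forcing colour off avoids the problem at the source; the input file name here is hypothetical:
grep --color=never "$templt_name" windows_list.txt > temp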

Speed up rsync with Simultaneous/Concurrent File Transfers?

We need to transfer 15TB of data from one server to another as fast as we can. We're currently using rsync but we're only getting speeds of around 150Mb/s, when our network is capable of 900+Mb/s (tested with iperf). I've done tests of the disks, network, etc and figured it's just that rsync is only transferring one file at a time which is causing the slowdown.
I found a script that runs a separate rsync for each folder in a directory tree (allowing you to limit it to x at a time), but I can't get it working; it still just runs one rsync at a time.
I found the script here (copied below).
Our directory tree is like this:
/main
  - /files
    - /1
      - 343
        - 123.wav
        - 76.wav
      - 772
        - 122.wav
      - 55
        - 555.wav
        - 324.wav
        - 1209.wav
      - 43
        - 999.wav
        - 111.wav
        - 222.wav
    - /2
      - 346
        - 9993.wav
      - 4242
        - 827.wav
    - /3
      - 2545
        - 76.wav
        - 199.wav
        - 183.wav
      - 23
        - 33.wav
        - 876.wav
      - 4256
        - 998.wav
        - 1665.wav
        - 332.wav
        - 112.wav
        - 5584.wav
So what I'd like to happen is to create an rsync for each of the directories in /main/files, up to a maximum of, say, 5 at a time. So in this case, 3 rsyncs would run, for /main/files/1, /main/files/2 and /main/files/3.
I tried it like this, but it just runs 1 rsync at a time, for the /main/files/2 folder:
#!/bin/bash
# Define source, target, maxdepth and cd to source
source="/main/files"
target="/main/filesTest"
depth=1
cd "${source}"
# Set the maximum number of concurrent rsync threads
maxthreads=5
# How long to wait before checking the number of rsync threads again
sleeptime=5
# Find all folders in the source directory within the maxdepth level
find . -maxdepth ${depth} -type d | while read dir
do
    # Make sure to ignore the parent folder
    if [ `echo "${dir}" | awk -F'/' '{print NF}'` -gt ${depth} ]
    then
        # Strip leading dot slash
        subfolder=$(echo "${dir}" | sed 's#^\./##g')
        if [ ! -d "${target}/${subfolder}" ]
        then
            # Create destination folder and set ownership and permissions to match source
            mkdir -p "${target}/${subfolder}"
            chown --reference="${source}/${subfolder}" "${target}/${subfolder}"
            chmod --reference="${source}/${subfolder}" "${target}/${subfolder}"
        fi
        # Make sure the number of rsync threads running is below the threshold
        while [ `ps -ef | grep -c [r]sync` -gt ${maxthreads} ]
        do
            echo "Sleeping ${sleeptime} seconds"
            sleep ${sleeptime}
        done
        # Run rsync in background for the current subfolder and move on to the next one
        nohup rsync -a "${source}/${subfolder}/" "${target}/${subfolder}/" </dev/null >/dev/null 2>&1 &
    fi
done
# Find all files above the maxdepth level and rsync them as well
find . -maxdepth ${depth} -type f -print0 | rsync -a --files-from=- --from0 ./ "${target}/"
Updated answer (Jan 2020)
xargs is now the recommended tool to achieve parallel execution. It's pre-installed almost everywhere. For running multiple rsync tasks the command would be:
ls /srv/mail | xargs -n1 -P4 -I% rsync -Pa % myserver.com:/srv/mail/
This will list all folders in /srv/mail and pipe them to xargs, which will read them one by one and run 4 rsync processes at a time. The % character is replaced by the input argument in each command invocation.
Original answer using parallel:
ls /srv/mail | parallel -v -j8 rsync -raz --progress {} myserver.com:/srv/mail/{}
Have you tried using rclone.org?
With rclone you could do something like
rclone copy "${source}/${subfolder}/" "${target}/${subfolder}/" --progress --multi-thread-streams=N
where --multi-thread-streams=N represents the number of threads you wish to spawn.
rsync transfers files as fast as it can over the network. For example, try using it to copy one large file that doesn't exist at all on the destination. That speed is the maximum speed rsync can transfer data. Compare it with the speed of scp (for example). rsync is even slower at raw transfer when the destination file exists, because both sides have to have a two-way chat about what parts of the file have changed, but it pays for itself by identifying data that doesn't need to be transferred.
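A rough way to run that comparison, assuming a throwaway test file and a host reachable as remote (both illustrative):
dd if=/dev/urandom of=/tmp/bigfile bs=1M count=1024   # create a 1 GiB test file
time rsync -a /tmp/bigfile remote:/tmp/               # raw single-stream rsync speed
time scp /tmp/bigfile remote:/tmp/bigfile.scp         # compare with scp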
A simpler way to run rsync in parallel would be to use parallel. The command below would run up to 5 rsyncs in parallel, each one copying one directory. Be aware that the bottleneck might not be your network but the speed of your CPUs and disks, in which case running things in parallel just makes them all slower, not faster.
run_rsync() {
    # e.g. copies /main/files/blah to /main/filesTest/blah
    # (trailing slashes keep a re-run from nesting blah inside blah)
    rsync -av "$1/" "/main/filesTest/${1#/main/files/}/"
}
export -f run_rsync
parallel -j5 run_rsync ::: /main/files/*
You can use xargs, which supports running many processes at a time. For your case it will be:
ls -1 /main/files | xargs -I {} -P 5 -n 1 rsync -avh /main/files/{} /main/filesTest/
There are a number of alternative tools and approaches for doing this listed around the web. For example:
The NCSA Blog has a description of using xargs and find to parallelize rsync without having to install any new software, for most *nix systems.
And parsync provides a feature-rich Perl wrapper for parallel rsync.
I've developed a Python package called parallel_sync:
https://pythonhosted.org/parallel_sync/pages/examples.html
Here is a sample code how to use it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds)
Parallelism is 10 by default; you can increase it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds, parallelism=20)
However, note that ssh typically has MaxSessions set to 10 by default, so to increase parallelism beyond 10 you'll have to modify your ssh settings.
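For example, the relevant change in the remote host's sshd_config might look like this (reload sshd afterwards); the value 20 is illustrative:
# /etc/ssh/sshd_config on the remote host
MaxSessions 20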
The simplest I've found is using background jobs in the shell:
for d in /main/files/*; do
    rsync -a "$d" remote:/main/files/ &
done
Beware that it doesn't limit the number of jobs! If you're network-bound this isn't really a problem, but if you're waiting for spinning rust it will thrash the disk.
You could add
while [ $(jobs | wc -l | xargs) -gt 10 ]; do sleep 1; done
inside the loop for a primitive form of job control.
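Putting the two snippets together, a minimal sketch with the job limit applied inside the loop:
for d in /main/files/*; do
    while [ $(jobs | wc -l | xargs) -gt 10 ]; do sleep 1; done
    rsync -a "$d" remote:/main/files/ &
done
wait   # block until the remaining transfers finish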
3 tricks for speeding up rsync on local net.
1. Copying from/to local network: don't use ssh!
If you're copying from one local server to another, there is no need to encrypt data during the transfer!
By default, rsync uses ssh to transfer data over the network. To avoid this, you have to create an rsync server on the target host. You can run the daemon on demand with something like:
rsync --daemon --no-detach --config filename.conf
where a minimal configuration file could look like this (see man rsyncd.conf):
filename.conf
port = 12345
[data]
path = /some/path
use chroot = false
Then
rsync -ax rsync://remotehost:12345/data/. /path/to/target/.
rsync -ax /path/to/source/. rsync://remotehost:12345/data/.
2. Using Zstandard zstd for high-speed compression
Zstandard can be up to 8x faster than the common gzip, so using this newer compression algorithm will significantly improve your transfer!
rsync -axz --zc=zstd rsync://remotehost:12345/data/. /path/to/target/.
rsync -axz --zc=zstd /path/to/source/. rsync://remotehost:12345/data/.
3. Multiplexing rsync to reduce inactivity due to browse time
This kind of optimisation is about disk access and filesystem structure; it has nothing to do with the number of CPUs, so it can improve transfers even if your host has a single-core CPU.
Since the goal is to keep the bandwidth filled with data while other tasks walk the filesystem, the most suitable number of simultaneous processes depends on the number of small files present.
Here is a sample bash script using wait -n -p PID:
#!/bin/bash
maxProc=3
source=''
destination='rsync://remotehost:12345/data/'
declare -ai start elap results order
wait4oneTask() {
    local _i
    wait -np epid
    results[epid]=$?
    elap[epid]=" ${EPOCHREALTIME/.} - ${start[epid]} "
    unset "running[$epid]"
    while [ -v elap[${order[0]}] ]; do
        _i=${order[0]}
        printf " - %(%a %d %T)T.%06.0f %-36s %4d %12d\n" "${start[_i]:0:-6}" \
            "${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${elap[_i]}"
        order=(${order[@]:1})
    done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path; do
    rsync -axz --zc zstd "$source$path/." "$destination$path/." &
    lpid=$!
    paths[lpid]="$path"
    start[lpid]=${EPOCHREALTIME/.}
    running[lpid]=''
    order+=($lpid)
    ((${#running[@]}>=maxProc)) && wait4oneTask
done
while ((${#running[@]})); do
    wait4oneTask
done
Output could look like:
myRsyncP.sh files/*/*
Started Path Rslt microseconds
- Fri 03 09:20:44.673637 files/1/343 0 1186903
- Fri 03 09:20:44.673914 files/1/43 0 2276767
- Fri 03 09:20:44.674147 files/1/55 0 2172830
- Fri 03 09:20:45.861041 files/1/772 0 1279463
- Fri 03 09:20:46.847241 files/2/346 0 2363101
- Fri 03 09:20:46.951192 files/2/4242 0 2180573
- Fri 03 09:20:47.140953 files/3/23 0 1789049
- Fri 03 09:20:48.930306 files/3/2545 0 3259273
- Fri 03 09:20:49.132076 files/3/4256 0 2263019
Quick check:
printf "%'d\n" $(( 49132076 + 2263019 - 44673637)) \
$((1186903+2276767+2172830+1279463+2363101+2180573+1789049+3259273+2263019))
6’721’458
18’770’978
So 6.72 seconds of wall-clock time elapsed to process 18.77 seconds of work, with up to three subprocesses.
Note: you could use musec2str to improve the output, by replacing the first long printf line with:
musec2str -v elapsed "${elap[_i]}"
printf " - %(%a %d %T)T.%06.0f %-36s %4d %12s\n" "${start[_i]:0:-6}" \
    "${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "$elapsed"
myRsyncP.sh files/*/*
Started Path Rslt Elapsed
- Fri 03 09:27:33.463009 files/1/343 0 18.249400"
- Fri 03 09:27:33.463264 files/1/43 0 18.153972"
- Fri 03 09:27:33.463502 files/1/55 93 10.104106"
- Fri 03 09:27:43.567882 files/1/772 122 14.748798"
- Fri 03 09:27:51.617515 files/2/346 0 19.286811"
- Fri 03 09:27:51.715848 files/2/4242 0 3.292849"
- Fri 03 09:27:55.008983 files/3/23 0 5.325229"
- Fri 03 09:27:58.317356 files/3/2545 0 10.141078"
- Fri 03 09:28:00.334848 files/3/4256 0 15.306145"
Going further: you could add an overall stats line with some edits to this script:
#!/bin/bash
maxProc=3 source='' destination='rsync://remotehost:12345/data/'
. musec2str.bash # See https://stackoverflow.com/a/72316403/1765658
declare -ai start elap results order
declare -i sumElap totElap
wait4oneTask() {
    wait -np epid
    results[epid]=$?
    local -i _i crtelap=" ${EPOCHREALTIME/.} - ${start[epid]} "
    elap[epid]=crtelap sumElap+=crtelap
    unset "running[$epid]"
    while [ -v elap[${order[0]}] ]; do # Print status lines in command order.
        _i=${order[0]}
        musec2str -v helap ${elap[_i]}
        printf " - %(%a %d %T)T.%06.0f %-36s %4d %12s\n" "${start[_i]:0:-6}" \
            "${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${helap}"
        order=(${order[@]:1})
    done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path; do
    rsync -axz --zc zstd "$source$path/." "$destination$path/." &
    lpid=$! paths[lpid]="$path" start[lpid]=${EPOCHREALTIME/.}
    running[lpid]='' order+=($lpid)
    ((${#running[@]}>=maxProc)) &&
        wait4oneTask
done
while ((${#running[@]})); do
    wait4oneTask
done
totElap=${EPOCHREALTIME/.}
for i in ${!start[@]}; do sortstart[${start[i]}]=$i; done
sortstartstr=${!sortstart[*]}
fstarted=${sortstartstr%% *}
totElap+=-fstarted
musec2str -v hTotElap $totElap
musec2str -v hSumElap $sumElap
printf " = %(%a %d %T)T.%06.0f %-41s %12s\n" "${fstarted:0:-6}" \
    "${fstarted: -6}" "Real: $hTotElap, Total:" "$hSumElap"
Could produce:
$ ./parallelRsync Data\ dirs-{1..4}/Sub\ dir{A..D}
Started Path Rslt microseconds
- Sat 10 16:57:46.188195 Data dirs-1/Sub dirA 0 1.69131"
- Sat 10 16:57:46.188337 Data dirs-1/Sub dirB 116 2.256086"
- Sat 10 16:57:46.188473 Data dirs-1/Sub dirC 0 1.1722"
- Sat 10 16:57:47.361047 Data dirs-1/Sub dirD 0 2.222638"
- Sat 10 16:57:47.880674 Data dirs-2/Sub dirA 0 2.193557"
- Sat 10 16:57:48.446484 Data dirs-2/Sub dirB 0 1.615003"
- Sat 10 16:57:49.584670 Data dirs-2/Sub dirC 0 2.201602"
- Sat 10 16:57:50.061832 Data dirs-2/Sub dirD 0 2.176913"
- Sat 10 16:57:50.075178 Data dirs-3/Sub dirA 0 1.952396"
- Sat 10 16:57:51.786967 Data dirs-3/Sub dirB 0 1.123764"
- Sat 10 16:57:52.028138 Data dirs-3/Sub dirC 0 2.531878"
- Sat 10 16:57:52.239866 Data dirs-3/Sub dirD 0 2.297417"
- Sat 10 16:57:52.911924 Data dirs-4/Sub dirA 14 1.290787"
- Sat 10 16:57:54.203172 Data dirs-4/Sub dirB 0 2.236149"
- Sat 10 16:57:54.537597 Data dirs-4/Sub dirC 14 2.125793"
- Sat 10 16:57:54.561454 Data dirs-4/Sub dirD 0 2.49632"
= Sat 10 16:57:46.188195 Real: 10.870221", Total: 31.583813"
Fake rsync for testing this script
Note: For testing this, I've used a fake rsync:
## Fake rsync wait 1.0 - 2.99 seconds and return 0-255 ~ 1x/10
rsync() { sleep $((RANDOM%2+1)).$RANDOM;exit $(( RANDOM%10==3?RANDOM%128:0));}
export -f rsync
The shortest version I found is to use the --cat option of parallel like below. This version avoids using xargs, only relying on features of parallel:
cat files.txt | \
parallel -n 500 --lb --pipe --cat rsync --files-from={} user@remote:/dir /dir -avPi
#### Arg explainer
# -n 500 :: split input into chunks of 500 entries
#
# --cat :: create a tmp file referenced by {} containing the 500
# entry content for each process
#
# user@remote:/dir :: the root relative to which entries in files.txt are considered
#
# /dir :: local root relative to which files are copied
Sample content from files.txt:
/dir/file-1
/dir/subdir/file-2
....
Note that this doesn't use -j 50 for the job count; that didn't work on my end. Instead I've used -n 500 for the record count per job, calculated as a reasonable number given the total number of records.
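If you still need to produce files.txt in the first place, find can generate it; a sketch assuming the entries should be absolute paths under /dir, matching the sample above:
find /dir -type f > files.txt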
I've found UDR/UDT to be an amazing tool. The TL;DR: it's a UDT wrapper for rsync, utilizing multiple UDP connections rather than a single TCP connection.
References: https://udt.sourceforge.io/ & https://github.com/jaystevens/UDR#udr
If you use any RHEL distros, they've pre-compiled it for you... http://hgdownload.soe.ucsc.edu/admin/udr
The ONLY downside I've encountered is that you can't specify a different SSH port, so your remote server must use 22.
Anyway, after installing the rpm, it's literally as simple as:
udr rsync -aP user@IpOrFqdn:/source/files/* /dest/folder/
and your transfer speeds will increase drastically in most cases; depending on the server, I've easily seen a 10x increase in transfer speed.
Side note: if you choose to gzip everything first, then make sure to use --rsyncable arg so that it only updates what has changed.
Using parallel rsyncs on a regular disk would only cause them to compete for the I/O, turning what should be a sequential read into an inefficient random read. You could instead tar the directory into a stream via an ssh pull from the destination server, then pipe the stream to tar for extraction.
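A minimal sketch of that tar-over-ssh approach, run from the destination server (sourcehost and both paths are illustrative):
ssh sourcehost 'tar -C /main/files -cf - .' | tar -C /main/filesTest -xf -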

display a line every two line (osX) zsh

I'd like to display every second line of a file. I've seen the sed -n 'first~step', awk, and perl methods, but the sed one doesn't work on OS X (as I understand it), and the other two are interpreted languages which I can't use.
Can you help me?
Here's an example:
output before:
-rw-r--r-- 1 mfassi-f 2013 22 Jul 17 12:36 test.sh
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test1.sh
-rw-r--r-- 1 mfassi-f 2013 22 Jul 17 12:36 test2.sh
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test3.sh
-rw-r--r-- 1 mfassi-f 2013 22 Jul 17 12:36 test4.sh
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test5.sh
-rw-r--r-- 1 mfassi-f 2013 22 Jul 17 12:36 test6.sh
output after:
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test1.sh
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test3.sh
-rw-r--r-- 1 mfassi-f 2013 29 Jul 17 12:30 test5.sh
Here are two answers. One for a file, and one for command-line input.
['cause the question's changed ever so slightly, but these two seemed too similar to put as independent answers].
You can use zsh, ls, cut and paste to do this in a for loop. It's not the cleanest solution, but it does work (surprisingly).
for file in `ls -1 | paste - - | cut -f 1`
do
    ls -l -d $file
done
We take the output of ls -1, then extract every second filename. (The way ls chooses to sort the files will have an impact here). Then, we do ls -l -d on each of these files. -d is necessary to stop ls from showing us the contents of $file, if $file is a directory. (Not sure if this is OS X specific, or if that's default POSIX ls behaviour).
Second answer: display every second line from a file.
If you're after a mostly zsh solution, you could do something like the following:
$ jot 8 0 7 >> sample.txt # Generate some numbers.
$ count=0 # Storage variable
$ for i in `cat sample.txt`
do
    if [ $(( $count % 2 )) -eq 0 ] ; then
        echo $i
    fi
    count=`expr $count + 1`
done
This displays every second line.
Notes:
- This leaves a variable count in your session afterwards (it's not clean).
- This fails badly if sample.txt does not contain a single word per line.
- I'm almost sure that the modulus comparison I do isn't the most efficient: I grabbed it from here.
- I say it's mostly zsh because it does rely on cat, but I'm not sure how to avoid that.
The OS X version of sed is frustrating. Using sed -n '0~2p' <filename> doesn't work because the first~step address form is a GNU extension that BSD sed doesn't support. The -n option itself behaves the same way in both implementations:
-n
By default, each line of input is echoed to the standard output after all of the commands have been applied to it. The -n option suppresses this behavior.
I'd highly recommend installing GNU sed, which can be done using Homebrew:
brew install gnu-sed
And then you can use:
gsed -n '0~2p' filename # Display the 2nd, 4th etc
gsed -n '1~2p' filename # Display the 1st, 3rd etc.
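If installing GNU sed isn't an option, stock BSD sed can still do this without the GNU first~step extension; a small sketch:
sed -n 'n;p' filename # Display the 2nd, 4th etc
sed -n 'p;n' filename # Display the 1st, 3rd etc.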

UNIX / Linux / Mac OSX get permission of file as number

This must be really simple to do, but I have completely drawn a blank. I can see the permissions of files by using ls -la, which can give something like:
-rwxr-xr-x 1 james staff 68 8 Feb 13:33 basic.sh*
-rw-r--r-- 1 james staff 68 8 Feb 13:33 otherFile.sh*
How do I translate that into a number for use with chmod, like chmod 755 otherFile.sh (without doing the manual conversion)?
stat -f "%Lp" [filename] works for me in OS X 10.8.
You should be able to use the stat command instead of ls. From looking at the manpage, this should work to get the file permissions:
for f in dir/*
do
    perms=$(stat -f '0%Hp%Mp%Lp' $f)
    echo "$f has permissions $perms"
done
(although I am not at my Mac at the moment and therefore cannot test it).
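On Linux, the GNU coreutils stat takes -c rather than -f; for example:
stat -c '%a' otherFile.sh # prints the octal permissions, e.g. 644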
