I am trying to use the output from mdfind to create a bunch of symlinks. Output of mdfind is like this:
/pathtofile1/
/pathtofile2/
/pathtofile3/
So, I used sed to add ln -s to the start of each line, and awk ({print $0 "/directory where I want this/"}) to append the target directory;
after that, my single-line script successfully outputs this:
ln -s "/pathtofile1/" "/directory where I want this"
ln -s "/pathtofile2/" "/directory where I want this"
ln -s "/pathtofile3/" "/directory where I want this"
Problem is, when I run this, I get this error: "/directory where I want this: File does not exist"
The weird thing is that when I run these lines individually, the links are created as expected, but running the entire command returns the error above.
Any ideas?
I don't think that this is the ideal way to do what I'm trying to do, so let me know if you have any better solutions.
Edited with more information.
#! /bin/bash
itemList=`mdfind -s "$1"| awk '{ print "ln -s \""$0"\" \"/Users/username/Local/Recent\""}'`
echo "$itemList"
`$itemList`
$1 is a test *.savedSearch that returns a list of files.
My result (from the echo) is:
ln -s "/Users/username/Dropbox/Document.pdf" "/Users/username/Local/Recent"
ln -s "/Users/username/Dropbox/Document2.pdf" "/Users/username/Local/Recent"
and the error that I get is:
ln: "/Users/username/Local/Recent": No such file or directory
But if I copy-paste and run each line individually, the links are created as expected.
One way to keep it simple:
mdfind -0 "query" | ( cd "/Users/username/Local/Recent" ; xargs -0 -I path ln -s path . )
This of course doesn't handle duplicate file names, etc.
EDIT:
The reason your solution is failing is that, first, the contents of $itemList are being executed as one long command (i.e. the line feeds output by awk are ignored), and then, second, the command substitution occurs before quote removal. What is actually processed is roughly equivalent to:
ln '-s' '"/pathtofile1/"' '"/to"' 'ln' '-s' '"/pathtofile2/"' '"/to"' 'ln' '-s' '"/pathtofile3/"' '"/to"'
/bin/ln recognizes this as the:
ln [-Ffhinsv] source_file ... target_dir
form of the command and checks to see that the final parameter is an existing directory. That test fails because the directory name includes the surrounding quote marks. Note carefully the error message you report and compare:
$ ln a b c "/Users/username/Local/Recent"
ln: /Users/username/Local/Recent: No such file or directory
$ ln a b c '"/Users/username/Local/Recent"'
ln: "/Users/username/Local/Recent": No such file or directory
So the morals of the story are: when you are dealing with file names in a shell, the safest solution is to avoid shell processing of the file names, so you don't have to deal with quoting and other side effects (which is a big advantage of an xargs solution), and to keep it simple. Avoid constructing complex multi-line shell commands; it's too easy to get unexpected results.
It would be much easier to determine what the problem was if you used some actual, or at least plausible, paths as examples, but ln isn't going to create these directories for you if that's what you want.
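If you do want to keep this as a script rather than the xargs one-liner above, a minimal sketch that avoids building command strings entirely (reusing the destination from your edit) would be:
#!/bin/bash
# Each path is passed straight to ln as an argument, so no quoting of
# file names inside a command string is ever needed.
dest="/Users/username/Local/Recent"
mdfind -s "$1" | while IFS= read -r path; do
    ln -s "$path" "$dest"
done
This handles spaces and quote characters in file names; only names containing newlines would still need the -0/xargs approach.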
I have 100+ directories as followed:
bins_copy]$ ls
bin.1/
bin.112/
bin.126/
bin.24/
bin.38/
etc. etc.
Each of these directories contains two files named genes.faa and genes.gff, e.g. bin.1/genes.faa
I now want to add a suffix based on the parent directory so each gene file has a unique identifier, e.g. bin.1/bin1_genes.faa and bin1_genes.gff.
I've been going down the google rabbit hole all morning and nothing has sufficiently worked so far.
I tried something like this:
for each in ./bin.*/genes.faa ; mv genes.faa ${bin%-*}_genes.faa $each ; done
but that (and several versions of it) gives me the following error:
-bash: syntax error near unexpected token `mv'
This seems like a really generic task, but I haven't figured it out yet and would truly appreciate your help with it.
Cheers!
Try this Shellcheck-clean code:
#! /bin/bash -p
for genespath in bin.*/genes.*; do
    dir=${genespath%/*}
    dirnum=${dir##*.}
    genesfile=${genespath##*/}
    new_genespath="$dir/bin${dirnum}_${genesfile}"
    echo mv -iv -- "$genespath" "$new_genespath"
done
It currently just prints the required mv command. Remove the echo when you've confirmed that it will do what you want.
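With a directory layout like the one in the question, the dry run prints one line per genes file, for example:
mv -iv -- bin.1/genes.faa bin.1/bin1_genes.faa
mv -iv -- bin.1/genes.gff bin.1/bin1_genes.gff
mv -iv -- bin.112/genes.faa bin.112/bin112_genes.faa
and so on for the remaining directories.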
There may be a more elegant way of doing this, but create this script in the same directory as the bin directories, chmod 700, and run it. You might want to back up with tar first (tar -cf bin.tar ./bin*).
#!/bin/bash
files="bin.*"
for f in $files; do
    mv ./${f}/genes.faa ./${f}/${f}_genes.faa
    mv ./${f}/genes.gff ./${f}/${f}_genes.gff
done
I have the following shell script, which throws the following error if no files exist in the folder. So, how do we handle this so that the script doesn't stop executing?
Error:
mv: cannot stat `*': No such file or directory
Script:
for file in *
do
    fl=$(basename "$file")
    flname="${fl%%.*}"
    gunzip "$file"
    mv "$flname" "$flname-NxDWeb2"
    tar -zcf "$flname-NxDWeb2".tar.gz "$flname-NxDWeb2"
    rm "$flname-NxDWeb2"
done;
If the shell is bash, you can allow * to expand to the null string: shopt -s nullglob before your loop.
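A minimal sketch of the behaviour (the echo stands in for your gunzip/mv/tar steps):
shopt -s nullglob
for file in *
do
    echo "processing $file"   # with nullglob set, this body simply never runs in an empty directory
done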
BTW, you might want to explicitly specify the uncompressed filename to produce, in case your logic doesn't completely agree with gunzip's (which it probably won't, if there's more than one dot in the name, or the file ends with .tgz or .taz):
gunzip -c "$file" >"$flname"
(you will need to remove the original yourself in this case, though)
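For instance, a sketch that only removes the original once the decompression has succeeded:
gunzip -c "$file" >"$flname" && rm -- "$file"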
You can avoid the need to move, too:
flname="${fl%%.*}-NxDWeb2"
And you might want to use trap to ensure your temporaries are cleaned up in the failure case (possibly make your temporaries in $TMPDIR, too).
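A minimal sketch of that idea (the directory name template here is just illustrative):
workdir=$(mktemp -d "${TMPDIR:-/tmp}/repack.XXXXXX")
trap 'rm -rf "$workdir"' EXIT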
I'm trying to create a bunch of symbolic links for all the files in a directory. It seems like, when I type this command in the shell manually, it works just fine, but when I run it in a shell script, or even use the up arrow to re-run it, I get the following problem.
$ sudo ln -s /path/to/my/files/* /the/target/directory/
This should create a bunch of symlinks in /the/target/directory/, and if I type the command in manually, it indeed does. However, when I run the command from a shell script, or use the up arrow to re-run it, I get a single symbolic link in /the/target/directory/ called *; that is, the link name is actually '*', and I then have to run
$ sudo rm *
To delete it, which just seems insane to me.
When you run that command in the script, are there any files in /path/to/my/files? If not, then by default the wildcard has nothing to expand to, and it is not replaced. You end up with the literal "*". You might want to check out shopt -s nullglob and run the ln command like this:
shopt -s nullglob
sudo ln -s -t /the/target/directory /path/to/my/files/*
Maybe the script uses sh and you're using bash when executing the command.
You may try something like this:
for file in $(ls /path/to/my/files/*); do
    ln -s "${file}" "/the/target/directory/"
done
I am trying to rename some zip files in bash by adding an _orig suffix, but I seem to be missing something. Any suggestions?
My goal:
move files to an orig directory
rename original files with a "_orig" in the name
The code I've tried to write:
mv -v $PICKUP/*.zip $ORIGINALS
for origfile in $(ls $ORIGINALS/*.zip);do
echo "Adding _orig to zip file"
echo
added=$(basename $origfile '_orig').zip
mv -v $ORIGINALS/$origfile.zip $ORIGINALS/$added.zip
done
Sorry still kinda new at this.
Using (p)rename :
cd <ZIP DIR>
mkdir -p orig
rename 's#(.*?)\.zip#orig/$1_orig.zip#' *.zip
rename is http://search.cpan.org/~pederst/rename/ (default on many distros)
Also, take care never to use
for i in $(ls $ORIGINALS/*.zip);do
but use globs instead :
for i in $ORIGINALS/*.zip;do
See http://porkmail.org/era/unix/award.html#ls.
I know you've got a solution already, but just for posterity, this simplified version of your own shell script should also work for the case you seem to be describing:
mkdir -p "$ORIGINALS"
for file in "$PICKUP"/*.zip; do
mv -v "$file" "$ORIGINALS/${file%.zip}_orig.zip"
done
This makes use of "Parameter Expansion" in bash (you can look that up in bash's man page). The initial mkdir -p simply insures that the target directory exists. The quotes around $PICKUP and $ORIGINALS are intended to make it safe to include special characters like spaces and newlines in the directory names.
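For example, with a hypothetical path the two expansions used above behave like this:
file="/data/pickup/reports.zip"      # hypothetical path for illustration
base=${file##*/}                     # -> reports.zip (strip everything up to the last /)
echo "${base%.zip}_orig.zip"         # -> reports_orig.zip (strip the .zip suffix, add the marker)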
While prename is a powerful solution to many problems, it's certainly not the only hammer in the toolbox.
I'm sure there is a simple way to do this, but I am not finding it. What I want to do is execute a series of commands using lftp, and I want to avoid repeatedly connecting to the server if possible.
Basically, I have a file with a list full of ftp directories on the server. I want to connect to the server then execute something like the following: (assume at this point that I have already converted the text file into an array of lines using cat)
for f in "${myarray}"
do
    cd $f;
    nlist >> $f.txt;
    cd ..;
done
Of course that doesn't work, but I have to imagine there is a simple solution to what I am trying to accomplish.
I am quite inexperienced when it comes to shell scripting. Any suggestions?
First build a string that contains the list of lftp commands. Then call lftp, passing the command on its standard input. Lftp itself can redirect the output of a command to a file, with a syntax that resembles the shell.
list_commands=""
for dir in "${myarray[#]}"; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done
lftp <<EOF
open -u $username,$password $site
$list_commands
bye
EOF
Note that I assume that the directory names don't contain backslashes, single quotes or globbing characters. Add proper escaping if necessary.
By the way, to read lines from a file, see Why is while IFS= read used so often, instead of IFS=; while read..?. You might prefer to combine reading from the list of directories and building the commands:
list_commands=""
while IFS= read -r dir; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done <directory_list.txt
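Then feed $list_commands to lftp through the same here-document shown above.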