I'm trying to create a for loop over folders whose names contain spaces, commas and parentheses. For example:
Italy - Rimini (Feb 09, 2013)
First it scans a parent folder /albums for sub-folders that look like the example above. Then it runs curl on the files in those sub-folders. It works fine if the sub-folder names do not contain spaces, commas or other symbols.
for dir in `ls /albums`;
do
    for file in /albums/$dir/*
    do
        curl http://upload.com/up.php -F uploadfile[]=@"$file" > out.txt
        php process.php
    done
    php match.php
done
But if there are such symbols, it seems that the curl bit gets stuck - it can't find the $file (probably because $dir is incorrect).
I could replace all the symbols in the sub-dirs or remove them or rename the folders to 001, 002 and it works flawlessly. But before resorting to that I'd like to know if it can be solved using bash tricks while keeping the sub-folder name intact?
Familiarize yourself with the concept of word splitting in your shell. Then realize that using ls to get a list of files with spaces is asking for trouble. Instead, use shell globbing and then quote the expansions:
cd /albums
for dir in *; do
    for file in /albums/"$dir"/*; do
        echo x"$dir"x"$file"x
    done
    php match.php
done
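A minimal sketch of how the curl step from the question might slot into that corrected loop; the URL, form field and helper scripts are copied from the question, and the quoting of "$file" is the point being illustrated:
cd /albums
for dir in *; do
    for file in /albums/"$dir"/*; do
        # quoting "$file" keeps spaces, commas and parentheses intact
        curl http://upload.com/up.php -F uploadfile[]=@"$file" > out.txt
        php process.php
    done
    php match.php
done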
For problems with spaces in filenames, you have to change the IFS to
IFS='
'
which tells the shell that only line breaks are field separators. By default IFS is set to spaces, tabs and line breaks.
So just put this before the loop begins, and it will work with filenames that contain spaces.
And of course put quotes around your variable names.
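Applied to the loop from the question, the idea looks roughly like this (it still parses ls, which the other answer advises against, so treat it as a sketch):
IFS='
'
for dir in `ls /albums`; do
    for file in /albums/"$dir"/*; do
        echo "processing $file"
    done
done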
I have the following command to gather all files in a folder and concatenate them... but what is held in the variable is only the file names and not the directory. How can I add 'colid-data/' to each of the files for cat to use?
cat $(ls -t colid-data) > catfiles.txt
List the filenames, not the directory.
cat $(ls -t colid-data/*) > catfiles.txt
Note that this will not work if any of the filenames contain whitespace. See Why not parse ls? for better alternatives.
If you want to concatenate them in date order, consider using zsh:
cat colid-data/*(.om) >catfiles.txt
That would concatenate all regular files only, in order of most recently modified first.
From bash, you could do this with
zsh -c 'cat colid-data/*(.om)' >catfiles.txt
If the ordering of the files is not important (and if there's only regular files in the directory, no subdirectories), just use
cat colid-data/* >catfiles.txt
All of these variations would work with filenames containing spaces, tabs and newlines, since the list of pathnames returned by a filename globbing pattern is not split into further words (unlike the result of an unquoted command substitution, which is).
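If you want to stay in plain bash and the ordering does not matter, a simple loop over the glob is also whitespace-safe; a minimal sketch:
for f in colid-data/*; do
    [ -f "$f" ] || continue    # skip anything that isn't a regular file
    cat "$f"
done > catfiles.txt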
I have a folder which may contain several files. Among those files I have files like these:
test.xml
test.jar
test.jarGENERATED
dev.project.jar
...
and many other files. To get only the "dev.project.jar" I have executed:
ls | grep ^{{dev}}.*.jar$
This displays the file with its properties for me. However, I only want the file name (only the file name string)
How can I rectify it?
ls and grep are both unnecessary here. The shell will show you any file name matches for a wildcard:
echo dev.*.jar
(ls dev.*.jar without options will do something similar by itself; if you see anything more than the filename, perhaps you have stupidly defined alias ls='ls -l' or something like that?)
The argument to grep should be a regular expression; what you specified would match {{dev}} and not dev, though in the absence of quoting, your shell might have expanded the braces. The proper regex would be grep '^dev\..*\.jar$' where the single quotes protect the regex from any shell expansions, . matches any single character, and * repeats the preceding item zero or more times. To match a literal dot, we backslash-escape it.
Just printing a file name is rarely very useful; often times, you actually want something like
for file in ./dev.*.jar; do
    echo "$file"
    : probably do more things with "$file"
done
though if that's all you want, maybe prefer printf over echo, which also lets you avoid the loop:
printf '%s\n' dev.*.jar
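If you want the matching name in a variable rather than just printed, one hedged option is to collect the glob matches into an array (with nullglob set, a non-match yields an empty list instead of the literal pattern):
shopt -s nullglob
matches=( dev.*.jar )
if [ "${#matches[@]}" -gt 0 ]; then
    printf 'First match: %s\n' "${matches[0]}"
else
    echo "no matching jar found" >&2
fi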
I am trying to copy the path of a .nii file (Gabor3.nii) into a variable, but even though the file is found by the find command, I can't copy the path to the variable.
find . -type f -name "*.nii"
Data= '/$PWD/"*.nii"'
output:
./Gabor3.nii
./hello.sh: line 21: /$PWD/"*.nii": No such file or directory
What went wrong
You show that you're using:
Data= '/$PWD/"*.nii"'
The space means that the Data= part sets an environment variable $Data to an empty string, and then attempts to run '/$PWD/"*.nii"'. The single quotes mean that what is between them is not expanded, and you don't have a directory /$PWD (that's a directory name of $, P, W, D in the root directory), so the script "*.nii" isn't found in it, hence the error message.
Using arrays
OK; that's what's wrong. What's right?
You have a couple of options. The most reliable is to use an array assignment and shell expansion:
Data=( "$PWD"/*.nii )
The parentheses (note the absence of a space before the (; that's crucial) make it an array assignment. Using shell globbing gives a list of names, preserving spaces etc. in the names correctly. Using double quotes around "$PWD" ensures that the expansion is correct even if there are spaces in the current directory name.
You can find out how many files there are in the list with:
echo "${#Data[#]}"
You can iterate over the list of file names with:
for file in "${Data[#]}"
do
echo "File is [$file]"
ls -l "$file"
done
Note that variable references must be in double quotes for names with spaces to work correctly. The "${Data[@]}" notation has parallels with "$@", which also preserves spaces in the arguments to the command. There is a "${Data[*]}" variant which behaves analogously to "$*", and is of similarly limited value.
If you're worried that there might not be any files with the extension, then use shopt -s nullglob to expand the globbing expression into an empty list rather than the unexpanded expression which is the historical default. You can unset the option with shopt -u nullglob if necessary.
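Putting those pieces together, a sketch of the whole pattern (nullglob plus an emptiness check) might look like this:
shopt -s nullglob              # a non-matching glob expands to an empty list
Data=( "$PWD"/*.nii )
if [ "${#Data[@]}" -eq 0 ]; then
    echo "No .nii files found in $PWD" >&2
else
    for file in "${Data[@]}"; do
        echo "File is [$file]"
        ls -l "$file"
    done
fi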
Alternatives
Alternatives involve things like using command substitution Data=$(ls "$PWD"/*.nii), but this is vastly inferior to using an array unless neither the path in $PWD nor the file names contain any spaces, tabs or newlines. If there is no white space in the names, it works OK; you can iterate over it with:
for file in $Data
do
    echo "No white space [$file]"
    ls -l "$file"
done
but this is altogether less satisfactory if there are (or might be) any white space characters around.
You can use command substitution:
Data=$(find . -type f -name "*.nii" -print -quit)
The -quit option stops searching after the first file is found, which prevents multi-line output (omit it if you're sure only one file will be found, or if you want to process multiple files).
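For example, a hedged way to use that and verify something was actually found:
Data=$(find . -type f -name "*.nii" -print -quit)
if [ -n "$Data" ]; then
    echo "Found: $Data"
else
    echo "No .nii file found" >&2
fi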
The syntax to do what you seem to be trying to do with:
Data= '/$PWD/"*.nii"'
would be:
Data="$(ls "$PWD"/*.nii)"
Not saying it's the best approach for whatever you want to do next of course, it's probably not...
I am trying to create a "watch" folder into which I can copy 2 sets of files with the same name but different file extensions. I have a program that needs to reference both files, but since they have the same name, only differing by extension, I figure I might be able to do something like this with a cron job:
cronjob.sh:
#/bin/bash
ls *.txt > processlist.txt
for filename in 'cat processlist.txt'; do
/usr/local/bin/runcommand -input1=/home/user/process/$filename \
-input2=/home/user/process/strsub($filename, -4)_2.stl \
-output /home/user/process/done/strsub($filename, -4)_2.final;
echo "$filename finished processing"
done
but substr is a php command, not bash. What would be the right way of doing this?
strsub($filename, -4)
in Bash is
${filename:(-4)}
See Shell Parameter Expansion.
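For reference, a quick illustration of what that expansion yields (the filename here is just an example):
filename="photo.txt"
echo "${filename:(-4)}"    # prints ".txt" (the last 4 characters)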
Your command can look like
/usr/local/bin/runcommand "-input1=/home/user/process/$filename" \
"-input2=/home/user/process/${filename:(-4)}_2.stl" \
"-output /home/user/process/done/${filename:(-4)}_2.final"
Note: Prefer wrapping arguments that contain variables in double quotes, to prevent word splitting and possible pathname expansion. This helps with filenames that contain spaces.
It would also be better to pass your glob pattern directly to for, so the filenames are expanded as whole words instead of being subject to word splitting:
for filename in *.txt; do
So Konsolebox's solution was almost right, but the issue was that ${filename:(-4)} only returns the last 4 characters of the variable instead of trimming the last 4 off. What I did was change it to ${filename%.txt}, where the %.txt matches the text I want to find and remove, and then just tagged .mp3 on at the end to change the extension.
His other suggestion of using this for loop also was much better than mine:
for filename in *.txt; do
The only other modification was putting the full command all on one line in the end. I divided it up here to make sure it was all easily visible.
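A sketch of that corrected substitution on its own (the .mp3 extension is the one mentioned above; the echo stands in for the real command):
for filename in *.txt; do
    base=${filename%.txt}        # strip the trailing ".txt"
    echo "base: $base -> new name: ${base}.mp3"
done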
I have a lot of files (in single directory) like:
[a]File-. abc'.d -001[xxx].txt
so there are many spaces, apostrophes, brackets, and full stops. The only differences between them are numbers in place of 001, and letters in place of xxx.
How to remove the middle part, so all that remains would be
[a]File-001[xxx].txt
I'd like an explanation of how such code works, so I could adapt it for other uses, and hopefully help answer others' similar questions.
Here is a simple script in pure bash:
for f in *; do                   # for all entries in the current directory
    if [ -f "$f" ]; then         # if the entry is a regular file (i.e. not a directory)
        mv "$f" "${f/-*-/-}"     # rename it by removing everything between two dashes
                                 # and the dashes, and replace the removed part
                                 # with a single dash
    fi
done
The magic done by the "${f/-*-/-}" expression is described in the bash manual (run info bash) in section 3.5.3, Shell Parameter Expansion.
The * pattern in the first line of the script can be replaced with anything that can help to narrow the list of the files you want to rename, e.g. *.txt, *File*.txt, etc.
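As a quick illustration of that substitution applied to the sample name from the question:
f="[a]File-. abc'.d -001[xxx].txt"
echo "${f/-*-/-}"    # prints: [a]File-001[xxx].txt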
If you have the rename (aka prename) utility that's a part of Perl distribution, you could say:
rename -n 's/([^-]*-).*-(.*)/$1$2/' *.txt
to rename all txt files in your desired format. The -n above would not perform the actual rename, it'd only tell you what it would do had you not specified it. (In order to perform the actual rename, remove -n from the above command.)
For example, this would rename the file
[a]File-. abc'.d -001[xxx].txt
as
[a]File-001[xxx].txt
Regarding the explanation, this captures the part up to (and including) the first - into one group and the part after the last - into another, and combines those.
Read about Regular Expressions. If you have perl docs available on your system, saying perldoc perlre should help.