I have a folder of images. I want to iterate through the folder, apply the same ImageMagick convert function to each file, and save the output to a separate folder.
The way I'm doing it currently is this:
#!/bin/bash
mkdir "Folder2"
for f in Folder1/*.png
do
echo "convert -brightness-contrast 10x60" "Folder1/$f" "Folder2/"${f%.*}"_suffix.png"
done
Then I copy and paste that terminal output into a new bash script that ends up looking like this:
#!/bin/bash
convert -brightness-contrast 10x60 Folder1/file1.png Folder2/file1_suffix.png
convert -brightness-contrast 10x60 Folder1/file2.png Folder2/file2_suffix.png
convert -brightness-contrast 10x60 Folder1/file3.png Folder2/file3_suffix.png
I tried to write a single bash script for this task, but there was some weirdness with the variable handling, and this two-script method got me what I needed. However, I suspect there's an easier/simpler way, possibly even a one-line solution.
It's enough to change your first script so that it executes the commands instead of just echoing them. One thing to watch: with the glob Folder1/*.png, $f already contains the Folder1/ prefix, so don't prepend Folder1/ again, and strip the directory when building the output name:
#!/bin/bash
mkdir -p Folder2
for f in Folder1/*.png
do
convert -brightness-contrast 10x60 "$f" "Folder2/$(basename "$f" .png)_suffix.png"
done
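If a one-line solution is what you're after, ImageMagick's companion tool mogrify can also process the whole set of files in one command. A sketch, with the caveat that -path only sets the output directory and mogrify keeps the original file names, so no _suffix is added:
mkdir -p Folder2
mogrify -path Folder2 -brightness-contrast 10x60 Folder1/*.png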
My crystal ball tells me that there are spaces in the filenames, causing the "weirdness with the variable handling". In that case you need a workaround for the spaces. For example, you may try the following script:
#!/bin/bash
hasspaces="^(.+[^\'\"])([ ])(.+)$"
function escapespaces {
declare -n name=$1
while [[ $name =~ $hasspaces ]] ; do
name=${BASH_REMATCH[1]}'\'${BASH_REMATCH[2]}${BASH_REMATCH[3]}
echo 'Escaped string: '\'$name\'
done
}
mkdir Folder2
while read -r entry; do
echo "File '$entry'"
escapespaces entry
echo "File '$entry'"
tmp=${entry#Folder1}
eval "convert -brightness-contrast 10x60" "$entry" "Folder2/"${tmp%.*}"_suffix.png"
done <<<"$(eval "ls -1 Folder1/*.png")"
If this does not work, by all means let me know so I can request a refund for my crystal ball! Also, if you can give more details on the "weirdness in variable handling", we could try to help with those other weirdnesses :-)
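That said, consistent quoting is usually enough to cope with spaces, without eval or manual escaping. A minimal sketch, assuming the same Folder1/Folder2 layout as above:
#!/bin/bash
mkdir -p Folder2
for f in Folder1/*.png; do
    # "$f" is quoted everywhere, so spaces in the file name are harmless
    convert -brightness-contrast 10x60 "$f" "Folder2/$(basename "$f" .png)_suffix.png"
done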
Check this answer out (and the few others on that question). You can use it in your example:
find Folder1 -name "*.png" | sed -e 'p;s/Folder1/Folder2/g' -e 's/\.png$/_suffix.png/' | xargs -n2 convert -brightness-contrast 10x60
Note: the p in the first sed expression is what does the trick.
find lists all the files in Folder1 whose names match the *.png pattern
sed -e 'p;s/Folder1/Folder2/g' will (a) print the input line unchanged and (b) replace Folder1 with Folder2
-e 's/\.png$/_suffix.png/' replaces the .png suffix with the _suffix.png suffix
xargs -n2 makes xargs pass at most two arguments per invocation of the command (the first being the original path printed by p, the second the path transformed by the s commands)
convert ... is your command, taking those two paths as its input and output.
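To see what xargs receives, run just the find | sed part on its own: the output alternates the original path with the transformed path, and xargs -n2 then consumes them in pairs (illustrative file names):
find Folder1 -name "*.png" | sed -e 'p;s/Folder1/Folder2/g' -e 's/\.png$/_suffix.png/'
# Folder1/file1.png
# Folder2/file1_suffix.png
# Folder1/file2.png
# Folder2/file2_suffix.png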
Related
I'd like to remove the leading part like '225086191.sd.mp4 ' from filenames that look like this:
'225086191.sd.mp4 11juillet2017.mp4'
'221896766.sd.mp4 16juin2017.mp4'
'221950562.sd.mp4 17juin2017.mp4'
'222199508.sd.mp4 19juin2017.mp4'
so that I have only:
'11juillet2017.mp4'
'16juin2017.mp4'
'17juin2017.mp4'
'19juin2017.mp4'
The only delimiter I have in the filenames is the blank after ".sd.mp4".
I'd appreciate some help; I have a directory with at least 300 files to correct.
Thanks a lot
You can select all the files in Finder and choose "Rename n objects..." from the context menu. In the dialog choose "replace", enter "225086191.sd.mp4 " as the search pattern, and leave the replacement empty.
If you need to do it by terminal, you can write a script:
#!/bin/sh
for f in *.mp4
do
n=$(echo "$f" | sed -E 's/[0-9]+\.sd\.mp4 //')
echo mv "$f" "$n"
done
Place this code as rename-files.sh in the directory of the mp4 files, and execute it with sh rename-files.sh. You can execute this without any harm, because it only prints the commands. To actually rename the files, remove the echo in front of the mv "$f" "$n" command.
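Since the blank is the only delimiter, plain parameter expansion would also work, without sed. A sketch in the same dry-run spirit (remove the echo once the output looks right):
for f in *" "*.mp4
do
    echo mv "$f" "${f#* }"    # ${f#* } strips everything up to and including the first blank
done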
Let's say I have a bunch of files named something like this: bsdsa120226.nai bdeqa140223.nai and I want to rename them to 120226.nai 140223.nai. How can I achieve this using the script below?
#!/bin/bash
name1=`ls *nai*`
names=`ls *nai*| grep -Po '(?<=.{5}).+'`
for i in $name1
do
for y in $names
do
mv $i $y
done
done
Solution:
name1=`ls *nai*`
for i in $name1
do
y=$(echo "$i" | grep -Po '(?<=.{5}).+')
mv "$i" "$y"
done
This:
#!/bin/bash
shopt -s extglob nullglob
for file in *+([[:digit:]]).nai; do
echo mv -nv -- "$file" "${file##+([^[:digit:]])}"
done
Remove the echo if you're happy with the mv commands.
Note: this solution does not assume that there are 5 leading characters to delete; it will delete all the leading non-numeric characters.
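A quick illustration of that parameter expansion on one of the example names (extglob must be enabled, as in the script above):
shopt -s extglob
f='bsdsa120226.nai'
echo "${f##+([^[:digit:]])}"    # prints 120226.nai: the longest leading run of non-digits is removed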
Using only bash, you could do this:
for file in *nai* ; do
echo mv -- "$file" "${file:5}"
done
(Remove the echo when satisfied with the output.)
Avoid ls in scripts, except for displaying information. Use plain globbing instead.
See also How do I do string manipulations in bash? for more string manipulation techniques.
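In contrast to the pattern-based stripping in the previous answer, ${file:5} is offset-based substring expansion. A tiny illustration with one of the example names:
f='bsdsa120226.nai'
echo "${f:5}"    # prints 120226.nai: everything from character offset 5 onward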
Your script can't work with that structure: if you have 5 files, it will call mv five times for the first file (once for each element in the second list), five times for the second, etc. You'd need to iterate over the two sets of names in lockstep. (It also doesn't deal with things like whitespace in filenames.)
You would be better off using rename (prename on some systems) since that allows you to use Perl regular expressions to do the renaming, along the lines of:
prename 's/^.{5}//' *.nai
The reason your script is not behaving is that, for every source file, you're attempting to rename it to every target file.
If you need to limit yourself to using that script, you need to work out the single target file for each source file, something like:
#!/bin/bash
for i in *.nai; do
y=$(echo "$i" | cut -c6-)
mv "$i" "$y"
done
If your system has the rename tool, it's better to go with a simple rename command:
rename 's/^.{5}//' *.nai
It just removes the first 5 characters from the file name.
OR
for i in *.nai; do mv "$i" "$(grep -oP '(?<=^.{5}).+' <<< "$i")"; done
I'm getting an error while trying to replace a string in a directory path with another string:
sed: Error trying to read from {directory_path}: It's a directory
The shell script
#!/bin/sh
R2K_SOURCE="source/"
R2K_PROCESSED="processed/"
R2K_TEMP_DIR=""
echo " Procesando archivos desde $R2K_SOURCE "
for file in $(find $R2K_SOURCE )
do
if [ -d $file ]
then
R2K_TEMP_DIR=$( sed 's/"$R2K_SOURCE"/"$R2K_PROCESSED"/g' $file )
echo "directorio $R2K_TEMP_DIR"
else
# some code executes
:
fi
done
# find $R2K_PROCCESED -type f -size -200c -delete
I understand that the error is in this line:
R2K_TEMP_DIR=$( sed 's/"$R2K_SOURCE"/"$R2K_PROCESSED"/g' $file )
but I don't know how to tell sh to treat the $file variable as a string and not as a directory.
If you want to replace part of a path name, you can echo the path name and pipe it to sed.
Also, you must put the sed command in double quotes instead of single quotes so that the shell variables are expanded, and change the separator of the 's' command, like this:
R2K_TEMP_DIR=$(echo "$file" | sed "s:$R2K_SOURCE:$R2K_PROCESSED:g")
That way the slashes inside the paths no longer clash with the 's' command's separator.
Update:
Even better is to remove the useless echo and use a here-string instead:
R2K_TEMP_DIR=$(sed "s:$R2K_SOURCE:$R2K_PROCESSED:g" <<< "$file")
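A quick usage sketch with hypothetical values, to show the substitution at work:
R2K_SOURCE="source/"
R2K_PROCESSED="processed/"
file="source/subdir/archivo.txt"
R2K_TEMP_DIR=$(sed "s:$R2K_SOURCE:$R2K_PROCESSED:g" <<< "$file")
echo "$R2K_TEMP_DIR"    # prints processed/subdir/archivo.txt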
First, don't use:
for item in $(find ...)
because you might overload the command line. Besides, the for loop cannot start until the process in $(...) finishes. Instead:
find ... | while read item
You also need to watch out for funky file names. The for loop will cough on any file with spaces in its name. The plain find | while read is better, but it still trims leading and trailing whitespace, mangles backslashes, and cannot cope with newlines in file names. Better:
find ... -print0 | while IFS= read -r -d '' item
This will put nulls between file names, and read will break on those nulls. This way, files with spaces, tabs, new lines, or anything else that could cause problems can be read without problems.
Your sed line is:
R2K_TEMP_DIR=$( sed 's/"$R2K_SOURCE"/"$R2K_PROCESSED"/g' $file )
What this is attempting to do is edit your $file which is a directory. What you want to do is munge the directory name itself. Therefore, you have to echo the name into sed as a pipe:
R2K_TEMP_DIR=$(echo "$file" | sed "s:$R2K_SOURCE:$R2K_PROCESSED:g")
However, you might be better off using shell parameter expansion to transform the variable instead.
Basically, you have a directory called source/ and all of the files you're looking for are under that directory. You simply want to change:
source/foo/bar
to
processed/foo/bar
You could do something like ${file#source/}. The # removes from the left, deleting the shortest string that matches the glob pattern after the #. Check the bash manpage under Parameter Expansion.
Thus, you could do something like this:
#!/bin/bash
R2K_SOURCE="source/"
R2K_PROCESSED="processed/"
R2K_TEMP_DIR=""
echo " Procesando archivos desde $R2K_SOURCE "
find "$R2K_SOURCE" -print0 | while IFS= read -r -d '' file
do
if [ -d "$file" ]
then
R2K_TEMP_DIR="processed/${file#source/}"
echo "directorio $R2K_TEMP_DIR"
else
# some code executes
:
fi
done
R2K_TEMP_DIR="processed/${file#source/}" removes the source/ from the start of $file and you merely prepend processed/ in its place.
Even better, it's way more efficient. In your original script, $(...) creates another shell process to run the echo, which then pipes into yet another process to run sed (assuming you use loentar's solution). With parameter expansion you no longer have any subprocesses; the whole modification of the directory name happens inside the shell.
By the way, this should work too:
R2K_TEMP_DIR="$R2K_PROCESSED/${file#$R2K_SOURCE}"
I just didn't test that.
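A tiny illustration of that expansion with a hypothetical path:
file="source/foo/bar.txt"
echo "${file#source/}"              # prints foo/bar.txt
echo "processed/${file#source/}"    # prints processed/foo/bar.txt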
I've got a directory with a few thousand files in it, named things like:
filename.ext
filename (1).ext
filename (2).ext
otherfile.ext
otherfile (1).ext
etc.
Most of the files with bracketed numbers are duplicates of the original, but in some cases they're not.
How can I keep my original files, delete the duplicates, but not lose the files that are different?
I know that I could rm *\).ext, but that obviously doesn't make sure that files match the original.
I'm using OS X, so I have a md5 program that functions sort of like md5sum in Linux, though it puts the hash at the end of the line instead of the beginning. I was thinking I could use an awk script to take the output of md5 *.ext | awk 'some script', find duplicates by md5, and delete them, but the command line is too long (bash: /sbin/md5: Argument list too long).
And I don't know what to write in the script. I was thinking of storing things in an array with this:
awk '{a[$NF]++} a[$NF]>1{sub(/).*/,""); sub(/.*(/,""); system("rm " $0);}'
But that always seems to delete my original.
What am I doing wrong? How do I do it right?
Thanks.
Your awk script deletes original files because of how the file names sort: '.' (period) sorts after ' ' (space). So the first file that's seen is a numbered one, not the original, and subsequent checks (including the one against the original) compare files to that first numbered one.
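You can see the ordering problem directly (illustrative names; in the C locale a space sorts before a period):
printf '%s\n' 'filename.ext' 'filename (1).ext' | LC_ALL=C sort
# filename (1).ext    <- the numbered copy sorts first, so it gets treated as the "original"
# filename.ext        <- and the real original looks like a duplicate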
Not only does rm *\).ext fail to check the numbered files against the original, it also removes files that may not have an original in the first place.
I wouldn't do this quite this way. Rather than checking every numbered file and verifying whether it matches an original, you can go through your list of originals, then delete the numbered files that match them.
Instead:
$ for file in *[^\)].ext; do echo "-- Found: $file"; rm -v $(basename "$file" .ext)\ \(*\).ext; done
You can expand this to check MD5's along the way. But it's more code, so I'll break it into multiple lines, in a script:
#!/bin/bash
shopt -s nullglob # Show nothing if a fileglob matches no files
for file in *[^\)].ext; do
md5=$(md5 -q "$file") # The -q option gives you only the message digest
echo "-- Found: $file ($md5)"
for duplicate in $(basename "$file" .ext)\ \(*\).ext; do
if [[ "$md5" = "$(md5 -q "$duplicate")" ]]; then
rm -v "$duplicate"
fi
done
done
As an alternative, you can probably get away with doing this a little more simply, with less CPU overhead than calculating MD5 digests. Unix and Linux have a shell tool called cmp, which is like diff without the output. So:
#!/bin/bash
shopt -s nullglob
for file in *[^\)].ext; do
for duplicate in $(basename "$file" .ext)\ \(*\).ext; do
if cmp "$file" "$duplicate"; then
rm -v "$file"
fi
done
done
If you don't need to use AWK and don't need the content check, you could do something even simpler in bash; note that this deletes every numbered file whose base name exists, without comparing contents:
for file in *\([0-9]*\)*; do
[ -e "$(echo "$file" | sed -e 's/ ([0-9]\+)//')" ] && rm "$file"
done
Hope this helps a little =)
So I am going to post a question about shell scripting again.
Problem Definition: For all files under a dir, ex.:
A_anything.txt, B_anything.txt, ......
I want to execute a script, say 'CMD', on each of them, with the output files named like:
A_result.txt, B_result.txt, ......
In addition, the first line of each output file should contain the file name of the original one.
The 'find -exec' util seems to me unable to extract part of the file name.
Does someone know a solution to this problem, by any means (shell, python, find, etc.)? Thank you!
cd /directory
for file in *.txt ; do
newfilename=`echo "$file" | sed 's/\(.*\)_.*/\1_result.txt/'`
echo "$file" > "$newfilename"
your-command "$file" >> "$newfilename"
done
HTH
Well, there's more than one way to do it (including using Perl, where that's the motto), but probably I'd write it like this:
find . -name '[A-Z]_*.txt' -type f -print0 |
xargs -0 modify_rename.sh
And then I'd write the script modify_rename.sh like this:
#!/bin/sh
for file in "$#"
do
dirname=$(dirname "$file")
basename=$(basename "$file" .txt)
leadname=${basename%_*}    # prefix before the last underscore (no directory, no .txt)
outname="$dirname/${leadname}_result.txt"
# Optionally check for pre-existence of $outname
{
# Optionally echo "$basename.txt" instead of "$file"
echo "$file"
# Does this invocation of CMD write to standard output?
# If not, adjust invocation appropriately.
CMD "$file"
} > "$outname"
done
The advantage of splitting this into separate scripting operations is that the rename/modify operation can be tested separately from the search process, which runs less risk of zapping your entire directory structure with bad commands.
Bash has the tools to avoid invoking basename and dirname but the notation is moderately excruciating; I find the clarity of the command names worth having. I'd be happy if bash implemented them as built-ins. There are plenty of other ways to get the prefix of the file; this should be safe, though, even in the presence of spaces (tabs, newlines) in file or directory names because of the careful use of double quotes.
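For the curious, rough parameter-expansion equivalents look like this; a sketch with a hypothetical path (unlike dirname and basename, these misbehave on paths with no slash or with a trailing slash):
file='./sub dir/A_anything.txt'
dir=${file%/*}       # ./sub dir        (roughly dirname "$file")
base=${file##*/}     # A_anything.txt   (roughly basename "$file")
base=${base%.txt}    # A_anything       (roughly basename "$file" .txt)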