Remove part of name of multiple files on mac os - bash

I have a directory full of .png files with random characters in the middle of the filenames, like:
T1_021_É}ÉcÉjÉV_solid box.png
T1_091_ÉRÉjÉtÉ#Å[_City.png
T1_086_ÉnÉiÉ~ÉYÉL_holiday.png
I expect this after removing them:
T1_021_solid box.png
T1_091_City.png
T1_086_holiday.png
Thank you

Using for to collect the file list and bash parameter expansion with substring removal, you can do the following in the directory containing the files:
for i in T1_*; do
beg="${i%_*_*}" ## remove from the 2nd-to-last '_' to the end
end="${i##*_}" ## remove everything up to and including the last '_'
mv "$i" "${beg}_$end" ## mv file to new name.
done
(Note: you don't have to use the variables beg and end; you can combine both parameter expansions to form the new filename directly, e.g. mv "$i" "${i%_*_*}_${i##*_}". It's up to you, but beg and end make things a bit more readable.)
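To see what the two expansions produce for one of the sample names, here is a minimal sketch you can paste into an interactive shell (the filename is taken from the question):
i='T1_021_É}ÉcÉjÉV_solid box.png'
echo "${i%_*_*}"           ## T1_021 (shortest suffix matching _*_* removed)
echo "${i##*_}"            ## solid box.png (longest prefix matching *_ removed)
echo "${i%_*_*}_${i##*_}"  ## T1_021_solid box.png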
Result
New file names:
$ ls -al T1_*
T1_021_solid box.png
T1_086_holiday.png
T1_091_City.png
Just another way to approach it from bash only.
Using cut
You can use cut to remove the 3rd field, using '_' as the delimiter:
for i in T1_*; do
mv "$i" $(cut -d'_' -f-2,4- <<< "$i")
done
(same output)
The only drawback is that the cut in the command substitution requires an additional subshell to be spawned on each iteration.

If the set of random characters has _ before and after it:
find . -type f -iname "T1_0*" 2>/dev/null | while read -r file; do
mv "${file}" "$(echo "${file}" | cut -d'_' -f1,2,4-)"
done
Explanation:
Find all files that start with T1_
Read the list line by line using the while loop
Use _ as delimiter and cut the 3rd column
Use mv to rename
Filenames after renaming:
T1_021_solid box.png
T1_086_holiday.png
T1_091_City.png

Substitute shortest match of pattern in filename

I have files with the following filename pattern:
C14_1_S1_R1_001_copy1.fastq.gz
That I would like to be renamed this way:
C14_1_S1_R1.fastq.gz
I have tested unsuccessfully the following pattern replacement strategy:
for f in *.fastq.gz; do echo mv "$f" "${f/_*./_}"; done
Any suggestion is welcome.
Your original filename has several underscore characters, but you only want to remove from the second-to-last underscore onward. In that case, try:
mv "$f" "${f%_*_*}.fastq.gz"
Consider a directory with these files:
$ ls -1
C14_1_S1_R1_001_copy1.fastq.gz
C15_1_S1_R1_001_copy1.fastq.gz
If we run our loop and then run a new ls, we see the changed filenames:
$ for f in ./*.fastq.gz; do mv "$f" "${f%_*_*}.fastq.gz"; done
$ ls -1
C14_1_S1_R1.fastq.gz
C15_1_S1_R1.fastq.gz
The key here is that ${var%word} is suffix removal and it matches the shortest possible suffix that matches the glob word. Thus, ${f%_*_*} removes the second-to-last underscore character and everything after it. ${f%_*_*}.fastq.gz removes the second-to-last underscore character and everything after and then restores your desired suffix of .fastq.gz.
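As a minimal illustration of shortest versus longest suffix removal on the sample name (the echo lines are only there to show the intermediate values):
f='C14_1_S1_R1_001_copy1.fastq.gz'
echo "${f%_*_*}"           ## C14_1_S1_R1  (% removes the shortest suffix matching _*_*)
echo "${f%%_*_*}"          ## C14          (%% would remove the longest matching suffix)
echo "${f%_*_*}.fastq.gz"  ## C14_1_S1_R1.fastq.gz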
Alternatively, with cut:
str="C14_1_S1_R1_001_copy1.fastq.gz"
front=$(echo "${str}" | cut -d'_' -f1-4)
back=$(echo "${str}" | cut --complement -d'.' -f1)
echo "${front}.${back}"
With regex using the =~ test operator and BASH_REMATCH
#!/usr/bin/env bash
for file in *.fastq.gz; do
if [[ $file =~ ^(.+)(_[[:digit:]]+_copy.*[^\.])(\.fastq\.gz)$ ]]; then
echo mv -v "$file" "${BASH_REMATCH[1]}${BASH_REMATCH[3]}"
fi
done
Basically it just splits C14_1_S1_R1_001_copy1.fastq.gz into three parts.
BASH_REMATCH[1] has C14_1_S1_R1
BASH_REMATCH[2] has _001_copy1
BASH_REMATCH[3] has .fastq.gz
Remove the echo if you're ok with the output so the files can be renamed.

Handle files with space in filename and output file names

I need to write a Bash script that achieve the following goals:
1) move the newest n pdf files from folder 1 to folder 2;
2) correctly handles files that could have spaces in file names;
3) output each file name in a specific position in a text file. (In my actual usage, I will use sed to put the file names in a specific position of an existing file.)
I tried to make an array of filenames and then move them and do text output in a loop. However, the following array cannot handle files with spaces in filename:
pdfs=($(find -name "$DOWNLOADS/*.pdf" -print0 | xargs -0 ls -1 -t | head -n$NUM))
Suppose a file has name "Filename with Space". What I get from the above array will have "with" and "Space" in separate array entries.
I am not sure how to avoid these words in the same filename being treated separately.
Can someone help me out?
Thanks!
-------------Update------------
Sorry for being vague on the third point as I thought I might be able to figure that out after achieving the first and second goals.
Basically, it is a text file that has a line starting with "%comment" near the end, and I need to insert the filenames before that line in the format "file=PATH".
The PATH is the folder 2 that I have my pdfs moved to.
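(For reference, a minimal sketch of that kind of insertion with GNU sed's i command; report.pdf, /path/to/folder2 and notes.txt are made-up names used only for illustration:
## insert "file=..." before every line starting with %comment (the question has only one such line)
sed -i '/^%comment/i file=/path/to/folder2/report.pdf' notes.txt
)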
You can achieve this using mapfile in conjunction with the GNU versions of find, sort, cut and head, which have options to operate on NUL-terminated filenames:
mapfile -d '' -t pdfs < <(find "$DOWNLOADS" -name '*.pdf' -printf '%T@:%p\0' |
sort -z -t : -rnk1 | cut -z -d : -f2- | head -z -n "$NUM")
Commands used are:
mapfile -d '': To read array with NUL as delimiter
find: outputs each file's modification stamp in EPOCH + ":" + filename + NUL byte
sort: sorts reverse numerically on 1st field
cut: removes 1st field from output
head: outputs only first $NUM filenames
find downloads -name "*.pdf" -printf "%T@ %p\0" |
sort -z -t' ' -k1 -n |
cut -z -d' ' -f2- |
tail -z -n 3
find all *.pdf files in downloads
for each file, print its modification date using %T with the format specifier @, meaning seconds since epoch with a fractional part, then a space, the filename, and a terminating \0
Sort the NUL-separated stream numerically on the first field, using space as the field separator
Remove the first field from the stream, i.e. the modification date, leaving only filenames
Get the newest files, in this example the 3 newest, by using tail. We could also sort in reverse and use head; no difference.
Don't use ls in scripts; ls is meant for nicely formatted interactive output, not for parsing. You could do xargs -0 stat --printf "%Y %n\0", which would basically let you keep your original script's structure. The only catch is that I couldn't make stat output the fractional part of the modification date.
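If you want to go that route, a rough sketch of the same pipeline built on stat instead of find -printf (assuming GNU stat; %Y gives whole seconds only, which is the limitation just mentioned):
find downloads -name "*.pdf" -print0 |
xargs -0 stat --printf "%Y %n\0" |
sort -z -t' ' -k1 -n |
cut -z -d' ' -f2- |
tail -z -n 3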
As for the second part, we need to save the NUL-delimited list to a file:
find downloads ........ >"$tmp"
and then:
str='%comment'
{
grep -B$((2**32)) -x "$str" "$out" | grep -v "$str"
# I don't know what you expect to do with newlines in filenames, but I guess you don't have those
cat "$tmp" | sed -z 's/^/file=/' | sed 's/\x0/\n/g'
grep -A$((2**32)) -x "$str" "$out"
} | sponge "$out"
assuming the output file name is stored in the variable "$out". The steps are:
output all lines before %comment, dropping the %comment line itself
output each filename with file= prepended; the NUL separators are also replaced with newlines
then output all lines after %comment, including the %comment line itself
write the result back to the output file (sponge, from moreutils, soaks up all its input before overwriting "$out"); remember to use a temporary file for the find output, as shown above
Don't use pdfs=$(...) on NUL-separated input. You can use mapfile to store it in an array, as other answers have shown.
Then to move the files, do something like:
<"$tmp" xargs -0 -i mv {} "$outdir"
or faster, with a single move:
{ cat <"$tmp"; printf "%s\0" "$outdir"; } | xargs -0 mv
or alternatively:
<"$tmp" xargs -0 sh -c 'outdir="$1"; shift; mv "$#" "$outdir"' -- "$outdir"
Live example at tutorialspoint.
I suppose the following code will be close to what you want:
IFS=$'\n' pdfs=($(find "$DOWNLOADS" -name "*.pdf" -print0 | xargs -0 -I{} ls -1 -t "{}" | tail -n +1 | head -n"$NUM"))
Then you can access the output through ${pdfs[0]}, ${pdfs[1]}, ...
Explanations
IFS=$'\n' makes the array assignment split the output only on "\n".
The -I{} option for xargs tells it to substitute {} with each filename, so the filename can be quoted as "{}".
tail -n +1 is a trick to suppress an error message saying "xargs: 'ls' terminated by signal 13".
Hope this helps.
Bash v4 has an option globstar; after enabling it, we can use ** to match zero or more levels of subdirectories.
mapfile is a built-in command for reading lines into an indexed array variable. The -t option removes the trailing newline from each line read.
shopt -s globstar
mapfile -t pdffiles < <(ls -t1 **/*.pdf | head -n"$NUM")
typeset -p pdffiles
for f in "${pdffiles[@]}"; do
echo "==="
mv "${f}" /dest/path
sed "/^%comment/i${f}=/dest/path" a-text-file.txt
done

Move files based on a comparison with a file

I have 1000 files with the following names:
something-345-something.txt
something-5468-something.txt
something-100-something.txt
something-6200-something.txt
and a lot more...
And I have one txt file with only numbers in it, e.g.:
1000
500
5468
6200
699
etc...
Now I would like to move only those files that have a number in their filename which appears in my txt file.
So in my example above the following files should be moved only:
something-5468-something.txt
something-6200-something.txt
Is there an easy way to achieve this?
What about moving the files on the fly by doing this:
for i in `cat your-file.txt`; do
find . -iname "*-$i-*" -exec mv '{}' /target/dir \;
done
For every line in your text file, the find command will try to find only those files matching the pattern *-$i-* (e.g. something-6200-something.txt) and move them to your target dir.
Naive implementation: for file in $(ls); do grep $(echo -n $file | sed -nr 's/[^-]*-([0-9]+).*/\1/p') my-one-txt.txt && mv $file /tmp/somewhere; done
In English: for every file in the output of ls, parse the number part of the filename with sed and grep for it in your text file. grep returns a non-zero exit code if nothing is found, so mv is only evaluated when a match is found.
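A slightly safer sketch of the same idea that avoids parsing ls, quotes the expansions, and uses grep -x so that e.g. 100 does not wrongly match the line 1000 (my-one-txt.txt and /tmp/somewhere are the names used above):
for file in *; do
num=$(sed -nr 's/[^-]*-([0-9]+).*/\1/p' <<< "$file")
[ -n "$num" ] && grep -qx "$num" my-one-txt.txt && mv "$file" /tmp/somewhere
done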
Script file named move (executable):
#!/bin/bash
TARGETDIR="$1"
FILES=`find . -type f` # build list of files
while read n # read numbers from standard input
do # n contains a number => filter list of files by that number:
echo "$FILES" | grep "\-$n-" | while read f
do # move file that passed the filter because its name matches n:
mv "$f" "$TARGETDIR"
done
done
Use it like this:
cd directory-with-files
./move target-directory < number-list.txt
Here's a crazy bit of bash hackery
shopt -s extglob nullglob
mv -t /target/dir *-@($(paste -sd "|" numbers.txt))-*
That uses paste to join all the lines in your numbers file with pipe characters, then uses bash extended pattern matching to find the files matching any one of the numbers.
I assume mv from GNU coreutils for the -t option.
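With the numbers file shown in the question, the command substitution produces 1000|500|5468|6200|699, so the glob the shell actually matches against looks like this (a sketch, assuming GNU paste and that extglob is enabled as above):
shopt -s extglob nullglob
paste -sd "|" numbers.txt        ## prints 1000|500|5468|6200|699
mv -t /target/dir *-@(1000|500|5468|6200|699)-*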

Bash variables not acting as expected

I have a bash script which parses a file line by line, extracts the date using a cut command and then makes a folder using that date. However, it seems like my variables are not being populated properly. Do I have a syntax issue? Any help or direction to external resources is very appreciated.
#!/bin/bash
ls | grep .mp3 | cut -d '.' -f 1 > filestobemoved
cat filestobemoved | while read line
do
varYear= $line | cut -d '_' -f 3
varMonth= $line | cut -d '_' -f 4
varDay= $line | cut -d '_' -f 5
echo $varMonth
mkdir $varMonth'_'$varDay'_'$varYear
cp ./$line'.mp3' ./$varMonth'_'$varDay'_'$varYear/$line'.mp3'
done
You have many errors and non-recommended practices in your code. Try the following:
for f in *.mp3; do
f=${f%%.*}
IFS=_ read _ _ varYear varMonth varDay <<< "$f"
echo $varMonth
mkdir -p "${varMonth}_${varDay}_${varYear}"
cp "$f.mp3" "${varMonth}_${varDay}_${varYear}/$f.mp3"
done
The actual error is that you need to use command substitution. For example, instead of
varYear= $line | cut -d '_' -f 3
you need to use
varYear=$(cut -d '_' -f 3 <<< "$line")
A secondary error there is that $foo | some_command on its own line does not mean that the contents of $foo gets piped to the next command as input, but is rather executed as a command, and the output of the command is passed to the next one.
Some best practices and tips to take into account:
Use a portable shebang line - #!/usr/bin/env bash (disclaimer: That's my answer).
Don't parse ls output.
Avoid useless uses of cat.
Use More Quotes™
Don't use files for temporary storage if you can use pipes. It is literally orders of magnitude faster, and generally makes for simpler code if you want to do it properly.
If you have to use files for temporary storage, put them in the directory created by mktemp -d. Preferably add a trap to remove the temporary directory cleanly.
There's no need for a var prefix in variables.
grep searches for basic regular expressions by default, so .mp3 matches any single character followed by the literal string mp3. If you want to search for a dot, you need to either use grep -F to search for literal strings or escape the regular expression as \.mp3 (see the sketch after this list).
You generally want to use read -r (defined by POSIX) to treat backslashes in the input literally.
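A small sketch of the grep point (the sample filenames are made up purely for illustration):
printf '%s\n' song.mp3 demo_mp3_notes.txt | grep '.mp3'      ## matches both lines: . means "any character"
printf '%s\n' song.mp3 demo_mp3_notes.txt | grep -F '.mp3'   ## matches only song.mp3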

Recursive BASH renaming

EDIT: Ok, I'm sorry, I should have specified that I was on Windows, and using win-bash, which is based on bash 1.14.2, along with the gnuwin32 tools. This means all of the solutions posted unfortunately didn't help out. It doesn't contain many of the advanced features. I have however figured it out finally. It's an ugly script, but it works.
#!/bin/bash
function readdir
{
cd "$1"
for infile in *
do
if [ -d "$infile" ]; then
readdir "$infile"
else
renamer "$infile"
fi
done
cd ..
}
function renamer
{
#replace " - " with a single underscore.
NEWFILE1=`echo "$1" | sed 's/\s-\s/_/g'`
#replace spaces with underscores
NEWFILE2=`echo "$NEWFILE1" | sed 's/\s/_/g'`
#replace "-" dashes with underscores.
NEWFILE3=`echo "$NEWFILE2" | sed 's/-/_/g'`
#remove exclamation points
NEWFILE4=`echo "$NEWFILE3" | sed 's/!//g'`
#remove commas
NEWFILE5=`echo "$NEWFILE4" | sed 's/,//g'`
#remove single quotes
NEWFILE6=`echo "$NEWFILE5" | sed "s/'//g"`
#replace & with _and_
NEWFILE7=`echo "$NEWFILE6" | sed "s/&/_and_/g"`
#remove typographic (curly) apostrophes
NEWFILE8=`echo "$NEWFILE7" | sed "s/’//g"`
mv "$1" "$NEWFILE8"
}
for infile in *
do
if [ -d "$infile" ]; then
readdir "$infile"
else
renamer "$infile"
fi
done
ls
I'm trying to create a bash script to recurse through a directory and rename files, to remove spaces, dashes and other characters. I've gotten the script working fine for what I need, except for the recursive part of it. I'm still new to this, so it's not as efficient as it should be, but it works. Anyone know how to make this recursive?
#!/bin/bash
for infile in *.*;
do
#replace " - " with a single underscore.
NEWFILE1=`echo $infile | sed 's/\s-\s/_/g'`;
#replace spaces with underscores
NEWFILE2=`echo $NEWFILE1 | sed 's/\s/_/g'`;
#replace "-" dashes with underscores.
NEWFILE3=`echo $NEWFILE2 | sed 's/-/_/g'`;
#remove exclamation points
NEWFILE4=`echo $NEWFILE3 | sed 's/!//g'`;
#remove commas
NEWFILE5=`echo $NEWFILE4 | sed 's/,//g'`;
mv "$infile" "$NEWFILE5";
done;
find is the command able to list all elements in a filesystem hierarchy. You can use it to execute a command on every found file, or pipe the results to xargs, which will handle the execution part.
Take care that the for infile in *.* approach does not work for files containing whitespace (the unquoted expansions later in the script split them). Check the -print0 option of find, coupled with the -0 option of xargs.
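One way to combine the two (a rough sketch; it uses a NUL-delimited read loop instead of xargs because the new name has to be computed from the old one, and it only previews the moves with echo):
find . -type f -print0 | while IFS= read -r -d '' file; do
newfile=$(echo "$file" | sed 's/\s-\s/_/g; s/\s/_/g; s/-/_/g; s/[!,]//g')
[ "$file" != "$newfile" ] && echo mv "$file" "$newfile"
done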
All those semicolons are superfluous and there's no reason to use all those variables. If you want to put the sed commands on separate lines and intersperse detailed comments you can still do that.
#!/bin/bash
find . | while read -r file
do
newfile=$(echo "$file" | sed '
#replace " - " with a single underscore.
s/\s-\s/_/g
#replace spaces with underscores
s/\s/_/g
#replace "-" dashes with underscores.
s/-/_/g
#remove exclamation points
s/!//g
#remove commas
s/,//g')
mv "$infile" "$newfile"
done
This is much shorter:
#!/bin/bash
find . | while read -r file
do
# replace " - " or space or dash with underscores
# remove exclamation points and commas
newfile=$(echo "$file" | sed 's/\s-\s/_/g; s/\s/_/g; s/-/_/g; s/!//g; s/,//g')
mv "$infile" "$newfile"
done
Shorter still:
#!/bin/bash
find . | while read -r file
do
# replace " - " or space or dash with underscores
# remove exclamation points and commas
newfile=$(echo "$file" | sed 's/\s-\s/_/g; s/[[:space:]-]/_/g; s/[!,]//g')
mv "$file" "$newfile"
done
In bash 4, setting the globstar option allows recursive globbing.
shopt -s globstar
for infile in **
...
Otherwise, use find.
while read infile
do
...
done < <(find ...)
or
find ... -exec ...
I've used 'find' in the past to locate files then had it execute another application.
See '-exec'
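A concrete sketch of that approach for this rename task (it mirrors the sed expression used above; bash -c is used so the substitution runs once per file, and echo only previews the result):
find . -type f -exec bash -c '
for f; do
new=$(echo "$f" | sed "s/\s-\s/_/g; s/\s/_/g; s/-/_/g; s/[!,]//g")
[ "$f" != "$new" ] && echo mv "$f" "$new"
done
' _ {} +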
rename 's/pattern/replacement/' glob_pattern
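Applied to this question (replacing " - ", spaces and dashes with underscores, and dropping ! and ,), a sketch assuming the Perl-based rename utility (sometimes installed as prename or perl-rename; -n only previews the changes):
rename -n 's/ - /_/g; s/[ -]/_/g; s/[!,]//g' *
This handles one directory at a time; combine it with the globstar or find approaches above if you need recursion.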
