How to rename multiple files by truncating the name with command line? - terminal

2001.png
2002.png
2003.png
2004.png
2005.png
2006.png
Let's say I want to programmatically rename these pics to be called:
1.png
2.png
3.png
4.png
5.png
6.png
Best way to do this with terminal? Does it involve Regex? In this case I would assume so since I'm truncating letters

You can loop over all the files in the current directory and rename them with the mv command. In this case you want the substring starting at offset 3 (offsets are zero-based, so this skips the first four-character prefix "200" is wrong — it skips the first three characters of "2001.png" and keeps the remaining 5, e.g. "1.png" - ${file:3:5}).
#!/bin/bash
for file in *.png; do
  new_file=${file:3:5}
  mv "$file" "$new_file"
done
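If the names don't all share the same fixed width, a counter-based variant avoids the substring entirely. A sketch, assuming the shell's default lexicographic glob order gives the sequence you want:

```shell
#!/bin/bash
# Renumber all PNGs in the current directory as 1.png, 2.png, ...
# The glob expands in sorted order before the loop starts,
# so 2001.png becomes 1.png, 2002.png becomes 2.png, and so on.
i=1
for file in *.png; do
  mv -- "$file" "$i.png"
  i=$((i+1))
done
```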

Related

Batch resize images when one side is too large (linux)

I know that image resizing on the command line is something ImageMagick and similar tools can do; unfortunately I only have very basic bash scripting abilities, so I wonder if this is even possible:
check all directories and subdirectories for all files that are an image
check width and height of the image
if either of them exceeds X pixels, resize it to X while keeping the aspect ratio.
replace old file with new file (old file shall be removed/deleted)
Thank you for any input.
Implementation might not be trivial even for advanced users. As a one-liner:
find \
  ~/Downloads \
  -type f \
  -exec file {} \; \
  | awk -F: '{if ($2 ~ /image/) print $1}' \
  | while IFS= read -r file_path; do
      mogrify -resize 1024x1024\> "$file_path"
    done
Lines 1-4 are an invocation of the find command:
Specify a directory to scan.
Specify you need files only.
For each found item, run the file command. Example output per file:
/Downloads/391A6 625.png: PNG image data, 1024 x 810, 8-bit/color RGB, interlaced
/Downloads/STRUCTURED NODES IN UML 2.0 ACTIVITES.pdf: PDF document, version 1.4
Note how file names are separated from their descriptions by : and how the description of a PNG contains the word image. The same is true for other image formats.
Use awk to keep only those files which have the word image in their description. Here, -F: sets the field delimiter to :, so $1 contains the original file name and $2 the file description. We search for the word image in the description and print the file name if it's present.
This one is a bit tricky. Lines 6-8 read the output of awk line by line and invoke the mogrify command to resize each image. Piping into xargs is avoided here: if file paths contain spaces or other characters that must be escaped, xargs produces unterminated quote errors, which are a pain to handle.
Invoke the mogrify command of ImageMagick. Unlike convert, which is also an ImageMagick command, mogrify changes files in place without creating new ones. Here, 1024x1024\> resizes the image to fit within 1024x1024 while keeping the aspect ratio; the \> flag means only images larger than that are shrunk. The final image will have its longest side at 1024px, and the other side will be smaller, unless the original image is square. Pay attention to the ;, as it's needed inside the loop.
Note that it's safe to run mogrify several times over the same file: if an image already fits the target dimensions, it will not be resized again. It will, however, still update the file's modification time.
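The awk filter on line 5 can be exercised on canned `file`-style output, without touching any real files:

```shell
# Feed two fake "file" output lines to the same awk program:
# only the line whose description contains "image" survives.
printf '%s\n' \
  '/Downloads/391A6 625.png: PNG image data, 1024 x 810' \
  '/Downloads/notes.pdf: PDF document, version 1.4' |
awk -F: '{if ($2 ~ /image/) print $1}'
# prints: /Downloads/391A6 625.png
```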
Additionally, you may need not only to resize images, but to compress them as well. Please, refer to my gist to see how this can be done: https://gist.github.com/oblalex/79fa3f85f05924017d25004496493adb
If your goal is just to reduce big images in size, e.g. bigger than 300K, you may:
find /path/to/dir -type f -size +300k
and as before combine it with mogrify -strip -interlace Plane -format jpg -quality 85 -define jpeg:extent=300KB "$FILE_PATH"
In that case new jpg files will be created for non-jpg originals and the originals will need to be removed. Refer to the gist to see how this can be done.
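Putting the size filter together with the earlier while-read loop might look like this. A sketch: the path and the 300k threshold are placeholders, and mogrify requires ImageMagick to be installed:

```shell
# Select regular files over 300 KB, NUL-delimited so names with
# spaces or newlines survive, then recompress each one in place.
find /path/to/dir -type f -size +300k -name '*.jpg' -print0 |
while IFS= read -r -d '' file_path; do
  mogrify -strip -interlace Plane -quality 85 -define jpeg:extent=300KB "$file_path"
done
```

The `read -d ''` form needs bash; a POSIX sh would need `find ... -exec` instead.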
You can do that with a bash shell script looping over your directories. First identify all the file formats you want, such as jpg and png. Then, for each directory, loop over each file of the given list of formats and use ImageMagick to resize it.
cd
dirlist="path2/directory1 path2/directory2 ...."
for dir in $dirlist; do
  cd "$dir" || continue
  imglist=$(ls | grep -i "\.jpg\|\.png")
  for img in $imglist; do
    convert "$img" -resize "200x200>" "$img"
  done
  cd - > /dev/null   # return before processing the next directory
done
See https://www.imagemagick.org/script/command-line-processing.php#geometry

ImageMagick convert tiffs to pdf with sequential file suffix

I have the following scenario and I'm not much of a coder (nor do I know bash well). I don't even have a base working bash script to share, so any help would be appreciated.
I have a file share that contains tiffs (thousands) of a document management system. The goal is to convert and combine from multiple file tiffs to single file pdfs (preferably PDF/A 1a format).
The directory format:
/Document Management Root # This is root directory
./2009/ # each subdirectory represents a year
./2010/
./2011/
....
./2016/
./2016/000009.001
./2016/000010.001
# files are stored flat - just thousands of files per year directory
The document management system stores tiffs with sequential number file names along with sequential file suffixes:
000009.001
000010.001
000011.002
000012.003
000013.001
Where each page of a document is represented by the suffix. The suffix restarts when a new, non-related document is created. In the example above, 000009.001 is a single page tiff. Files 000010.001, 000011.002, and 000012.003 belong to the same document (i.e. the pages are all related). File 000013.001 represents a new document.
I need to preserve the file name for the first file of a multipage document so that the filename can be cross referenced with the document management system database for metadata.
The pseudo code I've come up with is:
for each file in {tiff directory}
while file extension is "001"
convert file to pdf and place new pdf file in {pdf directory}
else
convert multiple files to pdf and place new pd file in {pdf directory}
But this seems like it will have the side effect of converting all 001 files regardless of what the next file is.
Any help is greatly appreciated.
EDIT - Both answers below work. The second answer also worked; it was my mistake in not realizing that the data set I tested against was different from my scenario above.
So, save the following script in your login ($HOME) directory as TIFF2PDF
#!/bin/bash
ls *[0-9] | awk -F'.' '
/001$/ { if(NR>1)print cmd,outfile; outfile=$1 ".pdf"; cmd="convert " $0;next}
{ cmd=cmd " " $0}
END { print cmd,outfile}'
and make it executable (necessary just once) by going in Terminal and running:
chmod +x TIFF2PDF
Then copy a few documents from any given year into a temporary directory to try things out... then go to the directory and run:
~/TIFF2PDF
Sample Output
convert 000009.001 000009.pdf
convert 000010.001 000011.002 000012.003 000010.pdf
convert 000013.001 000013.pdf
If that looks correct, you can actually execute those commands like this:
~/TIFF2PDF | bash
or, preferably if you have GNU Parallel installed:
~/TIFF2PDF | parallel
The script says... "Generate a listing of all files whose names end in a digit and send that list to awk. In awk, use the dot as the separator between fields, so if the file is called 000011.002, then $0 will be 000011.002, $1 will be 000011 and $2 will be 002. Now, if the filename ends in 001, print the accumulated command and append the output filename. Then save the filename prefix with a .pdf extension as the output filename of the next PDF and start building up the next ImageMagick convert command. On subsequent lines (which don't end in 001), add the filename to the list of filenames to include in the PDF. At the end, output any accumulated command and append the output filename."
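The grouping logic can be dry-run by feeding the awk program a list of names directly; it only prints commands, so nothing gets converted:

```shell
# Simulate the "ls" input with the sample filenames from the question.
printf '%s\n' 000009.001 000010.001 000011.002 000012.003 000013.001 |
awk -F'.' '
/001$/ { if (NR>1) print cmd, outfile; outfile=$1 ".pdf"; cmd="convert " $0; next }
       { cmd=cmd " " $0 }
END    { print cmd, outfile }'
```

This reproduces the sample output shown above.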
As regards the ugly black block at the bottom of your image, it happens because there are some tiny white specks in there that prevent ImageMagick from removing the black area.
If you blur the picture a little (to diffuse the specks) and then get the size of the trim-box, you can apply that to the original, unblurred image like this:
trimbox=$(convert original.tif -blur x2 -bordercolor black -border 1 -fuzz 50% -format %@ info:)
convert original.tif -crop $trimbox result.tif
I would recommend you do that first to A COPY of all your images, then run the PDF conversion afterwards. As you will want to save a TIFF file but with the extension .001, .002, you will need to tell ImageMagick to trim and force the output filetype to TIF:
original=XYZ.001
trimbox=$(convert "$original" -blur x2 -bordercolor black -border 1 -fuzz 50% -format %@ info:)
convert "$original" -crop $trimbox TIF:"$original"
As @AlexP. mentions, there can be issues with globbing if there is a large number of files. On OSX, ARG_MAX is very high (262144) and your filenames are around 10 characters, so you may hit problems if there are more than around 26,000 files in one directory. If that is the case, simply change:
ls *[0-9] | awk ...
to
ls | grep "\d$" | awk ...
The following command would convert the whole /Document Management Root tree (assuming that is its actual absolute path), properly processing all subfolders, even those with names including whitespace characters, and properly skipping all other files not matching the 000000.000 naming pattern:
find '/Document Management Root' -type f -regextype sed -regex '.*/[0-9]\{6\}\.001$' -exec bash -c 'p="{}"; d="${p:0: -10}"; n=${p: -10:6}; m=10#$n; c[1]="$d$n.001"; for i in {2..999}; do k=$((m+i-1)); l=$(printf "%s%06d.%03d" "$d" $k $i); [[ -f "$l" ]] || break; c[$i]="$l"; done; echo -n "convert"; printf " %q" "${c[@]}" "$d$n.pdf"; echo' \; | bash
To do a dry run just remove the | bash in the end.
Updated to match the 00000000.000 pattern (and split to multiple lines for clarity):
find '/Document Management Root' -type f -regextype sed -regex '.*/[0-9]\{8\}\.001$' -exec bash -c '
pages[1]="{}"
p1num="10#${pages[1]: -12:8}"
for i in {2..999}; do
nextpage=$(printf "%s%08d.%03d" "${pages[1]:0: -12}" $((p1num+i-1)) $i)
[[ -f "$nextpage" ]] || break
pages[i]="$nextpage"
done
echo -n "convert"
printf " %q" "${pages[#]}" "${pages[1]:0: -3}pdf"
echo
' \; | bash

move specific files to different subfolders matching a character

I have a list of ~100,000 txt files with the following name pattern: file_[combination of letters and numbers]_[number from 1 to 400].txt; three examples would be:
file_ab34_1.txt, file_ab35_1.txt, file_bg12_2.txt. What I want to do automatically is move all the files ending in _1 to a subfolder named /1, all the files ending in _2 to a subfolder /2, and so on.
I would need a bash script to do it automatically and not one by one
Please back up your files before trying this answer.
I would try rename, like this:
rename --dry-run 's|(.*)_(\d+).txt|$2/$1_$2.txt|' *.txt
'file_ab34_1.txt' would be renamed to '1/file_ab34_1.txt'
'file_ab35_1.txt' would be renamed to '1/file_ab35_1.txt'
'file_bg12_2.txt' would be renamed to '2/file_bg12_2.txt'
The --dry-run just shows you what it would do without actually doing anything - great for testing before using.
It is basically Perl, and it is doing a substitution on the filename. The bones of it is to substitute like this:
s|something|something else|
Every time there are parentheses on the left side (called capture groups), they capture some aspect of the left side and it is then available as a numbered item to put in the replacement, right hand side, where $1 represents whatever was captured in the first set of parentheses and $2 represents whatever was in the second set of parentheses and so on.
You will likely need -p option to create the output directories, so:
rename -p ....
If you get errors about the argument list being too long, you will probably need to use find and xargs, along these lines (untested):
find . -name \*.txt -maxdepth 1 -print0 | xargs -0 -n 1000 rename ....
You can do this with just mv with shell globbing to get the files:
mv -t ./1 file_*_1.txt
mv -t ./2 file_*_2.txt
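Since the suffix runs from 1 to 400, those two commands generalize to a loop. A sketch that creates each target folder first and skips suffixes with no matching files:

```shell
for n in $(seq 1 400); do
  mkdir -p "$n"
  for f in file_*_"$n".txt; do
    [ -e "$f" ] || continue   # the glob matched nothing for this suffix
    mv -- "$f" "$n/"
  done
done
```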

How can I remove hidden characters after a file extension in a variable

When I do
echo $filename
I get
Pew Pew.mp4
However,
echo "${#filename}"
Returns 19
How do I delete all characters after the file extension? It needs to work no matter what the file extension is because the file name in the variable will not always match *.mp4
You should try to find out why you have such strange file names before fixing them.
Once you know, you can rename the files.
When you just want to rename 1 file, just use the command
mv "Pew Pew.mp4"* "Pew Pew.mp4"
Cutting off the complete extension (with filename=${filename%%.*}) won't help you if you want to use the stripped extension (mp4 or jpg or ...).
EDIT:
I think the OP wants a work-around, so I'll give it another try.
When you have a short list of extensions, you can try
for ext in mpeg mpg jpg avi mov; do
for filename in *.${ext}*; do
mv "${filename%%.*}.${ext}"* "${filename%%.*}.${ext}"
done
done
You can try strings to extract the readable part of the name.
echo "$filename" | strings | wc
# Rename file
mv "$filename" "$(echo "$filename" | strings)"
EDIT:
strings gives more than one line as a result, plus unwanted spaces. Since Pew Pew has a space inside, I hope that all spaces, underscores and minus signs occur before the dot.
The newname can be constructed with something like
tmpname=$(echo "$filename" | strings | head -1)
newname=${tmpname% *}
# or another way
newname=$(echo "$filename" | sed 's/\([[:alnum:]_ -]*\.[[:alnum:]]*\).*/\1/')
# or another (the best?) way (hoping that the first unwanted character is not a space)
newname="${filename%%[^[:alnum:]._-]*}"
# resulting in
mv "$filename" "${filename%%[^[:alnum:]._-]*}"

Finding Multiple Files, Copying, and Renaming Sequentially

I have a few hundred thousand files in many, many subdirectories. I am trying to extract all of the relevant image files, using a regular expression like so:
find -E . -regex '.+\.ca/.+(\.gif|\.jpg|\.tif|\.jpeg|\.tiff|\.png|\.jp2|\.j2k|\.bmp|\.pict|\.wmf|\.emf|\.ico|\.xbm)'
This finds the files. However, I want to move them to a newdir and have them named like so:
1.png
2.jpg
3.ico
4.pict
5.png
And so forth. I haven't been able to find a way that (a) preserves the various extensions; (b) renames them as they come in. Many of the files will be duplicates and I will want to preserve that. Thanks so much for your help.
i=1
find ... | while IFS= read -r filename; do
  newname=$i.${filename##*.}
  mv "$filename" newdir/"$newname"
  i=$((i+1))
done
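A NUL-delimited variant is safer still if any path contains a newline. The regex is the same one from the question (`-E` is the BSD/macOS find flag used there, and `read -d ''` needs bash):

```shell
i=1
find -E . -regex '.+\.ca/.+(\.gif|\.jpg|\.tif|\.jpeg|\.tiff|\.png|\.jp2|\.j2k|\.bmp|\.pict|\.wmf|\.emf|\.ico|\.xbm)' -print0 |
while IFS= read -r -d '' filename; do
  newname=$i.${filename##*.}   # keep the original extension
  mv "$filename" newdir/"$newname"
  i=$((i+1))
done
```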
