Reverse the number in a filename - bash

So it seems that I have made an error while trying to create a more readable file structure. I accidentally named the files in the wrong order, and now I need to correct it.
The files are named:
001 - file number 1.jpg
001 - file number 2.mp3
002 - file number 3.jpg
002 - file number 4.mp3
003 - file number 5.jpg
003 - file number 6.mp3
and so on, up to (I think) 800 files in one folder and 300 in another; it's kind of a mess.
The correct order should be:
003 - file number 1.jpg
003 - file number 2.mp3
002 - file number 3.jpg
002 - file number 4.mp3
001 - file number 5.jpg
001 - file number 6.mp3
How can I rename all the files, changing the number so they run in reversed order?

Not sure how well this will scale on a large number of files, but here it is.
#!/usr/bin/env bash
shopt -s extglob nullglob
file=(*.@(mp3|jpg))
mapfile -t -d '' files < <(printf '%s\0' "${file[@]}")
mapfile -t -d '' renamed < <(paste -zd ' ' <(printf '%s\0' "${files[@]%% *}" | sort -rz) <(printf '%s\0' "${files[@]#* }"))
for i in "${!files[@]}"; do
  echo mv -v "${files[$i]}" "${renamed[$i]}"
done
Output
mv -v 001 - file number 1.jpg 003 - file number 1.jpg
mv -v 001 - file number 2.mp3 003 - file number 2.mp3
mv -v 002 - file number 3.jpg 002 - file number 3.jpg
mv -v 002 - file number 4.mp3 002 - file number 4.mp3
mv -v 003 - file number 5.jpg 001 - file number 5.jpg
mv -v 003 - file number 6.mp3 001 - file number 6.mp3
It will spit out an error message like the one @oguz posted.
bash 4+ only, because of mapfile.
Also, the -z option on both paste and sort might be GNU-only.
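As a quick, self-contained sketch of just the pairing step (assuming GNU paste/sort for -z and bash 4.4+ for mapfile -d ''; the three sample names here are made up):

```shell
#!/usr/bin/env bash
# Reverse-sort the numeric prefixes, then re-attach the original suffixes.
# NUL delimiters keep the pipeline safe for names containing newlines.
files=('001 - a.jpg' '002 - b.mp3' '003 - c.jpg')
mapfile -t -d '' renamed < <(
  paste -zd ' ' \
    <(printf '%s\0' "${files[@]%% *}" | sort -rz) \
    <(printf '%s\0' "${files[@]#* }")
)
printf '%s\n' "${renamed[@]}"   # 003 - a.jpg / 002 - b.mp3 / 001 - c.jpg
```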
Another option, if you have the utility vidir, is to rename your files with your favorite text editor. The caveat is that it does not support file/path names containing newlines.
vidir /path/to/files
Using your favorite text editor
EDITOR=kate vidir /path/to/files
If this is your first time using vidir, I suggest trying it on some test files first. The first column is just an increment for the files/directories; don't touch it.

If the reversal is to be performed in your locale's collation order:
name=(*.{jpg,mp3})
pfix=("${name[@]%% *}")
for ((i=0, j=${#name[@]}-1; j>=0; i++, j--)); do
  echo mv "${name[i]}" "${pfix[j]} ${name[i]#* }"
done
Populates an array with filenames and another with prefixes; loops through both in opposite directions and re-pairs them in reverse order.
Drop echo if its output looks good. mv might complain once that the target and the source are the same file, but that won't cause any harm.
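A throwaway way to check that loop against the sample names from the question (a sketch in a temp directory; the [[ ... ]] guard is an addition that skips the no-op 002 renames instead of letting mv complain):

```shell
#!/usr/bin/env bash
# Demo of the prefix-reversal loop on the six sample names.
set -euo pipefail
shopt -s extglob nullglob
tmp=$(mktemp -d)
cd "$tmp"
touch "001 - file number 1.jpg" "001 - file number 2.mp3" \
      "002 - file number 3.jpg" "002 - file number 4.mp3" \
      "003 - file number 5.jpg" "003 - file number 6.mp3"

name=(*.@(jpg|mp3))               # globs expand in collation order
pfix=("${name[@]%% *}")           # the numeric prefixes: 001 001 002 002 003 003
for ((i=0, j=${#name[@]}-1; j>=0; i++, j--)); do
  dst="${pfix[j]} ${name[i]#* }"
  [[ ${name[i]} == "$dst" ]] || mv "${name[i]}" "$dst"
done
ls -1
```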

Use the -r option of sort: sort -r file

Related

how to produce multiple readlength.tsv at once from multiple fastq files?

I have 16 fastq files under different directories, and I need to produce a readlength.tsv for each one separately. This is the script I use to produce a readlength.tsv:
zcat ~/proje/project/name/fıle_fastq | paste - - - - | cut -f1,2 | while read readID sequ;
do
  len=`echo $sequ | wc -m`
  echo -e "$readID\t$len"
done > ~/project/name/fıle1_readlength.tsv
One by one I can produce these, but it will take a long time. I want to produce them all at once, so I created a list of these fastq files, but I couldn't write a loop that produces readlength.tsv from all 16 fastq files at once.
I would appreciate it if you could help me.
Assuming a file list.txt contains the 16 file paths such as:
~/proje/project/name/file1_fastq
~/proje/project/name/file2_fastq
..
~/path/to/the/fastq_file16
Then would you please try:
#!/bin/bash
while IFS= read -r f; do                # "f" is each fastq filename listed in "list.txt"
  mapfile -t ary < <(zcat "$f")         # read all lines of the file into the array "ary"
  for ((i = 0; i < ${#ary[@]}; i += 4)); do
    echo -e "${ary[i]}\t${#ary[i+1]}"   # ${ary[i]} is the id, ${#ary[i+1]} the sequence length
  done
done < list.txt > readlength.tsv
As the fastq file format stores the id on the 1st line and the sequence
on the 2nd line of each record, the bash built-in mapfile is well suited to handle them.
As a side note, the letter ı in your code looks like a non-ASCII character.
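A minimal end-to-end check of the mapfile approach, assuming a throwaway gzipped fastq with two made-up reads (one output line per 4-line record):

```shell
#!/usr/bin/env bash
# Build a tiny 2-record fastq, gzip it, list it, then compute read lengths.
set -euo pipefail
tmp=$(mktemp -d)
printf '@read1\nACGT\n+\nIIII\n@read2\nACGTACGT\n+\nIIIIIIII\n' \
  | gzip > "$tmp/demo.fastq.gz"
printf '%s\n' "$tmp/demo.fastq.gz" > "$tmp/list.txt"

while IFS= read -r f; do
  mapfile -t ary < <(zcat "$f")               # all lines of one fastq file
  for ((i = 0; i < ${#ary[@]}; i += 4)); do   # one record every 4 lines
    printf '%s\t%d\n' "${ary[i]}" "${#ary[i+1]}"
  done
done < "$tmp/list.txt" > "$tmp/readlength.tsv"
cat "$tmp/readlength.tsv"                     # @read1 4, @read2 8
```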

Extract unmatched files in directory using a text file

I have 100 files in a directory, and a text file that lists out 35 of these files.
####Directory
apple carrot orange pears bananas
###text file
apple
carrot
orange
I would like to use this text file of filenames, compare it against the directory, and write the unmatched filenames to a separate file. So it will be a file that lists something like below:
##unmatched text file
pears
bananas
I know how to do this using find if the search term were a particular string, but I could not figure this one out.
Assume that the text file contains a subset of the files in the directory. Also assume that the file is called list.txt and the directory is called dir1, then the following will work:
(cat list.txt; ls -1 dir1) | sort | uniq -u
Explanations
The command (cat list.txt; ls -1 dir1) starts a subshell and executes the cat and ls commands.
The combined output is then sorted, and uniq -u picks out those lines that are unique (not duplicated).
I believe this is what you want. If that works, you can redirect into another file:
(cat list.txt; ls -1 dir1) | sort | uniq -u > list2.txt
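A throwaway demo of the whole pipeline, using the example names from the question (hypothetical temp directory):

```shell
#!/usr/bin/env bash
# list.txt names 3 of the 5 files in dir1; uniq -u keeps the lines that
# appear only once in the combined, sorted stream, i.e. the unmatched files.
set -euo pipefail
tmp=$(mktemp -d)
mkdir "$tmp/dir1"
touch "$tmp/dir1"/apple "$tmp/dir1"/carrot "$tmp/dir1"/orange \
      "$tmp/dir1"/pears "$tmp/dir1"/bananas
printf '%s\n' apple carrot orange > "$tmp/list.txt"

(cat "$tmp/list.txt"; ls -1 "$tmp/dir1") | sort | uniq -u   # bananas, pears
```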

bash list file (ls) and find a number

I have in my directory this files
ls -l /toto/
total 0
brw-rw---- 1 tata par 112, 24 Apr 16 13:08 file1
brw-rw---- 1 tata par 112, 23 Apr 16 13:08 file2
My bash script has to verify that the number 112 is present on all lines.
for f in $(ls -l /toto/);
do
fff=`grep "112" $f`
echo $fff
done
result:
grep: tata: No such file or directory
grep: 112: No such file or directory
grep: file1: No such file or directory
Why? How? Thanks
The files listed in your question are block devices (the b as the first character in the permissions block tells that).
This means 112 and 24 are the major and minor device numbers of the first file, in decimal notation.
The Unix command stat can be used to produce a file listing that uses a custom format (as opposed to ls that knows only a couple of fixed formats).
The command line you need is:
stat --format "%t %n" /toto/*
The %t format specifier prints the major device number of a device file, in hexadecimal notation. %n prints the file name (we use it for debugging).
112 in hexadecimal is 0x70. The command above should print:
70 file1
70 file2
Now you can pipe it through grep '^70 ' and then to wc -l to count the number of lines that start with 70 (70 followed by a space):
stat --format "%t %n" /toto/* | grep '^70 ' | wc -l
If you want to know whether all files in the /toto/ directory have major number 112, you can compare the number produced by the command above against the number produced by the next command (it counts the files and directories in the /toto/ directory):
ls -1 /toto/ | wc -l
If you also want to know which files have a different major number, run this command:
stat --format "%t %n" /toto/* | grep -v '^70 '
It removes the lines that start with 70 and displays only the files that have a different major number (and their major number in hex).
If it doesn't display anything, then all the files in the /toto/ directory have major number 112.
Remark: the command above will also list regular files, directories and other files that are not devices (only device files have major/minor numbers).
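Since a test box may have no matching block devices, here is a small sanity check of the decimal-to-hex mapping and the grep count, run against simulated stat output (the file names are made up):

```shell
#!/usr/bin/env bash
# stat's %t prints the major number in hex; 112 decimal is 70 hex.
printf 'decimal 112 in hex: %x\n' 112

# Simulated `stat --format "%t %n"` output: two matching devices, one not.
listing=$'70 file1\n70 file2\n8 sda1'
matches=$(grep -c '^70 ' <<<"$listing")
echo "$matches"   # 2 of the 3 entries have major number 0x70
```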

Rename files, using text-file as source

In a folder, I have 600 files, numbered from 001 to 600; each looks like foo_001.bar. In a text file, I have the numbers & titles for this folder. Now I want to rename foo_001.bar with the corresponding title for 001, foobar, from the text file.
But I don't have a clue how to do this properly on Linux Mint. Can someone help me or give me a tip?
The content of titles.txt looks like this, with a tab (which can easily be altered, of course) between the number and the title.
001 title of 1
002 this is 2
003 and here goes 3
004 number four
005 hi this is five
etc
Content of the folder looks like this. No exceptions.
file_001.ext
file_002.ext
file_003.ext
file_004.ext
file_005.ext
etc
Just loop through your file with read, get the separated columns with cut (thank you, @Jack), and mv your files accordingly. In this very simple implementation I assume that the text file containing the new names is located at ./filenames and that your script is called from the directory containing your files.
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
  NR=$(echo "$line" | cut -f 1)
  NAME=$(echo "$line" | cut -f 2)
  if [ -f "foo_${NR}.ext" ]; then
    mv "foo_${NR}.ext" "$NAME"
  fi
done < filenames
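A throwaway run of that loop on three sample files (a sketch in a temp directory; one assumption added here: the target name keeps the .ext extension, which the snippet above drops):

```shell
#!/bin/bash
# Demo: rename foo_NNN.ext to the tab-separated title from "filenames".
set -euo pipefail
tmp=$(mktemp -d)
cd "$tmp"
touch foo_001.ext foo_002.ext foo_003.ext
printf '001\ttitle of 1\n002\tthis is 2\n003\tand here goes 3\n' > filenames

while IFS='' read -r line || [[ -n "$line" ]]; do
  NR=$(echo "$line" | cut -f 1)
  NAME=$(echo "$line" | cut -f 2)
  if [ -f "foo_${NR}.ext" ]; then
    mv "foo_${NR}.ext" "${NAME}.ext"   # keep the extension (assumption)
  fi
done < filenames
ls -1
```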

Split option to have the variable at the beginning instead on the end?

I have a file of 47 million lines, and I want to split it into files of 2 million lines each, but the end of each filename has to stay the same.
split -l 2000000 test_F5.csfasta splitted_test_F5.csfasta
This split command gives me these files:
test_F5.csfstaaa,
test_F5.csfstaab, etc
And I want these files:
aatest_F5.csfasta,
abtest_F5.csfasta, etc
Is there maybe an option in split to do this, or another way to fix this problem?
To rename those split files:
Currently the files are:
$ ls test*
test_F5.csfstaaa test_F5.csfstaab
Renaming the files:
$ for file in test_F5.csfsta*
> do
>   mv "$file" "$(echo "$file" | sed 's/\(.*\)\(..\)/\2\1/')"
> done
After renaming:
$ ls *test_F5*
aatest_F5.csfsta abtest_F5.csfsta
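To see the sed transformation on a single name (the greedy \(.*\) grabs everything up to the final two characters, which \2\1 then moves to the front):

```shell
# Move the last two characters of the name to the front.
echo 'test_F5.csfastaaa' | sed 's/\(.*\)\(..\)/\2\1/'   # aatest_F5.csfasta
```

As a side note, if your split is GNU, its --filter option can write the desired names in one pass, e.g. split -l 2000000 --filter='cat > "${FILE#x}test_F5.csfasta"' test_F5.csfasta (here $FILE is the output name split would have used and x is its default prefix); this is GNU-only.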
