Batch rename multiple numbers in filename with different padding - bash

I am trying to rename a batch of files of the form:
test1_run1
test1_run2
...
test1_run10
...
test10_run1
test10_run2
...
test10_run10
to a form where both numbers are zero-padded: the first number to a width of 5 and the second to a width of 3.
The final result should be of the form:
test00001_run001
test00001_run002
...
test00001_run010
...
test00010_run001
test00010_run002
...
test00010_run010
How can I do this in bash for all the files in a particular folder?

We can convert each name into the test + 5 digits + _run + 3 digits format by saying:
$ awk -F"test" '{split($2,a,"_run"); printf "%s%0.5d%s%0.3d\n", FS, a[1], "_run", a[2]}' a
test00001_run001
test00001_run002
test00001_run010
test00010_run001
test00010_run002
test00010_run010
This works by using test as the field separator and splitting the 2nd field into two parts: before and after _run. Then it uses printf format specifiers to produce the zero-padded output. (Here a is a file containing the original filenames, one per line.)
Then, you can print mv together with the previous value and say:
$ awk -F"test" '{split($2,a,"_run"); printf "mv %s %s%0.5d%s%0.3d\n", $0, FS, a[1], "_run", a[2]}' a
mv test1_run1 test00001_run001
mv test1_run2 test00001_run002
mv test1_run10 test00001_run010
mv test10_run1 test00010_run001
mv test10_run2 test00010_run002
mv test10_run10 test00010_run010
If you then pipe it to sh, it will get executed.
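For example, a minimal sketch of the whole pipeline, listing the filenames with ls instead of reading them from the file a (leave the trailing | sh off until the echoed mv commands look right):
$ ls test*_run* | awk -F"test" '{split($2,a,"_run"); printf "mv %s %s%0.5d%s%0.3d\n", $0, FS, a[1], "_run", a[2]}' | sh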

If you don't want to use perl or awk, and want to stick to bash plus a few utility programs that are available in most distributions, you can try something like this:
for i in * ; do
    testpart=`echo $i | cut -d_ -f1`
    testnum=${testpart#test}
    runpart=`echo $i | cut -d_ -f2`
    runnum=${runpart#run}
    destfile=test`printf %05d $testnum`_run`printf %03d $runnum`
    mv $i $destfile
done

In bash:
#!/bin/bash
shopt -s nullglob extglob
for file in test+([[:digit:]])_run+([[:digit:]]); do
    [[ $file =~ ^test([[:digit:]]+)_run([[:digit:]]+)$ ]]
    printf -v newfile 'test%05d_run%03d' "$((10#${BASH_REMATCH[1]}))" "$((10#${BASH_REMATCH[2]}))"
    echo mv "$file" "$newfile"
done
Run this from within the folder you want to process. This will only echo the mv commands to be performed. Remove the echo if you're happy with the result.
we're using the shell option nullglob so that non-matching globs expand to nothing;
we're using the shell option extglob because the for loop will use extended globs;
the extended glob test+([[:digit:]])_run+([[:digit:]]) will expand to the files matching this pattern (if any)
we're using a regex to get the digits from the file names; the first number will be in BASH_REMATCH[1] and the second in BASH_REMATCH[2].
we're using printf to format the new file name; the modifiers %05d and %03d will format the numbers with the appropriate leading zeroes. Observe that we're using $((10#${BASH_REMATCH[1]})) to explicitly specify that the number is in radix 10, in case you have a file like test09_run001: the 09 part would otherwise make bash interpret the number in radix 8 (because of the leading 0) and you'd get a complaint. The -v switch tells printf not to print to standard output, but to store the output in the variable newfile;
finally we perform the mv.

Related

How do I select each information in one line with delimiters [duplicate]

I have this string stored in a variable:
IN="bla#some.com;john#home.com"
Now I would like to split the strings by ; delimiter so that I have:
ADDR1="bla#some.com"
ADDR2="john#home.com"
I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.
After suggestions from the answers below, I ended up with the following which is what I was after:
#!/usr/bin/env bash
IN="bla#some.com;john#home.com"
mails=$(echo $IN | tr ";" "\n")
for addr in $mails
do
echo "> [$addr]"
done
Output:
> [bla#some.com]
> [john#home.com]
There was a solution involving setting the Internal Field Separator (IFS) to ;. I am not sure what happened to that answer; how do you reset IFS back to its default?
RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:
IN="bla#some.com;john#home.com"
OIFS=$IFS
IFS=';'
mails2=$IN
for x in $mails2
do
echo "> [$x]"
done
IFS=$OIFS
BTW, when I tried
mails2=($IN)
I only got the first string when printing it in the loop; without the parentheses around $IN it works.
You can set the internal field separator (IFS) variable, and then let it parse the string into an array. When this happens in a command, the assignment to IFS only takes place in that single command's environment (here, read). It then parses the input according to the IFS variable value into an array, which we can then iterate over.
This example will parse one line of items separated by ;, pushing it into an array:
IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[@]}"; do
    # process "$i"
done
This other example is for processing the whole content of $IN, each time one line of input separated by ;:
while IFS=';' read -ra ADDR; do
    for i in "${ADDR[@]}"; do
        # process "$i"
    done
done <<< "$IN"
Taken from Bash shell script split array:
IN="bla#some.com;john#home.com"
arrIN=(${IN//;/ })
echo ${arrIN[1]} # Output: john#home.com
Explanation:
This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).
The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion.
There are some common gotchas:
If the original string has spaces, you will need to use IFS:
IFS=';'; arrIN=($IN); unset IFS;
If the original string has spaces and the delimiter is a new line, you can set IFS with:
IFS=$'\n'; arrIN=($IN); unset IFS;
I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.
In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.
Example:
$ echo "bla#some.com;john#home.com" | cut -d ";" -f 1
bla#some.com
$ echo "bla#some.com;john#home.com" | cut -d ";" -f 2
john#home.com
You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.
This gets more useful when you have a delimited log file with rows like this:
2015-04-27|12345|some action|an attribute|meta data
cut is very handy to be able to cat this file and select a particular field for further processing.
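For instance, a quick sketch, assuming such rows live in a hypothetical file named actions.log:
$ cut -d '|' -f 3 actions.log
some action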
If you don't mind processing them immediately, I like to do this:
for i in $(echo $IN | tr ";" "\n")
do
# process
done
You could use this kind of loop to initialize an array, but there's probably an easier way to do it.
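For example, one way to fill an array with that kind of loop (a rough sketch; it assumes the individual fields contain no whitespace or glob characters):
arr=()
for i in $(echo "$IN" | tr ";" "\n"); do
    arr+=("$i")
done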
Compatible answer
There are a lot of different ways to do this in bash.
However, it's important to first note that bash has many special features (so-called bashisms) that won't work in any other shell.
In particular, arrays, associative arrays, and pattern substitution, which are used in the solutions in this post as well as others in the thread, are bashisms and may not work under other shells that many people use.
For instance: on my Debian GNU/Linux, there is a standard shell called dash; I know many people who like to use another shell called ksh; and there is also a special tool called busybox with its own shell interpreter (ash).
For a POSIX-shell-compatible answer, go to the last part of this answer!
Requested string
The string to be split in the above question is:
IN="bla#some.com;john#home.com"
I will use a modified version of this string to ensure that my solution is robust to strings containing whitespace, which could break other solutions:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
Split string based on delimiter in bash (version >=4.2)
In pure bash, we can create an array with elements split by a temporary value for IFS (the internal field separator). The IFS, among other things, tells bash which character(s) it should treat as a delimiter between elements when defining an array:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
# save original IFS value so we can restore it later
oIFS="$IFS"
IFS=";"
declare -a fields=($IN)
IFS="$oIFS"
unset oIFS
In newer versions of bash, prefixing a command with an IFS definition changes the IFS for that command only and resets it to the previous value immediately afterwards. This means we can do the above in just one line:
IFS=\; read -a fields <<<"$IN"
# after this command, the IFS resets back to its previous value (here, the default):
set | grep ^IFS=
# IFS=$' \t\n'
We can see that the string IN has been stored into an array named fields, split on the semicolons:
set | grep ^fields=\\\|^IN=
# fields=([0]="bla#some.com" [1]="john#home.com" [2]="Full Name <fulnam#other.org>")
# IN='bla#some.com;john#home.com;Full Name <fulnam#other.org>'
(We can also display the contents of these variables using declare -p:)
declare -p IN fields
# declare -- IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
# declare -a fields=([0]="bla#some.com" [1]="john#home.com" [2]="Full Name <fulnam#other.org>")
Note that read is the quickest way to do the split because there are no forks or external resources called.
Once the array is defined, you can use a simple loop to process each field (or, rather, each element in the array you've now defined):
# `"${fields[#]}"` expands to return every element of `fields` array as a separate argument
for x in "${fields[#]}" ;do
echo "> [$x]"
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Or you could drop each field from the array after processing using a shifting approach, which I like:
while [ "$fields" ] ;do
echo "> [$fields]"
# slice the array
fields=("${fields[#]:1}")
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
And if you just want a simple printout of the array, you don't even need to loop over it:
printf "> [%s]\n" "${fields[#]}"
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Update: recent bash >= 4.4
In newer versions of bash, you can also play with the command mapfile:
mapfile -td \; fields < <(printf "%s\0" "$IN")
This syntax preserves special chars, newlines, and empty fields!
If you don't want to include empty fields, you could do the following:
mapfile -td \; fields <<<"$IN"
fields=("${fields[#]%$'\n'}") # drop '\n' added by '<<<'
With mapfile, you can also skip declaring an array and implicitly "loop" over the delimited elements, calling a function on each:
myPubliMail() {
printf "Seq: %6d: Sending mail to '%s'..." $1 "$2"
# mail -s "This is not a spam..." "$2" </path/to/body
printf "\e[3D, done.\n"
}
mapfile < <(printf "%s\0" "$IN") -td \; -c 1 -C myPubliMail
(Note: the \0 at end of the format string is useless if you don't care about empty fields at end of the string or they're not present.)
mapfile < <(echo -n "$IN") -td \; -c 1 -C myPubliMail
# Seq: 0: Sending mail to 'bla@some.com', done.
# Seq: 1: Sending mail to 'john@home.com', done.
# Seq: 2: Sending mail to 'Full Name <fulnam@other.org>', done.
Or you could use <<<, and in the function body include some processing to drop the newline it adds:
myPubliMail() {
local seq=$1 dest="${2%$'\n'}"
printf "Seq: %6d: Sending mail to '%s'..." $seq "$dest"
# mail -s "This is not a spam..." "$dest" </path/to/body
printf "\e[3D, done.\n"
}
mapfile <<<"$IN" -td \; -c 1 -C myPubliMail
# Renders the same output:
# Seq: 0: Sending mail to 'bla@some.com', done.
# Seq: 1: Sending mail to 'john@home.com', done.
# Seq: 2: Sending mail to 'Full Name <fulnam@other.org>', done.
Split string based on delimiter in shell
If you can't use bash, or if you want to write something that can be used in many different shells, you often can't use bashisms -- and this includes the arrays we've been using in the solutions above.
However, we don't need to use arrays to loop over "elements" of a string. There is a syntax used in many shells for deleting substrings of a string from the first or last occurrence of a pattern. Note that * is a wildcard that stands for zero or more characters:
(The lack of this approach in any solution posted so far is the main reason I'm writing this answer ;)
${var#*SubStr} # drops substring from start of string up to first occurrence of `SubStr`
${var##*SubStr} # drops substring from start of string up to last occurrence of `SubStr`
${var%SubStr*} # drops substring from last occurrence of `SubStr` to end of string
${var%%SubStr*} # drops substring from first occurrence of `SubStr` to end of string
As explained by Score_Under:
# and % delete the shortest possible matching substring from the start and end of the string respectively, and
## and %% delete the longest possible matching substring.
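For example, a quick illustration of the four forms, using the sample string from above and ; as SubStr:
var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
echo "${var#*;}"     # john@home.com;Full Name <fulnam@other.org>
echo "${var##*;}"    # Full Name <fulnam@other.org>
echo "${var%;*}"     # bla@some.com;john@home.com
echo "${var%%;*}"    # bla@some.com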
Using the above syntax, we can create an approach where we extract substring "elements" from the string by deleting the substrings up to or after the delimiter.
The codeblock below works well in bash (including Mac OS's bash), dash, ksh, lksh, yash, zsh, and busybox's ash:
(Thanks to Adam Katz's comment, making this loop a lot simpler!)
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
while [ "$IN" != "$iter" ] ;do
# extract the substring from start of string up to delimiter.
iter=${IN%%;*}
# delete this first "element" AND next separator, from $IN.
IN="${IN#$iter;}"
# Print (or doing anything with) the first "element".
printf '> [%s]\n' "$iter"
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Why not cut?
cut is useful for extracting columns in big files, but doing forks repetitively (var=$(echo ... | cut ...)) quickly becomes overkill!
Here is a correct syntax, tested under many POSIX shells, using cut, as suggested by this other answer from DougW:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
i=1
while iter=$(echo "$IN"|cut -d\; -f$i) ; [ -n "$iter" ] ;do
printf '> [%s]\n' "$iter"
i=$((i+1))
done
I wrote this in order to compare execution time.
On my Raspberry Pi, this looks like:
$ export TIMEFORMAT=$'(%U + %S) / \e[1m%R\e[0m : %P '
$ time sh splitDemo.sh >/dev/null
(0.000 + 0.019) / 0.019 : 99.63
$ time sh splitDemo_cut.sh >/dev/null
(0.051 + 0.041) / 0.188 : 48.98
The overall execution time is something like 10x longer, using one fork to cut per field!
This worked for me:
string="1;2"
echo $string | cut -d';' -f1 # output is 1
echo $string | cut -d';' -f2 # output is 2
I think AWK is the best and most efficient command to resolve your problem. AWK is included by default in almost every Linux distribution.
echo "bla@some.com;john@home.com" | awk -F';' '{print $1,$2}'
will give
bla@some.com john@home.com
Of course you can store each email address by redefining the awk print field.
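For instance, a small sketch that captures the second address into a shell variable (the name addr2 is just illustrative):
addr2=$(echo "bla@some.com;john@home.com" | awk -F';' '{print $2}')
echo "$addr2"   # john@home.com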
How about this approach:
IN="bla#some.com;john#home.com"
set -- "$IN"
IFS=";"; declare -a Array=($*)
echo "${Array[#]}"
echo "${Array[0]}"
echo "${Array[1]}"
echo "bla#some.com;john#home.com" | sed -e 's/;/\n/g'
bla#some.com
john#home.com
This also works:
IN="bla#some.com;john#home.com"
echo ADD1=`echo $IN | cut -d \; -f 1`
echo ADD2=`echo $IN | cut -d \; -f 2`
Be careful, this solution is not always correct. In case you pass "bla#some.com" only, it will assign it to both ADD1 and ADD2.
A different take on Darron's answer, this is how I do it:
IN="bla#some.com;john#home.com"
read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)
How about this one-liner, if you're not using arrays:
IFS=';' read ADDR1 ADDR2 <<<$IN
In Bash, a bullet proof way, that will work even if your variable contains newlines:
IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
Look:
$ in=$'one;two three;*;there is\na newline\nin this field'
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
a newline
in this field")'
The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in, with no trailing newline thanks to printf. Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:
$ in='one;two;three;' # there's an empty field
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'
the trailing empty field is preserved.
Update for Bash≥4.4
Since Bash 4.4, the builtin mapfile (aka readarray) supports the -d option to specify a delimiter. Hence another canonical way is:
mapfile -d ';' -t array < <(printf '%s;' "$in")
Without setting the IFS
If you just have one colon you can do that:
a="foo:bar"
b=${a%:*}
c=${a##*:}
you will get:
b = foo
c = bar
Here is a clean 3-liner:
in="foo#bar;bizz#buzz;fizz#buzz;buzz#woof"
IFS=';' list=($in)
for item in "${list[#]}"; do echo $item; done
where IFS delimit words based on the separator and () is used to create an array. Then [#] is used to return each item as a separate word.
If you've any code after that, you also need to restore $IFS, e.g. unset IFS.
The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
split() {
    local string="$1"
    local delimiter="$2"
    if [ -n "$string" ]; then
        local part
        while read -d "$delimiter" part; do
            echo $part
        done <<< "$string"
        echo $part
    fi
}
For instance, the command
$ split 'a;b;c' ';'
yields
a
b
c
This output may, for instance, be piped to other commands. Example:
$ split 'a;b;c' ';' | cat -n
1 a
2 b
3 c
Compared to the other solutions given, this one has the following advantages:
IFS is not overridden: Due to dynamic scoping of even local variables, overriding IFS over a loop causes the new value to leak into function calls performed from within the loop.
Arrays are not used: Reading a string into an array using read requires the flag -a in Bash and -A in zsh.
If desired, the function may be put into a script as follows:
#!/usr/bin/env bash
split() {
# ...
}
split "$#"
You can apply awk in many situations:
echo "bla@some.com;john@home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'
You can also use this:
echo "bla@some.com;john@home.com"|awk -F';' '{print $1,$2}' OFS="\n"
There is a simple and smart way like this:
echo "add:sfff" | xargs -d: -i echo {}
But you must use GNU xargs; BSD xargs doesn't support -d delim. If you use an Apple Mac like me, you can install GNU xargs:
brew install findutils
then
echo "add:sfff" | gxargs -d: -i echo {}
So many answers and so many complexities. Try out a simpler solution:
echo "string1, string2" | tr , "\n"
tr (translate) replaces the first argument with the second argument in the input.
So tr , "\n" replaces the comma with a newline character in the input, and it becomes:
string1
string2
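If you want the result in an array rather than on standard output, a minimal sketch (assuming bash 4+ for readarray):
readarray -t parts < <(echo "string1, string2" | tr , "\n")
echo "${parts[1]}"   # prints " string2" (note the leading space left over from ", ")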
There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
IN="bla#some.com;john#home.com"
declare -a a="(${IN//;/ })";
Now ${a[0]}, ${a[1]}, etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:
for i in ${a[*]}; do echo $i; done
IMPORTANT NOTE:
This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.
If there is no space, why not this?
IN="bla@some.com;john@home.com"
arr=(`echo $IN | tr ';' ' '`)
echo ${arr[0]}
echo ${arr[1]}
This is the simplest way to do it.
spo='one;two;three'
OIFS=$IFS
IFS=';'
spo_array=($spo)
IFS=$OIFS
echo ${spo_array[*]}
Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk:
awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
This sets the field separator to ;, so that it can loop through the fields with a for loop and print accordingly.
Test
$ IN="bla#some.com;john#home.com"
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
> [bla#some.com]
> [john#home.com]
With another input:
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c d;e_;f"
> [a]
> [b]
> [c d]
> [e_]
> [f]
IN="bla#some.com;john#home.com"
IFS=';'
read -a IN_arr <<< "${IN}"
for entry in "${IN_arr[#]}"
do
echo $entry
done
Output
bla#some.com
john#home.com
System : Ubuntu 12.04.1
Use the set built-in to load up the $@ array:
IN="bla#some.com;john#home.com"
IFS=';'; set $IN; IFS=$' \t\n'
Then, let the party begin:
echo $#
for a; do echo $a; done
ADDR1=$1 ADDR2=$2
Two bourne-ish alternatives where neither require bash arrays:
Case 1: Keep it nice and simple: Use a NewLine as the Record-Separator... eg.
IN="bla#some.com
john#home.com"
while read i; do
# process "$i" ... eg.
echo "[email:$i]"
done <<< "$IN"
Note: in this first case no sub-process is forked to assist with list manipulation.
Idea: Maybe it is worth using NL extensively internally, and only converting to a different RS when generating the final result externally.
Case 2: Using a ";" as a record separator... eg.
NL="
" IRS=";" ORS=";"
conv_IRS() {
exec tr "$1" "$NL"
}
conv_ORS() {
exec tr "$NL" "$1"
}
IN="bla#some.com;john#home.com"
IN="$(conv_IRS ";" <<< "$IN")"
while read i; do
# process "$i" ... eg.
echo -n "[email:$i]$ORS"
done <<< "$IN"
In both cases a sub-list can be composed within the loop that persists after the loop has completed. This is useful when manipulating lists in memory instead of storing lists in files. {p.s. keep calm and carry on B-) }
In Android shell, most of the proposed methods just do not work:
$ IFS=':' read -ra ADDR <<<"$PATH"
/system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory
What does work is:
$ for i in ${PATH//:/ }; do echo $i; done
/sbin
/vendor/bin
/system/sbin
/system/bin
/system/xbin
where // means global replacement.
IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
set -f
oldifs="$IFS"
IFS=';'; arrayIN=($IN)
IFS="$oldifs"
for i in "${arrayIN[@]}"; do
    echo "$i"
done
set +f
Output:
bla@some.com
john@home.com
Charlie Brown <cbrown@acme.com
!"#$%&/()[]{}*? are no problem
simple is beautiful :-)
Explanation: A simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing that. A standard FOR loop then handles the individual items in that array as usual.
Notice that the list given for the IN variable must be "hard" quoted, that is, with single quotes.
IFS must be saved and restored since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.
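A minimal sketch of that function-wrapping workaround (assuming default, non-POSIX bash, where an assignment prefixed to a function call does not persist after the call; the function name splitIN is purely illustrative):
splitIN() { arrayIN=($IN); }
set -f
IFS=';' splitIN
set +f
echo "${arrayIN[1]}"   # john@home.com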
Here's my answer!
DELIMITER_VAL='='
read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF
SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
    SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[@]}"; do
    echo "$i"
done
Why is this approach "the best" for me?
For two reasons:
You do not need to escape the delimiter;
You will not have problems with blank spaces. The values will be properly separated in the array.
A one-liner to split a string separated by ';' into an array is:
IN="bla#some.com;john#home.com"
ADDRS=( $(IFS=";" echo "$IN") )
echo ${ADDRS[0]}
echo ${ADDRS[1]}
This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

How to only concatenate files with same identifier using bash script?

I have a directory with files, some have the same ID, which is given in the first part of the file name before the first underscore (always). e.g.:
S100_R1.txt
S100_R2.txt
S111_1_R1.txt
S111_R1.txt
S111_R2.txt
S333_R1.txt
I want to concatenate the files with identical IDs (and, if possible, place the original files in another dir), e.g. output:
original files (folder)
S100_merged.txt
S111_merged.txt
S333_R1.txt
Small note: I imagine that perhaps a solution would be to place all files which will be processed by the code in a new directory and then, in a second step, move the files with the appended "merged" back to the original dir, or something like this...
I am extremely new to bash scripting, so I really can't produce this code. I am used to the R language and I can think how it should be, but I can't write it.
My pitiful attempt is something like this:
while IFS= read -r -d '' id; do
cat *"$id" > "./${id%.txt}_grouped.txt"
done < <(printf '%s\0' *.txt | cut -zd_ -f1- | sort -uz)
or this:
for ((k=100;k<400;k=k+1));
do
IDList= echo "S${k}_S*.txt" | awk -F'[_.]' '{$1}'
while [ IDList${k} == IDList${k+n} ]; do
cat IDList${k}_S*.txt IDList${k+n}_S*.txt S${k}_S*.txt S${k}_S*.txt >cat/S${k}_merged.txt &;
done
Sometimes there is only one version of the file (e.g. S333_R1.txt), sometimes two (S100*), three (S111*) or more of the same.
I am prepared for harsh critique for this question because I am so far from a solution, but if someone would be willing to help me out I would greatly appreciate it!
while read line;
do
    if [[ "$(find . -maxdepth 1 -name "${line}_*.txt" | wc -l)" -gt "1" ]]
    then
        cat "${line}"_*.txt >> "${line}_merged.txt"
    fi
done <<< "$(for i in *_*.txt;do echo $i;done | awk -F_ '{ print $1 }' | sort -u)"
List the files matching *_*.txt and run the output into awk, printing the string before "_" (deduplicated with sort -u). Run this through a while loop. Check if the number of files for each prefix pattern is greater than 1 using find and, if it is, cat the files with that prefix pattern into a merged file.
for id in $(ls | grep -Po '^[^_]+' | uniq) ; do
    if [ $(ls ${id}_*.txt 2> /dev/null | wc -l) -gt 1 ] ; then
        cat ${id}_*.txt > _${id}_merged.txt
        mv ${id}_*.txt folder
    fi
done
for f in _*_merged.txt ; do
    mv ${f} ${f:1}
done
A plain bash loop with preprocessing:
# first get the list of files
find . -type f |
# then extract the prefix
sed 's#./\([^_]*\)_#\1\t&#' |
# then in a loop merge the files
while IFS=$'\t' read prefix file; do
cat "$file" >> "${prefix}_merged.txt"
done
That script is iterative - one file at a time. To detect whether there is only one file with a specific prefix, we have to look at all files at once. So first an awk script to join the list of filenames sharing a common prefix:
find . -type f | # maybe `sort |` ?
# join filenames with common prefix
awk '{
    f=$0;                           # remember the file path
    gsub(/.*\//,"");gsub(/_.*/,""); # extract prefix from filepath and store it in $0
    a[$0]=a[$0]" "f                 # join path with leading space in associative array indexed with prefix
}
# Output prefix and filenames separated by spaces.
# TBH a tab would be a better separator..
END{for (i in a) print i a[i]}
' |
# Read input separated by spaces into a bash array
while IFS=' ' read -ra files; do
    # first array element is the prefix
    prefix=${files[0]}
    unset files[0]
    # rest is the files
    case "${#files[@]}" in
    0) echo super error; ;;
    # one file - preserve the filename
    1) cat "${files[@]}" > "$outdir"/"${files[1]}"; ;;
    # more files - do a _merged.txt suffix
    *) cat "${files[@]}" > "$outdir"/"${prefix}_merged.txt"; ;;
    esac
done
Tested on repl.
IDList= echo "S${k}_S*.txt"
Executes the command echo with the environment variable IDList exported and set to empty with one argument equal to S<insert value of k here>_S*.txt.
Filename expansion (ie. * -> list of files) is not executed inside " double quotes.
To assign the result of a command to a variable, use command substitution: var=$( something | something )
IDList${k+n}_S*.txt
${var+pattern} is a variable expansion that does not add two variables together. It expands to pattern when var is set and to nothing when var is unset. See shell parameter expansion and my answer on ${var-pattern}, which is similar.
To add two numbers use arithmetic expansion $((k + n)).
awk -F'[_.]' '{$1}'
$1 alone does nothing useful here. To print a line, print it: {print $1}.
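Putting those points together, the corrected fragments might look something like this (a sketch of the individual fixes only, not a complete solution; the variable name next is just illustrative):
# command substitution, with the glob outside the double quotes
IDList=$(echo "S${k}_S"*.txt | awk -F'[_.]' '{print $1}')
# arithmetic expansion instead of ${k+n}
next=$((k + n))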
Remember to check your scripts with http://shellcheck.net
A pure bash way below. It uses only globs (no need for external commands like ls or find for this question) to enumerate filenames and an associative array (which is supported by bash since the version 4.0) in order to compute frequencies of ids. Parsing ls output to list files is questionable in bash. You may consider reading ParsingLs.
#!/bin/bash
backupdir=original_files # The directory to move the original files
declare -A count # Associative array to hold id counts
# If it is assumed that the backup directory exists prior to call, then
# drop the line below
mkdir "$backupdir" || exit
for file in [^_]*_*; do ((++count[${file%%_*}])); done
for id in "${!count[@]}"; do
    if ((count[$id] > 1)); then
        mv "$id"_* "$backupdir"
        cat "$backupdir/$id"_* > "$id"_merged.txt
    fi
done

Update numbers in filenames

I have a set of filenames which are ordered numerically like:
13B12363_1B1_0.png
13B12363_1B1_1.png
13B12363_1B1_2.png
13B12363_1B1_3.png
13B12363_1B1_4.png
13B12363_1B1_5.png
13B12363_1B1_6.png
13B12363_1B1_7.png
13B12363_1B1_8.png
13B12363_1B1_9.png
13B12363_1B1_10.png
[...]
13B12363_1B1_495.png
13B12363_1B1_496.png
13B12363_1B1_497.png
13B12363_1B1_498.png
13B12363_1B1_499.png
After some postprocessing, I removed some files and I would like to update the ordering number and replace the actual number by its new position. Looking at this previous question, I ended up doing something like:
(1) ls -v | cat -n | while read n f; do mv -i $f ${f%%[0-9]+.png}_$n.png; done
However, this command does not recognize the "ordering number + png" part and just appends the new number at the end of the filename, giving something like 13B12363_1B1_10.png_9.png
On the other hand, if I do:
(2) ls -v * | cat -n | while read n f; do mv $f ${f%.*}_$n.png; done
The ordering number is added without issues. Like 13B12363_1B1_10_9.png
So, for (1) it seems I am not specifying the digit correctly but I am not able to find the correct syntax. So far I tried [0-9], [0-9]+, [[:digits:]] and [[:digits:]]+. Which should be the proper one?
Additionally, in (2) I am wondering how I should specify rename (CentOS version) to remove the numbers between the second and the third underscore. Here I have to say that I have some filenames like 20B12363_22_10_9.png, so I should somehow specify second and third underscore.
Using Bash's built-in regex matching (POSIX extended regular expressions) and a null-delimited list of files.
Tested with sample
#!/usr/bin/env bash
prename=$1
# Bash setting to return empty result if no match found
shopt -s nullglob
# Create a temporary directory to prevent file rename collisions
tmpdir=$(mktemp -d) || exit 1
# Add a trap to remove the temporary directory on EXIT
trap 'rmdir -- "$tmpdir"' EXIT
# Initialize file counter
n=0
# Generate null delimited list of files
printf -- %s\\0 "${prename}_"*'.png' |
# Sort the null delimited list on 3rd field numeric order with _ separator
sort --zero-terminated --field-separator=_ --key=3n |
# Iterate the null delimited list
while IFS= read -r -d '' f; do
# If Bash Regex match the file name AND
# file has a different sequence number
if [[ "$f" =~ (.*)_([0-9]+)\.png$ ]] && [[ ${BASH_REMATCH[2]} -ne $n ]]; then
# Use captured Regex match group 1 to rename file with incrementing counter
# and move it to the temporary folder to prevent rename collision with
# existing file
echo mv -- "$f" "$tmpdir/${BASH_REMATCH[1]}_$((n)).png"
fi
# Increment file counter
n=$((n+1))
done
# Move back the renamed files in place
mv --no-clobber -- "$tmpdir"/* ./
# $tmpdir removal is automatic on EXIT
# If something goes wrong, some files remain in it and it is not deleted
# so these can be dealt with manually
Remove the echo if the result matches your expectations.
Output from the sample
mv -- 13B12363_1B1_495.png /tmp/tmp.O2HmbyD7d5/13B12363_1B1_11.png
mv -- 13B12363_1B1_496.png /tmp/tmp.O2HmbyD7d5/13B12363_1B1_12.png
mv -- 13B12363_1B1_497.png /tmp/tmp.O2HmbyD7d5/13B12363_1B1_13.png
mv -- 13B12363_1B1_498.png /tmp/tmp.O2HmbyD7d5/13B12363_1B1_14.png
mv -- 13B12363_1B1_499.png /tmp/tmp.O2HmbyD7d5/13B12363_1B1_15.png
Do not parse ls.
read interprets \ and splits on IFS; see BashFAQ 001 for how to read a stream line by line.
In ${f%%pattern} expansion the pattern is not a regex but a glob. The rules differ: + means a literal +.
You could shopt -s extglob and then ${f%%+([0-9]).png}. Or write a loop. Or match the _ too and do f=${f%%.png}; f="${f%_[0-9]*}_".
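For example, a quick sketch of the extglob variant on one of the sample names:
shopt -s extglob
f=13B12363_1B1_10.png
echo "${f%%+([0-9]).png}"    # prints 13B12363_1B1_ (trailing number and .png stripped)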
Or something along these lines (untested):
find . -maxdepth 1 -mindepth 1 -type f -name '13B12363_1B1_*.png' |
sort -t_ -n -k3 |
sed 's/\(.*_\)[0-9]\{1,\}\.png$/&\t\1/' |
{
    n=1;
    while IFS=$'\t' read -r from to; do
        echo mv "$from" "$to$((n++)).png";
    done;
}
Another alternative, with perl:
perl -e 'while(<@ARGV>){$o=$_;s/\d+(?=\D*$)/$i++.".renamed"/e;die if -e $_;rename $o,$_}while(<*.renamed>){$o=$_;s/\.renamed$//;die if -e $_;rename $o,$_}' $(ls -v|sed -E "s/$|^/'/g"|paste -sd ' ' -)
This solution should avoid rename collisions: first rename the files, adding an extra ".renamed" extension, and then remove the ".renamed" extension as the last step. Also, there are checks to detect rename collisions.
Anyways, please backup your data before trying :)
The perl script unrolled and explained:
while(<@ARGV>){ # loop through arguments.
# filenames are passed to "$_" variable
# save old file name
$o=$_;
# if not using variable, regex replacement (s///) uses topic variable ($_)
# e flag ==> evals the replacement
s/\d+(?=\D*$)/$i++.".renamed"/e; # works on $_
# Detect rename collision
die if -e $_;
rename $o,$_
}
while(<*.renamed>){
$o=$_;
s/\.renamed$//; # remove .renamed extension
die if -e $_;
rename $o,$_
}
The regex:
\d+ # one number or more
(?=\D*$) # followed by 0 or more non-numbers and end of string

UNIX :: Padding for files containing string and multipleNumber

I have many files that do not have consistent filenames.
For example
IMG_20200823_1.jpg
IMG_20200823_10.jpg
IMG_20200823_12.jpg
IMG_20200823_9.jpg
I would like to rename all of them and ensure they all follow the same naming convention:
IMG_20200823_0001.jpg
IMG_20200823_0010.jpg
IMG_20200823_0012.jpg
IMG_20200823_0009.jpg
I found out it's possible to do the change for a file having only a number, using:
printf "%04d\n"
However, I am not able to do it with my files, considering they mix string + "_" + different numbers.
Could anyone help me?
Thanks !
With Perl's standalone rename or prename command:
rename -n 's/(\d+)(\.jpg$)/sprintf("%04d%s",$1,$2)/e' *.jpg
Output:
rename(IMG_20200823_10.jpg, IMG_20200823_0010.jpg)
rename(IMG_20200823_12.jpg, IMG_20200823_0012.jpg)
rename(IMG_20200823_1.jpg, IMG_20200823_0001.jpg)
rename(IMG_20200823_9.jpg, IMG_20200823_0009.jpg)
if everything looks fine, remove -n.
With Bash regular expressions:
re='(IMG_[[:digit:]]+)_([[:digit:]]+)'
for f in *.jpg; do
[[ $f =~ $re ]]
mv "$f" "$(printf '%s_%04d.jpg' "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}")"
done
where BASH_REMATCH is an array containing the capture groups of the regular expression. At index 0 is the whole match; index 1 contains IMG_ and the first group of digits; index 2 contains the second group of digits. The printf command is used to format the second group with zero padding, four digits wide.
Use a regex to extract the relevant sub-strings from the input and then pad it...
For each file.
Extract the prefix, number and suffix from the filename.
Pad the number with zeros.
Create the new filename.
Move files
The following code for bash:
echo 'IMG_20200823_1.jpg
IMG_20200823_10.jpg
IMG_20200823_12.jpg
IMG_20200823_9.jpg' |
while IFS= read -r file; do # foreach file
# Use GNU sed to extract parts on separate lines
tmp=$(<<<"$file" sed 's/\(.*_\)\([0-9]*\)\(\..*\)/\1\n\2\n\3\n/')
# Read the separate parts separated by newlines
{
IFS= read -r prefix
IFS= read -r number
IFS= read -r suffix
} <<<"$tmp"
# create new filename
newfilename="$prefix$(printf "%04d" "$number")$suffix"
# move the files
echo mv "$file" "$newfilename"
done
outputs:
mv IMG_20200823_1.jpg IMG_20200823_0001.jpg
mv IMG_20200823_10.jpg IMG_20200823_0010.jpg
mv IMG_20200823_12.jpg IMG_20200823_0012.jpg
mv IMG_20200823_9.jpg IMG_20200823_0009.jpg
Being puzzled by your hint at printf...
Current folder content:
$ ls -1 IMG_*
IMG_20200823_1.jpg
IMG_20200823_21.jpg
Surely it's not a good solution, but with printf and sed we can do it:
$ printf "mv %3s_%8s_%d.%3s %3s_%8s_%04d.%3s\n" $(ls -1 IMG_* IMG_* | sed 's/_/ /g; s/\./ /')
mv IMG_20200823_1.jpg IMG_20200823_0001.jpg
mv IMG_20200823_21.jpg IMG_20200823_0021.jpg

How can I save only a substring of file names from a directory without the file extension?

I have a directory that I'm reading from and I want to save only the date representation as a string.
I am close to getting it, although I know there is probably an easier way. Here is what I have so far:
#files are in the format of "THIS_20200420.csv" so I want only "20200420"
declare -a arr
declare -a arr2
FILES=test2/*.csv
for file in $FILES
do
arr=(${arr[*]} "${file##*/}")
done
for i in "${arr[#]}"
do
arr2+=$(echo $i | cut -c6-13)
done
for item in "${arr2[#]}"
do
echo $item
done
the output shows the array only having one element which is all the strings concatenated:
20200110202001202020021920200220202004202020042220200110202001202020021920200220202004202020042220200219202002202020042020200422
I'm bashing my head against my computer at this point.
arr=(
"THIS_20200420.csv"
"THIS_20200421.csv"
"THIS_20200422.csv"
"THIS_20200423.csv"
"THIS_20200424.csv"
"THIS_20200425.csv"
"THIS_20200426.csv"
"THIS_20200427.csv"
"THIS_20200428.csv"
"THIS_20200429.csv"
"THIS_20200430.csv" )
arr=( ${arr[@]//*_} )
arr=( ${arr[@]//.*} )
echo "arr: ${arr[@]}"
Explanation:
arr=( ${arr[@]//*_} ) will match all chars up to '_' for each element and replace them with an empty string.
arr=( ${arr[@]//.*} ) will match all chars after '.' for each element and replace them with an empty string.
For more information on parameter expansion, a good reference is TLDP's guide on parameter expansion.
Try this
declare -a arrayname=($(ls -1 test2/*.csv | grep -o '[0-9]*'))
Demo:
$ ls -1 *csv
THIS_20200420.csv
THIS_20200421.csv
THIS_20200422.csv
THIS_20200423.csv
THIS_20200424.csv
THIS_20200425.csv
THIS_20200426.csv
THIS_20200427.csv
THIS_20200428.csv
THIS_20200429.csv
THIS_20200430.csv
$ declare -a arrayname=($(ls -1 *csv | grep -o '[0-9]*'))
$ echo ${arrayname[@]}
20200420 20200421 20200422 20200423 20200424 20200425 20200426 20200427 20200428 20200429 20200430
$ echo ${arrayname[2]}
20200422
$
You could achieve this using a loop with awk:
$ for file in *.csv; do echo $file | awk -F '[^[:alnum:]]' '{print $2}'; done
The -F '[^[:alnum:]]' tells awk to use non alphanumeric characters as the delimiter.
Another way to do this is to use bash shell parameter expansion to echo only the part of the filename you want. This obviously only works if your filenames have consistent formatting:
$ for file in *.csv; do echo "${file:5:8}"; done
I thought it would be nice to use bash parameter expansion to strip the unwanted prefix and suffix but you can't have nested expansion (afaict) so this is the best I could come up with:
$ for file in *.csv; do echo "$(tmp=${file%.csv}; echo ${tmp#THIS_})"; done
Meet Cut! A good friend of Linux Users
for file in ./*.csv; do echo $file | cut -d "_" -f 2 | cut -d "." -f 1 ; done
This one line should do the trick!
Example:
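Run against the sample THIS_*.csv filenames from the question, it would print one date per line (an illustrative run, not taken from your actual directory):
$ for file in ./*.csv; do echo $file | cut -d "_" -f 2 | cut -d "." -f 1 ; done
20200420
20200421
20200422
(and so on for the remaining files)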
Use an array for the files assignment and parameter expansion.
#!/usr/bin/env bash
shopt -s nullglob
##: Save the files ending in *.csv in an array
## so it expands properly, variable assignment does not expand the glob *
files=(test2/*.csv)
##: Keep only the file names without the pathname, longest match removed from the front
files=("${files[@]##*/}")
##: Keep only the file names without the .csv extension
files=("${files[@]%.csv}")
##: Keep only the part after the _ from the beginning, shortest match
files=("${files[@]#*_}")
printf '%s ' "${files[@]}"
