changing the name of many files by increasing the number - bash

I want to change the file names from a terminal. I have many files, so I cannot change all of them one by one.
a20170606_1257.txt -> a20170606_1300.txt
a20170606_1258.txt -> a20170606_1301.txt
So far I am only able to change them with:
rename 57.txt 00.txt *57.txt
but it is not enough.

Just playing with parameter expansion: ${str##pattern} removes the longest matching prefix and ${str%%pattern} removes the longest matching suffix:
offset=43
for file in *.txt; do
    [ -f "$file" ] || continue               # skip anything that is not a regular file
    woe="${file%%.*}"; ext="${file##*.}"     # name without extension, and the extension
    num="${woe##*_}"                         # the digits after the underscore, e.g. 1257
    echo "$file" "${woe%%_*}_$((num+offset)).${ext}"
done
Once you have it working, remove the echo and replace it with mv -v. Change the offset variable as you wish, depending on where you want your renamed files to start from.
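For instance, a minimal sketch of the finished loop, identical to the dry run above but with the echo swapped for mv -v:
#!/bin/bash
offset=43
for file in *.txt; do
    [ -f "$file" ] || continue
    woe="${file%%.*}"; ext="${file##*.}"
    num="${woe##*_}"     # caution: $(( )) would read a leading-zero num as octal
    mv -v "$file" "${woe%%_*}_$((num+offset)).${ext}"
done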

Perl's e flag to the rescue
rename -n -v 's/(?<=_)(\d+)/$1+43/e' *.txt
Test:
dir $ ls | cat -n
1 a20170606_1257.txt
2 a20170606_1258.txt
dir $
dir $
dir $ rename -n -v 's/(?<=_)(\d+)/$1+43/e' *.txt
rename(a20170606_1257.txt, a20170606_1300.txt)
rename(a20170606_1258.txt, a20170606_1301.txt)
dir $
dir $ rename -v 's/(?<=_)(\d+)/$1+43/e' *.txt
a20170606_1257.txt renamed as a20170606_1300.txt
a20170606_1258.txt renamed as a20170606_1301.txt
dir $
dir $ ls | cat -n
1 a20170606_1300.txt
2 a20170606_1301.txt
dir $
rename --help:
Usage:
    rename [ -h|-m|-V ] [ -v ] [ -n ] [ -f ] [ -e|-E perlexpr]*|perlexpr
           [ files ]
Options:
    -v, -verbose
            Verbose: print names of files successfully renamed.
    -n, -nono
            No action: print names of files to be renamed, but don't rename.
    -f, -force
            Over write: allow existing files to be over-written.
    -h, -help
            Help: print SYNOPSIS and OPTIONS.
    -m, -man
            Manual: print manual page.
    -V, -version
            Version: show version number.
    -e      Expression: code to act on files name.
            May be repeated to build up code (like "perl -e"). If no -e, the
            first argument is used as code.
    -E      Statement: code to act on files name, as -e but terminated by ';'.

Checking file existence in Bash using commandline argument

How do you use a command line argument as a file path and check for file existence in Bash?
I have the simple Bash script test.sh:
#!/bin/bash
set -e
echo "arg1=$1"
if [ ! -f "$1" ]
then
echo "File $1 does not exist."
exit 1
fi
echo "File exists!"
and in the same directory, I have a data folder containing stuff.txt.
If I run ./test.sh data/stuff.txt I see the expected output:
arg1=data/stuff.txt
"File exists!"
However, if I call this script from a second script test2.sh, in the same directory, like:
#!/bin/bash
fn="data/stuff.txt"
./test.sh $fn
I get the mangled output:
arg1=data/stuff.txt
does not exist
Why does the call work when I run it manually from a terminal, but not when I run it through another Bash script, even though both are receiving the same file path? What am I doing wrong?
Edit: The filename does not have spaces. Both scripts are executable. I'm running this on Ubuntu 18.04.
The filename was getting an extra whitespace character added to it as a result of how I was retrieving it in my second script. I didn't note this in my question, but I was retrieving the filename from a folder listing over SSH, like:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1)
Essentially, I wanted to get the filename of the most recent file in a directory on a remote server. Apparently, head includes the trailing newline character. I fixed it by changing it to:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1 | tr -d '\n' | tr -d '\r')
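If you want to avoid the extra tr processes, a small bash sketch (the \r below simulates what the remote pseudo-terminal appends) strips stray CR/LF with parameter expansion instead:
#!/bin/bash
fn=$'stuff.txt\r'          # simulate a filename that came back with a stray CR
fn="${fn//[$'\r\n']/}"     # delete every carriage return and newline
printf '%q\n' "$fn"        # %q exposes invisible characters while debugging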
Thanks to @bigdataolddriver for hinting that the problem was likely an extra character.

bash - find duplicate file in directory and rename

I have a directory that has thousands of files in it with various extensions. I also have a drop location where users drop files to be migrated to this directory. I'm looking for a script that will scan the target directory for a duplicate file name, if found, rename the file in the drop folder, then move it to the target directory.
Example:
/target/file.doc
/drop/file.doc
Script will rename file.doc to file1.doc then move it to /target/.
It needs to maintain the file extension too.
for fil in /drop/*
do
    name="${fil##*/}"                                 # filename without the /drop/ path
    if test -f "/target/$name"
    then
        suff=$(awk -F\. '{ print "."$NF }' <<<"$name")  # extension, with leading dot
        bdot=$(basename -s "$suff" "$name")             # filename without the extension
        mv "/drop/$name" "/drop/${bdot}1$suff"
        mv "/drop/${bdot}1$suff" "/target/${bdot}1$suff"
    else
        mv "$fil" "/target/"
    fi
done
Take each file in the drop directory and check whether it already exists in /target using test -f. If it does, rename it first and then move it across; otherwise move it straight over.
You have to take a bit more care than simply checking whether a file exists before moving if you want a flexible solution that handles files with or without extensions. You may also want to form duplicate filenames in a way that preserves sort order: e.g. if file.txt already exists, use file_001.txt as the duplicate in target rather than file1.txt, because once you reach 10 you would no longer have a canonical sort by filename.
Also, you never want to iterate with for i in $(ls dir); that approach is fraught with pitfalls. See Bash Pitfalls No. 1. A safer pattern is sketched below.
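A minimal sketch of the safer glob-based loop, assuming bash (nullglob makes an empty drop directory produce zero iterations rather than a literal /drop/*):
#!/bin/bash
shopt -s nullglob             # an unmatched glob expands to nothing, not itself
for f in /drop/*; do
    [ -f "$f" ] || continue   # regular files only; skips directories
    printf '%s\n' "$f"        # placeholder for the real move/rename logic
done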
Putting those pieces together, and including detail in the comments below, you could do something similar to the following and have a reasonably flexible solution that lets you specify either filename.ext or /path/to/drop/filename.ext. You must set the drop and target directories in the script to match your circumstances, e.g.
#!/bin/bash

tgt=target ## set target and drop directories as required
drp=drop

declare -i cnt=1 ## counter for filename_$cnt

test -z "$1" && { ## validate one argument given
    printf "error: insufficient input\nusage: %s filename\n" "${0##*/}"
    exit 1
}

test -w "$1" || test -w "$drp/$1" || { ## validate filename exists and is writeable
    printf "error: file not found or lack permission to move '%s'.\n" "$1"
    exit 1
}

fn="${1##*/}" ## strip any path info from filename

if test "$1" != "${1%.*}"; then
    ext="${fn##*.}" ## get file extension
    fnwoe="${fn%."$ext"}" ## get filename without extension
    test "$fnwoe" = '' && ext= ## was a dotfile, reset ext
fi

vfn="$fn" ## set valid filename = filename

## form valid filename, e.g. "${fn}_001.$ext", if a duplicate is found
while test -e "$tgt/$vfn"; do
    if test -n "$ext" ## did we have an extension?
    then
        printf -v vfn "%s_%03d.%s" "$fnwoe" "$((cnt++))" "$ext"
    else
        printf -v vfn "%s_%03d" "$fn" "$((cnt++))"
    fi
done

mv "$drp/$fn" "$tgt/$vfn" ## move file under non-conflicting name
Example drop and target
$ ls -1 drop
file
file.txt
$ ls -1 target
file.txt
file_001.txt
file_002.txt
Example Use
$ bash mvdrop.sh file
$ bash mvdrop.sh drop/file.txt
Resulting drop and target
$ ls -1 drop
$ ls -1 target
file
file.txt
file_001.txt
file_002.txt
file_003.txt
This will test whether the file exists, preserve the extension (along with any structure before the extension, as in the case of FILE.tar.gz), and move it to the target directory.
#!/bin/bash
TARGET="/target/"
DROP="/drop/"
for F in `ls $DROP`; do
    if [[ -f $TARGET$F ]]; then
        EXT=`echo $F | awk -F "." '{print $NF}'`    # last dot-separated field
        PRE=`echo $F | awk -F "." '{$NF="";print $0}' | sed -e 's/ $//g;s/ /./g'`   # everything before it
        mv $DROP$F $DROP$PRE"1".$EXT
        F=$PRE"1".$EXT
    fi
    mv $DROP$F $TARGET
done
Additionally, you may want to do some restricting in the ls command, so that you aren't copying entire directories.
Display only regular files (no directories or symbolic links)
ls -p $DROP | grep -v /
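Alternatively, if you'd rather not parse ls output at all, a find-based sketch (assuming GNU find's -maxdepth) gives the same filtering:
find "$DROP" -maxdepth 1 -type f   # regular files directly inside $DROP, no subdirectories or symlinks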

Readlink - How to crop full path?

I use readlink to find a file's full path:
cek=$(readlink -f "$1")
mkdir -p "$ydk$cek"
mv "$1" "$ydk/$cek/$ydkfile"
But readlink -f "$1" gives me the full path. How can I crop the full path?
For example:
/home/test/test/2014/10/13/log.file
But I need just
/test/2014/10/13/
How can I do it?
Judging from multiple comments:
The output should be the last four directory components of the full path returned by readlink.
Given:
full_path=/home/some/where/hidden/test/2014/08/29/sparefile.log
the output should be:
test/2014/08/29
(Don't build any assumption about today's date into the path trimming code.)
If you need the last four directory components of the full path, you don't have newlines in the full path, and you have GNU grep or BSD (Mac OS X) grep with support for -o (output only the matched material), then this gives the required result:
$ cek="/home/test/test/2014/10/13/log.file"
$ echo "${cek%/*}"
/home/test/test/2014/10/13
$ echo "${cek%/*}" | grep -o -E -e '(/[^/]+){4}$'
/test/2014/10/13
$ full_path=/home/some/where/hidden/test/2014/08/29/sparefile.log
$ echo "${full_path%/*}" | grep -o -E -e '(/[^/]+){4}$'
/test/2014/08/29
$
I need the path starting at /201[0-9]:
/home/bla/bla2/bla3/2014/01/13/13… ⟶ /2014/01/13/13….
So, you need to use grep -o again, starting with the year pattern:
echo "${fullpath%/*}" | grep -o -e '/201[0-9]/.*$'
This is much simpler; you don't even need extended regular expressions for this!
If you need the path element before the year too, then you need:
echo "{fullpath%/*}" | grep -o -e '/[^/][^/]*/201[0-9]/.*$'
Do you really need to remove "/home"?
cek="/home/test/test/2014/10/13/log.file"
dir=$(dirname "$cek")
echo "${dir#/home}"
/test/test/2014/10/13
Just last 4 directory components:
last4dirs() {
    local IFS=/
    local -a components=($1)
    local l=${#components[@]}
    echo "${components[*]:l-5:4}"
}
last4dirs /home/some/where/hidden/test/2014/08/29/sparefile.log
test/2014/08/29

Bash Script to run against all .log files

I currently have a bash script called (log2csv). While in the current directory I can type in terminal:
log2csv *.log
This will run the script on every .log file in the current directory.
Alternatively I can run it against a single .log file with
log2csv test1.log
Instead of typing log2csv *.log, can I have the *.log included in the script, so I can just type log2csv in the directory and it runs? I know I can alias that, but I'd rather have the script do it.
Here is the bash script I am running:
#!/bin/bash
for path
do
    base=$(basename "$path")
    noext="${base/.log}"                 # strip the .log extension
    [ -e "${noext}.csv" ] && continue    # skip files already converted
    /Users/joshuacarter/bin/read_scalepack.pl "$path" > "${noext}.csv"
done
Change:
for path
to:
for path in *.log
or, perhaps better:
names=( "$#" )
if [ "${#names}" = 0 ]
then names=( *.log )
fi
for path in "${names[#]}"
and you can consider whether to set options such as shopt -s nullglob as well. This uses shell arrays to handle names with blanks etc. in them. It uses the command line arguments if any are given, and the files matching *.log if there are no command line arguments. A combined sketch follows below.
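For instance, a minimal sketch of the complete script with nullglob folded in (same helper path as the original):
#!/bin/bash
shopt -s nullglob                  # an unmatched *.log expands to nothing
names=( "$@" )                     # use command line arguments if given
if [ "${#names[@]}" = 0 ]
then names=( *.log )               # otherwise every .log in the current directory
fi
for path in "${names[@]}"; do
    base=$(basename "$path")
    noext="${base/.log}"
    [ -e "${noext}.csv" ] && continue
    /Users/joshuacarter/bin/read_scalepack.pl "$path" > "${noext}.csv"
done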

Dash variable expansion does not work in some cases

This work is being done on a test VirtualBox machine.
In my /root dir, I have created the following:
"/root/foo"
"/root/bar"
"/root/i have multiple words"
Here is the (relevant) code I currently have:
if [ ! -z "$BACKUP_EXCLUDE_LIST" ]
then
TEMPIFS=$IFS
IFS=:
for dir in $BACKUP_EXCLUDE_LIST
do
if [ -e "$3/$dir" ] # $3 is the backup source
then
BACKUP_EXCLUDE_PARAMS="$BACKUP_EXCLUDE_PARAMS --exclude='$dir'"
fi
done
IFS=$TEMPIFS
fi
tar $BACKUP_EXCLUDE_PARAMS -cpzf $BACKUP_PATH/$BACKUP_BASENAME.tar.gz -C $BACKUP_SOURCE_DIR $BACKUP_SOURCE_TARGET
This is what happens when I run my script with sh -x
+ IFS=:
+ [ -e /root/foo ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo'
+ [ -e /root/bar ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo' --exclude='bar'
+ [ -e /root/i have multiple words ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo' --exclude='bar' --exclude='i have multiple words'
+ IFS=
# So far so good
+ tar --exclude='foo' --exclude='bar' --exclude='i have multiple words' -cpzf /backup/root/daily/root_20130131.071056.tar.gz -C / root
tar: have: Cannot stat: No such file or directory
tar: multiple: Cannot stat: No such file or directory
tar: words': Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
# WHY? :(
The check completes successfully, but the --exclude='i have multiple words' does not work.
Mind you that it DOES work when I type it into my shell manually:
tar --exclude='i have multiple words' -cf /somefile.tar.gz /root
I know that this would work in bash using arrays, but I want this to be POSIX.
Is there a solution to this?
Consider this script ('with whitespace' and 'examples.desktop' are sample files):
#!/bin/bash
arr=("with whitespace" "examples.desktop")
for file in ${arr[@]}      # deliberately unquoted, to reproduce the problem
do
    ls $file
done
This outputs exactly the same failure as yours:
21:06 ~ $ bash test.sh
ls: cannot access with: No such file or directory
ls: cannot access whitespace: No such file or directory
examples.desktop
You can set IFS to the newline character so that whitespace in file names is no longer split on:
#!/bin/bash
arr=("with whitespace" "examples.desktop")
(IFS=$'\n'
for file in ${arr[@]}
do
    ls $file
done
)
The output of the second version should be:
21:06 ~ $ bash test.sh
with whitespace
examples.desktop
David the H. from the LinuxQuestions forums steered me in the right direction.
First of all, in my question I did not keep IFS=: in effect all the way through to the tar command.
Second of all, I included set -f for safety:
BACKUP_EXCLUDE_LIST="foo:bar:i have multiple words"

# Grouping our parameters
if [ ! -z "$BACKUP_EXCLUDE_LIST" ]
then
    IFS=:  # Here we set our temp $IFS
    set -f # Disable globbing
    for dir in $BACKUP_EXCLUDE_LIST
    do
        if [ -e "$3/$dir" ] # $3 is the directory that contains the directories defined in $BACKUP_EXCLUDE_LIST
        then
            BACKUP_EXCLUDE_PARAMS="$BACKUP_EXCLUDE_PARAMS:--exclude=$dir"
        fi
    done
fi

# We are ready to tar
tar $BACKUP_EXCLUDE_PARAMS \
    -cpzf "$BACKUP_PATH/$BACKUP_BASENAME.tar.gz" \
    -C "$BACKUP_SOURCE_DIR" \
    "$BACKUP_SOURCE_TARGET"

unset IFS # our custom IFS has done its job, let's unset it
set +f    # Globbing is back on
I advise against using the TEMPIFS variable, like I did, because that method does not reliably set IFS back. It's best to unset IFS when you are done with it.
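A small sketch of why the save-and-restore idiom goes wrong: if IFS was unset to begin with, saving it stores an empty string, and restoring that empty string disables word splitting entirely.
#!/bin/sh
unset IFS            # unset IFS: the shell still splits on space/tab/newline
TEMPIFS=$IFS         # saves an empty string, losing the "unset" state
IFS=:
# ... work that needs IFS=: ...
IFS=$TEMPIFS         # IFS is now set-but-empty, so no splitting happens at all
v="a b c"
for w in $v; do echo "$w"; done   # prints "a b c" as one word
unset IFS            # the reliable way back to default behaviour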
