Rename file and keep the extension in bash

I have to rename some files whose exact locations I don't know, while keeping their extensions.
Ex: files system-2.1.3.war, system-2.1.3.ear, system-2.1.3.ejb
to system.war, system.ear, system.ejb
So I wrote this.
find /DIR1 -name "*.ear" -o -name "*.war" -o -name "*.ejb" \
-exec bash -c 'export var1={}; cp $var1 NEW_NAME${var1: -4}' \;
The problem is that it only works for the last pattern in the "or list" of the find command. So if the file is system-2.1.3.ejb it works, but for system-2.1.3.war and system-2.1.3.ear it doesn't.
If I change the find to
find /DIR1 -name "*.ejb" -o -name "*.war" -o -name "*.ear"
Notice that *.ear is now the last one; it will work for system-2.1.3.ear and not for the others, and so on.
Please help me to fix this.
I know I could write a script to accomplish this, but I want a one-liner.

Rather than embedding the {} in the script, pass it as an argument:
find /DIR1 \( -name "*.ear" -o -name "*.war" -o -name "*.ejb" \) \
-exec sh -c 'ext=${1##*.}; cp "$1" "NEW_NAME.$ext"' _ '{}' \;
Without the \(...\) grouping, -exec applies only to the primary it is implicitly "and"ed with, namely the preceding -name "*.ejb" test, because the implicit AND binds more tightly than -o.
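The precedence rule is easy to verify in a scratch directory (file names invented for the demo):

```shell
# Make a scratch directory with one file of each type.
dir=$(mktemp -d)
touch "$dir/a.ear" "$dir/a.war" "$dir/a.ejb"

# Without grouping, -print binds only to the last test; parsed as:
#   -name "*.ear"  OR  -name "*.war"  OR  ( -name "*.ejb" AND -print )
find "$dir" -name "*.ear" -o -name "*.war" -o -name "*.ejb" -print
# prints only a.ejb

# With grouping, -print applies to every match.
find "$dir" \( -name "*.ear" -o -name "*.war" -o -name "*.ejb" \) -print
# prints all three files

rm -rf "$dir"
```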
You can also limit the number of calls to the shell by looping over multiple arguments:
find /DIR1 \( ... \) -exec sh -c 'for f; do ext=${f##*.}; cp "$f" "NEW_NAME.$ext"; done' _ {} +

Try this:
find /DIR1 \( -name "*.ear" -o -name "*.war" -o -name "*.ejb" \) -exec bash -c 'export var1={}; cp $var1 NEW_NAME${var1: -4}' \;
or
find ./DIR1/ -regex '.*\(.ear\|.war\|.ejb\)$' -exec bash -c 'export var1={}; cp $var1 NEW_NAME${var1: -4}' \;
E.g.:
user#host $ ls -arlt DIR1/
total 76
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.war
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.ear
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.ejb
drwxrwxr-x 2 user user 4096 Oct 21 22:59 .
user#host $ find . \( -name "*.ear" -o -name "*.war" -o -name "*.ejb" \) -exec bash -c 'export var1={}; cp $var1 NEW_NAME${var1: -4}' \;
user#host $ ls -ralt
total 76
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.war
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.ear
-rw-rw-r-- 1 user user 0 Oct 21 22:59 system-2.1.3.ejb
drwxrwxrwt 11 root root 69632 Oct 21 23:10 ..
-rw-rw-r-- 1 user user 0 Oct 21 23:10 NEW_NAME.war
-rw-rw-r-- 1 user user 0 Oct 21 23:10 NEW_NAME.ear
-rw-rw-r-- 1 user user 0 Oct 21 23:10 NEW_NAME.ejb
drwxrwxr-x 2 user user 4096 Oct 21 23:10 .

If you have the rename utility (the s/// expression below is the Perl rename, not the util-linux one), you can avoid forking a Bash subprocess for each file, and also use find's regex support to avoid multiple -name options:
find /DIR1 -regextype awk -regex '.*\.([we]ar|ejb)$' \
-exec rename 's/.*(\.[^.]+)$/system$1/' '{}' +

I'd use a while loop:
find . -name "*.war" -o -name "*.ejb" -o -name "*.ear" | while read -r file; do cp "$file" "NEW_NAME${file: -4}"; done
Keep in mind that both this and your example copy the files into the current directory, so if you have more than one *.war, *.ejb or *.ear in your tree, only the last one(s) will be left in the target directory.
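If you do go the pipe-into-while route, a safer variant NUL-delimits the names so whitespace in paths cannot split them (a sketch; /DIR1 and NEW_NAME are the placeholders from the question):

```shell
find /DIR1 \( -name "*.ear" -o -name "*.war" -o -name "*.ejb" \) -print0 |
while IFS= read -r -d '' file; do
    ext=${file##*.}              # extension without the dot
    cp "$file" "NEW_NAME.$ext"   # quoted, so spaces in paths survive
done
```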

Related

shell alias executes in home directory

I have following line in my .zshrc file:
alias clean="sed -i 's/\r//g; s/ /\t/g' $(find . -maxdepth 1 -type f)"
but when I try to execute it in /path/to/some/directory the output is:
sed: can't read ./.Xauthority: No such file or directory
sed: can't read ./.lesshst: No such file or directory
.Xauthority and .lesshst are both in my home directory.
Substituting . with $(pwd) does not help.
When defining the alias you've used double quotes to encompass the entire (alias) definition. This has the effect of actually running the find command at the time the alias is defined.
So when the alias is created it will pick up a list of files from the directory in which the alias is being defined (eg, in your home directory when sourcing .zshrc).
You can see this happening in the following example:
$ cd /tmp
$ pwd
/tmp
$ ls -l
total 36036
drwxrwxrwt+ 1 myid None 0 Oct 10 11:31 ./
drwxr-xr-x+ 1 myid None 0 Jul 12 17:28 ../
-rw-r--r-- 1 myid Administrators 0 Oct 10 11:31 a
-rw-r--r-- 1 myid Administrators 0 Oct 10 11:31 b
-rw-r--r-- 1 myid Administrators 0 Oct 10 11:31 c
-rw-r--r-- 1 myid Administrators 0 Oct 10 11:31 d
-rw-r--r-- 1 myid Administrators 36864002 Jun 6 17:29 giga.txt
drwx------+ 1 myid Administrators 0 Mar 8 2020 runtime-xward/
$ alias clean="sed -i 's/\r//g; s/ /\t/g' $(find . -maxdepth 1 -type f)"
$ alias clean
alias clean='sed -i '\''s/\r//g; s/ /\t/g'\'' ./a
./b
./c
./d
./giga.txt'
Notice how the find was evaluated at alias definition time and pulled in all of the files in my /tmp directory.
To address this issue you want to make sure the find is not evaluated at the time the alias is created.
There are a few ways to do this: one is to wrap the find portion of the definition in single quotes; another is to keep the current double quotes and just escape the $, e.g.:
$ alias clean="sed -i 's/\r//g; s/ /\t/g' "'$(find . -maxdepth 1 -type f)'
$ alias clean
alias clean='sed -i '\''s/\r//g; s/ /\t/g'\'' $(find . -maxdepth 1 -type f)'
$ alias clean="sed -i 's/\r//g; s/ /\t/g' \$(find . -maxdepth 1 -type f)"
$ alias clean
alias clean='sed -i '\''s/\r//g; s/ /\t/g'\'' $(find . -maxdepth 1 -type f)'
Notice in both cases the alias contains the actual find command instead of the results of evaluating it in the current directory.
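The timing difference is easy to reproduce with a toy alias (names invented; run as a bash script, where alias expansion must be enabled explicitly):

```shell
#!/bin/bash
shopt -s expand_aliases   # aliases are off by default in non-interactive shells

cd /tmp
# Double quotes: $(pwd) runs when the alias is DEFINED.
alias where_defined="echo $(pwd)"
# Single quotes: the text is stored literally; $(pwd) runs when the alias is USED.
alias where_now='echo $(pwd)'

cd /
where_defined   # prints /tmp  (where it was defined)
where_now       # prints /     (where it runs)
```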

Error handling for find: removing a directory reports an error even though it succeeds

I am writing a shell script to find some old directories and replace them with the latest one. I used the following command. It deletes successfully, but I am running into this error.
-bash-4.1$ find /app/home/data01/dpx/ -type d -mtime +2 -name 'stress' -exec ls -ltr {} \;
total 24
-rw-r--r-- 1 dpx app 1324 Oct 21 2017 relocate1.sh
-rw-r--r-- 1 dpx app 316 Oct 21 2017 re1.sh
-rw-r--r-- 1 dpx app 11876 Oct 21 2017 pre.log
-rw-r--r-- 1 dpx app 1241 Oct 21 2017 relocate2.sh
-bash-4.1$
-bash-4.1$ find /app/home/data01/dpx/ -type d -mtime +2 -name 'stress' -exec rm -rf {} \;
find: `/app/home/data01/dpx/stress': No such file or directory
-bash-4.1$ find /app/home/data01/dpx/ -type d -mtime +2 -name 'stress' -exec ls -ltr {} \;
Why am I getting this error, and how can I prevent it?
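No answer is recorded here, but the cause is the same as in the next question below: find removes the matched directory with rm, then still tries to descend into it. One possible fix (a sketch, not a tested answer for this exact system) is -prune:

```shell
# -prune keeps find from descending into each matched 'stress' directory,
# so it never tries to read a directory that rm has just removed.
find /app/home/data01/dpx/ -type d -mtime +2 -name 'stress' \
    -prune -exec rm -rf {} \;
```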

Why does my find command seem to execute twice? [duplicate]

This is the contents of the directory I'm working with:
misha#hp-laptop:~/work/c/5$ ls -l
total 8
-rw-rw-r-- 1 misha misha 219 May 20 15:37 demo.c
drwxrwxr-x 2 misha misha 4096 May 20 16:07 folder
-rw-rw-r-- 1 misha misha 0 May 20 16:06 test
Now I would like to remove everything from this directory except for the file demo.c. Here's the command I've come up with:
find . ! \( -name demo.c -o -name . \) -exec rm -Rf {} \;
It does exactly what you'd think it would do (meaning, the file test and the directory folder are gone), but at the same time it also displays the following error message:
find: `./folder': No such file or directory
Why do you think that is?
it also displays this error message:
find: `./folder': No such file or directory
Why is that?
Because find recognizes ./folder as a directory when it first reads directory ., before considering whether it matches the find criteria or performing any action on it. It does not recognize that the action will remove that directory, so after performing the action, it attempts to descend into that directory to scan its contents. By the time it does that, however, the directory no longer exists.
There are multiple ways to address the problem. One not yet mentioned is to use the -prune action. This tells find not to descend into directories that match the tests:
find . ! \( -name demo.c -o -name . \) -exec rm -Rf {} \; -prune
That will serve nicely here, and it also has applications in areas where you are not deleting the directory and you do not want to limit the search depth.
Additionally, another way to avoid affecting . would be to make use of the fact that find accepts multiple base paths to test, that these can designate regular files if you wish, and that during pathname expansion any leading . in a filename must be matched explicitly. If, as in your case, there are no dotfiles in the target directory (other than . and ..), then you can accomplish your objective like this:
find * ! -name demo.c -exec rm -Rf {} \; -prune
You can change your find command to this:
find . -mindepth 1 -not -name demo.c -delete
-mindepth 1 ensures that the starting point . itself is not selected
-delete will delete all matched files and directories (it implies -depth, so a directory's contents are removed before the directory itself)
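A quick check of the -delete behaviour in a scratch directory (layout mirrors the question):

```shell
tmp=$(mktemp -d)
touch "$tmp/demo.c" "$tmp/test"
mkdir "$tmp/folder"
touch "$tmp/folder/inner"

# -delete implies -depth: folder/inner is removed before folder itself,
# so the rmdir of folder succeeds and no error is reported.
find "$tmp" -mindepth 1 -not -name demo.c -delete

ls "$tmp"    # only demo.c remains
rm -rf "$tmp"
```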
#before
ls -lrt
total 4
-rw-rw-r-- 1 user super 0 May 20 09:14 demo.c
drwxrwxr-x 2 user super 4096 May 20 09:14 folder/
-rw-rw-r-- 1 user super 0 May 20 09:14 test
#Command
ls -1 | grep -v demo.c | xargs rm -rf
#After
ls -lrt
total 0
-rw-rw-r-- 1 user super 0 May 20 09:14 demo.c

List all files older than x days only in current directory

I am new to Unix and couldn't find an appropriate answer in other questions.
I want to list only the files in the current directory which are older than x days. I have the restrictions below:
List only files in the current folder which are older than 30 days
Output shouldn't include directories and subdirectories
This should list files similar to what the ls command does
Output should look like: file1 file2 file3 ...
I used find . -mtime +30, but this gives files in sub-directories as well. I would like the search not to recurse into directories.
Thanks a lot in advance!
You can do this:
find ./ -maxdepth 1 -type f -mtime +30 -print
If -maxdepth gives you problems (it is a GNU extension, not POSIX), some implementations such as BSD find accept:
find ./ -depth 1 -type f -mtime +30 -print
To add to @Richasantos's answer:
This works perfectly fine
$ find . -maxdepth 1 -type f -mtime +30
Prints:
./file1
./file2
./file3
You can now pipe this to anything you want. Let's say you want to remove all those old files:
$ find . -maxdepth 1 -type f -mtime +30 -print | xargs /bin/rm -f
From man find:
If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print.
So using -print0
$ find . -maxdepth 1 -type f -mtime +30 -print0
Prints (with null characters in between):
./file1./file2./file3
And is used like this to remove those old files:
$ find . -maxdepth 1 -type f -mtime +30 -print0 | xargs -0 /bin/rm -f
You can use find . -maxdepth 1 to exclude subdirectories.
A slightly different spin on this: find is incredibly versatile, you can specify size and time as follows:
This finds all the logs that are 4 months or older and bigger than 1 MiB.
If you remove the + sign from -size, it matches only files of exactly that (rounded-up) size.
find /var/log -type f -mtime +120 -size +1M
/var/log/anaconda/journal.log
/var/log/ambari-agent/ambari-alerts.log.23
/var/log/ambari-agent/ambari-alerts.log.22
/var/log/ambari-agent/ambari-alerts.log.24
/var/log/ambari-agent/ambari-alerts.log.25
/var/log/ambari-agent/ambari-alerts.log.21
/var/log/ambari-agent/ambari-alerts.log.20
/var/log/ambari-agent/ambari-alerts.log.19
What's even better, you can feed this into an ls:
find /var/log -type f -mtime +120 -size +1M -print0 | xargs -0 ls -lh
-rw-r--r--. 1 root root 9.6M Oct 1 13:24 /var/log/ambari-agent/ambari-alerts.log.19
-rw-r--r--. 1 root root 9.6M Sep 27 07:44 /var/log/ambari-agent/ambari-alerts.log.20
-rw-r--r--. 1 root root 9.6M Sep 22 03:32 /var/log/ambari-agent/ambari-alerts.log.21
-rw-r--r--. 1 root root 9.6M Sep 16 23:23 /var/log/ambari-agent/ambari-alerts.log.22
-rw-r--r--. 1 root root 9.6M Sep 11 19:12 /var/log/ambari-agent/ambari-alerts.log.23
-rw-r--r--. 1 root root 9.6M Sep 6 15:02 /var/log/ambari-agent/ambari-alerts.log.24
-rw-r--r--. 1 root root 9.6M Sep 1 10:51 /var/log/ambari-agent/ambari-alerts.log.25
-rw-------. 1 root root 1.8M Mar 11 2019 /var/log/anaconda/journal.log

How to tail all files except one

I have log files in two directories; for simplicity I'm going to call them dir1 and dir2.
Let's say the user enters file.log, which is located in dir1. I should tail -f all files from dir1 and dir2 except file.log. Can somebody help me with this, please?
ssh host 'find /path/to/a/log -maxdepth 1 -type f -name "file*" -name "*.log" ! -name "$1" -print0 -exec tail {} \;' > /home/pl-${node}.log
ssh host 'find /path/to/a/log -maxdepth 1 -type f -name "file*" -name "*.out" ! -name "$1" -print0 -exec tail {} \;' > /home/pl-${node}.out
node is just a variable that stores 1 and 2.
When I enter ./test file-1.log, the output is:
pl-1.log
Oct 21 09:15 pl-1.out
Oct 21 09:15 pl-2.log
Oct 21 09:15 pl-2.out
As you can see, all files were tailed, even though I passed file-1.log as argument $1 so that it would not be tailed.
The following would tail all files from dir1 and dir2 except file.log:
shopt -s extglob
tail -f {dir1,dir2}/!(file.log)
The manual provides more information about extglob (that enables extended pattern matching).
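A runnable sketch of that pattern in a scratch directory (file names invented):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/dir1" "$tmp/dir2"
touch "$tmp/dir1/file.log" "$tmp/dir1/other.log" "$tmp/dir2/more.log"

shopt -s extglob       # enable !(...) extended globbing
cd "$tmp"
# !(file.log) matches every name except file.log in each directory
printf '%s\n' {dir1,dir2}/!(file.log)
# -> dir1/other.log
#    dir2/more.log
cd / && rm -rf "$tmp"
```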
Something like this should do it:
find . -type f ! -path "*dir1/file.log" -exec tail {} \;
That is, use find ... -exec tail {} \;, adding the extra condition that the file must not be dir1/file.log. Note -path rather than -name here: -name matches only the basename, so a -name pattern containing a / would never match anything.
Update
How can I use find if the name of the file is an argument on the command line? For example: find . ! -type f -name "$1"
Like this, for example:
ssh host "find /path/to/a/log -maxdepth 1 -type f -name 'file*.log' ! -name \"$1\" -print0 -exec tail {} \;" > /home/pl-${node}.out
Note that I am using find "... -name \"$1\" ..." with double quotes around the remote command, because the local shell only expands $1 inside double quotes; that is what "sends" the variable's value to the remote side.
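The effect of the quoting can be checked locally without ssh, since ssh simply hands the remote shell a single string (here the variable name pattern stands in for the script's $1):

```shell
pattern='file-1.log'

# Single quotes: the string reaches the remote side with $pattern unexpanded,
# and the remote shell has no such variable set.
echo 'find . ! -name "$pattern"'
# -> find . ! -name "$pattern"

# Double quotes: the local shell expands $pattern before ssh ever sees it.
echo "find . ! -name \"$pattern\""
# -> find . ! -name "file-1.log"
```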
