how to tail all files except one - bash

I have log files in two directories; for simplicity I'm going to call them dir1 and dir2.
Let's say the user enters file.log, which is located in dir1. I should tail -f all files from dir1 and dir2 except file.log. Can somebody help me with this, please?
ssh host 'find /path/to/a/log -maxdepth 1 -type f -name "file*" -name "*.log" ! -name "$1" -print0 -exec tail {} \;' > /home/pl-${node}.log
ssh host 'find /path/to/a/log -maxdepth 1 -type f -name "file*" -name "*.out" ! -name "$1" -print0 -exec tail {} \;' > /home/pl-${node}.out
node is just a variable that stores 1 and 2.
When I enter ./test file-1.log, the output is:
pl-1.log
Oct 21 09:15 pl-1.out
Oct 21 09:15 pl-2.log
Oct 21 09:15 pl-2.out
As you can see, all files were tailed, even though I passed file-1.log as argument $1 to exclude it.

The following would tail all files from dir1 and dir2 except file.log:
shopt -s extglob
tail -f {dir1,dir2}/!(file.log)
The bash manual provides more information about extglob (the option that enables extended pattern matching).
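To see what the pattern expands to without starting a long-running tail -f, here is a small sketch; the directory and file names are made up to mirror the question:

```shell
# Enable extended globbing, then list what {dir1,dir2}/!(file.log)
# matches in a scratch directory: everything except file.log.
shopt -s extglob
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dir2"
touch "$tmp/dir1/file.log" "$tmp/dir1/other.log" "$tmp/dir2/app.log"
cd "$tmp"
printf '%s\n' {dir1,dir2}/!(file.log)
```

The brace expansion runs first, producing dir1/!(file.log) and dir2/!(file.log); each is then matched with file.log excluded.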

Something like this should do it:
find . -type f ! -path "./dir1/file.log" -exec tail {} \;
That is, use find ... -exec tail {} \;, adding the extra condition that the file must not be dir1/file.log. Note that -name only ever matches the basename, so a pattern containing a slash needs -path instead.
Update
how can I use find if the name of the file is an argument on the
command line? For example: find . -type f ! -name "$1"
Like this, for example:
ssh host "find /path/to/a/log -maxdepth 1 -type f -name 'file*.log' ! -name \"$1\" -print0 -exec tail {} \;" > /home/pl-${node}.out
Note that I am using find "... -name \"$1\" ..." with double quotes around the whole remote command, because the local shell only expands variables inside double quotes; with single quotes, $1 would be sent to the remote side literally.
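A local, testable sketch of the exclusion logic follows; dir1/dir2 and the file names are stand-ins, and on the real system the find would run inside the ssh command with tail -f instead of cat:

```shell
# Build two sample directories, then tail everything except the
# file named in $exclude (which stands in for the script's "$1").
tmp=$(mktemp -d); cd "$tmp"
mkdir dir1 dir2
echo one   > dir1/file-1.log
echo two   > dir1/file-2.log
echo three > dir2/file-3.log
exclude=file-1.log
find dir1 dir2 -maxdepth 1 -type f -name 'file*.log' \
    ! -name "$exclude" -exec cat {} +
```

Because "$exclude" is double-quoted, the exclusion survives even if the name contains spaces.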

Related

unix find in script which needs to meet a combination of conditions and execute something for all found files

Let's assume we have the following files and directories:
-rw-rw-r--. a.c
-rw-rw-r--. a.cpp
-rw-rw-r--. a.h
-rw-rw-r--. a.html
drwxrwxr-x. dir.c
drwxrwxr-x. dir.cpp
drwxrwxr-x. dir.h
I want to execute grep on all the files (it should also look in subdirectories) which meet the following conditions:
Only files, not directories.
Only files that end with .h, .c, or .cpp.
On files found, execute grep and print the file name where grep finds something.
But this ends up in a very long command with a lot of repetitions like:
find . -name '*.c' -type f -exec grep hallo {} \; -print -o -name '*.cpp' -type f -exec grep hallo {} \; -print -o -name '*.h' -type f -exec grep hallo {} \; -print
Can the conditions be grouped to remove the repetitions or are there any other possible simplifications?
My system is Fedora 33, with GNU grep, GNU find and bash available.
With Bash's extglob and globstar special options:
shopt -s extglob globstar
grep -s regex **/*.+(cpp|c|h)
You can put the first line in ~/.bashrc so you don't need to manually enable those options in every shell.
If you happened to have directories with those extensions, Grep would complain without the -s flag.
In GNU Find, use the -regex option to make it easier:
find . -type f -regex '.*\.\(c\|h\|cpp\)' -exec grep regex {} +
find . -type f -regextype awk -regex '.*\.(c|h|cpp)' -exec grep regex {} +
More on Find's regextypes.
With POSIX tools only:
find . -type f \( -name '*.[ch]' -o -name '*.cpp' \) -exec grep regex {} +
You can use -regex/-regextype instead of -name, like so:
find . -regextype "posix-extended" -regex "^.*\.((c)|(cpp)|(h))$" -exec grep hallo '{}' \;
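A quick check that the grouped POSIX form matches the same files as the repetitive original; the sample tree mirrors the listing in the question:

```shell
# a.html has the wrong extension and dir.c is a directory, so
# neither should appear in the output of the grouped find.
tmp=$(mktemp -d); cd "$tmp"
echo hallo > a.c
echo hallo > a.cpp
echo hallo > a.h
echo hallo > a.html
mkdir dir.c
find . -type f \( -name '*.[ch]' -o -name '*.cpp' \) \
    -exec grep -l hallo {} + | sort
```

grep -l prints only the names of matching files, which is what the question asked for.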

Last part of the folder path + filename

I want to print paths in the format Folder/folder,
instead of full paths:
/volume1/ArtWork/Folder1/folderA
/volume1/ArtWork/Folder1/folderB
/volume1/ArtWork/Folder2/folderA
/volume1/ArtWork/Folder2/folderA
I want to print these "semi-paths" (not just the filenames!) to a txt file:
Folder1/folderA
Folder1/folderB
Folder2/folderA
Folder2/folderA
This is what I am currently using:
find /volume1/ArtWork/* -type d -maxdepth 2 \
-not -empty -printf '%f\n' \
> /volume1/ArtWork/filenamesdir.txt
(I want to print not-empty folders, but the format at the moment is wrong)
I suggest navigating to that folder before running the find command. Using -printf '%P\n' you can print the part of interest, each on a separate line:
pushd /volume1/ArtWork
find . -maxdepth 2 -type d -not -empty -printf '%P\n'
popd
Note that I'm using pushd/popd instead of cd. That makes changing into the directory and back to the previous one more convenient.
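Here is a small demonstration of what %P produces; the folder names are made up, and -mindepth 2 is added so that only the two-level paths appear:

```shell
# %P prints each match with the starting point stripped off,
# leaving only the part below it (e.g. Folder1/folderA).
tmp=$(mktemp -d)
mkdir -p "$tmp/Folder1/folderA" "$tmp/Folder2/folderB"
touch "$tmp/Folder1/folderA/keep" "$tmp/Folder2/folderB/keep"
find "$tmp" -mindepth 2 -maxdepth 2 -type d -not -empty -printf '%P\n' | sort
```

The touch calls keep the leaf folders non-empty so -not -empty does not filter them out.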
This will do:
find /volume1/ArtWork/* -maxdepth 2 -type d \
-exec sh -c 'echo "$0" | grep -o "[^/]*/[^/]*$"' {} \; \
> /volume1/ArtWork/filenamesdir.txt
This is a GNU sed version:
find /your/path -mindepth 1 -type d -print0 |
xargs -0 -I{} bash -c 'sed -r "s/.*\/(.*\/.*)$/\1/" <<<"$1"' _ {}

Using cp in bash to use piped in information about files like modification date

I am trying to copy files from one directory into another from certain modification date ranges. For example, copy all files created after May 10 from dir1 to dir2. I have tried a few things but have been unsuccessful so far.
This made sense to me, but cp does not read the filenames piped to it; it just expands ./* and copies all files in the directory:
find . -type f -daystart -mtime 2 | cp ./* /dir/
This almost worked, but did not copy all of the matching files. I also tried xargs -s 50000, but that did not work either:
find . -type f -daystart -mtime 2 | xargs -I {} cp {} /dir/
find . -type f -daystart -mtime 2 | xargs cp -t /dir/
Found this online; it does not work either:
cp $(find . -type f -daystart -mtime 2) /dir/
Ideas? Thanks.
Given that your actual question is about using filenames from stdin rather than metadata from stdin, this is quite straightforward:
while IFS= read -r -d '' filename; do
cp "$filename" /wherever
done < <(find . -type f -daystart -mtime 2 -print0)
Note the use of IFS= read -r -d '' and -print0 -- as NUL and / are the only two characters which can't be used in UNIX filenames, using any other character, including the newline, to delimit them is unsafe. Think about what would happen if someone (or a software bug) created a file called $'./ \n/etc/passwd'; you want to be damned sure none of your scripts try to delete or overwrite /etc/passwd when they're trying to delete or overwrite that file.
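As a sanity check, the NUL-delimited loop survives even a filename containing an embedded newline, a deliberately nasty made-up case:

```shell
# Create one well-behaved file and one with a newline in its name,
# then copy both with the read -d '' / -print0 loop.
tmp=$(mktemp -d); cd "$tmp"
mkdir dest
touch $'bad\nname.txt' good.txt
while IFS= read -r -d '' filename; do
    cp "$filename" dest/
done < <(find . -maxdepth 1 -type f -print0)
```

A newline-delimited loop would have split the bad name into two bogus paths; the NUL delimiter keeps it intact.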
That said, you don't actually need to use a pipe at all:
find . -type f -daystart -mtime -2 -exec cp '{}' /wherever ';'
...or, if you're only trying to support GNU cp, you can use this more efficient variant:
find . -type f -daystart -mtime -2 -exec cp -t /wherever '{}' +
You don't specify why the various attempts didn't work, so I can only assume that they are the result of whitespace in the filenames.
Try using find's useful -exec action instead of using xargs:
find . -type f -daystart -mtime 2 -exec cp {} /media/alex/Extra/Music/watchfolder/ \;
find . -type f -daystart -mtime 2 \
| cpio -pdv /media/alex/Extra/Music/watchfolder/

I am getting an error "arg list too long" in unix

I am using the following command and getting the error "arg list too long". Help needed.
find ./* \
-prune \
-name "*.dat" \
-type f \
-cmin +60 \
-exec basename {} \;
Here is the fix:
find . -maxdepth 1 -name "*.dat" -type f -cmin +60 | xargs -I{} basename {}
To only find files in the current directory, use -maxdepth 1.
find . -maxdepth 1 -name '*.dat' -type f -cmin +60 -exec basename {} \;
On all *nix systems there is a maximum combined length for the arguments that can be passed to a command. This is measured after the shell has expanded filename patterns passed as arguments on the command line.
The syntax of find is find location_to_find_from arguments..., so when you run this command the shell expands your ./* to a list of all files in the current directory. This turns your command line into find file1 file2 file3 and so on, which is probably not what you want, as find is recursive anyway. I expect that you are running this command in a large directory and exceeding your command-length limit.
Try running the command as follows
find . -name "*.dat" -type f -cmin +60 -exec basename {} \;
This will prevent the filename expansion that is probably causing your issue.
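The limit being exceeded is the kernel's ceiling on the total size of the argument list plus environment handed to exec; you can inspect it with getconf:

```shell
# ARG_MAX is the maximum combined size of arguments and environment
# for a single exec; "arg list too long" means it was exceeded.
getconf ARG_MAX
```

On a typical Linux system this prints a value in the megabyte range, which a glob over a huge directory can still overflow.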
Without find, and checking only the current directory:
now=$(date +%s)
for file in *.dat; do
if (( $now - $(stat -c %Y "$file") > 3600 )); then
echo "$file"
fi
done
This works on my GNU system. You may need to alter the date and stat formats for other OSes.
If you need to show only the .dat filenames in the ./ tree, execute it without the -prune option and use just the path:
find ./ -name "*.dat" -type f -cmin +60 -exec basename {} \;
To find all the .dat files which are older than 60 minutes in the present directory only, do as follows:
find . -iregex "./[^/]+\.dat" -type f -cmin +60 -exec basename {} \;
And if you have a stripped-down version of the find tool (for example, on AIX), do as follows:
find . -name "*.dat" -type f -cmin +60 | grep "^\./[^/]\+\.dat$" | sed "s/^\.\///"

how to ignore directories but not the files in them in bash script with find

I want to run a find command but only find the files in directories, not the directories or subdirectories themselves. Also acceptable would be to find the directories but grep them out or something similar, still listing the files in those directories. Right now, to find all files changed in the last day in the working directory, I grep out DS_Store and replace spaces with underscores:
find . -mtime -1 -type f -print | grep -v '\.DS_Store' | awk '{gsub(/ /,"_")}; 1'
Any help would be appreciated!
If you have GNU find:
find . -mtime -1 ! -name '.DS_Store' -type f -printf '%f\n'
will print only the basename of the file.
For other versions of find:
find . -mtime -1 ! -name '.DS_Store' -type f -exec basename {} \;
you could then do:
find -name index.html -exec sh -c 'basename "$1" | tr " " _' _ {} \;
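The same wrapper, checked against a made-up filename containing a space (the path and name are hypothetical): the lone _ becomes $0 inside sh -c, and each found path is passed in as $1.

```shell
# basename strips the directory part; tr turns the remaining
# spaces into underscores, matching the question's awk step.
tmp=$(mktemp -d)
mkdir "$tmp/some dir"
touch "$tmp/some dir/my index.html"
find "$tmp" -name '*.html' -exec sh -c 'basename "$1" | tr " " _' _ {} \;
```

Quoting "$1" inside the sh -c body is what keeps the space-containing name in one piece.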
