fswatch to watch only a certain file extension (macOS)

I am using fswatch and want it to trigger only when a file with the extension .xxx is modified, created, etc. The documentation and the second reference below state that:
All paths are accepted by default, unless an exclusion filter says otherwise.
Inclusion filters may override any exclusion filter.
The order in the definition of filters in the command line has no effect.
Question: What is the regular expression to use to exclude all files that do not match the .xxx extension?
References:
Is there a command like "watch" or "inotifywait" on the Mac?
Watch for a specific filetype
Platform:
OS X 10.9.5.

I'm the fswatch author. It may not be very intuitive, but fswatch includes everything unless an exclusion filter says otherwise. Coming to your problem: you want to include all files with a given extension. Rephrasing in terms of exclusion and inclusion filters:
You want to exclude everything.
You want to include files with a given extension ext.
That is:
To exclude everything you can add an exclusion filter matching any string: .*.
To include files with a given extension ext, you add an inclusion filter matching any path ending with .ext: \\.ext$. In this case you need to escape the dot to match a literal dot, then match the extension ext, and finally anchor the match at the end of the path with $.
The final command is:
$ fswatch [options] -e ".*" -i "\\.ext$"
If you want case insensitive filters (e.g. to match eXt, Ext, etc.), just add the -I option.
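For example, here is a minimal sketch that reacts to each matching change (the watched directory ~/notes is a placeholder; -0 makes fswatch NUL-separate the paths so names containing spaces survive the pipe):
fswatch -0 -e ".*" -i "\\.ext$" ~/notes | while read -r -d "" path; do
    echo "changed: $path"
done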

You may watch for changes to files of a single extension like this:
fswatch -e ".*" -i ".*/[^.]*\\.xxx$" .
This excludes everything and then includes any path whose basename is [^.]* followed by .xxx. Since [^.]* cannot contain a dot, this also skips files whose names start with a dot.
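You can sanity-check the pattern with grep -E, since fswatch filters are POSIX regular expressions and behave comparably (the sample paths here are made up):
$ printf '%s\n' /dir/song.xxx /dir/.hidden.xxx /dir/song.yyy | grep -E ".*/[^.]*\.xxx$"
/dir/song.xxx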
If you want to run a command on the file change, you may add the following:
fswatch -e ".*" -i ".*/[^.]*\\.xxx$" -0 . | xargs -0 -n 1 -I {} echo "File {} changed"

Related

Why does 'ls' recurse into folders?

Why does ls always recurse into folders when I use a wildcard? I'd rather it didn't do this, and instead just showed me all items in the directory starting with m and nothing else.
$ ls
boot/  dev/  etc/  home/  lost+found/  media/  mnt/  opt/  proc/  root/  run/  snap/  srv/  sys/  tmp/  usr/  var/  bin@  init*  lib@  lib32@  lib64@  libx32@  sbin@
/ $ ls m*
media:
mnt:
c/ d/ e/ wsl/
$ alias ls
alias ls='ls -FAh --color=auto --group-directories-first'
(This question is off-topic here and should be migrated to Unix & Linux or Super User; I'm answering community-wiki for the OP's benefit, but I expect it to be closed.)
ls isn't recursing. Instead, it's parsing the command line that it's given as an instruction to list the contents of the media directory.
The important thing to understand about UNIX in general is that commands don't parse their own command lines: whatever starts a program is responsible for coming up with the array of C strings that becomes its command-line arguments, so a single string like ls m* can't be passed along as-is.
The shell thus replaces ls m* with an array like ["ls", "media", "mnt"] (in the transcript above, m* matches both media and mnt).
Because ls can't tell the difference between being given media as the name of a directory to list, and being given media as the result of expanding a glob, it assumes the former, and lists the contents of that directory.
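You can see exactly what the shell hands to ls by expanding the same glob with a command that merely prints its arguments (assuming the directory from the transcript above):
$ printf 'arg: %s\n' m*
arg: media
arg: mnt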
Why does ls always recurse into folders when I use a wildcard on it
It behaves according to the specification whenever the wildcard glob matches a directory.
From The Open Group Base Specifications Issue 7, 2018 edition:
For each operand that names a file of type directory, ls shall write the names of files contained within the directory as well as any requested, associated information.
You can, however, override this default behavior with the -d option:
Do not follow symbolic links named as operands unless the -H or -L options are specified. Do not treat directories differently than other types of files. The use of -d with -R or -f produces unspecified results.
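With -d, ls lists the matched directories themselves rather than their contents; against the transcript above, that would look like this:
$ ls -d m*
media/  mnt/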

How to delete files matching specific character placement?

I'm trying to delete files whose names contain certain digits at a specific position, using bash and a text file that contains those digits.
I have a single directory with files in the following naming convention: 2019-08-06-11-35-13_2091232924_4569.mp3
I have a text file containing area codes that I'd like to match and delete; one of those area codes is 209. The filename layout is always consistent when read from the right, so I'd like to match characters 17-19 from the right against the text file and then delete the matching files using bash. I've tried plain wildcard matching, but it deletes files with those digits in other positions.
You can use the ? wildcard, which matches any single character.
rm ????-??-??-??-??-??_209???????_????.mp3
However, it appears that all the wildcarded characters are digits, so you could use [0-9] instead of ? and be safer.
rm [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]_209[0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9].mp3
If you're getting the area code from a file, you can replace 209 in the pattern with the variable that you assigned from the file (note that seven ? wildcards must follow it, to cover the remaining digits of the ten-digit number):
rm ????-??-??-??-??-??_"$code"???????_????.mp3
You could probably do something with xargs:
xargs -n1 <input.txt sh -c 'rm *_$1*_*.mp3' {}
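A plain while-read loop may be easier to follow and keeps the strict positional pattern; this sketch assumes the area codes live one per line in a file called areacodes.txt (a name I've made up):
# delete files whose second field starts with each listed area code
while read -r code; do
    rm -- ????-??-??-??-??-??_"${code}"???????_????.mp3
done < areacodes.txt
If a code matches no files, the unexpanded pattern is passed to rm and it prints a harmless "No such file" error.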

My Perl script doesn't see the .txt file

My Perl program creates the file
10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.txt
Then in a method I have this code:
if ( -e $report ) {
    # we parse the file; here is some code, and at the end:
}
else {
    print "*** Skipping \\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI\10001.ICNTL.20160603.PROD.GAAP.PFI\10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT\n";
}
I cannot understand why the script doesn't see the file. I've checked it several times, letter by letter. Could it be because of the uppercase TXT, when in reality it is lowercase?
Is your file 10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.txt in directory \\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI?
At a guess, you're not escaping the file path correctly. Even if you use single quotes, there is no way of representing the two leading backslashes in a Uniform Naming Convention (UNC) path without escaping at least one of them.
Check the output of print $report, "\n" to see what you've really written.
My preference is to use four backslashes at the start of the path string, like this
my $report = '\\\\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI\10001.ICNTL.20160603.PROD.GAAP.PFI\10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT';
print -e $report ? "Found\n" : "Not found\n";
And Perl allows you to use forward slashes in place of backslashes in a Windows path, so you could write this instead if you prefer, though paths like this aren't valid in other Windows software:
my $report = '//NYNAS/NYNDS/VOL/DATA/INVACCT/FUND_RECS_PFI/10001.ICNTL.20160603.PROD.GAAP.PFI/10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT';
Another alternative is to change your current working directory. You cannot cd to a UNC path in the Windows command prompt, but Perl's chdir succeeds:
chdir '//NYNAS/NYNDS/VOL/DATA/INVACCT/FUND_RECS_PFI' or die $!;
Thereafter all relative file paths will be relative to this new working directory on your networked system

Grepping the output of lsof

To count open epubs I used this:
# - determine how many epubs are open -
NUMBER_OF_OPEN_EPUBS=0
while read -r LINE ; do
    # count the lsof lines that end in ".epub"
    if [ "$(echo "$LINE" | rev | cut -c1-5 | rev)" = ".epub" ]; then
        NUMBER_OF_OPEN_EPUBS=$((NUMBER_OF_OPEN_EPUBS + 1))
    fi
done < <(lsof | grep "\.epub")
# --- end determine how many epubs are open ---
and it always worked. But I wanted to extend it to fb2 files (similar to epubs), so I got an fb2 for testing and couldn't make it work. To illustrate the underlying problem in its simplest form:
With two files, /test.epub and /test.fb2, open in fbreader in separate windows, in bash, in lxterminal, under Ubuntu 14.04 LTS and plain Openbox:
me@nu:~$ lsof | grep "\.fb2" | tr -s " "
me@nu:~$ lsof | grep "\.epub" | tr -s " "
fbreader 28982 me 12r REG 8,5 346340 8375 /test.epub
me@nu:~$
Why doesn't lsof see the fb2? In practical terms, I suppose I could use ps, which exhibits no prejudice against fb2 files (and incidentally proves grep isn't to blame) instead, but why does lsof snub fb2 files?
==================
P.S. I edited this to put it in proper context here, even though, thanks to Mr.Hyde, it is solved already. The question as stated reflects an implied and unexamined assumption which turns out to be false. See answer.
Hyde's comment was the clue I needed. So he should get the credit. I'm not clear how that works yet, but I've lurked this site enough to know it is important to y'all.
So, right: if fbreader keeps one file open but not the other, as Hyde suggested, the question is why. I was assuming the file type was the important thing, but once I looked at it that way the alternative was obvious, and testing confirmed it: the issue isn't the type but the file size. I've only found one fb2 to test my script with, and it happens to be smaller than most of my epubs. I found a small epub and it behaved the same way. Presumably, if the file is small enough, fbreader just stores the whole thing in memory, but it can't for a larger file. So, mea culpa, the problem as stated is spurious. The bottom line from a scripting point of view is that I need to use
ps -eo args
instead of
lsof
because ps sees the file as an argument to the command that started the open process, rather than as an open file, and that is probably more to the point anyway. Thanks, gentles.
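For the record, a counting sketch built on ps rather than lsof might look like this (the extension list and variable name are mine, not part of the original script):
# count processes whose command line mentions an .epub or .fb2 file
NUMBER_OF_OPEN_BOOKS=$(ps -eo args | grep -E '\.(epub|fb2)\b' | grep -cv grep)
The trailing grep -cv grep both drops the grep process itself from the ps snapshot and counts the remaining lines.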

Exclude directories with a certain prefix using find

Unless I'm missing something that should be obvious, the OS X version of bash differs from Linux here, because I thought this question would provide all I needed to know. It didn't.
I'm trying to use find to find all directories that do not begin with "." (Mac OS X uses the . prefix for hidden folders, e.g., /volumes/OD/.Trashes). I want to pipe all the non-hidden directories to rsync to periodically mirror two local directories.
I tested to make sure I'm using find correctly with this code here:
find /volumes/OD -type d -iname ".*"
It finds the directories:
/volumes/OD/.Trashes
/volumes/OD/.Spotlight-V100
/volumes/OD/.fseventsd
So far so good. But when I try negating the condition I just tested, I get an error:
find /volumes/OD -type d -iname ! ".*"
yields this error:
find: .*: unknown primary or operator
I've tried escaping the "." with "\", but I only get the same error message. I've tried removing the parentheses, but I get the same error message. What am I missing here? Should I be using another operator besides -iname?
The ! must precede the condition:
find /volumes/OD -type d '!' -iname '.*'
That said, you shouldn't need to pipe your file-list from find to rsync; I'm sure the latter offers a way to exclude dot-folders. Maybe
rsync --exclude='.*' ...
?
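For instance, a sketch of the mirroring step using rsync's own filtering (the source and destination paths are placeholders):
# mirror OD, skipping anything whose name starts with a dot
rsync -a --exclude='.*' /volumes/OD/ /volumes/Mirror/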
Reason: in an interactive shell, ! is the prefix for history expansion, used in commands such as !23 (rerun history entry 23) or !rout followed by Enter (rerun the last command starting with rout). You should therefore escape it:
find /volumes/OD -type d \! -iname '.*'
ruakh has the right answer, but here is an alternative way of doing the search:
find /volumes/OD -type d -not -iname ".*"
