Why does ls always recurse into folders when I use a wildcard on it? I'd rather it didn't do this, and instead just showed me all items in the directory starting with m and nothing else.
$ ls
boot/ etc/ lost+found/ mnt/ proc/ run/ srv/ tmp/ var/ init* lib32# libx32# dev/ home/ media/ opt/ root/ snap/ sys/ usr/ bin# lib# lib64# sbin#
/ $ ls m*
media:
mnt:
c/ d/ e/ wsl/
$ alias ls
alias ls='ls -FAh --color=auto --group-directories-first'
(This question is off-topic here and should be migrated to Unix & Linux or Super User; answering community-wiki for the OP's benefit, but expecting this to be closed.)
ls isn't recursing. Instead, it's parsing the command line that it's given as an instruction to list the contents of the media directory.
The important thing to understand about UNIX in general is that commands don't parse their own command lines -- whatever starts a program is responsible for coming up with the array of C strings to be used as its command-line arguments, so a single string like ls m* can't be passed through as-is.
The shell thus replaces ls m* with an array like ["ls", "media", "mnt"], one element per name that matches m*.
Because ls can't tell the difference between being given media as the name of a directory to list, and being given media as the result of expanding a glob, it assumes the former, and lists the contents of that directory.
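You can see the expansion for yourself by substituting a command that simply prints its arguments; the shell does exactly the same thing to the glob before the command ever runs (output assumes the directory listing shown above):
$ printf '%s\n' m*
media
mnt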
Why does ls always recurse into folders when I use a wildcard on it
This is specified behavior whenever the wildcard glob matches a directory.
From The Open Group Base Specifications Issue 7, 2018 edition:
For each operand that names a file of type directory, ls shall write the names of files contained within the directory as well as any requested, associated information.
You can however override this default behavior by using the -d option:
Do not follow symbolic links named as operands unless the -H or -L options are specified. Do not treat directories differently than other types of files. The use of -d with -R or -f produces unspecified results.
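With -d, ls lists the matching names themselves instead of their contents; given the aliased ls -F from the question, the output would look something like:
$ ls -d m*
media/ mnt/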
I have hundreds of files in a folder, currently with the format (#######-ccc)_(1), where # represents an integer and c represents a letter. I need them all to be renamed #######_ccc_1. Is there an easy way to do this via the command prompt, or does it have to be done manually? I understand the mv command can only rename one file at a time.
ls output looks like this:
04/15/2021 06:39 PM <DIR> ..
04/15/2021 06:39 PM 34,436 (1101110-PMC)_(1).jpg
04/15/2021 06:39 PM 24,868 (1101111-PMC)_(1).jpg
04/15/2021 06:39 PM 24,842 (1102690-MARB)_(1).jpg
04/15/2021 06:39 PM 48,451 (1118150-DIVE).jpg
One way using just bash:
# Make sure extended pattern matching operators are enabled
# (should already be on in interactive shell sessions);
# +(pattern) matches one or more occurrences of pattern
shopt -s extglob
for file in \(+([[:digit:]])-+([[:alpha:]])\)_\(1\).jpg; do
    # Remove all parens and change the first dash to an underscore.
    newfile="${file//[()]}"
    mv "$file" "${newfile/-/_}"
done
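To preview the renames before running them for real, a common sanity check is to put echo in front of mv on a first pass, so the loop just prints each command it would execute:
echo mv "$file" "${newfile/-/_}"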
Or using prename (usually installed as part of Perl; the glob again requires extglob):
prename 's/[()]//g; s/-/_/' \(+([[:digit:]])-+([[:alpha:]])\)_\(1\).jpg
I'm trying to delete files whose names contain certain digits in a specific position, using bash and a text file that contains those digits.
I have a single directory with files in the following naming convention: 2019-08-06-11-35-13_2091232924_4569.mp3
I have a text file containing area codes that I'd like to match and delete. One of those area codes is 209. Reading from the right of the filename is always consistent. So I'd like to match characters 17, 18, 19 from the right, against the text file and then delete those files using bash. I've tried plain wildcard matching but it will delete files with those digits in other positions.
You can use the ? wildcard, which matches any single character.
rm ????-??-??-??-??-??_209???????_????.mp3
However, it appears that all of the wildcarded characters are digits, so you could use [0-9] instead of ? and be safer.
rm [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]_209[0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9].mp3
If you're getting the area code from a file, you can replace 209 in the pattern with the variable that you assigned from the file.
rm ????-??-??-??-??-??_"$code"???????_????.mp3
You could probably do something with xargs:
xargs -n1 sh -c 'rm ????-??-??-??-??-??_"$1"???????_????.mp3' _ <input.txt
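If you'd rather avoid the xargs quoting subtleties, a plain bash loop over the same input.txt is an equivalent sketch:
while read -r code; do
    rm ????-??-??-??-??-??_"$code"???????_????.mp3
done < input.txt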
I am using fswatch and only want it triggered if a file with extension .xxx is modified/created etc. The documentation and the second reference below indicate that:
All paths are accepted by default, unless an exclusion filter says otherwise.
Inclusion filters may override any exclusion filter.
The order in the definition of filters in the command line has no effect.
Question: What is the regular expression to use to exclude all files that do not match the .xxx extension?
References:
Is there a command like "watch" or "inotifywait" on the Mac?
Watch for a specific filetype
Platform:
MacOS 10.9.5.
I'm the fswatch author. It may not be very intuitive, but fswatch includes everything unless an exclusion filter says otherwise. Coming to your problem: you want to include only files with a given extension. Rephrasing in terms of exclusion and inclusion filters:
You want to exclude everything.
You want to include files with a given extension ext.
That is:
To exclude everything you can add an exclusion filter matching any string: .*.
To include files with a given extension ext, you add an inclusion filter matching any path ending with .ext: \\.ext$. Here you need to escape the dot to match a literal dot, then match the extension ext itself, and then anchor the end of the path with $.
The final command is:
$ fswatch [options] -e ".*" -i "\\.ext$"
If you want case insensitive filters (e.g. to match eXt, Ext, etc.), just add the -I option.
You may watch for changes to files of a single extension like this:
fswatch -e ".*" -i ".*/[^.]*\\.xxx$" .
This will exclude all files and then include all paths ending with .xxx (and also exclude files starting with a dot).
If you want to run a command on the file change, you may add the following:
fswatch -e ".*" -i ".*/[^.]*\\.xxx$" -0 . | xargs -0 -n 1 -I {} echo "File {} changed"
To count open epubs I used this:
# - determine how many epubs are open -
NUMBER_OF_OPEN_EPUBS=0
while read -r LINE ; do
    if [ "$(echo "$LINE" | rev | cut -c1-5 | rev)" = ".epub" ]; then
        NUMBER_OF_OPEN_EPUBS="$(($NUMBER_OF_OPEN_EPUBS+1))"
    fi
done < <(lsof | grep "\.epub")
# --- end determine how many epubs are open ---
and it always worked. But I wanted to extend it to fb2 files (similar to epubs) as well, so I got an fb2 for testing and couldn't make it work. To illustrate the underlying problem in its simplest form:
With 2 files, /test.epub & /test.fb2, open in fbreader in separate windows, in bash, in lxterminal, under Ubuntu 14.04 LTS and plain Openbox:
me@nu:~$ lsof | grep "\.fb2" | tr -s " "
me@nu:~$ lsof | grep "\.epub" | tr -s " "
fbreader 28982 me 12r REG 8,5 346340 8375 /test.epub
me@nu:~$
Why doesn't lsof see the fb2? In practical terms, I suppose I could use ps instead, which exhibits no prejudice against fb2 files (and incidentally proves grep isn't to blame), but why does lsof snub fb2 files?
==================
P.S. I edited this to put it in proper context here, even though, thanks to Mr.Hyde, it is solved already. The question as stated reflects an implied and unexamined assumption which turns out to be false. See answer.
Hyde's comment was the clue I needed. So he should get the credit. I'm not clear how that works yet, but I've lurked this site enough to know it is important to y'all.
So, right, if fbreader keeps one file open but not the other, as Hyde suggested, the question would be why. I was assuming the file type was the important thing, but once I looked at it that way the possibility was obvious, and I tested it: the issue isn't type but file size. I've only found one fb2 to test my script with, and it happens to be smaller than most of my epubs. I found a small epub and it behaved the same way. Presumably, if the file is small enough, fbreader just stores the whole thing in memory, but can't for a larger file. So, mea culpa, the problem as stated is spurious. Bottom line, from a scripting pov, is I need to use
ps -eo args
instead of
lsof
because ps sees the file as an argument to a command that started an open process rather than an open file and that is probably more to the point anyway. Thanks, gentles.
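For what it's worth, the counting loop above collapses to a one-liner with ps, since grep -c prints the number of matching lines; this sketch (untested against fbreader) also covers both extensions:
NUMBER_OF_OPEN_BOOKS=$(ps -eo args | grep -cE '\.(epub|fb2)')
The escaped pattern also keeps grep from matching its own command line in the ps output.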
Before moving on to use SVN, I used to manage my project by simply keeping a /develop/ directory and editing and testing files there, then moving them to the /main/ directory. When I decided to move to SVN, I needed to be sure that the directories were indeed in sync.
So, what is a good way to write a shell script [ bash ] to recursively compare files with the same name in two different directories?
Note: The directory names used above are for sample only. I do not recommend storing your code in the top level :).
The diff command has a -r option to recursively compare directories:
diff -r /develop /main
diff -rqu /develop /main
It will only give you a summary of changes that way :)
If you want to see only new/missing files:
diff -rqu /develop /main | grep "^Only"
If you want to get them bare:
diff -rqu /develop /main | sed -rn 's|^Only in ([^:]+): |\1/|p'
The diff I have available allows recursive differences:
diff -r main develop
But with a shell script:
( cd main ; find . -type f -exec diff {} ../develop/{} ';' )
[I read somewhere that answering your own questions is OK, so here goes :) ]
I tried this, and it worked pretty well
[/]$ cd /develop/
[/develop/]$ find | while read -r line; do diff -ruN "/main/$line" "$line"; done | less
You can choose to compare only specific files [e.g., only the .php ones] by editing the above line as
[/]$ cd /develop/
[/develop/]$ find -name "*.php" | while read -r line; do diff -ruN "/main/$line" "$line"; done | less
Any other ideas?
Here is an example of a (somewhat messy) script of mine, dircompare.sh, which will:
sort files and directories into arrays depending on which directory they occur in (or both), in two recursive passes
sort the files that occur in both directories again into two arrays, depending on whether diff -q determines that they differ or not
for those files that diff claims are equal, show and compare timestamps
Hope it can be found useful - Cheers!
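The script itself is only summarized above, so here is a rough bash sketch of that logic (illustrative only, not the original dircompare.sh; all names are made up and error handling is omitted):
#!/usr/bin/env bash
dir1=$1 dir2=$2
only1=() only2=() changed=() unchanged=()
# Pass 1: walk $dir1, classifying each file against $dir2
while IFS= read -r -d '' f; do
    rel=${f#"$dir1"/}
    if [ -e "$dir2/$rel" ]; then
        if diff -q -- "$dir1/$rel" "$dir2/$rel" >/dev/null; then
            unchanged+=("$rel")
        else
            changed+=("$rel")
        fi
    else
        only1+=("$rel")
    fi
done < <(find "$dir1" -type f -print0)
# Pass 2: anything under $dir2 with no counterpart in $dir1
while IFS= read -r -d '' f; do
    rel=${f#"$dir2"/}
    [ -e "$dir1/$rel" ] || only2+=("$rel")
done < <(find "$dir2" -type f -print0)
for rel in "${only1[@]}"; do echo "Only in $dir1: $rel"; done
for rel in "${only2[@]}"; do echo "Only in $dir2: $rel"; done
for rel in "${changed[@]}"; do echo "Files differ: $rel"; done
# For files diff claims are equal, compare timestamps as well
for rel in "${unchanged[@]}"; do
    if [ "$dir1/$rel" -nt "$dir2/$rel" ] || [ "$dir1/$rel" -ot "$dir2/$rel" ]; then
        echo "Same content, different timestamp: $rel"
    fi
done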
EDIT2: (Actually, it works fine with remote files - the problem was an unhandled Ctrl-C signal during a diff operation between local and remote files, which can take a while; the script is now updated with a trap to handle that - however, I'm leaving the previous edit below for reference):
EDIT: ... except it seems to crash my server for a remote ssh directory (which I tried using over ~/.gvfs)... So this is not bash anymore, but an alternative I guess is to use rsync, here's an example:
$ # get example revision 4527 as testdir1
$ svn co https://openbabel.svn.sf.net/svnroot/openbabel/openbabel/trunk/data@4527 testdir1
$ # get earlier example revision 2729 as testdir2
$ svn co https://openbabel.svn.sf.net/svnroot/openbabel/openbabel/trunk/data@2729 testdir2
$ # use rsync to generate a list
$ rsync -ivr --times --cvs-exclude --dry-run testdir1/ testdir2/
sending incremental file list
.d..t...... ./
>f.st...... CMakeLists.txt
>f.st...... MACCS.txt
>f..t...... SMARTS_InteLigand.txt
...
>f.st...... atomtyp.txt
>f+++++++++ babel_povray3.inc
>f.st...... bin2hex.pl
>f.st...... bondtyp.h
>f..t...... bondtyp.txt
...
Note that:
To get the above, don't forget the trailing slashes / at the end of the directory names in the rsync command
--dry-run - simulate only, don't update/transfer files
-r - recurse into directories
-v - verbose (but not related to file changes info)
--cvs-exclude - ignore .svn files
-i - "--itemize-changes: output a change-summary for all updates"
Here is a brief excerpt of man rsync that explains the information shown by -i (for instance, the >f.st...... strings above):
The "%i" escape has a cryptic output that is 11 letters long.
The general format is like the string YXcstpoguax, where Y is
replaced by the type of update being done, X is replaced by the
file-type, and the other letters represent attributes that may
be output if they are being modified.
The update types that replace the Y are as follows:
o A < means that a file is being transferred to the remote
host (sent).
o A > means that a file is being transferred to the local
host (received).
o A c means that a local change/creation is occurring for
the item (such as the creation of a directory or the
changing of a symlink, etc.).
...
The file-types that replace the X are: f for a file, a d for a
directory, an L for a symlink, a D for a device, and a S for a
special file (e.g. named sockets and fifos).
The other letters in the string above are the actual letters
that will be output if the associated attribute for the item is
being updated or a "." for no change. Three exceptions to this
are: (1) a newly created item replaces each letter with a "+",
(2) an identical item replaces the dots with spaces, and (3) an
....
A bit cryptic, indeed - but at least it shows basic directory comparison over ssh. Cheers!
The classic (System V Unix) answer would be dircmp dir1 dir2, which was a shell script that would list files found in either dir1 but not dir2 or in dir2 but not dir1 at the start (first page of output, from the pr command, so paginated with headings), followed by a comparison of each common file with an analysis (same, different, directory were the most common results).
This seems to be in the process of vanishing - I have an independent reimplementation of it available if you need it. It's not rocket science (cmp is your friend).
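If all you need is the first part of what dircmp reported - the files found in one tree but not the other - a rough stand-in using comm on sorted file lists might look like this (a sketch assuming a POSIX userland, not the original dircmp):
comm -3 <(cd dir1 && find . -type f | sort) <(cd dir2 && find . -type f | sort)
The names common to both trees (comm -12 on the same lists) can then be checked pairwise with cmp -s to get the same/different analysis.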