bash locate command with pattern

I'm trying to find a file with the locate command.
It behaves somewhat strangely with patterns, at least not like the ls or find commands.
I do the following:
sh@sh:~$ locate rhythmdb
/home/sh/.local/share/rhythmbox/rhythmdb.xml
sh@sh:~$ locate "rhyth*"
sh@sh:~$ locate 'rhyth*'
sh@sh:~$ locate rhyth*
In my humble opinion it should find the file when an asterisk is used too, but it doesn't.
What could be wrong?

From man locate:
If --regex is not specified, PATTERNs can contain globbing characters. If any
PATTERN contains no globbing characters, locate behaves as if the pattern
were *PATTERN*.
Hence, when you issue
locate rhyth*
locate will not find it, because no file matches this pattern: since there is a glob character, locate really tries to match (in regex terms) ^rhyth.* against full paths, and there are obviously no such matches.
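To see the implicit-wildcard rule in action (the output here is assumed, based on the file from your question):
sh@sh:~$ locate rhythmdb        # no glob, so treated as *rhythmdb*
/home/sh/.local/share/rhythmbox/rhythmdb.xml
sh@sh:~$ locate 'rhyth*'        # has a glob, so the whole path must match rhyth*
sh@sh:~$ locate '*rhyth*'       # adding the leading wildcard yourself matches again
/home/sh/.local/share/rhythmbox/rhythmdb.xml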
In your case, you could try:
locate "/home/sh/.local/share/rhythmbox/rhyth*"
or
locate '/rhyth' # equivalent to locate '*/rhyth*'
But that's not very good, is it?
Now, look at the first option in man locate:
-b, --basename
Match only the base name against the specified patterns. This
is the opposite of --wholename.
Hurray! The line:
locate -b "rhyth*"
should work as you want it to: locate a file with basename matching (in regex): ^rhyth.*
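For example (output assumed from the paths in your question; the rhythmbox directory itself would match too, since its basename also starts with rhyth):
sh@sh:~$ locate -b 'rhyth*'
/home/sh/.local/share/rhythmbox
/home/sh/.local/share/rhythmbox/rhythmdb.xml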
Hope this helps.
Edit. To answer your comment: if you want to locate all jpg files in folder /home/sh/music/, this should do:
locate '/home/sh/music/*.jpg'
(no -b here, it wouldn't work). Notice that this will show all jpg files that are in folder /home/sh/music and also in its subfolders. You could be tempted to use the -i flag (ignore case) so that you'll also find those that have the uppercase JPG extension:
locate -i '/home/sh/music/*.jpg'
Edit 2. Better to say it somewhere: the locate command works from a database, which is why it can be so much faster than find. Recently created files won't be in it yet, and files you have deleted may still show up. If you're in this situation (which might be the point of your other comment), you must update locate's database: as root, issue:
updatedb
Warning: the updatedb command might take a few minutes to complete, so don't worry if it seems slow.
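A minimal illustration of the database behaviour (the file name here is made up):
sh@sh:~$ touch ~/brand-new-file.txt
sh@sh:~$ locate brand-new-file.txt        # nothing: the database doesn't know about it yet
sh@sh:~$ sudo updatedb
sh@sh:~$ locate brand-new-file.txt
/home/sh/brand-new-file.txt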

Related

Where is libpcap.dylib?

I am able to load libpcap.dylib, which is confusing because I can't figure out the actual file location. Doing find / -name libpcap.A.dylib or libpcap.dylib says no such file.
Also, a Finder search for libpcap just turns up libpcap.A.tbd and libpcap.rb.
libpcap.A.tbd shows "Install location /usr/lib/libpcap.A.dylib", but it does not actually exist there.
I wanted to locate the actual dylib file because I'm running into an issue importing a function, so I wanted to check the file to make sure I have the function names correct.
The first thing to check is the pcap man page - from the command line, it'd be
man pcap
It's a bit long, but it should mention all the functions available in libpcap; it may be easier than
nm /usr/lib/libpcap.dylib | egrep ' T '
(and doesn't require you to remember that the leading underscores in the output of that command are NOT part of the name of the function, they're a leftover from ancient UNIX history).
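For instance, the output looks roughly like this (the addresses are made up; the function names are just typical libpcap entry points, so your build may differ):
nm /usr/lib/libpcap.dylib | egrep ' T ' | head -3
0000000000001a20 T _pcap_close
0000000000003b40 T _pcap_compile
0000000000005c80 T _pcap_open_live
Here _pcap_open_live is the symbol for the C function pcap_open_live; the leading underscore is added by the toolchain and is not part of the name you call.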
Where is libpcap.dylib?
/usr/lib/libpcap.A.dylib. /usr/lib/libpcap.dylib is a symbolic link to it.
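You can confirm that from a shell (illustrative listing; the date and permissions will differ on your system):
ls -l /usr/lib/libpcap.dylib
lrwxr-xr-x  1 root  wheel  15 Jan  1  2020 /usr/lib/libpcap.dylib -> libpcap.A.dylib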

Duplicated output when using: find `pwd` .

I am trying to find some files and get the absolute path.
If I use: find `pwd` .
I get the files with absolute paths, but I also get them again under ./
If I use: find `pwd` then I just get the files once.
Why is that happening?
Arguments given to find which precede any options, actions or arguments thereto are parsed as locations from which to start a search. (The POSIX standard doesn't require that find operate at all when not passed at least one such location, though GNU's version does so anyhow by treating . as a default starting location if none are given).
When you instruct find to start from the same location twice, by passing it two different paths that refer to the same place, you're telling it to run two separate searches starting at the same place; as long as the set of files doesn't change between the first search and the second, you get the same results twice, printed once the way each starting point was spelled.
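A quick way to see this for yourself (the directory and file here are made up for the demonstration):
$ mkdir -p /tmp/demo && cd /tmp/demo && touch a.txt
$ find "$(pwd)" .
/tmp/demo
/tmp/demo/a.txt
.
./a.txt
$ find "$(pwd)"
/tmp/demo
/tmp/demo/a.txt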

What does slash dot refer to in a file path?

I'm trying to install a grunt template on my computer but I'm having issues. I realized that perhaps something different is happening because of the path given by the Grunt docs, which is
%USERPROFILE%\.grunt-init\
What does that . mean before grunt-init?
I've tried to do the whole import manually but it also isn't working
git clone https://github.com/gruntjs/grunt-init-gruntfile.git "C:\Users\Imray\AppData\Roaming\npm\grunt-init\"
I get a message:
fatal: could not create work tree dir 'C:\Users\Imray\AppData\Roaming\npm\.grunt-init"'.: Invalid argument
Does it have to do with this /.? What does it mean?
The \ (that's a backslash, not a slash) is a directory delimiter. The . is simply part of the directory name.
.grunt-init and grunt-init are two distinct names, both perfectly valid.
On Unix-like systems, file and directory names starting with . are hidden by default, which is why you'll often see such names for things like configuration files.
The . is part of the directory name. Filenames can contain . characters. The \ is a separator between directory names.
Typically, files or directories starting with . are considered "hidden" and/or used for storing metadata. In particular, shell wildcard expansion skips over files that start with ..
For example if you wrote ls -d * then it would not show any files or directories beginning with . (including . and .., the current and parent directories).
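For instance (hypothetical directory contents):
$ ls -d *
grunt-init  notes.txt
$ ls -d * .*
.  ..  .grunt-init  grunt-init  notes.txt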
Linux hides files and directories whose names begin with a dot, unless you use the -a (for "all") option when listing directory contents. Windows doesn't follow this convention, so the name in your example is probably just a carryover.
It may well be that something behind the scenes later expects that name to match exactly. While I like tools such as installers to just do what I tell them, keeping the default value is usually the most-tested path.
Directories starting with a dot are invisible by default on xNIX systems. They are typically used for configuration files and the like in a user's home directory.
A \ before " has a special meaning on Windows: it escapes the quote, so the closing " becomes part of the path. The error occurs because Windows won't let you create a directory containing " in its name.
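A likely fix, assuming the path from your own command, is simply to drop the trailing backslash so the closing quote is no longer escaped:
git clone https://github.com/gruntjs/grunt-init-gruntfile.git "C:\Users\Imray\AppData\Roaming\npm\grunt-init"
(The docs' location %USERPROFILE%\.grunt-init\ would be written the same way, without a backslash right before the closing quote.)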

How to extract a specific folder using IZARC (IZARCe)

I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted because of an unavoidably badly maintained build system that creates the zip.
GUI tools such as WinRAR and 7-Zip have no issues extracting the files, but some command-line tools such as MKS unzip and 7za fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command-line utilities, I found that IZARC successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp @"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But, it is resulting in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact@izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!
Using relative or absolute paths for listfiles doesn't appear to work with IZArc. Try using wildcards such as *.*, *.doc, etc. instead of paths in the listfile. Be aware that there appears to be a limit on the folder depth that IZArc will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir@source.txt -aoa -scsWIN
The source.txt file may contain a combination of relative paths and/or wildcards on separate lines, such as:
Output/, Folder2/, *, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (overwrite existing files without prompt), -scsWIN (set charset for list files). You may need to adjust these commands for your situation.
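Adapted to the archive and folder from the question (paths assumed from the original command), the listfile and call might look like this:
source.txt (one pattern per line):
source/lib/*
7za x D:\aHugeZipFile.zip -o"D:\temp" -ir@source.txt -aoa -scsWIN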

Finding and Removing Unused Files Through Command Line

My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can input my list of files via the command line, search the contents of all the other files on my website, and output a list of the files that aren't mentioned anywhere in my other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed because they are mentioned somewhere in another file. Note: I don't want it to just automatically delete the unused files, I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking the command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help you guys can give!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ for orphan in $(cat orphans.txt); do
    echo "Checking for presence of ${orphan} in present directory..."
    grep -rl "$orphan" .
  done
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to grep case-insensitively. You would want to run that command in /var/www or wherever your distribution keeps its webroots. If you see a "Checking for..." line with no matching paths listed below it, then nothing else references that file.
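If you want the output to be just the names that are never referenced (as in your example), a small variation of the same idea works; this is a sketch that assumes the same orphans.txt list and that you run it from your webroot:
$ while read -r f; do
    grep -rqF --exclude=orphans.txt "$f" . || echo "$f"
  done < orphans.txt
grep -q suppresses output and just sets the exit status, -F treats the name as a fixed string so the dot in cool.jpg isn't interpreted as a regex wildcard, and --exclude=orphans.txt keeps the list file itself from counting as a reference; only names grep never finds get echoed.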
