Bash command to get directory file permissions for every file within - bash

Is it possible to write a bash command that would write to a file a (key, value) structure representing every file within a given directory and its corresponding file permissions as an octal number (e.g. 664)? I know this command returns an octal value:
stat -c '%a' /path/to/file/
but I don't know how to combine it with walking through a directory and writing the results out to a file. Also potentially useful is this command, which creates a my_md5.txt file with a similar key/value structure of hash codes...
find /path/to/file/ -type f -exec md5sum {} \; > /tmp/my_md5.txt
but I don't know how to combine the two bits of code to do what I want.
Any ideas?

You mean something like this?
find -type f -exec stat -c "%n: %a" {} \; | cut -b 3- > output.txt
Explanation:
find all files in the working directory
print each file's name and permissions
cut off the first two characters ("./") of the filename
write to the output file
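If you have GNU find, its -printf action can produce the same key/value output in one step, with no per-file stat call: %P is the path with the starting directory stripped and %m is the permission bits in octal. A minimal sketch, assuming GNU find; the directory and output file names are placeholders:
find /path/to/dir -type f -printf '%P: %m\n' > output.txt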

Related

Find, unzip and grep the content of multiple files in one step/command

First I asked a question here: Unzip a file and then display it in the console in one step
It works and helped me a lot (please read it).
Now I have a second issue. I do not have a single zipped log file; I have a lot of them in different folders, which I need to find first. The files all have the same name. For example:
/somedir/server1/log.gz
/somedir/server2/log.gz
/somedir/server3/log.gz
and so on...
What I need is a way to:
find all the files like: find /somedir/server* -type f -name log.gz
unzip the files like: gunzip -c log.gz
use grep on the content of the files
Important: the whole thing should be done in one step.
I cannot first store the extracted files in the filesystem because it is a read-only filesystem. I need to somehow connect, with pipes, the output of one command to the input of the next.
Previously, the log files were in text format (.txt), so I did not have to unzip them first. In that case it was easy:
ex.
find /somedir/server* -type f -name log.txt | xargs grep "term"
Now I have to deal with zipped files. That means that after I find the files, I first need to somehow unzip them and then send their contents to grep.
With one file I do:
gunzip -c /somedir/server1/log.gz | grep term
But for multiple files I don't know how to do it. For example, how do I pass the output of find to gunzip and then to grep?
Also, if there is another way or a "best practice" for doing this, it is welcome :)
find lets you invoke a command on the files it finds:
find /somedir/server* -type f -name log.gz -exec gunzip -c '{}' + | grep ...
From the man page:
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
{} is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the + form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
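If your system provides zgrep (it ships with most gzip installations), you can also skip the explicit gunzip step and search the compressed logs directly. A minimal sketch, reusing the paths and search term from the question:
find /somedir/server* -type f -name log.gz -exec zgrep "term" '{}' +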

How to remove files from a directory if their names are not in a text file? Bash script

I am writing a bash script and want it to tell me if the names of the files in a directory appear in a text file and if not, remove them.
Something like this:
counter=1
numFiles=$(ls -1 TestDir/ | wc -l)
while [ "$counter" -lt "$numFiles" ]
do
    if [ file in TestDir/ not in fileNames.txt ]
    then
        rm file
    fi
    ((counter++))
done
So what I need help with is the if statement, which is still pseudo-code.
You can simplify your script logic a lot:
#!/bin/bash
# iterate over every file in TestDir
for file in TestDir/*
do
    # if grep exits non-zero (the name is not found in the text file), delete the file
    grep -qx "$file" fileNames.txt || rm "$file"
done
It looks like you've got a solution that works, but I thought I'd offer this one as well, as it might still be of help to you or someone else.
find /Path/To/TestDir -type f ! -name '.*' -exec basename -a {} + | grep -xvF -f /Path/To/filenames.txt
Breakdown
find: This gets file paths in the specified directory (which would be TestDir) that match the given criteria. In this case, I've told it to return only regular files (-type f) whose names don't start with a period (! -name '.*'). It then uses its -exec action to run the next command:
basename: Given a file path (which is what find spits out), it will return the base filename only, or, more specifically, everything after the last /.
|: This is a command pipe, that takes the output of the previous command to use as input in the next command.
grep: This is a regular-expression matching utility that, in this case, is given two lists of files: one fed in through the pipe from find—the files of your TestDir directory; and the files listed in filenames.txt. Ordinarily, the filenames in the text file would be used to match against filenames returned by find, and those that match would be given as the output. However, the -v flag inverts the matching process, so that grep returns those filenames that do not match.
What results is a list of files that exist in the directory TestDir but do not appear in the filenames.txt file. These are the files you wish to delete, so you can simply use this line of code inside a command substitution $(...) to supply rm with the files it should delete.
The full command chain—after you cd into TestDir—looks like this:
rm $(find . -type f ! -name '.*' -exec basename -a {} + | grep -xvF -f filenames.txt)
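One caveat: the unquoted $(...) undergoes word splitting, so file names containing spaces would break it. A more defensive sketch, assuming filenames.txt lists one bare file name per line and the files sit directly inside TestDir (the paths are the same placeholders as above):
cd /Path/To/TestDir || exit 1
find . -maxdepth 1 -type f ! -name '.*' -exec basename -a {} + \
    | grep -xvF -f /Path/To/filenames.txt \
    | while IFS= read -r name; do
          rm -- "./$name"    # "./" guards against names beginning with "-"
      done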

Run `tmutil isexcluded` recursively

I'd like to use tmutil recursively to list all of the files currently excluded from Time Machine backups. I know that I can determine this for a single file with
tmutil isexcluded /path/to/file
but I can't seem to get this to run recursively. I have tried grepping for the excluded files and outputting to a file like this:
tmutil isexcluded * | grep -i excluded >> ~/Desktop/TM-excluded.txt
but this only outputs data for the top level of the current directory. Can I use find or a similar command to feed every file/directory on the machine to tmutil isexcluded and pull out a list of the excluded files? What is the best way to structure the command?
I'm aware that most of the exclusions can be found in
/System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions.plist
and that some app-specific exclusions are searchable via
sudo mdfind "com_apple_backup_excludeItem = 'com.apple.backupd'"
but I am looking for a way to compare the actual flags on the files to these lists.
This should do it:
find /starting/place -exec tmutil isexcluded {} + | grep -F "[Excluded]" | sed -E 's/^\[Excluded\][[:space:]]*//'
This takes advantage of the fact that tmutil allows you to pass multiple filenames, so I use + at the end of the find instead of \; and don't have to execute a new process for every single file on your machine, which could be slow. The grep looks for the fixed (-F) string [Excluded] and the sed removes the [Excluded] prefix and the whitespace that follows it.
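To capture the list in the file mentioned in the question, simply redirect the same pipeline's output to that path:
find /starting/place -exec tmutil isexcluded {} + | grep -F "[Excluded]" | sed -E 's/^\[Excluded\][[:space:]]*//' > ~/Desktop/TM-excluded.txt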
You can get all the files in any subdirectory of /path/to/master/dir with
find /path/to/master/dir -type f
Now, you cannot simply pipe that output to tmutil (it takes paths as arguments, not on stdin), so what you can do is
find /path/to/master/dir -type f -exec tmutil isexcluded {} \;
What this does is:
We know what find /path/to/master/dir -type f does.
-exec executes the command that follows it.
{} is replaced by each file from find's output, one at a time, so tmutil isexcluded runs on each file separately.
\; ends the -exec (the backslash protects the semicolon from the shell), so find knows where the command ends in case you want to add more expressions after it.

Create a shell script that reads folders in a specific folder and outputs their size and name to a file

I would like to create a shell script that reads the contents of a folder on a server that contains many folders; it should output a list of these folders with their size and, if possible, their date modified.
If you want to do it recursively (it's not clear to me from the question whether or not you do), you can do:
$ find /path/to/dir -type d -exec stat -c '%n: %s: %y' {} \;
(If you have a find which supports the feature, you can replace '\;' with '+')
Note that the %s gives the size of the directory, which is not the number of files in the directory, nor is it the disk usage of the files in the directory.
ls -l /path/to/folder | grep ^d
Try this find command to list the sub-directories and their sizes (the stat command doesn't behave the same way on macOS and Linux):
#!/bin/bash
find /your/base/dir -type d -exec du -k {} \; > sizes.txt
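If you also want the modification date the question mentions, a small loop can combine du with date. A minimal sketch, assuming GNU coreutils (there, date -r prints a file's modification time); /your/base/dir and sizes.txt are placeholders:
#!/bin/bash
for dir in /your/base/dir/*/; do
    size=$(du -sk "$dir" | cut -f1)            # total size in KB
    mtime=$(date -r "$dir" '+%Y-%m-%d %H:%M')  # last-modified timestamp (GNU date)
    printf '%s\t%s KB\t%s\n' "$dir" "$size" "$mtime"
done > sizes.txt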

How do I grab the filename of the file containing a certain string when there are hundreds of files?

I have a folder with 200 files in it. We can say that the files are named "abc0" to "abc199". Five of these files contain the string "ez123" but I don't know which ones. My current attempt to find the file names of the files that contain the string is:
#!/bin/sh
while read FILES
do
cat $FILES | egrep "ez123"
done
I have a file that contains the filenames of all files in the directory. So I then execute:
./script < filenames
This verifies for me that files containing the string exist, but I still don't have the names of the files. Any ideas on the best way to accomplish this?
Thanks
You can try:
grep -l "ez123" abc*
find /directory -maxdepth 1 -type f -exec fgrep -l 'ez123' \{\} \;
(-maxdepth 1 is only necessary if you want to search just that directory and not recurse into any subdirectories.)
fgrep is a bit faster than grep. -l lists the matched filenames only.
Try
find -type f -exec grep -qs "ez123" {} \; -print
This uses find to locate all regular files in the current directory (and its subdirectories) and execute grep on them ({} is replaced by the file name; -qs tells grep to be silent and just set an exit code); -print then prints the names of the files in which grep found a matching line.
What about:
xargs egrep -l ez123
That reads filenames from stdin and prints out the filenames with matches.
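With the filenames file from the question, that would be:
xargs egrep -l ez123 < filenames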
