Part of a script I currently use runs "ls -FCRlhLoprt" to recursively list every file inside a root directory into a text document. The problem is that every time I run the script, ls includes that document in its output, so the text document grows each time. I believe I can use -I or --ignore, but how can I use that when the ls command is built from a few variables? I keep getting errors.
ls "$lsopt" "$masroot"/ >> "$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt . #this works
If I try:
ls -FCRlhLoprt --ignore=""$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt"" "$masroot"/ >> "$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt #this does not work
I get errors. Basically, I don't want the output file itself to be included in the listing the second time I run this command.
Additionally, all I am trying to do is create an easy-to-read document of every file inside a directory, recursively. If there is a better way, please let me know.
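The immediate problem with the ls attempt is the nested double quotes, and GNU ls's --ignore matches base file names against a shell pattern rather than full paths, so passing only the file name should be enough. An untested sketch reusing the question's variables:
out="${client}_${jobnum}_${mas}_drive_contents.txt"
ls -FCRlhLoprt --ignore="$out" "$masroot"/ >> "$masroot/$out"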
To list every file in a directory recursively, the find command does exactly what you want, and admits further programmatic manipulation of the files found if you wish.
Examples:
To list every file under the current directory, recursively:
find ./ -type f
To list files under /etc/ and /usr/share, showing their owners and permissions:
find /etc /usr/share -type f -printf "%-100p %#m %10u %10g\n"
To show line counts of all files recursively, but ignoring subdirectories of .git:
find ./ -type f ! -regex ".*\.git.*" -exec wc -l {} +
To search under $masroot but ignore files generated by past searches, and dump the results into a file:
find "$masroot" -type f ! -regex ".*/[a-zA-Z]+_[0-9]+_.+_drive_contents.txt" | tee "$masroot/${client}_${jobnum}_${mas}_drive_contents.txt"
(Some of that might be slightly different on a Mac. For more information see man find.)
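If matching the exact output file is easier than writing a regular expression, find's -path test can be used instead. A sketch, assuming $masroot has no trailing slash:
find "$masroot" -type f ! -path "$masroot/${client}_${jobnum}_${mas}_drive_contents.txt" | tee "$masroot/${client}_${jobnum}_${mas}_drive_contents.txt"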
Related
I have a zip file named agent-20.1.80.8366.zip. When I extract it, it creates an additional directory inside the parent directory, like below:
agent-20.1.80.8366/agent-20.1.80.8366/files
I would like to move the files from the child directory to its parent directory, remove the then-empty child directory, and then pass that path as a variable.
Could someone please help with a bash snippet that strictly validates before proceeding and then stores the parent path in a variable?
The expected output is:
agent-20.1.80.8366/files
$ mv agent-20.1.80.8366/agent-20.1.80.8366/files agent-20.1.80.8366/files
$ rmdir agent-20.1.80.8366/agent-20.1.80.8366
This will fail if agent-20.1.80.8366/agent-20.1.80.8366 is not empty, which in my opinion is a good thing, since you are assuming it will be empty once files is moved.
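Wrapped in the kind of strict validation the question asks for, that might look something like the following sketch (paths are taken literally from the question; the variable names parent_dir and files_path are arbitrary):
parent_dir="agent-20.1.80.8366"
if [ -d "$parent_dir/$parent_dir/files" ]; then
    mv "$parent_dir/$parent_dir/files" "$parent_dir/files" &&
    rmdir "$parent_dir/$parent_dir" &&
    files_path="$parent_dir/files"
else
    echo "unexpected layout, leaving $parent_dir untouched" >&2
fi
echo "$files_path"   # prints agent-20.1.80.8366/files on success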
Using find to first move the files up one level:
find agent-20.1.80.8366/agent-20.1.80.8366 -type f -execdir mv -f '{}' .. \;
Then using find to remove the remaining directory:
find agent-20.1.80.8366 -mindepth 1 -type d -name "agent-20.1.80.8366" -delete
First I asked a question here: Unzip a file and then display it in the console in one step
It works and helped me a lot. (Please read it for context.)
Now I have a second issue. I do not have a single zipped log file; I have a lot of them in different folders, which I need to find first. The files all have the same name. For example:
/somedir/server1/log.gz
/somedir/server2/log.gz
/somedir/server3/log.gz
and so on...
What I need is a way to:
find all the files like: find /somedir/server* -type f -name log.gz
unzip the files like: gunzip -c log.gz
use grep on the content of the files
Important: all of this should be done in one step.
I cannot store the extracted files in the filesystem first, because it is a read-only filesystem. I need to somehow connect, with pipes, the output of one command to the input of the next.
Before, the log files were in text format (.txt), so I did not have to unzip them first. In that case it was easy, for example:
find /somedir/server* -type f -name log.txt | xargs grep "term"
Now I have to deal with zipped files. That means that after I find the files, I first need to somehow unzip them and then send their contents to grep.
With one file I do:
gunzip -c /somedir/server1/log.gz | grep term
But for multiple files I don't know how to do it. For example, how do I pass the output of find to gunzip and then to grep?
Also, if there is another way or a "best practice" for doing this, suggestions are welcome. :)
find lets you invoke a command on the files it finds:
find /somedir/server* -type f -name log.gz -exec gunzip -c '{}' + | grep ...
From the man page:
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
{} is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the + form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
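As a side note, the zgrep wrapper that ships with gzip combines the decompress-and-search step, if it is available on the system; a sketch, with -H passed through to grep so each match is labelled with the file it came from:
find /somedir/server* -type f -name log.gz -exec zgrep -H "term" '{}' +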
There is a directory with some folders like a, b, c...
In every folder there are some text files whose contents I need to get.
I've already tried to write a script like
for i in `ls`;
do
cd $i ;
cat * ;
done
But it doesn't work (I know why, but I don't know how to do it properly)
You shouldn't parse the output of ls. Instead use the find command to get all your files.
If you want to display the content of all regular files in the current directory and all its subdirectories, use this command:
find -type f -exec cat {} \;
If you have a lot of subdirectories, you may want to restrict the depth level with the option -maxdepth.
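For example, to read only the files in the current directory and its immediate subdirectories, something like this should work with GNU find:
find . -maxdepth 2 -type f -exec cat {} +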
I have a complicated scenario. In my current working directory, I have several subdirectories. Each subdirectory has a number of files, but I'm only interested in one: RAxML_bestTree.best. The file name is the same in every subdirectory, i.e., the names are not unique. Thus, copying them all into a new subdirectory will not work, since the single RAxML_bestTree.best there would just be overwritten 514 times.
I need to take the content of each subdirectory's RAxML_bestTree.best and place it into a file all_RAxML_bestTrees.txt, either in the current working directory or in a new subdirectory. I have tried the following, which appears to print the contents to the screen but not to a file:
find . -type f -name \RAxML_bestTree.best -exec cat {} all_RAxML_bestTrees.txt \;
Never mind, I found my issue:
find . -type f -name RAxML_bestTree.best -exec cat {} + > all_RAxML_bestTrees.txt
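If it is useful to see which subdirectory each tree came from, a variation along these lines prefixes every file's contents with its path (the == markers are just an arbitrary choice):
find . -type f -name RAxML_bestTree.best -exec sh -c 'printf "== %s ==\n" "$1"; cat "$1"' _ {} \; > all_RAxML_bestTrees.txt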
I am horrible at writing bash scripts, but I'm wondering if it's possible to recursively loop through a directory and rename all the files in it to "1.png", "2.png", etc., restarting at 1 for every new folder it enters. Here's a script that works, but only for one directory.
cd ./directory
cnt=1
for fname in *
do
    mv "$fname" "${cnt}.png"
    cnt=$(( $cnt + 1 ))
done
Thanks in advance
EDIT
Can anyone actually write this code out? I have no idea how to write bash, and it's very confusing to me.
Using find is a great idea. You can use find with the following syntax to find all the directories inside your directory and apply your script to each of them:
find /directory -type d -exec yourscript.sh {} \;
The -type d parameter means you want to find only directories.
The -exec yourscript.sh {} \; part runs your script for every directory found and passes the directory name to it as a parameter.
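Since the edit asks for the code to be written out in full, here is a rough sketch that does the renaming inline rather than through a separate script (assuming bash; ./directory and the .png extension are taken from the question):
find ./directory -type d | while read -r dir; do
    cnt=1
    for fname in "$dir"/*; do
        [ -f "$fname" ] || continue    # rename regular files only, skip subdirectories
        mv "$fname" "$dir/${cnt}.png"
        cnt=$(( cnt + 1 ))
    done
done
This restarts the counter at 1 in every directory. It assumes directory names contain no newlines and that the generated numeric names do not collide with files that are already there.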
Use find(1) to get a list of files, and then do whatever you like with that list.