Listing all files and folders recursively via Terminal - macOS

I have a folder containing about 2000 sub-folders, some containing even more sub-folders and some with files. I've used this code via Terminal:
cd /path/to/folder
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
to return a very nice recursive list of all the sub-folders, but it does not list the files within those folders. Does anyone know how to amend this code so it produces the recursive folder list and includes the files?
For reasons that aren't worth getting into I'm limited to using Terminal on this computer and can't try a different method using C# or Java. Any help is appreciated.

How about using find like this:
find $PWD
Or using this alias:
alias stree='ls -R | grep : | sed -e '\''s/:$//'\'' -e '\''s/[^-][^\/]*\//--/g'\'' -e '\''s/^/ /'\'' -e '\''s/-/|/'\'''
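If you want the indented tree to include the files as well, the same sed trick can be applied to find output instead of ls -R. A minimal sketch along the lines of the alias above (not a polished tool; the spacing is approximate):
find . | sed -e 's/[^-][^\/]*\//  |/g' -e 's/|\([^ ]\)/|-\1/'
Note that BSD find on macOS requires an explicit starting path, hence the leading dot.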

Related

Can't figure out how inotifywait excluding works, can somebody explain?

I have the following piece of code in bash:
inotifywait -r -m -e modify -e moved_to -e moved_from -e move -e move_self -e create -e delete -e delete_self $list_of_folders |
while read path action file; do
message=$myip%0A$action%0A$path$file
message
done
I don't know how I can exclude files named "logs.txt" and ".git" folders from the watch; can anyone explain how excluding works in inotifywait?
P.S. The logs.txt files and .git folders may be in any location; I don't know exactly where.
Based on John1024's comment, I believe you need to use --exclude 'logs\.txt|\.git' in your inotifywait command like this:
inotifywait --exclude 'logs\.txt|\.git' -r -m -e modify -e moved_to -e moved_from -e move -e move_self -e create -e delete -e delete_self $list_of_folders | # ...
This option takes a regular expression that is used to ignore events whose path matches it. In this case, the regular expression logs\.txt|\.git matches any path containing logs.txt or (|) .git. The backslashes escape the dots, which would otherwise match any single character in a regular expression.
This exclusion seems to work well for any location of the logs.txt file or .git directory. And it even excludes events from descendants of the .git directory.
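A quick way to sanity-check the pattern before wiring it into inotifywait is to run the same extended regex over some sample paths with grep -E (the paths below are made up):
echo './project/.git/HEAD' | grep -E 'logs\.txt|\.git'    # matches: would be excluded
echo './deep/dir/logs.txt' | grep -E 'logs\.txt|\.git'    # matches: would be excluded
echo './src/main.sh' | grep -E 'logs\.txt|\.git'          # no match: still watched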

Bash script to separate files into directories, reverse sort and print in an HTML file works on some files but not others

Goal
Separate files into directories according to their filenames, run a Bash script that reverse sorts them and assembles the content into one file (I know steps to achieve this are already documented on Stack Overflow, but please keep reading...)
Problem
Scripts work on all files but two
State
Root directory
dos-18-1-18165-03-for-sql-server-2012---15-june-2018.html
dos-18-1-18165-03-for-sql-server-2016---15-june-2018.html
dos-18-1-18176-03-for-sql-server-2012---10-july-2018.html
dos-18-1-18197-01-for-sql-server-2012---23-july-2018.html
dos-18-1-18197-01-for-sql-server-2016---23-july-2018.html
dos-18-1-18232-01-for-sql-server-2012---21-august-2018.html
dos-18-1-18232-01-for-sql-server-2016---21-august-2018.html
dos-18-1-18240-01-for-sql-server-2012---5-september-2018.html
dos-18-1-18240-01-for-sql-server-2016---5-september-2018.html
dos-18-2-release-notes.html
dos-18-2-known-issues.html
Separate the files into directories according to their SQL Server version or name
ls | grep "^dos-18-1.*2012.*" | xargs -i cp {} dos181-2012
ls | grep "^dos-18-1.*2016.*" | xargs -i cp {} dos181-2016
ls | grep ".*notes.*" | xargs -i cp {} dos-18-2-release-notes
ls | grep ".*known.*" | xargs -i cp {} dos-18-2-known-issues
Result (success)
/dos181-2012:
dos-18-1-18165-03-for-sql-server-2012---15-june-2018.html
dos-18-1-18176-03-for-sql-server-2012---10-july-2018.html
dos-18-1-18197-01-for-sql-server-2012---23-july-2018.html
dos-18-1-18232-01-for-sql-server-2012---21-august-2018.html
dos-18-1-18240-01-for-sql-server-2012---5-september-2018.html
/dos181-2016:
dos-18-1-18165-03-for-sql-server-2016---15-june-2018.html
dos-18-1-18197-01-for-sql-server-2016---23-july-2018.html
dos-18-1-18232-01-for-sql-server-2016---21-august-2018.html
dos-18-1-18240-01-for-sql-server-2016---5-september-2018.html
/dos-18-2-known-issues:
dos-18-2-known-issues.html
/dos-18-2-release-notes:
dos-18-2-release-notes.html
Variables (all follow this pattern)
dos181-2012.sh
file="dos181-2012"
export
dos-18-2-known-issues.sh
file="dos-18-2-known-issues"
export
Reverse sort and assemble (assumes /$file exists; after testing all lines of code I believe this is where the problem lies):
cat $( ls "$file"/* | sort -r ) > "$file"/"$file".html
Result (success and failure)
dos181-2012.html has the correct content in the correct order.
dos-18-2-known-issues.html is empty.
What I have tried
I tried to ignore the two files in the command:
cat $( ls "$file"/* -i (grep ".*known.*" ) | sort -r ) > "$file"/"$file".html
Result: The opposite occurs
dos181-2012.html is empty
dos-18-2-known-issues.html is not empty
I am completely baffled. Why do these scripts work on some files but not others? (I can share more information about the file contents if that will help, but the file contents are nearly identical.) Thank you for any insights.
First off, your question is quite incomplete. You start out great, showing the input files and directories, but then you talk about variables and $file without showing the code they originate from. So I based my answer on the explanation in the first paragraph and what I could deduce from the rest of the question.
I did this:
#!/bin/bash
cp /etc/hosts dos-18-1-18165-03-for-sql-server-2012---15-june-2018.html
cp /etc/hosts dos-18-1-18165-03-for-sql-server-2016---15-june-2018.html
cp /etc/hosts dos-18-1-18176-03-for-sql-server-2012---10-july-2018.html
cp /etc/hosts dos-18-1-18197-01-for-sql-server-2012---23-july-2018.html
cp /etc/hosts dos-18-1-18197-01-for-sql-server-2016---23-july-2018.html
cp /etc/hosts dos-18-1-18232-01-for-sql-server-2012---21-august-2018.html
cp /etc/hosts dos-18-1-18232-01-for-sql-server-2016---21-august-2018.html
cp /etc/hosts dos-18-1-18240-01-for-sql-server-2012---5-september-2018.html
cp /etc/hosts dos-18-1-18240-01-for-sql-server-2016---5-september-2018.html
cp /etc/hosts dos-18-2-release-notes.html
cp /etc/hosts dos-18-2-known-issues.html
DIRS='dos181-2012 dos181-2016 dos-18-2-release-notes dos-18-2-known-issues'
for DIR in $DIRS
do
if [ ! -d $DIR ]
then
mkdir $DIR
fi
done
cp dos-18-1*2012* dos181-2012
cp dos-18-1*2016* dos181-2016
cp *notes* dos-18-2-release-notes
cp *known* dos-18-2-known-issues
for DIR in $DIRS
do
/bin/ls -1r $DIR > $DIR.html
done
The cp commands are just to create the files with something in them.
You did not specify how the directory names were produced, so I went with the easy option and listed them in a variable ($DIRS). These could be built based on the filenames, but you did not mention that.
Then created the directories (first for).
Then come the 4 cp commands. Your code is very complicated for something so basic: the shell expands wildcards before cp (or rm, mv, ls, ...) ever sees them, so there is no need for a complex grep and xargs pipeline to copy files around.
Finally, the last for loop lists the files (ls), one per line (-1, strictly output formatting), in reverse sort order (-r). The output of that ls is redirected to a ".html" file named after the directory.
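If the goal is to assemble the file contents rather than just list the names, a minimal sketch along the same lines (reusing the $DIRS variable, as an alternative to the last loop; the combined page is written next to the directory rather than inside it, so a re-run does not pick up its own output):
for DIR in $DIRS
do
    # concatenate each directory's pages in reverse name order into one file
    cat $(/bin/ls -1r "$DIR"/*) > "$DIR.html"
done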

How to pipe a specific file to a find command?

I am trying to redirect a specific file to the "find" command in bash shell by using the below command.
ls sample.txt | find -name "*ple*"
I would like to search for the substring ple in the filename sample.txt that I passed, but the above command checks for matches against all the files in the directory; it does not search the specific filename I passed through the pipe.
You need to use grep, not find; find crawls the file system and ignores its standard input.
If you are looking for the sub string "ple" in the filename "sample.txt", then this code would do what you need:
mkdir -p /tmp/holder-file/ && cp /directory/sample.txt /tmp/holder-file/ && ls /tmp/holder-file/ | grep -e "ple"
Not super elegant but works great.
Edit: Thanks to Benjamin W. who came with a more elegant solution:
grep -e 'ple' <<< '/directory/sample.txt'
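The here-string feeds the name to grep's standard input just as a pipe would, so an equivalent pipe form (handy when the names come from another command) is:
printf '%s\n' '/directory/sample.txt' | grep -e 'ple'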

Sed with Xargs cannot open passed file (Cygwin)

Trying to use the beauty of Sed so I don't have to manually update a few hundred files. I'll note my employer only allows use of Win8 (joy), so I use Cygwin all day until I can use my Linux boxes at home.
The following works on a Linux (bash) command line, but not Cygwin
> grep -lrZ "/somefile.js" . | xargs -0 -l sed -i -e 's|/somefile.js|/newLib.js|g'
sed: can't read ./testTarget.jsp: No such file or directory
# works
> sed -i 's|/somefile.js|/newLib.js|g' ./testTarget.jsp
So the command by itself works, but not passed through Xargs. And, before you say to use Perl instead of Sed, the Perl equivalent throws the very same error
> grep -lrZ "/somefile.js" . | xargs -0 perl -i -pe 's|/somefile.js|/newLib.js|g'
Can't open ./testTarget.jsp
: No such file or directory.
Use the xargs -n option to split up the arguments and force separate calls to sed.
On Windows using GnuWin tools (not Cygwin) I found that I needed to split up the input to sed. By default xargs passes ALL of the files from grep to one call of sed.
Let's say you have 4 files that match your grep call; the sed command will then run through xargs like this:
sed -i -e 's|/somefile.js|/newLib.js|g' ./file1 ./file2 ./subdir/file3 ./subdir/file4
If the number of files is too large, sed will give you this error.
Use the -n option to have xargs call sed repeatedly until it exhausts all of the arguments (and drop -l, which is mutually exclusive with -n in GNU xargs):
grep -lrZ "/somefile.js" . | xargs -0 -n 2 sed -i -e 's|/somefile.js|/newLib.js|g'
In my small example using -n 2 will internally do this:
sed -i -e 's|/somefile.js|/newLib.js|g' ./file1 ./file2
sed -i -e 's|/somefile.js|/newLib.js|g' ./subdir/file3 ./subdir/file4
I had a large set of files and directories (around 3000 files), and using xargs -n 5 worked great.
When I tried -n 10 I got errors. Using xargs --verbose I could see some of the command-line calls were getting cut off at around 500 characters. So you may need to make -n smaller depending on the path length of the files you are working with.
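One way to preview how xargs batches the arguments without modifying any files is to prefix the command with echo (the file names below are made up):
printf '%s\0' ./file1 ./file2 ./subdir/file3 ./subdir/file4 | xargs -0 -n 2 echo sed -i -e 's|/somefile.js|/newLib.js|g'
This prints the two sed command lines from the example above instead of executing them.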

Best way to do a find/replace in several files?

What's the best way to do this? I'm no command-line warrior, but I was thinking there's possibly a way of using grep and cat.
I just want to replace a string that occurs in files throughout a folder and its sub-folders. I'm running Ubuntu if that matters.
I'll throw in another example for folks using ag, The Silver Searcher to do find/replace operations on multiple files.
Complete example:
ag -l "search string" | xargs sed -i '' -e 's/from/to/g'
If we break this down, what we get is:
# returns a list of files containing matching string
ag -l "search string"
Next, we have:
# read the newline-delimited file list from the pipe and append it
# as arguments to the command that follows
xargs
Finally, the string replacement command:
# -i '' means edit files in place, and the empty '' means do not
# create a backup (BSD/macOS sed syntax; GNU sed takes a bare -i)
# -e 's/from/to/g' specifies the command to run: a global search and replace
sed -i '' -e 's/from/to/g'
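One caveat with the newline-delimited pipe above: xargs splits file names on whitespace. If any matching names contain spaces, a null-delimited variant avoids that (this assumes ag's -0/--null option, paired with xargs -0; the sed part is unchanged):
ag -0 -l "search string" | xargs -0 sed -i '' -e 's/from/to/g'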
find . -type f -print0 | xargs -0 -n 1 sed -i -e 's/from/to/g'
The first part of that is a find command to find the files you want to change. You may need to modify that appropriately. The xargs command takes every file the find found and applies the sed command to it. The sed command takes every instance of from and replaces it with to. That's a standard regular expression, so modify it as you need.
If you are using svn, beware: your .svn directories will be searched and replaced as well. You have to exclude those, e.g., like this:
find . ! -regex ".*[/]\.svn[/]?.*" -type f -print0 | xargs -0 -n 1 sed -i -e 's/from/to/g'
or
find . -name .svn -prune -o -type f -print0 | xargs -0 -n 1 sed -i -e 's/from/to/g'
As Paul said, you want to first find the files you want to edit and then edit them. An alternative to using find is to use GNU grep (the default on Ubuntu); its -Z flag prints NUL-terminated file names, which is what xargs -0 expects, e.g.:
grep -rlZ from . | xargs -0 -n 1 sed -i -e 's/from/to/g'
You can also use ack-grep (sudo apt-get install ack-grep or visit http://petdance.com/ack/) as well, if you know you only want a certain type of file, and want to ignore things in version control directories. e.g., if you only want text files,
ack -l --print0 --text from | xargs -0 -n 1 sed -i -e 's/from/to/g'
# `from` here is an arbitrary commonly occurring keyword
An alternative to using sed is to use perl which can process multiple files per command, e.g.,
grep -r -l from . | xargs perl -pi.bak -e 's/from/to/g'
Here, perl is told to edit in place, making a .bak file first.
You can combine any of the left-hand sides of the pipe with the right-hand sides, depending on your preference.
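The .bak copies are useful for reviewing the changes; once the results are verified, they can be removed in one pass (a small sketch using find's -delete, available in GNU find):
find . -type f -name '*.bak' -delete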
An alternative to sed is using rpl (e.g. available from http://rpl.sourceforge.net/ or your GNU/Linux distribution), like rpl --recursive --verbose --whole-words 'F' 'A' grades/
For convenience, I took Ulysse's answer (after correcting the undesirable error printing) and turned it into a .zshrc / .bashrc function:
function find-and-replace() {
ag -l "$1" | xargs sed -i -e s/"$1"/"$2"/g
}
Usage: find-and-replace Foo Bar
The typical (find|grep|ack|ag|rg)-xargs-sed combination has a few problems:
Difficult to remember and get correct. E.g., forgetting the xargs -r option will run the command even when no files are found, potentially causing problems.
Retrieving the file list and performing the actual replacement use different CLI tools and can have different search behaviour.
For an operation as invasive and dangerous as a recursive search-and-replace, these problems were big enough to start the development of a dedicated tool: mo.
Early tests seem to indicate that its performance is between ag and rg, and it solves the following problems I encountered with them:
A single invocation can filter on filename and content. The following command searches for the word bug in all source files that have a v1 indication:
mo -f 'src/.*v1.*' -p bug -w
Once the search results look OK, the actual replacement of bug with fix can be added:
mo -f 'src/.*v1.*' -p bug -w -r fix
comment() {
    :  # no-op: a bash function body cannot be empty; calls act as inline annotations
}
doc() {
    :  # no-op: used to attach a usage string to a function
}
function agr {
doc 'usage: from=sth to=another agr [ag-args]'
comment -l --files-with-matches
ag -0 -l "$from" "$@" | pre-files "$from" "$to"
}
pre-files() {
doc 'stdin should be null-separated list of files that need replacement; $1 the string to replace, $2 the replacement.'
comment '-i backs up original input files with the supplied extension (leave empty for no backup; needed for in-place replacement.)(do not put whitespace between -i and its arg.)'
comment '-r, --no-run-if-empty
If the standard input does not contain any nonblanks,
do not run the command. Normally, the command is run
once even if there is no input. This option is a GNU
extension.'
AGR_FROM="$1" AGR_TO="$2" xargs -r0 perl -pi.pbak -e 's/$ENV{AGR_FROM}/$ENV{AGR_TO}/g'
}
You can use it like this:
from=str1 to=sth agr path1 path2 ...
Supply no paths to make it use the current directory.
Note that ag, xargs, and perl need to be installed and on PATH.
