I have a list of Python files scattered over a directory structure that looks like this:
top\py\
    html\
        docs.html
    foo\
        v1\
            a.py
            b.py
    bar\
        v1\
            c.py
            d.py
    baz\
        aws\
            sess.py
What I'm trying to do here is extract the root directory names of any directories within top/py that contain Python files (so foo, bar and baz) using a shell script. Note that there may be files other than Python code here, and I do not want those included in the output. I'll admit I'm somewhat of a novice at shell scripting, and this is what I've come up with so far:
find top/py/. -name *py -exec echo {} \; -exec sh -c "echo $# | grep -o '[^/]*' | sed -n 3p" {} \;
However, the output isn't quite right. This is what I'm getting:
top/py/./foo/v1/a.py
foo
top/py/./foo/v1/b.py
foo
top/py/./bar/v1/c.py
foo
top/py/./bar/v1/d.py
foo
top/py/./baz/aws/sess.py
foo
It appears that the inner variable to the grep is not being updated but I'm not sure what to do about it. Any help would be appreciated.
If your files don't contain newlines in their names:
find top/py -name '*.py' | awk -F / '1; { print $3 }'
Otherwise:
find top/py -name '*.py' -exec sh -c '
for py_path; do
pk_path=${py_path%/"${py_path#*/*/*/}"}
pk_name=${pk_path##*/}
printf '\''%s\n'\'' "$py_path" "$pk_name"
done' sh {} +
To take the nth path component instead of the third, use n copies of */ in place of */*/*/ (and $n instead of $3 in the awk version).
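Since only the unique names are wanted (foo, bar and baz each appear more than once in the raw output), you can also deduplicate the simple variant with sort -u. A sketch that rebuilds the sample tree from the question first, for illustration:

```shell
# Recreate the sample tree from the question (illustration only)
mkdir -p top/py/html top/py/foo/v1 top/py/bar/v1 top/py/baz/aws
touch top/py/html/docs.html \
      top/py/foo/v1/a.py top/py/foo/v1/b.py \
      top/py/bar/v1/c.py top/py/bar/v1/d.py \
      top/py/baz/aws/sess.py

# Third path component of each .py path, deduplicated
find top/py -name '*.py' | awk -F/ '{ print $3 }' | sort -u
# Prints: bar, baz, foo (one per line)
```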
I have the following problem.
There are two nested folders A and B. They are mostly identical, but B has a few files that A does not. (These are two mounted rootfs images).
I want to create a shell script that does the following:
Find out which files are contained in B but not in A.
copy the files found in 1. from B and create a tar.gz that contains these files, keeping the folder structure.
The goal is to import the additional data from image B afterwards on an embedded system that contains the contents of image A.
For the first step I put together the following code snippet. Note on the grep for "Nur": "Nur in" = "Only in" (German diff output):
diff -rq <A> <B>/ 2>/dev/null | grep Nur | awk '{print substr($3, 1, length($3)-1) "/" substr($4, 1, length($4)-1)}'
The result is the output of the paths relative to folder B.
I have no idea how to implement the second step. Can someone give me some help?
Using diff for finding files which don't exist is severe overkill; you are doing a lot of calculations to compare the contents of the files, where clearly all you care about is whether a file name exists or not.
Maybe try this instead.
tar zcf newfiles.tar.gz $(comm -13 <(cd A && find . -type f | sort) <(cd B && find . -type f | sort) | sed 's/^\./B/')
The find commands produce a listing of the file name hierarchies; comm -13 extracts the elements which are unique to the second input file (which here isn't really a file at all; we are using the shell's process substitution facility to provide the input) and the sed command adds the path into B back to the beginning.
Passing a command substitution $(...) as the argument to tar is problematic; if there are a lot of file names, you will run into "command line too long", and if your file names contain whitespace or other irregularities in them, the shell will mess them up. The standard solution is to use xargs but using xargs tar cf will overwrite the output file if xargs ends up calling tar more than once; though perhaps your tar has an option to read the file names from standard input.
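GNU tar does have such an option: --files-from=- (or -T -) reads the name list from standard input, and with --null it reads NUL-delimited names, which sidesteps both the length limit and whitespace mangling. A sketch, assuming GNU tar, GNU comm/sort (for -z) and a shell with process substitution such as bash; the file names are hypothetical:

```shell
# Hypothetical sample trees (illustration only)
mkdir -p A B/sub
touch A/common B/common B/only-in-b B/sub/also-new

# Names unique to B, NUL-delimited, piped straight into tar:
# no command substitution, no argument-length limit, no mangled names
comm -z -13 <(cd A && find . -type f -print0 | sort -z) \
            <(cd B && find . -type f -print0 | sort -z) \
  | tar -czf newfiles.tar.gz -C B --null -T -
```

The -C B makes tar resolve the ./… names relative to B, so the archive keeps the folder structure relative to B, matching the original goal.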
With find:
$ mkdir -p A B
$ touch A/a A/b
$ touch B/a B/b B/c B/d
$ cd B
$ find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print
./c
./d
The idea is to use the exec action with a shell script that tests the existence of the current file in the other directory. There are a few subtleties:
The first argument of sh -c is the script to execute, the second (here _ but could be anything else) corresponds to the $0 positional parameter of the script and the third ({}) is the current file name as set by find and passed to the script as positional parameter $1.
The -print action at the end is needed, even if it is normally the default with find, because the use of -exec cancels this default.
Example of use to generate your tarball with GNU tar:
$ cd B
$ find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print > ../list.txt
$ tar -c -v -f ../diff.tar --files-from=../list.txt
./c
./d
Note: if you have unusual file names the --verbatim-files-from GNU tar option can help. Or a combination of the -print0 action of find and the --null option of GNU tar.
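For example, the -print0 / --null combination might look like this (a sketch, assuming GNU find and GNU tar, with the same sample trees):

```shell
# Sample trees (illustration only)
mkdir -p A B
touch A/a A/b B/a B/b B/c B/d

cd B
# -print0 emits NUL-terminated names; tar's --null reads them back intact
find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print0 \
  | tar -cf ../diff.tar --null --files-from=-
cd ..

tar -tf diff.tar | sort   # lists ./c then ./d
```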
Note: if the shell is POSIX (e.g., bash) you can also run find from the parent directory and get the path of the files relative from there, if you prefer:
$ mkdir -p A B
$ touch A/a A/b
$ touch B/a B/b B/c B/d
$ find B -type f -exec sh -c '[ ! -f A"${1#B}" ]' _ {} \; -print
B/c
B/d
I'm having trouble with trying to achieve this bash command:
Concatenate all the text files in the current directory that have at least one occurrence of the word BOB (in any case) within the text of the file.
Is it correct for me to use the cat command and then grep to find the occurrences of the word BOB?
cat grep -i [BOB] *.txt > catFile.txt
To handle filenames with whitespace characters correctly:
grep --null -l -i "BOB" *.txt | xargs -0 cat > catFile.txt
Your issue was the need to pass grep's file names to cat via command substitution:
cat $(grep --null -l -i "BOB" *.txt ) > catFile.txt
$(...) performs command substitution: it runs the enclosed command and substitutes its output in place
-l returns only filenames of the things that matched
You could use find with -exec:
find -maxdepth 1 -name '*.txt' -exec grep -qi 'bob' {} \; \
-exec cat {} + > catFile.txt
-maxdepth 1 makes sure you don't search any deeper than the current directory
-name '*.txt' says to look at all files ending with .txt – for the case that there is also a directory ending in .txt, you could add -type f to only look at files
-exec grep -qi 'bob' {} \; runs grep for each .txt file found. If bob is in the file, the exit status is zero and the next directive is executed. -q makes sure the grep is silent.
-exec cat {} + runs cat on all the files that contain bob
You need to remove the square brackets...
grep -il "BOB" *
You can also use the following command that you must run from the directory containing your BOB files.
grep -il BOB *.in | xargs cat > BOB_concat.out
-i is an option used to set grep in case insensitive mode
-l will be used to output only the filename containing the pattern provided as argument to grep
*.in is used to find all the input files in the dir (should be adapted to your folder content)
then you pipe the result of the command to xargs in order to build the arguments that cat will use to produce your file concatenation.
HYPOTHESIS:
Your folder contains only files whose names have no unusual characters (e.g. spaces)
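If that hypothesis doesn't hold, GNU grep can print NUL-terminated names with -Z (a.k.a. --null), which pairs with xargs -0. A sketch with hypothetical sample files:

```shell
# Hypothetical input files (illustration only; note the spaces)
printf 'hello from bob\n' > 'file one.in'
printf 'nothing here\n'   > 'file two.in'

# -Z ends each listed name with NUL, so xargs -0 reconstructs
# names with spaces (or even newlines) intact
grep -ilZ BOB *.in | xargs -0 cat > BOB_concat.out

cat BOB_concat.out   # → hello from bob
```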
I'm in zsh.
I'd like to do something like:
find . -iname *.md | xargs cat && echo "---" > all_slides_with_separators_in_between.md
Of course this cats all the slides, then appends a single "---" at the end instead of after each slide.
Is there an xargs way of doing this? Can I replace cat && echo "---" with some inline function or do block?
Very strangely, when I create a file cat---.sh with the contents
cat $1
echo ---
and run
find . -iname *.md | xargs ./cat---.sh
it only executes for the first result of find.
Replace cat---.sh with cat and it runs on both files.
There's no need to use xargs at all here. Following is a properly paranoid approach (robust against files with spaces, files with newlines, files with literal backslashes in their names, etc):
while IFS= read -r -d '' filename; do
printf '---\n'
cat -- "$filename"
done < <(find . -iname '*.md' -print0) >all_slides_with_separators.md
However -- you don't even need that either: find can do all the work itself, both printing the separator and calling cat!
find . -iname '*.md' -printf '---\n' -exec cat -- '{}' ';' >all_slides_with_separators.md
A common usage pattern is xargs sh -c 'command; another' _ where the entire shell script in the quotes will have access to the command-line arguments. The underscore is because the first argument to sh -c will be assigned to $0 (where you'd often see e.g. -sh in a ps listing).
find . -iname '*.md' |
xargs sh -c 'for x; do
cat "$x" && echo "---"
done' _ > all_slides_with_separators_in_between.md
As noted in the comments, you should probably investigate find -print0 and the corresponding xargs -0 option in GNU find (and maybe install it if you don't have it).
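With those two options the same pattern becomes robust against whitespace in file names. A sketch, assuming a find that supports -print0 and an xargs with -0 (both in GNU findutils); the sample files are hypothetical:

```shell
# Hypothetical sample slides (illustration only)
printf 'slide A\n' > a.md
printf 'slide B\n' > 'b with space.md'

# NUL-delimited names survive the pipe unmangled
find . -iname '*.md' -print0 |
  xargs -0 sh -c 'for x; do
    cat "$x" && echo "---"
  done' _ > slides.out
```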
You can do something like this, but it can be insecure in some cases (see comments):
find . -iname '*.md' | xargs -I % sh -c '{ cat %; echo "----"; }' > output.txt
You'll rarely need find in zsh; its globbing facilities cover nearly every use case of find.
for f in (#i)**/*.md; do
cat $f
print -- "---"
done > all_slides.md
This looks in the current directory hierarchy for every file that matches *.md in a case-insensitive manner.
For even more efficiency, replace cat $f with < $f; zsh itself will read the file and write its contents to standard output.
Using GNU Parallel it looks like this:
parallel cat {}\; print -- --- ::: **/*.md
I have 35K-odd files in multiple directories and subdirectories. Among them are 1000-odd files (.c, .h and other file names) containing the unique string "FOO" in the FILE CONTENT. I am trying to copy only those files (with the directory structure preserved) to a different directory 'MEOW'. Can someone look into my bash script and correct my mistake?
for Eachfile in `find . -iname "*FOO*" `
do
cp $Eachfile MEOW
done
getting the following error
./baash.sh: line 2: syntax error near unexpected token `$'do\r''
'/baash.sh: line 2: `do
To find all files under the current directory with the string "FOO" in them and copy them to directory MEOW, use:
grep --null -lr FOO . | xargs -0 cp -t MEOW
Unlike find, grep looks in the contents of a file. The -l option to grep tells it to list file names only. The -r option tells it to look recursively in subdirectories. Because file names can have spaces and other odd characters in them, we give the --null option to grep so that, in the list it produces, the file names are safely separated by null characters. xargs -0 reads from that list of null-separated file names and provides them as argument to the command cp -t MEOW which copies them to the target directory MEOW.
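The question also asked to preserve the directory structure, which plain cp flattens. GNU cp has a --parents option that recreates each source path under the target directory. A sketch with hypothetical file names, assuming GNU grep and GNU cp:

```shell
# Hypothetical source tree (illustration only)
mkdir -p src/sub MEOW
printf 'contains FOO\n' > src/a.c
printf 'no match\n'     > src/sub/b.c
printf 'FOO again\n'    > src/sub/c.h

# --parents recreates src/... under MEOW instead of flattening
grep --null -lr FOO src | xargs -0 cp --parents -t MEOW
```

Afterwards the matches live at MEOW/src/a.c and MEOW/src/sub/c.h, mirroring the original layout.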
In case you only want to search for the string FOO in .c and .h files, then:
find . \( -name '*.c' -o -name '*.h' \) -exec grep -l FOO {} + | xargs cp -t MEOW/
(Note the parentheses: without them, -exec would bind only to the second -name because -o has lower precedence than the implied -a.)
For me it works even without NUL-delimited names; if it doesn't for you, add --null to the grep and -0 to the xargs part as follows:
xargs -0 cp -t MEOW/
for file in $(find . \( -name '*.c' -o -name '*.h' \) -exec grep -l 'FOO' {} \;); do
    dname=$(dirname "$file")
    dest="MEOW/$dname"
    mkdir -p "$dest"
    cp "$file" "$dest"
done
Basically I have a directory and sub-directories that needs to be scanned to find .csv files. From there I want to copy all lines containing "foo" from the csv's found to new files (in the same directory as the original) but with the name reflecting the file it was found in.
So far I have
find -type f -name "*.csv" | xargs egrep -i "foo" > foo.csv
which yields a single backup file (foo.csv) with everything in it, and each line prefixed with the file it was found in. I don't want either of those.
What I want:
For example if I have:
csv1.csv
csv2.csv
and they both have lines containing "foo", I would like those lines copied to:
csv1_foo.csv
csv2_foo.csv
and I don't want anything extra entered in the backups, other than the full line containing "foo" from the original file. I.e. I don't want the original file name in the backup data, which is what my current code does.
Also, I suppose I should note that I'm using egrep, but my example doesn't use regex. I will be using regex in my search when I apply it to my specific scenario, so this probably needs to be taken into account when naming the new file. If that seems too difficult, an answer that doesn't account for regex would be fine.
Thanks ahead of time!
Try this; hope it helps:
find -type f -name "*.csv" | xargs -I {} sh -c 'filen=$(echo {} | sed "s/\.csv$//" | sed "s/^\.\///") && egrep -i "foo" {} > "${filen}_foo.csv"'
You can try this:
$ find . -type f -exec grep -H foo '{}' \; | perl -ne '`echo $2 >> $1_foo` if /(.*):(.*)/'
It uses:
find to iterate over files
grep to print file path:line tuples (-H switch)
perl to echo those lines to the output files (using backticks, though it could be done more cleanly).
You can also try:
find -type f -name "*.csv" -a ! -name "*_foo.csv" | while read f; do
grep foo "$f" > "${f%.csv}_foo.csv"
done
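while read splits on newlines, so a name containing a newline would still break this; with bash (for read -d '') and a find that supports -print0, the loop can be made NUL-safe. A sketch with hypothetical input files:

```shell
# Hypothetical input (illustration only; note the space in the name)
printf 'foo line\nother\n' > 'csv 1.csv'
printf 'no foo here\n'     > csv2.csv

# NUL-delimited names; ! -name '*_foo.csv' keeps the backups
# themselves from being reprocessed
find . -type f -name '*.csv' ! -name '*_foo.csv' -print0 |
while IFS= read -r -d '' f; do
  grep foo "$f" > "${f%.csv}_foo.csv"
done
```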