I have about two thousand text files in a folder.
I want to loop over each one and search for a specific word in its lines.
for file in "./*.txt";
do
cat $file | grep "banana"
done
I was wondering if joining all the text files into one file would be faster.
The whole directory has about 7 GB.
You're not actually looping: the quoted "./*.txt" is a single literal string, so the loop body runs just once, and the unquoted $file then expands as a glob; i.e., your script is equivalent to
cat ./*.txt | grep 'banana'
This is not equivalent to
grep 'banana' ./*.txt
though, as the output for the latter would prefix the filename for each match; you could use
grep -h 'banana' ./*.txt
to suppress filenames.
The problem you could run into is that ./*.txt expands to something that is longer than the maximum command line length allowed; to prevent that, you could do something like
printf '%s\0' ./*.txt | xargs -0 grep -h 'banana'
which is safe for both files containing blanks and shell metacharacters, and calls grep as few times as possible¹.
This can even be parallelized; to run 4 grep processes in parallel, each handling 5 files at a time:
printf '%s\0' ./*.txt | xargs -0 -L 5 -P 4 grep -h 'banana'
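(If you're curious what that limit actually is on your system, getconf can tell you:
getconf ARG_MAX    # maximum combined size of arguments and environment, in bytes
)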
What I think you intended to run is this:
for file in ./*.txt; do
cat "$file" | grep "banana"
done
which would call cat/grep once per file.
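As an aside, the cat in that loop is unnecessary; grep can read the file directly, which saves one process per file:
for file in ./*.txt; do
    grep 'banana' "$file"
done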
¹At first I thought that printf would run into trouble with command line length limitations as well, but it seems that, as a shell builtin, it's exempt:
$ touch '%s\0' {1000000..10000000} > /dev/null
-bash: /usr/bin/touch: Argument list too long
$ printf '%s\0' {1000000..10000000} > /dev/null
$
I need to write a Bash script that achieves the following goals:
1) move the newest n pdf files from folder 1 to folder 2;
2) correctly handles files that could have spaces in file names;
3) output each file name in a specific position in a text file. (In my actual usage, I will use sed to put the file names in a specific position of an existing file.)
I tried to make an array of filenames, then move the files and do the text output in a loop. However, the following array cannot handle files with spaces in the filename:
pdfs=($(find -name "$DOWNLOADS/*.pdf" -print0 | xargs -0 ls -1 -t | head -n$NUM))
Suppose a file has the name "Filename with Space". What I get from the above array will have "with" and "Space" in separate array entries.
I am not sure how to avoid the words of a single filename being treated as separate entries.
Can someone help me out?
Thanks!
-------------Update------------
Sorry for being vague on the third point; I thought I might be able to figure it out after achieving the first and second goals.
Basically, it is a text file that has a line starting with "%comment" near the end, and I will need to insert the filenames before that line in the format "file=PATH".
The PATH is the folder 2 that I have my pdfs moved to.
You can achieve this using mapfile in conjunction with GNU versions of find | sort | cut | head that have options to operate on NUL-terminated filenames:
mapfile -d '' -t pdfs < <(find "$DOWNLOADS" -name '*.pdf' -printf '%T@:%p\0' |
sort -z -t : -rnk1 | cut -z -d : -f2- | head -z -n "$NUM")
Commands used are:
mapfile -d '': reads the array with NUL as the delimiter
find: outputs each file's modification timestamp in epoch seconds + ":" + filename + a NUL byte
sort: sorts reverse numerically on the 1st field
cut: removes the 1st field from the output
head: outputs only the first $NUM filenames
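From there, a rough sketch of goals 1 and 3 might look like this (assuming GNU sed for -i, and hypothetical names: $DEST for folder 2 and notes.txt for the text file):
mv -- "${pdfs[@]}" "$DEST"
for f in "${pdfs[@]}"; do
    sed -i "/^%comment/i file=$DEST/${f##*/}" notes.txt
done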
find downloads -name "*.pdf" -printf "%T@ %p\0" |
sort -z -t' ' -k1 -n |
cut -z -d' ' -f2- |
tail -z -n 3
find all *.pdf files in downloads
for each file, print its modification date %T with the format specifier @, which means seconds since epoch with fractional part, then print a space, the filename, and a terminating \0
Sort the NUL-separated stream numerically on the first field, using space as the field separator
Remove the first field from the stream, i.e. the modification date, leaving only the filenames.
Get the newest files, in this example the 3 newest, by using tail. We could also do a reverse sort and use head; no difference.
Don't use ls in scripts. ls is for nicely formatted output, not for parsing. You could do xargs -0 stat --printf "%Y %n\0" instead, which would serve the same purpose; it's just that I couldn't make stat output the fractional part of the modification date.
As for the second part, we need to save the NUL-delimited list to a file
find downloads ........ >"$tmp"
and then:
str='%comment'
{
grep -B$((2**32)) -x "$str" "$out" | grep -v "$str"
# I don't know what you expect to do with newlines in filenames, but I guess you don't have those
cat "$tmp" | sed -z 's/^/file=/' | sed 's/\x0/\n/g'
grep -A$((2**32)) -x "$str" "$out"
} | sponge "$out"
assuming the output file name is stored in the variable "$out"
take all lines before the %comment line; the grep -v removes the matched %comment line itself
output each filename with file= prepended; the second sed replaces the NUL separators with newlines
then take all lines from %comment onward, including the %comment line itself
write the result back to the output file. sponge (from moreutils) soaks up all its input before opening the file; without it, remember to write to a temporary file first.
Don't use pdfs=$(...) on NUL-separated input. You can use mapfile to read it into an array, as other answers have shown.
Then, to move the files, do something like
<"$tmp" xargs -0 -i mv {} "$outdir"
or faster, with a single move:
{ cat <"$tmp"; printf "%s\0" "$outdir"; } | xargs -0 mv
or alternatively:
<"$tmp" xargs -0 sh -c 'outdir="$1"; shift; mv "$#" "$outdir"' -- "$outdir"
Live example at tutorialspoint.
I suppose following code will be close to what you want:
IFS=$'\n' pdfs=($(find "$DOWNLOADS" -name "*.pdf" -print0 | xargs -0 -I{} ls -1t "{}" | tail -n +1 | head -n"$NUM"))
Then you can access the output through ${pdfs[0]}, ${pdfs[1]}, ...
Explanations
IFS=$'\n' makes the following line split only on "\n".
The -I{} option tells xargs to substitute {} with the filename, so it can be quoted as "{}".
tail -n +1 is a trick to suppress an error message saying "xargs: 'ls' terminated by signal 13".
Hope this helps.
Bash v4 has an option globstar; after enabling it, we can use ** to match zero or more subdirectories.
mapfile is a built-in command for reading lines into an indexed array variable. The -t option removes the trailing newline from each line.
shopt -s globstar
mapfile -t pdffiles < <(ls -t1 **/*.pdf | head -n"$NUM")
typeset -p pdffiles
for f in "${pdffiles[@]}"; do
echo "==="
mv "${f}" /dest/path
sed "/^%comment/i${f}=/dest/path" a-text-file.txt
done
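Note that sed as written only prints the modified file to stdout; to actually edit a-text-file.txt in place (with GNU sed), add -i:
sed -i "/^%comment/i file=/dest/path/${f}" a-text-file.txt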
I have code with two variables in an echo. I don't know why it prints spaces before $NEXT even though I have just one space in the code.
NEXT=$(find "${DIR}" -type f -name "*.$ext" | sed "s/.*\/\.//g" | sed "s/.*\///g" |
sed -n '/.*\..*/p' | wc -l)
echo "Files .$ext: $NEXT"
Files .tar:        1
The problem isn't your find expression; it's the wc at the end of the pipeline:
NEXT=$(find "${DIR}" -type f -name "*.$ext" | sed "s/.*\/\.//g" | sed "s/.*\///g" |
sed -n '/.*\..*/p' | wc -l)
When you pipe to wc -l, you are left with a number. The format of that number will depend on your distribution's default compile options for wc. Generally, when information is piped or redirected to wc, the value returned should come without any leading whitespace, but there is no guarantee that your install of wc works that way. All you can do is test and see what results, e.g.
ls "$HOME" | wc -l
If whitespace is returned before the value -- you have found your problem.
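If it is, a simple workaround is to strip the whitespace yourself; for example (original sed filters omitted for brevity, and either line alone does the job):
NEXT=$(find "${DIR}" -type f -name "*.$ext" | wc -l | tr -d '[:space:]')
NEXT=$((NEXT))    # arithmetic expansion also normalizes the number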
If the last line is the output, then it seems to be the output of something other than the displayed code. When your output looks weird, try putting single quotes around each variable:
echo " Average file size .'$ext': '$AEXT'"
That way you will know whether the spaces (or tabs) are coming from the variables themselves or from the script.
I'm counting the number of files in a directory.
ls | wc -l
If the output is "17", I would like it to display as "017".
I have played with piping to printf, with little luck.
Any suggestions would be appreciated.
printf is the way to go to format numbers:
printf "There were %03d files\n" "$(ls | wc -l)"
ls | wc -l will tell you how many lines it encountered parsing the output of ls, which may not be the same as the number of (non-dot) filenames in the directory. What if a filename has a newline? One reliable way to get the number of files in a directory is
x=(*)
printf '%03d\n' "${#x[@]}"
But that will only work with a shell that supports arrays. If you want a POSIX compatible approach, use a shell function:
countargs() { printf '%03d\n' $#; }
countargs *
This works because when a glob expands, the shell keeps each member of the expansion as a separate word, regardless of the characters in the filename. But when you pipe filenames, the command on the other side of the pipe just sees a stream of text, so it can't do any special handling.
You could use sed.
ls | wc -l | sed 's/^17$/017/'
And this applies to all two-digit numbers:
ls | wc -l | sed '/^[0-9][0-9]$/s/.*/0&/'
In Bash, how do I count the number of non-blank lines of code in a project?
cat foo.c | sed '/^\s*$/d' | wc -l
And if you consider comments blank lines:
cat foo.pl | sed '/^\s*#/d;/^\s*$/d' | wc -l
Although, that's language dependent.
#!/bin/bash
find . -path './pma' -prune -o -path './blog' -prune -o -path './punbb' -prune -o -path './js/3rdparty' -prune -o -print | egrep '\.php|\.as|\.sql|\.css|\.js' | grep -v '\.svn' | xargs cat | sed '/^\s*$/d' | wc -l
The above will give you the total count of lines of code (blank lines removed) for a project (current folder and all subfolders recursively).
In the above "./blog" "./punbb" "./js/3rdparty" and "./pma" are folders I blacklist as I didn't write the code in them. Also .php, .as, .sql, .css, .js are the extensions of the files being looked at. Any files with a different extension are ignored.
There are many ways to do this, using common shell utilities.
My solution is:
grep -cve '^\s*$' <file>
This searches for lines in <file> that do not match (-v) the pattern (-e) '^\s*$', which is the beginning of a line, followed by zero or more whitespace characters, followed by the end of a line (i.e. no content other than whitespace), and displays a count of the matching lines (-c) instead of the lines themselves.
An advantage of this method over methods that involve piping into wc is that you can specify multiple files and get a separate count for each file:
$ grep -cve '^\s*$' *.hh
config.hh:36
exceptions.hh:48
layer.hh:52
main.hh:39
If you want to use something other than a shell script, try CLOC:
cloc counts blank lines, comment lines, and physical lines of source code in many programming languages. It is written entirely in Perl with no dependencies outside the standard distribution of Perl v5.6 and higher (code from some external modules is embedded within cloc) and so is quite portable.
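For example, pointing it at a project directory prints a per-language breakdown:
cloc path/to/project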
This command counts the number of non-blank lines:
cat fileName | grep -v '^$' | wc -l
The grep -v '^$' filters out the blank lines.
'wc' counts lines, words, chars, so to count all lines (including blank ones) use:
wc *.py
To filter out the blank lines, you can use grep:
grep -v '^\s*$' *.py | wc
'-v' tells grep to output all lines except those that match
'^' is the start of a line
'\s*' is zero or more whitespace characters
'$' is the end of a line
*.py is my example for all the files you wish to count (all python files in current dir)
pipe output to wc. Off you go.
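If you want only the line count rather than wc's full lines/words/chars output, add -l:
grep -v '^\s*$' *.py | wc -l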
I'm answering my own (genuine) question, as I couldn't find a Stack Overflow entry that covered this.
cat file.txt | awk 'NF' | wc -l
cat 'filename' | grep '[^ ]' | wc -l
should do the trick just fine
grep -cvE '(^\s*[/*])|(^\s*$)' foo
-c = count
-v = exclude
-E = extended regex
'(comment lines) OR (empty lines)'
where
^ = beginning of the line
\s = whitespace
* = zero or more of the preceding character
[/*] = either / or *
| = OR
$ = end of the line
I post this because the other options gave wrong answers for me. This worked with my Java source, where comment lines start with / or * (I use * on every line in a multi-line comment).
awk '!/^[[:space:]]*$/ {++x} END {print x}' "$testfile"
Here's a Bash script that counts the lines of code in a project. It traverses a source tree recursively, and it excludes blank lines and single line comments that use "//".
# $excluded is a regex for paths to exclude from line counting
excluded="spec\|node_modules\|README\|lib\|docs\|csv\|XLS\|json\|png"
countLines(){
# $total is the total lines of code counted
total=0
# -mindepth excludes the current directory (".")
for file in `find . -mindepth 1 -name "*.*" |grep -v "$excluded"`; do
# First sed: only count lines of code that are not commented with //
# Second sed: don't count blank lines
# $numLines is the lines of code
numLines=`cat "$file" | sed '/\/\//d' | sed '/^\s*$/d' | wc -l`
# To exclude only blank lines and count comment lines, uncomment this:
#numLines=`cat "$file" | sed '/^\s*$/d' | wc -l`
total=$(($total + $numLines))
echo " " $numLines $file
done
echo " " $total in total
}
echo Source code files:
countLines
echo Unit tests:
cd spec
countLines
Here's what the output looks like for my project:
Source code files:
2 ./buildDocs.sh
24 ./countLines.sh
15 ./css/dashboard.css
53 ./data/un_population/provenance/preprocess.js
19 ./index.html
5 ./server/server.js
2 ./server/startServer.sh
24 ./SpecRunner.html
34 ./src/computeLayout.js
60 ./src/configDiff.js
18 ./src/dashboardMirror.js
37 ./src/dashboardScaffold.js
14 ./src/data.js
68 ./src/dummyVis.js
27 ./src/layout.js
28 ./src/links.js
5 ./src/main.js
52 ./src/processActions.js
86 ./src/timeline.js
73 ./src/udc.js
18 ./src/wire.js
664 in total
Unit tests:
230 ./ComputeLayoutSpec.js
134 ./ConfigDiffSpec.js
134 ./ProcessActionsSpec.js
84 ./UDCSpec.js
149 ./WireSpec.js
731 in total
Enjoy! --Curran
The neatest command is
grep -vc ^$ fileName
With the -c option, you don't even need wc -l.
It's kinda going to depend on the number of files you have in the project. In theory you could use
grep -c '.' <list of files>
Where you can fill the list of files by using the find utility.
grep -c '.' `find -type f`
Would give you a line count per file.
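If you want a single total instead of per-file counts, one option is to sum them with awk (assuming GNU grep, whose -h drops the per-file prefix even with -c, and find -exec to sidestep the argument-length issue):
find . -type f -exec grep -ch '.' {} + | awk '{s+=$1} END {print s}'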
Script to recursively count all non-blank lines with a certain file extension in the current directory:
#!/usr/bin/env bash
(
echo 0;
for ext in "$#"; do
for i in $(find . -name "*$ext"); do
sed '/^\s*$/d' "$i" | wc -l ## skip blank lines
#cat "$i" | wc -l; ## count all lines
echo +;
done
done
echo p q;
) | dc;
Sample usage:
./countlines.sh .py .java .html
If you want the sum of all non-blank lines for all files of a given file extension throughout a project:
while IFS= read -r line
do grep -cve '^\s*$' "$line"
done < <(find "$1" -name "*.$2" -print) | awk '{s+=$1} END {print s}'
First arg is the project's base directory, second is the file extension. Sample usage:
./scriptname ~/Dropbox/project/src java
It's little more than a collection of previous solutions.
rgrep . | wc -l
gives the count of non-blank lines in the current working directory.
grep -v '^\W*$' `find -type f` | grep -c '.' > /path/to/lineCountFile.txt
gives an aggregate count for all files in the current directory and its subdirectories.
HTH!
This gives the count of number of lines without counting the blank lines:
grep -v '^$' filename | wc -l | sed -e 's/ //g'
Try this one:
grep -cv -e '^$' -e '^//' *.java
It's easy to memorize, and it also excludes blank lines and commented lines.
There's already a program for this on Linux called wc.
Just
wc -l *.c
and it gives you the total lines and the lines for each file.
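To include subdirectories, you can hand the files to wc via find (note that wc prints one total line per invocation):
find . -name '*.c' -exec wc -l {} +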