Get first argument of wc -l myFile.txt [duplicate] - bash

This question already has answers here:
How to get "wc -l" to print just the number of lines without file name?
(10 answers)
Closed 7 years ago.
I'm counting the number of lines in a big file using
wc -l myFile.txt
Result is
110 myFile.txt
But I want only the number
110
How can I do that?
(I want the number of lines as an input argument in a bash script)

There are lots of ways to do this. Here are two:
wc -l myFile.txt | cut -f1 -d' '
wc -l < myFile.txt
cut is an old Unix tool that will
print selected parts of lines from each FILE to standard output.
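Since the asker wants the count available inside a bash script, either form can be captured with command substitution. A minimal sketch, assuming the file is still called myFile.txt as in the question:
lines=$(wc -l < myFile.txt)   # the redirection form prints only the number
echo "Found $lines lines in myFile.txt"
The redirected form is handy here because there is no filename to strip off afterwards.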

You can cat the file and pipe it to wc -l:
cat myFile.txt | wc -l
Or, if you insist that wc -l be the first command, you can use awk:
wc -l myFile.txt | awk '{print $1}'

You can try
wc -l file | awk '{print $1}'

Related

How to store output as variable [duplicate]

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 3 years ago.
I'm looking to store the hash of my most recently downloaded file in my downloads folder as a variable.
So far, this is what I have:
md5sum $(ls -t | head -n1) | awk '{print $1}'
Output:
user@ci-lux-soryan:~/Downloads$ md5sum $(ls -t | head -n1) | awk '{print $1}'
c1924742187128cc9cb2ec04ecbd1ca6
I have tried storing it as a variable like so, but it doesn't work:
VTHash=$(md5sum $(ls -t | head -n1) | awk '{print $1}')
Any ideas where I am going wrong?
As @Cyrus outlined, parsing ls has its own pitfalls, so it is better to avoid it altogether rather than risk unexpected corner cases. The following should meet that requirement.
VTHash="$(find . -type f -mtime 0 | tail -n 1 | xargs md5sum | awk '{ print $1 }')"
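If GNU find is available, another sketch is to sort by modification time explicitly instead of relying on ls or on -mtime; it assumes the newest regular file in the current directory is wanted and that file names contain no newlines:
newest=$(find . -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | tail -n 1 | cut -d' ' -f2-)   # most recently modified regular file
VTHash=$(md5sum "$newest" | awk '{print $1}')   # keep only the hash field
echo "$VTHash"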

Multiple output in single line shell command with pipe only

For example:
ls -l -d */ | wc -l | awk '{print $1}' | tee /dev/tty | ls -l
This shell command prints the result of wc and of ls -l on a single line, but it uses tee.
Is it possible to use a single shell command line to achieve multiple outputs without using "&&", "||", ">", ">>", "<", ";", "&", tee, or a temp file?
When you want the output of date and ls -rtl | head -1 on one line, you can use
echo "$(date): $(ls -rtl | head -1)"
Yes, you can write to multiple files with awk, which is not on your list of constructs to avoid:
echo hi | awk '{print > "a.txt"; print > "b.txt"}'
Then check a.txt and b.txt.
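In the same spirit, awk can stand in for tee itself by sending one copy of each line to /dev/tty and another copy down the pipe. A small sketch, assuming an interactive terminal so that /dev/tty can be opened:
echo hi | awk '{print > "/dev/tty"; print}' | wc -c   # "hi" appears on the terminal, "3" on stdout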

awk issue, summing lines in various files

I have a list of files starting with the word "output", and I want to sum up the total number of rows in all the files.
Here's my strategy:
for f in `find outpu*`;do wc -l $f | awk '{x+=$1}END{print $1}' ; done
If, before piping, there were a way I could do something like >> to a temporary variable and then run the awk command afterwards, I could accomplish this goal.
Any tips?
Use this to see the per-file details and the sum:
wc -l output*
And this to see only the sum:
wc -l output* | tail -n1 | cut -d' ' -f1
Here is some stuff for fun, check it out:
grep -c . out* | cut -d':' -f2- | paste -sd+ | bc
That counts non-empty lines; for all lines, including empty ones:
grep -c '' out* | cut -d':' -f2- | paste -sd+ | bc
You can play with the grep pattern to put conditions on which lines in each file are counted.
Watch out: this find command will only find things in your current directory, and only if at least one file matches outpu*.
One way of doing it:
awk 'END{print NR}' $(find . -name 'outpu*')
Provided that there is not an insane number of matching filenames that overflows your shell's maximum command-line length.
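If the files all sit in the current directory, a simpler sketch (assuming their names really do start with output) is to concatenate them and count once:
cat output* | wc -l   # total lines across all matching files
find . -maxdepth 1 -name 'output*' -type f -exec cat {} + | wc -l   # same idea, immune to the command-length limit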

How to get nth column with regexp delimiter [duplicate]

This question already has answers here:
Shell file size in Linux
(6 answers)
Closed 6 years ago.
Basically, I get a line from the ls -la command:
-rw-r--r-- 13 ondrejodchazel staff 442 Dec 10 16:23 some_file
and want to get the size of the file (442). I have tried the cut and sed commands, but was unsuccessful. Using just basic UNIX tools (cut, sed, awk...), how can I get a specific column from stdin, where the delimiter is the / +/ regexp?
If you want to do it with cut, you need to squeeze the spaces first (tr -s ' ') because cut does not support a regexp delimiter like / +/. This should work:
ls -la | tr -s ' ' | cut -d' ' -f 5
It's a bit more work when doing it with sed (GNU sed):
ls -la | sed -r 's/([^ ]+ +){4}([^ ]+).*/\2/'
Slightly more finger punching if you use the grep alternative (GNU grep):
ls -la | grep -Eo '[^ ]+( +[^ ]+){4}' | grep -Eo '[^ ]+$'
Parsing ls output is harder than you think. Use a dedicated tool such as stat instead.
size=$(stat -c '%s' some_file)
One way ls -la some_file | awk '{print $5}' could break is if numbers use space as a thousands separator (this is common in some European locales).
See also Why You Shouldn't Parse the Output of ls(1).
Pipe your output with:
awk '{print $5}'
Or, even better, use the stat command like this (on macOS):
stat -f "%z" yourFile
Or (on Linux):
stat -c "%s" yourFile
That will output the size of the file in bytes.
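Since the underlying question is how to pull the nth whitespace-delimited column from stdin, a generic sketch is to let awk do the splitting; its default field separator already treats a run of blanks as one delimiter (column 5 is just the example from the question):
ls -la | awk -v col=5 '{print $col}'   # print the 5th whitespace-separated field of every line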

Linux commands to output part of input file's name and line count

What Linux commands would you use, successively, for a bunch of files, to count the number of lines in each file and write to an output file with part of the corresponding input file's name as part of the output line? So, for example, if we were looking at the file LOG_Yellow and it had 28 lines, then the output file would have a line like this (Yellow and 28 are tab-separated):
Yellow 28
wc -l [filenames] | grep -v " total$" | sed s/[prefix]//
The wc -l generates the output in almost the right format; grep -v removes the "total" line that wc generates for you; sed strips the junk you don't want from the filenames.
wc -l * | head --lines=-1 > output.txt
produces output like this:
linecount1 filename1
linecount2 filename2
I think you should be able to work from here to extend to your needs.
edit: since I haven't seen the rules for your name extraction, I still leave the full name. However, unlike other answers I'd prefer to use head rather than grep, which not only should be slightly faster, but also avoids the case of filtering out files whose names end in total.
edit2 (having read the comments): the following does the whole lot:
wc -l * | head --lines=-1 | sed s/LOG_// | awk '{print $2 "\t" $1}' > output.txt
wc -l * | grep -v " total"
sends
28 LOG_Yellow
You can reverse the two columns if you want (with awk, provided there are no spaces in the file names):
wc -l * | egrep -v " total$" | sed s/[prefix]// | awk '{print $2 " " $1}'
Short of writing the script for you:
'for' for looping through your files
'echo -n' for printing the current file name
'wc -l' for finding out the line count
And don't forget to redirect ('>' or '>>') your results to your output file. A rough sketch along those lines follows below.
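A minimal sketch of that loop (using printf rather than echo -n, so the tab is easy to emit), assuming the files are named LOG_* in the current directory and that the LOG_ prefix is what should be stripped:
for f in LOG_*; do
    printf '%s\t%s\n' "${f#LOG_}" "$(wc -l < "$f")"   # name without prefix, a tab, then the line count
done > output.txt
GNU wc prints a bare number for the redirected form; some BSD versions pad it with leading spaces.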
