Printing number of lines in shell with echo - shell

I know that the simplest way to print the specific count of lines/bytes/words is to use wc -l < filename.sh, but when I try to use it in conjunction with the echo command, it prints the literal command text rather than its output.
My intended output is "this file has x lines", with x being the number of lines, but when I try things like echo "this line has" wc -l < filename.sh "lines", it prints the command itself. I've also tried this without breaking the quotation, among several other things.
Is it just that this command can't be used alongside echo, or am I missing something extremely obvious here?

echo "this line has $(wc -l < filename.sh) lines"

printf is versatile:
printf 'this file has %s lines\n' $(wc -l < filename.sh)
$(command) converts the output of command into an argument.
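The same idea with the count captured in a variable first (a minimal sketch; the variable name lines is just illustrative):
lines=$(wc -l < filename.sh)   # command substitution stores the count
echo "this file has $lines lines"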

Try this one:
echo "this file has `wc -l < filename.sh | awk '{print $1}'` lines"
Explanation:
wc -l < filename.sh retrieves the number of lines in the file
awk '{print $1}' prints the number without any surrounding blanks
`` (backticks) perform command substitution: the command between them runs first and its output is substituted into the string

Without any subshell or pipe: awk has a built-in variable NR that holds the number of records (lines) read from the input. The print is placed inside an END block so the total is printed once at the end; otherwise it would print the line number of each line.
awk 'END{print "This line has " NR " lines" }' file

Related

Unix bash script grep loop counter (for)

I am looping over a grep result. The result contains 10 lines (every line has different content), so the stuff in the loop gets executed 10 times.
I need to get the index, 0-9, on each pass so I can perform actions based on the index.
ABC=(cat test.log | grep "stuff")
counter=0
for x in $ABC
do
echo $x
((counter++))
echo "COUNTER $counter"
done
Currently the counter won't really change.
Output:
51209
120049
148480
1211441
373948
0
0
0
728304
0
COUNTER: 1
If your requirement is only to print a counter (which is all the shown samples do), you could do it in a single awk (if you are OK with using awk), without creating a variable and then using grep as you are doing currently; awk can perform both the search and the counter printing in one shot.
awk -v counter=0 '/stuff/{print "counter=" ++counter}' Input_file
Replace the string stuff above with the actual string you are looking for, and put your actual file name in place of Input_file.
This should print like:
counter=1
counter=2
........and so on
Your shell script contains what should be an obvious syntax error.
ABC=(cat test.log | grep "stuff")
This fails with
-bash: syntax error near unexpected token `|'
There is no need to save the output in a variable if you only want to process one line at a time (and obviously no need for the useless cat).
grep "stuff" test.log | nl
gets you numbered lines, though the index will be 1-based, not zero-based.
If you absolutely need zero-based, refactoring to Awk should solve it easily:
awk '/stuff/ { print n++, $0 }' test.log
If you want to loop over this and do something more with this information,
awk '/stuff/ { print n++, $0 }' test.log |
while read -r index output; do
    echo index is "$index"
    echo output is "$output"
done
Because the while loop executes in a subshell, the value of index will not be visible outside of the loop. (I guess that's what your real code did with the counter as well; I don't think the code you posted will reproduce it either.)
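If you do need the counter (or index) after the loop, a common bash-specific workaround is to feed the loop from process substitution instead of a pipe, so the while body runs in the current shell; a sketch using the same file and pattern:
# process substitution avoids the subshell created by a pipe
while read -r index output; do
    echo index is "$index"
    echo output is "$output"
done < <(awk '/stuff/ { print n++, $0 }' test.log)
echo "last index was $index"   # the variable is still visible here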
Do not store the result of grep in a scalar variable $ABC.
If a line of the log file contains whitespace, the variable $x is split on it due to bash's word splitting.
(BTW the statement ABC=(cat test.log | grep "stuff") causes a syntax error.)
Please try something like:
readarray -t abc < <(grep "stuff" test.log)
for x in "${abc[#]}"
do
echo "$x"
echo "COUNTER $((++counter))"
done
or
readarray -t abc < <(grep "stuff" test.log)
for i in "${!abc[#]}"
do
echo "${abc[i]}"
echo "COUNTER $((i + 1))"
done
You can use the following increment statement:
counter=$(( $counter + 1));

How to print the result of a system command in awk

I have the following single line in my bash script:
echo "foo" | awk -F"=" '{char=system("echo $1 | cut -c1");}{print "this is the result: "$char;}' >> output.txt
I want to print the first letter of "foo" using awk, such that I would get:
this is the result: f
in my output file, but instead, I get:
this is the result: foo
What am I doing wrong?
Thanks
No, this is not how the system command works inside awk.
What's happening in OP's code:
You are passing a shell command to system, which is fine for some cases, but there are two problems here. First, to get the first character you would have to build the command as system("echo " $0 " | cut -c1"), and you would not need a variable to save and print its value in awk at all.
Second, you are trying to save its result in a variable, but the variable does not get the command's output; it gets the command's exit status. system() in awk does not behave like shell command substitution.
So your variable named char holds the value 0 (the success status of the system command), and printing $char therefore prints $0, i.e. the whole line (in awk, print $0 means print the whole line).
You could do this in a single awk by doing:
echo "foo" | awk '{print substr($0,1,1)}'
OR with GNU awk specifically:
echo "foo" | awk 'BEGIN{FS=""} {print $1}'
You're not using much of awk; the same can be done with printf:
$ echo "foo" | xargs printf "this is the result: %.1s\n"
this is the result: f
or, directly
$ printf "this is the result: %.1s\n" foo
this is the result: f

Bash Shell: Infinite Loop

The problem is the following: I have a file in which each line has this form:
id|lastName|firstName|gender|birthday|joinDate|IP|browser
I want to sort all the first names in that file alphabetically and print them one per line, each name only once.
I have created the following program, but for some reason it creates an infinite loop:
array1=()
while read LINE
do
if [ ${LINE:0:1} != '#' ]
then
IFS="|"
array=($LINE)
if [[ "${array1[#]}" != "${array[2]}" ]]
then
array1+=("${array[2]}")
fi
fi
done < $3
echo ${array1[@]} | awk 'BEGIN{RS=" ";} {print $1}' | sort
NOTES
if [ ${LINE:0:1} != '#' ] : this test is used because there are comments in the file that I don't want to print
$3 : filename
array1 : is used for all the separate names
Wow, there's a MUCH simpler and cleaner way to achieve this, without having to mess with the IFS variable or use arrays. You can use "for" to do this:
First I created a file with the same structure as yours:
$ cat file
id|lastName|Douglas|gender|birthday|joinDate|IP|browser
id|lastName|Tim|gender|birthday|joinDate|IP|browser
id|lastName|Andrew|gender|birthday|joinDate|IP|browser
id|lastName|Sasha|gender|birthday|joinDate|IP|browser
#id|lastName|Carly|gender|birthday|joinDate|IP|browser
id|lastName|Madson|gender|birthday|joinDate|IP|browser
Here's the script I wrote using "for":
#!/bin/bash
for LINE in `cat file | grep -v "^#" | awk -F'|' '{print$3}' | sort -u`
do
echo $LINE
done
And here's the output of this script:
$ ./script.sh
Andrew
Douglas
Madson
Sasha
Tim
Explanation:
for LINE in `cat file`
Creates a loop over the output of cat file. Strictly speaking, for iterates over the whitespace-separated words of that output rather than over lines; here each name is a single word, so it works out to one name per iteration. The commands between backticks are run by the shell; for example, if you wanted to store the date in a variable you could use VARDATE=`date`.
grep -v "^#"
The option -v is used to exclude results matching the pattern, in this case the pattern is "^#". The "^" character means "line begins with". So grep -v "^#" means "exclude lines beginning with #".
awk -F'|' '{print$3}'
The -F option switches the column delimiter from the default (whitespace) to whatever you put after it, in this case the "|" character.
The '{print$3}' prints the 3rd column.
sort -u
And the "sort -u" command to sort the names alphabetically.

results of wc as variables

I would like to use the lines coming from 'wc' as variables. For example:
echo 'foo bar' > file.txt
echo 'blah blah blah' >> file.txt
wc file.txt
2 5 23 file.txt
I would like to have something like $lines, $words and $characters associated to the values 2, 5, and 23. How can I do that in bash?
In pure bash: (no awk)
a=($(wc file.txt))
lines=${a[0]}
words=${a[1]}
chars=${a[2]}
This works by using bash's arrays. a=(1 2 3) creates an array with elements 1, 2 and 3. We can then access individual elements with the ${a[index]} syntax.
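A tiny illustration of that syntax (values are arbitrary):
a=(1 2 3)
echo "${a[0]}"   # prints 1
echo "${a[2]}"   # prints 3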
Alternative: (based on gonvaled solution)
read lines words chars <<< $(wc file.txt)
Or in sh:
a=$(wc file.txt)
lines=$(echo $a|cut -d' ' -f1)
words=$(echo $a|cut -d' ' -f2)
chars=$(echo $a|cut -d' ' -f3)
There are other solutions but a simple one which I usually use is to put the output of wc in a temporary file, and then read from there:
wc file.txt > xxx
read lines words characters filename < xxx
echo "lines=$lines words=$words characters=$characters filename=$filename"
lines=2 words=5 characters=23 filename=file.txt
The advantage of this method is that you do not need to create several awk processes, one for each variable. The disadvantage is that you need a temporary file, which you should delete afterwards.
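A variant of the same approach (a sketch) that uses mktemp for the temporary file and removes it when done:
tmp=$(mktemp)
wc file.txt > "$tmp"
read lines words characters filename < "$tmp"
rm -f "$tmp"
echo "lines=$lines words=$words characters=$characters filename=$filename"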
Be careful: this does not work:
wc file.txt | read lines words characters filename
The problem is that piping to read creates another process, and the variables are updated there, so they are not accessible in the calling shell.
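As a side note, bash can be told to run the last element of a pipeline in the current shell with shopt -s lastpipe (it only takes effect when job control is off, e.g. in a script); a sketch:
#!/bin/bash
shopt -s lastpipe
wc file.txt | read lines words characters filename
echo "lines=$lines words=$words characters=$characters filename=$filename"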
Edit: adding solution by arnaud576875:
read lines words chars filename <<< $(wc file.txt)
This works without writing to a file (and does not have the pipe problem). It is bash specific.
From the bash manual:
Here Strings
A variant of here documents, the format is:
<<<word
The word is expanded and supplied to the command on its standard input.
The key is the "word is expanded" bit.
lines=`wc file.txt | awk '{print $1}'`
words=`wc file.txt | awk '{print $2}'`
...
You can also store the wc result somewhere first and then parse it, if you're picky about performance :)
Just to add another variant --
set -- `wc file.txt`
lines=$1
words=$2
chars=$3
This obviously clobbers $* and related variables. Unlike some of the other solutions here, it is portable to other Bourne shells.
I wanted to store the number of csv files in a variable. The following worked for me:
CSV_COUNT=$(ls ./pathToSubdirectory | grep ".csv" | wc -l | xargs)
xargs trims the whitespace from the wc output.
I ran this bash script from a different folder than the csv files, hence the ./pathToSubdirectory.
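A sketch of a variant that avoids parsing ls output altogether, using a glob and the array length (the directory name is taken from the question):
shopt -s nullglob                        # the glob expands to nothing if there are no matches
csv_files=( ./pathToSubdirectory/*.csv )
CSV_COUNT=${#csv_files[@]}
echo "$CSV_COUNT"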
You can assign output to a variable using command substitution (which runs the command in a subshell):
$ x=$(wc some-file)
$ echo $x
1 6 60 some-file
Now, in order to get the separate variables, the simplest option is to use awk:
$ x=$(wc some-file | awk '{print $1}')
$ echo $x
1
declare -a result
result=( $(wc < file.txt) )
lines=${result[0]}
words=${result[1]}
characters=${result[2]}
echo "Lines: $lines, Words: $words, Characters: $characters"

Why awk '{ print }' doesn't start a new line but loops on space char

I have this shell script
#!/bin/bash
LINES=$(awk '{ print }' filename.txt)
for LINE in $LINES; do
echo "$LINE"
done
And filename.txt has this content
Loreum ipsum dolores
Loreum perche non se imortale
The shell script iterates over every whitespace-separated word of the lines in filename.txt, while it is supposed to loop only over those two lines.
But when I run awk '{ print }' filename.txt directly in the terminal, it prints the two lines as expected.
Any explanations?
Thanks in advance!
The $(...) construct absorbs all the output from awk as one large string, and then for LINE in $LINES splits on whitespace. You want this construct instead:
#! /bin/sh
while read LINE; do
    printf '%s\n' "$LINE"
done < filename.txt
The other answers are good, another thing you can do is temporarily change your IFS (Internal Field Separator) variable. If you update your shell script to look like this:
#!/bin/bash
IFS="
"
LINES=$(awk '{ print }' filename.txt)
for LINE in $LINES; do
echo "$LINE"
done
This sets IFS to a newline instead of the default (space, tab and newline), which should also do what you want.
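A slightly tidier way to write that (a sketch): use the $'\n' quoting form and restore IFS afterwards so the change stays local:
#!/bin/bash
OLDIFS=$IFS
IFS=$'\n'
LINES=$(awk '{ print }' filename.txt)
for LINE in $LINES; do
    echo "$LINE"
done
IFS=$OLDIFS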
Just another suggestion.
Another option is to store the lines in an array and loop over the array's elements.
Here's an example of how to loop over an array:
http://tldp.org/LDP/abs/html/arrays.html#SCRIPTARRAY
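For reference, a minimal sketch of that array approach in bash: mapfile (a.k.a. readarray) reads the file into an array, one line per element:
mapfile -t lines < filename.txt
for line in "${lines[@]}"; do
    echo "$line"
done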
