How to print output of two shell commands on the same line? - bash

This is what my loop contains:
cat /$f/stat | awk '{print $1,$3,$4,$7,$17}' /$f/stat
cd $f
sudo ls fd | wc -l
cd ..
At first, it shows the output of:
cat /$f/stat | awk '{print $1,$3,$4,$7,$17}' /$f/stat
And it prints the output of this on a new line:
cd $f
sudo ls fd | wc -l
cd ..
How do I combine these so that it shows them on one line?

At the outset, use shellcheck to validate your script.
Looks like you want awk's output and wc -l's output to be on the same line. Use command substitution for this:
printf '%s %s\n' "$(awk '{print $1,$3,$4,$7,$17}' "$f/stat")" "$(sudo ls "$f/fd" | wc -l)"
There's no need for cat | awk, which is a case of UUOC (useless use of cat); awk reads input from the file passed as an argument. Also, it looks like you need "$f/stat" and not "/$f/stat".
Enclose variables in double quotes to prevent word splitting and globbing.
Use the full path $f/fd instead of having to cd into $f and back.
Since parsing ls output is considered a bad practice, you could do this instead, on Linux:
printf '%s %s\n' "$(awk '{print $1,$3,$4,$7,$17}' "$f/stat")" "$(sudo find "$f/fd" -maxdepth 1 -print0 | tr -cd '\0' | wc -c)"
find ... -print0 prints a NUL-terminated list of files
tr -cd '\0' - deletes all characters other than NUL
wc -c - counts the number of NULs, which is the number of file names in the find output
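Putting the pieces together, a minimal sketch of the whole loop might look like this (assuming $f is meant to iterate over /proc/<pid> directories, which the question implies but never states):
for f in /proc/[0-9]*; do
    printf '%s %s\n' \
        "$(awk '{print $1,$3,$4,$7,$17}' "$f/stat")" \
        "$(sudo find "$f/fd" -maxdepth 1 -print0 | tr -cd '\0' | wc -c)"
done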

Related

Counting Python files with bash and awk always returns zero

I want to get the number of Python files on my desktop and I have coded a small script for that. But the awk command does not work as I expected.
script
ls -l | awk '{ if($NF=="*.py") print $NF; }' | wc -l
I know that there is another solution for finding the number of Python files on a PC, but I just want to know what I am doing wrong here.
ls -l | awk '{ if($NF=="*.py") print $NF; }' | wc -l
Your code counts files literally named *.py. You should use regexp matching with the correct GNU AWK syntax; after fixing that, your code becomes
ls -l | awk '{ if($NF~/[.]py$/) print $NF; }' | wc -l
Note [.], which denotes a literal ., and $, which denotes the end of the string.
Your code might be further improved, as there is no need to use if here; a pattern-action rule will do. That is,
ls -l | awk '$NF~/[.]py$/{ print $NF; }' | wc -l
Moreover, you might easily implement the counting inside GNU AWK rather than piping to wc -l, as follows:
ls -l | awk '$NF~/[.]py$/{t+=1}END{print t}'
Here, t is increased by 1 for every line that matches, and after everything is processed, that is, in END, it is printed. Observe that there is no need to declare the t variable in GNU AWK.
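One caveat worth noting (my addition, not part of the original answer): if nothing matches, t is never assigned, so END{print t} prints an empty line rather than 0; forcing numeric context avoids that:
ls -l | awk '$NF~/[.]py$/{t+=1}END{print t+0}'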
Don't try to parse the output of ls, see https://mywiki.wooledge.org/ParsingLs.
Beyond that, your awk script is failing because $NF=="*.py" is doing a literal string comparison of the last string of non-blanks against *.py when you probably wanted a regexp comparison such as $NF~/\.py$/, and your print $NF would fail for any file names containing spaces.
If you really want to involve awk in this for some reason then, assuming the list of python files doesn't exceed ARG_MAX, it'd be:
awk 'BEGIN{print ARGC-1; exit}' *.py
but you could just do it in bash:
shopt -s nullglob
files=(*.py)
echo "${#files[#]}"
or if you want to have a pipe to wc -l for some reason and your files can't have newlines in their names then:
printf '%s\n' *.py | wc -l
Alternatively, using a NUL-delimited find listing and counting the records (or fields) in awk:
gfind . -maxdepth 1 -type f -name "*.py" -print0 |
{m,g}awk 'END { print NR }' RS='\0' FS='^$'
or
{m,g}awk 'END { print --NF }' RS='^$' FS='\0'
879

Multiple output in single line shell command with pipe only

For example:
ls -l -d */ | wc -l | awk '{print $1}' | tee /dev/tty | ls -l
This shell command prints the result of wc and ls -l on a single line, but tee is used.
Is it possible, using one shell command line, to achieve multiple outputs without using "&&", "||", ">", ">>", "<", ";", "&", tee, or a temp file?
When you want the output of date and ls -rtl | head -1 on one line, you can use
echo "$(date): $(ls -rtl | head -1)"
Yes, you can write to multiple files with awk, which is not in the list of things you want to avoid:
echo hi | awk '{print > "a.txt"; print > "b.txt"}'
Then check a.txt and b.txt.
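Checking the result in an interactive shell would look like this:
$ echo hi | awk '{print > "a.txt"; print > "b.txt"}'
$ cat a.txt b.txt
hi
hi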

Solaris: find files not containing a string (alternative to grep -L)

I want to find files that do not contain a specific string.
I used -lv, but this was a huge mistake because it returns all the files that contain any line not containing my string.
What I need is exactly grep -L; however, Solaris grep does not implement this option.
What is the alternative, if any?
You can exploit grep -c and do the following (thanks @Scrutinizer for the /dev/null hint):
grep -c foo /dev/null * 2>/dev/null | awk -F: 'NR>1&&!$2{print $1}'
The /dev/null argument guarantees that grep prefixes every count with a file name, and the awk filter prints the names whose count is 0 (NR>1 skips the /dev/null line itself).
This will unfortunately also print directories (if * expands to any), which might not be desired; in that case a simple loop, albeit slower, might be your best bet:
for file in *; do
  [ -f "${file}" ] || continue
  grep -q foo "${file}" 2>/dev/null || echo "${file}"
done
However, if you have GNU awk 4 on your system you can do:
awk 'BEGINFILE{f=0} /foo/{f=1} ENDFILE{if(!f)print FILENAME}' *
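A quick check with a couple of throwaway test files (a sketch; it assumes awk here is GNU awk 4 or later, and the file names are made up):
$ printf 'foo\n' > has.txt
$ printf 'bar\n' > lacks.txt
$ awk 'BEGINFILE{f=0} /foo/{f=1} ENDFILE{if(!f)print FILENAME}' *.txt
lacks.txt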
After using grep -c, you can use grep again to find your desired filenames:
grep -c 'pattern' * | grep ':0$'
and to see just the filenames:
grep -c 'pattern' * | grep ':0$' | cut -d":" -f1
You can use awk like this:
awk '!/not this/' file
To do multiple not:
awk '!/jan|feb|mars/' file

Bash/Shell - paths with spaces messing things up

I have a bash/shell function that is supposed to find files and then awk/copy the first file it finds to another directory. Unfortunately, if the directory that contains the file has spaces in its name, the whole thing fails, since the path gets truncated for some reason or another. How do I fix it?
If file.txt is in /path/to/search/spaces are bad/ it fails.
dir=/path/to/destination/ | find /path/to/search -name file.txt | head -n 1 | awk -v dir="$dir" '{printf "cp \"%s\" \"%s\"\n", $1, dir}' | sh
cp: /path/to/search/spaces: No such file or directory
If file.txt is in /path/to/search/spacesarebad/ it works, but notice there are no spaces. :-/
Awk's default field separator is whitespace, which is why the path gets split. Simply change it to something that is unlikely to appear in a file name, such as a tab, so the whole line ends up in $1:
awk -F"\t" ...
Your script should look like:
dir=/path/to/destination/ | find /path/to/search -name file.txt | head -n 1 | awk -F"\t" -v dir="$dir" '{printf "cp \"%s\" \"%s\"\n", $1, dir}' | sh
As pointed out in the comments, you don't really need all those steps; you could simply do this (one-liner):
dir=/path/to/destination/ && path="$(find /path/to/search -name file.txt | head -n 1)" && cp "$path" "$dir"
Formatted code (which may look better, in this case ^^):
dir=/path/to/destination/
path="$(find /path/to/search -name file.txt | head -n 1)"
cp "$path" "$dir"
The "" are used to assign the entire content of the string to the variable, causing the separator IFS, which is a white space by default, not to be considered over the string.
If you think spaces are bad, wait till you get into trouble with newlines. Consider for example:
mkdir spaces\ are\ bad
touch spaces\ are\ bad/file.txt
mkdir newlines$'\n'are$'\n'even$'\n'worse
touch newlines$'\n'are$'\n'even$'\n'worse/file.txt
And:
find . -name file.txt
The head command assumes newline-delimited input. You can get around both the space and newline issues with GNU find and GNU grep (maybe others) by using \0 delimiters:
find . -name file.txt -print0 | grep -zm1 . | xargs -0 cp -t "$dir"
You could try this.
awk '{print substr($0, index($0,$9))}'
For example, this is the output of the ls command:
-rw-r--r--. 1 root root 73834496 Dec 6 10:55 File with spaces 2
If you use a simple awk like this
# awk '{print $9}'
It returns only
# File
If used with the full command
# awk '{print substr($0, index($0,$9))}'
I get the whole output
File with spaces 2
Here
substr(s, a, b): returns b characters from string s, starting at position a. The parameter b is optional.
For example if the match is addr:192.168.1.133 and you use substr as follows
# awk '{print substr($2,6)}'
You get the IP, i.e. 192.168.1.133. Note that 6 is the position of the first character after addr:, counting from the a in addr.
So in the full command, the $2 becomes $0 (the whole line) and index($0,$9) finds where field 9 starts, so substr prints everything from field 9 to the end of the line. You can change that to index($0,$8) and see that the output changes to
# 10:55 File with spaces 2
index(IN, FIND)
This searches the string IN for the first occurrence of the string FIND, and returns the position in characters where that occurrence begins in the string IN.
I hope it helps. Moreover, if you are assigning this value to a variable in a script, then you need to enclose the variable in double quotes; otherwise you will get errors if you do some other operation on the extracted file name.
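A quick way to see substr and index working together, using an ifconfig-style line made up to match the addr: example above:
$ echo 'inet addr:192.168.1.133 Bcast:192.168.1.255' | awk '{print substr($2,6)}'
192.168.1.133
$ echo 'inet addr:192.168.1.133 Bcast:192.168.1.255' | awk '{print substr($0, index($0,$2))}'
addr:192.168.1.133 Bcast:192.168.1.255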

bash echo number of lines of file given in a bash variable without the file name

I have the following three constructs in a bash script:
NUMOFLINES=$(wc -l $JAVA_TAGS_FILE)
echo $NUMOFLINES" lines"
echo $(wc -l $JAVA_TAGS_FILE)" lines"
echo "$(wc -l $JAVA_TAGS_FILE) lines"
And all three produce identical output when the script is run:
121711 /home/slash/.java_base.tag lines
121711 /home/slash/.java_base.tag lines
121711 /home/slash/.java_base.tag lines
I.e. the name of the file is also echoed (which I don't want). Why do these scriptlets fail, and how should I output a clean:
121711 lines
?
An Example Using Your Own Data
You can avoid having your filename embedded in the NUMOFLINES variable by using redirection from JAVA_TAGS_FILE, rather than passing the filename as an argument to wc. For example:
NUMOFLINES=$(wc -l < "$JAVA_TAGS_FILE")
Explanation: Use Pipes or Redirection to Avoid Filenames in Output
The wc utility will not print the name of the file in its output if input is taken from a pipe or redirection operator. Consider these various examples:
# wc shows filename when the file is an argument
$ wc -l /etc/passwd
41 /etc/passwd
# filename is ignored when piped in on standard input
$ cat /etc/passwd | wc -l
41
# unusual redirection, but wc still ignores the filename
$ < /etc/passwd wc -l
41
# typical redirection, taking standard input from a file
$ wc -l < /etc/passwd
41
As you can see, the only time wc will print the filename is when it's passed as an argument, rather than as data on standard input. In some cases, you may want the filename to be printed, so it's useful to understand when it will be displayed.
wc can't get the filename if you don't give it one.
wc -l < "$JAVA_TAGS_FILE"
You can also use awk:
awk 'END {print NR,"lines"}' filename
Or
awk 'END {print NR}' filename
(applies on Mac, and probably other Unixes)
Actually, there is a problem with the wc approach: it does not count the last line if the file does not end with a newline character.
Use this instead
nbLines=$(cat -n file.txt | tail -n 1 | cut -f1 | xargs)
or even better (thanks gniourf_gniourf):
nblines=$(grep -c '' file.txt)
Note: The awk approach by chilicuil also works.
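To see the difference, here is a quick sketch (the contents are made up, and the last line deliberately lacks a trailing newline; output shown as on GNU/Linux):
$ printf 'one\ntwo\nthree' > file.txt
$ wc -l < file.txt
2
$ grep -c '' file.txt
3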
It's very simple:
NUMOFLINES=$(cat $JAVA_TAGS_FILE | wc -l )
or
NUMOFLINES=$(wc -l $JAVA_TAGS_FILE | awk '{print $1}')
I normally use the 'back tick' feature of bash
export NUM_LINES=`wc -l filename`
Note the 'tick' is the 'back tick', i.e. `, not the normal single quote.
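Note that this still puts the file name into NUM_LINES, which is what the question wanted to avoid; redirection works inside backticks too (a minimal tweak of the line above):
export NUM_LINES=`wc -l < filename`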
