Why does the shell report "command not found" when using cat? - shell

I'm processing a file line by line; my shell script is this:
check_vm_connectivity()
{
$res=`cat temp.txt` # this is line 10
i=0
for line in "$res"
do
i=$i+1
if [[ $i -gt 3 ]] ; then
continue
fi
echo "${line}"
done
}
temp.txt contains this:
+--------------------------------------+
| ID |
+--------------------------------------+
| cb91a52f-f0dd-443a-adfe-84c5c685d9b3 |
| 184564aa-9a7d-48ef-b8f0-ff9d51987e71 |
| f01f9739-c7a7-404c-8789-4e3e2edf314e |
| 825925cc-a816-4434-8b4b-a75301ddaefd |
When I run the script, it reports this:
vm_connectivity.sh: line 10: =+--------------------------------------+: command not found
Why? How do I fix this bug? Thank you~

May I ask why you are using
$res=`cat temp.txt`?
shouldn't it be
res=`cat temp.txt`

Variables in a shell are set with var=value rather than $var=value.
res is empty when you enter the function, so that line expands to
the empty string, followed by an equals sign, followed by the
contents of temp.txt. The shell then interprets that, and since a
command is terminated by a newline, and
=+--------------------------------------+ has the syntax of a
command rather than anything else, the shell tries to run it as such.
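You can reproduce this at an interactive prompt (a minimal demonstration, run in the directory containing temp.txt):
$ unset res
$ $res=`cat temp.txt`
bash: =+--------------------------------------+: command not found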
You want: res=$(cat temp.txt)
However, it looks like you're trying to output
the first three lines, in which case just
do
head -n3 temp.txt
though from the looks of it, you probably want all except the first three
lines:
tail -n +4 temp.txt
and if you're looking for just the uuids:
tail -n +4 temp.txt | awk '{print $2}'
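Putting it together, the original function could be rewritten along these lines (a sketch, assuming you want everything after the three header lines; while read processes the file one line at a time, which the quoted for line in "$res" never did):
check_vm_connectivity()
{
    i=0
    while IFS= read -r line
    do
        i=$((i + 1))              # arithmetic expansion; i=$i+1 would build the string "0+1"
        if [[ $i -le 3 ]] ; then  # skip the three header lines
            continue
        fi
        echo "${line}"
    done < temp.txt
}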

Related

Duplicate the output of bash script

Below is a piece of my bash script; I want to duplicate the output of that script.
This is how my script runs
#bash check_script -a used_memory
Output is: used_memory: 812632
Desired Output: used_memory: 812632 | used_memory: 812632
get_vals() {
metrics=`command -h $hostname -p $port -a $pass info | grep -w $opt_var | cut -d ':' -f2 > ${filename}`
}
output() {
get_vals
if [ -s ${filename} ];
then
val1=`cat ${filename}`
echo "$opt_var: $val1"
# rm $filename;
exit $ST_OK;
else
echo "Parameter not found"
exit $ST_UK
fi
}
But when I use echo "$opt_var: $val1 | $opt_var: $val1", the output becomes: | used_memory: 812632
$opt_var is an argument.
I had a similar problem when capturing results from cat with Windows-formatted text files. One way to circumvent this issue is to pipe your result to dos2unix, e.g.:
val1=`cat ${filename} | dos2unix`
Also, if you want to duplicate lines, you can use sed:
sed 's/^\(.*\)$/\1 | \1/'
Then pipe it to your echo command:
echo "$opt_var: $val1" | sed 's/^\(.*\)$/\1 | \1/'
The sed expression works like this:
's/<before>/<after>/' means that you want to substitute <before> with <after>
on the <before> side: ^.*$ is a regular expression meaning you get the entire line, ^\(.*\)$ is basically the same regex but you get the entire line and you capture everything (capturing is performed inside the \(\) expression)
on the <after> side: \1 | \1 means you write the 1st captured expression (\1), then the space character, then the pipe character, then the space character and then the 1st captured expression again
So it captures your entire line and duplicates it with a "|" separator in the middle.
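For example, with the value from the question:
$ echo "used_memory: 812632" | sed 's/^\(.*\)$/\1 | \1/'
used_memory: 812632 | used_memory: 812632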

Bash : How to check in a file if there are any word duplicates

I have a file with a 6-character word on every line and I want to check whether there are any duplicate words. I did the following but something isn't right:
#!/bin/bash
while read line
do
name=$line
d=$( grep '$name' chain.txt | wc -w )
if [ $d -gt '1' ]; then
echo $d $name
fi
done <$1
Assuming each word is on a new line, you can achieve this without looping:
$ cat chain.txt | sort | uniq -c | grep -v " 1 " | cut -c9-
You can use awk for that:
awk -F'\n' 'found[$1] {print}; {found[$1]++}' chain.txt
Set the field separator to newline, so that we look at the whole line. Then, if the line already exists in the array found, print the line. Finally, add the line to the found array.
Note: a line is only suppressed the first time it is seen, so if the same line appears, say, 6 times, it will be printed 5 times.
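For example, with a hypothetical chain.txt containing one duplicate:
$ printf 'abcdef\nghijkl\nabcdef\n' > chain.txt
$ awk -F'\n' 'found[$1] {print}; {found[$1]++}' chain.txt
abcdef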

setting variables in bash script

I have the following command:
echo "exec [loc_ver].[dbo].[sp_RptEmpCheckInOutMissingTime_2]" |
/home/mdland_tool/common/bin/run_isql_prod_rpt_2.sh |
grep "^| Row_2" |
awk '{print $15 }'
which only works with echo in the front. I tried to assign this line to a variable. I've tried quotation marks, parentheses, and backticks, with no luck.
Can anyone tell me the correct syntax for assigning this to a variable?
If you want to store more columns in an array, you should use this syntax (it also works if you have only one result):
#!/bin/bash
result=( $( echo "exec [loc_ver].[dbo].[sp_RptEmpCheckInOutMissingTime_2]" |
/home/mdland_tool/common/bin/run_isql_prod_rpt_2.sh |
grep "^| Row_2" |
awk '{print $15 }' ) )
result=$(echo "exec [loc_ver].[dbo].[sp_RptEmpCheckInOutMissingTime_2]" | /home/mdland_tool/common/bin/run_isql_prod_rpt_2.sh | grep "^| Row_2" | awk '{print $15}')
Since your question can be read two ways, here are answers to both:
First, imagine a simpler case:
echo asdf
If you want to execute this command and store the result somewhere, you do the following:
$(echo asdf)
For example:
variable=$(echo asdf)
# or
if [ "$(echo asdf)" == "asdf" ]; then echo good; fi
So generally, $(command) executes command and returns the output.
If you want to store the text of the command itself, you can, well, just do that:
variable='echo asdf'
# or
variable="echo asdf"
# or
variable=echo\ asdf
Different formats depending on the content of your command. So now once you have the variable storing the command, you can simply execute the command with the $(command) syntax. However, pay attention that the command itself is in a variable. Therefore, you would execute the command by:
$($variable)
As a complete example, let's combine both:
command="echo asdf"
result=$($command)
echo $result

Speed up bash filter function to run commands consecutively instead of per line

I have written the following filter as a function in my ~/.bash_profile:
hilite() {
export REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g")
while read line
do
echo $line | egrep "$1" | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
done
exit 0
}
to find lines of anything piped into it matching a regular expression, and highlight matches using ANSI escape codes on a VT100-compatible terminal.
For example, the following finds and highlights the strings bin, U or 1 which are whole words in the last 10 lines of /etc/passwd:
tail /etc/passwd | hilite "\b(bin|[U1])\b"
However, the script runs very slowly as each line forks an echo, egrep and sed.
In this case, it would be more efficient to do egrep on the entire input, and then run sed on its output.
How can I modify my function to do this? I would prefer to not create any temporary files if possible.
P.S. Is there another way to find and highlight lines in a similar way?
sed can do a bit of grepping itself: if you give it the -n flag (or #n instruction in a script) it won't echo any output unless asked. So
while read line
do
echo $line | egrep "$1" | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
done
could be simplified to
sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
EDIT:
Here's the whole function:
hilite() {
REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g");
sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
}
That's all there is to it - no while loop, reading, grepping, etc.
If your egrep supports --color, just put this in .bash_profile:
hilite() { command egrep --color=auto "$@"; }
(Personally, I would name the function egrep; hence the usage of command).
I think you can replace the whole while loop with simply
sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
because sed can read from stdin line-by-line so you don't need read
I'm not sure if running egrep and piping to sed is faster than using sed alone, but you can always compare using time.
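For example, using the test case from the question (a sketch; timings will vary):
$ time tail /etc/passwd | hilite "\b(bin|[U1])\b"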
Edit: added -n and p to sed to print only highlighted lines.
Well, you could simply do this:
egrep "$1" $line | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
But I'm not sure that it'll be that much faster ; )
Just for the record, this is a method using a temporary file:
hilite() {
export REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g")
export FILE=$2
if [ -z "$FILE" ]
then
export FILE=~/tmp
echo -n > $FILE
while read line
do
echo $line >> $FILE
done
fi
egrep "$1" $FILE | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
return $?
}
which also takes an optional file/pathname as the second argument, and still reads standard input when none is given, for cases like
cat /etc/passwd | hilite "\b(bin|[U1])\b"

results of wc as variables

I would like to use the counts output by 'wc' as variables. For example:
echo 'foo bar' > file.txt
echo 'blah blah blah' >> file.txt
wc file.txt
2 5 23 file.txt
I would like to have something like $lines, $words and $characters associated with the values 2, 5, and 23. How can I do that in bash?
In pure bash: (no awk)
a=($(wc file.txt))
lines=${a[0]}
words=${a[1]}
chars=${a[2]}
This works by using bash's arrays. a=(1 2 3) creates an array with elements 1, 2 and 3. We can then access individual elements with the ${a[index]} syntax.
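For example (bash arrays are zero-indexed):
$ a=(1 2 3)
$ echo "${a[1]}"
2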
Alternative: (based on gonvaled's solution)
read lines words chars filename <<< $(wc file.txt)
Or in sh:
a=$(wc file.txt)
lines=$(echo $a|cut -d' ' -f1)
words=$(echo $a|cut -d' ' -f2)
chars=$(echo $a|cut -d' ' -f3)
There are other solutions but a simple one which I usually use is to put the output of wc in a temporary file, and then read from there:
wc file.txt > xxx
read lines words characters filename < xxx
echo "lines=$lines words=$words characters=$characters filename=$filename"
lines=2 words=5 characters=23 filename=file.txt
The advantage of this method is that you do not need to create several awk processes, one for each variable. The disadvantage is that you need a temporary file, which you should delete afterwards.
Be careful: this does not work:
wc file.txt | read lines words characters filename
The problem is that piping to read runs read in a subshell; the variables are set there, so they are not visible in the calling shell.
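A quick demonstration of the problem:
$ wc file.txt | read lines words characters filename
$ echo "lines=$lines"
lines=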
Edit: adding solution by arnaud576875:
read lines words chars filename <<< $(wc file.txt)
This works without writing to a file (and does not have the pipe problem). It is bash-specific.
From the bash manual:
Here Strings
A variant of here documents, the format is:
<<<word
The word is expanded and supplied to the command on its standard input.
The key is the "word is expanded" bit.
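For example:
$ read a b c <<< "one two three"
$ echo "$b"
two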
lines=`wc file.txt | awk '{print $1}'`
words=`wc file.txt | awk '{print $2}'`
...
You can also store the wc result somewhere first and then parse it, if you're picky about performance :)
Just to add another variant --
set -- `wc file.txt`
lines=$1
words=$2
chars=$3
This obviously clobbers $* and related variables. Unlike some of the other solutions here, it is portable to other Bourne shells.
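For example, with the file.txt from the question:
$ set -- `wc file.txt`
$ echo "$1 / $2 / $3 / $4"
2 / 5 / 23 / file.txt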
I wanted to store the number of csv files in a variable. The following worked for me:
CSV_COUNT=$(ls ./pathToSubdirectory | grep ".csv" | wc -l | xargs)
xargs trims the whitespace from the wc output.
I ran this bash script from a different folder than the csv files; hence the pathToSubdirectory.
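A quick way to see what xargs does here (with no command given, xargs runs echo on its input, which normalizes the whitespace):
$ echo "   5   " | xargs
5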
You can assign output to a variable using command substitution (the command runs in a subshell):
$ x=$(wc some-file)
$ echo $x
1 6 60 some-file
Now, in order to get the separate variables, the simplest option is to use awk:
$ x=$(wc some-file | awk '{print $1}')
$ echo $x
1
declare -a result
result=( $(wc < file.txt) )
lines=${result[0]}
words=${result[1]}
characters=${result[2]}
echo "Lines: $lines, Words: $words, Characters: $characters"
