I want to insert/store the wc -l results into a bash array

I have the following command:
grep RJAVA | grep -v grep | wc -l ' | sort | cut -d':' -f2 | cut -d' ' -f2
After executing this, I get the following result:
10
0
0
10
I would like to put all these numbers into a bash array so that I can loop through
the array. I tried using xargs but couldn't make it work. Any suggestions?

this should work:
array=($( YOUR_ENTIRE_PIPED_COMMAND ))
BTW, the command above seems broken: you are missing the input to the first grep (either filenames, or a pipe into it).
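Once the values are in the array, looping over them is straightforward. A minimal sketch, reusing the array name from above (the "count:" label is just illustrative):
for n in "${array[@]}"; do
    echo "count: $n"
done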

you could try tr:
IN="10 0 0 10"
arr=$(echo $IN | tr " " "\n")
for x in $arr
do
echo "> [$x]"
done
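Note that arr above is a plain string that the for loop splits on whitespace; if you want an actual bash array instead, a sketch along the same lines (assuming the same space-separated input) would be:
IN="10 0 0 10"
read -r -a arr <<< "$IN"
for x in "${arr[@]}"; do
    echo "> [$x]"
done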

Related

Using xargs parameters as variables to compare two md5sums

I'm extracting two md5sums by using this code:
md5sum test{1,2} | cut -d' ' -f1-2
I'm receiving two md5sums as in the example below:
02eace9cb4b99519b49d50b3e44ecebc
d8e8fca2dc0f896fd7cb4cb0031ba249
Afterwards I'm not sure how to compare them. I have tried using xargs:
md5sum test{1,2} | cut -d' ' -f1-2 | xargs bash -c '$0 == $1'
However, it tries to execute the first md5sum as a command.
Any advice?
Try using a command substitution instead:
#!/bin/bash
echo 1 > file_a
echo 2 > file_b
echo 1 > file_c
file1=file_a
# try doing "file2=file_b" as well
file2=file_c
if [[ $(sha1sum "$file1" | cut -d ' ' -f1-2) = $(sha1sum "$file2" | cut -d ' ' -f1-2) ]]; then
echo same
else
echo different
fi
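Applied to the md5sums from the question, the same pattern would look something like this (test1 and test2 are the files from the question; cutting only field 1 keeps just the hash):
if [[ $(md5sum test1 | cut -d' ' -f1) = $(md5sum test2 | cut -d' ' -f1) ]]; then
    echo same
else
    echo different
fi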

How to calculate the total size of all files from ls -l using cut, eval, head, tr, tail, ls and echo

ls -l | tr -s " " | cut -d " " -f5
I tried the above code and got the following output.
158416
757249
574994
144436
520739
210444
398630
1219080
256965
684782
393445
157957
273642
178980
339245
How do I add these numbers? I'm stuck here. Please, no use of awk, perl, shell scripting, etc.
It's easiest to use du. Something like:
du -h -a -c | tail -n1
will give you the sum total. You can also use the -d argument to specify how deep the traversal should go, like:
du -d 1 -h -a -c | tail -n1
You will have to clarify what you mean by "no use of shell scripting" for anyone to come up with a more meaningful answer.
You can try it this way, but note that $((...)) is shell scripting:
eval echo $(( $(ls -l | tr -s ' ' | cut -d ' ' -f5 | tr '\n' '+') 0 ))
Here tr '\n' '+' joins the sizes with plus signs, the trailing 0 completes the expression, and the arithmetic expansion adds them up.
Don't parse the output of ls. It's not meant for parsing. Use du as in Martin Gergov's answer. Or du and find, or just du.
But if just adding numbers is the sole focus, (even if the input is iffy), here's the laziest possible method (first install num-utils):
ls -l | tr -s " " | cut -d " " -f5 | numsum
And there are other ways, see also: How can I quickly sum all numbers in a file?
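For example, one common approach from that question joins the numbers with + and hands them to bc. A sketch, assuming paste and bc are available (this goes outside the tools listed in the title, and tail -n +2 drops the "total" line that ls -l prints first):
ls -l | tail -n +2 | tr -s " " | cut -d " " -f5 | paste -sd+ - | bc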

Print Unique Values while using Do-While loop

I have a file named textfile.txt like below:
a 1 xxx
b 1 yyy
c 2 zzz
d 2 aaa
e 3 bbb
f 3 ccc
I am trying to extract the unique values from the second column. I have the code below:
while read LINE
do
compname=`echo ${LINE} | cut -d' ' -f2 | uniq`
echo -e "${compname}"
done < textfile.txt
It is displaying:
1
1
2
2
3
3
But I am looking for an output like:
1
2
3
I also tried another command: echo ${LINE} | cut -d' ' -f2 | sort -u | uniq
but still did not get the expected output.
Can anyone help me?
There's no need to loop; sort -u already processes the whole input.
cut -d' ' -f2 textfile.txt | sort -u
Maybe you wanted to get the output in the original order, showing the first occurrence only? You can use an associative array to remember which values have already been seen:
#! /bin/bash
declare -A seen
while read x ; do
[[ ${seen[$x]} ]] || printf '%s\n' "$x"
seen[$x]=1
done < <(cut -d' ' -f2 textfile.txt)
For the last occurrence only, change the last line to
done < <(cut -d' ' -f2 textfile.txt | tac) | tac
(i.e. the last occurrence is the first occurrence in the reversed order)
Just pipe the output of the loop to sort -u. There's no need for cut; the read command can handle this type of splitting.
while read -r _ compname _; do
echo "$compname"
done < textfile.txt | sort -u
Try moving the sort -u or sort | uniq after the done statement like this:
while read LINE;
do
compname=$(echo ${LINE} | cut -d' ' -f2)
echo "${compname}"
done < textfile.txt | sort -u

grep search with filename as parameter

I'm working on a shell script.
OUT=$1
here, the OUT variable is my filename.
I'm using grep search as follows:
l=`grep "$pattern " -A 15 $OUT | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
The issue is that the filename parameter I must pass is test.log. However, I have the following folder structure:
test.log
test.log.001
test.log.002
I would ideally like to pass the filename as test.log and have it search all the log files. I know the usual way is to use test.log.* on the command line, but I'm having difficulty replicating that in the shell script.
My efforts:
var=$'.*'
l=`grep "$pattern " -A 15 $OUT$var | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
However, I did not get the desired result.
Hopefully this will get you closer:
#!/bin/bash
for f in "${1}"*; do
    grep "$pattern" -A15 "$f"
done | grep -w "$i" | awk 'END{print $8}'
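Alternatively, you can let the shell expand the glob directly in the original pipeline by leaving the wildcard outside the quotes. A sketch using the same variables as in the question; -h suppresses the filename prefixes so the awk field positions stay the same as with a single file:
l=$(grep -h "$pattern " -A 15 "$OUT"* | grep -w "$i" | awk '{print $8}' | tail -1 | tr '\n' ',')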

bash scripting for mysqldump

I use the following code to dump each MySQL database into its own file (rather than one file, as with --all-databases) and put them in the /backup/mysql folder:
#!/bin/bash
mysqldump=`which mysqldump`
echo $mysqldump
mkdir /backup/mysql/$(date '+%d-%b-%Y')
echo "creating folder for current date done"
for line in "$(mysqlshow |cut -f1 -d"-" | cut -c 3- | cut -f1 -d" ")"
do
$mysqldump $line > /backup/mysql/$(date '+%d-%b-%Y')/"$line"
echo "$line\n"
done
I used the cut pipes to remove the dashes and the whitespace before and after the database name, and it gave me what I want.
Bash reports the problem at line 13, but with no more details. Any ideas what I'm doing wrong?
mysqlshow output format
+---------------------+
|      Databases      |
+---------------------+
| information_schema  |
| gitlabhq_production |
| mysql               |
| performance_schema  |
| phpmyadmin          |
| test                |
+---------------------+
Your script doesn't handle the first 3 lines or the last one, so your $line variable is invalid.
Solution
mysqlshow | tail -n +4 | head -n -1 | tr -d '| '
tail -n +4: start at line 4, skipping the three header lines (may need adjustment);
head -n -1: ignore the last line;
tr -d '| ': remove pipe and space characters.
Advice
quote your variables;
use $(...) instead of backticks;
don't use for i in $(ls *.mp3).
read How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Better solution
Instead of a for loop you should use a while loop with process substitution:
while read -r db; do
    echo "[$db]"
done < <(mysqlshow -u root -p | tail -n +4 | head -n -1 | tr -d ' |')
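Putting it together with the original script, a minimal sketch of the backup loop might look like this (assuming mysqlshow and mysqldump can authenticate without prompting, e.g. via ~/.my.cnf; the .sql extension is just illustrative):
#!/bin/bash
backup_dir="/backup/mysql/$(date '+%d-%b-%Y')"
mkdir -p "$backup_dir"
while read -r db; do
    # you may want to filter out information_schema / performance_schema here
    mysqldump "$db" > "$backup_dir/$db.sql"
    echo "dumped $db"
done < <(mysqlshow | tail -n +4 | head -n -1 | tr -d '| ')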
