Grep particular values from a file and assign them to variables - shell

Below is the file:
TMPQM>CSQN205I COUNT= 213, RETURN=00000000, REASON=00000000
CSQM401I ?TMPQM
QUEUE(Q1) TYPE(QLOCAL)
QSGDISP(QMGR) CURDEPTH(0)
CSQM401I ?TMPQM
QUEUE(Q2) TYPE(QLOCAL)
QSGDISP(QMGR) CURDEPTH(23)
CSQM401I ?TMPQM
QUEUE(Q3) TYPE(QLOCAL)
QSGDISP(QMGR) CURDEPTH(150)
CSQM401I ?TMPQM
My intention is to get the values:
Q=Q1
V=0
Q=Q2
V=23
Q=Q3
V=150

Can you use bash? Not the most efficient, but here's a bash script:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
count=1
for queue in `grep "QUEUE" input.txt`
do
# Strip the beginning up to (
q1="${queue#*(}"
# Strip the end from ) on
q2="${q1%%)*}"
q[$count]=$q2
count=$((count+1))
done
count=1
for value in `grep "CURDEPTH" input.txt`
do
# Strip the beginning up to (
v1="${value##*(}"
# Strip the end from ) on
v2="${v1%)*}"
v[$count]=$v2
count=$((count+1))
done
for index in 1 2 3
do
echo "Q=${q[$index]}"
echo "V=${v[$index]}"
done
IFS=$SAVEIFS
The IFS juggling around the for loops is just there to deal with looping over lines that contain spaces in bash.
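A hedged single-pass alternative that avoids the IFS dance entirely: read the file line by line and pair each QUEUE(...) with the CURDEPTH(...) that follows it (a sketch, assuming the input.txt layout shown above):
#!/bin/bash
# Sketch: pair each QUEUE(...) with the CURDEPTH(...) on the following line.
while read -r line; do
    case $line in
        QUEUE*)     q=${line#*(}; q=${q%%)*} ;;   # extract queue name
        *CURDEPTH*) v=${line##*(}; v=${v%)*}      # extract depth
                    echo "Q=$q"
                    echo "V=$v" ;;
    esac
done < input.txt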

A combination of pcregrep and sed will do the job. pcregrep supports multiline matches with the -M option:
-bash-3.2$ pcregrep -M -o '^QUEUE[(](.+?)[)].*\n.*CURDEPTH[(](\d+?)[)]$' trial.txt | sed -e 's/^QUEUE(\([^)]\+\)).*$/Q=\1/g' -e 's/.*CURDEPTH(\([^)]\+\))/V=\1/g'
Q=Q1
V=0
Q=Q2
V=23
Q=Q3
V=150

My solution:
grep -oP '(?<=QUEUE|CURDEPTH)\S+' input.txt | tr -d '()' | (while read NAME && read VAL; do echo $NAME=$VAL; done)
It's a little more flexible than @cravoori's solution because you can format the output easily. That line's output is...
Q1=0
Q2=23
Q3=150
...which you can source and then read directly using $Q1, $Q2, etc.
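Since every output line is a valid shell assignment, you can redirect it to a file and source it; a minimal usage sketch (vars.sh is a hypothetical file name):
grep -oP '(?<=QUEUE|CURDEPTH)\S+' input.txt | tr -d '()' | (while read NAME && read VAL; do echo $NAME=$VAL; done) > vars.sh
. vars.sh
echo "$Q2"   # prints 23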

Related

Find the occurrences of an element in array

arr=(7793 7793123 7793 37793 3214)
I'd like to find the occurrence of 7793. I tried: grep -o '7793' <<< $arr | wc -l
However, this also counts other elements that contain 7793 (e.g. 7793123, 37793)
printf '%s\n' "${arr[@]}" | grep -c '^7793$'
Explanation:
printf prints each item of the array on a new line
grep -c '^7793$' uses the start and end anchors to match 7793 exactly and outputs the count
With GNU grep (note the correct counting of elements containing newlines; refer to the documentation for a description of the options used):
arr=(7793 7793123 7793 37793 3214 7793$'\n'7793)
printf '%s\0' "${arr[@]}" | grep --null-data -cFxe 7793
Output:
2
This works because variables in bash cannot contain the NUL character.
You can use an anchored regex in this case:
grep -e '^7793$'
To make a bash script efficient (from a CPU/memory consumption point of view), avoid running sub-shells and external programs whenever possible. Hence, instead of using grep or any other program, here we can use a simple loop with a variable comparison and arithmetic:
#!/bin/bash
key=7793
arr=(7793 7793123 7793 37793 3214)
count=0
for i in "${arr[#]}"
do if [ "$i" = "$key" ]
then count=$((count+1))
fi
done
echo $count
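The same idea wrapped in a reusable function; count_matches is a hypothetical name and this is a sketch only:
count_matches() {
    local key=$1 count=0
    shift
    for i in "$@"; do
        [ "$i" = "$key" ] && count=$((count+1))
    done
    echo "$count"
}
count_matches 7793 "${arr[@]}"   # prints 2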

How can I save only a substring of file names from a directory without the file extension?

I have a directory that I'm reading from and I want to save only the date representation as a string.
I am close to getting it, although I know there is probably an easier way. Here is what I have so far:
#files are in the format of "THIS_20200420.csv" so I want only "20200420"
declare -a arr
declare -a arr2
FILES=test2/*.csv
for file in $FILES
do
arr=(${arr[*]} "${file##*/}")
done
for i in "${arr[#]}"
do
arr2+=$(echo $i | cut -c6-13)
done
for item in "${arr2[@]}"
do
echo $item
done
the output shows the array only having one element which is all the strings concatenated:
20200110202001202020021920200220202004202020042220200110202001202020021920200220202004202020042220200219202002202020042020200422
I'm bashing my head against my computer at this point.
arr=(
"THIS_20200420.csv"
"THIS_20200421.csv"
"THIS_20200422.csv"
"THIS_20200423.csv"
"THIS_20200424.csv"
"THIS_20200425.csv"
"THIS_20200426.csv"
"THIS_20200427.csv"
"THIS_20200428.csv"
"THIS_20200429.csv"
"THIS_20200430.csv" )
arr=( ${arr[@]//*_} )
arr=( ${arr[@]//.*} )
echo "arr: ${arr[@]}"
Explanation:
arr=( ${arr[@]//*_} ) matches everything up to the last '_' in each element and replaces it with the empty string.
arr=( ${arr[@]//.*} ) matches everything from the '.' onward in each element and replaces it with the empty string.
For more information on parameter expansion, a good reference is TLDP's guide on parameter expansion.
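To see what each expansion does, here is a step-by-step sketch on a single element (value taken from the array above):
f="THIS_20200420.csv"
echo "${f//*_}"   # -> 20200420.csv (everything up to the last _ removed)
f="${f//*_}"
echo "${f//.*}"   # -> 20200420 (everything from the . onward removed)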
Try this
declare -a arrayname=($(ls -1 test2/*.csv | grep -o '[0-9]*'))
Demo:
$ ls -1 *csv
THIS_20200420.csv
THIS_20200421.csv
THIS_20200422.csv
THIS_20200423.csv
THIS_20200424.csv
THIS_20200425.csv
THIS_20200426.csv
THIS_20200427.csv
THIS_20200428.csv
THIS_20200429.csv
THIS_20200430.csv
$ declare -a arrayname=($(ls -1 *csv | grep -o '[0-9]*'))
$ echo ${arrayname[@]}
20200420 20200421 20200422 20200423 20200424 20200425 20200426 20200427 20200428 20200429 20200430
$ echo ${arrayname[2]}
20200422
$
You could achieve this using a loop with awk:
$ for file in *.csv; do echo $file | awk -F '[^[:alnum:]]' '{print $2}'; done
The -F '[^[:alnum:]]' tells awk to use non-alphanumeric characters as the delimiter.
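A variant under the same assumptions that skips the shell loop and lets awk read all the names at once:
printf '%s\n' *.csv | awk -F '[^[:alnum:]]' '{print $2}'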
Another way to do this is to use bash shell parameter expansion to echo only the part of the filename you want. This obviously only works if your filenames have consistent formatting:
$ for file in *.csv; do echo "${file:5:8}"; done
I thought it would be nice to use bash parameter expansion to strip the unwanted prefix and suffix, but you can't have nested expansion (afaict), so this is the best I could come up with:
$ for file in *.csv; do echo "$(tmp=${file%.csv}; echo ${tmp#THIS_})"; done
Meet Cut! A good friend of Linux Users
for file in ./*.csv; do echo $file | cut -d "_" -f 2 | cut -d "." -f 1 ; done
This one line should do the trick!
Example:
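Running one of the filenames above through both cuts:
$ echo "THIS_20200420.csv" | cut -d "_" -f 2 | cut -d "." -f 1
20200420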
Use an array for the files assignment and parameter expansion.
#!/usr/bin/env bash
shopt -s nullglob
##: Save the files ending in *.csv in an array
## so it expands properly, variable assignment does not expand the glob *
files=(test2/*.csv)
##: Keep only the file names, stripping the leading pathname (longest match)
files=("${files[@]##*/}")
##: Strip the .csv extension
files=("${files[@]%.csv}")
##: Keep only the part after the first _ (shortest match from the beginning)
files=("${files[@]#*_}")
printf '%s ' "${files[@]}"
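Assuming test2/ holds the file names from the demo above, the script would print:
20200420 20200421 20200422 20200423 20200424 20200425 20200426 20200427 20200428 20200429 20200430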

Sed replace substring only if expression exists

In a bash script, I am trying to remove the directory name from filenames:
documents/file.txt
direc/file5.txt
file2.txt
file3.txt
So I first try to see whether there is a "/" and, if so, delete everything before it:
for i in **/*.scss *.scss; do
echo "$i" | sed -n '^/.*\// s/^.*\///p'
done
But it doesn't work for files in the current directory; for those it gives me a blank string. For the others I get:
file.txt
file5.txt
When you only want the filename, use basename instead of sed.
$ basename /path/to/file
file
See the basename man page for details.
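A quick sketch using the sample paths from the question; basename leaves bare filenames untouched, which is exactly the case the sed attempt missed:
for i in documents/file.txt direc/file5.txt file2.txt file3.txt; do
    basename "$i"
done
# prints file.txt, file5.txt, file2.txt, file3.txt, one per line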
Your sed attempt is basically fine, but you should print regardless of whether you performed a substitution; take out the -n and the p at the end. (Also there was an unrelated syntax error.)
Also, don't needlessly loop over all files.
printf '%s\n' **/*.scss *.scss |
sed 's%^.*/%%'
This can also be done with the awk utility.
Example:
echo "1/2/i.py" | awk 'BEGIN {FS="/"} {print $NF}'
output: i.py
Eventually, I did:
for i in **/*.scss *.scss; do
# for i in *.scss; do
# for i in _hm-globals.scss; do
name=${i##*/} # remove dir name
name=${name%.scss} # remove extension
name=`echo "$name" | sed -n "s/^_hm-//p"` # remove _hm-
if [[ $name = *"."* ]]; then
name=`echo "$name" | sed -n 's/\./-/p'` # replace . with -
fi
echo "$name" >&2
done
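For comparison, a sketch of the same steps in pure parameter expansion, which avoids spawning sed and also sidesteps the -n/p pitfall ($name goes empty whenever the sed pattern does not match):
for i in **/*.scss *.scss; do
    name=${i##*/}       # remove dir name
    name=${name%.scss}  # remove extension
    name=${name#_hm-}   # remove _hm- prefix, if present
    name=${name//./-}   # replace every . with -
    echo "$name" >&2
done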

Concatenate strings in bash

I have in a bash script:
for i in `seq 1 10`
do
read AA BB CC <<< $(cat file1 | grep DATA)
echo ${i}
echo ${CC}
SORT=${CC}${i}
echo ${SORT}
done
so "i" is a integer, and CC is a string like "TODAY"
I would like to get then in SORT, "TODAY1", etc
But I get "1ODAY", "2ODAY" and so
Where is the error?
Thanks
You should try
SORT="${CC}${i}"
Make sure your file does not contain "\r" characters; one would end up at the end of $CC.
This could well explain why you get "1ODAY".
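You can reproduce the effect in a terminal; the carriage return moves the cursor back to column one, so the digit overwrites the first letter:
$ CC=$'TODAY\r'; i=1; echo "${CC}${i}"
1ODAY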
Try including
| tr -d '\r'
after the cat command
try
for i in {1..10}
do
while read -r line
do
case "$line" in
*DATA* )
set -- $line
CC=$3
SORT=${CC}${i}
echo ${SORT}
esac
done <"file1"
done
Otherwise, show an example of file1 and your desired output
ghostdog is right: the -r option stops read from mangling backslashes and other potential horrors. Using an array makes picking out the field more pleasant:
for i in `seq 1 10`
do
read -ra line <<< "$(grep DATA file1)"
CC="${line[2]}"  # third field, matching the CC of "read AA BB CC"
echo ${i}
echo ${CC}
SORT=${CC}${i}
echo ${SORT}
done

Randomizing arg order for a bash for statement

I have a bash script that processes all of the files in a directory using a loop like
for i in *.txt
do
ops.....
done
There are thousands of files and they are always processed in alphanumerical order because of '*.txt' expansion.
Is there a simple way to randomize the order and still ensure that I process all of the files only once?
Assuming the filenames do not have spaces, just substitute the output of List::Util::shuffle.
for i in `perl -MList::Util=shuffle -e'$,=$";print shuffle<*.txt>'`; do
....
done
If filenames do have spaces but don't have embedded newlines or backslashes, read a line at a time.
perl -MList::Util=shuffle -le'$,=$\;print shuffle<*.txt>' | while read i; do
....
done
To be completely safe in Bash, use NUL-terminated strings.
perl -MList::Util=shuffle -0 -le'$,=$\;print shuffle<*.txt>' |
while read -r -d '' i; do
....
done
Not very efficient, but it is possible to do this in pure Bash if desired. sort -R does something like this, internally.
declare -a a # create a sparse, integer-indexed array
for i in *.txt; do
j=$RANDOM # find an unused slot
while [[ -n ${a[$j]} ]]; do
j=$RANDOM
done
a[$j]=$i # fill that slot
done
for i in "${a[#]}"; do # iterate in index order (which is random)
....
done
Or use a traditional Fisher-Yates shuffle.
a=(*.txt)
for ((i=${#a[*]}; i>1; i--)); do
    j=$((RANDOM % i))
    tmp=${a[$j]}
    a[$j]=${a[$((i-1))]}
    a[$((i-1))]=$tmp
done
for i in "${a[@]}"; do
....
done
You could pipe your filenames through the sort command:
ls | sort --random-sort | xargs ....
Here's an answer that relies on very basic awk functions, so it should be portable between unices (the BEGIN{srand()} seeds the generator so the order changes between runs):
ls -1 | awk 'BEGIN{srand()} {print rand()*100, $0}' | sort -n | awk '{print $2}'
EDIT:
ephemient makes a good point that the above is not space-safe. Here's a version that is:
ls -1 | awk 'BEGIN{srand()} {print rand()*100, $0}' | sort -n | sed 's/[0-9\.]* //'
If you have GNU coreutils, you can use shuf:
while read -r -d '' f
do
# some stuff with $f
done < <(shuf -ze *)
This will work with files with spaces or newlines in their names.
Off-topic Edit:
To illustrate SiegeX's point in the comment:
$ a=42; echo "Don't Panic" | while read line; do echo $line; echo $a; a=0; echo $a; done; echo $a
Don't Panic
42
0
42
$ a=42; while read line; do echo $line; echo $a; a=0; echo $a; done < <(echo "Don't Panic"); echo $a
Don't Panic
42
0
0
The pipe causes the while to be executed in a subshell and so changes to variables in the child don't flow back to the parent.
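If you do want the pipe form, bash 4.2+ offers shopt -s lastpipe, which runs the last segment of a pipeline in the current shell (it only takes effect when job control is off, as in non-interactive scripts); a minimal sketch:
#!/bin/bash
shopt -s lastpipe
a=42
echo "Don't Panic" | while read -r line; do a=0; done
echo $a   # prints 0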
Here's a solution with standard unix commands:
for i in $(ls); do echo $RANDOM-$i; done | sort | cut -d- -f 2-
Here's a Python solution, if it's available on your system. Note that random.shuffle shuffles in place and returns None, so shuffle first and then iterate:
import glob
import random

files = glob.glob("*.txt")
random.shuffle(files)  # shuffles in place, returns None
for file in files:
    print(file)
