Bash Script: Filter large files for value - bash

I have several config files with around 20k lines each and I need to get some values from them.
I know that each of the values I need starts with a specific word, "CONFNET", so I tried to get the values with a while loop that reads every line.
But unfortunately this is extremely inefficient and slow.
Is there a better solution to this?
for filename in ~/configs/*; do
    ip=$(cat "$filename" | strings | grep -i -A 7 "addnet_outside" | head -7 | grep "IP" | sed "s/IP//" | sed "s/=//" | sed -e 's/^[ \t]*//')
    hostname=$(cat "$filename" | strings | grep -a "Inst:" | head -1 | sed "s/Inst://" | sed -e 's/^[ \t]*//')
    while IFS= read -r line; do
        object_name=$(echo "$line" | strings | grep "CONFNET" | sed "s/CONFNET//" | awk '{print $1}')
        object_value=$(echo "$line" | strings | grep "CONFNET" | sed "s/CONFNET//" | awk '{print $3}' | sed -e 's/^[ \t]*//')
        if [ ! -z "$object_name" ] && [ ! -z "$object_value" ]
        then
            echo "$hostname" "->" "$object_name" ":" "$object_value"
        fi
    done < "$filename"
done
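A single awk pass per file would avoid starting grep/sed/awk for every input line. A minimal sketch, assuming each relevant line looks like "CONFNET <name> = <value>" (adjust the field numbers if the format differs):
for filename in ~/configs/*; do
    hostname=$(strings "$filename" | grep -a "Inst:" | head -1 | sed 's/Inst://; s/^[ \t]*//')
    # one awk process per file instead of several processes per line
    strings "$filename" | awk -v host="$hostname" '
        $1 == "CONFNET" && $2 != "" && $4 != "" { print host, "->", $2, ":", $4 }'
done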

Related

How to replace a word in a specific line where the replace pattern contains a variable?

I have the following files:
[root@f9044b5d9d1e aws-marketing-and-sales]# grep -HRe ".*\/common\/.*\${local.parent_folder_name}" *
ap-northeast-1/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
ap-northeast-2/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
ap-south-1/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
ap-southeast-1/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
I'm trying to replace the occurrences of "${local.parent_folder_name}" where the line contains "common" in all files with the parent folder name, like this:
for file in $(grep -HRe ".*\/common\/.*\${local.parent_folder_name}" *); do
    filename=$(echo $file | cut -d: -f1)
    parent=$(echo $file | rev | cut -d/ -f2 | rev)
    sed -i "/common/\${local.parent_folder_name}/$parent/g" $filename
done
This is the error that I get when running the above script:
sed: -e expression #1, char 9: unknown command: `$'
I've found some SO questions regarding this, but none of them have examples of using a variable as the replacement pattern.
I've also tried different separators (| , !) but to no avail.
Edit:
@moshe, it didn't work; this is the output:
grep -Re "/common\/.*\${local.parent_folder_name}" . | while read -r grep_line; do
    if [[ $grep_line == *"$0"* ]]; then
        continue
    fi
    echo $grep_line
    filename=$(echo $grep_line | cut -d: -f1)
    parent=$(echo $grep_line | rev | cut -d/ -f2 | rev)
    echo "parent: $parent"
    sed -i "/common/s?\${local.parent_folder_name}?$parent?g" $filename
done
./ca-central-1/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
parent: ${local.parent_folder_name}
sed: 1: "./ca-central-1/config/c ...": invalid command code .
./us-west-2/config/config/terragrunt.hcl: inline_policy = templatefile("${get_parent_terragrunt_dir()}/common/${local.environment}/config/${local.parent_folder_name}/inline-policy-s3.tpl", {
I tried replacing the "." with "8" in the first grep and it worked on some of the files, but not on all of them. Any idea?
What am I doing wrong?
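For what it's worth, the "invalid command code" message is what BSD/macOS sed prints when -i is given without a backup suffix; there -i needs an explicit (possibly empty) argument, while GNU sed takes a plain -i:
sed -i '' "/common/s?\${local.parent_folder_name}?$parent?g" "$filename"    # BSD/macOS sed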
I'm not sure what the parent variable should be (you will probably need to fix that), but the script could look like this:
#!/bin/bash
grep -re "/common\/.*\${local.parent_folder_name}" . | while read -r grep_line; do
    if [[ $grep_line == *"$0"* ]]; then
        continue
    fi
    echo $grep_line
    filename=$(echo $grep_line | cut -d: -f1)
    parent=$(echo $grep_line | rev | cut -d/ -f2 | rev)
    echo "parent: $parent"
    sed -i "/common/s?\${local.parent_folder_name}?$parent?g" $filename
done
Note that sed takes an address pattern (/common/) and then a command (a to append, d to delete, s to substitute).
We want to change only the lines that contain the pattern /common/,
so after the address we perform a regular search and replace: s/\${local.parent_folder_name}/$parent/g.
To make it more readable, I changed the separator from / to ?.
So the sed is:
sed -i "/common/s?\${local.parent_folder_name}?$parent?g" $filename

Shell Script do while flow

I have a script whose content is like this:
#!/bin/bash
DB_FILE='pgalldump.out'
echo $DB_FILE
DB_NAME=''
egrep -n "\\\\connect\ $DB_NAME" $DB_FILE | while read LINE
do
    DB_NAME=$(echo $LINE | awk '{print $2}')
    STARTING_LINE_NUMBER=$(echo $LINE | cut -d: -f1)
    STARTING_LINE_NUMBER=$(($STARTING_LINE_NUMBER+1))
    TOTAL_LINES=$(tail -n +$STARTING_LINE_NUMBER $DB_FILE | \
        egrep -n -m 1 "PostgreSQL\ database\ dump\ complete" | \
        head -n 1 | \
        cut -d: -f1)
    tail -n +$STARTING_LINE_NUMBER $DB_FILE | head -n +$TOTAL_LINES > /backup/$DB_NAME.sql
done
I know what it is doing, but I have a doubt about the flow of the while loop in this case: at the line egrep -n "\\\\connect\ $DB_NAME" $DB_FILE | while read LINE, does egrep run first, or the while loop? Because DB_NAME is empty at the start of the code.
Could anyone please explain the flow of the loop in this case?
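For what it's worth, both commands in a pipeline are started at the same time: the shell expands $DB_NAME (to the empty string) while building the egrep command line, then runs egrep and the while loop concurrently, with read picking up lines as egrep produces them. The loop also runs in a subshell, so the DB_NAME assigned inside it is not visible after done. A minimal sketch of that last point, using made-up input:
#!/bin/bash
DB_NAME=''
printf 'db1\ndb2\n' | while read -r LINE; do
    DB_NAME=$LINE                        # assigned inside the pipeline's subshell
    echo "inside the loop: $DB_NAME"
done
echo "after the loop: '$DB_NAME'"        # still empty in a default bash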

BASH: Remove newline for multiple commands

I need some help. I want the result to be
UP:N%:N%
but the current result is
UP:N%
:N%
This is the code:
#!/bin/bash
UP=$(pgrep mysql | wc -l);
if [ "$UP" -ne 1 ];
then
echo -n "DOWN"
else
echo -n "UP:"
fi
df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t && echo -n ":"
top -bn2 | grep "Cpu(s)" | \sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \awk 'END{print 100 - $1"%"}'
You can use command substitution for the first of those commands (note that you're creating a subshell this way):
echo -n $(df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t ):
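A minimal end-to-end sketch along the same lines, capturing each piece with command substitution and printing a single UP:N%:N% line at the end (the pipelines are the ones from the question; the DOWN case is left as a bare word, adjust as needed):
#!/bin/bash
if [ "$(pgrep mysql | wc -l)" -ne 1 ]; then
    state="DOWN"
else
    state="UP"
fi
disk=$(df -hl | grep 'sda1' | awk '{percent+=$5} END{print percent"%"}')
cpu=$(top -bn2 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk 'END{print 100 - $1"%"}')
echo "${state}:${disk}:${cpu}"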

Using bash command on a variable that will be used as reference for an array

Short and direct: basically, I want to keep the command in a variable ($command) and use that, instead of writing the command itself inside the while loop. So:
This works, but I think it's ugly:
#!/bin/bash
IFS=$'\n'
lsof=`which lsof`
whoami=`whoami`
while true ; do
    execution_array=($(${lsof} -iTCP -P 2> /dev/null | grep ':' | grep ${whoami} | awk '{print $9}' | cut -f2 -d'>' | sort | uniq ))
    for i in ${execution_array[*]}; do
        echo $i
    done
    sleep 1
done
unset IFS
This doesn't work (no output is produced), but I think it's less ugly:
#!/bin/bash
IFS=$'\n'
lsof=`which lsof`
whoami=`whoami`
command="${lsof} -iTCP -P 2> /dev/null | grep ':' | grep ${whoami} | awk '{print $9}' | cut -f2 -d'>' | sort | uniq"
while true ; do
    execution_array=($(command))
    for i in ${execution_array[*]}; do
        echo $i
    done
    sleep 1
done
unset IFS
This solved my problem:
#!/bin/bash
IFS=$'\n'
lsof=$(which lsof)
list_connections() {
    ${lsof} -iTCP -P 2> /dev/null | grep ':' | grep $(whoami) | awk '{print $9}' | cut -f2 -d'>' | sort | uniq
}
while true ; do
    execution_array=($(list_connections))
    for i in ${execution_array[*]}; do
        echo $i
    done
    sleep 1
done
unset IFS
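For reference, the middle version prints nothing because $(command) runs the shell builtin named command (with no arguments), not the contents of the variable; even $($command) would not help, since the pipes and quotes inside the string are not re-parsed after expansion. If you really want to keep the pipeline in a string, eval can run it, though the function above is generally the cleaner choice (note that $9 then has to be escaped so it reaches awk):
command="${lsof} -iTCP -P 2> /dev/null | grep ':' | grep $(whoami) | awk '{print \$9}' | cut -f2 -d'>' | sort | uniq"
execution_array=($(eval "$command"))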

BASH command: How to save output of bash command into variable and later pipeline into command

I have a question about how to store the output in a variable and then later pipe it into another command:
var=$(ps -auxc | grep -vE '^USER' )
#get top CPU
echo $var | sort -nr -k3 | head -1
#get top memory
echo $var | sort -nr -k4 | head -1
Make sure to use quotes in the assignment and when accessing the variable:
var="$(ps -auxc | grep -vE '^USER')"
#get top CPU
sort -nr -k3 <<< "$var" | head -1
#get top memory
sort -nr -k4 <<< "$var" | head -1
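The reason the quotes matter: an unquoted $var undergoes word splitting, so the newlines collapse into spaces and sort only ever sees one line. A quick illustration:
x=$'one\ntwo'
echo $x      # -> one two   (newlines lost, single line)
echo "$x"    # -> one, then two on separate lines (newlines preserved)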
I'm not sure if this would always work:
IFS= read -rd '' var < <(ps -auxc | grep -vE '^USER') ## -d '' may be -d $'\0'
echo -n "$var" | sort -nr -k3 | head -1
However, using readarray could:
readarray -t var < <(ps -auxc | grep -vE '^USER')
printf '%s\n' "${var[@]}" | sort -nr -k4 | head -1

Resources