How to process text in a shell script and assign to variables? - shell

I have a file, sample.text:
var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
:
:
varz=https://www.sample.com
I am sending this sample text file as input to a script. That script should split each line and assign the parts to different parameters, like
$varn= $var1,....$varn
$value=https://www.sample.com (all the variables' values)
I am trying the script below, but it is not working:
#!/bin/bash
for $1 in ( cat sample.txt );
do
echo $1 #var1=https://www.process.com
sed 's/=/\n/g' $1 | awk 'NR%2==0'
done
The main aim is to assign all the URLs to one variable and all the variable names to another, and then process the file.

If sample.text already contains your variable assignments, e.g.
var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
and you want access to var1, var2, ..., varn, then you are making things difficult for yourself by trying to read and parse sample.text instead of simply sourcing it with '.' or source.
For example, given sample.text containing:
$ cat sample.text
var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
varz=https://www.sample.com
You need only source the file to access the variables, e.g.
#!/bin/bash
. sample.text || {
    printf "error sourcing sample.text\n"
    exit 1
}
printf "%s\n" $var{1..3} $varz
Example Use/Output
$ bash source_sample.sh
https://www.process.com
https://www.hp.com
http://www.google.com
https://www.sample.com
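The question's stated aim was also to get all the variable names into one place and all the URLs into another. If sourcing the file is undesirable (for example, it comes from an untrusted source), a minimal sketch is to split each name=value line into two parallel arrays; the here-document below stands in for sample.text, so with the real file you would replace it with `done < sample.text`:

```shell
#!/bin/bash
# Sketch: parse name=value lines into parallel arrays instead of sourcing.
names=() values=()
while IFS='=' read -r name value; do
    [ -n "$name" ] || continue          # skip blank lines
    names+=("$name")
    values+=("$value")
done <<'EOF'
var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
varz=https://www.sample.com
EOF
printf 'all names:  %s\n' "${names[*]}"
printf 'all values: %s\n' "${values[*]}"
```

This avoids executing anything from the file, at the cost of losing the individual var1, var2, ... names.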
Look things over and let me know if you have further questions.

Related

How to assign each of multiple lines in a file as different variables?

This is probably a very simple question. I looked at other answers but couldn't come up with a solution. I have a 365-line date file, like this:
01-01-2000
02-01-2000
I need to read this file line by line and assign each day to a separate variable, like this:
d001=01-01-2000
d002=02-01-2000
I tried while read commands but couldn't get them to work. Doing it one by one takes a lot of time. How can I do it quickly?
Trying to create individually named variables is a waste of time and not really supported. Better to use an associative array:
#!/bin/bash
declare -A array
while read -r line; do
    printf -v key 'd%03d' $((++c))
    array[$key]=$line
done < file
Output
for i in "${!array[@]}"; do echo "key=$i value=${array[$i]}"; done
key=d001 value=01-01-2000
key=d002 value=02-01-2000
Assumptions:
an array is acceptable
array index should start with 1
Sample input:
$ cat sample.dat
01-01-2000
02-01-2000
03-01-2000
04-01-2000
05-01-2000
One bash/mapfile option:
unset d # make sure variable is not currently in use
mapfile -t -O1 d < sample.dat # load each line from file into separate array location
This generates:
$ typeset -p d
declare -a d=([1]="01-01-2000" [2]="02-01-2000" [3]="03-01-2000" [4]="04-01-2000" [5]="05-01-2000")
$ for i in "${!d[@]}"; do echo "d[$i] = ${d[i]}"; done
d[1] = 01-01-2000
d[2] = 02-01-2000
d[3] = 03-01-2000
d[4] = 04-01-2000
d[5] = 05-01-2000
In OP's code, references to $d001 now become ${d[1]}.
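As a quick sketch of that substitution, with the dates inlined in a here-document instead of read from sample.dat:

```shell
#!/bin/bash
unset d                   # make sure the variable is not currently in use
mapfile -t -O1 d <<'EOF'
01-01-2000
02-01-2000
EOF
echo "${d[1]}"            # what used to be $d001
echo "${d[2]}"            # what used to be $d002
```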
A quick one-liner would be:
eval $(awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file)
eval makes the shell variables known inside your script or shell. Use echo $d000 to show the first of the newly defined variables. There should be no shell special characters (like * and $) inside your_file. Remove the eval $( ) to see the raw output of the awk command. The \"-quoted %s allows spaces in the variable values; if you don't have any spaces in your_file, you can remove the \" before and after %s.
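As a sanity check before running the eval, the awk part can be run alone to preview the assignments it would make; this sketch feeds it a two-line here-document in place of your_file:

```shell
# Preview the assignments the one-liner would eval (no variables are set here).
awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n", cnt, $0; cnt++}' <<'EOF'
01-01-2000
02-01-2000
EOF
# prints:
# d000="01-01-2000"
# d001="02-01-2000"
```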

Reading filenames from a structured file to a bash script

I have a file with a structured list of filenames (file1.sh, file2.sh, ...) and would like to loop over those file names inside a bash script.
cat /home/flora/logs/9681-T13:17:07.091363777.org
%rec: dynamic
Ptrn: Gnu
File: /home/flora/comint.rc
+ /home/flora/engine.rc
+ /home/flora/playa.rc
+ /home/flora/edva.rc
+ /home/flora/dyna.rc
+ /home/flora/lin.rc
Have started with
while read -r fl; do
    echo "$fl" | grep -oE '[/].+'
done < "$logfl"
But I want to be more specific: match the File: line, then continue reading the rest using + as a continuation character.
bash doesn't impose a limit on the number of variables (other than memory). That said, I would start by processing the list of lines one by one:
#!/bin/bash
while read -r _ f
do
    process "$f"
done
where process is whatever function you need to implement.
If you want the values in variables, use an array like this:
#!/bin/bash
while read -r _ f
do
    files+=("$f")
done
In either case, pass the input file to the script with:
your_script < /home/flora/logs/27043-T13:09:44.893003954.log
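To honor the File:/+ structure specifically, rather than taking the last field of every line, one sketch is to accept only lines whose first token is File: or + (tag names taken from the log excerpt above; the here-document stands in for the real log, which would be read with `done < "$logfl"`):

```shell
#!/bin/bash
# Collect only "File: <path>" lines and their "+" continuation lines.
files=()
while read -r tag path; do
    case $tag in
        File:|+) files+=("$path") ;;   # keep the path, drop the marker
    esac
done <<'EOF'
%rec: dynamic
Ptrn: Gnu
File: /home/flora/comint.rc
+ /home/flora/engine.rc
+ /home/flora/playa.rc
EOF
printf '%s\n' "${files[@]}"
```

Lines such as %rec: and Ptrn: fall through the case statement and are ignored.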

Set variable equal to value based on match in bash

I have a script where I want to set a variable equal to a specific value based on a match from a file (in bash).
For example:
File in .csv contains:
Name,ID,Region
Device1,1,USA
Device2,2,UK
I want to declare variables at the beginning like this:
region1=USA
regions2=UK
region3=Ireland
etc...
Then, whilst reading the csv, I need to match the Region column's value against the global variables set at the beginning of the file, so I can use this in an API call. So if a device in the csv has a region of USA, I should be able to use region1 during the update call in the API. I want to use a while loop to iterate over the csv file line by line and update each device's region.
Does anyone maybe know how I can achieve this? Any help would be greatly appreciated.
PS: This is not a homework assignment before anyone asks :-)
Would you please try the following:
declare -A regions             # an associative array
# declare your variables
region1=USA
region2=UK
region3=Ireland
# associate each region's name with the region's code
for i in "${!region@}"; do     # expands to region1, region2, region3
    varname="$i"
    regions[${!varname}]="$i"  # maps "USA" to "region1", "UK" to "region2", ...
done
while IFS=, read -r name id region; do
    ((nr++)) || continue       # skips the header line
    region_group="${regions[$region]}"
    echo "region=$region, region_group=$region_group"   # for a test
    # call API here
done < file.csv
Output:
region=USA, region_group=region1
region=UK, region_group=region2
region=Ireland, region_group=region3
BTW if the variable declaration at the beginning is under your control, it will be easier to say:
# declare an associative array
declare -A regions=(
    [USA]="region1"
    [UK]="region2"
    [Ireland]="region3"
)
while IFS=, read -r name id region; do
    ((nr++)) || continue       # skip the header line
    region_group="${regions[$region]}"
    echo "region=$region, region_group=$region_group"   # for a test
    # call API here
done < file.csv
Use awk to create a lookup table. eg:
$ cat a.sh
#!/bin/sh
cat << EOD |
region1=USA
region2=UK
region3=Ireland
EOD
{
awk 'NR==FNR { lut[$2] = $1; next}
{$3 = lut[$3]} 1
' FS== /dev/fd/3 FS=, OFS=, - << EOF
Name,ID,Region
Device1,1,USA
Device2,2,UK
EOF
} 3<&0
$ ./a.sh
Name,ID,
Device1,1,region1
Device2,2,region2
or, slightly less obscure (and more portable: just use regular files; I'm not sure on which platforms /dev/fd/3 is actually valid):
$ cat input
Name,ID,Region
Device1,1,USA
Device2,2,UK
$ cat lut
region1=USA
region2=UK
region3=Ireland
$ awk 'NR==FNR{lut[$2] = $1; next} {$3 = lut[$3]} 1' FS== lut FS=, OFS=, input
Name,ID,
Device1,1,region1
Device2,2,region2

read environment variables from whitelist and pipe to sed

I've got a whitelist file of env variables that looks like this:
ENV_A
ENV_B
ENV_C
I have another file filetoreplace.txt that (let's say) looks like this:
asdfasdfasdfasdf{{ENV_A}}asdfasdfasdfasdfasdf
adsfasdf
asdf{{ENV_B}}
asdf{{ENV_A}}
adsfasdfdsfdf{{ENV_C}}
I want to come up with a (one line) sed command that reads the variable names from the file and does the replacement.
For this environment
ENV_A=1
ENV_B=2
ENV_C=3
This would be the output given the above file
asdfasdfasdfasdf1asdfasdfasdfasdfasdf
adsfasdf
asdf2
asdf1
adsfasdfdsfdf3
I think it's kind of similar to this:
while read -r var; do
    sed -i 's/{{'"$var"'}}/'"${!var}"'/g' filetoreplace.txt
done < whitelist
Looking for a bit of code golf to do this in a one-liner here.
Here are two impractical variants based on your idea with $var/${!var}.
First, the obvious one:
vars=( $(cat whitelist) ); for var in ${vars[@]}; { sed "s|{{$var}}|${!var}|g" -i filetoreplace.txt; }
Second, the monstrous one:
vars=( $(cat filetoreplace.txt) ); for var in ${vars[@]}; { env=${var//[!ENV_ABC]/}; [[ $env ]] && val=${!env} || var=; echo ${var//"{{$env}}"/$val}; }
This is a relatively simple alternative solution:
#!/usr/bin/env bash
bash -c "source whitelist; cat << EOF
$(perl -pe 's/\{(\{.*?})}/\$$1/' filetoreplace.txt)
EOF
"
The aim of the perl script is to transform {{ENV_A}} to ${ENV_A}
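A pure-bash take on the same whitelist idea (a sketch with made-up file contents; values containing sed metacharacters such as / or & would need escaping) builds one sed script from the whitelist and then runs sed a single time:

```shell
#!/bin/bash
ENV_A=1 ENV_B=2                            # hypothetical values
printf 'ENV_A\nENV_B\n' > whitelist        # stand-in for the real whitelist
printf 'x{{ENV_A}}y\nz{{ENV_B}}\n' > filetoreplace.txt
script=
while IFS= read -r var; do
    script+="s/{{$var}}/${!var}/g;"        # one substitution per listed name
done < whitelist
sed "$script" filetoreplace.txt            # prints "x1y" then "z2"
```

Running sed once over the file, instead of once per variable, also avoids rewriting the file repeatedly with -i.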

bash script to modify and extract information

I am creating a bash script to modify and summarize information with grep and sed. But it gets stuck.
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
#Extract lines starting with ">#HWI"
ONLY=`grep -v ^\>#HWI`
#replaces A and G with R in lines
ONLYR=`sed -e s/A/R/g -e s/G/R/g $ONLY`
grep R $ONLYR | wc -l
The correct way to write a shell script to do what you seem to be trying to do is:
awk '
!/^>#HWI/ {
    gsub(/[AG]/,"R")
    if (/R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
Just put that in the file myscript.sh and execute it as you do today.
To be clear - the bulk of the above code is an awk script, the shell script part is the first and last lines where the shell just calls awk and passes it the input file names.
If you WANT to have intermediate variables then you can create/print them with:
awk '
!/^>#HWI/ {
    only = $0
    onlyR = only
    gsub(/[AG]/,"R",onlyR)
    print "only:", only
    print "onlyR:", onlyR
    if (onlyR ~ /R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
The above will work robustly, portably, and efficiently on all UNIX systems.
First of all, as @fedorqui commented, you're not providing grep with a source of input against which it will perform line matching.
Second, there are some problems in your script, which will result in unwanted behavior in the future, when you decide to manipulate some data:
Store matching lines in an array, or a file from which you'll later read values. The variable ONLY is not the right data structure for the task.
By convention, environment variables (PATH, EDITOR, SHELL, ...) and internal shell variables (BASH_VERSION, RANDOM, ...) are fully capitalized. All other variable names should be lowercase. Since
variable names are case-sensitive, this convention avoids accidentally overriding environmental and internal variables.
Here's a better version of your script, considering these points, but with an open question regarding what you were trying to do in the last line : grep R $ONLYR | wc -l :
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
input_file=$1
# Read lines not matching the provided regex, from $input_file
mapfile -t only < <(grep -v '^>#HWI' "$input_file")
# replace A and G with R in the lines
for ((i=0; i<${#only[@]}; i++)); do
    only[i]="${only[i]//[AG]/R}"
done
# DEBUG
printf '%s\n' "Here are the lines, after replace:"
printf '%s\n' "${only[@]}"
# I'm not sure what you were trying to do here. Am I guessing right that you wanted
# to count the number of R's in ALL lines?
# grep R $ONLYR | wc -l
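If the intent of that commented-out last line was to count the lines containing at least one R after the replacement, a minimal sketch on made-up sample lines would be:

```shell
only=( "RRTT" "TTTT" "TARG" )            # hypothetical sample lines
only=( "${only[@]//[AG]/R}" )            # the same A/G -> R replacement
printf '%s\n' "${only[@]}" | grep -c R   # lines containing an R: prints 2
```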
