How to find content in a file and replace the adjacent value - bash
Using bash, how do I find a string and update the string next to it? For example, passing the value
my.site.com|test2.spin:80
and given proxy_pass.map containing
my.site2.com test2.spin:80
my.site.com test.spin:8080;
Expected output is to update proxy_pass.map with
my.site2.com test2.spin:80
my.site.com test2.spin:80;
I tried using awk:
awk '{gsub(/^my\.site\.com\s+[A-Za-z0-9]+\.spin:8080;$/,"my.site2.comtest2.spin:80"); print}' proxy_pass.map
but it does not seem to work. Is there a better way to approach the problem?
One awk idea, assuming spacing needs to be maintained:
awk -v rep='my.site.com|test2.spin:80' '
BEGIN { split(rep,a,"|") # split "rep" variable and store in
site[a[1]]=a[2] # associative array
}
$1 in site { line=$0 # if 1st field is in site[] array then make copy of current line
match(line,$1) # find where 1st field starts (in case 1st field does not start in column #1)
newline=substr(line,1,RSTART+RLENGTH-1) # save current line up through matching 1st field
line=substr(line,RSTART+RLENGTH) # strip off 1st field
match(line,/[^[:space:];]+/) # look for string that does not contain spaces or ";" and perform replacement, making sure to save everything after the match (";" in this case)
newline=newline substr(line,1,RSTART-1) site[$1] substr(line,RSTART+RLENGTH)
$0=newline # replace current line with newline
}
1 # print current line
' proxy_pass.map
This generates:
my.site2.com test2.spin:80
my.site.com test2.spin:80;
If the input looks like:
$ cat proxy_pass.map
my.site2.com test2.spin:80
my.site.com test.spin:8080;
This awk script generates:
my.site2.com test2.spin:80
my.site.com test2.spin:80;
NOTES:
if multiple replacements need to be performed, I'd suggest placing them in a file and having awk process said file first
the 2nd match() is hardcoded based on OP's example; depending on actual file contents it may be necessary to expand the regex used in the 2nd match()
once satisfied with the result, the original input file can be updated in a couple of ways ... a) if using GNU awk then awk -i inplace -v rep.... or b) save the result to a temp file and then mv the temp file to proxy_pass.map
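The multi-replacement idea from the first note can be sketched as follows. This is a simplified variant of the script above (it assumes the 1st field starts in column 1 and that the only trailing decoration is an optional ";"); reps.txt is a hypothetical replacements file with one "key value" pair per line:

```shell
# hypothetical replacements file: one "key value" pair per line
printf 'my.site.com test2.spin:80\n' > reps.txt
printf 'my.site2.com   test2.spin:80\nmy.site.com    test.spin:8080;\n' > proxy_pass.map

awk '
NR==FNR     { site[$1]=$2; next }               # 1st file: load replacements
$1 in site  { match($0,/[[:space:]]+/)          # keep the original run of spaces
              $0 = $1 substr($0,RSTART,RLENGTH) site[$1] ($0 ~ /;$/ ? ";" : "") }
1
' reps.txt proxy_pass.map
```

The NR==FNR block only fires while the first file is being read, which is the standard awk idiom for loading a lookup table before processing the data file.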
If the number of spaces between the columns is not significant, a simple
proxyf=proxy_pass.map
tmpf=$$.txt
awk '$1 == "my.site.com" { $2 = "test2.spin:80;" } {print}' <$proxyf >$tmpf && mv $tmpf $proxyf
should do. If you need the columns to be lined up nicely, you can replace the print with a suitable printf .... statement.
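For example, a sketch of the printf variant (the 20-character column width is an arbitrary choice, not anything the question requires):

```shell
printf 'my.site2.com test2.spin:80\nmy.site.com test.spin:8080;\n' > proxy_pass.map

# left-align the 1st column in a 20-character field (width is an arbitrary choice)
awk '$1 == "my.site.com" { $2 = "test2.spin:80;" }
     { printf "%-20s %s\n", $1, $2 }' proxy_pass.map
```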
With your shown samples and attempts, please try the following awk code. A shell variable named var holds the value my.site.com|test2.spin:80 and is passed into the awk program, where it becomes the awk variable var1.
In the BEGIN section, split() breaks var1 into the array arr on the | separator, with num holding the number of resulting elements. A for loop then walks arr two elements at a time, building an associative array arr2 whose keys are the odd-indexed items and whose values are the items that follow them (so each pair is key, then value).
In the main block, if $1 is present in arr2 its stored value is printed; otherwise $2 is printed as-is, per the requirement.
##Shell variable named var is being created here...
var="my.site.com|test2.spin:80"
awk -v var1="$var" '
BEGIN{
num=split(var1,arr,"|")
for(i=1;i<=num;i+=2){
arr2[arr[i]]=arr[i+1]
}
}
{
print $1,(($1 in arr2)?arr2[$1]:$2)
}
' Input_file
OR, in case you want to maintain the spaces between the 1st and 2nd field(s), try the following slight tweak of the above code. Written and tested with your shown samples only.
awk -v var1="$var" '
BEGIN{
num=split(var1,arr,"|")
for(i=1;i<=num;i+=2){
arr2[arr[i]]=arr[i+1]
}
}
{
match($0,/[[:space:]]+/)
print $1 substr($0,RSTART,RLENGTH) (($1 in arr2)?arr2[$1]:$2)
}
' Input_file
NOTE: This program can take multiple values separated by | in the shell variable to be passed and checked in the awk program, but it assumes they come in key|value|key|value... order only.
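For instance, a quick sketch with two key|value pairs in the shell variable (the second pair, my.site2.com|other.spin:9090, is made up for this demo):

```shell
printf 'my.site2.com test2.spin:80\nmy.site.com test.spin:8080;\n' > Input_file

# two key|value pairs; the second pair is made up for the demo
var="my.site.com|test2.spin:80;|my.site2.com|other.spin:9090"
awk -v var1="$var" '
BEGIN{
  num=split(var1,arr,"|")
  for(i=1;i<=num;i+=2){ arr2[arr[i]]=arr[i+1] }
}
{ print $1,(($1 in arr2)?arr2[$1]:$2) }
' Input_file
```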
#!/bin/sh -x
f1=$(echo "my.site.com|test2.spin:80" | cut -d'|' -f1)
f2=$(echo "my.site.com|test2.spin:80" | cut -d'|' -f2)
echo "${f1}%${f2};" >> proxy_pass.map
tr '%' '\t' < proxy_pass.map >> p1
cat > ed1 <<EOF
$
-1
d
wq
EOF
ed -s p1 < ed1
mv -v p1 proxy_pass.map
rm -v ed1
This might work for you (GNU sed):
<<<'my.site.com|test2.spin:80' sed -E 's#\.#\\.#g;s#^(\S+)\|(\S+)#/^\1\\b/s/\\S+/\2/2#' |
sed -Ef - file
Build a sed script from the input arguments and apply it to the input file.
The input arguments are first prepared so that their metacharacters (in this case the .'s) are escaped.
Then the first argument is used to prepare a match command and the second is used as the replacement value in a substitution command.
The result is piped into a second sed invocation that takes the sed script and applies it to the input file.
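To see the intermediate script that the first sed invocation builds (GNU sed and a bash here-string assumed), run that stage on its own:

```shell
# inspect the sed script generated by the first invocation (GNU sed, bash here-string)
<<<'my.site.com|test2.spin:80' sed -E 's#\.#\\.#g;s#^(\S+)\|(\S+)#/^\1\\b/s/\\S+/\2/2#'
# prints: /^my\.site\.com\b/s/\S+/test2\.spin:80/2
```

That generated command says: on lines starting with my.site.com (at a word boundary), replace the 2nd run of non-space characters with test2.spin:80.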
Related
Grep a line from a file and replace a substring and append the line to the original file in bash?
This is what I want to do. For example, my file contains many lines, say:
ABC,2,4
DEF,5,6
GHI,8,9
I want to copy the second line, replace a substring EF (all occurrences) with XY, and add this line back to the file so the file looks like this:
ABC,2,4
DEF,5,6
GHI,8,9
DXY,5,6
How can I achieve this in bash?
EDIT: I want to do this in general and not necessarily for the second line. I want to grep EF, and do the substitution in whatever line is returned.
Here's a simple Awk script.
awk -F, -v pat="EF" -v rep="XY" 'BEGIN { OFS=FS }
$1 ~ pat { x = $1; sub(pat, rep, x); y = $0; sub($1, x, y); a[++n] = y }
1
END { for(i=1; i<=n; i++) print a[i] }' file
The -F , says to use comma as the input field separator (internal variable FS), and in the BEGIN block we also set that as the output field separator (OFS). If the first field matches the pattern, we copy the first field into x, substitute pat with rep, then substitute the first field of the whole line $0 with the new result, and append it to the array a. 1 is shorthand for "print the current input line". Finally, in the END block, we output the values we have collected into a.
This could be somewhat simplified by hardcoding the pattern and the replacement, but I figured it's more useful to make it modular so that you can plug in whatever values you need.
While this all could be done in native Bash, it tends to get a bit tortured; spending the 30 minutes or so that it takes to get a basic understanding of Awk will be well worth your time. Perhaps tangentially, see also "while read loop extremely slow compared to cat, why?" which explains part of the rationale for preferring an external tool like Awk over a pure Bash solution.
You can use the sed command:
sed '
/EF/H        # copy all matching lines
${           # on the last line
  p          # print it
  g          # paste the copied lines
  s/EF/XY/g  # replace all occurrences
  s/^\n//    # get rid of the extra newline
}'
As a one-liner:
sed '/EF/H;${p;g;s/EF/XY/g;s/^\n//}' file.csv
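A quick check of the one-liner on the sample data (output goes to stdout; redirect to a new file to keep it):

```shell
printf 'ABC,2,4\nDEF,5,6\nGHI,8,9\n' > file.csv

# the matching line is collected in the hold space and re-emitted at the end with EF -> XY
sed '/EF/H;${p;g;s/EF/XY/g;s/^\n//}' file.csv
```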
If ed is available/acceptable, something like:
#!/bin/sh
ed -s file.txt <<-'EOF'
$kx
g/^.*EF.*,.*/t'x
'x+;$s/EF/XY/
,p
Q
EOF
Or in one line:
printf '%s\n' '$kx' "g/^.*EF.*,.*/t'x" "'x+;\$s/EF/XY/" ,p Q | ed -s file.txt
Change Q to w if in-place editing is needed. Remove the ,p to silence the output.
Using BASH:
#!/bin/bash
src="${1:-f.dat}"
rep="${2:-XY}"
declare -a new_lines
while read -r line ; do
  if [[ "$line" == *EF* ]] ; then
    new_lines+=("${line/EF/${rep}}")
  fi
done <"$src"
printf "%s\n" "${new_lines[@]}" >> "$src"
Contents of f.dat before:
ABC,2,4
DEF,5,6
GHI,8,9
Contents of f.dat after:
ABC,2,4
DEF,5,6
GHI,8,9
DXY,5,6
Following on from the great answer by @tripleee, you can create a variation that uses a single call to sub() by outputting all records before the substitution is made, then adding the updated record to the array to be output with the END rule, e.g.
awk -F, '1; /EF/ {sub(/EF/,"XY"); a[++n]=$0} END {for(i=1;i<=n;i++) print a[i]}' file
Example Use/Output
An expanded input, based on your answer to my comment below the question, where all occurrences of EF are replaced with XY in all records, e.g.
$ cat file
ABC,2,4
DEF,5,6
GHI,8,9
EFZ,3,7
Use and output:
$ awk -F, '1; /EF/ {sub(/EF/,"XY"); a[++n]=$0} END {for(i=1;i<=n;i++) print a[i]}' file
ABC,2,4
DEF,5,6
GHI,8,9
EFZ,3,7
DXY,5,6
XYZ,3,7
Let me know if you have questions.
how to replace a string at a specific position in a csv file using bash
I have several .csv files and each csv file has lines which look like this:
AA,1,CC,1,EE
AA,FF,6,7,8,9
BB,6,7,8,99,AA
I am reading through each line of each csv file and trying to replace the 4th field of each line beginning with AA with "ZZ".
Expected output:
AA,1,CC,ZZ,EE
AA,FF,6,ZZ,8,9
BB,6,7,8,99,AA
The variable "y" does contain the 4th field ("1" and "7" respectively), but when I use the sed command it replaces the first occurrence of that value on the line with "ZZ". How do I modify my code to replace only the 4th field of each line, irrespective of what value it holds? My code looks like this:
file="name of file which contains list of all csv files"
for i in $(cat "$file"); do
  while IFS= read -r line; do
    if [[ $line == AA* ]] ; then
      y=$(echo "$line" | cut -d',' -f 4)
      sed -i "s/${y}/ZZ/" $i
    fi
  done < $i
done
Using sed, you can also direct that only the 4th field of a comma-separated-values file be changed to "ZZ" for lines beginning with "AA":
sed -i '/^AA/s/[^,][^,]*/ZZ/4' file
Explanation
sed -i  - call sed to edit file in place; the general form is /find/s/match/replace/occurrence, where:
/^AA/  - find lines beginning with "AA";
[^,][^,]*  - match a character that is not a comma, followed by any number of non-commas;
/ZZ/4  - replace the 4th occurrence of match with "ZZ".
Note, both awk and sed provide good solutions in this case, so see the answers by @perreal and @RavinderSingh13.
Example Input File
$ cat file
AA,1,CC,1,EE
AA,FF,6,7,8,9
BB,6,7,8,99,AA
Example Use/Output (note: -i not used below so the changes are simply output to stdout)
$ sed '/^AA/s/[^,][^,]*/ZZ/4' file
AA,1,CC,ZZ,EE
AA,FF,6,ZZ,8,9
BB,6,7,8,99,AA
To robustly do this is just:
$ awk 'BEGIN{FS=OFS=","} $1=="AA"{$4="ZZ"} 1' csv
AA,1,CC,ZZ,EE
AA,FF,6,ZZ,8,9
BB,6,7,8,99,AA
Note that the above is doing a literal string comparison and a literal string replacement, so unlike the other solutions posted so far it won't fail if the target string (AA in this example) contains regexp metachars like . or *, nor if it can be part of another string like AAX, nor if the replacement string (ZZ in this example) contains backreferences like & or \1.
If you want to map multiple strings in one pass:
$ awk 'BEGIN{FS=OFS=","; m["AA"]="ZZ"; m["BB"]="FOO"} $1 in m{$4=m[$1]} 1' csv
AA,1,CC,ZZ,EE
AA,FF,6,ZZ,8,9
BB,6,7,FOO,99,AA
and just like GNU sed has -i for "inplace" editing, GNU awk has -i inplace, so you can discard the shell loop and just do:
awk -i inplace '
BEGIN { FS=OFS="," }
(NR==FNR) { ARGV[ARGC++]=$0 }
(NR!=FNR) && ($1=="AA") { $4="ZZ" }
{ print }
' file
and it'll operate on all of the files named in file in one call to awk. "file" in that last case is your file containing a list of other CSV file names.
EDIT1: Since OP has changed the requirement a bit, adding the following now:
awk 'BEGIN{FS=OFS=","} /^AA/||/^BB/{$4="ZZ"} /^CC/||/^DD/{$5="NEW_VALUE"} 1' Input_file > temp_file && mv temp_file Input_file
Could you please try following:
awk -F, '/^AA/{$4="ZZ"} 1' OFS=, Input_file > temp_file && mv temp_file Input_file
OR
awk 'BEGIN{FS=OFS=","} /^AA/{$4="ZZ"} 1' Input_file > temp_file && mv temp_file Input_file
Explanation: Adding explanation to above code too now.
awk '
BEGIN{         ##Starting BEGIN section of awk which will be executed before reading Input_file.
  FS=OFS=","   ##Setting field separator and output field separator as comma here for all lines of Input_file.
}              ##Closing block for BEGIN section of this program.
/^AA/{         ##Checking condition if a line starts from string AA then do following.
  $4="ZZ"      ##Setting 4th field as ZZ string as per OP.
}              ##Closing this condition block here.
1              ##By mentioning 1 we are asking awk to print edited or non-edited line of Input_file.
' Input_file   ##Mentioning Input_file name here.
Using sed: sed -i 's/\(^AA,[^,]*,[^,]*,\)[^,]*/\1ZZ/' input_file
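A quick check of that command on the sample data, dropping -i so the result goes to stdout:

```shell
printf 'AA,1,CC,1,EE\nAA,FF,6,7,8,9\nBB,6,7,8,99,AA\n' > input_file

# the group captures "AA," plus the next two fields; the following [^,]* (field 4) becomes ZZ
sed 's/\(^AA,[^,]*,[^,]*,\)[^,]*/\1ZZ/' input_file
```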
replace a range of number in a file
I would like to replace a range of numbers in a file with another range. Let's say I have:
/dev/raw/raw16
/dev/raw/raw17
/dev/raw/raw18
And I want to modify them as:
/dev/raw/raw1
/dev/raw/raw2
/dev/raw/raw3
I know I can do it using sed or awk but just cannot write it correctly. What is the easiest way to do it?
awk to the rescue!
$ awk -F'/dev/raw/raw' '{print FS (++c)}' file
/dev/raw/raw1
/dev/raw/raw2
/dev/raw/raw3
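That one-liner rewrites every line. If the file may also contain lines that should be left alone, a variant (a sketch, not from the original answer) that renumbers only lines matching the device-name pattern:

```shell
printf '/dev/raw/raw16\n# not a device line\n/dev/raw/raw17\n' > file

# renumber only lines that are exactly a /dev/raw/rawN path; pass everything else through
awk '/^\/dev\/raw\/raw[0-9]+$/ { sub(/[0-9]+$/, ++c) } 1' file
```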
I would not recommend changing device names. Anyway, just to replace letters or numbers you can use the 's' command of sed:
sed 's/raw16/raw1/g' file.txt > newfile.txt
In this example you replace all occurrences of raw16 with raw1. Here are some other examples:
sed 's/foo/bar/'          # replaces in each row the first foo only
sed 's/foo/bar/4'         # replaces in each row the 4th occurrence only
sed 's/foo/bar/g'         # replaces all foo with bar
sed 's/\(.*\)foo/\1bar/'  # replaces the last foo only per line
# using /raw as the field separator, the trailing number ends up in the last field ($NF)
awk -v From=16 -v To=18 -v NewStart=1 -F '/raw' '
  # for lines where the last number is in the scope
  $NF >= From && $NF <= To {
    # change the last number to the corresponding one in the new scope
    sub( /[0-9]+$/, $NF - From + NewStart)
  }
  # print (default action of a non-0 "filter" value) the line (modified or not)
  7
' file.txt \
  > newfile.txt
Note: adapt the field separator to your real need; this suits your sample of data, but if other elements are on the line you can easily adapt this code for your purpose.
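A runnable sketch of the same range logic using $NF for the trailing number (with FS='/raw' the number lands in the last field), including an out-of-range line to show it is left untouched:

```shell
printf '/dev/raw/raw16\n/dev/raw/raw17\n/dev/raw/raw18\n/dev/raw/raw5\n' > file.txt

# with FS="/raw" the trailing number is the last field ($NF); raw5 is out of range
awk -v From=16 -v To=18 -v NewStart=1 -F '/raw' '
$NF >= From && $NF <= To { sub(/[0-9]+$/, $NF - From + NewStart) }
1
' file.txt
```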
Search file A for a list of strings located in file B and append the value associated with that string to the end of the line in file A
This is a bit complicated, well I think it is..
I have two files, file A and file B.
File A contains delay information for a pin and is in the following format:
AD22 15484
AB22 9485
AD23 10945
File B contains a component declaration that needs this information added to it and is in the format:
'DXN_0': PIN_NUMBER='(AD22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)';
'DXP_0': PIN_NUMBER='(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,AD23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)';
'VREFN_0': PIN_NUMBER='(AB22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)';
So what I am trying to achieve is the following output:
'DXN_0': PIN_NUMBER='(AD22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='15484';
'DXP_0': PIN_NUMBER='(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,AD23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='10945';
'VREFN_0': PIN_NUMBER='(AB22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='9485';
There is no order to the pin numbers in file A or B, so I'm assuming the following needs to happen:
- open file A, read the first line
- search file B for the first string field of the line just read
- once found in file B, at the end of the line add the text "\nPIN_DELAY='"
- add the second string field of the line read from file A
- add the text "';" at the end
- repeat by opening file A and reading the second line
I'm assuming it will be a combination of sed and awk commands and I'm currently trying to work it out, but think this is beyond my knowledge. Many thanks in advance as I know it's complicated..
FILE2=`cat file2`
FILE1=`cat file1`
TMPFILE=`mktemp XXXXXXXX.tmp`
FLAG=0
for line in $FILE1; do
  echo $line >> $TMPFILE
  for line2 in $FILE2; do
    if [ $FLAG == 1 ]; then
      echo -e "PIN_DELAY='$(echo $line2 | awk -F " " '{print $1}')'" >> $TMPFILE
      FLAG=0
    elif [ "`echo $line | grep $(echo $line2 | awk -F " " '{print $1}')`" != "" ]; then
      FLAG=1
    fi
  done
done
mv $TMPFILE file1
Works for me; you can also add a trap to remove the tmp file if the user sends SIGINT.
awk to the rescue... $ awk -vq="'" 'NR==FNR{a[$1]=$2;next} {print; for(k in a) if(match($0,k)) {print "PIN_DELAY=" q a[k] q ";"; next}}' keys data 'DXN_0': PIN_NUMBER='(AD22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='15484'; 'DXP_0': PIN_NUMBER='(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,AD23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='10945'; 'VREFN_0': PIN_NUMBER='(AB22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'; PIN_DELAY='9485'; Explanation: scan the first file for key/value pairs. For each line in the second data file print the line, for any matching key print value of the key in the requested format. Single quotes in awk is little tricky, setting a q variable is one way of handling it.
FINAL script for my application. A big thank you to all that helped..
#!/usr/bin/sh
# script created by Adam with a LOT of help from users on stackoverflow
# must pass $1 file (package file from Xilinx)
# must pass $2 file (chips.prt file from the PCB design office)

# remove these temp files, throws error if not present tho, whoops!!
rm DELAYS.txt CHIP.txt OUTPUT.txt

# BELOW::create temp files for the code thanks to Glastis#stackoverflow https://stackoverflow.com/users/5101968/glastis I now know how to do this
DELAYS=`mktemp DELAYS.txt`
CHIP=`mktemp CHIP.txt`
OUTPUT=`mktemp OUTPUT.txt`

# BELOW::grep input file 1 (pkg file from Xilinx) for lines containing a delay in the form of n.n, use tail to remove something (can't remember), sed to remove blanks and replace with a single space and to remove the space before \n, awk to print columns 3,9,10, feeding into awk again to calculate the delay, provided by fedorqui#stackoverflow https://stackoverflow.com/users/1983854/fedorqui
# In awk, NF refers to the number of fields on the current line. Since $n refers to field number n, with $(NF-1) we refer to the penultimate field.
# {...}1 do stuff and then print the resulting line. 1 evaluates as True and anything True triggers awk to perform its default action, which is to print the current line.
# ($(NF-1) + $NF)/2 * 141 perform the calculation: (penultimate + last) / 2 * 141
# {$(NF-1)=sprintf( ... ) assign the result of the previous calculation to the penultimate field. Using sprintf with %.0f we make sure the rounding is performed, as described above.
# {...; NF--} once the calculation is done, we have its result in the penultimate field. To remove the last column, we just say "hey, decrease the number of fields" so that the last one gets "removed".
grep -E -0 '[0-9]\.[0-9]' $1 | tail -n +2 | sed -e 's/[[:blank:]]\+/ /g' -e 's/\s\n/\n/g' | awk '{print ","$3",",$9,$10}' | awk '{$(NF-1)=sprintf("%.0f", ($(NF-1) + $NF)/2 * 169); NF--}1' >> $DELAYS

# remove blanks in part file and add additional commas (,) so that the following awk command works properly
cat $2 | sed -e "s/[[:blank:]]\+//" -e "s/(/(,/g" -e 's/)/,)/g' >> $CHIP

# this awk command is provided by karakfa#stackoverflow https://stackoverflow.com/users/1435869/karakfa Explanation: scan the first file for key/value pairs. For each line in the second data file print the line; for any matching key print the value of the key in the requested format. Single quotes in awk are a little tricky; setting a q variable is one way of handling it. https://stackoverflow.com/questions/32458680/search-file-a-for-a-list-of-strings-located-in-file-b-and-append-the-value-assoc
awk -vq="'" 'NR==FNR{a[$1]=$2;next} {print; for(k in a) if(match($0,k)) {print "PIN_DELAY=" q a[k] q ";"; next}}' $DELAYS $CHIP >> $OUTPUT

# remove the additional commas (,) added in earlier before ) and after ( and you are done..
cat $OUTPUT | sed -e 's/(,/(/g' -e 's/,)/)/g' >> chipsd.prt
searching multi-word patterns from one file in another using awk
patterns file:
wicked liquid
movie guitar
balance transfer offer
drive car
bigfile file:
wickedliquidbrains
drivelicense
balanceofferings
Using awk on the command line:
awk '/balance/ && /offer/' bigfile
I get the result I want, which is balanceofferings.
awk '/wicked/ && /liquid/' bigfile
gives me wickedliquidbrains, which is also good.
awk '/drive/ && /car/' bigfile
does not give me drivelicense, which is also good, as I am using &&.
Now, when trying to pass a shell variable containing those '/regex1/ && /regex2/ ...' expressions to awk:
awk -v search="$out" '$0 ~ search' "$bigfile"
awk does not run. What may be the problem?
Try this:
awk "$out" "$bigfile"
When you do $0 ~ search, the value of search has to be a regular expression. But you were setting it to a string containing a bunch of regexps with && between them -- that's not a valid regexp.
To perform an action on the lines that match, do:
awk "$out"' { /* do stuff */ }' "$bigfile"
I switched from double quotes to single quotes for the action in case the action uses awk variables with $.
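One way to build the $out expression from a line of the patterns file is sketched below (joining the words with && so that all of them must match; the helper pipeline is an assumption, not part of the original answer):

```shell
printf 'wickedliquidbrains\ndrivelicense\nbalanceofferings\n' > bigfile

# turn a space-separated pattern line into "/w1/ && /w2/ ..."
line='wicked liquid'
out=$(printf '%s\n' "$line" |
      awk '{ for (i = 1; i <= NF; i++) printf "%s/%s/", (i > 1 ? " && " : ""), $i; print "" }')
echo "$out"        # /wicked/ && /liquid/

awk "$out" bigfile # wickedliquidbrains
```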
UPDATED
An alternative to Barmar's solution, with arguments passed with -v:
awk -v search="$out" 'match($0,search)' "$bigfile"
Test:
$ echo -e "one\ntwo" | awk -v luk=one 'match($0,luk)'
one
Passing two (real) regexes (EREs) to awk:
echo -e "one\ntwo\nnone" | awk -v re1=^o -v re2=e$ 'match($0,re1) && match($0,re2)'
Output:
one
If you want to read the pattern_file and match against all the rows, you could try something like this:
awk 'NR==FNR{N=NR;re[N,0]=split($0,a);for(i in a)re[N,i]=a[i];next}
{
  for(i=1;i<=N;++i) {
    #for(j=1;j<=re[i,0]&&match($0,re[i,j]);++j);
    for(j=1;j<=re[i,0]&&$0~re[i,j];++j);
    if(j>re[i,0]){print;break}
  }
}' patterns_file bigfile
Output:
wickedliquidbrains
On the 1st pass it reads and stores the pattern_file in a 2D array re. Each row contains the split input string; the 0th element of each row is the length of that row. Then it reads bigfile. Each line of bigfile is tested against the re array, and if all items in a row match, that line is printed.