In a shell script, how do I replace the first occurrence of a string, after a different string - bash

I have a simple config file that looks a bit like this:
[sectionA]
url = value1
username = value2
password = value3
[sectionC]
url = value1
username = value2
password = value3
[sectionB]
url = value1
username = value2
password = value3
And I want to replace the username for [sectionB] with valueX without touching [sectionA]'s or [sectionC]'s username.
I've tried some variations on sed, but as far as I've managed to fathom it seems to operate on individual lines.
How do I do the equivalent of
Search for StringA (in this case [sectionB])
Find the next occurrence of StringB (username = value2)
Replace it with StringC (username = valueX)

sed:
sed '/sectionB/,/\[/s/username.*/username = valueX/' input
awk:
awk -vRS='' -vFS='\n' -vOFS='\n' '
$1~/sectionB/{
sub(/=.*$/, "= valueX", $3)
}
{
printf "%s\n\n", $0
}' input

This multi-line sed should do the trick:
sed -E -n '1h;1!H;${;g;s/\[sectionB\]([^[]*)username = [a-zA-Z0-9_]+/\[sectionB\]\1username = valueX/g;p;}' input.txt
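If GNU sed isn't available, a tiny awk state machine is another option (a sketch, assuming the sample config is saved as input.txt): track whether we're inside [sectionB], and rewrite only the first username line found there.

```shell
awk '
  /^\[/                { insec = ($0 == "[sectionB]") }   # a section header sets or clears the state
  insec && /^username/ { $0 = "username = valueX"; insec = 0 }  # rewrite once, then stop matching
  { print }
' input.txt
```

This avoids any dependence on what follows [sectionB] in the file, since the state is reset by the next section header anyway.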

Related

Use sed to remove string, add whitespace and insert ""

I have a file with the following entry:
export TF_VAR_environment_name=dev
export TF_VAR_project_name=hello-world
I would like to do 3 things with these entries:
Remove the export TF_VAR_ string
Add whitespace to both sides of =
Wrap the string right of = in " "
So my file would end up looking like:
environment_name = "dev"
project_name = "hello-world"
I'm able to remove the string with s/export TF_VAR_//, but haven't been able to wrap the = in whitespace, or wrap the final string in quotes. Any help would be greatly appreciated.
Is this possible to do in sed?
input.txt is your text file. output.txt is the wanted result.
sed 's/export TF_VAR_// ; s/=\(.*\)$/ = "\1"/' < input.txt > output.txt
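An equivalent awk one-liner is also possible (a sketch, under the same input.txt assumption): split each line on =, strip the prefix from the key, and rebuild the line. Note it assumes the value itself contains no = sign.

```shell
# key is $1, value is $2 when splitting on "="; prefix removal plus printf rebuilds the line
awk -F= '{ sub(/^export TF_VAR_/, "", $1); printf "%s = \"%s\"\n", $1, $2 }' input.txt
```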

find and replace line with variable use sed

sn=$(./libs/ideviceinfo | grep ^SerialNumber | awk {'printf $NF'})
type=$(./libs/ideviceinfo | grep ProductType | awk {'printf $NF'})
udid=$(./libs/ideviceinfo | grep UniqueDeviceID | awk {'printf $NF'})
I want to replace variable value into this txt file
{
"InternationalMobileEquipmentIdentity" = "355331088790894";
"SerialNumber" = "C6KSJM0AHG6W";
"InternationalMobileSubscriberIdentity" = "";
"ProductType" = "iPhone9,1";
"UniqueDeviceID" = "69bae2fcc0da3e6e3373f583ef856e02c88026eb";
"ActivationRandomness" = "25E7742B-76A7-4C31-9F49-52D17A817B2F";
"ActivityURL" = "https://albert.apple.com/deviceservices/activity";
"IntegratedCircuitCardIdentity" = "";
"CertificateURL" = "https://albert.apple.com/deviceservices/certifyMe";
"PhoneNumberNotificationURL" = "https://albert.apple.com/deviceservices/phoneHome";
"ActivationTicket" = "";
}
I tried using sed:
sed 's/"SerialNumber.*/"SerialNumber" = "$sn";/g' ./file/bp.txt > ./file/bp1.txt
The output is not as expected: "SerialNumber" = "$sn";
Hope you guys can help me.
P.S.: if one command could replace all 3 variable values at the same time, that would be great.
The problem here is one of shell quoting. Using single quotes means that everything inside will not go through substitution.
The following should fix your problem:
sed 's/"SerialNumber.*/"SerialNumber" = "'"$sn"'";/g' ./file/bp.txt > ./file/bp1.txt
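For the P.S.: all three values can be substituted in one pass by chaining -e expressions, repeating the same quote-toggling trick for each variable (a sketch; assumes $sn, $type, and $udid are already set as in the question):

```shell
# each -e expression drops out of single quotes just long enough
# to let the shell expand the variable, then re-enters them
sed -e 's/"SerialNumber".*/"SerialNumber" = "'"$sn"'";/' \
    -e 's/"ProductType".*/"ProductType" = "'"$type"'";/' \
    -e 's/"UniqueDeviceID".*/"UniqueDeviceID" = "'"$udid"'";/' \
    ./file/bp.txt > ./file/bp1.txt
```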

Output the value/word after one pattern has been found in string in variable (grep, awk, sed, perl, etc.)

I have a program that prints data into the console like so (separated by space):
variable1 value1
variable2 value2
variable3 value3
varialbe4 value4
EDIT: Actually the output can look like this:
data[variable1]: value1
pre[variable2] value2
variable3: value3
flag[variable4] value4
In the end I want to search for a part of the name e.g. for variable2 or variable3 but only get value2 or value3 as output.
EDIT: This single value should then be stored in a variable for further processing within the bash script.
I first tried to put all the console output into a file and process it from there with e.g.
# value3_var="$(grep "variable3" file.log | cut -d " " -f2)"
This works fine but is too slow. I need to process ~20 of these variables per run and this takes ~1-2 seconds on my system. Also I need to do this for ~500 runs. EDIT: I actually do not need to automatically process all of the ~20 'searches' automatically with one call of e.g. awk. If there is a way to do it automaticaly, it's fine, but ~20 calls in the bash script are fine here too.
Therefore I thought about putting the console output directly into a variable to remove the slow file access. But this will then eliminate the newline characters which then again makes it more complicated to process:
# console_output=$(./programm_call)
# echo $console_output
variable1 value1 variable2 value2 variable3 value3 varialbe4 value4
EDIT: It actually looks like this:
# console_output=$(./programm_call)
# echo $console_output
data[variable1]: value1 pre[variable2] value2 variable3: value3 flag[variable4] value4
I found a solution for this kind of string arrangement, but it seems to work only with a text file. At least I was not able to use the string stored in $console_output with these examples:
How to print the next word after a found pattern with grep,sed and awk?
So, how can I output the next word after a found pattern, when providing a (long) string as variable?
PS: grep on my system does not know the parameter -P...
I'd suggest to use awk:
$ cat ip.txt
data[variable1]: value1
pre[variable2] value2
variable3: value3
flag[variable4] value4
$ cat var_list
variable1
variable3
$ awk 'NR==FNR{a[$1]; next}
{for(k in a) if(index($1, k)) print $2}' var_list ip.txt
value1
value3
To use output of another command as input file, use ./programm_call | awk '...' var_list - where - will indicate stdin as input.
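For example, simulating the program's output with printf (a hypothetical sketch; var_list is the file shown above):

```shell
# first file argument (var_list) fills the lookup array a;
# "-" then makes awk read the piped output as the second input
printf 'data[variable1]: value1\npre[variable2] value2\nvariable3: value3\n' |
awk 'NR==FNR{a[$1]; next}
     {for (k in a) if (index($1, k)) print $2}' var_list -
```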
This single value should then be stored in a variable for further processing within the bash script.
If you are doing further text processing, you could do it within awk and thus avoid a possible slower bash loop. See Why is using a shell loop to process text considered bad practice? for details.
Speed up suggestions:
Use LC_ALL=C awk '..' if input is ASCII (Note that as pointed out in comments, this doesn't apply for all cases, so you'll have to test it for your use case)
Use mawk if available, that is usually faster. GNU awk may still be faster for some cases, so again, you'll have to test it for your use case
Use ripgrep, which is usually faster than other grep programs.
$ ./programm_call | rg -No -m1 'variable1\S*\s+(\S+)' -r '$1'
value1
$ ./programm_call | rg -No -m1 'variable3\S*\s+(\S+)' -r '$1'
value3
Here, -o option is used to get only the matched portion. -r is used to get only the required text by replacing the matched portion with the value from the capture group. -m1 option is used to stop searching input once the first match is found. -N is used to disable line number prefix.
Exit after the first grep match, like so:
value3_var="$(grep -m1 "variable3" file.log | cut -d " " -f2)"
Or use Perl, also exiting after the first match. This eliminates the need for a pipe to another process:
value3_var="$(perl -lne 'if (/^variable3\s+(.*)/) { print $1; last }' file.log)"
If I'm understanding your requirements correctly, how about feeding
the output of programm_call directly to the awk script instead of
assigning a shell variable.
./programm_call | awk '
# the following block is invoked line by line of the input
{
a[$1] = $2
}
# the following block is executed after all lines are read
END {
# please modify the print statement depending on your required output format
print "variable1 = " a["variable1"]
print "variable3 = " a["variable3"]
}'
Output:
variable1 = value1
variable3 = value3
As you see, the script can process all (~20) variables at once.
[UPDATE]
Assumptions including the provided information:
The ./program_call prints approx. 50 pairs of "variable value"
variable and value are delimited by blank character(s)
variable may be enclosed with [ and ]
variable may be followed by :
We have interest with up to 20 variables out of the ~50 pairs
We use just one of the 20 variables at once
We don't want to invoke ./program_call whenever accessing just one variable
We want to access the variable values from within bash script
We may use an associative array to fetch the value via the variable name
Then it will be convenient to read the variable-value pairs directly within
bash script:
#!/bin/bash
declare -A hash # declare an associative array
while read -r key val; do # read key (variable name) and value
key=${key#*[} # remove leading "[" and the characters before it
key=${key%:} # remove trailing ":"
key=${key%]} # remove trailing "]"
hash["$key"]="$val" # store the key and value pair
done < <(./program_call) # feed the output of "./program_call" to the loop
# then you can access the values via the variable name here
foo="${hash["variable2"]}" # the variable "foo" is assigned to "value2"
# do something here
bar="${hash["variable3"]}" # the variable "bar" is assigned to "value3"
# do something here
Some people criticize that bash is too slow to process text lines,
but we process just about 50 lines in this case. I tested a simulation by
generating 50 lines, processing the output with the script above,
repeating the whole process 1,000 times. It completed within a few seconds. (Meaning one batch ends within a few milliseconds.)
This is how to do the job efficiently AND robustly (your approach and all other current answers will result in false matches from some input and some values of the variables you want to search for):
$ cat tst.sh
#!/usr/bin/env bash
vars='variable2 variable3'
awk -v vars="$vars" '
BEGIN {
split(vars,tmp)
for (i in tmp) {
tags[tmp[i]":"]
tags["["tmp[i]"]"]
tags["["tmp[i]"]:"]
}
}
$1 in tags || ( (s=index($1,"[")) && (substr($1,s) in tags) ) {
print $2
}
' "${@:--}"
$ ./tst.sh file
value2
value3
$ cat file | ./tst.sh
value2
value3
Note that the only loop is in the BEGIN section where it populates a hash table (tags[]) with the strings from the input that could match your variable list so that while processing the input it doesn't have to loop, it just does a hash lookup of the current $1 which will be very efficient as well as robust (e.g. will not fail on partial matches or even regexp metachars).
As shown, it'll work whether the input is coming from a file or a pipe. If that's not all you need then edit your question to clarify your requirements and improve your example to show a case where this does not do what you want.

convert 1 field of awk to base64 and leave the rest intact

I'm creating a one liner where my ldap export is directly converted into a csv.
So far so good, but the challenge is now that 1 column of my csv needs to contain base64 encoded values. These values are coming as clear text out of the ldap search filter, so I basically need them converted during the awk processing.
What I have is:
ldapsearch | awk -v OFS=',' '{split($0,a,": ")} /^blobinfo:/{blob=a[2]} /^cn:/{serialnr=a[2]} /^mode:/{mode=a[2]; print serialnr, mode, blob}'
This gives me a csv output as intended but now I need to convert blob to base64 encoded output.
Getline is not available
demo input:
cn: 1313131313
blobinfo: a string with spaces
mode: d121
cn: 131313asdf1313
blobinfo: an other string with spaces
mode: d122
output must be like
1313131313,D121,YSBzdHJpbmcgd2l0aCBzcGFjZXM=
where YSBzdHJpbmcgd2l0aCBzcGFjZXM= is the encoded a string with spaces
but now I get
1313131313,D121,a string with spaces
Something like this, maybe?
$ perl -MMIME::Base64 -lne '
BEGIN { $, = "," }
if (/^cn: (.+)/) { $s = $1 }
if (/^blobinfo: (.+)/) { $b = encode_base64($1, "") }
if (/^mode: (.+)/) { print $s, $1, $b }' input.txt
1313131313,d121,YSBzdHJpbmcgd2l0aCBzcGFjZXM=
131313asdf1313,d122,YW4gb3RoZXIgc3RyaW5nIHdpdGggc3BhY2Vz
If you can't use getline and you just need to output the csv (you can't further process the base64'd field), change the order of fields in output and abuse the system's newline. First, a bit modified input data (changed order, missing field):
cn: 1313131313
blobinfo: a string with spaces
mode: d121
blobinfo: an other string with spaces
mode: d122
cn: 131313asdf1313
cn: 131313asdf1313
mode: d122
The awk:
$ awk '
BEGIN {
RS="" # read in a block of rows
FS="\n" # newline is the FS
h["cn"]=1 # each key has a fixed buffer slot
h["blobinfo"]=2
h["mode"]=3
}
{
for(i=1;i<=NF;i++) { # for all fields
split($i,a,": ") # split to a array
b[h[a[1]]]=a[2] # store to buffer
}
printf "%s,%s,",b[1],b[3] # output all but blob, no newline
system("echo " b[2] "| base64") # let system output the newline
delete b # buffer needs to be reset
}' file # well, I used file for testing, you can pipe
And the output:
1313131313,d121,YSBzdHJpbmcgd2l0aCBzcGFjZXMK
131313asdf1313,d122,YW4gb3RoZXIgc3RyaW5nIHdpdGggc3BhY2VzCg==
131313asdf1313,d122,Cg==

Change a particular string value without string comparison in a file using command line (Ubuntu)

I want to create a shell script to change the string values that come after '=' in my file, using the command line.
File is like:
String name = "Max";
String age = "24";
String address = "Noida";
Or
String name=Max
String age=24
String address=Noida
But I don't want to use string comparison, like this:
$ sed -i 's/Max/Aman/gI' String.txt
$ sed -i 's/24/25/gI' String.txt
$ sed -i 's/Noida/Delhi/gI' String.txt
Please suggest how to change the string values without string comparison in a file using command line.
You may use this single sed that doesn't check the previous values while replacing them with new ones:
sed '/name = /s/"[^"]*"/"AMAN"/; /age = /s/"[^"]*"/"25"/; /address = /s/"[^"]*"/"Delhi"/;' String.txt
String name = "AMAN";
String age = "25";
String address = "Delhi";
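For the second, unquoted format the same idea applies: address the line by its key, then replace everything after = (a sketch; assumes the name=Max style lines are in String.txt):

```shell
# match each line by its key, then blindly replace the value after "="
sed '/name=/s/=.*/=Aman/; /age=/s/=.*/=25/; /address=/s/=.*/=Delhi/' String.txt
```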
