Overwriting a file in bash

I have a file, of which a part is shown below:
OUTPUT_FILENAME="out.Received.Power.x.0.y.1.z.0.41
X_TX=0
Y_TX=1
Z_TX=0.41
I would like to change parts of it automatically with bash: every time I see OUTPUT_FILENAME, I want to overwrite the name next to it with a new one. Then I want to do the same with the values X_TX, Y_TX, and Z_TX: delete the value next to each and write a new one. For example, instead of X_TX=0 I want X_TX=0.3, or vice versa.
Do you think it's possible? Maybe with grep or something similar?

You can use sed like this:
For example, to replace whatever follows X_TX= with 123 you can do:
sed -i -e 's/X_TX=.*/X_TX=123/g' /tmp/file1.txt
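Because the pattern consumes the rest of the line, the same command takes shell variables if you switch to double quotes; a sketch, with made-up replacement values:
new_x=0.3
new_name='"out.Received.Power.x.0.3.y.1.z.0.41"'
sed -i -e "s/X_TX=.*/X_TX=$new_x/" -e "s/OUTPUT_FILENAME=.*/OUTPUT_FILENAME=$new_name/" /tmp/file1.txt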

One option using awk. Your values are passed as variables to the awk script and substituted whenever a match is found:
awk -v outfile="str_outfile" -v x_tx="str_x" -v y_tx="str_y" -v z_tx="str_z" '
BEGIN { FS = OFS = "=" }
$1 == "OUTPUT_FILENAME" { $2 = outfile; print; next }
$1 == "X_TX" { $2 = x_tx; print $0; next }
$1 == "Y_TX" { $2 = y_tx; print $0; next }
$1 == "Z_TX" { $2 = z_tx; print $0; next }
' infile
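With concrete values, the result can be written back through a temp file; a sketch, where update.awk is a hypothetical file holding the script above and the values are made up:
awk -v outfile="out.Received.Power.x.0.3.y.1.z.0.41" -v x_tx="0.3" -v y_tx="1" -v z_tx="0.41" \
    -f update.awk infile > infile.tmp && mv infile.tmp infile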

Related

Edit multiple columns in a line using awk command?

I'm trying to edit 3 columns in a file if the value in column 1 equals a specific string. This is my current attempt:
cp file file.copy
awk -F':' 'OFS=":" { if ($1 == "root1") $2="test"; print}' file.copy>file
rm file.copy
I've only been able to get the awk command working with one column being changed; I want to be able to edit $3 and $8 as well. Is this possible in the same command, or is it only possible with separate awk commands or with a different command altogether?
Edit note: in the real command I'll be passing variables to the columns, e.g. $2=$var
It'll be used to edit the /etc/passwd file; sample input/output:
root:$6$fR7Vrjyp$irnF38R/htMSuk0efLSnAten/epf.5v7gfs0q.NcjKcFPeJmB/4TnnmgaAoTUE9.n4p4UyWOgFwB1guJau8AL.:17976::::::
You can group multiple statements under the if condition with a block {}:
awk -F':' 'OFS=":" { if ($1 == "root1") {$2="test"; $3="test2";} print}' file.copy>file
You can also improve your command by using awk's default workflow: condition { commands }. For this you need to pass OFS as an input variable (the -v flag). The trailing 1 keeps printing the non-matching lines too:
awk -F':' -v OFS=':' '$1=="root1"{$2="test"; $3="test2"} 1' file.copy>file
You may use
# Fake sample values
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' file > tmp && mv tmp file
Here is a demo:
s="root1:xxxx:yyyy
root11:xxxx:yyyy
root1:zzzz:cccc"
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' <<< "$s"
Output:
root1:pass1:pass2
root11:xxxx:yyyy
root1:pass1:pass2
Note:
-v var1="$v1" -v var2="$v2" pass the variables you need to use in the awk command
BEGIN{FS=OFS=":"} set the field separator
$1 == "root1" check if Field 1 is equal to some value
{ $2 = var1; $3 = var2 } set Field 2 and 3 values
1 triggers the default action, print
file > tmp && mv tmp file emulates an in-place edit by writing to a temp file and moving it over the original.
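As an aside, if GNU awk 4.1 or later is available, its inplace extension removes the temp-file step entirely (a sketch, assuming gawk):
gawk -i inplace -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2 }1' file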

Increment a value regarding a pattern in a file

I have a file like this:
"A";"1"
"A";""
"A";""
"B";"1"
"C";"1"
"C";""
"C";""
When the first part of the current line matches the first part of the previous line, I want to increment the second part of the line, like this:
"A";"1"
"A";"2"
"A";"3"
"B";"1"
"C";"1"
"C";"2"
"C";"3"
That is, if the second part is empty, I take the value from the previous line and increment it.
Do you have any idea how I can do this with a shell script, or maybe with an awk or sed command?
With perl:
$ perl -F';' -lane 'if ($F[1] =~ /"(\d+)"/) { $saved = $1; } else { $saved++; $F[1] = qq/"$saved"/; }
print join(";", @F)' example.txt
"A";"1"
"A";"2"
"A";"3"
"B";"1"
"C";"1"
"C";"2"
"C";"3"
With awk:
$ awk -F';' -v OFS=';' '
$2 ~ /"[0-9]+"/ { saved = substr($2, 2, length($2) - 2) }  # non-empty: strip the quotes, remember the number
$2 == "\"\"" { $2 = "\"" ++saved "\"" }                    # empty: increment and re-quote
{ print }' example.txt
"A";"1"
"A";"2"
"A";"3"
"B";"1"
"C";"1"
"C";"2"
"C";"3"

Display message when no match found in AWK

I'm writing a small bash script that reads a CSV file of names and prompts the user for a name to be removed. The CSV file looks like this:
Smith,John
Jackie,Jackson
The first and last name of the person to be removed from the list are saved in the bash variables $first_name and $last_name.
This is what I have so far:
cat file.csv | awk -F',' -v last="$last_name" -v first="$first_name" ' ($1 != last || $2 != first) { print } ' > tmpfile1
This works fine. However, it still outputs to tmpfile1 even if no employee matches that name. What I would like is to have something like:
if ($1 != last || $2 != first) { print } > tmpfile1 ; else { print "No Match Found." }
I'm new to awk and can't get that last part to work.
NOTE: I do not want to use something like grep -v "$last_name,$first_name"; I want to use a filtering function.
You can redirect right inside the awk script, printing only the records you keep.
awk -F',' -v last="$last_name" -v first="$first_name" '
$1==last && $2==first {next}
{print > "tmpfile"}
' file.csv
Here are some differences between your script and this one:
This has awk reading your CSV directly, rather than a useless use of cat (UUOC).
This actively skips the records you want to skip,
and prints everything else through a redirect.
Note that you could, if you wanted, specify the target to which to redirect in a variable you pass in using -v as well.
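For example, a sketch of that variation (out is just a variable name chosen here):
awk -F',' -v last="$last_name" -v first="$first_name" -v out="tmpfile" '
$1==last && $2==first {next}
{print > out}
' file.csv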
If you really want the "No match found" error, you can set a flag, then use the END special pattern in awk...
awk -F',' -v last="$last_name" -v first="$first_name" '
$1==last && $2==first { found=1; next }
{ print > "tmpfile" }
END { if (!found) print "No match found." > "/dev/stderr" }
' file.csv
And if you want no tmpfile to be created when a match isn't found, you either need to scan the file TWICE (once to verify that there's a match, once to print), or, if there's no risk of the file being too large for available memory, you can keep a buffer:
awk -F',' -v last="$last_name" -v first="$first_name" '
$1==last && $2==first { next }
{ output = (output ? output ORS : "" ) $0 }
END {
if (output)
print output > "tmpfile"
else
print "No match found." > "/dev/stderr"
}
' file.csv
Disclaimer: I haven't tested any of these. :)
You can do two passes over the file, or you can queue up all of the file so far in memory and then just fail if you reach the END block with no match.
awk -F',' -v first="$first" -v last="$last" '
$1 == last && $2 == first {
    # Match found: flush everything buffered so far, then skip this line
    for (i=1; i<=n; ++i) print a[i] >>"tempfile"
    p=1; n=0; split("", a); next }
# No match yet, remember this line for later
!p { a[++n] = $0; next }
# If we get through to here, there was a match earlier
p { print >>"tempfile" }
END { if (!p) { print "no match" >"/dev/stderr"; exit 1 } }' filename
This requires enough memory to store the entire file (which is exactly what happens when there is no match).
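A sketch of the two-pass alternative mentioned above, which trades an extra scan for not holding the file in memory (untested):
if awk -F',' -v first="$first" -v last="$last" '
  $1 == last && $2 == first { found=1; exit }
  END { exit !found }' filename
then
  awk -F',' -v first="$first" -v last="$last" '$1 != last || $2 != first' filename > tempfile
else
  echo "no match" >&2
fi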
From a bash script, you can test whether awk printed something; if it did, remove the tmpfile.
c=$(awk -F',' -v a="$last_name" -v b="$first_name" '
$1==a && $2==b {c=1;next}
{print > "tmpfile"}
END{if (!c){print "no match"}}' infile)
[ -n "$c" ] && { echo "$c"; rm tmpfile;}

Bash find/replace and run command on matching group

I'm trying to do a dynamic find/replace where a matching group from the find gets manipulated in the replace.
testfile:
…
other text
base64_encode_SOMEPATH_ something
other(stuff)
text base64_encode_SOMEOTHERPATH_
…
Something like this:
sed -i "" -e "s/(base64_encode_(.*)_)/cat MATCH | base64/g" testfile
Which would output something like:
…
other text
U09NRVNUUklORwo= something
other(stuff)
text U09NRU9USEVSU1RSSU5HCg==
…
Updated per your new requirement. Now using GNU awk for the 3rd arg to match() for convenience:
$ awk 'match($0,/(.*)base64_encode_([^_]+)_(.*)/,arr) {
cmd = "base64 <<<" arr[2]
if ( (cmd | getline rslt) > 0) {
$0 = arr[1] rslt arr[3]
}
close(cmd)
} 1' file
…
other text
U09NRVNUUklORwo= something
other(stuff)
text U09NRU9USEVSU1RSSU5HCg==
…
Make sure you read and understand http://awk.info/?tip/getline if you're going to use getline. Note also that awk hands the command string to /bin/sh, and the <<< here-string is a bash/ksh/zsh feature, so on systems where sh is a stricter POSIX shell you may need to build the command with printf and a pipe instead.
If you can't install GNU awk (but you really, REALLY would benefit from having it, so do try), then something like this would work with any modern awk:
$ awk 'match($0,/base64_encode_[^_]+_/) {
arr[1] = substr($0,1,RSTART-1)
arr[2] = arr[3] = substr($0,RSTART+length("base64_encode_"))
sub(/_.*$/,"",arr[2])
sub(/^[^_]+_/,"",arr[3])
cmd = "base64 <<<" arr[2]
if ( (cmd | getline rslt) > 0) {
$0 = arr[1] rslt arr[3]
}
close(cmd)
} 1' file
I say "something like" because you might need to tweak the substr() and/or sub() args if they're slightly off, I haven't tested it.
awk '!/^base64_encode_/ { print } /^base64_encode_/ { fflush(); sub("^base64_encode_", ""); sub("_$", ""); cmd = "base64"; print $0 | cmd; close(cmd); }' testfile > testfile.out
This says to print non-matching lines unaltered.
Matching lines get altered with the awk function sub() to extract the string to be encoded, which is then piped to the base64 command, which prints the result to stdout.
The fflush call is needed so that all the previous output from awk has been flushed before the base64 output appears, ensuring lines aren't re-ordered.
Edit:
As pointed out in the comment, testing every line twice for matching a pattern and non-matching the same pattern isn't very good. This single action handles all lines:
{
if ($0 !~ "base64_encode_")
{
print;
next;
}
fflush();
sub("^.*base64_encode_", "");
sub("_$", "");
cmd = "base64";
print $0 | cmd;
close(cmd);
}
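Saved to a file, the action block runs the same way as the one-liner (a sketch; encode.awk is a made-up name):
awk -f encode.awk testfile > testfile.out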

Bash script to grep through one file for a list names, then grep through a second file to match those names to get a lookup value

Somehow, being specific just doesn't translate well into a title.
Here is my goal, using a bash script in a Cygwin environment:
Read text file $filename to get a list of schemas and table names
Take that list of schemas and table names and find a match in $lookup_file to get a value
Use that value to make a logic choice
I basically have each item working separately. I just can't figure out how to glue it all together.
For step one, it's
grep $search_string $filename | awk '{print $1, $5}' | sed -e 's~"~~g' -e 's~ ~\t~g'
Which gives a list of schema{tab}table
For step two, it's
grep -e '{}' $lookup_file | awk '{print $3}'
Where $lookup_file is schema{tab}table{tab}value
Step three is basically, based on the value returned, do "something"; file a report, email a warning, ignore it, etc.
I tried stringing part one and two together with xargs, but it treats the schema and the table name as filenames and throws errors.
What is the glue I'm missing? Or is there a better method?
awk -v s="$search_string" 'NR == FNR { if ($0 ~ s) { gsub(/"/, "", $5); a[$1, $5] = 1; }; next; } a[$1, $2] { print $3; }' "$filename" "$lookup_file"
Explained:
NR == FNR { if ($0 ~ s) { gsub(/"/, "", $5); a[$1, $5] = 1; }; next; } targets the first file, searches it for valid matches, and saves the key values in array a.
a[$1, $2] { print $3; } targets the second file and prints its third column whenever the first two columns match a key stored in array a.
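To cover step three, the script's output can feed a plain shell loop; a sketch, where the value strings and actions are invented for illustration:
awk -v s="$search_string" 'NR == FNR { if ($0 ~ s) { gsub(/"/, "", $5); a[$1, $5] = 1; }; next; } a[$1, $2] { print $3; }' "$filename" "$lookup_file" |
while IFS= read -r value; do
  case $value in
    report) echo "filing a report" ;;      # hypothetical action
    warn)   echo "warning: $value" >&2 ;;  # hypothetical action
    *)      : ;;                           # ignore anything else
  esac
done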
awk -v search="$search_string" '$0 ~ search { gsub(/"/, "", $5);
print $1"\t"$5; }' "$filename" |
while IFS= read -r line
do
result=$(awk -F'\t' -v search="$line" '($1 FS $2) == search { print $3 }' "$lookup_file")
# Do "something" with $result
done
