Update version number in property file using bash

I am new to bash scripting and I need help with awk. I have a property file with a version number inside, and I want to update it:
version=1.1.1.0
and I use awk to do that:
file="version.properties"
awk -F'["]' -v OFS='"' '/version=/{
split($4,a,".");
$4=a[1]"."a[2]"."a[3]"."a[4]+1
}
;1' $file > newFile && mv newFile $file
but I am getting a strange result: version="1.1.1.0""...1
Could someone please help me with this?

You mentioned in your comment you want to update the file in place. You can do that in a one-liner with perl:
perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i version.properties
Explanation
-e is followed by a script to run. With -p and -i, the effect is to run that script on each line, and modify the file in place if the script changes anything.
The script itself, broken down for explanation, is:
/^version=/ and # Do the following on lines starting with `version=`
s/ # Make a replacement on those lines
(\d+\.\d+\.\d+\.)(\d+)/ # Match x.y.z.w, and set $1 = `x.y.z.` and $2 = `w`
$1 . ($2+1)/ # Replace x.y.z.w with a copy of $1, followed by w+1
e # This tells Perl the replacement is Perl code rather
# than a text string.
Example run
$ cat foo.txt
version=1.1.1.2
$ perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i foo.txt
$ cat foo.txt
version=1.1.1.3

This is not the best way, but here's one fix.
Test case
I am assuming the input file has at least one line that is exactly version=1.1.1.0.
$ awk -F'["]' -v OFS='"' '/version=/{
> split($4,a,".");
> $4=a[1]"."a[2]"."a[3]"."a[4]+1
> }
> ;1' <<<'version=1.1.1.0'
Output:
version=1.1.1.0"""...1
The """ is because you are assigning to field 4 ($4). When you do that, awk adds field separators (OFS) between fields 1 and 2, 2 and 3, and 3 and 4. Three OFS => """, in your example.
Minimal change
$ awk -F'["]' -v OFS='"' '/version=/{
split($1,a,".");
$1=a[1]"."a[2]"."a[3]"."a[4]+1;
print
}
' <<<'version=1.1.1.0'
version=1.1.1.1
Two changes:
Change $4 to $1
Since the input field separator (-F) is ["], $4 is whatever would be after the third " (if there were any in the input). Therefore, split($4, ...) splits an empty field. The contents of the line, before the first " (if any), are in $1.
print at the end instead of ;1
The 1 after the closing curly brace is the next condition, and there is no action specified. The default action is to print the current line, as modified, so the 1 triggers printing. Instead, just print within your action when you are done processing. That way your action is self-contained. (Of course, if you needed to do other processing, you might want to print later, after that processing.)
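To see the 1 idiom in isolation: a pattern of 1 is always true, and with no action the default action (print the current record) runs for every line:
$ printf 'a\nb\n' | awk '1'
a
b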

You can use = as the delimiter, like this:
awk -F= -v v=1.0.1 '$1=="version"{printf "version=\"%s\"\n", v}' file.properties
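Note that as written this prints only the rewritten version line and discards everything else in the file. A sketch of a variant that also passes the other lines through unchanged (it rewrites $0 on the matching line, then uses the always-true 1 pattern to print every line):
awk -F= -v v=1.0.1 '$1=="version"{$0="version=\"" v "\""} 1' file.properties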

Related

Replace some lines in fasta file with appended text using while loop and if/else statement

I am working with a fasta file and need to add line-specific text to each of the headers. So for example if my file is:
>TER1
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
I want a while loop that will read through each line; for those with a > at the start, I want to append |population: plus the first three characters after the >. So line one would be:
>TER1|population:TER
etc.
I can't figure out how to make this work. Here is my best attempt so far.
filename="testfasta.fa"
while read -r line
do
if [[ "$line" == ">"* ]]; then
id=$(cut -c2-4<<<"$line")
printf $line"|population:"$id"\n" >>outfile
else
printf $line"\n">>outfile
fi
done <"$filename"
This produces a file with the original headers and each following line on a single line.
Can someone tell me where I'm going wrong? My if/else statement isn't working at all!
Thanks!
You could use a while loop if you really want,
but sed would be simpler:
sed -e 's/^>\(...\).*/&|population:\1/' "$filename"
That is, for lines starting with > (pattern: ^>),
capture the next 3 characters (with \(...\)),
and match the rest of the line (.*),
replace with the line as it was (&),
and the fixed string |population:,
and finally the captured 3 characters (\1).
This will produce for your input:
>TER1|population:TER
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2|population:TER
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1|population:URC
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2|population:URC
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3|population:UCR
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
Or you can use this awk, also producing the same output:
awk '{sub(/^>.*/, $0 "|population:" substr($0, 2, 3))}1' "$filename"
You can do this quickly in awk:
awk '$1~/^>/{$1=$1"|population:"substr($1,2,3)}{}1' infile.txt > outfile.txt
$ awk '$1~/^>/{$1=$1"|population:"substr($1,2,3)}{}1' testfile
>TER1|population:TER
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2|population:TER
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1|population:URC
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2|population:URC
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3|population:UCR
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
Here awk will:
Test if the record starts with a >. The $1 looks at the first field, but $0 for the entire record would work just as well in this case. The ~ performs a regex test, and ^> means "starts with >", making the test: ($1~/^>/)
If so, it sets the first field to the output you are looking for (using substr() to get the bits of the string you want): {$1=$1"|population:"substr($1,2,3)}
Finally, it prints out the entire record (with the changes, if applicable): the trailing 1 (the {} before it is just an empty action) is shorthand for {print $0}, i.e. print the entire record.
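If you do want to stay with a bash loop, here is a corrected sketch of the original attempt. The key changes are passing the data to printf as arguments rather than interpolating it into the format string, and using bash's ${line:1:3} substring expansion instead of spawning cut for every header:
filename="testfasta.fa"
while IFS= read -r line; do
    if [[ $line == ">"* ]]; then
        # ${line:1:3} = the 3 characters after the leading ">"
        printf '%s|population:%s\n' "$line" "${line:1:3}"
    else
        printf '%s\n' "$line"
    fi
done < "$filename" > outfile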

Multiline CSV: output on a single line, with double-quoted input lines, using a different separator

I'm trying to get a multiline output from a CSV into one line in Bash.
My CSV file looks like this:
hi,bye
hello,goodbye
The end goal is for it to look like this:
"hi/bye", "hello/goodbye"
This is currently where I'm at:
INPUT=mycsvfile.csv
while IFS=, read col1 col2 || [ -n "$col1" ]
do
source=$(awk '{print;}' | sed -e 's/,/\//g' )
echo "$source";
done < $INPUT
The output is on separate lines, and I'm able to change the , to a /, but I'm not sure how to put the output on one line with quotes around it.
I've tried BEGIN:
source=$(awk 'BEGIN { ORS=", " }; {print;}'| sed -e 's/,/\//g' )
But this only outputs the last line, and omits the first hi/bye:
hello/goodbye
Would anyone be able to help me?
Just do the whole thing (mostly) in awk. The final sed is just here to trim some trailing cruft and inject a newline at the end:
< mycsvfile.csv awk '{print "\""$1, $2"\""}' FS=, OFS=/ ORS=", " | sed 's/, $//'
If you're willing to install trl, a utility of mine, the command can be simplified as follows:
input=mycsvfile.csv
trl -R '| ' < "$input" | tr ',|' '/,'
trl transforms multiline input into double-quoted single-line output separated by ,<space> by default.
-R '| ' (temporarily) uses |<space> as the separator instead; this assumes that your data doesn't contain | instances, but you can choose any character that you know not to be part of your data.
tr ',|' '/,' then translates all , instances (field-internal to the input lines) into / instances, and all | instances (the temporary separator) into , instances, yielding the overall result as desired.
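Based on the behavior described above, an example run should look like this (assuming trl is installed as described below):
$ printf 'hi,bye\nhello,goodbye\n' | trl -R '| ' | tr ',|' '/,'
"hi/bye", "hello/goodbye"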
Installation of trl from the npm registry (Linux and macOS)
Note: Even if you don't use Node.js, npm, its package manager, works across platforms and is easy to install; try
curl -L https://git.io/n-install | bash
With Node.js installed, install as follows:
[sudo] npm install trl -g
Note:
Whether you need sudo depends on how you installed Node.js and whether you've changed permissions later; if you get an EACCES error, try again with sudo.
The -g ensures global installation and is needed to put trl in your system's $PATH.
Manual installation (any Unix platform with bash)
Download this bash script as trl.
Make it executable with chmod +x trl.
Move it or symlink it to a folder in your $PATH, such as /usr/local/bin (macOS) or /usr/bin (Linux).
$ awk -F, -v OFS='/' -v ORS='"' '{$1=s ORS $1; s=", "; print} END{printf RS}' file
"hi/bye", "hello/goodbye"
There is no need for a bash loop, which is invariably slow.
sed and tr can do this more efficiently:
input=mycsvfile.csv
sed 's/,/\//g; s/.*/"&", /; $s/, $//' "$input" | tr -d '\n'
s/,/\//g replaces all (g) , instances with / instances (escaped as \/ here).
s/.*/"&", / encloses the resulting line in "...", followed by ,<space>:
regex .* matches the entire pattern space (the potentially modified input line);
& in the replacement string represents that match.
$s/, $// removes the undesired trailing ,<space> from the final line ($).
tr -d '\n' then simply removes the newlines (\n) from the result, because sed invariably outputs each line with a trailing newline.
Note that the above command's single-line output will not have a trailing newline; simply append ; printf '\n' if it is needed.
In awk:
$ awk '{sub(/,/,"/");gsub(/^|$/,"\"");b=b (NR==1?"":", ")$0}END{print b}' file
"hi/bye", "hello/goodbye"
Explained:
$ awk '
{
sub(/,/,"/") # replace comma
gsub(/^|$/,"\"") # add quotes
b=b (NR==1?"":", ") $0 # buffer to add delimiters
}
END { print b } # output
' file
I'm assuming you just have 2 lines in your file? If you have alternating 2 line pairs, let me know in comments and I will expand for that general case. Here is a one-line awk conversion for you:
# NOTE: I am using the octal ascii code for the
# double quote char (\42=") in my printf statement
$ awk '{gsub(/,/,"/")}NR==1{printf("\42%s\42, ",$0)}NR==2{printf("\42%s\42\n",$0)}' file
output:
"hi/bye", "hello/goodbye"
Here is my attempt in awk:
awk 'BEGIN{ ORS = " " }{ a++; gsub(/,/, "/"); gsub(/[a-z]+\/[a-z]+/, "\"&\""); print $0; if (a == 1){ print "," }}{ if (a==2){ printf "\n"; a = 0 } }'
This also works if your input has more than two lines. If you need some explanation, feel free to ask :)

sed multiple replacements with line range

I have a file with below records
user1,fuser1,luser1,user1#test.com,data,user1
user2,fuser2,luser2,user2#test.com,data,user2
user3,fuser3,luser3,user3#test.com,data,user3
I wanted to perform some text replacements from
user1,fuser1,luser1,user1#test.com,data,user1
to
New_user1,New_fuser1,New_luser1,New_user1#test.com,data,New_user1
so I wrote the sed script below.
sed -i -e 's/user/New_user/g; s/fuser/New_fuser/g; s/luser/New_luser/g' file
This works perfectly. Now I have a requirement to perform the replacement only within a specific line range.
start=2
end=3
sed -i -e ''${start},${end}'s/user/New_user/g; s/fuser/New_fuser/g; s/luser/New_luser/g' file
but this command is replacing the pattern in all lines. Example output:
user1,New_fuser1,New_luser1,user1#test.com,data,New_user1
user2,New_fuser2,New_luser2,user2#test.com,data,New_user2
user3,New_fuser3,New_luser3,user3#test.com,data,New_user3
It looks like the range is applied only to the first expression, and the remaining expressions are applied to the whole file. How can I apply the range to all expressions?
You can use awk variables for this, controlling the row and column numbers used for replacing:
awk -vFS="," -vOFS="," -v columnStart=2 -v columnEnd=3 -v rowStart=1 -v rowEnd=2 \
'NR>=rowStart&&NR<=rowEnd{for(i=columnStart; i<=columnEnd; i++) \
$i="New_"$i; print }' file
where the awk variables columnStart, columnEnd, rowStart and rowEnd determine which columns and rows to replace, with , as the delimiter.
For your input file:
$ cat input-file
user1,fuser1,luser1,user1#test.com,data,user1
user2,fuser2,luser2,user2#test.com,data,user2
user3,fuser3,luser3,user3#test.com,data,user3
Assuming I want to do the replacement in lines 2 and 3 on columns 3-4, I can set up my awk as:
awk -vFS="," -vOFS="," -v columnStart=3 -v columnEnd=4 -v rowStart=2 -v rowEnd=3 \
'NR>=rowStart&&NR<=rowEnd{for(i=columnStart; i<=columnEnd; i++) \
$i="New_"$i; print }' file
user2,fuser2,New_luser2,New_user2#test.com,data,user2
user3,fuser3,New_luser3,New_user3#test.com,data,user3
To apply to a single column, set columnStart and columnEnd to the same value, e.g. column 6 on the last line only:
awk -vFS="," -vOFS="," -v columnStart=6 -v columnEnd=6 -v rowStart=3 -v rowEnd=3 \
'NR>=rowStart&&NR<=rowEnd{for(i=columnStart; i<=columnEnd; i++) \
$i="New_"$i; print }' file
user3,fuser3,luser3,user3#test.com,data,New_user3
When using GNU sed (present on Ubuntu, probably Debian, and probably others), there is a feature which makes this easy:
https://www.gnu.org/software/sed/manual/sed.html#Common-Commands
A group of commands may be enclosed between { and } characters. This
is particularly useful when you want a group of commands to be
triggered by a single address (or address-range) match.
Example: perform substitution then print the second input line:
$ seq 3 | sed -n '2{s/2/X/ ; p}'
X
Given the original question, this should do the trick:
sed -i -e '2,3 {s/user/New_user/g; s/fuser/New_fuser/g; s/luser/New_luser/g}' file
The following works for me:
START=2
NUM=1
sed -i -e "$START,+${NUM} s/user/New_user/g; $START,+${NUM} s/fuser/New_fuser/g; $START,+${NUM} s/luser/New_luser/g" file
As you can see, there are several changes:
The line range has to be present in each expression.
The range is represented (in this case) as the start line number plus a number of lines (the number of affected lines is NUM+1).
You had extra single-quote characters in your command.
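Note that the start,+N address form is a GNU sed extension. A quick way to see which lines such a range selects:
$ seq 5 | sed -n '2,+1p'
2
3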
Using a single s command:
start=1
end=2
sed -e "$start,$end s/\([fl]*\)user/New_\1user/g" file
[fl]*user will match user with an optional leading f or l
output:
New_user1,New_fuser1,New_luser1,New_user1#test.com,data,New_user1
New_user2,New_fuser2,New_luser2,New_user2#test.com,data,New_user2
user3,fuser3,luser3,user3#test.com,data,user3

filter specific attribute from a file

I have an input.txt file that has the following text. I want to filter out the values between <id> and </id>.
- <ci>
<id>a573f0d014c18a5811793aedb5aad3</id>
<viewName>Windows</viewName>
</ci>
- <ci>
<id>7ad9088802ef62d75a15c9d4799fe8</id>
<viewName>Network</viewName>
</ci>
- <ci>
<id>abbbeeb60c4074bbc8483f321e0b43</id>
<viewName>Unix</viewName>
</ci>
Output should be like this:
a573f0d014c18a5811793aedb5aad3
7ad9088802ef62d75a15c9d4799fe8
abbbeeb60c4074bbc8483f321e0b43
With GNU grep you can use a positive lookbehind and a positive lookahead:
$ grep -oP '(?<=<id>).*(?=</id>)' file
a573f0d014c18a5811793aedb5aad3
7ad9088802ef62d75a15c9d4799fe8
abbbeeb60c4074bbc8483f321e0b43
Another grep alternative, based on the pattern of the data (the ids are 30 lowercase hex characters):
grep -o '[a-f0-9]\{30\}'
Perl solution:
perl -lane 'print $1 if /^\s*<id>(\S+)<\/id>/' file
The regex captures the information between <id> and </id> into the variable $1.
These command-line options are used:
-n : loop over every line of the input file, putting the line in the $_ variable, and do not automatically print every line
-l : removes newlines before processing, and adds them back in afterwards
-a : autosplit mode – perl will automatically split input lines on whitespace into the @F array
-e : execute the perl code
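For the sample input above, the one-liner prints:
$ perl -lane 'print $1 if /^\s*<id>(\S+)<\/id>/' file
a573f0d014c18a5811793aedb5aad3
7ad9088802ef62d75a15c9d4799fe8
abbbeeb60c4074bbc8483f321e0b43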

Assign a variable the value of a string in a file

I have a file called info.log which contains the line:
/home/jax/Main_X_1_A
X, 1 and A are meaningful and they can change. However "Main" and the underscores remain the same.
Is it possible to use a utility to assign a shell variable a value based on the information in info.log?
E.g.
MY_VERSION="?_?_?";
Where the question marks represent the single characters that are found in those locations.
For example if info.log contained this line:
/home/jax/Main_1_2_3
And we used that data to initialise a shell variable:
MY_VERSION=...
echo $MY_VERSION
The output would be:
1_2_3
Updating the question with a better example:
Info.log
MODULE=TEST
QUICK_BUILD_DIR=/usr/apps/Main_1_2_3
ANT_FILE=build.xml
FANCE=/usr/apps/test/Main_1_2_3
I want to be able to take these three numbers (1, 2 and 3):
QUICK_BUILD_DIR=/usr/apps/Main_1_2_3
And assign them to variables.
Note: 1, 2 and 3 are just example numbers and they can change.
Can you try this?
var="QUICK_BUILD_DIR=/usr/apps/Main_1_2_3"
version=$(echo "$var" | sed 's/.*Main_\(.*\)/\1/') # version will be 1_2_3
This uses bash and sed.
A GNU Awk Solution
$ MY_VERSION=$(awk -F/ '/Main_/ { sub(/Main_/, "", $NF); print $NF }' info.log)
$ echo "$MY_VERSION"
X_1_A
You can use this awk command:
cat file
/home/jill/Main_1_2_4
/home/jax/Main_1_2_3
/home/john/Main_X_1_A
awk -v u=jax -F '/' '$3==u{sub(/^Main_/, "", $4); print $4}' file
1_2_3
Here you can pass any username to awk in the u variable (jax is being passed here), and the version will be picked from that particular line.
No need for external utilities. Bash can do the string manipulation for you:
$ cat info.log
/home/jax/Main_X_1_A
$ read -r a < info.log
$ b="${a#*_}"
$ echo "$b"
X_1_A
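To address the updated question (pulling the three numbers out of the QUICK_BUILD_DIR line and assigning them to separate variables), here is a sketch; the variable names major, minor and patch are just placeholders:
version=$(sed -n 's/^QUICK_BUILD_DIR=.*Main_//p' Info.log)
IFS=_ read -r major minor patch <<< "$version"
echo "$major $minor $patch"    # -> 1 2 3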
