Remove the newline character in awk - bash

I am trying to remove the newline character from a date function's output and have it include spaces. I am saving the variable using this:
current_date=$(date "+%m/%d/ AT %y%H:%M:%S" )
I can see that this is the format I need by doing an echo $current_date.
However, when I use this variable it does not behave the way I would like.
awk '(++n==47) {print "1\nstring \nblah '$current_date' blah 2; n=0} (/blah/) {n=0} {print}' input file > output file
I need the date to stay in the current line of text and continue with no newline unless specified.
Thanks in advance.

Rather than attempting to insert the variable into the command string as you are doing, you can pass it to awk like this:
awk -v date="$(date "+%m/%d/ AT %y%H:%M:%S")" '# your awk one-liner here' input_file
You can then use the variable date as an awk variable within the script:
print "1\nstring \nblah " date " blah 2";
As an aside, it looks like your original print statement was broken: the closing double quote was missing from the end of the string.
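Putting it together with your original logic, a sketch might look like this (input_file and output_file stand in for your actual file names; note the closing double quote added after "blah 2"):
awk -v date="$(date "+%m/%d/ AT %y%H:%M:%S")" '
    (++n==47) { print "1\nstring \nblah " date " blah 2"; n=0 }
    (/blah/)  { n=0 }
    { print }
' input_file > output_file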

Insert string from variable containing "\n"s without replacing with newline literals

I have a string that I'm capturing from a curl command to a variable. The string includes some javascript and newline codes (\n). How can I insert that text into a file, at a specific line number, without sed or awk either choking on the sequences or processing them into literal new lines? Here's what I have so far:
AGENT=`curl -s -X GET 'https://some.web.site/api/blah.json' | jq '.blah[].javascript'`
LOC=`grep -n "locationmatchstring" file.htm | cut -d : -f 1`
awk -v line=$LOC -v text="$AGENT" '{print} NR==line{printf " " text}' file.htm
The gist is that I'm pulling the script from the JSON source and inserting it into the HTML page at the correct location, based on a location match string, as a new line after the match. I'm also adding the 4 spaces before the captured string so that it lines up with the indentation used in the HTML file. I've tried some variations on text="$AGENT", like text=$AGENT, text=${AGENT}, and text='"$AGENT"', none of which helped. I would like it all to go into a single long line in the HTML file, keeping the \n's where they are without expanding them.
Thoughts? And thanks!
Given:
var='foo\nbar'
Note the difference:
$ awk -v var="$var" 'BEGIN{print "<" var ">"}'
<foo
bar>
$ var="$var" awk 'BEGIN{var=ENVIRON["var"]; print "<" var ">"}'
<foo\nbar>
$ awk 'BEGIN{var=ARGV[1]; ARGV[1]=""; print "<" var ">"}' "$var"
<foo\nbar>
See http://cfajohnson.com/shell/cus-faq-2.html#Q24 for details.
Never do printf <input data>, by the way, unless you have a VERY specific purpose in mind and fully understand all of the caveats/implications. Instead, do printf "%s", <input data> - imagine the difference if/when <input data> includes printf formatting chars like %s.
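A quick way to see the difference (msg here is just contrived example data that happens to contain printf formatting chars):
$ awk -v msg='50%% off' 'BEGIN{ printf msg "\n" }'
50% off
$ awk -v msg='50%% off' 'BEGIN{ printf "%s\n", msg }'
50%% off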
Also, always quote your shell variables (google it), and never use all upper case for non-exported shell variable names, both by convention and to avoid clashing with environment variables.
So, assuming you use loc instead of LOC and agent instead of AGENT in the assignments above, your entire awk line would be (assuming your awk supports ENVIRON; otherwise use the ARGV approach above):
agent="$agent" awk -v line="$loc" 'BEGIN{text=ENVIRON["agent"]} {print} NR==line{printf " %s", text}' file.htm

Update version number in property file using bash

I am new to bash scripting and I need help with awk. So the thing is that I have a property file with a version inside, and I want to update it.
version=1.1.1.0
and I use awk to do that
file="version.properties"
awk -F'["]' -v OFS='"' '/version=/{
split($4,a,".");
$4=a[1]"."a[2]"."a[3]"."a[4]+1
}
;1' $file > newFile && mv newFile $file
but I am getting a strange result: version="1.1.1.0""...1
Could someone please help me with this?
You mentioned in your comment you want to update the file in place. You can do that in a one-liner with perl:
perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i version.properties
Explanation
-e is followed by a script to run. With -p and -i, the effect is to run that script on each line, and modify the file in place if the script changes anything.
The script itself, broken down for explanation, is:
/^version=/ and # Do the following on lines starting with `version=`
s/ # Make a replacement on those lines
(\d+\.\d+\.\d+\.)(\d+)/ # Match x.y.z.w, and set $1 = `x.y.z.` and $2 = `w`
$1 . ($2+1)/ # Replace x.y.z.w with a copy of $1, followed by w+1
e # This tells Perl the replacement is Perl code rather
# than a text string.
Example run
$ cat foo.txt
version=1.1.1.2
$ perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i foo.txt
$ cat foo.txt
version=1.1.1.3
This is not the best way, but here's one fix.
Test case
I am assuming the input file has at least one line that is exactly version=1.1.1.0.
$ awk -F'["]' -v OFS='"' '/version=/{
> split($4,a,".");
> $4=a[1]"."a[2]"."a[3]"."a[4]+1
> }
> ;1' <<<'version=1.1.1.0'
Output:
version=1.1.1.0"""...1
The """ is because you are assigning to field 4 ($4). When you do that, awk adds field separators (OFS) between fields 1 and 2, 2 and 3, and 3 and 4. Three OFS => """, in your example.
Minimal change
$ awk -F'["]' -v OFS='"' '/version=/{
split($1,a,".");
$1=a[1]"."a[2]"."a[3]"."a[4]+1;
print
}
' <<<'version=1.1.1.0'
version=1.1.1.1
Two changes:
Change $4 to $1
Since the input field separator (-F) is ["], $4 is whatever would be after the third " (if there were any in the input). Therefore, split($4, ...) splits an empty field. The contents of the line, before the first " (if any), are in $1.
print at the end instead of ;1
The 1 after the closing curly brace is the next condition, and there is no action specified. The default action is to print the current line, as modified, so the 1 triggers printing. Instead, just print within your action when you are done processing. That way your action is self-contained. (Of course, if you needed to do other processing, you might want to print later, after that processing.)
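As a tiny illustration of that last point, a bare 1 with no action prints every input line, just like an explicit print:
$ awk '1' <<<$'a\nb'
a
b
$ awk '{ print }' <<<$'a\nb'
a
b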
You can use the = as the delimiter, like this:
awk -F= -v v=1.0.1 '$1=="version"{printf "version=\"%s\"\n", v}' file.properties
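Note that this one-liner prints only the matching line and hard-codes the new version. If you also want the increment and the rewrite from the original question, a sketch along the same lines (assuming the line really is exactly version=1.1.1.0, with no surrounding quotes) might be:
file="version.properties"
awk -F'=' -v OFS='=' '$1 == "version" {
    n = split($2, a, ".")                 # split the version into its numeric parts
    a[n]++                                # bump the last part
    $2 = a[1]
    for (i = 2; i <= n; i++) $2 = $2 "." a[i]
} 1' "$file" > newFile && mv newFile "$file"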

Replace some lines in fasta file with appended text using while loop and if/else statement

I am working with a fasta file and need to add line-specific text to each of the headers. So for example if my file is:
>TER1
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
I want a while loop that will read through each line; for those with a > at the start, I want to append |population: plus the first three characters after the >. So line one would be:
>TER1|population:TER
etc.
I can't figure out how to make this work. Here is my best attempt so far.
filename="testfasta.fa"
while read -r line
do
if [[ "$line" == ">"* ]]; then
id=$(cut -c2-4<<<"$line")
printf $line"|population:"$id"\n" >>outfile
else
printf $line"\n">>outfile
fi
done <"$filename"
This produces a file with the original headers and the following lines, each printed on a single line.
Can someone tell me where I'm going wrong? My if and else branches aren't working at all!
Thanks!
You could use a while loop if you really want,
but sed would be simpler:
sed -e 's/^>\(...\).*/&|population:\1/' "$filename"
That is, for lines starting with > (pattern: ^>),
capture the next 3 characters (with \(...\)),
and match the rest of the line (.*),
replace with the line as it was (&),
and the fixed string |population:,
and finally the captured 3 characters (\1).
This will produce for your input:
>TER1|population:TER
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2|population:TER
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1|population:URC
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2|population:URC
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3|population:UCR
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
Or you can use this awk, also producing the same output:
awk '{sub(/^>.*/, $0 "|population:" substr($0, 2, 3))}1' "$filename"
You can do this quickly in awk:
awk '$1~/^>/{$1=$1"|population:"substr($1,2,3)}{}1' infile.txt > outfile.txt
$ awk '$1~/^>/{$1=$1"|population:"substr($1,2,3)}{}1' testfile
>TER1|population:TER
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>TER2|population:TER
AGCATGCTAGCTAGACGACTCGATCGCATGCTC
>URC1|population:URC
AGCATGCTAGCTAGTCGACTCGATCGCATGCTC
>URC2|population:URC
AGCATGCTACCTAGTCGACTCGATCGCATGCTC
>UCR3|population:UCR
AGCATGCTAGCTAGTCGACTCGATGGCATGCTC
Here awk will:
Test if the record starts with a >. The $1 looks at the first field, but $0 (the entire record) would work just as well in this case. The ~ performs a regex match, and ^> means "starts with >", making the test: ($1~/^>/)
If so, it sets the first field to the output you are looking for (using substr() to get the bits of the string you want): {$1=$1"|population:"substr($1,2,3)}
Finally, it prints out the entire record (with the changes if applicable): in {}1, the {} is an empty do-nothing action and the 1 triggers the default action, which is to print the entire record ({print $0}).

How to add a character to the end of each variable with awk?

I have a tab-delimited file to which I want to add "$" at the end of each variable. Can I do that with awk, sed, or anything else?
Example
input:
a seq1 anot1
b seq2 anot2
c seq3 anot3
d seq4 anot4
I need to have this:
output:
a$ seq1$ anot1$
b$ seq2$ anot2$
c$ seq3$ anot3$
d$ seq4$ anot4$
Any answer will be appreciated,
Thanks
In bash alone:
while read line; do echo "${line//$'\t'/\$$'\t'}\$"; done < file
This hackish solution relies on two "special" things: parameter expansion to do the replacement, and ANSI-C quoting ($'\t') so that the tabs can be expressed.
In awk, you can process fields much more safely:
awk -F'\t' 'BEGIN{OFS=FS} {for(n=1;n<=NF;n++){$n=$n "$"}} 1' file
This works by stepping through each line of input and replacing each field with itself plus the dollar sign. The BEGIN block ensures that your output will use the same field separator as your input. The 1 at the end is awk shorthand for "print the current line".
Late to the party... another awk solution: prefix the field and record separators with "$".
$ awk -F'\t' 'BEGIN{OFS="$"FS; ORS="$"RS} {$1=$1}1' file
(The $1=$1 assignment forces awk to rebuild the record with the new separators.)
With sed:
sed 's/[^ ]*/&$/g' filename
which replaces each run of non-space characters with itself (&) followed by a $.
Oops! You said tabs. You can replace the space above with \t to handle tab-delimited input:
sed 's/[^\t]*/&$/g' filename
Actually, even better, for tabs OR spaces:
sed 's/[^[:blank:]]*/&$/g' filename
awk is your friend:
awk '{for(i=1;i<=NF;i++)sub(/$/,"$",$i);print}' file
or
awk '{for(i=1;i<=NF;i++)sub(/$/,"$",$i);}1' file
Sample Output
a$ seq1$ anot1$
b$ seq2$ anot2$
c$ seq3$ anot3$
d$ seq4$ anot4$
What is happening here?
Using a for loop, we iterate through all the fields in a record.
We use awk's sub function to replace the end of each field (the regex /$/) with a dollar sign (the string "$") in each field ($i).
Use print explicitly to print the record; in the second version, the numeric 1 triggers the default action, which is also to print the record.
awk '{gsub(/ /,"$ ")}{print $0 "$\r"}' file
a$ seq1$ anot1$
b$ seq2$ anot2$
c$ seq3$ anot3$
d$ seq4$ anot4$
What happens?
First, replace each space with a dollar sign and a new space.
Then, print the line with a dollar sign and a carriage return appended to the end.
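Since the question's file is tab-delimited, the same idea applied to tabs (and without the trailing carriage return) might look like this variant sketch:
awk '{ gsub(/\t/, "$\t"); print $0 "$" }' file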

How to find the position of a string in a file in unix shell script

Can you please help me solve this puzzle? I am trying to print the location of a string (i.e., line #) in a file, first to the std output, and then capture that value in a variable to be used later. The string is "my string", the file name is "myFile", which is defined as follows:
this is first line
this is second line
this is my string on the third line
this is fourth line
the end
Now, when I use this command directly at the command prompt:
% awk 's=index($0, "my string") { print "line=" NR, "position= " s}' myFile
I get exactly the result I want:
% line= 3, position= 9
My question is: if I define a variable VAR="my string", why can't I get the same result when I do this:
% awk 's=index($0, $VAR) { print "line=" NR, "position= " s}' myFile
It just won't work! I even tried putting the $VAR in quotation marks, to no avail. I tried using VAR (without the $ sign), no luck. I tried everything I could possibly think of ... Am I missing something?
awk variables are not the same as shell variables. You need to define them with the -v flag.
For example:
$ awk -v var="..." '$0~var{print NR}' file
will print the line number(s) of pattern matches. Or, for your case with index:
$ awk -v var="$Var" 'p=index($0,var){print NR,p}' file
Using all uppercase may not be a good convention, since you may accidentally overwrite other variables.
To capture the output into a shell variable:
$ info=$(awk ...)
For multi-line output assigned to a shell array, you can do:
$ values=( $(awk ...) ); echo ${values[0]}
However, if the output contains more than one field, each field will be assigned its own array index. You can change that by setting the IFS variable, such as:
$ IFS=$(echo -en "\n\b"); values=( $(awk ...) )
which will capture the complete lines as the array values.
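Putting the pieces together for the example in the question, one way to capture both the line number and the position into shell variables (line_no and pos are just illustrative names) might be:
read -r line_no pos < <(
  awk -v var="my string" 'p = index($0, var) { print NR, p; exit }' myFile
)
echo "line=$line_no, position=$pos"    # prints: line=3, position=9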
