grep string containing `":"` patterns - bash

This is a piece of my log file on the server:
"order_items_subtotal":"60.5100","order_final_due_amount":"0.0000","items":[{"product_id"
I need to grep the lines that contain "order_final_due_amount":"0.0000" anywhere in the log file.
For this, I tried:
tail -f pp_create_shipment2018-12-05.log | grep "order_final_due_amount":"0.0000"
but I got zero results. What is wrong with my command?

" is interpreted by the shell (it's used to quote e.g. spaces).
grep "order_final_due_amount":"0.0000"
is equivalent to
grep order_final_due_amount:0.0000
To pass " to grep, you need to quote it:
grep '"order_final_due_amount":"0\.0000"'
(Also, . is special in regexes and should be escaped.)
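Alternatively, since the pattern is a fixed string rather than a regular expression, grep -F matches it literally and nothing needs escaping; a minimal sketch with the log file from the question:
tail -f pp_create_shipment2018-12-05.log | grep -F '"order_final_due_amount":"0.0000"'
Note that tail -f only shows the last few lines plus whatever is appended afterwards; to search the whole existing file, run grep on the file directly.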

Using Perl, you only need to escape the "."; the double quotes need no special treatment because the pattern sits inside single-quoted Perl code, which the shell does not touch. qr// precompiles the pattern.
Check this out:
> cat product.log
order items
items1
"order_items_subtotal":"60.5100","order_final_due_amount":"0.0000","items":[{"product_id"
item2
"order_items_subtotal":"60.5100","order_final_due_amount":"000000","items":[{"product_id"
items3
"order_items_subtotal":"60.5100",order_final_due_amount:"0.0000","items":[{"product_id"
items4
> perl -ne ' $pat=qr/"order_final_due_amount":"0\.0000"/; print if /$pat/ ' product.log
"order_items_subtotal":"60.5100","order_final_due_amount":"0.0000","items":[{"product_id"
>
Thanks to melpomene, the below also works
> perl -ne ' print if /"order_final_due_amount":"0\.0000"/ ' product.log
"order_items_subtotal":"60.5100","order_final_due_amount":"0.0000","items":[{"product_id"
>

Related

convert a file's content using a shell script

Hello everyone, I'm a beginner in shell scripting. On a daily basis I need to convert a file's data to another format; I usually do it manually with a text editor, but I often make mistakes, so I decided to write a simple script that does the work for me.
The file's content like this
/release201209
a1,a2,"a3",a4,a5
b1,b2,"b3",b4,b5
c1,c2,"c3",c4,c5
to this:
a2>a3
b2>b3
c2>c3
The script should ignore the first line and print the second and third values separated by '>'
I'm halfway there, and here is my code:
#!/bin/bash
#while Loops
i=1
while IFS=\" read t1 t2 t3
do
test $i -eq 1 && ((i=i+1)) && continue
echo $t1|cut -d\, -f2 | { tr -d '\n'; echo \>$t2; }
done < $1
The problem with my code is that the last line isn't printed unless the file ends with a newline (\n).
I also want the output written to a new CSV file (I tried redirecting standard output to my new file, but only the last echo is printed there).
Can someone please help me out? Thanks in advance.
Rather than treating the double quotes as a field separator, it seems cleaner to just delete them (assuming that is valid). E.g.:
$ < input tr -d '"' | awk 'NR>1{print $2,$3}' FS=, OFS=\>
a2>a3
b2>b3
c2>c3
If you cannot just strip the quotes as in your sample input, because those quotes are escaping commas, you could hack together a solution, but you would be better off using a proper CSV parsing tool (e.g. Perl's Text::CSV).
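For completeness, a minimal sketch of that Text::CSV route for this task, assuming the input layout shown above (the file names input and output.csv are placeholders):
perl -MText::CSV -E '
    my $csv = Text::CSV->new({ binary => 1 });
    <STDIN>;                                # skip the /release201209 header line
    while (my $row = $csv->getline(*STDIN)) {
        say join ">", @{$row}[1, 2];        # fields are zero-indexed
    }
' < input > output.csv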
Here's a simple pipeline that will do the trick:
sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>'
Here, we're just removing the first line (as desired), selecting fields 2 & 3 (based on a comma field separator), removing the double quotes and mapping the remaining , to >.
Use this Perl one-liner:
perl -F',' -lane 'next if $. == 1; print join ">", map { tr/"//d; $_ } @F[1,2]' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
-F',' : Split into @F on comma, rather than on whitespace.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches

bash remove/change values from one field with a loop

I have a file where the 10th column (when viewed in Excel) contains prices.
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"5000",19.50,justin,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"75,000",19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"100,000",19.50,selena,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"5500",19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"50,000",19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"350,000",19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"50000",19.50,bieber,20160506,0,,N,E,,,,,,
When it is exported to CSV, the quotes and the commas stay.
I need to pick out the column that is surrounded by quotes (I use grep -o),
and then, after clearing the commas, I get rid of the quotes.
I can't use a quote or comma as the delimiter in awk because the prices get broken up into different fields.
cat /tmp/wowmom | awk -F ',' '{print $10}'
"5000"
"75
"100
"5500"
"50
"350
"50000"
while read line
do
clean_price=$(grep -o '".*"' $line)
echo "$clean_price" | tr -d',' > cleanprice1
echo "cleanprice1" | tr -d'"' > clearnprice2
done </tmp/wowmom
I get "No such file or directory" errors on the grep, though:
grep:CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"5000",19.50,justin,20160506,0,,N,E,,,,,,:No such file or directory
grep:CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"75,000",19.50,bieber,20160506,0,,N,E,,,,,,:No such file or directory
grep:CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"100,000",19.50,selena,20160506,0,,N,E,,,,,,:No such file or directory
grep:CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"50,000",19.50,gomez,20160506,0,,N,E,,,,,,:No such file or directory
grep:CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"350,000",19.50,bieber,20160506,0,,N,E,,,,,,:No such file or directory
I want to somehow isolate the value within the quotes with grep -o, take the commas out of the number, and then use awk to take the quotes out of field 10.
I am doing this manually right now. It is a surprisingly long job - there are thousands of lines in this file.
You can use FPAT with GNU awk for this:
awk -v FPAT='"[^"]+",|[^,]*' '{gsub(/[",]+/, "", $10)} 1' OFS=, file
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,5000,19.50,justin,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,75000,19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,100000,19.50,selena,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,5500,19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,50000,19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,350000,19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,50000,19.50,bieber,20160506,0,,N,E,,,,,,
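FPAT defines what a field looks like rather than what separates fields, so a quoted price keeps its embedded comma and counts as one field. A standalone illustration of the idea, assuming gawk (this demo uses a simpler variant of the FPAT above):
$ echo 'a,"b,c",d' | gawk -v FPAT='"[^"]+"|[^,]*' '{ print $2 }'
"b,c"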
You are using the wrong tool here.
sed -r 's/^(([^,]+,){9})"([^,]+),?([^,]+)"/\1\3\4/' file.csv > newfile.csv
The regular expression captures the first nine fields into the first back reference (and also populates the second with the last of the nine fields), the number before the separator comma in the third, and the rest of the number in the fourth, then the substitution glues them back without the skipped elements.
If you have numbers with more than one thousands separator (i.e. above one million), you will need a slightly more complex script.
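A sketch of such a script, untested and assuming GNU sed: the :a ... ta loop deletes one thousands separator inside the quoted tenth field per pass until none remain, and a final substitution strips the quotes:
sed -r ':a; s/^(([^,]*,){9}"[0-9]+),([0-9])/\1\3/; ta; s/^(([^,]*,){9})"([0-9]+)"/\1\3/' file.csv > newfile.csv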
In terms of what's wrong with your original script, the second argument to grep is the name of the file to grep, not the string to grep. You can use a here string (in Bash) or pipe the string to grep, but again, this is not how you do it properly.
grep -o '"[^"]*"' <<<"$line"
or
printf '%s' "$line" | grep -o '"[^"]*"'
Notice also the quotes: omitting them is a common newbie error; you can get away with it for a while, and then it bites you.
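A quick illustration of why the quotes matter (hypothetical shell session with placeholder file names; unquoted expansion is subject to word splitting and filename globbing):
$ line='"75,000"   *'
$ echo $line        # unquoted: spacing collapses and * may expand to file names
"75,000" file1.txt file2.txt
$ echo "$line"      # quoted: the value is passed through verbatim
"75,000"   *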
A pure Bash solution:
while IFS=\" read -r l n r; do
printf '%s\n' "$l${n//,/}$r"
done < input_file.txt
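For example, fed one of the sample lines, read splits on the double quotes (l is the text before them, n the quoted number, r the rest) and ${n//,/} deletes every comma in n:
$ while IFS=\" read -r l n r; do printf '%s\n' "$l${n//,/}$r"; done <<< 'CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,"75,000",19.50,bieber,20160506,0,,N,E,,,,,,'
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,75000,19.50,bieber,20160506,0,,N,E,,,,,,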
If you're looking for perl:
#!perl
use strict;
use warnings;
use Text::CSV;
use autodie;
my $csv = Text::CSV->new({binary=>1, eol=>"\n"});
my $filename = shift @ARGV;
open my $fh, "<", $filename;
while (my $row = $csv->getline($fh)) {
    $row->[9] =~ s/,//g;            # strip the thousands separators from field 10
    $csv->print(*STDOUT, $row);
}
close $fh;
demo:
$ perl csv.pl file
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,5000,19.50,justin,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,75000,19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,100000,19.50,selena,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,5500,19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,50000,19.50,gomez,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,350000,19.50,bieber,20160506,0,,N,E,,,,,,
CASPER,N,CUSIP,0000000000,WOWMOM,USD,USD,US,B,50000,19.50,bieber,20160506,0,,N,E,,,,,,

How to use sed to extract a string [duplicate]

I need to extract a number from the output of a command, cmd. The output is: type: 1000
So my question is how to execute the command, store its output in a variable, and extract the 1000 in a shell script. Also, how do you store the extracted string in a variable?
This question has been answered in pieces here before; it would be something like this:
line=$(sed -n '2p' myfile)
echo "$line"
if [ "$(echo "$line" | grep 'type: 1000')" ]; then
    echo "It's there!"
fi
Store output of sed into a variable
String contains in Bash
EDIT: sed is very limited; you would need to use bash, Perl, or awk for what you need.
This is a typical use case for grep:
output=$(cmd | grep -o '[0-9]\+')
You can store the output of a command, or even a pipeline of commands, in a shell variable using so-called command substitution:
variable=$(cmd);
In the comments it emerged that the output of cmd contains more lines than just type: 1000. In this case I would suggest sed:
output=$(cmd | sed -n '/type: /{s/type: \([0-9]\+\)/\1/p;q}')
You tagged your question as sed but your question description does not restrict other tools, so here's a solution using awk.
output=`cmd | awk -F':' '/type: [0-9]+/{print $2}'`
Alternatively, you can use the newer $( ) syntax. Some find the newer syntax preferable, and it can be conveniently nested without the need for escaping backticks.
output=$(cmd | awk -F':' '/type: [0-9]+/{print $2}')
If the output is rigidly restricted to "type: " followed by a number, you can just use cut.
var=$(echo 'type: 1000' | cut -f 2 -d ' ')
Obviously you'll have to pipe the output of your command to cut; I'm using echo as a demo.
In addition, I'd use grep and then cut if the string you are searching for is more complex. If we assume there can be all kinds of numbers in the text, but only one occurrence of "type: " followed by a number, you can use the command:
>> var=$(echo "hello 12 type: 1000 foo 1001" | grep -oE "type: [0-9]+" | cut -f 2 -d ' ')
>> echo $var
1000
You can use the | operator to send the output of one command to another, like so:
echo " 1\n 2\n 3\n" | grep "2"
This sends the string " 1\n 2\n 3\n" to the grep command, which will search for the line containing 2. It sound like you might want to do something like:
cmd | grep "type"
Here is a plain sed solution that uses a regular expression to find the number in your string:
cmd | sed 's/^.*type: \([0-9]\+\).*$/\1/'
^ means from the start of the line
.* can be any characters (also none)
\([0-9]\+\) captures the digits (at least one)
.*$ consumes the rest of the line
\1 uses the first (and here only) captured group as the replacement for the whole line

Grep (Bash) error

I have a file like this called new.samples.dat
-4.5000000000E-01 8.0000000000E+00 -1.3000000000E-01
5.0000000000E-02 8.0000000000E+00 3.4000000000E-01
...
I have to search for all the numbers from this file in another file called Remaining.Simulations.dat and copy the matching lines to another file. I did it like this:
for sample_index in $(seq 1 100)
do
sample=$(awk 'NR=='$sample_index'' new.samples.dat)
grep "$sample" Remaining.Simulations.dat >> Previous.Training.dat
done
It works almost fine, but it does not copy all of the $sample lines into Previous.Training.dat even though I am sure they are in Remaining.Simulations.dat.
These errors appear:
grep: invalid option -- '.'
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Do you have any idea how to solve it? Thank you.
It's because you're trying to grep for something like -4.5 and grep is treating that as an option rather than a search string. If you use -- to indicate there are no more options, this should work okay:
pax> echo -4.5000000000E-01 | grep -4.5000000000E-01
grep: invalid option -- '.'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
pax> echo -4.5000000000E-01 | grep -- -4.5000000000E-01
-4.5000000000E-01
In addition, if you pass the string 7.2 to grep, it will match any line containing 7 followed by any character followed by 2 since:
Regular expressions treat . as a special character; and
Without start and end markers, 7.2 will also match 47.2, 7.25 and so on.
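Putting both points together, and using grep -F so the numbers are matched as fixed strings rather than regexes, the loop from the question could look like this sketch:
for sample_index in $(seq 1 100)
do
    sample=$(awk -v n="$sample_index" 'NR == n' new.samples.dat)
    grep -F -- "$sample" Remaining.Simulations.dat >> Previous.Training.dat
done
Note that -- is still needed with -F, because a pattern beginning with a dash would otherwise be parsed as an option.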
With awk you can try something like:
awk '
    # First file: remember every number that appears in new.samples.dat
    NR == FNR {
        for (i = 1; i <= NF; i++) {
            numbers[$i]++
        }
        next
    }
    # Second file: print any line that contains one of those numbers
    {
        for (number in numbers) {
            if (index($0, number) > 0) {
                print $0
                break    # do not print the same line twice if several numbers match
            }
        }
    }' new.samples.dat Remaining.Simulations.dat > anotherfile

Get string between strings in bash

I want to get the string between <sometag param=' and '>
I tried to use the method from Get any string between 2 string and assign a variable in bash to get the "x":
echo "<sometag param='x'><irrelevant stuff='nonsense'>" | tr "'" _ | sed -n 's/.*<sometag param=_\(.*\)_>.*/\1/p'
The problem (apart from the inefficiency, because I just cannot manage to escape the apostrophe correctly for sed) is that sed matches greedily, i.e. the output is:
x_><irrelevant stuff=_nonsense
but the correct output would be the minimum-match, in this example just "x"
Thanks for your help
You are probably looking for something like this:
sed -n "s/.*<sometag param='\([^']*\)'>.*/\1/p"
Test:
echo "<sometag param='x'><irrelevant stuff='nonsense'>" | sed -n "s/.*<sometag param='\([^']*\)'>.*/\1/p"
Results:
x
Explanation:
Instead of a greedy capture, use a negated character class: [^']* matches anything except ' any number of times, so it cannot run past the closing quote. To make the pattern stick, this is followed by '>.
You can also use double quotes so that you don't need to escape the single quotes. If you wanted to escape the single quotes, you'd do this:
... | sed -n 's/.*<sometag param='\''\([^'\'']*\)'\''>.*/\1/p'
Notice that the single quotes aren't really escaped: the sed expression is closed, an escaped single quote is appended, and the expression is re-opened. Think of '\'' as a four-character escape sequence.
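A quick demonstration of that four-character sequence on its own:
$ echo 'It'\''s there'
It's there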
Personally, I'd use GNU grep. It would make for a slightly shorter solution. Run like:
... | grep -oP "(?<=<sometag param=').*?(?='>)"
Test:
echo "<sometag param='x'><irrelevant stuff='nonsense'>" | grep -oP "(?<=<sometag param=').*?(?='>)"
Results:
x
You don't have to assemble regexes in cases like this; you can just use ' as the field separator:
in="<sometag param='x'><irrelevant stuff='nonsense'>"
IFS="'" read x whatiwant y <<< "$in" # bash
echo "$whatiwant"
awk -F\' '{print $2}' <<< "$in" # awk
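Both approaches print the value between the first pair of single quotes:
$ awk -F\' '{print $2}' <<< "<sometag param='x'><irrelevant stuff='nonsense'>"
x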
