I wrote a script that generates an array of URLs. I want to open those URLs and extract the lowest price. I tried it with:
curl http://www.orbitz.com/shop/home?type=air&ar.rt.numAdult=1&ar.rt.numChild=0&_ar.rt.narrowSel=0&search=Search+Flights&ar.rt.child[2]=&ar.rt.leaveSlice.orig.key=las&strm=true&ar.rt.child[6]=&ar.rt.numSenior=0&ar.rt.narrow=airlines&ar.rt.carriers[2]=&ar.rt.cabin=C&_ar.rt.nonStop=0&ar.rt.child[3]=&ar.rt.child[7]=&_ar.rt.leaveSlice.originRadius=0&ar.rt.carriers[1]=&ar.rt.returnSlice.time=Anytime&ar.rt.child[4]=&ar.rt.child[0]=&_ar.rt.leaveSlice.destinationRadius=0&ar.rt.leaveSlice.time=Anytime&ar.rt.carriers[0]=&ar.rt.returnSlice.date=09%2F24%2F14&ar.rt.leaveSlice.date=09%2F23%2F14&ar.rt.leaveSlice.dest.key=lax&_ar.rt.flexAirSearch=0&ar.type=roundTrip&ar.rt.child[5]=&ar.rt.child[1]=|grep \"div class='basePrice '\"
but I always get the whole page content. I also tried various sed combinations and that didn't work either. How can I get just the lowest price, or at least a list of all prices?
As a start, you need to quote the URL properly:
curl 'http://www.orbitz.com/shop/home?type=air&ar.rt.numAdult=1&ar.rt.numChild=0&_ar.rt.narrowSel=0&search=Search+Flights&ar.rt.child[2]=&ar.rt.leaveSlice.orig.key=las&strm=true&ar.rt.child[6]=&ar.rt.numSenior=0&ar.rt.narrow=airlines&ar.rt.carriers[2]=&ar.rt.cabin=C&_ar.rt.nonStop=0&ar.rt.child[3]=&ar.rt.child[7]=&_ar.rt.leaveSlice.originRadius=0&ar.rt.carriers[1]=&ar.rt.returnSlice.time=Anytime&ar.rt.child[4]=&ar.rt.child[0]=&_ar.rt.leaveSlice.destinationRadius=0&ar.rt.leaveSlice.time=Anytime&ar.rt.carriers[0]=&ar.rt.returnSlice.date=09%2F24%2F14&ar.rt.leaveSlice.date=09%2F23%2F14&ar.rt.leaveSlice.dest.key=lax&_ar.rt.flexAirSearch=0&ar.type=roundTrip&ar.rt.child[5]=&ar.rt.child[1]=' | \
grep "div class='basePrice '"
And perhaps your grep command is really meant to be:
grep 'div class="basePrice'
You should probably use an HTML parser rather than sed and grep for this.
http://blog.codinghorror.com/parsing-html-the-cthulhu-way/
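For example, an HTML-aware extraction tool such as xidel can select the price elements directly, after which sorting them is easy. A rough, untested sketch, assuming the prices really sit in elements with class basePrice (as your grep pattern suggests) and with $url standing in for the long quoted URL above:
curl -s "$url" | xidel - -e 'css(".basePrice")' | tr -d '$, ' | sort -n | head -n 1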
I am a complete beginner at shell scripting and I am trying to iterate through a set of JSON files and extract a certain field from each. Each JSON file has a "country":"xxx" field. In each JSON file there are 10k occurrences of this field with the same country name, so I only need the first occurrence, which I can get with "-m 1".
I tried to use grep for this but could not figure out how to extract the whole field, including the country name, at its first occurrence in each file.
for FILE in *.json;
do
grep -o -a -m 1 -h -r '"country":"' $FILE;
done
I tried adding another pipe with the pattern below, but it did not work:
| egrep -o '^[^"]+'
Actual Output:
"country":"
"country":"
"country":"
Desired Output:
"country:"romania"
"country:"united kingdom"
"country:"tajikistan"
but I need the whole thing. Any help would be great. Thanks
There is one general answer to the question "I only want the first occurrence", and that answer is:
... | head -n 1
This means: whatever you do, take the head (the first lines); the -n switch lets you say how many lines you want (one in this case).
The same can be done for the last occurrence(s), but then you use tail instead of head (it also takes the -n switch).
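Combined with a grep pattern that captures the whole field rather than just the prefix, your loop could look something like this (just a sketch; the bracket expression is one way to grab everything up to the closing quote):
for FILE in *.json; do
    grep -o '"country":"[^"]*"' "$FILE" | head -n 1
done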
After trying many things, I found the pattern I was looking for:
grep -Po '"country":.*?[^\\]",' $FILE | head -n 1;
I'm new to bash scripting and I need to make a script that goes through log files of jobs that ran, extracting certain values such as the memory used and the memory requested so that memory usage can be calculated.
To begin this I'm simply trying to get a grep command that will grep a value between two patterns in a file, which will be my starting point for this script.
The file looks something like this:
20200429:04/29/2020 04:25:32;S;1234567.vpbs3;user=xx group=xxxxxx=_xxx_xxx_xxxx jobname=xx_xxxxxx queue=xxx ctime=1588148732 qtime=1588148732 etime=1588148732 start=1588148732 exec_host=xxx2/1*8 exec_vnode=(xx2:mem=402653184kb:ncpus=8) Resource_List.mem=393216mb Resource_List.ncpus=8 Resource_List.nodect=1 Resource_List.place=free Resource_List.preempt_targets=NONE Resource_List.Qlist=xxxq Resource_List.select=1:mem=393216mb:ncpus=8 Resource_List.walltime=24:00:00 resource_assigned.mem=402653184kb resource_assigned.ncpus=8
The values I need to extract are the memory figures (Resource_List.mem=393216mb and resource_assigned.mem=402653184kb in this example). It's multiple jobs and dates, so the file goes on with more blocks like this, with different dates and numbers.
From going through similar questions online, I've come up with:
egrep -Eo 'Resource_List.mem=.{1,50}' sampleoutput.txt | cut -d "=" -f 2-
and I get multiple lines like this:
393216mb Resource_List.ncpus=8 Resource_List.nodec
and I'm stuck on how to get only the '393216mb', as I've never really used grep or cut much. Any suggestions, even ones not using grep, would be greatly appreciated!
Use:
grep -o -E 'Resource_List.mem=[^\ ]+|resource_assigned.mem=[^\ ]+'
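If you only want the value itself, you can cut the key off afterwards, much like in your own attempt (a small extension of the command above, using the sample file name from the question):
grep -oE 'Resource_List.mem=[^ ]+' sampleoutput.txt | cut -d= -f2
which should print just 393216mb for the sample line.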
Very close! . is a wildcard; you want to match digits.
egrep -Eo 'Resource_List.mem=[0-9]*..' sampleoutput.txt
I am trying to extract a list of domain names from a httrack data stream using grep. I have it close to working, but the result also includes any and all sub-domains.
httrack --skeleton http://www.ilovefreestuff.com -V "cat \$0" | grep -iEo "([0-9,a-z\.-]+)\.(com)"
Here is my current example result:
domain1.com
domain2.com
www.domain3.com
subdomain.domain4.com
whatever.domain5.com
Here is my desired example result.
domain1.com
domain2.com
domain3.com
domain4.com
domain5.com
Is there something I can add to this grep expression, or should I pipe it to a new sed expression to truncate any subdomains? And if so, how do I accomplish this task? I'm stuck. Any help is much appreciated.
Regards,
Wyatt
You could drop the . in the grep pattern. The following should work
httrack --skeleton http://www.ilovefreestuff.com -V "cat \$0" |
grep -iEo '[[:alnum:]-]+\.(com|net|org)'
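You can check it against the sample list from the question; since the dot cannot be part of the bracket expression, only the last label before the TLD survives:
printf '%s\n' domain1.com www.domain3.com subdomain.domain4.com |
  grep -iEo '[[:alnum:]-]+\.(com|net|org)'
domain1.com
domain3.com
domain4.com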
If you just want .com domains, the following will work: it strips http:// (with or without the s) and any leading sub-domains. As you can see, though, it only works for .com.
/(?:https?:\/\/[a-z0-9.]*?)([a-zA-Z0-9-]*\.com)/
Example Dataset
http://www.ilovefreestuff.com/
https://test.ilovefreestuff.com/
https://test.sub.ilovefreestuff.com/
REGEX101
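To run that pattern from the shell, GNU grep's -P mode with \K gives roughly the same effect (a sketch, not part of the original answer; urls.txt is a hypothetical file with one URL per line):
grep -Po 'https?://[a-z0-9.]*?\K[a-zA-Z0-9-]+\.com' urls.txt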
That being said, it is generally bad practice to parse and/or validate domain names using regex, as there are a ton of variants that can never be fully accounted for; the exception is when the conditions for matching and/or the dataset are clearly defined and not all-encompassing. THIS post has more details on this process and covers a few more situations.
I use this code to include all domains & subdomains (the sed step strips out anything that looks like an IPv4 address):
grep -oE '[[:alnum:]_.-]+[.][[:alnum:]_.-]+' file_name | sed -re 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}//g' | sort -u > test.txt
I have a GPS unit which extracts longitude and latitude and outputs them as a Google Maps API link:
http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false
From this I'd like to call it via curl and display the "short_name" on line 20:
"short_name" : "Northwood",
so I'd just like to be left with
Northwood
so something like
curl -s "http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false" | sed 'short_name'
Mmmm, this is kind of quick and dirty:
curl -s "http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=false" | grep -B 1 "route" | awk -F'"' '/short_name/ {print $4}'
Bedford Avenue
It looks for the line before the line with "route" in it, then the word "short_name" and then prints the 4th field as detected by using " as the field separator. Really you should use a JSON parser though!
Notes:
This doesn't require you to install anything.
I look for the word "route" in the JSON because you seem to want the road name - you could equally look for anything else you choose.
This isn't a very robust solution as Google may not always give you a route, but I guess other programs/solutions won't work then either!
You can play with my solution by successively removing parts from the right hand end of the pipeline to see what each phase produces.
EDITED
Mmm, you have changed from JSON to XML, I see... well, this parses out what you want, but I note you are now looking for a locality whereas before you were looking for a route or road name? Which do you want?
curl -s "http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false" | grep -B1 locality | grep short_name| head -1|sed -e 's/<\/.*//' -e 's/.*>//'
The "grep -B1" looks for the line before the line containing "locality". The "grep short_name" then gets the locality's short name. The "head -1" discards all but the first locality if there are more than one. The "sed" stuff removes the <> XML delimiters.
This isn't text, it's structured JSON. You don't want the value after the colon on line 12, you want the value of short name in the address_component with type 'route' from the result.
You could do this with jsawk or python, but it's easier to get it from XML output with xmlstarlet, which is lighter than python and more available than jsawk. Install xmlstarlet and try:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=40.714224,-73.961452&sensor=false' \
| xmlstarlet sel -t -v '/GeocodeResponse/result/address_component[type="route"]/short_name'
This is much more robust than trying to parse JSON as plaintext.
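If it's the locality short name from your XML URL that you want (Northwood in your example), the same approach should carry over; an untested sketch along the same lines, restricted to the first result:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false' \
| xmlstarlet sel -t -v '/GeocodeResponse/result[1]/address_component[type="locality"]/short_name'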
The following seems to work, assuming the short_name you want is always on line 12:
curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=false' | sed -n -e '12s/^.*: "\([a-zA-Z ]*\)",/\1/p'
or, if you are using the XML API and want to grab the short_name on line 20:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false' | sed -n -e '19s/<short_name>\([a-zA-Z ]*\)<\/short_name>/\1/p'
Given this curl command:
curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate"
* Spelling is intentionally incorrect. I want to grab the suggestion as my result.
I want to be able to either grep the page.html file, perhaps with grep -oE, or pipe it straight from curl and never store a file.
The result should be: 'instantiate'
I need only the word 'instantiate', or the phrase; whatever Google is auto-correcting to is what I am after.
Here is the basic html that is returned:
<span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&ie=UTF-8&&sa=X&ei=VEMUTMDqGoOINraK3NwL&ved=0CB0QBSgA&q=instantiate&spell=1"class=spell><b><i>instantiate</i></b></a> <span class=std>Top 2 results shown</span>
So perhaps match from/to the string below, which I hope is unique enough to cover all my bases:
class=spell><b><i>instantiate</i></b></a>
I keep running into issues with greedy grep; perhaps I should run it through an HTML prettify tool first to get a line break or 50 in there. I don't know of any simple way to do so in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up Perl and making sure I have the correct module.
Any suggestions? Thank you.
As I'm sure you're aware, screen scraping is a delicate business. This command sequence is no exception since it relies on the specific structure of the page which could change at any time without notice.
grep -o 'Did you mean:\([^>]*>\)\{5\}' page.html | sed 's/.*<i>\([^<]*\)<.*/\1/'
In a pipe:
curl --user-agent "fogent" --silent "http://www.google.com/search?q=insansiate" | grep -o 'Did you mean:\([^>]*>\)\{5\}' | sed 's/.*<i>\([^<]*\)<.*/\1/'
This relies on finding five ">" characters between "Did you mean:" and the "</i>" after the word you're looking for.
Have you considered other methods of getting spelling suggestions or are you specifically interested in what Google provides?
If you have ispell or aspell installed, you can do:
echo insansiate | ispell -a
and parse the result.
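The parsing step could look something like this (a sketch assuming the standard ispell -a output, where lines for misspelled words start with '&' and the comma-separated suggestions follow the colon):
echo insansiate | ispell -a | awk -F': ' '/^&/ {split($2, s, ", "); print s[1]; exit}'
which should print the first suggestion.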
xidel is a great utility for scraping web pages; it supports retrieving pages and extracting information in various query languages (CSS selectors, XPath).
In the case at hand, the simple CSS selector a.spell will do the trick.
xidel --user-agent "fogent" "http://google.com/search?q=insansiate" -e 'a.spell'
Note how xidel does its own page retrieval, so no need for curl in this case.
If, however, you needed curl for more exotic retrieval options, here's how you'd combine the two tools (line break for readability):
curl --user-agent "fogent" --silent "http://google.com/search?q=insansiate" |
xidel - -e 'a.spell'
curl --> tidy -asxml --> xmlstarlet sel
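Spelled out, that pipeline could look roughly like this (an untested sketch; tidy turns the page into XHTML, so the XHTML namespace has to be declared for xmlstarlet, and the a.spell class is taken from the HTML snippet in the question):
curl --user-agent "fogent" --silent "http://www.google.com/search?q=insansiate" \
| tidy -q -asxml 2>/dev/null \
| xmlstarlet sel -N x="http://www.w3.org/1999/xhtml" -t -v '//x:a[@class="spell"]'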
Edit: Sorry, did not see your Perl notice.
#!/usr/bin/perl
use strict;
use LWP::UserAgent;
my $arg = shift // 'insansiate';
my $lwp = LWP::UserAgent->new(agent => 'Mozilla');
my $c = $lwp->get("http://www.google.com/search?q=$arg") or die $!;
my @content = split(/:/, $c->content);
for (@content) {
    if (m;<b><i>(.+)</i></b>;) {
        print "$1\n";
        exit;
    }
}
Running:
> perl google.pl
instantiate
> perl google.pl disconect
disconnect