Google Maps API accessible with Java, Python or Ruby?

Does anyone know a way to call the Google Maps APIs from Ruby, for example?

With a key, you can access the APIs through simple HTTPS requests, which you can send using open-uri and parse using json.
require 'open-uri'
require 'ostruct'
require 'json'

def journey_between(start, destination)
  key = "[Visit https://developers.google.com/maps/web/ to get a free key]"
  url = "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=#{start}&destinations=#{destination}&key=#{key}"
  json_response = open(url).read
  journey_data = JSON.parse(json_response, object_class: OpenStruct).rows[0].elements[0]
  return journey_data
end
journey = journey_between "London", "Glasgow"
puts journey.distance.text
#=> "412 mi"
puts journey.duration.text
#=> "6 hours 46 mins"
Unfortunately, you can't try this example without an API key. You can get one at https://developers.google.com/maps/web/ for free by registering a project under your Google account.

Related

Creating a Ruby API

I have been tasked with creating a Ruby API that retrieves YouTube URLs. However, I am not sure of the proper way to create an 'API'... I did the following code below as a Sinatra server that serves up JSON, but what exactly would be the definition of an API, and would this qualify as one? If this is not an API, how can I make it an API? Thanks in advance.
require 'open-uri'
require 'json'
require 'sinatra'

# get user input
puts "Please enter a search (separate words by commas):"
search_input = gets.chomp
puts
puts "Performing search on YOUTUBE ... go to '/videos' API endpoint to see the results and use the output"
puts

# define query parameters
api_key = 'my_key_here'
search_url = 'https://www.googleapis.com/youtube/v3/search'
params = {
  part: 'snippet',
  q: search_input,
  type: 'video',
  videoCaption: 'closedCaption',
  key: api_key
}

# use search_url and query parameters to construct a url, then open and parse the result
uri = URI.parse(search_url)
uri.query = URI.encode_www_form(params)
result = JSON.parse(open(uri).read)

# class to define attributes of each video and format into eventual json
class Video
  attr_accessor :title, :description, :url

  def initialize
    @title = nil
    @description = nil
    @url = nil
  end

  def to_hash
    {
      'title' => @title,
      'description' => @description,
      'url' => @url
    }
  end

  def to_json
    to_hash.to_json
  end
end

# create an array with top 3 search results
results_array = []
result["items"].take(3).each do |video|
  @video = Video.new
  @video.title = video["snippet"]["title"]
  @video.description = video["snippet"]["description"]
  @video.url = video["snippet"]["thumbnails"]["default"]["url"]
  results_array << @video.to_json.gsub!(/\"/, '\'')
end

# define the API endpoint
get '/videos' do
  results_array.to_json
end
An "API = Application Program Interface" is, simply, something that another program can reliably use to get a job done, without having to busy its little head about exactly how the job is done.
Perhaps the simplest thing to do now, if possible, is to go back to the person who "tasked" you with this task and ask, "well, what do you have in mind?" The best API that you can design, in this case, will be the one that is most convenient for the people (writing the programs which ...) who will actually have to use it. "Don't guess. Ask!"
A very common strategy for an API, in a language like Ruby, is to define a class which represents "this application's connection to this service." Anyone who wants to use the API does so by calling some function which will return a new instance of this class. Thereafter, the program uses this object to issue and handle requests.
The requests, also, are objects. To issue a request, you first ask the API-connection object to give you a new request-object. You then fill out the request with whatever particulars it needs, then tell the request object to "go!" At some point in the future, and by some appropriate means (such as a callback ...), the request-object informs you that it succeeded or that it failed.
"A whole lot of voodoo-magic might have taken place," between the request object and the connection object which spawned it, but the client does not have to care. And that, most of all, is the objective of any API. "It Just Works.™"
I think they want you to create a third-party library. Imagine you are two people for a while.
Joe wants to build a Sinatra application to list some YouTube videos, but he is lazy and he does not want to do the dirty work, he just wants to drop something in, give it some credentials, ask for urls and use them, finito.
Joe asks Bob to implement it for him and gives him his requirements: "Bob, I need a YouTube library. I need it to do this:"
# Please note that I don't know how YouTube API works, just guessing.
client = YouTube.new(api_key: 'hola')
video_urls = client.videos # => ['https://...', 'https://...', ...]
And Bob says "OK" and spends a day in his interactive console.
So first, you should figure out how you are going to use your not-yet-existing lib, if you can – sometimes you just don't know yet.
Next, build that library based on the requirements, then drop it in your Sinatra app and you're done. Does that help?
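To make it concrete, Bob's day in the console might produce something like the sketch below. The endpoint and parameters mirror the Sinatra code in the question, and the interface matches Joe's wish list above; the class internals are invented for illustration:

require 'net/http'
require 'json'
require 'uri'

# A sketch of Joe's wished-for library.
class YouTube
  SEARCH_URL = 'https://www.googleapis.com/youtube/v3/search'

  def initialize(api_key:)
    @api_key = api_key
  end

  # Returns an array of video thumbnail URLs for a search term,
  # using the same parameters as the Sinatra example above.
  def videos(query = 'ruby')
    uri = URI.parse(SEARCH_URL)
    uri.query = URI.encode_www_form(
      part: 'snippet', q: query, type: 'video', key: @api_key
    )
    response = Net::HTTP.get_response(uri)
    items = JSON.parse(response.body).fetch('items', [])
    items.map { |item| item['snippet']['thumbnails']['default']['url'] }
  end
end

client = YouTube.new(api_key: 'hola')
video_urls = client.videos # => ['https://...', 'https://...', ...]

Joe never sees Net::HTTP or JSON; he just drops the file in, and that boundary is the library.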

Working with Ruby and APIs

I am pretty new to Ruby, especially to working with APIs. I've been trying to get the Darksky API to work, but I'm afraid I'm missing something obvious in how I'm using it.
Here is what I have:
require 'darksky'
darksky = Darksky::API.new('my api key')
forecast = darksky.forecast('34.0500', '118.2500')
forecast
When I run this from the command line nothing happens. What am I doing wrong here?
Simply using forecast isn't going to do anything. You need to use puts at a minimum:
puts forecast
Or, see if Ruby's object pretty-printer can return something more interesting:
require 'pp'
pp forecast
Digging in further, I think their gem doesn't work. Based on their examples, using a valid key with their sample locations, plus the locations from their source site Forecast.io, it also returns nil.
Using the REST interface directly from Forecast.io's site does return JSON. JSON is very easy to work with in Ruby, so it's a good way to go.
Here's some code to test the API, and Forecast.io's REST interface:
API_KEY = 'xxxxxxxxxxxxxxxxxxx'
LOCATION = %w[37.8267 -122.423]
require 'darksky'
darksky = Darksky::API.new(API_KEY)
forecast = darksky.forecast(*LOCATION)
forecast # => nil
brief_forecast = darksky.brief_forecast(*LOCATION)
brief_forecast # => nil
require 'json'
require 'httparty'
URL = "https://api.forecast.io/forecast/#{ API_KEY }/37.8267,-122.423"
puts URL
# >> https://api.forecast.io/forecast/xxxxxxxxxxxxxxxxxxx/37.8267,-122.423
puts HTTParty.get(URL).body[0, 80]
# >> {"latitude":37.8267,"longitude":-122.423,"timezone":"America/Los_Angeles","offse
Notice that LOCATION is 37.8267,-122.423 in both cases, which is Alcatraz according to the Forecast.io site. Also notice that the body output displayed is a JSON string.
Pass the returned JSON to Ruby's JSON class like:
JSON[returned_json]
to get it parsed back into a Ruby Hash. Using OpenURI (because it comes with Ruby) instead of HTTParty, and passing it to JSON for parsing looks like:
require 'open-uri'
require 'json'

body = open(URL).read
puts JSON[body]

How do I search Twitter for a word with Ruby?

I have written code in Ruby that will display the timeline for a specific user. I would now like to search all of Twitter and find every user that has mentioned a given word. My code is currently:
require 'rubygems'
require 'oauth'
require 'json'
require 'net/http'
require 'uri'
require 'date'

# Now you will fetch /1.1/statuses/user_timeline.json,
# which returns a list of public Tweets from the specified
# account.
baseurl = "https://api.twitter.com"
path = "/1.1/statuses/user_timeline.json"
query = URI.encode_www_form(
  "q" => "Obama"
)
address = URI("#{baseurl}#{path}?#{query}")
request = Net::HTTP::Get.new address.request_uri

# Print data about a list of Tweets
def print_timeline(tweets)
  tweets.each do |tweet|
    d = DateTime.parse(tweet['created_at'])
    puts " #{tweet['text'].delete ","} , #{d.strftime('%d.%m.%y')} , #{tweet['user']['name']}, #{tweet['id']}"
  end
end

# Set up HTTP.
http = Net::HTTP.new address.host, address.port
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

# If you entered your credentials in the first
# exercise, no need to enter them again here. The
# ||= operator will only assign these values if
# they are not already set.
consumer_key = OAuth::Consumer.new(
  "consumer key", "consumer secret")
access_token = OAuth::Token.new(
  "access token", "access token secret")

# Issue the request.
request.oauth! http, consumer_key, access_token
http.start
response = http.request request

# Parse and print the Tweet if the response code was 200
tweets = nil
puts "Text,Date,Name,id"
if response.code == '200' then
  tweets = JSON.parse(response.body)
  print_timeline(tweets)
end
nil
How would I possibly change this code to search all of twitter for a specific word?
The easiest approach would be to use the twitter gem. Refer to its documentation for more information on searching and the result types it returns. Once you have all the correct authorization attributes in place (OAuth token, OAuth secret, etc.) you should be able to search as:
Twitter.search('Obama')
or
Twitter.search('Obama', options = {})
Let us know, if that worked for you or not.
The Twitter API documentation says the URI you should be using for global search is https://api.twitter.com/1.1/search/tweets.json, and this means:
Your base_url component would be https://api.twitter.com
Your path component would be /1.1/search/tweets.json
Your query component would be the text you are searching for.
The query part takes a lot of values depending upon the API spec. Refer to the specification and you can change it as per your requirement.
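Concretely, only the path (and, optionally, extra query parameters) change in your script; everything from the OAuth signing onwards stays the same. A sketch of the modified section (count is an optional parameter from the search spec, shown purely as an example):

baseurl = "https://api.twitter.com"
path = "/1.1/search/tweets.json" # global search instead of user_timeline
query = URI.encode_www_form(
  "q" => "Obama",    # the word to search for
  "count" => 100     # optional; see the API spec for other parameters
)
address = URI("#{baseurl}#{path}?#{query}")
request = Net::HTTP::Get.new address.request_uri

# Note: unlike user_timeline, search/tweets.json wraps the results in a
# "statuses" key, so the parsing step becomes:
#   tweets = JSON.parse(response.body)
#   print_timeline(tweets['statuses'])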
Tip: Try to use an irb (I'd recommend pry) REPL, which makes it a lot easier to explore APIs. Also, check out the Faraday gem, which can be easier to use than Ruby's default HTTP library, IMO.

Is there a module like Perl's LWP for Ruby?

In Perl there is an LWP module:
The libwww-perl collection is a set of Perl modules which provides a simple and consistent application programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients. The library also contains modules that are of more general use and even classes that help you implement simple HTTP servers.
Is there a similar module (gem) for Ruby?
Update
Here is an example of a function I have written that extracts URLs from a specific website.
use LWP::UserAgent;
use HTML::TreeBuilder 3;
use HTML::TokeParser;

sub get_gallery_urls {
    my $url = shift;

    my $ua = LWP::UserAgent->new;
    $ua->agent("$0/0.1 " . $ua->agent);
    $ua->agent("Mozilla/8.0");

    my $req = new HTTP::Request 'GET' => "$url";
    $req->header('Accept' => 'text/html');

    # send request
    $response_u = $ua->request($req);
    die "Error: ", $response_u->status_line unless $response_u->is_success;

    my $root = HTML::TreeBuilder->new;
    $root->parse($response_u->content);

    my @gu = $root->find_by_attribute("id", "thumbnails");
    my %urls = ();

    foreach my $g (@gu) {
        my @as = $g->find_by_tag_name('a');
        foreach $a (@as) {
            my $u = $a->attr("href");
            if ($u =~ /^\//) {
                $urls{"http://example.com" . "$u"} = 1;
            }
        }
    }

    return %urls;
}
The closest match is probably httpclient, which aims to be the equivalent of LWP. However, depending on what you plan to do, there may be better options. If you plan to follow links, fill out forms, etc. in order to scrape web content, you can use Mechanize, which is similar to the Perl module by the same name. There are also more Ruby-specific gems, such as the excellent rest-client and HTTParty (my personal favorite). See the HTTP Clients category of Ruby Toolbox for a larger list.
Update: Here's an example of how to find all links on a page in Mechanize (Ruby, but it would be similar in Perl):
require 'rubygems'
require 'mechanize'

agent = Mechanize.new
page = agent.get('http://example.com/')
page.links.each do |link|
  puts link.text
end
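For comparison, fetching a raw page with HTTParty (mentioned above) is a one-liner; it only fetches, so you would still hand the body to Mechanize or a parser to extract the links:

require 'httparty'

response = HTTParty.get('http://example.com/')
puts response.code        # => 200
puts response.body[0, 60] # first 60 bytes of the HTML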
P.S. As an ex-Perler myself, I used to worry about abandoning the excellent CPAN--would I paint myself into a corner with Ruby? Would I not be able to find an equivalent to a module I rely on? This has turned out not to be a problem at all, and in fact lately has been quite the opposite: Ruby (along with Python) tends to be the first to get client support for new platforms/web services, etc.
Here's what your function might look like in Ruby.
require 'rubygems'
require 'mechanize'

def get_gallery_urls(url)
  ua = Mechanize.new
  ua.user_agent = "Mozilla/8.0"
  urls = {}

  doc = ua.get(url)
  doc.search("#thumbnails a").each do |a|
    u = a["href"]
    urls["http://example.com#{u}"] = 1 if u =~ /^\//
  end

  urls
end
Much nicer :)
I used Perl for years and years, and liked LWP. It was a great tool. However, here's how I'd go about extracting URLs from a page. This isn't spidering a site, but that'd be an easy thing:
require 'open-uri'
require 'uri'
urls = URI.extract(open('http://example.com').read)
puts urls
With the resulting output looking like:
http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd
http://www.w3.org/1999/xhtml
http://www.icann.org/
mailto:iana@iana.org?subject=General%20website%20feedback
Writing that as a method:
require 'open-uri'
require 'uri'

def get_gallery_urls(url)
  URI.extract(open(url).read)
end
or, closer to the original function while doing it the Ruby-way:
def get_gallery_urls(url)
  URI.extract(open(url).read).map { |u|
    URI.parse(u).host ? u : URI.join(url, u).to_s
  }
end
or, following closer to the original code:
require 'nokogiri'
require 'open-uri'
require 'uri'

def get_gallery_urls(url)
  Nokogiri::HTML(open(url))
    .at('#thumbnails')
    .search('a')
    .map { |link|
      href = link['href']
      URI.parse(href).host ? href : URI.join(url, href).to_s
    }
end
One of the things that attracted me to Ruby is its ability to be readable, while still being concise.
If you want to roll your own TCP/IP-based functions, Ruby's standard Net library is the starting point (a minimal net/http example follows the list). By default you get:
net/ftp
net/http
net/imap
net/pop
net/smtp
net/telnet
with the SSL-based ssh, scp, sftp and others available as gems. Use gem search net -r | grep ^net- to see a short list.
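For instance, a bare-bones GET with net/http looks like this:

require 'net/http'
require 'uri'

uri = URI.parse('http://example.com/')
response = Net::HTTP.get_response(uri)
puts response.code        # => "200"
puts response.body[0, 60] # first 60 bytes of the HTML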
This is more of an answer for anyone looking at this question and wondering what easier/better/different alternatives there are for general web scraping in Perl, compared to using LWP (and even WWW::Mechanize).
Here is a quick selection of web scraping modules on CPAN:
Mojo::UserAgent
pQuery
Scrappy
Web::Magic
Web::Scraper
Web::Query
NB. The above is just in alphabetical order, so please choose your favourite poison :)
For most of my recent web scraping I've been using pQuery. You can see there are quite a few examples of usage on SO.
Below is your get_gallery_urls example using pQuery:
use strict;
use warnings;
use pQuery;

sub get_gallery_urls {
    my $url = shift;
    my %urls;

    pQuery($url)
        ->find("#thumbnails a")
        ->each( sub {
            my $u = $_->getAttribute('href');
            $urls{'http://example.com' . $u} = 1 if $u =~ /^\//;
        });

    return %urls;
}
PS. As Daxim has said in the comments there are plenty of excellent Perl tools for web scraping. The hardest part is just making a choice of which one to use!

How to visit a URL with Ruby via http and read the output?

So far I have been able to stitch this together :)
begin
  open("http://www.somemain.com/" + path + "/" + blah)
rescue OpenURI::HTTPError
  @failure += painting.permalink
else
  @success += painting.permalink
end
But how do I read the output of the service that I would be calling?
OpenURI extends Kernel#open, so you'll get a type of IO stream returned:
open('http://www.example.com') #=> #<StringIO:0x00000100977420>
You have to read that to get content:
open('http://www.example.com').read[0 .. 10] #=> "<!DOCTYPE h"
A lot of times a method will let you pass different types as a parameter. They check to see what it is and either use the contents directly, in the case of a string, or read the handle if it's a stream.
For HTML and XML, such as RSS feeds, we'll typically pass the handle to a parser and let it grab the content, parse it, and return an object suitable for searching further:
require 'nokogiri'
doc = Nokogiri::HTML(open('http://www.example.com'))
doc.class #=> Nokogiri::HTML::Document
doc.to_html[0 .. 10] #=> "<!DOCTYPE h"
doc.at('h1').text #=> "Example Domain"
doc = open("http://etc..")
content = doc.read
More often, people want to be able to parse the returned document; for this, use something like Hpricot or Nokogiri.
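For instance, a minimal Nokogiri version of the same fetch and parse:

require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open("http://www.example.com/"))
puts doc.at('title').text # => "Example Domain"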
I'm not sure if you want to do this yourself for the hell of it, but if you don't, Mechanize is a really nice gem for the job.
It will visit the page you want and automatically wrap the page with Nokogiri, so that you can access its elements with CSS selectors such as "div#header h1". Ryan Bates has a video tutorial on it which will teach you everything you need to know to use it.
Basically you can just
require 'rubygems'
require 'mechanize'
agent = Mechanize.new
agent.get("http://www.google.com")
agent.page.at("some css selector").text
It's that simple.
