How to create a REST API for a Ruby application?

I would like to know how to provide a Ruby application with a REST API. I could code something based on Ruby's TCPServer API, but that seems a bit low-level. Do you think it would be a good solution? Or do you recommend a better approach?

You can use Sinatra to write tiny, focused web applications and lightweight REST services very quickly.
In the documentation section they highlight a couple of videos on that matter:
Adam Wiggins and Blake Mizerany present Sinatra and RestClient at RubyConf 2008. The talk details Sinatra’s underlying philosophy and reflects on using Sinatra to build real world applications.
Adam Keys and The Pragmatic Programmers have started a series of screencasts on Sinatra. The first two episodes cover creating a tiny web app and creating a REST service. $5 a pop.
You could also use Rails, but that's a little overkill...
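For a sense of how little code that takes, here is a minimal sketch of a JSON endpoint in Sinatra (the /status route and its payload are made up for illustration):

require 'sinatra'
require 'json'

get '/status' do
  content_type :json
  { ok: true, time: Time.now }.to_json
end

Run it with ruby app.rb and it serves http://localhost:4567/status by default.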

There are several layers involved when designing a RESTful API, and at each layer there are several valid approaches.
TCPServer is indeed very low level, since you would have to implement the HTTP protocol yourself, which is not recommended.
One step up would be Rack, which takes care of all the low-level HTTP details. This is what all Ruby web frameworks like Rails, Sinatra or Ramaze use under the hood. It also ensures that your application works on various application servers, like Passenger, Thin or Unicorn.
But even Rack is still fairly low level: it gives you HTTP, while higher-level frameworks take the boilerplate out of typical web programming. For an API you could look at a minimal framework like Sinatra, or a framework specifically designed for APIs like Grape or Rails::API. These already assume a RESTful style of API, so you should find them to be a natural fit.
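As an illustration, a small Grape API might look roughly like this (the widgets resource and its fields are hypothetical):

require 'grape'

class WidgetsAPI < Grape::API
  format :json

  resource :widgets do
    # GET /widgets
    get do
      [{ id: 1, name: 'sprocket' }]
    end

    # GET /widgets/:id
    get ':id' do
      { id: params[:id].to_i, name: 'sprocket' }
    end
  end
end

Since a Grape API is a Rack app, it can be served from a config.ru with run WidgetsAPI.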
Typical RESTful APIs are characterized by having resources identified by guessable (convention-driven) URLs, with operations on those resources expressed through HTTP methods (verbs) like GET, POST, PUT, DELETE and PATCH. To truly embrace the spirit of REST as described by Roy Fielding, however, you could move towards a full "Hypermedia" API. The most visible difference is that responses are more self-contained. They are of well-defined media types (defined by yourself or by existing specs) containing links to related resources, rather than merely numerical ids. Similarly, responses contain templates/forms describing the operations that can be performed. (There is more to it, but on the surface level that's what you will notice.)
This makes the API more discoverable, both by humans and machines, and it allows for a greater freedom in evolving the API. There could be a performance drawback, since a client typically would need to do more requests to achieve the same thing, but this can be prevented by well thought out design and caching. Garner is specifically made to provide easy server-side caching.
You could define your own media types that suit your application, commonly on top of JSON or XML, or you could look at existing specifications, notably Collection+JSON, HAL and JSON-API. It seems at the moment HAL has the biggest traction, with several libraries available on a variety of platforms.
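To make that concrete, a HAL-style response for a hypothetical order resource might look roughly like this, sketched here as the Ruby hash that would be serialized to JSON:

require 'json'

order = {
  "_links" => {
    "self"     => { "href" => "/orders/523" },
    "customer" => { "href" => "/customers/42" }
  },
  "total"  => 24.90,
  "status" => "shipped"
}

puts JSON.pretty_generate(order)

Instead of a bare customer id, the client follows the customer link, which is what makes the response self-describing.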
There is seemingly not a whole lot happening around JSON-API, but two significant projects, ActiveModel::Serializers and Ember-data, are both adopting (and at the same time developing) this format, which means it could become a popular choice in the Ruby/Rails world.

I'm using Sinatra too to develop simple REST solutions.
The thing is, Sinatra is flexible in many ways. You can structure your project however you like. Usually we have lib/, tmp/ and public/ directories plus config.ru and app.rb files, but as I said, you can set it up however you want.
Keep in mind that Sinatra is not a typical MVC framework, simply because it lacks the M (model). To use Sinatra for simple CRUD web applications you only need to load a gem:
require 'datamapper'
or another of your choice, such as Sequel or ActiveRecord (backed by SQLite or any other database),
and voilà, you have an ORM under your Sinatra app.
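For example, a tiny DataMapper model might look like this (the Note model and its properties are made up; assumes the data_mapper and dm-sqlite-adapter gems):

require 'data_mapper'

DataMapper.setup(:default, 'sqlite::memory:')

class Note
  include DataMapper::Resource
  property :id,   Serial
  property :body, Text
end

DataMapper.finalize
DataMapper.auto_upgrade!

Note.create(body: 'hello from Sinatra')
puts Note.count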
Under Sinatra you define routes that respond to the four main HTTP verbs: GET, PUT, POST and DELETE.
require 'rubygems'
require 'sinatra'

get '/' do
  erb :home
end

get '/API/*' do
  api = params[:splat]
  # command_test = api[0]
  # command_helo = api[1]
  # ...
  def do_things(with_it)
    # ...
  end
  # ...
end

__END__

@@home
helo
Well, you get the idea :)
Finally: learning Sinatra is not a waste of time, because of its simplicity and because it gives (me) a foundation in what web programming is.
I think in the near future it will be possible to "inject" Sinatra apps (Rack apps) into a Rails 3 project.
Take a look at GitHub; there you will find many projects built with Sinatra.
For further reading checkout Sinatra::Base.

For simple REST APIs I would also consider working directly against the Rack library (i.e. you may not need a framework like Sinatra). Routing for example can be quite easy for simple cases. I've put together a little example here: https://gist.github.com/4685445
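As an illustration, a bare Rack app with hand-rolled routing could look something like this (the paths and payloads are made up):

require 'rack'
require 'json'

app = lambda do |env|
  headers = { 'Content-Type' => 'application/json' }

  # Route on the HTTP method and path provided by Rack
  case [env['REQUEST_METHOD'], env['PATH_INFO']]
  when ['GET', '/ping']
    [200, headers, [JSON.generate({ ok: true })]]
  when ['GET', '/widgets']
    [200, headers, [JSON.generate([{ id: 1 }, { id: 2 }])]]
  else
    [404, headers, [JSON.generate({ error: 'not found' })]]
  end
end

Save it as config.ru with run app at the end and start it with rackup.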

Related

User login/session system description

I'm building my first web app, and I've got all these crazy ideas on ways I could handle things like logins/sessions, but I was wondering if anyone has written a really good, thorough description of how logins/sessions work. I've seen tutorials, but I want to know if there's something more abstract that gives the reader a more general idea of how the whole process is handled. My web app is in Ruby/Sinatra if that's relevant.
Most of the in-depth tutorials for login/authentication from scratch are for Rails unfortunately. I went through the same trouble trying to find Sinatra specific tutorials. I would recommend just checking out the rails oriented tutorials since the knowledge is pretty general and can be applied to Sinatra as well.
These guides from RailsGuides are pretty good for getting an understanding of authentication even though they are Rails specific (read the section on security especially):
http://guides.rubyonrails.org/
Here's an example of a good Sinatra authentication scheme on github (it uses the datamapper gem, but you can easily replace this with any other Ruby ORM):
https://github.com/daddz/sinatra-dm-login
If you aren't as interested in rolling your own, you can also try the sinatra-authentication gem:
https://github.com/maxjustus/sinatra-authentication

What is the difference between Net::HTTP and third-party libraries for making API calls in Rails/Ruby?

I came across this:
https://github.com/archiloque/rest-client ...and it seems fairly simple and straightforward. But working with third-party APIs is new to me, so I'm not sure what's important in a library and, most of all, which is easiest to use.
Does rest-client offer any advantage over the standard Net::HTTP?
I also found https://github.com/jnunemaker/httparty, though it doesn't seem to be as well documented as rest-client or, even this one: https://github.com/dbalatero/typhoeus. Are they better than the included standard?
Any thoughts, suggestions?
Net::HTTP is meant to be a low level library for accessing networked resources. The third-party APIs make up for some of the difficulties that you'd otherwise have to handle yourself. To name a few:
Handling redirect codes
Implementing multipart file uploads
Storing cookies between requests
HTTP exception handling
Parsing responses (HTML, JSON, etc.)
Managing authentication/SSL on secure sites
In general, the authors of those libraries have taken extra care to make their API easy to use compared to Net::HTTP.
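As a rough comparison, here is the same GET request done both ways (the URL is just a placeholder):

require 'net/http'
require 'uri'

uri = URI('https://example.com/api/items')
response = Net::HTTP.get_response(uri)   # does not follow redirects or parse the body for you
puts response.code
puts response.body

# The same request with rest-client (gem install rest-client):
require 'rest-client'
body = RestClient.get('https://example.com/api/items')  # follows GET redirects, raises on 4xx/5xx
puts body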
Also, I've found Mechanize to be a more complete solution for my needs than rest-client. For example, with rest-client you will still have to implement storing cookies between requests and handling redirects on POST requests.
You may find this short article from Adam Wiggins, the original author of RestClient, useful:
http://adam.heroku.com/past/2008/8/8/ruby_libs_for_making_web/
I'm personally using HTTParty in my project. It was the previous developer's choice, but it works pretty well for me.

What are some good Ruby-based web crawlers? [closed]

I am looking at writing my own, but I am wondering if there are any good web crawlers out there which are written in Ruby.
Short of a full-blown web crawler, any gems that might be helpful in building a web crawler would be useful. I know this part of the question is touched upon in a couple of places, but a list of gems applicable to building a web crawler would be a great resource as well.
I used to write spiders, page scrapers and site analyzers for my job, and still write them periodically to scratch some itch I get.
Ruby has some excellent gems to make it easy:
Nokogiri is my #1 choice for the HTML parser. I used to use Hpricot, but found some sites that made it explode in flames. I switched to Nokogiri afterwards and have been very happy with it. I regularly use it for parsing HTML, RDF/RSS/Atom and XML. Ox looks interesting too, so that might be another candidate, though I find searching the DOM a lot easier than trying to walk through a big hash, such as what is returned by Ox.
OpenURI is good as a simple HTTP client, but it can get in the way when you want to do more complex things or need to have multiple requests firing at once. I'd recommend looking at HTTPClient or Typhoeus with Hydra for modest to heavyweight jobs. Curb is good too, because it uses the cURL library, but the interface isn't as intuitive to me; it's worth looking at, though I lean toward the previously mentioned ones.
Note: OpenURI has some flaws and vulnerabilities that can affect unsuspecting programmers so it's fallen out of favor somewhat. RestClient is a very worthy successor.
You'll need a backing database, and some way to talk to it. This isn't a task for Rails per se, but you could use ActiveRecord, detached from Rails, to talk to the database. I've done that a couple times and it works all right. Instead, I really like Sequel for my ORM. It's very flexible in how it lets you talk to the database, from using straight SQL to using Sequel's ability to programmatically build a query, to modeling the database and using migrations. Once you have the database built, you could use Rails to act as a front-end to the data though.
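A small Sequel sketch, detached from Rails, might look like this (the pages table is made up; assumes the sequel and sqlite3 gems):

require 'sequel'

DB = Sequel.sqlite('crawler.db')   # or Sequel.connect('postgres://...')

# Create the table if it doesn't already exist
DB.create_table? :pages do
  primary_key :id
  String   :url, unique: true
  Integer  :status
  DateTime :fetched_at
end

pages = DB[:pages]
pages.insert(url: 'https://example.com/', status: 200, fetched_at: Time.now)
puts pages.where(status: 200).count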
If you are going to navigate sites in any way beyond simply grabbing pages and following links, you'll want to look at Mechanize. It makes it easy to fill out forms and submit pages. As an added bonus, you can grab the content of a page as a Nokogiri HTML document and parse away using Nokogiri's multitude of tricks.
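A short Mechanize sketch, with a made-up login form and field names:

require 'mechanize'

agent = Mechanize.new
page  = agent.get('https://example.com/login')

form = page.form_with(action: '/login')
form['username'] = 'me'
form['password'] = 'secret'
dashboard = agent.submit(form)

# Every page is also a Nokogiri document underneath:
puts dashboard.search('a').map { |a| a['href'] }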
For massaging/mangling URLs I really like Addressable::URI. It's more full-featured than the built-in URI module. One thing that URI does that's nice is it has the URI#extract method to scan a string for URLs. If that string happened to be the body of a web page it would be an alternate way of locating links, but its downside is you'll also get links to images, videos, ads, etc., and you'll have to filter those out, probably resulting in more work than if you use a parser and look for <a> tags exclusively. For that matter, Mechanize also has the links method which returns all the links in a page, but you'll still have to filter them to determine whether you want to follow or ignore them.
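To illustrate the difference, here is a quick sketch of both approaches on a made-up snippet of HTML:

require 'uri'
require 'nokogiri'

html = '<p>See <a href="https://example.com/docs">the docs</a> or https://example.com/blog</p>'

# URI.extract scans raw text and returns every URL it finds, link or not:
p URI.extract(html, %w[http https])

# Nokogiri lets you target <a> tags specifically:
doc = Nokogiri::HTML(html)
p doc.css('a').map { |a| a['href'] }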
If you think you'll need to deal with Javascript manipulated pages, or pages that get their content dynamically from AJAX, you should look into using one of the WATIR variants. There are flavors for the different browsers on different OSes, such as Firewatir, Safariwatir and Operawatir, so you'll have to figure out what works for you.
You do NOT want to rely on keeping your list of URLs to visit, or visited URLs, in memory. Design a database schema and store that information there. Spend some time up front designing the schema, thinking about what things you'll want to know as you collect links on a site. SQLite3, MySQL and Postgres are all excellent choices, depending on how big you think your database needs will be. One of my site analyzers was custom designed to help us recommend SEO changes for a Fortune 50 company. It ran for over three weeks covering about twenty different sites before we had enough data and stopped it. Imagine what would have happened if we had a power-outage and all that data went in the bit-bucket.
After all that you'll want to also make your code be aware of proper spidering etiquette: What are the key considerations when creating a web crawler?
I am building wombat, a Ruby DSL to crawl web pages and extract content. Check it out on github https://github.com/felipecsl/wombat
It is still at an early stage but already covers the basics. More features will be added really soon.
So you want a good Ruby-based web crawler?
Try spider or anemone. Both have solid usage according to RubyGems download counts.
The other answers, so far, are detailed and helpful but they don't have a laser-like focus on the question, which asks for ruby libraries for web crawlers. It would seem that this distinction can get muddled: see my answer to "Crawling vs. Web-Scraping?"
Tin Man's comprehensive list is good but partly outdated for me.
Most websites my customers deal with are heavily AJAX/Javascript dependent.
I've been using Watir / watir-webdriver / selenium for a few years too, but the overhead of having to load up a hidden web browser on the backend just to render the DOM isn't viable. On top of that, after all this time they still haven't implemented a usable "browser session reuse" that would let new code execution reuse an old browser already in memory for this purpose, shooting down tickets that might eventually have worked their way up the API layers. (referring to https://code.google.com/p/selenium/issues/detail?id=18 ) **
https://rubygems.org/gems/phantomjs is what we're migrating new projects over to now, so the necessary data gets rendered without any kind of invisible, Xvfb-backed, memory- and CPU-heavy web browser.
** Alternative approaches also failed to pan out:
how to serialize an object using TCPServer inside?
Can a watir browser object be re-used in a later Ruby process?
If you don't want to write your own, then use any ordinary web crawler. There are dozens out there.
If you do want to write your own, then write your own. A web crawler isn't exactly a complicated activity; it consists of the following steps (a toy sketch follows the list):
Downloading a website.
Locating URLs in that website, filtered however you dang well please.
For each URL in that website, repeat step 1.
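Here is a toy sketch of that loop, assuming open-uri and nokogiri and a made-up start URL (a real crawler would persist its queue and respect robots.txt and rate limits):

require 'open-uri'
require 'nokogiri'
require 'set'

start   = 'https://example.com/'
queue   = [start]
visited = Set.new

until queue.empty?
  url = queue.shift
  next if visited.include?(url)
  visited << url

  # Step 1: download the page
  doc = Nokogiri::HTML(URI.open(url).read)

  # Step 2: locate and filter URLs, step 3: enqueue them for the next pass
  doc.css('a[href]').each do |a|
    begin
      link = URI.join(url, a['href']).to_s
    rescue URI::Error
      next
    end
    queue << link if URI(link).host == URI(start).host   # e.g. stay on the same host
  end
end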
Oh, and this seems to be a duplicate of "Web crawler in ruby".

How to test/spec Sinatra & MongoDB API with Cucumber?

I want to spec a Sinatra server that receives HTTP requests, stores things in MongoDB, and responds with JSON. How would I spec both the MongoDB entries and the responses?
I'd like to use Cucumber and RSpec to do this because I hear they're hot, but I'm not really good with them yet.
My learning so far with BDD is that you need to think in very small steps. E.g. you could start by writing specifications with RSpec for your routes; there is an example project with Sinatra here, and another example here.
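For example, a route spec with RSpec and Rack::Test might look like this (the App class and the /items route are hypothetical):

require 'rack/test'
require 'rspec'
require_relative 'app'   # assumes your Sinatra app is the class App in app.rb

RSpec.describe 'the API' do
  include Rack::Test::Methods

  # Rack::Test calls this to know which app to drive
  def app
    App
  end

  it 'returns JSON for GET /items' do
    get '/items'
    expect(last_response.status).to eq(200)
    expect(last_response.content_type).to include('application/json')
  end
end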
Then you could move on to specifications for your model layer. Small steps here too: check validations, and the setting and getting of attributes.
Last, you might specify the views; here you need to learn about mocks and stubs for your controllers and models.
Cucumber is a different story in my view. You need to write cucumber specifications when you work with your customer, to understand together the requirements of your app. It facilitates acceptances testing as far as I can see.

Cappuccino, Django, AJAX, and fitting it all together - review my architecture!

I'm trying to get my head around Cappuccino. I'd like my Stack Overflow peers to review the architecture below and see if it makes sense - the aim is to utilize the unique benefits of Django and Cappuccino without doubling up where the technologies overlap...
When the web browser requests a 'friendly' URL (eg, /, /articles, etc):
Django's urls.py matches this to a view.
The view, rather than doing Django's typical work of filling in a template with the locals dict, returns the small 'stub' HTML used in a Cappuccino app directly.
The client receives the Cappuccino HTML.
The client requests the Objective-J JS URLs mentioned in the stub HTML.
The end-user app is executed and displayed in the browser.
The browser now has a working app. When the user does something that requests something from the server:
The browser sends an XMLHttpRequest to a URL.
Django's urls.py matches this to a view.
The view does its work, perhaps interacting with the DB model. But instead of returning a template, Django returns some JSON.
The client receives the JSON and does whatever it needs to do.
Does this make sense? We still have the benefit of friendly URLs, and the database being created for us to model our code. However rather than using templates, we're providing Cappuccino stub pages and JSON responses, in order to give users something more like a real app and less like an HTML templating engine.
Is there perhaps a better way of doing things? What do other Pythonistas use? Thanks for your feedback.
For a low traffic site, using Django's routing layer would be fine, but if you plan on getting a significant amount of traffic, you might consider having your proxying webserver handle the stubs.
As for the rest, it works and the TurboGears community has been doing it for years (I was a TG committer so that's what I normally use). The TG architecture of returning a dictionary to a template makes this trivial since you just set 'json' as your template engine.
Doing the same thing in Django isn't much more complicated. Just use the serialization tools to write the result to the response rather than using the templating calls.
Note that when you do an architecture like this, it's considerably easier to manage if you keep all the application logic in one place. Putting some app logic in Django and some in the browser causes things to start getting messy fairly quickly. If you treat your server as a dumb persistence layer (with the exception of validation/authentication/authorization), life is easier.
FWIW, I find Sproutcore to be easier to work with than Cappuccino if you're interested in heavier non-progressive enhancement frameworks.
