sinatra-r18n: How do I get a date in a specific language? - internationalization

That's basically it. As far as I can tell, Sinatra inspects places like the Accept-Language HTTP header, the :locale URL parameter, and the :locale session variable in an attempt to detect the user's preferred language. But what if I just want to get the date in the language I want?
Update:
Consider the following code:
require 'sinatra'
require 'sinatra/r18n'

get '/' do
  l Date.today, '%B'
end
It outputs February for me. How do I make it output the month in German, or any other language, regardless of what Sinatra thinks is appropriate for the user?
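If session[:locale] really does take precedence over the Accept-Language header, I'd expect something like this untested sketch to force German (the detection order above is my assumption):

require 'sinatra'
require 'sinatra/r18n'

enable :sessions

get '/de' do
  # Force the locale before any localization happens in this request;
  # sinatra-r18n should then pick up session[:locale] instead of the header.
  session[:locale] = 'de'
  l Date.today, '%B' # hopefully "Februar"
end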

Related

Sinatra dynamic template name

I just started checking out Sinatra for a project, and I started playing around with HAML.
However, I've run into an issue: I have a route with a splat that needs to point to an HAML file named after the text captured from the URL, but any string passed to the haml template method is treated as an inline template, not a filename.
There is no documentation that suggests a way to do this. The only solution I can think of is reading the full text of the necessary template file and passing it to the haml method; however, such a solution is incredibly cumbersome.
Example
get '/stpl/*.haml' do |page|
  haml page # <--- `page' is treated as an inline template
end
Whilst this behaviour is expected when one reads the documentation, there seems to be no other means to accomplish what I need.
If you pass a symbol to haml, it will look in views for the matching file, so you can do this instead:
get '/stpl/*.haml' do |page|
  haml page.to_sym # attempts to render views/<page>.haml
end

Ruby RSS::Parser.to_s silently fails?

I'm using Ruby 1.8.7's RSS::Parser, part of stdlib. I'm new to Ruby.
I want to parse an RSS feed, make some changes to the data, then output it (as RSS).
The docs say I can use #to_s, and it seems to work with some feeds, but not others.
This works:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://news.ycombinator.com/rss'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns expected output: XML text.
This fails:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://feeds.feedburner.com/devourfeed'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns nothing (empty quotes).
And yet, if I change the last line to:
p rss
I can see that the object is filled with all of the feed data. It's the to_s method that fails.
Why?
How can I get some kind of error output to debug a problem like this?
From what I can tell, the problem isn't in to_s, it's in the parser itself. Stepping deep into the parser.rb code showed nothing being returned, so to_s returning an empty string is valid.
I'd recommend looking at something like Feedzirra.
Also, as an FYI, take a look at Ruby's OpenURI module (require 'open-uri') for easy retrieval of web assets, like feeds. OpenURI is simple but adequate for most tasks; Net::HTTP is lower level, which will require you to write a lot more code to replace the functionality of OpenURI.
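For example, the Net::HTTP fetch from the question collapses to a one-liner with OpenURI. A minimal sketch, reusing the Hacker News feed URL from the working example:

require 'rss'
require 'open-uri'

# OpenURI patches open() to accept URLs, so fetching and parsing the
# feed becomes a single expression.
url = 'http://news.ycombinator.com/rss'
rss = RSS::Parser.parse(open(url).read, false, true)
puts rss.to_s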
I had the same problem, so I started debugging the code. I think Ruby's RSS library has a few too many required elements: the channel needs to have title, link and description, and if one is missing, to_s will fail.
The second feed in the example above is missing the description, which makes to_s fail.
I believe this is a bug, but I don't really understand the code and barely know Ruby, so who knows. It would seem natural to me that to_s would try its best even if some elements are missing.
Either way,
rss.channel.description = "something"
rss.to_s
will "work".
The problem lies in have_required_elements?, or in self.class::MODELS.

Need to build a URL and work with the returned result

I would like to start with a little script that fetches the examination results of me and my friends from our university website.
I would like to pass it the roll number as the POST parameter and work with the returned data, but I don't know how to create the POST string.
It would be great if someone could tell me where to start and what things to learn; links to a tutorial would be most appreciated.
I don't want someone to write code for me, just guidance on how to get started.
I've written a solution here just as a reference for whatever you might come up with. There are multiple ways of attacking this.
# fetch_scores.rb
require 'open-uri'

# Define a constant named URL so if the results URL changes we don't
# need to replace a hardcoded URL everywhere.
URL = "http://www.nitt.edu/prm/ShowResult.html?&param="

# Check the count of arguments passed to the script.
# It only takes one, so if that's wrong, show the user how to use the script.
if ARGV.length != 1
  puts "Usage: fetch_scores.rb student_name"
else
  name = ARGV[0] # could drop the ARGV length check and add a default
                 # using ||, e.g. name = ARGV[0] || 'nikhil'
  results = open(URL + name).read
end
You might examine Nokogiri or Hpricot to better parse/format your results. Ruby is an "implicit return" language, so if you happened to wonder why there is no return statement, it's because results is the last expression evaluated, and its value is what the script returns.
You could have a look at the net/http library, included as part of the standard library. See http://www.ruby-doc.org/stdlib/libdoc/net/http/rdoc/index.html for details; there are some examples on that page to get you started.
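Since you mentioned sending the roll number as a POST parameter, here is a minimal Net::HTTP sketch; the URL and the 'param' field name are borrowed from the other answers here and may not match what the site actually expects:

require 'net/http'
require 'uri'

# POST the roll number as form data; 'param' is an assumed field name.
uri = URI.parse('http://www.nitt.edu/prm/ShowResult.html')
response = Net::HTTP.post_form(uri, 'param' => 'nikhil')
puts response.body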
A very simple way to do this is to use the open-uri library and just put the query parameters in the URL query string:
require 'open-uri'
name = 'nikhil'
results = open("http://www.nitt.edu/prm/ShowResult.html?&param=#{name}").read
results now contains the body text fetched from the URL.
If you are looking for something more ambitious, look at net/http and the httparty gem.
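For comparison, a sketch of the same request with the httparty gem; the :query hash builds the query string for you ('param' is again an assumption):

require 'rubygems'
require 'httparty'

# HTTParty turns the :query hash into ?param=nikhil for you.
response = HTTParty.get('http://www.nitt.edu/prm/ShowResult.html',
                        :query => { 'param' => 'nikhil' })
puts response.body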

How to use Scrubyt properly to grab the URL from the XML output

I am by no means a master with Ruby and am quite new to Scrubyt. I was just trying out some examples found on their wiki page. The example I was working on was getting the search results returned by Google when you search for 'ruby', and I had the idea of grabbing the URL of each result so I could go ahead and fetch that page as well. The problem is I don't know how to grab the URL appropriately. This is my code:
require 'rubygems'
require 'scrubyt'

google_data = Scrubyt::Extractor.define do
  fetch 'http://www.google.com/ncr'
  fill_textfield 'q', 'ruby'
  submit

  link_title "//a[@class='l']", :write_text => true do
    link_url
  end
end

google_data.to_xml.write($stdout, 1)
The code prints out the XML data appropriately (name and link), but how do I retrieve the link without the <link_url> tags that get added to it? (I tried to print out link_url and I noticed the tags are printed as well.) Could I do something as simple as fetch link_url, or is there a way of extracting the text from the XML content held in link_url?
This is some of the content that gets printed by google_data.to_xml.write():
<root>
  <link_title>
    Ruby Programming Language
    <link_url>http://ruby-lang.org/</link_url>
  </link_title>
  <link_title>
    Download Ruby
    <link_url>http://www.ruby-lang.org/en/downloads/</link_url>
  </link_title>
  <link_title>
    Ruby - The Inspirational Weight Loss Journey on the Style Network ...
    <link_url>http://www.mystyle.com/mystyle/shows/ruby/index.jsp</link_url>
  </link_title>
  <link_title>
    Ruby (programming language) - Wikipedia, the free encyclopedia
    <link_url>http://en.wikipedia.org/wiki/Ruby_(programming_language)</link_url>
  </link_title>
</root>
I'd think about alternatives. Scrubyt hasn't been updated in a while, and the forums have been shut down.
Mechanize can do what the Extractor does, Nokogiri can parse XML or HTML responses, and Builder can create XML (though it seems like you don't really want XML).
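As a rough sketch of that combination, here is the same Google scrape with Mechanize; the //a[@class='l'] XPath mirrors the one in the question and, like anything that depends on Google's markup, may need adjusting:

require 'rubygems'
require 'mechanize'

# Fetch the Google front page, submit the search form, then pull the
# result links straight out of the parsed page -- no XML round-trip,
# so there are no <link_url> tags to strip afterwards.
agent = WWW::Mechanize.new
page = agent.get('http://www.google.com/ncr')
form = page.forms.first
form['q'] = 'ruby'
results = agent.submit(form)

results.search("//a[@class='l']").each do |link|
  puts link.inner_text # the result title
  puts link['href']    # the bare URL, no tags around it
end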

How to load a Web page and search for a word in Ruby

How do I load a Web page and search for a word in Ruby?
Here's a complete solution:
require 'open-uri'

if open('http://example.com/').read =~ /searchword/
  # do something
end
For something simple like this, I would prefer to write a couple of lines of code instead of using a full-blown gem. Here is what I would do:
require 'net/http'

# let's take the URL of this page
uri = 'http://stackoverflow.com/questions/1878891/how-to-load-a-web-page-and-search-for-a-word-in-ruby'
response = Net::HTTP.get_response(URI.parse(uri)) # => #<Net::HTTPOK 200 OK readbody=true>

# match the word Ruby
/Ruby/.match(response.body) # => #<MatchData "Ruby">
I would go down the path of using a gem only if I needed to do more than this, or if I had to implement some algorithm that is already implemented in one of the gems.
I suggest using Nokogiri or Hpricot to open and parse HTML documents. If you need something simple that doesn't require parsing the HTML, you can just use the open-uri library built into most Ruby distributions. If you need something more complex, like posting forms (or logging in), you can elect to use Mechanize.
Nokogiri is probably the preferred solution post-_why, but both are about as simple as this:
require 'nokogiri'
require 'open-uri'

doc = Nokogiri(open("http://www.example.com"))

if doc.inner_text.match(/someword/)
  puts "got it"
end
Both also allow you to search using XPath queries or CSS selectors, which allows you to grab items out of all divs with class="foo", for example.
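A quick sketch of both styles, using the hypothetical class="foo" divs from that example:

require 'nokogiri'
require 'open-uri'

doc = Nokogiri(open('http://www.example.com'))

# The CSS selector and the equivalent XPath query return the same nodes.
doc.css('div.foo').each { |div| puts div.inner_text }
doc.xpath("//div[@class='foo']").each { |div| puts div.inner_text }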
Fortunately, it's not that big of a leap to move between open-uri, nokogiri and mechanize, so use the first one that meets your needs, and revise your code once you realize you need the capabilities of one of the other libraries.
You can also use the mechanize gem, with something similar to this:
require 'rubygems'
require 'mechanize'

WWW::Mechanize.new.get('http://example.com') do |page|
  if page.body =~ /mysearchregex/
    puts "found it"
  end
end
