How can I parse just the bug count from the PivotalTracker API? - ruby

I'm currently issuing a GET request to the PivotalTracker API to get all of the bugs for a given project, by bug severity. All I really need is a count of the bugs (e.g. 10 critical bugs), but I'm currently getting all of the raw data for each bug in XML format. The XML data has a bug count at the top, but I have to scroll through tons of data to get to that count.
To solve this, I'm trying to parse the XML so that only the bug count is displayed, but I'm not sure how. I've experimented with Nokogiri and REXML, but it seems like they can only parse actual XML files, not the XML from an HTTP GET response.
Here is my code (the access token has been replaced with *'s for security reasons):
require 'net/http'
require 'rexml/document'

prompt = '> '

puts "What is the id of the Project you want to get data from?"
print prompt
project_id = STDIN.gets.chomp

puts "What type of bugs do you want to get?"
print prompt
type = STDIN.gets.chomp

def bug(project_id, type)
  net = Net::HTTP.new("www.pivotaltracker.com")
  request = Net::HTTP::Get.new("/services/v3/projects/#{project_id}/stories?filter=label%3Aqa-#{type}")
  request.add_field("X-TrackerToken", "*******************")
  net.read_timeout = 10
  net.open_timeout = 10
  response = net.start do |http|
    http.request(request)
  end
  puts response.code
  print response.read_body
end

bug(project_id, type)
Like I said, the GET request is successfully printing a bug count and all of the raw data for each individual bug to my Terminal window, but I only want it to print the bug count.

The API documentation shows the total bug count is an attribute of the XML response's top-level node, stories.
Using Nokogiri as an example, try replacing print response.read_body with
xml = Nokogiri::XML(response.body)
puts "Bug count: #{xml.root['total']}"
Here xml.root is the stories element and ['total'] reads its total attribute. (In XPath, attributes are addressed with @, as in /stories/@total, not #.)
Naturally you'll need to add require 'nokogiri' at the top of your code as well.
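Since you already require rexml/document, a REXML version works the same way. A minimal sketch, assuming (per the docs) that the total attribute sits on the stories root element:

require 'rexml/document'

# REXML parses a string directly -- no file on disk needed:
doc = REXML::Document.new(response.body)
puts "Bug count: #{doc.root.attributes['total']}"

Either way, the point is that both libraries accept the response body as a plain string; nothing has to be written to a file first.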

Related

Ruby get download progression

How do I download a file from a link and also get the download progress so I can display it?
From the little I know, open-uri and the typhoeus gem (I use both) manage the download internally, and I have no idea how to get the download progress out of them.
I suppose what I am looking for is a way of pausing the download and reading its progress, or updating a value each time a chunk is received, but I do not know how to proceed.
I could also use another gem? If so, which one and why?
You need to do it as a chunked request, which is what libraries like open-uri do behind the scenes when you use their high-level APIs (like open(uri).read).
Untested example based on https://www.ruby-forum.com/topic/170237:
require 'net/http'
require 'uri'

u = URI.parse(uri) # uri is the download URL as a string
s = ""
progress = 0
Net::HTTP.start(u.host, u.port) do |http|
  http.request_get(u.path) do |response|
    # read_body with a block streams the body chunk by chunk
    response.read_body do |chunk|
      s << chunk
      progress += chunk.length
      puts "Downloaded #{progress} bytes so far"
    end
  end
end
puts "Received:\n#{s}"

Uploading and parsing text document in Rails

In my application, the user must upload a text document, the contents of which are then parsed by the receiving controller action. I've gotten the document to upload successfully, but I'm having trouble reading its contents.
There are several threads on this issue. I've tried more or less everything recommended on these threads, and I'm still unable to resolve the problem.
Here is my code:
file_data = params[:file]
contents = ""
if file_data.respond_to?(:read)
  contents = file_data.read
elsif file_data.respond_to?(:path)
  File.open(file_data, 'r').each_line do |line|
    elts = line.split
    #
    #
  end
end
So here are my problems:
file_data doesn't 'respond_to?' either :read or :path. According to some other threads on the topic, if the uploaded file is less than a certain size, it's interpreted as a string and will respond to :read. Otherwise, it should respond to :path. But in my code, it responds to neither.
If I try to take out the if statements and straight away attempt File.open(file_data, 'r'), I get an error saying that the file wasn't found.
Can someone please help me find out what's wrong?
PS, I'm really sorry that this is a redundant question, but I found the other threads unhelpful.
Are you actually storing the file? Because if you are not, of course it can't be found.
First, find out what you're actually getting for file_data by adding debug output of file_data.inspect. It may be something you don't expect, especially if the form isn't set up correctly (i.e. missing :multipart => true).
Rails should wrap the uploaded file in a special object providing a uniform interface, so that something as simple as this should work:
file_data.read.each_line do |line|
  elts = line.split
  #
  #
end
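For reference, here is a minimal sketch of the multipart form plus the debugging step above; the action and field names are illustrative, not from the original question:

# In the view -- without :multipart => true the browser posts just the
# filename as a plain String instead of the file contents:
#
#   <% form_tag({:action => 'upload'}, :multipart => true) do %>
#     <%= file_field_tag :file %>
#     <%= submit_tag 'Upload' %>
#   <% end %>

# In the controller:
def upload
  file_data = params[:file]
  logger.debug file_data.inspect # check what actually arrived
  contents = file_data.read if file_data.respond_to?(:read)
end

If inspect shows a String rather than an uploaded-file object, the multipart flag is the first thing to check.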

Ruby RSS::Parser.to_s silently fails?

I'm using Ruby 1.8.7's RSS::Parser, part of stdlib. I'm new to Ruby.
I want to parse an RSS feed, make some changes to the data, then output it (as RSS).
The docs say I can use '#to_s', and it seems to work with some feeds, but not others.
This works:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://news.ycombinator.com/rss'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns expected output: XML text.
This fails:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://feeds.feedburner.com/devourfeed'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns nothing (empty quotes).
And yet, if I change the last line to:
p rss
I can see that the object is filled with all of the feed data. It's the to_s method that fails.
Why?
How can I get some kind of error output to debug a problem like this?
From what I can tell, the problem isn't in to_s, it's in the parser itself. Stepping way into the parser.rb code showed nothing being returned, so to_s returning an empty string is valid.
I'd recommend looking at something like Feedzirra.
Also, as an FYI, take a look at Ruby's OpenURI module (require 'open-uri') for easy retrieval of web assets, like feeds. OpenURI is simple but adequate for most tasks. Net::HTTP is lower level, which will require you to type a lot more code to replace the functionality of OpenURI.
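For example, the fetch in your first script shrinks to this with OpenURI (a sketch using the same URL as your working example):

require 'rss'
require 'open-uri'

url = 'http://news.ycombinator.com/rss'
feed = open(url).read # OpenURI lets Kernel#open fetch URLs
rss = RSS::Parser.parse(feed, false, true)
p rss.to_s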
I had the same problem, so I started debugging the code. I think Ruby's RSS library has a few too many required elements. The channel needs to have title, link, and description; if any one is missing, to_s will fail.
The second feed in the example above is missing the description, which makes to_s fail...
I believe this is a bug, but I don't fully understand the code (and barely know Ruby), so who knows. It would seem natural to me that to_s would try its best even if some elements are missing.
Either way,
rss.channel.description = "something"
rss.to_s
will "work".
The problem lies in def have_required_elements?, or in self.class::MODELS.
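Building on that workaround, a defensive sketch that fills in any missing required channel element before serializing (the placeholder values are arbitrary):

rss = RSS::Parser.parse(feed, false, true)

# to_s quietly returns "" if title, link, or description is missing,
# so supply placeholders for any that are absent:
rss.channel.title       ||= "(untitled feed)"
rss.channel.link        ||= "http://example.com/" # placeholder
rss.channel.description ||= "(no description)"

puts rss.to_s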

Need to build a url and work with the returned result

I would like to start with a little script that fetches the examination results for me and my friends from our university website.
I would like to pass it the roll number as a POST parameter and work with the returned data, but I don't know how to create the POST string.
It would be great if someone could tell me where to start and what things to learn; links to a tutorial would be most appreciated.
I don't want someone to write code for me, just guidance on how to get started.
I've written a solution here just as a reference for whatever you might come up with. There are multiple ways of attacking this.
# fetch_scores.rb
require 'open-uri'

# Define a constant named URL so if the results URL changes we don't
# need to replace a hardcoded URL everywhere.
URL = "http://www.nitt.edu/prm/ShowResult.html?&param="

# Check the count of arguments passed to the script.
# It only takes one, so otherwise show the user how to run it.
if ARGV.length != 1
  puts "Usage: fetch_scores.rb student_name"
else
  name = ARGV[0] # could drop the ARGV length check and add a default
                 # using ||, e.g. name = ARGV[0] || 'nikhil'
  results = open(URL + name).read
end
You might examine Nokogiri or Hpricot to better parse/format your results. Ruby is an "implicit return" language, so if you wondered why there's no return statement, it's because results is the value of the last expression evaluated.
You could have a look at the net/http library, included as part of the standard library. See http://www.ruby-doc.org/stdlib/libdoc/net/http/rdoc/index.html for details, there are some examples on that page to get you started.
A very simple way to do this is to use the open-uri library and just put the query parameters in the URL query string:
require 'open-uri'
name = 'nikhil'
results = open("http://www.nitt.edu/prm/ShowResult.html?&param=#{name}").read
results now contains the body text fetched from the URL.
If you are looking for something more ambitious, look at net/http and the httparty gem.
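Since the question asks about a POST parameter specifically, Net::HTTP can build the urlencoded body for you. A sketch reusing the URL and param name from the answers above (the real form may expect different names):

require 'net/http'
require 'uri'

uri = URI.parse("http://www.nitt.edu/prm/ShowResult.html")
# post_form encodes the hash as "param=nikhil" and sets the
# Content-Type header for you:
response = Net::HTTP.post_form(uri, 'param' => 'nikhil')
puts response.body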

How to use Scrubty properly to grab URL from the XML outputted content

I am by no means a master with Ruby and am quite new to Scrubyt. I was just trying out some examples found on their wiki page. The example I was working on gets the search results Google returns for 'ruby', and I had the idea of grabbing the URL of each result so I could go ahead and fetch that page as well. The problem is I don't know how to grab the URL appropriately. This is my code:
require 'rubygems'
require 'scrubyt'

google_data = Scrubyt::Extractor.define do
  fetch 'http://www.google.com/ncr'
  fill_textfield 'q', 'ruby'
  submit
  link_title "//a[@class='l']", :write_text => true do
    link_url
  end
end

google_data.to_xml.write($stdout, 1)
The code prints the XML data correctly (title and link), but how do I retrieve the link without the <link_url> tags that get added around it? (I tried printing link_url directly and the tags are printed as well.) Could I do something as simple as fetch link_url, or is there a way to extract the text from the XML content held in link_url?
This is some of the content that gets printed by the google_data.to_xml.write():
<root>
  <link_title>
    Ruby Programming Language
    <link_url>http://ruby-lang.org/</link_url>
  </link_title>
  <link_title>
    Download Ruby
    <link_url>http://www.ruby-lang.org/en/downloads/</link_url>
  </link_title>
  <link_title>
    Ruby - The Inspirational Weight Loss Journey on the Style Network ...
    <link_url>http://www.mystyle.com/mystyle/shows/ruby/index.jsp</link_url>
  </link_title>
  <link_title>
    Ruby (programming language) - Wikipedia, the free encyclopedia
    <link_url>http://en.wikipedia.org/wiki/Ruby_(programming_language)</link_url>
  </link_title>
</root>
I'd think about alternatives. Scrubyt hasn't been updated in a while, and the forums have been shut down.
Mechanize can do what the Extractor does, Nokogiri can parse XML or HTML responses, and Builder can create XML (though it seems like you don't really want XML).
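For instance, here is a rough Mechanize equivalent of the extractor above. This is a sketch, not tested against today's Google markup: the a.l selector comes from your Scrubyt pattern and may no longer match, and on older Mechanize versions the class is WWW::Mechanize.

require 'rubygems'
require 'mechanize'

agent = Mechanize.new
page = agent.get('http://www.google.com/ncr')

form = page.forms.first
form['q'] = 'ruby'
results = agent.submit(form)

# search returns Nokogiri nodes, so the href comes back as a plain
# string -- no <link_url> wrapper tags to strip:
results.search("a.l").each do |link|
  puts link.text
  puts link['href']
end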
