The site I want to index is fairly big, 1.x million pages. I really just want a json file of all the URLs so I can run some operations on them (sorting, grouping, etc).
The basic Anemone loop worked well:
require 'anemone'

Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.on_every_page do |page|
    puts page.url
  end
end
But (because of the site size?) the terminal froze after a while. Therefore, I installed MongoDB and used the following:
require 'rubygems'
require 'anemone'
require 'mongo'
require 'json'

$stdout = File.new('sitemap.json', 'w')

Anemone.crawl("http://www.mybigexamplesite.com/") do |anemone|
  anemone.storage = Anemone::Storage.MongoDB
  anemone.on_every_page do |page|
    puts page.url
  end
end
It's running now, but I'll be very surprised if there's output in the JSON file when I get back in the morning. I've never used MongoDB before, and the part of the Anemone docs about using storage wasn't clear (to me at least). Can anyone who's done this before give me some tips?
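For reference, the pattern I'm aiming for (a rough sketch, not yet run at this scale; the MongoDB storage only keeps Anemone's crawl state out of memory, it doesn't write the output file) would be something like:

require 'anemone'
require 'mongo'
require 'json'

urls = []

Anemone.crawl("http://www.mybigexamplesite.com/") do |anemone|
  # Keep page data in MongoDB instead of RAM (needs the mongo gem and a running mongod).
  anemone.storage = Anemone::Storage.MongoDB
  anemone.on_every_page do |page|
    urls << page.url.to_s
  end
end

# One JSON array of URLs, written once at the end.
File.write('sitemap.json', JSON.pretty_generate(urls))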
If anyone out there needs <= 100,000 URLs, the Ruby Gem Spidr is a great way to go.
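For example, something along these lines (a minimal sketch; I haven't run it against a big site):

require 'spidr'
require 'json'

urls = []

Spidr.site('http://www.example.com/') do |spider|
  # every_url yields each URI the agent visits.
  spider.every_url { |url| urls << url.to_s }
end

File.write('sitemap.json', JSON.pretty_generate(urls))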
This is probably not the answer you wanted to see, but I highly advise that you don't use Anemone, and perhaps not Ruby for that matter, for crawling a million pages.
Anemone is not a maintained library and fails on many edge cases.
Ruby is not the fastest language and uses a global interpreter lock, which means you can't have true threading capabilities. I think your crawling will probably be too slow. For more information about threading, I suggest you check out the following links:
http://ablogaboutcode.com/2012/02/06/the-ruby-global-interpreter-lock/
Does ruby have real multithreading?
You can try using Anemone with Rubinius or JRuby, which are much faster, but I'm not sure about the extent of compatibility.
I had some mild success going from Anemone to Nutch but your mileage may vary.
I need to fetch some data but I'm completely stumped after trying a few things.
I want to access the "Airlines & Destinations" tables from the Albuquerque_International_Sunport wiki page. Keep in mind that I'll be going through a prepopulated list of airports with this data.
There are multiple "types" of airlines: Passenger and Cargo. Sometimes there are other (sub)sections; other times there are none.
Articles for multiple airports will be accessed automatically, including some lesser-known airports. This means I need to:
Check if "Airlines & Destinations" section exists
Take all data inside of any table
Scrape it; otherwise do nothing
I've tried using the Ruby wikipedia-client gem; however, the .raw_data method isn't even returning the section data.
Next, I went to Wikipedia's API: unless I am mistaken, it doesn't return "section" names! That doesn't seem right, but I wasn't able to get it working.
So I suppose that leaves Nokogiri. I can grab and parse the pages fine, but:
How would I go about detecting the presence of the "Airlines & Destinations" section and getting all of the table data before the end of that section? I suspect I need some tricky XPath for this.
Seems to be the only viable solution.
Any thoughts welcome. Putting a bounty on this question when I can.
Edit: Perhaps it's better to simply grab a list of all airlines in the world somehow and match them against the HTML? That seems like it could be computationally expensive, though.
Well, I'm not an expert user of Nokogiri, but maybe this can give you some ideas.
require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# this is the passenger table
page.xpath('//*[@id="mw-content-text"]/div/table[2]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end

# this is the cargo table
page.xpath('//*[@id="mw-content-text"]/div/table[3]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end
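If the table positions shift from article to article, another option (an untested sketch; the Airlines_and_destinations anchor id is an assumption and should be checked against the real page) is to find the section heading and walk its siblings until the next heading:

require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# The section heading is a <span class="mw-headline"> inside an <h2>;
# the id below is a guess -- confirm it against the article's HTML.
heading = page.at_css('span#Airlines_and_destinations')

if heading
  node   = heading.parent   # the enclosing <h2>
  tables = []
  # Collect every table that appears before the next <h2>.
  while (node = node.next_element)
    break if node.name == 'h2'
    tables << node if node.name == 'table'
  end
  tables.each do |table|
    table.css('tr').each { |tr| p tr.text }
    puts "-" * 50
  end
end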
OK, so I am just picking Ruby up pretty much for kicks and giggles... and believe me when I say I'm stumped.
I want to create a bot for my Twitch stream and do it in Ruby, because I found a fairly easy tutorial to follow along with. However, I'm having a very hard time getting my command prompt or Pry to load the file.
Here is my file, just in case:
require 'socket'

TWITCH_HOST = "irc.twitch.tv"
TWITCH_PORT = 6667

class Fox
  def initialize
    @nickname = "mybotsname"
    @password = "I have the proper oauth here"
    @channel  = "mytwitchchannel"
    @socket   = TCPSocket.open(TWITCH_HOST, TWITCH_PORT)

    write_to_system "PASS #{@password}"
    write_to_system "NICK #{@nickname}"
    write_to_system "USER #{@nickname} 0 * #{@nickname}"
    write_to_system "JOIN ##{@channel}"
  end

  def write_to_system(message)
    @socket.puts message
  end

  def write_to_chat(message)
    write_to_system "PRIVMSG ##{@channel} :#{message}"
  end
end
Now, from what I gathered, I should be able to go into my command prompt and type pry.
I get this (screenshot: Pry).
Now, I want to run my program, which is located in a Dropbox folder (private use).
I'm still very new to the concept of REPLs, as I've been working mostly with Java and have very little experience in other languages. What am I doing wrong here? Why can I not get my file to load properly? I've also tried giving the full file path and got this (screenshot: FilePathing).
I'm sorry if this is a stupid question; it's just driving me absolutely crazy. The reason it's driving me bonkers is that the person in the video I was watching didn't do anything different, other than (my guess) using Terminal instead of Command Prompt. I originally wanted to do this through Cygwin, but upon installing Pry I lost a bunch of Cygwin files and can no longer load Cygwin. I will reinstall the overall program later and see what I can do from there.
Sorry for no embedded pics.
Also, if there's any easier way to do this, I'm all ears. I've tried Komodo Edit 10, but it's not playing nice either.
Require from LOAD_PATH
A Ruby module or class file needs to be in the LOAD_PATH to require it with Kernel#require. For example, if your file is named just_in_case.rb, you can use:
$LOAD_PATH.unshift '/path/to/dropbox/directory'
# Leave off the path and .rb extension.
require 'just_in_case'
Load from an absolute path
If you need to provide an absolute path, then you should use Kernel#load instead. For example:
# Use the absolute path and the .rb extension.
load '/path/to/dropbox/just_in_case.rb'
Caveats
There are some other differences in behavior between require, require_relative, and load, but they probably don't matter much within the limited scope of your question, except that there have historically been issues with Kernel#require_relative inside the REPL. It may or may not work as expected now, but I would still recommend require or load for your specific use case.
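For what it's worth, a quick sketch of require_relative, assuming your script sits next to the hypothetical just_in_case.rb:

# my_script.rb, in the same directory as just_in_case.rb
require_relative 'just_in_case'

# Equivalent without require_relative, anchored to this file's directory:
require File.expand_path('just_in_case', __dir__)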
I have a number of data files to process from a data warehouse that have the following format:
:header 1 ...
:header n
# remarks 1 ...
# remarks n
# column header 1
# column header 2
DATA ROWS
(Example: "#### ## ## ##### ######## ####### ###afp## ##e###")
The data is separated by whitespace and contains both numbers and other ASCII characters. Some of those pieces of data will be split up and made more meaningful.
All of the data will go into a database, initially an SQLite DB for development, and then be pushed up to other, more permanent storage.
These files will actually be pulled in via HTTP from the remote server, and I will have to crawl a bit to get some of them, as they span folders and many files.
I was hoping to get some input on what the best tools and methods may be to accomplish this the "Ruby way", as well as how to abstract out some of this. Otherwise, I'll probably tackle it similarly to how I would in Perl or with other such approaches I've taken before.
I was thinking along the lines of using OpenURI to open each URL, then, if the input is HTML, collecting links to crawl; otherwise, processing the data. I would use String#scan to break each file apart appropriately into a multi-dimensional array, parsing each component based on the formatting established by the data provider. Upon completion, push the data into the database. Move on to the next input file/URI. Rinse and repeat.
I figure I must be missing some libraries that those with more experience would use to clean up and speed up this process dramatically, and make the script much more flexible for reuse on other data sets.
Additionally, I will be graphing and visualizing this data as well as generating reports, so perhaps that should be considered too.
Any input on a better approach or libraries to simplify this?
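For the parsing step itself, here is a rough sketch of what I had in mind (the file name is a placeholder; the real column mapping comes from the provider's format spec):

rows = []

File.foreach('warehouse_dump.dat') do |line|
  # Skip the :header lines, the # remarks / column header lines, and blanks.
  next if line.start_with?(':', '#') || line.strip.empty?

  # Data rows are whitespace-separated fields of numbers and ASCII tokens.
  rows << line.split
end

# rows is now a multi-dimensional array, ready to be mapped to columns
# and pushed into SQLite.
p rows.first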
Your question focuses a lot on "low level" details: parsing URLs and so on. One key aspect of the "Ruby Way" is "Don't reinvent the wheel." Leverage existing libraries. :)
My recommendation? First, leverage a crawler such as spider or anemone. Second, use Nokogiri for HTML/XML parsing. Third, store the results. I recommend this because you might do different analyses later and you don't want to throw away the hard work of your spidering.
Without knowing too much about your constraints, I would look at storing your results in MongoDB. After thinking about this, I did a quick search and found a nice tutorial, Scraping a blog with Anemone and MongoDB.
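A rough sketch of that pipeline (the collection and field names are placeholders, and this assumes the mongo gem and a local mongod):

require 'anemone'
require 'mongo'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'crawl_results')
pages  = client[:pages]

Anemone.crawl('http://www.example.com/') do |anemone|
  anemone.on_every_page do |page|
    # page.doc is the Nokogiri document Anemone has already parsed.
    title = page.doc && page.doc.at('title') && page.doc.at('title').text
    pages.insert_one(url: page.url.to_s, title: title, fetched_at: Time.now)
  end
end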
I've written probably a bajillion spiders and site analyzers and find that Ruby has some nice tools that should make this an easy process.
OpenURI makes it easy to retrieve pages.
URI.extract makes it easy to find links in pages. From the docs:
Description
Extracts URIs from a string. If block given, iterates through all matched URIs. Returns nil if block given or array with matches.
require "uri"
URI.extract("text here http://foo.example.org/bla and here mailto:test#example.com and here also.")
# => ["http://foo.example.com/bla", "mailto:test#example.com"]
Simple, untested logic to start might look like:
require "openuri"
require "uri"
urls_to_scan = %w[
http://www.example.com/page1
http://www.example.com/page2
]
loop do
break if urls_to_scan.empty?
url = urls_to_scan.shift
html = open(url).read
# you probably want to do something to make sure the URLs are not
# pointing outside the site you're walking.
#
# Something like:
#
# URI.extract(html).select{ |u| u[%r{^http://www\.example\.com}i] }
#
new_urls = URI.extract(html)
if (new_urls.any?)
urls_to_scan += new_urls
else
; # parse your file as data using the content in html
end
end
Unless you own the site you're crawling, you want to be kind and gentle: don't run as fast as possible, because it's not your pipe. Pay attention to the site's robots.txt file or risk being banned.
There are true web-crawler gems for Ruby, but the basic task is so simple I never bother with them. If you want to check out other alternatives, visit some of the links to the right for other questions on SO that touch on this subject.
If you need more power or flexibility, the Nokogiri gem makes short work of parsing HTML, allowing you to use CSS accessors to search for tags of interest. There are some pretty powerful gems for making it easy to grab pages such as typhoeus.
Finally, while ActiveRecord, which is recommended in some comments, is nice, finding documentation for using it outside of Rails can be difficult or confusing. I recommend using Sequel. It is a great ORM, very flexible, and well documented.
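To give a flavor of Sequel (a minimal sketch; the table and columns are invented for illustration):

require 'sequel'

# SQLite file for development; swap the connection string for the
# permanent store later.
DB = Sequel.sqlite('development.db')

DB.create_table? :records do
  primary_key :id
  String :source_url
  String :raw_line
  Time   :fetched_at
end

records = DB[:records]
records.insert(source_url: 'http://www.example.com/data1', raw_line: '#### ## ##', fetched_at: Time.now)

puts records.count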
Hi, I would start by taking a very close look at the gem called Mechanize before firing up any basic open-uri stuff, because open-uri's functionality is built into Mechanize. It's a brilliant, fast, and easy-to-use gem for automating web crawling. Since your data format is pretty strange (at least compared to JSON, XML, or HTML), I don't think you will make much use of the built-in parser, but you could still take a look at it; it's called Nokogiri and is extremely smart as well. But in the end, after crawling and fetching the resources, you will probably have to go with some good old regular expression stuff.
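A minimal Mechanize sketch of the crawling half (untested; the index URL is a placeholder):

require 'mechanize'

agent = Mechanize.new
agent.user_agent_alias = 'Mac Safari'

index = agent.get('http://www.example.com/data/')

# HTML responses come back as Mechanize::Page, plain data files as Mechanize::File.
index.links.each do |link|
  resource = link.click
  if resource.is_a?(Mechanize::Page)
    puts "HTML page: #{resource.uri}"
  else
    puts "Data file: #{resource.uri} (#{resource.body.bytesize} bytes)"
  end
end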
Good luck!
Is there any gem in Ruby to generate a summary of a URL, similar to what Facebook does when you post a link?
None that I'm aware of, but it shouldn't be too hard to roll your own. In the simplest case, you can just require 'open-uri' and then use the open method to retrieve the contents of the site, or go for one of the HTTP libraries.
Once you've got the document, all you have to do is use something like Nokogiri or Hpricot to get the title, the first paragraph of text, and an image, and you are done.
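Something like this, for instance (a bare-bones sketch; the selectors are naive guesses, and real pages often want their Open Graph tags instead):

require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('http://www.example.com/'))

title       = doc.at('title') && doc.at('title').text
description = doc.at('meta[property="og:description"]') &&
              doc.at('meta[property="og:description"]')['content']
description ||= doc.at('p') && doc.at('p').text
image       = doc.at('img') && doc.at('img')['src']

p title: title, description: description, image: image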
Generating a thumbnail isn't a straightforward task. The page has to be rendered, the window captured, shrunk down, then stored or returned. While it would be possible for a gem to do it, there would be significant overhead.
There are websites that can create the thumbnails, then you can reference the image:
Websnapr
Webthumb
ShrinkTheWeb
iWEBTOOL
I haven't tried them, but there's a good page discussing the first two on The Accidental Technologist.
If you need some text from the page, it's simple to grab some, but making it sensible is a different problem:
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open('http://www.example.com'))
page_text = doc.text
print page_text.gsub(/\s+/, ' ').squeeze(' ')[0..99]
# >> IANA — Example domains Domains Numbers Protocols About IANA Example Domains As described in RFC 2606
Is there a high-level Ruby library to interact with an FTP server?
Instead of Net::HTTP I can use HTTParty, Curb, Rest Client, or Typhoeus, which make everything easier, but I can't find any similar solutions to replace/enhance Net::FTP.
More specifically, I'm looking for:
minimal lines of code to connect to a server (for example, login must be explicitly specified with Net::FTP)
the ability to iterate through all entries in one folder, using a glob, or just recursively
the ability to get all possible information, such as the type of entry, size, and mtime, without manually parsing returned lines
Ruby's built-in OpenURI will handle FTP.
From OpenURI's docs:
OpenURI is an easy-to-use wrapper for net/http, net/https and net/ftp.
This will seem to hang while it retrieves the Ruby source, but should return after a minute or two.
require 'open-uri'

open('ftp://ftp.ruby-lang.org//pub/ruby/ruby-1.9.2-p136.tar.bz2') do |fi|
  File.open('ruby-1.9.2-p136.tar.bz2', 'wb') do |fo|
    # write, not puts, so no stray newline gets appended to the binary
    fo.write fi.read
  end
end
Or Net::FTP is easy to use with a lot more functionality:
require 'net/ftp'

Net::FTP.open('ftp.ruby-lang.org') do |ftp|
  ftp.login
  ftp.chdir('/pub/ruby')
  puts ftp.list('ruby-1.9.2*')
  puts ftp.nlst

  ruby_file = 'ruby-1.9.2-p136.tar.bz2'
  ftp.getbinaryfile(ruby_file, ruby_file, 1024)
end
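If the server supports the MLSD command (Ruby 2.3+ exposes it), you can also get structured entry information (type, size, mtime) without parsing LIST lines yourself. A rough sketch, untested against this particular server:

require 'net/ftp'

Net::FTP.open('ftp.ruby-lang.org') do |ftp|
  ftp.login
  # MLSD returns Net::FTP::MLSxEntry objects instead of raw text lines.
  ftp.mlsd('/pub/ruby').each do |entry|
    # entry.facts is a hash of server-reported facts, e.g. "type", "size", "modify".
    facts = entry.facts
    puts format('%-40s %-6s %10s %s', entry.pathname, facts['type'], facts['size'], facts['modify'])
  end
end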
Have you tried EventMachine? https://github.com/schleyfox/em-ftp-client