How do I split a URL into 2 parts in Ruby?

I have a ruby script that downloads URLs from an RSS server and then downloads the files at those URLs.
I need to split each URL into two components, like so:
http://www.website.com/dir1/dir2/file.txt
--> 'www.website.com' and 'dir1/dir2/file.txt'
I'm struggling to come up with a way to do this. I've been playing with regular expressions, but nothing has worked. How would others go about doing this?

Use the URI library.
require 'uri'
u = URI.parse("http://www.website.com/dir1/dir2/file.txt")
u.host
# => "www.website.com"
u.path
# => "/dir1/dir2/file.txt"

In a simple way, you could use split (with url holding the URL string):
url.split('/')[2]
# => "www.website.com"
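If you need both halves, a sketch along the same lines (this assumes the URL always carries a scheme such as http://, so the host is the third '/'-separated piece):
parts = "http://www.website.com/dir1/dir2/file.txt".split('/')
host = parts[2]               # => "www.website.com"
path = parts[3..-1].join('/') # => "dir1/dir2/file.txt"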

Related

Regex in Ruby for a URL that is an image

So I'm working on a crawler to get a bunch of images on a page that are saved as links. The relevant code, at the moment, is:
def parse_html(html)
  html_doc = Nokogiri::HTML(html)
  nodes = html_doc.xpath("//a[@href]")
  nodes.inject([]) do |uris, node|
    uris << node.attr('href').strip
  end.uniq
end
I am currently getting a bunch of links, most of which are images, but not all. I want to narrow down the links with a regex before downloading. So far, I haven't been able to come up with a Ruby-friendly regex for the job. The best I have is:
^https?:\/\/(?:[a-z0-9\-]+\.)+[a-z]{2,6}(?:/[^\/?]+)+\.(?:jpg|gif|png)$.match(nodes)
Admittedly, I got that regex from someone else and tried to edit it to work, and I'm failing. One of the big problems I'm having is that the original regex had a few "#"s in it, and I don't know if that is a character I can escape, or if Ruby will just stop reading at that point. Help much appreciated.
I would consider modifying your XPath to include your logic. For example, if you only want the a elements that contain an img, you can use the following:
"//a[img][#href]"
Or even go further and extract just the URIs directly from the href values:
uris = html_doc.xpath("//a[img]/@href").map(&:value)
As some have said, you may not want to use Regex for this, but if you're determined to:
^https?:\/\/.*\.(jpeg|jpg|gif|png)$
is a pretty simple one that will grab anything beginning with http or https and ending with one of the file extensions listed. You should be able to figure out how to extend it; Rubular.com is good for experimenting with these.
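For instance, applied to an array of extracted links (a hedged sketch; uris is assumed to be the array returned by parse_html, and the trailing i flag is added so uppercase extensions also match):
image_re = /^https?:\/\/.*\.(jpeg|jpg|gif|png)$/i
image_uris = uris.grep(image_re)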
Regular expressions are a very powerful tool, but compared to simple string comparisons they are pretty slow.
For your example, I would suggest a simple condition like:
IMAGE_EXTS = %w[gif jpg png]

if IMAGE_EXTS.any? { |ext| uri.end_with?(ext) }
  # ...
end
In the context of your question, you might want to change your method to:
IMAGE_EXTS = %w[gif jpg png]

def parse_html(html)
  uris = []
  Nokogiri::HTML(html).xpath("//a[@href]").each do |node|
    uri = node.attr('href').strip
    uris << uri if IMAGE_EXTS.any? { |ext| uri.end_with?(ext) }
  end
  uris.uniq
end

Join two relative urls

I have the following case where I want to join two relative URLs:
/api/v1/ and /status.
I already searched for how to accomplish this, but the only two solutions I found were URI::join and File.join.
URI::join only works if the first URL is absolute, which is not the case here. Using File.join works, but doesn't feel right in this case.
The Addressable gem solves the problem:
require "addressable/uri"
fragment1 = '/api/v1/'
fragment2 = 'status'
Addressable::URI.join(fragment1, fragment2).to_s
# => "/api/v1/status"

Ruby regexp: capture the path of url

From any URL I want to extract its path.
For example:
URL: https://stackoverflow.com/questions/ask
Path: questions/ask
It shouldn't be difficult:
url[/(?:\w{2,}\/).+/]
But I think I'm using the wrong pattern for 'ignore this' ('?:' doesn't work). What is the right way?
I would suggest you don't do this with a regular expression, and instead use the built in URI lib:
require 'uri'
uri = URI::parse('http://stackoverflow.com/questions/ask')
puts uri.path # results in: /questions/ask
It has a leading slash, but that's easy to deal with =)
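For instance, to drop that slash (a one-line sketch, Ruby 2.5+):
uri.path.delete_prefix('/') # => "questions/ask"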
You can use regex in this case, which is faster than URI.parse:
s = 'http://stackoverflow.com/questions/ask'
s[s[/.*?\/\/[^\/]*\//].size..-1]
# => "questions/ask" (6,8 times faster)
s[/\/(?!.*\.).*/]
# => "/questions/ask" (9,9 times faster, but with an extra slash)
But if you don't care with the speed, use uri, as ctcherry showed, is more readable.
The approach presented by ctcherry is perfectly correct, but I prefer to use request.fullpath instead of including the URI library in the code. Just call request.fullpath in your views or controllers. Be careful, though: if you have any GET parameters in your URL, they will be caught as well; in that case I use split('?').first.
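For context, this is what that looks like inside a Rails controller or view (the query string here is a made-up example):
request.fullpath                   # e.g. "/questions/ask?sort=new"
request.fullpath.split('?').first  # => "/questions/ask"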

upload analogue with XSendFile?

Is there some way to use something similar to x-sendfile for uploading files, e.g. saving a particular stream/parameter from the request to a file, without pulling it wholly into memory?
(In particular, with apache2 and ruby fcgi)
require 'open-uri'

CHUNK_SIZE = 8192

# Stream the remote file to disk in fixed-size chunks instead of
# reading the whole body into memory.
File.open("local_filename.dat", "wb") do |w|
  open("http://some_file.url") do |r|
    w.write(r.read(CHUNK_SIZE)) until r.eof?
  end
end
Apache's ModPorter seems to be the way.

Remove subdomain from string in ruby

I'm looping over a series of URLs and want to clean them up. I have the following code:
# Parse url to remove http, path and check format
o_url = URI.parse(node.attributes['href'])
# Remove www
new_url = o_url.host.gsub('www.', '').strip
How can I extend this to remove the subdomains that exist in some URLs?
I just wrote a library to do this called Domainatrix. You can find it here: http://github.com/pauldix/domainatrix
require 'rubygems'
require 'domainatrix'
url = Domainatrix.parse("http://www.pauldix.net")
url.public_suffix # => "net"
url.domain # => "pauldix"
url.canonical # => "net.pauldix"
url = Domainatrix.parse("http://foo.bar.pauldix.co.uk/asdf.html?q=arg")
url.public_suffix # => "co.uk"
url.domain # => "pauldix"
url.subdomain # => "foo.bar"
url.path # => "/asdf.html?q=arg"
url.canonical # => "uk.co.pauldix.bar.foo/asdf.html?q=arg"
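Tying this back to stripping subdomains: with the accessors shown above, the registrable domain without any subdomain can be reassembled (a small sketch):
"#{url.domain}.#{url.public_suffix}"
# => "pauldix.co.uk"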
This is a tricky issue. Some top-level domains do not accept registrations at the second level.
Compare example.com and example.co.uk. If you simply stripped everything except the last two labels, you would end up with example.com and co.uk, and the latter can never be the intention.
Firefox solves this by filtering by effective top-level domain, and they maintain a list of all these domains. More information at publicsuffix.org.
You can use this list to filter out everything except the domain right next to the effective TLD. I don't know of any Ruby library that does this, but it would be a great idea to release one!
Update: there are C, Perl and PHP libraries that do this. Given the C version, you could create a Ruby extension. Alternatively, you could port the code to Ruby.
For posterity, here's an update from Oct 2014:
I was looking for a more up-to-date dependency to rely on and found the public_suffix gem (RubyGems) (GitHub). It's being actively maintained and handles all the top-level domain and nested-subdomain issues by maintaining a list of the known public suffixes.
In combination with URI.parse for stripping protocol and paths, it works really well:
require 'uri'
require 'public_suffix'

PublicSuffix.parse(URI.parse('https://subdomain.google.co.uk/path/on/path').host).domain
# => "google.co.uk"
The regular expression you'll need here can be a bit tricky, because hostnames can be infinitely complex: you could have multiple subdomains (e.g. foo.bar.baz.com), or the top-level domain (TLD) can have multiple parts (e.g. www.baz.co.uk).
Ready for a complex regular expression? :)
re = /^(?:(?>[a-z0-9-]*\.)+?|)([a-z0-9-]+\.(?>[a-z]*(?>\.[a-z]{2})?))$/i
new_url = o_url.host.gsub(re, '\1').strip
Let's break this into two sections. ^(?:(?>[a-z0-9-]*\.)+?|) will collect subdomains, by matching one or more groups of characters followed by a dot (greedily, so that all subdomains are matched here). The empty alternation is needed in the case of no subdomain (such as foo.com). ([a-z0-9-]+\.(?>[a-z]*(?>\.[a-z]{2})?))$ will collect the actual hostname and the TLD. It allows either for a one-part TLD (like .info, .com or .museum), or a two part TLD where the second part is two characters (like .oh.us or .org.uk).
I tested this expression on the following samples:
foo.com => foo.com
www.foo.com => foo.com
bar.foo.com => foo.com
www.foo.ca => foo.ca
www.foo.co.uk => foo.co.uk
a.b.c.d.e.foo.com => foo.com
a.b.c.d.e.foo.co.uk => foo.co.uk
Note that this regex will not properly match hostnames that have more than two "parts" to the TLD!
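A quick harness to re-run those samples yourself, using the re defined above:
samples = %w[foo.com www.foo.com bar.foo.com www.foo.ca www.foo.co.uk
             a.b.c.d.e.foo.com a.b.c.d.e.foo.co.uk]
samples.each { |host| puts "#{host} => #{host.gsub(re, '\1')}" }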
Something like:
def remove_subdomain(host)
  # Not complete. Add all root domains to the regexp.
  host.sub(/.*?([^.]+(\.com|\.co\.uk|\.uk|\.nl))$/, "\\1")
end
puts remove_subdomain("www.example.com") # -> example.com
puts remove_subdomain("www.company.co.uk") # -> company.co.uk
puts remove_subdomain("www.sub.domain.nl") # -> domain.nl
You still need to list every suffix you consider a root domain. '.uk' might technically be the root domain, but you probably want to keep the host just before the '.co.uk' part.
Detecting the subdomain of a URL is non-trivial in the general sense: it's easy if you only consider the basic ones, but once you get into international territory it becomes tricky.
Edit: Consider stuff like http://mylocalschool.k12.oh.us et al.
Why not just strip the .com or .co.uk and then split on '.' and get the last element?
some_url.host.sub(/(\.co\.uk|\.[^.]*)$/, '').split('.')[-1] + $1
Have to say it feels hacky. Are there any other domains like .co.uk?
I've wrestled with this a lot writing various and sundry crawlers and scrapers over the years. My favorite gem for solving it is FuzzyUrl by Pete Gamache: https://github.com/gamache/fuzzyurl. It's available for Ruby, JavaScript, and Elixir.
