Jekyll - generating JSON files alongside the HTML files

I'd like to make Jekyll create an HTML file and a JSON file for each page and post. This is to offer a JSON API of my Jekyll blog - e.g. a post can be accessed either at /posts/2012/01/01/my-post.html or /posts/2012/01/01/my-post.json
Does anyone know if there's a Jekyll plugin, or how I would begin to write such a plugin, to generate two sets of files side-by-side?

I was looking for something like this too, so I learned a bit of ruby and made a script that generates JSON representations of Jekyll blog posts. I’m still working on it, but most of it is there.
I put this together with Gruntjs, Sass, Backbonejs, Requirejs and Coffeescript. If you like, you can take a look at my jekyll-backbone project on Github.
# encoding: utf-8
#
# Title:
# ======
# Jekyll to JSON Generator
#
# Description:
# ============
# A plugin for generating JSON representations of your
# site content for easy use with JS MVC frameworks like Backbone.
#
# Author:
# ======
# Jezen Thomas
# jezenthomas@gmail.com
# http://jezenthomas.com
module Jekyll
  require 'json'

  class JSONGenerator < Generator
    safe true
    priority :low

    def generate(site)
      # Converter for .md > .html
      converter = site.getConverterImpl(Jekyll::Converters::Markdown)

      # Iterate over all posts
      site.posts.each do |post|
        # Encode the HTML to JSON
        hash = { "content" => converter.convert(post.content) }
        title = post.title.downcase.tr(' ', '-').delete("’!")

        # Start building the path
        path = "_site/dist/"

        # Add categories to path if they exist
        if post.data['categories'].class == String
          path << post.data['categories'].tr(' ', '/')
        elsif post.data['categories'].class == Array
          path << post.data['categories'].join('/')
        end

        # Add the sanitized post title to complete the path
        path << "/#{title}"

        # Create the directories from the path
        FileUtils.mkpath(path) unless File.exists?(path)

        # Create the JSON file and inject the data
        f = File.new("#{path}/raw.json", "w+")
        f.puts JSON.generate(hash)
      end
    end
  end
end

There are two ways you can accomplish this, depending on your needs. If you want to use a layout to accomplish the task, then you want to use a Generator. You would loop through each page of your site and generate a new .json version of the page. You could optionally make which pages get generated conditional upon the site.config or the presence of a variable in the YAML front matter of the pages. Jekyll uses a generator to handle slicing blog posts up into indices with a given number of posts per page.
The second way is to use a Converter (same link, scroll down). The converter will allow you to execute arbitrary code on your content to translate it to a different format. For an example of how this works, check out the markdown converter that comes with Jekyll.
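To make the Generator route concrete, here is a minimal sketch. It leans on the same Jekyll 1.x-era API the answers on this page already use (post.render, post.destination); the class name and the JSON payload are illustrative, not taken from any existing plugin:
require 'json'
require 'fileutils'

module Jekyll
  class JsonSiblingGenerator < Generator
    safe true

    def generate(site)
      site.posts.each do |post|
        # Render the post without a layout, then write a .json file
        # next to where the .html version will land.
        post.render({}, site.site_payload)
        path = post.destination(site.source).sub(/\.html$/, '.json')
        FileUtils.mkdir_p(File.dirname(path))
        File.open(path, 'w') do |f|
          f.write(JSON.generate('title' => post.title, 'content' => post.content))
        end
      end
    end
  end
end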
I think this is a cool idea!

Take a look at JekyllBot and the following code.
require 'json'

module Jekyll
  class JSONPostGenerator < Generator
    safe true

    def generate(site)
      site.posts.each do |post|
        render_json(post, site)
      end
      site.pages.each do |page|
        render_json(page, site)
      end
    end

    def render_json(post, site)
      # add `json: false` to YAML to prevent JSONification
      if post.data.has_key? "json" and !post.data["json"]
        return
      end
      path = post.destination(site.source)
      # only act on posts/pages indexed in /index.html
      return if /\/index\.html$/.match(path).nil?
      # change file path
      path['/index.html'] = '.json'
      # render post using no template(s)
      post.render({}, site.site_payload)
      # prepare output for JSON
      post.data["related_posts"] = related_posts(post, site)
      output = post.to_liquid
      output["next"] = output["next"].id unless output["next"].nil?
      output["previous"] = output["previous"].id unless output["previous"].nil?
      # write
      # TODO: figure out how to overwrite post.destination
      # so we can just use post.write
      FileUtils.mkdir_p(File.dirname(path))
      File.open(path, 'w') do |f|
        f.write(output.to_json)
      end
    end

    def related_posts(post, site)
      related = []
      return related unless post.instance_of?(Post)
      post.related_posts(site.posts).each do |post|
        related.push :url => post.url, :id => post.id, :title => post.to_liquid["title"]
      end
      related
    end
  end
end
Both should do exactly what you want.

Related

How to read data from a different file without using YAML or JSON

I'm experimenting with a Ruby script that will add data to a Neo4j database using its REST API. (Here's the tutorial with all the code, if interested.)
The script works if I include the hash data structure in the initialize method, but I would like to move the data into a different file so I can make changes to it separately using a different script.
I'm relatively new to Ruby. If I copy the following data structure into a separate file, is there a simple way to read it from my existing script when I call @data? I've heard one could do something with YAML or JSON (I'm not familiar with how either works). What's the easiest way to read a file, and how would I go about coding that?
# I want to copy this data into a different file and read it with my script when I call @data.
{
  nodes: [
    {:label=>"Person", :title=>"title_here", :name=>"name_here"}
  ]
}
And here is part of my code, it should be enough for the purposes of this question.
class RGraph
  def initialize
    @url = 'http://localhost:7474/db/data/cypher'
    # If I put this hash structure into a different file, how do I make @data read that file?
    @data = {
      nodes: [
        {:label=>"Person", :title=>"title_here", :name=>"name_here"}
      ]
    }
  end

  # more code here... not relevant to question

  def create_nodes
    # Scan file, find each node and create it in Neo4j
    @data.each do |key, value|
      if key == :nodes
        @data[key].each do |node|           # Cycle through each node
          next unless node.has_key?(:label) # Make sure this node has a label
          # We have sufficient data to create a node
          label = node[:label]
          attr = Hash.new
          node.each do |k, v|   # Hunt for additional attributes
            next if k == :label # Don't create an attribute for "label"
            attr[k] = v
          end
          create_node(label, attr)
        end
      end
    end
  end
end

rGraph = RGraph.new
rGraph.create_nodes
Given that OP said in comments "I'm not against using either of those", let's do it in YAML (which preserves the Ruby object structure best). Save it:
@data = {
  nodes: [
    {:label=>"Person", :title=>"title_here", :name=>"name_here"}
  ]
}

require 'yaml'
File.write('config.yaml', YAML.dump(@data))
This will create config.yaml:
---
:nodes:
- :label: Person
  :title: title_here
  :name: name_here
If you read it in, you get exactly what you saved:
require 'yaml'
@data = YAML.load(File.read('config.yaml'))
puts @data.inspect
# => {:nodes=>[{:label=>"Person", :title=>"title_here", :name=>"name_here"}]}
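Since you mentioned JSON as well: the same round trip works with the json standard library. One caveat with this sketch: JSON has no symbol type, so you need symbolize_names: true when reading it back to get the same structure as above:
require 'json'

File.write('config.json', JSON.generate(@data))

@data = JSON.parse(File.read('config.json'), symbolize_names: true)
puts @data.inspect
# => {:nodes=>[{:label=>"Person", :title=>"title_here", :name=>"name_here"}]}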

Why do I get "undefined local variable or method" in my code?

I want to scrape links from a Google search query, but I can't save the results into an array (links). I get this error:
error : test.rb:17:in `parse_result': undefined local variable or
method `links' for main:Object (NameError)
This is my code:
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('https://www.google.fr/search?q=estimation+immobilier'))
links = []

def parse_results(doc)
  doc.search('.g').map do |element|
    parse_block(element)
  end
end

def parse_block(element)
  tempo = element.search('.r').to_s
  links << tempo.scan(/<a href=\"\/url\?q=(.*)&sa=U/)[0][0]
end

parse_results(doc)
puts links
The problem is variable scope, and it's a very common one.
I'd rewrite the code like this:
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('https://www.google.fr/search?q=estimation+immobilier'))

def parse_results(doc)
  _links = []
  doc.search('.g').each do |element|
    _links << parse_block(element)
  end
  _links
end

def parse_block(element)
  tempo = element.search('.r').to_s
  tempo.scan(/<a href=\"\/url\?q=(.*)&sa=U/)[0][0]
end

links = parse_results(doc)
puts links
links could be defined as an instance, class, or global variable, but all of those have a code smell: you'd be circumventing scoping, which is really your friend when it comes to avoiding wasted space on the variable stack.
scan is going to return an array of results, so push its results to _links.
map wasn't the right method for what you're doing; each is more appropriate since you're looping over the results of searching for class="g" in the HTML. Using map, you could write parse_results() like:
def parse_results(doc)
  doc.search('.g').map { |element| parse_block(element) }
end
parse_block() isn't written correctly, or at least it can be written a lot more idiomatically for Nokogiri. If you ever have to resort to using regex when using an XML or HTML parser, you know there's something that should be reconsidered. Looking at what's happening, here's what the code sees as it dives through parse_results() and parse_block():
doc.search('.g').first.search('.r').to_s
# => "<h3 class=\"r\"><b>Estimation immobiliere</b> gratuite - MeilleursAgents.com</h3>\n"
You're trying to grab a parameter from the links, so use Nokogiri to do that cleanly, instead of trying to use a pattern and scan. I opened the page and parsed it as you did, then tried this:
doc.search('.g h3.r a').map(&:to_html)
# => ["<b>Estimation immobiliere</b> gratuite - MeilleursAgents.com",
# "<b>Estimation immobili\u00E8re</b> gratuite (maison, appartement <b>...</b> - Drimki",
# "<b>Estimation immobili\u00E8re</b> avec Particulier \u00E0 Particulier | De <b>...</b> - P.a.p",
# "LaCoteImmo - <b>Estimation immobili\u00E8re</b> et Prix immobilier",
# "<b>Estimation immobili\u00E8re</b> - Efficity",
# "<b>Estimation</b> gratuite d'un bien <b>immobilier</b> - ParuVendu",
# "<b>Estimer</b> la valeur de son bien <b>immobilier</b>- Meilleurtaux.com",
# "<b>Estimation immobiliere</b> gratuite avec MeilleursAgents.com",
# "<b>Estimation</b> gratuite de votre appartement en ligne - Refleximmo",
# "<b>Estimation Immobili\u00E8re</b> - Immobilier - Capital.fr"]
A bit more comprehensive CSS narrowed down the returned results significantly.
A bit of tweaking results in:
doc.search('.g h3.r a').map{ |a| a['href'] }
# => ["/url?q=http://www.meilleursagents.com/estimation-immobiliere/&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CBQQFjAA&usg=AFQjCNFNCH0iR3pr0fQX6wSjcj1_s3CsRg",
# "/url?q=http://www.drimki.fr/estimation-immobiliere-gratuite&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CBoQFjAB&usg=AFQjCNGUbFcsWWQY-bc8Vu-d-GD9YFcbVg",
# "/url?q=http://www.pap.fr/evaluation/estimation-immobiliere&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CCAQFjAC&usg=AFQjCNGztbZlDWWGS4kNPHzR06ayRdAQKg",
# "/url?q=http://www.lacoteimmo.com/&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CCYQFjAD&usg=AFQjCNEZK_JVduJKJvFpDDXu4yIsTXGMFg",
# "/url?q=http://www.efficity.com/estimation-immobiliere/&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CCwQFjAE&usg=AFQjCNHHc-GuJoHXTx3N3_Ex_fz1KUp1cg",
# "/url?q=http://www.paruvendu.fr/pa/prix-immobilier-prix-m2-estimation-gratuite-bien-immobilier/&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CDIQFjAF&usg=AFQjCNGmwWmo19asoooWz6Lbh0YMOC8wlg",
# "/url?q=http://www.meilleurtaux.com/services-immo/vendre-un-bien-immobilier/estimation-immobiliere.html&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CDgQFjAG&usg=AFQjCNFJ_fAsPBmZvVU60jRLh-yKzvuEiw",
# "/url?q=http://prix-immobilier.latribune.fr/estimation-immobiliere/&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CD4QFjAH&usg=AFQjCNHHaVmKGg4jiaT-6AwZAfby2-H4sg",
# "/url?q=http://www.refleximmo.com/estimation-immobiliere-gratuite-appartement&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CEQQFjAI&usg=AFQjCNGiBMMYrK-EO9wqIh82eW2uFT0n8w",
# "/url?q=http://www.capital.fr/immobilier/estimation-immobiliere&sa=U&ei=OoHSU7KaEszwoAS__YDoCA&ved=0CEoQFjAJ&usg=AFQjCNEf8FQuKCYBMXBB5FA2dJ2gor4Wmg"]
At this point it's obvious we're looking at an array of URLs whose real targets are tucked into the q query parameter, which is something Ruby's built-in URI class can pick apart cleanly:
require 'uri'

doc.search('.g h3.r a').map { |a|
  uri = URI.parse(a['href'])
  query_hash = Hash[URI::decode_www_form(uri.query)]
  query_hash['q']
}
# => ["http://www.meilleursagents.com/estimation-immobiliere/",
#     "http://www.drimki.fr/estimation-immobiliere-gratuite",
#     "http://www.pap.fr/evaluation/estimation-immobiliere",
#     ...
That should give you enough information to rewrite your code a bit more robustly. Regular expressions are not good tools for parsing HTML, and it's better to use well-tested, pre-built wheels whenever possible, like URI.
The reason I say this approach is more robust is because of this piece of code:
links << tempo.scan(/<a href=\"\/url\?q=(.*)&sa=U/)[0][0]
That search is very prone to breaking. URL formats can change quickly, especially if a site suspects that people are scraping their pages and they don't want scraping to happen, such as Google. They could easily change the order of the parameters, they could change the way the link is written in the page, etc., since HTML allows very liberal formatting of the source and a browser will still render the same view to the user. Imagine the fun you'd have if Google chose to render a link like:
<a
href="/url?amp;sa=U&q=...
The regex would break, causing your code to break, whereas using URI and Nokogiri to drill down would continue to work.
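Here's a quick way to see the difference. The hrefs below are made up, but the reordering is exactly the kind of change that kills the regex while the URI approach keeps working:
require 'uri'

hrefs = [
  '/url?q=http://example.com/&sa=U',  # parameter order the regex expects
  '/url?sa=U&q=http://example.com/',  # same data, parameters reordered
]

hrefs.each do |href|
  # The regex only matches the first form:
  p href.scan(/\/url\?q=(.*)&sa=U/)
  # URI-based extraction works for both:
  p Hash[URI.decode_www_form(URI.parse(href).query)]['q']
end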
It works if you make links an instance variable:
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('https://www.google.fr/search?q=estimation+immobilier'))
@links = []

def parse_results(doc)
  doc.search('.g').map do |element|
    parse_block(element)
  end
end

def parse_block(element)
  @links << tempo.scan(/<a href=\"\/url\?q=(.*)&sa=U/)[0][0] if tempo = element.search('.r').to_s
end

parse_results(doc)
puts @links

How to get RSS feed in xml format for ruby script

I am using the following Ruby script from this dashing widget, which retrieves an RSS feed, parses it, and sends the parsed titles and descriptions to a widget.
require 'net/http'
require 'uri'
require 'nokogiri'
require 'htmlentities'

news_feeds = {
  "seattle-times" => "http://seattletimes.com/rss/home.xml",
}

Decoder = HTMLEntities.new

class News
  def initialize(widget_id, feed)
    @widget_id = widget_id
    # pick apart feed into domain and path
    uri = URI.parse(feed)
    @path = uri.path
    @http = Net::HTTP.new(uri.host)
  end

  def widget_id()
    @widget_id
  end

  def latest_headlines()
    response = @http.request(Net::HTTP::Get.new(@path))
    doc = Nokogiri::XML(response.body)
    news_headlines = []
    doc.xpath('//channel/item').each do |news_item|
      title = clean_html(news_item.xpath('title').text)
      summary = clean_html(news_item.xpath('description').text)
      news_headlines.push({ title: title, description: summary })
    end
    news_headlines
  end

  def clean_html(html)
    html = html.gsub(/<\/?[^>]*>/, "")
    html = Decoder.decode(html)
    return html
  end
end

@News = []
news_feeds.each do |widget_id, feed|
  begin
    @News.push(News.new(widget_id, feed))
  rescue Exception => e
    puts e.to_s
  end
end

SCHEDULER.every '60m', :first_in => 0 do |job|
  @News.each do |news|
    headlines = news.latest_headlines()
    send_event(news.widget_id, { :headlines => headlines })
  end
end
The example rss feed works correctly because the URL is for an xml file. However I want to use this for a different rss feed that does not provide an actual xml file. This rss feed I want is at http://www.ttc.ca/RSS/Service_Alerts/index.rss
This doesn't seem to display anything on the widget. Instead of using "http://www.ttc.ca/RSS/Service_Alerts/index.rss", I also tried "http://www.ttc.ca/RSS/Service_Alerts/index.rss?format=xml" and "view-source:http://www.ttc.ca/RSS/Service_Alerts/index.rss" but with no luck. Does anyone know how I can get the actual xml data related to this rss feed so that I can use it with this ruby script?
You're right that the script won't work on that feed as written, since it's built specifically to parse the example feed's XML layout. The feed you're trying to parse is RDF XML (RSS 1.0), and you can use the rdf-rdfxml gem (RDF::RDFXML) to parse it.
Something like:
require 'nokogiri'
require 'rdf/rdfxml'

rss_feed = 'http://www.ttc.ca/RSS/Service_Alerts/index.rss'

RDF::RDFXML::Reader.open(rss_feed) do |reader|
  # use reader to iterate over elements within the document
end
From here you can try learning how to use RDFXML to extract the content you want. I'd begin by inspecting the reader object for methods I could use:
puts reader.methods.sort - Object.methods
That will print out the reader's own methods; look for one you might be able to use for your purposes, such as reader.each_entry.
To further dig down you can inspect what each entry looks like:
reader.each_entry do |entry|
  puts "----here's an entry----"
  puts entry.inspect
end
or see what methods you can call on the entry:
reader.each_entry do |entry|
  puts "----here's an entry's methods----"
  puts entry.methods.sort - Object.methods
  break
end
I was able to crudely find some titles and descriptions using this hack job:
RDF::RDFXML::Reader.open('http://www.ttc.ca/RSS/Service_Alerts/index.rss') do |reader|
  reader.each_object do |object|
    puts object.to_s if object.is_a? RDF::Literal
  end
end
# returns:
# TTC Service Alerts
# http://www.ttc.ca/Service_Advisories/index.jsp
# TTC Service Alerts.
# TTC.ca
# http://www.ttc.ca
# http://www.ttc.ca/images/ttc-main-logo.gif
# Service Advisory
# http://www.ttc.ca/Service_Advisories/all_service_alerts.jsp#Service+Advisory
# 196 York University Rocket route diverting northbound via Sentinel, Finch due to a collision that has closed the York U Bus way.
# - Affecting: Bus Routes: 196 York University Rocket
# 2013-12-17T13:49:03.800-05:00
# Service Advisory (2)
# http://www.ttc.ca/Service_Advisories/all_service_alerts.jsp#Service+Advisory+(2)
# 107B Keele North route diverting northbound via Keele, Lepage due to a collision that has closed the York U Bus way.
# - Affecting: Bus Routes: 107 Keele North
# 2013-12-17T13:51:08.347-05:00
But I couldn't quickly find a way to know which one was a title, and which a description :/
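Another avenue worth trying, since RSS 1.0 is RDF-based: Ruby's standard-library rss parser understands it and gives each item named accessors, which would sidestep the title-versus-description guessing above. A sketch (I haven't run this against that exact feed):
require 'rss'
require 'open-uri'

feed = RSS::Parser.parse(open('http://www.ttc.ca/RSS/Service_Alerts/index.rss').read, false)
feed.items.each do |item|
  puts item.title
  puts item.description
end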
Finally, if you still can't find how to extract what you want, start a new question with this info.
Good luck!

Jekyll extension calling an external script

I've written a simple Jekyll plugin to pull in my tweets using the twitter gem (see below). I'd like to keep the ruby script for the plugin on my open Github site, but following recent changes to the twitter API, the gem now requires authentication credentials.
require 'twitter'   # Twitter API
require 'redcarpet' # Formatting links

module Jekyll
  class TwitterFeed < Liquid::Tag
    def initialize(tag_name, text, tokens)
      super
      input = text.split(/, */)
      @user = input[0]
      @count = input[1]
      if input[1] == nil
        @count = 3
      end
    end

    def render(context)
      # Initialize a redcarpet markdown renderer to autolink urls
      # Could use octokit instead to get GFM
      markdown = Redcarpet::Markdown.new(Redcarpet::Render::HTML,
                                         :autolink => true,
                                         :space_after_headers => true)

      ## Attempt to load credentials externally here:
      require '~/.twitter_auth.rb'

      out = "<ul>"
      tweets = @client.user_timeline(@user)
      for i in 0 ... @count.to_i
        out = out + "<li>" + markdown.render(tweets[i].text) +
              " <a href=\"http://twitter.com/" + @user + "/statuses/" +
              tweets[i].id.to_s + "\">" + tweets[i].created_at.strftime("%I:%M %Y/%m/%d") +
              "</a> " + "</li>"
      end
      out + "</ul>"
    end
  end
end

Liquid::Template.register_tag('twitter_feed', Jekyll::TwitterFeed)
The problem is with this line:
require '~/.twitter_auth.rb'
Here, twitter_auth.rb contains something like:
require 'twitter'

@client = Twitter::Client.new(
  :consumer_key => "CEoYXXXXXXXXXXX",
  :consumer_secret => "apnHXXXXXXXXXXXXXXXXXXXXXXXX",
  :oauth_token => "105XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  :oauth_token_secret => "BJ7AlXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
)
If I place these contents directly into the script above, then my plugin script works just fine. But when I move them to an external file and try to read them in as shown, Jekyll fails to authenticate. The function seems to work just fine when I call it from irb, so I am not sure why it does not work during the Jekyll build.
I think you may be confused about how require works. When you call require, Ruby first checks whether the file has already been required; if so, it returns immediately. If it hasn't, the contents of the file are run, but not in the same scope as the require statement. In other words, using require isn't the same as replacing the require statement with the contents of the file (which is how, for example, C's #include works).
In your case, when you require your ~/.twitter_auth.rb file, the @client instance variable is created, but as an instance variable of the top-level main object, not as an instance variable of the TwitterFeed instance from which require is being called.
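You can see this in miniature with a throwaway file; the path and the value here are made up purely for demonstration:
# Write a stand-in for ~/.twitter_auth.rb:
File.write('/tmp/auth_demo.rb', '@client = "the credentials"')

require '/tmp/auth_demo' # runs the file; @client becomes an ivar of the top-level main object

class TwitterFeed
  def render
    @client.inspect # a different @client: this instance's, which was never assigned
  end
end

puts @client                # => the credentials (we are main here)
puts TwitterFeed.new.render # => nil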
You could do something like assign the Twitter::Client object to a constant that you could then reference from the render method:
MyClient = Twitter::Client.new(...)
and then
require '~/twitter_auth.rb'
@client = MyClient
...
I only suggest this as an explanation of what’s happening with require; it’s not really a good technique.
A better option, I think, would be to keep your credentials in a simple data format in your home directory, then read them from your script and create the Twitter client with them. In this case YAML would probably do the job.
First replace your ~/twitter_auth.rb with a ~/twitter_auth.yaml that looks something like:
:consumer_key: "CEoYXXXXXXXXXXX"
:consumer_secret: "apnHXXXXXXXXXXXXXXXXXXXXXXXX"
:oauth_token: "105XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
:oauth_token_secret: "BJ7AlXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
Then, where you have require '~/twitter_auth.rb' in your class, replace it with this (you’ll also need require 'yaml' at the top of the file):
@client = Twitter::Client.new(YAML.load_file(File.expand_path('~/twitter_auth.yaml')))
# (File.expand_path is needed because YAML.load_file won't expand "~" on its own.)

how to store the name of nested files in a variable and loop through them in rake

I have the following rake file to create a static version of my sinatra app,
stolen from http://github.com/semanticart/stuff-site/blob/master/Rakefile
class View
  attr_reader :permalink

  def initialize(path)
    filename = File.basename(path)
    @permalink = filename[0..-6]
  end
end

view_paths = Dir.glob(File.join(File.dirname(__FILE__), 'views/pages', '*.haml'))
ALL_VIEWS = view_paths.map { |path| View.new(path) }

task :build do
  def dump_request_to_file(url, file)
    Dir.mkdir(File.dirname(file)) unless File.directory?(File.dirname(file))
    File.open(file, 'w') { |f| f.print @request.get(url).body }
  end

  static_dir = File.join(File.dirname(__FILE__), 'public')

  require 'sinatra'
  require 'c4eo'

  @request = Rack::MockRequest.new(Sinatra::Application)

  ALL_VIEWS.each do |view|
    puts view
    dump_request_to_file("/#{view.permalink}", File.join(static_dir, view.permalink + '.html'))
  end
end
ALL_VIEWS is now an array of View objects built from all the Haml files in the root of my 'views/pages' directory.
How do I modify ALL_VIEWS and the dump_request_to_file method to cycle through all the subdirectories in my views/pages directory?
My views directory looks a bit like this: http://i45.tinypic.com/167unpw.gif
If it makes life a lot easier, I could have all my files named index.haml, inside directories.
Thanks
To cycle through all subdirs, change 'views/pages' to 'views/pages/**'.
The double splat tells Dir.glob to search recursively; see the docs at
http://ruby-doc.org/core/classes/Dir.html#M002322
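In other words, the glob that gets built changes like this (paths illustrative):
Dir.glob('views/pages/*.haml')    # only .haml files directly under views/pages
Dir.glob('views/pages/**/*.haml') # .haml files at any depth under views/pages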
Note that I haven't looked thoroughly at your use case, but preliminarily it appears that you may have trouble generating a permalink. When I checked the results, I got:
[#<View:0x1010440a0 @permalink="hound">,
 #<View:0x101044078 @permalink="index">,
 #<View:0x101044000 @permalink="hound">,
 #<View:0x101043f88 @permalink="index">,
 #<View:0x101043f10 @permalink="references">,
 #<View:0x101043e98 @permalink="do_find">,
 #<View:0x101043e20 @permalink="index">,
 #<View:0x101043da8 @permalink="README">]
Which were generated from these files:
["/Users/josh/deleteme/fileutilstest/views/pages/bar/cheese/rodeo/hound.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/bar/cheese/rodeo/outrageous/index.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/bar/pizza/hound.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/bar/pizza/index.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/bar/pizza/references.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/do_find.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/tutorials/index.haml",
"/Users/josh/deleteme/fileutilstest/views/pages/tutorials/README.haml"]
And it looks like you create the link with:
File.join(static_dir, view.permalink+'.html')
So you can see that, in this case, three different views would all end up at the same path, static_dir/index.html.
A fairly obvious solution is to include the relative portion of the link, so it would become
static_dir/bar/cheese/rodeo/outrageous/index.html
static_dir/bar/pizza/index.html
static_dir/tutorials/index.html
EDIT: Regarding how to find the relative URL, this seems to work:
class View
  attr_reader :permalink

  def initialize(root_path, path)
    root_path = File.expand_path(root_path).sub(/\/?$/, '/')
    path = File.expand_path path
    filename = path.gsub root_path, String.new
    raise "#{path} does not appear to be a subdir of #{root_path}" unless root_path + filename == path
    @permalink = filename[0..-6]
  end
end

view_paths = Dir.glob(File.join(File.dirname(__FILE__), 'views/pages/**', '*.haml'))
ALL_VIEWS = view_paths.map { |path| View.new 'views/pages', path }

require 'pp'
pp ALL_VIEWS
I'm not all that keen on the [0..-6] thing; it only works if you know your file has a suffix and that the suffix is five characters long including the dot. But I'm going to leave it alone, since I don't know how you'd want to handle the situations I can anticipate (e.g. if you generate an HTML file from the haml and serve that up, you now have two files, index.html and index.haml, which both become just index once their extensions are removed; and a file like styles.css loses part of its name when you try to strip its extension with [0..-6]).