I've written some ruby CGI scripts (using the Ruby CGI class) that I serve from my production server using lighttpd. I want to test them on my development server using thin. Basically, I want to drop all my CGI scripts in a directory and start thin in that directory. Then, any requests to http://localhost:3000/<script> should just execute <script> in the current directory and return the results. If thin has a built-in way of doing this, I can't find it. I would imagine the Rack config file for this is easy if you know what you're doing, but I don't.
Update:
This rackup file seems to work. I'm not sure if it's the best solution, but it should be fine for a development environment.
run(lambda do |env|
  require 'rubygems'
  require 'systemu'

  script = env['REQUEST_PATH'][1..-1] + '.rb'
  response = ''
  err = ''
  systemu(['ruby', script],
          'stdout' => response,
          'stderr' => err,
          'env'    => { 'foo' => 'bar' })

  if err.length > 0
    [500, {'Content-Type' => 'text/plain'}, err]
  else
    idx = 0
    status = -1
    headers = {}
    while true
      line_end = response.index("\n", idx)
      line = response[idx..line_end].strip
      idx = line_end + 1
      if status < 0
        if line =~ /(\d\d\d)/
          status = $1.to_i
        else
          raise "Invalid status line: #{line}"
        end
      elsif line.empty?
        break
      else
        name, value = line.split(/: ?/)
        headers[name] = value
      end
    end
    content = response[idx..-1]
    [status, headers, content]
  end
end)
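With this saved as config.ru in the directory of CGI scripts, thin can be pointed at it directly; a sketch of the invocation (the port is just an example):

% thin start -R config.ru -p 3000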
I'm a little unclear as to why Rack is necessary at all. If you wrote the script using Ruby's built-in CGI module, you should be able to just tell thin to treat the directory as a cgi-bin, just like the Apache ScriptAlias directive, and Ruby CGI will take care of the rest. If thin can't do this, perhaps lighttpd would be a better solution.
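For reference, the lighttpd side of that suggestion is a small mod_cgi block; this is only a sketch, and the URL pattern and interpreter path are assumptions:

# lighttpd.conf (illustrative only)
server.modules += ( "mod_cgi" )
$HTTP["url"] =~ "\.rb$" {
    cgi.assign = ( ".rb" => "/usr/bin/ruby" )
}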
I am writing my first Ruby script and am curious how to actually reference gems in the script. I am unable to test the code beforehand because it reads from an email piped to it via /etc/aliases.
Anyone with experience writing Ruby scripts who can advise?
P.S. There are probably many bugs, because the code hasn't been tested or refactored.
Sample Script
#!/usr/bin/env ruby
# Reading files
mail = File.open(ARGV[0])
lines = []
mail.each_with_index do |line, i|
  lines[i] = line.strip # remove leading and trailing spaces
end

first_line = lines[1]
if first_line =~ /^(256)/
  phone_number = first_line.gsub("+", "")
else
  phone_number = "256#{first_line.gsub(/^0+/, "")}"
end
message = lines[2].strip

# Sending message
url = "http://xxxxxxxxxxx.com/api/v2/json/messages?token=XXXXXXXXXXXXXXXXXXXXXXXXXXX&to=#{phone_number}&from=XXXXXX&message=#{CGI.escape(message)}"
5.times do |i|
  response = HTTParty.get(url)
  body = JSON.parse(response.body)
  if body["status"] == "Success"
    break
  end
end
The gems in question are CGI, HTTParty, and JSON (for parsing).
Using external gems can be done by calling the "require" method.
So to include them in your script, the first few lines could be something like this:
#!/usr/bin/env ruby
require "json"
require "cgi"
require "httparty"
#rest of your code...
I assume you have installed your gems with gem install <gemname>?
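For example (httparty is the only third-party gem here; cgi ships with Ruby, and json has been bundled since 1.9):

% gem install httparty
% gem install json   # only needed on Rubies older than 1.9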
I've written a simple Jekyll plugin to pull in my tweets using the twitter gem (see below). I'd like to keep the Ruby script for the plugin on my public GitHub site, but following recent changes to the Twitter API, the gem now requires authentication credentials.
require 'twitter'    # Twitter API
require 'redcarpet'  # Formatting links

module Jekyll
  class TwitterFeed < Liquid::Tag
    def initialize(tag_name, text, tokens)
      super
      input = text.split(/, */)
      @user = input[0]
      @count = input[1]
      if input[1] == nil
        @count = 3
      end
    end

    def render(context)
      # Initialize a redcarpet markdown renderer to autolink urls
      # Could use octokit instead to get GFM
      markdown = Redcarpet::Markdown.new(Redcarpet::Render::HTML,
                                         :autolink => true,
                                         :space_after_headers => true)

      ## Attempt to load credentials externally here:
      require '~/.twitter_auth.rb'

      out = "<ul>"
      tweets = @client.user_timeline(@user)
      for i in 0 ... @count.to_i
        out = out + "<li>" + markdown.render(tweets[i].text) +
              " <a href=\"http://twitter.com/" + @user + "/statuses/" +
              tweets[i].id.to_s + "\">" + tweets[i].created_at.strftime("%I:%M %Y/%m/%d") +
              "</a> " + "</li>"
      end
      out + "</ul>"
    end
  end
end

Liquid::Template.register_tag('twitter_feed', Jekyll::TwitterFeed)
The line
require '~/.twitter_auth.rb'
refers to a file ~/.twitter_auth.rb that contains something like:
require 'twitter'
@client = Twitter::Client.new(
  :consumer_key       => "CEoYXXXXXXXXXXX",
  :consumer_secret    => "apnHXXXXXXXXXXXXXXXXXXXXXXXX",
  :oauth_token        => "105XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  :oauth_token_secret => "BJ7AlXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
)
If I place these contents directly into the script above, then my plugin script works just fine. But when I move them to an external file and try to read them in as shown, Jekyll fails to authenticate. The function seems to work just fine when I call it from irb, so I am not sure why it does not work during the Jekyll build.
I think that you may be confused about how require works. When you call require, Ruby first checks if the file has already been required; if so, it just returns directly. If it hasn't, then the contents of the file are run, but not in the same scope as the require statement. In other words, using require isn't the same as replacing the require statement with the contents of the file (which is how, for example, C's #include works).
In your case, when you require your ~/.twitter_auth.rb file, the @client instance variable is being created, but as an instance variable of the top-level main object, not as an instance variable of the TwitterFeed instance where require is being called from.
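A minimal, self-contained sketch of that behaviour (the file path and the Feed class are made up purely for the demo):

File.open("/tmp/auth_demo.rb", "w") { |f| f.puts '@client = "top-level client"' }

class Feed
  def render
    require "/tmp/auth_demo.rb"
    # The @client assigned inside the required file belongs to the
    # top-level main object, not to this Feed instance, so it is nil here.
    @client.inspect
  end
end

puts Feed.new.render                    # => nil
puts eval("@client", TOPLEVEL_BINDING)  # => top-level client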
You could do something like assign the Twitter::Client object to a constant that you could then reference from the render method:
MyClient = Twitter::Client.new{...
and then
require '~/twitter_auth.rb'
@client = MyClient
...
I only suggest this as an explanation of what’s happening with require, it’s not really a good technique.
A better option, I think, would be to keep your credentials in a simple data format in your home directory, then read them from your script and create the Twitter client with them. In this case YAML would probably do the job.
First, replace your ~/twitter_auth.rb with a ~/twitter_auth.yaml that looks something like:
:consumer_key: "CEoYXXXXXXXXXXX"
:consumer_secret: "apnHXXXXXXXXXXXXXXXXXXXXXXXX"
:oauth_token: "105XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
:oauth_token_secret: "BJ7AlXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
Then where you have require "~/twitter_auth.rb" in your class, replace it with this (you'll also need require 'yaml' at the top of the file):
@client = Twitter::Client.new(YAML.load_file(File.expand_path("~/twitter_auth.yaml")))
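Note the leading colons on the keys in the YAML file: they load as Ruby symbols, so the hash handed to Twitter::Client.new has the same symbol keys as the original options hash, roughly (values elided):

YAML.load_file(File.expand_path("~/twitter_auth.yaml"))
# => { :consumer_key => "CEoY...", :consumer_secret => "apnH...", ... }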
I'm running a simple thin server that publishes some messages to different queues. The code looks like:
require "rubygems"
require "thin"
require "amqp"
require 'msgpack'
app = Proc.new do |env|
  params = Rack::Request.new(env).params
  command = params['command'].strip rescue "no command"
  number  = params['number'].strip  rescue "no number"
  p command
  p number

  AMQP.start do
    if command =~ /\A(create|c|r|register)\z/i
      MQ.queue("create").publish(number)
    elsif m = (/\A(Answer|a)\s?(\d+|\d+-\d+)\z/i.match(command))
      MQ.queue("answers").publish({ :number => number, :answer => "answer" }.to_msgpack)
    end
  end

  [200, {'Content-Type' => "text/plain"}, command]
end

Rack::Handler::Thin.run(app, :Port => 4001)
Now when I run the server and request something like http://0.0.0.0:4001/command=r&number=123123123
I always get duplicate output, something like:
"no command"
"no number"
"no command"
"no number"
First, why am I getting what look like duplicate requests? Is it something to do with the browser? When I use curl I don't see the same behavior. And second, why can't I get the params?
Any tips about the best implementation for such a server would be highly appreciated.
Thanks in advance.
The second request comes from the browser looking for the favicon.ico. You can inspect the requests by adding the following code in your handler:
params = Rack::Request.new(env).params
p env # add this line to see the request in your console window
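If the favicon request is getting in the way while testing, one option (this check is an addition, not part of the original handler) is to answer it early inside the same Proc:

if env["PATH_INFO"] == "/favicon.ico"
  # Short-circuit the browser's automatic favicon request.
  next [404, { "Content-Type" => "text/plain" }, "not found"]
end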
Alternatively you could use Sinatra:
require "rubygems"
require "amqp"
require "msgpack"
require "sinatra"
get '/:command/:number' do
  command = params['command'].strip rescue "no command"
  number  = params['number'].strip  rescue "no number"
  p command
  p number

  AMQP.start do
    if command =~ /\A(create|c|r|register)\z/i
      MQ.queue("create").publish(number)
    elsif m = (/\A(Answer|a)\s?(\d+|\d+-\d+)\z/i.match(command))
      MQ.queue("answers").publish({ :number => number, :answer => "answer" }.to_msgpack)
    end
  end

  return command
end
and then run ruby the_server.rb at the command line to start the http server.
Current code works as long as there is no remote error:
def get_name_from_remote_url
  cstr = "http://someurl.com"
  getresult = open(cstr, "UserAgent" => "Ruby-OpenURI").read
  doc = Nokogiri::XML(getresult)
  my_data = doc.xpath("/session/name").text
  # => 'Fred' or 'Sam' etc
  return my_data
end
But what if the remote URL times out or returns nothing? How do I detect that and return nil, for example?
And does Open-URI give a way to define how long to wait before giving up? This method is called while a user is waiting for a response, so how do we set a maximum timeout before we give up and tell the user "sorry, the remote server we tried to access is not available right now"?
Open-URI is convenient, but that ease of use means it hides a lot of the configuration details that other HTTP clients like Net::HTTP expose.
It depends on what version of Ruby you're using. For 1.8.7 you can use the Timeout module. From the docs:
require 'open-uri'
require 'timeout'

getresult = nil
begin
  Timeout::timeout(5) do
    getresult = open(cstr, "UserAgent" => "Ruby-OpenURI").read
  end
rescue Timeout::Error => e
  puts e.to_s
end
Then check getresult to see whether you got any content:
if getresult.nil? || getresult.empty?
  puts "got nothing from url"
end
If you are using Ruby 1.9.2 you can add a :read_timeout => 10 option to the open() method.
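A minimal sketch of that form (the 10-second value is just an example):

getresult = open(cstr, "UserAgent" => "Ruby-OpenURI", :read_timeout => 10).read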
Also, your code could be tightened up and made a bit more flexible. This will let you pass in a URL or default to the currently used URL. Also, read Nokogiri's NodeSet docs to understand the difference between xpath, /, and css, which return NodeSets, and at, %, at_css, and at_xpath, which return single nodes:
def get_name_from_remote_url(cstr = 'http://someurl.com')
  doc = Nokogiri::XML(open(cstr, 'UserAgent' => 'Ruby-OpenURI'))

  # xpath returns a nodeset which has to be iterated over
  # my_data = doc.xpath('/session/name').text # => 'Fred' or 'Sam' etc

  # at returns a single node
  doc.at('/session/name').text
end
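Combining the two ideas, one possible shape that returns nil when the remote side misbehaves (only a sketch; it assumes Ruby 1.9.2+ for :read_timeout, and the list of rescued exceptions is an assumption, not from the original answer):

require 'open-uri'
require 'nokogiri'

def get_name_from_remote_url(cstr = 'http://someurl.com')
  doc = Nokogiri::XML(open(cstr, 'UserAgent' => 'Ruby-OpenURI', :read_timeout => 10))
  node = doc.at('/session/name')
  node && node.text
rescue OpenURI::HTTPError, Timeout::Error, SocketError
  # Remote error, timeout, or DNS failure: report "no name" to the caller.
  nil
end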
I'm currently using Mongrel to develop a custom web application project.
I would like Mongrel to use a defined HTTP handler based on a regular expression. For example, every time someone calls a URL like http://test/bla1.js or http://test/bla2.js, the same HTTP handler is called to manage the request.
My code so far looks like this:
http_server = Mongrel::Configurator.new :host => config.get("http_host") do
  listener :port => config.get("http_port") do
    uri Regexp.escape("/[a-z0-9]+.js"), :handler => BLAH::CustomHandler.new
    uri '/ui/public', :handler => Mongrel::DirHandler.new("#{$d}/public/")
    uri '/favicon', :handler => Mongrel::Error404Handler.new('')

    trap("INT") { stop }
    run
  end
end
As you can see, I am trying to use a regex instead of a string here:
uri Regexp.escape("/[a-z0-9]+.js"), :handler => BLAH::CustomHandler.new
but that does not work. Any solution?
Thanks for that.
You should consider creating a Rack application instead. Rack is:
the standard for Ruby web applications
used internally by all popular Ruby web frameworks (Rails, Merb, Sinatra, Camping, Ramaze, ...)
much easier to extend
ready to be run on any application server (Mongrel, Webrick, Thin, Passenger, ...)
Rack has a URL mapping DSL, Rack::Builder, which allows you to map different Rack applications to particular URL prefixes. You typically save it as config.ru, and run it with rackup.
Unfortunately, it does not allow regular expressions either. But because of the simplicity of Rack, it is really easy to write an "application" (a lambda, actually) that will call the proper app if the URL matches a certain regex.
Based on your example, your config.ru may look something like this:
require "my_custom_rack_app" # Whatever provides your MyCustomRackApp.
js_handler = MyCustomRackApp.new
default_handlers = Rack::Builder.new do
map "/public" do
run Rack::Directory.new("my_dir/public")
end
# Uncomment this to replace Rack::Builder's 404 handler with your own:
# map "/" do
# run lambda { |env|
# [404, {"Content-Type" => "text/plain"}, ["My 404 response"]]
# }
# end
end
run lambda { |env|
if env["PATH_INFO"] =~ %r{/[a-z0-9]+\.js}
js_handler.call(env)
else
default_handlers.call(env)
end
}
Next, run your Rack app on the command line:
% rackup
If you have mongrel installed, it will be started on port 9292. Done!
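To check that the regex branch is the one answering, a request like this against the running app should hit js_handler (the filename is just an example):

% curl http://localhost:9292/bla1.js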
You have to inject new code into part of Mongrel's URIClassifier, which is otherwise blissfully unaware of regular expression URIs.
Below is one way of doing just that:
#
# Must do the following BEFORE Mongrel::Configurator.new
# Augment some of the key methods in Mongrel::URIClassifier
# See lib/ruby/gems/XXX/gems/mongrel-1.1.5/lib/mongrel/uri_classifier.rb
#
Mongrel::URIClassifier.class_eval <<-EOS, __FILE__, __LINE__

  # Save original methods
  alias_method :register_without_regexp, :register
  alias_method :unregister_without_regexp, :unregister
  alias_method :resolve_without_regexp, :resolve

  def register(uri, handler)
    if uri.is_a?(Regexp)
      unless (@regexp_handlers ||= []).any? { |(re, h)| re == uri ? h.concat(handler) : false }
        @regexp_handlers << [ uri, handler ]
      end
    else
      # Original behaviour
      register_without_regexp(uri, handler)
    end
  end

  def unregister(uri)
    if uri.is_a?(Regexp)
      raise Mongrel::URIClassifier::RegistrationError, "\#{uri.inspect} was not registered" unless (@regexp_handlers ||= []).reject! { |(re, h)| re == uri }
    else
      # Original behaviour
      unregister_without_regexp(uri)
    end
  end

  def resolve(request_uri)
    # Try original behaviour FIRST
    result = resolve_without_regexp(request_uri)
    # If a match is not found with non-regexp URIs, try regexp
    if result[0].blank?
      (@regexp_handlers ||= []).any? { |(re, h)| (m = re.match(request_uri)) ? (result = [ m.pre_match + m.to_s, (m.to_s == Mongrel::Const::SLASH ? request_uri : m.post_match), h ]) : false }
    end
    result
  end
EOS
http_server = Mongrel::Configurator.new :host => config.get("http_host") do
  listener :port => config.get("http_port") do
    # Can pass a regular expression as URI
    # (URI must be of type Regexp, no escaping please!)
    # Regular expressions can match any part of a URL; start with "^/..." to
    # anchor the match at the URI beginning.
    # The way this is implemented, regexp matches are only evaluated AFTER
    # all non-regexp matches have failed (mostly for performance reasons.)
    # Also, for regexp URIs, :in_front is ignored; adding multiple handlers
    # to the same URI regexp behaves as if :in_front => false
    uri /^\/[a-z0-9]+\.js/, :handler => BLAH::CustomHandler.new
    uri '/ui/public', :handler => Mongrel::DirHandler.new("#{$d}/public/")
    uri '/favicon', :handler => Mongrel::Error404Handler.new('')

    trap("INT") { stop }
    run
  end
end
Seems to work just fine with Mongrel 1.1.5.