Intercept WEBrick request - ruby

I have a web app that runs on different pieces of hardware, that for the most part consists of smart TVs and set-top boxes.
My web app contains a ruby script to setup the app for local debugging. This script builds my app, listens for file changes, and hosts the app using a simple WEBrick server.
Now I'm running into a problem on a specific piece of hardware. This hardware expects to get a success response from a POST request to a health_check API running on the same host as the web app, before it will load up the web app.
I'm simply hoping to intercept this request and spoof it so that the hardware will load my client. So far I've gotten as far as this:
    def start_server
      require 'webrick'
      root = File.expand_path 'public'
      request_callback = Proc.new { |req, res|
        if req.path =~ /health_check/
          # return 200 response somehow?
        end
      }
      server = WEBrick::HTTPServer.new :Port => 5000, :DocumentRoot => root, :RequestCallback => request_callback
      server.start
    end
I can modify the response object to set status to 200, but it still ends up returning a 404.

You don't need to "intercept" all requests and check for a specific path. You simply want mount_proc, to handle a specific route with a proc.
Add the following before server.start:
    server.mount_proc '/health_check' do |req, res|
      res.body = 'what what' # your content here
    end
You'll probably want to wrap this in a check to determine if you're running on whatever custom hardware requires this behavior.
See Custom Behavior in the WEBrick docs.
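The mount_proc approach from the answer, put together with the suggested environment check, as an added example that is not code from the question or the answer. The STUB_HEALTH_CHECK and RUN_DEBUG_SERVER variables, the JSON body and the 405 branch are assumptions.

```ruby
# Added example. STUB_HEALTH_CHECK, RUN_DEBUG_SERVER and the JSON body are
# assumptions; return whatever the device actually expects.
HEALTH_CHECK_STUB = lambda do |req, res|
  if req.request_method == 'POST'
    res.status = 200
    res['Content-Type'] = 'application/json'
    res.body = '{"status":"ok"}'
  else
    # The device only POSTs to this path, so other methods are refused.
    res.status = 405
    res['Allow'] = 'POST'
    res.body = "method not allowed\n"
  end
end

# The stub is opt-in so it never ships in a normal debug run by accident.
def stub_health_check?(env = ENV)
  env['STUB_HEALTH_CHECK'] == '1'
end

def start_server(port: 5000, root: File.expand_path('public'))
  require 'webrick' # a separate gem since Ruby 3.0
  server = WEBrick::HTTPServer.new(Port: port, DocumentRoot: root)
  # A mounted path is matched before the DocumentRoot file handler, which
  # is what produced the 404 when only :RequestCallback was used.
  server.mount_proc('/health_check', HEALTH_CHECK_STUB) if stub_health_check?
  trap('INT') { server.shutdown }
  server.start
end

start_server if ENV['RUN_DEBUG_SERVER'] == '1'
```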

Related

Simplest method of enforcing HTTPS for Heroku Ruby Sinatra app

I have an app I created on Heroku which is written in Ruby (not rails) and Sinatra.
It is hosted on the default herokuapp domain so I can address the app with both HTTP and HTTPS.
The app requests user credentials which I forward on to an HTTPS call so the forwarding part is secure.
I want to ensure my users always connect securely to my app so the credentials aren't passed in clear text.
Despite lots of research, I've not found a solution to this simple requirement.
Is there a simple solution without changing my app to Ruby rails or otherwise?
Thanks,
Alan

I use a helper that looks like this:
    def https_required!
      if settings.production? && request.scheme == 'http'
        headers['Location'] = request.url.sub('http', 'https')
        halt 301, "https required\n"
      end
    end
I can then add it to any single route I want to force to https, or use it in the before filter to force on a set of urls:
    before "/admin/*" do
      https_required!
    end

Redirect in a Before Filter
This is untested, but it should work. If not, or if it needs additional refinement, it should at least give you a reasonable starting point.
    before do
      redirect request.url.sub('http', 'https') unless request.secure?
    end
See Also
Filters
Request Object
RackSsl::Enforcer
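The same redirect written as a Rack middleware, so it covers every route of the Sinatra app. This is an added example, not code from either answer; it goes beyond them by reading X-Forwarded-Proto (Heroku terminates TLS at its router) and by setting Strict-Transport-Security, and the ForceHttps name is hypothetical.

```ruby
# Added example: redirect plain-HTTP requests and mark HTTPS responses
# with HSTS. In a classic Sinatra app: configure(:production) { use ForceHttps }
class ForceHttps
  HSTS = 'max-age=31536000'

  def initialize(app)
    @app = app
  end

  def call(env)
    if secure?(env)
      status, headers, body = @app.call(env)
      [status, headers.merge('strict-transport-security' => HSTS), body]
    else
      [301,
       { 'location' => https_url(env), 'content-type' => 'text/plain' },
       ["https required\n"]]
    end
  end

  private

  # Behind a TLS-terminating proxy the app itself always sees HTTP, so the
  # scheme the client used is taken from X-Forwarded-Proto when present.
  def secure?(env)
    forwarded = env['HTTP_X_FORWARDED_PROTO']
    scheme = forwarded ? forwarded.split(',').first.strip : env['rack.url_scheme']
    scheme == 'https'
  end

  # Rebuild the URL piece by piece instead of substituting inside a string.
  def https_url(env)
    host = (env['HTTP_HOST'] || env['SERVER_NAME']).sub(/:\d+\z/, '')
    url = "https://#{host}#{env['SCRIPT_NAME']}#{env['PATH_INFO']}"
    query = env['QUERY_STRING'].to_s
    query.empty? ? url : "#{url}?#{query}"
  end
end
```

A redirect cannot protect credentials that were already posted over HTTP in the request being redirected; the HSTS header makes browsers that have seen it use HTTPS for later requests to the host.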

Ruby cgi needs to reload apache for new value?

I have phusion-passenger installed with apache on Ubuntu. In my config.ru, I have the following code:
    require 'cgi'
    $tpl = CGI.new['myvar'] + '.rb'
    app = proc do |env|
      [200, { "Content-Type" => "text/html" }, [$tpl]]
    end
    run app
So then when I go to my browser at http://localhost/?myvar=hello, I see the word hello printed out, which is fine. Then I change the url to http://localhost/?myvar=world, but the page still shows hello. Only after I reload apache will the page show world.
Before using phusion-passenger, I was using mod_ruby with apache. If I remember correctly, I didn't need to restart apache to get the CGI variable to print the updated value.
I'm not stuck on needing to use CGI. I just want to be able to grab query string parameters without having to reload apache each time.
I'm not using Rails or Sinatra because I'm just trying to wrap my head around the Ruby language and what phusion-passenger with apache is all about.

IMO this behavior makes sense, because $tpl is set only once when the file is loaded, which happens when the first request is served. After that - in following requests - only the proc is called, but that does not change $tpl anymore.
Instead of using plain CGI, I would do it with a very simple Rack app:
    require 'rack'
    require 'rack/server'
    class Server
      def self.call(env)
        req = Rack::Request.new(env)
        tpl = "#{req.params['myvar']}.rb"
        [200, {}, [tpl]]
      end
    end
    run Server

Sinatra Net::HTTP causes timeouts on a simple request

I have a small simple Net::HTTP POST request to do to my Sinatra app:
    require 'net/http'
    def collect(website)
      uri = URI("http://localhost:9393/save/#{website}")
      res = Net::HTTP.post_form(uri, 'q' => 'ruby', 'max' => '50')
      puts res.body
    end
But it causes a timeout. Here is the request handler:
    post '/save/:website' do |website|
      puts request.body
      "done"
    end
I never reach the puts nor the done. My shotgun server is running on port 9393 of course. When I use the REST Console extension and paste valid json in it, it works for that same path.
What is causing this Timeout::Error?

So the weird thing is, I changed my server from shotgun to simply running it with sinatra and the gem sinatra/reloader. I was using shotgun because it would auto reload whenever the source file changed, and sinatra itself didn't.
After ditching shotgun, it worked straight away.

How can I use local resources on a server?

How can I use local resources like css, js, png, etc. within a dynamically rendered page using webrick? In other words, how are things like Ruby on Rails linking made to work? I suppose this is one of the most basic things, and there should be a simple way to do it.
Possible Solution
I managed to do what I wanted using two servlets as follows:
    require 'webrick'
    class WEBrick::HTTPServlet::AbstractServlet
      def do_GET request, response
        response.body = '<html>
        <head><base href="http://localhost:2000"/></head>
        <body><img src="path/image.png" /></body>
        </html>'
      end
    end
    s1 = WEBrick::HTTPServer.new(Port: 2000, BindAddress: "localhost")
    s2 = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost")
    %w[INT TERM].each{|signal| trap(signal){s1.stop}}
    %w[INT TERM].each{|signal| trap(signal){s2.shutdown}}
    s1.mount("/", WEBrick::HTTPServlet::FileHandler, '/')
    s2.mount("/", WEBrick::HTTPServlet::AbstractServlet)
    Thread.new{s1.start}
    s2.start
Is this the right way to do it? I do not feel so. In addition, I am not completely satisfied with it. For one thing, I do not like the fact that I have to specify http://localhost:2000 in the body. Another is the use of thread does not seem right. Is there a better way to do this? If you think this is the right way, please answer so.

Generally speaking, because of security concerns browsers likely won't link to local files (using file:// schema) from an internet site (using http:// or https:// schema). See Can Google Chrome open local links?. This is unrelated to any server side technology.
Outside of that, it seems your server is working perfectly. You've made it so it responds to all requests with a HTML page containing a link to /. When you click on that link, something does indeed happen; a request is sent and you are served the same page again.
It kind of sounds like you want to expose your entire filesystem via HTTP. If that is what you're trying to accomplish, you can simply get away with not mounting a servlet:
    server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost", DocumentRoot: "/")
    %w[INT TERM].each{|signal| trap(signal){server.shutdown}}
    server.start

Try code like this:
    require 'webrick'
    class WEBrick::HTTPServlet::AbstractServlet
      def do_GET request, response
        if request.unparsed_uri == "/"
          response.body = '<html><body>test</body></html>'
        end
      end
    end
    server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost", DocumentRoot: "/")
    %w[INT TERM].each { |signal| trap(signal) { server.shutdown } }
    server.mount("/", WEBrick::HTTPServlet::AbstractServlet)
    server.start
This works for me, I'm not sure why but it seems to work whenever I call at least one method on the request object.

It sounds like you are confusing web pages that are served vs. pages that are opened by the browser directly from your drive, and how file: differs from http:, https:, and ftp:.
file: is a locally available resource when a page is directly opened from the drive. The others are remotely available resources when a page is served from a httpd host.
The browser can't tell that a page from a server came from your drive; it only knows it got it from a server, somewhere, and doesn't know or care whether that server is on the same hardware. Browsers will not allow access to local resources from remotely retrieved pages. That was an exploit that was closed years ago.
See RFC 1738's specification 3.10 FILES for file: URLs for the official statements.

I finally found out that I can mount multiple servlets on a single server. It took a long time until I found such an example.
    require 'webrick'
    class WEBrick::HTTPServlet::AbstractServlet
      def do_GET request, response
        response.body = '<html>
        <head><base href="/resource/"/></head>
        <body>
        <img src="path_to_image/image.png" alt="picture"/>
        <a href="path_to_directory/">link</a>
        ...
        </body>
        </html>'
      end
    end
    server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost")
    %w[INT TERM].each{|signal| trap(signal){server.shutdown}}
    server.mount("/resource/", WEBrick::HTTPServlet::FileHandler, '/')
    server.mount("/", WEBrick::HTTPServlet::AbstractServlet)
    server.start
The path /resource/ can be anything else. The link will now correctly redirect to the expected directory, showing that there is no access permission, which indicates that things are working right; it's now just a matter of permission.
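A narrower variant of the single-server setup above, as an added example that is not code from the thread: the file handler is mounted on one asset directory instead of '/', so only that directory is reachable over HTTP. The public directory, the RUN_ASSET_SERVER variable and the helper names are assumptions.

```ruby
# Added example. asset_root/build_server are hypothetical helper names and
# ./public is an assumed asset folder.
def asset_root(dir)
  root = File.realpath(dir) # resolves symlinks and ".." segments
  raise ArgumentError, 'refusing to serve the filesystem root' if root == '/'
  raise ArgumentError, "#{root} is not a directory" unless File.directory?(root)
  root
end

def build_server(assets_dir, port: 3000)
  assets = asset_root(assets_dir)
  require 'webrick' # a separate gem since Ruby 3.0
  server = WEBrick::HTTPServer.new(Port: port, BindAddress: 'localhost')
  # Static files: only this directory is exposed, under /resource/.
  server.mount('/resource', WEBrick::HTTPServlet::FileHandler, assets)
  # Dynamic page: root-relative URLs, so no <base> tag and no second server.
  server.mount_proc('/') do |_req, res|
    res['Content-Type'] = 'text/html'
    res.body = '<html><body>' \
               '<img src="/resource/image.png" alt="picture"/>' \
               '</body></html>'
  end
  server
end

if ENV['RUN_ASSET_SERVER'] == '1'
  server = build_server('public')
  %w[INT TERM].each { |sig| trap(sig) { server.shutdown } }
  server.start
end
```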

POSTing an HTML form to remote.cgi - written in Ruby?

I am working on a website hosted on Microsoft's office live service. It has a contact form enabling visitors to get in touch with the owner. I want to write a Ruby script that sits on a separate server and which the form will POST to. It will parse the form data and email the details to a preset address. The script should then redirect the browser to a confirmation page.
I have an Ubuntu Hardy machine running nginx and postfix. Ruby is installed and we shall see about using Thin and its Rack functionality to handle the script. Now it's come to writing the script and I've drawn a blank.
It's been a long time and if I remember rightly the process is something like:
read HTTP header
parse parameters
send email
send redirect header

Broadly speaking, the question has been answered. Figuring out how to use the answer was more complicated than expected and I thought worth sharing.
First Steps:
I learnt rather abruptly that nginx doesn't directly support cgi scripts. You have to use some other process to run the script and get nginx to proxy requests over. If I was doing this in php (which in hindsight I think would have been a more natural choice) I could use something like php-fcgi and expect life would be pretty straightforward.
Ruby and fcgi felt pretty daunting. But if we are abandoning the ideal of loading these things at runtime then Rack is probably the most straightforward solution and Thin includes all we need. Learning how to make basic little apps with them has been profoundly beneficial to a relative Rails newcomer like me. The foundations of a Rails app can seem hidden for a long time and Rack has helped me lift the curtain that little bit further.
Nonetheless, following Yehuda's advice and looking up sinatra has been another surprise. I now have a basic sinatra app running in a Thin instance. It communicates with nginx over a unix socket in what I gather is the standard way. Sinatra enables a really elegant way to handle different requests and routes into the app. All you need is a get '/' {} to start handling requests to the virtual host. To add more (in a clean fashion) we just include a routes/script.rb into the main file.
    # cgi-bin.rb
    # main file loaded as a sinatra app
    require 'sinatra'
    # load cgi routes
    require 'routes/default'
    require 'routes/contact'
    # 404 behaviour
    not_found do
      "Sorry, this CGI host does not recognize that request."
    end
These route files will call on functionality stored in a separate library of classes:
    # routes/contact.rb
    # contact controller
    require 'lib/contact/contactTarget'
    require 'lib/contact/contactPost'
    post '/contact/:target/?' do |target|
      # the target for the message is taken from the URL
      msg = ContactPost.new(request, target)
      redirect msg.action, 302
    end
The sheer horror of figuring out such a simple thing will stay with me for a while. I was expecting to calmly let nginx know that .rb files were to be executed and to just get on with it. Now that this little sinatra app is up and running, I'll be able to dive straight in if I want to add extra functionality in the future.
Implementation:
The ContactPost class handles the messaging aspect. All it needs to know are the parameters in the request and the target for the email. ContactPost::action kicks everything off and returns an address for the controller to redirect to.
There is a separate ContactTarget class that does some authentication to make sure the specified target accepts messages from the URL given in request.referrer. This is handled in ContactTarget::accept? as we can guess from the ContactPost::action method;
    # lib/contact/contactPost.rb
    class ContactPost
      # ...
      def action
        return failed unless @target.accept? @request.referer
        if send?
          successful
        else
          failed
        end
      end
      # ...
    end
ContactPost::successful and ContactPost::failed each return a redirect address by combining paths supplied with the HTML form with the request.referer URI. All the behaviour is thus specified in the HTML form. Future websites that use this script just need to be listed in the user's own ~/cgi/contact.conf and they'll be away. This is because ContactTarget looks in /home/:target/cgi/contact.conf for the details. Maybe one day this will be inappropriate, but for now it's just fine for my purposes.
The send method is simple enough, it creates an instance of a simple Email class and ships it out. The Email class is pretty much based on the standard usage example given in the Ruby net/smtp documentation;
    # lib/email/email.rb
    require 'date'
    require 'net/smtp'
    class Email
      def initialize(from_alias, to, reply, subject, body)
        @from_alias = from_alias
        @from = "cgi_user@host.domain.com"
        @to = to
        @reply = reply
        @subject = subject
        @body = body
      end
      def send
        Net::SMTP.start('localhost', 25) do |smtp|
          smtp.send_message to_s, @from, @to
        end
      end
      def to_s
        # The blank line separates the headers from the message body.
        <<~END_OF_MESSAGE
          From: #{@from_alias}
          To: #{@to}
          Reply-To: #{@from_alias}
          Subject: #{@subject}
          Date: #{DateTime::now().to_s}

          #{@body}
        END_OF_MESSAGE
      end
    end
All I need to do is rack up the application, let nginx know which socket to talk to and we're away.
Thank you everyone for your helpful pointers in the right direction! Long live sinatra!
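The Email#to_s method above interpolates its fields straight into the header block. As an added example (not the author's code), assuming those fields are filled from submitted form parameters, this builder refuses header values that contain line breaks and normalises the body's line endings; the ContactMessage module and its method names are hypothetical.

```ruby
require 'time'

# Added example: builds the same kind of message text as Email#to_s, but a
# CR or LF inside a header value raises instead of starting a new header.
module ContactMessage
  LINE_BREAK = /[\r\n]/

  def self.header(name, value)
    value = value.to_s.strip # a trailing newline from a form field is harmless
    if value.match?(LINE_BREAK)
      raise ArgumentError, "line break in #{name} header"
    end
    "#{name}: #{value}"
  end

  # now: is injectable so the Date header is deterministic in tests.
  def self.build(from_alias:, to:, reply:, subject:, body:, now: Time.now)
    headers = [
      header('From', from_alias),
      header('To', to),
      header('Reply-To', reply),
      header('Subject', subject),
      header('Date', now.rfc2822)
    ]
    # The body may span lines; only its line endings are normalised to CRLF.
    text = body.to_s.gsub(/\r\n|\r|\n/, "\r\n")
    # An empty line ends the header block.
    headers.join("\r\n") + "\r\n\r\n" + text
  end
end
```

The returned string can be passed to smtp.send_message in place of to_s.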

It's all in the Net module, here's an example:
    require 'net/http'
    # Net::HTTP.new takes a host name, not a URL
    @net = Net::HTTP.new 'www.foo.com', 80
    @params = {:name => 'doris', :email => 'doris@foo.com'}
    # Create HTTP request
    req = Net::HTTP::Post.new( '/script.cgi', {} )
    req.set_form_data @params
    # Send request
    response = @net.start do |http|
      http.read_timeout = 5600
      http.request req
    end

Probably the best way to do this would be to use an existing Ruby library like Sinatra:
    require "rubygems"
    require "sinatra"
    get "/myurl" do
      # params hash available here
      # send email
    end
You'll probably want to use MailFactory to send the actual email, but you definitely don't need to be mucking about with headers or parsing parameters.

CGI class of Ruby can be used for writing CGI scripts. Please check: http://www.ruby-doc.org/stdlib/libdoc/cgi/rdoc/index.html
By the way, there is no need to read the HTTP header. Parsing parameters will be easy using CGI class. Then, send the e-mail and redirect.
