Logging to remote location - ruby

I'm writing a client application which runs on a user's computer and sends various requests to a server.
What I'd like to do is make logs from the client program available on the server so that issues can be easily detected.
So, what I was thinking was to have the client log to a remote location over HTTP.
Is this a good idea, and are there any gems or libraries that will facilitate this?

You could use DRb + Logger for this. Both are part of the Ruby standard library, so you don't even need to install any gems on either machine.
Here's how it works:
Remote Logging Machine
require 'drb'
require 'logger'
DRb.start_service 'druby://0.0.0.0:9000', Logger.new('foo.log', 'weekly')
DRb.thread.join
Machine Doing the Logging
require 'drb'
$log = DRbObject.new_with_uri 'druby://remote.server.ip:9000'
begin
  $log.info "Hello World"
rescue DRb::DRbConnError => e
  warn "Could not log because: #{e}"
  # Optionally re-log the message somewhere else.
end
puts "Yay, still running!"
I just tested this between two machines 1500 miles apart, where the client machine is even behind NAT, and it worked flawlessly.
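If the remote log server is unreachable, you probably still want the message recorded somewhere. Here's a minimal sketch of that fallback built on the same $log object; the local fallback.log file and the log_info helper are my own additions, not part of the answer above:
require 'drb'
require 'logger'

$log = DRbObject.new_with_uri 'druby://remote.server.ip:9000'
$local_log = Logger.new('fallback.log') # hypothetical local fallback log

def log_info(message)
  $log.info message
rescue DRb::DRbConnError => e
  $local_log.warn "Remote log unreachable (#{e}); logging locally instead"
  $local_log.info message
end

log_info "Hello World"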

Related

OCRA missing submodules of libraries if rack/grape api endpoints aren't called

I'm trying to pack my REST app into an executable with OCRA. I have a few gems required in my script:
require 'rack'
require 'rack/server'
require 'grape'
require 'grape-entity'
require 'rubygems'
I skip starting the server with this:
if not defined?(Ocra)
  Rack::Server.start options
end
When I try to run my server.exe:
Temp/../server.rb:221:in `default_middleware_by_environment':
cannot load such file -- rack/content_length (LoadError)
Which means that it doesn't detect submodules of rack that exist but aren't used, and are therefore not included. If I add a require 'rack/content_length' it continues with cannot load such file -- rack/chunked, and so on.
Even when I interrupted my server by hand before, I also had to call a few API endpoints to have everything included.
I think my options are either:
Tell OCRA to include all the submodules of rack and grape, but compiling that list is a bit time-consuming and would increase the file size.
I already tried ocra server.rb --gem-full=rack --gem-full=grape, which gets my server started, but when calling the API 'rack/mount/strexp' is missing again.
Calling the API within my script, but I couldn't figure out how to do that. I can't add a block to Rack::Server.start options, and it only continues when I interrupt the server.
Any ideas to implement either option, or is there another solution?
If we run the rack app with a rack handler (webrick / thin / etc.), we can shut down the server in another thread so that OCRA can finish packing (I'm not sure how to do the same thing with Rack::Server).
app = Rack::Directory.new ENV['HOME'] # a sample app
handler = Rack::Handler.pick %w/ thin webrick /
handler.run app do |server|
  # handler.run yields a server object,
  # which we shut down when ocra is packing
  if ocra_is_packing # replace with proper condition
    Thread.new { sleep 10; server.shutdown }
  end
end
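For the placeholder condition, the question already points at one option: OCRA defines the Ocra constant while it is building the executable, so the check could look roughly like this (a sketch, not tested against your app):
handler.run app do |server|
  # OCRA defines the Ocra constant while it compiles the script,
  # so only schedule the shutdown during packing.
  if defined?(Ocra)
    Thread.new { sleep 10; server.shutdown }
  end
end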
You may have to do something else (access the server, etc.) to have OCRA pack the appropriate dependencies.

Is it possible to send a notification when a Unicorn master finishes a restart?

I'm running a series of Rails/Sinatra apps behind nginx + unicorn, with zero-downtime deploys. I love this setup, but it takes a while for Unicorn to finish restarting, so I'd like to send some sort of notification when it finishes.
The only callbacks I can find in Unicorn docs are related to worker forking, but I don't think those will work for this.
Here's what I'm looking for from the bounty: the old unicorn master starts the new master, which then starts its workers, and then the old master stops its workers and lets the new master take over. I want to execute some ruby code when that handover completes.
Ideally I don't want to implement any complicated process monitoring in order to do this. If that's the only way, so be it. But I'm looking for easier options before going that route.
I've built this before, but it's not entirely simple.
The first step is to add an API endpoint that returns the git SHA of the currently deployed revision. For example, you deploy AAAA; now you deploy BBBB, and BBBB will be returned. Let's assume you add the endpoint "/checks/version" that returns the SHA.
Here's a sample Rails controller to implement this API. It assumes the Capistrano REVISION file is present, and reads the current release SHA into memory at app load time:
class ChecksController < ApplicationController
  VERSION = File.read(File.join(Rails.root, 'REVISION')) rescue 'UNKNOWN'

  def version
    render(:text => VERSION)
  end
end
You can then poll the local unicorn for the SHA via your API and wait for it to change to the new release.
Here's an example using Capistrano, that compares the running app version SHA to the newly deployed app version SHA:
namespace :deploy do
  desc "Compare running app version to deployed app version"
  task :check_release_version, :roles => :app, :except => { :no_release => true } do
    timeout_at = Time.now + 60
    while (Time.now < timeout_at) do
      expected_version = capture("cat /data/server/current/REVISION")
      running_version = capture("curl -f http://localhost:8080/checks/version; exit 0")
      if expected_version.strip == running_version.strip
        puts "deploy:check_release_version: OK"
        break
      else
        puts "=[WARNING]==========================================================="
        puts "= Stale Code Version"
        puts "=[Expected]=========================================================="
        puts expected_version
        puts "=[Running]==========================================================="
        puts running_version
        puts "====================================================================="
        Kernel.sleep(10)
      end
    end
  end
end
You will want to tune the timeouts/retries on the polling to match your average app startup time. This example assumes a capistrano structure, with app in /data/server/current and a local unicorn on port 8080.
If you have full access to the box, you could have the Unicorn start script kick off another script that loops, checking for /proc/<unicorn-pid>/exe, which links to the running process.
See: Detect launching of programs on Linux platform
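A rough sketch of that check in Ruby; the PID file path here is an assumption, so point it at wherever your unicorn master writes its pid:
# Assumed PID file location for the unicorn master.
pid = File.read('/data/server/shared/pids/unicorn.pid').to_i

exe = File.readlink("/proc/#{pid}/exe") rescue nil
if exe
  puts "unicorn master #{pid} is running (#{exe})"
else
  puts "no running process found for pid #{pid}"
end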
Update
Based on the changes to the question, I see two options - neither of which are great, but they're options nonetheless...
You could have a cron job that runs a Ruby script every minute. The script would check the PID directory's mtime and then ensure that the PID files exist (which together tell you that a file in the directory has changed and that the process is running), and execute additional code if both conditions are true; there's a sketch of this after the options below. Again, this is ugly and it's a cron that runs every minute, but it's minimal setup.
I know you want to avoid complicated monitoring, but this is how I'd try it... I would use monit to monitor those processes, and when they restart, kick off a Ruby script which sleeps (to ensure start-up), then checks the status of the processes (perhaps using monit itself again). If this all returns properly, execute additional Ruby code.
Option #1 isn't clean, but having written out the monit option, I like that one even better.
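Here's a sketch of the cron-driven check from option #1; the paths and the final action are placeholders:
pid_dir  = '/data/server/shared/pids' # assumed PID directory
pid_file = File.join(pid_dir, 'unicorn.pid')

# Condition 1: the PID directory changed recently (a restart touched it).
recently_changed = File.mtime(pid_dir) > Time.now - 60

# Condition 2: the master PID file exists and that process is alive.
running = File.exist?(pid_file) &&
          (Process.kill(0, File.read(pid_file).to_i) rescue false)

if recently_changed && running
  # Placeholder: run whatever Ruby code should fire after the handover.
  puts "unicorn restart detected"
end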

Quick FTP server

I'm looking for a quick, configuration-less FTP server; something exactly like Serve or Rack_dav, but for FTP, which can publish a folder just by running a command.
Is there a gem or something that does such a thing?
Solution
Based on Wayne's ftpd gem, I created a quick and easy-to-use gem called Purvey.
The ftpd gem supports TLS, and comes with a file system driver. Like em-ftpd, you supply a driver, but that driver doesn't need to do much. Here's a bare-minimum FTP server that accepts any username/password, and serves files out of a temporary directory:
require 'ftpd'
require 'tmpdir'

class Driver
  def initialize(temp_dir)
    @temp_dir = temp_dir
  end

  def authenticate(user, password)
    true
  end

  def file_system(user)
    Ftpd::DiskFileSystem.new(@temp_dir)
  end
end

Dir.mktmpdir do |temp_dir|
  driver = Driver.new(temp_dir)
  server = Ftpd::FtpServer.new(driver)
  server.start
  puts "Server listening on port #{server.bound_port}"
  gets
end
NOTE: This example allows an FTP client to upload, delete, rename, etc.
To enable TLS:
include Ftpd::InsecureCertificate
...
server.certfile_path = insecure_certfile_path
server.tls = :explicit
server.start
Disclosure: I am ftpd's author and current maintainer.
Take a look at this gem, a lightweight FTP server framework built on EventMachine:
https://github.com/yob/em-ftpd

Testing server ruby-application with cucumber

My Ruby application runs a WEBrick server. I want to test it with Cucumber and ensure that it gives the right responses.
Is it normal to run a server in the test environment for testing? Where in my code should I start the server process, and where should I destroy it?
Right now I start the server in a background step and destroy it in an After hook. It's slow because the server starts before every scenario and is destroyed afterwards.
My idea is to start the server in env.rb and destroy it in an at_exit block, also declared in env.rb. What do you think about that?
Do you know any patterns for that problem?
I use Spork for this. It starts up one or more servers, and has the ability to reload these when needed. This way, each time you run your tests you're not incurring the overhead of firing up Rails.
https://github.com/sporkrb/spork
Check out this RailsCast for the details: http://railscasts.com/episodes/285-spork
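The usual shape of a Spork-aware env.rb is roughly the following; what goes in each block depends on your app, so treat it as a sketch:
require 'spork'

Spork.prefork do
  # Loaded once, when the Spork server boots --
  # e.g. require the app and start the server under test here.
end

Spork.each_run do
  # Runs before each test run -- e.g. reset any per-run state here.
end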
Since Cucumber does not support Spork any more (why?), I use the following code in env.rb.
To fork a process I use this library: https://github.com/jarib/childprocess
require 'childprocess'

ChildProcess.posix_spawn = true

wkDir = File.dirname(__FILE__)
server_dir = File.join(wkDir, '../../site/dev/bin')

# Because I use rvm, I have to run the server through a shell
@server = ChildProcess.build("sh", "-c", "ruby pageServer.rb -p 4563")
@server.cwd = server_dir
@server.io.inherit!
@server.leader = true
@server.start

at_exit do
  puts "----------------at exit--------------"
  puts "Killing process " + @server.pid.to_s
  @server.stop
  if @server.alive?
    puts "Server is still alive - kill it manually"
  end
end
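One gap in this approach is that the first scenario can start before the forked server is listening. A small wait loop in env.rb can close that gap; the port is taken from the pageServer.rb command above, so adjust it if yours differs:
require 'socket'

def wait_for_server(port, timeout = 15)
  deadline = Time.now + timeout
  begin
    TCPSocket.new('localhost', port).close
  rescue Errno::ECONNREFUSED
    raise "Server did not start within #{timeout}s" if Time.now > deadline
    sleep 0.5
    retry
  end
end

wait_for_server(4563)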

Having trouble debugging Sinatra app in production

I'm deploying a Sinatra app using Passenger. The deployed app is working, but not entirely: some paths work fine, others simply render a blank page. I can't seem to find any major differences between the routes that work and the routes that don't, and I can't seem to track down any errors.
Handlers
I have defined the not_found and error handlers as follows:
not_found do
  '404. Bummer!'
end

error do
  'Nasty error: ' + env['sinatra.error'].name
end
These work fine on my local machine, both in development and production, but I never see these come up on the server.
Apache Logs
When I tail Apache's access.log and hit one of the broken paths, I see a 500:
helpers [27/Oct/2009:15:54:59 -0400] "GET /admin/member_photos/photos HTTP/1.1" 500 20 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3"
rack_hoptoad
I've also installed and configured rack_hoptoad middleware in my config.ru, but no exceptions are making it to hoptoad.
# Send exceptions to hoptoad
require 'rack_hoptoad'
use Rack::HoptoadNotifier, 'MY_API_KEY'
logging
I've set up logging like so:
set :raise_errors => true
set :logging, true
log = File.new("log/sinatra.log", "a+")
STDOUT.reopen(log)
STDERR.reopen(log)
require 'logger'
configure do
  LOGGER = Logger.new("log/sinatra.log")
end

helpers do
  def logger
    LOGGER
  end
end
This setup lets me call logger.info within my routes, which works locally and on the server for the working routes, but the broken paths don't get far enough to call logger.info.
What to do?
Any ideas as to how I can see what's causing the 500 errors? Thanks for any help!
I would try using the Rack::ShowExceptions middleware to trace out the problem. In your config.ru, add these two lines before the run call:
require 'rubygems'
require 'your-app'
use Rack::ShowExceptions
run YourApp
That should catch and display the backtrace for any exceptions occurring in Rack or in your app, which should give you more details to work with; at least, that's the hope.
Maybe there's something wrong with your log setup?
Redirect STDERR when running the Sinatra server so you can read it. Like:
ruby myapp.rb -p 1234 > log/app.log 2>&1
Thanks for the responses, but I didn't end up needing to use them. I was originally deploying the app in a sub-URI configuration. When I deployed the app to its own subdomain instead, the problems went away.
So, I'm not really sure what the problem was, but getting rid of this line in my Apache configuration for the site is what resolved things:
Redirect permanent / https://www.example.org/admin/member_photos/
