I've used PPerl for daemon-like processes. This program turns ordinary Perl scripts into long-running daemons, making subsequent executions extremely fast. It forks several processes for each script, allowing many processes to call the script at once.
Does anyone know of something like this for Ruby? Right now I am planning on using a wrapper around curl to call a REST web service written in Sinatra running on JRuby. I'm hoping there is a simpler option.
Have you looked at using Nailgun? It sets up a background JVM process that your scripts execute in. That way you can use JRuby without incurring the JVM startup time you would normally get with each script run.
You mean like the daemons gem? Here's a simple example of in-process daemonization:
require 'rubygems'
require 'daemons'

Daemons.daemonize

loop do
  `touch /tmp/me`
  sleep 1
end
Also, instead of using curl, have you looked at rest-client?
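For example, here's a minimal rest-client sketch (the URL and resource path are placeholders; older versions of the gem use require 'rest_client' instead):

require 'rubygems'
require 'rest-client'

# Call the Sinatra/JRuby REST service without shelling out to curl.
response = RestClient.get('http://localhost:4567/some_resource')
puts response.code
puts response.body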
I'm working with a Sinatra application and found three ways to run a background process:
Thread.new
Process.fork
Process.spawn
I figured out how to get the first two to work, but the challenge now is that the tests need to run synchronously (for a few reasons).
What is a good way to run jobs asynchronously in production, but force the tests to run synchronously? Preferably with a call in the spec_helper...?
Ruby 1.9.3, Sinatra app, RSpec.
I recommend the following hybrid approach:
Write simple unit tests and refactor what you're testing to be synchronous. In other words, put all of the background functionality into straightforward classes and methods that can be unit tested easily. Then the background processes can call the same functionality that's already been unit tested. You shouldn't have to unit test the background thread/process creation itself, since that is already tested in Ruby (or in another library such as god, bluepill, or Daemons). This TDD approach has the added benefit of making the codebase more maintainable.
For functional and integration tests, follow the approach of delayed_job and provide a method to do all of the background work synchronously, as with Delayed::Worker.new.work_off.
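For illustration, a rough sketch of what that looks like in an RSpec example if you went the delayed_job route (ReportGenerator and Report are hypothetical names):

it "generates the report in the background" do
  ReportGenerator.delay.generate    # enqueues the job instead of running it inline
  Delayed::Worker.new.work_off      # drains the queue synchronously, in-process
  expect(Report.count).to eq(1)     # hypothetical assertion on the result
end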
You may also want to consider using EventMachine (as in Any success with Sinatra working together with EventMachine WebSockets?) rather than spawning threads or processes, especially if the background work is IO-intensive (making HTTP or database requests, for example).
Here's what I came up with:
process_in_background { slow_method }

def process_in_background
  Rails.env == 'test' ? yield : Thread.new { yield }
end

def slow_method
  # ...code that takes a long time to run...
end
I like this solution because it is transparent: it runs the exact same code, just synchronously.
Any suggestions/problems with it? Is it necessary to manage zombies? How?
I'm designing a web service using Sinatra and I need to perform certain operations when the service is started and some other operations when the server is stopped.
How can I register those operations so they are fully integrated with Sinatra?
Thanks.
The answer depends on how you need to perform your operations: do they need to run for each Ruby process, or just once for the whole service? I'll assume it's once for the whole service, and in that case:
You might be tempted to run some code before your Sinatra app starts, but that doesn't behave the way you might expect; I'll explain why just after. The workaround would be adding code before your Sinatra routes, like:
require "sinatra"
puts "Starting"
get "/" do
...
end
You could add some code to your config.ru too, by the way; it would have the same effect, though I don't know which one is uglier.
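For reference, the config.ru variant would look something like this (assuming your Sinatra app lives in app.rb; the file name is just for illustration):

# config.ru
require './app'        # loads the Sinatra app defined in app.rb

puts "Starting"        # runs once per server process, when the rackup file is loaded
run Sinatra::Application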
Why is this wrong? Because when you host your web service, many web server instances will be fired up, and each one will execute the puts call or whatever your "starting" code is. That's fine when you want to initialize things that are local to each app instance, like a database connection, but not for initializing things that are shared by all of them.
As for running code when the service stops: you can't, really (or maybe you could with some really ugly workaround, but you'd end up with the same issue you have at startup).
So the best way to handle start and stop operations is to wrap them in the tasks that fire up your service:
Run a rake task or Ruby script that does your initialization work
Start your web server
And to stop it:
Run a rake task or Ruby script that stops the server
Run a rake task or Ruby script that does the cleanup operations
You can wrap those into a single rake task by starting your app server directly from Ruby, like I did here: https://github.com/TactilizeTeam/photograph/blob/master/bin/photograph.
This way you can easily add some code to run before starting the service while still keeping everything in a single task. With some plumbing, I guess you could fire up multiple thin instances, which would let you start your cluster of thin (or whatever you use) instances and still have one task to rely on.
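A rough sketch of what such a wrapper could look like (the task names, the thin configuration file, and the init/cleanup scripts are all made up for illustration):

# Rakefile
task :init do
  ruby "script/service_init.rb"      # hypothetical one-time initialization
end

task :start => :init do
  sh "thin start -C thin.yml"        # or however you launch your app server
end

task :stop do
  sh "thin stop -C thin.yml"
  ruby "script/service_cleanup.rb"   # hypothetical cleanup
end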
I'd say that adding a handler for the SIGINT signal would allow you to run some code before exiting. See http://www.ruby-doc.org/core-1.9.3/Signal.html for how to do that. You might want to check whether Thin isn't already registering a trap for that signal; I'm not sure if this is handled in the library or in the script used to launch Thin (the "thin" executable that ends up in your $PATH).
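A minimal sketch of that idea (the cleanup body is whatever your stop task would do):

Signal.trap("INT") do
  puts "Shutting down, running cleanup..."
  # ...your cleanup/stop code here...
  exit
end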
Another way to handle the exit would be to have a watchdog process that checks whether your cluster is running and ensures the stop code gets run once no more instances are running.
I have scripts web.rb (Sinatra) and rufus.rb (cron using the rufus gem) running on the same computer (Windows XP). Both use functions.rb, where I keep all the functions. I have a global array variable $webserver_status where I store a history of the commands the web server has performed/is performing. The web server runs some DOS commands and PHP scripts, and I want to be sure that only one runs at a time, and also give the user some overview of what is happening.
I used to run the cron jobs (rufus.rb) over HTTP, so in effect I accessed the web server just as a browser would, and the status variable was updated correctly. Now I have started to call the same code from functions.rb directly, so the variable no longer shows the correct server status.
Is there any way the cron process can access the $webserver_status variable directly?
Or do I have to update the variable over HTTP? Or use some kind of status file on disk?
ruby 1.8.7 (2010-08-16 patchlevel 302) [i386-mingw32]
The web server runs at all times
I have production and testing versions of the cron code
See the suggestions I made in this answer. The question was essentially the same unless I'm missing something in your scenario. There are many possible solutions depending on your needs.
Edit:
Based on your comment, I'm guessing that you want to share memory across two Ruby processes or otherwise communicate between processes. Read about IPC in Ruby to see how sockets could suit your needs.
It doesn't really make sense to talk about the same variable being accessed from two processes; you have to go via some kind of intermediary, whether it's sockets, a database, or a file. If this isn't what you want, then I suggest you clarify the situation and explain why you need shared access to memory rather than something like this.
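For example, a rough sketch of the "status file on disk" option, where both web.rb and rufus.rb go through helpers in functions.rb instead of a global variable (the file name is just for illustration; assumes the json gem is available on Ruby 1.8.7):

# functions.rb
require 'rubygems'
require 'json'

STATUS_FILE = 'webserver_status.json'

def read_status
  File.exist?(STATUS_FILE) ? JSON.parse(File.read(STATUS_FILE)) : []
end

def append_status(entry)
  history = read_status
  history << entry
  # No locking here; add File#flock if both processes may write at once.
  File.open(STATUS_FILE, 'w') { |f| f.write(JSON.generate(history)) }
end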
I think something like this is what you're looking for:
#web.rb
require './functions'
print_value("apple")
and
#rufus.rb
require './functions'
print_value("not apple")
and
#functions.rb
def print_value(value)
  puts value
end
Running web.rb prints the string apple.
I want to start my Rails server in a background thread from within a Ruby script. I could use Kernel#system, but I want to be able to kill the Rails server when the thread is stopped. Is there a way to execute the Rails server using some Rails API call instead? I'm thinking it would be nice to be able to write something like Rails.run_server(:port => 3000, ...)
I'm on Windows Server 2008.
Check out the file gems/rails.x.x.x/lib/commands/server.rb. It looks like that's the starting point that script/server uses.
Since script/server is itself a ruby script, it stands to reason that you ought to be able to start a server by doing something similar to what's in server.rb. But I imagine you might have some difficulty getting your ruby environment right...
Note that I'm looking at rails 2.3.8 here, so if you're on 3.whatever your results will probably be different.
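A rough sketch of that idea for a 2.3-style app (this is not an official Rails API; the boot path and handler choice are illustrative, and killing the thread is a blunt way to stop the server):

require File.expand_path('config/environment', Dir.pwd)  # boots the Rails 2.3 app
require 'rack'

server_thread = Thread.new do
  # In Rails 2.3 the Rack app is ActionController::Dispatcher
  Rack::Handler::WEBrick.run(ActionController::Dispatcher.new, :Port => 3000)
end

# ...later, stop the server along with the thread...
server_thread.kill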
I eventually decided to avoid any ickiness and start the rails server in its own process, as detailed in this post. (Being able to kill it plus its child processes consistently was the main blocker and the original reason I'd considered starting it in a thread instead.)
I have a small HTTP server script I've written using eventmachine which needs to call external scripts/commands and does so via backticks (``). When serving up requests which don't run backticked code, everything is fine, however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general.
I noticed eventmachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen.
Am I out of luck here? Is there a safe way to run an external command from an eventmachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution?
edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in: start java myclass. The reason I'm using start is because I want those external scripts to start running and keep running after the EM request is served
The Ruby documentation states that the backtick operator "Returns the standard output of running cmd in a subshell".
So if your command, i.e. start java myclass, continues to run, then Ruby waits for it to finish so it can pass its output back to your program.
Try win32-open3 (and if it needs to be cross-platform rather than Windows-only, also have a look at POpen4).
EventMachine has a thread pool. You can EM.defer your backticks like this:
EM.defer { `start java myclass` }
By default the thread pool has 20 threads, and you can change its size by assigning a value to EM.threadpool_size.
It's important to note that EM.defer can be passed an operation (executed in a deferred thread), a callback (executed in the reactor thread), and an error callback (run in the reactor thread when the operation raises an exception).
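For example, a minimal sketch (the eventmachine gem is assumed; the deferred command is the one from the question):

require 'rubygems'
require 'eventmachine'

EM.run do
  operation = proc { `start java myclass` }   # runs on a pool thread, doesn't block the reactor
  callback  = proc { |output| puts output }   # runs back on the reactor thread with the result

  EM.defer(operation, callback)
end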
If you use Java, you may want to consider JRuby, which has real thread support, and you could probably reuse your Java code from within JRuby.