I'm attempting to write tests around an application that makes heavy use of TCPSockets (an IRC bot, to be specific). While writing my first class's tests, I had been getting by with:
#In the describe block
before(:all) { TCPServer.new 6667 }
...which allowed my TCPSockets to function (by connecting to localhost:6667), though they are not actually being properly mocked. However, this has now caused problems when moving on to my second class, as I cannot create a second TCPServer on the same port.
How can I mock the TCPSocket class in such a way that will allow me to test things such as its(:socket) { should be_kind_of(TCPSocket) } and other common operations like #readline and #write?
You could try keeping track of the TCPServer and closing it in your before and after blocks:
before do
  @server = TCPServer.new 6667
end

after do
  @server.close
end

it ... do
end

it ... do
end
After each individual test, the TCPServer is closed, so you can create a new one on the same port.
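For illustration, here is a minimal sketch of how that per-test server could be combined with the kind of spec described in the question. Bot is a hypothetical client class that connects to localhost:6667 when it is created, and its(:socket) assumes the old should syntax (rspec-its on newer RSpec):

describe Bot do
  # fresh server for each example; closed afterwards so the port is free again
  before { @server = TCPServer.new 6667 }
  after  { @server.close }

  subject { Bot.new }

  # passes because Bot really connects to the local TCPServer above
  its(:socket) { should be_kind_of(TCPSocket) }
end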
I'm not quite sure I understand your problem, but why don't you just install some kind of IRC server on your local machine? ircd-irc2, ircd-hybrid, or something like that?
Suppose you have an IRC client implemented this way:
class Bot
  attr_accessor :socket

  def initialize
    @socket = TCPSocket.new("localhost", 6667)
  end
end
You can then test it like this:

let(:bot) { Bot.new }

it "should be kind of TCPSocket" do
  bot.socket.should be_kind_of(TCPSocket)
  bot.socket.should be_a(TCPSocket)
end
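If you want the socket itself mocked, so that no real server or port is needed at all, a standard RSpec technique (not specific to this answer) is to stub TCPSocket.new and hand the bot a verifying double. This is only a sketch; the canned return values are made up:

describe Bot do
  let(:socket) do
    # instance_double verifies that TCPSocket instances really respond to
    # readline and write (they do, via IO)
    instance_double(TCPSocket,
                    readline: ":irc.example.com PONG\r\n",
                    write:    12)
  end

  before { allow(TCPSocket).to receive(:new).and_return(socket) }

  it "exposes the mocked socket" do
    expect(Bot.new.socket).to eq(socket)
  end

  it "reads a line without a real server" do
    expect(Bot.new.socket.readline).to eq(":irc.example.com PONG\r\n")
  end
end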
I've got
let(:ssl_socket) { instance_double(OpenSSL::SSL::SSLSocket) }
Now I want to test whether the object I have is an instance of OpenSSL::SSL::SSLSocket. I don't want to use the be_an_instance_of matcher because I want my test to check in more of a "duck typing" way, so I want to know whether the object responds to the ssl_version method:
it 'should change socket to ssl' do
  # do some stuff and set socket = ssl_socket
  expect(socket).to respond_to(:ssl_version)
end
It doesn't work:
expected #<InstanceDouble(OpenSSL::SSL::SSLSocket) (anonymous)> to respond to :ssl_version
But if I do it using another matcher:
it 'should change socket to ssl' do
  # do some stuff and set socket = ssl_socket
  expect(socket).to receive(:ssl_version)
  socket.ssl_version
end
then the InstanceDouble will actually check whether the :ssl_version method is present on OpenSSL::SSL::SSLSocket instances. Is this a feature or a bug of the RespondTo matcher? Should I use the Receive matcher instead?
instance_double basically creates a Struct instance under the hood.
Out of the box it's an object that knows nothing about the wrapped class; receive adds the stubbed method to it.
So yes, one should use receive in this case. To be more precise, one should not test RSpec itself, and what you are doing is testing the instance_double implementation.
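For what it's worth, the RespondTo matcher does pass once the method is actually stubbed on the double, because stubbing defines the method on the double object itself. A small sketch (the return value is made up):

let(:ssl_socket) { instance_double(OpenSSL::SSL::SSLSocket) }

it 'responds to ssl_version once it is stubbed' do
  allow(ssl_socket).to receive(:ssl_version).and_return('TLSv1.2')

  expect(ssl_socket).to respond_to(:ssl_version)   # now passes
  expect(ssl_socket.ssl_version).to eq('TLSv1.2')
end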
I have built a pretty simple REST service in Sinatra, on Rack. It's backed by 3 Tokyo Cabinet/Table datastores, which have connections that need to be opened and closed. I have two model classes written in straight Ruby that currently simply connect, get or put what they need, and then disconnect. Obviously, this isn't going to work long-term.
I also have some Rack middleware like Warden that rely on these model classes.
What's the best way to manage opening and closing the connections? Rack doesn't provide startup/shutdown hooks, as far as I'm aware. I thought about inserting a piece of middleware that provides a reference to the TC/TT object in env, but then I'd have to pipe that through Sinatra to the models, which doesn't seem efficient either; and that would only get me a per-request connection to TC. I'd imagine that a per-server-instance lifecycle would be a more appropriate lifespan.
Thanks!
Have you considered using Sinatra's configure blocks to set up your connections?
configure do
  Connection.initialize_for_development
end

configure :production do
  Connection.initialize_for_production
end
That's a pretty common idiom when using things like DataMapper with Sinatra.
Check out the "Configuration" section at http://www.sinatrarb.com/intro
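The Connection class here is just whatever wrapper you put around your datastore handles. A minimal sketch of what it might look like, with all names hypothetical and the actual open/close calls depending on the Tokyo Cabinet client you use:

class Connection
  class << self
    attr_reader :db

    def initialize_for_development
      connect("development.tct")
    end

    def initialize_for_production
      connect("production.tct")
    end

    private

    def connect(path)
      @db = open_tokyo_table(path)   # hypothetical helper that opens the TC/TT datastore
      at_exit { @db.close if @db }   # release the handle when the server process exits
    end
  end
end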
If you have other Rack middleware that depend on these connections (by way of a dependence on your model classes), then I wouldn't put the connection logic in Sinatra -- what happens if you rip out Sinatra and put in another endpoint?
Since you want connection-per-application rather than connection-per-request, you could easily write a middleware that initializes and cleans up connections (sort of the Guard Idiom as applied to Rack) and install it ahead of any other middleware that needs the connections.
class TokyoCabinetConnectionManagerMiddleware
  class << self
    attr_accessor :connection
  end

  def initialize(app)
    @app = app
  end

  def call(env)
    open_connection_if_necessary!
    @app.call(env)
  end

  protected

  def open_connection_if_necessary!
    self.class.connection ||= begin
      # ... initialize the connection ...
      add_finalizer_hook!
    end
  end

  def add_finalizer_hook!
    at_exit do
      begin
        TokyoCabinetConnectionManagerMiddleware.connection.close!
      rescue WhateverTokyoCabinetCanRaise => e
        puts "Error closing Tokyo Cabinet connection. You might have to clean up manually."
      end
    end
  end
end
If you later decide you want connection-per-thread or connection-per-request, you can change this middleware to put the connection in the env Hash, but you'll need to change your models as well. Perhaps this middleware could set a connection variable in each model class instead of storing it internally? In that case, you might want to do more checking about the state of the connection in the at_exit hook because another thread/request might have closed it.
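To make the ordering concrete, the config.ru could look something like this; MyApp and the Warden setup are placeholders for your actual endpoint and strategies:

# config.ru
require './tokyo_cabinet_connection_manager_middleware'
require './my_app'

# The connection manager goes first so everything downstream can rely on the connection.
use TokyoCabinetConnectionManagerMiddleware
use Warden::Manager do |manager|
  # ... your Warden strategies/config ...
end
run MyApp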
Can anybody explain to me why the Redis (redis-rb) synchrony driver works directly inside an EM.synchrony block but doesn't within EM::Connection?
Consider the following example:
EM.synchrony do
  redis = Redis.new(:path => "/usr/local/var/redis.sock")
  id = redis.incr "local:id_counter"
  puts id

  EM.start_server('0.0.0.0', 9999) do |c|
    def c.receive_data(data)
      redis = Redis.new(:path => "/usr/local/var/redis.sock")
      puts redis.incr "local:id_counter"
    end
  end
end
I'm getting
can't yield from root fiber (FiberError)
when calling it within receive_data. From reading the source code of both EventMachine and em-synchrony, I can't figure out what the difference is.
Thanks!
PS: The obvious workaround is to wrap the redis code within EventMachine::Synchrony.next_tick, as hinted at in issue #59, but given the EM.synchrony block I would expect the call to already be wrapped within a Fiber...
PPS: The same applies when using EM::Synchrony::Iterator.
You're doing something rather tricky here. You're providing a block to start_server, which effectively creates an "anonymous" connection class and executes your block within the post_init method of that class. Then, within that class, you're defining an instance method.
The thing to keep in mind is: when the reactor executes a callback, or a method like receive_data, that happens on the main thread (and within the root fiber), which is why you're seeing this exception. To work around this, you need to wrap each callback so it is executed within a Fiber (for example, see the Synchrony.add_(periodic)_timer methods).
To address your actual exception: wrap the execution of receive_data within a Fiber. The outer EM.synchrony {} won't do anything for callbacks which are scheduled later by the reactor.
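Concretely, the fix described above could look like this, using the same port and socket path as the question; a new Fiber is created and resumed for each chunk of data the connection receives:

EM.synchrony do
  EM.start_server('0.0.0.0', 9999) do |c|
    def c.receive_data(data)
      # receive_data runs on the root fiber, so spin up a fiber of our own
      # for the synchrony-driven Redis call
      Fiber.new do
        redis = Redis.new(:path => "/usr/local/var/redis.sock")
        puts redis.incr "local:id_counter"
      end.resume
    end
  end
end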
I am implementing a poller service whose interface looks like this.
poller = Poller.new(SomeClass)
poller.start
poller.stop
The start method is supposed to continuously hit an HTTP endpoint and update stuff in the database. Once started, the process is supposed to continue until it is explicitly stopped.
I understand that the implementation of start needs to spawn something that runs in a new process. I am not quite sure how to achieve that in Ruby. I want a plain Ruby solution instead of a Ruby-framework-specific one (not Rails plugins or Sinatra extensions, just Ruby gems). I am exploring EventMachine and starling-workling. I find EventMachine too big to understand in a short span, and Workling is a plugin rather than a gem, so it is a pain to get it working in a plain Ruby application.
I need guidance on how to achieve this. Any pointers? Code samples will help.
Edit
An EventMachine or starling-workling solution would be preferred over threading/forking.
Can't you use the example from the Process.kill documentation?
class Poller
  def initialize klass
    @klass = klass
  end

  def start
    @pid = fork do
      Signal.trap("HUP") { puts "Ouch!"; exit }
      instance = @klass.new
      # ... do some work ...
    end
  end

  def stop
    Process.kill("HUP", @pid)
    Process.wait
  end
end
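Usage of the fork-based Poller above then matches the interface from the question; SomeClass stands for whatever worker class you pass in:

poller = Poller.new(SomeClass)
poller.start   # forks a child process that traps HUP and does the work
# ... the main process carries on ...
poller.stop    # sends HUP to the child and waits for it to exit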
class Poller
  def start
    @thread = Thread.new {
      # Your code here
    }
  end

  def stop
    # Thread has no instance method #stop; use #kill (or #exit) to end the thread
    @thread.kill
  end
end
Have you considered the Daemons gem?