What is the best way to simulate no Internet connection within a Cucumber test? - ruby

Part of my command-line Ruby program involves checking if there is an internet connection before any commands are processed. The actual check in the program is trivial (using Socket::TCPSocket), but I'm trying to test this behaviour in Cucumber for an integration test.
The code:
require 'socket'

def self.has_internet?(force = nil)
  return force unless force.nil?
  begin
    TCPSocket.new('www.yelp.co.uk', 80)
    true
  rescue SocketError
    false
  end
end

if has_internet? == false
  puts("Could not connect to the Internet!")
  exit 2
end
The feature:
Scenario: Failing to log in due to no Internet connection
  Given the Internet is down
  When I run `login <email_address> <password>`
  Then the exit status should be 2
  And the output should contain "Could not connect to the Internet!"
I obviously don't want to change the implementation to fit the test, and I require all my scenarios to pass. Clearly if there is actually no connection, the test passes as it is, but my other tests fail as they require a connection.
My question: How can I test for this in a valid way and have all my tests pass?

You can stub your has_internet? method to return false in the implementation of the Given the Internet is down step.
YourClass.stub!(:has_internet?).and_return(false)
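A minimal sketch of that step definition, assuming RSpec's stubbing is pulled into the Cucumber World via the cucumber/rspec/doubles shim and that the method lives on a class called YourClass (substitute your own). Note this only works when the checked code runs in the same process as the steps:
require 'cucumber/rspec/doubles' # makes RSpec stubbing available inside steps

Given(/^the Internet is down$/) do
  # stub! is the older RSpec syntax used above; on current RSpec write:
  #   allow(YourClass).to receive(:has_internet?).and_return(false)
  YourClass.stub!(:has_internet?).and_return(false)
end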

There are three alternative solutions I can think of:
have the test temporarily monkeypatch TCPSocket.initialize (or maybe Socket#connect, if that's where it ends up) to pretend the internet is down.
write (suid) a script that adds/removes an iptables firewall rule to disable the internet, and have your test call the script
use LD_PRELOAD on a specially written .so shared library that overrides the connect C call. This is harder.
Myself, I would probably try option 1, give up after about 5 minutes and go with option 2.
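A sketch of option 1, assuming the code under test runs in the same process as the step definitions (if the When I run step spawns a separate process, the patch will not reach it). TCPSocket.new is swapped for a version that always fails and restored afterwards:
require 'socket'

Given(/^the Internet is down$/) do
  # Replace TCPSocket.new with a version that always fails, keeping the
  # original around so it can be restored after the scenario.
  TCPSocket.singleton_class.class_eval do
    alias_method :real_new, :new
    def new(*)
      raise SocketError, 'simulated network outage'
    end
  end
end

After do
  # Undo the monkeypatch so other scenarios get real connectivity back.
  TCPSocket.singleton_class.class_eval do
    alias_method :new, :real_new if method_defined?(:real_new)
  end
end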

Maybe a bit late for you :), but have a look at
https://github.com/mmolhoek/vcr-uri-catcher
I made this to test network failures, so it should do the trick for you.

Related

Preferred way to fork / start subprocesses in Cucumber

Let's say I have this scenario:
Scenario: Test LDAP access
  Given that the LDAP dummy server is started
  And the LDAP query is executed
  ...
I wish to start an LDAP server in that step. In my case, I use ruby-ldapserver, so I could, in theory, do this in my step:
args = { ... }
@ldap_pid = fork do
  redirect_stdout_stderr_to_logfile()
  wait_for_ldap_requests(args)
  exit # avoid messing with Cucumber/web driver cleanup
end
...
After do
  if @ldap_pid
    Process.kill("HUP", @ldap_pid)
    Process.wait @ldap_pid
  end
end
A totally different approach:
system("some_script_that_starts_ldap_dummy < #{input} >#{tmpfile} 2>&1 &")
This certainly works but is rather inelegant (starting a Ruby program from inside Ruby means unnecessary process creation, and I need to set up the input parameters for that subprogram as well).
All that said, I'm not altogether happy with either approach (the "warm fuzzy feeling" is not there).
What is your standard approach to these things? Is there one to speak of? Does Cucumber bring something to the table that could support me here? Should I run something to tell Cucumber that it has forked and should handle itself like a child process?
Edit: actually, when playing around with the fork approach, I did not notice any problems with the DB at all. I did notice that if I kill the child with SIGINT, it will break the web driver (Poltergeist / PhantomJS in my case). A functioning workaround is to send a SIGHUP, handle it in the child by shutting down gracefully (if needed) but not calling exit; and then, after a few seconds, a SIGKILL (which denies the child any chance to close down any protocols and just rips it away). Not nice... and not free of race conditions, say if the CI server should be under load.
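For reference, that workaround in hook form (signal choreography as described above; the fixed sleep is exactly the race-prone part):
After do
  if @ldap_pid
    Process.kill("HUP", @ldap_pid)    # child traps HUP, shuts down gracefully without exit
    sleep 3                           # grace period; racy if the CI server is under load
    begin
      Process.kill("KILL", @ldap_pid) # rip away whatever is left of the child
    rescue Errno::ESRCH
      # child already exited during the grace period
    end
    Process.wait(@ldap_pid)
  end
end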

How to keep a persistent connection to SQL Server using Ruby Sequel and Tiny_TDS while in a loop

I have a ruby script that needs to run continually on the server. I've daemonized it using the daemon gem, and in my script I have it running in an infinite loop, since the daemon gem handles starting and stopping of the process that kicks off my script. In my script, I start out by setting up my DB instance using the Sequel gem and tiny_tds. Like so:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
Then I have a `loop do` block that is my infinite loop. Inside that, I test to see if I have a connection using DB.test_connection, and then I query the DB every second or so to check whether there is new content, using a query such as:
DB['SELECT * FROM dbo.[MyTable]'].all do |row|
  # My logic here
  # As part of my logic I test to see if I need to delete this row in the table, and if so I use
  DB.run('DELETE FROM dbo.[MyTable] WHERE some condition')
end
Then at the end of my logic, just before I loop again, I do:
sleep 1
DB.disconnect
All of this works great for about an hour to an hour and a half, with everything checking the table, doing the logic, deleting rows, etc., then it dies with this error message: TinyTds::Error: Adaptive Server connection timed out
My question: why is that happening? Do I need to structure my code in a different way? Why doesn't DB.test_connection do what it is advertised to do? The documentation says it checks for a connection in the connection pool, uses one if it finds it, and creates a new one otherwise.
Any help would be much appreciated.
DB.test_connection just acquires a connection from the connection pool; it doesn't check that the connection is still valid (it must have been valid at one point or it wouldn't be in the pool). There's no way to know that a connection is still valid without actually sending a query. You can use the connection_validator extension that ships with Sequel if you want to do that automatically.
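A sketch of wiring that up, reusing the connection setup from the question:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase,
                    user: MSSQLUser, password: MSSQLPassword)

# Validate connections as they are checked out of the pool; stale ones are
# discarded and replaced transparently. -1 means "validate on every checkout",
# the safest (and slightly slowest) setting.
DB.extension(:connection_validator)
DB.pool.connection_validation_timeout = -1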
If you are loading Sequel before forking, you need to make sure you call DB.disconnect before forking, otherwise you can end up with multiple forked processes sharing the same connection, which can cause many different issues.
I finally ended up just putting a rescue statement in there that caught this and re-ran my line of code to create the DB instance. Yes, it puts a warning in my log about already setting that constant, but I guess I could just make it not a constant and that would go away. Anyway, it appears to be working now, and the times it does time out, I'm recovering gracefully. I just wish I could have figured out why it was/is disconnecting like it is.

What's the correct way to check if a host is alive and handle timeouts efficiently?

I'm trying to check if a given host is up, running, and listening to a specific port, and to handle any errors correctly.
I found a number of references on Ruby socket programming, but none of them seems able to handle socket timeouts efficiently. I tried IO.select, which takes four parameters, the last of which is the timeout value:
IO.select([TCPSocket.new('example.com', 22)], [nil], [nil], 4)
The problem is, it gets stuck, especially if the port number is wrong or the server is not listening on it. So finally I ended up with this, which I didn't like that much, but it does the job:
require 'socket'
require 'timeout'

dns = "example.com"
begin
  Timeout::timeout(3) { TCPSocket.new(dns, 22) }
  puts "Responded!!"
  # do some stuff here...
rescue SocketError
  puts "No connection!!"
  # do some more stuff here...
rescue Timeout::Error
  puts "No connection, timed out!!"
  # do some other stuff here...
end
Is there a better way doing this?
The best test for availability of any resource is to try to use it. Adding extra code to try to predict ahead of time whether the use will work is bound to fail:
You test the wrong thing and get a different answer.
You test the right thing but at the wrong time, and the answer changes between the test and the use. Your application performs double the work for nothing, and you write redundant code.
The code you have to write to handle the test failure is identical to the code you should write to handle the use-failure. Why write that twice?
We make extensive use of Net::SSH in one of our systems, and ran into timeout issues.
Probably the biggest fix was to implement use of the select method to set a low-level timeout, rather than trying to use the Timeout class, which is thread-based.
"How do I set the socket timeout in Ruby?" and "Set socket timeout in Ruby via SO_RCVTIMEO socket option" have code to investigate for that. Also, one of those links to "Socket Timeouts in Ruby", which has useful code; be aware, though, that it was written for Ruby 1.8.6.
The version of Ruby can make a difference too. Pre-1.9, the threading wasn't capable of stopping a blocking IP session, so the code would hang until the socket timed out, and only then would the Timeout fire. Both of the above questions go over that.
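For completeness, here is a sketch of the select-based approach those links describe: a non-blocking connect plus IO.select gives a socket-level timeout without involving the thread-based Timeout class (host and port are just examples):
require 'socket'

# Returns true if `host` accepts a connection on `port` within `timeout`
# seconds, false otherwise (refused, unreachable, timed out, unresolvable).
def port_open?(host, port, timeout = 3)
  addr     = Socket.getaddrinfo(host, nil)[0][3] # numeric address of host
  sockaddr = Socket.pack_sockaddr_in(port, addr)
  socket   = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
  begin
    socket.connect_nonblock(sockaddr)
    true
  rescue IO::WaitWritable
    # Connect is in progress: block in select until writable or timed out.
    return false unless IO.select(nil, [socket], nil, timeout)
    begin
      socket.connect_nonblock(sockaddr) # harvest the result of the connect
      true
    rescue Errno::EISCONN
      true                              # connected on the first attempt
    rescue SystemCallError
      false                             # refused, reset, unreachable, ...
    end
  rescue SystemCallError
    false
  ensure
    socket.close
  end
rescue SocketError
  false                                 # name resolution failed
end

puts port_open?('example.com', 22) ? 'Responded!!' : 'No connection!!'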

Spec testing EventMachine-based (Reactor) Code

I'm trying out the whole BDD approach and would like to test the AMQP-based aspect of a vanilla Ruby application I am writing. After choosing Minitest as the test framework for its balance of features and expressiveness as opposed to other aptly-named vegetable frameworks, I set out to write this spec:
# File ./test/specs/services/my_service_spec.rb
# Requirements for test running and configuration
require "minitest/autorun"
require "./test/specs/spec_helper"
# External requires
# Minitest Specs for EventMachine
require "em/minitest/spec"
# Internal requirements
require "./services/distribution/my_service"
# Spec start
describe "MyService", "A Gateway to an AMQP Server" do
# Connectivity
it "cannot connect to an unreachable AMQP Server" do
# This line breaks execution, commented out
# include EM::MiniTest::Spec
# ...
# (abridged) Alter the configuration by specifying
# an invalid host such as "l0c#alho$t" or such
# ...
# Try to connect and expect to fail with an Exception
MyApp::MyService.connect.must_raise EventMachine::ConnectionError
end
end
I have commented out the inclusion of the em-minitest-spec gem's functionality, which should coerce the spec to run inside the EventMachine reactor; if I include it, I run into an even sketchier exception regarding (I suppose) inline classes and such: NoMethodError: undefined method 'include' for #<#<Class:0x3a1d480>:0x3b29e00>.
The code I am testing against, namely the connect method within that Service is based on this article and looks like this:
# Main namespace
module MyApp
  # Gateway to an AMQP Server
  class MyService
    # External requires
    require "eventmachine"
    require "amqp"

    # Main entry method, connects to the AMQP Server
    def self.connect
      # Add debugging, spawn a thread
      Thread.abort_on_exception = true
      begin
        @em_thread = Thread.new {
          begin
            EM.run do
              @connection = AMQP.connect(@settings["amqp-server"])
              AMQP.channel = AMQP::Channel.new(@connection)
            end
          rescue
            raise
          end
        }
        # Fire up the thread
        @em_thread.join
      rescue Exception
        raise
      end
    end # method connect
  end # class MyService
end # module MyApp
The whole "exception handling" is merely an attempt to bubble the exception out to a place where I can catch/handle it, that didn't help either, with or without the begin and raise bits I still get the same result when running the spec:
EventMachine::ConnectionError: unable to resolve server address, which actually is what I would expect, yet Minitest doesn't play well with the whole reactor concept and fails the test on ground of this Exception.
The question then remains: How does one test EventMachine-related code using Minitest's spec mechanisms? Another question has also been hovering around regarding Cucumber, also unanswered.
Or should I focus on my main functionality (e.g. messaging and seeing if the messages get sent/received) and forget about edge cases? Any insight would truly help!
Of course, it can all come down to the code I wrote above, maybe it's not the way one goes about writing/testing these aspects. Could be!
Notes on my environment: ruby 1.9.3p194 (2012-04-20) [i386-mingw32] (yes, Win32 :>), minitest 3.2.0, eventmachine (1.0.0.rc.4 x86-mingw32), amqp (0.9.7)
Thanks in advance!
Sorry if this response is too pedantic, but I think you'll have a much easier time writing the tests and the library if you distinguish between your unit tests and your acceptance tests.
BDD vs. TDD
Be careful not to confuse BDD with TDD. While both are quite useful, it can lead to problems when you try to test every edge case in an acceptance test. For example, BDD is about testing what you're trying to accomplish with your service, which has more to do with what you're doing with the message queue than connecting to the queue itself. What happens when you try to connect to a non-existent message queue fits more into the realm of a unit test in my opinion. It's also worth pointing out that your service shouldn't be responsible for testing the message queue itself, since that's the responsibility of AMQP.
BDD
While I'm not sure what your service is supposed to do exactly, I would imagine your BDD tests should look something like:
start the service (can do this in a separate thread in the tests if you need to)
write something to the queue
wait for your service to respond
check the results of the service
In other words, BDD (or acceptance tests, or integration tests, however you want to think about them) can treat your app as a black box that is supposed to provide certain functionality (or behavior). The tests keep you focused on your end goal, but are more meant for ensuring one or two golden use cases, rather than the robustness of the app. For that, you need to break down into unit tests.
TDD
When you're doing TDD, let the tests guide you somewhat in terms of code organization. It's difficult to test a method that creates a new thread and runs EM inside that thread, but it's not so hard to unit test either of these individually. So, consider putting the main thread code into a separate function that you can unit test separately. Then you can stub out that method when unit testing the connect method. Also, instead of testing what happens when you try to connect to a bad server (which tests AMQP), you can test what happens when AMQP throws an error (which is your code's responsibility to handle). Here, your unit test can stub out the response of AMQP.connect to throw an exception.
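As a rough sketch of that last point, assuming a recent Minitest's stub from minitest/mock and that connect surfaces the failure (as the thread join in the question's code intends); class and path names are taken from the question:
require "minitest/autorun"
require "minitest/mock"
require "./services/distribution/my_service"

describe "MyService error handling" do
  it "surfaces a connection error raised by AMQP.connect" do
    # Stand in for AMQP.connect with a lambda that fails immediately, so
    # no reactor, network, or broker is involved in this unit test.
    raiser = ->(*args) { raise EventMachine::ConnectionError, "simulated failure" }
    AMQP.stub(:connect, raiser) do
      assert_raises(EventMachine::ConnectionError) { MyApp::MyService.connect }
    end
  end
end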

Controlling Tor client with Ruby

I am writing a Ruby script which automatically crawls websites for data analysis, and now I have a requirement which is fairly complicated: I have to be able to simulate access from a variety of countries, about 20 different ones. The website will contain different information depending on the IP location, so the only way to get it done is to request it from a server which is actually in that country.
Since I don't want to buy servers in each of those 20 countries, I chose to give Tor a try - as many of you will know, by editing the torrc configuration file it is possible to specify the exit node and hence the country from which the actual request will originate.
When I do this manually, e.g. by editing the torrc file to use an Argentinian server, then disconnecting Tor using Vidalia, reconnecting Vidalia, and then rerunning the request, it works fine. However, I want to automate this process entirely, and do it as efficiently as possible. Tor is written in C, and I'd like to avoid taking apart its entire source code for this. Any idea of what's the easiest way to automate the whole process using only Ruby?
Also, if I'm missing something and there's a simpler alternative to this whole ordeal, let me know.
Thanks!
Please take a look at the Tor control protocol. You can control circuits using telnet.
http://thesprawl.org/memdump/?entry=8
To switch to a new circuit, which switches to a new endpoint:
require 'net/telnet'

def switch_endpoint
  localhost = Net::Telnet::new("Host" => "localhost", "Port" => "9051",
                               "Timeout" => 10, "Prompt" => /250 OK\n/)
  # raise (not throw) so a failed handshake surfaces as an error
  localhost.cmd('AUTHENTICATE ""') { |c| print c; raise "Cannot authenticate to Tor" if c != "250 OK\n" }
  localhost.cmd('signal NEWNYM')   { |c| print c; raise "Cannot switch Tor to new route" if c != "250 OK\n" }
  localhost.close
end
Be aware of the delay when making a new circuit; it may take a couple of seconds, so you'd better add a delay in the code, or check whether your address has changed by calling some remote IP detection site, as sketched below.
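Something along these lines, for the detection variant (icanhazip.com is just one example of an IP echo service; note that the request must itself be routed through Tor, e.g. through its SOCKS proxy via the socksify gem, for the answer to be meaningful):
require 'net/http'
require 'uri'

# Ask a remote echo service which IP our traffic appears to come from.
# NOTE: this only reflects the Tor exit node if the HTTP request goes
# through the Tor SOCKS proxy; plain Net::HTTP as written does not.
def current_exit_ip
  Net::HTTP.get(URI('http://icanhazip.com')).strip
end

old_ip = current_exit_ip
switch_endpoint

# Poll until the exit address changes; cap the attempts in real code.
sleep 2 until current_exit_ip != old_ip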
