When I write tests that involve subscribing to events on the EventStream, or watching actors and listening for Terminated, the tests work fine when run one by one, but they fail when I run the whole test suite.
The tests also pass if each of them is placed in a separate test class with xUnit.
How come?
A repo with those kinds of tests: https://github.com/Lejdholt/AkkaTestError
Took a look at your repository. I can reproduce the problems you are describing.
It feels like a bug in the TestKit, some timing issue somewhere, but it's hard to pin down.
Also, not all unit test frameworks are created equal. The TestKit uses its own TaskDispatcher to enable testing of operations that are normally inherently asynchronous. This sometimes causes conflicts with the test framework being used, and is coincidentally also why Akka.NET moved to xUnit for its own CI process.
I have managed to fix your problem by not using the TestProbe. I'm not sure whether the problem lies with the TestProbe per se, or with the fact that you were using a global reference (your 'process' variable). I suspect that the test framework, while running tests in parallel, might be causing some weird things to happen with your TestProbe reference.
Example of how I changed one of your tests:
[Test]
public void GivenAnyTime_WhenProcessTerminates_ShouldLogStartRemovingProcess()
{
    IProcessFactory factory = Substitute.For<IProcessFactory>();
    var testactor = Sys.ActorOf<FakeActor>("test2");
    processId = Guid.NewGuid();
    factory.Create(Arg.Any<IActorRefFactory>(), Arg.Any<SupervisorStrategy>()).Returns(testactor);

    manager = Sys.ActorOf(Props.Create(() => new Manager(factory)));
    manager.Tell(new StartProcessCommand(processId));

    EventFilter.Info("Removing process.")
               .ExpectOne(() => Sys.Stop(testactor));
}
It should be fairly self-explanatory how to change your other tests accordingly.
The FakeActor is nothing more than an empty ReceiveActor implementation.
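For reference, a minimal sketch of what that FakeActor could look like (the answer only says it is an empty ReceiveActor, so everything beyond that is an assumption):

using Akka.Actor;

// Empty stand-in actor: it registers no handlers and does nothing.
// Its only job is to give the test a real IActorRef that Sys.Stop can
// terminate, which is what triggers the Manager's removal logic.
public class FakeActor : ReceiveActor
{
}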
I have very simple test cases (ScalaTest, but that doesn't matter), and I provide two implementations for accessing some resources; these methods return either a Try or some case class instance.
Test cases:
"ResourceLoader" must {
  "successfully initialize resource" in {
    // async code test
    noException should be thrownBy Await.result(
      ResourceLoader.initializeRemoteResourceAsync(credentials, networkConfig),
      Duration.Inf)
  }

  "successfully sync initialize remote resources" in {
    noException should be thrownBy ResourceLoader.initializeRemoteResource(credentials, networkConfig)
  }
}
These tests exercise different pieces of code that access some remote resource.
Sync version:
def initializeRemoteResource(credentials: Credentials, absolutePathToNetworkConfig: String): Resource = {
  // some code accessing a remote server
}
Async version:
def initializeRemoteResourceAsync(credentials: Credentials, absolutePathToNetworkConfig: String): Future[Try[Resource]] = {
  Future {
    // the same code as in the sync version
  }
}
In the IDEA test tab I see that the Future-based version is twice as slow as the sync version. My question: is there overhead in calling Await.result explicitly? If not, why does it slow down the execution? Appreciate any help, thanks.
Note: I know this is not the best way to measure the performance of a production system, but it at least says how much time was spent on each test case.
Yes, there will be a small overhead for Await.result, but in practice it probably doesn't amount to much. Future {} requires an ExecutionContext (a thread pool or thread creator) in implicit scope, so you won't be able to use it without importing the default execution context (which will simply spawn a thread) or some other context. If you're using the default execution context, for example, you will have two threads instead of one, which involves some overhead for context switching. It shouldn't be much, though. If 'twice as slow' means 40 ms instead of 20 ms, then perhaps it's not worth worrying about.
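To make the ExecutionContext point concrete, here is a minimal sketch (loadResource is a made-up stand-in for the real remote call):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
// Without this import (or another ExecutionContext in implicit scope),
// Future { ... } will not even compile.
import scala.concurrent.ExecutionContext.Implicits.global

object Timing extends App {
  def loadResource(): String = "resource" // stand-in for the real work

  // Sync: runs on the calling thread, no hand-off.
  val sync = loadResource()

  // Async: the body runs on a pool thread, so Await.result also pays for
  // scheduling the task and waking the waiting thread back up.
  val async = Await.result(Future { loadResource() }, Duration.Inf)
}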
I'm trying to capture the output written by each task as it executes. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks run in parallel it also picks up output written by other tasks running simultaneously.
The API documentation states the following about the getLogging method on Task. From what it says, I judge that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task. https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
    task.ext.capturedOutput = []
    def listener = { task.capturedOutput << it } as StandardOutputListener

    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)

    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages, since you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map keyed by the listener object itself, so you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth filing an issue to get some clarity on whether Gradle should support per-task listeners. Alternatively, raise it on the dev mailing list.
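In the meantime, one hedged workaround sketch: attach and detach the listeners around each task's execution with a TaskExecutionListener, so the capture window matches the task's own run. Since the underlying renderer is still shared, this may reduce rather than eliminate the cross-talk under parallel execution:

import org.gradle.api.execution.TaskExecutionListener
import org.gradle.api.logging.StandardOutputListener
import org.gradle.api.tasks.TaskState

def outputListeners = [:]

gradle.addListener(new TaskExecutionListener() {
    void beforeExecute(Task task) {
        // Attach just before the task runs, not at configuration time.
        task.ext.capturedOutput = []
        def listener = { task.capturedOutput << it } as StandardOutputListener
        outputListeners[task] = listener
        task.logging.addStandardOutputListener(listener)
        task.logging.addStandardErrorListener(listener)
    }

    void afterExecute(Task task, TaskState state) {
        // Detach as soon as the task finishes.
        def listener = outputListeners.remove(task)
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
})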
Why would this be happening? I have no code in my test:
[TestClass]
public class ApiClientTest
{
    [TestMethod]
    public void ApiTestSampleUseCase()
    {
        TestCleanup();
    }

    public void TestCleanup()
    {
        //DeleteUsers();
    }
}
Yet when I run this, it takes ~1 minute to complete. It passes; it just takes forever. I set a breakpoint after DeleteUsers and it gets hit immediately; the delay comes after the test has completed.
Check your bin directory - for some incomprehensible reason, MSTest copies all referenced assemblies to a new directory every time you run the tests. (These are safe to delete, though.) Maybe it does something else as well that causes it to slow down after many runs... According to this answer and this documentation, you can disable this behaviour.
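If you go the settings-file route, here is a rough sketch of a .testsettings file with deployment turned off (the file name and id are arbitrary placeholders; the Deployment element is the relevant part):

<?xml version="1.0" encoding="UTF-8"?>
<!-- NoDeployment.testsettings: select it via Test > Test Settings in Visual Studio. -->
<TestSettings name="NoDeployment" id="c0ffee00-0000-0000-0000-000000000001"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- Stops MSTest from copying referenced assemblies into a fresh deployment folder on every run. -->
  <Deployment enabled="false" />
</TestSettings>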
I'd recommend switching to NUnit if possible, preferably in combination with FluentAssertions (due to the not insignificant number of weird design decisions in MSTest, such as this behaviour, the lack of easy-to-use parameterized tests, and the fact that a test will get run even if [TestInitialize] throws an exception).
I'm trying out the whole BDD approach and would like to test the AMQP-based aspect of a vanilla Ruby application I am writing. After choosing Minitest as the test framework for its balance of features and expressiveness as opposed to other aptly-named vegetable frameworks, I set out to write this spec:
# File ./test/specs/services/my_service_spec.rb

# Requirements for test running and configuration
require "minitest/autorun"
require "./test/specs/spec_helper"

# External requires
# Minitest Specs for EventMachine
require "em/minitest/spec"

# Internal requirements
require "./services/distribution/my_service"

# Spec start
describe "MyService", "A Gateway to an AMQP Server" do
  # Connectivity
  it "cannot connect to an unreachable AMQP Server" do
    # This line breaks execution, commented out
    # include EM::MiniTest::Spec
    # ...
    # (abridged) Alter the configuration by specifying
    # an invalid host such as "l0c#alho$t" or such
    # ...
    # Try to connect and expect to fail with an Exception
    MyApp::MyService.connect.must_raise EventMachine::ConnectionError
  end
end
I have commented out the inclusion of the em-minitest-spec gem's functionality, which should coerce the spec to run inside the EventMachine reactor; if I include it, I run into an even sketchier exception regarding (I suppose) inline classes and such: NoMethodError: undefined method 'include' for #<#<Class:0x3a1d480>:0x3b29e00>.
The code I am testing against, namely the connect method within that service, is based on this article and looks like this:
# Main namespace
module MyApp
  # Gateway to an AMQP Server
  class MyService
    # External requires
    require "eventmachine"
    require "amqp"

    # Main entry method, connects to the AMQP Server
    def self.connect
      # Add debugging, spawn a thread
      Thread.abort_on_exception = true
      begin
        @em_thread = Thread.new {
          begin
            EM.run do
              @connection = AMQP.connect(@settings["amqp-server"])
              AMQP.channel = AMQP::Channel.new(@connection)
            end
          rescue
            raise
          end
        }
        # Fire up the thread
        @em_thread.join
      rescue Exception
        raise
      end
    end # method connect
  end # class MyService
end # module MyApp
The whole "exception handling" is merely an attempt to bubble the exception out to a place where I can catch/handle it, that didn't help either, with or without the begin and raise bits I still get the same result when running the spec:
EventMachine::ConnectionError: unable to resolve server address, which actually is what I would expect, yet Minitest doesn't play well with the whole reactor concept and fails the test on ground of this Exception.
The question then remains: how does one test EventMachine-related code using Minitest's spec mechanisms? Another question regarding Cucumber has been hovering around as well, also unanswered.
Or should I focus on my main functionality (e.g. messaging, and seeing if the messages get sent/received) and forget about edge cases? Any insight would truly help!
Of course, it can all come down to the code I wrote above; maybe it's not the way one goes about writing/testing these aspects. Could be!
Notes on my environment: ruby 1.9.3p194 (2012-04-20) [i386-mingw32] (yes, Win32 :>), minitest 3.2.0, eventmachine (1.0.0.rc.4 x86-mingw32), amqp (0.9.7)
Thanks in advance!
Sorry if this response is too pedantic, but I think you'll have a much easier time writing the tests and the library if you distinguish between your unit tests and your acceptance tests.
BDD vs. TDD
Be careful not to confuse BDD with TDD. While both are quite useful, it can lead to problems when you try to test every edge case in an acceptance test. For example, BDD is about testing what you're trying to accomplish with your service, which has more to do with what you're doing with the message queue than connecting to the queue itself. What happens when you try to connect to a non-existent message queue fits more into the realm of a unit test in my opinion. It's also worth pointing out that your service shouldn't be responsible for testing the message queue itself, since that's the responsibility of AMQP.
BDD
While I'm not sure what your service is supposed to do exactly, I would imagine your BDD tests should look something like:
start the service (can do this in a separate thread in the tests if you need to)
write something to the queue
wait for your service to respond
check the results of the service
In other words, BDD (or acceptance tests, or integration tests, however you want to think about them) can treat your app as a black box that is supposed to provide certain functionality (or behavior). The tests keep you focused on your end goal, but are more meant for ensuring one or two golden use cases, rather than the robustness of the app. For that, you need to break down into unit tests.
TDD
When you're doing TDD, let the tests guide you somewhat in terms of code organization. It's difficult to test a method that creates a new thread and runs EM inside that thread, but it's not so hard to unit test either of these individually. So, consider putting the main thread code into a separate function that you can unit test separately. Then you can stub out that method when unit testing the connect method. Also, instead of testing what happens when you try to connect to a bad server (which tests AMQP), you can test what happens when AMQP throws an error (which is your code's responsibility to handle). Here, your unit test can stub out the response of AMQP.connect to throw an exception.
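A rough sketch of what that refactor and unit test could look like (the start_reactor method name and the error message are made up for illustration):

require "minitest/autorun"
require "minitest/mock" # provides Object#stub

# Hypothetical refactor: the reactor start-up lives in its own method,
# so connect can be unit tested without a real EventMachine reactor.
module MyApp
  class MyService
    def self.start_reactor
      # The EM.run / AMQP.connect wiring would go here in the real code.
    end

    def self.connect
      em_thread = Thread.new { start_reactor }
      em_thread.join # Thread#join re-raises whatever killed the thread
    end
  end
end

describe "MyApp::MyService" do
  it "propagates an error raised inside the reactor thread" do
    # Stub out the reactor so the test exercises only our error handling.
    MyApp::MyService.stub(:start_reactor, -> { raise "unable to resolve server address" }) do
      proc { MyApp::MyService.connect }.must_raise RuntimeError
    end
  end
end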
I'm testing ReactiveUI, and it seems very nice.
However, I am a bit puzzled by the MessageBus.
Sample code :
var bus = new MessageBus();
int result = -1;
bus.Listen<int>().Subscribe(x => result = x);
bus.SendMessage(42);
It does work when verifying with an Assert statement, but in a standard WPF application the result value is never updated. This is probably due to the Scheduler implementation, but it's not quite clear to me yet.
Any hint is welcome.
The result is eventually updated (the same as calling Dispatcher.BeginInvoke), not immediately. By default, RxUI schedules things differently in a unit test runner to make it easier to write unit tests - that's why you see that warning in the unit test runner output.
If you were to instead do something like:
var bus = new MessageBus();
bus.Listen<int>().Subscribe(x => MessageBox.Show("The answer is " + x));
bus.SendMessage(42);
You will see the Message Box (if not, it's definitely a bug!).
Why is the MessageBus deferred? It makes it easier to write async code, since you can now SendMessage from other threads without seeing WPF's dreaded InvalidOperationException due to accessing objects on the wrong thread.
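A small sketch of that cross-thread case (assuming WPF code-behind with a hypothetical TextBlock named ResultText; the deferred MessageBus marshals delivery back to the UI thread):

using System.Threading.Tasks;
using ReactiveUI;

// Inside a WPF window's code-behind.
var bus = new MessageBus();

// The subscription touches a UI element, which is only legal on the UI thread.
bus.Listen<int>().Subscribe(x => ResultText.Text = "The answer is " + x);

// Publishing from a background thread would normally risk WPF's
// InvalidOperationException; the deferred MessageBus instead schedules
// the delivery onto the dispatcher.
Task.Run(() => bus.SendMessage(42));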