100+ test cases are run in parallel in Go. How can I get the code coverage trace for every case?
In Java we can get this with ASM and AOP; is there any way to do it in Go?
I'm struggling to find a framework to help me test the performance of a service I am writing, which fronts a long-running process. A simplified description of the service is:
POST data to the service's /start endpoint; it returns a token.
GET the status of the action at /status/{token}; poll this until it returns a status of completed.
GET the results from /result/{token}.
I've dabbled with Locust.io, which is fine for measuring the responsiveness of the API, but does little for measuring the overall end-to-end performance. What I would really like to do is measure how long all three steps take to complete, particularly when I run many in parallel. I should imagine my service back end falls over far sooner than the REST API does.
Can anyone recommend any tools / libraries / frameworks I can use to measure this please? I would like to integrate it with my build pipeline so I can measure performance as code is changed.
Many thanks
The easiest option I can think of is Apache JMeter: it provides a Transaction Controller, which generates an additional "transaction" holding its children's cumulative response time (along with other metrics).
"Polling" can be implemented using a While Controller.
Example test plan outline:
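As a rough sketch (the endpoint names come from the question; the JSON Extractors and the __jexl3 condition are just one way to wire the token handling and the polling), the plan could look like this:

Test Plan
  Thread Group
    Transaction Controller "start-to-result"
      HTTP Request: POST /start
        JSON Extractor: save the returned token as ${token}
      While Controller, condition: ${__jexl3("${status}" != "completed")}
        HTTP Request: GET /status/${token}
          JSON Extractor: save the returned status as ${status}
        Constant Timer: polling interval
      HTTP Request: GET /result/${token}
  Aggregate Report (or Summary Report) listener

The Transaction Controller row in the report then shows the combined time of the POST, the whole polling loop and the final GET.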
I wonder how I could write Spring tests to assert a logic chain which is triggered by a SourcePollingChannelAdapter.
What comes to my mind:
use Thread.sleep(), which is a really bad idea for tests
have another, test-only version of the Spring context where all pollable channels are replaced with direct ones; this requires a lot of work
Are there any common ways to force-trigger the poller within a test?
Typically we use a QueueChannel in our tests and wait for the messages via its receive(10000) method. This way, independently of the source of the data, our test method's thread is blocked until data arrives.
The SourcePollingChannelAdapter is triggered by the TaskScheduler, so the whole flow logic runs on a thread separate from the test method; that is why your idea of replacing channels won't help. Thread.sleep() might have some value, but QueueChannel.receive(10000) is much more reliable, because we wait at most those 10 seconds.
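A minimal sketch of that approach (the context file name, the resultChannel bean and the flow behind it are placeholders for your own configuration):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.messaging.Message;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringRunner.class)
@ContextConfiguration("classpath:test-flow-context.xml") // declares the flow and a queue channel "resultChannel"
public class PollingFlowTest {

    @Autowired
    private QueueChannel resultChannel;

    @Test
    public void pollerDeliversMessage() {
        // Blocks the test thread until the poller has pushed data through the flow,
        // or returns null after 10 seconds.
        Message<?> received = resultChannel.receive(10000);
        assertNotNull("no message arrived within 10 seconds", received);
    }
}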
Another way to block the test case is the standard CountDownLatch, which you would countDown() somewhere in the flow and await in the test method.
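Continuing the sketch above, and assuming the flow's terminal channel is an autowired SubscribableChannel called outputChannel (a placeholder name), the latch variant looks roughly like this inside the test method:

// needs java.util.concurrent.CountDownLatch and java.util.concurrent.TimeUnit;
// the test method declares throws InterruptedException
CountDownLatch latch = new CountDownLatch(1);
// a test-only subscriber that releases the latch once a message reaches the end of the flow
outputChannel.subscribe(message -> latch.countDown());

// ... the poller picks up the data on the TaskScheduler thread ...

// wait at most 10 seconds for the flow to complete
assertTrue("flow did not complete within 10 seconds", latch.await(10, TimeUnit.SECONDS));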
There is one more way to test: have a loop with a short sleep between iterations that checks some condition on which to exit and verify. That may be useful when the poller ends up writing to a database: we would perform a SELECT in that loop until the desired state is reached, as in the sketch below.
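A sketch of that variant, assuming an autowired JdbcTemplate and a hypothetical ORDERS table that the flow updates (the test method declares throws InterruptedException):

long deadline = System.currentTimeMillis() + 10_000;
int processed = 0;
while (processed == 0 && System.currentTimeMillis() < deadline) {
    // check whether the flow has written the expected row yet
    processed = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM ORDERS WHERE STATUS = 'PROCESSED'", Integer.class);
    Thread.sleep(100); // short pause between checks
}
assertTrue("no row reached the PROCESSED state within 10 seconds", processed > 0);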
You can find some additional info in the Reference Manual.
How do I integrate JMeter test results with TestRail? Could anyone please help me with a small example?
You should use the TestRail API, that's true. It's a simple REST one.
To me it looks like you've got two options here.
1) Implement a custom Beanshell/JSR223 listener that picks up the sample result data and pushes it into TestRail while the test is running.
I wouldn't recommend that, though: it would slow JMeter down, consume resources, and may influence the recorded test results.
So you'd better opt for
2) Keep your Test Plan in JMeter lean, record the results (e.g. jmeter -n -t test.jmx -l test.jtl), and implement an auxiliary utility to ingest them, parse them, and push them into TestRail.
That way seems to be more effective & reliable, for obvious reasons.
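A rough sketch of such a utility (it assumes a CSV-format .jtl with the default header row, sampler labels that start with the TestRail case id such as "C1234 open order", and TestRail's add_result_for_case endpoint; the URL, run id and credentials are placeholders, and the naive comma split should be replaced with a real CSV parser if labels can contain commas):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

public class JtlToTestRail {

    private static final String TESTRAIL_URL = "https://example.testrail.io"; // placeholder
    private static final String RUN_ID = "1";                                 // placeholder
    private static final String AUTH = Base64.getEncoder()
            .encodeToString("user@example.com:api-key".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("test.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int labelIdx = header.indexOf("label");
        int elapsedIdx = header.indexOf("elapsed");
        int successIdx = header.indexOf("success");

        HttpClient client = HttpClient.newHttpClient();
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            // assumes sampler labels like "C1234 open order", where 1234 is the TestRail case id
            String caseId = cols[labelIdx].split(" ")[0].substring(1);
            int statusId = Boolean.parseBoolean(cols[successIdx]) ? 1 : 5; // 1 = Passed, 5 = Failed
            long seconds = Math.max(1, Long.parseLong(cols[elapsedIdx]) / 1000);
            String body = String.format(
                    "{\"status_id\": %d, \"comment\": \"JMeter sample\", \"elapsed\": \"%ds\"}",
                    statusId, seconds);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(TESTRAIL_URL + "/index.php?/api/v2/add_result_for_case/"
                            + RUN_ID + "/" + caseId))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic " + AUTH)
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(cols[labelIdx] + " -> HTTP " + response.statusCode());
        }
    }
}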
PS: Basically, there is a third way, especially if you need the results coming in live: stream them using a Backend Listener.
However, TestRail does not seem to have a streaming API out of the box, so you would have to write a kind of proxy that transforms the Graphite stream (for example) into a sequence of TestRail calls.
That's gonna be interesting, but apparently the trickiest of all solutions.
Using Ruby/Cucumber, I know you can explicitly call fail("message"), but what are your other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have Cucumber tests covering edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise those edge cases, they fail because the SQL statement returns an empty data set. Right now we just have those tests failing, but I would like to see something along the lines of "no_data" when the SQL statement returns an empty data set. So the output would look like:
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" if there is a skip("message") function.
What are my options so we can see that, with the current data, we just don't have any test data for those tests? Making these manual tests is also not an option. They need to be run every week with our automation, but somehow separately from the failures. A failure means a defect; no_data means it is not a testable condition. It's the difference between a warning ("we have not tested this edge case") and an alert ("broken code").
You can't invoke 'skipped', but you can certainly call pending, with or without a message. I've used this in a similar situation to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending because it did not match a step definition. That then got lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
I am using VS 2010 unit tests to create a load test.
BeginTimer/EndTimer are used to measure the response time.
On success, the timer works as expected.
If an error occurs in the application, I don't want the timer to record the response time, as this would throw off the analysis reports. As an example, an OpenOrder method may take 30 seconds on success, but on failure (e.g. order not found) OpenOrder might return in 2 seconds. I want the response time metrics to represent only the actions that were successful.
Alternatively, is there a way to filter out the timers/transactions that were not successful?
Is there another way to do this? Something else besides BeginTimer/EndTimer?
Code Snippet:
testContextInstance.BeginTimer("trans_01");
OpenOrder("123"); // Execute some code
// If the open fails, I want to disregard/kill the timer.
// If I do a 'return' here, never calling EndTimer, the timer
// is still ended and its response time recorded.
testContextInstance.EndTimer("trans_01");
This is a deficiency in the unit testing API. There is a similar lack in the web testing API. I too have wanted a way to disregard timers around failures, but I think we are out of luck here. I believe an enhancement request to Microsoft is needed.
There is (possibly) a (lengthy) workaround: add your own timer (using the Stopwatch class) that you can ignore/terminate at will, and also add the relevant code to insert a result row directly into the transactions table in the load test results database (asynchronously for best performance).
But that's awful. It would be much easier if the API simply offered a 'KillTimer' method.