We are using Karate Runtime Hooks for reporting test cases to Xray; the issue is that test cases are not getting executed while using hooks, even though the hook code ran [duplicate] - runtime

I see strange behaviour when I call a feature file for test cleanup using the afterFeature hook. The cleanup feature file is called correctly, because I can see the print from the Background section of the file, but for some reason the execution hangs at the Scenario Outline.
I have tried running the feature with a JUnit 5 runner and also in the IntelliJ IDE by right-clicking the feature file, but I get the same issue: the execution hangs.
This is my main feature file:
Feature: To test afterFeature hook
Background:
* def num1 = 100
* def num2 = 200
* def num3 = 300
* def dataForAfterFeature =
"""
[
{"id":'#(num1)'},
{"id":'#(num2)'},
{"id":'#(num3)'}
]
"""
* configure afterFeature = function(){ karate.call('after.feature'); }
Scenario: Test 1
* print 'Hello World 1'
Scenario: Test 2
* print 'Hello World 2'
The afterFeature file:
@ignore
Feature: Called after calling feature run is completed
Background:
* def dynamicData = dataForAfterFeature
* print 'dynamicData: ' + dynamicData
Scenario Outline: Print dynamic data
* print 'From after feature for id: ' + <id>
Examples:
| dynamicData |
The execution stalls at the Scenario Outline. I can see the printed value of the dynamicData variable in the console, but nothing happens after that.
It seems the outline loop never starts, or has crashed. I was not able to get details from the log because the test never finishes and no error is reported. What else can I check, or what might be the issue?
If this is not easily reproducible, what test cleanup workaround do you recommend?

For now, I have used the following workaround: I added a test clean-up scenario at the end of the feature that contains the tests. I have stopped parallel execution for these tests, and to be honest I do not mind them not running in parallel, as they are fast to run anyway.
Ids to delete:
* def idsToDelete =
"""
[
101,
102,
103
]
"""
Test clean up scenario:
# Test data clean-up scenario
Scenario: Delete test data
# Js method to call delete data feature.
* def deleteTestDataFun =
"""
function(x) {
var temp = [x];
// Call to feature. Pass argument as json object.
karate.call('delete-test-data.feature', { id: temp });
}
"""
* karate.forEach(idsToDelete, deleteTestDataFun)
This calls the delete-test-data feature and passes it each id (wrapped in a list) that needs to be deleted.
Delete test data feature:
Feature: To delete test data
Background:
* def idVal = id
Scenario: Delete
Given path 'tests', 'delete', idVal
Then method delete

Yeah, I personally recommend a strategy of always pre-cleaning up, because you cannot guarantee that an "after" hook gets called, e.g. if the machine is switched off.
Sometimes the simplest option is to do this as plain old Java code in your JUnit test-suite. So maybe one line after using the Runner is sufficient.
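For example, a minimal sketch of that idea (the classpath and the cleanUpTestData() method are placeholders, and the Runner builder options may differ slightly between Karate versions):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ExamplesTest {

    @Test
    void testParallel() {
        Results results = Runner.path("classpath:examples").parallel(5);
        cleanUpTestData(); // the "one line after" the Karate run
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    // plain old Java clean-up; what goes in here is up to you (JDBC, an HTTP client, etc.)
    static void cleanUpTestData() {
        // e.g. delete the rows or entities the tests created
    }
}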
It gets tricky if you need to keep track of dynamic data that your tests have created. What I would do is write a Java singleton, use it in your tests to "collect" the IDs that need to be deleted, and then use this in your JUnit class. You can use things like @AfterClass.
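A rough sketch of that idea (the class and method names are made up for illustration; @AfterClass is JUnit 4, the JUnit 5 equivalent is @AfterAll):
import java.util.ArrayList;
import java.util.List;

// singleton the tests use to "collect" the ids they create
public class CreatedIds {

    public static final CreatedIds INSTANCE = new CreatedIds();

    private final List<Integer> ids = new ArrayList<>();

    private CreatedIds() {
    }

    public synchronized void add(int id) {
        ids.add(id);
    }

    public synchronized List<Integer> all() {
        return new ArrayList<>(ids);
    }
}

// in the JUnit class:
@AfterClass
public static void cleanUp() {
    for (Integer id : CreatedIds.INSTANCE.all()) {
        // delete the record for this id, e.g. via an HTTP client or JDBC
    }
}
From a feature file the ids can be pushed into the singleton via Karate's Java interop, e.g. * eval Java.type('CreatedIds').INSTANCE.add(response.id) (the package and the response shape here are assumptions).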
Please try and replicate using the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - because this can indeed be a bug with Scenario Outline.
Finally, you can evaluate ExecutionHook which has an afterSuite() callback: https://github.com/intuit/karate/issues/970#issuecomment-557443551
EDIT: in 1.0 - it has become RuntimeHook: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks

Related

Customising Junit5 test output via Gradle

I'm trying to output BDD-style text from my JUnit tests, like the following:
Feature: Adv Name Search
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
When running in the IntelliJ IDE this displays nicely, but when running under Gradle nothing is displayed. I did some research and enabled the showStandardStreams boolean for the test task, i.e.
In my build.gradle file I've added ...
test {
useJUnitPlatform()
testLogging {
showStandardStreams = true
}
}
This produces ...
> Task :test
Adv Name Search STANDARD_OUT
Feature: Adv Name Search
Tests the advanced name search feature in IMDB
Adv Name Search > Search by name v1 STANDARD_OUT
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
... which is pretty close, but I don't really want to see the output from Gradle (the lines with STANDARD_OUT plus the extra blank lines), such as:
Adv Name Search STANDARD_OUT
Is there a way to not show the additional Gradle logging in the test section?
Or maybe my tests shouldn't be using System.out.println at all, but rather proper logging (e.g. log4j) plus Gradle config to display these?
Any help / advice is appreciated.
Update (1)
I've created a minimal reproducible example at https://github.com/bobmarks/stackoverflow-junit5-gradle if anyone wants to quickly clone it and run ./gradlew clean test.
You can replace your test { … } configuration with the following to get what you need:
test {
useJUnitPlatform()
systemProperty "file.encoding", "utf-8"
// print each test's standard output ourselves, without Gradle's STANDARD_OUT header lines
onOutput { descriptor, event ->
if (event.destination == TestOutputEvent.Destination.StdOut) {
logger.lifecycle(event.message.replaceFirst(/\s+$/, ''))
}
}
}
See also the docs for onOutput.
FWIW, I had originally posted the following (incomplete) answer, which turned out to focus on the wrong approach of configuring the test logging:
I doubt that this is possible. Let me try to explain why.
Looking at the code which produces the lines that you don’t want to see, it doesn’t seem possible to simply configure this differently:
Here’s the code that runs when something is printed to standard out in a test.
The method it calls next unconditionally adds the test descriptor and event name (→ STANDARD_OUT) which you don’t want to see. There’s no way to switch this off.
So how standard output is logged can probably not be changed.
What about using a proper logger in the tests, though? I doubt that this will work either:
Running tests basically means running some testing tool – JUnit 5 in your case – in a separate process.
This tool doesn’t know anything/much about who runs it; and it probably shouldn’t care either. Even if the tool should provide a logger or if you create your own logger and run it as part of the tests, then the logger still has to print its log output somewhere.
The most obvious “somewhere” for the testing tool process is standard out again, in which case we wouldn’t win anything.
Even if there was some interprocess communication between Gradle and the testing tool for exchanging log messages, then you’d still have to find some configuration possibility on the Gradle side which configures how Gradle prints the received log messages to the console. I don’t think such configuration possibility (let alone the IPC for log messages) exists.
One thing that can be done is to set the displayGranularity property in the testLogging options.
From the documentation:
"The display granularity of the events to be logged. For example, if set to 0, a method-level event will be displayed as 'Test Run > Test Worker x > org.SomeClass > org.someMethod'. If set to 2, the same event will be displayed as 'org.someClass > org.someMethod'."

How does mocking work with the mocha gem?

I am new to the mocha gem; before that I was using minitest to test my product. Then I came across a situation where my application publishes jobs to Facebook. It selects some jobs and then publishes them on Facebook.
So somebody told me to use mocking, and I found the mocha gem.
I saw a sample test:
def test_mocking_an_instance_method_on_a_real_object
job = Job.new
job.expects(:save).returns(true)
assert job.save
end
But I did not get the idea. In my jobs controller I have validations, and an empty job cannot be saved successfully. But here, with mocking, the above test asserts that the job can be saved without the mandatory fields. So what exactly do we test in the above test case?
It is generally good practice, for several reasons:
From an isolation point of view: the responsibility of the controller is to handle the incoming request and trigger actions accordingly. In our given case the actions are: create a new Job, and issue a new post to Facebook if everything fits. (Notice that our controller doesn't need to know how to post to FB.)
So imagine the following controller action:
def create
job = Job.new job_params
if job.save
FacebookService.post_job job
...
else
...
end
end
I would test it like:
class JobsControllerTest < ActionController::TestCase
test "should create a job and issue new FB post" do
job_params = { title: "Job title" }
# We expect the post_job method will be called on the FacebookService class or module, and we replace the original implementation with an 'empty/mock' method that does nothing
FacebookService.expects :post_job
post :create, job_params
assert_equal(Job.count, 1) # or similar
assert_response :created
end
end
The other advantage is that FacebookService.post_job might take significant time and might require internet access etc.; we don't want our tests to depend on those, especially if we have CI.
And finally, I would test the real FB posting in the FacebookService test, and maybe stub out some other method, to prevent posting on FB every single time the test runs (it needs time, internet, an FB account...).

jenkins-pipeline load scoping "method code too large"

I'm setting up a pretty complex pipeline to handle legacy builds.
There are currently 8 stages, and more on the way - perhaps a total of 12-15 stages.
Each stage does pretty similar actions:
- take a list, and for each item
- create an entry in a map that
- allocates a node, and executes a set of BAT scripts (yes, Windows)
and then run the list in parallel.
The current pipeline is about 1,000 lines long, and I'm getting the "method code too large" error.
I'm in the process of refactoring this DSL into separate loadable scripts.
So far, so good.
But I just ran a test that indicates loading a script is additive to the overall pipeline. So I'd like to learn what is best to do here.
Test:
base.groovy:
def myVar // which is global (to basRef, I thought)
def setTest() { myVar = 'abc' }
def getTest() { return myVar }
return this // so that load() returns this script object
pipeline.groovy:
stage('one') {
def basRef = load('base.groovy')
basRef.setTest()
echo basRef.getTest()
}
stage('two') {
def basRef = load('base.groovy')
echo basRef.getTest()
}
stage one shows "abc" as expected.
stage two ALSO SHOWS "abc"
My ask:
How do I know that using loadable files will not result in "method code too large"?
What is the scope of a loadable file?
I've tried setting basRef = null to allow garbage collection to work, but I'm not sure it does.
Thanks for any guidance on this.

Customizing the tests in Go

I have a scenario running test cases in Go where I know that a test file, e.g. first_test.go, will only pass after a second or third attempt,
assuming that it is opening a connection to a database, calling a REST service, or some other typical scenario.
I was going through the options available for the go test command, but there is no parameter for multiple tries.
Is there any way of implementing retries for a file, or of calling a method from a shared file that retries 3-4 times, like for this typical file scenario:
func TestTry(t *testing.T) {
//Code to connect to a database
}
One idiom is to use build tags. Create a special test file only for integration tests and add:
// +build integration

package mypackage

import "testing"
Then, to run the integration tests, run:
go test -tags=integration
And then you can add the retry logic:
// +build integration

package mypackage

import (
	"flag"
	"testing"
)

// flag name and default are assumed for illustration
var maxAttempts = flag.Int("attempts", 3, "number of attempts")

func TestMeMaybe(t *testing.T) {
	for i := 0; i < *maxAttempts; i++ {
		innerTest() // the actual test logic the author wants to retry
	}
}
No, this would be very strange: what good is a test if it randomly succeeds only sometimes?
Why don't you do the "trying" yourself inside the test? The real test either passes or fails, and you handle the knowledge of "I need to 'try' calling this external resource n times to wake it up" inside it.
That's not the way tests are meant to work: a test is there to tell you whether your code is working as expected, not whether an external resource is available.
The simplest way to do it when using an external resource (a web service or API, for example) is to mock out its functionality by making fake calls that return a valid response, and then run your code on that. Then you will be able to test your code.

RSpec - How to mock a stored procedure

Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
query = "SELECT code FROM get_supported_locales();"
res = ActiveRecord::Base.connection.execute(query)
res.values.flatten
end
I'm trying to write a test for this method but I'm getting some problems while mocking:
it "should list an intersection of locales available on the app and on last fm" do
res = mock(PG::Result)
res.should_receive(:values).and_return(['en', 'pt'])
ActiveRecord::Base.connection.stub(:execute).and_return(res)
Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking wrong?
The database is Postgres 9.1.
Your test is running using database level transactions. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then set up to use transactions through DatabaseCleaner by default so your existing tests don't change, and then in this one test choose to disable transactions in favor of some other strategy (such as the null strategy since there is no cleaning to be done for this test).
This other SO post indicates you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well; I have not tried that myself, though.
