Customizing the tests in Go

I have a scenario running test cases in Go where I know that a test file, e.g. first_test.go, will only pass after the second or third attempt,
assuming that it is connecting to a database, calling a REST service, or some other typical external dependency.
I went through the options available for the go test command, but there is no parameter for retries.
Is there any way of implementing retries for a file, or of calling a method from a shared file that retries a test 3-4 times, for a typical case like this:
func TestTry(t *testing.T) {
    // Code to connect to a database
}

One idiom is to use build tags. Create a special test file only for integration tests and add:

// +build integration

package mypackage

import "testing"

Then, to run the integration tests, run:

go test -tags=integration
Then you can add retry logic:

// +build integration

package mypackage

var maxAttempts = flag.Int(...)

func TestMeMaybe(t *testing.T) {
    for i := 0; i < *maxAttempts; i++ {
        innerTest()
    }
}

No, this would be very strange: what good is a test if it only succeeds sometimes?
Why not do the retrying yourself inside the test? The real test either passes or fails, and your code is what encapsulates the knowledge "I need to call this external resource n times to wake it up."
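As a minimal sketch of that idea (connectToDatabase, the retry count and the sleep are assumptions, not code from the question; the snippet assumes the testing and time packages are imported):

func TestDatabaseConnection(t *testing.T) {
    const maxAttempts = 3 // assumed retry budget
    var err error
    for i := 0; i < maxAttempts; i++ {
        // connectToDatabase stands in for whatever dials the external resource.
        if err = connectToDatabase(); err == nil {
            break
        }
        time.Sleep(time.Second) // give the resource time to wake up
    }
    if err != nil {
        t.Fatalf("could not connect after %d attempts: %v", maxAttempts, err)
    }
    // The real assertions go here; the test itself still simply passes or fails.
}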

That's not the way tests are meant to work: a test is there to tell you whether your code works as expected, not whether an external resource is available.
The simplest approach when using an external resource (a web service or API, for example) is to mock out its functionality with fake calls that return a valid response, and then run your code against that. Then you will be able to test your code.
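For illustration, a rough sketch of that approach, assuming the code under test accepts an interface (ProductFetcher, fakeFetcher and handleProduct are all made-up names, not part of the question):

// ProductFetcher is a hypothetical abstraction over the external service.
type ProductFetcher interface {
    Fetch(id string) (string, error)
}

// fakeFetcher returns canned data so the test never touches the network.
type fakeFetcher struct{}

func (fakeFetcher) Fetch(id string) (string, error) { return "fake product", nil }

func TestHandleProduct(t *testing.T) {
    // handleProduct is the (assumed) code under test, written against the interface.
    got, err := handleProduct(fakeFetcher{}, "42")
    if err != nil {
        t.Fatal(err)
    }
    if got != "fake product" {
        t.Errorf("unexpected result: %q", got)
    }
}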

Calling functions and variables in same package but different files with build tags

I'm setting up some integration testing, which I'm doing in a test package separate from my source code. This is done to prevent circular dependencies. Unit tests are not stored here; they are stored alongside the files they are testing.
My Go project hierarchy looks like:
cmd
public
...
testing/
    main_test.go
    database_test.go
In main_test.go, I plan to initialise the connections to external dependencies, such as my test database.

package tests

import (
    "os"
    "testing"
)

type Database struct {
    ...
}

var DB Database

func TestMain(m *testing.M) {
    SetUpDatabase()
    exitCode := m.Run()
    os.Exit(exitCode)
}
database_integration_test.go
func Test(t *testing.T) {
    tests := []struct {
        title string
        run   func(t *testing.T)
    }{
        {"should make query", testQuery},
    }
    for _, test := range tests {
        t.Run(test.title, func(t *testing.T) {
            test.run(t)
        })
    }
}

func testQuery(t *testing.T) {
    var r result.Result
    err := DB.database.DoQuery("").Decode(&r)
    if err != nil {
        t.Fatal(err)
    }
}
This setup works when I run it; however, I would like to add build tags to these files, of the form: // +build integration
However, as soon as I use the build tag, the database_integration_test.go file cannot see the initialised Database type. How can I stop this? Also, as a side note, should I change the name of main_test.go? I only called it that due to main being the standard entry point.
Firstly, regarding this:
Also as a side note, should I change the name of main_test.go. I only
called it that due to main being the standard entry point.
I think it is confusing to name it main_test.go, as it might indicate that you are testing the main function in this file (according to Go convention).
Secondly, regarding this:
However, as soon as I use the build tag, the
database_integration_test.go file cannot see the initialised Database
type. How can I stop this?
A build constraint, also known as a build tag, is used to include or exclude files in a package during the build process. With this, we can produce different builds from the same source code.
So if the Database type does not appear to be initialised, most probably the definition of the Database type and the integration test are tagged with different build tags. Make sure both files carry the same build tag. Also, I think you can use more than one build tag to label a file, so you can try that as well.
For more details on build tags, check out the article by Dave Cheney here.
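As an illustrative sketch (using the file names and code from the question), both files should start with the same constraint so they are compiled together when the tag is supplied:

testing/main_test.go:

// +build integration

package tests

// Database, DB and TestMain as above.

testing/database_integration_test.go:

// +build integration

package tests

// Test and testQuery as above. DB is visible here because both files
// are included in the build when running: go test -tags=integration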
You could simply add a flag to your TestMain:
var isIntegration bool

func init() {
    flag.BoolVar(&isIntegration, "mytest.integration", false, "set flag to set up DB for integration tests")
}

func TestMain(m *testing.M) {
    flag.Parse() // parse the custom flag before using it
    SetUpDatabase(isIntegration)
    //etc...
}
Then just have SetUpDatabase call different, unexported functions based on whether the argument is true or false. That'd be a quick way to get the behaviour without having to muck about with custom build constraints, especially considering you're running tests, not building the application as such.
As far as renaming main_test.go is concerned: I don't see why you'd need to change it. It does what it says on the tin. When someone else wants to see how the tests are structured and run, or what possible flags have been added, it's a lot easier to just check the directory and look for a main_test.go file (along with init.go, that'd be the first file I'd look for). Any other name like setup_integration_test.go, integration_setup_test.go, integration_start_test.go, ... is just going to muddy the waters.
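With the flag sketch above, the integration setup could then be enabled with something like the following (the package path and flag value are examples, not from the question; -args passes the remainder of the command line to the test binary):

go test ./testing/ -args -mytest.integration=true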

Rego test to filter by IP address

I am using a similar rule to this:
allow {
    http_request.method == "POST"
    allowed_paths[http_request.path]
    net.cidr_contains("XX.YYY.ZZZ.160/29-XX.YYY.ZZZ.32/29", source_address.Address.SocketAddress.address)
}
And I have two questions:
Is this the right way to filter by the IP address of the client which makes the request?
Is there some way to simulate the request from one of these IPs and test it?
Yes, net.cidr_contains is the right way to go if you know the specific blocks approved requests will originate from.
I assume your Rego looks something like this:
package validate

import input.attributes.request.http as http_request
import input.attributes.source.address as source_address

allowed_paths = {
    "/foo",
    "/bar"
}

allow {
    http_request.method == "POST"
    allowed_paths[http_request.path]
    net.cidr_contains("127.0.0.1/24", source_address.Address.SocketAddress.address)
}
There are a few ways to test.
Manually, you can use the Rego Playground which allows you to hand write requests and test them. This isn't a good automated solution, but will work for spot/sanity-checking.
For CI or precommit checks, you can use the opa CLI to do unit testing. The Gatekeeper Library repository provides excellent examples of how to do this. A test might look something like:
package validate

test_input_allowed_request {
    request := {"attributes": {"request": {"http": {"method": "POST", "path": "/foo"}}, "source": {"address": {"Address": {"SocketAddress": {"address": "127.0.0.64/26"}}}}}}
    allow with input as request
}
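Assuming the policy and the test above are saved as, say, validate.rego and validate_test.rego (the file names are just examples), they can be run with the OPA CLI in verbose mode:

opa test validate.rego validate_test.rego -v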

How to ignore a failed step in Cucumber so the next Then statements are not skipped

In Cucumber, suppose one of my Then statements fails; then all of my remaining Then statements are skipped for that scenario and Cucumber starts executing the next scenario ... Does anyone have a way to make Cucumber run the next step without skipping all the other Then statements for that scenario? Do we have any provision for this?
I am using Cucumber and Maven with Java.
This is a bad practice. If you feel the need for something like this, it only means that your Cucumber scenario is not written properly.
Having said that, if there is a step that is expected to fail, but its failure does not imply a failure of the whole scenario, you will have to implement some sort of "failsafe" workaround within your glue code, for example a try...catch clause that acknowledges the failure, perhaps logs it, but does not let the thrown exception fail the scenario.
Cucumber steps should not be polluted with internal logic.
If a step in a scenario fails, then the entire scenario fails. To do anything else undermines several principles of testing. Once a failure has happened, executing the subsequent steps makes no sense, as we no longer have a consistent starting point (something has already gone wrong).
If you want to run a single scenario and exclude a particular step, just remove it from the scenario.
In this case it's up to you to use the tool properly. Cucumber is not going to help you do stupid things with it.
You can either handle it using a try...catch block or you can use soft assertions.
Soft assertions are assertions that do not throw an exception when they fail; execution continues with the next statement after the assert. They are usually used when a test requires multiple assertions and you want all of them to be executed before failing or skipping the test. AssertJ is a library providing fluent assertions. It is very similar to Hamcrest, which comes by default with JUnit. Along with all the usual asserts, AssertJ provides soft assertions with its SoftAssertions class in the org.assertj.core.api package.
Consider the below example:
public class Sample {
    @Test
    public void test1() {
        SoftAssert sa = new SoftAssert();
        sa.assertTrue(2 < 1);
        System.out.println("Assertion Failed");
        sa.assertFalse(1 < 2);
        System.out.println("Assertion Failed");
        sa.assertEquals("Sample", "Failed");
        System.out.println("Assertion Failed");
    }
}
Output:
Assertion Failed Assertion Failed Assertion Failed
PASSED: test1
Even now the test PASSED instead of FAILED. The problem here is that the test does not fail because no exception is thrown. In order to achieve the desired result, we need to call the assertAll() method at the end of the test, which will collate all the recorded failures and fail the test if necessary.
Extending SachinB's answer.
We can use AssertJ to achieve the same.
We need to use its library/dependency as below:
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.9.0</version>
</dependency>
You need to create an object of SoftAssertions(), which is provided by AssertJ.
The package you need to import:
import org.assertj.core.api.SoftAssertions;
Example code
public class myclass {
    SoftAssertions sa = null;

    @Then("^mycucucmberquote$")
    public void testCase2() {
        sa = new SoftAssertions();
        sa.assertThat("a").contains("b");
    }

    @Then("^mycucucmberquoteLastThen of that scario$")
    public void testCase3() {
        try {
            sa.assertAll();
        } catch (Exception e) {
        }
    }
}
The sa.assertAll() call fails if any of the soft assertions failed, and it will provide the stack trace of the failed steps.

Generating PHPUnit tests from array with endpoints

For an application we use configuration files in which a large number of endpoint characteristics are determined (relations, fillables, visibles, roles, etc.) We would like to loop through these files and conduct automatic tests with PHPUnit, simply to see if we receive a response, if validation errors are being triggered, if the response is in line with the files, etc.
We load the configuration and perform the tests for each endpoint configuration:
public function testConfigurationFiles()
{
    $config = resolve('App\Contracts\ConfigInterface');

    foreach ($config->resources as $resource => $configuration) {
        foreach ($configuration->endpoints() as $method => $rules) {
            $this->endpoint($method, $resource, $configuration);
        }
    }
}
After that we use a switch to test each type of method differently (index, show, create, update, delete). In total this comes down to dozens of tests with hundreds of assertions.
However, if even one of these endpoints fails, the entire test fails without showing explicit information about what went wrong. Is there a way to automatically generate a "test{$resource}{$method}" method for each endpoint, so they will be handled like individual tests?
Besides these tests we also conduct units tests & e2e tests, so we are fully aware of the disadvantages of this way of testing.
After studying PHPUnit some more, I found my answer in dataProviders:
https://phpunit.de/manual/current/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.data-providers
This way you can specify a data provider for a test method; it should return an array with all the cases you want to iterate over.

Error vs Fatal in tests

I am developing a JSON web service using Go-Json-Rest. I am writing tests.
...
recorded = test.RunRequest(t, &api.Handler,
    test.MakeSimpleRequest("POST", "http://localhost/api/products",
        product))
recorded.CodeIs(201)
recorded.ContentTypeIsJson()

var newProduct Product
err := recorded.DecodeJsonPayload(&newProduct)
if err != nil {
    t.Fatal(err)
}
...
I am using Fatal as I am coming from the Python world, where an assert would immediately stop test case method execution. And this makes sense: why try to decode the data if it's not JSON?
But recorded.CodeIs(201), recorded.ContentTypeIsJson() and other checks I've seen use Error, which doesn't stop test execution.
What should I use in tests? Error or Fatal?
I think you use Error until continuing to run the test can't possibly give you any more information useful in debugging, then you use Fatal. And if you're not sure (like if you're writing a factored-out method like CodeIs to be used in the context of lots of different tests), go for Error, since you're generally not doing harm by continuing to run the test.
By that criteria, it makes sense for you to Fatal at failed JSON decoding after which, as you say, nothing interesting is going to happen. And it's understandable that CodeIs and ContentTypeIsJson use Error because they're methods that are going to be used across different tests.
A different example might better illustrate why to use Error until you know nothing else interesting will happen: say you want to sanity-check several different things about the JSON response, and any subset of them could be wrong. (Like, your product API could return the price using the wrong type, or it could fail to return empty descriptions when you don't want that, or...) Using Error instead of Fatal for each check means your test will always run them all and report which ones failed.
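As a rough sketch of that pattern in the question's setup (the Price and Description fields are assumptions about the Product type, not from the question):

var newProduct Product
if err := recorded.DecodeJsonPayload(&newProduct); err != nil {
    t.Fatal(err) // nothing below can be checked without a decoded payload
}
// Independent sanity checks: use Error so every failure is reported.
if newProduct.Price < 0 {
    t.Errorf("price should not be negative, got %v", newProduct.Price)
}
if newProduct.Description == "" {
    t.Error("description should not be empty")
}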
