Unit testing 3D Secure - Braintree

We began working on our payment service and all was going well with the test nonces that Braintree supplies.
We've been using fake-valid-nonce all over the place for our transactions. However, we now need to implement 3D Secure, which has no test nonces, meaning all of our tests fail with a Gateway Rejected: three_d_secure error.
Has anyone had any experience with heavy unit testing of Braintree?
Thanks
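For reference, this is roughly the kind of server-side test that works with fake-valid-nonce and starts failing once 3D Secure is required. It is a minimal Go sketch using braintree-go; the credentials are placeholders, and newer versions of the library take a context.Context and a *braintree.TransactionRequest instead of *braintree.Transaction, so adjust it to the version you use.

```go
package payments_test

import (
	"os"
	"testing"

	braintree "github.com/lionelbarrow/braintree-go"
)

// A plain sandbox sale using the fake-valid-nonce test nonce. Once the
// sandbox merchant account (or the transaction options) require 3D Secure,
// this same call comes back as "Gateway Rejected: three_d_secure".
func TestSaleWithFakeValidNonce(t *testing.T) {
	bt := braintree.New(
		braintree.Sandbox,
		os.Getenv("BT_MERCHANT_ID"), // placeholder sandbox credentials
		os.Getenv("BT_PUBLIC_KEY"),
		os.Getenv("BT_PRIVATE_KEY"),
	)

	// Newer braintree-go versions use Create(ctx, *TransactionRequest).
	tx, err := bt.Transaction().Create(&braintree.Transaction{
		Type:               "sale",
		Amount:             braintree.NewDecimal(1000, 2), // 10.00
		PaymentMethodNonce: "fake-valid-nonce",
	})
	if err != nil {
		// With 3D Secure required there is no sandbox nonce that gets past this point.
		t.Fatalf("sale was rejected: %v", err)
	}
	if tx.Id == "" {
		t.Fatal("expected a transaction id on a successful sale")
	}
}
```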

I must concur with Joe on this.
I am helping to develop the Braintree library for Go:
https://github.com/lionelbarrow/braintree-go
We are running into exactly the same problem: there are no pre-defined nonces that will pass 3D Secure validation.
If we create payment methods using the cards provided in the Centinel document:
https://developers.braintreepayments.com/files/Centinel.IntegrationGuide.ConsumerAuthentication_TestCases_v1_18_0_20160823.pdf
and create a nonce from the server for one of those cards, the nonce obviously does not contain any 3D Secure information.
During our automated integration tests we do not have access to a client-side SDK and cannot run the "required" threeDSecure.verifyCard() call (it is only available from the JavaScript client).
We are thus stuck without any way to test this automatically.
This is a serious issue, as the server should always verify those fields on a transaction itself, without relying on data coming from the client.
I asked Braintree support for help with this case. Their only answer at the moment is that we should do manual testing and go through the whole client-side workflow of validating a 3D Secure card.
I reminded them we are in 2018 and that requiring developers to manually test all integration test cases each time they commit something is not a sane way to develop.
I also reminded them that we are talking about security features that touch customer payment methods and should be tested automatically and thoroughly, for obvious reasons.
I also pointed out that at least their own Python SDK has integration tests (which only work on their own infrastructure) covering 3D Secure:
https://github.com/braintree/braintree_python/blob/bdc95168f46b4c3ad3904fd56e5f8e15e04e9935/tests/test_helper.py#L297
This means someone on their team is thinking about tests and trying to do something about it. Unfortunately, that something is not enough for those of us on the outside.
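Until Braintree ships 3D Secure test nonces, one piece that can still be covered automatically is the server-side decision itself. Below is a minimal, library-independent Go sketch: the struct and the "authenticate_successful" status string mirror the three_d_secure_info fields Braintree documents on a transaction, but they are local placeholders here, not braintree-go types.

```go
package payments

// ThreeDSecureResult mirrors the 3D Secure fields Braintree reports on a
// transaction (three_d_secure_info). It is a local placeholder for whatever
// your SDK exposes; field names may differ.
type ThreeDSecureResult struct {
	Status                 string // e.g. "authenticate_successful"
	LiabilityShifted       bool
	LiabilityShiftPossible bool
}

// AcceptableForSale encodes the server-side rule argued for above: never
// trust data coming from the client, re-check the 3D Secure outcome on the
// server before treating a sale as protected.
func AcceptableForSale(r *ThreeDSecureResult) bool {
	if r == nil {
		// No 3D Secure data at all, e.g. a nonce created server-side
		// from a raw Centinel test card: reject or route to step-up.
		return false
	}
	return r.LiabilityShifted && r.Status == "authenticate_successful"
}
```

A table-driven test over a function like this at least pins the policy down, even while the end-to-end 3D Secure flow still has to be exercised manually.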

Related

How to capture the reCAPTCHA response in JMeter from a GET request

I have a GET request that returns a reCAPTCHA, and the contact form is then submitted with a POST request.
When scripting this for performance testing, the reCAPTCHA value from the GET response is needed in the POST request.
Can anyone help me out with that?
CAPTCHA is the acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", so this is exactly what it is designed for: making automation very hard or even impossible in order to protect against bots and DDoS attacks.
Theoretically you could use machine vision combined with machine learning in JSR223 Test Elements using the Groovy language, but it will consume a lot of CPU and RAM even for one user and won't guarantee 100% success.
So if reCAPTCHA isn't your primary functional or performance test target, instead of trying to bypass it I would recommend asking your developers or DevOps to disable it for the duration of the performance test. You need to focus solely on your application; all third-party content and integrations should be switched off or moved out of the testing scope by other means.

IdentityServer4 first acceptance test

I am starting a new project to create an authentication API using IdentityServer4, following TDD. Many microservices and websites will be using it to authenticate users. But I could not figure out the first three acceptance tests for the project. Any help will be highly appreciated.
Note: I have recently read GOOS (Growing Object-Oriented Software, Guided by Tests).
Well, in the book they suggest starting with the simplest success case possible. For an authentication service that would probably be a successful authentication.
So your first acceptance test could look something like this:
When: receiving valid user data
Then: authentication should be successful
That may seem awfully small for an acceptance test that is supposed to test a whole system, but your system is also very small and there aren't many user stories to handle: basically only authentication success, authentication failure, and maybe the case where a user has tried to log in too many times without success.
Your unit tests then can go more into detail about the actual authentication mechanism, but the acceptance test should always be about the user story.
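As a concrete starting point, that first test can be written as a black-box HTTP call against the running service. The sketch below is in Go rather than .NET, simply because the acceptance test only needs to speak HTTP; it assumes IdentityServer4's standard /connect/token endpoint with the resource owner password grant enabled, and the client id, secret, and test user are placeholders.

```go
package acceptance_test

import (
	"net/http"
	"net/url"
	"os"
	"testing"
)

// First acceptance test: "receiving valid user data -> authentication should
// be successful", expressed as a black-box call against the running service.
func TestValidUserCanAuthenticate(t *testing.T) {
	base := os.Getenv("AUTH_BASE_URL") // e.g. http://localhost:5000

	// Hypothetical test client and user; the password grant must be
	// enabled for this client in the IdentityServer4 configuration.
	resp, err := http.PostForm(base+"/connect/token", url.Values{
		"grant_type":    {"password"},
		"client_id":     {"acceptance-tests"},
		"client_secret": {"secret"},
		"username":      {"alice"},
		"password":      {"Pass123$"},
		"scope":         {"api1"},
	})
	if err != nil {
		t.Fatalf("token request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 OK, got %d", resp.StatusCode)
	}
	// A fuller version would decode the JSON body and assert that an
	// access_token is present.
}
```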
I guess one could also argue that you don't need to write acceptance tests for your authentication service at all, since it is only a part of your system; you should rather write acceptance tests for your whole system, that is, once you have brought all the microservices together, or for each individual website that relies on the service. The main reason for this argument is that acceptance testing is about testing from the outside in, and your authentication service is already a rather deep component of your system.

Integration testing with Web API - non-in-memory tests or in-memory tests

I would like to do integration testing on my Web API Controllers.
When the integration test runs, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blog posts about non-in-memory tests and in-memory tests. I need to know what the difference is and which of those approaches matches the criteria above.
I would really appreciate some explanations from people who have actually dealt with integration testing of Web API under self-hosting or IIS hosting (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing against an in-memory hosted Web API, requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, and even if you write a good number of tests, the time it takes to run them all will not be too long, since you are dealing with memory rather than the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing. In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters: if you send ObjectContent in the request, the media formatters do not need to run, since the request is already in deserialized form, and hence media formatting does not happen.
If you want to get closer to production and are willing to take a hit on running time, you can write your tests against a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this will use the real HttpListener and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you get much closer to how your production setup runs.
I personally have not seen anyone using web hosting (IIS) and doing lots of integration testing. It is technically possible to fire off your requests using HttpClient, inspect the response and assert on it, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match, that is, use in-memory tests as much as possible and use a Katana-based host only for those specific cases where I really need to hit the network.
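The same split exists outside ASP.NET, which can make the trade-off easier to see. The sketch below expresses it with Go's net/http/httptest rather than the Web API or Katana APIs: the first test hands the request straight to the handler pipeline in memory, the second starts a real listener on a loopback port.

```go
package api_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func handler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("pong"))
	})
	return mux
}

// In-memory style: the request is handed straight to the handler pipeline,
// no sockets, no ports - analogous to hosting Web API on an in-memory
// HttpServer.
func TestPingInMemory(t *testing.T) {
	req := httptest.NewRequest(http.MethodGet, "/ping", nil)
	rec := httptest.NewRecorder()
	handler().ServeHTTP(rec, req)
	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
}

// Self-host style: a real listener on a loopback port, requests traverse the
// network stack - analogous to OWIN/Katana self-hosting. Slower, but closer
// to production.
func TestPingSelfHosted(t *testing.T) {
	srv := httptest.NewServer(handler())
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/ping")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
}
```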

When testing an API - should I test the API methods' validation?

I've inherited a project which will be connecting to a CRM system via a SOAP service written by another dev. My question is: to what level should I be testing the interface with the SOAP service?
I set up a test case and wrote some methods to test a SOAP update method, and confirmed it failed with a suitable error code for invalid customers or order numbers.
I next tested an invalid order status value (not within the set of expected values) and the service returned a success code, which wasn't expected.
I believe I should report this to the developer, but should I now remove this test from my test suite? Or leave it showing as a failure?
If the SOAP service chooses not to validate its input parameters, I think that's poor design, but it's not a fault in MY code; I just need to ensure I clean the input before passing values to the other system, and that validation should be covered by another set of tests anyway.
Should I even be talking to the SOAP service from the unit tests in the first place?
You should write at least one test per atomic functional requirement. Now, if you are following minimal-interface guidelines, you should have at most one interface per atomic functional requirement. But you can write more than one test for each requirement, since there may be a variety of invariants that can be tested.
In general, you should think in terms of invariants and functional requirements when writing tests, not interface methods. Tests >= atomic functional requirements.
Think of the service "contract": what are its pre-conditions (i.e. legal inputs), its post-conditions (i.e. legal outputs) and its invariants (legal service state)? If these are not made clear by the developer, or there's a chance another developer is misusing the service, this should be reported and handled.
One exception, though: this is all nice and good in theory, but if there are no other consumers of the service (besides maybe the original developer), excess checking is sometimes unnecessary. It is quite reasonable to assume that in such cases invalid inputs are checked and eliminated by the consumer code.
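One way to make that concrete is to keep the failing expectation around as an explicit contract check rather than a regular regression test. A rough Go sketch; the OrderUpdater interface and the "SUCCESS" code below are hypothetical stand-ins for whatever wraps the real SOAP call.

```go
package crm_test

import "testing"

// OrderUpdater is a hypothetical stand-in for the client that wraps the
// SOAP update call; adapt the signature to the real generated client.
type OrderUpdater interface {
	UpdateOrderStatus(orderID, status string) (code string, err error)
}

// CheckInvalidStatusRejected pins down one pre-condition of the service
// contract: a status outside the agreed set must not be accepted. Today it
// fails because the service returns a success code, and that failure is
// exactly what should be reported to the other developer.
func CheckInvalidStatusRejected(t *testing.T, svc OrderUpdater) {
	t.Helper()
	code, err := svc.UpdateOrderStatus("ORDER-123", "NOT_A_REAL_STATUS")
	if err == nil && code == "SUCCESS" {
		t.Error("service accepted an invalid order status; expected a validation error")
	}
}
```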
If the library (or third-party code) has a good reputation (for example, Apache Commons, Guava...), I don't retest the API.
However, when I am not sure of the quality of the code, I tend to write a couple of tests verifying my assumptions about the library/API (but I don't retest the whole library).
If those simple assumptions fail, it is a very bad sign for me. In that case, I tend to write more tests to check more aspects of the library. In your case, I would write more tests and report the errors to the developer.
These tests are not wasted, because if a new version of the library is delivered, you can still use them to check for regressions.
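Those assumption-checking tests are sometimes called learning tests. A small Go example of the shape they take, here pinning down a single documented behaviour of the standard library's encoding/json package rather than anything CRM-specific:

```go
package learning_test

import (
	"encoding/json"
	"testing"
)

// A "learning test": it does not retest the whole library, it just pins down
// one assumption my code relies on - here, that encoding/json silently drops
// unknown fields instead of returning an error. If a new version ever changes
// that, this test catches the regression.
func TestJSONUnmarshalIgnoresUnknownFields(t *testing.T) {
	var out struct {
		Name string `json:"name"`
	}
	err := json.Unmarshal([]byte(`{"name":"order-1","status":"bogus"}`), &out)
	if err != nil {
		t.Fatalf("expected unknown fields to be ignored, got error: %v", err)
	}
	if out.Name != "order-1" {
		t.Fatalf("unexpected name: %q", out.Name)
	}
}
```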

Ways to unit test OAuth for different services in Ruby?

Are there any best practices for writing unit tests when, 90% of the time while building the OAuth connection class, I need to actually be logging in to the remote service?
I am building a Ruby gem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for each particular provider, and I would like to write tests for that.
Is there a recommended way to do that? If I used mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...
On a related note, if I created test API keys for each OAuth provider and used them in the gem purely for testing purposes, is there any issue with leaving the API key and secret in plain view in the tests? That way other people could test it more realistically too.
There is nothing wrong with hitting live services in your integration tests.
You should stub out the OAuth part completely in your unit tests, though.
If you're planning to open source the code you're writing, leaving API keys in it might not be a good idea: you might hit API usage limits or rate limiting once the gem becomes popular and people run the tests frequently, which will lead to unexpected test failures.
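For the unit-test side, the usual Ruby tools are WebMock or VCR; the same idea, sketched here in Go, is to point the OAuth config at a local stub of the provider's token endpoint so the test never touches Twitter or Google. The client id, secret, and token payload below are made up.

```go
package oauthclient_test

import (
	"context"
	"net/http"
	"net/http/httptest"
	"testing"

	"golang.org/x/oauth2"
)

// TestExchangeAgainstStubbedProvider swaps the real provider for a local
// stub so the unit test never logs in to a live service. Keep the tests that
// hit the real providers (with real test API keys) in a separate integration
// suite.
func TestExchangeAgainstStubbedProvider(t *testing.T) {
	// Stub token endpoint: always returns a canned OAuth2 token.
	stub := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"access_token":"stub-token","token_type":"bearer","expires_in":3600}`))
	}))
	defer stub.Close()

	conf := &oauth2.Config{
		ClientID:     "test-client", // made-up credentials
		ClientSecret: "test-secret",
		RedirectURL:  "http://localhost/callback",
		Endpoint: oauth2.Endpoint{
			AuthURL:  stub.URL + "/authorize",
			TokenURL: stub.URL + "/token",
		},
	}

	tok, err := conf.Exchange(context.Background(), "fake-authorization-code")
	if err != nil {
		t.Fatalf("exchange against stub failed: %v", err)
	}
	if tok.AccessToken != "stub-token" {
		t.Fatalf("unexpected access token: %q", tok.AccessToken)
	}
}
```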

Resources