I am starting a new project to create an authentication API using IdentityServer4, following TDD. Many microservices and websites will use it to authenticate users. But I could not figure out the first three acceptance tests for the project. Any help will be highly appreciated.
Note: I have recently read GOOS (Growing Object-Oriented Software, Guided by Tests).
Well, in the book they suggest starting with the simplest success case possible. For an authentication service that would probably be a successful authentication.
So your first acceptance test could look something like this:
When: receiving valid user data
Then: authentication should be successful
That may seem awfully small for an acceptance test that is supposed to exercise a whole system, but your system is also very small and there aren't many user stories to handle: basically only authentication success, authentication failure, and maybe a lockout case for a user who has tried to log in too many times without success.
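A minimal end-to-end sketch of that first test, in Python (the base URL, client credentials, and test user are assumptions; IdentityServer4 exposes its token endpoint at /connect/token, used here with the resource owner password grant):

import requests

BASE_URL = "http://localhost:5000"  # assumption: where the service under test runs

def test_valid_credentials_are_authenticated():
    # When: receiving valid user data
    response = requests.post(f"{BASE_URL}/connect/token", data={
        "grant_type": "password",    # resource owner password flow
        "client_id": "test-client",  # made-up test client
        "client_secret": "test-secret",
        "username": "alice",
        "password": "correct-password",
    })
    # Then: authentication should be successful
    assert response.status_code == 200
    assert "access_token" in response.json()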
Your unit tests can then go into more detail about the actual authentication mechanism, but the acceptance test should always be about the user story.
One could also argue that you don't need to write acceptance tests for your authentication service at all, since it is only one part of your system: acceptance testing is about testing from the outside in, and your authentication service is already a rather deep component. On that view, you would rather write acceptance tests for the whole system, once all the microservices are brought together, or for each individual website that relies on the service.
We have begun working on our payment service and all was going well with the test nonces that Braintree supplies.
We've been using fake-valid-nonce all over the place for our transactions. However, we now need to implement 3D Secure, which has zero test nonces, meaning all of our tests fail with a Gateway Rejected: three_d_secure error.
Has anyone had any experience with heavy unit testing of Braintree?
Thanks
I must concur with Joe on this.
I am participating in developing the Braintree library for Go:
https://github.com/lionelbarrow/braintree-go
We are running into the exact same problem: there are no predefined nonces that will pass 3D Secure validation.
If we create payment methods using the cards provided in the Centinel document:
https://developers.braintreepayments.com/files/Centinel.IntegrationGuide.ConsumerAuthentication_TestCases_v1_18_0_20160823.pdf
and create a nonce from the server for one of those cards, the nonce obviously does not contain any 3DS information.
During our automated integration tests we do not have access to a client-side SDK and cannot run the "required" threeDSecure.verifyCard() (it is only available through JavaScript code).
We are thus stuck without any means of automated testing.
This is a serious issue, as the server should always verify those fields on a transaction by itself, without relying on data coming from the client.
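For context, the kind of server-side check in question looks roughly like this with the braintree_python SDK (gateway configuration is omitted, nonce_from_client stands for whatever the client produced, and the exact field names and options should be verified against the current docs):

import braintree

# Assumes braintree.Configuration has already been set up with merchant credentials.
result = braintree.Transaction.sale({
    "amount": "10.00",
    "payment_method_nonce": nonce_from_client,
    "options": {
        # Ask the gateway to reject transactions that lack 3DS verification.
        "three_d_secure": {"required": True},
    },
})

info = result.transaction.three_d_secure_info if result.is_success else None
if info is None or not info.liability_shifted:
    raise RuntimeError("3D Secure liability did not shift; reject the payment")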
I asked Braintree support for help on this case. Their only answer at the moment is that we should do manual testing and go through the whole client-side workflow of validating a 3DS card.
I reminded them we are in 2018 and that requiring developers to manually test all integration test cases each time they commit something is not a sane way to develop.
I also reminded them that we are talking about security features that touch client payment methods and should, for obvious reasons, be tested automatically and thoroughly.
I also pointed out that at least in their own Python SDK they have integration tests (that work only on their own infrastructure) covering 3D Secure:
https://github.com/braintree/braintree_python/blob/bdc95168f46b4c3ad3904fd56e5f8e15e04e9935/tests/test_helper.py#L297
This means someone on their team is thinking about tests and trying to do something; unfortunately, that something is not enough for those of us out here.
I'm currently working on an application that uploads a file to a web service (using Spring RestTemplate). This upload function returns an id which can be used to download the uploaded file later on.
I want this scenario covered by a test (I'm not talking about a unit test; maybe an integration or functional test, whichever is appropriate).
What I want is for the download test case to depend on the result of the upload test (since the id comes from the upload function). This would run against an actual web service so I can confirm that the upload and download functions work properly.
I'm not sure whether this approach is correct, so if anyone can suggest a good way to implement it, it would be greatly appreciated.
Thanks in advance!
Since this upload/download functionality is already covered at the unit level:
"I want this scenario covered by a test (I'm not talking about a unit test; maybe an integration or functional test, whichever is appropriate)."
I know test chaining is considered harmful:
"the download test case to depend on the result of the upload test (since the id comes from the upload function)"
It can cause lots of overlap between tests, so changes to one can cascade outwards and cause failures everywhere. Furthermore, tests should have atomicity (isolation). But if the trade-off suits your case, my advice is to use it.
What you can look at is a proper Test Fixture strategy; the other Fixture Setup patterns can help you with this.
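As an illustration of a Fresh Fixture approach in Python with pytest (the endpoints and response shape are assumptions), the upload can move into a fixture so the download test no longer depends on another test having run:

import pytest
import requests

BASE_URL = "http://localhost:8080"  # assumption: the service under test

@pytest.fixture
def uploaded_file_id(tmp_path):
    # Fresh fixture: every test gets its own uploaded file, so no test
    # depends on another test having run first.
    sample = tmp_path / "sample.txt"
    sample.write_text("hello")
    with sample.open("rb") as f:
        response = requests.post(f"{BASE_URL}/files", files={"file": f})
    response.raise_for_status()
    return response.json()["id"]  # assumption: upload returns {"id": ...}

def test_download_returns_uploaded_content(uploaded_file_id):
    response = requests.get(f"{BASE_URL}/files/{uploaded_file_id}")
    assert response.status_code == 200
    assert response.content == b"hello"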
Sounds like an 'acceptance test' is what is required. This would basically be an integration test of a subsystem for the desired feature.
Have a look at Cucumber as a good, easy framework to get started with.
Here you would define your steps
Given:
When:
Then:
and you can then test the feature as a whole.
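For instance, with Python's behave (a Cucumber-style framework), step definitions for this upload/download feature could look like this; the endpoints and step wording are made up:

from behave import given, when, then
import requests

BASE_URL = "http://localhost:8080"  # assumption: service under test

@given("a file has been uploaded")
def step_upload(context):
    resp = requests.post(f"{BASE_URL}/files", files={"file": ("a.txt", b"hi")})
    context.file_id = resp.json()["id"]  # assumption: response carries the id

@when("the client downloads it by id")
def step_download(context):
    context.response = requests.get(f"{BASE_URL}/files/{context.file_id}")

@then("the original content is returned")
def step_check(context):
    assert context.response.content == b"hi"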
Services external to your application (that you have no control over) have to be mocked, even in e2e tests.
This means the service you are uploading the file to should be faked. Just set up a dummy HTTP server that pretends to be the real service.
With such a fake service you can set up its behaviour for every test; for example, you can prepare a file to be downloaded under a given id.
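As an illustration, such a dummy file service can be a few lines of Python standard library (all names here are made up; the real fake could be written in any language):

from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class FakeFileService:
    """Dummy HTTP server pretending to be the real file service."""

    def __init__(self):
        self.files = {}  # file id -> content bytes
        service = self

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Serve GET /files/<id> from the in-memory store.
                file_id = self.path.rstrip("/").split("/")[-1]
                content = service.files.get(file_id)
                if content is None:
                    self.send_response(404)
                    self.end_headers()
                else:
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(content)

        self.server = HTTPServer(("localhost", 0), Handler)  # any free port
        threading.Thread(target=self.server.serve_forever, daemon=True).start()

    def add_file(self, file_id, content):
        self.files[file_id] = content

    def url(self):
        return f"http://localhost:{self.server.server_port}"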
Pseudo code:
// given: the fake service knows about a file
file = File(id, content);
fakeFileService.addFile(file);
// when: the application is asked to download it
applicationRunner.downloadFile(file.id());
// then: the downloaded file matches
assertThatFileWasDownloaded(file);
This is a test which checks that the application can download a given file.
The File class is a domain object in your application, not a filesystem File!
fakeFileService is the instance that controls the dummy file service.
applicationRunner is a wrapper around your application that makes it do what you want.
I recommend reading "Growing Object-Oriented Software, Guided by Tests".
I am using RSpec in my tests and I would like to protect my API key when I publish my gem on GitHub.
What are the best practices for doing that? Should I use VCR and then remove my key from the Git history?
Broadly speaking, here are three approaches I have used in the past in similar situations. Which you choose will depend on the details of your particular situation.
Test user supplies API key
If your test suite requires, or at least prefers, actual API calls with an actual API key, you can have the caller of the tests supply the credentials when running the tests.
The two most common ways of doing this are:
A file in the project with a well-known name that is not checked into version control. Include an example file with fake credentials, which is checked into version control, along with instructions for users to copy their real credentials into the real file before running the test suite.
Reading from environment variables. Include instructions for users to set the appropriate environment variables before running the test suite (see the sketch below).
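A sketch of the environment-variable option in Python with pytest (the variable name MYSERVICE_API_KEY is made up):

import os
import pytest

API_KEY = os.environ.get("MYSERVICE_API_KEY")

# Skip the live-API tests cleanly when no key is supplied.
requires_api_key = pytest.mark.skipif(
    API_KEY is None,
    reason="set MYSERVICE_API_KEY to run tests against the live API",
)

@requires_api_key
def test_live_api_call():
    # ...construct the real client with API_KEY and hit the live service.
    assert API_KEY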
Mock out the API
This can be the VCR approach you described. It could also be patching the API call to return some fake results.
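For example, in Python you could patch the HTTP call itself and return canned results (fetch_profile is a hypothetical function under test):

import json
from unittest import mock
from urllib import request

def fetch_profile(user_id):
    # Hypothetical code under test: calls the real API over HTTP.
    with request.urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.loads(resp.read())

def test_fetch_profile_without_real_calls():
    fake = mock.MagicMock()
    fake.read.return_value = json.dumps({"id": 1, "name": "alice"}).encode()
    fake.__enter__.return_value = fake
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert fetch_profile(1)["name"] == "alice"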
Test your domain-specific code separately from the API interaction
Assume the API and the API client behave as you expect. Then factor out the parts of your code which construct the API input and process the API output. Test properties of the input you generate; test the behavior of the output processor with known or fake output.
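A sketch of that separation (build_search_request and parse_search_response are hypothetical helpers factored out of an API client):

def build_search_request(query, page):
    # Hypothetical input builder: turns arguments into API parameters.
    return {"path": "/search", "params": {"q": query, "page": page}}

def parse_search_response(payload):
    # Hypothetical output processor: extracts ids from an API payload.
    return [item["id"] for item in payload["results"]]

def test_build_search_request_includes_query():
    req = build_search_request(query="cats", page=2)
    assert req["params"] == {"q": "cats", "page": 2}

def test_parse_search_response_extracts_ids():
    fake_payload = {"results": [{"id": 1}, {"id": 2}]}
    assert parse_search_response(fake_payload) == [1, 2]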
Finally, a warning:
If you have ever committed your API key to version control, it will be visible in the history. If you have ever pushed to a public hosting service, it has been exposed to the Internet; most notably, it has been exposed to specialized bots which scrape newly-pushed commits for sensitive credentials. If this is you, change your credentials now!
I can't find the original blog post at the moment, but there was at least one report of someone accidentally pushing their AWS credentials to GitHub. They subsequently woke up to a several-thousand-dollar bill.
I would like to do integration testing on my Web API Controllers.
When the integration test runs, the whole request/response pipeline of the Web API should be exercised, so it's a real integration test.
I have read some blogs about in-memory versus non-in-memory tests. What is the difference, and which of those approaches matches my criteria above?
I would really be glad about some explanations from people who have actually dealt with integration testing of Web API, whether self-hosted or IIS-hosted (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, and if you write a good number of tests, the time it takes to run them all will not be too long, since you deal with memory rather than the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing.
In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run media formatters, since the request is already in deserialized form, and hence media formatting does not happen.
If you want to get closer to production and are willing to take a hit on running time, you can write your tests against a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this will use the real HttpListener, and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you get much closer to your production runs.
I personally have not seen anyone using web hosting (IIS) for lots of integration testing. It is technically possible to fire off your requests using HttpClient and inspect the response and assert things, but you will not have much control over arranging your tests programmatically.
My choice is to mix and match: use in-memory tests as much as possible and use a Katana-based host only for those specific cases where I really need to hit the network.
Are there any best practices for writing unit tests when, 90% of the time while building the OAuth connecting class, I need to actually be logged into the remote service?
I am building a rubygem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for that particular provider, and I would like to write tests for that.
Is there a recommended way to do that? If I did mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...
On a related note, if I created test API keys for each OAuth provider and used them just for testing the gem, is there any issue in leaving the API key and secret in plain view in the tests, so other people could test it out more realistically too?
There's nothing wrong with hitting live services in your integration tests.
You should stub out the OAuth part completely in your unit tests, though.
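For illustration (shown in Python; the same idea applies to a Ruby gem with WebMock or RSpec doubles), stub the token exchange so unit tests never touch the network:

from unittest import mock

class OAuthClient:
    # Stand-in for the class that talks to Twitter/Google/etc.
    def exchange_code_for_token(self, code):
        raise RuntimeError("would hit the real network")

def login(client, code):
    token = client.exchange_code_for_token(code)
    return token["access_token"]

def test_login_uses_stubbed_token():
    client = OAuthClient()
    with mock.patch.object(client, "exchange_code_for_token",
                           return_value={"access_token": "fake-token"}):
        assert login(client, "abc") == "fake-token"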
If you're planning on open-sourcing the code you're writing, leaving API keys in it might not be a good idea: you might hit API usage limits or rate limiting once the gem becomes popular and people run the tests frequently, which will lead to unexpected test failures.