I'm testing a small API project with CRUD endpoints. I run all my tests with supertest to check that my calls work against local DynamoDB. The issue is that when I run the tests, either they all pass or exactly one fails, seemingly at random. It's rarely the same test that fails, but it's always only one.
Any ideas on what's going on?
Related
I currently have a Spring application for which I have implemented a continuous integration pipeline.
In this pipeline, Postman tests are executed automatically.
I would like to add a performance dimension to these tests, so I set up Elastic APM in my application, along with an APM server, Elasticsearch, and Kibana.
However, I would like to find a way to link the requests sent by my Postman tests to their associated transaction visualizations in Kibana, in order to generate a test performance report.
I can't find a way to do this. I've seen that there is transaction.id and trace.id metadata, but I can't figure out how to use them.
Do you have any idea, even from a "high-level" point of view, how to achieve this?
Thanks in advance!
I have been creating some Lambda functions on AWS using the Serverless Framework, Flask, and sls wsgi. There are also some DynamoDB tables, but those should not matter in this case.
The problem I am facing is that I cannot debug the whole thing end to end. I am able to run sls wsgi serve and get a local instance of my Lambda functions running, happy days. However, I am a little spoiled by other dev tools, languages, and IDEs (even just Flask itself) that allow me to set breakpoints, inspect scope, step through the code, and so on, so I would really like to be able to do that here as well.
I tried launching the sls command mentioned above from a launch configuration inside VS Code, with no luck. The next thing I tried was running the default Flask launch config, but that obviously didn't include the configuration stored in the sls.yml file, which is essential for accessing the local DynamoDB instance.
The last thing I tried was attaching to ptvsd at the end of my app.py file. I would hit a wait call from ptvsd, then attach the VS Code debugger to the specified port, which seemed to succeed and resume code execution. However, it seems that sls wsgi runs the file twice, so the attach happens for the first instance and not the second, which then does not hit a breakpoint when I try to execute an API call through Postman.
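For anyone trying the same approach, this is roughly what the blocking attach step at the bottom of app.py looks like (a minimal sketch, assuming ptvsd 4.x; the 0.0.0.0 address and port 5678 are arbitrary choices, and the VS Code attach configuration just has to point at the same host and port):

    # Minimal sketch of the ptvsd attach-and-wait step (ptvsd 4.x assumed),
    # placed at the bottom of app.py. Address and port are arbitrary choices.
    import ptvsd

    ptvsd.enable_attach(address=("0.0.0.0", 5678))
    print("Waiting for the VS Code debugger to attach on port 5678...")
    ptvsd.wait_for_attach()  # blocks here until the IDE attaches, then execution resumes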
I guess I could include the wait step manually everywhere and attach for each method I am trying to debug, in the code instead of in the IDE, but that seems like overkill and not very convenient.
I have been looking for answers online and reading through the docs, but I haven't been able to get any further.
I figured out that I can use "Attach using Process ID". It is, however, a little tricky because there are always two processes in the list (with different PIDs). It's not great, but it does the trick.
One technique I have found useful, albeit in a Node environment but it should apply here too, is to use unit tests as a way to execute code locally with a debugger attached, and to use mocking to stub away external dependencies such as AWS services (S3, DynamoDB, etc.). I wrote a blog post about setting this up for Node, but you may find it useful as a way to think about setting things up with Python as well: https://serverless.com/blog/serverless-local-development/
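To make that concrete for a Python project like this one, here is a minimal, self-contained sketch of the idea (the get_todo handler and the "todos" table name are hypothetical, not taken from the linked post). The boto3 call is stubbed with unittest.mock, so the test runs locally with no DynamoDB at all, and a debugger can be attached to the test run like to any other local script:

    # Sketch: unit-test Lambda-style handler code locally by stubbing the AWS SDK.
    # The get_todo handler and the "todos" table name are hypothetical examples.
    import unittest
    from unittest.mock import MagicMock, patch

    import boto3


    def get_todo(todo_id):
        """Hypothetical handler logic that reads one item from DynamoDB."""
        table = boto3.resource("dynamodb").Table("todos")
        return table.get_item(Key={"id": todo_id}).get("Item")


    class GetTodoTest(unittest.TestCase):
        @patch("boto3.resource")  # stub boto3 so no real (or local) DynamoDB is needed
        def test_returns_item(self, mock_resource):
            mock_table = MagicMock()
            mock_table.get_item.return_value = {"Item": {"id": "42", "title": "write tests"}}
            mock_resource.return_value.Table.return_value = mock_table

            item = get_todo("42")  # breakpoints inside get_todo behave like any local code

            mock_table.get_item.assert_called_once_with(Key={"id": "42"})
            self.assertEqual(item["title"], "write tests")


    if __name__ == "__main__":
        unittest.main()

Running it with python -m unittest (or through the IDE's test runner with a breakpoint set) gives you the local step-through debugging described above; cloud-specific behaviour still needs a real deployment, as noted below.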
However, in the world of serverless development, it ultimately doesn't matter how sophisticated your local development environment gets; you will have to test in the cloud environment as well. The unit testing technique I described is good for catching basic syntactical and logical errors, but you will still need to deploy to the cloud and test in that environment. It's one of the reasons we at Serverless are working very hard on improving how easy and fast it is to deploy to the cloud, so that testing in AWS replaces local testing.
I'm setting up my front-end application to use continuous integration in CircleCI. Unit tests work fine, but end-to-end tests do not.
The problem is that they require the backend (API) server to be running, and ours is a completely separate application. So, what is the best way to set up this backend server with CI in mind?
I thought about deploying it to Heroku, but then I'd have to keep manually updating the code via git. Another option was to download the code to the CI VM and run the server directly there, but that is just too much work (installing Ruby, Postgres, gems...), and it doesn't seem like the best option either.
Has anyone been through the same situation? How do you usually deal with this kind of situation?
I ended up doing everything inside the CI. I made some custom scripts that configure the backend project every time the test suite is run. I also cached the folder with the backend code and the gems (which were taking ~2 min to install).
The configuration step now adds ~20 seconds to the total time, so it wasn't a big deal. Although I still think this is probably not the best way to do it, it has some advantages, such as not having to worry about updating the backend code (it pulls from master automatically) or its database (it runs rake db:reset after updating the code).
Assuming the API server is running somewhere, configure the front-end application to point there while in the test/CI environment, at least to start out. If there are multiple API environments, choose the one that most closely matches the front-end environment (e.g. dev, staging, etc.).
It gets more complicated if/when you need to run the e2e tests each time the API is built or match up specific build versions of the front-end and the API. In that case you will have to run the API server as part of the test.
I would like to do integration testing on my Web API Controllers.
When the integration test runs, the whole request/response pipeline of the Web API should be exercised, so that it is a real integration test.
I have read some blog posts about non-in-memory tests and in-memory tests. I need to know what the difference is and which of those approaches matches my criteria above.
I would really appreciate some explanation from people who have actually dealt with integration testing of Web API under self-hosting or IIS hosting (if there is a difference in how they are tested...).
I'm not sure what you mean by non-in-memory testing, but with integration testing against an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, etc., and if you write a good number of tests, the time it takes to run them all will not be too long, since everything happens in memory rather than over the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing. In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run the media formatters, since the request is already in deserialized form, and hence media formatting does not happen.
If you want to get closer to production and are willing to take a hit on running time, you can write your tests against a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this uses the real HttpListener, and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you probably get much closer to your production runs.
I personally have not seen anyone doing a lot of integration testing against web hosting (IIS). It is technically possible to fire off your requests using HttpClient, inspect the response, and assert on it, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match: use in-memory tests as much as possible, and use a Katana-based host only for those specific cases where I really need to hit the network.
Are there any best practices for writing unit tests when, for 90% of the time I spend building the OAuth connection class, I need to actually be logged in to the remote service?
I am building a Ruby gem that logs in to Twitter/Google/MySpace, etc. The hardest part is making sure I have the settings right for each particular provider, and I would like to write tests for that.
Is there a recommended way to do that? If I used mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and I would end up writing tests after the fact instead of before...
On a related note, if I created test API keys for each OAuth provider and used them only for testing the gem, would there be any issue with leaving the API key and secret in plain view in the tests, so that other people could test it more realistically too?
There is nothing wrong with hitting live services in your integration tests.
You should stub out the OAuth part completely in your unit tests, though.
If you're planning on open-sourcing the code you're writing, leaving API keys in it might not be a good idea: if the gem becomes popular and people run the tests frequently, you might hit API usage limits or rate limiting, which will lead to unexpected test failures.