I have a 3rd party lib which internally makes a couple of AJAX calls to download remote content. In my test code (jasmine/karma), I get a log that the files were not found (404). The files are not available during testing (they get generated later in the build) so I cannot serve them from karma!
Is there a way to configure karma to serve an empty file with a 200 HTTP status when the request comes in? Or is there a way to mock the AJAX calls in jasmine (NOTE: I do not know exactly how the 3rd party lib is doing the ajax call and I do not want to spend time digging through their code to figure it out).
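If the library uses XMLHttpRequest under the hood, one option is the jasmine-ajax plugin, which replaces the global XHR object so you don't need to know how the library issues its calls. A minimal sketch (the /generated/ URL pattern is a placeholder for wherever your files would live):

beforeEach(function () {
  jasmine.Ajax.install(); // swap the global XMLHttpRequest for a fake
  // answer anything under /generated/ with an empty 200 response
  jasmine.Ajax.stubRequest(/\/generated\//).andReturn({
    status: 200,
    contentType: 'text/plain',
    responseText: ''
  });
});

afterEach(function () {
  jasmine.Ajax.uninstall(); // restore the real XMLHttpRequest
});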
Related
I am still new to JMeter and I was assigned a task where I will need to use JMeter to perform automation testing. The idea is to write a script using JMeter and run the script to fill in the forms on the website. I was curious whether JMeter can use different data from the database to fill in the form of the website every time it executes (unique data for every user)?
I have followed this tutorial (https://www.blazemeter.com/blog/fill-forms-and-submit-with-jmeter-made-easy/ ) and it succeeded. However, when I try to change the parameter name (to some other name that does not match the field name found in inspect mode), it still works. So I was wondering how JMeter knows where to put the parameter even though I have changed it to a wrong field name?
As per the JMeter project main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Browsers don't do any magic: they execute HTTP requests, wait for the response and render it. JMeter, in its turn, can execute the same HTTP requests so the traffic will be the same; however, it will not render the response, but rather measure the time and collect some more metrics.
If you change the names of the inputs in the form, most probably the request will still be successful, i.e. you will get an HTTP status code below 400, hence JMeter will mark the result as "green". However, if you inspect the response using the View Results Tree listener you will see that the form is not filled and/or you are still at the same page.
If you want to use JMeter for checking the data returned by the application you're testing, consider using JMeter Assertions to test the presence of expected values and the absence of errors, set response time thresholds, etc.
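For example, a JSR223 Assertion could fail the sampler when an expected confirmation message is missing. A sketch using JavaScript as the script language (Groovy is the usual recommendation; the confirmation text is an assumed marker you would adapt to your page):

// `prev` is the parent SampleResult exposed to JSR223 Assertions
var body = prev.getResponseDataAsString();
if (body.indexOf('Thank you for your order') === -1) { // assumed success marker
  AssertionResult.setFailure(true);
  AssertionResult.setFailureMessage('Expected confirmation text not found');
}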
You can automate form submission or order placement using JMeter. You can use JMeter for API testing as well by adding assertions. But the main purpose of JMeter is to test the performance of the application. It's not like Selenium, which performs actions in the browser; JMeter sends requests over various protocols to the relevant server and can also simulate many users at the same time.
If you want to do extensive automation testing, JMeter isn't the ideal tool for that.
You can use the WebDriver Sampler to run Selenium with JMeter. It requires configuring the sampler and the browser config, which are plugins and can be downloaded using the Plugin Manager.
For more info: https://www.blazemeter.com/blog/jmeter-webdriver-sampler/
Without the plugin, JMeter works at the protocol level and not on the frontend, as pointed out in the answers above.
So yes, it depends on which layer you want to work at: JMeter can work on the frontend like Selenium using the WebDriver plugin, or it can submit the form with different data as a direct request to the server without using the frontend/GUI.
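For illustration, a minimal WebDriver Sampler script (these scripts are written in JavaScript inside the sampler; the URL and element locators below are made up):

var pkg = JavaImporter(org.openqa.selenium); // pull in Selenium's By, etc.

WDS.sampleResult.sampleStart();                // start the timer
WDS.browser.get('https://example.com/signup'); // placeholder URL
WDS.browser.findElement(pkg.By.name('email')).sendKeys('user@example.com');
WDS.browser.findElement(pkg.By.id('submit')).click(); // submit the form
WDS.sampleResult.sampleEnd();                  // stop the timer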
Hope this helps.
It depends on what you are trying to automate. If you plan to automate API invocation where there are some prerequisites, like grabbing tokens, cookies or session IDs from the browser, then JMeter can probably be used, leveraging its existing capabilities with BeanShell scripting and other plugins.
But if you plan to have a full-blown UI automation framework, then JMeter might not be an ideal choice.
Is it possible to rerun the tests created in the post-response script without resubmitting the request?
For example, you submit an API request in Postman, and then it comes back with some data.
I want to just re-run the scripts against this data. It could be really useful in debugging these post-response scripts.
I want to rerun the Tests (area 1) without hitting the Send button (area 2). That way I can test the JavaScript correctness of my test scripts without having to wait for server responses.
You can use https://postman-echo.com/post and their API will echo the request you send. So basically you create the actual request once and then use the result in the echo call to do the development.
More info here:
https://docs.postman-echo.com/?version=latest
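As a sketch, if you POST the JSON you captured from your real API to the echo endpoint, the Tests script can iterate on assertions against the echoed copy (postman-echo returns the posted JSON under the json key; the field names here are placeholders):

// Tests tab of the echo request
var echoed = pm.response.json().json; // the body you sent, echoed back

pm.test('payload has the fields the real API returned', function () {
  pm.expect(echoed).to.have.property('userId'); // placeholder field
  pm.expect(echoed.items).to.be.an('array');    // placeholder field
});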
I'm afraid there is no better way at the moment.
I was looking for the same thing and came across this tip on the Postman Community.
https://community.postman.com/t/re-run-test-script-without-re-sending-request/9160/8
Basically you:
Make your request
Save your response as an example
Create a Postman mock from that response
Rerun and build up your test against the mock (and any variants, like failure cases)
Run against the original and verify everything is good.
While this workaround helps get the job done, I do wish that they could just make it so that you could hold the original state of the response and the env, run your tests, reset to the original state, rerun your tests and tweak until it all works.
You can write the entire test, or the part you want to execute twice, in a function and call that function two times; that's the easy, low-effort and low-maintenance way I was looking for.
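A rough sketch of that idea in a Postman Tests script (the assertions are placeholders):

function runChecks(data) {
  pm.test('status is 200', function () {
    pm.response.to.have.status(200);
  });
  pm.test('items is an array', function () {
    pm.expect(data.items).to.be.an('array'); // placeholder assertion
  });
}

var data = pm.response.json();
runChecks(data); // first pass
runChecks(data); // run again while tweaking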
Even though it may not be good practice for a unit/feature test, I like to have a test for ensuring that my Laravel app gives the expected result for:
calling a web route
.. that makes an async (Guzzle) call to the same application
and returns the modified result.
Unfortunately, while I can configure phpunit to load its configuration from .env.testing so that it connects to the testing database, when a new request is made it will call my application with my normal .env variables. This makes sense, as it is a new HTTP request, but it is not desired in this scenario.
I was thinking of changing the Guzzle call based on the environment used, and calling my endpoint directly when I am testing. However, it doesn't feel good to have env-based switches in my code, especially when it results in a different method (what if it works when calling directly but fails when using Guzzle, e.g. Guzzle got lost from the dependencies).
What would be your method of preference to test the request till response as a whole, without breaking it up to smaller pieces?
I'm writing an rspec feature spec to test a file upload page that sends files directly from the client to S3 using the s3_direct_upload gem. Instead of hitting S3, I'd like to stub out that link and return the correct response so I can test the behavior after a failed/successful ajax request.
I think my issue lies in the fact that I can't use WebMock to stub requests, since the request isn't fired from the server (it's happening on the client).
Changing the test type/framework
I could add a new testing dependency like Jasmine or Sinon and test the js/page directly, but that still leaves me with feature specs that are trying to hit S3.
Changing the url
All of the bucket/path information for the link to S3 is bundled with the gem, so I would have to patch it in the test env, which I'd like to avoid.
Question
How can I run a feature spec that stubs out a client-side ajax request?
I'm using capybara-webkit to test integration with a third-party website (I need JavaScript).
I want to use VCR to record requests made during the integration test, but capybara-webkit doesn't go over Net::HTTP so VCR is unable to record them. How would I go about writing an adaptor for VCR that would allow me to record the requests?
Unfortunately, VCR is very much incompatible with capybara-webkit. The fact is that capybara-webkit uses WebKit, which is C++ rather than Ruby. WebMock and FakeWeb, which are the basis for VCR, can only intercept Ruby web requests. Making the two work together would likely be a monumental task.
I've solved this problem two ways:
The first (hacky, but valid) is to add a new javascript file to the application that is only included in the test environment. This file stubs out the JS classes which make external web requests. Aside from the pure hackatude of this approach, it requires that every time a new request is added or changed you must change the stubs as well.
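For example (all names here are hypothetical; the file would only be loaded in the test environment):

// stub_external.js -- test environment only
// `ExternalFeed` stands in for whatever client class hits the third party
window.ExternalFeed = {
  fetchItems: function (callback) {
    // hand back canned data instead of making a real request
    callback({ items: [] });
  }
};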
The second approach is to route all external requests through my own server, effectively proxying them through it. This has the huge disadvantage that you have to have an action for everything you want to consume (you could genericize it, with some work). It also suffers from the fact that it can as much as double the time for the request to complete. However, since the requests are now being made by Ruby, you can use VCR in all its glory.
In my situation, approach #2 has been much more to my advantage, thanks to the fact that I need Ruby to manipulate the data so that I can keep my JavaScript source-agnostic. I was, however, using approach #1 for quite a while successfully.
I've written a small ruby library (puffing-billy) for rspec+capybara that does exactly this -- it injects a proxy in between your browser and the outside world and allows you to fake responses to specific requests.
Example:
describe 'fetching badges from stackoverflow API' do
  it 'should show a nice message when you have no badges' do
    # stub some JSONP
    proxy.stub('http://api.stackoverflow.com/1.1/users/1/badges',
               :jsonp => { :badges => [] })

    visit '/my_badges'
    page.should have_content("You don't have any badges :(")
  end
end