Is it better to use mocha with a local server through the grunt-contrib-connect task or just run it with grunt-mocha?
What are the differences/downsides of both?
They are two totally different things. grunt-contrib-connect does not automatically run spec files; it is meant to be used in conjunction with other tasks that hit the connect server. You can use it with grunt-mocha (see the urls option), but it's really only useful if you need to test against server logic. Otherwise, you can mock server responses and XHR requests in your tests using sinon.
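For instance, here's a minimal sketch of the sinon approach (the '/api/users' endpoint and the payload are hypothetical, and sinon is assumed to be loaded in the test page):

    // minimal sketch: sinon's fake server stubs XHR without a real server;
    // the '/api/users' endpoint and payload are hypothetical
    describe('user list', function () {
        var server;

        beforeEach(function () {
            server = sinon.fakeServer.create();
            server.respondWith('GET', '/api/users', [
                200,
                { 'Content-Type': 'application/json' },
                JSON.stringify([{ id: 1, name: 'alice' }])
            ]);
        });

        afterEach(function () {
            server.restore();
        });

        it('renders users from the stubbed response', function () {
            // ...code under test fires its XHR here...
            server.respond(); // flush the queued fake response
            // ...assert against the rendered result...
        });
    });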
I have an AngularJS application and I'm trying to use Cypress to stub some of the network requests that it makes. Currently, my problem is with a request with resource type Img. I know from Cypress documentation that Cypress cannot stub non-XHR resource types/requests, but I'm looking for a workaround.
My application requests the image from a backend server, which I want to stub or fake. I prefer not to modify the application code, and would rather create an external workaround.
I've looked into the following and found them not to be useful in this scenario:
Sinon.js - Similarly can only handle XHR requests.
nock - Replaces node's http.request, but that doesn't seem to work within Cypress. It might work if I added it straight into my application code, which I prefer not to do.
I've also tried the following but was unsuccessful:
mockserver - Ran the mockserver and added an expectation, but none of the requests made to the mockserver seemed to go through.
Service Worker API - Was unsure about how to register my service worker, since it requires a .js file as an input. What .js file would be served as input if I'm controlling the service worker via Cypress?
a mock server using express - The issue is the application is running on localhost:<some_port>, while the mock server is running on localhost:<some_other_port>. I'm having trouble specifying port numbers when constructing the request through the application. Basically, my application isn't really respecting different port numbers.
EDIT:
I've been successful with creating a mock server using express. According to the Cypress documentation, servers shouldn't be started and stopped within before() and after() hooks. Instead, they should be started before Cypress starts and stopped after Cypress stops.
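As a rough sketch of that setup (the port, route, and fixture image path are all hypothetical), the mock server is just a small express app:

    // mock-server.js - a minimal express stub; the port, route, and
    // fixture image path are hypothetical
    const express = require('express');
    const path = require('path');

    const app = express();

    // serve a canned image for the request the app makes
    app.get('/images/:id', (req, res) => {
        res.sendFile(path.join(__dirname, 'fixtures', 'stub.png'));
    });

    app.listen(4000, () => console.log('mock server on :4000'));

It can then be started from an npm script before cypress run (for example node mock-server.js & cypress run) and stopped afterwards, matching the start-before/stop-after pattern from the docs.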
I am trying to use Cypress to run some monitoring tests on production. I am also using the snapshot match plugin to compare screenshots.
I just want to know: is this safe to do?
I am not using any dashboard services from Cypress, just running tests on our local machines. Will Cypress send any info outside our network?
Cypress doesn't send anything to Cypress's servers unless you specifically configure it to, so it's safe.
The only other thing is, by default, Cypress will send crash reports (when Cypress itself crashes) to be analyzed. You can turn this off by following the instructions here.
I am writing e2e tests where I want to test that when I add an entity on one client, the other online client will be synced and see the added entity. (Think google docs when you type and the words appear on the other users' screens).
My question is: how can I e2e test client synchronization through WebSockets?
Should I mock WebSockets if possible? Should I find an e2e framework that allows multiple tabs/browser instances and test that the clients sync like that? Is there another way?
I have looked at applications that also use synchronization like these: https://github.com/automerge/trellis/blob/master/test/application.js and https://github.com/automerge/pixelpusher. Unfortunately, they either don't have tests or don't use WebSockets.
I think the simplest way would be to start two tests simultaneously like this:
create a new script entry in the scripts section of the package.json file:
"scripts" : {
"testcafe": testcafe chrome,
"test-synchro": npm run testcafe -- test1.js & npm run testcafe -- test2.js
}
in test1.js you add one entity, then create a JSON file that contains all the information on the new entity.
in test2.js you wait for this file to be present and stable in the file system and then act on it. You could use the wait-on package to achieve this (see the sketch below).
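As a minimal sketch of the two test files (the URL, selectors, and sync-file name are all hypothetical):

    // test1.js - creates the entity and records it for the second test
    import fs from 'fs';

    fixture`Client A`.page`http://localhost:3000`; // hypothetical app URL

    test('add entity', async t => {
        await t
            .typeText('#entity-name', 'shared-item') // hypothetical selectors
            .click('#add-entity');
        fs.writeFileSync('sync-entity.json', JSON.stringify({ name: 'shared-item' }));
    });

    // test2.js - waits for the file, then checks the entity synced
    import waitOn from 'wait-on';
    import { Selector } from 'testcafe';

    fixture`Client B`.page`http://localhost:3000`;

    test('sees synced entity', async t => {
        // wait-on resolves once the file exists and has stopped changing
        await waitOn({ resources: ['sync-entity.json'], timeout: 30000 });
        await t.expect(Selector('.entity-list').innerText).contains('shared-item');
    });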
I need to stress test a server with around 3000 users connecting to it concurrently via SyncML clients. To simulate each user, an application needs to be launched, which then connects to the server and performs some operations.
Each user corresponds to each process.
The process is unix based and does http transactions based on SyncML Protocol.
I need to run the load for these 3000 processes for an hour or so.
Can you suggest the best industry methods to fulfil such requirements?
Can JMeter or Locust help me in this?
You can definitely use Locust for this.
I wouldn't recommend starting processes to generate the load (even though it's possible), mainly because you won't get detailed statistics on what requests are made, how long they take to complete, etc.
You could either make the HTTP POST requests containing the SyncML data manually with the built-in Locust HTTP client, or take something like pysyncml and build your own SyncML client that reports the requests it makes to Locust. It's fairly simple to do; you can read more about it, and see an example, on the documentation page about custom clients.
Yes, JMeter can do this, though it's not clear to me what exactly the unix-based processes need to do.
JMeter can natively make HTTP POST requests and send XML data. Unless you have some very custom logic for making the requests, stick to JMeter on its own.
If you must, you CAN execute a local process, but then you're severely limiting the number of users you can simulate per machine.
http://jmeter.apache.org/usermanual/component_reference.html#OS_Process_Sampler
I am running some unit tests that persist documents into the MongoDB database. For these unit tests to succeed, the MongoDB server must be started. I do this by calling Process.Start("mongod.exe").
It works, but sometimes the server takes time to start, and the unit tests run before it is ready and FAIL, complaining that the MongoDB server is not running.
What to do in such situation?
If you use an external resource (DB, web server, FTP, backup device, server cluster) in a test, then it is an integration test rather than a unit test. It is not convenient or practical to start all those external resources from within the test. Just ensure that your tests run in a predictable environment. There are several ways to do it:
Run the test suite from a script (BAT, NAnt, WSC) that starts MongoDB before running the tests.
Start MongoDB on a server and never shut it down.
Do not add loops with delays in your tests to wait while an external resource starts: it makes tests slow, erratic, and very complex.
Can't you run a quick test query in a loop with a delay after launching and verify the DB is up before continuing?
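As a rough sketch of that idea in Node.js terms (the connection URL and retry limits are made up, and the same approach carries over to the C# driver), retry a trivial command until the server answers:

    // rough sketch: poll until MongoDB is ready; the URL and retry
    // limits are made up
    const { MongoClient } = require('mongodb');

    async function waitForMongo(url, retries = 30, delayMs = 500) {
        for (let i = 0; i < retries; i++) {
            try {
                const client = await MongoClient.connect(url);
                await client.db('admin').command({ ping: 1 }); // trivial round trip
                await client.close();
                return; // server is up
            } catch (err) {
                await new Promise(resolve => setTimeout(resolve, delayMs));
            }
        }
        throw new Error('MongoDB did not become ready in time');
    }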
I guess I'd (and by that I mean, this is what I've done, but there's every chance someone has a better idea) write some kind of MongoTestHelper that can do a number of things during the various stages of your tests.
Before the test run, it checks that a test mongod instance is running and, if not, boots one up on your favourite test-mongo port. I find it's not actually that costly to just try to boot up a new mongod instance and let it fail if that port is already in use. However, this is very different on Windows, so you might want to check that the port is open or something.
Before each individual test, you can remove all the items from all the tested collections, if this is the kind of thing you need. In fact, I just drop all the DBs, as the lovely MongoDB will recreate them for you:
    for (String name : mongo.getDatabaseNames()) {
        mongo.dropDatabase(name);
    }
After the tests have run you could always shut it down if you've chosen to boot up on a random port, but that seems a bit silly. Life's too short.
The TDD purists would say that if you start the external resource, then it's not a unit test. Instead, mock out the database interface, and test your classes against that. In practice this would mean changing your code to be mockable, which is arguably a good thing.
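As an illustrative sketch (all names here are made up), the class under test depends on a small persistence interface rather than on MongoDB directly, so a unit test can hand it a stub:

    // made-up example of mocking the persistence interface
    class UserService {
        constructor(repository) {
            this.repository = repository; // injected abstraction, not MongoDB
        }
        register(name) {
            return this.repository.save({ name });
        }
    }

    // in the unit test, a hand-rolled stub replaces the real database
    const saved = [];
    const fakeRepository = { save: doc => { saved.push(doc); return doc; } };

    const service = new UserService(fakeRepository);
    service.register('alice');
    console.assert(saved.length === 1 && saved[0].name === 'alice');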
OTOH, to write integration or acceptance tests, you should use an in-memory transient database with just your test data in it, as others have mentioned.