How to set up the UI tests in a Hilt/Dagger App? - dagger-hilt

I use Dagger in a project and have been studying a bit whether migrating to Hilt would make sense.
My current setup is somewhat similar to the one presented in the Hilt codelab,
i.e. I also have a UserComponent with a custom scope (from the moment the user logs in to the moment the user logs out). This is very handy, as I have a lot of Repository classes caching user data and it's very easy to clean up all that data simply by deleting the UserComponent. The migration strategy in my case would then be letting Hilt and Dagger coexist side by side.
In the Espresso tests I'm able to initialize the TestUserDataModule with the user data that is needed for the test case. This makes it possible to launch the Activity under test directly and make the app behave as if a user were already logged in.
This part is not covered by the Hilt codelab or any other documentation I have seen so far.
So, how should I set up the UI tests in a project where Hilt/Dagger co-exist?

Refer to this tutorial; it explains how to implement UI/Fragment testing with Dagger Hilt. The codelab does not cover this because the logic of the UI test does not change when using Hilt, but the approach does. Explaining everything in one Stack Overflow answer would be far too much, so I hope the video helps you.

Related

One set of tests for few projects with different parameters

I'm using Protractor and Jasmine and would like to organize my E2E tests in the best way.
Example:
There is a set of tests that checks the registration functionality (registering with valid credentials, registering as an existing user, etc.).
I need to run those tests in three different projects. The tests are the same, but the credentials are different. For one project the registration form might have 3 fields; for another, 6.
Right now everything is organized in a very complicated way:
each single test is written not as an "it" but as a function
there is a function which contains all the tests (the test functions)
there is a file with a describe block for each project
in that file there is one "it" which calls the function containing all the tests
there is a test suite for each project
I believe there is an established practice for organizing this properly, so that each test lives in its own "it". I'd be happy to see some links or advice.
Thank you in advance!
Since it's a broad question, I will point you to a few links. You should probably be looking at Protractor's page-object model. It will help you simplify and set a standard for organizing your tests in a way that is readable and easy to use. Here's the link to it as described by the Protractor team:
page-object model
However, if you want to know why such a pattern is needed: Protractor has a number of shortcomings that can be addressed by using it. A detailed explanation is here:
shortcomings of protractor and how to overcome them
EDIT: Based on your comments, I feel that you are trying to make a unified file/function that can cater to all the suites that will use it. To handle this, try adding a generalized function (to fill form fields, in your case), export that function, and then require it into your test suites (see the sketch below). Here's a sample link to it -
Exports and require
Hope this helps.
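To make that concrete, here is a rough sketch of the page-object plus shared-spec approach. All names here (RegistrationPage, the CSS selectors, the config files) are invented for illustration, so adapt them to your actual forms:

// registration.page.js - a page object for the registration form
var RegistrationPage = function (fieldNames) {
  // The list of form fields differs per project, e.g. 3 fields in one project, 6 in another.
  this.fieldNames = fieldNames;
};
RegistrationPage.prototype.get = function () {
  return browser.get('/register');
};
RegistrationPage.prototype.fillAndSubmit = function (credentials) {
  // Fill only the fields this project's form actually has.
  this.fieldNames.forEach(function (name) {
    element(by.name(name)).sendKeys(credentials[name]);
  });
  return element(by.css('button[type="submit"]')).click();
};
module.exports = RegistrationPage;

// registration.shared.js - the shared specs, parameterized per project
var RegistrationPage = require('./registration.page');
module.exports = function registrationSpecs(config) {
  describe('registration (' + config.name + ')', function () {
    var page = new RegistrationPage(config.fieldNames);
    beforeEach(function () { page.get(); });

    it('registers a new user with valid credentials', function () {
      page.fillAndSubmit(config.validUser);
      expect(element(by.css('.welcome')).isPresent()).toBe(true);
    });

    it('rejects an already registered user', function () {
      page.fillAndSubmit(config.existingUser);
      expect(element(by.css('.error')).isPresent()).toBe(true);
    });
  });
};

// projectA.spec.js - one thin entry file per project, referenced in that project's suite
var registrationSpecs = require('./registration.shared');
registrationSpecs(require('./config/projectA.json'));

Each project's suite then only wires its own config into the shared specs, and every test keeps its own "it".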

GWT : Automate tests of UIs with Selenium/FluentLenium

I have such a big problem and I really need your help.
Basically, I'm working on a project whose core technology is GWT, and I have to write functional tests and UI tests. I also have to use Cucumber, a BDD-based framework.
Now I come to the main problem: at every Maven build, GWT automatically generates the IDs of the widgets, so Selenium cannot find these widgets because their IDs have changed since the last build. Moreover, I can't find some widgets with the usual methods (findByName/xPath/cssSelector, etc.). I'm now working with FluentLenium, which is an overlay on top of Selenium. I don't know how to fix this problem because I have no control over how GWT generates the IDs behind the scenes.
Has anyone met the same problem before?
Thank you a lot.
I've worked with GWT/Selenium/Cucumber. We had a single class file with public static String fields for each ID used in the whole application. These IDs were set with ensureDebugId. The same class file is then used in the Selenium/Cucumber tests to find the widgets by ID. I don't know if this works for you, but in our case the tester was in control of the IDs.

Testing ViewModels in MVVMCross

I have just started working with MvvmCross for a cross-platform app and I am having a hard time figuring out how to test my ViewModels. I tried following the testing done in TwitterSearch and ran into problems. Specifically, in MockSetup.cs I found that in the latest version of MvvmCross there no longer seems to be an IMvxViewDispatcherProvider, but that is OK because I think its functionality has been rolled up into IMvxViewDispatcher. However, when actually setting up the dispatcher for my test cases, there is no RequestNavigate method on the dispatcher anymore, and I cannot find an implementation of MvxShowViewModelRequest. So I cannot actually get any tests for my ViewModels to work.
I also tried to follow the testing here http://slodge.blogspot.com/2012/10/testing-viewmodels-in-mvvmcross.html but again ran into issues with missing MvxOpenNetCfServiceProviderSetup.
So in summary, my issue has been getting a MockSetup working so that I can test my ViewModels. If I could just be pointed in the right direction on the dispatcher, I think that would help.
It looks like you are trying to test an MvvmCross v3 application using MvvmCross vNext objects.
The updated twitter search test for v3 is at https://github.com/slodge/MvvmCross-Tutorials/tree/master/Sample%20-%20TwitterSearch/TwitterSearch.Test
This test uses a single special mock object: https://github.com/slodge/MvvmCross-Tutorials/blob/master/Sample%20-%20TwitterSearch/TwitterSearch.Test/Mocks/MockMvxViewDispatcher.cs
The role of this mock is currently just:
to provide a very simple main thread (it uses the current thread)
to provide simple storage for any navigation requests.
You can see it used in:
https://github.com/slodge/MvvmCross-Tutorials/blob/master/Sample%20-%20TwitterSearch/TwitterSearch.Test/HomeViewModelTest.cs

Rails rspec controller test vs integration test

I just completed writing detailed RSpec/Capybara integration and unit tests for a Rails app, which include mocking an OmniAuth (Twitter) login, filling in forms, data validations, etc. However, I am wondering whether there is a need to write separate controller or functional tests.
Would appreciate your input and any links to further readings etc.
I'll play devil's advocate here, since I know I'm probably in the minority with this opinion: I actually prefer to do exceedingly thorough controller testing. A few reasons:
1) I find it easier to systematically test every path and outcome at the controller level than at the integration test level. My integration tests are primarily just happy-paths, and some of the more common error paths.
2) A lot of potential security issues occur at the controller level. Thorough testing helps me ensure that nothing malicious can get through to my model logic.
3) This is subjective, but it really forces me to think about some of the long-tail paths that my application might go through. What if someone tries to force an invalid password reset token into the URL? Controller testing ensures that I consider all the options.
4) Unlike integration tests, they're fairly straightforward to test. Each action is just a Ruby method!
Personally, I think if your request (integration) spec is exercising all code paths you're covered. Ryan Bates has a great Railscast about how he tests here: http://railscasts.com/episodes/275-how-i-test?autoplay=true and about 5:05 in he says a similar thing. Like you I like to write integration tests rather than controller specs. Most of the time controllers simply front CRUD type operations anyway (especially if you're careful about keeping domain logic out of the controller), so all you're testing is the scaffolding.

Meteor test driven development [closed]

I don't see how to do test-driven development in Meteor.
I don't see it mentioned anywhere in the documentation or FAQ. I don't see any examples or anything like that.
I see that some packages are using Tinytest.
I would like a response from the developers about the roadmap regarding this. Something along the lines of:
possible, no documentation, figure it out yourself
meteor is not built in a way that you can make testable apps
this is planned feature
etc
Update 3: As of Meteor 1.3, meteor includes a testing guide with step-by-step instructions for unit, integration, acceptance, and load testing.
Update 2: As of November 9th, 2015, Velocity is no longer maintained. Xolv.io is focusing their efforts on Chimp, and the Meteor Development Group must choose an official testing framework.
Update: Velocity is Meteor's official testing solution as of 0.8.1.
Not much has been written about automated testing with Meteor at this time. I expect the Meteor community to evolve testing best-practices before establishing anything in the official documentation. After all, Meteor reached 0.5 this week, and things are still changing rapidly.
The good news: you can use Node.js testing tools with Meteor.
For my Meteor project, I run my unit tests with Mocha using Chai for assertions. If you don't need Chai's full feature set, I recommend using should.js instead. I only have unit tests at the moment, though you can write integration tests with Mocha as well.
Be sure to place your tests in the "tests" folder so that Meteor does not attempt to execute your tests.
Mocha supports CoffeeScript, my choice of scripting language for Meteor projects. Here's a sample Cakefile with tasks for running your Mocha tests. If you are using JS with Meteor, feel free to adapt the commands for a Makefile.
Your Meteor models will need a slight bit of modification to expose themselves to Mocha, and this requires some knowledge of how Node.js works. Think of each Node.js file as being executed within its own scope. Meteor automatically exposes objects in different files to one another, but ordinary Node applications—like Mocha—do not do this. To make our models testable by Mocha, export each Meteor model with the following CoffeeScript pattern:
# Export our class to Node.js when running
# other modules, e.g. our Mocha tests
#
# Place this at the bottom of our Model.coffee
# file after our Model class has been defined.
exports.Model = Model unless Meteor?
...and at the top of your Mocha test, import the model you wish to test:
# Need to use Coffeescript's destructuring to reference
# the object bound in the returned scope
# http://coffeescript.org/#destructuring
{Model} = require '../path/to/model'
With that, you can start writing and running unit tests with your Meteor project!
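For completeness, a minimal Mocha spec using such an exported model might look like this in plain JS (the assertion is just an invented placeholder, since it depends on what your Model actually does):

// test/model_test.js - run via your Cakefile/Makefile task or `mocha` directly
var chai = require('chai');
var expect = chai.expect;
var Model = require('../path/to/model').Model;

describe('Model', function () {
  it('can be constructed outside of Meteor', function () {
    var model = new Model();
    expect(model).to.be.an.instanceOf(Model);
  });
});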
Hi all, check out Laika, a whole new testing framework for Meteor:
http://arunoda.github.io/laika/
You can test both the server and client at once.
See some Laika examples here
See here for features
See concept behind laika
See Github Repository
Disclaimer: I'm the author of Laika.
I realize that this question is already answered, but I think this could use some more context, in the form of an additional answer providing said context.
I've been doing some app development with Meteor, as well as package development, both by implementing a package for Meteor core and a package for Atmosphere.
It sounds like your question might actually be a question in three parts:
How does one run the entire Meteor test suite?
How does one write and run tests for individual smart packages?
How does one write and run tests for their own application?
And, it also sounds like there may be a bonus question in there somewhere:
4. How can one implement continuous integration for 1, 2, and 3?
I have been talking with, and have begun collaborating with, Naomi Seyfer (#sixolet) on the Meteor core team to help get definitive answers to all of these questions into the documentation.
I had submitted an initial pull request addressing 1 and 2 to meteor core: https://github.com/meteor/meteor/pull/573.
I had also recently answered this question:
How do you run the meteor tests?
I think that #Blackcoat has definitively answered 3, above.
As for the bonus, 4, I would suggest using circleci.com, at least for continuous integration of your own apps. They currently support the use case that #Blackcoat had described. I have a project in which I've successfully gotten tests written in CoffeeScript running as unit tests with Mocha, pretty much as #Blackcoat had described.
For continuous integration on meteor core, and smart packages, Naomi Seyfer and I are chatting with the founder of circleci to see if we can get something awesome implemented in the near term.
RTD has now been deprecated and replaced by Velocity, which is the official testing framework for Meteor 1.0. Documentation is still relatively new as Velocity is under heavy development. You can find some more information on the Velocity Github repo, the Velocity Homepage and The Meteor Testing Manual (paid content)
Disclaimer: I'm one of the core team members of Velocity and the author of the book.
Check out RTD, a full testing framework for Meteor, at rtd.xolv.io.
It supports Jasmine/Mocha/custom and works with both plain JS and CoffeeScript. It also includes test coverage that combines unit/server/client coverage.
And an example project here
A blog post explaining unit testing with Meteor here
An e2e acceptance testing approach using Selenium WebdriverJS and Meteor here
Hope that helps. Disclaimer: I am the author of RTD.
I used this page a lot and tried all of the answers, but from my beginner's starting point I found them quite confusing. Once I ran into any trouble, I was flummoxed as to how to fix it.
This solution is really simple to get started with, even if not fully documented yet, so I recommend it for people like myself who want to do TDD but aren't sure how testing in JavaScript works and which libraries plug into what:
https://github.com/mad-eye/meteor-mocha-web
FYI, I found that I also needed to use the router Atmosphere package to make a '/tests' route that runs and displays the results of the tests, as I didn't want them to clutter my app every time it loads.
Regarding the usage of Tinytest, you may want to take a look at these useful resources:
The basics are explained in this screencast:
https://www.eventedmind.com/feed/meteor-testing-packages-with-tinytest
Once you have understood the idea, you'll want the public API documentation for Tinytest. For now, the only documentation for that is at the end of the source of the tinytest package: https://github.com/meteor/meteor/tree/devel/packages/tinytest
Also, the screencast talks about test-helpers; you may want to have a look at all the available helpers here:
https://github.com/meteor/meteor/tree/devel/packages/test-helpers
There is often some documentation inside each file.
Digging into the existing tests of Meteor's packages will provide a lot of examples. One way of doing this is to search for Tinytest. or test. in the packages directory of Meteor's source code. For a sense of what such a test looks like, see the minimal example below.
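As a minimal illustration of the Tinytest style (the package and file names below are invented, and Package.onTest/api.addFiles are the newer spellings; older Meteor versions used Package.on_test and api.add_files):

// my-package-tests.js - lives inside the package being tested
Tinytest.add('my-package - addition works', function (test) {
  test.equal(1 + 1, 2);
});

Tinytest.addAsync('my-package - async example', function (test, onComplete) {
  Meteor.setTimeout(function () {
    test.isTrue(true);
    onComplete();
  }, 10);
});

// package.js - register the test file with the package
Package.onTest(function (api) {
  api.use(['tinytest', 'my-package']);
  api.addFiles('my-package-tests.js');
});

Run it with meteor test-packages my-package and open the reported URL in a browser to see the results.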
Testing becomes a core part of Meteor in the upcoming 1.3 release. The initial solution is based on Mocha and Chai.
The original discussions of the minimum viable design can be found here and the details of the first implementation can be found here.
MDG have produced the initial bones of the guide documentation for the testing which can be found here, and there are some example tests here.
This is an example of a publication test from the link above:
it('sends all todos for a public list when logged in', (done) => {
  const collector = new PublicationCollector({userId});
  collector.collect('Todos.inList', publicList._id, (collections) => {
    chai.assert.equal(collections.Todos.length, 3);
    done();
  });
});
I'm doing functional/integration tests with Meteor + Mocha in the browser. I have something along the lines of the following (in CoffeeScript for better readability):
On the client...
Meteor.startup ->
  Meteor.call 'shouldTest', (err, shouldTest) ->
    if err? then throw err
    if shouldTest then runTests()

# Dynamically load and run mocha. I factored this out into a separate method so
# that I can (re-)run the tests from the console whenever I like.
# NB: This assumes that you have your mocha/chai scripts in .../public/mocha.
# You can point to a CDN, too.
runTests = ->
  $('head').append('<link href="/mocha/mocha.css" rel="stylesheet" />')
  $.getScript '/mocha/mocha.js', ->
    $.getScript '/mocha/chai.js', ->
      $('body').append('<div id="mocha"> </div>')
      chai.should() # ... or assert or expect ...
      mocha.setup 'bdd'
      loadSpecs() # This function contains your actual describe(), etc. calls.
      mocha.run()
...and on the server:
Meteor.methods 'shouldTest': -> true unless Meteor.settings.noTests # ... or whatever.
Of course you can do your client-side unit testing in the same way. For integration testing it's nice to have all Meteor infrastructure around, though.
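For illustration, the loadSpecs() referenced above could be as simple as the following (written in plain JS here; the single spec is just an invented placeholder):

// Assigned without `var` so it is visible app-wide under Meteor's file scoping,
// which lets runTests() above (and the browser console) call it.
loadSpecs = function () {
  describe('client environment', function () {
    it('runs inside Meteor on the client', function () {
      Meteor.isClient.should.equal(true);
    });
  });
};

Reloading the page (or calling runTests() from the console) then re-runs the suite in the browser.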
As Blackcoat said, Velocity is the official TDD framework for Meteor, but at this moment Velocity's webpage doesn't offer good documentation. So I recommend you watch:
Concept behind velocity
Step by step tutorial
And specially the Official examples
Another option, made easily available since 0.6.0, is to run your entire app out of local smart packages, with a bare minimum amount of code outside of packages to boot your app (possibly invoking a particular smart package that is the foundation of your app).
You can then leverage Meteor's Tinytest, which is great for testing Meteor apps.
I've successfully been using xolvio:cucumber and Velocity to do my testing. It works really well and runs continuously, so you can always see that your tests are passing.
Meteor + TheIntern
Somehow I managed to test a Meteor application with TheIntern.js.
It is tailored to my needs, but I still think it may lead someone in the right direction, so I am sharing what I did to resolve this issue.
There is an execute function which allows us to run JS code through which we can access the browser's window object, and hence Meteor as well.
Want to know more about execute
This is how my test suite looks for functional testing:
define(function (require) {
  var registerSuite = require('intern!object');
  var assert = require('intern/chai!assert');

  registerSuite({
    name: 'index',

    'greeting form': function () {
      var rem = this.remote;
      return this.remote
        .get(require.toUrl('localhost:3000'))
        .setFindTimeout(5000)
        .execute(function () {
          console.log("browser window object", window)
          return Products.find({}).fetch().length
        })
        .then(function (text) {
          console.log(text)
          assert.strictEqual(text, 2,
            'Yes I can access Meteor and its Collections');
        });
    }
  });
});
To know more, this is my gist
Note: I am still in a very early phase with this solution. I don't know whether I can do complex testing with this or not, but I am pretty confident about it.
Velocity is not mature yet. I am facing setTimeout issues when using Velocity. For server-side unit testing you can use this package.
It is faster than Velocity, which takes a huge amount of time whenever I test any spec involving a login. With Jasmine code we can test any server-side method and publication.
