Mockito vs Mocktail in Flutter - flutter-test

I am new to Flutter and have started writing tests. I see mockito and mocktail mentioned as the most used testing libraries, but I couldn't find any questions or articles that explain the differences between them. If there are experienced developers who have used both of them, can you explain their differences, advantages and disadvantages? Which one should I prefer?
Thanks in advance!

1. Since you are new to Flutter, it would probably be easier for you to use the mocktail package.
The main "inconvenience" with the mockito package is that you need to generate the mocks by running flutter pub run build_runner build, add annotations like @GenerateMocks, add imports like xxx.mocks.dart, and keep an extra build_runner dev dependency in your pubspec.yaml.
The mocktail package simplifies mocking: you just extend the Mock class. That's it: no code generation, no annotations, no "magic" xxx.mocks.dart imports.
2. Also, keep in mind that the mocktail package is very new, with a stable release history of only about 10 months, while the mockito package is a time-proven library with almost 8 years of stable releases: it is well known and widely used in the Flutter and Dart community.
With experience, you will get a better feel for which library fits your projects' needs.
PS: to get a feel for the difference, compare short code snippets of both packages.
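For illustration, here is a minimal sketch of each style. WeatherApi (with a Future<int> temperature(String city) method) is a made-up class standing in for whatever you want to mock; this is not the code from the original snippets.

    // --- mockito: mocks are generated by build_runner (test/weather_mockito_test.dart) ---
    import 'package:mockito/annotations.dart';
    import 'package:mockito/mockito.dart';

    import 'weather_api.dart';                // hypothetical class under test
    import 'weather_mockito_test.mocks.dart'; // generated by: flutter pub run build_runner build

    @GenerateMocks([WeatherApi])
    void main() {
      final api = MockWeatherApi();           // generated mock class
      when(api.temperature('Oslo')).thenAnswer((_) async => 5);
    }

    // --- mocktail: no code generation, just extend Mock (test/weather_mocktail_test.dart) ---
    import 'package:mocktail/mocktail.dart';

    import 'weather_api.dart';                // same hypothetical class

    class MockWeatherApi extends Mock implements WeatherApi {}

    void main() {
      final api = MockWeatherApi();
      when(() => api.temperature('Oslo')).thenAnswer((_) async => 5);
    }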

Answering my own question after trying both of them:
At first I decided to use mockito since it has the bigger reputation, but regenerating the mock classes over and over was tedious. Then I gave mocktail a chance and found it clearly better, and I would recommend it over mockito. Not having to generate mock classes is not its only advantage!

Related

Should I create a separate Benchmark project?

I want to measure the performance of some methods in my console application using the BenchmarkDotNet library.
The question is: should I create a separate project in my solution where I copy the methods I am interested in measuring and do the measuring there, or should I add all the attributes necessary for measuring to the existing project?
What is the convention here?
You can think of it like adding unit tests for your console app: you don't add the tests to the app itself, but typically create a new project that references (not copies) the logic you want to test.
In my opinion the best approach would be to:
Add a new console app for benchmarks to your solution.
In the benchmarks app, add a project reference to the existing console app.
Add new benchmarks with all the BenchmarkDotNet attributes to the benchmark project, but implement them by calling the public types and methods exposed by your console app. Don't copy the code (over time you might change one copy and end up measuring an outdated version). A minimal sketch is shown below.
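For example, the benchmark project could look roughly like this (MyConsoleApp and TextHelpers.Slugify are made-up names standing in for your existing project and one of its public methods):

    // Benchmarks/Program.cs in the new benchmark console app
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using MyConsoleApp;                  // <ProjectReference> to the app under test

    [MemoryDiagnoser]
    public class TextBenchmarks
    {
        // Calls the referenced project's code instead of a copy of it
        [Benchmark]
        public string Slugify() => TextHelpers.Slugify("Hello, Benchmark World!");
    }

    public class Program
    {
        public static void Main(string[] args) => BenchmarkRunner.Run<TextBenchmarks>();
    }

Run it with dotnet run -c Release from the benchmark project, since BenchmarkDotNet needs an optimized Release build to produce meaningful numbers.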

Why am I getting `java.lang.AssertionError: Built-in class kotlin.Any is not found` when using `copy` in TeamCity DSL?

Background
I'm trying to create some TeamCity configuration using Kotlin. I'm using Maven in IntelliJ to test the generation of the TeamCity configuration, although I get the same result using the command line.
Problem
A minimal example: https://gist.github.com/3761e6f3847db9f8f772c9e16663aaa9
To recreate the issue, use the command:
mvn teamcity-configs:generate
The error is:
[ERROR] Runtime error RootProjectId: kotlin.reflect.jvm.internal.impl.builtins.KotlinBuiltIns$3[113]: java.lang.AssertionError: Built-in class kotlin.Any is not found
Although I've taken steps to minimise the provided example, I'm no Maven expert and I'm not sure what else could have been shaved off the pom.xml file.
The problem seems to stem from an attempted use of copy (cf. the docs), which seems to trip up Kotlin with a reflection issue. Remove the copy and the config generation works fine.
Research
There are a few places around where this is discussed (e.g. here and here), but I can't find any that match the issue I'm experiencing or suggest a solution which fixes it for me.
The most interesting one is this one, which is not directly relevant as it concerns moving from Kotlin 1.3.x to 1.4.x; however, the discussion touches on the interdependence of kotlin-stdlib[...], kotlin-reflect and Java itself, from JetBrains developer "Udalov" (direct link to comment). The details are over my head and may not be relevant here, but it's the most technical treatment of this issue I've seen.
What I've tried
I've tried adding kotlin-reflect as an explicit dependency and making sure that kotlin-stdlib-jdk8 is present and correct. I've tried varying the Kotlin version from 1.3.70 to 1.3.72 to 1.4.32 with no change to the result.
Any help or insight on this would be appreciated, even if it's just to advance my understanding of this software stack.

Cucumber Tests Framework

We are looking at Cucumber for our test automation framework because everyone, including business people, can understand it.
We use an AngularJS frontend and a Java REST backend. The team that is going to write the step definitions likes Ruby, so we want to stick with Ruby for that.
We would also like to use Maven to tie this into our build process.
Will Cucumber be a good fit given the story above?
Hui Peztherez, from my perspective Cucumber is a great choice; we use it with the same architecture except for Angular.
We are using Maven too, and it's very useful to orchestrate the runs with Jenkins, using Maven to run tagged scenarios:
mvn test -Dcucumber.options="--tags @smoke"
ref: https://cucumber.io/docs/reference/jvm
Jenkins also has several plugins for Cucumber reporting, which are very useful for testers. We are now working on the HPQ server integration with a plugin called Bumblebee (this part is still under development on both sides, ours and Bumblebee's).
Another good choice is Ruby; step definitions are very easy to write in Ruby...
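For example, a Ruby step definition might look like this (the step wording is made up, and the rspec-expectations gem is assumed for the expect assertion):

    # features/step_definitions/account_steps.rb
    Given(/^an account with a balance of (\d+)$/) do |amount|
      @balance = amount.to_i
    end

    Then(/^the balance should be (\d+)$/) do |expected|
      expect(@balance).to eq(expected.to_i)   # provided by rspec-expectations
    end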
We also have an integration with Selenium for the frontend side, and it works well...
So go for it!
We have used Cucumber in Java, with Gradle recently and with Maven in the past, and it works fine. We have frameworks for both UI and API testing: for the UI we used WebDriver to write step definitions, and for the API we used RestAssured. You can do the same things in Java that you can do in Ruby.
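As a rough sketch, a Java step definition using the cucumber-java 1.2.x annotations (the feature wording and class are made up) mirrors the Ruby version above:

    import cucumber.api.java.en.Given;
    import cucumber.api.java.en.Then;
    import static org.junit.Assert.assertEquals;

    public class AccountSteps {
        private int balance;

        @Given("^an account with a balance of (\\d+)$")
        public void anAccountWithBalance(int amount) {
            balance = amount;
        }

        @Then("^the balance should be (\\d+)$")
        public void theBalanceShouldBe(int expected) {
            assertEquals(expected, balance);
        }
    }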
Maven for Java Cucumber:
http://mvnrepository.com/artifact/info.cukes/cucumber-java/1.2.4 - please add other dependencies as needed.
Jenkins plugin: https://wiki.jenkins-ci.org/display/JENKINS/Cucumber+Reports+Plugin
Will Cucumber be a good fit given the story above?
- Yes, it is a good fit. I would suggest showing a POC (proof of concept) to management. In my past experience, management had no clue about BDD and a very hard time understanding coverage, and we had to do a very deep dive to give them all the information. It is very important to answer the following questions for management:
Does the BDD report give management an accurate idea of test coverage?
Is everyone on the team able to write feature files, and of the same quality?
Will the feature files and the BDD report be the starting point for checking test coverage?
Thank you.
Please be aware that Cucumber is a BDD framework that is used on top of a browser automation framework like Selenium WebDriver, Watir or Protractor; they are two distinct things. Most of those tools implement the Selenium WebDriver protocol.
My only concern is using Maven with that project setup: I know you can run Ruby code on the JVM using JRuby, but I'm not sure which plugin you'd use to trigger that from Maven.

grails 2 / groovy 2 / JDK7: how to reap the benefits?

I really love Grails but I was wondering how to get the performance benefits of Groovy 2.
The question is how to configure the development and production environments in order to get that "close to Java" performance boost.
So, if I set up:
* JDK 7
* Groovy 2 (the indy JAR, to use invokedynamic)
* Grails 2.2
are there any guidelines for really speeding up my webapp out of the box?
And do I need to do any refactoring in my Grails webapp codebase? I mean, should dependency-injection code like service references in controllers be statically compiled, or should I keep writing code as the docs say?
PS: I guess Groovy @CompileStatic and Grails might be a relevant question...
It depends on what might be slowing your web application down :) I know "it depends" is so often the answer, but it's still true.
Anyway, I've asked around and it seems that Grails and invokedynamic won't go together just yet. The reloading agent needs updating and there may be problems with the cglib/asm libraries used by Hibernate.
Regardless, internally Grails is making more and more use of @CompileStatic (for the parts that weren't already written in Java), so unless your app does a lot of work itself, you're unlikely to see a big boost from invokedynamic.
It would be useful to have some official information on this, but it's not out there right now.

Why do we need complexity in dependency management?

I am not sure if the title of the question is correct, but please read the question.
I have been working in C/C++ for most of my working life (close to 11 years). We only had C/C++ source and header files, and all dependencies were managed by Makefiles. Things were simple and manageable.
For the last 1.5 years I have shifted to the Java domain, and I find it extremely irritating that the most difficult aspect of working with anything new is the dependency managers, e.g. Maven, Leiningen, Buildr, sbt, etc.
Whenever I download anything new from the open source world, a significant amount of time has to be spent just setting up the compilation, build and run environment, even when I am using Eclipse. Why can't all the dependencies be shipped along with the software to be downloaded? Why must tools like Maven and Leiningen make a separate internet connection to download the dependencies? I know that Maven keeps a local repository and should be able to find a dependency locally, since it downloads the whole internet anyway, but why is this model used at all? I am behind a firewall where not everything is accessible, and the tools fail to download dependencies. I am sure the same situation exists in most work environments.
Recently I started with Clojure, and it has been a pain to get Eclipse configured for it. Leiningen is supposed to be some magic that must be used with any Clojure development; sometimes it feels like learning Leiningen is more important than learning the concepts of Clojure. I downloaded the so-called 'standalone' jar file for Leiningen because 'self-install' was not working for me, but it fooled me: as soon as I run the 'lein' command it makes an internet connection and tries to download something. WHY? It won't even print the help menu without connecting to the internet. WHY? There is no way I can satisfy its demands without bypassing my firewall, because I don't know, and no one can tell me, everything it wants. There is simply no other way.
And everyone seems to be inventing their own. Java had Ant, which was simple, then moved to Maven; some projects use the Ruby-based Buildr; Clojure has Leiningen; Scala has sbt; Go has something else. WHY? Why do we need this added complexity in a world already full of complexity? Why can't there be just one tool?
All you experts in Java technology, please excuse my rant. I am sure this question will be downvoted and closed as coming from someone who is not trying hard enough to understand things. But please believe me, I have spent enough hours battling this unnecessary complexity.
I just want to know how others get around this, or am I the only unfortunate one facing these issues?
I guess this question cannot really accept a single answer. I can humbly provide some elements that will hopefully help you put the problem in perspective.
There are mainly two problems I see with Java build systems:
some of them are declarative while others use scripts
the fragmentation of the Java tools for building and exercising control is tied to people and to the stewardship of the Java space, not so much to the technological choices.
Maven is the epitome of defining your build with a formal grammar in a standard manner. Your pom.xml file contains a lot more than just your build: it is the identity of your artifacts, the project metadata, the modules and the plugins brought in. It pays particular attention to the declaration of dependencies and repositories.
Maven is declarative.
For a certain population of programmers, who don't create new projects very often, this is great. It works well over time and consolidates the build nicely.
Ant is a different system where you define tasks that will execute, chained in a particular order. All the definitions are made using XML and in effect, you are writing scripts and declaring how they will be stitched together.
Buildr (full disclosure: I am a committer there) is a build system created out of frustration with the inefficiency of the declarative approach when the build needs additional steps and complex testing, and with the rigidity of using XML for a build. It is script-based, favoring convention over configuration (it provides a few good defaults, but lets you take the wheel if you need to change things).
I am not familiar with Gradle and SBT but I think they extend and build on this approach, from what I heard.
I hope this gives you a better picture of the build tool landscape.
The reason no standard build tool emerged is probably that Sun didn't push one with Java. Eventually, I think they adopted Ant (most JSR jars I have seen are built with it). There have also been products built in this space by extending some of those build systems; there is always going to be a huge difference between people being paid to maintain code and people doing it on the side.
And, well, people argue. Build systems are a great way to start a flame war. We have a hard time agreeing on a standard, though some common elements are now settling around Maven artifacts.
As for the need to download the Internet over and over again, it's a rather long story but here are a few things that may trigger the need for an unnecessary download:
any dependency using a SNAPSHOT version will try to get the latest snapshot. This is a useful scheme, but it takes its toll: you might depend on something that depends on a snapshot and trigger a download because of that.
Maven doesn't re-download artifacts, but it sometimes re-checks checksums and remote metadata. This is easy to avoid: run Maven in offline mode with the -o option on the command line (see the example after this list).
Tools like Buildr were built to fix this issue once and for all. First, you only download what you said you would. Second, no connection is made again unless you ask for it. By default, Buildr doesn't play the transitive dependency game, though you can ask for it explicitly.
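For example, with plain Maven you can pre-fetch what the build needs once and then stay offline (assuming all dependencies are resolvable from your configured repositories):

    # pull everything the build needs into the local repository
    mvn dependency:go-offline
    # later builds can then run without touching the network
    mvn -o clean install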
I hope this was informative and that your journey in Java land becomes less painful going forward.
