Appium Tests in an Actual Mobile App Dev Project - Gradle

I have been doing some hands-on labs in Appium for a couple of weeks now, using tutorials on YouTube, Udemy and other sources. I am pretty comfortable running those tests against the sample APKs from these tutorials.
Now I would also like to understand how Appium tests are run on an actual mobile app dev project using JUnit or TestNG, where we do not work with a standalone sample APK, but rather the automated tests are triggered during the build using IntelliJ and Gradle. Running the automated tests manually does not make sense, yet these tutorials only do that rather than kicking the tests off during the build. If any of you have live experience with Bitrise, your input would also really help, since Bitrise will be used in my project as well.
Any inputs on this would be greatly appreciated!
Thanks in advance.
PS - I am a newbie tester :)

I use Bitrise, but only as a continuous integration (CI) service; I don't use mobile devices there. I use AWS Device Farm to run the automated tests.
When you said
Running the automated tests manually does not make sense
it depends on: project size, test suite size, how many new features land in each pull request (PR), how many new features are released in each new version, the execution time of everything, and costs.
The value of having automated tests that fire on every push/commit, PR, or release should be evaluated.
For example, if you use the testing pyramid concept:
Unit tests (owner: the same developer) - checked on each build
Service tests (owners: backend and QA automation) - checked on each build
User interface tests (owners: QC and QA automation) - verified on each new version or release
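On the Gradle part of the original question: a build-triggered Appium run is usually just an ordinary JUnit (or TestNG) test class that the build's test task executes, with the APK path and Appium server URL supplied by the build instead of being hard-coded. Here is a minimal JUnit 4 sketch; the system property names app.path and appium.url are invented for illustration, and capability names vary slightly between Appium/java-client versions:

```java
import io.appium.java_client.android.AndroidDriver;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

import static org.junit.Assert.assertNotNull;

public class LoginScreenTest {

    private AndroidDriver driver;

    @Before
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "ci-emulator");
        // Values come from the build, e.g. -Dapp.path=... -Dappium.url=...
        // (property names here are illustrative, not a convention)
        caps.setCapability("app", System.getProperty("app.path"));
        driver = new AndroidDriver(
                new URL(System.getProperty("appium.url", "http://127.0.0.1:4723/wd/hub")),
                caps);
    }

    @Test
    public void appStarts() {
        // Placeholder check; real tests would drive the UI here.
        assertNotNull(driver.getSessionId());
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```

A Gradle test task can forward those values (for example with systemProperty in the test task configuration), so the suite runs as part of the normal build, and a CI service such as Bitrise then only needs to invoke that same Gradle task.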

Related

How to set up Jenkins for build, unit test and system tests

I want to set up Jenkins for a decent build chain for a JavaFX application that controls a robotic arm and other hardware:
We are using BitBucket with the Git Flow model.
We have 2 development modules and 1 System Test module in IntelliJ that we build with Maven.
We have Unit tests, Integration tests and System tests. Unit and Integration tests use JUnit and can run on the Jenkins master; for the System tests we use TestNG, and they can run the TestFX tests on a Jenkins agent.
(I think TestNG is more suited for System tests than JUnit)
The development build project (build, unit + integration tests) was already in place. The test chain was recently set up by copying the development project, adding the system tests and ignoring the unit/integration tests (so the application is built twice).
We have 2 types of System tests:
Tests that are fairly simple and run on the application itself
Tests that are more complex and run on the application that interacts with several simulators for the robotic arm
Now I need to set up the 2nd type of tests.
My question would be: what is the best way to set this up in Jenkins?
I'm reading about Jenkins Pipelines and the Blue Ocean plugin here, and about a matrix configuration project here. To me it is all a bit confusing which is the ideal way to achieve my goals.
I have no clue how to scale from a testng.xml file in my SystemTest module to flexible tests.
Can I put a sort of capabilities tag on tests so that the correct preconditions are set? For example, for tests in category 1, only the main application needs to be started for TestFX. However, for tests in category 2, several simulators need to be started and configured. I think using a sort of capabilities tag will make this much more maintainable.
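For what it's worth, TestNG groups can express that kind of capabilities tag; a rough sketch with invented group names, where a @BeforeGroups method brings the simulators up only when the "simulator" group is selected:

```java
import org.testng.annotations.AfterGroups;
import org.testng.annotations.BeforeGroups;
import org.testng.annotations.Test;

public class RoboticArmSystemTests {

    @BeforeGroups(groups = "simulator")
    public void startSimulators() {
        // Start and configure the robotic-arm simulators here
        // (details depend on how the simulators are launched in this project).
    }

    @AfterGroups(groups = "simulator")
    public void stopSimulators() {
        // Shut the simulators down again.
    }

    // Category 1: only needs the running application.
    @Test(groups = "application")
    public void simpleUiCheck() {
    }

    // Category 2: needs the application plus the simulators.
    @Test(groups = {"application", "simulator"})
    public void armMovementScenario() {
    }
}
```

A testng.xml <groups>/<include> section (or the equivalent command-line option) could then select which groups run on which agent.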
My goals:
Easy to maintain Jenkins flow
Efficient building, so preference to copying artifacts instead of building a second time
Possibility to split the system tests over multiple agents, preferably without me having to be concerned about what runs where (similar to Selenium Grid)
Correct dependencies (simulators etc.) are started depending on whether the tests need them
We are looking into running the tests on VMs with OpenGL 3D acceleration due to a canvas used in the application. If tests were able to allocate, start and stop VMs on demand, that would be nice (but would only save some electricity)
Easy reporting where all test results are gathered from all agents. Note that I prefer the JUnit report because it highlights which tests were @Ignored; the TestNG report format doesn't say anything about ignored tests.

How to set up an Appium UI test Maven project to work with GitLab CI to test an Android app?

I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, and it can run the UI tests locally. My next step is to hook my tests up to the GitLab repo (which is already there, created by the Android developers), but I am stuck here. Could somebody help me out?
Please try to be specific:
How should I set up the .gitlab-ci.yml?
Can we just have a script in the YAML to download Appium and Maven?
Or could we just download Appium, and import all the Appium java-client jars into libs in main?
If either of the above is true, how? If neither, what should I do, and how?
Where should I put my tests in that GitLab repo? Or do I not have to put my tests in the existing repo at all - could I have another one and tell the YAML where to reach it? Again, how?
It would be helpful if you could walk me through the workflow. For example: the developers check in code, GitLab reads the YAML, then builds, then finds my test suites where (Q3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with your app installed on either a simulator or a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing; you are testing exactly the same app which is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and the Appium server. What we need to do is add another stage in .gitlab-ci.yml and tell it to:
take the newly compiled app
install the app on a simulator/real device
compile your test suite and run it
To make things easier to understand, let's start with question 4, the workflow:
When code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage it does the automation testing. Note that it is running on your server, which means you need to have Appium installed on your server and have it up and running when you try to run your automation test suite. Now the question is: is your server capable of that? If you want to run the automation tests on your own server, you need to install Appium on it, probably a simulator (which might require the server to be equipped with a GPU), etc. - these are the concerns of maintaining a server. The alternative is using a third-party service, which is what I did. It turned out that our server (when I was at that company) wasn't capable of running automated UI tests, so we turned to AWS Device Farm (ADF); there are many other service providers you could choose from, see the link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
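If you do run the tests on your own runner rather than a device cloud, the Appium java-client can also start the server for you, so the CI job does not have to manage a separate Appium process. A rough sketch of that setup step; it assumes Node.js and the Appium npm package are installed on the runner, and APP_UNDER_TEST is a made-up variable the CI job would export with the path of the freshly built APK:

```java
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.service.local.AppiumDriverLocalService;
import org.openqa.selenium.remote.DesiredCapabilities;

public class CiSmokeRun {

    public static void main(String[] args) {
        // Boots a local Appium server on a free port (requires Appium to be installed on the runner).
        AppiumDriverLocalService service = AppiumDriverLocalService.buildDefaultService();
        service.start();
        AndroidDriver driver = null;
        try {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "ci-emulator");
            // Path of the APK produced by the earlier build stage; the variable name is illustrative,
            // and capability names vary slightly between Appium/java-client versions.
            caps.setCapability("app", System.getenv("APP_UNDER_TEST"));
            driver = new AndroidDriver(service.getUrl(), caps);
            // ... exercise the app here ...
        } finally {
            if (driver != null) {
                driver.quit();
            }
            service.stop();
        }
    }
}
```

In a device-cloud setup this step is replaced by the provider's upload/schedule call, which is what the script described next does.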
So I basically have a Python script in my functional test stage that grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for our functional tests in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. Then you script the necessary commands in your stage or, if it gets too lengthy, put your script in another file (in your repo) and execute it from the stage. In my case, I put my script in python_ci.py and execute it in my stage with "python python_ci.py" (here you need a Docker image with these requirements, see below too).
You don't download Appium in the pipeline; you set up Appium on your server, or, if you use a cloud service, that service should set up Appium for you.
What I did was use Maven to build and package the test suite locally and then push it to the GitLab repo; I now believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. Now it comes back to the first point of question 1, how to get Maven: my understanding is that it's a dependency of the server, like Python, so both can be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
Answer to question 3:
Put the tests in the same repo, but outside the Android project (i.e. they will be under the same directory).
How do you tell the YAML where to reach the test suite? Remember they are on the same server, so you can use a relative path in your YAML script to tell it where to get your test suite.
Hope this helps!

Continuous Integration testing with physical devices

I've been wondering how you go about doing CI-style testing when you're dealing with physical devices.
I imagine you have a suite of tests, and a pool of devices against which they can be run.
Additionally:
Some tests may require specific device models.
Some tests may require the use of more than one device.
What CI servers have support for this?
I'm still interested in those which have partial support, either natively or through plugins, as I'm interested in how it's done.
Continuous Integration enables a team to integrate & test their work frequently. Automated builds are meant to compile, link and run unit tests. You want your CI to run fast, especially if you run it at every check-in. That is why you would want to restrict CI activities to simple confirmation of the build and unit tests alone. What you're asking seems more along the lines of quality assurance (QA) testing... and having QA failures mixed into your CI efforts would hold development efforts back from progressing.
As such, I'm under the impression that activities associated with CI are not dependent upon the final physical machine the work may eventually be migrated to.
Now... this doesn't mean you CAN'T take the CI-compiled package and run it against some final target machine... but again, that is really considered a separate activity.
This seems to be reinforced in the following article by Martin Fowler.
Notice he doesn't talk about the final targeted devices...only the build machine.
I can suggest Test Manager, which is part of Microsoft's TFS suite. I have not tried it with many environments apart from Windows-based ones, though I know there are many connectors. For Windows-based environments I believe it will satisfy most needs.
I use it for nightly builds to perform smoke tests (turn it on, see if any smoke comes out), but you have to be careful to keep tests small so that they finish in a matter of hours and not days, if you want it to be part of your CI.
Then, when quality is good enough, you can proceed to regression tests and integration tests if needed.
I wouldn't get too caught up in what a CI system is supposed to do or not do. Instead I would focus on the problem you are trying to solve. It sounds like that problem is to facilitate development on multiple platforms. You can use the concept of Continuous Integration and add to it to successfully address the issue. I know, because I've done it in the past.
I implemented a build system for code that needed to compile and test successfully on 4 different platforms (nt, wince, linux-arm, linux-x86). The CI server would:
Use a Linux and a WinNT build server for compilation (and cross-compilation)
Copy the compiled tests and supporting libs to the appropriate devices and execute an automated test run
Copy the log back after the test suite completed (or have it written to a network-mounted fs)
Tag the source and package the libs and executables if the test suite was successful
This same platform was reused for developer verification before commits. Developers would run a partial build and test (only updated source would be recompiled and those tests rerun). The CI would execute a full build (from scratch).
Our builds were pretty fast because we had a proper DAG for build dependencies. This allowed for concurrent compilation within a platform build, and each platform build was also concurrent. As a result, partial builds took a few seconds and full builds took ~30 minutes. Our build servers were quite beefy (optimized for fast compiles) and the codebase was of moderate size (I don't remember the stats).

Infrastructure required for TDD?

I am 'relatively new' to unit testing and TDD. Only recently have I completed my first production application that has (at least in theory) 100% code coverage. I have done unit testing in previous projects as well for some time, but not in true TDD fashion and not with good code coverage. It had always been an afterthought. I feel I have a pretty good grasp on it now, though.
I'm also trying to train the rest of the team on TDD and unit testing so that we can grow together and start doing unit testing in all of our applications, and eventually progress to full TDD with automated builds & continuous integration. I posted a thread here regarding my plan of attack / training agenda for comments & criticism.
One of the replies (in fact the highest voted) suggested I first set up infrastructure before going forward with the training. Unfortunately I have no exposure to this, and googling the topics is difficult because the pages for CruiseControl.NET / NAnt / etc. do not really explain why we should set this up or how everything connects together.
We are a small shop (about 10 developers), use almost exclusively Microsoft technologies, and do our development in VB.NET. We are looking to eventually start using C#, but that's for another time. I've been using the MSTest project that comes with VS2008 for my unit tests, I've been building my apps in Visual Studio, and I deploy using MSI setup projects... We also (unfortunately) use VSS for our source control - but that is also on the chopping block, and I'd really like to get rid of it and use Subversion.
I know that I need to use CruiseControl.NET for CI, and either NAnt or MSBuild for building the applications. And I probably need a build server to run all these builds. But I just can't find anything that connects the dots and explains how they interact with each other, what should be on your build server, and when you should build with your build server (is it just for deployment builds, or even when you just want to compile the app you're developing after making a small change in your local environment?). I'm also planning on axing MSTest, as I've found it to be buggy, and will use NUnit instead.
Can anyone perhaps illuminate this gap I have from 'knowing how to do TDD' to 'setting up the proper infrastructure so the whole team can do it and work together'? I do understand what continuous integration is, but again, I'm not sure how a build server should be set up, how it connects with everything, and why we need one (e.g. the pitch to management).
Thanks very much for your time.
What portion of FinalBuilder do I need? It seems there's some overlap between FinalBuilder and TeamCity. FinalBuilder Server seems to be a CI server, so I'm guessing I don't need that. FinalBuilder seems to be a build server - but I thought TeamCity is also a build server... And Automise seems to be a visual Windows automation tool, like some kind of development platform for WinForms apps...
I also don't see support for FinalBuilder in the TeamCity supported apps diagram.
Take a look at a webinar I did a few weeks ago - How To Start Unit Testing Successfully. In that webinar I talked about tools and unit testing best practices; it was aimed at developers just like you who want to introduce unit testing in their organization.
First order of business: you want to put a CI (Continuous Integration) process in place, and for that you'll need three tools:
Source control
Build server
Build client/script
I hope you already have some form of source control in place so let's talk about the other two.
Build server - checks source control and, when it changes (or some other condition is met), runs a build script on some client (or the same machine). There are several build servers available; I recommend JetBrains' TeamCity - it's easy to install and use (great web interface) and is free for up to 20 developers (that's you).
Build script - on your build client you want to run a build script that builds your solution and runs your unit tests. TeamCity has some basic build & test capabilities, but for more advanced options (building an installer, documentation, etc.) you'll need some script runner. At work we use FinalBuilder - it's not free but has a very good editor. If you're looking for a free alternative, have a look at Ant or NAnt - but be prepared to edit a lot of XML.
Other tools - because an important part of successful unit testing is how easy it is to write and run tests on the developers' machines, I suggest you check whether there are better IDEs or external tools that would help the developers write & run their unit tests.

NAnt with DB integration tests, and eventually Continuous Integration

I've been exploring different strategies for running integration tests within some NAnt build scripts. Typically a number of different scripts are chained in one monolithic build that has separate targets: staging (build a staging version, like build), build (just build the stuff), and integration (build the stuff and run the integration tests). This works reasonably well; the build target takes about a third of the time of the integration target, and it's not painfully long, so I don't find myself disinclined to run it frequently.
The integration target on the other hand takes long enough that I don't want to do it very often - ideally just before I'm ready to do a deploy. Does this seem like a reasonable strategy? IOW, am I doing it right?
The plan is to eventually move this project to Continuous Integration. I'm new to the whole Continuous Integration thing, but I think I understand the concept of "breaking the build", so I'm wondering: what are some good practices to pick up in order to make the most of it?
Any good sources of reading on this subject would be appreciated as well. Thanks!
Yes, you are on the right track. What you need to do now is hook your NAnt targets up to an automated process. I recommend using either TeamCity or CruiseControl as your CI tool. Once you have your automation server set up, you can run your build and unit tests on each check-in (Continuous Integration). Your integration tests could then run at night or over the weekend, since they typically take longer. If your integration tests are successful, you can then have a job that deploys to a QA or other server.
Sounds like you're 99% of the way there. My advice is to just dive in and start doing it. You'll learn a lot more by actually taking the plunge and doing it than by thinking about whether you're doing it right.
My company is currently using CruiseControl and I personally think it's great.
See this related thread What is a good CI build process?
You are on the right track. If you're using a decent CI tool, you should be able to set each step up as a separate project that triggers the next step in the chain... i.e. a successful build triggers tests, which trigger deployment, which triggers integration, etc.
This way your earliest "break" stops the line, so to speak.
We use CruiseControl to build, unit test, configure and deploy, run integration tests and code coverage, run acceptance tests, and package for release. This is with a system of 8 or so web services and a dozen or so databases, all with interrelated configuration and deployment dependencies across multiple environments with different configurations (anything from single boxes to redundant boxes for each component).
