Using TeamCity, how can I deploy to an environment and then run tests against that environment? - continuous-integration

I am struggling to get my head around this!
I wish to have TeamCity deploy our Windows service to a particular environment, then have a separate project run acceptance tests against that environment.
Currently I have a project that builds then runs unit tests, and finally packages up the deployable elements.
A second project takes the package (artefact dependency) and deploys to the environment.
Now I wish to run acceptance tests against that deployment. The tests are not in the deployable package, so I must return to the "build" project... I thought I could use a snapshot dependency to reuse the already-compiled files (I don't want to check out or recompile anything).
However, I just get an empty folder on the agent when I hit 'Run' on this project.
I must have misunderstood how this works!
Are there any blog posts to help elucidate this?
The tests are SpecFlow/NUnit tests.
Please ask for more info if I have not been clear!

You might want to set up the tests as an artifact of the build project, then deploy the tests to the deployment environment.
Then run a separate TeamCity agent on the deployment environment to actually execute the tests on that environment.
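If you go this route, the tests can be published with an extra artifact rule on the build configuration and pulled into the test build via an artifact dependency. A minimal sketch of the two rules, assuming the compiled SpecFlow/NUnit binaries land in an AcceptanceTests folder (the paths are illustrative, not your actual layout):

    Artifact paths on the build configuration:
        Deploy/**/* => deploy-package.zip
        AcceptanceTests/bin/Release/** => acceptance-tests.zip

    Artifact dependency on the acceptance-test configuration:
        acceptance-tests.zip!** => tests/

TeamCity artifact rules use the source => destination form shown here, and the archive.zip!pattern form on the dependency side unpacks the archive onto the agent. That also explains the empty folder above: a snapshot dependency only pins the builds to the same sources, it doesn't copy any files; an artifact dependency is what actually delivers them.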

Related

Run newman on the local build instead of deploying to a test environment using TeamCity

I am looking to run my Postman scripts using Newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the Postman scripts against that particular build, so it isn't deployed to an environment used by other developers, which could potentially break it.
My current build chain in TeamCity is:
Build the main project (contains the REST API and all required code)
Run Postman scripts using Newman on that project
I have the collection and environment file, along with the CLI command to call them. When I try to point the environment file at a local build, it does not work.
I am thinking of running an IIS Express server on the agent and then running the tests against that active port, but I have been unsuccessful so far.
Any ideas on how to approach this would be appreciated!
I have looked at How do I integrate my Postman Integration Tests with TeamCity and this uses a test environment, which is not what I am after.
I looked at https://ie.com.au/a-how-set-up-automated-api-testing and this was helpful, but I think it is still reliant on setting up a test environment.
TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build; in order to do that, you'll need an environment. You need something to run your project so there is something to query against.
The only potential path you might look at is containerizing your project, in Docker or something similar, then running your image after it's built and querying against that. However, this isn't a great practice and bloats the build time.
A better practice would be to build your project, then deploy it to a separate 'test' or 'dev' environment that is OK to break, and after the deploy trigger a service to run your tests against that 'dev' environment.
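If you do explore the container route, the build step could look roughly like this (a sketch only: the image name, the port, and a {{baseUrl}} variable in the collection are assumptions, not part of the question):

    REM build and start the API in a throwaway container on the agent
    docker build -t myapi:%build.number% .
    docker run -d --rm -p 5000:80 --name myapi-test myapi:%build.number%

    REM point the collection at the local container instead of a shared environment
    newman run collection.json -e environment.json --env-var "baseUrl=http://localhost:5000"

    REM tear down (in a real setup, guard this so it runs even when tests fail)
    docker stop myapi-test

Newman's --env-var flag overrides a single environment value on the command line, so the same collection file can target either the shared test environment or the local container.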

How to set up an Appium UI test Maven project to work with GitLab CI to test an Android app?

I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, which I can run locally as UI tests. My next step is to hook my tests into the GitLab repo (which already exists, created by the Android developers), but I am stuck here. Could somebody help me out?
Please try to be specific:
1. How should I set up the .gitlab-ci.yml?
2. Can we just have the script in the yaml download Appium and Maven? Or could we just download Appium, but import all the Appium java-client jars into libs in main? If either of the above is true, how? If neither, what should I do, and how?
3. Where should I put my tests in that GitLab repo? Or do I not have to put my tests in the existing repo, and could instead have another one and tell the yaml where to reach it? Again, how?
4. It would be helpful if you could walk me through the workflow: when the developers check in code, GitLab reads the yaml, then builds, then finds my test suites where (question 3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with the app installed either in a simulator or on a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing; you are testing exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and an Appium server. What we need to do is add another stage in .gitlab-ci.yml that tells it to:
take the newly compiled app
install the app on a simulator/real device
compile your test suite and run it.
To make things easier to understand, we start with question 4, the workflow:
When code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage, it runs the automation test. Note that it runs on your server, which means you need Appium installed, up and running, on that server when the automation test suite executes.
Now the problem is whether your server is capable of that. If you want to run the automation tests on your own server, you need to install Appium on it, probably a simulator too (which might require the server to have a GPU), and so on; these are the concerns of maintaining a server. The alternative is a third-party service, which is what I did. It turned out our server (when I was at that company) wasn't capable of running automation UI tests, so we turned to AWS Device Farm; there are many other service providers you could choose from, see this link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional test stage; it grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and reports the result when the run is finished.
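For illustration, here is a rough sketch of what such a script could look like with AWS Device Farm (the ARNs, file names, and the JUnit test-package type are assumptions; it uses boto3's Device Farm client):

    # python_ci.py (sketch)
    import time
    import boto3
    import requests

    df = boto3.client('devicefarm', region_name='us-west-2')  # Device Farm's API lives in us-west-2
    PROJECT_ARN = 'arn:aws:devicefarm:...'      # from your Device Farm project (assumed)
    DEVICE_POOL_ARN = 'arn:aws:devicefarm:...'  # the devices to run on (assumed)

    def upload(path, upload_type):
        # register the upload, PUT the file to the pre-signed URL, wait until processed
        res = df.create_upload(projectArn=PROJECT_ARN, name=path, type=upload_type)
        with open(path, 'rb') as f:
            requests.put(res['upload']['url'], data=f)
        arn = res['upload']['arn']
        while df.get_upload(arn=arn)['upload']['status'] != 'SUCCEEDED':
            time.sleep(5)
        return arn

    app_arn = upload('app-debug.apk', 'ANDROID_APP')
    tests_arn = upload('zip-with-dependencies.zip', 'APPIUM_JAVA_JUNIT_TEST_PACKAGE')

    run_arn = df.schedule_run(
        projectArn=PROJECT_ARN,
        appArn=app_arn,
        devicePoolArn=DEVICE_POOL_ARN,
        name='gitlab-ci run',
        test={'type': 'APPIUM_JAVA_JUNIT', 'testPackageArn': tests_arn},
    )['run']['arn']

    # poll until the run finishes, then report the result (PASSED/FAILED/...)
    while True:
        run = df.get_run(arn=run_arn)['run']
        if run['status'] == 'COMPLETED':
            print('Run result:', run['result'])
            break
        time.sleep(30)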
So, to answer question 1:
We need to create one more stage for our functional test in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. Then you script the necessary commands in your stage, or, if that gets too lengthy, put your script in another file in your repo and execute it from the stage. In my case, I put my script in python_ci.py and execute it in my stage with "python python_ci.py". (Here you need a Docker image with these requirements; see below too.)
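A minimal sketch of that stage, reusing the names above (the build stage name and the Docker image are assumptions):

    # .gitlab-ci.yml (sketch)
    stages:
      - build_project
      - functionalTest_project

    functional_test:
      stage: functionalTest_project
      image: python:3.9          # assumed; anything with Python (and Maven, if you package the suite here) works
      script:
        # the script sits in the repo next to the Android project, so a relative path is enough
        - python python_ci.py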
You don't download Appium; you set up Appium on your server, or, if you use a cloud service, that service sets up Appium for you.
What I did was use Maven to build and package the test suite locally and then push it to the GitLab repo; I now believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. This comes back to the first point of question 2, how to get Maven: my understanding is that it's a dependency of the server, like Python, so both can be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
Answer to question 3:
Put it in the same repo, but outside the Android project (i.e., they will live under the same directory).
How do you tell the yml where to reach the test suite? Remember they are on the same server, so you can use a relative path in your yml script to tell it where to get your test suite.
Hope this helps!

Jenkins - Build, Deploy and Promote

Recently, I started learning how to use Jenkins CI, so I am a little bit of a noob at Jenkins. I am about to try to do the following:
I have set up a Maven multi-module job on Jenkins which builds, tests, and finally creates 4 separate WAR applications. I archive the WAR artifacts as part of this job. These WAR files will only ever be built once; they contain properties for multiple environments, and the WAR file, along with each environment's server, will manage the profile it runs in, e.g. dev, test, staging, prod, etc.
I have another job on Jenkins which deals with deployment to multiple environments.
This second job uses the Copy Artifact plugin and a post-build action to deploy to a dev environment.
The job in step 2 will hopefully be able to have multiple promotions, allowing deployment to multiple environments: test/staging/performance/production, etc.
I have searched Stack Overflow and Google, and all the posts I see use the parameterized build plugin, specifying a parameter for the environment. This means there is a separate build for each environment, which I don't like.
Can anyone tell me if this is the right way to go? Or direct me to some tutorial on how to do this properly.
Looks like what you need is a matrix-project build.
P.S.
A good introduction to Jenkins can be found in Jenkins: The Definitive Guide
After playing around with the Jenkins configuration, I have this working very nicely now.
In the deployment job, I had initially missed the "Add another promotion process" button, which allows me to promote the same build to multiple environments, manually or automatically.

Maven, switching to a different profile

I have a problem with the proper Maven profile configuration of a project that is deployed to a continuous integration server.
In my project, there are some resources that need to be included only during tests in the daily build phase and others that need to be included during nightly builds, and they can never both be included at the same time, because the build process would fail. I can achieve this locally by activating one profile at a time.
The continuous integration server runs the following Maven commands:
- during daily builds:
mvn clean package -Pci -Dci
- during nightly builds:
mvn clean install -Dmaven.test.failure.ignore -Pci,nightly -Dci -Dnightly
As you can see, the nightly build command includes the Maven variables and profiles defined in the daily build command, which makes trouble for me, because I want only one profile activated at a time.
Specifically, what I want is to have 3 separate profiles:
- my-profile (activated by default, not used on the CI server)
- ci-profile (activated only on daily builds, used on the CI server)
- nightly-profile (activated only on nightly builds, used on the CI server)
How can I achieve that? I have tried almost everything. Reconfiguring the CI server is not an option.
When I have to configure the same build with different profiles, using Jenkins as a CI, I usually create as many builds as there are profiles, so each build uses the correct configuration.
If adding a new build is not an option, you can probably create a workaround using something like the exec plugin (http://mojo.codehaus.org/exec-maven-plugin/) to download the resources from an FTP server (or something else). You would also have to create a cron job (or equivalent) to swap in the correct resources between the builds: in the evening you put there the resources for the night, in the morning the ones for the day. But considering how cumbersome this process would be, it is probably better to try to add a new build.
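Another angle worth sketching, separate from the suggestions above: since the nightly command activates both -Pci and -Pnightly, the two profiles can be made to cooperate instead of conflict by having each profile set the same property and letting the nightly value take over. A rough pom.xml sketch, assuming the conflicting resources can be isolated into per-profile directories (the paths are illustrative, and the profile ids must match the -Pci,nightly flags the server passes):

    <!-- each profile only picks a resource directory; the build section consumes it -->
    <profiles>
      <profile>
        <id>my-profile</id>
        <activation>
          <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
          <test.resources.dir>src/test/resources-local</test.resources.dir>
        </properties>
      </profile>
      <profile>
        <id>ci</id>
        <properties>
          <test.resources.dir>src/test/resources-daily</test.resources.dir>
        </properties>
      </profile>
      <profile>
        <!-- declared after "ci", so its value should win when -Pci,nightly activates both -->
        <id>nightly</id>
        <properties>
          <test.resources.dir>src/test/resources-nightly</test.resources.dir>
        </properties>
      </profile>
    </profiles>
    <build>
      <testResources>
        <testResource>
          <directory>${test.resources.dir}</directory>
        </testResource>
      </testResources>
    </build>

This keeps a single build per CI command while ensuring only one set of resources is ever on the test classpath.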

Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments ...
MsBuild Version .NET 4.0
MsBuild Build File ./WebProjectFolder/WebProject.csproj
Command Line Arguments /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True
We need automated tests over our code that run when we build on our own machines (possibly as a post-build event), but that also run when Jenkins does a build for that project.
If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build-and-package process.
Options ...
1. Create two Jenkins jobs: one to run the tests; if the tests pass, another build is triggered which builds and packages the web project. Put the post-build event on the test project.
2. Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects to run the NUnit console and execute the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
3. Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.
Problems ...
1. There are two jobs, not one: two things to debug instead of one, one to see if the tests passed and one to build and compile the web project. The tests could pass but the build could still fail on something that isn't used by what you're testing ...
2. This requires us to know exactly what goes into the build. Right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about the possibly brittle command-line statements.
3. This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things as a normal build, move all the correct files in the same way, into the same directories, etc.?
The initial problem:
We want to run our tests whenever our main project is built. But adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on ... but that's enough ...
We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...
In Jenkins/Hudson, it's quite OK to have many jobs: some for compilation, triggered by version control changes; some for running (unit) tests, triggered by successful builds; some for more tests (integration), triggered by successful earlier tests; and some for deploying, triggered by successfully passing all tests.
Look at plugins like Join, Build Pipeline, Parameterized Trigger, and more to help with this.
This will also allow things to happen in parallel by using multiple nodes. Trying to cram everything into one job is not the way to go.
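As a concrete illustration of the test half of such a chain, the test job's build step might run something like this (a sketch; the solution name, paths, and the NUnit 2.x console runner are assumptions):

    REM build the whole solution so the test project compiles alongside the web project
    msbuild WebProject.sln /target:Rebuild /p:Configuration=Release

    REM run the unit tests; a non-zero exit code marks the Jenkins build as failed
    nunit-console.exe WebProject.Tests\bin\Release\WebProject.Tests.dll /xml=nunit-result.xml

The /xml result file can then be published with the Jenkins NUnit plugin so test results show up on the job page, and a successful run can trigger the downstream package-and-deploy job.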
