Run newman on the local build instead of deploying to a test environment using TeamCity

I am looking to be able to run my postman scripts using newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the Postman scripts on that particular build, so that a potentially broken build isn't deployed to an environment used by other developers.
My current build chain in TeamCity is:
Build main project (contains the REST API and all required code)
Run Postman scripts using Newman on that project
I have the collection and environment file, along with the CLI command to call it. When I try to point the environment file at a local build, it does not work.
I am thinking of running an IIS Express server on the agent and then running the tests against that active port, but I have been unsuccessful so far.
Any ideas on how to approach this would be appreciated!
I have looked at How do I integrate my Postman Integration Tests with TeamCity and this uses a test environment, which is not what I am after.
I looked at https://ie.com.au/a-how-set-up-automated-api-testing and this was helpful, but I think it is still reliant on setting up a test environment.

TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build, and to do that you'll need an environment: something has to be running your project before you can query against it.
The only potential path you might look at is containerizing your project, in Docker or something similar, then running your image after it's built and querying against that. However, this isn't a great practice and it bloats the build time.
A better practice would be: build your project > deploy it to a test environment (set up a separate 'test' or 'dev' environment that is OK to break) > after the deploy, trigger a service to run your tests against 'dev'.
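If you do try the container route, a rough sketch of a command-line build step might look like this. The image name, port mapping, and the baseUrl variable are all assumptions about your collection, not a known-good recipe:

# build the freshly compiled project into an image and start it on the agent
docker build -t myapi:test .
docker run -d --name myapi-under-test -p 8080:80 myapi:test

# give the container a moment to start listening
sleep 5

# run the Postman collection against the local container instead of a deployed environment
newman run collection.json -e local.postman_environment.json --env-var "baseUrl=http://localhost:8080"

# tear down so the agent stays clean
docker stop myapi-under-test && docker rm myapi-under-test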

Related

Jenkins + Docker Compose + Integration Tests

I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run the integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were sometimes cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Add references in the Docker Compose file (with creation of the DBs in Docker)
Add a step in Jenkins that will run the integration tests
I have no long experience with containerization, so I cannot predict what problems may appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done more simply?
I suppose the Visual Studio test runner won't be able to get results from the Docker images. Am I right?
It looks like development of the tests will be more difficult, because the tests will run in Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project, no; in a big project with multiple microservices and many devs, sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes; just have a test DB running somewhere, or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no Visual Studio expert, though.
The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
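To make the plan above concrete, here is a minimal sketch of what such a Compose file could look like. Every name here (services, image, Dockerfile path, connection string) is an assumption, not something from the question:

# docker-compose.tests.yml (all names hypothetical)
services:
  testdb:
    image: postgres:15               # throwaway DB per build, so builds can't clobber each other's data
    environment:
      POSTGRES_PASSWORD: test
  integration-tests:
    build:
      context: .
      dockerfile: tests/Dockerfile   # the per-test-project Dockerfile from the plan
    depends_on:
      - testdb
    environment:
      ConnectionStrings__Default: "Host=testdb;Username=postgres;Password=test"

The Jenkins step then boils down to docker compose -f docker-compose.tests.yml run --rm integration-tests, followed by docker compose -f docker-compose.tests.yml down -v to discard the test data.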

How to configure ASP.NET Core with Docker using VSTS to build, run unit tests, and deploy to Azure with environment variables

I'm trying to do something that I thought would be incredibly trivial, but apparently it needs to be hard. And yes, there are bits and pieces throughout Stack Overflow, but they're either out of date or don't actually work.
I've got an ASP.NET Core site that I've dockerized with the Add > Docker Support (Linux) command.
In VSTS I can build the image and publish it with 2 docker-compose items.
And then I can release the image with the release management.
What I can't figure out how to do:
run dotnet test on my image and report the results to VSTS
Set up environment variables on the Azure App Service container that get properly passed into the image when it's run.
On #1, I cannot find any up-to-date documentation on how to set things up so that, while developing, unit tests don't run unless specifically requested (and if I tell Visual Studio to run the tests, they should run in the Docker image!). I can get them to run always, but that's a waste of time while developing if they run every time you start debugging!
And I cannot figure out how to use either docker-compose or the new VS 2017 15.8 way with plain docker run commands to run the tests. It seems to me that I would need a new Dockerfile just for the tests, and have it generate and then discard the image that was created. But I can't figure out how to do this, or even whether this is the right way.
How should this be set up to run unit tests? (I've gone through 5 pages of Google search results and none of them work right.)
On #2, setting an application setting in the App Service does not pass the value to docker run. I've tried everything and they never get passed. How do you set environment variables on Azure so that the run command gets the right -e parameters?
For #1 you could use the dotnet test command. This will generate a .trx file that VSTS can pick up and render as a nice test report. You just need to set up the “Publish Test Results” task.
dotnet test --logger trx --results-directory /var/temp
For more details, please take a look at this blog: Running your unit tests with Visual Studio Team Services and Docker Compose
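As a rough sketch of how that can hang together (the compose file, service name, and paths here are assumptions, not necessarily the blog's exact setup): run the tests in a dedicated container, then copy the .trx file out so the “Publish Test Results” task can pick it up.

# run the test project inside a named, throwaway container
docker compose -f docker-compose.test.yml run --name unit-tests tests dotnet test --logger trx --results-directory /var/temp

# copy the .trx results out of the stopped container, then clean up
docker cp unit-tests:/var/temp ./TestResults
docker rm unit-tests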
For #2, I don't totally get your point. If you want to override environment variable values on VSTS and use those values on the Azure App Service container, please try the PowerShell-script solution here: How to override values of environment variables on VSTS tasks
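For what it's worth, App Service (Linux) is documented to inject its application settings into the container as -e variables when it starts the image, so setting them on the App Service itself should be the mechanism. A sketch with the Azure CLI (resource names are placeholders):

# each app setting becomes an -e variable on the container at startup
az webapp config appsettings set --resource-group my-rg --name my-app --settings ASPNETCORE_ENVIRONMENT=Staging MySetting=somevalue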
Besides that, I suggest you also go through this blog, which shows how to do Docker deployment to Azure App Service (Linux) using VSTS, covering both CI and CD. It may be helpful to you.

How to set up an Appium UI test Maven project to work with GitLab CI to test an Android app?

I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, and I can run the UI tests locally. My next step is to hook my tests into the GitLab repo (which is already there, created by the Android developers), but I am stuck. Could somebody help me out?
Please try to be specific:
How should I set up the .gitlab-ci.yml? Can we just have the script in the YAML download Appium and Maven? Or could we just download Appium and import all the Appium java-client JARs into libs in main? If either of the above is true, how? If neither, what should I do, and how?
Where should I put my tests in that repo? Or do I not have to put my tests in the existing repo; could I instead have another one and tell the YAML where to reach it? Again, how?
It would be helpful if you could walk me through the workflow. Like, when the developers check in code, GitLab reads the YAML, then builds, then finds my test suites where (Q3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with the app installed either on a simulator or on a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed one of the advantages of Appium: you don't need to change a thing about the app you are testing; you test exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and an Appium server. What we need to do is add another stage in .gitlab-ci.yml and tell it to:
take the newly compiled app
install the app on a simulator/real device
compile your test suite and run it
To make things easier to understand, we start with question 4, workflow:
So when code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage, it runs the automation tests. Note that it runs on your server, which means you need Appium installed and running on that server when the automation test suite executes. Now the question is whether your server is capable of that. To run the automation tests on your own server, you need to install Appium, probably a simulator (which might require the server to have a GPU), etc.; these are the costs of maintaining the server yourself. The alternative is a third-party service, which is what I did. It turned out our server (when I was at that company) wasn't capable of running automation UI tests, so we turned to AWS Device Farm. There are many other service providers you could choose; see this link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional test stage; it grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for the functional tests in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. You then script the necessary commands in your stage, or if they're too lengthy, put your script in another file (in your repo) and execute it. In my case, I put my script in python_ci.py and execute it in my stage using “python python_ci.py”. (Here you need a Docker image with these requirements; see below.)
You don't download Appium; you set up Appium on your server, or if you use a cloud service, that service should set up Appium for you.
What I did was build and package the test suite locally with Maven and push it to the GitLab repo, though now I believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. This comes back to the first point of question 1, how to get Maven: my understanding is that it's a dependency of the server, like Python, so both can be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
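A minimal sketch of that stage in .gitlab-ci.yml (the image name and Maven flags are assumptions; python_ci.py is the upload/schedule script described above):

stages:
  - build
  - functionalTest_project

functionalTest_project:
  stage: functionalTest_project
  image: an-image-with-python-and-maven   # hypothetical image carrying both dependencies
  script:
    - mvn -q package -DskipTests          # compile and package the test suite
    - python python_ci.py                 # upload app + tests to the device farm and schedule a run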
Answer to question 3:
Put it in the same repo, but outside the Android project (i.e., they will be under the same directory).
How do you tell the YAML where to reach the test suite? Remember they are on the same server, so you can use a relative path in your YAML script to point to your test suite.
Hope this helps!

Using TeamCity how can I deploy to an environment then run tests against that environment?

I am struggling to get my head around this!
I wish to have TeamCity deploy our Windows service to a particular environment, then have a separate project run acceptance tests against that environment.
Currently I have a project that builds then runs unit tests, and finally packages up the deployable elements.
A second project takes the package (artefact dependency) and deploys it to the environment.
Now I wish to run acceptance tests against that deployment. The tests are not in the deployable package, so I must return to the "build" project... I thought I could use a snapshot dependency to use the already-compiled files (I don't want to check out or re-compile anything).
However, I just get an empty folder on the agent when I hit 'Run' on this project.
I must have misunderstood how this works!
Are there any blog posts to help elucidate this?
The tests are SpecFlow/NUnit tests.
Please ask for more info if I have not been clear!
You might want to set up the tests as an artifact of the build project, then deploy the tests to the deployment environment.
Then run a separate TeamCity agent on the deployment environment to actually execute the tests on that environment.
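As a sketch of that last step (the runner, DLL name, and the BaseUrl parameter are all assumptions; SpecFlow scenarios execute through the NUnit runner), the agent on that environment could run something like:

# run the packaged acceptance tests against the deployed service
nunit3-console.exe AcceptanceTests.dll --params=BaseUrl=http://your-test-environment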

TeamCity build for live after successful CI build?

I have an environmental build system which currently has the following environments:
dev
ci
uat
live
Just to be clear, when I say environmental build I mean there is a set of properties files for each environment, and during the build these properties are used to template project files; a database server may be "localhost" in the dev environment but "12.34.56.78" on CI. So when starting the build you can give it an environment property and it will build for something other than dev (the default environment).
Now the CI build works fine and spits out the artifacts correctly. However, as the build is CI, all of it is configured for that environment, and I am looking at being able to trigger a build for live or uat when a CI build succeeds. This would run the same build but with a different build argument.
Now I noticed there are a few mechanisms for this. One seems to be an automatic trigger on completion, which could trigger another build, but this seems to require two separate build configurations that are essentially identical other than the build argument being "environment=live" rather than "environment=ci". Then there is adding another build step, which would be the same as the first but take a different argument and output the live artifacts elsewhere; but this would always happen, much like the first option.
The final option I could see was to trigger a manual build once I have a live candidate, but it is unclear how to set a build argument. I could create a build parameter, however it doesn't seem to get pulled into the build script the way a command-line build argument would.
I will see if there is a better answer, but after writing this I found that using build parameters seems the best option. A parameter can then be embedded anywhere within your build configuration using %environment% (or %your_parameter_here%).
This can then be set up to create a form element for manual builds, so you can easily create a build for a different environment.
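For illustration, a command-line build step can forward the parameter straight to the build script; TeamCity substitutes %environment% before the step runs (the script name and flag here are assumptions):

# %environment% is replaced with the value from the manual-build form, or the default
./build.sh --environment=%environment%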
