How to configure ASP.NET Core with Docker using VSTS to build, run unit tests, and deploy to Azure with environment variables

I'm trying to do something I thought would be incredibly trivial, but apparently it has to be hard. And yes, there are bits and pieces throughout Stack Overflow, but they're either out of date or don't actually work.
I've got an ASP.NET Core site that I've dockerized with the Add > Docker Support (Linux) command.
In VSTS I can build the image and publish it using two Docker Compose tasks.
And then I can release the image with Release Management.
What I can't figure out how to do:
Run dotnet test against my image and report the results to VSTS.
Set up environment variables on the Azure App Service container so they are properly passed into the image when it's run.
On #1, I cannot find any up-to-date documentation on how to set things up so that, while developing, the unit tests don't run unless I specifically ask for them (and if I do tell Visual Studio to run tests, they should run in the Docker image!). I can get them to run every time, but that's a waste of time during development if they run on every debug session.
And I cannot figure out how to use either docker-compose or the new VS 2017 15.8 approach with plain docker run commands to run the tests. It seems to me that I would need a separate Dockerfile just for the tests, and have it build and then discard the image it creates, but I can't figure out how to do this, or even whether it's the right approach.
How should this be set up to run unit tests? (I've gone through five pages of Google search results and none of them work.)
On #2, setting an application setting in the App Service does not pass the value to docker run. I've tried everything and the values never get passed. How do you set environment variables on Azure so that the run command gets the right -e parameters?

For #1, you could use the dotnet test command. This will generate a .trx file that VSTS can pick up and render as a nice test report; you just need to set up the “Publish Test Results” task.
dotnet test --logger trx --results-directory /var/temp
For more details, please take a look at this blog post: Running your unit tests with Visual Studio Team Services and Docker Compose.
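In Compose terms this can be as small as one extra service whose only job is to run dotnet test; here is a rough sketch, assuming a dedicated test Dockerfile (the file, service and path names are placeholders, not the blog's exact setup):
# docker-compose.ci.test.yml (hypothetical names)
version: '3'
services:
  unit-tests:
    build:
      context: .
      dockerfile: MyApp.Tests/Dockerfile   # separate Dockerfile that builds only the test project
    entrypoint: ["dotnet", "test", "--logger", "trx", "--results-directory", "/var/temp"]
    volumes:
      - ./TestResults:/var/temp            # surfaces the .trx files to the build agent
Run it from a Docker Compose task with --build and --abort-on-container-exit, then point the “Publish Test Results” task at TestResults/*.trx. Because this compose file is separate from the one used for debugging, the tests only run when the pipeline (or you, explicitly) invokes it.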
For #2, I don't totally get your point, but if you want to override environment variable values in VSTS and use them on the Azure App Service container, please try this PowerShell-based solution: How to override values of environment variables on VSTS tasks.
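For what it's worth, on a Linux App Service the application settings should surface as environment variables inside the container (you can usually spot the corresponding -e arguments in the Docker log), and nested ASP.NET Core configuration keys need the double-underscore form. A sketch of the equivalent settings in a local compose override, with example service and key names only:
# docker-compose.override.yml (example names and keys only)
version: '3'
services:
  web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Staging
      - ConnectionStrings__DefaultConnection=Server=mydb;Database=mydb;User Id=sa;Password=example   # maps to ConnectionStrings:DefaultConnection
Keeping the App Service application setting names identical to these variable names is what lets the same image run in both places.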
Besides that, I suggest you also go through this blog post, which shows how to do Docker deployment to Azure App Service (Linux) using VSTS, including both CI and CD. It may be helpful to you.

Related

Specflow tests running on local web server

I am trying to use SpecFlow with Playwright in order to do BDD on a portal app we developed, but I am facing a small problem.
The SpecFlow project is separate from the ASP.NET Core project that hosts the API of the portal app (the front end is in Vue). Since the tests point at a specific URL (currently localhost), I need to run the ASP.NET Core & Vue project locally before running the tests; otherwise SpecFlow & Playwright will not be able to run the tests (as nothing will be listening on localhost).
Is there any way I can force the web server project to run? I tried running it from outside Visual Studio with dotnet build and then dotnet run, but somehow they are missing parameters (which exist when running from inside VS), and apart from that, these commands must somehow be triggered when the tests run.
I have seen solutions like creating a Docker image from a Docker Compose file in order to pack the .NET project & server before running the SpecFlow tests, and then using FluentDocker in the BeforeTestRun hook to spin up the server, but I am not quite sure that is the easiest (or best) solution.
Does anyone know how I can trigger running the .NET Core project (with the Vue pages)?
This is actually a pretty big question with a pretty big answer; however, this is well-trodden ground. The issue isn't so much a SpecFlow issue as a general automated-testing issue. Development practices like continuous integration and continuous delivery can help. Each one is too big for a single question, but I can answer this in more general terms.
In its simplest form, running automated tests locally involves these steps:
Build the application
Deploy the application to a real web server
Run tests
I'm going to assume you are developing in a Windows environment, however every operating system has some sort of command line scripting solution available. The scripting language might change, but the overall idea will not.
Configure a web server. In Windows, this would be Internet Information Services (IIS).
Add a new "application" (or "IIS app" as some people call it) to your localhost web server. Point the physical directory to the root directory for the web project. Repeat this for each web site or web app your system requires.
Write a PowerShell script that gives you an easy way to build and deploy the applications to your local web server.
This script should use publish profiles set up in Visual Studio, which allows you to publish directly from Visual Studio before invoking tests manually through Test Explorer.
Write a PowerShell script that acts as a "harness" script to coordinate building, deploying locally, and then invoking dotnet test.
Running tests locally just requires a single line of PowerShell to invoke your test harness script:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create
# Skip deploying in case web apps haven't changed:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create -deploy:False

Jenkins + Docker Compose + Integration Tests

I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run the integration tests from Jenkins (and Visual Studio) without using databases etc. on a physical server. In my previous project there were cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Reference them in a docker-compose file (along with the database containers created in Docker)
Add a step in Jenkins that runs the integration tests (rough compose sketch below)
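Roughly, I picture the compose file looking something like this (the image names, password and project paths are just placeholders):
# docker-compose.integration.yml (illustrative names only)
version: '3'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Your_password123
  integration-tests:
    build:
      context: .
      dockerfile: MyProject.IntegrationTests/Dockerfile
    depends_on:
      - db
    environment:
      - ConnectionStrings__Default=Server=db;Database=IntegrationTests;User Id=sa;Password=Your_password123
    entrypoint: ["dotnet", "test", "--logger", "trx"]
Jenkins would then run docker-compose -f docker-compose.integration.yml up --build --abort-on-container-exit --exit-code-from integration-tests, so each build gets its own fresh database container and the step fails when the tests fail.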
I don't have much experience with containerization, so I cannot predict what problems might appear.
The questions are:
1) Does it make any sense?
2) Is it possible?
3) Can it be done more simply?
4) I suppose that the Visual Studio test runner won't be able to get results from the Docker images. Am I right?
5) It looks like developing the tests will be more difficult, because the tests will run in Docker. Am I right?
Thanks for all your suggestions.
1) Depends very much on the details. In a small project, no; in a big project with multiple microservices and many devs, sure.
2) Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
3) Yes, just have a test DB running somewhere, or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
4) Normally it wouldn't even run on the same machine, so that could be tricky. I am no VS Code expert though.
5) The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).

Run newman on the local build instead of deploying to a test environment using TeamCity

I am looking to be able to run my postman scripts using newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the Postman scripts against that particular build, so it isn't deployed to an environment used by other developers, which it could potentially break.
My current build chain in TeamCity is:
Build main project (contains the REST Api and all required code)
Run Postman scripts using Newman on that project
I have the collection and the environment file, along with the CLI command to call them, but when I try to point the environment at a local build, it does not work.
I am thinking of running an IIS Express server on the agent and then, with that active port, running the tests, but I have been unsuccessful.
Any ideas on how to approach this would be appreciated!
I have looked at How do I integrate my Postman Integration Tests with TeamCity and this uses a test environment, which is not what I am after.
I looked at https://ie.com.au/a-how-set-up-automated-api-testing and this was helpful, but I think it is still reliant on setting up a test environment.
TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build; in order to do that, you'll need an environment, i.e. something running your project that you can query against.
The only potential path you might try looking at is containerizing your project, in Docker or something similar, then running the image after it's built and querying against that. However, this isn't a great practice and it bloats the build time.
A better practice would be: build your project > deploy it to a test environment (set up a separate 'test' or 'dev' environment that is OK to break) > after the deploy, trigger a service to run your tests against that 'dev' environment.
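If you do go down the container route despite that, one way to wire it up is a compose file where the API and Newman run side by side; this is only a sketch, and the image names, port and collection path are my assumptions:
# docker-compose.newman.yml (names and paths are placeholders)
version: '3'
services:
  api:
    build: .                      # builds the REST API image from your Dockerfile
  newman:
    image: postman/newman:alpine  # official Newman image
    depends_on:
      - api
    volumes:
      - ./postman:/etc/newman     # folder containing the collection and environment files
    command: run my-collection.json --environment local.postman_environment.json --env-var baseUrl=http://api:80
A TeamCity command-line step can then run docker-compose -f docker-compose.newman.yml up --build --abort-on-container-exit --exit-code-from newman, so the build fails when the collection fails.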

How to set up an Appium UI test Maven project to work with GitLab CI to test an Android app?

I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, and I can run the UI tests locally. My next step is to hook my tests up to the GitLab repo (which is already there, created by the Android developers), but I am stuck here. Could somebody help me out?
Please try to be specific:
How should I set up the .gitlab-ci.yml?
Can the YAML script just download Appium and Maven?
Or could we just download Appium, but import all the Appium java-client JARs into libs in main?
If either of the above is true, how? If neither, what should I do, and how?
Where should I put my tests in that GitLab repo? Or do I not need to put my tests in the existing repo at all; could I instead have another one and tell the YAML where to reach it? Again, how?
It would be helpful if you could walk me through the workflow: when the developers check in code, GitLab reads the YAML, then builds, then finds my test suites somewhere (question 3), then executes them, etc.
Many thanks in advance!
Since someone else is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with your app installed on either a simulator or a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing; you are testing exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automated tests, you need your test suite, the app, and an Appium server. What we need to do is add another stage in .gitlab-ci.yml and tell it to:
take the newly compiled app
install the app on a simulator or a real device
compile your test suite and run it
To make things easier to understand, let's start with question 4, the workflow:
When code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage it runs the automated tests. Note that this runs on your server, which means you need Appium installed on that server and up and running when the automated test suite runs. Now the question is: is your server capable of doing that? If you want to run the automated tests on your own server, you need to install Appium on it, probably a simulator (which might require the server to have a GPU), etc.; these are the concerns of maintaining a server. The alternative is to use a third-party service, which is what I did. It turned out our server (when I was at that company) wasn't capable of running automated UI tests, so we turned to AWS Device Farm. There are many other service providers you could choose; see this link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional test stage that grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for our functional test in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. Then you script the necessary commands in your stage, or, if it gets too lengthy, you put your script in another file (in your repo) and execute it. In my case, I put my script in python_ci.py and execute it in my stage with “python python_ci.py” (here you need a Docker image with these requirements; see below too).
You don't download Appium; you set up Appium on your server, or, if you use a cloud service, that service should set up Appium for you.
What I did was use Maven to build and package the test suite locally and then push it to the GitLab repo; now I believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. That brings us back to the first point of question 1, how to get Maven: my understanding is that it's a dependency of the server, like Python, so they could both be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
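For illustration only, such a stage could look roughly like this (the job name, the Docker image, the build job it depends on and the pom path are all my own placeholders):
# excerpt from .gitlab-ci.yml (illustrative names only)
stages:
  - build
  - functionalTest_project
functional_test:
  stage: functionalTest_project
  image: registry.example.com/python-maven:latest   # any image that has both Python and Maven available
  dependencies:
    - build_app                                     # whatever job already compiles the APK; its artifacts are picked up here
  script:
    - mvn -f tests/pom.xml clean package -DskipTests=true   # package the Appium test suite
    - python python_ci.py                                   # upload app + tests to the device farm and wait for the result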
Answer to question 3:
Put it in the same repo, but outside the Android project (i.e. they will be under the same parent directory).
How do you tell the YAML where to find the test suite? Remember they are on the same server, so you can use a relative path in your YAML script to tell it where to get your test suite.
Hope this helps!

Using MongoDB (in a container?) in Visual Studio Team Services pipelines

I have a node.js server that communicates with a MongoDB database. As part of the continuous-integration process I'd like to spin up a MongoDB database and run my tests against the server + DB.
With bitbucket pipelines I can spin up a container that has both node.js and MongoDB. I then run my tests against this setup.
What would be the best way to achieve this with Visual Studio Team Services? Some options that come to mind:
1) Hosted pipelines seem easiest but they don't have MongoDB on them. I could use Tool Installers, but there's no mention of a MongoDB installer, and in fact I don't see any tool installer in my list of available tasks. Also, it is mentioned that there is no admin access to the hosted pipeline machines and I believe MongoDB requires admin access. Lastly, downloading and installing Mongo takes quite a bit of time.
2) Set up my own private pipeline, i.e. a VM with Node + Mongo, and install the pipeline agent on it. Do I have to spin up a dedicated Azure instance for this? Will this instance be torn down and set up again on each test run, or will it remain up between test runs (meaning I have to take extra care to clean it up)?
3) Magically use a container in the pipeline through an option that I haven't yet discovered...?
I'd really like to use a container to run my tests because then I can use the same container locally during the development process, rather than having to maintain multiple environments. Can this be done?
So as it turns out, VSTS now has Docker support in its pipeline (when I wrote my question it was in beta and I didn't find it for whatever reason). It can be found at https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.docker.
This task allows you to spin up a container of your choice and run a single command in it. If the command is to run synchronously as part of the pipeline, then "Run in Background" needs to be unchecked (which will be the case for regular build commands, I guess). I ended up pushing a build script into my Git repository and running it in a container.
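For the container side, a minimal compose sketch that can be used both locally and from that Docker task (the image tags, the connection-string variable and the test command are assumptions):
# docker-compose.test.yml (tags and commands are placeholders)
version: '3'
services:
  mongo:
    image: mongo:3.6
  app-tests:
    image: node:8
    working_dir: /app
    volumes:
      - .:/app
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    depends_on:
      - mongo
    command: sh -c "npm install && npm test"
Running docker-compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from app-tests then gives the same Node + MongoDB environment on a dev machine and on the build agent.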
And re. my question in (2) above - machines in private pipelines aren't cleaned up between pipeline runs.
