Currently trying to run e2e tests for an application in an Nx monorepo.
We use a shared plugins file and support file, pulling in a config file depending on the environment and setting the baseUrl from that environment file. The first stage of the tests runs (it talks to an API to create a user), but the run crashes when the actual test tries to access the main site.
In this case, that is a development URL. The aim is to allow testing against a local environment once it has been created with nx application and then @nrwl/cypress:cypress.
I changed it over to a Node command that runs cypress open --project with a different env file per environment, for each project covering a different area of the site; a sketch of the invocation is below.
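Roughly, the invocation looks like this (the project path and the configFile env key are placeholders, not my actual setup):

npx cypress open --project apps/my-app-e2e --env configFile=dev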
Now it appears that it either fails trying to mkdir at a different hard-disk location, or it runs partway through the test, then closes the run and reports that there are no tests when different projects are selected!
Any help would be much appreciated.
I am trying to use SpecFlow with Playwright in order to do BDD on a portal app we developed, but I am facing a small problem.
The SpecFlow project is separate from the ASP.NET Core project that serves the API of the portal app (the front end is in Vue). Since the tests point to a specific URL (currently localhost), I need to run the ASP.NET Core & Vue project locally before running the tests; otherwise SpecFlow & Playwright cannot run them (nothing is listening on localhost).
Is there any way I can force the web server project to run? I tried running it from outside Visual Studio with the dotnet build and dotnet run commands, but somehow they are missing parameters (which exist when running from inside VS), and apart from that, these commands would somehow have to be triggered when the tests start.
I have seen solutions like creating a Docker image from a Docker Compose file in order to pack the .NET project & server into it before running the SpecFlow tests, then spinning up the server in the BeforeTestRun hook using FluentDocker, but I am not quite sure that is the easiest (or best) solution.
Does anyone know how I can trigger running the .NET Core project (with the Vue pages)?
This is actually a pretty big question with a pretty big answer; however, this is well-trodden ground. The issue isn't so much a SpecFlow issue as a general automated-testing issue, and development practices like continuous integration and continuous delivery can help. Each of those topics is too big for a single answer, but I can address this in more general terms.
In its simplest form, running automated tests locally involves these steps:
Build the application
Deploy the application to a real web server
Run tests
I'm going to assume you are developing in a Windows environment; however, every operating system has some sort of command-line scripting solution available. The scripting language might change, but the overall idea will not.
Configure a web server. In Windows, this would be Internet Information Services (IIS).
Add a new "application" (or "IIS app" as some people call it) to your localhost web server. Point the physical directory to the root directory for the web project. Repeat this for each web site or web app your system requires.
Write a PowerShell script that gives you an easy way to build and deploy the applications to your local web server.
This script should use publish profiles set up in Visual Studio, which also lets you publish directly from Visual Studio before invoking tests manually through Test Explorer.
Write another PowerShell script that acts as a "harness" script to coordinate building, deploying locally, and then invoking dotnet test; a sketch follows.
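A minimal sketch of such a harness, assuming SpecFlow tags map to test categories (the project path and the LocalIIS publish profile name are hypothetical):

# Scripts\Run-Tests.ps1
param(
    [string]   $solutionDir = ".",
    [string[]] $tags = @(),
    [bool]     $deploy = $true
)

# Build everything first
dotnet build $solutionDir

if ($deploy) {
    # Publish each web app to local IIS via its Visual Studio publish profile
    dotnet publish (Join-Path $solutionDir "src\MyWebApp") /p:PublishProfile=LocalIIS
}

# Translate tags into a dotnet test category filter, e.g. Category=BlogPosts|Category=Create
if ($tags.Count -gt 0) {
    $filter = ($tags | ForEach-Object { "Category=$_" }) -join "|"
    dotnet test $solutionDir --filter $filter
} else {
    dotnet test $solutionDir
}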
Running tests locally just requires a single line of PowerShell to invoke your test harness script:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create
# Skip deploying in case web apps haven't changed:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create -deploy:$false
I'm trying to do something I thought would be incredibly trivial, but apparently it has to be hard. And yes, there are bits and pieces throughout Stack Overflow, but they're either out of date or don't actually work.
I've got an ASP.NET Core site that I've dockerized with the Add > Docker Support (Linux) command.
In VSTS I can build the image and publish it with 2 docker-compose items.
And then I can release the image with the release management.
What I can't figure out how to do:
1. Run dotnet test on my image and report the results to VSTS.
2. Set up environment variables on an Azure App Service container so they get properly passed into the image when it's run.
On #1, I cannot find any up-to-date documentation on how to set things up so that, while developing, unit tests don't run unless specifically requested (and if I tell Visual Studio to run the tests, they should run in the Docker image!). I can get them to run every time, but that's a waste of time while developing if they run every time you start debugging.
And I cannot figure out how to use either docker-compose or the new Visual Studio 2017 15.8 approach with plain docker run commands to run the tests. It seems to me that I would need a separate Dockerfile just for the tests, and I would have to build the image, run it, and then discard it. But I can't figure out how to do this, or even whether this is the right way.
How should this be set up to do unit tests? (I've gone through 5 pages of Google search results and none of them work right.)
On #2, setting an application setting in the App Service does not pass the value to docker run. I've tried everything and they never get passed. How do you pass environment variables on Azure so that the run command gets the right -e parameters?
For #1, you could use the dotnet test command with a logger. This will generate a .trx file that VSTS can pick up and render as a nice test report. You just need to set up the “Publish Test Results” task.
dotnet test --logger trx --results-directory /var/temp
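To run that inside a container during the build, one option is a throwaway test image, roughly like this (the Dockerfile name, image tag, and mounted path are illustrative, not from your setup):

# Build an image that contains the test projects
docker build -f Dockerfile.test -t myapp-tests .

# Run the tests, mounting a host folder so the .trx results survive the container
docker run --rm -v "$(pwd)/TestResults:/var/temp" myapp-tests dotnet test --logger trx --results-directory /var/temp

Then point the “Publish Test Results” task at the TestResults folder.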
For more details, take a look at this blog: Running your unit tests with Visual Studio Team Services and Docker Compose.
For #2, I don't totally get your point, but if you want to override environment-variable values on VSTS and use them on an Azure App Service container, please try the PowerShell-script solution described here: How to override values of environment variables on VSTS tasks.
Besides that, I suggest you also go through this blog, which shows how to do Docker deployment to Azure App Service (Linux) using VSTS, including both CI and CD. It may be helpful to you.
I am looking to run my Postman scripts using Newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the Postman scripts against that particular build, so it isn't deployed to an environment used by other developers, where it could potentially break things.
My current build chain in TeamCity is:
Build main project (contains the REST Api and all required code)
Run Postman scripts using Newman on that project
I have the collection and environment file, along with the CLI command to call them. When I try to point the environment at a local build, it does not work.
I am thinking of running an IIS Express server on the agent and then running the tests against that active port, but I have been unsuccessful so far; a sketch of the idea is below.
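Roughly, the idea looks like this on the agent (the paths, port, and file names are placeholders):

REM Start the freshly built API in IIS Express
start "" "%ProgramFiles%\IIS Express\iisexpress.exe" /path:C:\BuildOutput\MyApi /port:8080

REM Run the collection with an environment file whose base URL points at http://localhost:8080
newman run MyApi.postman_collection.json -e local.postman_environment.json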
Any ideas on how to approach this would be appreciated!
I have looked at "How do I integrate my Postman Integration Tests with TeamCity", and it uses a test environment, which is not what I am after.
I looked at https://ie.com.au/a-how-set-up-automated-api-testing and it was helpful, but I think it is still reliant on setting up a test environment.
TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build, and in order to do that you'll need an environment: something has to run your project so you can query against it.
The only potential path worth looking at is containerizing your project (in Docker or something similar), then running the image after it's built and querying against that, along the lines of the sketch below. However, this isn't a great practice and it bloats the build time.
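A rough sketch of that approach, assuming the collection reads a {{baseUrl}} variable (the image and container names, port, and collection file are placeholders):

# Build and start the API in a throwaway container
docker build -t myapi .
docker run -d --rm -p 8080:80 --name myapi-under-test myapi

# Query against it, then tear it down
newman run MyApi.postman_collection.json --env-var baseUrl=http://localhost:8080
docker stop myapi-under-test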
A better practice would be: build your project, then deploy it to a test environment (set up a separate 'test' or 'dev' environment that it is OK to break), then after the deploy trigger a step that runs your tests against that 'dev' environment.
I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, and I can run the UI tests locally. My next step is to hook my tests into the GitLab repo (which already exists, created by the Android developers), but I am stuck here. Could somebody help me out?
Please try to be specific:
1. How should I set up the .gitlab-ci.yml?
2. Can we just have a script in the YAML that downloads Appium and Maven? Or could we just download Appium but import all the Appium java-client JARs into libs in main? If either of the above is true, how? If neither, what should I do, and how?
3. Where should I put my tests in that GitLab repo? Or do I not have to put my tests in the existing repo, and could instead have another one and tell the YAML where to reach it? Again, how?
4. It would be helpful if you could walk me through the workflow: e.g. when developers check in code, GitLab reads the YAML, then builds, then finds my test suites somewhere (see question 3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with your app installed on either a simulator or a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing: you test exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and an Appium server. What we need to do is add another stage to .gitlab-ci.yml that tells it to:
take the newly compiled app
install the app on a simulator/real device
compile your test suite and run it
To make things easier to understand, we start with question 4, workflow:
When code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml; when it reaches your stage, it runs the automation tests. Note that this runs on your server, which means you need Appium installed on that server, up and running, when the automation test suite executes. The question then is whether your server is capable of this: to run the automation tests yourself you need to install Appium, probably a simulator (which may require the server to have a GPU), and so on; these are server-maintenance concerns. The alternative is a third-party service, which is what I did: it turned out our server (when I was at that company) wasn't capable of running automated UI tests, so we turned to AWS Device Farm. There are many other service providers you could choose from; see this link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional-test stage; it grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for the functional test in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. Then you script the necessary commands in your stage, or if that gets too lengthy, put the script in another file in your repo and execute it from the stage. In my case, I put my script in python_ci.py and execute it in my stage with “python python_ci.py”. (Here you need a Docker image with the right dependencies; see below.)
You don't download Appium: you set up Appium on your server, or if you use a cloud service, that service sets up Appium for you.
What I did was build and package the test suite locally with Maven and then push it to the GitLab repo; I now believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. This comes back to the first point of question 1, how to get Maven: my understanding is that it's a dependency of the runner environment, like Python, so both can be obtained by telling GitLab to execute your stage with a Docker image that has Python and Maven installed; a sketch is below.
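A minimal sketch of such a stage (the image name, job names, and paths are illustrative, not from my actual setup):

stages:
  - build
  - functionalTest

functionalTest_project:
  stage: functionalTest
  # hypothetical image that ships with both Python and Maven
  image: registry.example.com/python-maven:latest
  script:
    # package the test suite, then hand off to the upload-and-run script
    - mvn -f automation-tests/pom.xml package -DskipTests
    - python python_ci.py
  dependencies:
    - build_app   # fetch the newly compiled APK from the build job's artifacts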
Answer to question 3:
Put it in the same repo, but outside the Android project (i.e. they will live under the same parent directory).
How do you tell the YAML where to reach the test suite? Remember they are on the same server, so you can use a relative path in your YAML script to tell it where to find the test suite.
Hope this helps!
I'm very new to Golang as well as Iris (a Go web framework). Right now I'm playing with them and trying to understand whether they fit my needs. As I understand it, after completing an Iris project, what we have is a bunch of .go files; we then compile them into one executable. How should we deploy this output? Simply put it somewhere in the file system and run it (probably as a service on Windows or a background job on Linux)? Is it that simple?
Go allows very simple deployment with a standalone binary you can push to all your servers without worrying about available libraries (a command sketch follows the list):
Compile your code for the targeted operating system
Push the executable to your server
Run it with whatever you want : service, supervisord...
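A minimal sketch of those three steps (the host names, paths, and binary name are placeholders):

# Cross-compile for a Linux server from any development machine
GOOS=linux GOARCH=amd64 go build -o myapp

# Push the executable to the server and run it
scp myapp user@server:/opt/myapp/
ssh user@server /opt/myapp/myapp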
A good read: Go in production.
It depends on your project's root layout; the build is a bit different. I prefer to structure my project like this:
AppName
  Models
  Services
    service1
    service2
  Helpers
  Controllers
  Web
    Views
    Assets
    Locals
    main.go
From the root directory, on Windows, for development you execute:
go run web\main.go
For production, compile the project (set GOOS=linux if your server is Linux):
set GOOS=linux
go build web\main.go
After that you will see a binary file. When you put the file on a Linux server, it's better to define a service for the binary so it is executed on startup and restarted on any error; a sketch is below.
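For example, a minimal systemd unit sketch (the unit name, paths, and user are illustrative):

# /etc/systemd/system/appname.service
[Unit]
Description=AppName web server
After=network.target

[Service]
ExecStart=/opt/appname/main
WorkingDirectory=/opt/appname
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target

Enable and start it with: systemctl enable --now appname.service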