I've created a batch script which runs tests automatically (I have a bin folder containing all tests and DLLs).
The whole software builds without problems on the CI.
The tests run locally without errors (running the batch script from my machine).
But if I run the tests with the GitLab Runner on the CI server, some tests fail.
Where is the difference? I'm new to GitLab CI and don't understand the difference right now.
I am working with Windows 7 :)
I'm happy with any suggestion.
Wish you a nice weekend :)
I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create an ad-hoc testing environment and run the integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were sometimes cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Reference them in the Docker Compose file (with the DBs created in Docker; see the sketch after this list)
Add a step in Jenkins that will run the integration tests
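To make the Compose part of the plan concrete, here is a minimal sketch of what such a file could look like. The file name, image tags, service names, password and connection-string key below are illustrative assumptions, not details from the original plan:

# docker-compose.tests.yml -- illustrative sketch; adjust names, images and ports to your project
version: "3.8"
services:
  testdb:
    image: mcr.microsoft.com/mssql/server:2019-latest   # throwaway DB, created fresh for every run
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"                    # placeholder password
  integration-tests:
    build:
      context: .
      dockerfile: tests/IntegrationTests/Dockerfile      # the per-test-project Dockerfile from step 1
    depends_on:
      - testdb
    environment:
      # hypothetical connection string the tests read instead of pointing at a shared physical server
      ConnectionStrings__Default: "Server=testdb;Database=IntegrationTests;User=sa;Password=Your_password123"

The Jenkins step would then run something like docker-compose -f docker-compose.tests.yml up --build --abort-on-container-exit and use the exit code of the test container as the build result. Because every build starts its own containers and its own DB, two parallel builds can no longer overwrite each other's test data.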
I don't have much experience with containerization, so I cannot predict what problems may appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done simpler?
I suppose that the Visual Studio test runner won't be able to get results from the Docker containers. Am I right?
It looks like developing the tests will be more difficult, because the tests will run inside Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project, no; in a big project with multiple microservices and many devs, sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere. Or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no VS Code expert though.
The goal of containers is to make it simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
I am looking to be able to run my Postman scripts using Newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the Postman scripts against that particular build, so a potentially broken build isn't deployed to an environment used by other developers.
My current build chain in TeamCity is:
Build main project (contains the REST Api and all required code)
Run Postman scripts using Newman on that project
I have the collection and environment file, along with the CLI command to call it. When I try to point the environment at a local build, it does not work.
I am thinking of running an IIS Express server on the agent and then, with that active port, running the tests, but I have been unsuccessful so far.
Any ideas on how to approach this would be appreciated!
I have looked at How do I integrate my Postman Integration Tests with TeamCity and this uses a test environment, which is not what I am after.
I looked at https://ie.com.au/a-how-set-up-automated-api-testing and this was helpful, but I think it is still reliant on setting up a test environment.
TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build; in order to do that, you'll need an environment. You need something to run your project so you can query against it.
The only potential path you might try looking at is containerizing your project, in Docker or something similar, then running your image after it's built and querying against that. However, this isn't a great practice and bloats the build time.
A good practice would be to build your project > deploy it to a test environment (set up a separate 'test' or 'dev' environment that is OK to break) > after the deploy, trigger a service to run your tests against that 'dev' environment.
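If you do go down the containerization route, a rough sketch of the idea could look like the Compose file below, with Newman run in its own container against the freshly built API. The file name, service names, port and collection/environment file names are illustrative assumptions, not something from the original question:

# docker-compose.api-tests.yml -- illustrative sketch only
version: "3.8"
services:
  api:
    build: .                                   # image containing the REST API from the main build
    expose:
      - "80"
  newman:
    image: postman/newman:alpine               # official Newman image; its working directory is /etc/newman
    depends_on:
      - api
    volumes:
      - ./postman:/etc/newman                  # folder holding the collection and environment files
    # --env-var points the collection at the api container (assumes the collection uses a {{baseUrl}} variable)
    command: ["run", "my_collection.json", "-e", "my_environment.json", "--env-var", "baseUrl=http://api:80"]

A TeamCity command-line build step could then run docker-compose -f docker-compose.api-tests.yml up --build --exit-code-from newman, so the Newman exit code passes or fails the build without touching the shared environments.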
I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests using the Appium java-client library, under the Eclipse IDE, which can run the UI tests locally. My next step is to hook my tests up to the GitLab repo (which already exists, created by the Android developers), but I am stuck here. Could somebody help me out?
Please try to be specific:
How should I set up the .gitlab-ci.yml?
Can we just have a script in the YAML to download Appium and Maven?
Or could we just download Appium, but import all the Appium java-client JARs into libs in main?
If either of the above is true, how? If neither, what should I do, and how?
Where should I put my tests in that GitLab repo? Or do I not have to put my tests in the existing repo, and could I instead have another one and tell the YAML where to reach it? Again, how?
It would be helpful if you could walk me through the workflow. Like: developers check in code, GitLab reads the YAML, then builds, then finds my test suites where (Q3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with your app installed on either a simulator or a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing; you are testing exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and an Appium server. What we need to do is add another stage in .gitlab-ci.yml and tell it to:
take the newly compiled app
install the app on a simulator/real device
compile your test suite and run it.
To make things easier to understand, we start with question 4, workflow:
So when the code is checked in to GitLab, the GitLab Runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage, it runs the automation tests. Note that this runs on your server, so you need to have Appium installed on that server and have it up and running when the automation test suite is executed. Now the question is: is your server capable of doing so? If you want to run the automation tests on your own server, you need to install Appium on it, probably a simulator (which might require the server to be equipped with a GPU), etc.; these are the concerns of maintaining a server. The alternative is to use a third-party service, which is what I did. It turned out our server (when I was at that company) wasn't capable of running automated UI tests, so we turned to AWS Device Farm (ADF); there are many other service providers you could choose from, see the link for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional test stage; it grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, schedules a run, and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for our functional test in .gitlab-ci.yml; in my case, I have a functionalTest_project stage after the stage that compiles the Android app. You then script the necessary commands in your stage or, if it is too lengthy, put your script in another file (in your repo) and execute it from the stage. In my case, I put my script in python_ci.py and execute it in my stage using "python python_ci.py" (here you need a Docker image with these requirements, see below).
You don't download Appium; you set up Appium on your server, or, if you use a cloud service, that service should set up Appium for you.
What I did was use Maven to build and package the test suite locally and then push it to the GitLab repo; I now believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. Now it comes back to the first point of question 1, how to get Maven: my understanding is that it is a dependency of the server, like Python, so both can be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
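For reference, a minimal sketch of what that stage could look like in .gitlab-ci.yml. The image name, job names, paths and the python_ci.py interface are assumptions for illustration; adjust them to your own project:

# .gitlab-ci.yml (excerpt) -- illustrative sketch only
stages:
  - build
  - functionalTest_project

functionalTest_project:
  stage: functionalTest_project
  image: my-registry/maven-python:latest        # hypothetical image that provides both Maven and Python
  dependencies:
    - build_app                                 # hypothetical build job that produces the APK artifact
  script:
    - mvn -f automation-tests/pom.xml package   # compile and package the Appium test suite
    - python python_ci.py                       # upload the app and test suite to AWS Device Farm and schedule a run

The automation-tests/ path is hypothetical; it assumes the test suite lives next to the Android project in the same repo, which is what the answer to question 3 below describes.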
Answer to question 3:
Put it in the same repo, but outside of the Android project (i.e. they will be under the same parent directory).
How do you tell the YAML to reach the test suite? Remember they are on the same server, so you can use a relative path in your YAML script to tell it where to get your test suite.
Hope this helps!
I am struggling to get my head around this!
I wish to have TeamCity deploy our Windows service to a particular environment, and then have a separate project run acceptance tests against that environment.
Currently I have a project that builds then runs unit tests, and finally packages up the deployable elements.
A second project takes the package (artefact dependency) and deploys to the environment.
Now I wish to run acceptance tests against that deployment. The tests are not in the deployable package so I must return to the "build" project... I thought I could use a Snapshot dependency to use the already compiled files (I don't want to checkout/re-compile anything)
However, I just get an empty folder on the agent when I hit 'run' on this project.
I must have misunderstood how this works!
Are there any blog posts to help elucidate this?
The tests are SpecFlow/NUnit tests.
Please ask for more info if I have not been clear!
You might want to set up the tests as an artifact of the build project, then deploy the tests to the deployment environment. (A snapshot dependency only pins both builds to the same source revision; it does not copy any files to the agent, which is why you are seeing an empty folder. An artifact dependency is what actually transfers files.)
Then run a separate TeamCity agent on the deployment environment to actually execute the tests on that environment.
I've just installed Hudson and it is running beautifully. It builds, runs JUnit tests and also CheckStyle analysis.
The next step for us would be to create an installer, install it, and then run automated tests against the actual installation. I would then like to fail the build if the tests fail, or at least publish the results somehow. I think we would set it up so that this part runs periodically or is manually triggered.
We use InstallAnywhere for installation and IBM Rational Functional Tester for automated tests.
So the questions are: has anyone created a similar setup? Are there any plugins, tutorials or other resources that could help me along? Or do you have any tips or advice in general?
The command line reference for Rational Functional Tester:
http://publib.boulder.ibm.com/infocenter/rfthelp/v8r0m0/index.jsp?topic=/com.ibm.rational.test.ft.doc/topics/RobotJCommandLine.html
Sample command for running a test:
java -classpath "C:\IBM\RFT\FunctionalTester\bin\rational_ft.jar" com.rational.test.ft.rational_ft
  -datastore \\My_project\AUser\RobotJProjects -user admin
  -project \\My_project\AUser\TestManagerProjects\Test.rsp
  -build "Build 1" -logfolder "Default" -log "Al_SimpleClassicsA#1"
  -rt.log_format "TestManager" -rt.bring_up_logviewer true
  -playback basetests.SimpleClassicsA_01
An additional note: you'll want to configure Windows properly on the agent machine that will be running the tests. This is not advice specific to Hudson or RFT, but rather to all GUI automation tools on Windows. RFT requires an interactive desktop environment to be able to click buttons, etc. If you have your Hudson agent running as a Windows service, there will be no desktop. See the following: Silverlight tests not working unless RDP connection open
We run a fairly complicated distributed build on Hudson; the process basically follows these steps:
Test on Windows.
Test on OSX, run code coverage & push results to server.
Test on OSX Tiger.
Package for OSX Leopard & push build to server.
Package for Windows & push build to server.
Update product website.
We don't use InstallAnywhere or Rational Functional Tester, but have similar sorts of mechanisms in their place. The key we found to making it all sing in Hudson was being able to run our various steps from the command line. Maven and appropriate plugins made short work of this task. So my advice would be just that: using whatever build tool you are using (Ant, Maven, ...), configure it so that you can run Rational Functional Tester and InstallAnywhere from the command line with a simple goal passed to your build tool (e.g. mvn test or mvn assembly:assembly).
After that, make sure whatever machine Hudson is running on has everything installed (e.g. Rational Functional Tester) and configured, so that you can open up the command line, type in the goal, and have your tests execute correctly.
Hooking it up in Hudson from that point on is fairly simple - just pass in the goal when you configure the build.
I believe the best answer is that integrating RFT with Hudson/Jenkins is a useless endeavor.
As this IBM FAQ says, to make RFT work you must:
be logged in to the machine;
the screen can't be locked;
if you are remotely connected, you can't minimize the connection screen.
So you can't run Jenkins/Hudson as a service, which makes it much less useful. You must run it from your logged-in account. If you are on a corporate computer (very probable if you are using RFT), you will probably have to use a hack to prevent the screen saver from starting. If the screen is locked, your tests will always fail.
It isn't very difficult to configure your tests to run from the command line; you just have to take care of the return codes for when the tests fail and succeed.
Jenkins/Hudson would also give you some advantages, like integrating the tests with your version control and probably running the tests automatically when a commit is made. It would also help with sending emails when the tests fail.
But you would still have to integrate the RFT logs with some kind of JUnit plugin to get a nice report. You would also have to have a script to run the tests from the command line.
I think it is not worth the trouble to use a continuous integration server with RFT. Better to just have your tests running every day via the Windows Task Scheduler. It is a simpler solution with fewer failure points.
Or use my final solution: quit RFT and use the free Selenium with a headless web driver.
I have some general advice on this because I have not yet implemented this myself.
I am assuming you want to have Hudson run the RFT scripts automatically for you via a build or Hudson process?
I want to implement something similar in my organisation as well.
I have not yet been able to implement this because of organisational constraints but here is what I have thought out/done so far:
Downloaded a Windows process viewer and got the command for running the tests.
Made a shell script out of it and separated out the variables, etc.
The future plan is to set up a Windows slave machine which would have all the tools required once the tests are kicked off, e.g. the correct versions of browsers, environment variables, and other required tools.
Hudson would kick off a process which runs the shell scripts created, which run all the RFT scripts and perform the necessary operations on the slave machine.