Some of the steps that I have done are described below:
Set up Jenkins on a remote Linux server.
Used my own Mac as a slave to run the Xcode build.
I have set up a webhook on Bitbucket that runs a build on the Jenkins server. The build gets triggered when I push code to the repo.
I have a shared scheme in the Xcode project.
But whenever I push failing tests to my repo, the build passes. Shouldn't the CI server fail the build? I don't know what I am missing. I have posted some screenshots for clear reference.
EDIT: With more research I came to know that the build does fail, but you need to look at the test results/reports to see which tests failed. How can I see the reports? I can't see any XML files under the reports section.
I have faced the same issue, where the build was successful even though test cases were failing. You can use the Log Parser plugin to parse the console output and mark your build as either stable or failed based on the output.
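For example, a minimal sketch of a parsing-rules file for that plugin; the patterns below are examples aimed at xcodebuild output, not required values:

    # mark lines from failing Xcode tests as errors (example patterns)
    error /Test Case .* failed/
    error /\*\* TEST FAILED \*\*/
    warning /warning:/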
Jenkins runs in a step-by-step fashion. "Build succeeded" means your build step is correct and there's no issue with it, but it does not mean your test cases have also passed. You need to check with JUnit whether your test cases pass or not.
In your execute shell phase, you might need to add set -exo pipefail at the top.
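For example, a minimal sketch of such an execute-shell step, assuming a hypothetical scheme name and the xcpretty formatter; without pipefail, the step's exit code is the formatter's, so failing tests would not fail the build:

    #!/bin/bash
    # -e: stop on first error; -x: echo commands; -o pipefail: a failing
    # xcodebuild still fails the step even when piped through a formatter
    set -exo pipefail

    # "MyScheme" and the destination are placeholders
    xcodebuild -scheme "MyScheme" \
      -destination 'platform=iOS Simulator,name=iPhone 8' \
      test | xcpretty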
I have the following Azure DevOps pipeline configured, which builds the Windows version of the Xamarin app. I was going to try and get that one working first, and the build works, but I'm getting an error in the publish step: path not found. The two paths look fine, but I guess this is down to me not having built one of these before.
Build step which works fine
Publish artifact step which fails
The error that I get is as follows
Still trying to work out what I have got wrong, so I will keep going at it, but if anyone has any hints they would be greatly appreciated.
The thread posted by Hongxin Sui was the solution for me; this is the thread:
Publishing build artifacts failed with an error: Not found PathtoPublish: D:\a\1\s\$(buildStagingDirectory)
However, what I did need to do was create a new YAML pipeline. I copied all the working steps in from the previous classic pipeline using the "Copy YAML" functionality, and then used the different task as stated in that thread. Just make sure you change the case of the "Build" and "ArtifactStaging.." variable names, as they need to be lowercase, not uppercase.
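For reference, a minimal sketch of the resulting publish step (the artifact name is arbitrary); note that the original $(buildStagingDirectory) is not a predefined variable, so it was passed through literally and the path was never found:

    steps:
      - task: PublishBuildArtifacts@1
        inputs:
          # lowercase variable name, per the linked thread
          PathtoPublish: '$(build.artifactStagingDirectory)'
          ArtifactName: 'drop'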
I am looking to be able to run my Postman scripts using Newman during a TeamCity build.
Instead of deploying the build to a test environment, I'd like to run the postman scripts on that particular build, so it isn't deployed to an environment used by other developers which could potentially break it.
My current build chain in TeamCity is:
Build main project (contains the REST Api and all required code)
Run Postman scripts using Newman on that project
I have the collection and environment file, along with the CLI command to call it. When I try to point the environment at a local build, it does not work.
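For illustration, the kind of Newman invocation involved looks like this; the collection and environment file names are placeholders:

    newman run my_collection.json \
      -e local_environment.json \
      --reporters cli,junit \
      --reporter-junit-export newman-results.xml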
I am thinking of running an IIS Express server on the agent and then, with that active port, running the tests, but I have been unsuccessful.
Any ideas on how to approach this would be appreciated!
I have looked at How do I integrate my Postman Integration Tests with TeamCity, but it uses a test environment, which is not what I am after.
I also looked at https://ie.com.au/a-how-set-up-automated-api-testing, which was helpful, but I think it is still reliant on setting up a test environment.
TeamCity isn't really equipped to handle what you are trying to do. You are trying to run API tests against a build; in order to do that, you'll need an environment. You need something to run your project so you can query against it.
The only potential path you might try is containerizing your project, in Docker or something similar, then running the image after it's built and querying against that. However, this isn't a great practice and bloats the build time.
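If you do go that route, a rough sketch of the idea; the image name, port mapping, and collection file are hypothetical, and the collection is assumed to use a {{baseUrl}} variable:

    # build and start the API in a throwaway container
    docker build -t my-api:ci .
    docker run -d --rm -p 8080:80 --name my-api-ci my-api:ci
    # point the collection at the container via a Newman variable
    newman run my_collection.json --env-var "baseUrl=http://localhost:8080"
    docker stop my-api-ci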
A good practice would be to build your project, then deploy it to a test environment (you should set up a separate 'test' or 'dev' environment that is OK being broken), then, after the deploy, trigger a service to run your tests against 'dev'.
I am an intern, new to automation testing. My goal here is to help my company set up CI for the client side.
Right now I have a Maven project containing several tests that use the Appium java-client library, under the Eclipse IDE, and I can run the UI tests locally. My next step is to hook my tests up to the GitLab repo (which is already there, created by the Android developers), but I am stuck. Could somebody help me out?
Please try to be specific:
1. How should I set up the .gitlab-ci.yml?
2. Can we just have the script in the YAML download Appium and Maven? Or could we just download Appium, but import all the Appium java-client JARs into libs in main? If either of the above is true, how? If neither, what should I do, and how?
3. Where should I put my tests in that repo? Or do I not have to put my tests in the existing repo? Instead, could I have another one and tell the YAML where to reach it? Again, how?
4. It would be helpful if you could walk me through the workflow. Like: when developers check in code, GitLab reads the YAML, then builds, then finds my test suites where (question 3), then executes them, etc.
Many thanks in advance!
Since someone is finally also interested in this question, let me share my solution.
So, if you are looking at this question, I assume you already have your test suite and can run it locally on your machine, with the app installed either in a simulator or on a real device. Now you need to read more about GitLab pipelines and GitLab CI:
pipeline: https://docs.gitlab.com/ee/ci/pipelines.html
gitlab CI: https://docs.gitlab.com/ee/ci/quick_start/
And you should have noticed that one of the advantages of Appium is that you don't need to change a thing about the app you are testing; you are testing exactly the same app that is going into production. To learn more about Appium:
http://appium.io/docs/en/about-appium/intro/
Now, to run the automation tests, you need your test suite, the app, and the Appium server. What we need to do is add another stage in .gitlab-ci.yml and tell it to:
take the newly compiled app
install the app on a simulator/real device
compile the test suite and run it
To make things easier to understand, let's start with question 4, the workflow:
When code is checked in to GitLab, the GitLab runner runs the jobs of each stage in your .gitlab-ci.yml, and when it reaches your stage it runs the automation tests. Note that this runs on your server, which means you need Appium installed on that server, up and running, when the automation test suite runs. Now the question is: is your server capable of that? If you want to run the automation tests on your own server, you need to install Appium on it, probably a simulator (which might require the server to be equipped with a GPU), etc.; these are the concerns of maintaining a server. The alternative is a third-party service, which is what I did. It turned out our server (when I was at that company) wasn't capable of running automated UI tests, so we turned to AWS Device Farm (ADF). There are many other service providers you could choose; see the link below for references:
https://adtmag.com/blogs/dev-watch/2017/05/device-clouds.aspx
So I basically have a Python script in my functional test stage; it grabs the newly compiled app and the automation test suite, uploads them to AWS Device Farm, then schedules a run and yields the result when the run is finished.
So, to answer question 1:
We need to create one more stage for the functional test in .gitlab-ci.yml. In my case, I have a functionalTest_project stage after the stage that compiles the Android app. Then you script the necessary commands in your stage or, if it gets too lengthy, put your script in another file (in your repo) and execute it from the stage. In my case, I put my script in python_ci.py and execute it in my stage using "python python_ci.py". (You need a Docker image that meets these requirements; see below too.)
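A minimal sketch of what that stage might look like in .gitlab-ci.yml; the job name and image are placeholders, and python_ci.py is the script described above:

    stages:
      - build
      - functionalTest_project

    functional_test:
      stage: functionalTest_project
      # hypothetical image; it needs python (and maven, if you also
      # compile/package the test suite in this stage as suggested below)
      image: python:3.9
      script:
        - python python_ci.py   # uploads app + test suite, schedules the device-farm run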
You don't download Appium; you set up Appium on your server, or, if you use a cloud service, that service should set up Appium for you.
What I did was use Maven to build and package the test suite locally and then push it to the GitLab repo; I now believe the better way would be to compile and package it in your functionalTest stage in .gitlab-ci.yml. This comes back to the first point of question 1: how to get Maven. My understanding is that it's a dependency of the server, like Python, so both can be obtained by telling GitLab to execute your script with a Docker image that has the Python and Maven dependencies.
Answer to question 3:
Put it in the same repo, but outside the Android project (i.e. they will be under the same directory).
How do you tell the YAML where to reach the test suite? Remember, they are on the same server, so you can use a relative path in your YAML script to tell it where to get your test suite.
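So the layout might look roughly like this; the directory names are hypothetical:

    repo/
      android-app/      <- existing Android project
      appium-tests/     <- Maven test suite, a sibling of the app
      .gitlab-ci.yml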
Hope this helps!
I have an environmental build system which currently has the following environments:
dev
ci
uat
live
Just to be clear, when I say environmental build I mean there is a set of properties files for each environment, and during the build these properties are used to template project files, so a database server may be "localhost" in the dev environment but "12.34.56.78" on CI. So when starting the build you can give it an environment property and it will build for something other than dev (which is the default environment).
Now the CI build works fine and spits out the artifacts correctly; however, as the build is CI, all of it is configured for that environment. I am looking at being able to trigger a build for live or uat when a CI build succeeds, which would run the same build but with a different build argument.
Now I noticed there are a few mechanisms for this. One seems to be an automatic trigger on completion, which could trigger another build, but this seems to require two separate build configurations that are essentially identical other than the build argument being "environment=live" rather than "environment=ci". Another is adding a second build step that would be the same as the first but take a different argument and output the live artifacts elsewhere, but this would always happen, much like the first option.
The final option I could see was to trigger a manual build once I have a live candidate, but it is unclear how to set a build argument for it. I could create a build parameter, but it doesn't seem to get pulled into the build script the way a command-line build argument would.
I will see if there is a better answer, but after writing this I found that using build parameters seems the best option. A parameter can then be embedded anywhere within your build configuration using %environment% (or %your_parameter_here%).
This can then be set up to create a form element for manual builds, so you can easily create a build for a different environment.
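For example, a command-line build step can pass the parameter straight into the build script; the Ant invocation and property name here are hypothetical:

    # TeamCity substitutes %environment% before running the step;
    # for manual builds the value comes from the run dialog's form field
    ant -f build.xml -Denvironment=%environment% package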
The group that I work in has standardized on Jenkins for Continuous Integration builds. Code check-in triggers a standard build, Cobertura analysis and publish to an Artifactory SNAPSHOT repo. I've just finished adding a new target to the master build file that'll kick off a Sonar run but I don't want that running on every check-in.
Is there a way to schedule a nightly build of a specific build target in Jenkins? Jenkins obviously facilitates scheduled builds but it'll run the project's regular build every time. I'd like to be able to schedule the Sonar build target to run nightly.
I could, of course, create a separate Jenkins project just to run the Sonar target on a schedule but I'm trying to avoid that if I can. Our Jenkins server already has several hundred builds on it; doubling that for the sake of scheduling nightly builds isn't very desirable. I looked for a Jenkins plug-in that might facilitate this but I couldn't find anything. Any suggestions?
Here's one way to do it, if you are ok with triggering the build using cron or some other scheduling tool:
Make the build parameterized, and use a parameter in your build file to decide if the Sonar build target should run or not.
Trigger the build remotely by HTTP POSTing the parameter values as a form to http://[jenkins-host]/job/[jobname]/buildWithParameters. Depending on your Jenkins version and configuration, you might need to add an Authentication Token and include it in your URL.
Authenticate your POST using a username and password.
wget --auth-no-challenge --http-user=USERNAME --http-password=PASSWORD "https://[jenkins-host]/job/[jobname]/buildWithParameters?token=<token defined in job configuration>&<param>=<value>&<param2>=<value2>"
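Inside the job, one way to consume the parameter from the first step is a shell build step like this; RUN_SONAR and the target name are hypothetical, and you could equally test the property inside the build file itself:

    # Jenkins exposes build parameters as environment variables
    if [ "$RUN_SONAR" = "true" ]; then
      ant sonar   # hypothetical Sonar target in the master build file
    fi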
I am also looking for a solution to this. My current idea is to create two triggers on the regular build: one for the nightly build, and another for polling SCM.
In the Sonar plugin configuration, there is an option to skip builds triggered by an SCM change. Therefore, only the nightly build will start a Sonar analysis.
I haven't had a chance to test it yet, but I suppose this will work.
Updated on 12/19/2011
The above solution doesn't work if the Sonar analysis is invoked as a standalone build step. To make the Sonar analysis run conditionally, you could use the following two plugins:
Conditional BuildStep Plugin - this allows the sonar analysis to be run conditionally
Jenkins Environment Injector Plug-in - this allows you to inject variables that indicate how the build was triggered.