I have a job configured to automatically deploy an application to a running server. I would like to know if Jenkins provides a way to verify if this application was deployed successfully. Ideally, the build should fail when the deployment fails.
Jenkins has a Rundeck plugin which might be useful.
Rundeck deploys the application, and can be set up to trigger a post-deployment build on Jenkins to perform tasks like running integration tests.
Jenkins could be used to report the status of the deployment; however, it might make more sense to use Rundeck's dashboard instead.
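If the deployment is scripted inside the job itself, another option is a simple post-deployment smoke test that fails the build on a bad response. A minimal sketch as a declarative Jenkinsfile, assuming the application exposes a health endpoint (the URL and deploy script are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Hypothetical deployment script; replace with your deploy step
                sh './deploy.sh'
            }
        }
        stage('Verify deployment') {
            steps {
                // Poll a health endpoint; a non-2xx response makes curl exit
                // non-zero, which marks the build as failed.
                sh 'curl --fail --retry 10 --retry-delay 5 http://myapp.example.com/health'
            }
        }
    }
}
```

Because the shell step's exit code propagates to the build result, a failed deployment fails the build without any extra plugin.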
I can give the URL in cy.visit to point to the deployed application, but I am not clear on the setup, as I don't have CI/CD yet. The goal is to be able to run this test at the click of a button, without having to check out the workspace and build the application.
Yes, you could test your application directly on a staging/preproduction environment on AWS without a CI/CD pipeline.
However, I would not recommend running Cypress against a production environment, because for E2E testing to be efficient you will probably create/delete/edit many things, which may not be suited to a prod env.
Finally, it depends on what you try to achieve with your tests. In general, you will use Cypress to ensure your product works as expected after adding a new feature, for example. You always want to test against the latest version of your code.
As for the setup, the E2E tests can be packaged in the project and run manually until you have a CI to execute them automatically.
https://docs.cypress.io/guides/guides/command-line#cypress-run
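For example, once the tests live in the repo, a one-liner can point them at whatever environment is already deployed (the staging URL below is a placeholder):

```sh
# Run the packaged E2E tests against an already-deployed environment,
# overriding baseUrl so cy.visit('/') resolves against it.
npx cypress run --config baseUrl=https://staging.example.com
```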
This is my first question, so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an Ant target. Before running that Ant target we must spin up, configure, and partially populate (the Ant target does some of the population) several containers, including:
application server
LDAP instance
Postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift-client or kubernetes-actions tasks, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require that I build out status awareness, error checking, retry logic, etc., that is likely already part of the pipeline logic, and then parse the output of the Ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute Ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
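A rough sketch of such a Task using sidecars (the images, workspace name, and Ant target below are placeholders):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-unit-tests
spec:
  workspaces:
    - name: source
  sidecars:
    # Placeholder images; substitute your app server and LDAP images.
    - name: postgres
      image: postgres:15
      env:
        - name: POSTGRES_PASSWORD
          value: test
    - name: ldap
      image: osixia/openldap:1.5.0
    - name: app-server
      image: my-registry/app-server:latest   # hypothetical
  steps:
    - name: ant-tests
      image: my-registry/jdk-ant:latest      # hypothetical: JDK + Ant on PATH
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/sh
        # Sidecars are already running when steps start; add your own
        # readiness polling here if the services need warm-up time,
        # then run the Ant target that populates data and runs the tests.
        ant unit-tests
```

A non-zero exit from the ant command fails the step, and therefore the Task, so the pipeline can gate the runtime-image build on it without parsing the Ant output yourself.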
I am trying to do a PoC on how to achieve continuous integration and deployment using VSTS.
I have been successful with the build process, i.e. VSTS pulls the code (an ASP.NET-based application) and builds it. The build completes successfully.
Now, after the build is done, I want to deploy the application and run my Maven-based Selenium test cases, written in Java, against the application. This is the part where I am stuck, as the deployment step is not able to copy the artifacts to the remote path that I am specifying.
Can anyone please provide me some pointers on how to achieve the deployment on a remote machine and then run the Java-based test cases against this application?
Any pointers would be greatly appreciated.
OK, here is the complete scenario:
1. I have the ASP.NET code in the cloud in my VSTS account.
2. I have been able to add a build step and create the artifacts successfully.
3. Now I have an IIS server where I want to deploy these artifacts; the server is not accessible from the public network and is behind a firewall.
Hence I am looking for any task that would help me achieve this. I am not sure of the complications that might arise due to the firewall, so I am trying out different methods to understand the complete picture.
I received a reply here suggesting the WinRM tasks. I used them, but they fail with error 53 and cannot connect to the server that I am trying to deploy the code to.
To deploy an ASP.NET-based application, you can use the IIS Web App Deployment step/task to deploy to your own server, or deploy to an Azure web site using the Azure App Service Deploy step/task.
To run the Java tests, there is a Maven step/task.
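Sketched as pipeline steps (the site name, paths, and Maven options below are placeholders). Since the IIS server is behind a firewall, one option worth checking is a deployment group agent installed on the server itself: the agent only makes outbound connections, so no inbound WinRM port has to be opened:

```yaml
steps:
  # Runs on a deployment group target, i.e. an agent on the IIS server itself.
  - task: IISWebAppDeploymentOnMachineGroup@0
    inputs:
      WebSiteName: 'MyWebSite'
      Package: '$(System.DefaultWorkingDirectory)/drop/MyApp.zip'

  # Run the Selenium tests, pointing them at the freshly deployed site.
  - task: Maven@3
    inputs:
      mavenPomFile: 'tests/pom.xml'
      goals: 'test'
      options: '-DbaseUrl=http://myserver/myapp'   # hypothetical test parameter
```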
I'm used to having a single entity check out, build, test, and deploy code on every commit (whether it be for a staging server or a production server). Now that we have started looking into Ansible, I'm beginning to think that these tools have isolated roles.
Basically, I'm asking: is it Ansible's responsibility to handle compiling and testing the code before deployment, or should it grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment?
I'm not sure about the idea of using Ansible to do the compiling; I'd rather do that inside the CI server, as it has facilities built just for that. As for testing, it depends on the type of tests: if they are unit tests, they should run right after the build (preferably inside CI again) and either fail or pass the build.
But if those tests are of an integration/functional nature (where they verify whether the service actually works in the environment as we expect), then they should definitely be part of the playbook's post_tasks, and if they don't pass you should mark the deployment as failed and act accordingly. This, of course, implies having a safe way to do that before the service is exposed to production traffic, so that if the tests do not pass, you can safely roll the deployment back.
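A minimal sketch of that post_tasks check, assuming the artifact was already built by CI and the service exposes a health endpoint (the host group, paths, and port are placeholders):

```yaml
- hosts: app_servers
  tasks:
    - name: Install the artifact built by CI
      ansible.builtin.copy:
        src: build/myapp.war
        dest: /opt/myapp/myapp.war

  post_tasks:
    - name: Smoke-test the deployed service
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}:8080/health"
        status_code: 200
      register: health
      retries: 5
      delay: 10
      until: health.status == 200
      # If the endpoint never returns 200, the task (and the play) fails,
      # which is your signal to roll the deployment back.
```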
No, it is not Ansible's responsibility to handle compiling and testing the code before deployment.
Yes, it should grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment.
Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.
https://www.ansible.com/how-ansible-works
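Under that division of labour, the playbook only fetches and installs whatever CI produced; a minimal sketch (the Bamboo artifact URL and version variable are hypothetical):

```yaml
- hosts: app_servers
  tasks:
    - name: Download the artifact produced by the CI build
      ansible.builtin.get_url:
        url: "https://bamboo.example.com/artifacts/myapp-{{ app_version }}.war"
        dest: /opt/myapp/myapp.war
```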
I have a project that needs to work on Windows/Linux with these databases: Oracle/SQL Server.
Our project is built with Maven.
I have installed Jenkins with a master/slave setup for our testing:
master: Windows + SQL Server
slave: Linux + Oracle
I have read "Jenkins - Promoting a build to different environments",
but it is really not helping me much.
I have also read the Jenkins wiki: https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
but I still cannot figure out how I should do it.
Since compiling our code takes a lot of time, I would like to do it just once, then take the final result and test it on both environments, master and slave.
The build should succeed only if it passes on both environments.
I have also noticed that I cannot do this with Jenkins "post steps"; I didn't find any plugin that really helps with deploying and testing on the slave.
I have read somewhere that maybe I should split it into three jobs instead of using one:
the first job compiles, and then the other jobs run the integration tests.
You can look at: http://zeroturnaround.com/rebellabs/the-correct-way-to-use-integration-tests-in-your-build-process/
I hope you can advise me on how I should do it.
Thanks
I would split the build up as follows:
Job 1 compiles the code, runs unit tests and builds your deployable artifact (since you're using Maven, I assume you have a JAR or WAR file)
Job 2 deploys and runs the artifact - you could use build parameters to specify environment-specific criteria.
Job 3 runs the integration test and reports results.
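If you use the Pipeline plugin, the same split can live in a single Jenkinsfile: compile once, stash the artifact, then run the integration tests on both nodes in parallel (the node labels and Maven profiles below are assumptions):

```groovy
pipeline {
    agent none
    stages {
        stage('Build once') {
            agent { label 'linux' }   // compile on whichever node you prefer
            steps {
                sh 'mvn -DskipTests package'
                // Save the compiled artifact so the test nodes can reuse it
                stash name: 'app', includes: 'target/*.war'
            }
        }
        stage('Integration tests') {
            parallel {
                stage('Windows + SQL Server') {
                    agent { label 'windows' }
                    steps {
                        unstash 'app'
                        bat 'mvn verify -Pintegration-sqlserver'   // hypothetical profile
                    }
                }
                stage('Linux + Oracle') {
                    agent { label 'linux' }
                    steps {
                        unstash 'app'
                        sh 'mvn verify -Pintegration-oracle'       // hypothetical profile
                    }
                }
            }
        }
    }
}
```

A failure in either parallel branch fails the whole build, which gives you the "succeeds only if it passes on both environments" behaviour you asked for.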