Packer - Parallel Builds - Stop All Builds if One Fails - ansible

I'm using Packer to create AWS AMIs for deployment. I build a couple in parallel for different types of AMIs (application server, worker server) and provision them using Ansible.
However, if one of the build processes fails, I want to halt the entire build process for all parallel builds. Is there a way to accomplish this with Packer?

No.
(Unless you write a script that runs last in every build and waits for all the others, or times out. But you are better off with a cleanup script that checks the result of the Packer build and deregisters the AMIs if one of the builds failed.)
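A rough sketch of that cleanup approach, assuming the template uses the manifest post-processor so the created AMI IDs end up in packer-manifest.json; the template name and file paths are illustrative, not something Packer mandates:
#!/bin/bash
# Run all builders in parallel; packer exits non-zero if any of them failed.
packer build template.json
status=$?

if [ $status -ne 0 ] && [ -f packer-manifest.json ]; then
    # One build failed: deregister every AMI the other builds managed to create.
    # The manifest post-processor records artifact_id as "<region>:<ami-id>".
    for artifact in $(jq -r '.builds[].artifact_id' packer-manifest.json); do
        region=${artifact%%:*}
        ami=${artifact##*:}
        aws ec2 deregister-image --region "$region" --image-id "$ami"
        # Note: the AMIs' EBS snapshots are left behind and need separate cleanup.
    done
fi

exit $status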

Related

Jenkins + Docker Compose + Integration Tests

I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run the integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Reference them in the docker-compose file (along with the databases, also created in Docker)
Add a step in Jenkins that runs the integration tests
I don't have much experience with containerization, so I cannot predict what problems may appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done more simply?
I suppose the Visual Studio test runner won't be able to get results from the Docker containers. Am I right?
It looks like developing the tests will be more difficult, because they will run inside Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project - no; in a big project with multiple microservices and many devs - sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere. Or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no VS Code expert, though.
The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
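For reference, a minimal docker-compose sketch of the ad-hoc environment described in the question could look roughly like this; the images, service names, password and connection string are assumptions, not a tested setup:
version: '3'
services:
  testdb:
    # Throwaway SQL Server instance just for this test run (image is an assumption).
    image: microsoft/mssql-server-linux:2017-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"
  integration-tests:
    # Built from the hypothetical Dockerfile added to the test project.
    build:
      context: .
      dockerfile: tests/IntegrationTests/Dockerfile
    depends_on:
      - testdb
    environment:
      ConnectionStrings__Default: "Server=testdb;Database=TestDb;User=sa;Password=Your_password123"
The Jenkins step then boils down to something like docker-compose up --build --abort-on-container-exit --exit-code-from integration-tests, which propagates the test container's exit code to the build.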

Using a Jenkins pipeline to build an existing job hangs at "Scheduling Project"

I have multiple existing projects which build fine. They run MSBuild on a Windows agent running as a Windows service.
I wanted to create a single project that builds them all in a particular order and collects the artifacts from all of them. I decided to try creating a pipeline. When I run it, it gets to the first build statement and then just hangs there; no error, it just says "Scheduling Project: ..." and the little wheel spins forever. The job it's trying to start normally finishes in a few seconds.
stage('job1') {
    node('windows') {
        build job: 'job1', quietPeriod: 0, wait: true
    }
}
I have to kill the build manually; it never starts the job.
OK, I managed to rearrange things so that the pipeline runs on the master and the individual jobs and artifact copies run one at a time on the slave; it's working now.
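For anyone hitting the same hang: wrapping the build step in node('windows') holds the only Windows executor, so the triggered job has nowhere to run. A sketch of the rearranged scripted pipeline along those lines (the job names beyond job1 and the copyArtifacts step from the Copy Artifact plugin are illustrative):
// The build steps no longer occupy a node, so the Windows executor stays free
// for the jobs being triggered, one at a time.
stage('job1') {
    build job: 'job1', quietPeriod: 0, wait: true
}
stage('job2') {
    build job: 'job2', quietPeriod: 0, wait: true
}
stage('collect artifacts') {
    node('windows') {
        copyArtifacts projectName: 'job1'
        copyArtifacts projectName: 'job2'
    }
}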

Running a process with GitLab CI

I have a gitlab runner installed on one of my test servers.
I want to build and deploy my app on every commit.
The server is a Windows server and the app is a .NET Core 1.1 app.
My build script works fine, but eventually it runs dotnet MyApp.dll, which obviously makes the pipeline stop and wait until it finishes (but, of course, my app won't finish; I want it to keep running).
I tried running start dotnet MyApp.dll, but that still doesn't work, as GitLab's runner won't stop until all of its child processes exit.
I am certain I'm using GitLab CI in a non-idiomatic way but fail to understand how to deploy locally correctly.
Any suggestions?
Windows doesn't offer any easy way to disown a process, and you probably don't want to task yourself with stopping the process on your next deploy. What you should do is use SRVANY.EXE to create a service out of your application and then use GitLab CI to stop the service, replace the files, and start it again. It's been a while since I used Windows, so I'm sorry I can't provide the exact commands to run.
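A sketch of what the deploy job could look like in .gitlab-ci.yml once the app is wrapped as a Windows service; the service name, publish path and branch are placeholders:
deploy:
  stage: deploy
  script:
    # Stop the service so the binaries can be replaced (service name is a placeholder).
    - powershell -Command "Stop-Service MyAppService"
    # Publish straight into the folder the service runs from.
    - dotnet publish -c Release -o C:\apps\MyApp
    - powershell -Command "Start-Service MyAppService"
  only:
    - master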

Using MongoDB (in a container?) in Visual Studio Team Services pipelines

I have a node.js server that communicates with a MongoDB database. As part of the continuous-integration process I'd like to spin up a MongoDB database and run my tests against the server + DB.
With Bitbucket Pipelines I can spin up a container that has both node.js and MongoDB. I then run my tests against this setup.
What would be the best way to achieve this with Visual Studio Team Services? Some options that come to mind:
1) Hosted pipelines seem easiest but they don't have MongoDB on them. I could use Tool Installers, but there's no mention of a MongoDB installer, and in fact I don't see any tool installer in my list of available tasks. Also, it is mentioned that there is no admin access to the hosted pipeline machines and I believe MongoDB requires admin access. Lastly, downloading and installing Mongo takes quite a bit of time.
2) Set up my own private pipeline - i.e. a VM with Node + Mongo, and install the pipeline agent on it. Do I have to spin up a dedicated Azure instance for this? Will this instance be torn down and set up again on each test run, or will it remain up between test runs (meaning I have to take extra care to clean it up)?
3) Magically use a container in the pipeline through an option that I haven't yet discovered...?
I'd really like to use a container to run my tests because then I can use the same container locally during the development process, rather than having to maintain multiple environments. Can this be done?
So as it turns out, VSTS now has Docker support in its pipeline (when I wrote my question it was in beta and I didn't find it for whatever reason). It can be found at https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.docker.
This task allows you to spin up a container of your choice and run a single command on it. If this command is to be run synchronously as part of the pipeline, then Run in Background needs to be unchecked (this will be the case for regular build commands, I guess). I ended up pushing a build script into my git repository and running it in a container.
And re. my question in (2) above - machines in private pipelines aren't cleaned up between pipeline runs.
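For reference, the same container-based setup can be driven by a small script on any machine with Docker (locally or on the build agent); the image tag, database URL and npm test command are assumptions about the project:
#!/bin/sh
# Start a throwaway MongoDB container for this test run (image tag is an assumption).
docker run -d --name ci-mongo -p 27017:27017 mongo:3.4
# Point the node.js tests at it, run them, then always remove the container.
MONGO_URL=mongodb://localhost:27017/test npm test
status=$?
docker rm -f ci-mongo
exit $status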

Running load tests via Jenkins on a slave EC2 instance that starts and stops with the build

Ideally, we'd like to run load tests on an EC2 Jenkins slave that starts and stops with our build.
Are there any tools out there (without writing our own plugins) that currently solve this?
I've come across this, but it seems to only be triggered based on the load of Jenkins in general, and not tied to a build.
This configuration is environment specific, and not project specific, so I would prefer to keep this maintained within Jenkins instead of within Maven and the project itself. Although, I'm open to suggestions in that realm.
You can check out the WebLOAD Jenkins plugin; it executes RadView's WebLOAD load testing tool, triggered by Jenkins. WebLOAD itself can launch EC2 cloud machines as needed, if that's what you need.
