CI based on docker-compose?

I am currently building a little application that requires that some massively annoying software is installed and running in the background. To ease the pain of developing, I wrote a set of docker-compose files that runs the necessary daemons, creates some jobs, and throws in some test data.
Now, I'd like to run this in a CI-like manner. I currently have Jenkins check all the different repositories and execute a shell script that calls docker-compose up --abort-on-container-exit. That gets the job done, but it seems like a hack, and I'm not a huge fan of Jenkins.
What I want to ask is: is there a more beautiful way of doing this? Specifically, is there a CI that will
watch a set of git repositories,
re-execute docker-compose (possibly multiple times with different sets of parameters), and
nicely collect and split the logs and tell me exactly which container failed, and how?
(Optionally) is not some cloud service but installable on my local server?
If the answer to this is "write a Jenkins module", then fine, so be it.
I'm aware that there are options like gitlab-ci, but I'd like to keep the CI script in a form that can also be easily executed during development, before pushing to a repo.
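To illustrate what I mean, the whole thing could stay a plain docker-compose invocation that both developers and the CI call. A rough .gitlab-ci.yml sketch (the test-runner service, the extra compose file and the job name are made up, not something I have working):

integration-test:
  stage: test
  script:
    # --exit-code-from implies --abort-on-container-exit and propagates the
    # test runner's exit status to the CI job
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up --exit-code-from test-runner
  after_script:
    # after_script runs even when the job failed, so the per-container logs survive
    - docker-compose logs --no-color > compose.log
    - docker-compose down -v
  artifacts:
    when: always
    paths:
      - compose.log

The same two docker-compose commands can be run by hand during development, which is the property I'd like to keep.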

Related

How can I run a task that requires multiple containers?

This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an ant target. Before running that ant target we must spin up, configure and partially populate (the ant target does some of the population) several containers including:
application server
ldap instance
postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require me to build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the ant run to determine whether all of the tests were successful. This is my first foray into tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context, these are the tasks I see making up my pipeline (sketched after the list):
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
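Roughly, I picture the pipeline skeleton like this (a sketch only; the task, image and workspace names are made up, and the unit-test task is exactly the part I'm unsure about):

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test                          # hypothetical name
spec:
  workspaces:
    - name: source
  tasks:
    - name: clone
      taskRef:
        name: git-clone                         # Tekton catalog task
      params:
        - name: url
          value: https://example.com/my/repo.git   # placeholder
      workspaces:
        - name: output
          workspace: source
    - name: build-intermediate-image
      runAfter: ["clone"]
      taskRef:
        name: buildah                           # Tekton catalog task
      params:
        - name: IMAGE
          value: registry.example.com/myapp:intermediate   # placeholder
      workspaces:
        - name: source
          workspace: source
    - name: unit-tests                          # needs the app server, LDAP and postgres up
      runAfter: ["build-intermediate-image"]
      taskRef:
        name: ant-unit-tests                    # hypothetical task that runs the ant target
      workspaces:
        - name: source
          workspace: source
    # build-runtime-image and copy-artifacts would follow the same pattern,
    # chained with runAfter on unit-tests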
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
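As a sketch (image names, the workspace and the ant invocation are placeholders, not a tested setup), such a Task could declare the supporting services as sidecars and run the tests in a step:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ant-unit-tests                               # hypothetical name
spec:
  workspaces:
    - name: source                                   # the checked-out repository
  sidecars:
    - name: appserver
      image: registry.example.com/appserver:latest   # placeholder application server image
    - name: ldap
      image: osixia/openldap:1.5.0                   # placeholder LDAP image
    - name: postgres
      image: postgres:13                             # placeholder database image
      env:
        - name: POSTGRES_PASSWORD
          value: test
  steps:
    # Steps start once all sidecars are running; if "running" is not enough,
    # add a small wait loop here probing the ports before kicking off ant.
    - name: run-ant
      image: registry.example.com/ant-builder:latest # hypothetical image with a JDK and ant
      workingDir: $(workspaces.source.path)
      script: |
        ant unit-tests                               # placeholder ant target

The sidecars are torn down automatically when the steps finish, so the test services never outlive the TaskRun.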

How to prevent GitLab CI/CD from deleting the whole build

I'm currently having a frustrating issue.
I have a setup of GitLab CI on a VPS server, which is working completely fine; my pipelines run without a problem.
The issue comes after having to redo a pipeline. Each time, GitLab deletes the whole folder where the build is and builds it again to deploy it. My problem is that I have an "uploads" folder that stores all the user content that was uploaded, and each time I redo a pipeline everything gets deleted from this folder. I obviously need this content, because it's the whole purpose of the app.
I have tried the GitLab CI cache, with no luck. I have also tried making a new folder that isn't in the repository; it deletes that too.
Running my first job looks like so:
[screenshot of the job log]
As you can see, there are a lot of lines that say "Removing ..."
In order to persist a folder with local files while running CI pipelines, the best approach is to use Docker data persistency: you can delete everything from the last build while keeping the local files your application uses between builds, and you still maintain the ability to start from scratch every time you start a new pipeline. Docker offers two options:
Bind-mount volumes
Volumes managed by Docker
GitLab's CI/CD Documentation provides a short briefing on how to persist storage between jobs when using Docker to build your applications.
I'd also like to point out that if you're running GitLab Runner through SSH, the documentation explicitly states that caching between builds is not supported with that executor. Even when using the standard Shell executor, saving data to the builds folder is highly discouraged, so it can be argued that the best-practice approach is to use a bind-mount volume on your host and isolate the application from the user-uploaded data.
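As a sketch (the service name and paths are placeholders), a bind-mount in the compose file that runs the deployed application keeps the uploads directory on the host, outside anything the pipeline wipes and rebuilds:

services:
  app:
    image: registry.example.com/myapp:latest       # hypothetical image built by your pipeline
    volumes:
      # host path : container path - user uploads live on the host,
      # so redeploying or rebuilding the container never removes them
      - /srv/myapp/uploads:/var/www/html/uploads

A named volume managed by Docker works the same way; the point is that the uploaded files live outside the build directory GitLab recreates on every pipeline.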

Run test script in continuous integration using CircleCI?

We're using CircleCI for CI/CD. In the continuous integration process, I'd like to run a test script I wrote that generates some data for my app to use. Then another script will run and make sure the data was processed in a certain amount of time (performance test). Is there a way I can do this through CI/CD using CircleCI?
I have a Spark Streaming App that reads data from Kafka, processes and then puts it into a database.
I generate some data and send it to Kafka. Meanwhile, the Spark program is running (my script doesn't start it; I start it manually beforehand). The Spark app processes this data. My script just waits about a minute after it initially generates the data, then makes an API call to get some metrics on the data that Spark processed, basically performance metrics.
I'd like CircleCI to take the code from GitHub, build it, and deploy it to a "test" server that then runs my Spark app in the test environment, which has Spark/Kafka installed. Then I'd like CircleCI to run my custom script. My script will give back some performance value in a variable, say X. I'd essentially like CircleCI to compare X against some threshold, like assert(X < 60), and mark the build as failed if the assertion doesn't hold. Is this possible?
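To sketch what I'm imagining (the executor image, script path and threshold handling are placeholders, and the build/deploy jobs are left out):

version: 2.1
jobs:
  performance-test:
    docker:
      - image: cimg/base:stable                    # placeholder executor image
    steps:
      - checkout
      - run:
          name: Generate data and collect metrics
          # hypothetical script that prints the measured value (e.g. seconds taken)
          command: ./scripts/run_perf_test.sh > metric.txt
      - run:
          name: Assert processing time is under the threshold
          command: |
            X=$(cat metric.txt)
            # a non-zero exit status marks the job (and the workflow) as failed
            if [ "$X" -ge 60 ]; then
              echo "Performance check failed: X=$X (expected X < 60)"
              exit 1
            fi
workflows:
  test:
    jobs:
      - performance-test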
I realise this question was asked a while ago; however, if anyone has a similar requirement, this CircleCI pipeline runs a dockerised integration and performance test against a locally deployed API server.
Specific problems that needed solving:
wait for the servers to start within the CircleCI script on the CLI using pwt
run a performance test against the API using wrk with a Lua script
Run locally using Docker (macOS):
git clone https://github.com/simonmittag/jabba.git
circleci local execute --job integration
circleci local execute --job performance
Disclaimer: I'm involved with some of the linked GitHub projects.

Knifing outside a chef run from a node

I have a Jenkins server from which I want to deploy some code to some servers. To pick the right servers, I would like the Jenkins job to query Chef for nodes with a particular role.
However, I am not sure if that is a good idea or an anti-pattern, and I am not sure how to go about it in practice.
The Jenkins server is already listed as a non-admin client, so I am wondering if I can use the existing credentials for something, or if I should create a Jenkins admin and set up a knife.rb in Jenkins' home directory.
You would probably want to use one of the Chef scripting libraries like chef-api (Ruby), PyChef (Python), or Jclouds (Java) rather than knife itself. Using Jenkins for deploys is a bit wonky as it isn't reeeeally meant for that, but you can make it work. Tools like Push Jobs, Fabric, and RunDeck are possibly better suited, and all have direct integration with Chef's node catalog like you describe.

How can Puppet fit into a Continuous Delivery tool chain?

I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck on how to make a clever Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or doing a 'puppet agent --test' on each node in the environment, isn't good enough, because it may pick up other pending changes (I don't know whether another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video at Integrating DevOps tools into a Service Delivery Platform (at timestamp around 22:30).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/   # This is your modulepath for local deployment
# The following file contains the manifests for a module named "localmod"
/scratch/user/puppet/local/localmod/manifests/init.pp

# Example content of init.pp
class localmod {
  notify { "I am in the local module....": }
}
On that local machine you can test this module via puppet apply:
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
I watched the video at the point you referenced. There are two types of automation you can do:
Application build/deploy automation, which can be achieved via maven/ant (build) and ant/capistrano/chrome/bash/msdeploy (deploy), or as termed on that slide, "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be ... "How do I do application builds using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount
The rich DSL puppet has to offer
So many peer reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
# update manifests etc. (version control is the source of truth)
ssh user@host git pull
# run puppet
ssh user@host sudo puppet apply
