We have two environments, DEV and PROD. Each environment has 3 nodes:
DEV
  devapp01 (Tomcat)
  devapp02 (Tomcat; identical to 01 and load-balanced)
  devdb01 (MySQL)

PROD
  app01 (Tomcat)
  app02 (Tomcat; identical to 01 and load-balanced)
  db01 (MySQL)
The Tomcat instances serve WARs that are produced by a CI build.
We need the software stack on all DEV machines to be configured identically to the stack on PROD. We have set up a simple Chef server to manage configs on all nodes, and have created recipes for the app and DB servers.
On the Chef server, we currently have an auto-update feature that runs every 30 minutes to check all nodes and make sure they're in sync with their respective recipes. The way our in-house "chef" (a sysadmin) set everything up, part of a recipe does an existence check on Tomcat's webapps directory to determine whether an update should be performed. In other words, if Tomcat's webapps directory already contains a WAR, then when the auto-update runs every 30 minutes, Chef sees that webapps isn't empty and won't go out to the CI server to pull in the new WAR.
To combat this, our chef cooked up a "clean slate" recipe that first deletes exploded WARs from Tomcat's webapps dir. As long as that recipe executes first, TOMCAT_HOME/webapps will be empty by the time the Tomcat check runs; the recipe then removes itself from the Chef run list. His reasoning for this removal was that, in production, if we are always deleting Tomcat's webapps dir, then we will be redeploying the prod nodes every 30 minutes.
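For illustration, the existence-check guard being described might look roughly like the sketch below. This is only a sketch: the attribute names, paths, and CI artifact URL are assumptions, and it presumes a service[tomcat] resource is defined elsewhere in the cookbook.

```ruby
# Sketch of a guarded WAR deployment (names and attributes are assumptions).
remote_file "#{node['tomcat']['home']}/webapps/myapp.war" do
  source node['ci']['latest_war_url']   # pull the newest WAR from the CI server
  notifies :restart, 'service[tomcat]', :delayed
  # Skip the download entirely if any WAR is already deployed; this is why
  # the "clean slate" recipe has to run first to empty webapps.
  not_if { Dir.glob("#{node['tomcat']['home']}/webapps/*.war").any? }
end
```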
On DEV, we do want every CI build to result in a new WAR being deployed to our Tomcat instances (devapp01/02). In PROD, we want to kick off deployments manually, which, given how this chef has configured everything, means manually adding the clean-slate recipe to the run list so that the next Chef run can deploy the new WAR from the CI server.
I'm wondering how other people/teams have used CI and Chef in conjunction in the past, and whether they ran into similar issues. My concrete question is: how can we let CI drive all the DEV deployments while keeping PROD deployments a manual process?
You should take a look at CloudMunch for this, as it is one of the scenarios that has already been envisaged there.
CloudMunch integrates natively into chef server and provides a manual workflow to allow deployment into staging or production environments.
Disclaimer: I work at CloudMunch.
The easiest way is to have separate Chef organizations (or even separate servers) for dev and prod. This gives you strong separation by default: your CI pushes to dev, and prod pushes are done deliberately.
Chef Inc is also working on Policyfiles, which are an interesting replacement for Environments that shouldn't suffer from the same metadata/cookbook version leakage problems.
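For context, a Policyfile is a small Ruby file that pins a node's run list and cookbook sources into a single locked artifact, which is then pushed to a policy group (such as dev or prod) as a deliberate step. A minimal sketch, with illustrative names only:

```ruby
# Policyfile.rb (cookbook and policy names are illustrative)
name 'webapp'
default_source :supermarket
run_list 'webapp::deploy'
cookbook 'webapp', path: 'cookbooks/webapp'
```

The resulting lock can then be promoted per policy group with chef push, so a dev group can receive it automatically from CI while a prod group only gets it when someone pushes it by hand.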
Related
I have a general question about good practices and, let's say, ways of working between Docker and an IDE.
Right now I am learning Docker and Docker Compose, and I must admit that I like the idea of containers! I've deployed my whole Spring Boot microservices architecture in containers, and everything is working really well!
The thing is that everywhere in my properties where I declare a localhost address, I was forced to change localhost to a custom container name, for example localhost:8888 --> naming-server:8888. That is fine for running in containers, but obviously when I try to run the services from the IDE, it fails. I like working on/optimizing/debugging microservices in the IDE, and I don't want to rebuild the image and rerun the whole docker-compose setup every time I make a tiny change.
What does it look like in real dev?
Regards!
In my day job there are at least four environments my code can run in: my desktop development environment, a developer-oriented container environment, and pre-production and production container environments. All four of these environments can have different values for things like host names. That means they must be configurable in some way.
If you've hard-coded localhost as a hostname in your application source code, it will not run in any environment other than your development system, and it needs to be changed to a configuration option.
From a pure-Docker point of view, making these configurable via environment variables is easiest (and Spring can set property values from environment variables). Spring also has the notion of a profile, which in principle matches the concept of having different settings for different environments, but injecting a whole profile configuration can be a little more complex at deployment time.
The other practice I've found helpful is to have the environment variable settings default to reasonable things for developers. The pre-production and production deployments are all heavily scripted and so there's a reasonably strong guarantee that they will have all of the correct environment variables set. If $PGHOST defaults to localhost that's right for a non-Docker developer, and all of the container-based setups can set an appropriate value for their environment at deploy time.
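As a concrete, hedged sketch of that pattern (the property and service names below are illustrative, not taken from the question): hostnames are resolved through placeholders that default to localhost, and the container environment overrides them.

```yaml
# application.yml: hostnames come from environment variables and fall back
# to localhost, so the same build runs unchanged from the IDE
naming-server:
  url: http://${NAMING_SERVER_HOST:localhost}:8888
spring:
  datasource:
    url: jdbc:postgresql://${PGHOST:localhost}:5432/mydb

---
# docker-compose.yml excerpt: the containerized run overrides the defaults
# with the containers' DNS names
services:
  my-service:
    environment:
      NAMING_SERVER_HOST: naming-server
      PGHOST: db
```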
Even though our actual deployment system is based on containers (via Kubernetes) I do my day-to-day development in a mostly non-Docker environment. I can run an individual microservice by launching it from a shell prompt, possibly with setting some environment variables, and services have unit tests that can run just on the checked-out source tree, without needing any Docker at all. A second step is to build an image and deploy it into the development environment, and our CI system runs integration tests against the images it builds.
This is a bit theoretical, but I'll try to explain my setup as much as I can:
1 server (instance) with a self-hosted gitlab
1 server (instance) for development
1 server (instance) for production
Let's say in my GitLab I have a ReactJS project and I configured my gitlab-ci.yml as follows:
job deploy_dev: upon pushing to the dev branch, the updates are copied with rsync to /var/www/html/${CI_PROJECT_NAME} (as a deployment to the dev server).
The runner that picks up the deploy_dev job is a shared runner installed on that same dev server I deploy to, and it picks up jobs with the tag reactjs.
The question is:
If I want to deploy to production what is the best practice should I follow?
I managed to come up with a couple of options, but I don't know which one (if any) is the best practice:
1. Modify gitlab-ci.yml, adding a deploy_prod job with the same reactjs tag, but have its script rsync to the production server's /var/www/html/${CI_PROJECT_NAME} over SSH?
2. Set up another runner on the production server, let it pick up jobs tagged reactjs-prod, and modify gitlab-ci.yml so that deploy_prod uses the tag reactjs-prod?
3. Or do you have a better way than the two mentioned above?
Last question (related):
Where is the best place to install my runners? Is what I'm doing (having my runners on my dev server) actually OK?
If you can explain the best way (the one you would choose) along with the reasons, pros and cons, I would be very grateful.
The best practice is to separate your CI/CD infrastructure from the infrastructure where you host your apps.
This is done to minimize the number of variables which can lead to problems with either your applications or your runners.
Consider the following scenarios when you have a runner on the same machine where you host your application: (The below scenarios can happen even if the runner and app are running in separate Docker containers. The underlying machine is still a single point of failure.)
The runner executes a CPU/RAM heavy job and takes up most of the resources on the machine. Your application starts experiencing performance problems.
The GitLab runner crashes and puts the host machine in an inoperable state (Docker panic or whatever); your production app stops functioning.
Your app breaks the host machine (it doesn't matter how, it can happen); your CI/CD stops working and you cannot deploy a fix to production.
Consider having a separate runner machine (or machines; GitLab Runner can scale horizontally) that is used to run your deployment jobs to both the dev and production servers.
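To sketch how that can look in gitlab-ci.yml (tag names and paths follow the question; the SSH target is an assumption): the dev job stays automatic on the dev branch, while the production job runs from master only when triggered manually, and each job is tagged for the runner that should execute it.

```yaml
deploy_dev:
  stage: deploy
  tags:
    - reactjs
  only:
    - dev
  script:
    - rsync -avz --delete build/ /var/www/html/${CI_PROJECT_NAME}/

deploy_prod:
  stage: deploy
  tags:
    - reactjs-prod        # picked up by a runner dedicated to production deploys
  only:
    - master
  when: manual            # production deploys require an explicit click in GitLab
  script:
    - rsync -avz --delete -e ssh build/ deployer@prod.example.com:/var/www/html/${CI_PROJECT_NAME}/
```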
I agree with @cecunami's answer.
As an example, in our org we have a dedicated VM just for the runner, which is explicitly monitored by our teams.
Since we first created that machine, its CPU, RAM, and storage demands have grown massively, which is why the infrastructure should be kept separate.
I have cookbooks to deploy infrastructure to the Azure cloud. My cookbooks create the required VMs, set up SQL Servers, attach disks to the VMs, and install some software.
I want Kitchen CI itself to:
1. Verify that my resources have spawned correctly
2. Validate that the configurations were applied correctly
If by #1 you mean checking that Chef itself actually works, don't worry, we have our own test suites for that. What you want to focus on is #2: checking that the side effects of your recipes are what you expect. As for how to use Test Kitchen, we have some guides at https://learn.chef.io and the main TK website at https://kitchen.ci/.
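As a rough sketch of #2 (assuming the kitchen-azurerm driver and InSpec tests; the platform, cookbook, and test paths are placeholders, and the driver needs additional Azure credentials and settings not shown here):

```yaml
# .kitchen.yml: converge the cookbook on a real Azure VM, then let the
# InSpec verifier assert on the node's actual state
driver:
  name: azurerm

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: ubuntu-18.04

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]
    verifier:
      inspec_tests:
        - test/integration/default
```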
I'm used to having a single entity check out, build, test, and deploy code on every commit (whether for a staging server or a production server). Now that we have started looking into Ansible, I'm beginning to think these tools have separate, isolated roles.
Basically, I'm asking: is it Ansible's responsibility to handle compiling and testing the code before deployment, or should it grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment?
I'm not sure about the idea of using Ansible to do the compiling; I'd rather do that inside CI, since CI servers have facilities built just for that. As for testing, it depends on the type of tests: if they are unit tests, they should be run right after the build (preferably inside CI again) and either fail or pass the build.
But if those tests are of an integration/functional nature (verifying that the service actually works in the environment as we expect), then they should definitely be part of the playbook's post_tasks, and if they don't pass you should mark the deployment as failed and act accordingly. Ideally this happens in a safe way, before the service is exposed to production traffic, so that if the tests do not pass you can roll the change back.
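A minimal sketch of that shape, assuming the artifact comes from Bamboo and the service exposes an HTTP health endpoint (URLs, paths, and service names are assumptions):

```yaml
# Deploy a CI-built artifact, then smoke-test it in post_tasks before
# declaring the run a success.
- hosts: app_servers
  become: true
  tasks:
    - name: Fetch the WAR built by the CI server
      get_url:
        url: https://bamboo.example.com/artifacts/myapp-latest.war
        dest: /opt/myapp/myapp.war

    - name: Restart the application service
      service:
        name: myapp
        state: restarted

  post_tasks:
    - name: Verify the service answers on its health endpoint
      uri:
        url: http://localhost:8080/health
        status_code: 200
      register: health
      retries: 5
      delay: 10
      until: health.status == 200
```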
No, it is not Ansible's responsibility to handle compiling and testing the code before deployment.
Yes, it should grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment.
Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.
https://www.ansible.com/how-ansible-works
How can I host concurrent environments of a single application on AppHarbor?
We are currently in development so a single environment of DEBUG is sufficient, with builds triggered from the develop branch.
Soon we will need a PROD environment, triggered by changes to the master branch (or perhaps manually).
How can I configure AppHarbor such that both the DEBUG & PROD environments are accessible at any given time for a single application?
With hostnames such as:
http://debug-myapp.apphb.com
http://prod-myapp.apphb.com
For now you will have to create two applications, one for your debug environment and one for your production environment. You can also set up the tracking branch for each application. Here is a link where we describe how to manage environments.