I have worked in an organisation where we used both Puppet and Ansible for configuration management, but I always wondered why they would use both tools. What can Puppet do that Ansible cannot?
The only thought that came to my mind was:
- Puppet was used to check whether the system is in the desired state at regular intervals, while Ansible was used to deploy one-time things (code, scripts, packages, etc.)
Can someone please explain why an organisation would use both tools? Can regular config checks be done with Ansible?
Cheers
In the interest of full disclosure, I'm a contributing developer in the upstream Ansible community, but I will do my best to keep my response neutral.
I think this is largely a matter of opinion and you'll get varied answers depending on who you talk to, but I think about it effectively like this:
Ansible is an automation tool and Puppet is a configuration management tool. I don't consider them direct competitors the way tech journalists tend to compare them, except for the fact that there's some overlap in their ability to perform the functions you would want from a configuration management tool: service/system state, configuration file templating, application lifecycle management, etc.
The main place where I see these tools in a completely different light is that Ansible performs automation of tasks, and those tasks can be of many "types" that you don't really expect from a configuration management tool, such as IaaS provisioning (AWS, GCE, Azure, RAX, Linode, etc.), physical network configuration (Cisco IOS/ASA, JunOS, Arista, VyOS, Netscaler, etc.), virtual machine creation/management, physical load balancer configuration (F5 BigIP), and the list goes on. Effectively, Ansible is your "automation glue" for creating and automating a process that you and your team might otherwise have had to do by hand. As a tool it gets compared to things like Puppet, Chef, and SaltStack because one of the many "types" of task you would automate with it more or less adds up to configuration management.
On the flip side, configuration management tools such as Puppet generally have a daemon running on the nodes, which needs to be provisioned/bootstrapped (maybe with Ansible), and which has its advantages and disadvantages (which I won't debate here; it's largely out of scope). One thing that daemon provides is continuous eventual consistency. You can set configuration authoritatively on the Puppet Master, and the agent will maintain that state on the systems and report whenever it has to change something, which can be wired up to alert monitoring to notify you when something's wrong. While Ansible will also report when something needed changing, it only does this when you run the Ansible playbook. It's a push model, not a pull model (nor is it a continuously running daemon enforcing system state), and that continuous, pull-based enforcement has its advantages for reporting and the like. I will note that something like Ansible Tower/AWX can more or less emulate this functionality, but it's not a "baked in" feature. Just something to keep in mind.
Ultimately, I think it boils down to familiarity with the technologies, the desired feature set, and whether you have a pre-existing investment (both time and money) in a toolchain. If you have been using Puppet for five years, there's no real motivation to forklift-replace it with something else when you can use Ansible to augment it (there's even a puppet module in Ansible) and let the two play nicely together, getting the features you want from both. However, if you're starting from scratch, then I think you should actually do a pros/cons or feature comparison of what you really want out of the tool(s), to find out whether it's worth the investment of picking up two tools from scratch or finding one that can fulfill all your needs. While I'm biased towards Ansible in this regard, the choice ultimately lies with the person who's going to have to use the tooling to maintain the infrastructure.
A good example of the hybrid approach: I know of a few companies that use Puppet for configuration management and Ansible for their software release lifecycle, where one of the tasks in their playbooks literally calls the puppet module to bring all the systems into configuration consistency. The Ansible component here automates/orchestrates across various systems; the basic outline of the process is: remove a group of hosts from the load balancer, ensure database connections have stopped, perform upgrades/migrations, run Puppet for configuration/state consistency, and then bring things back online in whatever order they've deemed appropriate. This all happens from a single command (or the click of a button in Tower/AWX).
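Purely as an illustration, and not any particular company's setup (the host groups, backend name, archive path, and `app_version` variable below are all made up), such a play might look roughly like this:

```yaml
---
# Hypothetical rolling release: drain, upgrade, enforce config with Puppet, re-enable.
- hosts: webservers
  serial: 2                        # work on two nodes at a time
  tasks:
    - name: Disable this node in the HAProxy backend
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: app-backend
        socket: /var/run/haproxy.sock
      delegate_to: "{{ item }}"
      loop: "{{ groups['loadbalancers'] }}"

    - name: Deploy the new application release
      ansible.builtin.unarchive:
        src: "releases/app-{{ app_version }}.tar.gz"
        dest: /opt/app

    - name: Run the Puppet agent to enforce configuration consistency
      community.general.puppet:
        timeout: 30m

    - name: Re-enable this node in the HAProxy backend
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app-backend
        socket: /var/run/haproxy.sock
      delegate_to: "{{ item }}"
      loop: "{{ groups['loadbalancers'] }}"
```

The `serial` keyword is what gives you the rolling, a-few-hosts-at-a-time behaviour, and the whole thing runs from one `ansible-playbook` invocation.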
Anyhoo, I know that was kind of long winded but hopefully it was helpful.
I have multiple instances of the same engine running as Windows services in the same environment and on the same system; they just have slightly different connection strings because they point to different queues. Other than a couple of lines in the config (XML), the rest of the application is exactly the same (config and binaries). When config changes are made, they are applied to all instances, which is time-consuming, so I am researching the best method of managing the config files in a scalable and version-controlled way. Currently I use a batch file to copy the default engine directory and config over, and then find and replace the individual strings. I'd prefer to have a template config that can be updated and that pulls in set variables for the connection strings depending on the instance and environment. I understand that this may be possible using Chef, Puppet or Ansible, but to my understanding these are more for system configuration as opposed to individual application files? Does anyone know if this is possible with GitLab or AWS? Before committing to the learning curve, I'm trying to discern whether one of the aforementioned config management tools would be overkill for this scenario or a realistic solution.
I understand that this may be possible using Chef, Puppet or Ansible, but to my understanding these are more for system configuration as opposed to individual application files?
Managing individual files, including details of their contents, is a common facet of configuration management. Chef, Puppet, and Ansible can all do this with relative ease.
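To give a sense of what that looks like in practice, here is a minimal, hypothetical Ansible sketch for the Windows-service scenario described above (the instance names, queue names, paths, and the `engine.exe.config.j2` template are all placeholders): each instance's XML config is rendered from one shared Jinja2 template, and the matching service is restarted only when its file changes.

```yaml
---
# Hypothetical sketch: render one templated XML config per engine instance.
- hosts: engine_servers
  vars:
    engine_instances:              # illustrative data, not your real values
      - name: EngineA
        queue: orders-queue
      - name: EngineB
        queue: billing-queue
  tasks:
    - name: Render the per-instance config from a shared Jinja2 template
      ansible.windows.win_template:
        src: engine.exe.config.j2        # contains e.g. {{ item.queue }}
        dest: "C:\\Engines\\{{ item.name }}\\engine.exe.config"
      loop: "{{ engine_instances }}"
      notify: Restart engine services

  handlers:
    - name: Restart engine services
      ansible.windows.win_service:
        name: "{{ item.name }}"
        state: restarted
      loop: "{{ engine_instances }}"
```

Chef and Puppet have direct equivalents (a template plus a service resource), so the choice between the three is mostly a matter of taste and existing investment.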
Does anyone know if this is possible with GitLab or AWS?
No doubt, someone does. And I anticipate, but cannot confirm, that the answer is "yes" for both.
Before committing to the learning curve, I'm trying to discern whether one of the aforementioned config management tools would be overkill for this scenario or a realistic solution.
A configuration management system would almost certainly be overkill if the particular task you describe is the only thing you are considering them for.
Currently I use a batch file to copy the default engine directory and config over, and then find and replace the individual strings. I'd prefer to have a template config that can be updated and that pulls in set variables for the connection strings depending on the instance and environment.
In the first place, if it ain't broke, don't fix it. On the other hand, if it is broke, and switching to a template-based approach is a reasonable method to resolve the issue, then you can certainly implement that with a for-purpose local script without bringing in all the apparatus of a configuration management system.
In the event that you do decide the current mechanism needs to be replaced, do, for goodness' sake, ditch batch files. It's one of the worst scripting languages ever inflicted on humanity. PowerShell would be a natural replacement on Windows, but you might also consider Python, or pretty much any programming language you know.
I am in the process of setting up a dedicated EC2 instance as a StatsD server, and I was wondering if there is a best practice around this. Allow me to elaborate. When dealing with cloud infrastructure, I have found Terraform to be extremely useful: all the infrastructure you need is expressed as code, and any changes to this code base of Terraform modules can be tracked effectively. It also makes sense to keep it in the same repository as your source code, so whenever CD kicks in, we can make sure our infrastructure is updated as and when needed.
I have a similar question around a StatsD server. I came across the Chef configuration management tool, but given the size of our operation at this stage, it feels like overkill. I am curious what people do for such servers. Do they prefer to manage them manually? Or is there a way to express it as code, like Chef? Or perhaps something else I don't know about?
At the risk of sounding opinionated, I would say that maintaining configuration with a configuration management tool such as Chef is a good idea regardless of the size of the infrastructure.
However, for your situation, you should evaluate the points below:
- Although your current requirement is a single StatsD server, do you foresee a need for additional machines?
- Configuration management tools like Chef are components of your infrastructure which need to be set up as well. Is that feasible for your current requirements, or for something in the near future?
- If the effort of automating it yourself is your concern: in most cases you will be able to reuse community work, such as the statsd cookbook from the Chef Supermarket.
The question is tied more to CI/CD practices and infrastructure. In the release process we follow, we club a set of microservice Docker image tags together as a single release, run that through the CI/CD pipeline, and promote the version.yaml to staging and production: a sort of mono-release pattern. The problem with this is that at some point we need to serialize, and other changes have to wait until a mono-release is tested and tagged as ready for the next stage.
An alternative would be the micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline all the way to production. But then would this mean that there would be as many pipelines as there are microservices? Another alternative could be a single pipeline with parallel test cases and a polling CD, sort of like the GitOps way, which picks up the latest production-tagged Docker images.
There seems to be precious little information regarding the way microservices are released. Most of it talks about interface-level or API-level versioning and releasing, which is not really what I am after.
Assuming your organization is developing services in a microservices architecture and deploying to a Kubernetes cluster, you will want to use some CD (continuous delivery) tool to release new microservices, or even to update an existing one.
Take a look at tools like Jenkins (https://www.jenkins.io) and Drone (https://drone.io). Some organizations use Python scripts, or Go, and so on. Personally, I do not like that approach; I think the best solution is to pick a tool from the CNCF Landscape (https://landscape.cncf.io/zoom=150) in the Continuous Integration & Delivery group, as these are tools tested and used in the market.
An alternative would be the micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline all the way to production. But then would this mean that there would be as many pipelines as there are microservices?
That's OK; in some tools you have parameterized pipelines that build projects based on received parameters, but I think the best solution is to have one pipeline per service, plus some parameterized pipelines to deploy, run specific tests, archive assets and so on, much like the micro-release strategy you describe.
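To make that concrete, and purely as an illustration (this answer mentions Jenkins and Drone; the sketch below uses GitLab CI-style YAML, and the registry, image names, and variables are placeholders), a per-service pipeline with a parameterized deploy job could look something like this:

```yaml
# Hypothetical per-service pipeline: build the service's image, then deploy it
# with a parameterized job whose target environment is passed in as a variable.
stages:
  - build
  - deploy

build-service:
  stage: build
  image: docker:24           # assumes a Docker-in-Docker service or mounted socket
  script:
    - docker build -t "registry.example.com/$CI_PROJECT_NAME:$CI_COMMIT_SHORT_SHA" .
    - docker push "registry.example.com/$CI_PROJECT_NAME:$CI_COMMIT_SHORT_SHA"

deploy-service:
  stage: deploy
  image: bitnami/kubectl:latest
  variables:
    TARGET_ENV: staging      # override this to promote to another environment
  script:
    - >-
      kubectl -n "$TARGET_ENV" set image "deployment/$CI_PROJECT_NAME"
      "$CI_PROJECT_NAME=registry.example.com/$CI_PROJECT_NAME:$CI_COMMIT_SHORT_SHA"
```

Because each service repository carries its own copy of such a file, you naturally end up with one pipeline per service, while the deploy job stays parameterized and reusable.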
Agreed, there is little information about this out there. From all I understand, the approach of keeping one pipeline per service sounds reasonable. With a growing number of microservices you will run into several problems:
- how do you keep track of changes in the configuration
- how do you test your services efficiently with regression and integration tests
- how do you efficiently set up environments
The key here is most probably to make better use of parameterized environment variables that you then version in an efficient manner, which lets you keep track of changes. To achieve this, make sure to a) strictly parameterize all variables in the container configs and the code, and b) organize the config variables in a way that allows you to inject them at runtime.
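As a small illustration of point a), here is what keeping the values out of the image and injecting them at runtime can look like on Kubernetes (the service name, queue URL, and image tag are placeholders):

```yaml
# Hypothetical example: environment-specific values live in a ConfigMap
# and are injected into the container at deploy time, not baked into the image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
data:
  QUEUE_URL: "amqp://rabbitmq.staging.svc:5672/orders"
  LOG_LEVEL: "debug"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.4.2   # tag set by the pipeline
          envFrom:
            - configMapRef:
                name: orders-config   # swap the ConfigMap per environment
```

Swapping the ConfigMap (or the values it is generated from) is then all that distinguishes staging from production.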
As for point b), this is slightly trickier. As it looks like you are using Kubernetes, you might just want to pick something like Helm charts. The question is how you structure your config files, and you have two options:
Use something like Kustomize, a configuration management tool that lets you version your manifests to a certain degree following a GitOps approach (a small sketch follows after these two options). This comes (in my biased opinion) with a good number of flaws: Git is ultimately not meant for configuration management, and it is hard to follow changes, build diffs, and identify the relevant history once you handle that many services.
You use a continuous delivery API (I work for one, so make sure you question this sufficiently). CD APIs connect to all your systems: CI pipelines, clusters, image registries, external resources (DBs, file storage), internal resources (Elasticsearch, Redis) and so on. They dynamically inject environment variables at run-time and create the manifests with each deployment, caching these as so-called "deployment sets". Deployment sets are a representation of the state of an environment at deployment time. This approach has several advantages: it allows you to share, version, diff and relaunch any state that any service and application were in at any given point in time, and it provides a very clear and bullet-proof audit of everything in the setup. QA environments or test-feature environments can be spun up through the API or UI, allowing for fully featured regression and integration tests.
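To make the first of those two options a bit more tangible, a minimal Kustomize layout might look like this (directory names, the service name, and the image tag are placeholders):

```yaml
# base/kustomization.yaml - manifests shared by every environment
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml - per-environment differences only
resources:
  - ../../base
patches:
  - path: replica-count.yaml       # e.g. bump replicas for production
configMapGenerator:
  - name: orders-config
    literals:
      - LOG_LEVEL=info
images:
  - name: registry.example.com/orders
    newTag: "1.4.2"
```

`kubectl apply -k overlays/production` then renders and applies that environment's variant; whether Git history is a good enough audit trail for this at scale is exactly the trade-off discussed above.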
I have an HPC cluster and I would like to monitor its health with Icinga2. I have a number of checks defined for each node in the cluster, but what I would really like is to get a notification if more than a certain percentage of the nodes are sick.
I notice that it is possible to define a dummy host which represents the cluster and use the Icinga domain-specific language to achieve something like what I'm interested in (http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/advanced-topics?highlight-search=up_count#access-object-attributes-at-runtime). However, this seems like an inelegant and awkward solution.
Is it possible to define this kind of "aggregate" or "meta check" over a hostgroup?
There isn't a built-in solution; the example in the docs has helped quite a few users, even if it isn't that elegant. External addons such as the Business Process module can do the same but require additional configuration; the Vagrant box integrates the Icinga Web 2 module for this, for instance.
Other users tend to use check_multi or check_cluster for that, which isn't particularly elegant either.
There are no immediate plans to implement such a feature, although the idea is a good one and has been around for a long time.
I am unsure about which monitoring framework to use. Currently I am looking at either Nagios or Sensu.
Can anybody give me a good reference which shows a comparison of these two (or any other monitoring tool which may be a good solution)? My main intention is to scale out on EC2. I am using Opscode Chef for system integration.
One important difference between Nagios and Sensu -
Nagios requires all the configuration for 1) checks, 2) handlers, and most importantly 3) hosts to be written in configuration files on the Nagios server. This means that each time one of the three above changes (for example, new hosts are added or old hosts removed), you need to rewrite the configuration files and restart Nagios.
Sensu is almost the same as the above, with one important difference -- when hosts are added or removed from your architecture (as is the case in most auto-scaling cloud deployments) -- the hosts themselves run a sensu-client that "subscribes" to different available checks. So when a new server comes into existence and says "I'm a webserver", the sensu-client running on it will ask the sensu-server "what checks should a webserver run on itself?" and run those.
Other than this, operations wise both Nagios (also Icinga) and Sensu are great and have a lot of facilities for checks, handlers, and visibility through a dashboard (YMMV).
From a little recent experience with Sensu and quite a bit of experience with Nagios I'd say both are excellent choices.
Sensu is definitely the new kid. It has a nice UI and nice API. It does however require Redis and RabbitMQ in your setup to work. So consider if you'll therefore want something to monitor those dependencies outside the sensu monitoring stack. Sonian provide Chef recipes for trying it out too.
https://github.com/sensu/sensu-chef
Nagios has been around for an awfully long time. It's generally packaged for most distros, which makes installation simple, and it has few dependencies. Its track record also means that finding people who know it, or who have used it and can offer advice, is easy. On the other hand, the UI is ugly and programmatic access is often hacky or via third-party add-ons. Chef recipes also exist for Nagios:
https://github.com/bryanwb/chef-nagios
If you have time I'd try both; there is little harm in having two monitoring systems running as a trial. The main thing to focus on, especially in a dynamic EC2 setup, is how easily the monitoring configuration files can be generated by your configuration management tool.
In terms of other tools I'd personally include something to record time series data, for instance requests per second or load over time. Graphs are a great help with monitoring, and can be used to drive alerting via Nagios or similar. Personally I'm a fan of both Ganglia and Graphite while Librato Metrics (https://metrics.librato.com/) is a very nice non-free option.
I tried using Nagios for a while: I got the feeling that the only reason it's common is that 'everyone else uses it', because it's absolutely hideous to work with. Massively overcomplicated, difficult and long-winded to make it do anything new: if you find something it doesn't do, you know you're in for a week of swearing at crummy documentation of an archaic design. And at the end of all your efforts, when it's all working, it still looks hideous. Scrapping it made me sleep better.
Cacti looks nice, but again it's unnecessarily complex when creating new plugins.
For graphing I'd recommend Munin: it's completely trivial to write new plugins in any language, there are hundreds available, and it looks reasonable. It's incredibly easy to install - one command to install and set one access rule, so works well for automated deployments, easy to wrap into a chef recipe. 2.0 is out soon and addresses most of its shortcomings (in particular adding variable update intervals, zoomable graphs, ssh transport). Munin can talk to Nagios for notifications, or it can do that itself, and it provides a basic dashboard.
For local process/file/service monitoring, monit is simpler and works better than god. I've not tried it with m/monit.
Comparing Sensu and Nagios, my pick would be Sensu.
Below are the main reasons:
1. Easy setup. There is far less restarting of clients, which is a major pain in large enterprises.
2. Nagios plugins can be used within the Sensu ecosystem.
3. Scalable and well suited to cloud environments.
Has anyone heard about Zabbix? It has a lot of features and comes as a single package. I have doubts about its scalability, though.
As long as enterprise IT consists of databases, SAP, network devices, webservers, filers, backup libraries and so on, there is barely an alternative to Nagios (or its cousins Icinga and Shinken).
Maybe one day everything will come out of the cloud automagically, but for a few more years there will still be static servers (physical or virtual, it doesn't matter) with a defined purpose, sitting around for at least a few months. We will still have to monitor interface bandwidth, tablespaces, business processes, database sessions, logfiles, and JMX metrics: all things where the plugin concept of the Nagios world has an advantage.