Ansible vs Puppet - Agent "check-in"

When it comes to Ansible vs Puppet, what is the difference in how the nodes receive their configuration?
I know that the Puppet agent checks in every 30 minutes to get its configuration.
How does this work in Ansible?

The Puppet agent runs every 30 minutes by default, making sure the checked-in node (server) is in the desired (described) state.
Ansible doesn't have that mechanism, so if you want a scheduler you need to look at Ansible Tower, which has recently been open-sourced (as the AWX project).
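When Tower is not an option, a common workaround is to schedule a pull-style run on each node with cron, which roughly mimics Puppet's 30-minute interval. A minimal sketch, assuming the nodes can reach a Git repository holding the playbook (the repository URL and playbook name below are placeholders):

# Hypothetical sketch: install a cron entry on every managed node that
# re-applies configuration via ansible-pull every 30 minutes.
- hosts: all
  become: true
  tasks:
    - name: Schedule a pull-style configuration run every 30 minutes
      ansible.builtin.cron:
        name: ansible-pull
        minute: "*/30"
        job: "ansible-pull -U https://example.com/config.git local.yml"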
Puppet vs Ansible

Ansible works on a push model, whereas Puppet works on a pull model, i.e. the nodes pull their configuration from the Puppet master.
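To illustrate the push model: the control node connects out to the managed hosts over SSH and applies the tasks, and the managed hosts never fetch anything themselves. A minimal sketch (host group and package name are placeholders), started from the control node with ansible-playbook -i inventory site.yml:

# Minimal push-model sketch: these tasks are pushed to the hosts over SSH.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present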

Related

ansible vs cronjob for background job to label dynamic nodes

I need to change the labels of worker nodes to a predefined combination of alphanumeric characters (example: machine01) as and when they join the cluster (and again whenever nodes leave or new nodes join). Is it possible to do this with Ansible, or do we need to set up a cron job? If it is possible with Ansible, I would like a hint on how to run a playbook once and keep it active in the background so it keeps checking for new node labels. Which approach is computationally cheaper?
How can we run a playbook once and keep it active (i.e. running) in the background to keep checking for new node labels?
Since Ansible is a push-based configuration management tool, it is not designed for such a use case. Ansible just connects to the remote device via SSH and performs the configuration tasks; if you want the check to repeat, you have to trigger the playbook on a schedule yourself (see the sketch after the documentation links below).
Further Documentation
Ansible concepts
Ansible playbooks
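As a hedged sketch of that scheduled alternative: instead of a playbook that stays resident, an idempotent playbook that labels any still-unlabelled nodes can simply be re-run periodically (from cron, or from a Tower/AWX schedule). The label key machine-id, the value machine01, the assumption that kubectl is configured on the control node, and the omitted per-node naming scheme are all placeholders.

# Hypothetical sketch: label worker nodes that have not been labelled yet,
# intended to be re-run on a schedule rather than kept running.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Find nodes that do not yet carry the label
      ansible.builtin.shell: kubectl get nodes -l '!machine-id' -o name
      register: unlabelled
      changed_when: false

    - name: Label each newly joined node (placeholder label value)
      ansible.builtin.command: kubectl label {{ item }} machine-id=machine01
      loop: "{{ unlabelled.stdout_lines }}"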

Liquibase with stolon via ansible - what is the right way to apply changes

If I have a stolon cluster on, let's say, 3 hosts, what is the right way to apply Liquibase changes?
Should I always choose the master-node host? And what will happen if I choose a replica host instead of the master?
What will happen if I try to run the same Liquibase script on each node (e.g. I have all three nodes listed in my Ansible scripts, don't yet know how to pick the stolon master host from the Ansible scripts, and decide to run the scripts on each host just in case)?
Are there examples of Ansible playbooks for this task?
Which JDBC URL should I use?
jdbc:postgresql://{{postgres_host}}:{{stolon_postgres_port}}/{{postgres_database}}
or
jdbc:postgresql://{{postgres_host}}:{{stolon_proxy_port}}/{{postgres_database}}
And where can I read more about these issues?
P.S. It seems using the proxy's port is the right way.
P.P.S. The way to do it is to execute the migration on any host; the proxy will route the execution to the master node anyway. But do it only once, which can be done with the run_once Ansible keyword.
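A minimal sketch of that run_once approach, reusing the variables from the JDBC URLs above (the stolon_nodes group name, the Liquibase command-line options and the changelog file name are placeholders and depend on your Liquibase version; credentials are omitted):

# Hypothetical sketch: run the migration exactly once, against the stolon
# proxy, which forwards the connection to the current master.
- hosts: stolon_nodes
  tasks:
    - name: Apply the Liquibase changelog once via the stolon proxy
      ansible.builtin.command: >
        liquibase
        --url=jdbc:postgresql://{{ postgres_host }}:{{ stolon_proxy_port }}/{{ postgres_database }}
        --changeLogFile=changelog.xml
        update
      run_once: true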

How does Ansible make sure that all the remote nodes are in the desired state as per the template/playbook?

Suppose that after running a playbook, someone changes the configuration on one or more of the nodes managed by Ansible. How does Ansible come to know that one or more of its managed nodes are out of sync, and how does it bring them back to the desired state?
I presume other automation platforms like Chef and Puppet have this, where the remote agent runs periodically to stay in sync with the template on the master server.
Also, what are the best practices for doing so?
Ansible doesn't manage anything by itself. It is a tool to automate tasks.
And it is agentless, so there is no way for remote hosts to report state updates on their own.
You may want to read about Ansible Tower. Excerpt from features list:
Set up occasional tasks like nightly backups, periodic configuration remediation for compliance, or a full continuous delivery pipeline with just a few clicks.
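The practical consequence is that re-running the same idempotent playbook (manually, from cron, or from a Tower schedule) is what brings drifted nodes back into line, and running it with the --check --diff options only reports drift without changing anything. A minimal sketch of such an idempotent task (paths and template name are placeholders):

# Hypothetical sketch: every re-run restores the file to the templated
# content if someone has changed it by hand; in check mode it only reports.
- hosts: all
  become: true
  tasks:
    - name: Enforce the desired sshd configuration
      ansible.builtin.template:
        src: sshd_config.j2
        dest: /etc/ssh/sshd_config
        owner: root
        group: root
        mode: "0600"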

Ansible tower to manage hosts configured for pull mode?

I currently manage ~20 machines located around the world, behind a variety of firewalls, using Ansible in pull mode (all machines pull the playbook from a git+ssh repository). I'd like better reporting of the status of the machines, so I am looking into Ansible Tower.
As far as I can tell, Tower only supports push mode. The docs for Ansible Tower are not clear: can it also manage machines that run in pull mode? That is, can each machine, for example, phone home to Tower to retrieve its configuration and report its results, rather than requiring Tower to push to those machines?
Alternative strategies using something like autossh + reverse tunnel are not options due to the variances of the remote machines firewalls and IT departments.
Yes, "pull" with Tower is used to initialize machines; for instance, when AWS creates them with autoscaling.
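That initialization works through Tower's provisioning callback: the new machine phones home to a job template's callback URL with a host config key, and Tower then pushes the job back to it. A hedged sketch of such a phone-home step, run on the new machine itself (the Tower URL, job template id, key, and accepted status codes are placeholders taken from your own Tower instance):

# Hypothetical sketch: ask Tower to run its job template against this host.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Call the Tower provisioning callback
      ansible.builtin.uri:
        url: "https://tower.example.com/api/v2/job_templates/42/callback/"
        method: POST
        body_format: form-urlencoded
        body:
          host_config_key: "REPLACE_WITH_HOST_CONFIG_KEY"
        # assumption: accept the success codes the callback may return
        status_code: [200, 201, 202]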

Automating Rundeck in a CD platform - adding nodes to Rundeck

I am exploring Rundeck for my Continuous Delivery platform. The challenge I foresee here is automating Rundeck itself: adding nodes to Rundeck whenever a new node/VM gets created.
I thought of creating the VM with the public keys of my Rundeck server and adding the VM details to the resources file [~/rundeck/projects/../resources.xml]. But it's an inefficient approach, as I have to manage the resources.xml file by removing the entries each time a VM is deleted. I primarily depend on Chef for infrastructure provisioning; getting the node inventory from Chef seems like a viable solution, but it adds more overhead and delays to the workflow.
It would be great if I could get some simple/clean suggestions for solving the problem.
As suggested, you could download and use the chef-rundeck gem from the link below.
https://github.com/oswaldlabs/chef-rundeck
But if you need audit information on the nodes, such as who added or deleted a node or when the node info changed, I would suggest maintaining a node info file in SVN or Git and using the URL source option.
This is supported in the Poise Rundeck cookbook via the rundeck_node_source_file resource.
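If you go the Git-tracked file route, Rundeck can also read node source files in its YAML resource format, which is easier to diff and review than XML. A hedged sketch of one node entry (node name, hostname, user and tags are placeholders):

# Hypothetical resources.yaml entry, tracked in Git and exposed to the
# project's URL node source.
web01:
  nodename: web01
  hostname: web01.example.com
  username: rundeck
  osFamily: unix
  tags: 'web,production'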
