ansible vs cronjob for background job to label dynamic nodes

I need to change the labels of worker nodes to a predefined combination of alphanumeric characters (for example: machine01) as and when they join the cluster, again and again as nodes leave and new nodes join. Is it possible to do this using Ansible, or do we need to set up a cronjob? If it is possible with Ansible, I would like a hint on how to run a playbook once and keep it active in the background so that it keeps checking for new node labels. Which is computationally cheaper?

How can we run a playbook once and keep it active (i.e. running) in the background to keep checking for new node labels?
Since Ansible is a push-based configuration management tool, it is not designed for such a use case. Ansible just connects to the remote device via SSH and performs its configuration tasks.
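If the work must still be expressed as Ansible, the usual workaround is to let a scheduler such as cron (or a Tower/AWX schedule) invoke the playbook periodically rather than keeping it running. A minimal sketch follows, assuming the cluster is Kubernetes and the control host has kubectl access; the label key machine-id, the label value, and the file paths are placeholders of my choosing, not anything prescribed by the question:

# crontab entry on the control host (interval and path are made up):
# */5 * * * * ansible-playbook /opt/playbooks/label_nodes.yml

# label_nodes.yml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Find nodes that do not carry the label yet
      ansible.builtin.command: >
        kubectl get nodes -l '!machine-id'
        -o jsonpath={.items[*].metadata.name}
      register: unlabeled
      changed_when: false

    - name: Label each unlabeled node
      ansible.builtin.command: kubectl label node {{ item }} machine-id=machine01
      loop: "{{ unlabeled.stdout.split() }}"

A short cron-driven run like this is also the computationally cheaper of the two options the question asks about, since nothing stays resident between runs.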
Further Documentation
Ansible concepts
Ansible playbooks

Related

Liquibase with stolon via Ansible - what is the right way to apply changes

If I have a stolon cluster on, let's say, 3 hosts, what is the right way to apply Liquibase changes?
Should I always choose the master-node host? And what will happen if I choose a replica host instead of the master?
What will happen if I try to run the same Liquibase script on each node? (I have all three nodes listed in my Ansible scripts, don't yet know how to choose the stolon master host from Ansible, and decided to run the scripts on each host just in case.)
Are there examples of Ansible playbooks for this task?
Which JDBC URL should I use?
jdbc:postgresql://{{postgres_host}}:{{stolon_postgres_port}}/{{postgres_database}}
or
jdbc:postgresql://{{postgres_host}}:{{stolon_proxy_port}}/{{postgres_database}}
And where can I read more about these issues?
P.S. It seems that using the proxy's port is the right way.
P.P.S. The answer is to execute the migration on any host; the proxy will forward it to the master node anyway. But do it only once, which can be done with the run_once Ansible keyword, as in the sketch below.
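Expressed as a playbook task, that P.P.S. could look like the following minimal sketch; the host and port variables come from the question's own URL template, while the changelog file and credential variables are invented for illustration (Liquibase CLI flag spellings vary between versions):

# A task inside the existing play over all three stolon hosts;
# run_once makes Ansible execute it on only one of them.
- name: Apply the Liquibase changelog once, through the stolon proxy
  ansible.builtin.command: >
    liquibase
    --url=jdbc:postgresql://{{ postgres_host }}:{{ stolon_proxy_port }}/{{ postgres_database }}
    --changeLogFile=changelog.xml
    --username={{ postgres_user }}
    --password={{ postgres_password }}
    update
  run_once: true

Pointing at the proxy port means it does not matter which host the task happens to land on: the proxy always forwards the connection to the current master.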

How to make a job run on all the servers in an Ansible Tower cluster?

We have a clustered Ansible Tower setup comprising 3 servers. I want to run a patching job on all 3 servers, but at the moment it runs on only one server (which Tower chooses based on an internal algorithm). How do I make it run across all 3 servers in the cluster at the same time?
You're referring to a "Sliced Job" (or a "Distributed Job"):
https://docs.ansible.com/ansible-tower/3.6.2/html/installandreference/glossary.html#term-distributed-job
The Ansible Tower documentation goes over "Job Slicing" in section 17 (for Tower version 3.6.2):
https://docs.ansible.com/ansible-tower/latest/html/userguide/job_slices.html
Please note this warning if you are trying to orchestrate a complex set of tasks across all the hosts:
Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) should not be configured as a slice job. Any job that does, may fail, and Tower will not attempt to discover or account for playbooks that fail when run as slice jobs.
Job slicing is useful when you have to reconfigure a lot of machines and want to use all of your Tower cluster's capacity. Patching hundreds of machines would be a good example; the only limits are ensuring that the source of the patches (the local patch server, or your Internet connection) has sufficient capacity to handle the load, and that any reboots of the systems are coordinated correctly in spite of the job slicing.
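If you prefer to configure the slicing as code rather than in the Tower UI, here is a hedged sketch using the awx.awx collection (an assumption that your Tower/AWX version ships it, with credentials supplied via the CONTROLLER_HOST, CONTROLLER_USERNAME and CONTROLLER_PASSWORD environment variables; the template name is made up):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Split the patching job template into 3 slices
      awx.awx.job_template:
        name: "Patch all servers"   # an existing template; the name is invented
        job_slice_count: 3          # e.g. one slice per Tower node
        state: present

With the slice count matching the number of Tower nodes, each slice becomes a separate job that the cluster can dispatch to a different node.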

How can I limit cron to one node?

When adding a node manually or via automatic horizontal scaling, the master node is cloned. This means that the crontab is cloned too, right?
How can I avoid a cron job starting on two or more nodes simultaneously (which is usually not intended)?
There are a few possible ways:
Your script can detect that it is not running on the master node, and then remove itself from the cron or simply do nothing. Each node has information about the master node of its layer (nodeGroup):
env | grep MASTER
MASTER_IP=172.25.2.1
MASTER_HOST=node153580
MASTER_ID=153580
Disable the cron via Cloud Scripting in the onAfterScaleOut event. Here is an example of how to use this event.
Deploy software templates as custom Docker images (even if you use a certified Jelastic template). Such images are not cloned during horizontal scaling; they are created from scratch.

How does Ansible make sure that all the remote nodes are in the desired state as per the template/playbook?

Suppose that after running a playbook, someone changes the configuration on one or more of the nodes managed by Ansible. How does Ansible come to know that its managed node(s) are out of sync, and how does it bring them back to the desired state?
I presume other automation platforms such as Chef and Puppet handle this by having the remote agent run periodically to stay in sync with the template on the master server.
Also, what are the best practices for doing so?
Ansible doesn't manage anything by itself; it is a tool to automate tasks.
It is also agentless, so there is no way for remote hosts to report state changes of their own accord.
You may want to read about Ansible Tower. An excerpt from its features list:
Set up occasional tasks like nightly backups, periodic configuration remediation for compliance, or a full continuous delivery pipeline with just a few clicks.
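As a concrete illustration of that "periodic configuration remediation" idea, here is a sketch, again assuming the awx.awx collection is available; the schedule name, template name, and recurrence rule are examples of mine, not anything from the answer:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Re-run the baseline playbook every night at 03:00 UTC
      awx.awx.schedule:
        name: nightly-drift-remediation
        unified_job_template: "baseline-configuration"   # an existing job template
        rrule: "DTSTART:20240101T030000Z RRULE:FREQ=DAILY;INTERVAL=1"
        state: present

Because well-written Ansible tasks are idempotent, simply re-running the same playbook on a schedule converges any drifted hosts back to the desired state; the re-run is the sync mechanism, not an agent.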

Ansible Tower to manage hosts configured for pull mode?

I currently manage ~20 machines located around the world, behind a variety of firewalls, using Ansible in pull mode (all machines pull the playbook from a git+ssh repository). I'd like better reporting of the machines' status, so I am looking into Ansible Tower.
As far as I can tell, Tower only supports push mode. The Ansible Tower docs are not clear: can it also manage machines that run in pull mode? That is, can each machine, for example, phone home to Tower to retrieve its configuration and report its results, rather than requiring Tower to push to those machines?
Alternative strategies using something like autossh plus a reverse tunnel are not options, due to the variations among the remote machines' firewalls and IT departments.
Yes: "pull" with Tower is used to initialize machines, for instance when AWS creates them with autoscaling.
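The Tower feature behind this answer is the provisioning callback: a freshly created machine phones home and asks Tower to run a job template against it. A hedged sketch of the phone-home side follows; the Tower URL, template ID, and host_config_key are invented:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Ask Tower to run its job template against this machine
      ansible.builtin.uri:
        url: "https://tower.example.com/api/v2/job_templates/42/callback/"
        method: POST
        body_format: form-urlencoded
        body:
          host_config_key: "0123456789abcdef"
        status_code: 201   # Tower replies 201 when it queues the job

Note that the callback only triggers a push from Tower back to the calling host, which must therefore still be reachable over SSH; it does not turn Tower into a true pull-mode server.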
