I have generated a file with a bunch of ALTER statements, based on certain conditions on the cluster, using an Ansible task.
Here's the sample file content:
alter table test1 with throttling = 0.0;
alter table test2 with throttling = 0.0;
alter table test3 with throttling = 0.0;
I want to log in and execute these ALTER statements with a delay of 2 minutes between them. I was able to achieve this with a shell script using the sleep command, copying the script from the control node and executing it on the remote node.
But the problem we noticed was that we were unable to check whether the script executed properly or failed (e.g. authentication to the DB failed).
Can we perform the same task using an Ansible module and execute the statements one by one with some delay?
Regarding
"Can we perform the same task using an Ansible module and execute them one by one with some delay?"
the short answer is: yes, of course.
A slightly longer minimal example:
- name: Run queries against db test_db with pause
  community.mysql.mysql_query:
    login_db: test_db
    query: "ALTER TABLE test{{ item }} WITH throttling = 0.0;"
  loop: [1, 2, 3]
  loop_control:
    pause: 120 # seconds
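Regarding the other pain point, detecting failures such as a rejected DB login: the module fails the task on connection or query errors, so failures no longer pass silently as with the shell script. As an untested sketch, you could also register the loop results and inspect them afterwards (same hypothetical test_db as above):

- name: Run queries against db test_db with pause
  community.mysql.mysql_query:
    login_db: test_db
    query: "ALTER TABLE test{{ item }} WITH throttling = 0.0;"
  loop: [1, 2, 3]
  loop_control:
    pause: 120 # seconds
  register: query_out # query_out.results holds one entry per item

- name: Show each query's result
  ansible.builtin.debug:
    var: query_out.results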
Further Documentation
Community.Mysql - Modules
mysql_query module – Run MySQL queries
Pausing with loop
Extended loop variables
Further Q&A
How to run MySQL query in Ansible?
Ensuring a delay in an Ansible loop
Related
Current setup: we have ~2000 servers (in one group).
I would like to know if there is a way to run x.yml on the whole group (where all 2k servers are) but in multiple parallel runs (threaded, or something like):
ansible-playbook -i prod.ini -l my_group[50%] x.yml
ansible-playbook -i prod.ini -l my_group[other 50%] x.yml
Solutions with AWX or Ansible Tower are not relevant.
Even using 500-1000 forks didn't give any improvement.
Try combining forks and the free strategy.
The default behavior of Ansible is:
"Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks."
So even if you increase the number of forks, each task will still wait for every host to finish before the play moves on. The free strategy allows each host to run until the end of the play as fast as it can:
- hosts: all
  strategy: free
  tasks:
    # ...
ansible-playbook -i prod.ini -f 500 -l my_group x.yml
As mentioned above, you should preferably increase forks and set the strategy to free. Increasing forks helps you run the playbook on more servers at once, and setting the strategy to free lets each server run through the tasks independently, without waiting for the others.
Please refer to the doc below for more clarification.
docs
Resolved by using the patterns my_group[:1000] and my_group[999:].
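For reference, that splitting would presumably be invoked along these lines (same prod.ini and x.yml as above); note that Ansible's subscript ranges are inclusive, so these two patterns overlap by a couple of hosts:

ansible-playbook -i prod.ini -l 'my_group[:1000]' x.yml
ansible-playbook -i prod.ini -l 'my_group[999:]' x.yml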
Forks didn't give any time decrease in my case.
Also, the free strategy actually multiplied the run time, which was pretty weird.
And debugging the free-strategy summary is quite difficult when you have 2k servers and about 50 tasks in the playbook.
Thanks everyone for sharing, much appreciated.
What I am trying to do is register variables accordingly. I am pinging an interface and then trying to verify whether the ping was successful or not. However, my code is not working the way I imagined. I have simplified it to the following sample:
- name: Verify ping
  ansible.builtin.shell: echo "yes it is online"
  register: tmp
  when: ping_output is search("5/5")

- name: Verify ping
  ansible.builtin.shell: echo "no it is offline"
  register: tmp
  when: ping_output is search("0/5")
The variable ping_output was registered in a previous task. What I have observed during my tests is that when I want to evaluate my variable tmp, it is sometimes not set. This happens when the first conditional is true: tmp.stdout is then set to "yes it is online", but even though Ansible skips the next conditional (the "else" branch), the skipped task still overwrites tmp, and at some point I receive "tmp.stdout is not defined". Do you have better suggestions on how to solve this?
Thanks in advance
EDIT:
Hi, thanks for the review. I'm trying to store information in variables, and yes, right, later I reference tmp.stdout. I actually ping from a Cisco device and save the output. There are several pings, so several outputs. Then there are tasks which evaluate these outputs. The goal is to prepare an appropriate description which I send via JSON as an API call. What the end user sees in the end is a description in an external trouble-ticketing system, with information such as which interfaces were pingable and which were not.
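A common way to avoid the skipped-task overwrite entirely is to compute the value with set_fact and an inline conditional instead of two mutually exclusive shell tasks. An untested sketch, reusing the ping_output variable from the question:

- name: Record ping result
  ansible.builtin.set_fact:
    ping_status: "{{ 'yes it is online' if ping_output is search('5/5') else 'no it is offline' }}"

This way ping_status is always defined, whichever branch applies.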
Ansible 2.9.6
There is a standard way to use retries in Ansible:
- name: run my command
  register: result
  retries: 5
  delay: 60
  until: result.rc == 0
  shell:
    cmd: >
      mycommand.sh
until is a passive check here.
How can I do the check with a separate command? Something like "retry command A several times until command B returns 0".
Of course I could put both commands inside one shell execution, "commandA ; commandB", and result.rc would then hold the exit status of the second one. But is there an Ansible way to do this?
The Ansible way would point towards retries over a block or include that contains a command task for each script. However, that's not supported. You can use a modified version of the workaround described here, though, but it starts to get complicated. Therefore, you may prefer to take Zeitounator's suggestion.
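If you do settle for the combined-shell variant mentioned in the question, a minimal sketch (with commandA and commandB as the placeholders from the question) would be:

- name: Retry command A until command B succeeds
  ansible.builtin.shell: commandA; commandB
  register: result
  retries: 5
  delay: 60
  until: result.rc == 0 # rc is the exit status of the last command, i.e. commandB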
In an Azure Pipelines YAML file, when defining multiple jobs in a single stage, one can specify dependencies between them. One can also specify the conditions under which each job runs.
Code #1
jobs:
- job: A
  steps:
  - script: echo hello
- job: B
  dependsOn: A
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
  steps:
  - script: echo this only runs for master
Code #2
jobs:
- job: A
  steps:
  - script: "echo ##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
    name: printvar
- job: B
  condition: and(succeeded(), ne(dependencies.A.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  steps:
  - script: echo hello from B
Question:
Code #1 & #2 above have different orders of the dependency and condition. Does the order matters? If so, what's matter? (what's the difference between different orders)
Let's discuss #1 and #2 separately.
Code #1:
Since there is no data connection between job A and job B (data connection here meaning variable sharing and the like), the order does not matter in #1. You can even drop the dependsOn entirely if you have no special requirements on the execution order between job A and job B.
BUT there is one key thing to pay attention to: the actual running order becomes random when you do not specify dependsOn. Most of the time the jobs will run in the order job A, job B; occasionally they will run as job B, job A.
Code #2:
Here dependsOn must be specified, because job B uses an output variable that was created in job A. Since the system allows the same variable name to exist in different jobs, you must specify dependsOn so that the system knows job B should take the variable skipsubsequent from job A and not from somewhere else. Only when this keyword is specified are the variables generated in job A exposed and available to the next jobs.
So, in a nutshell: once there is any data connection between jobs, e.g. passing a variable, you must specify dependsOn to connect the jobs to each other; the relative order of the dependsOn and condition keys themselves makes no difference.
I've been trying to introduce Ansible in our team to manage different kinds of application concerns, such as configuration files for products and applications, the distribution of maintenance scripts, and so on.
We don't like to work with "hostnames" in our team because we have 300+ of them with meaningless names. Therefore, I started out creating aliases for them in the Ansible hosts file, like:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
Meaning we have a BPM application named "app1" and it's deployed on two hosts in integration testing and on two hosts in user acceptance testing. So far so good. Now I can run an Ansible playbook to (for example) set up SSH access (authorized_keys) for team members or push a maintenance script. I can run those PBs on each host separately, on all hosts in ITT or UAT, or even everywhere.
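For illustration, with a hypothetical playbook site.yml and the inventory file above, those runs would look something like:

ansible-playbook -i hosts site.yml -l bpm-app1-i1   # one host
ansible-playbook -i hosts site.yml -l bpm-i         # all integration-testing hosts
ansible-playbook -i hosts site.yml -l bpm-all       # everywhere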
But, typically, we'll install the same application app1 again on an existing host but with a different purpose, say a "training" environment. My reflex would be to do this:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-t]
bpm-app1-t1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-t2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
bpm-t
But ... running PBs becomes a mess now and causes errors. Logically I have two alias names reaching the same user/host combination: bpm-app1-u1 and bpm-app1-t1. I don't mind, that's perfectly logical, but if I were to test a new maintenance script, I would first push it to bpm-app1-i1 for testing and, when OK, I would probably run the PB against bpm-all. But because of the non-unique user/host combinations for some aliases, the PB would run multiple times on the same user/host. Depending on the actions in the PB this may work coincidentally, but it may also fail horribly.
Is there no way to tell Ansible "Run on ALL unique user/host combinations"?
Since most tasks change something on the remote host, you can use conditionals to check for that change on the host before running.
For example, if your playbook has a task that runs a script which creates a file on the remote host, you can add a when clause to skip the task if the file exists, and check for the existence of that file with a stat task before it:
- name: Check whether the script has run in a previous instance by looking for the file
  ansible.builtin.stat:
    path: /path/to/something
  register: something

- name: Run script when the file above does not exist
  ansible.builtin.command: bash myscript.sh
  when: not something.stat.exists
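As for the literal "unique user/host combinations" part: to my knowledge there is no built-in switch for it, but an untested sketch along the following lines could end the play early for duplicate aliases, keeping only the first alias (after sorting) that points at a given ansible_user/ansible_host pair. It assumes both variables are set in the inventory for every host, as in the example above:

- name: Skip duplicate user/host combinations
  ansible.builtin.meta: end_host
  when: >-
    inventory_hostname != (ansible_play_hosts_all
      | map('extract', hostvars)
      | selectattr('ansible_host', 'equalto', ansible_host)
      | selectattr('ansible_user', 'equalto', ansible_user)
      | map(attribute='inventory_hostname')
      | sort | first)

Placed first in the play, this would make the remaining tasks run once per user/host pair, regardless of how many aliases share it.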