We are patching Windows servers with Ansible. For some reason a lot of "automatic" Windows services are not starting automatically, so I'm starting them with Ansible after boot:
- name: Start services
  when: services is defined
  ansible.windows.win_service:
    name: "{{ item }}"
    state: started
  with_items:
    - "{{ services }}"
In host_vars there is, for example, an mssql01.yml containing services: "SQL Server (MSSQLSERVER)".
This works as expected, but it results in a lot of host_vars files, which is not as clean as I had hoped. I thought about registering all services that were running before the reboot, and then starting any that were running and whose startup type is Automatic or Automatic (Delayed). Sadly, ansible.windows.win_service needs a service name, so it can't just register all services without knowing them.
Does anybody have a smart idea how to ensure that every service set to Automatic gets started, or do we need to troubleshoot why this is not happening on its own? Even if that were solved, you can't be 100% sure the services actually start, so a safety net would be good in any case.
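One way to avoid per-host service lists would be to discover the services on the target first and filter by startup type. A sketch, assuming the ansible.windows.win_service_info module (available in recent versions of the ansible.windows collection); the exact field names and values follow that module's return documentation:

```yaml
# Sketch: start every Automatic / Automatic (Delayed) service that is not running.
- name: Gather info about all Windows services
  ansible.windows.win_service_info:
  register: service_info

- name: Start automatic services that are not running
  ansible.windows.win_service:
    name: "{{ item.name }}"
    state: started
  loop: "{{ service_info.services }}"
  loop_control:
    label: "{{ item.name }}"
  when:
    - item.start_mode in ['auto', 'delayed']
    - item.state != 'started'
```

This keeps the per-host lists out of host_vars entirely, at the cost of looping over every service on the host.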
Related
I created a playbook that uses ansible_facts.services to restart specific services on multiple RHEL servers. The services that get restarted begin with a specific name and may or may not exist on the different hosts where the playbook is run.
I have this working correctly but I would also like to add a follow up task that checks the services that were restarted in the previous task to make sure they are running. The task would need to only check the services that were restarted then start the service if the status is stopped.
Please recommend the best way to do this. Include code if possible. Thanks!
It sounds odd, but you can try using handlers (to react to 'changes') and check_mode on the module to assert the state.
Add notify to the restarting task, and check_mode: true to the handler. The handler is basically the same systemd module, but without the ability to change anything.
... Or, if you want to test your infrastructure, you may like testinfra (pytest-testinfra) as a tool for checking it.
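A minimal sketch of that notify/check_mode idea; my_services is a placeholder for whatever list of service names your existing task loops over:

```yaml
# Sketch: the handler re-runs the same module in check mode, so it reports
# "changed" (without changing anything) if a restarted service is not running.
- hosts: all
  tasks:
    - name: Restart the matched services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: restarted
      loop: "{{ my_services }}"
      notify: verify services are running

  handlers:
    - name: verify services are running
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: started
      loop: "{{ my_services }}"
      check_mode: true
```

Because handlers only run when notified, the verification step is skipped entirely for hosts where nothing was restarted.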
I understand your question as: you would like to make sure that a list of services is running.
- name: Loop over all services and print name if not running
  debug:
    msg: "{{ item }}"
  when:
    - ansible_facts.services[item].state != 'running'
  with_items: "{{ ansible_facts.services }}"
Based on that example you could start the service if the status is stopped.
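Building on that, a sketch that starts any discovered service whose recorded state is not running; the myprefix match stands in for the "begin with a specific name" requirement from the question and is a placeholder:

```yaml
# Sketch: gather service facts, then start matching services that are stopped.
- name: Gather service facts
  ansible.builtin.service_facts:

- name: Start matching services that are not running
  ansible.builtin.service:
    name: "{{ item.key }}"
    state: started
  loop: "{{ ansible_facts.services | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  when:
    - item.key is match('myprefix')
    - item.value.state != 'running'
```

Since ansible_facts.services is a dictionary keyed by service name, dict2items turns it into a loopable list of key/value pairs.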
Further Q&A
Ansible: How to get disabled but running services?
How to list only the running services with ansible_facts?
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
In my Ansible playbook (Ansible 2.10) I'm using the lookup plugin to fetch all secrets from AWS Secrets Manager with the bypath attribute.
vars:
  # which environment: dev, test, uat, prd
  my_env: dev
  aws_secret_path: "mypath/{{ my_env }}"

- name: Get all secrets from AWS Secrets Manager
  set_fact:
    secret_value: "{{ lookup('amazon.aws.aws_secret', aws_secret_path, on_missing='skip', bypath=true, region='eu-west-1') }}"
It works fine, but I realized that it returns only up to 10 elements. Is there a way to return all elements, or to use pagination?
Thanks for help,
Greg
Looking at the source code, it seems they are ignoring the advice to honor NextToken, so you'll want to put that information into the GitHub issue (which you omitted from your question here).
Related to that: in the future, please include all of the helpful information from that GitHub issue when asking questions on the Stack Exchange network too. In both cases you're asking for help from people who are not at your computer, and they need to know what you know.
My question may be unclear, so before stating the problem I'll give an example to make it easier to follow.
Let's take Terraform as the example.
Imagine we use Terraform to spin up an EC2 instance in AWS, and we are currently storing the state locally. Let's say we need to add a tag to the EC2 instance.
To achieve that, we add this small block:
tags = {
  Name = "HelloWorld"
}
Nothing complex, as simple as that. As we add a tag, Terraform looks for changes against the state, and since it finds that only a tag has been added, it adds the tag without recreating the whole instance. (There are scenarios where it does need to recreate the instance, but let's leave those for now.)
So, as I mentioned, Terraform doesn't recreate the instance just to add a tag. If we add another tag, it again adds only the tag without recreating the instance.
No matter how many times we run terraform apply, it doesn't change existing resources unless we have changed something in the Terraform files.
So, let's now come to the real question.
Let's say we want to install httpd using Ansible. So I write an Ansible playbook that uses the yum package module to install it, then starts and enables the service.
Okay, it's as simple as that.
Now, if we run the same playbook a second time, will it try to execute the same tasks from scratch, without first checking whether it has run this playbook before?
I have had a couple of experiences where I tried to install some services, and when I executed the same playbook again, it failed.
Is it possible to preserve state in Ansible, just like we do in Terraform, so that it only executes the newer changes rather than starting from the beginning on the second run?
You seem to be describing idempotence:
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.
It's up to you to make sure your playbooks are idempotent in ansible. You could do this for your example using the package and service modules.
- name: Setup Apache
  hosts: webservers
  tasks:
    - name: Install Apache
      package:
        name: httpd
        state: present
    - name: Start and enable Apache
      service:
        name: httpd
        state: started
        enabled: yes
Ansible doesn't achieve idempotence by retaining state on the controller. Each module should check the state of the host it's operating on to determine what changes it needs to make to achieve the state specified by the playbook. If it determines that it doesn't need to make any changes, then that's what it reports back to the controller.
I want to write a playbook to synchronize a source file to a destination host and restart tomcat/apache if the file changed. The documentation for synchronize doesn't give any example of whether this is possible. Can anyone provide some pointers?
If you're only changing one file, you probably want to use copy instead of synchronize. However, this approach should work either way.
The handler system is designed for this sort of thing. The documentation there provides an example of bouncing memcached after a configuration file change:
Here’s an example of restarting two services when the contents of a
file change, but only if the file changes:
- name: template configuration file
  template:
    src: template.j2
    dest: /etc/foo.conf
  notify:
    - restart memcached
    - restart apache
The things listed in the notify section of a task are called handlers.
Handlers are lists of tasks, not really any different from regular
tasks, that are referenced by a globally unique name, and are notified
by notifiers. If nothing notifies a handler, it will not run.
Regardless of how many tasks notify a handler, it will run only once,
after all of the tasks complete in a particular play.
Here’s an example handlers section:
handlers:
  - name: restart memcached
    service:
      name: memcached
      state: restarted
  - name: restart apache
    service:
      name: apache
      state: restarted
I am relatively new to configuration management tools for big infrastructure. The company will be using Salt for Linux and Windows, but I guess the question does not relate to a specific tool.
The thing I don't get is this: imagine we have 30 machines in the cloud, with a couple of custom services installed on each of them. Versions can differ, depending on the customer's subscription, and each service has a config file that can contain machine-specific data. How do you update the config files for those services using Salt, or tools like Puppet, Chef, and Ansible?
I don't know about Salt, but in general, configuration management tools have groups of hosts (several machines sharing the same configuration), plus templates and variables for customizing each machine.
For example, you can set the variable port=8888 for host1 and port=9999 for host2, while the nginx template stays something like this:
server {
    listen {{ port }};
}
for both servers.
The same idea applies to machine-specific data; for example (Ansible):
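In Ansible terms, those per-host values would typically live in host_vars, with a single template task rendering the file on both machines. A minimal sketch; the file paths and template name are illustrative:

```yaml
# host_vars/host1.yml
port: 8888

# host_vars/host2.yml (a separate file)
port: 9999

# Playbook task rendering the shared template for every host in the group:
- name: Deploy nginx config from the shared template
  ansible.builtin.template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/conf.d/default.conf
```

Each host gets the same template, but the rendered file differs because the port variable is resolved per host.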
- name: start container with mongo
  docker:
    name: test-mongo
    state: reloaded
    image: mongo
  when: master_ip == ansible_default_ipv4.address
where master_ip is my variable and ansible_default_ipv4.address is the IP address Ansible used to connect to this host.
Hope this helps.
With Chef you would use an ERB template for the config and, in most cases, interpolate based on node attributes. Those might be set directly on the node itself, or you could make a role per customer and set customer-specific attribute data there.