I'm trying to update a CloudFormation stack via Ansible, changing just one parameter value. The stack was created earlier and has about 20 input params, but I only need to update the value of one. I tried the following:
- name: update
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    disable_rollback: false
  args:
    template_parameters:
      CreateAlarms: "{{ create_alarms }}"
When I run it, the play throws an error stating that it expects values for the other template params. The Ansible documentation at http://docs.ansible.com/ansible/latest/cloudformation_module.html says: "If state is present, the stack does exist, and neither template nor template_url is specified, the previous template will be reused." How do I tell the cloudformation module to use the previous parameter values as well? I know the AWS CLI supports this via the UsePreviousValue flag, but how do I do it with the Ansible cloudformation module?
Thanks in advance.
Author/maintainer of the current Ansible cloudformation module here. There isn't a way to reuse previous values; you must specify the parameters every time. Usually that's fine, because you have your params stored in your Ansible playbook anyhow.
If you're nervous, the values are listed in the CloudFormation console, and you can also use changesets in Ansible to make sure only the expected values are changing.
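For example, a minimal sketch of the changeset approach, assuming you keep all 20 of the stack's parameters in a stack_params dictionary in your playbook vars (that variable name is hypothetical):

- name: Preview the single-parameter change with a changeset
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    create_changeset: true
    changeset_name: "update-create-alarms"
    template_parameters: "{{ stack_params | combine({'CreateAlarms': create_alarms}) }}"

You can then review the changeset in the CloudFormation console and confirm that only CreateAlarms is changing before executing it.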
Ansible 2.9.2
Tower 3.6.1
There are two Bitbucket repositories, viz. bitbucketconfigurator and sonarconfigurator, which contain playbooks to configure Bitbucket and SonarQube projects, respectively. Here, I am showing just the bitbucketconfigurator:
In the above project, there is a group_vars/all YAML file as follows (note the entry bitbucket_token: "{{ PAT_admin_admin }}"):
# Defaults used for branch access and restrictions
# Global vars used for rest api
bitbucket_base_url: 'https://git.net'
bitbucket_rest_base_endpoint: 'rest/api/1.0/projects'
bitbucket_rest_url: "{{ bitbucket_base_url }}/{{ bitbucket_rest_base_endpoint }}"
bitbucket_branch_rest_endpoint: "rest/branch-permissions/2.0/projects"
bitbucket_branching_url: "{{ bitbucket_base_url }}/rest/branch-utils/1.0/projects"
sonar_for_bitbucket_base_endpoint: 'rest/sonar4stash/1.0/projects'
bitbucket_token: "{{ PAT_admin_admin }}"
bitbucket_rest_api_latest: 'rest/api/latest'
bitbucket_project_name_lc: "{{ bitbucket.project.name | lower }}"
bitbucket_repository_name_lc: "{{ bitbucket.project.repository.name | lower }}"
The list of job templates (an example is shown further below):
As an example, let's consider the bb_configure job template. I am using 'Extra Variables' to pass a YAML file/dictionary, e.g.:
Here is the list of all the workflow templates; I will summarize their usage subsequently:
I tested the bitbucketconfigurator playbooks in a single workflow, viz. wt-bitbucket. It runs successfully. Note that it uses no-arg versions of the individual playbooks, i.e. the config YAML file is passed to wt-bitbucket and NOT to the individual playbooks.
In the above workflow template, the extra vars are used:
Now, I am using the wk_onboarding_pipeline as a 'master' workflow template which invokes the no-arg versions of the workflow templates:
This 'master' template now has the config YAML file; also, note the entry PAT_admin_admin:
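For illustration, a hypothetical sketch of what those extra vars look like, based on the group_vars entries above (all concrete values are placeholders):

PAT_admin_admin: "<personal-access-token>"
bitbucket:
  project:
    name: myproject
    repository:
      name: myrepo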
Issue:
When I run this 'master' template, the first playbook of the first workflow template fails as it is unable to access the 'PAT_admin_admin' passed in the extra vars.
Am I making a mistake in the invocation, or have I incorrectly configured the workflow templates within a workflow template?
I need some help using the consul_kv module with Ansible versions since 2.8.x. Maybe I missed something, but I took a look at the code of the module and I don't really see changes between 2.7.x and 2.8.x that can explain the problem I got.
With Ansible 2.7.x, when I try to get a value from Consul, I get the Consul host, port, and path from my env vars and execute my code like this:
# group_vars/all
consul_path : "{{ lookup('env','ANSIBLE_CONSUL_PATH') }}"
consul_host: "{{ lookup('env','ANSIBLE_CONSUL_HOST') }}"
consul_port: "{{ lookup('env', 'ANSIBLE_CONSUL_PORT') }}"
- hosts: localhost
  tasks:
    - name: test ansible 2.8.5 with consul
      debug:
        msg: "{{ lookup('consul_kv', consul_path + 'path/to/value') }}"
It works on 2.7.0 and I get my value, but it doesn't work on 2.8.x: on those newer versions I need to specify the host and port in every line that uses the lookup:
msg: "{{ lookup('consul_kv', 'path/to/value', host='myconsulhost.com', port='80') }}"
Is there a way to continue to use env vars in Ansible 2.8.x with this module?
The fine manual says that the lookup now uses the $ANSIBLE_CONSUL_URL environment variable to determine the protocol, hostname, and port, or (as you observed) the inline kwargs to the lookup function. The group_vars you mentioned no longer seem to be consulted.
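A sketch of the 2.8.x-style equivalent, assuming your Consul answers plain HTTP on port 80 as in your example (the URL is set in the environment of the shell that runs ansible-playbook):

# Run with: ANSIBLE_CONSUL_URL=http://myconsulhost.com:80 ansible-playbook playbook.yml
- hosts: localhost
  tasks:
    - name: test consul lookup configured via ANSIBLE_CONSUL_URL
      debug:
        msg: "{{ lookup('consul_kv', 'path/to/value') }}"

The lookup can then omit the host and port kwargs on every call.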
You'll also want to be careful, as your group_vars/all (at least in this question; unknown if you are really doing it) has a trailing space in consul_path :, which creates a variable named consul_path<space>.
I have a CloudFormation stack that was created using the Ansible cloudformation module, and some masked parameters in it were then updated manually by a separate operations team.
Now I would like to update the stack to perform a version upgrade, and while this is easily done in the AWS Console and through the AWS CLI, I can't seem to find a way to do this through the Ansible module.
Based on another post here, it was noted that upgrades are not possible, and the only way was to simply not use Ansible.
I have tried using the Ansible cloudformation_facts module to fetch the parameters, to no avail. Is there any other method to fetch this data from CloudFormation, or will I have to accept that I cannot use Ansible?
Thank you in advance.
You can fetch all the parameters from CloudFormation using Ansible with something like the below:
---
- name: Get CloudFormation stats
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack
If you had a parameter called "subnet-id", you could view what the return looks like with this:
---
- name: Get CloudFormation stats
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack

- debug: msg="{{ my_stack.ansible_facts.cloudformation[stack_name].stack_parameters['subnet-id'] }}"

(Note the bracket syntax: because subnet-id contains a hyphen, dot notation would not work in Jinja2.)
The return would look like this:
ok: [localhost] => {
"msg": "subnet12345"
}
If values are masked, however, you won't be able to see what they were. So in that case the answer is that you shouldn't be updating CloudFormation directly if you're trying to move over to Ansible. Rather, have the values updated in an encrypted file in your source control, and build from there with Ansible.
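Tying this back to the version-upgrade question, a minimal sketch that feeds the fetched parameters back into an update, overriding just one of them (the Version parameter name and target_version variable are hypothetical, and this assumes none of the parameters are masked/NoEcho, since those come back as asterisks):

---
- name: Get current stack parameters
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack

- name: Update the stack, reusing the fetched parameters
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    template_parameters: "{{ my_stack.ansible_facts.cloudformation[stack_name].stack_parameters | combine({'Version': target_version}) }}"

Because neither template nor template_url is given, the previous template is reused, as quoted from the docs above.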
I'm having some trouble getting my playbook to include another one. I use a playbook to roll out a clean VM. After that, I'd like to run another playbook to configure the VM in a certain way. I've added the new host to the inventory and have SSH access to it.
Our team has set up a project per server type. I've retrieved the right path to the project at an early stage (running against localhost) and used set_fact to put it in "servertype_project".
I expect this to work (at the playbook-level, running against the new VM):
- name: "Run servertype playbook ({{ project }}) on VM"
  vars:
    project: "{{ hostvars['localhost']['servertype_project'] }}"
  include: "{{ project }}/ansible/playbook.yml"
But it fails the syntax check with this message:
ERROR! {{ hostvars['localhost']['servertype_project'] }}: 'hostvars' is undefined
I can retrieve the right string if I refer to {{ hostvars['localhost']['servertype_project'] }} from within a task, but not from the level at which I can include another playbook.
Since the value is determined at runtime, I think set_fact is the correct way to store the variable, but that one is host-specific. Is there any way I can pass it along to this host as well? Or did I miss some global-var-like option?
You are not missing anything. This is expected. Facts are bound to hosts, and hostvars is not accessible in playbook scope.
There is no way to define a variable that would be accessible in the include declaration, except for passing it as an extra-vars argument on the CLI. But if you use an in-memory inventory (which is a reasonable assumption), then calling another ansible-playbook instance is out of the question.
I think you should rethink your flow and, instead of including a dynamically-defined play, have statically-defined play(s) run against your in-memory inventory and include the roles conditionally.
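For illustration, a minimal sketch of that approach, with hypothetical role names standing in for your per-servertype projects:

- hosts: new_vms
  tasks:
    - name: Configure a webserver VM
      include_role:
        name: webserver_config
      when: "'webserver' in hostvars['localhost']['servertype_project']"

    - name: Configure a database VM
      include_role:
        name: database_config
      when: "'database' in hostvars['localhost']['servertype_project']"

Since when conditions are evaluated at task level, hostvars is available there, as you observed.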
I have just created a VPC on AWS with Ansible, like so:
- name: Ensure VPC is present
  ec2_vpc:
    state: present
    region: "{{ ec2_region }}"
    cidr_block: "{{ ec2_subnet }}"
    subnets:
      - cidr: "{{ ec2_subnet }}"
    route_tables:
      - subnets:
          - "{{ ec2_subnet }}"
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    internet_gateway: yes # we want our instances to connect to the internet
    wait: yes # wait for the VPC to be in state 'available' before returning
    resource_tags: { Name: "my_project", environment: "production", tier: "DB" } # we tag this VPC so it will be easier for us to find later
Note that this task is called from another playbook that takes care of filling in the variables.
Also note that I am aware the ec2_vpc module is deprecated and I should update this piece of code soon.
But I think those points are not relevant to the question.
Notice that in the resource_tags I have a Name for the project. When I first ran this playbook, I did not have a name.
The VPC was created successfully the first time, but then I wanted to add a name to it. So I added one in the play, without knowing exactly how Ansible would know I wanted to update the existing VPC.
Indeed, Ansible created a new VPC and did not update the existing one.
So the question is, how do I update the already existing one instead of creating a new resource?
After a conversation on the Ansible IRC (can't remember the usernames, sorry), it appears this specific module is not good at updating resources.
It looks like the answer lies in the newer ec2_vpc_net module, specifically its multi_ok option, which by default won't duplicate VPCs with the same name and CIDR block; see the sketch after the docs link below.
docs: http://docs.ansible.com/ansible/ec2_vpc_net_module.html#ec2-vpc-net
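For reference, a minimal sketch of the equivalent ec2_vpc_net task, with the tag values mirroring the original play (subnets, route tables, and the internet gateway move to the separate ec2_vpc_subnet, ec2_vpc_route_table, and ec2_vpc_igw modules, not shown here):

- name: Ensure VPC is present, matched by Name + CIDR
  ec2_vpc_net:
    name: my_project
    cidr_block: "{{ ec2_subnet }}"
    region: "{{ ec2_region }}"
    tags:
      environment: production
      tier: DB
    multi_ok: no # default; an existing VPC with the same name and CIDR is updated rather than duplicated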