Use Ansible facts while executing a task in Ansible

I am executing the Ansible task below based on a fact value gathered via Ansible.
- hosts: mirth1
  vars:
    env: dev
  gather_facts: true
  tasks:
    #- name: Build corresponding JSON files
    - name: Build a Stream
      shell: uptime
      when: tags['ha'] == 'active'
Fact values gathered via Ansible:
"hostvars[inventory_hostname]": "{'httpd_port': 80, 'ntpserver': '192.168.1.2', 'ansible_user': 'ec2-user', 'inventory_file': '/Users/bada/Documents/we-ansible/inventory_awsplugin/test', 'inventory_dir': '/Users/bada/Documents/we-ansible/inventory_awsplugin/test', 'ami_launch_index': 0, 'image_id': 'ami-0915bcb5fa654992', 'instance_id': 'i-06bd5115d656789a9', 'instance_type': 't3.small', 'key_name': 'mykey', 'launch_time': datetime.datetime(2021, 3, 5, 5, 44, 35, tzinfo=tzutc()), 'monitoring': {'state': 'disabled'}, 'placement': {'availability_zone': 'us-east-1a', 'group_name': '', 'region': 'us-east-1'}, 'private_dns_name': 'ip-172-12-16-224.ec2.internal', 'private_ip_address': '172.12.16.224', 'state': {'code': 16, 'name': 'running'}, 'subnet_id': 'subnet-04fcfc6', 'vpc_id': 'vpc-0cf0a45', 'architecture': 'x86_64', 'block_device_mappings': [{'device_name': '/dev/xvda', 'ebs': {'attach_time': datetime.datetime(2021, 3, 5, 5, 44, 36, tzinfo=tzutc()), 'delete_on_termination': True, 'status': 'attached', 'volume_id': 'vol-057912c770df38754'}}], 'client_token': 'a0ce63e5', 'ebs_optimized': False, 'ena_support': True, 'hypervisor': 'xen', 'network_interfaces': [{'attachment': {'attach_time': datetime.datetime(2021, 3, 5, 5, 44, 35, tzinfo=tzutc()), 'attachment_id': 'eni-attach-03fc486b4c06970ce', 'delete_on_termination': True, 'device_index': 0, 'status': 'attached', 'network_card_index': 0}, 'description': '', 'groups': [{'group_name': 'sg_priv_vce_test', 'group_id': 'sg-0b89c5'}], 'ipv6_addresses': [], 'mac_address': '12:0d:44:15:55:a9', 'network_interface_id': 'eni-0772a53', 'owner_id': '58435', 'private_dns_name': 'ip-172-16-12-224.ec2.internal', 'private_ip_address': '172.16.12.224', 'private_ip_addresses': [{'primary': True, 'private_dns_name': 'ip-172-16-12-224.ec2.internal', 'private_ip_address': '172.16.12.224'}], 'source_dest_check': True, 'status': 'in-use', 'subnet_id': 'subnet-04fcfc42cda7cdaa6', 'vpc_id': 'vpc-0cf0a4dded14c2f05', 'interface_type': 
'interface'}], 'root_device_name': '/dev/xvda', 'root_device_type': 'ebs', 'security_groups': [{'group_name': 'sg_priv_test', 'group_id': 'sg-0b8b36c5'}], 'source_dest_check': True, 'tags': {'ha': 'active', 'Platform': 'linux', 'Role': 'Mirth', 'Environment': 'test', 'Name': 'we-test-mirth1', 'PrincipalId': 'AIDAIXY2CRBZU5K', 'Owner': 'we-ansible', 'Product': 'we'}, 'virtualization_type': 'hvm', 'cpu_options': {'core_count': 1, 'threads_per_core': 2}, 'capacity_reservation_specification': {'capacity_reservation_preference': 'open'}, 'hibernation_options': {'configured': False}, 'metadata_options': {'state': 'applied', 'http_tokens': 'optional', 'http_put_response_hop_limit': 1, }, 'enclave_options': {'enabled': False}, 'inventory_hostname': '172.16.12.224', 'group_names': ['aws_ec2', 'linux', 'mirthlin_servers'], 'ansible_facts': {}, 'playbook_dir': '/Users/bada/Docs/we-ansible', 'ansible_playbook_python': '/usr/local/opt/python#3.9/bin/python3.9', 'ansible_config_file': '/Users/bada/Documents/vce-ansible/ansible.cfg', 'groups': {'all': ['172.16.13.21','mirth_servers': ['172.16.12.224']}, 'omit': '__omit_place_holder__6af09f538bad60133ef3eb0949ec09272254aad1', 'ansible_version': {'string': '2.10.3', 'full': '2.10.3', 'major': 2, 'minor': 10, 'revision': 3}, 'ansible_check_mode': False, 'ansible_diff_mode': False, 'ansible_forks': 5, 'ansible_inventory_sources': ['/Users/bada/Documents/vce-ansible/inventory_awsplugin/test'], 'ansible_verbosity': 0}"
As you can see in the facts above, Ansible stores the instance tags as a fact. I am trying to execute a task when the tag ha has the value active, but it keeps failing. Could you let me know how I can execute a task based on a stored fact value?

As pointed out in the documentation:
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the ansible_facts variable.
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#ansible-facts
So your task should be:
- shell: uptime
  when: ansible_facts.tags['ha'] == 'active'
  ## or ansible_facts.tags.ha
  ## or ansible_facts['tags']['ha']
  ## if you want to keep them coherent
And since your debug output was produced this way, you could also have used:
when: hostvars[inventory_hostname].tags.ha == 'active'
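Putting the pieces together, the full play might look like this (an untested sketch assembled from the snippets above):

```yaml
- hosts: mirth1
  vars:
    env: dev
  gather_facts: true
  tasks:
    - name: Build a Stream
      shell: uptime
      # Only run on hosts whose EC2 tag "ha" is "active"
      when: ansible_facts.tags['ha'] == 'active'
```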


How does Phoenix conglomerate cookie data?

I'm attempting to store some data into the session storage and I'm getting the same cookie error as this guy, the cookie is over the system byte limit of 4096.
This seems pretty straightforward: don't attempt to store more than the system limit in the session. Right, but I'm not attempting to do that. Clearly the cookie is over 4096 bytes and my additions have caused it to overflow, but that doesn't explain where the data is.
The data I'm attempting to store is only 1500 bytes. In fact, the entire session that is being saved is 1500 bytes (the errored session). That's nowhere near the overflow limit. So that means one thing for certain: the data stored in :plug_session inside of conn is not the only data being stored inside of the session cookie.
This is the session that's throwing the CookieOverflowError:
:plug_session => %{
  "_csrf_token" => "XmE4kgdxk4D0NwwlfTL77Ic62t123123sdfh1s",
  "page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
  "shopping_cart_changeset" => #Ecto.Changeset<
    action: nil,
    changes: %{
      order: #Ecto.Changeset<
        action: :insert,
        changes: %{
          address: #Ecto.Changeset<
            action: :insert,
            changes: %{
              address_one: "800 Arola Drive, apt 321, apt 321",
              address_two: "apt 321",
              city: "Wooster",
              company: "Thomas",
              country: "US",
              name: "user one",
              phone: "3305551111",
              state: "WV",
              zip_code: "44691"
            },
            errors: [],
            data: #FulfillmentCart.Addresses.Address<>,
            valid?: true
          >,
          priority: false,
          shipping_method: #Ecto.Changeset<
            action: :insert,
            changes: %{id: 2, is_priority?: false, name: "3 Day Select"},
            errors: [],
            data: #FulfillmentCart.ShippingMethods.ShippingMethod<>,
            valid?: true
          >
        },
        errors: [],
        data: #FulfillmentCart.Orders.Order<>,
        valid?: true
      >
    },
    errors: [],
    data: #FulfillmentCart.ShoppingCarts.ShoppingCart<>,
    valid?: true
  >,
  "user_id" => 8
},
I actually followed this guide on decoding a Phoenix session cookie, and I can get the session before the error.
Which gives me:
iex(8)> [_, payload, _] = String.split(cookie, ".", parts: 3)
["SFMyNTY",
"g3QAAAADbQAAAAtfY3NyZl90b2tlbm0AAAAYWU92dkRfVDh5UXlRTUh4TGlpRTQxOFREbQAAAApwYWdlX3RyYWlsbAAAAAJoAm0AAAABL20AAAAHQ2F0YWxvZ2gCawABL20AAAAHQ2F0YWxvZ2ptAAAAB3VzZXJfaWRhCA",
"Ytg5oklzyWMvtu1vyXVvQ2xBzdtMnS9zVth7LIRALsU"]
iex(9)> {:ok, encoded_term } = Base.url_decode64(payload, padding: false)
{:ok,
<<131, 116, 0, 0, 0, 3, 109, 0, 0, 0, 11, 95, 99, 115, 114, 102, 95, 116, 111,
107, 101, 110, 109, 0, 0, 0, 24, 89, 79, 118, 118, 68, 95, 84, 56, 121, 81,
121, 81, 77, 72, 120, 76, 105, 105, 69, 52, 49, ...>>}
iex(10)> :erlang.binary_to_term(encoded_term)
%{
"_csrf_token" => "YOvvD_T8yQyQMHxLiiE418TD",
"page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
"user_id" => 8
}
iex(11)>
This is 127 bytes, so the addition of the 1500 bytes isn't the problem. It's the other allocation of storage that isn't represented inside of the session. What is that?
My assumption about the byte size of the text itself in :plug_session was correct. However, the cookie is overflowing not because the byte size of the decoded text in :plug_session is too big, but because the encoded version of :plug_session is too big. I figured this out by creating multiple cookies and looking at the byte_size of the data.
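The point is language-independent, so here is a rough Python sketch (the data is made up) of how serializing a structure with all of its keys and metadata, then Base64-encoding it, inflates the payload well beyond the visible text:

```python
import base64
import pickle

# A nested dict standing in for the %Ecto.Changeset{} tree (hypothetical data;
# only the relative sizes matter here, not the values).
changeset_like = {
    "action": None,
    "changes": {
        "order": {
            "address": {
                "address_one": "800 Arola Drive, apt 321, apt 321",
                "city": "Wooster",
                "zip_code": "44691",
            },
            "priority": False,
        }
    },
    "errors": [],
    "valid": True,
}

# The text a user would "see" in the session.
visible_text = "800 Arola Drive, apt 321, apt 321" + "Wooster" + "44691"

serialized = pickle.dumps(changeset_like)       # keys + structure + type info
encoded = base64.urlsafe_b64encode(serialized)  # roughly +33% on top of that

print(len(visible_text), len(serialized), len(encoded))
```

The encoded form is always the largest, which is why a session whose visible data is ~1500 bytes can still blow past a 4096-byte cookie limit once a full struct is serialized and encoded.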
Save a new cookie
conn = put_resp_cookie(conn, "address",
  changeset.changes.order.changes.address.changes, sign: true)
Get a saved cookie
def get_resp_cookie(conn, attribute) do
  cookie = conn.req_cookies[attribute]

  case cookie != nil do
    false ->
      {:invalid, %{}}

    true ->
      [_, payload, _] = String.split(cookie, ".", parts: 3)
      {:ok, encoded_term} = Base.url_decode64(payload, padding: false)
      {val, _max, _max_age} = :erlang.binary_to_term(encoded_term)
      {:valid, val}
  end
end
get_resp_cookie/2 pattern matching
address_map =
  case Connection.get_resp_cookie(conn, "address") do
    {:invalid, val} -> IO.puts("Unable to find cookie."); val
    {:valid, val} -> val
  end
I made a few changes to the way I save the data from when I posted this. Namely, I am now storing a map of the changes, not the actual changeset... which means that the session most likely would have worked for me all along.
I think the answer to this issue was that the encoded %Ecto.Changeset{} was too big for the cookie to hold.
If you use this solution, be wary: you have to manage the newly created cookies yourself.

Boto3 Amplify list apps

I have a lot of Amplify apps which I want to manage via Lambdas. What is the equivalent of the CLI command aws amplify list-apps in boto3? I have made multiple attempts, but none worked out for me.
My bit of code that was using nextToken looked like this:
amplify = boto3.client('amplify')
apps = amplify.list_apps()
print(apps)
print('First token is: ', apps['nextToken'])
while 'nextToken' in apps:
    apps = amplify.list_apps(nextToken=apps['nextToken'])
    print('=====NEW APP=====')
    print(apps)
    print('=================')
Then I tried to use paginators like:
paginator = amplify.get_paginator('list_apps')
response_iterator = paginator.paginate(
    PaginationConfig={
        'MaxItems': 100,
        'PageSize': 100
    }
)
for i in response_iterator:
    print(i)
Both attempts produced inconsistent output. The first one printed the first token and the second entry, but nothing more. The second one gives only the first entry.
Edit with more attempts/info + output. Below is the piece of code:
apps = amplify.list_apps()
print(apps)
print('---------------')
new_app = amplify.list_apps(nextToken=apps['nextToken'], maxResults=100)
print(new_app)
print('---------------')
Returns (some sensitive output bits were removed):
EVG_long_token_x4gbDGaAWGPGOASRtJPSI='}
---------------
{'ResponseMetadata': {'RequestId': 'f6...e9eb', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'application/json', 'content-length': ...}, 'RetryAttempts': 0}, 'apps': [{'appId': 'dym7444jed2kq', 'appArn': 'arn:aws:amplify:us-east-2:763175725735:apps/dym7444jed2kq', 'name': 'vesting-interface', 'tags': {}, 'repository': 'https://github.com/...interface', 'platform': 'WEB', 'createTime': datetime.datetime(2021, 5, 4, 3, 41, 34, 717000, tzinfo=tzlocal()), 'updateTime': datetime.datetime(2021, 5, 4, 3, 41, 34, 717000, tzinfo=tzlocal()), 'environmentVariables': {}, 'defaultDomain': 'dym7444jed2kq.amplifyapp.com', 'customRules': _rules_, 'productionBranch': {'lastDeployTime': datetime.datetime(2021, 5, 26, 15, 10, 7, 694000, tzinfo=tzlocal()), 'status': 'SUCCEED', 'thumbnailUrl': 'https://aws-amplify-', 'branchName': 'main'}, - yarn install\n build:\n commands:\n - yarn run build\n artifacts:\n baseDirectory: build\n files:\n - '**/*'\n cache:\n paths:\n - node_modules/**/*\n", 'customHeaders': '', 'enableAutoBranchCreation': False}]}
---------------
I am very confused why the next iteration doesn't have a nextToken, and how I can get to the next appId.
import boto3
import json

session = boto3.session.Session(profile_name='<Profile_Name>')
amplify_client = session.client('amplify', region_name='ap-south-1')
output = amplify_client.list_apps()
print(output['apps'])
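Note that the snippet above only returns the first page. A sketch of manual nextToken pagination, factoring the loop from the question into a helper (list_all_apps is a hypothetical name, not part of boto3):

```python
def list_all_apps(client, max_results=100):
    """Collect every Amplify app across pages by following nextToken."""
    apps = []
    kwargs = {'maxResults': max_results}
    while True:
        page = client.list_apps(**kwargs)
        apps.extend(page.get('apps', []))
        token = page.get('nextToken')
        if not token:
            break  # last page: no token returned
        kwargs['nextToken'] = token
    return apps

# Usage sketch:
# import boto3
# amplify = boto3.client('amplify')
# print([app['appId'] for app in list_all_apps(amplify)])
```

This keeps accumulating results until the service stops returning a nextToken, which is the contract paginated AWS list APIs generally follow.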

Tarantool Querying Questions

I have the following data structure format:
unix/:/var/run/tarantool/tarantool.sock> s:format()
---
- [{'name': 'id', 'type': 'unsigned'}, {'name': 'version', 'type': 'array'}, {'name': 'data',
'type': 'array'}]
...
And I have the following data already inside it:
unix/:/var/run/tarantool/tarantool.sock> s:select{}
---
- - [0, [[21, 'action123'], [12, 'actionXYZ'], [11, 'actionABC']], [['actionXYZ',
'SOME_JAVASCRIPT_CONTENT']]]
- [1, [[33, 'action123'], [12, 'baseXYZ'], [11, 'baseABC']], [['bas123', 'SOME_CSS_CONTENT']]]
...
I have read through the references and documentation and I'm a bit lost on how to do the following:
1. What's the "WHERE" equivalent? i.e. a select to find entries that have a version of 12. I'm not seeing applicable examples in
https://www.tarantool.io/en/doc/2.2/reference/reference_lua/box_space/#lua-function.space_object.select
2. List items with the field names (so I know what block I'm looking at) - in a way, sort of like having "column headers" in your results in SQL. I have named tuples in my format() - how can I see these names when I'm querying data?
3. Pretty print! (preferably YAML.) I tried using https://www.tarantool.io/en/doc/2.2/reference/reference_lua/yaml/ to wrap around my select statements, but nothing was working.
You need to use indexes for efficient imperative queries; look here:
https://www.tarantool.io/en/doc/2.2/reference/reference_lua/box_space/#lua-function.space_object.create_index
https://www.tarantool.io/en/doc/2.2/reference/reference_lua/box_index/
Use tuple:tomap():
https://www.tarantool.io/en/doc/2.2/reference/reference_lua/box_tuple/#lua-function.tuple_object.tomap
It depends on where you want it pretty. You may have to tune YAML settings, or simply chain tomap calls:
tarantool> box.space.TEST:pairs():map(function(x) return x:tomap({names_only=true}) end):totable()
---
- - COLUMN1: 1
    COLUMN2: a
  - COLUMN1: 13
    COLUMN2: a
  - COLUMN1: 1000
    COLUMN2: a
...

PouchDB: filtering, ordering and paging

Very similar to these two CouchDB questions: 3311225 and 8924793, except that these approaches don't allow partial matching. Having e.g. these entries:
[{_id: 1, status: 'NEW', name: 'a'},
{_id: 2, status: 'NEW', name: 'aab'},
{_id: 3, status: 'NEW', name: 'ab'},
{_id: 4, status: 'NEW', name: 'aaa'},
{_id: 5, status: 'NEW', name: 'aa'}]
and key
[status, name, _id]
There seems to be no way to
- filter these entries by status (full string match) and name (partial string match ~ startsWith),
- order them by id,
- paginate them,
because of the partial string match on name. The high-value Unicode character \uffff that allows this partial match also causes the _id part of the key to be ignored, meaning the resulting entries are sorted not by _id, but by status and name.
var status = 'NEW';
var name = 'aa';
var query = {
  startkey: [status, name],
  endkey: [status, name + '\uffff', {}],
  skip: 0,
  limit: 10
};
results in
[{_id: 5, status: 'NEW', name: 'aa'},
{_id: 4, status: 'NEW', name: 'aaa'},
{_id: 2, status: 'NEW', name: 'aab'}]
There is no option to sort in memory, as this would only sort the individual pages, and not the entire data set. Any ideas about this?

Extract ruby hash element value from an array of objects

I've got the following array
[#<Attachment id: 73, container_id: 1, container_type: "Project", filename: "Eumna.zip", disk_filename: "140307233750_Eumna.zip", filesize: 235303, content_type: nil, digest: "9a10843635b9e9ad4241c96b90f4d331", downloads: 0, author_id: 1, created_on: "2014-03-07 17:37:50", description: "", disk_directory: "2014/03">, #<Attachment id: 74, container_id: 1, container_type: "Project", filename: "MainApp.cs", disk_filename: "140307233750_MainApp.cs", filesize: 1160, content_type: nil, digest: "6b985033e19c5a88bb5ac4e87ba4c4c2", downloads: 0, author_id: 1, created_on: "2014-03-07 17:37:50", description: "", disk_directory: "2014/03">]
I need to extract the values 73 and 74 (the Attachment ids) from this output.
Is there any way to extract these values?
Just in case the author meant he has an actual String instance:
string = '[#<Attachment id: 73, container_id: 1, container_type: "Project", filename: "Eumna.zip", disk_filename: "140307233750_Eumna.zip", filesize: 235303, content_type: nil, digest: "9a10843635b9e9ad4241c96b90f4d331", downloads: 0, author_id: 1, created_on: "2014-03-07 17:37:50", description: "", disk_directory: "2014/03">, #<Attachment id: 74, container_id: 1, container_type: "Project", filename: "MainApp.cs", disk_filename: "140307233750_MainApp.cs", filesize: 1160, content_type: nil, digest: "6b985033e19c5a88bb5ac4e87ba4c4c2", downloads: 0, author_id: 1, created_on: "2014-03-07 17:37:50", description: "", disk_directory: "2014/03">]'
string.scan(/\sid: (\d+)/).flatten
=> ["73", "74"]
Do as below using Array#collect:
array.collect(&:id)
In case it is a string, use JSON::parse to get the array back from the string first, then use the Array#collect method as below:
require 'json'
array = JSON.parse(string)
array.collect(&:id)
The elements of the array (I'll call it a) look like instances of the class Attachment (not strings). You can confirm that by executing e.class in IRB, where e is any element of a (e.g., a.first). My assumption is correct if it returns Attachment. The following assumes that is the case.
@Arup shows how to retrieve the values of the instance variable @id when it has an accessor (for reading):
a.map(&:id)
(a.k.a. collect). You can see if @id has an accessor by executing
e.class.instance_methods(false)
for any element e of a. This returns an array which contains all the instance methods defined for the class Attachment. (The argument false causes inherited methods, including Ruby's built-ins, to be excluded.) If @id does not have an accessor, you will need to use Object#instance_variable_get:
a.map { |e| e.instance_variable_get(:@id) }
(You could alternatively write the argument as a string: "@id".)
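To make the two approaches concrete, here is a minimal, runnable sketch (the Attachment class below is a stand-in for the real one, with only the id field):

```ruby
# Minimal stand-in for the Attachment class from the question.
class Attachment
  attr_reader :id  # accessor, so a.map(&:id) works

  def initialize(id)
    @id = id
  end
end

a = [Attachment.new(73), Attachment.new(74)]

# With an accessor:
p a.map(&:id)                                  # => [73, 74]

# Without relying on an accessor:
p a.map { |e| e.instance_variable_get(:@id) }  # => [73, 74]
```

Remove the attr_reader line and the first map raises NoMethodError, while the instance_variable_get version still works.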
If
s = '[#<Attachment id: 73, container_id: 1,..]'
is in fact a string, but you neglected to enclose it in (single) quotes, then you must execute
a = eval(s)
to convert it to an array of instances of Attachment before you can extract the values of @id.
Hear that 'click'? That was me starting my stop watch. I want to see how long it will take for a comment to appear that scolds me for suggesting the use of (the much-maligned) eval.
Two suggestions: shorten code to the essentials, and avoid the need for readers to scroll horizontally to read it. Here, for example, you could have written this:
a = [#<Attachment id: 73, container_id: 1>, #<Attachment id: 74, container_id: 1>]
All the instance variables I've removed are irrelevant to the question.
If that had been too long to fit on one line (without scrolling horizontally), write it as:
a = [#<Attachment id: 73, container_id: 1>,
     #<Attachment id: 74, container_id: 1>]
Lastly, since you're new to SO, have a look at this guide.
