Force Ansible to apply changes to each line of an inventory group - ansible

I have a bare-metal server, and I want to install multiple services on it.
My inventory looks like this:
[Mygroup]
Server port_service=9990 service_name="service1"
Server port_service=9991 service_name="service2"
When I launch my Ansible job, only service2 is installed, because the same server appears on each line of my group. Is there a way to force Ansible to take all lines of a group into account?
I don't want to create a group for each service.

Q: "Is there a way to force Ansible to take all lines of a group?"
A: No, there is not. Within a group, hosts must be unique. If multiple entries share the same hostname, only the last one is taken.
Put the variables into one line, e.g.
[Mygroup]
Server port_services="[9990, 9991]" service_names="['service1', 'service2']"
(and change the code accordingly).
See How to build your inventory. There are many other options, e.g.
[Mygroup]
Server
[Mygroup:vars]
port_services="[9990, 9991]"
service_names="['service1', 'service2']"
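With the variables collapsed into lists, the installing task can iterate over both lists in lockstep. A minimal sketch of what "change the code" might look like (the task body is a placeholder, not from the original question):

```yaml
- hosts: Mygroup
  tasks:
    - name: Install each service on its own port
      debug:
        msg: "would install {{ item.0 }} on port {{ item.1 }}"
      loop: "{{ service_names | zip(port_services) | list }}"
```

The zip filter pairs each service name with its port, so one host entry can drive any number of service installations.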

I hope I got you right, but this should do the trick.
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
Greets
Harry

Another solution is to use an alias.
This solution works fine for me:
[Mygroup]
service_1 ansible_host=Server port_service=9990 service_name="service1"
service_2 ansible_host=Server port_service=9991 service_name="service2"

Related

How to switch a scheduled task from one host to another in a MarkLogic cluster, when the host that contains the task is down?

I scheduled a task on a bootstrap node in MarkLogic, but there is a chance that the host will go down. In that case, how do I switch the task to another host in the cluster?
Note: the task must be scheduled on only a single host in the cluster at a time.
The options for assigning scheduled tasks are currently to set a specific host, or to leave the assignment empty and have the task execute on all hosts.
So, if you want to ensure that the task is still executed in the event of a host failure, you could leave the host assignment empty and add logic inside the task to determine which host should execute the code, with the others becoming a no-op.
One way to achieve that is to have the task evaluate whether xdmp:host() is the host with the open Security forest (this assumes you have an HA replica forest for your Security database to ensure availability, but the same could be done with any database):
xquery version "1.0-ml";
let $status := xdmp:database("Security") => xdmp:database-forests() => xdmp:forest-status()
where $status/*:host-id/data() eq xdmp:host()
return
  (: this will only execute on the host with the open Security forest :)
  "execute task logic"

Ansible parsing through list

I have a Tower template which, when launched, prompts users to provide a hostname and its IP address separated by a space, one pair per line. There will always be at least two entries.
hostnameX ip_address
hostnameY ip_address
I am using the IP address and hostname to create DNS records on the DNS servers in another task.
But for the other task I need only the hostnames, discarding the IP addresses, and to loop through the hostnames. But when I try
with_list: "{{ list_serverinfo.split(' ')[0] }}"
the task only runs against hostnameX and never executes against hostnameY! I want the task to execute against all hostnames.
Thanks in advance for the time and all the help!
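For what it's worth, the survey input arrives as a single multi-line string, and split(' ')[0] takes only the first space-separated token of that whole string, so the loop gets exactly one item. A quick illustration in plain Python (the sample data is made up; Jinja2 string methods behave like Python's here):

```python
list_serverinfo = "hostnameX 10.0.0.1\nhostnameY 10.0.0.2"

# split(' ')[0] tokenizes the whole string on spaces and keeps only
# the first token -> a single hostname, so the loop runs once
print(list_serverinfo.split(' ')[0])          # hostnameX

# split per line first, then take the first field of each line
hostnames = [line.split()[0] for line in list_serverinfo.splitlines()]
print(hostnames)                               # ['hostnameX', 'hostnameY']
```

The same per-line-then-per-field approach can be expressed with Jinja2 filters inside the loop expression.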

Multiple postfix output IP

I have a server with multiple public IP addresses.
I want to send campaign emails from this server.
Sometimes I would like to send mail from a particular IP (a filter on the sender's email address determines which IP to use).
The only thing I have found is to install multiple Postfix instances (one per output IP). Is there a better way to do this?
I have a second question: Postfix gives a unique queue ID to each message. If I have several instances of Postfix, do you think those unique IDs could collide between two instances?
Thanks
sender_dependent_default_transport_maps is your friend. First, add this to main.cf:
sender_dependent_default_transport_maps = hash:/etc/postfix/sender-transport
Next, create the file /etc/postfix/sender-transport with
@my-sender-domain.com smtp-192-168-0-1:
Any message received with a sender address at @my-sender-domain.com will use the service smtp-192-168-0-1 (the name can be anything) for sending. Don't forget to run postmap /etc/postfix/sender-transport on the file.
And then, add the service to master.cf
smtp-192-168-0-1 unix - - n - - smtp
-o smtp_bind_address=192.168.0.1
Again, the service name can be anything, but it must match the one in the hash file. This smtp service will send the message from the IP 192.168.0.1; change as needed.
Add as many services and lines in the hash file as you want. Don't forget to service postfix restart after that.
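To check that the compiled map resolves a sender the way you expect, you can query it directly with postmap -q (the domain here is the placeholder from above):

```shell
# look up the transport for the example sender domain;
# prints "smtp-192-168-0-1:" if the map was built correctly
postmap -q "@my-sender-domain.com" hash:/etc/postfix/sender-transport
```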
There are many other options you can add to the smtp service, like -o smtp_helo_name=my.public.hostname.com, etc.
I just finished setting up a Postfix like this :-)

Amazon EC2 Autoscaling tools does not show particular autoscale group unless explicitly mentioned

I have an Auto Scaling group that doesn't seem to show up when using the Auto Scaling command line tools unless I explicitly request it, i.e.:
as-describe-auto-scaling-groups qa-MyAppName
Returns the auto scale group:
AUTO-SCALING-GROUP qa-MyAppName release_1_by_john_smith us-east-1a qa-MyAppName 5 10 5
...
However, the bare as-describe-auto-scaling-groups command does not show this particular AS group at all:
as-describe-auto-scaling-groups
does not return this Auto Scaling group, although other AS groups are shown.
Why is this AS group not showing in the list of all AS groups?
How many ASGs do you have? Unless told otherwise, the as-describe... commands will only return a subset of the results (20 by default, IIRC). Perhaps this one is number 21?

Multiple roles with attributes(?) in Capistrano

How can I pass attributes along to my tasks in Capistrano?
My goal is to deploy to multiple servers behind a load balancer. I'd like to take each one out, deploy, and add it back in sequentially, so that no more than one server is down at any time.
I'm thinking it would be something along these lines (the hosts array would be generated dynamically after querying my load balancer):
role :app,
[["server_one", {:instanceId => "alice"}],
["server_two", {:instanceId => "bob"}],
["server_three", {:instanceId => "charles"}]]
And then for my tasks...
before :deploy, :deregister_instance_from_lb
after :deploy, :register_instance_with_lb
task :deregister_instance_from_lb do
  # TODO - Deregister #{instanceId} from the load balancer
end
task :register_instance_with_lb do
  # TODO - Register #{instanceId} with the load balancer
end
Any ideas?
I use this to restart my servers in series, instead of in parallel.
task :my_task, :roles => :web do
  find_servers_for_task(current_task).each do |server|
    run "[task command here]", :hosts => server.host
  end
end
Justin, I'm sorry, but that's not possible: once the stream pool is opened (on the first run against a server set) there's no way to access server properties, because the run code isn't executed per server but against all matching servers in the pool. Some people have had some success with doing something like this, but really it's a symptom that your scripts need too much information that you should be able to extract from your production environment.
As it seems you are doing something like passing the host's name to a script in this case, use what Unix gives you:
run "./my_script.rb `hostname`"
Will that work?
References:
• http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html (Section 3.4.5)
• http://unixhelp.ed.ac.uk/CGI/man-cgi?hostname (or $ man (1) hostname)
No one knows? I found something about the sequential block below, but that's as far as I got...
find_servers.each do |server|
  # TODO - remove from load balancer
  # TODO - deploy
  # TODO - add back to load balancer
end
I find it hard to believe that no one has ever needed to do sequential tasks with cap.
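For what it's worth, the sequential remove/deploy/re-add pattern itself is easy to sketch in plain Ruby, outside of Capistrano (FakeLoadBalancer and its method names are invented for illustration; a real setup would call your balancer's API in their place):

```ruby
# Stand-in for a load balancer API; it just records the call order.
class FakeLoadBalancer
  attr_reader :log

  def initialize
    @log = []
  end

  def deregister(host)
    @log << "out:#{host}"
  end

  def register(host)
    @log << "in:#{host}"
  end
end

# Take each server out, deploy to it, and put it back before
# touching the next one, so at most one server is down at a time.
def rolling_deploy(servers, lb)
  servers.each do |server|
    lb.deregister(server)
    # ... run the deploy against this single host here ...
    lb.register(server)
  end
end

lb = FakeLoadBalancer.new
rolling_deploy(%w[server_one server_two], lb)
```

Inside a cap task, the loop body would be the per-host run/deploy calls shown in the answers above.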
