My goal is to configure multiple IIS websites on the same target host, in parallel.
Each website needs multiple tasks to configure it: creating the app pool, creating the website, assigning the app pool to the website, creating the needed directory paths, creating virtual directories, bindings, and so on.
Since I need to create multiple websites, I have to loop over that whole group of tasks, so I put them all in a separate file, and in my main playbook I created a loop that imports the file once per website.
But I end up with serial execution, and I want to be able to create the websites in parallel.
I didn't find any way to achieve this, as async is not supported on an Ansible block.
So is this possible with Ansible?
Thanks
There is an Ansible feature called async -- I believe this is what you are looking for:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
It lets you launch tasks asynchronously and poll for their completion later, so they effectively run in parallel. Ideal for creating "n" independent IIS websites.
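Since async can't wrap a whole block or include, the usual pattern is to apply it to the long-running tasks themselves: fire each one off with poll: 0, then wait on the job ids with async_status. A minimal sketch, assuming the win_iis_website module and a "websites" list variable (all names here are illustrative):

- name: create websites asynchronously (fire and forget)
  win_iis_website:
    name: "{{ item.name }}"
    physical_path: "{{ item.path }}"
    state: started
  loop: "{{ websites }}"
  async: 300      # allow up to 5 minutes per site
  poll: 0         # don't wait here; start the next one immediately
  register: site_jobs

- name: wait for all website creations to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ site_jobs.results }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 10

How well async behaves with your particular Windows modules can depend on your Ansible version, so treat this as a starting point rather than a guaranteed recipe.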
Chef copies your code to its agents -- does Ansible copy the code too, or does it just convert it to commands and execute them?
I have checked a lot of docs but did not find a good one explaining this workflow.
In short: for each task, Ansible packs the required module and libraries plus the input data into a small package, delivers it to a temporary location on the target system (usually via SSH), executes it there, and cleans up after itself.
Ansible doesn't copy your playbook as a whole to the target system, only the data required for each individual task.
More details about the workflow are in the developer guide here.
For target machines running Unix/Linux, the control machine:
opens an SSH session to the target node, performs basic preparations (e.g. creates temporary directories);
creates customised scripts (mostly Python) and transfers them using SFTP (default) or SCP (configurable) to the target;
finally it executes the scripts on the target host.
The process is repeated for every task on every host (Ansible can also be configured to keep an SSH session open across multiple tasks).
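If you want to watch this happen, one illustrative (and entirely optional) way is to keep the generated files on the target and raise verbosity; the host name below is just a placeholder:

# -vvv prints the SSH/SFTP commands used to create the temp directory,
# transfer the wrapped module, and execute it on the target;
# ANSIBLE_KEEP_REMOTE_FILES=1 stops Ansible from cleaning the files up.
ANSIBLE_KEEP_REMOTE_FILES=1 ansible webserver1 -m ping -vvv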
In Ansible, basic units of work are coded in modules and specified in (called from) tasks.
For most modules the logic is written in Python. Whether a specific module uses external programs or not, the actions to be performed are wrapped in Python scripts.
One exception to the above is the raw module, which executes the given command directly in the SSH session.
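For instance, a raw task like this (illustrative) one is sent as-is over SSH, with no Python wrapper generated:

- name: run a command without any module wrapper (handy before Python exists on the target)
  raw: cat /etc/os-release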
Another special case is the synchronize module which is executed on the control machine and uses rsync to transfer files.
Some modules, mostly those targeting cloud services and network devices, are executed on the local host (or a proxy machine) and reach the destination systems and devices through their APIs.
For Windows target machines, Ansible connects over WinRM and runs PowerShell scripts on the target machine through the Windows-native PowerShell remoting feature.
I have a scenario where data has to be loaded from different input files. My current approach is to execute the loader script using Selenium Grid on 10 different systems. Each system has its own input files, and other information, like the PORT and IP_ADDRESS for the grid, is also passed in the rake task itself. This information is saved in an Excel file, and code has to be written to build n rake tasks with different environment variables and then execute them all together.
I'm unable to come up with a way for all the tasks to be created automatically and then executed.
I know it has to be done using the 'parallel_test' gem or rake's multitask feature, but I don't know exactly how this can be achieved. Any other approach is also welcome.
Assume that a normal deployment script does a lot of things and many of them are related to preparing the OS itself.
These tasks take a LOT of time to run even when there is nothing new to do, and I want to prevent them from running more often than, let's say, once a day.
I know that I can use tags to filter what I am running, but that's not the point: I need to make Ansible aware that these "sections" executed successfully an hour ago so that it would skip the entire block now.
I was expecting fact caching to do this, but somehow I wasn't able to find a real use case.
You need to figure out how to determine what "executed successfully" means. Is it just that a given playbook ran to completion? Certain roles ran to completion? Certain indicators exist that allow you to determine success?
As you mention, I don't think fact caching is going to help you here unless you want to introduce custom facts into each host (http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d)
I typically come up with a set of variables/facts that indicate a role has already been run. Sometimes this involves making shell calls and registering vars, looking at gathered facts, and checking whether certain files exist. My typical pattern for a role looks something like this:
roles/my_role/tasks/main.yml

- name: load facts
  include: facts.yml

- name: run config tasks if needed
  include: config.yml
  when: fact_var1 and fact_var2 and inv_var1 and reg_var1
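What facts.yml contains depends entirely on your indicators. As a hypothetical sketch, assuming config.yml drops a marker file when it finishes, it could register one of the variables tested in the when above (the file path and variable names are made up):

# roles/my_role/tasks/facts.yml (illustrative)
- name: check whether this role already configured the host
  stat:
    path: /etc/ansible_markers/my_role.done
  register: my_role_marker

- name: expose the result as a variable for the when condition
  set_fact:
    reg_var1: "{{ not my_role_marker.stat.exists }}"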
You could also dynamically write a YAML variable file that gets included in your playbooks and contains variables about the configured state of your environment. This is a more global option and doesn't really work if you need to look at the configured status of individual machines. An extension of this would be to write status variables to host_vars or group_vars inventory variable files; these get loaded automatically on a host-by-host basis.
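A hypothetical sketch of that host_vars variant, run from the control machine (the path and variable name are assumptions, and it only makes sense if your inventory is a directory you can write to):

- name: make sure the host_vars directory exists on the control machine
  file:
    path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}"
    state: directory
  delegate_to: localhost

- name: record that the OS-prep section finished for this host
  copy:
    dest: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}/os_prep.yml"
    content: "os_prep_done: true\n"
  delegate_to: localhost

A later play could then guard the expensive block with something like when: not (os_prep_done | default(false)).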
Unfortunately, as far as I know, fact caching only caches host-based facts such as those created by the setup module, so it wouldn't let you use set_fact to record that, for example, a role had been completed and then conditionally check for that at the start of the role.
Instead you might want to consider using Ansible with another product to orchestrate it in a more complex fashion.
I have two instances of CQ and between them I want to be able to import/export tasks.
For example:
On instance 1 I can see all tasks by going to http://instance1/libs/cq/taskmanagement/content/taskmanager.html#/tasks/Delta
On instance 2 I can see all tasks by going to http://instance2/libs/cq/taskmanagement/content/taskmanager.html#/tasks/Delta
There might be some scenarios where I want to take all tasks from instance2 and add them as additional tasks into instance1 (on top of the tasks it may already have).
Is this possible to do?
Yes, you can do this with Package Manager. The tasks are stored as nodes in the JCR repository, so you can create a package that filters the task nodes you want to migrate from one instance to another. For example, you could define a package with this filter definition to include all tasks:
/etc/taskmanagement/tasks
If you don't want all tasks, you may need to define the filter(s) more narrowly to pick only the ones you want to include.
For example:
/etc/taskmanagement/tasks/2015-05-04/Delta/TheTaskYouWantToMigrate
Use the browser when defining the filter to find the tasks you want to include.
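For reference, the filters you define in the Package Manager UI end up in the package's META-INF/vault/filter.xml; a minimal sketch for the broad filter above would look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<workspaceFilter version="1.0">
    <filter root="/etc/taskmanagement/tasks"/>
</workspaceFilter>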
See Working with Packages for details on using the Package Manager. This Tutorial also shows how to create the package and add filters. Once you've created a package with the filters for the tasks you want to include, build the package and download it. On your other instance, upload the package you built and install it. You will then see the tasks from your first instance replicated onto the second instance.
In addition to what Shawn said, you can also use the replication mechanisms to do the work for you and replicate the desired nodes between any two instances.
So I've been meaning to create a cron job for my prototype Flask app running on Heroku. Searching the web, I found that the best way is to use Flask-Script, but I fail to see the point of using it. Do I get easier access to my app logic and storage info? And if I do use Flask-Script, how do I organize it around my app? I'm using it right now to start my server without really knowing the benefits. My folder structure is like this:
/app
    /manage.py
    /flask_prototype
        all my Flask code
Should I put the 'script.py' to be run by the Heroku Scheduler in the app folder, at the same level as manage.py? If so, do I get access to the models defined within flask_prototype?
Thank you for any info
Flask-Script just provides a framework under which you can create your script(s). It does not give you any better access to the application than what you can obtain when you write a standalone script. But it handles a few mundane tasks for you, like command line arguments and help output. It also folds all of your scripts into a single, consistent command line master script (this is manage.py, in case it isn't clear).
As far as where to put the script, it does not really matter. As long as manage.py can import it and register it with Flask-Script, and your script can import what it needs from the application, you should be fine.
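For example, a manage.py built on Flask-Script often looks something like the sketch below; the flask_prototype imports and the model name are assumptions based on your layout, so adjust them to whatever your package actually exposes:

# manage.py (illustrative sketch)
from flask_script import Manager

from flask_prototype import app, models   # assumed names from your project

manager = Manager(app)

@manager.command
def load_data():
    """Invoked by the Heroku Scheduler as: python manage.py load_data"""
    with app.app_context():
        # Your app logic and models are importable here like in any other script.
        print(models.SomeModel.query.count())

if __name__ == "__main__":
    manager.run()

The Heroku Scheduler can then call the command directly (python manage.py load_data), and the script reuses the same application object and models as the rest of your code.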