I'm trying to find a simple method of running an ansible playbook from within a ruby script.
My script checks various conditions and, if those conditions are met, runs an ansible playbook.
I thought I could run the playbook via backticks using the Kernel module. However, I'm unsure how I could parse the output to tell whether it succeeded. Does anyone have a method, wrapper, or library to do this? Searching the web only turned up information about how to deploy Ruby with Ansible playbooks, not the other way around.
If you want to read the output of a command you need to use Open3.
For example:
require 'open3'
output, status = Open3.capture2('ansible', '...')
Where you split out the arguments to the ansible command separately to avoid ugly shell-interpolation issues.
There's a variety of tools in the Open3 module that help with things like streaming output, checking STDERR and more.
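For the playbook case, a minimal sketch might look like this (the playbook name is a placeholder); ansible-playbook exits non-zero when a run fails, so the exit status is enough to tell success from failure:

require 'open3'

# Capture stdout, stderr and the exit status separately
stdout, stderr, status = Open3.capture3('ansible-playbook', 'site.yml')

if status.success?
  puts 'Playbook run succeeded'
else
  warn "Playbook failed (exit #{status.exitstatus}):\n#{stderr}"
end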
If you don't mind a dependency then you can use tty-command:
require 'tty-command'
cmd = TTY::Command.new
result = cmd.run('ansible ...')
puts result.out.chomp
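Note that run raises TTY::Command::ExitError when the command exits non-zero; if you'd rather inspect the result yourself, run! returns it without raising. A rough sketch (the playbook name is a placeholder):

result = cmd.run!('ansible-playbook', 'site.yml')
puts result.success? ? 'Playbook run succeeded' : "Failed: #{result.err}"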
I want to execute a Makefile on one of my Ansible-provisioned servers with the -e flag, but the Ansible make module does not seem to support this (just key-value parameters).
Is there any other way of doing that, other than command: make -e my_target?
Ansible make module docs
There doesn't appear to be any code in the module to support what you are looking for. If you want the module to be extended, I'd suggest raising an issue against it at github.com/ansible.
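In the meantime, the fallback you already mention can be written as a regular task; a minimal sketch (the project path is a placeholder):

- name: Run make with -e until the make module supports extra flags
  command: make -e my_target
  args:
    chdir: /path/to/project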
I am hacking together an Ansible solution to deploy a notify.sh bash script as part of a pam.d / pam_exec configuration.
The script uses a bunch of variables that I have been told need to be a part of a separate yml file (so others can change or update them) instead of being defined in the script directly, which is what I am normally used to doing.
I have constructed the vars file where I have defined the variables which the script should be using at runtime.
Now my problem is that I want to be able to access the ansible variables in their standard format {{my_variable}} from the bash script which I am deploying.
Is this even possible? If it isn't possible, what are your suggestions for inserting the variables into the script after it's installed?
I have a feeling I am close to the answer, but scouring Ansible help files has not yielded anything yet.
The only thing I kinda figured would be to use the lineinfile module to update the shell script after it's already installed, but I feel like this may be a bit too hacky and there is probably a more elegant solution here.
I appreciate any and all answers.
Sure – anything in /vars/main.yml is automatically available, or you can load a custom file with http://docs.ansible.com/ansible/include_vars_module.html. Then, use a template and deploy your script like this:
template:
  src: script_template.j2
  dest: "path/notify.sh"
  mode: "0700"
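A script_template.j2 along these lines would then pick up the values at deploy time (the variable names here are made up; use whatever you defined in your vars file):

#!/bin/bash
# These placeholders are filled in by Ansible's template module when the file is deployed
NOTIFY_EMAIL="{{ notify_email }}"
LOG_FILE="{{ notify_log_file }}"

# pam_exec exports PAM_USER for the script at runtime
echo "$(date) login by $PAM_USER, notifying $NOTIFY_EMAIL" >> "$LOG_FILE"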
I am looking to automate an interactive install process with Ansible. This install does not have a silent install option and does not take command line arguments for the interactive questions. The questions involve setting a folder location, making sure the folder location is right, etc., for which the answers might be the default or custom values.
I looked into the expect module of Ansible, but it seems like it does not solve my purpose.
- expect:
    command: passwd username
    responses:
      (?i)password: "MySekretPa$$word"
I don't need the command, but it's required. Instead I am looking for something that could match Are you sure you want to continue [y|n]? [n]:, for which I want to send the default by sending a return or typing n as a response, and, for example, Backup directory [/tmp], for which the response would be a carriage return.
I don't need the command, but it's required. Instead I am looking for something that could match Are you sure you want to continue [y|n]? [n]:
The module requires a command because you have to run something to get any output.
You obviously do have a command in mind, because you've run it manually and seen the output it produces. That's what you should be plugging into the module.
Alternatively, you can write a pexpect script yourself and use the command or shell modules to run it.
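For example, a sketch using the prompts from the question (the installer path is a placeholder, and the regexes may need tweaking to match the exact prompt text; an empty response just sends a carriage return, accepting the default):

- expect:
    command: /path/to/installer.sh
    responses:
      'Are you sure you want to continue \[y\|n\]\? \[n\]': ''
      'Backup directory \[/tmp\]': ''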
I've figured out a way that works for me. I piped the answers into the shell script that asks the questions when run manually, like ./shell.sh <<< 'answer1\nanswer2\n', which works perfectly for me. I have added this to the task.
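Expressed as an Ansible task, that approach might look like this (the directory is a placeholder; the <<< here-string needs bash, and $'...' quoting makes bash turn the \n sequences into real newlines):

- name: Run the interactive installer with the answers piped in
  shell: ./shell.sh <<< $'answer1\nanswer2\n'
  args:
    chdir: /path/to/script/dir
    executable: /bin/bash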
Typically, one wants to convert Bash scripts to Chef. But sometimes (like, right now) you need to do the opposite. Is there an automatic way to get the list of commands run for a given Chef cookbook on a given configuration?
I'm not trying to end up with something with the full functionality of the Chef cookbook. I want to end up with a small set of commands that reproduce this particular installation on this particular environment. (The reason in this case is I need to separate out the 'sudo' commands and get them run by someone else. I do have sudo access on a machine that I could run Chef on to carry out this task though.)
I doubt you can do that in general, and even if you could, it would likely be more work than implementing what you need in Chef.
For example, even something as simple as creating a configuration file is implemented in Chef as Ruby code; you would need to figure out a way to turn that into echo "…" > /etc/whatever.conf. Doing that for all resources would be a major undertaking.
It seems to me that what you should actually do is modify any Chef cookbook that you use to run commands as a different user.
Things like template and file are pretty easy: the file will be created as root, and then chown-ed to the correct user. execute resources (which run commands) can be configured to run the command with su simply by specifying the user:
execute "something" do
command "whoami"
user "nobody"
end
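Similarly, a template resource can hand the rendered file over to another user; a rough sketch (the paths, user, and template name are placeholders):

template "/home/nobody/notify.conf" do
  source "notify.conf.erb"  # rendered during the Chef run (as root)
  owner "nobody"            # then chown-ed to this user
  group "nobody"
  mode "0644"
end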
It might take you a while to figure out, but once you get the hang of it, it's pretty easy; much easier than converting to bash.
I am trying to use https://github.com/rifraf/Vendorize which is run using a command like
D:\projects\SomeLibrary\lib>ruby -I..\..\Vendorize\lib -rvendorize some_lib.rb
It does something clever where it intercepts required files and logs them, but only the ones that actually get loaded by the command you run. On its documentation pages it says:
You can run the program several times with different options if the
required files depend on the options.
Or just run your tests…
I want to run all the tests with the -I option from the command line above, so that all the different avenues of code are run and the libraries loaded (and logged). Given that I can run them like:
D:\projects\SomeLibrary\lib>rspec ..\spec\some_spec.rb
How do I do this? Thanks!
NB: I am (a) a Ruby newbie and (b) running Windows.
I would try writing something like this at the top of some_spec.rb:
require_relative '../../Vendorize/lib/vendorize'
You might need to change that a bit depending on what your working directory is.
Then just run your specs with rspec as you normally do, without any extra commands.
If that doesn't work, then locate the rspec.rb executable and run:
ruby -I..\..\Vendorize\lib -rvendorize path/to/rspec.rb ..\spec\some_spec.rb
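If locating the rspec script by hand is a hassle, Ruby's -S flag (which searches PATH for the named script) may also do the trick, assuming rspec is on your PATH:

ruby -I..\..\Vendorize\lib -rvendorize -S rspec ..\spec\some_spec.rb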