Ansible Preferred Way Of Running An Ad Hoc Command

I'm pretty new to Ansible, just a day old and while trying out some basic ad hoc commands, I noticed that in order to create a directory on a group named nodes, both of the following commands worked.
METHOD 1
ansible nodes -a "mkdir /BYANSIBLE_2"
METHOD 2
ansible nodes -m file -a "path=/BYANSIBLE_3 state=touch"
According to the documentation, -a means module arguments, so why does METHOD 1 work?
According to my understanding, merely providing the arguments of a module without specifying the module itself shouldn't work (unless there is some implicit default).
Also, as a newbie, should I focus on METHOD 1 or METHOD 2 when using ad hoc commands?

Ansible uses the module ansible.builtin.command by default if no module is specified on the command line. This module simply runs commands on the remote nodes' command line, which is why "mkdir /BYANSIBLE_2" works for you. The argument for this module is, well, a command.
With METHOD 2 you are explicitly calling a specific module, file, which has its own definition of the arguments it accepts; here the argument is the path to act on. (Note that state=touch actually creates an empty file, like the touch command; to create a directory you would use state=directory.)
Which method to use depends on the case. If you are just testing commands on remote nodes, METHOD 1 would be my go-to, since it is faster than explicitly adding the module name. METHOD 2 is better in the sense that it is more explicit about its intention.
But more importantly, I try to keep ad hoc commands for very small tests and tasks. To me, Ansible is about automating and scaling, so I try to create playbooks whenever possible.
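As a sketch, the directory-creation task from the question might look like this as a minimal playbook (the group name and path are carried over from the question; state: directory is what actually creates a directory, and you'd still point ansible-playbook at your inventory as usual):

```yaml
# create_dir.yml, run with: ansible-playbook create_dir.yml
- hosts: nodes
  tasks:
    - name: Ensure the directory exists
      ansible.builtin.file:
        path: /BYANSIBLE_3
        state: directory
```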
You can read more in the following link:
https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html

Related

Parallel processing not working using create_app in Julia

So I'm using the create_app() method in PackageCompiler to create an app of my Julia package. It works, except that the original package could run on multiple processes, and after it's turned into an app it doesn't anymore.
I put this at the beginning of my script (and in the app main function)
@info("using $(nworkers()) workers\n")
It outputs whatever I pass with the -p flag when running the script, indicating it is indeed running with multiple workers. After the package is turned into an app, it always prints "using 1 workers" regardless of the flags I pass using --julia-args -pX.
Is there something that I should enable to make this work, or is this inherently not possible?
cheers
jiq
UPDATE: it seems that using addprocs() does work (which provides a workaround for me), but I'm still confused as to why the command-line argument -p is not picked up.

How to use openssh-client in Cyberark environments with autocompletion and multiple servers?

I usually have what I need in my ~/.ssh/ folder (a config file and more) to connect to servers with ssh <tab><tab>. In an environment with Cyberark the configuration seems to be a bit more intricate due to the three # signs.
I found this answer, but I struggled to find a way to enjoy autocompletion for many hosts, because the User field does not support tokens like %h for the host, so I'd have to create the same entry again for every server, where previously I just added servers to the Host line. Is there a way this can be achieved?
After spending some time I came up with the following solution, which is more of a workaround. I'm not really proud of it, but it gets the job done with the least amount of new or difficult-to-understand code.
Create a wrapper script like this:
$ cat ~/bin/ssh-wrapper.sh
#!/bin/bash
# https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/latest/en/Content/PASIMP/PSSO-PMSP.htm#The
# Replace where appropriate:
# $1 = Server FQDN
# prefix = your administrative user may be different from you normal user
# internal.example.org = domain server (controller)
# pam.example.org = Cyberark jump host
ssh -t ${USERNAME,,}#prefix${USERNAME,,}#internal.example.org#$1#pam.example.org
Add the following to your bash startup file. Yours may be different from mine, because I'm hacking here in a customer environment with Tortoise Git-Bash. (Which works nicely, by the way, when you use it with Flux Terminal, k9s and jq.)
Create an alias for your wrapper script; I chose sshw here.
Create a variable with all the FQDNs of the servers you want to have in your autocompletion, or create a file which contains these FQDNs and read it into a variable.
Create a bash completion expression which applies this to your sshw alias.
$ cat ~/.bash_profile
alias sshw="$HOME/bin/ssh-wrapper.sh"
SSH_COMPLETE=$(cat "$HOME/.ssh/known_hosts_wrapper")
complete -o default -W "${SSH_COMPLETE[*]}" sshw
Now you can tab your way to servers.

Pass -e command to the Ansible make module

I want to execute a Makefile on one of my Ansible-provisioned servers with the -e flag, but the Ansible make module does not seem to support this (just key=value pairs).
Is there any other way of doing that, other than: "command: make -e my_target"?
Ansible make module docs
There doesn't appear to be any code in the module to support what you are looking for. If you want the module to be extended, I'd suggest raising an issue against it at github.com/ansible.
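That said, depending on why you need -e, there may be a workaround: a VAR=value assignment on the make command line overrides the Makefile's own assignment even without -e, and (if I read the module docs right) the make module's params option renders exactly such key=value pairs. A quick shell demonstration of the override behaviour (the demo Makefile and variable names are illustrative):

```shell
# A command-line VAR=value assignment beats the Makefile's own
# assignment, no -e needed. Write a throwaway Makefile and compare.
printf 'GREETING = from-makefile\nshow:\n\t@echo $(GREETING)\n' > /tmp/demo_makefile
make -f /tmp/demo_makefile show                    # prints from-makefile
make -f /tmp/demo_makefile show GREETING=from-cli  # prints from-cli
```

Whether that substitutes for -e depends on your Makefile: -e makes environment variables win, while a command-line assignment overrides unconditionally.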

Convert Chef recipe to series of Bash commands?

Typically, one wants to convert Bash scripts to Chef. But sometimes (like, right now) you need to do the opposite. Is there an automatic way to get the list of commands run for a given Chef cookbook on a given configuration?
I'm not trying to end up with something with the full functionality of the Chef cookbook. I want to end up with a small set of commands that reproduce this particular installation on this particular environment. (The reason in this case is I need to separate out the 'sudo' commands and get them run by someone else. I do have sudo access on a machine that I could run Chef on to carry out this task though.)
I doubt you can do that in general, and even if you could, it would likely be more work than implementing what you need in Chef.
For example, even something as simple as creating a configuration file is implemented in Chef as ruby code; you would need to figure out a way to turn that into echo "…" > /etc/whatever.com. Doing that for all resources would be a major undertaking.
It seems to me that what you should actually do is modify any Chef cookbook that you use to run commands as a different user.
Things like template and file are pretty easy: the file will be created as root, and then chown-ed to the correct user. execute resources (which run commands) can be configured to run the command with su simply by specifying the user:
execute "something" do
  command "whoami"
  user "nobody"
end
It might take you a while to figure out, but once you get the hang of it it's pretty easy; much easier than converting to bash.
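For a feel of what the file-then-chown pattern mentioned above boils down to in plain shell, here is a rough sketch (the path, content, and target user are illustrative; the chown step is shown commented because it needs root):

```shell
# What Chef's template/file resources amount to as shell commands:
# write the content as the invoking user, then hand ownership over.
conf=$(mktemp)
printf 'key = value\n' > "$conf"
# chown nobody: "$conf"   # the ownership step; requires root
cat "$conf"               # prints: key = value
```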

Pass variable from Jenkins to Ruby script (Jenkins newbie)

I've pulled a few scripts into Jenkins for a proof of concept and think I'd like to move that direction for all of our scripts. Right now I keep an environment.rb file with my code (watir-webdriver, cucumber) which tells the script which environment we're testing and which browser to use (global variables). Jenkins fires off the script using rake.
I'd love to let the user choose the environment and browser through Jenkins 'choice' variable or similar, and then pass that to the script. While I see the framework in that for Jenkins and set up a choice list for environment, I'm having trouble determining what the next step is.
I could write to environment.rb, I could pass a variable to rake - I have many options for how to pass the information, I just need some assistance finding the first step to find the Jenkins way of accomplishing them. Google results and previous Stack questions weren't what I was looking for.
Thanks
Sure. Give the user either a text entry field or a dropdown after telling Jenkins that this is a parameterized build. You'll give the parameter a name, something like BuildEnvironment. Then when you call the build, you can pass these from the environment variables. For example, if you were using ANT, you'd add a line to the parameters that said environment=${MyEnvironment}. Jenkins will then pass the value along for your build tool to use.
There is a way to pass a Jenkins environment variable to a Ruby script. Please see the following example:
workspace_path = `echo $WORKSPACE`.strip # note the use of backticks
puts workspace_path
Note that the backticks matter: with ordinary quotes, "echo $WORKSPACE".strip is just a string literal, while backticks actually execute the command and capture its output.
This code example works in Jenkins on a Linux system.
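To see the mechanism outside Jenkins, you can simulate it in a plain shell: Jenkins exports WORKSPACE (and any build parameters) into the build's environment, so any child process, Ruby or otherwise, can simply read it. The path below is illustrative:

```shell
# Simulate what Jenkins does: export the variable into the environment,
# then read it back from a child process, just as the Ruby snippet does.
export WORKSPACE=/var/lib/jenkins/workspace/demo
sh -c 'echo "workspace is $WORKSPACE"'   # prints: workspace is /var/lib/jenkins/workspace/demo
```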
