Standard variables can be placed in separate files per host and/or group, e.g. group_vars/groupname or host_vars/hostname.
Is it possible to set vars_prompt in any location other than the playbook file? For example, ideally directly in group_vars/groupname, or in something like group_vars_prompt/groupname?
I didn't find any relevant documentation.
Thanks
AFAIK, you cannot do that. At best, you could use a dynamic inventory script that calls read, like so:
#!/bin/bash
# Read a value from stdin, then emit a JSON inventory using it as a hostvar.
read foo
cat <<EOF
{
    "localhost": {
        "hosts": [ "localhost" ]
    },
    "_meta": {
        "hostvars": {
            "localhost": {
                "foo": "$foo"
            }
        }
    }
}
EOF
But since Ansible swallows STDOUT and STDERR when executing an inventory script (see: https://github.com/ansible/ansible/blob/devel/lib/ansible/inventory/script.py#L42), you won't be able to show a question prompt, no matter which file descriptor you write to.
As an alternative, if you're running under X, you could use Zenity:
#!/bin/bash
# Ask for a value via a Zenity dialog, then emit the JSON inventory.
foo=$(zenity --title "Select Host" --entry --text "Enter value for foo")
cat <<EOF
{
    "localhost": {
        "hosts": [ "localhost" ]
    },
    "_meta": {
        "hostvars": {
            "localhost": {
                "foo": "$foo"
            }
        }
    }
}
EOF
This way, you'll get a (GUI) prompt.
But I don't think this is desirable anyway, since it can fail in a hundred ways. Maybe you could try an alternate approach, or tell us what you're trying to achieve.
Alternate approaches
use vars files, filled in by users or a script
use ansible -e command-line options, possibly wrapped in a bash script that reads the vars, optionally with Zenity if you need a UI (sketched after this list)
let users fill in inventory files (group_vars/whatever can be a directory containing multiple files)
use a lookup with the pipe plugin to read from a script
use a lookup with the env plugin to read vars from environment variables
use Ansible Tower with forms
use vars_prompt (falling back to defaults if nothing is entered), because the playbook is probably the best place to do this
...
Any of these solutions is probably better than hacking around the inventory, which should really stay usable unattended (because you might later run from Tower, because you might execute from cron, because you might use ansible-pull, ...).
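To illustrate the wrapper option from the list above, here is a minimal sketch, assuming a playbook named site.yml (the playbook name and the foo variable are just placeholders):
#!/bin/bash
# Ask for a value with Zenity, then pass it to the playbook as an extra var.
foo=$(zenity --title "Playbook input" --entry --text "Enter value for foo")
ansible-playbook site.yml -e "foo=${foo}"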
Related
I'm new to Golang, and I'm trying out my first CLI application using the Cobra framework.
My plan is to have a few commands with many flags.
These flags don't have to have a value attached to them, since they can simply be -r to restart the device.
Currently I have the following working, but I keep thinking that this cannot be the correct way to do it. So any help is appreciated.
The logic is currently that each flag gets a default value attached to it; I then look for that value in the Run function and trigger my function once it is captured.
My "working code" looks like below. My init function in the command contains the following.
chargerCmd.Flags().StringP("UpdateFirmware", "u", "", "Updeates the firmware of the charger")
chargerCmd.Flags().Lookup("UpdateFirmware").NoOptDefVal = "yes"
chargerCmd.Flags().StringP("reboot", "r", "", "Reboots the charger")
chargerCmd.Flags().Lookup("reboot").NoOptDefVal = "yes"
And the run section looks like this.
Run: func(cmd *cobra.Command, args []string) {
    input, _ := cmd.Flags().GetString("UpdateFirmware")
    if input == "yes" {
        fmt.Println("Updating firmware")
        UpdateFirmware(os.Getenv("Test"), os.Getenv("Test2"))
    }
    input, _ = cmd.Flags().GetString("reboot")
    if input == "yes" {
        fmt.Println("Rebooting Charger")
    }
},
Maybe to make the usage a bit cleaner, as stated in the comment from Burak, you can better differentiate between commands and flags. With Cobra you have the root command and sub-commands attached to the root command. Additionally, each command can accept flags.
In your case, charger is the root command and you want two sub-commands: update_firmware and reboot.
So as an example to reboot the charger, you would execute the command:
$ charger reboot
In the code above, you are trying to define sub-commands as flags, which is possible, but likely not good practice.
Instead, the project should be set-up something like this: https://github.com/hesamchobanlou/stackoverflow/tree/main/74934087
You can then move the UpdateFirmware(...) operation within the respective command definition under cmd/update_firmware.go instead of trying to check each flag variation on the root chargerCmd.
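For illustration, a minimal sketch of what cmd/reboot.go could look like (the file layout and names are assumptions based on the linked example, and chargerCmd is assumed to be defined elsewhere in the same package):
package cmd

import (
    "fmt"

    "github.com/spf13/cobra"
)

// rebootCmd is a sub-command of the root charger command.
var rebootCmd = &cobra.Command{
    Use:   "reboot",
    Short: "Reboots the charger",
    Run: func(cmd *cobra.Command, args []string) {
        fmt.Println("Rebooting Charger")
    },
}

func init() {
    // Attach the sub-command to the root command.
    chargerCmd.AddCommand(rebootCmd)
}
Running it then looks like the $ charger reboot example above.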
If that does not help, could you provide some more details on why you think your approach might not be correct?
I am writing a script that takes care of running a Terraform configuration and creating the infra. I have a requirement to take output from Terraform into the same script in order to create a schema for the DB. I need to take the endpoint, username, password, and DB name as input into the script to log in to the DB and create the schema; that is, I need to take the output from the aws_db_instance that Terraform has already created and pass it as input into the bash script.
Any help would be really appreciated as to how we can achieve this, thanks in advance. Below is the schema code that I would be using in the script; it needs those inputs from Terraform.
RDS_MYSQL_USER="Username"
RDS_MYSQL_PASS="password"
RDS_MYSQL_BASE="DB-Name"
mysql -h "$RDS_MYSQL_ENDPOINT" -P "$PORT" -u "$RDS_MYSQL_USER" -p"$RDS_MYSQL_PASS" -D "$RDS_MYSQL_BASE" -e 'quit'
The usual way to export particular values from a Terraform configuration is to declare Output Values.
In your case it seems like you want to export several of the result attributes from aws_db_instance, which you could do with declarations like the following in your root module:
output "mysql_host" {
value = aws_db_instance.example.address
}
output "mysql_port" {
value = aws_db_instance.example.port
}
output "mysql_username" {
value = aws_db_instance.example.username
}
output "mysql_password" {
value = aws_db_instance.example.password
sensitive = true
}
output "mysql_database_name" {
value = aws_db_instance.example.name
}
After you run terraform apply you should see Terraform report the final values for each of these, with the password hidden behind (sensitive value) because I declared it with sensitive = true.
Once that's worked, you can use the terraform output command with its -raw option to retrieve these values in a way that's more convenient to use in a shell script. For example, if you are using a Bash-like shell:
MYSQL_HOST="$(terraform output -raw mysql_host)"
MYSQL_PORT="$(terraform output -raw mysql_port)"
MYSQL_USERNAME="$(terraform output -raw mysql_username)"
MYSQL_PASSWORD="$(terraform output -raw mysql_password)"
MYSQL_DB_NAME="$(terraform output -raw mysql_database_name)"
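With those variables set, the mysql invocation from the question can consume them directly, for example:
mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" -D "$MYSQL_DB_NAME" -e 'quit'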
Each run of terraform output will need to retrieve the latest state snapshot from your configured backend, so running it five times might be slow if your chosen backend has a long round-trip time. You could potentially optimize this by installing separate software like jq to parse the terraform output -json result, retrieving all of the values with a single command; a sketch follows. There are some further examples in terraform output: Use in Automation.
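A minimal sketch of that single-call approach, assuming the output names declared above and jq available on PATH:
# One state retrieval instead of five; jq extracts each output's value.
OUTPUTS="$(terraform output -json)"
MYSQL_HOST="$(jq -r '.mysql_host.value' <<<"$OUTPUTS")"
MYSQL_PORT="$(jq -r '.mysql_port.value' <<<"$OUTPUTS")"
MYSQL_USERNAME="$(jq -r '.mysql_username.value' <<<"$OUTPUTS")"
MYSQL_PASSWORD="$(jq -r '.mysql_password.value' <<<"$OUTPUTS")"
MYSQL_DB_NAME="$(jq -r '.mysql_database_name.value' <<<"$OUTPUTS")"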
Summary:
I am writing a script to check if a directory exists on a remote machine. I need a solution that can check for that directory and return the result in a usable way. This whole process is automated within a much larger script, so I need a functional way to tell the parent script whether the directory exists or not.
Restrictions:
The tools that I have are limited. $REMOTE_2 can only be accessed through $REMOTE_1. Also, $REMOTE_2 can only be connected to via telnet (no ssh available).
Current goal:
I am trying to set a local variable that is then read back to choose a return code. I am open to other options, but this is the closest I've come to a working solution so far.
I realize that $found will be taken from the parent process, but this is not my desired result, and I am not sure what syntax I need so that echoing the $found variable returns true or false.
/usr/bin/expect<<EOF
spawn ssh $USER#$REMOTE_1
expect "*$USER*"
send -- "telnet $REMOTE_2\r"
expect "*login:*"
send -- "root\r"
expect "*$*"
# Everything prior to this can't be changed. Everything after it can be.
send "if \[ -d $DIRECTORY_LOCATION \] ; then found=true; else found=false ; fi\r"
send -- "echo **$found**\r"
expect {
    "*true*" {
        exit 0
    }
    "*false*" {
        exit 1
    }
}
EOF
I believe this type of solution can work, but I am not sure how to use the remote variable that I store within the if statement later on to choose which return code to use.
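One hedged sketch of a possible fix, for what it's worth: inside an unquoted <<EOF heredoc the local shell expands $found (to nothing), and expect/Tcl would also try to substitute it, so the dollar sign has to survive two layers of expansion for the literal string to reach the remote shell. Only the echo line would change:
# \\\$ survives the heredoc as \$, which Tcl sends as a literal $,
# so the remote shell is the one that expands $found.
send -- "echo **\\\$found**\r"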
I'm setting up a small project in Ansible, with a node shared with other projects. This node is the CI runner, and it should rarely be the target of a playbook.
I want to exclude a group from all by default.
The current solution that I have is to have a group called bystanders and exclude it from every playbook that runs against all:
hosts:
[groupA]
node1

[bystanders]
ci-node

playbook_example:
- hosts: all:!bystanders
  ...
But this is error-prone: forgetting to exclude the group in some playbook means inadvertently running that playbook on the CI node.
I asked this question somewhere else, and Dynamic inventory scripts were mentioned.
A dynamic inventory returns 'all' and 'ungrouped', so we can manipulate what these groups contain with a dynamic inventory script:
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {}
}
However, in that conversation it was mentioned that 'all' is a bit of an anti-pattern, and avoiding it might be a good idea in the first place. 'all' means all, and in this case nothing project-specific should use 'all'.
So I think this answers the question for me. I will avoid the use of all, and in case I really need to do this, I will go with a dynamic inventory script.
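In practice, avoiding all just means the playbook example above targets the project group explicitly, e.g. (group name taken from the inventory above):
playbook_example:
- hosts: groupA
  ...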
I'm trying to work out the best way to set some environment variables with puppet.
I could use exec and just do export VAR=blah. However, that would only last for the current session. I also thought about adding it onto the end of a file such as .bashrc. However, I don't think there is a reliable method to check whether it is already there, so it would end up getting added with every run of Puppet.
I would take a look at this related question.
*.sh scripts in /etc/profile.d are read at user-login time (as the post says, at the same time /etc/profile is sourced).
Variables export-ed in any script placed in /etc/profile.d will therefore be available to your users.
You can then use a file resource to ensure this action is idempotent. For example:
file { "/etc/profile.d/my_test.sh":
content => 'export MYVAR="123"'
}
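After a Puppet run, any new login shell picks the variable up, e.g.:
$ source /etc/profile.d/my_test.sh   # or simply log in again
$ echo $MYVAR
123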
Or, an alternate means to an idempotent result:
Example
# Append the export only if no PINTO_HOME line is present yet.
if ! grep -q PINTO_HOME /root/.bashrc ; then
    echo "export PINTO_HOME=/opt/local/pinto" >> /root/.bashrc
fi
This option permits the environment variable to be set when the presence of the pinto application makes it warranted, rather than having to compose a user's .bash_profile regardless of what applications may wind up on the box.
If you add it to your .bashrc, you can check that it's in the ENV hash (from Ruby, which Puppet runs on) by doing
ENV['VAR']
which will return => "blah".
If you take a look at GitHub's Boxen, they source a script (/opt/boxen/env.sh) from ~/.profile. This script runs a bunch of stuff, including:
for f in $BOXEN_HOME/env.d/*.sh ; do
  if [ -f $f ] ; then
    source $f
  fi
done
These scripts, in turn, set environment variables for their respective modules.
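For instance, a hypothetical $BOXEN_HOME/env.d/go.sh dropped in by a Go module might contain nothing more than:
# Illustrative only; the file name and variable are made up.
export GOPATH=/opt/boxen/data/go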
If you want the variables to affect all users /etc/profile.d is the way to go.
However, if you want them for a specific user, something like .bashrc makes more sense.
In response to "I don't think there is a reliable method to check whether it is already there, so it would end up getting added with every run of Puppet", there is now a file_line resource available from the puppetlabs stdlib module:
"Ensures that a given line is contained within a file. The implementation matches the full line, including whitespace at the beginning and end. If the line is not contained in the given file, Puppet appends the line to the end of the file to ensure the desired state. Multiple resources can be declared to manage multiple lines in the same file."
Example:
file_line { 'sudo_rule':
  path => '/etc/sudoers',
  line => '%sudo ALL=(ALL) ALL',
}

file_line { 'sudo_rule_nopw':
  path => '/etc/sudoers',
  line => '%sudonopw ALL=(ALL) NOPASSWD: ALL',
}