How to pass environment variable to sh script? - ansible

I want to create an environment variable in a Jenkinsfile which will consist of the current workspace (using the environment variable WORKSPACE). Then I want to pass this new variable to the sh script in the next stage.
I've tried declaring the variable in the following way:
environment {
    artefact_path = "${env.WORKSPACE}/temp/unzipped/${artefact_name}/dev"
}
But after passing it to the sh script:
sh "ansible-playbook -i inventory playbook.yml -e \"artefact_path=${env.artefact_path}\""
I get the following output:
+ ansible-playbook -i inventory playbook.yml -e 'artefact_path=C:\nowy_dir\workspace\something/temp/unzipped/something/dev'
PLAY [play] **************************************************************
TASK [task] *********************************************************
fatal: [host]: FAILED! => {"changed": false, "dest": "D:/inetpub/something", "msg": "Get-AnsibleParam: Parameter 'src' has an invalid path 'C:\nowy_dir\\workspace\\something/temp/unzipped/something/dev' specified.", "src": "C:\nowy_dir\\workspace\\something/temp/unzipped/something/dev"}
to retry, use: --limit @/etc/ansible/ansible-scripts/playbook.retry
As you can see, the variable is passed with two backslashes instead of one. How can I prevent this from happening?
EDIT 1
I decided to change the way in which I declare the variable to:
def get_deploy_path() {
    def site = get_site()
    def wspace = "${env.WORKSPACE}"
    def deploy_path = wspace.toString() + "\\temp\\snapshot\\" + site + "\\"
    return deploy_path
}
environment {
    deploy_path = get_deploy_path()
}
Now I have a problem with the workspace. I'm operating on two agents (the first is Windows-based and the second is Linux-based). I need the path to be the same both in the stages on Windows and in the ones on Linux. Any idea how I can make that happen?
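One possible direction, not from the original thread and untested: keep only the agent-independent part of the path in the environment block, written with forward slashes, and let each stage prepend its own env.WORKSPACE when it needs an absolute path. A minimal Groovy sketch; the helper name get_relative_deploy_path is mine, not from the question:
def get_relative_deploy_path() {
    // agent-independent part of the path; forward slashes work for the relative part on both agents
    def site = get_site()
    return "temp/snapshot/" + site + "/"
}

environment {
    // each stage combines this as "${env.WORKSPACE}/${env.relative_deploy_path}",
    // so every agent contributes its own workspace prefix
    relative_deploy_path = get_relative_deploy_path()
}
Whether the Windows-side modules accept the resulting mixed separators depends on the win_* module in use, so treat this only as a sketch.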

Related

Run Ansible playbook on OVH cloud instance with Terraform Cloud

I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I create the keys as variables. When I run the Terraform plan, the remote-exec provisioner works, and connects to the instance as it should. However, the local-exec fails with a Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]
    connection {
      host        = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
      type        = "ssh"
      user        = "ubuntu"
      private_key = var.pvt_key
    }
  }
  provisioner "local-exec" {
    command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
  }
}
Terraform cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host, and passing the paths to them to Ansible instead.
The variables are still supplied via TFCloud, but without the heredoc syntax.
I had to add an extra new line \n at the end of the key to get around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
resource "local_sensitive_file" "key_file" {
content = "${var.pvt_key}\n"
filename = "${path.root}/.ssh/key"
file_permission = "600"
directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
content = "${var.pub_key}\n"
filename = "${path.root}/.ssh/key.pub"
file_permission = "644"
directory_permission = "700"
}

Jenkinsfile with ansibleplaybook using multiple inventory files

When I need to include multiple inventory files with ansible-playbook on the CLI, I usually use:
ansible-playbook -i inventory/dev-hosts -i inventory/staging-hosts playbooks/run.yml
Now I want to do the same, but in a Jenkinsfile. I'm using the following code:
stage('1') {
    steps {
        ansiColor('xterm') {
            ansiblePlaybook([
                colorized: true,
                inventory: inventory/dev-hosts , inventory/staging-hosts ,
                playbook: playbooks/run.yml
            ])
        }
    }
}
But this is not working, and I can't find information on how to do this in a Jenkinsfile. Any idea how to solve it?
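One possible direction (untested, not from the original thread): the ansiblePlaybook step appears to accept a single inventory path, so the second inventory could be appended through the plugin's extras parameter, assuming the installed plugin version supports it:
stage('1') {
    steps {
        ansiColor('xterm') {
            ansiblePlaybook([
                colorized: true,
                inventory: 'inventory/dev-hosts',
                // additional raw CLI flags passed straight to ansible-playbook
                extras: '-i inventory/staging-hosts',
                playbook: 'playbooks/run.yml'
            ])
        }
    }
}
If that is not available, a plain sh 'ansible-playbook -i inventory/dev-hosts -i inventory/staging-hosts playbooks/run.yml' step inside the stage is the straightforward fallback.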

How to call a variable of string with spaces in a terraform provisioner?

I am trying to run a Terraform provisioner which calls my Ansible playbook, and I am passing a public key as a user-supplied variable. When the public key is passed, it does not take the entire key; only the leading ssh-rsa arrives, not the complete string.
I want to pass the complete string as "ssh-rsa Aghdgdhfghjfdh"
The provisioner in Terraform which I am running is:
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = "sleep 30 && ansible-playbook ../../../../ansible/create-user.yml --private-key ${path.module}/${var.project_name}.pem -vvv -u ubuntu -e 'username=${var.username}' -e 'user_key=${var.user_key}' -i ${var.bastion_public_ip}, -e 'root_shell=/bin/rbash' -e 'raw_password=${random_string.bastion_password.result}'"
}
}
If I run the playbook alone as:
ansible-playbook -i localhost create-user.yml --user=ubuntu --private-key=kkk000.pem -e "username=kkkkk" -e 'user_key='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+GWlljlLzW6DOEo"' -e root_shell="/bin/bash"
it works.
But I want the string to be in a Terraform variable which is passed in the provisioner.
I want the key copied to a file as
ssh-rsa AWRDkj;jfdljdfldkf'sd.......
and not just
ssh-rsa
You are getting bitten by the -e key=value splitting that goes on with the command-line --extra-vars interpretation. What you really want is to feed -e some JSON text, to stop it from trying to split on whitespace. That will also come in handy for sufficiently complicated random-string passwords, which would otherwise produce a very bad outcome when passed on the command line.
Thankfully, there is a jsonencode() function that will help you with that problem:
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = <<SH
set -e
sleep 30
ansible -vvv -i localhost, -c local -e '${jsonencode({
"username"="${var.username}",
"user_key"="${var.user_key}",
"raw_password"="${random_string.bastion_password.result}",
})}' -m debug -a var=vars all
SH
}
}
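Adapting that back to the original create-user.yml invocation could look roughly like the following (an untested sketch; the paths, variables, and resource names are taken from the question):
resource "null_resource" "bastion_user_provisioner" {
  provisioner "local-exec" {
    command = <<SH
set -e
sleep 30
ansible-playbook ../../../../ansible/create-user.yml -vvv -u ubuntu \
  --private-key '${path.module}/${var.project_name}.pem' \
  -i '${var.bastion_public_ip},' \
  -e '${jsonencode({
    "username"     = "${var.username}",
    "user_key"     = "${var.user_key}",
    "root_shell"   = "/bin/rbash",
    "raw_password" = "${random_string.bastion_password.result}"
  })}'
SH
  }
}
The single quotes around the jsonencode() output keep the shell from splitting on the spaces inside user_key; this still assumes none of the values contain a single quote.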

How do I pass a dictionary to an ansible ad-hoc command?

If I have an ansible ad-hoc command that wants a dictionary or list valued argument, like the queries argument to postgresql_query, how do I invoke that in ansible ad-hoc commands?
Do I have to write a one-command playbook? I'm looking for a way to minimise the number of layers of confusing quoting (shell, YAML/JSON, etc.) involved.
The Ansible docs mention accepting structured forms for variables, so I tried the YAML and JSON syntax for the arguments:
ansible -m postgresql_query -sU postgres -a '{"queries":["SELECT 1", "SELECT 2"]}'
... but got ERROR! this task 'postgresql_query' has extra params, which is only allowed in the following modules: ....
The same is true if I include a file with YAML or JSON contents via @, like
cat > 'queries.yml' <<'__END__'
queries:
- "SELECT 1"
- "SELECT 2"
__END__
ansible -m postgresql_query -sU postgres -a @queries.yml
You can define a dictionary in a JSON-valued extra-var and then reference it in the module arguments:
ansible -m module_name -e '{"dict": {"key": "value"}}' -a "param={{ dict }}"
(the order of the parameters is arbitrary)
I have most of a solution - a way to express something like a shell script or query payload without extra quoting. But it's ugly:
ansible hostname -m postgresql_query -sU postgres -a 'query="{{query}}"' -e @/dev/stdin <<'__END__'
query: |
  SELECT 'no special quotes needed' AS "multiline
  identifier works fine" FROM
  generate_series(1,2)
__END__
Not only is that shamefully awful, but it doesn't seem to work for lists (arrays):
ansible hostname -m postgresql_query -sU postgres -vvv -a 'query="{{query}}"' -e @/dev/stdin <<'__END__'
queries:
  - |
    SELECT 1
  - |
    SELECT 2
__END__
fails with
hostname | FAILED! => {
    "changed": false,
    "err": "syntax error at or near \"<\"\nLINE 1: <bound method Templar._query_lookup of <ansible.template.Tem...\n ^\n",
    "invocation": {
        "module_args": {
            "autocommit": false,
            "conninfo": "",
            "queries": null,
            "query": "<bound method Templar._query_lookup of <ansible.template.Templar object at 0x7f72531c61d0>>"
        }
    },
    "msg": "Database query failed"
}
so it looks like lazy evaluation is at work: the extra-vars file defines queries, but the module argument still references {{query}}, and since query is also the name of Ansible's built-in lookup function, the bare name resolves to that bound method instead of failing as an undefined variable.
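A possible next step (not from the original post, untested) is to keep the variable name and the template reference in sync and avoid the name query, which collides with the built-in lookup seen in the error above:
ansible hostname -m postgresql_query -sU postgres -vvv -a 'queries="{{ my_queries }}"' -e @/dev/stdin <<'__END__'
my_queries:
  - |
    SELECT 1
  - |
    SELECT 2
__END__
Whether the list survives the key=value templating as a real list rather than its string form depends on the Ansible version, so a one-task playbook may still end up being the cleaner answer to the original question.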

Escape chars in Terraform local exec provisioner

I want to chain Terraform and Ansible using the local-exec provisioner; however, since this requires passing input from Terraform to Ansible, I am stuck with the following complex command:
provisioner "local-exec" {
command = 'sleep 60; ansible-playbook -i ../ansible/inventory/ ../ansible/playbooks/site.yml --extra-vars "rancher_server_rds_endpoint="${aws_db_instance.my-server-rds.endpoint}" rancher_server_elastic_ip="${aws_eip.my-server-eip.public_ip}""'
}
which keeps returning an "illegal char" error; any suggestion on how to escape it correctly?
If the ansible-playbook command were run directly in the shell, it would be:
ansible-playbook -i inventory playbooks/site.yml --extra-vars "my_server_rds_endpoint=my-server-db.d30ikkj222.us-west-1.rds.amazonaws.com rancher_server_elastic_ip=88.148.17.236"
(paths differ)
Terraform syntax states that:
Strings are in double-quotes.
So you need to replace the single quotes with double ones and then escape the quotes inside, for example:
provisioner "local-exec" {
command = "sleep 60; ansible-playbook -i ../ansible/inventory/ ../ansible/playbooks/site.yml --extra-vars \"rancher_server_rds_endpoint='${aws_db_instance.my-server-rds.endpoint}' rancher_server_elastic_ip='${aws_eip.my-server-eip.public_ip}'\""
}
The only way I know of that will work for any special characters in the variables is to pass them through environment variables, for example:
provisioner "local-exec" {
command = join(
" ", [
"sleep 60;",
"ansible-playbook -i ../ansible/inventory/",
"../ansible/playbooks/site.yml",
"--extra-vars",
"rancher_server_rds_endpoint=\"$RANCHER_SERVER_RDS_ENDPOINT\"",
"rancher_server_elastic_ip=\"$RANCHER_SERVER_ELASTIC_IP\""
]
)
environment = {
RANCHER_SERVER_RDS_ENDPOINT = aws_db_instance.my-server-rds.endpoint
RANCHER_SERVER_ELASTIC_IP = aws_eip.my-server-eip.public_ip
}
}
