I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I supply the keys as variables. When the run executes, the remote-exec provisioner works and connects to the instance as it should. However, the local-exec provisioner fails with Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]

    connection {
      host        = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
      type        = "ssh"
      user        = "ubuntu"
      private_key = var.pvt_key
    }
  }

  provisioner "local-exec" {
    command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
  }
}
Terraform Cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host and passing their paths to Ansible instead. The underlying mismatch: Terraform's connection block accepts the private key material itself, but ansible-playbook's --private-key option expects a file path, so ssh went looking for a file literally named after the heredoc-wrapped key (visible in the error above).
The variables are still supplied via Terraform Cloud, but without the heredoc syntax.
I had to add an extra newline (\n) at the end of the key to work around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
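For reference, a minimal sketch of the variable declarations this assumes (the sensitive flag on input variables needs Terraform 0.14+):

variable "pvt_key" {
  type      = string
  sensitive = true
}

variable "pub_key" {
  type = string
}

And the key files themselves: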
resource "local_sensitive_file" "key_file" {
content = "${var.pvt_key}\n"
filename = "${path.root}/.ssh/key"
file_permission = "600"
directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
content = "${var.pub_key}\n"
filename = "${path.root}/.ssh/key.pub"
file_permission = "644"
directory_permission = "700"
}
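With the files in place, the local-exec provisioner can hand Ansible the paths instead of the raw key material. A sketch of the adjusted command, reusing the resource names above; pub_key_file is a hypothetical extra-var that assumes the playbook reads the public key from a file:

resource "null_resource" "run_ansible" {
  # remote-exec provisioner unchanged from the question ...

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' --private-key '${local_sensitive_file.key_file.filename}' -e 'pub_key_file=${local_sensitive_file.pubkey_file.filename}' ansible/setup.yml"
  }
}

Referencing the local_sensitive_file attributes, rather than hard-coding the paths, also makes Terraform create the files before the provisioner runs.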
I want to create an environment variable in a Jenkinsfile that consists of the current workspace (using the environment variable WORKSPACE). Then I want to pass this new variable to an sh script in the next stage.
I've tried declaring the variable in the following way:
environment {
    artefact_path = "${env.WORKSPACE}/temp/unzipped/${artefact_name}/dev"
}
But after passing it to the sh script:
sh "ansible-playbook -i inventory playbook.yml -e \"artefact_path=${env.artefact_path}\""
I get the following output:
+ ansible-playbook -i inventory playbook.yml -e 'artefact_path=C:\nowy_dir\workspace\something/temp/unzipped/something/dev'
PLAY [play] **************************************************************
TASK [task] *********************************************************
fatal: [host]: FAILED! => {"changed": false, "dest": "D:/inetpub/something", "msg": "Get-AnsibleParam: Parameter 'src' has an invalid path 'C:\nowy_dir\\workspace\\something/temp/unzipped/something/dev' specified.", "src": "C:\nowy_dir\\workspace\\something/temp/unzipped/something/dev"}
to retry, use: --limit @/etc/ansible/ansible-scripts/playbook.retry
As you can see, the variable is passed with two backslashes instead of one. How can I prevent this from happening?
EDIT 1
I decided to change the way in which I declare the variable to:
def get_deploy_path() {
    def site = get_site()
    def wspace = "${env.WORKSPACE}"
    def deploy_path = wspace.toString() + "\\temp\\snapshot\\" + site + "\\"
    return deploy_path
}

environment {
    deploy_path = get_deploy_path()
}
Now I have a problem with the workspace. I'm operating on two agents (the first is Windows-based, the second Linux-based). I need the path to be the same both in stages on Windows and in stages on Linux. Any idea how I can make that happen?
I am trying to run a Terraform provisioner which calls my Ansible playbook. I am passing the public key as a variable from the user. When the public key is passed, Ansible doesn't receive the entire key, just the leading ssh-rsa, not the complete string.
I want to pass the complete string, as in "ssh-rsa Aghdgdhfghjfdh".
The provisioner in terraform which I am running is :
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = "sleep 30 && ansible-playbook ../../../../ansible/create-user.yml --private-key ${path.module}/${var.project_name}.pem -vvv -u ubuntu -e 'username=${var.username}' -e 'user_key=${var.user_key}' -i ${var.bastion_public_ip}, -e 'root_shell=/bin/rbash' -e 'raw_password=${random_string.bastion_password.result}'"
}
}
If I run the playbook alone, as:
ansible-playbook -i localhost create-user.yml --user=ubuntu --private-key=kkk000.pem -e "username=kkkkk" -e 'user_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+GWlljlLzW6DOEo"' -e root_shell="/bin/bash"
it works.
But I want the string to be in a Terraform variable that is passed to the provisioner.
I want the key copied to a file as
ssh-rsa AWRDkj;jfdljdfldkf'sd.......
and not just
ssh-rsa
You are getting bitten by the -e key=value splitting that goes on with the command-line --extra-vars interpretation. What you really want is to feed -e some JSON text, to stop it from trying to split on whitespace. That will also come in handy for sufficiently complicated random-string passwords, which would otherwise produce a very bad outcome when passed on the command line.
Thankfully, there is a jsonencode() function that will help you with that problem:
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = <<SH
set -e
sleep 30
ansible -vvv -i localhost, -c local -e '${jsonencode({
"username"="${var.username}",
"user_key"="${var.user_key}",
"raw_password"="${random_string.bastion_password.result}",
})}' -m debug -a var=vars all
SH
}
}
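The same trick carries over to the actual ansible-playbook invocation from the question. A sketch reusing the question's variable names, so treat the exact flags as illustrative rather than definitive:

resource "null_resource" "bastion_user_provisioner" {
  provisioner "local-exec" {
    command = <<-SH
      set -e
      sleep 30
      ansible-playbook ../../../../ansible/create-user.yml \
        --private-key '${path.module}/${var.project_name}.pem' \
        -vvv -u ubuntu -i '${var.bastion_public_ip},' \
        -e '${jsonencode({
          username     = var.username,
          user_key     = var.user_key,
          root_shell   = "/bin/rbash",
          raw_password = random_string.bastion_password.result
        })}'
    SH
  }
}

Because everything arrives as one JSON document, the space inside user_key and any shell-hostile characters in the random password survive intact.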
I have the following launch config for an auto-scaling group:
resource "aws_launch_configuration" "ASG-launch-config" {
#name = "ASG-launch-config" # see: https://github.com/hashicorp/terraform/issues/3665
name_prefix = "ASG-launch-config-"
image_id = "ami-a4dc46db" #Ubuntu 16.04 LTS
#image_id = "ami-b70554c8" #Amazon Linux 2
instance_type = "t2.micro"
security_groups = ["${aws_security_group.WEB-DMZ.id}"]
key_name = "MyEC2KeyPair"
#user_data = <<-EOF
# #!/bin/bash
# echo "Hello, World" > index.html
# nohup busybox httpd -f -p "${var.server_port}" &
# EOF
provisioner "file" {
source="script.sh"
destination="/tmp/script.sh"
}
provisioner "remote-exec" {
inline=[
"chmod +x /tmp/script.sh",
"sudo /tmp/script.sh"
]
}
connection {
user="ubuntu"
private_key="${file("MyEC2KeyPair.pem")}"
}
lifecycle {
create_before_destroy = true
}
}
Error: Error applying plan:
1 error(s) occurred:
aws_launch_configuration.ASG-launch-config: timeout - last error: dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
I want to run a bash script to install WordPress on the instances created.
The script runs fine in a resource of type "aws_instance" "example".
How do I troubleshoot this?
Sounds like the instance is denying your traffic. Start up an instance without the provisioning script and see if you can SSH to it using the key you provided. You may want to add verbose logging to the SSH command with -v. Note also that the reported error dials tcp :22 with no host address at all, which suggests the connection block never resolved an address to connect to.
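For comparison, this is roughly the working aws_instance variant the question mentions, with an explicit host so the connection has an address to dial. A sketch, with names carried over from the question's launch configuration:

resource "aws_instance" "example" {
  ami                    = "ami-a4dc46db" # Ubuntu 16.04 LTS, as in the question
  instance_type          = "t2.micro"
  key_name               = "MyEC2KeyPair"
  vpc_security_group_ids = ["${aws_security_group.WEB-DMZ.id}"]

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh"
    ]
  }

  connection {
    host        = "${self.public_ip}" # an instance has an address to dial; a launch configuration does not
    user        = "ubuntu"
    private_key = "${file("MyEC2KeyPair.pem")}"
  }
}

A launch configuration is only a template for instances the ASG creates later, so provisioners attached to it have no running machine to connect to when Terraform creates the resource.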
When I do terraform plan -var-file=../variables.tfvars, everything passes.
But when I run terraform apply -var-file=../variables.tfvars, it gives me this error, and I don't know how to solve it because the directory path is correct.
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.mongodb_server: 1 error(s) occurred:
* Error running command 'sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo "[mongodb]
54.193.20.170" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e "mongodb_password=blahblah" -e "mongodb_user=admin" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml': exit status 127. Output: /bin/sh: 2: ansible-playbook: not found
The code looks like this:
resource "aws_instance" "mongodb_server" {
instance_type = "${lookup(var.mongodb_instance_type_control,
var.target_env)}"
vpc_security_group_ids =
["${aws_security_group.default_internal.id}"]
ami = "${lookup(var.amazon_ami_by_location, var.aws_region)}"
key_name = "${var.key_name}"
subnet_id = "${data.aws_subnet.subnet_a.id}"
tags {
Name = "tf-mongodb-${lookup(var.environment, var.target_env)}"
}
associate_public_ip_address = true
provisioner "local-exec" {
command = "sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo \"[mongodb]\n${aws_instance.mongodb_server.public_ip}\" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e \"mongodb_password=${var.mongodb_default_password}\" -e \"mongodb_user=${var.mongodb_default_username}\" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
}
Output: /bin/sh: 2: ansible-playbook: not found
This is your actual error. terraform plan does not catch it because local-exec commands are not evaluated during a plan.
Do you have Ansible installed on the machine where you are running Terraform? And if it is installed, is it on the PATH?
Try installing Ansible if it's not already installed. If it is, run echo $PATH in your local-exec command and confirm that the directory containing ansible-playbook is on that path.
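If it's unclear what environment the provisioner's shell actually sees, a small preflight resource can surface it before the real command runs. A minimal sketch, not part of the original code (the resource name is made up):

resource "null_resource" "ansible_preflight" {
  provisioner "local-exec" {
    command = <<-SH
      echo "PATH seen by local-exec: $PATH"
      # Abort with a clear message if ansible-playbook is not installed or not on PATH
      command -v ansible-playbook >/dev/null 2>&1 || {
        echo "ansible-playbook not found on the machine running terraform apply" >&2
        exit 1
      }
      ansible-playbook --version
    SH
  }
}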
Here is what I have after setting up Kerberos according to the Ansible docs:
http://docs.ansible.com/ansible/intro_windows.html
[libdefaults]
    default_realm = MY.DOMAIN.COM
    …

[realms]
    MY.DOMAIN.COM = {
        default_domain = my.domain.com
        kdc = <domain-controller-server>.my.domain.com
        kpasswd_server = <domain-controller-server>.my.domain.com
    }
    …

[domain_realm]
    .my.domain.com = MY.DOMAIN.COM
    …
I was able to create a Kerberos ticket; here is my output:
root@alex-VirtualBox:/etc/ansible# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: <user_name>@MY.DOMAIN.COM
Valid starting       Expires              Service principal
04/07/2016 13:58:52  04/07/2016 23:58:52  krbtgt/MY.DOMAIN.COM@MY.DOMAIN.COM
        renew until 04/08/2016 13:58:48
04/07/2016 14:02:20  04/07/2016 23:58:52  HTTP/<windows-target-server>.my.domain.com@MY.DOMAIN.COM
        renew until 04/08/2016 13:58:48
So what I am trying to do is run an Ansible playbook, or even a simple command, on the Windows target. But I am getting this error, which I am pretty sure has nothing to do with Ansible:
root@alex-VirtualBox:/etc/ansible# ansible windows -m win_ping --ask-vault-pass
Vault password:
<windows-target-server>.my.domain.com | FAILED! => {
"failed": true,
"msg": "kerberos: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377)), plaintext: 401 Unauthorized."
}
I even went ahead and created the keytab file:
> ktutil
ktutil: addent -password -p <user_name>@MY.DOMAIN.COM -k 1 -e rc4-hmac
provide password
ktutil: wkt <user_name>.keytab
ktutil: quit
But then I get a different error:
root@alex-VirtualBox:/etc/ansible# ansible windows -m win_ping --ask-vault-pass
n2-2wbp-wbsvr01.na.msds.rhi.com | FAILED! => {
"failed": true,
"msg": "kerberos: (('An invalid name was supplied', 131072), ('Success', 100001)), plaintext: 401 Unauthorized."
}
Try putting the IP and hostname of your Windows host into the /etc/hosts file (e.g. a line like 10.0.0.5 <windows-target-server>.my.domain.com <windows-target-server>, with the address being whatever your target resolves to) and then try again: https://github.com/diyan/pywinrm/issues/21#issuecomment-58958732 , https://github.com/diyan/pywinrm/issues/21#issuecomment-59084178
PS:
'Server not found in Kerberos database' usually means that the Linux host where you're running kinit is not joined to the domain (i.e., it doesn't have a properly configured computer account in the domain). The existing docs unhelpfully omit that requirement...