How to run Ansible from Linux to deploy on Windows machines

Here is what I have after setting up Kerberos according to the Ansible docs:
http://docs.ansible.com/ansible/intro_windows.html
[libdefaults]
default_realm = MY.DOMAIN.COM
…
[realms]
MY.DOMAIN.COM = {
default_domain = my.domain.com
kdc = <domain-controller-server>.my.domain.com
kpasswd_server = <domain-controller-server>.my.domain.com
}
…
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
…
I was able to create a kerberos ticket, here is my output:
root@alex-VirtualBox:/etc/ansible# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: <user_name>@MY.DOMAIN.COM
Valid starting Expires Service principal
04/07/2016 13:58:52 04/07/2016 23:58:52 krbtgt/MY.DOMAIN.COM@MY.DOMAIN.COM
renew until 04/08/2016 13:58:48
04/07/2016 14:02:20 04/07/2016 23:58:52 HTTP/<windows-target-server>.my.domain.com@MY.DOMAIN.COM
renew until 04/08/2016 13:58:48
So what I am trying to do is run an Ansible playbook, or even a simple command, on the Windows target. But I am getting this error, which I am pretty sure has nothing to do with Ansible:
root@alex-VirtualBox:/etc/ansible# ansible windows -m win_ping --ask-vault-pass
Vault password:
<windows-target-server>.my.domain.com | FAILED! => {
"failed": true,
"msg": "kerberos: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377)), plaintext: 401 Unauthorized."
}
I even went ahead and created the keytab file:
> ktutil
ktutil: addent -password -p <user_name>@MY.DOMAIN.COM -k 1 -e rc4-hmac
provide password
ktutil: wkt <user_name>.keytab
ktutil: quit
But then I get a different error:
root@alex-VirtualBox:/etc/ansible# ansible windows -m win_ping --ask-vault-pass
n2-2wbp-wbsvr01.na.msds.rhi.com | FAILED! => {
"failed": true,
"msg": "kerberos: (('An invalid name was supplied', 131072), ('Success', 100001)), plaintext: 401 Unauthorized."
}

Try putting the IP and hostname of your Windows host in the /etc/hosts file and then try again: https://github.com/diyan/pywinrm/issues/21#issuecomment-58958732 , https://github.com/diyan/pywinrm/issues/21#issuecomment-59084178
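For example, an entry in the control node's /etc/hosts might look like this (the address 10.0.0.15 is hypothetical; use your target's real IP):
10.0.0.15    <windows-target-server>.my.domain.com    <windows-target-server>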
PS:
'Server not found in Kerberos database' - That usually means that the Linux host where you're running kinit is not joined to the domain (i.e., it doesn't have a properly configured computer account in the domain). The existing docs unhelpfully omit that requirement...
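For reference, one common way to join the Linux host to the domain is via realmd (a sketch assuming realmd/sssd are installed; the realm name follows the krb5.conf above):
realm join -U <user_name> MY.DOMAIN.COM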

Related

How can I get the private IP into a conf file with Ansible?

I have this role task file in roles/make_elasticsearch_conf/tasks/main.yml:
---
# tasks file for make_elasticsearch_conf
#
- name: Get private IP address
  command:
    cmd: "hostname -I | awk '{print $2}'"
  register: "cluster_ip"

- name: Create /etc/elasticsearch/elasticsearch.yml File
  ansible.builtin.template:
    src: elasticsearch.yml.j2
    dest: /etc/elasticsearch/elasticsearch.yml
I also have a template in roles/make_elasticsearch_conf/templates/elasticsearch.yml.j2:
cluster.name: {{ ansible_host }}
node.name: {{ ansible_host }}
network.host: {{ cluster_ip }}
I use it in this make_elastic_search_conf.yml playbook:
---
- name: Make Elastic Search Config.
  hosts: all
  become: True
  gather_facts: True
  roles:
    - roles/make_elasticsearch_conf
When I run my playbook I get this error:
FAILED! => {"changed": true, "cmd": ["hostname", "-I", "|", "awk", "{print $2}"], "delta": "0:00:00.006257", "end": "2022-12-06 21:54:47.612238", "msg": "non-zero return code", "rc": 255, "start": "2022-12-06 21:54:47.605981", "stderr": "Usage: hostname [-b] {hostname|-F file} set host name (from file)\n hostname [-a|-A|-d|-f|-i|-I|-s|-y] display formatted name\n hostname display host name\n\n {yp,nis,}domainname {nisdomain|-F file} set NIS domain name (from file)\n {yp,nis,}domainname display NIS domain name\n\n dnsdomainname display dns domain name\n\n hostname -V|--version|-h|--help print info and exit\n\nProgram name:\n {yp,nis,}domainname=hostname -y\n dnsdomainname=hostname -d\n\nProgram options:\n -a, --alias alias names\n -A, --all-fqdns all long host names (FQDNs)\n -b, --boot set default hostname if none available\n -d, --domain DNS domain name\n -f, --fqdn, --long long host name (FQDN)\n -F, --file read host name or NIS domain name from given file\n -i, --ip-address addresses for the host name\n -I, --all-ip-addresses all addresses for the host\n -s, --short short host name\n -y, --yp, --nis NIS/YP domain name\n\nDescription:\n This command can get or set the host name or the NIS domain name. You can\n also get the DNS domain or the FQDN (fully qualified domain name).\n Unless you are using bind or NIS for host lookups you can change the\n FQDN (Fully Qualified Domain Name) and the DNS domain name (which is\n part of the FQDN) in the /etc/hosts file.", "stderr_lines": ["Usage: hostname [-b] {hostname|-F file} set host name (from file)", " hostname [-a|-A|-d|-f|-i|-I|-s|-y] display formatted name", " hostname display host name", "", " {yp,nis,}domainname {nisdomain|-F file} set NIS domain name (from file)", " {yp,nis,}domainname display NIS domain name", "", " dnsdomainname display dns domain name", "", " hostname -V|--version|-h|--help print info and exit", "", "Program name:", " {yp,nis,}domainname=hostname -y", " dnsdomainname=hostname -d", "", "Program options:", " -a, --alias alias names", " -A, --all-fqdns all long host names (FQDNs)", " -b, --boot set default hostname if none available", " -d, --domain DNS domain name", " -f, --fqdn, --long long host name (FQDN)", " -F, --file read host name or NIS domain name from given file", " -i, --ip-address addresses for the host name", " -I, --all-ip-addresses all addresses for the host", " -s, --short short host name", " -y, --yp, --nis NIS/YP domain name", "", "Description:", " This command can get or set the host name or the NIS domain name. You can", " also get the DNS domain or the FQDN (fully qualified domain name).", " Unless you are using bind or NIS for host lookups you can change the", " FQDN (Fully Qualified Domain Name) and the DNS domain name (which is", " part of the FQDN) in the /etc/hosts file."], "stdout": "", "stdout_lines": []}
I have tried all sorts of ways to get the private IP of the host, but nothing I have tried gave the expected result.
The problem is caused by the pipe (|).
As per the documentation of the command module:
If you want to run a command through the shell (say you are using <, >, |, and so on), you actually want the ansible.builtin.shell module instead. Parsing shell metacharacters can lead to unexpected commands being executed if quoting is not done correctly, so it is more secure to use the command module when possible.
You might want to use the shell module, or, better, get the IP address from the Ansible facts.
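For illustration, a minimal sketch of the shell-module fix (assuming, as in the original task, that the second address printed by hostname -I is the private one):
- name: Get private IP address
  ansible.builtin.shell:
    cmd: "hostname -I | awk '{print $2}'"
  register: cluster_ip
Note that register stores a whole result object, so the template should reference {{ cluster_ip.stdout }} rather than {{ cluster_ip }}. With the facts approach no extra task is needed at all: since the play already gathers facts, the template can use {{ ansible_default_ipv4.address }} directly for network.host.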

Run Ansible playbook on OVH cloud instance with Terraform Cloud

I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I create the keys as variables. When I run the Terraform plan, the remote-exec provisioner works, and connects to the instance as it should. However, the local-exec fails with a Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]
    connection {
      host        = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
      type        = "ssh"
      user        = "ubuntu"
      private_key = var.pvt_key
    }
  }
  provisioner "local-exec" {
    command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
  }
}
Terraform cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host, and passing the paths to them to Ansible instead.
The variables are still supplied via TFCloud, but without the heredoc syntax.
I had to add an extra new line \n at the end of the key to get around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
resource "local_sensitive_file" "key_file" {
content = "${var.pvt_key}\n"
filename = "${path.root}/.ssh/key"
file_permission = "600"
directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
content = "${var.pub_key}\n"
filename = "${path.root}/.ssh/key.pub"
file_permission = "644"
directory_permission = "700"
}

Ansible - Gather Facts failed on Windows DC

I'm currently trying to create update jobs for Windows Servers, which mostly works. But on all my DCs (except one; I don't know why that one is working), gathering facts fails with this error message:
fatal: [hostname]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"setup": {"exception": "Access denied \r\nAt line:63 char:44\r\n+ ... e_name] = $(Get-CimInstance -Namespace $namespace -ClassName $instanc ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n + CategoryInfo : PermissionDenied: (Root\\CIMV2:Win3...erConfiguration:String) [Get-CimInstance], CimException\r\n + FullyQualifiedErrorId : HRESULT 0x80041003,Microsoft.Management.Infrastructure.CimCmdlets.GetCimInstanceCommand\r\n\r\nScriptStackTrace:\r\nat Get-LazyCimInstance, <No file>: line 63\r\nat <ScriptBlock>, <No file>: line 142\r\n\r\nMicrosoft.Management.Infrastructure.CimException: Access denied \r\n at Microsoft.Management.Infrastructure.Internal.Operations.CimAsyncObserverProxyBase`1.ProcessNativeCallback(OperationCallbackProcessingContext callbackProcessingContext, T currentItem, Boolean moreResults, MiResult operationResult, String errorMessage, InstanceHandle errorDetailsHandle)", "failed": true, "msg": "Unhandled exception while executing module: Access denied "}}, "msg": "The following modules failed to execute: setup\n"}
Executing commands on those DCs works; only gathering facts fails. On non-DCs gathering facts works.
Does anyone have an idea what could be the problem?
Fixed it by updating Ansible to a version newer than 2.9.
With ansible 2.10 or ansible 4 it is working.
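For reference, a typical upgrade on a pip-based control node (assuming Ansible was installed with pip) is:
pip install --upgrade 'ansible>=4'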

Unable to put Vault UI in https

I'm trying to run Vault on a CRC OpenShift 4.7 cluster with Helm 3, but I have some problems when I try to enable HTTPS for the UI.
Add the hashicorp repo:
helm repo add hashicorp https://helm.releases.hashicorp.com
Install the latest version of Vault:
[tim@localhost config]$ helm install vault hashicorp/vault \
> --namespace vault-project \
> --set "global.openshift=true" \
> --set "server.dev.enabled=true"
Then I run oc get pods:
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
I run an interactive shell session with the vault-project-0 pod:
oc rsh vault-project-0
Then I initialize Vault:
/ $ vault operator init --tls-skip-verify -key-shares=1 -key-threshold=1
Unseal Key 1: iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Initial Root Token: s.xVb0DvIMQRYam7oS2C0ZsHBC
Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated master key. Without at least 1 key to
reconstruct the master key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
Export the token:
export VAULT_TOKEN=s.xVb0DvIMQRYam7oS2C0ZsHBC
Unseal Vault:
/ $ vault operator unseal --tls-skip-verify iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.6.2
Storage Type file
Cluster Name vault-cluster-21448fb0
Cluster ID e4d4649f-2187-4682-fbcb-4fc175d20a6b
HA Enabled false
I check the pods:
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 1/1 Running 0 35m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 35m
 
I'm able to get the UI without HTTPS.
In the OpenShift console, I switch to the Administrator mode, and this is what I've done:
Networking part
- Routes > Create routes
Name : vault-route
Hostname : 192.168.130.11
Path :
Service : vault
Target Port : 8200 -> 8200 (TCP)
Now, if I check the URL http://192.168.130.11/ui, the UI is available.
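For reference, the same route can also be created from the CLI; a sketch matching the console values above:
oc expose service vault --name=vault-route --hostname=192.168.130.11 --port=8200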
 
In order to enable HTTPS, I've followed the steps here:
https://www.vaultproject.io/docs/platform/k8s/helm/examples/standalone-tls
But I've changed the K8s commands to their OpenShift equivalents.
# SERVICE is the name of the Vault service in Kubernetes.
# It does not have to match the actual running service, though it may help for consistency.
SERVICE=vault-server-tls
# NAMESPACE where the Vault service is running.
NAMESPACE=vault-project
# SECRET_NAME to create in the Kubernetes secrets store.
SECRET_NAME=vault-server-tls
# TMPDIR is a temporary working directory.
TMPDIR=/tmp
Then:
openssl genrsa -out ${TMPDIR}/vault.key 2048
Then create the csr.conf file:
[tim#localhost tmp]$ cat csr.conf
[req]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = vault-project
DNS.2 = vault-project.vault-project
DNS.3 = *.apps-crc.testing
DNS.4 = *.api.crc.testing
IP.1 = 127.0.0.1
Create the CSR:
openssl req -new -key ${TMPDIR}/vault.key -subj "/CN=${SERVICE}.${NAMESPACE}.apps-crc.testing" -out ${TMPDIR}/server.csr -config ${TMPDIR}/csr.conf
Create the csr.yaml file:
$ export CSR_NAME=vault-csr
$ cat <<EOF >${TMPDIR}/csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  groups:
  - system:authenticated
  request: $(cat ${TMPDIR}/server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Send the CSR to OpenShift:
oc create -f ${TMPDIR}/csr.yaml
Approve the CSR:
oc adm certificate approve ${CSR_NAME}
Retrieve the certificate:
serverCert=$(oc get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
Write the certificate out to a file:
echo "${serverCert}" | openssl base64 -d -A -out ${TMPDIR}/vault.crt
Retrieve the OpenShift CA:
oc config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d > ${TMPDIR}/vault.ca
Store the key, cert, and OpenShift CA into Kubernetes secrets:
oc create secret generic ${SECRET_NAME} \
    --namespace ${NAMESPACE} \
    --from-file=vault.key=/home/vault/certs/vault.key \
    --from-file=vault.crt=/home/vault/certs/vault.crt \
    --from-file=vault.ca=/home/vault/certs/vault.ca
The command oc get secret | grep vault:
NAME TYPE DATA AGE
vault-server-tls Opaque 3 4h15m
Edit my vault-config with the oc edit cm vault-config command:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true
    listener "tcp" {
      tls_cert_file = "/vault/certs/vault.crt"
      tls_key_file = "/vault/certs/vault.key"
      tls_client_ca_file = "/vault/certs/vault.ca"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-15T13:47:24Z"
  name: vault-config
  namespace: vault-project
  resourceVersion: "396958"
  selfLink: /api/v1/namespaces/vault-project/configmaps/vault-config
  uid: 844603a1-b529-4e33-9d58-20525ea7bff
Edit the volumeMounts, volumes, and VAULT_ADDR parts of my StatefulSet:
volumeMounts:
  - mountPath: /home/vault
    name: home
  - mountPath: /vault/certs
    name: certs
volumes:
  - configMap:
      defaultMode: 420
      name: vault-config
    name: config
  - emptyDir: {}
    name: home
  - name: certs
    secret:
      defaultMode: 420
      secretName: vault-server-tls
env:
  - name: VAULT_ADDR
    value: https://127.0.0.1:8200
I delete my pod so that all my changes are taken into account:
oc delete pods vault-project-0
And...
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
vault-project-0 is at 0/1 but Running. If I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 1s (x6 over 26s) kubelet Readiness probe failed: Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client
I think that I've missed something, but I don't know what...
Can someone tell me how to enable HTTPS for the Vault UI with OpenShift?

Playbook is unable to create VM running into fatal error

We have a playbook which creates a VM. The first time I ran the infrastructure pipeline, the VM was created.
After making some "sku"-related changes in the playbook and running the pipeline again, I get the error below.
2020-11-09T09:42:37.7488228Z TASK [ansible-role-adfv2-shir : Install Java Runtime Environment] **************
2020-11-09T09:42:37.7489504Z task path: /opt/ansible-roles/cloud/2020.10-212/ansible-role-adfv2-shir/tasks/install.yml:24
2020-11-09T09:42:37.7491381Z Monday 09 November 2020 04:42:37 -0500 (0:00:31.732) 0:12:49.681 *******
2020-11-09T09:42:37.8258708Z Using module file /home/cvx_admin_user/.ansible-virtualenv/lib/python2.7/site-packages/ansible/modules/windows/win_package.ps1
2020-11-09T09:42:37.8261046Z <10.71.116.128> ESTABLISH WINRM CONNECTION FOR USER: cvx_admin_user on PORT 5986 TO 10.71.116.128
2020-11-09T09:42:37.8261724Z checking if winrm_host 10.71.116.128 is an IPv6 address
2020-11-09T09:42:37.8262337Z <10.71.116.128> WINRM CONNECT: transport=ssl endpoint=https://10.71.116.128:5986/wsman
2020-11-09T09:42:37.9552112Z <10.71.116.128> WINRM OPEN SHELL: 6450AAB3-A367-49DA-B034-B197FC2A464D
2020-11-09T09:42:37.9555186Z EXEC (via pipeline wrapper)
2020-11-09T09:42:37.9558598Z <10.71.116.128> WINRM EXEC 'PowerShell' ['-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted', '-']
2020-11-09T09:43:15.3305027Z <10.71.116.128> WINRM RESULT u'<Response code 0, out "{"stdout":"","rc":16", err "">'
2020-11-09T09:43:15.3361619Z <10.71.116.128> WINRM CLOSE SHELL: 6450AAB3-A367-49DA-B034-B197FC2A464D
2020-11-09T09:43:15.3500563Z fatal: [corest-tsir00]: FAILED! => {
2020-11-09T09:43:15.3530075Z "changed": false,
2020-11-09T09:43:15.3560075Z "exit_code": 1618,
2020-11-09T09:43:15.3560924Z "msg": "unexpected rc from install C:\\ExeSources\\jre8u191windowsx64.exe: see rc, stdout and stderr for more details",
2020-11-09T09:43:15.3561602Z "rc": 1618,
2020-11-09T09:43:15.3562066Z "reboot_required": false,
2020-11-09T09:43:15.3562571Z "restart_required": false,
2020-11-09T09:43:15.3563039Z "stderr": "",
2020-11-09T09:43:15.3563488Z "stderr_lines": [],
2020-11-09T09:43:15.3563938Z "stdout": "",
2020-11-09T09:43:15.3564377Z "stdout_lines": []
2020-11-09T09:43:15.3564820Z }
I am not sure why I am getting this error. Please help me out.
Thanks!
Your return code for jre8u191windowsx64.exe is 1618, which means ERROR_INSTALL_ALREADY_RUNNING. The MSI error code page tells us that:
Another installation is already in progress. Complete that installation before proceeding with this install.
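Since rc 1618 only means that another installer currently holds the Windows Installer lock, one common workaround (a hedged sketch, not from the original answer; the task name and path follow the log above) is to retry the task until the lock clears:
- name: Install Java Runtime Environment
  ansible.windows.win_package:
    path: C:\ExeSources\jre8u191windowsx64.exe
    state: present
  register: jre_install
  retries: 5
  delay: 60
  until: jre_install is succeeded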
