I have a role that is supposed to start a Windows service, and I get an error that I cannot find any information about on the web.
When I start the service manually, it works like a charm.
The step is this:
- name: "Deploy Component | Start WinService"
win_service:
name: TestingEarth
state: started
when:
- inventory_hostname == '10.11.11.10'
tags:
- release
The error that I get is:
TASK [DeployWinServices : Deploy Component | Start WinService] ***
task path: /home/gitlab-runner/builds/pSVm9HzS/0/narcos/mexico/ansible/roles/DeployWinServices/tasks/deployWinServices.yml:206
Using module file /usr/lib/python2.7/site-packages/ansible/modules/windows/win_service.ps1
Pipelining is enabled.
<10.11.11.10> ESTABLISH WINRM CONNECTION FOR USER: administrator on PORT 5985 TO 10.11.11.10
EXEC (via pipeline wrapper)
The full traceback is:
at Microsoft.Management.Infrastructure.Internal.Operations.CimAsyncObserverProxyBase`1.ProcessNativeCallback(OperationCallbackProcessingContext callbackProcessingContext, T currentItem, Boolean moreResults, MiResult operationResult, String errorMessage, InstanceHandle errorDetailsHandle)
fatal: [10.11.11.10]: FAILED! => {
"changed": false,
"msg": "Unhandled exception while executing module: Invalid class "
}
I would really appreciate assistance with this issue :)
I have created a playbook to create a VM in a VMware vCenter, and it worked fine. I used my localhost as the host to create the VM on that vCenter. Now I am trying to execute a command on the newly created VM, but I am getting a connection error: the message shows Ansible trying to log in via the ESX host on which the new VM is deployed. Please help me resolve this.
My playbook:
---
- name: do opearation on vm
  hosts: rhel66
  vars_files:
    - "playbook_vars.yml"
  tasks:
    - name: Run command inside a vm
      vmware_vm_shell:
        hostname: "{{ vcenter_name }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ vcenter_datacenter }}"
        # folder: AutomationDevelopment/vm/Environments/Env07/rhel66_testvm
        vm_id: "{{ vm_name }}"
        validate_certs: false
        vm_username: root
        vm_password: *******
        vm_shell: /bin/echo
        vm_shell_args: " $var >> myFile "
        vm_shell_env:
          - "PATH=/bin"
          - "VAR=test"
        vm_shell_cwd: "/tmp"
      register: shell_command_output
playbook output:
____________________________
< PLAY [do opearation on vm] >
----------------------------
________________________
< TASK [Gathering Facts] >
------------------------
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: requests.exceptions.ProxyError: HTTPSConnectionPool(host='YY.YY.YY.YY', port=443): Max retries exceeded with url: /guestFile?id=235&token=52009a00-3927-ee12-8e82-bb795a2332c5235 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f4d69e0c0f0>: Failed to establish a new connection: [Errno 113] No route to host',)))
fatal: [XX.XX.XX.XX]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
____________
< PLAY RECAP >
------------
In the above error, XX.XX.XX.XX is the IP of my new VM and YY.YY.YY.YY is the ESX host on which the new VM is deployed.
After dealing with this, and actually answering my own question (asked because there wasn't an answer here), I think your issue is routing: there is no route to the host. What I didn't understand initially was that it tries to connect to the ESX host the VM is resident on, not to vCenter. Hopefully that helps. I use this for Windows VMs (the connection plugin, that is), and it is helpful when trying to install OpenSSH on Windows 2016 VMs. It is also helpful when you need to do things before the network is connected.
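To illustrate the point, here is a minimal sketch of a pre-check you could run before vmware_vm_shell; the esx_host_ip variable is hypothetical and would hold the address of the ESX host the VM runs on (the YY.YY.YY.YY from the error), since that is where the guest-operations traffic actually goes:
- name: Check that the ESX host serving the VM is reachable from the controller
  wait_for:
    host: "{{ esx_host_ip }}"   # hypothetical variable: the ESX host, not vCenter
    port: 443
    timeout: 10
  delegate_to: localhost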
I am currently facing a strange issue and cannot figure out what causes it.
I wrote small Ansible playbooks to test whether the Kafka schema-registry and the connectors are running by calling their APIs.
I can run those playbooks on my local machine successfully. However, when running them in the GitLab CI pipeline (I am using that same local machine as the GitLab runner), connect_test always breaks with the following error:
fatal: [xx.xxx.x.x]: FAILED! => {"changed": false, "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8083"}
The strange thing is that the failed job works when I click the retry button in the CI pipeline.
Does anyone have an idea about this issue? I would appreciate your help.
schema_test.yml
---
- name: Test schema-registry
  hosts: SCHEMA
  become: yes
  become_user: root
  tasks:
    - name: list schemas
      uri:
        url: http://localhost:8081/subjects
      register: schema
    - debug:
        msg: "{{ schema }}"
connect_test.yml
---
- name: Test connect
  hosts: CONNECT
  become: yes
  become_user: root
  tasks:
    - name: check connect
      uri:
        url: http://localhost:8083
      register: connect
    - debug:
        msg: "{{ connect }}"
.gitlab-ci.yml
test-connect:
  stage: test
  script:
    - ansible-playbook connect_test.yml
  tags:
    - gitlab-runner

test-schema:
  stage: test
  script:
    - ansible-playbook schema_test.yml
  tags:
    - gitlab-runner
Update
I replaced the uri module with shell and I see the same behaviour: the initial run of the pipeline fails and retrying the job fixes it.
Maybe you are restarting the services in a previous job; keep in mind that Kafka Connect generally needs more time to become available after a restart. Try pausing Ansible for a minute or so after you restart the service.
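If you prefer not to hard-code a pause, a minimal sketch (assuming the Connect REST API is on port 8083, as in connect_test.yml) is to poll the endpoint until it answers:
- name: Wait for the Kafka Connect REST API to become available
  uri:
    url: http://localhost:8083
  register: connect
  until: connect.status == 200
  retries: 12      # up to ~2 minutes with a 10-second delay between attempts
  delay: 10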
What I Do
For bare-metal deployments, I configure interfaces on CentOS 7 servers via Ansible 2.7.9.
Sometimes the interface definitions change:
- name: Copy sysctl and ifcfg- files from templates.
  template: src={{ item.src }} dest={{ item.dest }}
  with_items:
    [...]
    - { src: 'network.j2', dest: '/etc/sysconfig/network' }
  notify:
    - Restart network service
    - Wait for reconnect
    - Reset host errors
which is why I call a handler to restart the network.service when a change happens:
- name: Restart network service
  service:
    name: network
    state: restarted

- name: Reset host errors
  meta: clear_host_errors

- name: Wait for reconnect
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 600
What I want
I can't get Ansible not to quit the run when the Restart network service handler fails. Since the service restart works fine on the host itself, I either want the restart to always exit with RC=0 or to clear the host error after the failing handler call. In the following list, is there anything I am missing or doing wrong?
What I tried
ignore_errors: true, failed_when: false, and changed_when: false directives, either with the shell/command module or the service module in the Restart network service handler block
meta: clear_host_errors directly below the - name: Copy sysctl and ifcfg- files from templates. block
calling meta: clear_host_errors as a handler
having the handler Restart network service exit with || true
async/poll variants for Restart network service (see the sketch after this list)
setting pipelining to false
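For reference, the fire-and-forget flavour of the async variant looks roughly like this (a sketch, assuming the same handler and service names; with poll: 0 Ansible hands the restart off instead of waiting on the SSH connection that the restart is about to drop):
- name: Restart network service
  shell: systemctl restart network
  async: 120
  poll: 0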
I always end up with:
RUNNING HANDLER [os : Restart network service] *******************************************************************
fatal: [host-redacted]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to aa.bb.cc.dd closed.", "unreachable": true}
RUNNING HANDLER [os : Reset host errors] *************************************************************************
fatal: [host-redacted]: FAILED! => {"changed": false, "module_stderr": "Shared connection to aa.bb.cc.dd closed.\r\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 0}
RUNNING HANDLER [os : Wait for reconnect] ************************************************************************
Where is the correct placement for meta: clear_host_errors in this case?
Some additional info:
the restart of network.service takes about 40 seconds (tried async: 120 and poll: 30)
established, non-Ansible SSH connections recover given enough timeout
a re-run of Ansible directly after the first exit works fine
Interestingly enough, the skipping works fine with ignore_errors: true when using Mitogen:
TASK [os : Restart network service] ******************************************************************************************************************************************************************************************************
skipping: [host-redacted]
skipping: [host-redacted]
fatal: [host-redacted]: FAILED! => {"msg": "EOF on stream; last 300 bytes received: 'ssh: connect to host aa.bb.cc.dd port 22: Connection refused\\r\\n'"}
...ignoring
This starts to look like a bug to me.
When invoking Ansible through Jenkins, I have added the below play to my playbook:
- name: HELLO WORLD PLAY
  hosts: webserver
  become: yes
  become_method: sudo
  tasks:
    - debug:
        msg: "HELLO......."
    - shell: echo "HELLO WORLD"
I am getting the below error when I build the job:
TASK [setup] *******************************************************************
fatal: [10.142.0.13]: UNREACHABLE! =>
{
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
When I run this playbook from the CLI it runs successfully,
but I am not able to run it through Jenkins (I have already done the setup by pasting the private key in Jenkins).
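Not necessarily the cause here, but one thing worth checking is that the connection settings do not depend on whichever user's ~/.ssh the Jenkins job happens to run under; a minimal sketch (user name and key path are hypothetical) is to pin them in group_vars/webserver.yml:
ansible_user: deploy                                         # hypothetical remote user
ansible_ssh_private_key_file: /var/lib/jenkins/.ssh/id_rsa   # hypothetical key path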
I'm trying to download some files from AWS S3 with an IAM role for EC2, but Ansible is getting an error. Other Ansible win_* modules work great.
The Windows Server has Python 2 and Python 3, and also the boto and boto3 modules. Cmd responds to the python command and opens Python 3 when it is executed. I also ran 'import boto' in Python 3 to be sure the module is installed.
The Ansible playbook is configured like this:
- name: test s3 module
  hosts: windows
  tasks:
    - name: get s3 file
      aws_s3:
        bucket: drktests3
        object: /test
        dest: C:\tests3.txt
        mode: get
When I run this configuration, the output is like this:
root@ip-172-31-22-4:/etc/ansible/playbooks# ansible-playbook s3test
PLAY [test s3 module] *******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************
ok: [38.210.201.10]
TASK [get s3 file] **********************************************************************************************************************************************
[WARNING]: FATAL ERROR DURING FILE TRANSFER:
fatal: [38.210.201.10]: FAILED! => {"msg": "winrm send_input failed; \nstdout: Unable to initialize device PRN\r\nUnable to initialize device PRN\r\nUnable to initialize device PRN\r\n\nstderr ANSIBALLZ_WRAPPER : The term 'ANSIBALLZ_WRAPPER' is not recognized as the name \r\nof a cmdlet, function, script file, or operable program. Check the spelling of \r\nthe name, or if a path was included, verify that the path is correct and try \r\nagain.\r\nAt line:1 char:1\r\n+ ANSIBALLZ_WRAPPER = True # For test-module script to tell this is a \r\nANSIBALLZ_WR ...\r\n+ ~~~~~~~~~~~~~~~~~\r\n + CategoryInfo : ObjectNotFound: (ANSIBALLZ_WRAPPER:String) [], C \r\n ommandNotFoundException\r\n + FullyQualifiedErrorId :
The same script works on the master server (Linux Ubuntu) if I change the hosts value to localhost. Why can't Ansible execute the Python code on the Windows server?
I found that there is a section about this problem in the Ansible docs:
Can I run Python modules?
No, the WinRM connection protocol is set to use PowerShell modules, so Python modules will not work. A way to bypass this issue is to use delegate_to: localhost to run a Python module on the Ansible controller. This is useful if, during a playbook, an external service needs to be contacted and there is no equivalent Windows module available.
So if you want to do something like this, you have to work around the problem.
I solved it with the suggestion that the Ansible documentation gives:
- name: test s3 module
  hosts: windows
  tasks:
    - name: get s3 file
      aws_s3:
        bucket: bucketname
        object: /filename.jpg
        dest: /etc/ansible/playbooks/test.jpg
        mode: get
      delegate_to: localhost
    - name: copy to win server
      win_copy:
        src: /etc/ansible/playbooks/test.jpg
        dest: C:/test.jpg
With this sample, you first download the file to the Ansible master server using delegate_to, and then copy it to the Windows host with win_copy.