We are seeing a strange issue on both DirectShow and Media Foundation where the camera delivers black frames.
Running MFTrace on both apps, we see that CMemInputPinDetours::Receive is never called (compared with a working scenario). In what cases and scenarios can this happen?
The problem is seen only on Lenovo laptops, and only on Windows 10 1703 (the Creators Update) onwards.
The Microsoft samples MFCaptureD3D and SimpleCapture don't work either!
Not sure what we are missing here; can someone help?
Part of the MFTrace log from the DirectShow app is listed below:
1252,D2C 14:16:36.74678 CGraphHelpers::Trace #000002CB48084CA0 >>>>>>>>>>>>> Run graph
1252,D2C 14:16:36.74679 CGraphBuilderDetours::EnumFilters #000002CB48084CA0 - enter
1252,D2C 14:16:36.74680 CGraphBuilderDetours::EnumFilters #000002CB48084CA0 - exit
1252,D2C 14:16:36.74681 CGraphHelpers::TraceFilter # Filter #000002CB60D7BCF8, name 'SinkFilter', vendor '(null)'
1252,D2C 14:16:36.74684 CGraphHelpers::TracePin # Input pin #000002CB6602BDD8 (IMemInputPin #000002CB6602BE98) name 'VideoCapture', connected to filter #000002CB48091158 pin #000002CB480C9228, MT: majortype=MEDIATYPE_Video;subtype=MFVideoFormat_YUY2;bFixedSizeSamples=1;bTemporalCompression=0;lSampleSize=1843200;formattype=FORMAT_VideoInfo;pUnk=#0000000000000000;cbFormat=88
1252,D2C 14:16:36.74684 CGraphHelpers::TraceFilter # Filter #000002CB48091158, name 'VideoCaptureFilter', vendor '(null)'
1252,D2C 14:16:36.74686 CGraphHelpers::TracePin # Output pin #000002CB480C9228 name 'Capture', connected to filter #000002CB60D7BCF8 pin #000002CB6602BDD8, MT: majortype=MEDIATYPE_Video;subtype=MFVideoFormat_YUY2;bFixedSizeSamples=1;bTemporalCompression=0;lSampleSize=1843200;formattype=FORMAT_VideoInfo;pUnk=#0000000000000000;cbFormat=88
1252,D2C 14:16:36.74687 CGraphHelpers::TracePin # Input pin #000002CB48084A18 (IMemInputPin #000002CB48084AD8) name 'Video Camera Terminal', NOT CONNECTED(!!!)
1252,D2C 14:16:36.74687 CGraphHelpers::TracePin # Output pin #000002CB480845F8 name 'Still', NOT CONNECTED(!!!)
1252,D2C 14:16:36.74687 CGraphHelpers::Trace #000002CB48084CA0 <<<<<<<<<<<<< Run graph
1252,D2C 14:16:36.74687 CGraphBuilderDetours::EnumFilters #000002CB48084CA0 - enter
1252,D2C 14:16:36.74688 CGraphBuilderDetours::EnumFilters #000002CB48084CA0 - exit
1252,D2C 14:16:36.74688 CMemInputPinDetours::Attach #00007FF80B790928 - enter
1252,D2C 14:16:36.74688 CInterfaceDetours::AttachVtbl #00007FF80B790928 - enter
1252,D2C 14:16:36.74688 CDetourHelpers::AttachInterface # - enter
1252,D2C 14:16:36.74691 CDetourHelpers::AttachInterface # - exit
1252,D2C 14:16:36.74691 CInterfaceDetours::AttachVtbl #00007FF80B790928 - exit
1252,D2C 14:16:36.74691 CMemInputPinDetours::Attach #00007FF80B790928 - exit
1252,D2C 14:16:36.74691 CMemInputPinDetours::Attach #00007FF8129B7D10 - enter
1252,D2C 14:16:36.74691 CInterfaceDetours::AttachVtbl #00007FF8129B7D10 - enter
1252,D2C 14:16:36.74692 CDetourHelpers::AttachInterface # - enter
1252,D2C 14:16:36.74693 CDetourHelpers::AttachInterface # - exit
1252,D2C 14:16:36.74693 CInterfaceDetours::AttachVtbl #00007FF8129B7D10 - exit
1252,D2C 14:16:36.74693 CMemInputPinDetours::Attach #00007FF8129B7D10 - exit
1252,D2C 14:16:36.74696 COle32ExportDetours::CoCreateInstance # - enter
1252,D2C 14:16:36.74811 COle32ExportDetours::CoCreateInstance # Created {E436EBB1-524F-11CE-9F53-0020AF0BA770} System Clock (C:\Windows\System32\quartz.dll) #000002CB4805CE68 - traced interfaces:
1252,D2C 14:16:36.74811 COle32ExportDetours::CoCreateInstance # - exit
1252,D2C 14:16:36.74822 COle32ExportDetours::CoCreateInstance # - enter
1252,D2C 14:16:36.74865 COle32ExportDetours::CoCreateInstance # Created {877E4351-6FEA-11D0-B863-00AA00A216A1} Plug In Distributor: IKsClock (C:\Windows\System32\ksproxy.ax) #000002CB48090920 - traced interfaces:
1252,D2C 14:16:36.74865 COle32ExportDetours::CoCreateInstance # - exit
1252,D2C 14:16:36.78381 CMediaControlDetours::Run #000002CB480C98E8 - exit
1252,D2C 14:16:36.78382 CMediaControlDetours::GetState #000002CB480C98E8 - enter
1252,D2C 14:16:36.78383 CMediaControlDetours::GetState #000002CB480C98E8 - exit
1252,1E08 14:16:54.49906 COle32ExportDetours::CoCreateInstance # - enter
1252,1E08 14:16:54.49921 COle32ExportDetours::CoCreateInstance # Created {9FC8E510-A27C-4B3B-B9A3-BF65F00256A8} (C:\WINDOWS\system32\dataexchange.dll) #000002CB4805BD40 - traced interfaces:
1252,1E08 14:16:54.49921 COle32ExportDetours::CoCreateInstance # - exit
1252,D2C 14:16:54.57840 CMediaControlDetours::Pause #000002CB480C98E8 - enter
1252,D2C 14:16:54.58279 CMediaControlDetours::Pause #000002CB480C98E8 - exit
1252,D2C 14:16:54.58280 CMediaControlDetours::Stop #000002CB480C98E8 - enter
1252,D2C 14:16:54.79315 CMediaControlDetours::Stop #000002CB480C98E8 - exit
Found the root cause: Kaspersky antivirus was causing this problem. Adding our application to the trusted list solved it!
Kaspersky 10 was being used on the machine; following the steps in the link below solved the problem:
https://support.kaspersky.com/9398#block2
If Kaspersky 2015 is installed instead, the link below can be used:
https://support.kaspersky.com/11157#block1
I'm having an issue with configuring the BME280 sensor. The ESPHome build keeps failing with the following error: collect2: error: ld returned 1 exit status. I really don't know what's wrong.
esphome:
  name: raspberry-pi-pico-w--2
  friendly_name: Raspberry Pi Pico w - 2

rp2040:
  board: rpipico
  framework:
    # Required until https://github.com/platformio/platform-raspberrypi/pull/36 is merged
    platform_version: https://github.com/maxgerhardt/platform-raspberrypi.git

# Enable logging
logger:

# Enable Home Assistant API
api:
  encryption:
    key: "MlSWE56GvIvD5jLsGN9/GbyDcd4Xhga7sWVZCHko2+M="

ota:
  password: "bfdfe6650c2121f904a56194af707ae4"

wifi:
  ssid: Stoupik
  password: password

i2c:
  sda: 00
  scl: 01
  scan: true
  id: bus_a

sensor:
  - platform: bme280
    address: 0x77
    temperature:
      name: "Teplota"
      id: bme280_temperature
    pressure:
      name: "Tlak"
      id: bme280_pressure
    humidity:
      name: "Vlhkost"
      id: bme280_humidity
    update_interval: 60s
I have tried repeating the build and making changes, without success.
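While the collect2 link error itself is a build problem, it can help to confirm the sensor and wiring separately. Below is a minimal bus check; it is a sketch under assumptions of mine (MicroPython firmware flashed instead of ESPHome, SDA on GPIO0 and SCL on GPIO1 as in the config above), not part of the original setup:
# MicroPython on a Pico W: scan the I2C bus and read the BME280 chip-ID register.
from machine import I2C, Pin

i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400_000)
print("devices found:", [hex(a) for a in i2c.scan()])  # expect 0x76 or 0x77

# Register 0xD0 is the chip-ID register; a genuine BME280 reports 0x60.
chip_id = i2c.readfrom_mem(0x77, 0xD0, 1)[0]
print("chip id:", hex(chip_id))
If the scan reports 0x76 rather than 0x77, the address: line in the sensor block has to match.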
When I run pre-commit run --all-files, everything passes, but when I try to commit, pylint fails with exit code 32, followed by the list of usage options. The only files changed are .py files:
git status
On branch include-gitlab-arg
Your branch is up to date with 'origin/include-gitlab-arg'.
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
renamed: code/project1/src/Main.py -> code/project1/src/GitLab/GitLab_runner_token_getter.py
renamed: code/project1/src/get_gitlab_runner_token.py -> code/project1/src/GitLab/get_gitlab_runner_token.py
modified: code/project1/src/__main__.py
modified: code/project1/src/control_website.py
deleted: code/project1/src/get_website_controller.py
modified: code/project1/src/helper.py
Error Output:
The git commit -m "some change." command yields the following pre-commit error:
pylint...................................................................Failed
- hook id: pylint
- exit code: 32
usage: pylint [options]

optional arguments:
  -h, --help            show this help message and exit

Commands:
  Options which are actually commands. Options in this group are mutually exclusive.

  --rcfile RCFILE
whereas pre-commit run --all-files passes.
And the .pre-commit-config.yaml contains:
# This file specifies which checks are performed by the pre-commit service.
# The pre-commit service prevents people from pushing code to git that is not
# up to standards. # The reason mirrors are used instead of the actual
# repositories for e.g. black and flake8, is because those repositories also
# need to contain a pre-commit hook file, which they often don't by default.
# So to resolve that, a mirror is created that includes such a file.
default_language_version:
  python: python3.8  # or python3

repos:
  # Test if the python code is formatted according to the Black standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-black
    rev: 22.6.0
    hooks:
      - id: black-conda
        args:
          - --safe
          - --target-version=py36

  # Test if the python code is formatted according to the flake8 standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-flake8
    rev: 5.0.4
    hooks:
      - id: flake8-conda
        args: ["--ignore=E501,W503,W504,E722,E203"]

  # Test if the import statements are sorted correctly.
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args: ["--profile", "black", --line-length=79]

  ## Test if the variable typing is correct. (Variable typing is when you say:
  ## def is_larger(nr: int) -> bool: instead of def is_larger(nr). It makes
  ## it explicit what type of input and output a function has.
  ## - repo: https://github.com/python/mypy
  # - repo: https://github.com/pre-commit/mirrors-mypy
  #### - repo: https://github.com/a-t-0/mypy
  #   rev: v0.982
  #   hooks:
  #     - id: mypy

  ## Tests if there are spelling errors in the code.
  # - repo: https://github.com/codespell-project/codespell
  #   rev: v2.2.1
  #   hooks:
  #     - id: codespell

  # Performs static code analysis to check for programming errors.
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint
        language: system
        types: [python]
        args:
          [
            "-rn", # Only display messages
            "-sn", # Don't display the score
            "--ignore-long-lines", # Ignores long lines.
          ]

  # Runs additional tests that are created by the pre-commit software itself.
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      # Check user did not add large files.
      - id: check-added-large-files
      # Check if `.py` files are written in valid Python syntax.
      - id: check-ast
      # Require literal syntax when initializing empty or zero Python builtin types.
      - id: check-builtin-literals
      # Checks if there are filenames that would conflict if case is changed.
      - id: check-case-conflict
      # Checks if the Python functions have docstrings.
      - id: check-docstring-first
      # Checks if any `.sh` files have a shebang like #!/bin/bash
      - id: check-executables-have-shebangs
      # Verifies json format of any `.json` files in repo.
      - id: check-json
      # Checks if there are any existing merge conflicts caused by the commit.
      - id: check-merge-conflict
      # Checks for symlinks which do not point to anything.
      - id: check-symlinks
      # Checks if xml files are formatted correctly.
      - id: check-xml
      # Checks if .yml files are valid.
      - id: check-yaml
      # Checks if debugger imports are performed.
      - id: debug-statements
      # Detects symlinks changed to regular files with content path symlink was pointing to.
      - id: destroyed-symlinks
      # Checks if you don't accidentally push a private key.
      - id: detect-private-key
      # Replaces double quoted strings with single quoted strings.
      # This is not compatible with Python Black.
      #- id: double-quote-string-fixer
      # Makes sure files end in a newline and only a newline.
      - id: end-of-file-fixer
      # Removes UTF-8 byte order marker.
      - id: fix-byte-order-marker
      # Add <# -*- coding: utf-8 -*-> to the top of python files.
      #- id: fix-encoding-pragma
      # Checks if there are different line endings, like \n and crlf.
      - id: mixed-line-ending
      # Asserts `.py` files in folder `/test/` (by default:) end in `_test.py`.
      - id: name-tests-test
        # Override default to check if `.py` files in `/test/` START with `test_`.
        args: ['--django']
      # Ensures JSON files are properly formatted.
      - id: pretty-format-json
        args: ['--autofix']
      # Sorts entries in requirements.txt and removes incorrect pkg-resources entries.
      - id: requirements-txt-fixer
      # Sorts simple YAML files which consist only of top-level keys.
      - id: sort-simple-yaml
      # Removes trailing whitespace at the end of lines.
      - id: trailing-whitespace

  - repo: https://github.com/PyCQA/autoflake
    rev: v1.7.0
    hooks:
      - id: autoflake
        args: ["--in-place", "--remove-unused-variables", "--remove-all-unused-imports", "--recursive"]
        name: AutoFlake
        description: "Format with AutoFlake"
        stages: [commit]

  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.4
    hooks:
      - id: bandit
        name: Bandit
        stages: [commit]

  # Enforces formatting style in Markdown (.md) files.
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.16
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-toc
          - mdformat-gfm
          - mdformat-black

  - repo: https://github.com/MarcoGorelli/absolufy-imports
    rev: v0.3.1
    hooks:
      - id: absolufy-imports
        files: '^src/.+\.py$'
        args: ['--never', '--application-directories', 'src']

  - repo: https://github.com/myint/docformatter
    rev: v1.5.0
    hooks:
      - id: docformatter

  - repo: https://github.com/pre-commit/pygrep-hooks
    rev: v1.9.0
    hooks:
      - id: python-use-type-annotations
      - id: python-check-blanket-noqa
      - id: python-check-blanket-type-ignore

  # Updates the syntax of `.py` files to the specified python version.
  # It is not compatible with: pre-commit hook: fix-encoding-pragma
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: [--py38-plus]

  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
With pyproject.toml:
# This is used to configure the black, isort and mypy such that the packages don't conflict.
# This file is read by the pre-commit program.
[tool.black]
line-length = 79
include = '\.pyi?$'
exclude = '''
/(
\.git
| \.mypy_cache
| build
| dist
)/
'''
[tool.coverage.run]
# Due to a strange bug with xml output of coverage.py not writing the full-path
# of the sources, the full root directory is presented as a source alongside
# the main package. As a result any importable Python file/package needs to be
# included in the omit
source = [
    "foo",
    ".",
]
# Excludes the following directories from the coverage report
omit = [
    "tests/*",
    "setup.py",
]
[tool.isort]
profile = "black"
[tool.mypy]
ignore_missing_imports = true
[tool.pylint.basic]
bad-names=[]
[tool.pylint.messages_control]
# Example: Disable error on needing a module-level docstring
disable=[
    "import-error",
    "invalid-name",
    "fixme",
]
[tool.pytest.ini_options]
# Runs coverage.py through use of the pytest-cov plugin
# An xml report is generated and results are output to the terminal
# TODO: Disable this line to disable CLI coverage reports when running tests.
#addopts = "--cov --cov-report xml:cov.xml --cov-report term"
# Sets the minimum allowed pytest version
minversion = 5.0
# Sets the path where test files are located (Speeds up Test Discovery)
testpaths = ["tests"]
And setup.py:
"""This file is to allow this repository to be published as a pip module, such
that people can install it with: `pip install networkx-to-lava-nc`.

You can ignore it.
"""
import setuptools

with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()

setuptools.setup(
    name="networkx-to-lava-nc-snn",
    version="0.0.1",
    author="a-t-0",
    author_email="author@example.com",
    description="Converts networkx graphs representing spiking neural networks"
    + " (SNNs) of LIF neurons, into runnable Lava SNNs.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/a-t-0/networkx-to-lava-nc",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: AGPL3",
        "Operating System :: OS Independent",
    ],
)
Question
How can I resolve the pylint usage error to ensure the commit passes pre-commit?
The issue was caused by the "--ignore-long-lines" argument of the pylint hook in the .pre-commit-config.yaml; commenting it out resolved the usage error. I assume it conflicts with the line-length settings for black and in the pyproject.toml, which are both set to 79.
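For context, pylint documents its exit status as a bit mask (1 = fatal, 2 = error, 4 = warning, 8 = refactor, 16 = convention, 32 = usage error), so a bare 32 means pylint rejected its command line before looking at any code. A small sketch to decode it (the decode_pylint_status helper is mine, purely illustrative):
# Decode pylint's bit-mask exit status into readable category names.
PYLINT_BITS = {
    1: "fatal",
    2: "error",
    4: "warning",
    8: "refactor",
    16: "convention",
    32: "usage error",
}

def decode_pylint_status(code: int) -> list[str]:
    """Translate pylint's bit-mask exit status into readable names."""
    return [name for bit, name in PYLINT_BITS.items() if code & bit]

print(decode_pylint_status(32))  # ['usage error'] -> bad command line, not bad code
print(decode_pylint_status(6))   # ['error', 'warning'] -> actual lint findings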
Steps followed to install the SNMP manager and agent on EC2:
sudo apt-get update
sudo apt-get install snmp snmp-mibs-downloader
sudo apt-get update
sudo apt-get install snmpd
I opened sudo nano /etc/snmp/snmp.conf and commented out the following line:
#mibs :
Then I went into the configuration file and modified it as below:
sudo nano /etc/snmp/snmpd.conf
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161    <--- commented this line out
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161    <-- uncommented this line to make it work
Using the command below I can get SNMP data:
snmpwalk -v 2c -c public 127.0.0.1 .
From inside the Docker container I can also get the data:
snmpwalk -v 2c -c public host.docker.internal .
Docker-compose:
telegraf_snmp:
  image: telegraf:1.22.1
  container_name: telegraf_snmp
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  extra_hosts:
    - "host.docker.internal:host-gateway"
  # ports:
  #   - "161:161/udp"
  volumes:
    - /mnt/telegraf/snmp:/var/lib/telegraf
    - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
  env_file:
    - secrets.env
  environment:
    INFLUXDB_URL: http://influxdb:8086
  command:
    --config-directory /etc/telegraf/snmp/telegraf.d
    --config /etc/telegraf/snmp/telegraf.conf
  links:
    - influxdb
  logging:
    options:
      max-size: "10m"
      max-file: "3"
Telegraf Input conf:
[[inputs.snmp]]
  ## Agent addresses to retrieve values from.
  ## format: agents = ["<scheme://><hostname>:<port>"]
  ## scheme: optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
  ##         default is udp
  ## port: optional
  ## example: agents = ["udp://127.0.0.1:161"]
  ##          agents = ["tcp://127.0.0.1:161"]
  ##          agents = ["udp4://v4only-snmp-agent"]
  # agents = ["udp://127.0.0.1:161"]
  agents = ["udp://host.docker.internal:161"]

  ## Timeout for each request.
  timeout = "15s"

  ## SNMP version; can be 1, 2, or 3.
  version = 2

  ## SNMP community string.
  community = "public"

  ## Agent host tag
  # agent_host_tag = "agent_host"

  ## Number of retries to attempt.
  retries = 3

  ## The GETBULK max-repetitions parameter.
  # max_repetitions = 10

  ## SNMPv3 authentication and encryption options.
  ##
  ## Security Name.
  # sec_name = "myuser"
  ## Authentication protocol; one of "MD5", "SHA", or "".
  # auth_protocol = "MD5"
  ## Authentication password.
  # auth_password = "pass"
  ## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
  # sec_level = "authNoPriv"
  ## Context Name.
  # context_name = ""
  ## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
  ### Protocols "AES192", "AES192C", "AES256", and "AES256C" require the underlying net-snmp tools
  ### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
  # priv_protocol = ""
  ## Privacy password used for encrypted messages.
  # priv_password = ""

  ## Add fields and tables defining the variables you wish to collect. This
  ## example collects the system uptime and interface variables. Reference the
  ## full plugin documentation for configuration details.

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysName.0"
    name = "source"
    is_tag = true

  [[inputs.snmp.table]]
    oid = "IF-MIB::ifTable"
    name = "interface"
    inherit_tags = ["source"]

    [[inputs.snmp.table.field]]
      oid = "IF-MIB::ifDescr"
      name = "ifDescr"
      is_tag = true
Telegraf logs:
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:09Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:09Z I! Loaded inputs: snmp
2022-09-09T10:10:09Z I! Loaded aggregators:
2022-09-09T10:10:09Z I! Loaded processors:
2022-09-09T10:10:09Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:09Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:09Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:11Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:11Z I! Loaded inputs: snmp
2022-09-09T10:10:11Z I! Loaded aggregators:
2022-09-09T10:10:11Z I! Loaded processors:
2022-09-09T10:10:11Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:11Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:11Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
But in Telegraf I get the above error.
I checked the MIBs directory using ls /usr/share/snmp/mibs, but I cannot find the IF-MIB file there even after installing:
$ sudo apt-get install snmp-mibs-downloader
$ sudo download-mibs
How can I resolve this issue? Do I need to follow some additional steps?
The SNMP plugin in Telegraf should be able to pull the data from SNMP.
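Note that the error is about MIB translation inside the Telegraf container, not about SNMP connectivity itself. One way to confirm that split (a sketch on my part, assuming the pysnmp package is available) is to query the agent by numeric OID, which needs no MIB files at all:
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

# 1.3.6.1.2.1.1.3.0 is sysUpTime.0 expressed numerically, so no MIB lookup happens.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),  # mpModel=1 -> SNMP v2c
        UdpTransportTarget(("host.docker.internal", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),
    )
)

if error_indication:
    print("transport/agent problem:", error_indication)
else:
    for var_bind in var_binds:
        # Data arriving here means only the MIB files are missing, not connectivity.
        print(" = ".join(x.prettyPrint() for x in var_bind))
If this returns sysUpTime, connectivity is fine and the remaining work is making the IF-MIB files visible to Telegraf inside the container (for example by mounting the host's /usr/share/snmp/mibs into one of the MIB search paths shown in the error).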
I am using a simple Salt state to send (file.managed) and execute (cmd.run) a shell script on a minion/target. No matter what exit or return value the shell script sends, the Salt master interprets the result as successful.
I tried using cmd.script, but I keep getting a permission-denied error on the temp version of the file under /tmp. The filesystem is not mounted with noexec, so we can't figure out why it won't work.
For cmd.run, stdout in the job output shows the failed return code and message, but Salt still says Success. Running the script locally on the minion reports the return/exit code as expected.
I tried adding stateful: True to the cmd.run block and formatted the key-value pairs at the end of the shell script as demonstrated in the docs.
Running against a 2-minion target (1 fails, 1 succeeds), both report Result as True but correctly populate Comment with my key-value pair.
I've tried YES/NO, TRUE/FALSE, 0/1; nothing works.
The end of my shell script, formatted as shown in the docs:
echo Return_Code=${STATUS}
# exit ${STATUS}

if [[ ${STATUS} -ne 0 ]]
then
    echo ""
    echo "changed=False comment='Failed'"
else
    echo ""
    echo "changed=True comment='Success'"
fi
The SLS block:
stop_oracle:
  cmd.run:
    - name: {{scriptDir}}/{{scriptName}}{{scriptArg}}
    - stateful: True
    - failhard: True
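As I understand the docs, stateful: True only tells Salt to parse the last non-empty line of stdout for changed=... comment=... pairs; the state's Result still follows the process exit code. A toy re-implementation of that parsing (my own illustration, not Salt's actual source) shows why Comment updates while Result stays True when the script exits 0:
import shlex

def parse_stateful(stdout: str, retcode: int) -> dict:
    """Toy version of Salt's documented 'stateful' handling of cmd output."""
    data = {"result": retcode == 0, "changed": True, "comment": ""}
    lines = [ln for ln in stdout.splitlines() if ln.strip()]
    if lines:
        # Only the final non-empty line is scanned for key=value pairs.
        for token in shlex.split(lines[-1]):
            if "=" in token:
                key, _, value = token.partition("=")
                if key == "changed":
                    data["changed"] = value.lower() in ("true", "yes", "1")
                elif key == "comment":
                    data["comment"] = value
    return data

# The script printed the failure line but still exited 0:
print(parse_stateful("Return_Code=1\n\nchanged=False comment='Failed'", retcode=0))
# -> {'result': True, 'changed': False, 'comment': 'Failed'}
Note that the script above has # exit ${STATUS} commented out, so the process always exits 0; under this reading, Salt never sees a failing return code to report.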
SLS output from the successful minion:
----------
ID: stop_oracle
Function: cmd.run
Name: /u01/orastage/oraclepsu/scripts/oracle_ss_wrapper.ksh stop
Result: True
Comment: Success
Started: 14:37:44.519131
Duration: 18930.344 ms
Changes:
----------
changed:
True
pid:
26195
retcode:
0
stderr:
stty: standard input: Inappropriate ioctl for device
stdout:
Script running under ROOT
Mon Jul 1 14:38:03 EDT 2019 : Successful
Return_Code=0
SLS output from the failed minion:
----------
ID: stop_oracle
Function: cmd.run
Name: /u01/orastage/oraclepsu/scripts/oracle_ss_wrapper.ksh stop
Result: True
Comment: Failed
Started: 14:07:14.153940
Duration: 38116.134 ms
Changes:
Output from the shell script run locally on the failing target:
[oracle@a9tvdb102]:/home/oracle:>>
/u01/orastage/oraclepsu/scripts/oracle_ss_wrapper.ksh stop
Mon Jul 1 15:29:18 EDT 2019 : There are errors in the process
Return_Code=1
changed=False comment='Failed'
Output from the shell script run locally on the successful target:
[ /home/oracle ]
oracle@r9tvdo1004.giolab.local >
/u01/orastage/oraclepsu/scripts/oracle_ss_wrapper.ksh stop
Mon Jul 1 16:03:18 EDT 2019 : Successful
Return_Code=0
changed=True comment='Success'
I am developing a RouterOS network module for Ansible 2.5.
The RouterOS shell can print a few messages that should be detected in the on_open_shell() event and either skipped or dismissed automatically. These are Do you want to see the software license? [Y/n]: and a few others, all of which are well documented in the MikroTik Wiki.
Here is how I'm doing this:
def on_open_shell(self):
    try:
        # Get the current prompt (via the TerminalBase helper).
        prompt = self._get_prompt()
        if not prompt.strip().endswith(b'>'):
            self._exec_cli_command(b' ')
    except AnsibleConnectionFailure:
        raise AnsibleConnectionFailure('unable to bypass license prompt')
It does indeed bypass the license prompt. However, it seems that the \n response from the RouterOS device counts as a reply to the actual commands that follow. So, if I have two tasks in my playbook like this:
---
- hosts: routeros
  gather_facts: no
  connection: network_cli
  tasks:
    - routeros_command:
        commands:
          - /system resource print
          - /system routerboard print
      register: result

    - name: Print result
      debug: var=result.stdout_lines
This is the output I get:
ok: [example] => {
    "result.stdout_lines": [
        [
            ""
        ],
        [
            "uptime: 12h33m29s",
            " version: 6.42.1 (stable)",
            " build-time: Apr/23/2018 10:46:55",
            " free-memory: 231.0MiB",
            " total-memory: 249.5MiB",
            " cpu: Intel(R)",
            " cpu-count: 1",
            " cpu-frequency: 2700MHz",
            " cpu-load: 2%",
            " free-hdd-space: 943.8MiB",
            " total-hdd-space: 984.3MiB",
            " write-sect-since-reboot: 7048",
            " write-sect-total: 7048",
            " architecture-name: x86",
            " board-name: x86",
            " platform: MikroTik"
        ]
    ]
}
As you can see, the output seems to be offset by 1. What should I do to correct this?
It turns out that the problem was in the regular expression that defined the shell prompt. I had it defined like this:
terminal_stdout_re = [
    re.compile(br"\[\w+\@[\w\-\.]+\] ?>"),
    # other cases
]
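A quick way to see the problem (using a hypothetical prompt string; the real prompt depends on the device identity) is to compare this pattern against an end-anchored variant like the corrected one below:
import re

# A prompt followed by echoed command text, as it can appear in the receive buffer.
buf = b"[admin@MikroTik] > /system resource print"

without_anchor = re.compile(br"\[\w+\@[\w\-\.]+\] ?>")
with_anchor = re.compile(br"\[\w+\@[\w\-\.]+\] ?> ?$")

print(bool(without_anchor.search(buf)))  # True  -> treated as a prompt one reply too early
print(bool(with_anchor.search(buf)))     # False -> keeps waiting for a bare prompt
print(bool(with_anchor.search(b"[admin@MikroTik] > ")))  # True -> matches the real prompt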
It did not match the end of the prompt, which caused Ansible to think that there was a newline before the actual command output. Here is the correct regexp:
terminal_stdout_re = [
    re.compile(br"\[\w+\@[\w\-\.]+\] ?> ?$"),
    # other cases
]