Are there any special characters in Windmill? How do I override? - windmill

I'm attempting to close out a popup via a link named [Close].
client.click(link=u'[Close]')
client.waits.forElement(link=u'[Close]', timeout=u'8000')
It seems to die here.
'debug': u'Looking up id docname, failed.
>>> test_results: ERROR Test Failure in test {'version': u'0.1', u'suite_name
': u'exampletest', 'result': False, 'starttime': u'2012-3-6T11:56:0.877Z', 'outp
ut': None, 'debug': u'Looking up id docname, failed.', u'params': {u'uuid': u'44
d8908f-8009-11e1-9665-2c27d7ea08c4', u'id': u'docname'}, 'endtime': u'2012-3-6T1
1:56:0.877Z', u'method': u'asserts.assertNode'}
Is there a way to include literal brackets in the code?

In this case the link text had a space between the brackets and the letters:
[ Close ]
not
[Close]
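A minimal sketch of the mismatch (the values are hypothetical, standing in for the rendered page text and the test's locator): Windmill's link= locator matches the link text exactly, so even a single extra space makes the lookup fail.

```python
# Hypothetical values: the locator used in the failing test vs. the
# text actually rendered in the page.
attempted = u'[Close]'
actual = u'[ Close ]'

# Exact matching means these are different link texts:
assert attempted != actual

# Normalizing the inner whitespace shows both refer to the same label:
assert attempted == u'[' + actual.strip(u'[] ') + u']'
```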

Related

Hi Team, I am getting a "Spawn failed" error while deploying JupyterHub in our Kubernetes namespace

I am getting the below error while deploying JupyterHub:
Spawn failed: (422) Reason: error HTTP response headers:
,"status":"Failure","message":"PersistentVolumeClaim "" is invalid: [metadata.name: Required value: name or generateName is required, metadata.labels: Invalid value: "-42hattacharjee-5f-5f-41nusuya": a valid label must be an empty string or consist of alphanumeric characters, '-', '' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9.]*)?[A-Za-z0-9])?')]",
I have tried removing the username from the "pvcNameTemplate" parameter, but I am still getting this error.
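No accepted fix is recorded here, but the message says the generated PVC name starts with a non-alphanumeric character (the escaped username begins with "-42…"), which violates the Kubernetes label regex quoted in the error. One hedged sketch, assuming the Zero to JupyterHub Helm chart's singleuser.storage.pvcNameTemplate key, is to prefix the template so the rendered name always starts with a letter:

```yaml
singleuser:
  storage:
    # The 'claim-' prefix guarantees the rendered name starts with an
    # alphanumeric character, as the label validation regex requires.
    pvcNameTemplate: claim-{username}{servername}
```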

Sphinx-autodoc with napoleon (Google Doc String Style): Warnings and Errors about Block quotes and indention

I am using Sphinx 4.4.0 with the napoleon extension (Google docstring style). I have these two problems:
WARNING: Block quote ends without a blank line; unexpected unindent.
ERROR: Unexpected indentation.
I found something about this on the internet but cannot map it onto my code. My problem is that I do not even understand the messages; I do not see where the problem could be.
This is the code:
def read_and_validate_csv(basename, specs_and_rules):
    """Read a CSV file with respect to specifications about format and
    rules about valid values.

    Hints: Do not use objects of type type (e.g. str instead of "str") when
        specifying the column type.

        specs_and_rules = {
            'TEMPLATES': {
                'T1l': ('Int16', [-9, ' '])
            },
            'ColumnA': 'str',
            'ColumnB': ('str', 'no answer'),
            'ColumnC': None,
            'ColumnD': (
                'Int16',
                -9, {
                    'len': [1, 2, (4-8)],
                    'val': [0, 1, (3-9)]
                }
            }

    Returns:
        (pandas.DataFrame): Result.
    """
These are the original messages:
.../bandas.py:docstring of buhtzology.bandas.read_and_validate_csv:11: WARNING: Block quote ends without a blank line; unexpected unindent.
.../bandas.py:docstring of buhtzology.bandas.read_and_validate_csv:15: ERROR: Unexpected indentation.
.../bandas.py:docstring of buhtzology.bandas.read_and_validate_csv:17: ERROR: Unexpected indentation.
.../bandas.py:docstring of buhtzology.bandas.read_and_validate_csv:19: WARNING: Block quote ends without a blank line; unexpected unindent.
.../bandas.py:docstring of buhtzology.bandas.read_and_validate_csv:20: WARNING: Block quote ends without a blank line; unexpected unindent.
reStructuredText is not Markdown, and indentation alone is not enough to demarcate a code block. reStructuredText calls this a literal block. Although using :: is one option, you might want to explicitly specify the language (overriding the default) with the code-block directive.
Also, I noticed that you have invalid syntax in your code block (a missing ) and extra spaces in your indentation), which could have caused those errors.
Try this.
def read_and_validate_csv(basename, specs_and_rules):
    """Read a CSV file with respect to specifications about format and
    rules about valid values.

    Hints: Do not use objects of type type (e.g. str instead of "str") when
    specifying the column type.

    .. code-block:: python

        specs_and_rules = {
            'TEMPLATES': {
                'T1l': ('Int16', [-9, ' '])
            },
            'ColumnA': 'str',
            'ColumnB': ('str', 'no answer'),
            'ColumnC': None,
            'ColumnD': (
                'Int16',
                -9, {
                    'len': [1, 2, (4-8)],
                    'val': [0, 1, (3-9)]
                }
            )
        }

    Returns:
        (pandas.DataFrame): Result.
    """

Can a logstash filter error be forwarded to elastic?

I'm having these json parsing errors from time to time:
[2022-01-07T12:15:19,872][WARN ][logstash.filters.json ] Error parsing json
{:source=>"message", :raw=>" { the invalid json }", :exception=>#<LogStash::Json::ParserError: Unrecognized character escape 'x' (code 120)
Is there a way to get the :exception field in the logstash config file?
I opened the exact same thread on the elastic forum and got a working solution there. Thanks to @Badger on the forum, I ended up using the following raw ruby filter:
ruby {
    code => '
        @source = "message"
        source = event.get(@source)
        return unless source
        begin
            parsed = LogStash::Json.load(source)
        rescue => e
            event.set("jsonException", e.to_s)
            return
        end
        @target = "jsonData"
        if @target
            event.set(@target, parsed)
        end
    '
}
which extracts the info I needed:
"jsonException" => "Unexpected character (',' (code 44)): was expecting a colon to separate field name and value\n at [Source: (byte[])\"{ \"baz\", \"oh!\" }\r\"; line: 1, column: 9]",
Or, as the author of the solution suggested, get rid of the @target part and use the normal json filter for the rest of the data.
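That suggestion could be sketched as follows (a hedged outline, not the poster's exact config; the source field and target names are assumed): keep the ruby block only for capturing the exception text, and let the standard json filter do the actual parsing.

```
filter {
    # Standard parsing path; failures are tagged instead of being fatal.
    json {
        source => "message"
        target => "jsonData"
        tag_on_failure => ["_jsonparsefailure"]
    }
    # The ruby block above is then only needed to record the exception
    # message into jsonException when parsing fails.
}
```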

How do I call get twice in a row in Python?

I'm trying to get a Python script to append two different prefixes to a command that come from two different environment variables. Currently the script is only configured to accept different prefixes from ONE variable, and is stubbornly resisting my attempts to make it take two variables.
The code looks like this:
proc = subprocess.Popen([subst.get('TOOLCHAIN_PREFIX','') + 'readelf', '-d', lib], stdout = subprocess.PIPE)
If I just replace TOOLCHAIN_PREFIX with GNU_PREFIX, which is the other environment variable I want, the code works as-is. But if I try to do something like...
proc = subprocess.Popen([subst.get('TOOLCHAIN_PREFIX','') + subst.get('GNU_PREFIX','') + 'readelf', '-d', lib], stdout = subprocess.PIPE)
It doesn't work. In fact, Python seems to be stubbornly not allowing me to call "get" twice, or to combine the two flags.
What I want is to make it take both these variables as input and tack them onto the front of the command in order. Fixing the Makefiles was easy, but this Python script is like Greek to me. So for instance, I want something where this..
TOOLCHAIN_PREFIX = "s"
GNU_PREFIX = "g"
Can produce a command like this:
sgreadelf -d
And if either one isn't set, it's just omitted.
TOOLCHAIN_PREFIX = "s"
GNU_PREFIX = ""
Should produce:
sreadelf -d
TOOLCHAIN_PREFIX = ""
GNU_PREFIX = "g"
greadelf -d
It's very frustrating and the code seems to be very stubborn about only letting me pull in one environment variable. If I add the other variable as an argument, it says too many arguments. If I add it as a second command, it doesn't like that either.
Is there really just no way to pull in two environment variables and append both of them to the beginning of a string? It seems like it should be simple, but they seem bent on not letting me do it easily without reading an entire book on Python.
EDIT: Sorry, I forgot to mention where "subst" comes from. I thought it was a builtin command, but apparently it's defined elsewhere. I know next to nothing about Python.
config = buildObject.from_environment()
for var in ('topsrcdir', 'topobjdir', 'defines', 'non_global_defines',
            'substs'):
    value = getattr(config, var)
    setattr(sys.modules[__name__], var, value)

substs = dict(substs)
for var in os.environ:
    if var not in ('CPP', 'CXXCPP', 'SHELL') and var in substs:
        substs[var] = os.environ[var]
And here is the error:
File "/export/home/jeremy/Downloads/git/Solaris-UXP/toolkit/library/dependentlibs.py", line 55
proc = subprocess.Popen([substs.get(('TOOLCHAIN_PREFIX', '') + ('GNU_PREFIX', '') + 'readelf', '-d', lib], stdout = subprocess.PIPE)
^
SyntaxError: invalid syntax
This time I tried it without subst.get, but it gives the exact same error whether I put subst.get in front of the second argument or not. It's like it just won't let me put anything else on that command line.
EDIT2: I've been messing with the code and trying various things to get it to work, sorry it wasn't the exact same error I reported the first time. Here's two other things I tried and the error output messages.
File "/export/home/jeremy/Downloads/git/Solaris-UXP/toolkit/library/dependentlibs.py", line 55, in dependentlibs_readelf
proc = subprocess.Popen([substs.get(('TOOLCHAIN_PREFIX', '') + ('GNU_PREFIX', '')) + 'readelf', '-d', lib], stdout = subprocess.PIPE)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
File "/export/home/jeremy/Downloads/git/Solaris-UXP/toolkit/library/dependentlibs.py", line 55, in dependentlibs_readelf
proc = subprocess.Popen([substs.get('TOOLCHAIN_PREFIX', '') + substs.get('GNU_PREFIX', '') + 'readelf', '-d', lib], stdout = subprocess.PIPE)
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
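For what it's worth, chaining two dict .get() calls is valid Python; the last traceback above is an OSError because the combined binary was not found on that machine, not a language error. A minimal sketch with hypothetical prefix values:

```python
# Hypothetical stand-in for the build system's `substs` dict.
substs = {'TOOLCHAIN_PREFIX': 's', 'GNU_PREFIX': 'g'}

# Each .get() falls back to '' when its key is missing, so an unset
# prefix is simply omitted from the command name.
cmd = substs.get('TOOLCHAIN_PREFIX', '') + substs.get('GNU_PREFIX', '') + 'readelf'
assert cmd == 'sgreadelf'

substs = {'TOOLCHAIN_PREFIX': 's'}  # GNU_PREFIX unset
cmd = substs.get('TOOLCHAIN_PREFIX', '') + substs.get('GNU_PREFIX', '') + 'readelf'
assert cmd == 'sreadelf'
```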
EDIT3: I can't fit the entire output of printing 'substs' here, and it reveals more about what I'm working on than I really wanted to share originally, but basically it's about 60,000 characters of this:
{'MOZ_PERMISSIONS': '1', 'ANDROID_ANIMATED_VECTOR_DRAWABLE_AAR_LIB': '', 'X_LIBS': ' -L/usr/lib -R/usr/lib', 'CAIRO_FT_CFLAGS': ['-I/usr/include/freetype2', '-I/usr/include/libpng16', '-I/usr/include/harfbuzz', '-I/usr/include/glib-2.0', '-I/usr/lib/glib-2.0/include', '-I/usr/include/pcre', '-I/usr/include/freetype2', '-I/usr/include/libpng16', '-I/usr/include/harfbuzz', '-I/usr/include/glib-2.0', '-I/usr/lib/glib-2.0/include', '-I/usr/include/pcre'], 'MOZ_WIDGET_TOOLKIT': 'gtk2', 'MOZ_JPEG_LIBS': [], 'MOZ_ANDROID_MIN_SDK_VERSION': '', 'prefix': '/usr/local', 'TK_CFLAGS': ['-I/export/home/athenian/Downloads/git/Solaris-UXP/widget/gtk/compat', '-D_REENTRANT', '-D_PTHREADS', '-D_REENTRANT', '-D_POSIX_PTHREAD_SEMANTICS', '-I/usr/include/gtk-2.0', '-I/usr/include/gtk-unix-print-2.0', '-I/usr/include/gtk-2.0', '-I/usr/include/atk-1.0', '-I/usr/include/gtk-2.0', '-I/usr/lib/gtk-2.0/include', '-I/usr/include/pango-1.0', '-I/usr/include/fribidi', '-I/usr/include/gio-unix-2.0/', '-I/usr/include/gdk-pixbuf-2.0', '-I/usr/include/libpng16', '-I/usr/include/cairo', '-I/usr/include/pixman-1', '-I/usr/include/freetype2', '-I/usr/include/harfbuzz', '-I/usr/include/glib-2.0', '-I/usr/lib/glib-2.0/include', '-I/usr/include/pcre', '-I/usr/include/drm', '-I/usr/include/libpng16'], 'CPP': ['/usr/bin/gcc', '-E', '-std=gnu99'], 'MOZ_ALLOW_HEAP_EXECUTE_FLAGS': [], 'SSSE3_FLAGS': ['-mssse3'], 'MOZ_APP_BASENAME': 'Palemoon', 'AR_EXTRACT': '$(AR) x', 'mandir': '${prefix}/man', 'MOZ_APP_ANDROID_VERSION_CODE': '', 'ANDROID_PLAY_SERVICES_ADS_AAR': '', 'MOZILLA_VERSION': '4.4.0', 'TAR': '/usr/bin/gtar', 'MOZ_SYSTEM_JPEG': '', 'build_alias': 'i386-pc-solaris2.11', 'XLDFLAGS': ['-L/usr/lib', '-R/usr/lib'], 'STRIP': 'strip', 'MOZ_VPX_ERROR_CONCEALMENT': '', 'ZLIB_IN_MOZGLUE': '', 'CAIRO_TEE_CFLAGS': [], 'localstatedir': '${prefix}/var', 'ANDROID_SDK_ROOT': '', 'ANDROID_SUPPORT_V4_AAR': '', 'CLANG_CL': '', 'NONASCII': '', 'BUILD_ARM_NEON': '', 'build_vendor': 'pc', 'MOZ_FFVPX': '', 'VPX_ASFLAGS': 
['-DPIC'], 'MOZ_APP_ID': '{8de7fcbb-c55c-4fbe-bfc5-fc555c87dbc4}', 'ANDROID_DESIGN_AAR': '', 'AAPT': '', 'MOZ_PNG_CFLAGS': [], 'MOZ_CAIRO_OSLIBS': ['-L/usr/lib', '-R/usr/lib', '-lXrender'], 'HAVE_TOOLCHAIN_SUPPORT_MSSE4_1': '1', 'AUTOCONF': '/usr/bin/autoconf-2.13', 'CLANG_LDFLAGS': '', 'RELEASE_OR_BETA': '1', 'WIN_UCRT_REDIST_DIR': '', 'HOST_LDFLAGS': '', 'VISIBILITY_FLAGS': ['-I/export/home/athenian/Downloads/git/Solaris-UXP/obj-release/dist/system_wrappers', '-include', '/export/home/athenian/Downloads/git/Solaris-UXP/config/gcc_hidden.h'], 'MOZILLA_UAVERSION': '4.4', 'MOZ_ASAN': '', 'MOZ_APP_UA_NAME': '', 'MOZ_STARTUP_NOTIFICATION_LIBS': [], 'LIBOBJS': '', 'MOZ_ENABLE_DWRITE_FONT': '', 'MOZ_GNOMEUI_LIBS': [], 'MOZ_COMPONENTS_VERSION_SCRIPT_LDFLAGS': '', 'MKSHLIB': '$(CXX) $(CXXFLAGS) $(DSO_PIC_CFLAGS) $(DSO_LDOPTS) -Wl,-h,$(DSO_SONAME) -o $#', 'MOZ_JPEG_CFLAGS': [], 'JS_POSIX_NSPR': '', 'host_cpu': 'i386', 'XCFLAGS': [], 'NSS_EXTRA_SYMBOLS_FILE': '', 'ICU_DATA_FILE': 'icudt58l.dat', 'DSO_LDOPTS': '-shared', 'ANDROID_PLAY_SERVICES_MEASUREMENT_AAR': '', 'PROFILE_USE_CFLAGS': '-fprofile-use
The rest is here, if you need it:
https://pastebin.com/cQr0AUTW
I managed to find a solution to this. It seems the substs.get call works in a very specific way that involves keys and dictionaries, and everything you do with it has to be plugged in a certain way; it is extremely inflexible about how you invoke it, for whatever reason.
This is the code that finally worked for me in this specific situation; it uses something called os.getenv. It is a bit weird-looking because I don't fetch both prefixes the same way like I do in the Makefiles, but it effectively does what I want. I thought about changing TOOLCHAIN_PREFIX to use the same method, but I had weird issues with that as well. For some reason this just works, and it's good enough for my purposes.
proc = subprocess.Popen([substs.get('TOOLCHAIN_PREFIX', '') + os.getenv('GNU_PREFIX', '') + 'readelf', '-d', lib], stdout=subprocess.PIPE)

Parsing a value from non-trivial JSON using Ansible's uri module

I have this non-trivial JSON (reduced here by removing many lines) retrieved from a Spark server:
{
    "spark.worker.cleanup.enabled": true,
    "spark.worker.ui.retainedDrivers": 50,
    "spark.worker.cleanup.appDataTtl": 7200,
    "fusion.spark.worker.webui.port": 8082,
    "fusion.spark.worker.memory": "4g",
    "fusion.spark.worker.port": 8769,
    "spark.worker.timeout": 30
}
I try to read fusion.spark.worker.memory but fail to do so. In my debug statements I can see that the information is there:
msg: "Spark memory: {{ spark_worker_cfg.json }}" shows this:
ok: [process1] => {
    "msg": "Spark memory: {u'spark.worker.ui.retainedDrivers': 50, u'spark.worker.cleanup.enabled': True, u'fusion.spark.worker.port': 8769, u'spark.worker.cleanup.appDataTtl': 7200, u'spark.worker.timeout': 30, u'fusion.spark.worker.memory': u'4g', u'fusion.spark.worker.webui.port': 8082}"
}
The dump using var: spark_worker_cfg shows this:
ok: [process1] => {
    "spark_worker_cfg": {
        "changed": false,
        "connection": "close",
        "content_length": "279",
        "content_type": "application/json",
        "cookies": {},
        "cookies_string": "",
        "failed": false,
        "fusion_request_id": "Pj2zeWThLw",
        "json": {
            "fusion.spark.worker.memory": "4g",
            "fusion.spark.worker.port": 8769,
            "fusion.spark.worker.webui.port": 8082,
            "spark.worker.cleanup.appDataTtl": 7200,
            "spark.worker.cleanup.enabled": true,
            "spark.worker.timeout": 30,
            "spark.worker.ui.retainedDrivers": 50
        },
        "msg": "OK (279 bytes)",
        "redirected": false,
        "server": "Jetty(9.4.12.v20180830)",
        "status": 200,
        "url": "http://localhost:8765/api/v1/configurations?prefix=spark.worker"
    }
}
I can't access the value using {{spark_worker_cfg.json.fusion.spark.worker.memory}}, my problem seems to be caused by the names containing dots:
The task includes an option with an undefined variable. The error was:
'dict object' has no attribute 'fusion'
I have had a look at two SO posts (1 and 2) that look like duplicates of my question but could not derive from them how to solve my current issue.
The keys in the 'json' element of the data structure contain literal dots rather than representing nested structure. This causes issues, because Ansible does not know to treat them as literal when dotted notation is used. Therefore, reference them with square-bracket notation rather than dotted:
- debug:
    msg: "{{ spark_worker_cfg['json']['fusion.spark.worker.memory'] }}"
(At first glance this looked like an issue with a JSON-encoded string that needed decoding, which could have been handled with: "{{ spark_worker_cfg.json | from_json }}")
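The difference can be sketched outside Ansible with a plain Python dict (a hypothetical mirror of the registered uri result): dotted access would descend through nested keys, while bracket notation treats the dotted name as a single key.

```python
# Hypothetical mirror of the registered uri result.
spark_worker_cfg = {"json": {"fusion.spark.worker.memory": "4g"}}

# There is no nested "fusion" -> "spark" -> ... structure to descend
# into; the dotted name is one literal key, so brackets find it:
assert spark_worker_cfg["json"]["fusion.spark.worker.memory"] == "4g"
assert "fusion" not in spark_worker_cfg["json"]
```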
You could use the json_query filter to get your results. https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html
msg="{{ spark_worker_cfg.json | json_query('fusion.spark.worker.memory') }}"
edit:
In response to your comment: the fact that an empty string is returned leads me to believe the query isn't correct. It can be frustrating to find the exact query when using the json_query filter, so I usually use a jsonpath tool beforehand. I've linked one in my comment below, but I personally use the jsonUtils addon in IntelliJ to find my path (which still needs adjustment, because the two handle paths a bit differently).
If your json looked like this:
{
    "value": "theValue"
}
then
json_query('value')
would work.
The path you're passing to json_query isn't correct for what you're trying to do.
If your top level object was named fusion_spark_worker_memory (without the periods), then your query should work. The dots are throwing things off, I believe. There may be a way to escape those in the query...
edit 2: clockworknet for the win! He beat me to it both times. :bow:
