rsyslog not escaping backslash in JSON

I have a rsyslogd instance running, producing the following JSON from syslog:
{"timegenerated":"2019-01-28T09:24:37.033990+00:00","type":"syslog","host":"REDACTED_HOSTNAME","host-ip":"REDACTED_IP","message":"<190>Jan 28 2019 10:24:35: %ASA-X-XXXXXX: Teardown TCP connection 82257709 for outside:REDACTED_IP\/REDACTED_PORT(LOCAL\ususername) to inside:REDACTED_IP\/REDACTED_PORT duration 0:01:52 bytes XXXX TCP FINs from outside (ususername)"}
This is invalid JSON: the \u in LOCAL\ususername is interpreted as the start of a Unicode escape sequence (\uXXXX), and the characters following it are not four hex digits. The backslash should have been escaped, producing \\ususername.
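The failure is easy to reproduce with any strict JSON parser; here is a minimal check using Python's json module (the sample string is illustrative):
printf '%s' '{"message":"LOCAL\usus"}' | python3 -m json.tool
# fails with: Invalid \uXXXX escape
printf '%s' '{"message":"LOCAL\\usus"}' | python3 -m json.tool
# parses fine once the backslash is doubled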
I noticed that there is an open issue on GitHub (https://github.com/rsyslog/rsyslog/issues/1235), although it mentions another issue that resulted in a merged fix.
Here's some system info:
:~# rsyslogd -version
rsyslogd 8.24.0, compiled with:
PLATFORM: x86_64-pc-linux-gnu
PLATFORM (lsb_release -d):
FEATURE_REGEXP: Yes
GSSAPI Kerberos 5 support: Yes
FEATURE_DEBUG (debug build, slow code): No
32bit Atomic operations supported: Yes
64bit Atomic operations supported: Yes
memory allocator: system default
Runtime Instrumentation (slow code): No
uuid support: Yes
Number of Bits in RainerScript integers: 64
:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
The template used to create the JSON document is:
template(name="json_syslog" type="list") {
  constant(value="{")
  constant(value="\"timegenerated\":\"") property(name="timegenerated" dateFormat="rfc3339")
  constant(value="\",\"type\":\"syslograw")
  constant(value="\",\"host\":\"") property(name="fromhost")
  constant(value="\",\"host-ip\":\"") property(name="fromhost-ip")
  constant(value="\",\"message\":\"") property(name="rawmsg" format="jsonr")
  constant(value="\"}\n")
}
Is there any functionality in rsyslog that would allow me to fix this, or does it seem like an upstream bug?

I notice you are using format="jsonr" in the template for the message. There is a difference if you use json instead of jsonr, which the documentation describes very briefly as "avoids double escaping" of the value. Using a template with
constant(value="\",\n\"json\":\"") property(name="rawmsg" format="json")
constant(value="\",\n\"jsonr\":\"") property(name="rawmsg" format="jsonr")
and providing input containing
LOCAL\ususer "abc"
produces these two lines
"json":"LOCAL\\ususer \"abc\",
"jsonr":"LOCAL\ususer \"abc\",
in which the json format has escaped the \u into \\u (tested with rsyslog-8.27.0).
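For reference, a complete test template along those lines might look like this (a sketch; the template name and the wrapping constants are illustrative, only the two property() lines matter):
template(name="json_test" type="list") {
  constant(value="{\"json\":\"") property(name="rawmsg" format="json")
  constant(value="\",\n\"jsonr\":\"") property(name="rawmsg" format="jsonr")
  constant(value="\"}\n")
}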
If this is not right for you, you can always manipulate the message yourself, for example by adding the following before your action:
set $.msg2 = replace($rawmsg, "\\u", "\\\\u");
and in your template use
constant(value="\",\"message\":\"") property(name="$.msg2" format="jsonr")
The replace function does a global replace, so you may want to restrict it, for example with
set $.msg2 = replace($rawmsg, "LOCAL\\u", "LOCAL\\\\u");

Cannot activate rust-analyzer: bootstrap error

Starting 2020-12-09, VSCode's Rust Analyzer extension no longer loads for me. On launch, it prints out this error message:
Cannot activate rust-analyzer: bootstrap error. See the logs in "OUTPUT > Rust Analyzer Client" (should open automatically). To enable verbose logs use { "rust-analyzer.trace.extension": true }
Enabling extension tracing produces the following diagnostic just before failing:
INFO [12/10/2020, 10:03:22 AM]: Using server binary at c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe
DEBUG [12/10/2020, 10:03:22 AM]: Checking availability of a binary at c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe
DEBUG [12/10/2020, 10:03:22 AM]: c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe --version: {
status: 3221225506,
signal: null,
output: [ null, '', '' ],
pid: 1648,
stdout: '',
stderr: ''
}
where <user> is the name of the user account I use to log into the system.¹
The status value reported in the error diagnostic (3221225506) translates to 0xC0000022 (STATUS_ACCESS_DENIED). Navigating to the binary from within VSCode's integrated terminal and trying to execute rust-analyzer-windows.exe --version doesn't produce any output, which seems to reinforce that running this executable from VSCode is somehow blocked.
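The decimal-to-hex translation is easy to double-check in a shell:
printf '0x%08X\n' 3221225506
# 0xC0000022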
It appears that something changed with respect to access rights when executing the server binary from within VSCode. Between Rust Analyzer working and no longer working, I didn't update Rust, rustup, VSCode, or any extensions.
I did install 2020-12 Cumulative Update for Windows 10 Version 20H2 for x64-based Systems (KB4592438), though, and the time Rust Analyzer started failing coincides with the time the update got installed. That could literally just be a coincidence.
What additional steps can I take to get to the root cause of the issue, and how do I get Rust Analyzer working again?
Version information:
Rust Analyzer (stable): v0.2.408
Windows 10 Pro: Version 10.0.19042 Build 19042
VSCode: 1.51.1 (user setup)
¹ This is also the user account VSCode runs under, including all of its spawned processes. Navigating to the path from a command prompt running under this account reveals that rust-analyzer-windows.exe is present, and executing rust-analyzer-windows.exe --version prints a version identifier, as expected.
Unfortunately, I didn't quite get to investigate the root cause of this.
A system reboot that was forced upon me appears to have restored World Peace.
Clearing the proxy config worked for me.
I'm not sure this covers all situations, but it might be related to the network.
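For reference, VSCode reads its proxy from the http.proxy setting in settings.json (or from the HTTP_PROXY/HTTPS_PROXY environment variables); blanking it is one way to rule the proxy out. A sketch, assuming the proxy was configured in settings.json:
{
  "http.proxy": ""
}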

VLC: which file to update the VLC Lua youtube script on Mac OS X?

I am trying to play a YouTube video through VLC on my Mac:
/Applications/VLC.app/Contents/MacOS/VLC -v "https://www.youtube.com/watch?v=afzmwAKUppU&app=desktop"
which gives these errors:
VLC media player 3.0.8 Vetinari (revision 3.0.8-0-gf350b6b5a7)
[00007faf5b5e9140] lua generic warning: Error while running script /Applications/VLC.app/Contents/MacOS/share/lua/extensions/youtube.lua, function descriptor() not found
[00007faf5b4589c0] macosx interface warning: Failed to enable media key support, likely app needs to be whitelisted in Security Settings.
[00007faf5b784950] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
[00007faf5b770200] lua stream warning: Couldn't extract video URL, falling back to alternate youtube API
[00007faf5b6b5b60] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
[00007faf5f97ce70] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
2020-10-15 13:45:28.281 VLC[65658:198319] Can't find app with identifier com.spotify.client
[00007faf5b5d8580] lua stream error: Couldn't extract youtube video URL, please check for updates to this script
[00007faf5b44b570] main playlist: playlist is empty
I got youtube.lua by downloading the file from the internet:
curl "http://git.videolan.org/?p=vlc.git;a=blob_plain;f=share/lua/playlist/youtube.lua;hb=HEAD" -o /Applications/VLC.app/Contents/MacOS/share/lua/extensions/youtube.lua
This works on my Ubuntu machine, but not on my Mac, so I am wondering whether this is the correct version for Mac OS. Which file should be put there?
If I look in the VLC Lua directory, I find:
/Applications/VLC.app/Contents/MacOS/share/lua/extensions$ ls -l
total 192
-rw-r--r--# 1 romain admin 72K Aug 14 2019 VLSub.luac
-rw-r--r-- 1 root admin 22K Oct 15 13:35 youtube.lua
youtube.lua is the new script I added, but maybe a different file should go there?
Probably a bit late here now, but in any case:
You need to take the youtube.lua from here: https://github.com/videolan/vlc/blob/master/share/lua/playlist/youtube.lua
Then rename it to youtube.luac and place it in the directory (for MacOS) /Applications/VLC.app/Contents/MacOS/share/lua/playlist
I would recommend renaming the old one and keeping it in case something goes awry.
At least on my computer this worked and I can now open YouTube videos in VLC again.
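A sketch of those steps in a terminal (paths as described above; the raw-file URL is the usual GitHub "raw" form of the linked file, and sudo may be needed depending on permissions):
cd /Applications/VLC.app/Contents/MacOS/share/lua/playlist
mv youtube.luac youtube.luac.bak
curl -L "https://raw.githubusercontent.com/videolan/vlc/master/share/lua/playlist/youtube.lua" -o youtube.luac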
Have you tried the 3.0.11.1 release? https://get.videolan.org/vlc/3.0.11.1/macosx/vlc-3.0.11.1.dmg
You're correct that youtube.lua is the right file, though the issue might come from other parts of the code depending on it. FYI, since you are running a stable VLC build, the code you should look at is https://code.videolan.org/videolan/vlc-3.0/-/blob/master/share/lua/playlist/youtube.lua.
https://code.videolan.org/videolan/vlc/-/blob/master/share/lua/playlist/youtube.lua is for the nightly v4 unstable builds (though for the Lua script, the two are identical).
Additionally, look out for a new release with an updated script soon https://mailman.videolan.org/pipermail/vlc-devel/2020-October/139076.html

Prometheus does not scrape all metrics from PCP pmproxy

On my laptop with Fedora 30 I have the Performance Co-Pilot (PCP) daemons installed and running, and Prometheus installed from the package golang-github-prometheus-prometheus-1.8.0-4.fc30.x86_64. In the PCP collector's config I specified the following metric namespaces:
# Performance Metrics Domain Specifications
#
# This file is automatically generated during the build
# Name Id IPC IPC Params File/Cmd
#root 1 pipe binary /var/lib/pcp/pmdas/root/pmdaroot
#pmcd 2 dso pmcd_init /var/lib/pcp/pmdas/pmcd/pmda_pmcd.so
proc 3 pipe binary /var/lib/pcp/pmdas/proc/pmdaproc -d 3
#xfs 11 pipe binary /var/lib/pcp/pmdas/xfs/pmdaxfs -d 11
linux 60 pipe binary /var/lib/pcp/pmdas/linux/pmdalinux
#pmproxy 4 dso pmproxy_init /var/lib/pcp/pmdas/mmv/pmda_mmv.so
mmv 70 dso mmv_init /var/lib/pcp/pmdas/mmv/pmda_mmv.so
#jbd2 122 dso jbd2_init /var/lib/pcp/pmdas/jbd2/pmda_jbd2.so
#kvm 95 pipe binary /var/lib/pcp/pmdas/kvm/pmdakvm -d 95
[access]
disallow ".*" : store;
disallow ":*" : store;
allow "local:*" : all;
When I visit the URL localhost:44323/metrics, the output is very rich and covers many namespaces, e.g. mem, network, kernel, filesys, hotproc etc., but when I scrape it with Prometheus, where the job is defined as:
scrape_configs:
  - job_name: 'pcp'
    scrape_interval: 10s
    sample_limit: 0
    static_configs:
      - targets: ['127.0.0.1:44323']
I see the target status UP, but in the console only these two metric namespaces are available for querying: hinv and mem. I tried to copy other metric names from the /metrics page, but the queries result in the error 'No datapoints found.' Initially I thought the problem could be a limit on the number of samples per target or a sampling interval that was too small (I originally set it to 1s), but hinv and mem are not next to each other, and other metrics in between them (e.g. filesys, kernel) are omitted. What could be the reason for that?
I have not found the exact cause of the problem, but it must have been version-specific, because after I downloaded and launched the latest version, 2.19, the problem was gone: with exactly the same config, Prometheus was reading all metrics from the target.
Adding another answer, because I have just seen this issue again in another environment, where Prometheus v2.19 was pulling metrics via PMAPI from PCP v5 on CentOS 7 servers. In the Prometheus config file the scrape was configured as a single job with multiple metric domains, i.e.:
- job_name: 'pcp'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['kernel', 'mem', 'disk', 'network', 'mounts', 'lustre', 'infiniband']
When there was a problem with one metric domain, usually lustre or infiniband due to the lack of corresponding hardware on the host, only the kernel metrics were collected and no others.
The issue was fixed by splitting the scrape job into multiple jobs, each with only one target, i.e.:
- job_name: 'pcp-kernel'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['kernel']
- job_name: 'pcp-mem'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['mem']
[...]
This way, metrics from the core domains were always scraped successfully, even if one or all of the extra ones failed. Such a setup seems to be more robust; however, it makes the target status view busier, because there are more scrape jobs.
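As an aside, a scrape-config change like the split above can be sanity-checked before reloading with promtool, which ships with Prometheus 2.x (the config file name is illustrative):
promtool check config prometheus.yml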

Lighttpd closes connection when system time is changed

These are some of the parameters of my lighttpd config file.
server.modules += ( "mod_wstunnel", "mod_auth")
wstunnel.debug = 4
wstunnel.server.max-read-idle = 86400
#wstunnel.ping-interval = 5
#wstunnel.timeout = 30
When I open my web application, connection is created properly using websocket and connects to my c++ server.
All functionalities work except one.
One requirement of my application is to change the system time of the machine, but when the system time is changed, the connection is closed and the log file shows:
2019-02-12 14:04:10: (gw_backend.c.308) released proc: pid: 0 socket: tcp:127.0.0.1:10002 load: 0
I want to maintain the connection even if system time is changed.
What other parameters can be used, or what modifications are required to these parameters?
System OS : Fedora 26
Lighttpd version : 1.4.49
wstunnel.server.max-read-idle does not exist. Did you test the lighttpd config before running it and look at the error trace? It should have noted wstunnel.server.max-read-idle as an unrecognized directive.
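For reference, lighttpd can pre-flight a config file from the command line, which surfaces warnings like that (the config path here is illustrative):
lighttpd -t -f /etc/lighttpd/lighttpd.conf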
The directives you seek are:
server.max-read-idle
server.max-write-idle
server.max-keep-alive-idle
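For example, setting all three to one day (86400 seconds, mirroring the value attempted above) would look like:
server.max-read-idle = 86400
server.max-write-idle = 86400
server.max-keep-alive-idle = 86400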
However, if the time on your server (running lighttpd) is jumping more than a few seconds, then I suggest that is your primary problem.
Also, Fedora 26 reached end-of-life on May 29, 2018. Supported Fedora releases ship newer versions of lighttpd. The current version is lighttpd 1.4.53.

GnuCOBOL entry point not found

I've installed GnuCOBOL 2.2 on my Ubuntu 17.04 system. I've written a basic hello world program to test the compiler.
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO-WORLD.
*---------------------------
DATA DIVISION.
*---------------------------
PROCEDURE DIVISION.
DISPLAY 'Hello, world!'.
STOP RUN.
This program is entitled HelloWorld.cbl. When I compile the program with the command
cobc HelloWorld.cbl
HelloWorld.so is produced. When I attempt to run the compiled program using
cobcrun HelloWorld
I receive the following error:
libcob: entry point 'HelloWorld' not found
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
According to the official GnuCOBOL manual, you should compile your code with:
cobc -x HelloWorld.cbl
then run it with
./HelloWorld
You can also read the GnuCOBOL wiki page, which contains some examples, for further information.
P.S. As Simon Sobisch said, if you rename your file to HELLO-WORLD.cbl to match the PROGRAM-ID, the same commands that you used will work:
cobc HELLO-WORLD.cbl
cobcrun HELLO-WORLD
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
An entry point is a point where you may enter a shared object (this is actually more C than COBOL).
GnuCOBOL generates entry points for each PROGRAM-ID, FUNCTION-ID and ENTRY. Therefore your entry point is HELLO-WORLD (which likely gets converted, as - is not valid in an ANSI C identifier; you won't have to think about this when CALLing a program, since the conversion is done internally).
Using cobcrun internally does the following:
search for a shared object (in your case HelloWorld); as this is found (because you've generated it), it is loaded
search for an entry point HelloWorld in all loaded modules - which isn't found
There are three possible options to get this working:
As mentioned in Ho1's answer: use cobc -x. The reason this works is that you don't generate a shared object at all but a C main, which is called directly (so the entry point doesn't apply at all).
Preload the shared object and call the program by its PROGRAM-ID (entry point), either manually with COB_PRE_LOAD=HelloWorld cobcrun HELLO-WORLD or through cobcrun (option available since GnuCOBOL 2.x): cobcrun -M HelloWorld HELLO-WORLD
Change the PROGRAM-ID to match the source name (either rename the file or change the source; I'd do the second: PROGRAM-ID. HelloWorld.).
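For completeness, the three options as shell commands (file and program names as above):
# option 1: build a native executable instead of a module
cobc -x HelloWorld.cbl
./HelloWorld
# option 2: preload the module, then call the entry point
COB_PRE_LOAD=HelloWorld cobcrun HELLO-WORLD
# or, since GnuCOBOL 2.x:
cobcrun -M HelloWorld HELLO-WORLD
# option 3: after changing the source to PROGRAM-ID. HelloWorld.
cobc HelloWorld.cbl
cobcrun HelloWorld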
