Communicating with a systemd service through a socket mapped to stdin - bash

I'm creating my first background service and I want to communicate with it through a socket.
I have the following script /tmp/myservice.sh:
#! /usr/bin/env bash
while read received_cmd
do
echo "Received command ${received_cmd}"
done
And the following socket unit, /etc/systemd/user/myservice.socket:
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenSequentialPacket=/tmp/myservice.socket
And the following service:
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardError=journal
StandardInput=socket
StandardOutput=socket
Type=simple
The idea is to understand how to communicate with a background service, here using a Unix file socket. The script works well when launched from a shell and reading from stdin, so I thought that by setting StandardInput=socket it would read from the socket in the same way.
Nevertheless, when I run nc -U /tmp/myservice.socket the command returns right away and I have the following output:
$ journalctl --user -u myservice
-- Logs begin at Sat 2020-10-24 17:26:25 BST, end at Thu 2020-10-29 14:00:53 GMT. --
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21941]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21942]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21943]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21944]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21945]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Start request repeated too quickly.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Failed with result 'start-limit-hit'.
Oct 29 08:40:16 shiny systemd[1689]: Failed to start A simple service example.
Did I misunderstand how sockets work? Why does read fail to read from the socket? Should I use another mechanism to communicate with my background service? (As I said, this is my first background service, so I may be doing unconventional things here.)

The only thing I have seen working with a shell script is ListenStream= rather than ListenSequentialPacket=. (Obviously, this means you lose packet boundaries, but the shell read is line-oriented, reading up to a \n from the stream, so this is not usually a problem.)
But the most important thing that is missing is the extra Accept= line:
[Socket]
ListenStream=...
Accept=true
As I understand it, without this the service is passed a listening socket on which it must first call accept() to get the actual connection socket (hence the read error). The service must then also handle all further connections itself.
By using Accept=true, a new service instance is started for each incoming connection and is passed the immediately usable connection socket. Note, however, that this means the service must now be templated, i.e. called myservice@.service rather than myservice.service.
(For datagram sockets, Accept= must be left at its default of false.) See man systemd.socket.
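Putting the pieces together, a minimal sketch of the two user units might look like this (an illustration based on the points above rather than a tested configuration; note the @ in the service name, which is required once Accept=true is set):
/etc/systemd/user/myservice.socket:
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenStream=/tmp/myservice.socket
Accept=true
/etc/systemd/user/myservice@.service:
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardInput=socket
StandardOutput=socket
StandardError=journal
After systemctl --user daemon-reload and systemctl --user start myservice.socket, each nc -U /tmp/myservice.socket connection should get its own service instance reading lines from the socket.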

Related

collectd - exec plugin: Unable to parse command

I'm trying to return a value from a simple script. However, I'm getting the following error.
Feb 26 09:26:37 localhost systemd[1]: Starting Collectd statistics daemon...
Feb 26 09:26:37 localhost collectd[834]: plugin_load: plugin "exec" successfully loaded.
Feb 26 09:26:37 localhost collectd[834]: Systemd detected, trying to signal readyness.
Feb 26 09:26:37 localhost systemd[1]: Started Collectd statistics daemon.
Feb 26 09:26:37 localhost collectd[834]: Initialization complete, entering read-loop.
Feb 26 09:26:37 localhost collectd[834]: exec plugin: Unable to parse command, ignoring line: "73"
Feb 26 09:26:47 localhost collectd[834]: exec plugin: Unable to parse command, ignoring line: "74"
Feb 26 09:26:57 localhost collectd[834]: exec plugin: Unable to parse command, ignoring line: "73"
Feb 26 09:27:07 localhost collectd[834]: exec plugin: Unable to parse command, ignoring line: "73"
My config is
LoadPlugin exec
<Plugin exec>
Exec "cwagent" "/opt/aws/amazon-cloudwatch-agent/bin/supervisor.sh"
</Plugin>
and my script is
#!/bin/bash
VALUE=$(/bin/systemctl status | wc -l)
echo "$VALUE"
I realise that this is probably a silly mistake I'm making. I have spent a bit of time playing around and googling to try to understand the problem. But I'm afraid I've made little progress. Grateful for any advice :¬)
A number of things. First, your plugin is forked off by collectd with the expectation that it keeps running and producing consumable output, so you need to use a while loop as laid out here: https://collectd.org/wiki/index.php/Plugin:Exec
Second, your output format is wrong. I found this bit of the documentation badly written, because it isn't completely clear how the gauge name and metric name are constituted out of the string. Taking the example from the page above:
echo "PUTVAL \"$HOSTNAME/exec-magic/gauge-magic_level\" interval=$INTERVAL N:$VALUE"
Then:
exec-magic is the plugin name
magic_level is the metric name
gauge is the data source type from collectd types
N: is the abbreviation for "now" as defined in the exec plugin
So putting this together, you'd have something similar to:
#!/bin/bash
# Hostname and interval are provided by collectd via environment variables
HOSTNAME="${COLLECTD_HOSTNAME:-localhost}"
INTERVAL="${COLLECTD_INTERVAL:-60}"
while sleep "$INTERVAL"; do
    VALUE=$(/bin/systemctl status | wc -l)
    echo "PUTVAL \"${HOSTNAME}/cwagent/counter-line_count\" interval=$INTERVAL N:$VALUE"
done
In this case you are using the simple counter type and returning a single value equivalent to the number of lines you counted in your command.
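To sanity-check the format, you can run the script by hand outside collectd; each line it emits should look something like this (values here are illustrative):
PUTVAL "localhost/cwagent/counter-line_count" interval=60 N:73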

Difference between **journalctl -u test.service** and **journalctl CONTAINER_NAME=test**

I have a systemd service file which runs a Docker container with the journald log driver.
ExecStart=/usr/bin/docker run \
--name ${CONTAINER_NAME} \
-p ${PORT}:8080 \
--add-host ${DNS} \
-v /etc/localtime:/etc/localtime:ro \
--log-driver=journald \
--log-opt tag="docker.{{.Name}}" \
${REPOSITORY_NAME}/${CONTAINER_NAME}
ExecStop=-/usr/bin/docker stop ${CONTAINER_NAME}
When I check the logs via journalctl I see two different _TRANSPORT.
With journalctl -u test.service I see _TRANSPORT=stdout, and with journalctl CONTAINER_NAME=test I see _TRANSPORT=journal.
What is the difference?
The difference here is in how the logs get to systemd-journald before they are logged.
As of right now, the supported transports (at least according to the _TRANSPORT field in systemd-journald) are: audit, driver, syslog, journal, stdout and kernel (see systemd.journal-fields(7)).
In your case, everything logged to stdout by commands executed by the ExecStart= and ExecStop= directives is logged under the _TRANSPORT=stdout transport.
However, Docker is internally capable of using the journald logging driver which, among other things, introduces several custom journal fields - one of them being CONTAINER_ID=. It's just a different method of delivering data to systemd-journald - instead of relying on systemd to catch and send everything from stdout to systemd-journald, Docker internally sends everything straight to systemd-journald by itself.
This can be achieved by using the sd-journal API (as described in sd-journal(3)). Docker uses the go-systemd Go bindings for the sd-journal C library.
Simple example:
hello.c
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(void)
{
printf("Hello from stdout\n");
sd_journal_print(LOG_INFO, "Hello from journald");
return 0;
}
# gcc -o /var/tmp/hello -lsystemd hello.c
# cat > /etc/systemd/system/hello.service << EOF
[Service]
ExecStart=/var/tmp/hello
EOF
# systemctl daemon-reload
# systemctl start hello.service
Now if I check the journal, I'll see both messages:
# journalctl -u hello.service
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:11:29 CET. --
Mar 27 17:08:28 localhost systemd[1]: Started hello.service.
Mar 27 17:08:28 localhost hello[921852]: Hello from journald
Mar 27 17:08:28 localhost hello[921852]: Hello from stdout
Mar 27 17:08:28 localhost systemd[1]: hello.service: Succeeded.
But each of them arrived using a different transport:
# journalctl -u hello.service _TRANSPORT=stdout
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:12:29 CET. --
Mar 27 17:08:28 localhost hello[921852]: Hello from stdout
# journalctl -u hello.service _TRANSPORT=journal
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:12:29 CET. --
Mar 27 17:08:28 localhost systemd[1]: Started hello.service.
Mar 27 17:08:28 localhost hello[921852]: Hello from journald
Mar 27 17:08:28 localhost systemd[1]: hello.service: Succeeded.
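To see which transport an entry used, you can also dump the journal fields directly; something like this (output trimmed) shows the _TRANSPORT field next to each MESSAGE:
# journalctl -u hello.service -o verbose | grep -E '_TRANSPORT|MESSAGE='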

Gammu stops receiving SMS after a while

I have a problem that's been bugging me for a while now. I've been searching for solutions for two weeks without any result. These guys have the same problem as me, but there are no answers there.
I'm running gammu (1.31) and gammu-smsd on a Raspberry Pi with Raspbian, using a Huawei E367.
I don't know why I get three devices: /dev/ttyUSB0, /dev/ttyUSB1, /dev/ttyUSB2.
Since I don't know the difference between them, I tried different settings and got it running with the following: gammu configured for ttyUSB0 and gammu-smsdrc for ttyUSB2, both as root and as a normal user.
Sending SMS works great. Then comes the problem: receiving SMS works for a while, then just stops. If I reboot the system it starts working again for a while, but then the same thing happens.
# Configuration file for Gammu SMS Daemon
# Gammu library configuration, see gammurc(5)
[gammu]
# Please configure this!
port = /dev/ttyUSB2
connection = at
# Debugging
#logformat = textall
# SMSD configuration, see gammu-smsdrc(5)
[smsd]
service = files
logfile = /home/pi/gammu/log/log_smsdrc.txt
# Increase for debugging information
debuglevel = 0
# Paths where messages are stored
inboxpath = /home/pi/gammu/inbox/
outboxpath = /home/pi/gammu/outbox/
sentsmspath = /home/pi/gammu/sent/
errorsmspath = /home/pi/gammu/error/
ReceiveFrequency = 2
LoopSleep = 1
GammuCoding = utf8
CommTimeout = 0
#RunOnReceive =
Log
Tue 2015/03/31 11:05:19 gammu-smsd[7379]: Starting phone communication...
Tue 2015/03/31 11:07:07 gammu-smsd[7379]: Terminating communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2091]: Warning: No PIN code in /etc/gammu-smsdrc file
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Created POSIX RW shared memory at 0xb6f6d000
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Going to 30 seconds sleep because of too much connection errors
Tue 2015/03/31 11:08:14 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:08:21 gammu-smsd[2116]: Soft reset return code: Function not supported by phone. (NOTSUPPORTED[21])
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Read 2 messages
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received IN20150331_110600_00_+xxxxxx_00.txt
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received IN20150331_110820_00_+xxxxxx_00.txt
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Received IN20150331_110934_00_+xxxxxx_00.txt
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Received IN20150331_111352_00_+xxxxxx_00.txt
I guess the early warnings are before my modeswitch command kicks in.
In rc.local:
sudo usb_modeswitch -v 0x12d1 -p 0x1446 -V 0x12d1 -P 0x1506 -m 0x01 -M 55534243123456780000000000000011062000000100000000000000000000 -I
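A quick way to verify the switch took effect is to check that the stick now reports the target product ID given above (-P 0x1506); the output should look something like:
$ lsusb | grep -i huawei
Bus 001 Device 004: ID 12d1:1506 Huawei Technologies Co., Ltd.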
I have the same problem, so I wrote a shell script as a quick, clean way to reactivate the /dev/ttyUSB[0-2] devices, and added it as a cron job:
*/5 * * * * /home/sysadmin/scripts/reanimate-usb-stick.sh >/dev/null 2>&1
reanimate-usb-stick.sh
#!/bin/bash
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
# Count the ttyUSB serial devices currently present
USBDEVICES=$(ls /dev/ttyUSB[0-7] 2>/dev/null | wc -l)
DEVICEINFO=""
DEVICEPORT=""
if [ "$USBDEVICES" -eq 0 ]
then
    # Pull the Huawei vendor and product IDs out of lsusb (field 6 is "vvvv:pppp")
    datas=$(lsusb | grep -i hua | awk '/Bus/ {print $6}' | tr ":" "\n")
    counter=0
    for line in $datas
    do
        counter=$((counter+1))
        if [ "$counter" -eq 1 ]
        then
            DEVICEINFO="$line"   # vendor ID
        fi
        if [ "$counter" -eq 2 ]
        then
            DEVICEPORT="$line"   # product ID
        fi
    done
    # Ask the stick to switch modes so the serial ports reappear
    usb_modeswitch -v "$DEVICEINFO" -p "$DEVICEPORT" -J
    echo "$DEVICEINFO - $DEVICEPORT"
else
    echo "ALLES OK : $USBDEVICES"
    exit
fi
This looks pretty much the same as https://github.com/gammu/gammu/issues/4, and even though there were some attempts to fix this in Gammu, it seems that the Huawei modem firmware is simply not stable enough for this usage. Simply asking it several times to list received messages makes it unresponsive.
Also, which device you use might make a slight difference; see the Gammu manual and the dd-wrt wiki for more information on that topic.
I had a similar problem with a Huawei E1750 3G modem. I added the following lines to the /etc/gammu-smsdrc file:
ReceiveFrequency = 60
StatusFrequency = 60
CommTimeout = 60
SendTimeout = 60
LoopSleep = 10
CheckSecurity = 0
The idea is to minimize the amount of communication between gammu-smsd and the 3G modem. In particular, the default value LoopSleep=1 means that gammu sends commands to the modem every second, which could be too much for the modem firmware, so I used 10.
The next thing is something standard in all Raspberry Pi/ARM embedded projects: use a powerful power source. I'm using a charger with a fixed cable (I believe some detachable cables may be inappropriate for currents above 2 A) that looks like this:
http://botland.com.pl/9240-thickbox_default/zasilacz-extreme-microusb-5v-21a-raspberry-pi.jpg
With that, the modem still hangs after about 50-100 hours of operation, but that's enough for my project.

Parsing entry name from a log

Writing bash parsing scripts is my own personal nightmare, so here I am.
The server log format is below:
197 INFO Thu Mar 27 10:10:32 2014
seq_1_1..JobControl (DSWaitForJob): Waiting for job job_1_1_1 to finish
198 INFO Thu Mar 27 10:10:36 2014
seq_1_1..JobControl (DSWaitForJob): Job job_1_1_1 has finished, status = 3 (Aborted)
199 WARNING Thu Mar 27 10:10:36 2014
seq_1_1..JobControl (#job_1_1_1): Job job_1_1_1 did not finish OK, status = 'Aborted'
From here I need to parse out the string which follows the format:
Job job_name has finished, status = 3 (Aborted)
So from the output above I should get: job_1_1_1
What would a script for that look like, given this server log as the output of some command?
Thanks xx
Using grep -P:
grep -oP '\w+(?= has finished, status = 3)' file
job_1_1_1
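If GNU grep with -P (PCRE) isn't available, a more portable sed version of the same extraction might look like this (assuming the message format shown above):
sed -n 's/.*Job \([^ ]*\) has finished, status = 3.*/\1/p' file
This prints whatever sits between "Job " and " has finished, status = 3", i.e. job_1_1_1 for the sample log. Pipe your command's output into sed instead of naming a file if the log comes from a command.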

hadoop multiline mixed records

I would like to parse logfiles produced by the fidonet mailer binkd, which are multi-line and, much worse, mixed: several instances can write into one logfile, for example:
27 Dec 16:52:40 [2484] BEGIN, binkd/1.0a-545/Linux -iq /tmp/binkd.conf
+ 27 Dec 16:52:40 [2484] session with 123.45.78.9 (123.45.78.9)
- 27 Dec 16:52:41 [2484] SYS BBSName
- 27 Dec 16:52:41 [2484] ZYZ First LastName
- 27 Dec 16:52:41 [2484] LOC City, Country
- 27 Dec 16:52:41 [2484] NDL 115200,TCP,BINKP
- 27 Dec 16:52:41 [2484] TIME Thu, 27 Dec 2012 21:53:22 +0600
- 27 Dec 16:52:41 [2484] VER binkd/0.9.6a-173/Win32 binkp/1.1
+ 27 Dec 16:52:43 [2484] addr: 2:1234/56.78#fidonet
- 27 Dec 16:52:43 [2484] OPT NDA CRYPT
+ 27 Dec 16:52:43 [2484] Remote supports asymmetric ND mode
+ 27 Dec 16:52:43 [2484] Remote requests CRYPT mode
- 27 Dec 16:52:43 [2484] TRF 0 0
*+ 27 Dec 16:52:43 [1520] done (from 2:456/78#fidonet, OK, S/R: 0/0 (0/0 bytes))*
+ 27 Dec 16:52:43 [2484] Remote has 0b of mail and 0b of files for us
+ 27 Dec 16:52:43 [2484] pwd protected session (MD5)
- 27 Dec 16:52:43 [2484] session in CRYPT mode
+ 27 Dec 16:52:43 [2484] done (from 2:1234/56.78#fidonet, OK, S/R: 0/0 (0/0 bytes))
So the logfile is not only multi-line with an unpredictable number of lines per session; records from several sessions can also be interleaved, as when session 1520 finishes in the middle of session 2484.
What would be the right direction in Hadoop to parse such a file? Or should I just parse line by line, merge the lines into records later, and write those records into a SQL database using another set of jobs?
Thanks.
The right direction in Hadoop is to develop your own input format, whose record reader reads the input line by line and produces logical records.
It could be noted that you can actually do this in the mapper as well; it might be a bit simpler. The drawback is that this is not the standard way to package such code for Hadoop, so it is less reusable.
The other direction you mentioned is not "natural" for Hadoop, in my view. Specifically: why use all the complicated (and expensive) shuffle machinery to join together several lines that are already at hand?
First of all, parsing the file is not what you are trying to do; you are trying to extract some information from your data.
In your case you can consider a multi-step MR job, where the first MR job essentially (partially) sorts your input by session_id (doing some filtering? some aggregation? multiple reducers?), and then a reducer, or the next MR job, does the actual calculation.
Without an explanation of what you are trying to extract from your log files, it is hard to give a more definitive answer.
Also, if your data is small, maybe you can process it without the MR machinery at all?
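For that last case, a single-pass awk script can do the merge by keying each line on the bracketed PID. A sketch, assuming the PID uniquely identifies a session within the file and that a "done (" line marks the end of a session:
#!/usr/bin/env bash
# Group interleaved binkd log lines into one record per session (keyed by PID).
awk '
match($0, /\[[0-9]+\]/) {
    id = substr($0, RSTART + 1, RLENGTH - 2)   # session PID, e.g. 2484
    rec[id] = rec[id] $0 "\n"
    if ($0 ~ / done \(/) {                     # end-of-session marker
        printf "%s----\n", rec[id]             # emit the assembled record
        delete rec[id]
    }
}' binkd.log
Each emitted record can then be post-processed or loaded into SQL without any shuffle.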
