Unable to connect to local Redis Docker container although it is up and running - macOS

I'm using the following redis.conf:
▶ cat redis.conf
bind 0.0.0.0
Spinning up a Redis container:
▶ docker run -d --name redis-test -p 11111:6379 -v /Users/redis.conf:/redis.conf redis redis-server /redis.conf
59eb1612e8c3e2403e18ce889ce1438f6c6a23a7c70bed30b46ff765b7fe7038
The logs seem healthy:
▶ docker logs -f 59eb1612e8c3e2403e18ce889ce1438f6c6a23a7c70bed30b46ff765b7fe7038
1:C 18 Mar 2021 17:57:13.954 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 18 Mar 2021 17:57:13.954 # Redis version=6.2.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 18 Mar 2021 17:57:13.954 # Configuration loaded
1:M 18 Mar 2021 17:57:13.955 * monotonic clock: POSIX clock_gettime
1:M 18 Mar 2021 17:57:13.955 * Running mode=standalone, port=6379.
1:M 18 Mar 2021 17:57:13.955 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 18 Mar 2021 17:57:13.956 # Server initialized
1:M 18 Mar 2021 17:57:13.956 * Ready to accept connections
The container seems up:
▶ docker ps | grep -i redis
59eb1612e8c3 redis "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:11111->6379/tcp redis-test
If all of the above are more or less good indications, why am I unable to connect to the container?
▶ redis-cli -h localhost -p 11111
Could not connect to Redis at localhost:11111: Connection refused
not connected>
▶ redis-cli -h 127.0.0.1 -p 11111
Could not connect to Redis at 127.0.0.1:11111: Connection refused
not connected>
Working on macOS Catalina.

Find the IP address of the container called redis-test by running this command (I'm on Linux, but I believe it's the same on macOS; apologies if not):
docker inspect redis-test | grep -i ipaddress
The result should be something like this:
"IPAddress": "172.21.0.2"
Now try (when connecting to the container's IP directly, the published port mapping is bypassed, so use the container port 6379):
redis-cli -h 172.21.0.2 -p 6379
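A quick sanity check that bypasses host networking entirely is to run redis-cli from inside the container itself (assuming the stock redis image, which ships redis-cli):
▶ docker exec -it redis-test redis-cli -p 6379 ping
If Redis is listening, this should answer PONG, which would mean the server is fine and the problem lies in how the host reaches the container. Also worth keeping in mind: on macOS, Docker runs inside a VM, so container IPs like 172.21.0.2 are generally not reachable from the host, and the published port (localhost:11111) is normally the only way in from the Mac side.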


htpdate does not update time

I have two machines, an RPi4 and an Ubuntu PC. The two machines are off-grid and connected to their own network. The idea is to sync the RPi4's time with the Ubuntu machine. The NTP approach failed because of port and firewall issues, so I used htpdate instead. I've noticed, however, that I cannot set the correct system time on the RPi4. Regardless of the command, I cannot get rid of the offset.
I looked elsewhere for how to sync, without success:
synchronising-time-between-two-linux-machines
how-to-sync-raspberry-pi-system-clock
how-to-know-if-htpdate-has-synchronized-system-clock
htpdate.8.en.html
htpdate.php
The problem is that I cannot get rid of the offset. The output of my sessions is:
pi@CMPL01-003-21:~ $ sudo htpdate -qd 10.42.0.1
burst: 1 try: 1 when: 500000
10.42.0.1 80 14 Feb 2023 13:37:24 GMT (0.003) => 38
burst: 1 try: 2 when: 500000
10.42.0.1 80 14 Feb 2023 13:37:25 GMT (0.006) => 38
#: 1 mean: 38 average: 38.000
Offset 38.000 seconds
poll 1800 s
pi@CMPL01-003-21:~ $ sudo htpdate -xqd 10.42.0.1
burst: 1 try: 1 when: 500000
10.42.0.1 80 14 Feb 2023 13:39:25 GMT (0.003) => 38
burst: 1 try: 2 when: 500000
10.42.0.1 80 14 Feb 2023 13:39:26 GMT (0.003) => 38
#: 1 mean: 38 average: 38.000
Adjusting 38.000 seconds
poll 1800 s
pi@CMPL01-003-21:~ $ sudo htpdate -sd 10.42.0.1
burst: 1 try: 1 when: 500000
10.42.0.1 80 14 Feb 2023 13:40:26 GMT (0.003) => 38
burst: 1 try: 2 when: 500000
10.42.0.1 80 14 Feb 2023 13:40:27 GMT (0.005) => 38
#: 1 mean: 38 average: 38.000
Setting 38.000 seconds
Set: Tue Feb 14 14:40:27 2023
poll 1800 s
Is there a missing hidden htpdate parameter or some rules to apply to a folder maybe to clear the offset?
I upgraded htpdate from 1.2.0 to 1.3.7 and tried again. The result was the same and the offset was still there, with the difference that the command output noted that the client machine was running the automatic NTP sync service. Afterwards I was able to sync correctly with htpdate after disabling that service with sudo timedatectl set-ntp false. Case closed.
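In other words, the sequence that finally worked was, as a sketch (the server IP is the one from the sessions above):
sudo timedatectl set-ntp false   # stop systemd's automatic NTP sync from re-adjusting the clock
sudo htpdate -s 10.42.0.1        # set the system time once from the server's HTTP Date header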

Communicating with Systemd service through socket mapped to stdin

I'm creating my first background service and I want to communicate with it through a socket.
I have the following script /tmp/myservice.sh:
#!/usr/bin/env bash
while read received_cmd
do
    echo "Received command ${received_cmd}"
done
And the following socket unit, /etc/systemd/user/myservice.socket:
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenSequentialPacket=/tmp/myservice.socket
And the following service:
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardError=journal
StandardInput=socket
StandardOutput=socket
Type=simple
The idea is to understand how to communicate with a background service, here using a Unix file socket. The script works well when launched from the shell and reading stdin, and I thought that by setting StandardInput=socket it would read from the socket the same way.
Nevertheless, when I run nc -U /tmp/myservice.socket the command returns right away and I get the following output:
$ journalctl --user -u myservice
-- Logs begin at Sat 2020-10-24 17:26:25 BST, end at Thu 2020-10-29 14:00:53 GMT. --
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21941]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21942]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21943]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21944]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21945]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Start request repeated too quickly.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Failed with result 'start-limit-hit'.
Oct 29 08:40:16 shiny systemd[1689]: Failed to start A simple service example.
Did I misunderstand how sockets work? Why does read fail to read from the socket? Should I use another mechanism to communicate with my background service (as I said, it's my first background service, so I may be doing unconventional things here)?
The only thing I have seen working with a shell script is ListenStream= rather than ListenSequentialPacket=. (Obviously this means you lose packet boundaries, but the shell read is usually oriented to reading lines ending in \n from streams, so that is not usually a problem.)
But the most important thing that is missing, is the extra Accept line:
[Socket]
ListenStream=...
Accept=true
As I understand it, without this the service will be passed a socket on which it must first do a socket accept() call, to get the actual connection socket (hence the read error). The service must also then handle all further connections.
By using Accept=true, a new service instance will be started for each new connection and will be passed the immediately usable connection socket. Note, however, that this means the service must now be templated, i.e. called myservice@.service rather than myservice.service.
(For datagram sockets, Accept= must be left at its default of false.) See man systemd.socket.
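Putting that together, a minimal working pair of units might look like this (a sketch assembled from the units in the question, with only the two changes above applied):
# /etc/systemd/user/myservice.socket
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenStream=/tmp/myservice.socket
Accept=true

# /etc/systemd/user/myservice@.service (note the template suffix)
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardInput=socket
StandardOutput=socket
StandardError=journal
After systemctl --user daemon-reload and systemctl --user start myservice.socket, each nc -U /tmp/myservice.socket connection should spawn a fresh templated instance that reads lines from that connection.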

Difference between journalctl -u test.service and journalctl CONTAINER_NAME=test

I have a systemd service file which runs a Docker container with the journald log driver.
ExecStart=/usr/bin/docker run \
    --name ${CONTAINER_NAME} \
    -p ${PORT}:8080 \
    --add-host ${DNS} \
    -v /etc/localtime:/etc/localtime:ro \
    --log-driver=journald \
    --log-opt tag="docker.{{.Name}}" \
    ${REPOSITORY_NAME}/${CONTAINER_NAME}
ExecStop=-/usr/bin/docker stop ${CONTAINER_NAME}
When I check the logs via journalctl I see two different _TRANSPORT.
With journalctl -u test.service I see _TRANSPORT=stdout, and with journalctl CONTAINER_NAME=test I see _TRANSPORT=journal.
What is the difference?
The difference here is in how the logs get to systemd-journald before they are logged.
As of right now, the supported transports (at least according to the _TRANSPORT field in systemd-journald) are: audit, driver, syslog, journal, stdout and kernel (see systemd.journal-fields(7)).
In your case, everything logged to stdout by commands executed by the ExecStart= and ExecStop= directives is logged under the _TRANSPORT=stdout transport.
However, Docker is internally capable of using the journald logging driver which, among other things, introduces several custom journal fields, among them CONTAINER_ID= and CONTAINER_NAME=. It's just a different method of delivering data to systemd-journald: instead of relying on systemd to catch and send everything from stdout to systemd-journald, Docker internally sends everything straight to systemd-journald by itself.
This can be achieved by using the sd-journal API (as described in sd-journal(3)). Docker uses the go-systemd Go bindings for the sd-journal C library.
Simple example:
hello.c
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(void)
{
printf("Hello from stdout\n");
sd_journal_print(LOG_INFO, "Hello from journald");
return 0;
}
# gcc -o /var/tmp/hello hello.c -lsystemd
# cat > /etc/systemd/system/hello.service << EOF
[Service]
ExecStart=/var/tmp/hello
EOF
# systemctl daemon-reload
# systemctl start hello.service
Now if I check the journal, I'll see both messages:
# journalctl -u hello.service
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:11:29 CET. --
Mar 27 17:08:28 localhost systemd[1]: Started hello.service.
Mar 27 17:08:28 localhost hello[921852]: Hello from journald
Mar 27 17:08:28 localhost hello[921852]: Hello from stdout
Mar 27 17:08:28 localhost systemd[1]: hello.service: Succeeded.
But each of them arrived using a different transport:
# journalctl -u hello.service _TRANSPORT=stdout
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:12:29 CET. --
Mar 27 17:08:28 localhost hello[921852]: Hello from stdout
# journalctl -u hello.service _TRANSPORT=journal
-- Logs begin at Mon 2019-09-30 22:08:02 CEST, end at Fri 2020-03-27 17:12:29 CET. --
Mar 27 17:08:28 localhost systemd[1]: Started hello.service.
Mar 27 17:08:28 localhost hello[921852]: Hello from journald
Mar 27 17:08:28 localhost systemd[1]: hello.service: Succeeded.
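Applied back to the Docker case in the question, the same filters should separate the two paths (assuming the container is named test and started from test.service):
# journalctl -u test.service _TRANSPORT=stdout
# journalctl CONTAINER_NAME=test _TRANSPORT=journal
The first shows what systemd captured from the docker client's stdout; the second shows what the journald log driver delivered directly.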

Why is my EC2 instance sometimes not terminating when I use 'shutdown now'?

I am running a user-data script on startup of an EC2 machine which shuts the machine down after checking the exit status of the last executed command. I confirmed that the last executed command ran successfully, so I am not sure why the machine is not terminating. This doesn't happen every time; it seems to happen only when the user-data script finishes quickly.
Here is the end of my bash script (after executing several commands):
python myscript.py
ret=$?
echo $ret
if [[ $ret -eq 0 ]]; then
    shutdown now
fi
This produces the following output, but does not terminate the instance:
0
Cloud-init v. 0.7.9 running 'init-local' at Wed, 28 Nov 2018 20:15:38 +0000. Up 11.12 seconds.
Cloud-init v. 0.7.9 running 'init' at Wed, 28 Nov 2018 20:15:41 +0000. Up 14.67 seconds.
ci-info: ++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++
ci-info: Device Up Address Mask Scope Hw-Address
ci-info: lo: True 127.0.0.1 255.0.0.0 . .
ci-info: lo: True . . d .
ci-info: eth0: True 10.90.1.222 255.255.255.0 . 0e:c9:6e:60:5d:e8
ci-info: eth0: True . . d 0e:c9:6e:60:5d:e8
ci-info: +++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++
ci-info: Route Destination Gateway Genmask Interface Flags
ci-info: 0 0.0.0.0 10.90.1.1 0.0.0.0 eth0 UG
ci-info: 1 10.90.1.0 0.0.0.0 255.255.255.0 eth0 U
ci-info: 2 169.254.169.254 0.0.0.0 255.255.255.255 eth0 UH
Cloud-init v. 0.7.9 running 'modules:config' at Wed, 28 Nov 2018 20:15:44 +0000. Up 17.35 seconds.
Cloud-init v. 0.7.9 running 'modules:final' at Wed, 28 Nov 2018 20:15:45 +0000. Up 18.45 seconds.
Connection to 10.90.1.222 closed by remote host.
Cloud-init v. 0.7.9 finished at Wed, 28 Nov 2018 20:15:46 +0000. Datasource DataSourceEc2. Up 19.56 seconds
I am trying to determine if Cloud-init is somehow preventing the instance from terminating. Is it because the script finished while other background processes were still initializing?
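One variable worth ruling out first is the form of the shutdown command itself: on SysV-style init systems a bare shutdown now switches to single-user mode rather than powering off, so an explicit flag removes that ambiguity. A hedged variant of the script's ending:
python myscript.py
ret=$?
echo $ret
if [[ $ret -eq 0 ]]; then
    shutdown -h now    # -h explicitly requests halt/poweroff rather than a runlevel change
fi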

session.save_path incorrect in Magento + memcache for sessions

I am trying to configure Magento to use memcache for sessions. I have installed memcached and also php5-memcache, and I have added "extension=memcache.so" in memcache.ini.
I have made sure a memcached instance is running on localhost, port 11213. However, when I try to log in to the Magento admin I get an error:
Warning: Unknown: Failed to write session data (memcache). Please verify that the current setting of session.save_path is correct (tcp://127.0.0.1:11213?persistent=0&weight=2&timeout=10&retry_interval=10) in Unknown on line 0
The following is the memcache configuration in local.xml:
<session_save><![CDATA[memcache]]></session_save>
<session_save_path><![CDATA[tcp://127.0.0.1:11213?persistent=0&weight=2&timeout=10&retry_interval=10]]></session_save_path>
The following is the process list grepped for memcached:
www-data 1329 1 0 08:13 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11213 -u www-data -l 127.0.0.1
www-data 1511 1 0 08:18 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11211 -u www-data -l 127.0.0.1
www-data 1518 1 0 08:18 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11212 -u www-data -l 127.0.0.1
I have been fiddling with this for a couple of days now and I am not sure what the issue is. Any help is appreciated.
Thanks,
G
Please note there is a difference between memcache and memcached. I've found that the Magento sessions integration expects you to use this:
<session_save><![CDATA[memcached]]></session_save>
You should install the PHP memcached libraries as well.
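Under that assumption the session configuration in local.xml would also change shape: the PHP memcached extension expects a bare host:port save path, without the tcp:// prefix or the memcache-style query parameters. A sketch, using the question's port 11213:
<session_save><![CDATA[memcached]]></session_save>
<session_save_path><![CDATA[127.0.0.1:11213]]></session_save_path>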
