Docker installation of OpenProject: Phusion Passenger fails to start after installation

I am trying to install OpenProject using Docker on CentOS 7.6, but Phusion Passenger fails to start after the installation. The error suggests it failed to parse a response:
The preloader process sent an unparseable response. I don't know how to fix this issue.
stdout:
-----> Database setup finished.
On first installation, the default admin credentials are login: admin, password: admin
-----> Launching supervisord...
2019-05-08 08:14:46,313 CRIT Supervisor running as root (no user in config file)
2019-05-08 08:14:46,318 INFO supervisord started with pid 1
2019-05-08 08:14:47,321 INFO spawned: 'postgres' with pid 155
2019-05-08 08:14:47,325 INFO spawned: 'apache2' with pid 156
2019-05-08 08:14:47,328 INFO spawned: 'web' with pid 157
2019-05-08 08:14:47,331 INFO spawned: 'worker' with pid 158
2019-05-08 08:14:47,351 INFO spawned: 'postfix' with pid 159
2019-05-08 08:14:47,360 INFO spawned: 'memcached' with pid 160
2019-05-08 08:14:47.634 UTC [172] LOG: database system was shut down at 2019-05-08 08:14:44 UTC
2019-05-08 08:14:47,634 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2019-05-08 08:14:47.649 UTC [172] LOG: MultiXact member wraparound protections are now enabled
2019-05-08 08:14:47.653 UTC [155] LOG: database system is ready to accept connections
2019-05-08 08:14:47.663 UTC [177] LOG: autovacuum launcher started
2019-05-08 08:14:48,670 INFO success: postgres entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: apache2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
2019-05-08 08:14:50,198 INFO exited: postfix (exit status 0; expected)
--> Downloading a Phusion Passenger agent binary for your platform
--> Installing Nginx 1.15.8 engine
--------------------------
[passenger_native_support.so] trying to compile for the current user (app) and Ruby interpreter...
(set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)
Compilation successful. The logs are here:
/tmp/passenger_native_support-15tsfhk.log
[passenger_native_support.so] successfully loaded.
=============== Phusion Passenger Standalone web server started ===============
PID file: /app/tmp/pids/passenger.8080.pid
Log file: /app/log/passenger.8080.log
Environment: production
Accessible via: http://0.0.0.0:8080/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
Problems? Check https://www.phusionpassenger.com/library/admin/standalone/troubleshooting/
===============================================================================
[ N 2019-05-08 08:15:01.7338 404/Tb age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours)
Forcefully loading the application. Use :environment to avoid eager loading.
[auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
hook registered
App 439 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 439 output: hook registered
Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 439 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
[Worker(host:d0b3748f627a pid:158)] Starting job worker
2019-05-08T08:15:45+0000: [Worker(host:d0b3748f627a pid:158)] Starting job worker
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:15:46.6971 404/Tc age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: d7825364
Error details saved to: /tmp/passenger-error-wjSTKF.html
[ E 2019-05-08 08:15:46.7028 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-1] Cannot checkout session because a spawning error occurred. The identifier of the error is d7825364. Please see earlier logs for details about the error.
[ W 2019-05-08 08:34:24.7967 404/Tk age/Cor/Spa/SmartSpawner.h:572 ]: An error occurred while spawning an application process: Cannot connect to Unix socket '/tmp/passenger.PKROzbY/apps.s/preloader.hyl9g8': No such file or directory (errno=2)
[ W 2019-05-08 08:34:24.7968 404/Tk age/Cor/Spa/SmartSpawner.h:574 ]: The application preloader seems to have crashed, restarting it and trying again...
App 543 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 543 output: hook registered
App 543 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 543 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:34:52.2521 404/Tk age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: c2ce0823
Error details saved to: /tmp/passenger-error-bpsfAC.html
[ E 2019-05-08 08:34:52.2570 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-2] Cannot checkout session because a spawning error occurred. The identifier of the error is c2ce0823. Please see earlier logs for details about the error.
Thanks.

The important line in the log is this one:
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
This means your container is unable to allocate the memory it needs. Either your system is in an OOM state and processes are being killed, or some other restriction on the Docker daemon prevents it from allocating additional memory.
For reference:
https://success.docker.com/article/docker-daemon-error-cannot-allocate-memory
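As a first check (a sketch only; the swap file path and size are assumptions, not something from your report), inspect memory on the CentOS host and, if it is exhausted, consider adding temporary swap before restarting the container:
# Inspect free memory and swap on the host
free -h
# See whether the kernel OOM killer has been killing processes
dmesg | grep -i "out of memory"
# (Assumption) add a temporary 2 GB swap file if memory is exhausted
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile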

Related

/usr/lib/systemd/systemd --system fails to start User Manager: "Failed to allocate manager object: Permission denied"

When running exec /usr/lib/systemd/systemd --system in an Ubuntu 20.04 container, after starting the system logging services and creating the user slice, it fails to start the User Manager service for the user.
server-ubuntu-20_04-1 | [ OK ] Started System Logging Service.
server-ubuntu-20_04-1 | [ OK ] Created slice system-modprobe.slice.
server-ubuntu-20_04-1 | [ OK ] Created slice User and Session Slice.
server-ubuntu-20_04-1 | Starting Login Service...
server-ubuntu-20_04-1 | [ OK ] Started Login Service.
server-ubuntu-20_04-1 | [ OK ] Created slice User Slice of UID 109.
server-ubuntu-20_04-1 | Starting User Runtime Directory /run/user/109...
server-ubuntu-20_04-1 | [ OK ] Finished User Runtime Directory /run/user/109.
server-ubuntu-20_04-1 | Starting User Manager for UID 109...
server-ubuntu-20_04-1 | [FAILED] Failed to start User Manager for UID 109.
root@server-ubuntu-2004-1:/# systemctl status user@109.service
It shows "Failed to allocate manager object: Permission denied":
● user@109.service - User Manager for UID 109
Loaded: loaded (/lib/systemd/system/user@.service; static; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/user@.service.d
└─timeout.conf
Active: failed (Result: exit-code) since Mon 2021-03-01 12:24:26 UTC; 3min 36s ago
Docs: man:user@.service(5)
Process: 21190 ExecStart=/lib/systemd/systemd --user (code=exited, status=1/FAILURE)
Main PID: 21190 (code=exited, status=1/FAILURE)
CGroup: /docker/8b7b069bf5f393997a45a292ac3c29c7b2a4aa85406fdb5506a7ae498fe61150/user.slice/user-109.slice/user@109.service
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[1]: Starting User Manager for UID 109...
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[21190]: pam_unix(systemd-user:session): session opened for user smmsp by (uid=0)
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[21190]: Failed to allocate manager object: Permission denied
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[1]: user@109.service: Main process exited, code=exited, status=1/FAILURE
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[1]: user@109.service: Failed with result 'exit-code'.
Mar 01 12:24:26 server-ubuntu-2004-1 systemd[1]: Failed to start User Manager for UID 109.
Installed systemd packages
libnss-systemd/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed,automatic]
libpam-systemd/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed,automatic]
libsystemd0/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed]
systemd-sysv/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed]
systemd-timesyncd/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed,automatic]
systemd/focal-updates,now 245.4-4ubuntu3.4 amd64 [installed]
root@server-ubuntu-2004-1:/# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
dev-mapper-centos\x2dhome.device loaded activating tentative /dev/mapper/centos-home
dev-mapper-centos\x2droot.device loaded activating tentative /dev/mapper/centos-root
-.mount loaded active mounted Root Mount
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
etc-hostname.mount loaded active mounted /etc/hostname
etc-hosts.mount loaded active mounted /etc/hosts
etc-resolv.conf.mount loaded active mounted /etc/resolv.conf
logs.mount loaded active mounted /logs
pbssrc.mount loaded active mounted /pbssrc
run-user-109.mount loaded active mounted /run/user/109
src.mount loaded active mounted /src
workspace-etc.mount loaded active mounted /workspace/etc
init.scope loaded active running System and Service Manager
session-c1.scope loaded active abandoned Session c1 of user smmsp
ci-script-wrapper.service loaded active exited Run ci docker entrypoint script at startup after all systemd services are loaded
console-getty.service loaded active running Console Getty
dbus.service loaded active running D-Bus System Message Bus
getty@tty1.service loaded active running Getty on tty1
rsyslog.service loaded active running System Logging Service
sendmail.service loaded active running LSB: powerful, efficient, and scalable Mail Transport Agent
ssh.service loaded active running OpenBSD Secure Shell server
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running Login Service
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-tmpfiles-setup-dev.service loaded active exited Create Static Device Nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create Volatile Files and Directories
systemd-user-sessions.service loaded active exited Permit User Sessions
user-runtime-dir@109.service loaded active exited User Runtime Directory /run/user/109
● user@109.service loaded failed failed User Manager for UID 109
-.slice loaded active active Root Slice
system-getty.slice loaded active active system-getty.slice
system-modprobe.slice loaded active active system-modprobe.slice
system.slice loaded active active System Slice
user-109.slice loaded active active User Slice of UID 109
user.slice loaded active active User and Session Slice
dbus.socket loaded active running D-Bus System Message Bus Socket
syslog.socket loaded active running Syslog Socket
systemd-journald-dev-log.socket loaded active running Journal Socket (/dev/log)
systemd-journald.socket loaded active running Journal Socket
systemd-networkd.socket loaded active listening Network Service Netlink Socket
basic.target loaded active active Basic System
getty.target loaded active active Login Prompts
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
paths.target loaded active active Paths
slices.target loaded active active Slices
sockets.target loaded active active Sockets
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
timers.target loaded active active Timers
systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary Directories
Linux server-ubuntu-2004-1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Run cron with supervisor and docker

I'm stuck on an issue with a legacy Laravel project. It uses supervisor and cron to run the scheduled tasks, but it seems that the cronjobs won't run (and have never run apparently).
This is the Dockerfile:
FROM 704666026001.dkr.ecr.eu-central-1.amazonaws.com/laravel-prod
# Copy project
COPY . /var/www/html/
# Copy cronjob setup for laravel scheduler
COPY docker/cron/cron.txt /etc/docker/cron/cron.txt
# Copy laravel queue worker supervisor conf
COPY docker/supervisor /etc/docker/supervisor/conf
RUN mkdir -p /var/www/html/storage/framework/cache/data \
&& /usr/bin/crontab -u www-data /etc/docker/cron/cron.txt \
&& chown -R www-data:www-data /var/www/html/
In the docker/supervisor folder, there are two files:
One named queue-worker.conf with:
[group:laravel]
programs=laravel-worker
priority=30
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work --sleep=3 --tries=3
user=www-data
numprocs=1
startsecs=10
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
And cron.conf with:
[group:cron]
programs=crond
priority=40
[program:crond]
process_name=%(program_name)s
command=crond -f
user=www-data
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
And the file docker/cron/cron.txt has one line:
* * * * * php /var/www/html/artisan schedule:run >> /dev/null 2>&1
The docker image builds without any errors. When I run it locally, this is the output:
2020-06-16 10:21:05,045 INFO Included extra file "/etc/docker/supervisor/conf/cron.conf" during parsing
2020-06-16 10:21:05,045 INFO Included extra file "/etc/docker/supervisor/conf/nginx.conf" during parsing
2020-06-16 10:21:05,045 INFO Included extra file "/etc/docker/supervisor/conf/php-fpm.conf" during parsing
2020-06-16 10:21:05,045 INFO Included extra file "/etc/docker/supervisor/conf/queue-worker.conf" during parsing
2020-06-16 10:21:05,062 INFO RPC interface 'supervisor' initialized
2020-06-16 10:21:05,063 INFO supervisord started with pid 1
2020-06-16 10:21:06,073 INFO spawned: 'nginxd' with pid 9
2020-06-16 10:21:06,078 INFO spawned: 'php-fpmd' with pid 10
2020-06-16 10:21:06,084 INFO spawned: 'laravel-worker_00' with pid 11
2020-06-16 10:21:06,088 INFO spawned: 'crond' with pid 12
2020/06/16 10:21:06 [notice] 9#9: using the "epoll" event method
2020/06/16 10:21:06 [notice] 9#9: nginx/1.16.1
2020/06/16 10:21:06 [notice] 9#9: OS: Linux 4.19.76-linuxkit
2020/06/16 10:21:06 [notice] 9#9: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/06/16 10:21:06 [notice] 9#9: start worker processes
2020-06-16 10:21:06,121 INFO success: nginxd entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-06-16 10:21:06,121 INFO success: php-fpmd entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020/06/16 10:21:06 [notice] 9#9: start worker process 13
2020/06/16 10:21:06 [notice] 9#9: start worker process 14
2020/06/16 10:21:06 [notice] 9#9: start worker process 15
2020/06/16 10:21:06 [notice] 9#9: start worker process 16
2020/06/16 10:21:06 [notice] 9#9: start cache manager process 17
2020/06/16 10:21:06 [notice] 9#9: start cache loader process 18
[16-Jun-2020 10:21:06] NOTICE: fpm is running, pid 10
[16-Jun-2020 10:21:06] NOTICE: ready to handle connections
2020-06-16 10:21:07,259 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-06-16 10:21:16,253 INFO success: laravel-worker_00 entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
It does show 'crond entered RUNNING state', but the cronjob isn't run in any way.
Does anyone have an idea why? Is this setup even valid?
Thanks in advance for the help!
Supervisor stops processes if they are not doing anything for a certain amount of time.
With cron, your task only runs intermittently, so it gets shut down.
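If you want to verify what is happening from inside the running container, here is a quick sketch (it assumes you can open a shell in the container and that supervisorctl can reach the daemon; the cron:crond name is derived from the group/program names in cron.conf above):
# Is crond still RUNNING, or has supervisor stopped it?
supervisorctl status cron:crond
# Was the crontab from cron.txt actually installed for www-data?
crontab -u www-data -l
# Run the scheduler once by hand to rule out application-level errors
php /var/www/html/artisan schedule:run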

Running Kafka-Manager inside Docker container on Windows

I am following this tutorial to run Kafka inside a Docker container on Windows.
When I try to launch Kafka-Manager by opening http://localhost:9000 in the browser as described there, I get ERR_CONNECTION_REFUSED.
Something I think might be related: the first time I ran docker-compose up, PowerShell showed an error saying I needed to run some command first, to open a virtual machine or something like that.
Then I ran the command that PowerShell had told me and managed to run docker-compose up successfully. However, the tutorial didn't mention anything about it, and since then I have been able to run docker-compose up without running another command first, even after closing and reopening PowerShell.
I suspect PowerShell remembers I'm connected to a virtual machine, so docker-compose up runs Kafka inside a virtual machine, and that's why I can't reach Kafka-Manager in the browser, even though the log shows the following message:
kafkamanager | [info] p.c.s.NettyServer - Listening for HTTP on /0.0.0.0:9000
Edit:
docker logs for kafka container:
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-02-28 08:37:37,274 CRIT Supervisor running as root (no user in config file)
2020-02-28 08:37:37,274 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-02-28 08:37:37,274 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-02-28 08:37:37,303 INFO RPC interface 'supervisor' initialized
2020-02-28 08:37:37,303 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-02-28 08:37:37,303 INFO supervisord started with pid 1
2020-02-28 08:37:38,306 INFO spawned: 'zookeeper' with pid 8
2020-02-28 08:37:38,308 INFO spawned: 'kafka' with pid 9
2020-02-28 08:37:39,372 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-28 08:37:39,372 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-28 21:16:01,095 WARN received SIGTERM indicating exit request
2020-02-28 21:16:01,095 INFO waiting for zookeeper, kafka to die
2020-02-28 21:16:02,102 INFO stopped: kafka (terminated by SIGTERM)
2020-02-28 21:16:02,442 INFO stopped: zookeeper (exit status 143)
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-02-28 21:17:50,843 CRIT Supervisor running as root (no user in config file)
2020-02-28 21:17:50,843 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-02-28 21:17:50,843 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-02-28 21:17:50,858 INFO RPC interface 'supervisor' initialized
2020-02-28 21:17:50,858 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-02-28 21:17:50,859 INFO supervisord started with pid 1
2020-02-28 21:17:51,862 INFO spawned: 'zookeeper' with pid 8
2020-02-28 21:17:51,864 INFO spawned: 'kafka' with pid 9
2020-02-28 21:17:52,926 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-28 21:17:52,927 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-28 21:17:59,672 INFO exited: kafka (exit status 1; not expected)
2020-02-28 21:18:00,675 INFO spawned: 'kafka' with pid 297
2020-02-28 21:18:01,694 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:42:18,487 WARN received SIGTERM indicating exit request
2020-02-29 19:42:18,487 INFO waiting for zookeeper, kafka to die
2020-02-29 19:42:18,488 INFO stopped: kafka (terminated by SIGTERM)
2020-02-29 19:42:18,821 INFO stopped: zookeeper (exit status 143)
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-02-29 19:42:26,841 CRIT Supervisor running as root (no user in config file)
2020-02-29 19:42:26,841 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-02-29 19:42:26,842 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-02-29 19:42:26,854 INFO RPC interface 'supervisor' initialized
2020-02-29 19:42:26,854 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-02-29 19:42:26,855 INFO supervisord started with pid 1
2020-02-29 19:42:27,857 INFO spawned: 'zookeeper' with pid 8
2020-02-29 19:42:27,859 INFO spawned: 'kafka' with pid 9
2020-02-29 19:42:28,903 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:42:28,903 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:42:34,985 INFO exited: kafka (exit status 1; not expected)
2020-02-29 19:42:35,988 INFO spawned: 'kafka' with pid 297
2020-02-29 19:42:37,014 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:43:20,590 WARN received SIGTERM indicating exit request
2020-02-29 19:43:20,590 INFO waiting for zookeeper, kafka to die
2020-02-29 19:43:20,590 INFO stopped: kafka (terminated by SIGTERM)
2020-02-29 19:43:20,784 INFO stopped: zookeeper (exit status 143)
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-02-29 19:45:38,600 CRIT Supervisor running as root (no user in config file)
2020-02-29 19:45:38,600 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-02-29 19:45:38,600 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-02-29 19:45:38,619 INFO RPC interface 'supervisor' initialized
2020-02-29 19:45:38,629 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-02-29 19:45:38,630 INFO supervisord started with pid 1
2020-02-29 19:45:39,632 INFO spawned: 'zookeeper' with pid 8
2020-02-29 19:45:39,634 INFO spawned: 'kafka' with pid 9
2020-02-29 19:45:40,687 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:45:40,689 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:45:47,740 INFO exited: kafka (exit status 1; not expected)
2020-02-29 19:45:48,743 INFO spawned: 'kafka' with pid 297
2020-02-29 19:45:49,763 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-02-29 19:46:20,659 WARN received SIGTERM indicating exit request
2020-02-29 19:46:20,659 INFO waiting for zookeeper, kafka to die
2020-02-29 19:46:20,660 INFO stopped: kafka (terminated by SIGTERM)
2020-02-29 19:46:20,991 INFO stopped: zookeeper (exit status 143)
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-03-13 22:16:26,128 CRIT Supervisor running as root (no user in config file)
2020-03-13 22:16:26,128 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-03-13 22:16:26,128 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-03-13 22:16:26,157 INFO RPC interface 'supervisor' initialized
2020-03-13 22:16:26,162 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-03-13 22:16:26,162 INFO supervisord started with pid 1
2020-03-13 22:16:27,164 INFO spawned: 'zookeeper' with pid 8
2020-03-13 22:16:27,167 INFO spawned: 'kafka' with pid 9
2020-03-13 22:16:28,226 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-03-13 22:16:28,227 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-03-13 22:16:36,496 INFO exited: kafka (exit status 1; not expected)
2020-03-13 22:16:37,499 INFO spawned: 'kafka' with pid 298
2020-03-13 22:16:38,511 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-03-13 22:17:20,939 WARN received SIGTERM indicating exit request
2020-03-13 22:17:20,940 INFO waiting for zookeeper, kafka to die
2020-03-13 22:17:20,940 INFO stopped: kafka (terminated by SIGTERM)
2020-03-13 22:17:21,268 INFO stopped: zookeeper (exit status 143)
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2020-03-27 21:25:59,495 CRIT Supervisor running as root (no user in config file)
2020-03-27 21:25:59,496 WARN Included extra file "/etc/supervisor/conf.d/zookeeper.conf" during parsing
2020-03-27 21:25:59,497 WARN Included extra file "/etc/supervisor/conf.d/kafka.conf" during parsing
2020-03-27 21:25:59,520 INFO RPC interface 'supervisor' initialized
2020-03-27 21:25:59,522 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-03-27 21:25:59,523 INFO supervisord started with pid 1
2020-03-27 21:26:00,530 INFO spawned: 'zookeeper' with pid 8
2020-03-27 21:26:00,532 INFO spawned: 'kafka' with pid 9
2020-03-27 21:26:01,620 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-03-27 21:26:01,620 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
docker logs for kafka manager container seems fine:
[info] o.a.z.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[info] o.a.z.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[info] o.a.z.ZooKeeper - Client environment:java.compiler=<NA>
[info] o.a.z.ZooKeeper - Client environment:os.name=Linux
[info] o.a.z.ZooKeeper - Client environment:os.arch=amd64
[info] o.a.z.ZooKeeper - Client environment:os.version=4.9.93-boot2docker
[info] o.a.z.ZooKeeper - Client environment:user.name=root
[info] o.a.z.ZooKeeper - Client environment:user.home=/root
[info] o.a.z.ZooKeeper - Client environment:user.dir=/kafka-manager-1.3.3.4
[info] o.a.z.ZooKeeper - Initiating client connection, connectString=kafkaserver:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@7a27a9b4
[info] o.a.z.ClientCnxn - Opening socket connection to server kafka.kafka_kafkanet/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=kafkaserver:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[info] o.a.z.ClientCnxn - Socket connection established to kafka.kafka_kafkanet/172.18.0.2:2181, initiating session
[info] o.a.z.ClientCnxn - Session establishment complete on server kafka.kafka_kafkanet/172.18.0.2:2181, sessionid = 0x1711de33be70001, negotiated timeout = 40000
[info] k.m.a.KafkaManagerActor - Started actor akka://kafka-manager-system/user/kafka-manager
[info] k.m.a.KafkaManagerActor - Starting delete clusters path cache...
[info] k.m.a.DeleteClusterActor - Started actor akka://kafka-manager-system/user/kafka-manager/delete-cluster
[info] k.m.a.DeleteClusterActor - Starting delete clusters path cache...
[info] k.m.a.DeleteClusterActor - Adding kafka manager path cache listener...
[info] k.m.a.DeleteClusterActor - Scheduling updater for 10 seconds
[info] k.m.a.KafkaManagerActor - Starting kafka manager path cache...
[info] k.m.a.KafkaManagerActor - Adding kafka manager path cache listener...
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0.0.0.0:9000
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...
This log is a lot longer so I've omitted the beginning, but it seems fine.
Yes, there's a hypervisor, not a full VM. You can open the Hyper-V Manager to look at it.
Your compose file needs a port forward:
ports:
- '9000:9000'
If you are using Docker Toolbox on Windows, you can try to access kafka-manager at this address: http://192.168.99.100:9000
Note: 192.168.99.100 is the default IP address of the VM that Docker is running on.
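If you are unsure which address the Toolbox VM is using, you can check it directly (a sketch; it assumes the default machine name created by Docker Toolbox):
# Print the IP of the Docker Toolbox VM (usually 192.168.99.100)
docker-machine ip default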
The docker-compose.yaml given in the tutorial is fine. Can you do a docker-compose down and then bring it up again with docker-compose up?
Then try to browse http://localhost:9000 and you should be able to see it.
Possible errors:
Port forwarding (already handled in the docker-compose file)
Opening HTTPS instead of HTTP in the browser.
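To confirm the port is actually published and reachable from the host, a quick sketch (the exact service/container names depend on your compose project, so adjust as needed):
# The PORTS column should show 0.0.0.0:9000->9000/tcp for the kafka-manager service
docker-compose ps
# Probe the UI from the host
curl -v http://localhost:9000/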

macOS Server 5.3 Calendar pg_ctl not starting

After updating macOS Server to 5.3 (running on macOS 10.12.4) my Calendar & Contacts have stopped syncing.
It seems that it's having trouble starting Postgres for cluster /Library/Server/Calendar and Contacts/Data/Database.xpg/cluster.pg and possibly trouble with the agent too.
The GUI seems to think that the Calendar & Contacts services have started and are available, but when I run $ sudo serveradmin fullstatus calendar from the command line I get:
calendar:setStateVersion = 1
calendar:readWriteSettingsVersion = 1
calendar:state = "STARTING"
calendar:contactsState = "STARTING"
calendar:calendarState = "STARTING"
System log is being spammed with:
Apr 22 11:58:42 com.apple.xpc.launchd[1] (org.calendarserver.agent[44649]): Service exited with abnormal code: 1
Apr 22 11:58:42 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:58:52 com.apple.xpc.launchd[1] (org.calendarserver.agent[44659]): Service exited with abnormal code: 1
Apr 22 11:58:52 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:59:02 com.apple.xpc.launchd[1] (org.calendarserver.agent[44668]): Service exited with abnormal code: 1
Apr 22 11:59:02 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:59:07 com.apple.xpc.launchd[1] (org.calendarserver.calendarserver[44676]): Service exited with abnormal code: 1
Apr 22 11:59:07 com.apple.xpc.launchd[1] (org.calendarserver.calendarserver): Service only ran for 0 seconds. Pushing respawn out by 60 seconds.
Here's the output of $ sudo /Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_diagnose
Any ideas?
OS Build: 16E195
Server Build: 16S4123
/Library/Server/Preferences/Calendar.plist exists and can be parsed
Prefs plist says ServerRoot directory is: /Library/Server/Calendar and Contacts
ServerRoot volume ok
/Library/Server/Calendar and Contacts/Config/caldavd-system.plist exists and can be parsed
/Library/Server/Calendar and Contacts/Config/caldavd-user.plist does not exist
Configuration:
Calendar and Contacts service processes:
USER PID %CPU %MEM RSS ELAPSED STARTED COMMAND
root 42554 0.0 0.1 11072 07:49 Sat 22 Apr 11:32:16 2017 servermgr_calendar
Serverd status:
org.calendarserver.agent is enabled
org.calendarserver.calendarserver is enabled
org.calendarserver.relocate is enabled
Disk space on boot volume:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 999G 777G 222G 78% 8520180 4286447099 0% /
Disk space on service data volume:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 999G 777G 222G 78% 8520180 4286447099 0% /
Disk space used by Calendar and Contacts service:
20K /Library/Server/Calendar and Contacts/Config
1014M /Library/Server/Calendar and Contacts/Data
200M /Library/Server/Calendar and Contacts/Logs
Postgres status for cluster /Library/Server/Calendar and Contacts/Data/Database.xpg/cluster.pg:
pg_ctl: no server running
Agent:
Attempting to send a request to the agent...
Can't connect to agent: timed out
Server connection:
Traceback (most recent call last):
File "/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_diagnose", line 14, in <module>
load_entry_point('CalendarServer==9.1a1.dev0+56b4197875debefef19d9c19840f903a8e480c88.head', 'console_scripts', 'calendarserver_diagnose')()
File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/diagnose.py", line 145, in main
connectToCaldavd(keys)
File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/diagnose.py", line 584, in connectToCaldavd
url = "https://{host}/principals/".format(host=keys["ServerHostName"])
KeyError: 'ServerHostName'
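One thing that may be worth checking by hand (purely a sketch, not a confirmed fix; the cluster path comes from the diagnose output above, and the _calendar user and the availability of the bundled pg_ctl on your PATH are assumptions) is whether a Postgres server is running for that cluster at all:
# Is any postgres process using that cluster directory?
ps aux | grep -i "[c]luster.pg"
# Ask pg_ctl directly, as the user owning the cluster (pg_ctl refuses to run as root)
sudo -u _calendar pg_ctl -D "/Library/Server/Calendar and Contacts/Data/Database.xpg/cluster.pg" status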

Docker stop exit code -1 if the default CMD is a shell script

I am building a tomcat container in Docker with supervisord. If the default command in the Dockerfile is
CMD supervisord -c /etc/supervisord.conf
and I run the docker stop command, the container exits successfully with exit code 0.
But if instead I have
CMD ["/run"]
and in run.sh,
supervisord -c /etc/supervisord.conf
The docker stop command then gives me an exit code of -1. Looking at the logs, it seems that supervisord did not receive the SIGTERM indicating the exit request.
2014-10-06 19:48:54,420 CRIT Supervisor running as root (no user in config file)
2014-10-06 19:48:54,450 INFO RPC interface 'supervisor' initialized
2014-10-06 19:48:54,451 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-10-06 19:48:54,451 INFO supervisord started with pid 6
2014-10-06 19:48:55,457 INFO spawned: 'tomcat' with pid 9
2014-10-06 19:48:56,503 INFO success: tomcat entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
as opposed to the earlier logs, where it receives a SIGTERM and exits gracefully.
2014-10-06 20:02:59,527 CRIT Supervisor running as root (no user in config file)
2014-10-06 20:02:59,556 INFO RPC interface 'supervisor' initialized
2014-10-06 20:02:59,556 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-10-06 20:02:59,557 INFO supervisord started with pid 1
2014-10-06 20:03:00,561 INFO spawned: 'tomcat' with pid 9
2014-10-06 20:03:01,602 INFO success: tomcat entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2014-10-06 20:05:11,690 WARN received SIGTERM indicating exit request
2014-10-06 20:05:11,690 INFO waiting for tomcat to die
2014-10-06 20:05:12,450 INFO stopped: tomcat (exit status 143)
Any help appreciated.
Thanks,
Karthik
UPDATE:
supervisord.conf file
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
[program:mysql]
command=/usr/bin/pidproxy /var/run/mysqld/mysqld.pid /usr/bin/mysqld_safe --pid-file=/var/run/mysqld/mysqld.pid
stdout_logfile=/tmp/mysql.log
stderr_logfile=/tmp/mysql_err.log
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
When you run the process via run.sh, signals are only sent to that process (the shell). The child process won't get the signals unless you go out of your way to forward them, for example by:
trapping the signals in run.sh and re-sending them to the child (with trap),
sending the signals to the whole process group, or
doing exec supervisord ... in run.sh so that supervisord replaces the shell (see the sketch below).
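A minimal run.sh along those lines (a sketch; it reuses the supervisord command and config path from the question, and relies on the nodaemon=true setting shown above to keep supervisord in the foreground):
#!/bin/sh
# exec replaces the shell with supervisord, so supervisord becomes the
# container's main process and receives the SIGTERM sent by docker stop.
exec supervisord -c /etc/supervisord.conf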
