Laravel command with MQTT client in background stops without error

I have a Laravel application in a Docker environment. In my application, I have a console command which connects to an MQTT broker and subscribes to a certain topic; the command then handles messages received from the MQTT broker.
This is my handle() method in the command class:
public function handle()
{
    $this->logger->info('Start connecting to broker.');
    $subscribeTopic = '#';
    $mqtt = MQTT::connection(MqttClient::MQTT_SUBSCRIBE_CONNECTION);
    // Use an infinite loop to re-create the connection in case the current one is broken
    while (true) {
        try {
            if (!$mqtt->isConnected()) {
                $this->logger->info('Client is disconnected. Re-connecting.');
                $mqtt->connect();
            }
            $mqtt->subscribe($subscribeTopic, $this->handleSubscriberCallback());
            $mqtt->loop(true);
        } catch (Exception $exception) {
            $this->logger->error($exception);
        }
    }
}
When the container starts, this process is automatically started in the background with the command
php artisan mqtt:start > /dev/null &
I check that the process is running with ps aux:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
www 1 0.0 0.0 3832 3016 ? Ss 02:12 0:00 bash /usr/local/bin/cmd
www 64 0.1 1.0 126336 61000 ? S 02:12 0:02 php artisan mqtt:start
www 65 0.0 0.5 230460 35488 ? Ss 02:12 0:00 php-fpm: master process (/usr/local/etc/php-fpm.conf)
www 66 0.3 0.9 308336 57684 ? S 02:12 0:05 php-fpm: pool www
www 67 0.3 0.7 233172 46008 ? S 02:12 0:04 php-fpm: pool www
www 75 0.1 0.0 4096 3360 pts/0 Ss 02:34 0:00 bash
www 83 0.0 0.0 6696 2820 pts/0 R+ 02:35 0:00 ps aux
This command worked fine, but the problem comes when the MQTT client (my artisan command) receives messages continuously: the process stops without any log, error or warning.
I've tried publishing messages with different delays between publishes:
With a 0.01 second delay -> the process stops after ~4,500 messages
With a 0.1 second delay -> the process stops after ~9,500 messages
My system:
WSL2 on Windows 11
WSL OS: Ubuntu 20.04
RAM: 6GB
CPU: 4
Does anyone know where the problem comes from? Or how can I automatically restart my process when it stops? Thanks.

Update: I'm revisiting my question to add some information for anyone who needs it.
Thanks to @Namoshek, using supervisor solved the problem by automatically restarting the command when it stopped.
Additionally, the supervisor log file for the process helped me debug my error: it was a memory-exhaustion error caused by Laravel Telescope. When I turned Telescope off, the error no longer happened. Also, when starting the command in the docker container I used php artisan mqtt:start > /dev/null &, and the > /dev/null discards all output, including errors, which is why I could not see the error in the docker container log.
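For reference, a minimal supervisor program definition along these lines keeps such a command running (the program name, paths, and user here are illustrative; adjust them to your setup):

```ini
[program:mqtt-subscriber]
command=php /var/www/html/artisan mqtt:start
autostart=true
autorestart=true
user=www
redirect_stderr=true
stdout_logfile=/var/log/supervisor/mqtt-subscriber.log
```

With stdout_logfile pointing at a real file instead of /dev/null, a crash leaves a trace you can actually read.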

Related

Can't send signals to process created by PTY.spawn() in Ruby

I'm running ruby in an Alpine docker container (it's a sidekiq worker, if that matters). At a certain point, my application receives instructions to shell out to a subcommand. I need to be able to stream STDOUT rather than have it buffer. This is why I'm using PTY instead of system() or another similar answer. I'm executing the following line of code:
stdout, stdin, pid = PTY.spawn(my_cmd)
When I connect to the docker container and run ps auxf, I see this:
root 7 0.0 0.4 187492 72668 ? Sl 01:38 0:00 ruby_program
root 12378 0.0 0.0 1508 252 pts/4 Ss+ 01:38 0:00 \_ sh -c my_cmd
root 12380 0.0 0.0 15936 6544 pts/4 S+ 01:38 0:00 \_ my_cmd
Note how the child process of ruby is "sh -c my_cmd", which itself then has a child "my_cmd" process.
"my_cmd" can take a significant amount of time to run. It is designed so that sending a signal USR1 to the process causes it to save its state so it can be resumed later and abort cleanly.
The problem is that the pid returned from "PTY.spawn()" is the pid of the "sh -c my_cmd" process, not the "my_cmd" process. So when I execute:
Process.kill('USR1', pid)
it sends USR1 to sh, not to my_cmd, so it doesn't behave as it should.
Is there any way to get the pid that corresponds to the command I actually specified? I'm open to ideas outside the realm of PTY, but it needs to satisfy the following constraints:
1) I need to be able to stream output from both STDOUT and STDERR as they are written, without waiting for them to be flushed (since STDOUT and STDERR get mixed together into a single stream in PTY, I'm redirecting STDERR to a file and using inotify to get updates).
2) I need to be able to send USR1 to the process to be able to pause it.
I gave up on a clean solution. I finally just executed
pgrep -P #{pid}
to get the child pid, and then I could send USR1 to that process. Feels hacky, but when ruby gives you lemons...
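A sketch of that workaround, using sleep as a stand-in for my_cmd (the trailing true keeps the shell from exec'ing its last command, so the process tree matches the sh -c wrapper described in the question):

```ruby
require 'pty'

# pgrep -P <pid> prints the PIDs of <pid>'s direct children, one per line.
def first_child_pid(parent_pid)
  out = `pgrep -P #{parent_pid}`.strip
  out.empty? ? nil : out.lines.first.to_i
end

# The command string contains shell syntax, so PTY.spawn runs it via
# `sh -c ...` and returns the shell's PID, not sleep's.
stdout, stdin, shell_pid = PTY.spawn('sleep 30; true')
sleep 0.2                                # give the shell time to fork sleep
child_pid = first_child_pid(shell_pid)
Process.kill('USR1', child_pid) if child_pid  # signal the command, not sh
```

It is hacky, but it reliably resolves the real child as long as the shell stays resident.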
You should pass the command and its arguments as separate strings rather than one shell string. So instead of
stdout, stdin, pid = PTY.spawn("my_cmd arg1 arg2")
You should use
stdout, stdin, pid = PTY.spawn("my_cmd", "arg1", "arg2")
etc.
Also see:
Process.spawn child does not receive TERM
https://zendesk.engineering/running-a-child-process-in-ruby-properly-febd0a2b6ec8

nohup job control rhel6/centos

I have a question about UNIX job control in RHEL6
Basically, I am trying to implement passenger debug log rotation using logrotate. I am following the instructions here:
https://github.com/phusion/passenger/wiki/Phusion-Passenger-and-logrotation
I've got everything setup correctly (I think). My problem is this; when I spawn the background job using
nohup pipetool $HOME/passenger.log < $HOME/passenger.pipe &
And then log out and back in, when I inspect the process table (for example with ps aux) and check the PID of the process, it appears with the command 'bash'. I have tried changing the first line of the script to "#!/usr/bin/ruby". Here is an example of this:
[root@server]# nohup pipetool /var/log/nginx/passenger-debug.log < /var/pipe/passenger.pipe &
[1] 63767
[root@server]# exit
exit
[me@server]$ sudo su
[sudo] password for me:
[root@server]# ps aux | grep 63767
root 63767 0.0 0.0 108144 2392 pts/0 S 15:26 0:00 bash
root 63887 0.0 0.0 103236 856 pts/0 S+ 15:26 0:00 grep 63767
[root@server]#
When this occurs, the line in the supplied logrotate file ( killall -HUP pipetool ) fails because 'pipetool' is not matched. Again, I've tried changing the first line to #!/usr/bin/ruby; this had no impact. So my question is: is there any good way to have the actual command appear in the process table instead of just 'bash' when spawned using job control? I am using bash as the shell when I invoke the pipetool. I appreciate you taking the time to help me.
This should work for you: edit pipetool to set the global variable $PROGRAM_NAME:
$PROGRAM_NAME = 'pipetool'
The script should then show up as pipetool in the process list.
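For example, at the top of the script (the surrounding logic is hypothetical; $PROGRAM_NAME is a built-in alias for $0):

```ruby
#!/usr/bin/ruby
# Assigning to $PROGRAM_NAME (an alias of $0) rewrites the title that ps
# displays. Whether killall matches the new name can still vary by platform,
# since killall may compare against the kernel's comm field rather than argv.
$PROGRAM_NAME = 'pipetool'

# ... rest of the pipetool pipe-reading loop ...
```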

How to disable guard after uninstalling Ruby?

I installed Ruby on a server (1.9.3 via RVM), set up Guard on some directories, then established I didn't need any of this anymore and uninstalled Ruby (via an RVM command).
My problem is, any directory access to the directories Guard was watching still triggers an attempt to launch Ruby (which is no longer there), therefore causing an error.
I thought that, since Guard was a Ruby gem, uninstalling Ruby would "turn off" guard. It seems there's more to it than that, and that some process still remains.
How do I kill the ghost of guard?
Another thread suggested I run ps aux | grep guard to find the PID of the guard process then kill it, but the only thing that finds is the grep guard itself:
root 6754 0.0 0.0 6384 676 pts/1 S+ 13:45 0:00 grep guard
It seems like whatever this "ghost of guard" is, it's not called guard.
It's probably not relevant, but in case it is: guard was launched via the Drupal Drush command drush omega-guard, which is part of the Drupal theme Omega-4, and the ghost of guard causes errors when I access the CentOS server from Windows using SFTP.
This command should list all the processes using Linux inotify subsystem on which Guard is based:
$ ps -p `find /proc -name task -prune -o -type l -lname anon_inode:inotify -print 2> /dev/null | cut -d/ -f3`
PID TTY STAT TIME COMMAND
1102 ? Ssl 0:16 evince
3651 ? Ss 0:01 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
4071 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/ibus/ibus-gconf
4075 ? Sl 1:08 /usr/lib/x86_64-linux-gnu/ibus/ibus-x11 --kill-daemon
4092 ? Sl 0:18 /usr/lib/ibus-mozc/ibus-engine-mozc --ibus
4468 ? Ssl 188:36 skype
4788 ? S<l 622:27 /usr/bin/pulseaudio --start --log-target=syslog
7102 pts/0 S+ 0:00 inotifywait -r -m -e modify --format %f JavaFXSceneBuilder2.0/
7998 ? Ssl 6:53 gvim
8549 ? Ssl 11:11 /opt/google/chrome/chrome
8597 ? Ssl 307:04 /usr/lib/firefox/firefox
9459 ? Sl 50:05 /usr/lib/firefox/plugin-container /usr/lib/flashplugin-installer/libflashplayer.so -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8597 true plugin
16444 ? Ssl 1:31 gvim
16452 ? Ssl 24:39 /home/nodakai/.dropbox-dist/dropbox-lnx.x86_64-2.10.27/dropbox
24514 ? S 0:01 /usr/lib/gvfs/gvfs-gdu-volume-monitor
24527 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
32491 ? Sl 11:10 /usr/lib/libreoffice/program/soffice.bin --splash-pipe=5
You might as well install Ruby and Guard again to uninstall them in a proper way.

Foreman + unicorn - heavy cpu

I have a ruby 2.0 sinatra 'faceless' app that serves up json by calling an external service. It works fine.
The main app is run on port 80 on an Ubuntu machine.
I also start an instance using 'foreman start', so it runs on port 5000 on the same Ubuntu virtual machine.
On the port 80 instance, the process 'foreman master' soaks up CPU time, while with the same load, the one on port 5000 uses essentially 0 CPU.
$ ps -a l
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 1615 1614 20 0 26140 17236 wait Sl+ tty1 0:28 foreman: master
0 1000 1899 1659 20 0 25036 16612 wait Sl+ pts/1 0:00 foreman: master
The apps were started at the same time and both had the same load (very light for 20 mins).
The only difference I can see is that the problem one is started on port 80 using a sudo command, and the other one is just started as a user process.
Is there a difference in how foreman needs to output log entries in a tty terminal vs a pts/1 terminal?
Note that with 40 people banging away on the app, the foreman master process is using 90% cpu while all the other ruby processes that are supposed to be doing the work are at 1% (9 unicorns).
I think it's something to do with terminal output being handled differently, but I'm not sure.
Thanks for any help.
Is there a way to tell foreman or ruby to not write log stuff out at all?
EDIT
I now think that it is related to terminal logging, since I turned 95% of it off for the deployed app and loads are better, but still higher than with the normal non-rvmsudo command.

apache, phusion passenger, and memory usage

I read this article somewhere:
"With a plain Apache server, it doesn’t matter much if you run many child processes—the processes are about 1 MB each (most of it shared), so they don’t eat a lot of RAM. The situation is different with mod_perl, where the processes can easily grow to 10 MB and more. For example, if you have MaxClients set to 50, the memory usage becomes 50 × 10 MB = 500 MB. Do you have 500 MB of RAM dedicated to the mod_perl server?"
I'm not using mod_perl on my server. I am using phusion passenger and ruby on rails with apache2. I am using prefork MPM and the MaxClients is set to the default 256. That means I can have 256 processes running concurrently at any given time. The article piqued my interest because I never have 256 apache2 processes running concurrently, usually I only have 80 apache2 processes running at any given time. But sometimes even just 80 bogs down my server to the point where the site just hangs when you try to load it.
When I run the following command, it sometimes shows 80 apache2 processes, for example:
ps aux | grep apache2
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1130 0.0 0.1 149080 10600 ? Ss 12:36 0:00 /usr/sbin/apache2 -k start
www-data 2051 0.0 0.3 163608 23592 ? S 16:46 0:00 /usr/sbin/apache2 -k start
www-data 2506 0.0 0.1 149376 7952 ? S 16:47 0:00 /usr/sbin/apache2 -k start
www-data 5149 0.0 0.1 149416 7980 ? S 16:49 0:00 /usr/sbin/apache2 -k start
www-data 5175 0.0 0.1 149368 7876 ? S 16:49 0:00 /usr/sbin/apache2 -k start
www-data 10212 0.0 0.1 149368 7848 ? S 16:53 0:00 /usr/sbin/apache2 -k start
www-data 19114 0.0 0.1 149368 7904 ? S 17:01 0:00 /usr/sbin/apache2 -k start
www-data 19138 0.0 0.1 150768 11856 ? S 17:01 0:00 /usr/sbin/apache2 -k start
www-data 20592 0.0 0.1 149428 8092 ? S 16:35 0:00 /usr/sbin/apache2 -k start
www-data 21336 0.0 0.1 149368 7808 ? S 17:03 0:00 /usr/sbin/apache2 -k start
www-data 21375 0.0 0.1 149432 7916 ? S 17:03 0:00 /usr/sbin/apache2 -k start
1000 26458 0.0 0.0 8112 896 pts/6 S+ 17:07 0:00 grep apache2
www-data 30848 0.0 0.1 149396 8044 ? S 16:43 0:00 /usr/sbin/apache2 -k start
But under %MEM they range from 0.1 to 0.4, which doesn't seem like a lot of memory. So my question is: when you send a request to the site from the browser, in addition to spawning a new apache2 child process under the parent apache2 process, does Passenger also create another process, something that could possibly be bogging down memory? When I run the top command, I notice it sometimes shows a ruby process at 100% CPU. I am wondering whether that ruby process is somehow linked to the apache2 processes via Passenger. Something must be causing these processes to grow into big memory consumers, as that article stated. There must be something I am not looking at.
By the way, I have more than 5 gigs of memory on the machine:
$ cat /proc/meminfo
MemTotal: 6113156 kB
Phusion Passenger spawns your Ruby application processes separately. View them with passenger-memory-stats or passenger-status.