I read this article somewhere:
"With a plain Apache server, it doesn’t matter much if you run many child processes—the processes are about 1 MB each (most of it shared), so they don’t eat a lot of RAM. The situation is different with mod_perl, where the processes can easily grow to 10 MB and more. For example, if you have MaxClients set to 50, the memory usage becomes 50 × 10 MB = 500 MB.Do you have 500 MB of RAM dedicated to the mod_perl server?"
I'm not using mod_perl on my server; I'm using Phusion Passenger with Ruby on Rails on Apache 2. I'm using the prefork MPM, and MaxClients is set to the default of 256, which means I can have up to 256 processes running concurrently at any given time. The article piqued my interest because I never actually have 256 apache2 processes running concurrently; usually only about 80 are running at any given time. But sometimes even those 80 bog down my server to the point where the site just hangs when you try to load it.
When I run the following command, it sometimes shows 80 apache2 processes, for example:
ps aux | grep apache2
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1130 0.0 0.1 149080 10600 ? Ss 12:36 0:00 /usr/sbin/apache2 -k start
www-data 2051 0.0 0.3 163608 23592 ? S 16:46 0:00 /usr/sbin/apache2 -k start
www-data 2506 0.0 0.1 149376 7952 ? S 16:47 0:00 /usr/sbin/apache2 -k start
www-data 5149 0.0 0.1 149416 7980 ? S 16:49 0:00 /usr/sbin/apache2 -k start
www-data 5175 0.0 0.1 149368 7876 ? S 16:49 0:00 /usr/sbin/apache2 -k start
www-data 10212 0.0 0.1 149368 7848 ? S 16:53 0:00 /usr/sbin/apache2 -k start
www-data 19114 0.0 0.1 149368 7904 ? S 17:01 0:00 /usr/sbin/apache2 -k start
www-data 19138 0.0 0.1 150768 11856 ? S 17:01 0:00 /usr/sbin/apache2 -k start
www-data 20592 0.0 0.1 149428 8092 ? S 16:35 0:00 /usr/sbin/apache2 -k start
www-data 21336 0.0 0.1 149368 7808 ? S 17:03 0:00 /usr/sbin/apache2 -k start
www-data 21375 0.0 0.1 149432 7916 ? S 17:03 0:00 /usr/sbin/apache2 -k start
1000 26458 0.0 0.0 8112 896 pts/6 S+ 17:07 0:00 grep apache2
www-data 30848 0.0 0.1 149396 8044 ? S 16:43 0:00 /usr/sbin/apache2 -k start
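Note that ps aux | grep apache2 also matches the grep process itself (the pts/6 line in the output above). A bracket-expression pattern avoids the self-match and gives a clean count (a small sketch; it works with any process name):

```shell
# '[a]pache2' matches the string "apache2" but not this grep's own
# command line, so the count excludes the grep process itself.
ps aux | grep -c '[a]pache2'
```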
But under %MEM, they range from 0.1 to 0.4, which doesn't seem like a lot of memory. So my question is: when a request comes in from the browser, in addition to Apache handing it to one of its child processes, does Passenger also create another process, something that could possibly be bogging down memory? When I run the top command, I sometimes see a ruby process at 100% CPU, and I wonder whether that ruby process is somehow linked to the apache2 processes via Passenger. Something must be causing these processes to grow into big memory consumers, as that article described. There must be something I am not looking at.
By the way, I have more than 5 gigs of memory on the machine:
$ cat /proc/meminfo
MemTotal: 6113156 kB
Phusion Passenger spawns your Ruby application processes separately from Apache's own worker processes, so they won't show up under apache2. View them and their memory usage with passenger-memory-stats or passenger-status.
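Since the %MEM column hides absolute numbers, it can also help to total the resident set size (RSS) of every matching process yourself. A small sketch (the helper name mem_total is made up; pass apache2, ruby, or Passenger as the pattern):

```shell
# mem_total: sum the RSS of all processes whose command name matches
# the given pattern. ps reports RSS in kB; awk totals it and prints MB.
mem_total() {
    ps -eo rss=,comm= | awk -v p="$1" \
        '$2 ~ p { kb += $1 } END { printf "%.1f MB\n", kb / 1024 }'
}

mem_total apache2   # total memory of the apache2 workers
mem_total ruby      # total memory of the Ruby processes Passenger spawned
```

Keep in mind that RSS double-counts memory shared between preforked children, so this is an upper bound.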
I have a Laravel application running in a Docker environment. The application has a console command that connects to an MQTT broker and subscribes to a certain topic; the command then handles each message received from the broker.
This is my handle() method in command class:
public function handle()
{
    $this->logger->info('Start connecting to broker.');
    $subscribeTopic = '#';
    $mqtt = MQTT::connection(MqttClient::MQTT_SUBSCRIBE_CONNECTION);
    // Use an infinite loop to re-create the connection in case the current one breaks
    while (true) {
        try {
            if (!$mqtt->isConnected()) {
                $this->logger->info('Client is disconnected. Re-connecting.');
                $mqtt->connect();
            }
            $mqtt->subscribe($subscribeTopic, $this->handleSubscriberCallback());
            $mqtt->loop(true);
        } catch (Exception $exception) {
            $this->logger->error($exception);
        }
    }
}
When the container starts, this process is automatically started in the background with the command
php artisan mqtt:start > /dev/null &
I check that the process is running with ps aux:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
www 1 0.0 0.0 3832 3016 ? Ss 02:12 0:00 bash /usr/local/bin/cmd
www 64 0.1 1.0 126336 61000 ? S 02:12 0:02 php artisan mqtt:start
www 65 0.0 0.5 230460 35488 ? Ss 02:12 0:00 php-fpm: master process (/usr/local/etc/php-fpm.conf)
www 66 0.3 0.9 308336 57684 ? S 02:12 0:05 php-fpm: pool www
www 67 0.3 0.7 233172 46008 ? S 02:12 0:04 php-fpm: pool www
www 75 0.1 0.0 4096 3360 pts/0 Ss 02:34 0:00 bash
www 83 0.0 0.0 6696 2820 pts/0 R+ 02:35 0:00 ps aux
The command worked fine, but the problem comes when the MQTT client (my artisan command) receives messages continuously: the process stops without any log, error, or warning.
I've tried publishing messages with different delays between each publish:
With a 0.01-second delay -> the process stops after ~4,500 messages
With a 0.1-second delay -> the process stops after ~9,500 messages
My system:
WSL2 on Windows 11
WSL OS: Ubuntu 20.04
RAM: 6GB
CPU: 4
Does anyone know where the problem comes from? Or how can I automatically restart my process when it stops? Thanks.
By the way, I've reviewed my question and updated some information for anyone who needs it.
Thanks to @Namoshek, using supervisor solved the problem by automatically restarting the command when it stops.
Additionally, the supervisor log file for the process helped me debug my error: it was an exhausted-memory error caused by Laravel Telescope. When I turned Telescope off, the error no longer happened. Also, when starting the command at Docker container start-up, I used php artisan mqtt:start > /dev/null &, and the > /dev/null redirect discarded the output, so I could not track the error thrown in the Docker container log.
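For anyone landing here, a minimal supervisor program entry for such a command might look like the following (a sketch; the program name and paths are assumptions, adjust them to your container layout):

```ini
[program:mqtt-subscriber]
command=php /var/www/html/artisan mqtt:start
autostart=true
autorestart=true
; Keep the output in a log file instead of /dev/null, so a crash leaves a trace
redirect_stderr=true
stdout_logfile=/var/log/supervisor/mqtt-subscriber.log
```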
I'm trying to get a list of running domains from an application server. It takes a few seconds for the query to respond; so, it would be nice to run it in the background. However, it hangs, apparently waiting on something even though the command takes no input. When I bring it to the foreground, it immediately displays the results and quits. I also tried disconnecting stdin with 0<&-.
java -jar appserver-cli.jar list-domains &
How can I diagnose the issue? Or better yet, what's the problem?
I can see some open pipes and sockets.
ps --forest
PID TTY TIME CMD
16876 pts/1 00:00:00 bash
2478 pts/1 00:00:00 \_ java
2499 pts/1 00:00:00 | \_ stty
ls -l /proc/2478/fd
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 0 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 1 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 10 -> socket:[148228]
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 2 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 24 -> socket:[148389]
lr-x------. 1 vagrant vagrant 64 Mar 23 09:08 73 -> pipe:[18170535]
lr-x------. 1 vagrant vagrant 64 Mar 23 09:08 75 -> pipe:[18170536]
I also see the following in the strace output, which does not show up when I run the process in the foreground.
futex(0x7fda7e0309d0, FUTEX_WAIT, 9670, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGTTOU {si_signo=SIGTTOU, si_code=SI_KERNEL} ---
--- stopped by SIGTTOU ---
The following resources were helpful in figuring this out.
sigttin-sigttou-deep-dive-linux
cannot-rewrite-trap-command-for-sigtstp-sigttin-and-sigttou
From the strace output above, it's clear the process is receiving SIGTTOU. It is probably trying to configure terminal settings (note the stty child in the process tree). I also noticed that when I run the command from a script instead of an interactive shell, there is no empty handler defined for SIGTTOU. The following resolves the issue:
bash -c "java -jar appserver-cli.jar list-domains &"
I got myself into a bit of a pickle.
I got a 1-core, 1GB memory VPS from digital ocean and took a shot at installing chef server on the box, even though the guide had a few warnings that chef requires at least 4 cores and more memory.
During the chef-server-ctl reconfigure step, I ran into postgresql errors (if you're curious, more here) and mistakenly hit CTRL-C to kill the process. I noticed several chef processes running and even restarted the server to try and kill them, but they still persisted.
root@hal:~# ps aux | grep chef
root 597 0.0 0.0 4212 72 ? Ss 07:39 0:00 runsv opscode-erchef
opscode 611 0.0 0.0 4356 88 ? S 07:39 0:00 svlogd -tt /var/log/opscode/opscode-erchef
opscode 612 0.7 3.7 534704 38400 ? Ssl 07:39 0:09 /opt/opscode/embedded/service/opscode-erchef/erts-5.10.4/bin/beam.smp -K true -A 5 -- -root /opt/opscode/embedded/service/opscode-erchef -progname oc_erchef -- -home /var/opt/opscode/opscode-erchef -- -noshell -boot /opt/opscode/embedded/service/opscode-erchef/releases/1.5.0/oc_erchef -embedded -config /opt/opscode/embedded/service/opscode-erchef/etc/app.config -name erchef@127.0.0.1 -setcookie erchef -smp enable -pa lib/patches -- runit
opscode+ 1473 0.0 0.4 314352 4520 ? Ss 07:39 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(52205) idle
opscode+ 1475 0.0 0.3 313928 3964 ? Ss 07:40 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(56254) idle
opscode+ 1477 0.0 0.3 313928 3972 ? Ss 07:40 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(56509) idle
opscode+ 1479 0.0 0.4 313928 4152 ? Ss 07:40 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(56740) idle
opscode+ 1546 0.0 0.4 313928 4148 ? Ss 07:40 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(41027) idle
opscode+ 1563 0.0 0.4 313928 4144 ? Ss 07:40 0:00 postgres: opscode_chef opscode_chef 127.0.0.1(56678) idle
....
....
This is hogging so much memory that I can't run some other basic processes. I even tried uninstalling it with chef-server-ctl uninstall, but that too failed with:
/opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:295:in `run_sv_command_for_service': undefined method `exitstatus' for nil:NilClass (NoMethodError)
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:285:in `block in run_sv_command'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:284:in `each'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:284:in `run_sv_command'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:219:in `cleanup_procs_and_nuke'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:256:in `uninstall'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/lib/omnibus-ctl.rb:555:in `run'
from /opt/opscode/embedded/lib/ruby/gems/2.1.0/gems/omnibus-ctl-0.3.1/bin/omnibus-ctl:31:in `<top (required)>'
from /opt/opscode/embedded/bin/omnibus-ctl:23:in `load'
from /opt/opscode/embedded/bin/omnibus-ctl:23:in `<main>'
Any thoughts on how to get around this?
Thank you!
Stop the runit supervisor via Upstart:
stop private-chef-runsvdir
Or stop the Chef services and then signal the runsv supervisors directly:
/usr/bin/private-chef-ctl stop
pkill -HUP -P 1 runsv$
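A note on that pkill pattern: -P 1 restricts matches to direct children of PID 1, and runsv$ anchors the name, so only the runsv supervisors are signalled. You can preview what such a pattern would hit with pgrep, which only lists and sends nothing (a safe sketch):

```shell
# pgrep -l lists matching processes (PID and name) without sending any
# signal; -P 1 restricts to children of PID 1, mirroring the pkill above.
pgrep -l -P 1 'runsv$' || echo "no runsv processes under PID 1"
```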
I installed Ruby on a server (1.9.3 via RVM), set up Guard on some directories, then established I didn't need any of this anymore and uninstalled Ruby (via an RVM command).
My problem is, any directory access to the directories Guard was watching still triggers an attempt to launch Ruby (which is no longer there), therefore causing an error.
I thought that, since Guard was a Ruby gem, uninstalling Ruby would "turn off" guard. It seems there's more to it than that, and that some process still remains.
How do I kill the ghost of guard?
Another thread suggested I run ps aux | grep guard to find the PID of the guard process and then kill it, but the only thing it finds is the grep itself:
root 6754 0.0 0.0 6384 676 pts/1 S+ 13:45 0:00 grep guard
It seems like whatever this "ghost of guard" is, it's not called guard.
It's probably not relevant, but in case it is: Guard was launched via the Drupal Drush command drush omega-guard, which is part of the Drupal theme Omega-4. Here's an example of an error that the ghost of guard is causing (this is when accessing the CentOS server from Windows using SFTP):
This command should list all processes using the Linux inotify subsystem, on which Guard is based:
$ ps -p `find /proc -name task -prune -o -type l -lname anon_inode:inotify -print 2> /dev/null | cut -d/ -f3`
PID TTY STAT TIME COMMAND
1102 ? Ssl 0:16 evince
3651 ? Ss 0:01 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
4071 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/ibus/ibus-gconf
4075 ? Sl 1:08 /usr/lib/x86_64-linux-gnu/ibus/ibus-x11 --kill-daemon
4092 ? Sl 0:18 /usr/lib/ibus-mozc/ibus-engine-mozc --ibus
4468 ? Ssl 188:36 skype
4788 ? S<l 622:27 /usr/bin/pulseaudio --start --log-target=syslog
7102 pts/0 S+ 0:00 inotifywait -r -m -e modify --format %f JavaFXSceneBuilder2.0/
7998 ? Ssl 6:53 gvim
8549 ? Ssl 11:11 /opt/google/chrome/chrome
8597 ? Ssl 307:04 /usr/lib/firefox/firefox
9459 ? Sl 50:05 /usr/lib/firefox/plugin-container /usr/lib/flashplugin-installer/libflashplayer.so -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8597 true plugin
16444 ? Ssl 1:31 gvim
16452 ? Ssl 24:39 /home/nodakai/.dropbox-dist/dropbox-lnx.x86_64-2.10.27/dropbox
24514 ? S 0:01 /usr/lib/gvfs/gvfs-gdu-volume-monitor
24527 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
32491 ? Sl 11:10 /usr/lib/libreoffice/program/soffice.bin --splash-pipe=5
You might as well install Ruby and Guard again to uninstall them in a proper way.
I have a ruby 2.0 sinatra 'faceless' app that serves up json by calling an external service. It works fine.
The main app runs on port 80 on an Ubuntu virtual machine.
I also start an instance using foreman start, so it runs on port 5000 on the same machine.
On the port 80 instance, the 'foreman: master' process soaks up CPU time, while under the same load the one on port 5000 uses essentially 0 CPU.
$ ps -a l
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 1615 1614 20 0 26140 17236 wait Sl+ tty1 0:28 foreman: master
0 1000 1899 1659 20 0 25036 16612 wait Sl+ pts/1 0:00 foreman: master
The apps were started at the same time and both had the same load (very light for 20 mins).
The only difference I can see is that the problem one is started on port 80 using a sudo command, and the other one is just started as a user process.
Is there a difference in how foreman needs to write log entries to a console terminal (tty1) vs a pseudo-terminal (pts/1)?
Note that with 40 people banging away on the app, the foreman master process is using 90% CPU while all the other ruby processes that are supposed to be doing the work (9 unicorns) are at 1%.
I think it's something to do with terminal output being handled differently, but I'm not sure.
Thanks for any help.
Is there a way to tell foreman or ruby to not write log stuff out at all?
EDIT
I now think it is related to terminal logging, since I turned 95% of it off for the deployed app and loads are better, but still higher than with the normal non-rvmsudo command.
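One cheap experiment is to discard foreman's terminal output entirely; if CPU drops, terminal logging was the culprit. A sketch (this assumes the unicorns write their own log files, so nothing important is lost; the rvmsudo line is the question's command and is shown as a comment):

```shell
# Redirect both stdout and stderr to /dev/null so foreman never blocks
# or burns CPU writing log lines to the terminal:
#   rvmsudo foreman start > /dev/null 2>&1 &
# The redirection itself, demonstrated with a stand-in noisy command:
sh -c 'echo noise; echo errors >&2' > /dev/null 2>&1
echo "exit status: $?"
```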