I'm starting up three thin processes with bundle exec thin start -C /etc/thin/staging.yml
I use rvm, ruby version is ree-1.8.7
Contents of /etc/thin/staging.yml:
---
timeout: 30
pid: /home/myuser/apps/g/shared/pids/thin.pid
max_persistent_conns: 512
servers: 3
chdir: /home/myuser/apps/g/current
port: 3040
require: []
log: /home/myuser/apps/g/shared/log/thin.log
daemonize: true
address: 0.0.0.0
max_conns: 1024
wait: 30
environment: staging
lsof -i :3040-3042 shows three ruby processes listening on ports 3040-3042, but the pid files contain three different (slightly lower) pids. All six processes are named merb : merb : Master
When I stop thin with bundle exec thin stop -C /etc/thin/staging.yml, thin first sends a QUIT signal to the processes in the pid files, then, after a timeout, a KILL signal.
The pid files are now gone, the thin logs show that the server has stopped, but there are still three ruby processes listening on ports 3040-3042, so a subsequent thin start will fail.
The only differences between the lsof -p output of the two processes are a /lib/libnss_files-2.12.so library and a postgres socket.
My questions are:
why do I get a timeout during thin stop?
why are there two processes per server instead of one?
how do I fix this elegantly (without kill -9)?
Apparently the Merb bootloader does a fork. How braindead is that!
Set Merb::Config[:fork_for_class_load] = false in your config.ru.
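Concretely, the top of config.ru might look like this sketch (everything apart from the config line is whatever your app already has):

```ruby
# config.ru
# Disable Merb's fork-on-class-load so thin supervises a single process per server.
Merb::Config[:fork_for_class_load] = false

# ... the rest of your existing rackup setup follows unchanged
```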
I am attempting to use bash to permanently kill the process sharingd.
I have tried using the command "sudo kill -9 (pid of the port sharingd is using)", but sharingd just reopens on another port.
lsof -i
sudo kill -9 PID
I expected this to stop sharingd from running, but it just reopens on a different port each time.
Pardon my inability to display my code as actual code, I am somewhat new to Stack Overflow.
I've written a TCP Server in ruby running on port 2000 with event machine.
Right now, what I do is ssh to my server and run the command ruby lib/tcp_server.rb to turn on the server, but it shuts down when I log out.
I've tried nohup and using & but nothing seems to stick for the server for a long time.
So my question is, how do I deploy this server on port 2000 and keep it running, like how we deploy Rails to nginx.
It's not a webserver but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, then start your server inside that session.
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've tried nohup and &, so I suppose you already know how this works.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork       # parent exits; child keeps running in the background
  Process.setsid     # start a new session, detaching from the controlling terminal
  exit if fork       # session leader exits; the grandchild can never reacquire a terminal
  Dir.chdir '/'      # don't hold the original working directory open
end
With this approach, you will have to redirect stdout and stderr to keep logs.
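For example, the redirection after daemonizing could look like this sketch; the log path here is an assumption, so point it at your app's log directory:

```ruby
# Reopen the standard streams so a daemonized process keeps usable logs.
log_path = '/tmp/tcp_server.log'   # assumed path; use your app's shared/log dir
log = File.open(log_path, 'a')
log.sync = true                    # flush each write so the log stays current
$stdin.reopen('/dev/null')         # the daemon has no terminal to read from
$stdout.reopen(log)                # puts/print output now lands in the log
$stderr.reopen(log)                # backtraces and warnings too
```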
Another way to daemonize is to use gems like daemons.
Update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
To start the process automatically after booting, you need to write an init script, but what it looks like depends on your service management system and operating system. One of the best known is System V. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
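With god, for instance, a watch file might look like the following sketch; the name and paths are assumptions, not taken from the question:

```ruby
# tcp_server.god -- hypothetical god watch file; load with `god -c tcp_server.god`
God.watch do |w|
  w.name  = 'tcp_server'
  w.dir   = '/path/to/app'
  w.start = 'ruby lib/tcp_server.rb'
  w.log   = '/path/to/app/log/tcp_server.log'
  w.keepalive                    # restart the process whenever it dies
end
```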
I'm trying to use Elasticsearch in Docker for local dev. While I can find containers that work, when docker stop is sent, the containers hang for the default 10s, then docker forcibly kills the container. My assumption here is that ES is either not on PID 1 or other services prevent it from shutting down immediately.
I'm curious if anyone can expand on this or explain more accurately why it is happening. I'm running numerous tests, and 10s+ to shut down is just annoying when other containers shut down in 1-2s.
If you don't want to wait the 10 seconds, you can run a docker kill instead of a docker stop. You can also adjust the timeout on docker stop with the -t option, e.g. docker stop -t 2 $container_id to only wait 2 seconds instead of the default 10.
As for why it's ignoring the SIGTERM, that may depend on which image you are running (there's more than one for elasticsearch). If pid 1 is a shell like /bin/sh or /bin/bash, it will not pass signals through. If pid 1 is the elasticsearch process itself, it may ignore the signal, or 10 seconds may not be long enough for it to fully clean up and shut down.
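The signal-handling difference is easy to demonstrate in plain Ruby (no Docker or Elasticsearch needed): a child process that traps SIGTERM, the signal docker stop sends, exits immediately instead of waiting out the timeout.

```ruby
# A child that handles SIGTERM shuts down promptly, like a well-behaved PID 1.
pid = fork do
  Signal.trap('TERM') { exit 0 }  # clean shutdown on docker stop's signal
  sleep 60                        # stand-in for the server's main loop
end
sleep 0.2                         # give the child time to install the handler
Process.kill('TERM', pid)
_, status = Process.wait2(pid)
puts "child exited with status #{status.exitstatus}"  # → child exited with status 0
```

If the handler were missing, or pid 1 were a shell swallowing the signal, the parent would sit in wait2 until the child was forcibly killed, which is exactly the hang docker stop shows.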
I use Sidekiq for job processing. Using Foreman, I set up six processes in my Procfile:
redirects: bundle exec sidekiq -c 10 -q redirects
redirects2: bundle exec sidekiq -c 10 -q redirects
redirects3: bundle exec sidekiq -c 10 -q redirects
redirects4: bundle exec sidekiq -c 10 -q redirects
redirects5: bundle exec sidekiq -c 10 -q redirects
redirects6: bundle exec sidekiq -c 10 -q redirects
These processes performed at about 1600+ jobs per second (each a simple job that increments some hashes in Redis), with all 10 threads busy most of the time. I scaled my Digital Ocean droplet from an 8-core to a 12-core plan, and performance fell to ~400 jobs per second. In each process, only 3-5 of the 10 threads are now busy.
What I did to try to fix the issue:
Make perform method empty
Use less/more process count
Use less/more concurrency
Split the queue into server-specific queues (three express.js clients on other servers put jobs into the queues)
Trying different hz values in redis.conf
Setting somaxconn to 1024 (and tcp-backlog in redis.conf too)
Turning off RDB saves and using only AOF
Flushing all Redis dbs (there are two databases for all that logic: one for sidekiq and another for the hashes in my workers)
Running sidekiq from the terminal without Foreman (to check whether it is a Foreman issue)
None of the above helped. What could have caused the performance loss?
I have a ruby 2.0 sinatra 'faceless' app that serves up json by calling an external service. It works fine.
The main app is run on port 80 in a ubuntu machine.
I also start an instance using 'foreman start' - so it runs on port 5000 on the same ubuntu virtual machine.
On the port 80 instance, the process 'foreman master' soaks up CPU time, while with the same load, the one on port 5000 uses essentially 0 CPU.
$ ps -a l
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 1615 1614 20 0 26140 17236 wait Sl+ tty1 0:28 foreman: master
0 1000 1899 1659 20 0 25036 16612 wait Sl+ pts/1 0:00 foreman: master
The apps were started at the same time and both had the same load (very light for 20 mins).
The only difference I can see is that the problem one is started on port 80 using a sudo command, and the other one is just started as a user process.
Is there a difference in how foreman needs to output log entries in a tty terminal vs a pts/1 terminal?
Note that with 40 people banging away on the app, the foreman master process is using 90% cpu while all the other ruby processes that are supposed to be doing the work are at 1% (9 unicorns).
I think it's something to do with terminal output being handled differently, but I'm not sure.
Thanks for any help.
Is there a way to tell foreman or ruby not to write log output at all?
EDIT
I now think it is related to terminal logging, since I turned 95% of it off for the deployment app and loads are better, but still higher than with the normal non-rvmsudo command.