All dynos are stopped as soon as one finishes with heroku local

I have a very simple Flask web server that I run in the web dyno on the Heroku platform. I also have a very simple Python program that runs on a separate dyno; it prints "hello world" and then exits.
My Procfile looks like this:
web: gunicorn --bind 0.0.0.0:$PORT wsgi
test: python helloWorld.py
With heroku local, as soon as the hello world program finishes, the web dyno gets killed too:
$ heroku local
8:54:29 AM test.1 | Hello World
[DONE] Killing all processes with signal SIGINT
8:54:29 AM test.1 Exited Successfully
8:54:29 AM web.1 | Traceback (most recent call last):
8:54:29 AM web.1 | File "/anaconda3/bin/gunicorn", line 7, in <module>
8:54:29 AM web.1 | from gunicorn.app.wsgiapp import run
8:54:29 AM web.1 | File "/anaconda3/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 9, in <module>
8:54:29 AM web.1 | from gunicorn.app.base import Application
8:54:29 AM web.1 | File "/anaconda3/lib/python3.7/site-packages/gunicorn/app/base.py", line 11, in <module>
8:54:29 AM web.1 | from gunicorn._compat import execfile_
8:54:29 AM web.1 | File "/anaconda3/lib/python3.7/site-packages/gunicorn/_compat.py", line 267, in <module>
8:54:29 AM web.1 | import inspect
8:54:29 AM web.1 | File "/anaconda3/lib/python3.7/inspect.py", line 1087, in <module>
8:54:29 AM web.1 | 'args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations')
8:54:29 AM web.1 | File "/anaconda3/lib/python3.7/collections/__init__.py", line 397, in namedtuple
8:54:29 AM web.1 | exec(s, namespace)
8:54:29 AM web.1 | File "<string>", line 1, in <module>
8:54:29 AM web.1 | KeyboardInterrupt
8:54:29 AM web.1 Exited with exit code null
How can I prevent this?
I know that the problem is with helloWorld.py exiting. If I create a script that doesn't terminate, the problem doesn't occur.
print("Hello World", flush=True)
while True:
pass

Apparently this is deliberate behavior from foreman. See the discussion on GitHub:
Foreman assumes all the processes are long-lived. If any of the processes exits, Foreman stops all processes.
The only solution would be to make helloWorld.py long-lived.
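If helloWorld.py really is a one-shot task, a workaround on the Heroku platform itself (it won't change how heroku local behaves) is to drop the test entry from the Procfile and run the script as a one-off dyno when needed:
heroku run python helloWorld.py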

Related

How to start Unicorn with systemctl and systemd

I would like to start Unicorn with systemctl on Amazon Linux 2, but Unicorn doesn't start.
I've written a /etc/systemd/system/unicorn.service file:
[Unit]
Description=The unicorn process
[Service]
User=myname
WorkingDirectory=/var/www/rails/myapp
SyslogIdentifier=unicorn
Type=simple
ExecStart=/bin/bash -l -c 'bundle exec unicorn_rails -c /var/www/rails/myapp/config/unicorn.conf.rb -E production -D'
[Install]
WantedBy=multi-user.target
Here are the commands I used to start the service
sudo systemctl daemon-reload
sudo systemctl start unicorn.service
I can't find any unicorn process with ps -ef | grep unicorn | grep -v grep.
Here I check the status:
$ sudo systemctl status unicorn -l
● unicorn.service - The unicorn process
Loaded: loaded (/etc/systemd/system/unicorn.service; disabled; vendor preset: disabled)
Active: inactive (dead)
xxx.compute.internal systemd[1]: Started The unicorn process.
xxx.compute.internal systemd[1]: Starting The unicorn process...
Here is unicorn.log (there are no errors in it):
I, [2020-11-25T20:00:24.564840 #6604] INFO -- : Refreshing Gem list
I, [2020-11-25T20:00:25.278814 #6604] INFO -- : unlinking existing socket=/var/www/rails/myapp/tmp/sockets/.unicorn.sock
I, [2020-11-25T20:00:25.279020 #6604] INFO -- : listening on addr=/var/www/rails/myapp/tmp/sockets/.unicorn.sock fd=9
I, [2020-11-25T20:00:25.299977 #6604] INFO -- : master process ready
I, [2020-11-25T20:00:25.406567 #6604] INFO -- : reaped #<Process::Status: pid 6607 exit 0> worker=0
I, [2020-11-25T20:00:25.406659 #6604] INFO -- : reaped #<Process::Status: pid 6608 exit 0> worker=1
I, [2020-11-25T20:00:25.406760 #6604] INFO -- : master complete
Why doesn't Unicorn start?
Change Type=simple to Type=forking. You start unicorn_rails with -D, so the process systemd launches daemonizes and exits almost immediately; with Type=simple, systemd treats that exit as the service terminating (hence Active: inactive (dead)) and cleans up the daemonized master, even though unicorn.log shows it got as far as master process ready.
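A sketch of the corrected [Service] section (the PIDFile line is my assumption; the path must match whatever pid file unicorn.conf.rb configures):
[Service]
User=myname
WorkingDirectory=/var/www/rails/myapp
SyslogIdentifier=unicorn
Type=forking
# Assumed location; keep in sync with the pid setting in unicorn.conf.rb
PIDFile=/var/www/rails/myapp/tmp/pids/unicorn.pid
ExecStart=/bin/bash -l -c 'bundle exec unicorn_rails -c /var/www/rails/myapp/config/unicorn.conf.rb -E production -D'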

Ruby Broken pipe @ io_write - <STDOUT>

While trying to run a Ruby program and piping its output to another program like this:
ruby hello.rb | whoami
The command whoami executes first, as expected, but after that, hello.rb crashes with:
Traceback (most recent call last):
2: from p.rb:2:in `<main>'
1: from p.rb:2:in `print'
p.rb:2:in `write': Broken pipe @ io_write - <STDOUT> (Errno::EPIPE)
This happens only when STDOUT.sync is set to true
STDOUT.sync = true
STDOUT.print "Hello!"
[and a similar error is raised with STDOUT.flush after STDOUT.puts when piped to another program]
What is the reason behind this crash?
Introduction
Firstly, an explanation can be found here.
Anyway, here's my thought...
When a pipe is used like this:
a | b
Both a and b are executed concurrently. b waits for standard input from a.
Speaking of Errno::EPIPE, the Linux man page for write(2) says:
EPIPE fd is connected to a pipe or socket whose reading end is
closed. When this happens the writing process will also
receive a SIGPIPE signal. (Thus, the write return value is
seen only if the program catches, blocks or ignores this
signal.)
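One detail worth adding (my note, not part of the original answer): the Ruby interpreter ignores SIGPIPE by default, which is why the failed write shows up as a catchable Errno::EPIPE exception rather than the process being silently killed by the signal. A minimal sketch restoring the conventional Unix behavior:
# With the system default disposition, writing into a closed pipe
# kills the process via SIGPIPE instead of raising Errno::EPIPE.
Signal.trap("PIPE", "SYSTEM_DEFAULT")
STDOUT.sync = true
print "*" * 100_000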
Talking about the problem in the question:
When whoami runs, it exits almost immediately and no longer reads the standard input that the Ruby program hello.rb is sending, so the read end of the pipe is closed, resulting in a broken pipe.
Here I wrote two Ruby programs, p.rb and q.rb, to test that:
p.rb
#!/usr/bin/env ruby
print ?* * 100_000
q.rb
#!/usr/bin/ruby
exit! 0
Running:
bash[~] $ ruby p.rb | ruby q.rb
Traceback (most recent call last):
2: from p.rb:2:in `<main>'
1: from p.rb:2:in `print'
p.rb:2:in `write': Broken pipe @ io_write - <STDOUT> (Errno::EPIPE)
Let's change the code of q.rb a bit, so that it accepts inputs:
#!/usr/bin/ruby -w
STDIN.gets
Running:
bash[~] $ ruby p.rb | ruby q.rb
Right, it displays nothing, actually. The reason is that q.rb now waits for standard input. Apparently, the waiting is what matters most here. Now p.rb will not crash, even with STDOUT.sync or STDOUT.flush, when piped to this q.rb.
Another Example:
p.rb
STDOUT.sync = true
loop until print("\e[2K<<<#{Time.now.strftime('%H:%M:%S:%2N')}>>>\r")
[warning: the loop without sleep may drive up your CPU usage]
q.rb
sleep 3
Running:
bash[~] $ time ruby p.rb | ruby q.rb
Traceback (most recent call last):
2: from p.rb:2:in `<main>'
1: from p.rb:2:in `print'
p.rb:2:in `write': Broken pipe @ io_write - <STDOUT> (Errno::EPIPE)
real 0m3.186s
user 0m0.282s
sys 0m0.083s
You see the program crashed after 3 seconds. It would crash after about 5.1 seconds if q.rb had sleep 5. Similarly, sleep 0 in q.rb crashes p.rb after about 0.1 seconds. I guess the additional 0.1 seconds depends on the system; mine takes about 0.1 seconds to load the Ruby interpreter.
I wrote Crystal programs p.cr and q.cr to test the same thing. Crystal is compiled, so it doesn't pay that 0.1-second interpreter start-up cost.
The Crystal Programs:
p.cr
STDOUT.sync = true
loop do print("\e[2KHi!\r") end rescue exit
q.cr
sleep 3
I compiled them, and ran:
bash[~] $ time ./p | ./q
real 0m3.013s
user 0m0.007s
sys 0m0.019s
In very close to 3 seconds, the binary ./p rescues the error (which would otherwise print Unhandled exception: Error writing file: Broken pipe (Errno)) and exits. Again, the two Crystal programs may take about 0.01 seconds to start, and the kernel also takes a little time to set up the processes.
Also note that STDERR#print, STDERR#puts, STDERR#putc, STDERR#printf, STDERR#write, and STDERR#syswrite don't raise Errno::EPIPE even with sync enabled, because standard error is not the stream going through the pipe.
Conclusion
Pipes are arcane. Setting STDOUT#sync to true or calling STDOUT#flush pushes all buffered data to the underlying operating system immediately.
When running hello.rb | whoami without sync, I can write 8191 bytes of data and hello.rb doesn't crash, presumably because the data still fits in Ruby's output buffer. But with sync, writing even 1 byte into the pipe crashes hello.rb.
So when hello.rb writes its standard output straight through to the piped program whoami, and whoami has already exited rather than waiting for hello.rb, hello.rb raises Errno::EPIPE because the pipe between the two programs is broken (correct me if I am lost here).
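A practical takeaway (my sketch, not part of the original answer): a program that may be piped into something that exits early can rescue Errno::EPIPE and exit quietly, the way well-behaved Unix filters do:
STDOUT.sync = true
begin
  loop { puts "tick" }
rescue Errno::EPIPE
  exit 0  # the reader closed its end of the pipe; stop quietly
end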

How to capture process output using God?

Trying to get a simple God demo working.
In an empty directory I created the following files as per the God documentation:
simple.rb:
loop do
  puts 'Hello'
  sleep 1
end
simple.god:
God.watch do |w|
  w.name = "simple"
  w.start = "ruby simple.rb"
  w.log = 'myprocess.log'
  w.keepalive
end
Then I run:
$ sudo god -c simple.god -D
and get this output:
I [2018-10-31 23:19:39] INFO: Loading simple.god
I [2018-10-31 23:19:39] INFO: Syslog enabled.
I [2018-10-31 23:19:39] INFO: Using pid file directory: /var/run/god
I [2018-10-31 23:19:39] INFO: Started on drbunix:///tmp/god.17165.sock
I [2018-10-31 23:19:39] INFO: simple move 'unmonitored' to 'init'
I [2018-10-31 23:19:39] INFO: simple moved 'unmonitored' to 'init'
I [2018-10-31 23:19:39] INFO: simple [trigger] process is running (ProcessRunning)
I [2018-10-31 23:19:39] INFO: simple move 'init' to 'up'
I [2018-10-31 23:19:39] INFO: simple registered 'proc_exit' event for pid 11741
I [2018-10-31 23:19:39] INFO: simple moved 'init' to 'up'
but I can't seem to capture the actual output from the watched process. The 'myprocess.log' file never gets created or written to.
But beyond that, I'm just experiencing some really weird behavior. Sometimes when I run it, it spews an endless stream of output showing processes starting and exiting one after another. Sometimes it logs to files after I've renamed them. I can't pin down why it's behaving so erratically.
God 0.13.7 / ruby 2.3.0 / OSX 10.13.6
Check the example in the documentation that you linked to again:
God.watch do |w|
  w.name = "simple"
  w.start = "ruby /full/path/to/simple.rb"
  w.keepalive
end
You are using a relative path, not a full path. If you try to use a relative path it's going to error out and say it can't create the log file there. This will cause it to loop through start/exit as you described.
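If you also want the w.log line from your original config, give the log file an absolute path too. A sketch (the paths are placeholders):
God.watch do |w|
  w.name = "simple"
  w.start = "ruby /full/path/to/simple.rb"
  w.log = "/full/path/to/myprocess.log"  # absolute path; the directory must be writable
  w.keepalive
end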
Also, make sure that after you CTRL-C the god process that you kill your backgrounded ruby process. You can see that even after killing god that it's running with ps aux | grep ruby.
Finally, puts does log to the log file, but the output is buffered by god until the ruby process for simple.rb is terminated. Repeat this process to confirm:
# Confirm no running ruby processes, otherwise kill the processes and re-verify
ps aux | grep ruby
# Start the daemon
god -c simple.god -D
Switch to a new shell and run:
ps aux | grep ruby
foo 51279 0.0 0.1 4322084 11888 ?? Ss 12:46AM 0:00.09 ruby /Users/foo/simple.rb
foo 51241 0.0 0.2 4343944 26208 s000 S+ 12:46AM 0:00.45 ruby /Users/foo/.rvm/gems/ruby-2.6.0-preview2/bin/god -c simple.god -D
# Kill the process for simple.rb, which causes god to dump the output to the log and restart it
kill 51279
# Verify log file contains expected output
cat myprocess.log
Hello
Hello
Hello
Hello
I recommend you keep reading the documentation for god. There's a lot to it, and the answers are all there.

systemd and StandardInput: taking control of the tty

I have a systemd unit script which looks something like this
cat /usr/lib/systemd/system/hello.service
[Unit]
Description=Simple Hello World service
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/root/hello.env
ExecStart=/bin/gdb /root/hello
StandardInput=tty-force
StandardOutput=inherit
TTYPath=/dev/pts/0
TTYReset=yes
TimeoutStartSec=infinity
[Install]
WantedBy=multi-user.target
The whole point is, I want to start the service under gdb at start-up. (Since the process involves a lot of environment variables, I cannot just run gdb directly against the process.)
systemctl start hello (which is actually working).
But once I exit gdb, the tty is completely messed up. None of the control keys work: ^Z, ^C.
These are my observations so far.
As described in the systemd man pages, StandardInput=tty-force forces the executed process to take control of the tty.
Before I launch the process:
# tty
/dev/pts/0
# ps -aef | grep bash
root 2805 2803 0 10:42 pts/0 00:00:00 -bash
root 2860 2805 0 10:45 pts/0 00:00:00 grep --color=auto bash
After I launch:
# tty
/dev/pts/0
# ps -aef | grep bash
root 2805 2803 0 10:42 ? 00:00:00 -bash
root 2884 2805 0 10:47 ? 00:00:00 grep --color=auto bash
I tried resetting the terminal, but it still doesn't work.
Subsequent systemctl commands display the error below:
systemctl stop hello
Error creating textual authentication agent: Error opening current controlling terminal for the process (`/dev/tty'): No such device or address (polkit-error-quark, 0)
So the question is: is there a way to give the tty back to bash?
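For what it's worth, one way to sidestep the tty fight entirely (my suggestion, not from the original post): drop StandardInput=tty-force, let the service start normally, and attach gdb to the running process from an interactive shell. The attached process keeps the environment it was started with:
# From a normal interactive shell; 'hello' is the binary name used above
gdb -p "$(pidof hello)"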

Resque worker foreman failing to start workers

I have a foreman script starting up some workers on a standalone ruby app. Here's the script
Foreman script
worker: bundle exec rake resque:work BACKGROUND=true QUEUE=image VERBOSE=true
When I run the script this is the output I get.
$ foreman start
22:00:38 worker.1 | started with pid 882
22:00:38 worker.1 | exited with code 0
22:00:38 system | sending SIGTERM to all processes
SIGTERM received
The process seems to have exited, but when I look at the ps -eaf | grep resque output, it shows a resque worker running with pid 884. I've tested this and it's always a pid 2 higher than the original.
When I run the bundle exec command straight from the terminal without foreman, the command executes just fine. Is there anything I'm missing with the foreman script?
So apparently, when running with BACKGROUND=true the resque workers get daemonized: the original process exits and a new one gets spawned as an orphan process for the worker.
Still, there is an issue when creating 2 background workers with foreman, because once one of the workers daemonizes, foreman ends all processes, and only one daemonized worker gets created instead of two.
You should not daemonize the workers with foreman - foreman needs to have all the processes running in the foreground. If you want multiple workers, simply use something like this:
image_worker: bundle exec rake resque:work QUEUE=image VERBOSE=true
other_worker: bundle exec rake resque:work QUEUE=other VERBOSE=true
To start multiple workers on the same queue:
foreman start -m image_worker=2
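With both workers kept in the foreground like this, foreman's -m/--formation flag takes comma-separated name=count pairs, so foreman start -m image_worker=2,other_worker=1 would scale each process type independently.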
