Why doesn't the simple command ruby myapp.rb work to boot up my Sinatra application from within a Docker container?
I have a very simple Sinatra app:
# myapp.rb
require 'sinatra'
get '/' do
  'Hello world!'
end
I run this locally with ruby myapp.rb and I get the following output
== Sinatra (v2.1.0) has taken the stage on 4567 for development with backup from Puma
Puma starting in single mode...
* Puma version: 5.1.1 (ruby 2.7.0-p0) ("At Your Service")
* Min threads: 0
* Max threads: 5
* Environment: development
* PID: 49242
* Listening on http://127.0.0.1:4567
* Listening on http://[::1]:4567
Use Ctrl-C to stop
It opens at http://127.0.0.1:4567 with no issue. When I move to Dockerize the app, I create a Gemfile with Sinatra and the following Dockerfile:
FROM ruby:2.7.0
WORKDIR /code
COPY . /code
RUN bundle install
CMD ["ruby", "myapp.rb"]
Standing up the container seems successful (Docker Desktop is green, no terminal errors), but clicking on the suggested link http://localhost:4567/ doesn't load (sad Chrome face). Logs from within the container look like this:
[2020-12-27 18:04:52] INFO WEBrick 1.6.0
[2020-12-27 18:04:52] INFO ruby 2.7.0 (2019-12-25) [x86_64-linux]
== Sinatra (v2.1.0) has taken the stage on 4567 for development with backup from WEBrick
[2020-12-27 18:04:52] INFO WEBrick::HTTPServer#start: pid=1 port=4567
However, when I add the below config.ru file and change the last line of my Dockerfile to CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"], http://localhost:4567/ opens with no issue.
# config.ru
require './myapp'
run Sinatra::Application
Why are these tweaks necessary to make the app work? The logs from within the container look nearly the same.
[2020-12-27 18:01:49] INFO WEBrick 1.6.0
[2020-12-27 18:01:49] INFO ruby 2.7.0 (2019-12-25) [x86_64-linux]
[2020-12-27 18:01:49] INFO WEBrick::HTTPServer#start: pid=1 port=4567
172.17.0.1 - - [27/Dec/2020:18:02:44 +0000] "GET / HTTP/1.1" 200 12 0.0420
I'm not necessarily wondering about "best practices" here (this is a side project). I'm more just trying to understand what I might be missing about how Dockerizing apps works.
Docker commands for both cases (and I clear the images/containers between runs):
docker build --tag sinatra-img .
docker run --name sinatra-app -dp 4567:4567 sinatra-img
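For what it's worth, a quick check in the first (plain ruby myapp.rb) case, assuming curl is available in the image (it should be in the full ruby:2.7.0 base), would presumably show the app responding from inside the container but not through the published port:
# from inside the running container
docker exec sinatra-app curl -s http://localhost:4567/
# from the host, via the published port
curl -s http://localhost:4567/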
When you start your app with ruby myapp.rb in a Docker container, Sinatra runs in development mode and only listens on localhost (127.0.0.1). That loopback interface is private to the container (or to the VM your Docker server runs in), so you won't be able to reach the app from the host. To fix this, make sure the app listens on 0.0.0.0 when it runs in a Docker container: ruby myapp.rb -o 0.0.0.0
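In Dockerfile terms that is just a change to the CMD line, something like this minimal sketch (-o is Sinatra's bind-address option):
CMD ["ruby", "myapp.rb", "-o", "0.0.0.0"]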
NOTE:
The following answer relates to a previous version of the question. The new question has a different answer (fixing the binding address using the -o 0.0.0.0 CLI argument).
The Sinatra framework is based on Rack and requires a Rack-compatible server... either that, or it can fall back on the WEBrick server that ships with the Ruby standard library.
WEBrick is a decent server, but it wasn't designed for the heavier loads or the needs of an actual web application running in production.
For this reason, you SHOULD use a Rack compatible server.
However, this does not mean that you have to use the rackup CLI helper.
Some servers, like Puma, iodine, and Passenger, include their own CLI, so you could run your application using:
CMD ["bundle", "exec", "puma", "-p", "4567"]
Type puma -h (or iodine -h) for more command-line options. A server's specific CLI might offer server-specific features you don't get with rackup. For example, iodine exposes some security options through its CLI (maximum file upload size, maximum total header length, WebSocket message limits, etc.).
Using the server's CLI interface should be considered a better option.
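For bundle exec puma to work, Puma also has to be in the bundle. A minimal Gemfile sketch (assuming the app only needs Sinatra and Puma):
# Gemfile
source 'https://rubygems.org'

gem 'sinatra'
gem 'puma'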
In addition, although I wouldn't recommend it, some servers also provide a Ruby API that allows you to start the server from a Ruby script (instead of a config.ru file), e.g., with iodine (I'm biased):
ENV['PORT'] ||= "4567"
require 'iodine' # will test the `ENV['PORT']` value
require 'sinatra'
get '/' do
  'Hello world!'
end
Iodine.listen service: :http, public: './public', handler: Sinatra::Application
# Iodine.threads = 16 # or whatever.
# Iodine.workers = -2 # half the core count (negative value).
Iodine.start
I wouldn't use this approach. It tends to be more fragile and it also hardcodes both the environment and the server settings in the application.
I would just add the config.ru and use a decent server (I like iodine, but Puma is much more popular and unless you need real-time pub/sub, websockets or some specific security/performance features, popular is often safer).
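Putting that together for the Docker case, the CMD can bind Puma explicitly to all interfaces; Puma picks up config.ru from the working directory by default. A sketch, assuming the config.ru above sits next to the Dockerfile:
CMD ["bundle", "exec", "puma", "-b", "tcp://0.0.0.0:4567"]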
EDIT (according to comment):
If what you're really looking for is to embed the command bundle exec into the Ruby script (locking gem versions with a Gemfile), you can start the script with the lines:
#!/usr/bin/env ruby
require 'bundler'
Bundler.require
Or, if you don't want to use a Gemfile at all (or don't need version locking), you can just start the script with the line:
#!/usr/bin/env ruby
Then you can start your server directly:
CMD ["puma", "-p", "4567"]
Or, without using the server's CLI, run the example script above directly:
CMD ["./my_script.rb"]
I am developing a static site locally. To view it in a browser, I run this command
ruby -run -ehttpd . -p8000
to run a local webserver at localhost:8000.
Starting yesterday, when I run it, I get the error
INFO WEBrick 1.3.1
INFO ruby 2.0.0 (2015-12-16) [universal.x86_64-darwin16]
WARN TCPServer Error: Address already in use - bind(2)
INFO WEBrick::HTTPServer#start: pid=1158 port=8000
So I change the port number from -p8000 to -p8001 and get the same error. I try 8002, 8003, 8888, 1313, 8004. I get the same error on every single port number. Ruby, or WEBrick, thinks every port is already in use.
All the solutions to this problem I can find online suggest finding whatever process is hogging the port using commands like lsof | grep '8000' or lsof -wni tcp:8000 and then killing that process. But those commands don't return anything. There are no processes using those ports.
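For completeness, a broader form of that check, listing every listening TCP socket instead of grepping for a single port, is something like:
# list all listening TCP sockets with numeric addresses/ports
sudo lsof -nP -iTCP -sTCP:LISTEN
# or
netstat -an | grep LISTEN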
This happens on a fresh restart of my machine. Wifi turned off.
I am trying to set up a development environment for Apache Airflow on a MacBook with macOS 10.14.x.
I have installed Docker and VirtualBox, created a virtual machine, and created containers for the web_server, worker, and scheduler services, plus Redis and Postgres.
I run :
docker-compose up -d
But, when I visited http://localhost:8080, I got:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
In the docker-compose log file, I found:
[mwebserver_1 [INFO] Parent changed, shutting down: <Worker 34>
[mwebserver_1 [INFO] Worker exiting (pid: 34)
[mwebserver_1 {{cli.py:815}} ERROR - No response from gunicorn master within 120 seconds
[mwebserver_1 {{cli.py:816}} ERROR - Shutting down webserver
I am not sure what the problem could be.
Any suggestions would be appreciated.
After starting the Docker containers, try running docker exec -it NAME_OF_CONTAINER /bin/bash. That puts you into the container's bash shell, where you can run airflow webserver manually.
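Roughly, that looks like the following (the container name is a placeholder; yours depends on the docker-compose project name):
# find the webserver container's name
docker ps
# open a shell inside it
docker exec -it NAME_OF_CONTAINER /bin/bash
# inside the container, start the webserver by hand to see its errors directly
airflow webserver -p 8080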
I installed boot2docker as explained on the Docker website. Here are some commands I ran to show that things are installed correctly:
$$:~ kv$ boot2docker start
Waiting for VM and Docker daemon to start...
...................ooo
Started.
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
$$:~ kv$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.04 b39b81afc8ca 11 days ago 188.3 MB
hello-world latest e45a5af57b00 3 weeks ago 910 B
After this, I ran the following command:
docker run -t -i ubuntu:14.04 /bin/bash
Inside the container, I installed zeromq, and started a zeromq server on port 5555 using tcp.
My questions are the following:
If I exit out of the container, will it save all the work I do inside it?
I have no idea how to connect to the server running on port 5555. I read something about exposing a port, but I am not sure how to go about doing that. I did an ifconfig inside the container, and tried to connect to the server from the host like this:
$$:~ kv$ ./zmq_client tcp://container_ip:5555
This did not work. Can someone please list the steps I need to take in order to connect to the server running within the container?
For completeness' sake, I am providing the list of my environment variables:
TERM_PROGRAM=Apple_Terminal
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/km/5kbpdx4s7cg4rmyc6d5q9l9r0000gq/T/
DOCKER_HOST=tcp://192.168.109.103:2376
Apple_PubSub_Socket_Render=/tmp/launch-1tWMHJ/Render
TERM_PROGRAM_VERSION=326
OLDPWD=/Users
TERM_SESSION_ID=262CBC8B-0A74-4B70-9F28-D9FA51FF713C
USER=kv
SSH_AUTH_SOCK=/tmp/launch-ZTWNGL/Listeners
__CF_USER_TEXT_ENCODING=0x1F7:0:0
DOCKER_TLS_VERIFY=1
__CHECKFIX1436934=1
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
PWD=/Users/kv
DOCKER_CERT_PATH=/Users/kv/.boot2docker/certs/boot2docker-vm
HOME=/Users/kv
SHLVL=1
LOGNAME=kv
LC_CTYPE=UTF-8
DISPLAY=/tmp/launch-rco9zt/org.macosforge.xquartz:0
_=/usr/bin/env
One last question I have is about performance. Within my Mac OS X machine, I have a Docker container running (which runs Ubuntu). If I run an application, like a zeromq-based server, inside the container, won't it be slower compared to running it on Mac OS X directly? Please explain the benefits of using Docker in such a scenario.
You should really do some more reading and research before turning to SO, then ask about anything you can't figure out. But:
No. If the container is "exited" you can restart it and your files will still be there, but once it is removed your files are gone. You can use docker commit to save them to an image, but the best bet is to use a Dockerfile.
docker run -p 5000:8000 image will expose port 8000 in the container as port 5000 on the host (see the sketch after these answers for your zeromq case).
Yes, it will be slower due to the boot2docker VM. It would not be slower if you were running on a Linux host. The advantage is that zeromq is now running in an isolated container with all its dependencies.
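To make that concrete for the zeromq case, a sketch (the package name, binary name, and port are assumptions based on the question): bake the dependency into an image with a Dockerfile, publish the port when running, and connect to the boot2docker VM's address rather than localhost:
# Dockerfile (sketch)
FROM ubuntu:14.04
# install the zeromq library the server needs
RUN apt-get update && apt-get install -y libzmq3-dev
# copy in the (hypothetical) prebuilt server binary
COPY zmq_server /srv/zmq_server
CMD ["/srv/zmq_server"]
Then:
docker build -t zmq-server .
docker run -d -p 5555:5555 zmq-server
# connect from the Mac via the boot2docker VM's IP, not localhost
./zmq_client tcp://$(boot2docker ip):5555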
I'm trying to debug a Sinatra app using RubyMine. I am using rackup to run the app on localhost and unicorn to run it on a remote host. My Ruby version is 1.9.3.
I should also note that the "run debug mode icon" is grayed out. I don't know what is missing from the configuration.
What gems do I need? What else do I need to do?
Update:
I have run the server process on localhost using rackup -p 9000. In order to start debugging, I ran rdebug-ide --port 1234 -- rackup and got this message:
Fast Debugger (ruby-debug-ide 0.4.17.beta16, ruby-debug-base 0.10.5.rc1) listens on 127.0.0.1:1234
I still don't understand how to debug using RubyMine. I have opened the browser at http://0.0.0.0:1234 and I don't get any response (it keeps loading).
I run the app on the remote host using unicorn like so:
unicorn -c etc/fin_srv_unicorn.conf -E staging
How should I set up remote debugging? I have also tried the Rack and Ruby remote run configurations.
I tried connecting to the remote host, running the service (using the command listed above), and then running rdebug-ide like so:
rdebug-ide --port 1911 -- $SCRIPT$
where for $SCRIPT$ I have tried app/main.rb staging, unicorn -E staging, and unicorn -c etc/fin_srv_unicorn.conf -E staging.
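For reference, rdebug-ide expects the path to a Ruby script, so wrapping an executable like rackup or unicorn usually means spelling out its full path and then attaching RubyMine with a remote-debug run configuration rather than opening the port in a browser. A sketch, with ports and paths as assumptions:
# local: debug the rackup process, then attach RubyMine to port 1234
rdebug-ide --host 0.0.0.0 --port 1234 -- $(which rackup) -p 9000
# remote: same idea for unicorn
rdebug-ide --host 0.0.0.0 --port 1911 -- $(which unicorn) -c etc/fin_srv_unicorn.conf -E staging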