Configure uwsgi and nginx using Docker - bash

I have configured uwsgi and nginx separately for a Python production server, following this link. Each has a working configuration on its own: uwsgi alone works fine, and nginx alone works fine. My problem is that I plan to use Docker for this setup, and I am not able to run both uwsgi and nginx simultaneously, even though I am using a bash file. Below are the relevant parts of my configuration.
Dockerfile:
#python setup
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s mysite.conf /etc/nginx/sites-enabled/
EXPOSE 80
CMD ["/bin/bash", "start.sh"]
mysite.conf
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket
}

server {
    listen 80;
    server_name aa.bb.cc.dd; # IP address of the server
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file
    }
}
start.sh:
service nginx status
uwsgi --socket :8001 --module server.wsgi
service nginx restart
service nginx status # ------- > doesn't get executed :(
(Output of the shell file: screenshot not included.)
Can someone help me set this up using a bash script?

Your start.sh script risks ending immediately after executing those two commands.
That would terminate the container right after starting it.
You need at least to make sure the nginx start command does not exit right away.
The official nginx image uses:
nginx -g 'daemon off;'
Another approach would be to keep your script as is, but use a supervisor as the CMD, declaring your script in /etc/supervisor/conf.d/supervisord.conf.
That way, you don't expose yourself to the "PID 1 zombie reaping issue": stopping your container will wait for both processes to terminate, before exiting.
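For the supervisor approach, a minimal /etc/supervisor/conf.d/supervisord.conf might look like the following. This is only a sketch: the program names are made up, and the uwsgi command is copied from the question's start.sh.

```ini
[supervisord]
nodaemon=true            ; keep supervisord itself in the foreground as PID 1

[program:uwsgi]
command=uwsgi --socket :8001 --module server.wsgi
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"   ; keep nginx in the foreground too
autorestart=true
```

The Dockerfile CMD would then run supervisord instead of start.sh (e.g. CMD ["/usr/bin/supervisord"]; the binary path can vary by distribution).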

I think there is a very basic but important alternative worth pointing out.
Your initial scenario was:
Production environment.
Both uwsgi and nginx working fine alone.
TCP socket for uwsgi <=> nginx communication.
I don't think you should go with some complicated trick to run both processes in the same container.
You should simply run uwsgi and nginx in separate containers.
That way you achieve:
Functional isolation: if you want to replace nginx with Apache, you don't need to modify/rebuild/redeploy your uwsgi container.
Resource isolation: you can limit memory, CPU and IO separately for nginx and uwsgi.
Loose coupling: if you want, you could even deploy the containers on separate machines (you just need to make your upstream server URI configurable).
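As a sketch of the two-container approach, the layout below uses docker-compose; the service names, build paths, and the fact that each image exists are assumptions, not part of the question.

```yaml
# docker-compose.yml -- hypothetical split of uwsgi and nginx
version: "2"
services:
  uwsgi:
    build: ./app            # an image whose CMD is: uwsgi --socket :8001 --module server.wsgi
    expose:
      - "8001"              # only reachable from other containers, not the host
  nginx:
    build: ./nginx          # an image carrying mysite.conf
    ports:
      - "80:80"
    links:
      - uwsgi
```

With this layout, the upstream block in mysite.conf would point at the uwsgi service by name, e.g. server uwsgi:8001; instead of 127.0.0.1:8001.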

Related

Docker containers only stay up when accessing the host with SSH

I have two containers built with the command docker-compose up --build -d.
All containers build normally and stay up, but when I leave the machine, the containers stay up for at most two hours before going down again.
These containers run an API written in the PHP Laravel framework, behind an nginx reverse proxy.
docker ps shows the image was created 46 hours ago but has been "Up 2 seconds".
If I access the machine via SSH and then open the application, it is running again, without my needing to run docker-compose up.
What do I have to do to keep these containers up, as in a production environment?
There is a restart policy that can help when a container goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to use> <image name>
(don't include the <> in your command; note that docker run takes an image name, and for an already-created container you can apply the same policy with sudo docker update --restart unless-stopped <container name>).
After doing this, whenever that container goes down, Docker will restart it automatically.
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
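Since the question uses docker-compose, the equivalent is a restart policy in the compose file. The service names below are placeholders, not taken from the question's setup:

```yaml
# docker-compose.yml fragment -- hypothetical service names
services:
  api:
    build: .
    restart: unless-stopped   # restart on failure/reboot, unless explicitly stopped
  nginx:
    image: nginx
    restart: unless-stopped
```

With this in place, docker-compose up -d recreates the containers with the policy applied, and the Docker daemon brings them back after crashes or host reboots.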

Docker on Mac is running but refusing to expose port

Mac here, running Docker Community Edition Version 17.12.0-ce-mac49 (21995).
I have Dockerized a web app with a Dockerfile like so:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
I then build that image like so:
docker build -t myapp .
I then run a container of that image like so:
docker run -it -p 9200:9200 --net="host" --env-file ~/myapp-local.env --name myapp myapp
In the console I see the app start up without any errors, and all seems to be well. Even my metrics publishes (which publish heartbeat and other health metrics every 20 seconds) are printing to the console as I would expect them to. Everything seems to be fine.
Except when I go to run a curl against my app from another terminal/session:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn
curl: (7) Failed to connect to localhost port 9200: Connection refused
Now, if this were a situation where the /v1/auth/signIn path wasn't valid, or if there was something wrong with my request entity/payload, the server would pick up on it and send an error (I assure you; as I can confirm this exact same curl works when I run the server outside of Docker as just a standalone service).
So this is definitely a situation where the curl command can't connect to localhost:9200. Again, when I run my app outside of Docker, that same curl command works perfectly, so I know my app is trying to standup on port 9200.
Any ideas as to what could be going wrong here, or how I could begin troubleshooting?
The way you run your container has 2 conflicting parts:
-p 9200:9200 says: "publish (bind) port 9200 of the container to port 9200 of the host"
--net="host" says: "use the host's networking stack"
According to Docker for Mac - Networking docs / Known limitations, use cases, and workarounds, you should only publish a port:
I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the Mac.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this.
$ docker run -d -p 80:80 --name webserver nginx
Check that your app binds to 0.0.0.0:9200 and not localhost:9200 or something similar.
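The distinction matters because 127.0.0.1 is loopback-only, while 0.0.0.0 means "all interfaces"; inside a container, a server bound to localhost is unreachable through a published port. For a Spring Boot app like the one in the question, the bind address can be set in application.yml; the sketch below uses the standard server.address and server.port properties (whether this app reads them depends on its configuration, which we can't see):

```yaml
# application.yml fragment -- make the embedded server listen on all interfaces
server:
  address: 0.0.0.0   # bind to every interface so -p 9200:9200 can reach it
  port: 9200
```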
The problem seems to be the network mode in which you are running the container.
Quick test: log in to your container and run the curl command there; hopefully it works. That would isolate the problem to the request not being forwarded from host to container.
Try running your container on the default bridge network and test.
Refer to this blog for details on the network modes in Docker.
TLDR; You will need to add an IPtables entry to allow the traffic to enter your container.

how to diagnose 404 not found error nginx on docker?

I'm trying to get laradock (docker + laravel) working,
following these instructions: https://github.com/LaraDock/laradock
I installed docker + cloned laradock.git
laradock folder is located at
/myHD/...path../www/laradock
at the same level I have my laravel projects
/myHD/...path../www/project01
I edited laradock/docker-compose.yml
### Laravel Application Code Container ######################
volumes_source:
    image: tianon/true
    volumes:
        - ../project01/:/var/www/laravel
After this (though I'm not sure if this is the right way to reload after editing the docker-compose file), I ran:
docker-compose up -d nginx mysql
Now I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find the /etc/nginx/... path (the nginx folder doesn't exist!?)
Guessing your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration in your nginx server depends on it.
To run php-fpm, add php-fpm or something similar to the docker-compose up command; check what this is called in your docker-compose.yaml file, e.g.
docker-compose up -d nginx mysql phpfpm
To access your nginx container, first execute:
docker ps -a
From the list, look for the ID of your nginx container, then run:
docker exec -it <container-id> bash
This should then give you access to your nginx container to make the required changes.
Or, without directly accessing the container, simply make the change in the nginx configuration file: look for 'server' and the 'root' directive, and change the root from var/www/laravel/public to the new directory /project01/public.
Execute the command to bring down your containers:
docker-compose down
Then start over again with:
docker-compose up -d nginx mysql phpfpm
Give it a go
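The root change described above would look something like this in the nginx site config. This is only a sketch: the actual file name and directives in laradock's nginx container may differ.

```nginx
server {
    listen 80;
    # A 404 from nginx usually means this root does not point at the
    # directory where your volume actually mounts the Laravel public dir.
    root /var/www/laravel/public;
    index index.php index.html;
}
```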

Can't access docker container on port 80 on OSX

In my current job we have development environment made with docker-compose.
One container is nginx, which provides routing to the other containers.
Everything seems fine and works for my colleagues on Windows and OSX. But on my system (OSX El Capitan), there is a problem with accessing the nginx container on port 80.
There is setup of container from docker-compose.yml
nginx:
    build: ./dockerbuild/nginx
    ports:
        - 80:80
    links:
        - php
    volumes_from:
        - app
    ... and more
In ./dockerbuild/nginx there is nothing special, just nginx config as we know it from everywhere.
When I run everything with docker-compose create and docker-compose start, docker ps gives me:
3b296c1e4775 docker_nginx "nginx -g 'daemon off" About an hour ago Up 47 minutes 0.0.0.0:80->80/tcp, 443/tcp docker_nginx_1
But when I try to access it, for example via curl, I get an error: curl: (7) Failed to connect to localhost port 80: Connection refused
If I run the container on port 81 instead, everything works fine.
The port really is bound by Docker:
22:47 $ sudo lsof -i -n -P | grep TCP
...
com.docke 14718 schovi 38u IPv4 0x6e9c93c51ec4b617 0t0 TCP *:80 (LISTEN)
...
The firewall in OSX is turned off and I have no other security software.
if you are using docker-for-mac:
Accessing localhost:80 is correct, though you still have to ensure you do not have a local Apache/nginx service running. Often leftovers from boxen/homebrew exist binding that port, because that's what developers did back then :)
if you are using dockertoolbox/virtualbox/whatever hypervisor:
You will not be able to access it via localhost, but via the docker-machine IP, so run docker-machine ip default and then use http://$ip:80 in your browser.
if that does not help:
Ensure your nginx container actually does work, by connecting to the container: docker exec -i -t <containerid> bash
and then running ps aux | grep nginx, or, if telnet is installed, trying to connect to localhost
Solved!
The problem was that long, long ago I installed pow (a super simple automated Rails server which runs applications on an app_name.local domain). This beast left behind a LaunchAgent script which updates pf to forward port 80 to the pow port.
In my current job we have development environment made with docker-compose.
A privilege to use.
[W]hen I try to access [nginx on port 80] for example via curl I get error.
Given there's nothing preventing you from accessing Docker on your host OS, you should look at the app running inside the container to ensure it's binding to the correct host, e.g. 0.0.0.0 and not localhost.
For example, if you're running Nuxt inside a container with nuxt-ts, note that Nuxt defaults to localhost, which keeps the container's server off the Docker network; running npx nuxt-ts -H 0.0.0.0 gets things squared away by binding the internal server to the Docker network's IP (verify the IP with something like docker container inspect d8af01990363).

Docker tomcat7 container cannot connect to host activemq

I am admittedly relatively new to using Docker for environment isolation, but I've run into a problem I am yet to solve, and I'm looking for some advice on how to proceed. Apologies if this is dirt simple.
I have an image built with this Dockerfile:
FROM java:7-jre
MAINTAINER me <email redacted>
ENV CATALINA_HOME="/usr/local/tomcat"
ENV PATH=$CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
#Add tomcat tarball with configs
#need to figure out if war files should be auto-deploy or manual-deploy via manager
ADD ./ $CATALINA_HOME
WORKDIR $CATALINA_HOME
RUN tar -xmvf tomcat.tar.gz --strip-components=1 \
    && rm bin/*.bat \
    && rm tomcat.tar.gz*
EXPOSE 8080
#quite possibly unnecessary to expose 61616
EXPOSE 61616
CMD catalina.sh run
Because my host is Mac OSX, I'm using the boot2docker package. The port forwarding is a real PITA, but for now I'm just binding host 8080 to container 8080 when I run the container (-p 8080:8080) and I have 8080 forwarded in the boot2docker networking setup.
This image runs a container just fine, and I am able to manually upload and deploy .war files to this container while it's running.
On my local machine, I am running ActiveMQ. Eventually I'll put this in a container but I need to get past this hurdle first. ActiveMQ is running with the default port 61616 listening, as shown in this netstat output:
14:14 $ netstat -a | grep 6161
tcp46 0 0 *.61616 *.* LISTEN
The problem I'm having is that deployed war files in my tomcat container are unable to talk to the physical host on 61616. Here is the actual error from the catalina.out log on the container (I added some line breaks to make it easier to read):
Could not refresh JMS Connection for destination 'request' - retrying in 5000 ms.
Cause: Error while attempting to add new Connection to the pool; nested exception is javax.jms.JMSException:
Could not connect to broker URL: tcp://localhost:61616.
Reason: java.net.ConnectException: Connection refused
Admittedly, I think it's because the war file is configured to use localhost:61616 to connect to AMQ -- it doesn't feel right for localhost inside the container to "work" reaching back to the host. I'm not sure what variable value I should set that to, or if that's even the actual issue. I would think that if it's a dynamically-allocated black-magic IP address, it'd be relatively painful to keep reconfiguring inside war files.
Corollary: are there other considerations I would need to make beyond this configuration if I wanted to link this tomcat container with an AMQ one?
Thanks in advance for your attention. ~P
First, you shouldn't need to EXPOSE 61616 on the container. (That would allow the container to listen on port 61616, which is not what you want.)
What you do need though is to access docker's localhost (your boot2docker VM) from within the docker container. The best way I've found to do this, so far, from this answer, is to run inside your docker container:
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
That is going to give you the IP address of your boot2docker VM, as seen from within the current docker container. I'll leave it up to you to figure out how to configure your JMS client to connect to that IP address, but one idea that comes to mind is something like:
echo $DOCKER_HOST_IP my-jms-hostname >> /etc/hosts
And then you can hardcode your JMS configuration to hit my-jms-hostname:61616
I recommend that you put the above two commands into a start script that you use to startup your application server in the container.
Next, you will need to find a way to tunnel that port on your boot2docker VM to your local host OS. For example, on your local host OS, run
boot2docker ssh -R61616:localhost:61616
That will listen on the remote (boot2docker VM's) port 61616 and forward it to your local host OS's localhost:61616, which is where ActiveMQ is hopefully listening happily for an incoming connection from your application server's JMS client.
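To see what the route | awk pipeline above actually extracts, here is the same awk program run against canned route -n output; the addresses below are made-up examples, not real values from any host.

```shell
# Fabricated sample of `route -n` output as seen inside a container.
route_output='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0'

# The UG flags mark the default route; field 2 of that line is the gateway,
# i.e. the address of the docker host (the boot2docker VM) as seen from
# inside the container.
DOCKER_HOST_IP=$(printf '%s\n' "$route_output" | awk '/UG[ \t]/{print $2}')
echo "$DOCKER_HOST_IP"   # prints 172.17.42.1
```

The gateway of the default route is reliable here because, on Docker's default bridge network, the container's default route always points back at the host side of the bridge.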
