How to diagnose a 404 Not Found error from nginx on Docker? (Laravel)

I'm trying to get laradock (Docker + Laravel) working, following the instructions at https://github.com/LaraDock/laradock.
I installed docker + cloned laradock.git
laradock folder is located at
/myHD/...path../www/laradock
at the same level I have my laravel projects
/myHD/...path../www/project01
I edited laradock/docker-compose.yml:
### Laravel Application Code Container ######################
volumes_source:
    image: tianon/true
    volumes:
        - ../project01/:/var/www/laravel
After this (though I'm not sure this is the correct way to reload after editing the compose file), I ran:
docker-compose up -d nginx mysql
Now I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find the /etc/nginx/... path (the nginx folder doesn't exist!?)

Your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration in your nginx server depends on it.
To run php-fpm, add php-fpm (or whatever the service is called in your docker-compose.yml file) to the docker-compose up command, e.g.
docker-compose up -d nginx mysql phpfpm
To access your nginx container, first execute
`docker ps -a`
From the list, look for the ID of your nginx container, then run
docker exec -it <container-id> bash
This should give you access to your nginx container to make the required changes.
Or, without directly accessing the container, simply make the changes in the nginx configuration file: look for the 'server' block and its 'root' directive, and change the root from /var/www/laravel/public to the new directory /project01/public.
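As a rough sketch, the relevant server block would end up looking something like this (file location and exact directives vary between laradock versions, so treat the details as assumptions):

server {
    listen 80;
    server_name localhost;
    # the root must match the path your compose volume mounts the code to,
    # e.g. /var/www/laravel/public for the volume mapping shown in the question
    root /var/www/laravel/public;
    index index.php index.html;
}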
Execute the command to bring your containers down:
docker-compose down
Start over again with
docker-compose up -d nginx mysql phpfpm
Give it a go.
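If the 404 persists, checking the nginx container's logs can show which path nginx actually tried to serve; these are standard Docker commands rather than anything laradock-specific:

docker logs <container-id>
docker-compose logs nginx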

Related

Testcontainers: running @Testcontainers tests inside Docker (Docker inside Docker)

How do I run @Testcontainers-based test cases inside a Docker container?
I have a simple Spring Boot app with component-level integration tests that interact with containers using Testcontainers. The test cases run fine outside a container (on my local machine).
We run everything in containers, and the build runs on a Docker Jenkins image.
The Dockerfile creates the jar and then the image, but Testcontainers is not able to find a Docker installation.
Below is my Dockerfile.
FROM maven:3.6-jdk-11-openj9
VOLUME ["/var/run/docker.sock"]
RUN apt-get update
RUN apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I get the error below:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using a Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but you need to enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated and not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting the DOCKER_HOST environment variable.
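For example, if the daemon listens on TCP (hypothetical address, adjust to your setup):

# point Testcontainers at a TCP endpoint instead of the default socket
export DOCKER_HOST=tcp://127.0.0.1:2376
mvn clean package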
How Docker exposes its control API:
By default via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the Docker daemon's start command (where that command lives is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one option; -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
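On a systemd-based host, one common way to add those flags (a sketch; the unit name and file paths may differ on your distro) is a drop-in override:

# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2376

# then reload units and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker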
How to make Docker available inside your container ("Docker in Docker"): if you enabled network access, no additional config is needed except pointing Testcontainers to it as mentioned above. If you want to use the default Unix socket, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the docker.sock mounted inside the container will also be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will only work if the user of the running process can connect to that socket, i.e. it is root or happens to have exactly the same group id inside your container as the docker group has on your host system.
I do not know a good solution to this one yet, so for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id.
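One way to line the group ids up at container start, rather than hard-coding them (a sketch, not part of the original answer; stat -c '%g' assumes GNU stat on a Linux host):

# grant the container user the gid that owns the docker socket on the host
docker run \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  your-image-id-here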

Dockerized nginx isn't serving HTML page

Mac OS here, running Docker Version 17.12.0-ce-mac49. I have the following super-simple Dockerfile:
FROM nginx
COPY index.html /usr/share/nginx/html
I create my Docker image:
docker build -t mydocbox .
So far so good, no errors. I then create a container from that image:
docker run -it -p 8080:8080 -d --name mydocbox mydocbox
And I see it running (I have confirmed it's running by issuing docker ps, as well as shelling into the box via docker exec -it <containerId> bash and confirming /usr/share/nginx/html/index.html exists)!
When I open a browser and go to http://localhost:8080, I get an empty/blank/nothing-to-see-here screen, not my expected index.html page.
I don't see any errors anywhere and nothing to indicate configuration is bad or that firewalls are causing issues. Any ideas as to what the problem could be or how I could troubleshoot?
See if you can follow this example:
FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf
COPY index.html /usr/share/nginx/html/index.html
It uses a default.conf file which specifies the index.html to serve:
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
}
In default.conf, change the listening port from 80 to 8080, and EXPOSE it in the Dockerfile.
Or simply docker run with -p 8080:80 (hostPort:containerPort).
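For the first option, a minimal default.conf could look like this (a sketch assuming the stock nginx image layout):

server {
    # listen on 8080 to match the -p 8080:8080 mapping from the question
    listen 8080;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}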

Loading Elasticsearch from a config file in Docker

Hi, I'm new to Elasticsearch and Docker, so forgive me if this question is a bit basic.
I want to load up an Elasticsearch container (using the official elasticsearch image on Docker Hub) with a config file on my host machine.
Currently what I do is:
docker run -d elasticsearch elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
and so on.
Is there a way to start the container using a config file, i.e. the elasticsearch.yml file from my host machine?
The host machine is running CentOS (not sure if relevant, but thought I'd add it just in case).
Thank you
You can look into ONBUILD in a Dockerfile.
An instruction marked with ONBUILD is not executed in the build that defines it; it is only triggered later, when another image is built using this one as its base.
$ cat Dockerfile
FROM elasticsearch
ONBUILD ADD elasticsearch.yml /SOME_PATH
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
Secondly, you can also mount a host folder when running the container:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
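With the config directory mounted, the -Des flags from the question can move into the mounted elasticsearch.yml (a sketch using the values from the question):

# $PWD/config/elasticsearch.yml
cluster.name: testcluster1
node.name: node1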
Refer:
Dockerfile Best Practices

Not able to access Kibana running in a Docker container on port 5601

I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I run a container from the image created from this Dockerfile as follows:
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana on port 5601 from my host machine; my browser says ERR_CONNECTION_REFUSED.
I am able to access port 5000, though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent image devdb/kibana uses a script to start Kibana and Elasticsearch when the Docker container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfile.
Since your CMD only starts gunicorn, Elasticsearch and Kibana are never started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn process to the list of services to start. Basically, you have to define a script named run and add it to your Docker image in the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script (which must be executable) should be something similar to:
#!/bin/bash
# runit service script: keep gunicorn in the foreground so my_init can supervise it
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app

How to disable Nginx caching when running Nginx using Docker

I use the official nginx Docker image (https://registry.hub.docker.com/_/nginx/). When I modify index.html I don't see my change. Setting sendfile off in nginx.conf didn't help.
I only see the change if I rebuild my image.
Here is my Dockerfile:
FROM nginx
COPY . /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
And these are the commands I use to build and run it:
docker build -t some-nginx .
docker run --name app1 -p 80:80 -v $(pwd):/user/share/nginx/html -d some-nginx
Thank you
It's not caching. Once a file is copied into a container image (using the COPY instruction), modifying it from the host will have no effect - it's a different file.
You've attempted to overwrite the file by bind-mounting a volume from the host using the -v argument to docker run. This will work - you will now be using the same file on host and container, except you made a typo - it should be /usr not /user.
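With the typo fixed, the run command becomes:

docker run --name app1 -p 80:80 -v $(pwd):/usr/share/nginx/html -d some-nginx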
Alternatively, just setting sendfile off; in the nginx.conf file can work.
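For reference, that directive sits in the http block of nginx.conf (a minimal excerpt assuming the stock layout):

http {
    # read files through user space instead of sendfile, so changes made
    # through a mounted volume are picked up
    sendfile off;
    include /etc/nginx/conf.d/*.conf;
}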
