I use the official nginx Docker image (https://registry.hub.docker.com/_/nginx/). When I modify index.html I don't see my change. Setting sendfile off in nginx.conf didn't help.
I only see the change if I rebuild my image.
Here is my Dockerfile:
FROM nginx
COPY . /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
And that's the commands I use to build and run it:
docker build -t some-nginx .
docker run --name app1 -p 80:80 -v $(pwd):/user/share/nginx/html -d some-nginx
Thank you
It's not caching. Once a file is copied into a container image (using the COPY instruction), modifying it on the host will have no effect - it's a different file.
You've attempted to overwrite the file by bind-mounting a volume from the host using the -v argument to docker run. This will work - you will then be using the same file on host and container - except you made a typo: it should be /usr, not /user.
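For reference, the corrected run command (only the mount path changes):
docker run --name app1 -p 80:80 -v $(pwd):/usr/share/nginx/html -d some-nginx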
Just setting sendfile off; in the nginx.conf file can also do the trick.
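A minimal sketch of where that directive goes, inside the http block of nginx.conf:
http {
    sendfile off;
}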
Related
Mac OS here, running Docker Version 17.12.0-ce-mac49. I have the following super-simple Dockerfile:
FROM nginx
COPY index.html /usr/share/nginx/html
I create my Docker image:
docker build -t mydocbox .
So far so good, no errors. I then create a container from that image:
docker run -it -p 8080:8080 -d --name mydocbox mydocbox
And I see it running (I have confirmed it's running by issuing docker ps as well as shelling into the box via docker exec -it <containerId> bash and confirming that /usr/share/nginx/html/index.html exists)!
When I open a browser and go to http://localhost:8080, I get an empty/blank/nothing-to-see-here screen, not my expected index.html page.
I don't see any errors anywhere and nothing to indicate configuration is bad or that firewalls are causing issues. Any ideas as to what the problem could be or how I could troubleshoot?
See if you can follow this example:
FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf
COPY index.html /usr/share/nginx/html/index.html
It uses a default.conf file which specifies the index.html to be used:
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
}
Either change the listening port in default.conf from 80 to 8080 and EXPOSE it,
or simply use docker run with -p 8080:80 (hostPort:containerPort), as shown below.
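For the second option, the run command from the question would become (the stock nginx image listens on port 80):
docker run -it -p 8080:80 -d --name mydocbox mydocbox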
I'm trying to get laradock (Docker + Laravel) working,
following the instructions at https://github.com/LaraDock/laradock.
I installed Docker and cloned laradock.git.
laradock folder is located at
/myHD/...path../www/laradock
at the same level I have my laravel projects
/myHD/...path../www/project01
I edited laradock/docker-compose.yml:
### Laravel Application Code Container ######################
volumes_source:
  image: tianon/true
  volumes:
    - ../project01/:/var/www/laravel
After this (though I'm not sure this is the correct way to reload after editing the compose file), I ran:
docker-compose up -d nginx mysql
Now I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find the
/etc/nginx/... path (the nginx folder doesn't exist!?)
Your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration in your nginx server depends on it.
To run php-fpm, add php-fpm or something similar to the docker-compose up command; check what this is called in your docker-compose.yml file, e.g.
docker-compose up -d nginx mysql phpfpm
To access your nginx container, first of all execute
docker ps -a
From the list, look for the ID of your nginx container, before running
docker exec -it <container-id> bash
This should then give you access to your nginx container to make the required changes.
Or, without directly accessing the container, simply make the changes in the nginx configuration file: look for the server block and change the root from /var/www/laravel/public to the new directory /project01/public, as sketched below.
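A sketch of the relevant part of that server block (all other directives omitted):
server {
    root /project01/public;
}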
Execute the command to bring down your containers
docker-compose down
Start over again with
docker-compose up -d nginx mysql phpfpm
Give it a go
Hi, I'm new to Elasticsearch and Docker, so forgive me if this question is a bit basic.
I want to load up an Elasticsearch container (using the official elasticsearch image on Docker Hub) with a config file from my host machine.
Currently what I do is:
docker run -d elasticsearch elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
and so on.
Is there a way to load the container using a config file, i.e. the elasticsearch.yml file from my host machine?
The host machine is running CentOS (not sure if relevant, but I thought I'd add it just in case).
Thank you
You can do research on ONBUILD in the Dockerfile reference.
An instruction marked with ONBUILD is not executed in the build that defines it; it is only triggered later, when the resulting image is used as the base image for another build.
$ cat Dockerfile
FROM elasticsearch
ONBUILD ADD elasticsearch.yml /SOME_PATH
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
Secondly, you can also mount a host folder when running the container:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
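Note that this mount replaces the image's whole config directory, so the host folder should contain everything Elasticsearch expects there; for the official image of that era that likely means both:
$PWD/config/elasticsearch.yml
$PWD/config/logging.yml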
Refer to:
Dockerfile Best Practices
I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running the container from the image created from this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana port 5601 on my host machine. My browser page says ERR_CONNECTION_REFUSED
I am able to access port 5000 though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent Dockerfile devdb/kibana is using a script to start kibana and elasticsearch when the docker container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, elasticsearch and kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of making multiple processes run in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn to the list of services to start. Basically you would have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
# runit service script: start gunicorn in the foreground
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
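Note that runit requires the run script to be executable; if it isn't already on your host, add after the COPY line:
RUN chmod +x /etc/service/gunicorn/run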
I am trying to build an image for my flask based web-application using docker build. My Dockerfile looks like this:
FROM beehive-webstack:latest
MAINTAINER Anuvrat Parashar <anuvrat@zopper.com>
EXPOSE 5000
ADD . /srv/beehive/
RUN pip install -i http://localhost:4040/root/pypi/+simple/ -r /srv/beehive/requirements.txt
pip install without the -i flag works, but it downloads everything from PyPI, which is naturally slow.
The problem is that pip does not access the devpi server running on my laptop. How can I go about achieving that?
localhost refers to the Docker container, not to your host, as RUN lines are just commands executed inside the container. You thus have to use a network-reachable IP of your laptop.
Con: this makes your Dockerfile unportable if others don't have a PyPI mirror running.
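A sketch of what that looks like in the Dockerfile (192.168.1.10 is a hypothetical LAN address; substitute your laptop's actual IP):
RUN pip install -i http://192.168.1.10:4040/root/pypi/+simple/ -r /srv/beehive/requirements.txt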
One answer is a devpi helper container. You start a devpi Docker image and have it expose port 3141. Then you can add this as an extra source for pip install using environment variables in your Dockerfile.
Starting devpi using docker compose:
devpi:
  image: scrapinghub/devpi
  container_name: devpi
  expose:
    - 3141
  volumes:
    - /path/to/devpi:/var/lib/devpi

myapp:
  build: .
  external_links:
    - devpi:devpi
docker-compose up -d devpi
Now you need to configure the client docker container. It needs pip configured:
In your Dockerfile:
ENV PIP_EXTRA_INDEX_URL=http://devpi:3141/root/pypi/+simple/ \
    PIP_TRUSTED_HOST=devpi
Check it's working by logging into your container:
docker-compose run myapp bash
pip install --verbose nose
Output should include
2 location(s) to search for versions of nose:
* https://pypi.python.org/simple/nose/
* http://devpi:3141/root/pypi/+simple/nose/
Now you can upload packages to your devpi container either from another container or via sftp.
This approach has the advantages of speeding builds but not breaking them if the devpi container is not present.
Notes: Don't publish ports to devpi without a strong password, as it's a security risk. People could use it to upload arbitrary code which your application would install and execute.