I'm mounting files from my laptop's (host machine's) working directory into the container, and I'm not sure why the mounted files are owned by www-data. The file q was created from inside the container, so it's owned by root, as expected.
Some observations:
Host UID (my OS X account): 1783256022 (not sure why it looks like this)
Container UID (www-data): 33
Dockerfile:
FROM nginx:1.13.8
RUN apt-get update
RUN apt-get install -y nano && \
    apt-get install -y git && \
    apt-get install -y procps && \
    apt-get install -y net-tools
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: ./docker/nginx
    image: tc-web:0.1.0
    container_name: tc-web
    volumes:
      - ./:/var/www/html/
      - ./docker/nginx/.bashrc:/root/.bashrc
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
When you mount a directory using volumes on a Unix-like system, files keep the same permissions they have on the host in terms of UID and GID. That means the UID that created the files on your host (probably your user) matches the user ID of the www-data user inside the container.
When you create a file from inside the container, since the container runs as the root user, the file will be owned by the root user (both inside and outside the container).
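To see the mapping for yourself, you can compare the numeric IDs on each side. A minimal sketch (service name and paths taken from the compose file above):

# On the host: print your numeric UID and GID
id -u && id -g

# Inside the running container: list the mount with numeric owner IDs
docker-compose exec web ls -ln /var/www/html

# Optionally, create a file as your host UID instead of root
docker-compose run --rm --user "$(id -u):$(id -g)" web touch /var/www/html/owned-by-host-uid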
I want to run Streamlit through Docker. I did not find any official image. Can someone guide me through the steps required to achieve this, or point me to a Docker image for Streamlit?
Here are the details:
Operating System: Windows 10 Home
Docker version 19.03.1
Streamlit, version 0.61.0
You can look into this Docker Hub image:
docker run -it -p 80:80 --entrypoint "streamlit" marcskovmadsen/awesome-streamlit:latest run app.py
I'm not sure about the Streamlit version, but you can create your own image based on this Dockerfile.
Or you can explore streamlit-docker, which is working for me on my local system.
Quick Setup (own image)
Dockerfile
# Nicked from: https://github.com/markdouthwaite/streamlit-project/blob/master/Dockerfile
FROM python:3.8.4-slim
RUN pip install -U pip
COPY requirements.txt app/requirements.txt
RUN pip install -r app/requirements.txt
# copy into a directory of its own (so it isn't in the toplevel dir)
COPY . /app
WORKDIR /app
CMD ["python", "-m", "streamlit.cli", "run", "main.py", "--server.port=8080"]
EXPOSE 8080
requirements.txt
Then, in the same directory, example contents of the requirements.txt file:
streamlit==0.76.0
pandas==1.2.1
numpy==1.19.5
docker-compose.yml
In a directory above your Dockerfile and source code, you can add:
version: "3.7"
services:
streamlit:
build:
context: streamlit/
volumes:
- ./streamlit:/app
ports:
- 8080:8080
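With both files in place, you can build and start it from that top-level directory; port 8080 matches the --server.port flag in the Dockerfile's CMD:

docker-compose up --build -d
# then open http://localhost:8080 in a browser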
When executing my init.sh, I am calling the command:
sudo bash ${INSTALLPATH}seafile.sh start
Following this, the error:
seafile_1_f2341d904d27 | /bin/sh: 1: sudo: not found
occurs.
Opening the directory "bin" and looking at "sh", it is just some unreadable characters.
The Dockerfile that calls init.sh looks like this:
FROM debian
#FROM armv7/armhf-debian
MAINTAINER me
# install packages
RUN apt-get update && apt-get install sudo -y
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates \
    python2.7 \
    python-setuptools \
    python-imaging \
    python-ldap \
    python-urllib3 \
    sqlite3 \
    wget
# Copy scripts
ADD ./scripts /scripts
# set environment variables
ENV SERVER_NAME mdh-seafile
ENV SERVER_IP 127.0.0.1
ENV FILESERVER_PORT 8082
ENV SEAFILE_DIR /data
ENV SEAFILE_VERSION seafile-server-6.3.4
ENV INSTALLPATH /opt/seafile/${SEAFILE_VERSION}/
# clean for smaller image
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Volumes for persistent configuration
VOLUME /opt/seafile/
# added
COPY /opt/seafile/${SEAFILE_VERSION}/seafile.sh .
COPY /opt/seafile/${SEAFILE_VERSION}/seahub.sh .
# set entrypoint
ENTRYPOINT sudo bash /scripts/init.sh
Init.sh:
else
    # start seafile
    # whoami -> Output: root
    sudo bash ${INSTALLPATH}seafile.sh start
    sudo bash ${INSTALLPATH}seahub.sh start
    # keep seafile running in foreground to prevent docker container shutting down
    while true; do
        sudo tail -f /opt/seafile/logs/seafile.log
        sleep 10
    done
fi
I'm executing everything by calling sudo bash install.sh, which runs docker-compose with the file that links the components together.
The docker-compose.yml:
version: '2'
services:
  db:
    #image: hypriot/rpi-mysql
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=###
    volumes:
      - /mnt/data/mysql:/var/lib/mysql
  duply:
    build: .
    volumes:
      - ./config:/config
      - /mnt/data:/mnt/data
      - ./webinterface:/var/www/html/MyDigitalHome
      - /mnt/guestbackup:/mnt/guestbackup/backup
      #- /mnt/usb-removable:/usb-removable
    ports:
      - "8080:80"
      - "24:22"
    links:
      - db
  seafile:
    build: seafile/
    volumes:
      - ./seafile/config:/config
      - /mnt/data/seafile:/data
    ports:
      - "8000:8000"
      - "8082:8082"
    environment:
      - SEAFILE_ADMIN=###@mydigitalhome.xy
      - SEAFILE_ADMIN_PW=###
  owncloud:
    build: owncloud/
    volumes:
      - /mnt/data/owncloud:/data
      - ./owncloud/config:/var/www/html/config
    ports:
      - "8090:80"
    links:
      - db:mysql
The current errors are:
ERROR: Service 'seafile' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder845725722/opt/seafile/seafile-server-6.3.4/seafile.sh: no such file or directory
Attaching to mdh_seafile_1_f2341d904d27, mdh_db_1_46bebe733124, mdh_duply_1_170a5db26129, mdh_owncloud_1_260c3a56f2a5
seafile_1_f2341d904d27 | bash: seafile.sh: No such file or directory
seafile_1_f2341d904d27 | bash: seahub.sh: No such file or directory
seafile_1_f2341d904d27 | tail: cannot open '/opt/seafile/logs/seafile.log' for reading: No such file or directory
seafile_1_f2341d904d27 | tail: no files remaining
I am assuming you are running this in Docker.
You can add to your Dockerfile:
RUN apt update && apt install -y sudo
This should resolve your problem.
Starting from the debian image, you need to install sudo. You can do that by adding RUN apt-get update && apt-get install sudo -y to the beginning of your Dockerfile. Then rebuild your image with docker build . and run the command again.
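As a minimal sketch, the top of the Dockerfile would then look like this (matching the debian base image already used in the question):

FROM debian
# install sudo before any script that invokes it
RUN apt-get update && apt-get install -y sudo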
I have a Dockerfile and docker-compose.yml that I developed on OSX. They look like this:
cat Dockerfile:
FROM alpine:3.7
ADD ./app /home/app/
WORKDIR /home/app/
RUN apk add --no-cache postgresql-dev gcc python3 python3-dev musl-dev && \
    python3 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip3 install --upgrade pip setuptools && \
    rm -r /root/.cache && \
    pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "app.py"]
cat docker-compose.yml:
version: "3"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- ./app/:/home/app/
depends_on:
- db
db:
image: postgres:10
environment:
- POSTGRES_USER=testusr
- POSTGRES_PASSWORD=password
- POSTGRES_DB=testdb
expose:
- 5432
I then start via docker-compose up --build -d and open the web browser on port 5000; it shows a form (as expected). Again, this works perfectly on OSX.
However, on Windows 10 it just shows a blank page. And docker port <CONTAINER> shows 5000/tcp -> 0.0.0.0:5000, so I know it's bound to the right port.
Additionally, if I docker logs -f <python container> it doesn't show the request ever getting to the container. Normally a new line is printed (w/status code) for each flask/nginx response. In this case it looks like it's not even getting to the python application container.
So, any idea why this doesn't work on Windows?
So the answer is that you have to use docker-machine ip to see what IP the container is bound to. It doesn't just bind to localhost/127.0.0.1 like it does on OSX.
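For example (assuming the default machine name used by Docker Toolbox):

docker-machine ip default
# prints something like 192.168.99.100;
# browse to http://192.168.99.100:5000 instead of http://localhost:5000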
I built the feathers-chat demo with the feathers-chat-vuex client attached via socketio, but I am having trouble with remote access via socketio. The demo works fine as long as I access the client from a browser on the same system, which happens to be an Ubuntu VM running under VMware Fusion on a MacBook Pro.
But if I try to connect to the client from a browser in the host macOS, it brings up the login page but fails to log in. In the devtools console, it says "WebSocket connection to ...localhost:3030 failed," which of course it did, because the feathers-chat server is not running on this localhost, it is running in the VM.
The socketio connection is set up in feathers-client.js like this: const socket = io('http://localhost:3030', {transports: ['websocket']}). If I hard-code the VM IP address like this: const socket = io('http://172.16.184.194:3030', {transports: ['websocket']}), then the remote access works fine. But of course I cannot do that, because in general I don't know the IP address of the server at run time. So can someone tell me the right way to set this up? Thanks!
I haven't worked with socketio on Feathers, but I've dealt with this on a REST Feathers app (so hopefully this applies!).
Put the IP address into your config file. When you eventually deploy the app, you will be pointing it at the IP address of the production server anyway, so it makes sense to have it configurable.
Then for your local dev it can be the IP of the VM, and when you deploy, your config/production.json can have the live IP.
Here are the docs about using config variables:
https://docs.feathersjs.com/api/configuration.html#example
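A sketch of what that could look like in config/default.json, with a hypothetical apiUrl key (the key name is your own choice, not part of the Feathers API):

{
  "host": "localhost",
  "port": 3030,
  "apiUrl": "http://172.16.184.194:3030"
}

Your config/production.json would then override apiUrl with the live IP, and the client reads that value instead of a hard-coded address.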
Update
If you are using Docker instead of VMware for your local development, then you can use docker-compose and the built-in networking functionality. Here is an example of a docker-compose file for a Feathers app that I'm working on. It's just an API, so I'm not dealing with the client, but I think it should be a good starting point for you.
With Docker installed, put these two files in your project root and then run docker-compose up -d. It exposes port 3030 on the Feathers server, and you should be able to connect with http://localhost:3030 again from your host machine. If you want to add another container to docker-compose, you can, and from that container you can access the server at http://server:3030, because it will use the internal Docker networking.
docker-compose.yml:
version: "3"
services:
server:
build: .
volumes:
- .:/usr/src/app
ports:
- "3030:3030"
stdin_open: true
tty: true
mongo:
image: mongo
volumes:
- ./protected_db/howlround-mongodb:/data/db
command:
- --storageEngine=mmapv1
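To bring it up and do a quick reachability check from the host (assuming the compose file above):

docker-compose up -d
curl http://localhost:3030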
And my Dockerfile:
FROM ubuntu:16.04
# Dockerfile based on https://github.com/nodejs/docker-node/blob/master/6.4/slim/Dockerfile
# gpg keys listed at https://github.com/nodejs/node
RUN set -ex \
    && for key in \
        9554F04D7259F04124DE6B476D5A82AC7E37093B \
        94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
        0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
        FD3A5288F042B6850C66B31F09FE44734EB7990E \
        71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
        DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
        B9AE9905FFD7803F25714661B63B535A4C206CA9 \
        C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    ; do \
        gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
    done
ENV NPM_CONFIG_LOGLEVEL info
ENV NODE_VERSION 8.11.1
ENV NODE_ENV dev
RUN buildDeps='xz-utils curl ca-certificates' \
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps
# Install yarn and feathers-cli
RUN npm install -g yarn @feathersjs/cli pm2@latest
# Install Git
RUN add-apt-repository -y ppa:git-core/ppa;\
    apt-get update;\
    apt-get -y install git
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN cd /usr/src/app
# Install app dependencies
RUN yarn install
EXPOSE 3030
CMD [ "node" ]
I'm using this scheme:
1/ I'm working on Windows 7
2/ I'm using Vagrant to mount an "ubuntu/trusty64" box
3/ I apt-get install ansible
4/ I install docker and docker-compose with ansible
5/ I create a docker image with this Dockerfile:
FROM php:7-apache
MAINTAINER Bruno DA SILVA "bruno.dasilva@foo.com"
COPY containers-dirs-and-files/var/www/html/ /var/www/html/
WORKDIR /var/www/html
6/ I run it:
sudo docker build -t 10.100.200.200:5000/pimp-hello-world .
sudo docker run -p 80:80 -d --name test-php 10.100.200.200:5000/pimp-hello-world
7/ Apache can't display the page; I have to add:
RUN chmod -R 755 /var/www/html
to the Dockerfile in order to have it visible.
So here is my question: can I handle file permissions while working on Windows (and how)? Or do I have to move to Linux?
This happens on Linux too. Docker copies the files and puts root as the owner. The only way I have found to overcome this without using chmod is to archive the files in a tar file and then use
ADD content.tgz /var/www/html
which will expand automatically.
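A minimal sketch of that approach (the archive name and paths are illustrative, following the Dockerfile above):

# on the host, pack the site so ownership and permissions are baked into the archive
tar -czf content.tgz -C containers-dirs-and-files/var/www/html .

# then in the Dockerfile, ADD auto-extracts local tar archives:
# ADD content.tgz /var/www/html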