Installing Transmission on Debian with Docker, permissions trouble - shell

I am a beginner with Docker, and I am trying to make my Transmission container work!
First, I am running Debian 8.1.
To make things work I created these two folders:
mkdir -p /opt/docker/transmission/config
mkdir -p /opt/docker/transmission/downloads
After this I set what I thought were the appropriate permissions:
chown -R root:docker /opt/docker
chmod -R 775 /opt/docker
Finally, I tried to create my container by doing this:
docker run -d \
--net="host" \
--name="Transmission" \
-e USERNAME="root" \
-e PASSWORD="mdp" \
-v /opt/docker/transmission/config:/config \
-v /opt/docker/transmission/downloads:/downloads \
-v /etc/localtime:/etc/localtime:ro \
gfjardim/transmission
The command docker logs Transmission gives:
Couldn't save temporary file "/config/resume/myfile.resume.tmp.hYhTPF": No such file or directory
I guessed that the resume folder inside config had not been created, so I created it myself, but it didn't work.
The message in the Transmission GUI is:
unable to save resume file: permission denied

I cannot reproduce your errors using your commands. I suggest you delete your /opt/docker/transmission/ directory and then re-run the docker run command.
Docker will take care of creating those directories.
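Concretely, that would look like this (a sketch; the docker run command is the exact one from the question):
# remove the pre-created directories
rm -rf /opt/docker/transmission
# then re-run the docker run command from the question unchanged;
# Docker will recreate the bind-mounted host directories itself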

I didn't find a solution for the gfjardim/transmission container, but switching containers made it work right away.
I followed the dperson/transmission instructions on Docker Hub, and it works perfectly.
Thanks for your help.

Add bash to Docker image [duplicate]

Seems like a basic issue, but I couldn't find any answers so far.
When using ADD/COPY in a Dockerfile and running the image on Linux, the default permission of a copied file is 644 and its owner appears to be root.
However, when the image runs, the container starts as a non-root user, and that user cannot execute a file copied with 644 permissions; if the file is executed at ENTRYPOINT, the container fails to start with a permission denied error.
I read in one of the posts that COPY/ADD in Docker 17.09+ allows --chown, but in my case I don't know which non-root user will be starting the container, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY the files to a temporary location and use RUN to copy them from there to the actual folder, as I am doing below. But this approach doesn't work, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated in figuring out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the file's permission is in the build context you copy it from. If you control the source, it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of docker (and which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
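A minimal sketch of that entrypoint pattern (fix-perms here stands for that helper script, and the appuser name and /data path are assumptions for illustration):
#!/bin/sh
# entrypoint.sh: started as root so it can fix ownership before dropping privileges
if [ "$(id -u)" = "0" ]; then
    # align the container user's uid with the owner of the host volume
    # (fix-perms is a custom helper script, not a standard tool)
    fix-perms -r -u appuser /data
    # switch from root to the container user and run the real command
    exec gosu appuser "$@"
fi
exec "$@"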
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10, so you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it you need to enable BuildKit:
# enable BuildKit for docker
export DOCKER_BUILDKIT=1
# enable BuildKit for docker-compose
export COMPOSE_DOCKER_CLI_BUILD=1
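Alternatively, the variable can be set inline for a single build (a usage sketch; myimage is a placeholder tag):
DOCKER_BUILDKIT=1 docker build -t myimage .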
Note: It seems to not be documented at this time, see this issue.

Permission denied to Docker daemon socket at unix:///var/run/docker.sock

I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name'
ARG group='docker'
# create group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create user if it does not exist
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
I build it this way:
docker build --tag my-gulp:latest .
and finally run via this script:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the Docker container properly, but when I want to list images
docker images
or try to pull an image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How can I create a Docker image from chekote/gulp:latest so I can use docker inside it without the error?
Or is the error caused by a wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/groups file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
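Applied to the run script from the question, that might look like this (a sketch; the docker binary bind-mount is dropped, since mounting the host's docker binary tends to be fragile):
docker run -it --rm \
    --group-add "$(stat -c '%g' /var/run/docker.sock)" \
    -v "$(pwd)":${workDir} \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ${image} /bin/bash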
You wouldn't usually install sudo in a Dockerfile (it doesn't work well for non-interactive programs; you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers; and you can always docker exec -u 0 to get a root shell), though installing some non-root user is often considered a best practice. You could reduce the Dockerfile to:
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
The error has nothing to do with the docker pull or docker images subcommands; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
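On most Linux distributions, adding yourself to the docker group looks like this (you need to log out and back in for it to take effect):
sudo usermod -aG docker "$USER"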

executable file not found in $PATH Dockerfile

I am building a Dockerfile for an application. I want to execute a bash script with parameters when the container starts to run, so I have made it an entrypoint. However, Docker cannot find the directory in which my script is located. This script is located in the IntelliJ IDEA project folder, and the path looks like this: /home/user/Documents/folder1/folder2/folder3/Projectname/runapp.sh
I have tried to mount this directory as a volume, but while running the built image an error occurred:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"runapp.sh\": executable file not found in $PATH": unknown.
What may be the reason for such behavior? How else can I reach this bash script from the Dockerfile?
Here is what the Dockerfile looks like:
FROM java:8
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV NEO4J_CONFIG default
ENV BENCHMARK_NAME default
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 7474 7687 7473
VOLUME /home/user/Documents/folder1/folder2/folder3/Projectname /workdir1
WORKDIR /workdir1
ENTRYPOINT ["runapp.sh"]
CMD ["$NEO4J_CONFIG", "$BENCHMARK_NAME"]
I think you misunderstood volumes in Docker (see What is the purpose of VOLUME in Dockerfile).
I'm citing VonC's answer:
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
Combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume, persisted by the container in /var/lib/docker/volumes/....
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
So you should use docker run -v /home/user/Documents/folder1/folder2/folder3/Projectname:/workdir1 when starting the container
And your Dockerfile volume declaration should be:
VOLUME /workdir1
That being said, you define both ENTRYPOINT and CMD. What is the CMD for? Will you never use your image without running runapp.sh? I prefer using only CMD for development, since you can still do docker run -it my_container bash for debugging purposes with this syntax.
This time I'm citing Daishi's answer from What is the difference between CMD and ENTRYPOINT in a Dockerfile?
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies arguments that will be fed to the ENTRYPOINT.
If you want to make an image dedicated to a specific command you will use ENTRYPOINT ["/path/dedicated_command"]
Otherwise, if you want to make an image for general purpose, you can leave ENTRYPOINT unspecified and use CMD ["/path/dedicated_command"] as you will be able to override the setting by supplying arguments to docker run
Moreover, runapp.sh isn't in your $PATH and you call it without an absolute path, so Docker will not find the file even if the volume is mounted correctly.
You could just use:
CMD /workdir1/runapp.sh "$NEO4J_CONFIG" "$BENCHMARK_NAME"
Now be careful: on your host you mention that the shell script is named script.sh, yet you call runapp.sh in your Dockerfile; I hope that's a typo. By the way, your script needs to be executable.
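Setting the execute bit on the host side would look like this (a sketch using the path from the question):
chmod +x /home/user/Documents/folder1/folder2/folder3/Projectname/runapp.sh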

I have made a Dockerfile and I was going to run it on AWS ECS, but I can't as it requires -t

Here are my docker run command and Dockerfile. Is there a reason why it requires -t and isn't working on ECS? Thanks for any help. I don't understand what -t does, so if someone could also help with that, thanks.
This is just a basic container that connects to my RDS instance and uses WordPress. I don't have any plugins, and Shapely is the theme I'm using.
The command: docker run -t --name wordpress -d -p 80:80 dockcore/wordpress
FROM ubuntu
#apt-get clean all
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda
RUN rm -fr /var/cache/*   # files not needed
ADD wordpress.conf /etc/apache2/sites-enabled/000-default.conf
# Wordpress install
RUN wget -P /var/www/html/ https://wordpress.org/latest.zip
RUN unzip /var/www/html/latest.zip -d /var/www/html/
RUN rm -fr /var/www/html/latest.zip
# Copy the wp config file
RUN cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# Expose web port
EXPOSE 80
# wp config for database
RUN sed -ie 's/database_name_here/wordpress/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/username_here/root/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/password_here/password/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/localhost/wordpressrds.xxxxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com:3306/g' /var/www/html/wordpress/wp-config.php
RUN rm -fr /var/www/html/wordpress/wp-content/themes/*
RUN rm -fr /var/www/html/wordpress/wp-content/plugins/*
ADD /shapely /var/www/html/wordpress/wp-content/themes/
# Start apache on boot
RUN echo "service apache2 start" >> ~/.bashrc
I see a couple of problems. First of all, your container should never require -t in order to run unless it is a temporary container that you plan to interact with using a shell. Background containers shouldn't require an interactive TTY interface; they just run in the background autonomously.
Second in your docker file I see a lot of RUN statements which are basically the build time commands for setting up the initial state of the container, but you don't have any CMD statement.
You need a CMD which is the process to actually kick off and start in the container when you try to run the container. RUN statements only execute once during the initial docker build, and then the results of those run statements are saved into the container image. When you run a docker container it has the initial state that was setup by the RUN statements, and then the CMD statement kicks off a running process in the container.
So it looks like that last RUN in your Dockerfile should be a CMD, since the Apache server is the long-running process that you want to run with the container state that you previously set up using all those RUN statements.
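For example, instead of appending to ~/.bashrc, the Dockerfile could end with something like this (a sketch; on Ubuntu the Apache control binary is apache2ctl, and -D FOREGROUND keeps it running as the container's foreground process):
# run Apache in the foreground as the container's main process
CMD ["apache2ctl", "-D", "FOREGROUND"]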
Another thing you should do is chain many of those consecutive RUN statements into one. Docker creates a separate layer for each RUN command, where each layer is kind of like a Git commit of the state of the container. So it is very wasteful to have so many RUN statements, because it makes way too many container layers. You can do something like this to chain RUN statements together instead, to make a smaller, more efficient container:
RUN apt-get -y update && \
DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda && \
rm -fr /var/cache/*   # files not needed
I recommend reading through this guide from Docker that covers best practices for writing a Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#cmd

Docker ADD then RUN

I'm building a docker image for our specific implementation of an ElasticSearch server. It requires config files. In my Dockerfile, I have the following commands:
RUN mkdir /elasticsearch/plugins
ADD plugins/es_plugins.tar /elasticsearch/plugins
RUN \
cd /elasticsearch/plugins && \
tar -xvf es_plugins.tar && \
rm -f es_plugins.tar
The docker build succeeds on the mkdir and the ADD, but fails with a tar error: "es_plugins.tar: Cannot open: No such file or directory".
This is my first time with Docker. Any ideas why the ADD shows as successful but the RUN fails?
It took an hour, but I figured out how to run my container and drop into a shell.
It seems that when I do ADD file.tar /destination/dir, Docker actually extracts the files/directories inside the tar into the destination. This is what I wanted, but I was surprised. I will have to do more research.
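Given that behavior, the manual extraction step is unnecessary; a sketch of the simplified Dockerfile section:
RUN mkdir /elasticsearch/plugins
# ADD auto-extracts local tar archives into the destination directory,
# so there is no es_plugins.tar left inside the image to extract or delete
ADD plugins/es_plugins.tar /elasticsearch/plugins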
