I am trying to create a tensorflow serving docker container but I am getting the following error while running the docker create command
I am unable to figure out whether it is because of a path problem or because the /bin/bash file is broken. What can I do to fix this issue? Thanks in advance.
What base image are you using for your container image? I checked busybox and alpine. They have ash by default but not bash. Once you create your image you can run it as follows:
docker run -it my-image-name sh
This should get you into an interactive shell. Then cd into /bin and check which commands are available using ls.
This is what I got in alpine:
/ # ls /bin
ash df getopt linux64 mpstat rev sync
base64 dmesg grep ln mv rm tar
bbconfig dnsdomainname gunzip login netstat rmdir touch
busybox dumpkmap gzip ls nice run-parts true
cat echo hostname lzop pidof sed umount
chgrp ed ionice makemime ping setpriv uname
chmod egrep iostat mkdir ping6 setserial usleep
chown false ipcalc mknod pipe_progress sh watch
conspy fatattr kbd_mode mktemp printenv sleep zcat
cp fdflush kill more ps stat
date fgrep link mount pwd stty
dd fsync linux32 mountpoint reformime su
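Since alpine ships ash rather than bash, one option is to install bash in your image. A minimal Dockerfile sketch for an Alpine-based image (the Alpine tag here is just an example):
FROM alpine:3.18
# bash is not included in the Alpine base image, so install it explicitly
RUN apk add --no-cache bash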
A container is an instance created from a container image. In your case, your container tf_container_gpu has been created from the image you specified. You can give your container a name only at the time you create it. After that you just need to start it with that name.
docker start tf_container_gpu should do.
If you want to recreate your container (say, after you rebuild your image), first remove the earlier container instance with docker container rm tf_container_gpu. Then run the container again:
docker run --name=tf_container_gpu <image-name>
To just start and stop the container
docker start tf_container_gpu
docker stop tf_container_gpu
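Putting those steps together, a typical rebuild-and-replace cycle might look like the sketch below (the docker build step is an assumption; use your own image name):
docker build -t <image-name> .
docker container rm tf_container_gpu
docker run --name=tf_container_gpu <image-name>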
I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
The above run command starts the container, but it then exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mount-folder without the container exiting.
(I don't want to go with docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install the necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is the RStudio Server image from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but the container still exits.
A Docker container shuts down automatically as soon as its main process finishes, even when run in detached mode. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the CMD specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
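Combined with the options from the question, the whole command might look like the sketch below. The $PWD prefix on the mount source is an assumption here: Docker bind mounts need an absolute source path, which the relative mount-folder/ in the question does not provide.
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v "$PWD/mount-folder":/home/rstudio/ image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'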
I have a script on the Docker host that I want to run to create symbolic links inside its containers. However, I cannot seem to get the symbolic link created when I run my script below:
#!/bin/bash
declare -A containers
while IFS== read -r key value; do
containers[$key]=${value}
S=$(sudo docker exec -t $key ln -s /srv/my.cnf /etc/mysql/my.cnf);
done < "/opt/containers.txt"
The weird thing is that when I run the command outside the script, directly in the terminal, it actually works. For example:
sudo docker exec -t db-test-1 ln -s /srv/my.cnf /etc/mysql/my.cnf
So not sure why it won't run in the script. Any suggestions?
For testing, you can see if isolating the command (here ln) in its own shell helps:
S=$(sudo docker exec -t $key /bin/sh -c "ln -s /srv/my.cnf /etc/mysql/my.cnf");
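Applied to the original loop, the script would then look roughly like this (a sketch only: containers.txt is assumed to hold key=value lines as in the question, and the -t flag is dropped because a TTY is not needed inside a script):
#!/bin/bash
declare -A containers
while IFS='=' read -r key value; do
    containers[$key]=$value
    # run ln inside a shell in the container so the link is created there
    sudo docker exec "$key" /bin/sh -c "ln -s /srv/my.cnf /etc/mysql/my.cnf"
done < "/opt/containers.txt"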
I am trying to create a shell script that will check for a new file and then copy it to a Docker container. The code I have so far is...
#!/bin/sh
source="/var/www/html/"
dest="dev_ubuntu:/var/www/html/"
inotifywait -m "/var/www/html" -e create -e moved_to |
while read file; do
sudo docker cp /var/www/html/$file dev_ubuntu:/var/www/html
done
But this code gives the following error:
Setting up watches.
Watches established.
"docker cp" requires exactly 2 argument(s).
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
What am I doing wrong?
Do you have spaces in your file names? Use double quotes to prevent the filenames from being split into separate words:
echo $file
sudo docker cp "$file" dev_ubuntu:"$file"
I've also echoed the file name to see what is happening.
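Note that with read file the variable holds the whole inotifywait event line (watched directory, event name, and file name), not just the file name. A minimal sketch that sidesteps this by asking inotifywait for the full path only, via its --format option, and quoting it:
#!/bin/sh
# print only the full path of each new file, then copy it into the container
inotifywait -m "/var/www/html" -e create -e moved_to --format '%w%f' |
while IFS= read -r file; do
    sudo docker cp "$file" dev_ubuntu:/var/www/html/
done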
I want to get rid of huge container log files in my Docker environment.
I have a problem finding them when running native Docker on a Mac. I am not using the docker-machine (VirtualBox) setup. My Docker version is 1.13.1.
When I do
docker inspect <container-name>
I see there is
"LogPath": "/var/lib/docker/containers/<container-id>/<container-id>-json.log
But there is not even directory /var/lib/docker on my mac (host).
I have also looked in
~/Library/Containers/com.docker.docker/
but didn't find any container specific loggings there.
I could use tail, but it is not always convenient for me.
So the question is: how can I clear the log files of my containers in my native Docker Mac environment?
The Docker daemon runs in a separate VM, so in order to clear the logs you should do the following steps:
First, you can find the log path inside the VM, with:
docker inspect --format='{{.LogPath}}' NAME|ID
You can connect to the VM with screen
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Here you can simply use output redirection to clear the log
> /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
And finally you can detach the screen by pressing Ctrl-a d.
I added the following to my .bash_profile.
It gets the log path for the Docker container, opens a screen session to the Docker VM, and deletes the log file.
clearDockerLog(){
  # extract the LogPath value from the docker inspect output
  dockerLogFile=$(docker inspect $1 | grep -G '\"LogPath\": \"*\"' | sed -e 's/.*\"LogPath\": \"//g' | sed -e 's/\",//g')
  rmCommand="rm $dockerLogFile"
  # open a detached screen session attached to the Docker VM's tty
  screen -d -m -S dockerlogdelete ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
  # type the rm command into that session and press enter
  screen -S dockerlogdelete -p 0 -X stuff $"$rmCommand"
  screen -S dockerlogdelete -p 0 -X stuff $'\n'
  # close the screen session
  screen -S dockerlogdelete -X quit
}
Use it as follows:
clearDockerLog <container_name>
This will remove all your Docker container logs on macOS:
echo "rm /var/lib/docker/containers/*/*.log" | nc -U -w 0 ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
This is the only solution that worked on macOS 10.14.
docker run -it --rm --privileged --pid=host NAME nsenter -t 1 -m -u -n -i -- sh -c 'truncate -s0 /var/lib/docker/containers/*/*-json.log'
Replace NAME with your container name.
Hope this helps.
This worked for me, at least from the command line: screen $(cat ~/Library/Containers/com.docker.docker/Data/vms/0/tty)
This might work better with the script if the above doesn't: screen /dev/ttys000
gist with more things to try
So I've written a Dockerfile for a project, and I've defined a CMD to run on starting the container to bootstrap the application.
The Dockerfile looks like:
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff like cloning the right repo and setting environment vars, as well as starting supervisor.
The start script begins with this:
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container I see that supervisor hasn't been started, and neither have nginx or php5-fpm. The /tmp/start.txt file with the timestamp written by the startContainer script doesn't exist, showing it never ran the CMD in the Dockerfile.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
Says 'run /bin/bash' after instantiating the container, i.e. it skips the CMD.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
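Once the container is running with its CMD intact, you can still get an interactive shell in it without replacing that command (a general Docker tip, not specific to this answer; substitute your own container name):
docker exec -it <container-name> /bin/bash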