H2O Driverless AI Install on GCP

I'm installing H2O Driverless AI on Google Cloud Platform on Ubuntu 16.04.
I'm following these instructions:
http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/UsingDriverlessAI.pdf
It goes well - or so I think - until step 15, the last one.
I type the following
docker run \
> --rm \
> -u `id -u`:`id -g` \
> -p 12345:12345 \
> -p 9090:9090 \
> -v `pwd`/data:/data \
> -v `pwd`/log:/log \
> -v `pwd`/license:/license \
> -v `pwd`/tmp:/tmp \
> opsh2oai/h2oai-runtime
And get:
mkdir: cannot create directory '/log/20180111-180304': Permission denied
20180111-180304 corresponds to the timestamp of the action.
When I run ls, here are the files and folders present on the virtual machine:
data demo driverless-ai-docker-runtime-rel-1.0.5.gz install.sh jupyter license log scripts tmp
I'd be keen to hear if you've encountered a similar error or understand what I am doing wrong.
I've also tried sudo docker run; similar outcome.

In this case, the command mounts the Docker host's `pwd`/log directory onto the container's /log.
That host folder must be writable by the user who launched the docker run command.
Alternatively, launch the container with sudo.
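A minimal sketch of that fix, assuming the data, log, license and tmp directories live in the current working directory, as in the command above:
# Create the bind-mounted directories and make them writable by the
# user the container runs as (-u `id -u`:`id -g` in the docker run above)
mkdir -p data log license tmp
sudo chown -R $(id -u):$(id -g) data log license tmp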

Related

Bcrypt docker passwd using --admin-passwd

What is wrong with the following command? It is intended to create a portainer container with admin passwd 'portainer':
docker run --rm -d --name "portainer" -p "127.0.0.1:9001:9000" -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer --admin-password='$2a$10$0PW6gPY0TSeYzry2RSakl.7VUVmzdmD6mQPcemiG6i2vfJGGGePYu'
It leads to a Portainer container that will deny access for 'admin', saying that passwd 'portainer' is invalid. Details:
I put it into a .bat file. It runs on Docker CE under Windows 10.
The longish crypt string within single quotes is a bcrypt equivalent of 'portainer', the designated admin password. I created and checked it here: https://www.javainuse.com/onlineBcrypt
Prior to running the command I stopped and removed an old portainer container, and even ran docker volume rm portainer_data.
Doubling the "$" to "$$" did not solve the issue.
The command is deeply inspired by the official portainer docs: https://documentation.portainer.io/v2.0/deploy/initial/
For now I have a simple workaround: simply drop the --admin-password parameter. Given that I grant a volume to portainer, I can just define a passwd at first start. However, I'd still prefer the script-only solution. Any ideas?
Here is the solution you need:
docker run --detach \
--name=portainer-ce \
-p 8000:8000 \
-p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /volume1/docker/portainer-ce:/data \
portainer/portainer-ce \
--admin-password="$(htpasswd -nb -B admin adminpwPC | cut -d ':' -f 2)"

chowning the host's bound `docker.sock` inside container breaks host docker

On a vanilla install of Docker for Mac my docker.sock is owned by my local user:
$ stat -c "%U:%G" /var/run/docker.sock
juliano:staff
Even if I add the user and group in my Dockerfile, when I try to run DinD as my own user, the mounted docker.sock is still created as root:root.
$ docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add staff \
--user $(id -u):$(id -g) \
"your-average-container:latest" \
/bin/bash -c 'ls -l /var/run/docker.sock'
srw-rw---- 1 root root 0 Jun 17 07:34 /var/run/docker.sock
Going the other way, running DinD as root, chowning the socket, then running commands breaks the host docker.
$ docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add staff \
"your-average-container:latest" \
/bin/bash
$ chown juliano:staff /var/run/docker.sock
$ sudo su juliano
$ docker ps
[some valid docker output]
$ exit
$ docker ps
Error response from daemon: Bad response from Docker engine
I've seen people reporting chowning as the way to go, so maybe I'm doing something wrong.
Questions:
Why does the host docker break?
Is there some way to prevent host docker from breaking and still giving my user permission to the socket inside docker?
I believe that when you mount the volume, the owner UID/GID is set to the same values as on the host machine (the --user flag simply runs the command as a specific UID/GID; it has no impact on the permissions of the mounted volume).
The main question is: why would you need to chown at all? Can't you just run the commands inside the container as root?
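If running as root is not an option, a commonly suggested alternative is to pass the socket's group id into the container instead of chowning it, so nothing on the host changes. A hedged sketch, assuming a Linux host with GNU stat (on Docker for Mac the socket actually lives inside the VM, so the numbers there differ):
# Read the GID that owns the socket and grant it to the container user
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
docker run -it --rm \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$DOCKER_GID" \
  --user "$(id -u):$(id -g)" \
  "your-average-container:latest" \
  docker ps
Note that --group-add accepts a numeric GID even if no matching group exists in the container's /etc/group.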

Run a Docker Container from a Desktop File Without Terminal GUI?

I have a couple of Docker images I've built for this and that; one for a scanner program, another for a browser, etc. Once I had them working, I created .desktop files that execute bash run scripts I've created to run a container from each image.
My question is: is there a way to run the .desktop file without the terminal GUI showing up? I've tried a couple of approaches with no success.
For instance, I've tried:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec=gnome-terminal -e "/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh"
Type=Application
Terminal=false
As well as:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec="/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh"
Type=Application
Terminal=true
Both of these execute the script just fine, of course; I'd just like it better if the application launched without a terminal window showing up first.
Host System:
CentOS 7 - Gnome 3 Desktop
One example of a run script:
#!/bin/bash
HOST_UID=$(id -u)
HOST_GID=$(id -g)
XSOCK=/tmp/.X11-unix &&
XAUTH=/tmp/.docker.xauth &&
touch $XAUTH &&
xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge - &&
#These are only run the first time a container is run from the image
#docker run -e NEW_USER="${USER}" -e NEW_UID="${HOST_UID}" -e NEW_GID="${HOST_GID}" hildy/gscan2pdf:v1
#LAST_CONTAINER=$(docker ps -lq) &&
#docker commit "${LAST_CONTAINER}" hildy/gscan2pdf:v1
docker run \
-ti \
--user $USER \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v $XAUTH:$XAUTH -v $XSOCK:$XSOCK -v /home/$USER:/home/$USER \
-e XAUTHORITY=$XAUTH -e DISPLAY \
--entrypoint "" hildy/gscan2pdf:v1 gscan2pdf &>/dev/null
I have found an answer to my question. The issue was that the command to run the container contained the -i option for an interactive terminal. @sneep was correct in the comments to the question when he said "It should work with Terminal=false." His technique of adding a line to the script to create a log file is also great; I will certainly use it in the future, and it helped me diagnose the issue.
I can also confirm that replacing -it with -d for detached mode, as suggested by @Oleg Skylar, works.
Amended Docker command for the run script:
docker run \
-t \
--user $USER \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v $XAUTH:$XAUTH -v $XSOCK:$XSOCK -v /home/$USER:/home/$USER \
-e XAUTHORITY=$XAUTH -e DISPLAY \
--entrypoint "" hildy/gscan2pdf:v1 gscan2pdf &>/dev/null
Amended .desktop file:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec=/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh
Type=Application
Terminal=false
StartupNotify=true
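For reference, the logging technique mentioned above is just a line at the top of the run script that records each launch, which makes silent .desktop failures much easier to diagnose; the log path here is illustrative:
echo "$(date): run_gscan.sh invoked" >> /tmp/run_gscan.log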

How to run cucumber/selenium tests in Docker?

I am struggling to run my cucumber tests from a Docker image.
Here is my setup:
I use OSX with XQuartz to run an X11 session
I use an Ubuntu 14 Vagrant image for development where I forward my X11 session
I am trying to run a docker image with Firefox that will use my XQuartz session for display
So far, I managed to start Firefox with the following setup:
# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox
# Replace 1000 with something appropriate ;)
RUN export uid=1000 gid=1000 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/dev:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
I can start Firefox with --net=host from my Vagrant machine:
docker build -t firefox .
docker run --net=host -ti --rm -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/home/developer/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix:rw firefox:latest
But this is not ideal because I can't link other containers to my machine in the docker-compose.yml file. Ideally, I would like to run my docker machine without --net=host like this:
docker build -t firefox .
docker run -ti --rm -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/home/developer/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix:rw firefox:latest
But I get the following error:
error: XDG_RUNTIME_DIR not set in the environment.
Error: cannot open display: localhost:10.0
Please help :)
You could simply use elgalu/docker-selenium and avoid dealing with a problem that has already been solved for you, and is maintained:
docker run --rm -ti --net=host --pid=host --name=grid \
-e SELENIUM_HUB_PORT=4444 -e TZ="US/Pacific" \
-v /dev/shm:/dev/shm --privileged elgalu/selenium
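Once the grid is up you can point your cucumber/selenium remote driver at http://localhost:4444/wd/hub. A quick smoke test of the hub, using the standard Selenium status endpoint:
curl -s http://localhost:4444/wd/hub/status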
If you need advanced features like a dashboard with video recording for example, or live preview, you can use Zalenium and start it with:
curl -sSL https://raw.githubusercontent.com/dosel/t/i/p | bash -s start -i

HDFS as volume in cloudera quickstart docker

I am fairly new to both Hadoop and Docker.
I have been working on extending the cloudera/quickstart Docker image's Dockerfile, and wanted to mount a directory from the host and map it to an HDFS location, so that performance is increased and data persist locally.
When I mount a volume anywhere with -v /localdir:/someDir everything works fine, but that's not my goal. When I do -v /localdir:/var/lib/hadoop-hdfs, both the datanode and namenode fail to start and I get: "cd /var/lib/hadoop-hdfs: Permission denied". And when I do -v /localdir:/var/lib/hadoop-hdfs/cache there is no permission denied, but the datanode and namenode (or one of them) still fail to start when the Docker image starts, and I can't find any useful information in the log files about the reason.
Maybe someone has come across this problem, or has some other solution for putting HDFS outside the Docker container?
I had the same problem and managed the situation by copying the entire /var/lib directory from the container to a local directory.
From a terminal, start the cloudera/quickstart container without starting all the Hadoop services:
docker run -ti cloudera/quickstart /bin/bash
In another terminal, copy the container's /var/lib directory to the local directory:
mkdir /local_var_lib
docker exec your_container_id tar Ccf $(dirname /var/lib) - $(basename /var/lib) | tar Cxf /local_var_lib -
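If you need the container id for the tar step above, something like this works (illustrative; it lists running containers created from the image):
docker ps -q --filter ancestor=cloudera/quickstart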
After all files are copied from the container to the local dir, stop the container and point /var/lib to the new target. Make sure the /local_var_lib directory contains the Hadoop directories (hbase, hadoop-hdfs, oozie, mysql, etc.).
Start the container:
docker run --name cloudera \
--hostname=quickstart.cloudera \
--privileged=true \
-td \
-p 2181:2181 \
-p 8888:8888 \
-p 7180:7180 \
-p 6680:80 \
-p 7187:7187 \
-p 8079:8079 \
-p 8080:8080 \
-p 8085:8085 \
-p 8400:8400 \
-p 8161:8161 \
-p 9090:9090 \
-p 9095:9095 \
-p 60000:60000 \
-p 60010:60010 \
-p 60020:60020 \
-p 60030:60030 \
-v /local_var_lib:/var/lib \
cloudera/quickstart /usr/bin/docker-quickstart
You should also run:
docker exec -it "YOUR CLOUDERA CONTAINER" chown -R hdfs:hadoop /var/lib/hadoop-hdfs/
