GCP Vertex AI Managed Notebook cannot use custom container - google-cloud-vertex-ai

In GCP Vertex AI, I created a Managed Notebook by specifying one of our custom containers which work perfectly with User-Managed Notebook kernels.
The Managed Notebook starts, and Jupyter Lab seems to work without any signs of error.
Unfortunately, if I look at the available kernels in Jupyter Lab, only the default kernels are listed but not my custom kernel.
An activity log entry on the right shows a spinning wheel "Loading kernel from [custom container]" which never disappears.
Taking a look at the terminal,
docker image ls
does not show the custom container either; obviously, it was not even pulled to the Managed Notebook.
If I perform
docker pull [custom container]
in the terminal to test connectivity to the Artifact Registry, it pulls the container correctly, as expected.
However, the custom kernel is still not visible in Jupyter Lab (even after a notebook restart).

I faced the same issue today, and after some experimentation I found that I had not fulfilled all the requirements in the Vertex AI docs.
One in particular: the Docker container image must support sleep infinity.
And my Dockerfile contained
EXPOSE 8080
ENTRYPOINT [ "jupyter", "lab", "--allow-root", "--ip", "0.0.0.0", "--config", "/opt/jupyter/.jupyter/jupyter_notebook_config.py" ]
To enable sleep infinity I replaced ENTRYPOINT with
ENTRYPOINT ["/bin/sh", "-c", "sleep infinity"]
then the kernelspec was imported, but the notebook wasn't able to connect.
So I removed the ENTRYPOINT entirely and then the kernelspec was imported and the notebook was able to connect successfully.
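For reference, a minimal sketch of how the end of a working Dockerfile can look, assuming the rest of the image already satisfies the other requirements (the base image here is hypothetical; use whatever your kernels are built on):
# Hypothetical base image carrying your kernel's dependencies
FROM gcr.io/deeplearning-platform-release/base-cpu
EXPOSE 8080
# No ENTRYPOINT: the managed notebook service starts the container itself,
# and the image must be able to idle (i.e. support sleep infinity).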
Hope that helps.

Related

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using latest Oracle VirtualBox, with Windows 7 as a host OS.
I am trying to enable non-TLS access to the Docker remote API, so I could use the Postman REST client running on Windows and hit the docker API running on the docker-machine in VirtualBox. I found that if the Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick, exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored so it can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive a docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine very insecure.
This solution disables most docker-machine functionality and all docker CLI functionality; docker-machine ssh remains operational, forcing you to SSH into the docker machine to run docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts Bash shell and is the place where all following commands will be run. Run docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
This starts the "vi" editor with the elevated privileges required to edit the "profile" file, where the Docker host settings live. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course: when the file is open, vi is not in editing mode. Press "i" to start edit mode; now you can make changes. Once you have made all your changes, hit Esc and then type ZZ to save them and exit vi. If you need to exit vi without saving, press Esc, type :q! and hit Enter; ":" turns on vi's command mode, and "q!" means quit without saving.)
Using vi, change DOCKER_HOST to be DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save changes as described above.
exit to leave the SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
This is the end of steps required to achieve the main goal.
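To make the profile edit above concrete, here is a sketch of the two relevant lines in /var/lib/boot2docker/profile after the change, plus a quick smoke test you can run from the host shell if curl is available (the IP is whatever docker-machine ip reported; 192.168.99.101 is just my value):
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no
curl http://192.168.99.101:2375/containers/json?all=1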
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use docker-machine ssh command and run your docker commands after that.
I encountered the same problem and, thanks to @VladH, got it working without changing any internal Docker profile properties. All you have to do is define the Windows local env variables correctly (or configure the maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default

Easiest way to connect with PuTTY to an existing docker container

Often I come across this situation:
I have an existing docker container, running a certain service, usually set up from a Dockerfile from Github, etc., usually based on Ubuntu
I am able to run commands inside this container (with docker exec or by setting an entrypoint), including sh
Interactive commands like vi, nano, aptitude or mc don't work because of the buggy terminal of Docker Toolbox, with errors ranging from broken arrow keys to garbled characters to outright crashes.
Now the question:
Can I run anything inside my container to connect to a machine with a proper terminal? For example I could SSH into the docker host, so maybe I can run something there that the container can connect to?
I tried mosh, but it seems the mosh client does not run a shell by itself, but instead tries to forward to sshd, which the container doesn't have.
Docker is used to create lightweight containers that can run a service with as few resources as possible. At the same time, docker does not limit what code, apps or utilities you can run inside. That being said, if you want to connect to the container as you would to other Linux servers, via ssh, you need to make sure the image contains and runs an SSH server such as openssh-server, and that you publish the port, normally port 22, when you execute the 'docker run' command.
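As a sketch, adapted from the classic "Dockerize an SSH service" example in the Docker docs (the root password and the host port are placeholders, and the sed lines match Ubuntu 14.04 defaults):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Placeholder password; change it
RUN echo 'root:changeme' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# Prevents the session from being killed right after login on some setups
RUN sed -i 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' /etc/pam.d/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build and run it with docker build -t sshd_test . and docker run -d -p 2222:22 sshd_test, then point PuTTY at the docker host's IP (docker-machine ip) on port 2222 and log in as root, bypassing the Docker Toolbox terminal entirely.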

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a confound though that I am running on a windows machine.
Their "getting started" documentation in github says you have to run Linux to use kubernetes.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options for running it in a Vagrant instance. Does that mean docker, etcd, and kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything docker related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try on a full-fledge Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized Kubernetes.
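For context, the kubelet container above was only one piece; the "Running Kubernetes locally via Docker" guide of that era also brought up etcd and the service proxy as containers, roughly like this (version tags are from that guide and long outdated):
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2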
If you're able to run docker on windows, it would probably work. I haven't tried it on windows personally.

What are the best practices for using Docker for front-end development on OS X, and how to pass ENV from host to container

I am looking for best practices for front-end development on OS X with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is, I need two-way syncing of files between the host system and the container via a mounted (synced) folder, and IO performance should be close to native. Therefore I am not considering shared-folder filesystems like vboxsf and vmhgfs. I also need some build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried unison, and what performance does Docker provide with it?
Finally, I have a special task: I want to run app.js via nodejs using a host-to-container ENV, if that is possible. In other words, I need to add an ENV variable with the PATH to nodejs (within the virtual container) to my ~/.bash_profile. Is there any way to pass NODE_PATH through from the host to the container at all?
Thanks.
Not sure if asking for "best practices" amounts to asking for opinions (which is against SO policy); note that this also heavily depends on your tool chain.
I'm not a fan of boot2docker as it works to date (although it may improve and it may be the best approach in the long term as it is the official approach maintained by the docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OS X devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCoreLinux, which is another Linux distribution I'd have to learn while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a node developer, start from the node image, npm install gulp/browserify/.. whatever you need as a base image for your projects, & docker build -f dev.Dockerfile -t my_dev_container . (a sketch follows below)
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
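For the curious, a minimal sketch of what such a dev.Dockerfile could look like (the base image tag and the packages are illustrative, not prescriptive):
# dev.Dockerfile: toolchain only; sources arrive via the volume mount
FROM node:0.12
RUN npm install -g gulp browserify
WORKDIR /src/app
CMD ["/bin/bash"]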
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll have to have an IDE that can use the remote server, or you have to run your IDE within the dev container (Cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
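As for the NODE_PATH part of the question: there is no automatic passthrough, but you can forward any host variable explicitly with -e at run time, e.g. (values illustrative):
docker run -it -v /mnt/from/host/:/src/app/ -e NODE_PATH="$NODE_PATH" my_dev_container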

dockerizing an application on Mac OS X

I installed boot2docker as explained on the docker website. Here are some command runs to show that I have things installed correctly:
$$:~ kv$ boot2docker start
Waiting for VM and Docker daemon to start...
...................ooo
Started.
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
$$:~ kv$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.04 b39b81afc8ca 11 days ago 188.3 MB
hello-world latest e45a5af57b00 3 weeks ago 910 B
After this, I ran the following command:
docker run -t -i ubuntu:14.04 /bin/bash
Inside the container, I installed zeromq, and started a zeromq server on port 5555 using tcp.
My questions are following:
If I exit out of the container, will it save all the work I do inside it?
I have no idea how to connect to the server running on port 5555. I read something about exposing a port, but I am not sure how to go about doing that. I did an ifconfig inside the container, and tried to connect to the server from the host like this:
$$:~ kv$ ./zmq_client tcp://container_ip:5555
This did not work. Can someone please list the steps I need to take to connect to the server running within the container?
For completeness' sake, I am providing the list of my environment variables:
TERM_PROGRAM=Apple_Terminal
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/km/5kbpdx4s7cg4rmyc6d5q9l9r0000gq/T/
DOCKER_HOST=tcp://192.168.109.103:2376
Apple_PubSub_Socket_Render=/tmp/launch-1tWMHJ/Render
TERM_PROGRAM_VERSION=326
OLDPWD=/Users
TERM_SESSION_ID=262CBC8B-0A74-4B70-9F28-D9FA51FF713C
USER=kv
SSH_AUTH_SOCK=/tmp/launch-ZTWNGL/Listeners
__CF_USER_TEXT_ENCODING=0x1F7:0:0
DOCKER_TLS_VERIFY=1
__CHECKFIX1436934=1
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
PWD=/Users/kv
DOCKER_CERT_PATH=/Users/kv/.boot2docker/certs/boot2docker-vm
HOME=/Users/kv
SHLVL=1
LOGNAME=kv
LC_CTYPE=UTF-8
DISPLAY=/tmp/launch-rco9zt/org.macosforge.xquartz:0
_=/usr/bin/env
One last question I have is about performance. Within my Mac OS X, I have a docker container running (which runs Ubuntu). If I run an application, like a zeromq-based server, inside the container, won't it be slower compared to running it on Mac OS X directly? Please explain the benefits of using docker in such a scenario.
You should really do some more reading and research before turning to SO, then ask about anything you can't figure out. But:
No. If the container is "exited" you can restart it and your files will still be there, but once it is removed your files are gone. You can use docker commit to save them to an image, but the best bet is to use a Dockerfile.
docker run -p 5000:8000 image will expose port 8000 in the container as port 5000 on the host (see the sketch below).
Yes, it will be slower due to the boot2docker VM. It would not be slower if you were running on a Linux host. The advantage is that zeromq is now running in an isolated container with all its dependencies.
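Applied to the zeromq case above, a hedged sketch (the image name and server path are hypothetical): commit the prepared container to an image, run it with the port published, and connect through the boot2docker VM's IP rather than the container's:
docker commit <container_id> zmq_server_image
docker run -d -p 5555:5555 zmq_server_image /path/to/zmq_server
./zmq_client tcp://$(boot2docker ip):5555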
