Unable to install CNTK on Mac - macos

First of all, I use macOS (High Sierra 10.13.6) and have Docker installed.
I ran the following commands in the shell:
docker pull microsoft/cntk:2.5.1-cpu-python3.5
docker run -d -p 8888:8888 --name cntk-jupyter-notebooks -t microsoft/cntk
docker exec -it cntk-jupyter-notebooks bash -c "source /cntk/activate-cntk && jupyter-notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/cntk/Tutorials --allow-root"
Then I received the following output:
++++++++++++++++++++++++++++++++++++++++++
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://0.0.0.0:8890/?token=ecd25750b4aaf098140ee3ccf7941c4414a36efebf67fcb0
++++++++++++++++++++++++++++++++++++++++++
A few weeks ago, all I had to do was paste the URL into my browser and CNTK would run. But now when I paste it, it only takes me to the (ordinary) Anaconda Jupyter Notebook, and I can't import cntk at all.
Please help!
Reference : https://learn.microsoft.com/en-us/cognitive-toolkit/cntk-docker-containers
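Two things worth checking (assumptions on my part, not confirmed by the output above): the pull uses the 2.5.1-cpu-python3.5 tag, but the run command uses the bare microsoft/cntk name, which resolves to microsoft/cntk:latest and may start a different image; and the token URL points at port 8890 while the container maps 8888, which could mean a different Jupyter instance is answering. A minimal sketch for checking what is actually running and restarting the container from the exact tag that was pulled:
# see which containers exist and which ports they publish
docker ps -a
# remove the old container (if any) and run the pulled tag explicitly
docker rm -f cntk-jupyter-notebooks
docker run -d -p 8888:8888 --name cntk-jupyter-notebooks -t microsoft/cntk:2.5.1-cpu-python3.5
Then repeat the docker exec step above to start the notebook server on port 8888.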

Related

Can't run docker from terminal

I can't run the ubuntu image from the terminal. When I run the ubuntu image with the Docker Desktop app it works, but from the terminal it does not.
Terminal:
docker run -d -p 80:80 docker/getting-started - works ok
docker run -d -p 80:80 ubuntu - not working
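A hedged guess, not stated in the question: docker/getting-started runs a web server that keeps the container alive, while the bare ubuntu image's default command is bash, which exits immediately when started detached without a TTY, so the container looks like it is "not working". A quick way to confirm, and to keep an ubuntu container alive (<container-id> is a placeholder):
# the ubuntu container will most likely show status "Exited"
docker ps -a
# -it allocates a TTY and keeps bash alive in the background
docker run -dit -p 80:80 ubuntu
# attach a shell to the running container
docker exec -it <container-id> bash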

Unable to open remote display on Mac when running Docker

I have a Dockerfile written as below:
FROM joesan/raspi_opencv_3:latest
RUN apt-get update
RUN sudo apt-get install --no-install-recommends xserver-xorg
RUN sudo apt-get install --no-install-recommends xinit
RUN apt-get install -qqy x11-apps
RUN mkdir -p /raspi_motion_detection/project
WORKDIR /raspi_motion_detection/project
COPY ./ $WORKDIR/
COPY ./requirements.txt $WORKDIR/
ADD . $WORKDIR
CMD xclock
I have a Raspberry Pi to which I ssh from my Mac (running High Sierra).
Here is what I do:
I ssh into the RaspPi from my Mac
I execute the docker command using:
docker run -ti --device=/dev/vcsm \
--device=/dev/vchiq \
-e DISPLAY=$DISPLAY:0 \
-e XAUTHORITY=/.Xauthority \
-v /tmp/.X11-unix:/tmp/.X11-unix \
joesan/motion_detector
I get an error message as below:
Error: Can't open display: localhost:11.0:0
But when I just run xclock directly on the ssh terminal, I can see that the xclock window opens up.
So I could not understand why running xclock from within a Docker container would fail to open the display. Any reasons? I also came across the post below and followed what is described there, but I could not get it to work:
https://medium.com/@dimitris.kapanidis/running-gui-apps-in-docker-containers-3bd25efa862a
A bit simplified: each Docker container runs inside the Docker daemon, which basically provides a stripped-down OS to each container. That OS has no window manager.
That is why the xclock command inside a Docker container exits with an error.
When you connect via ssh to your Raspberry Pi and call xclock, it is executed inside the Raspberry Pi's OS (probably Raspbian), which has a running window manager.
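In other words, xclock needs a reachable X server, and the container itself does not provide one. A minimal sketch of my own (assuming an X server is actually running on the Raspberry Pi itself at display :0, not just the ssh-forwarded display) that hands the host's X socket to the container:
# on the Raspberry Pi host
xhost +local:      # allow local (non-network) clients to use the X server
docker run -it \
  -e DISPLAY=:0 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  joesan/motion_detector xclock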
OK! So I think I found the solution to my problem! Here is what I did:
Re-installed Raspbian Stretch Lite on my SD card. The old installation seems to have accumulated some stale files! You can skip this step, but for me there were some corrupt files on the old installation, so I decided to do a fresh install.
On my Raspberry Pi, run the following command:
xauth list
I copied the cookie to a local text editor, as I would need it later.
Removed the xclock command from the Dockerfile that I originally had!
Ran the image using the following command:
docker run -it --net=host --device=/dev/vcsm --device=/dev/vchiq -e DISPLAY -v /tmp/.X11-unix joesan/motion_detector bash
Notice that I'm appending bash to my docker run command so that I can get a bash prompt from the running image.
The result of step 3 is a bash prompt inside the container I just started.
I now need to install xauth in the image:
apt-get install xauth
I then add the xauth cookie from step 0 (see the sketch below).
And after this, bang, I got what I wanted!
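For completeness, the "add the xauth cookie" step inside the container would look roughly like this; the display name and cookie value are placeholders to be copied from the xauth list output gathered on the Pi earlier:
# inside the container's bash prompt, after installing xauth as above
xauth add <display-from-xauth-list> MIT-MAGIC-COOKIE-1 <cookie-hex>
export DISPLAY=<matching-display-number>   # e.g. :10 if the xauth entry ends in :10
xclock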

Running a docker image on Windows results in "oci runtime error: exec: "bash": executable file not found in $PATH"

I'm running Docker on Windows ("Docker Toolbox", not "Docker for Windows").
I've built an image with a Rails app inside. It works properly on my Mac but gets stuck in production on Windows.
Using Docker 1.12 and docker-machine 0.8.0 on both machines.
When I create a machine and try to run the container from image, I do:
docker run -it myRepo:myTag bash
which opens an interactive terminal on macOS, but Windows 7 and Windows Server 2011 both respond with:
"Error response from daemon: oci runtime error: exec: "bash":
executable file not found in $PATH."
I use the MINGW64 shell via the Docker Quickstart Terminal, but the old cmd.exe returns the same error.
Can anybody help me with this issue? I've spent several hours trying to find a solution, but there are too few answers for Windows.
Thank you in advance!
I also use Windows 7 with MINGW64. Here is what I get using nginx as an example:
$ docker run -it nginx bash
cannot enable tty mode on non tty input
I don't think you can open a tty using MINGW64.
You can try:
$docker run -i nginx bash
ls
bin
...
You will see no prompt or any other indication that you are inside the container. Just run ls and it should work inside your container.
Another option is to try to use winpty for the tty:
$ winpty docker run -it myRepo:myTag bash
root@644f59e6f818:/#
Have you tried?
$ winpty docker run -it myRepo:myTag /bin/bash
I haven't had the problem you are mentioning, but I have seen it before when I was mapping volumes.
If you are mapping volumes using MINGW64, you will need to add an extra / before the local mapping. For example:
docker run -p 8080:80 -v "/$PWD":/var/share/nginx/html nginx
Let me know your findings.
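If winpty turns out to be the fix, one common convenience (my addition, not from this thread) is to alias it in the MINGW64 shell so interactive docker commands pick it up automatically:
# add to ~/.bashrc in the MINGW64 / Docker Quickstart Terminal
alias docker='winpty docker'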

Installing elasticsearch docker image fails on command not found

I'm trying to install Elasticsearch from an image I created on another machine; however, when I create the container it says that the command is not found. I understand that the command is not exported as part of the image, but I can't find the right command to specify to make it work.
On machine A I create the image like this:
sudo docker pull elasticsearch
sudo docker save -o "elastic.image.tar" elasticsearch
On machine B I import the image and try to run it:
sudo docker import "elastic.image.tar"
sudo docker run -d elasticsearch
On docker run I get:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
I've also tried the following commands:
sudo docker run -d elasticsearch elasticsearch
sudo docker run -d elasticsearch "/docker-entrypoint.sh elasticsearch"
sudo docker run -d elasticsearch "/usr/share/elasticsearch/docker-entrypoint.sh elasticsearch"
sudo docker run -d elasticsearch "/bin/bash"
None of them worked; all returned a response like:
docker: Error response from daemon: Container command 'XXX' not found or does not exist..
What is the right command to specify here?
Can you try docker load -i elastic.image.tar and then call docker run elasticsearch? Check this question to see the differences between save/load and export/import.
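For reference, a minimal sketch of the save/load round trip the answer suggests, using the image and file names from the question:
# machine A: save keeps the image metadata (entrypoint, CMD, layers)
sudo docker pull elasticsearch
sudo docker save -o elastic.image.tar elasticsearch
# machine B: load instead of import, so the command/entrypoint are preserved
sudo docker load -i elastic.image.tar
sudo docker run -d elasticsearch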

How to run a meteor app in docker on mac?

I am trying to run a Meteor app in a Docker container on my Mac using the meteord base image, but I get a
=> You don't have an meteor app to run in this image.
error message when I run:
$ docker run -it -e ROOT_URL=http://localhost -e MONGO_URL=mongodb://192.168.99.101:27017/meteor -v /Users/me/build/bundle -p 8080:80 meteorhacks/meteord:base
I built the Meteor bundle with:
$ meteor build --architecture=os.linux.x86_64 ./
Can I use meteord on the mac?
As you can see in base/scripts/run_app.sh#L3-L21, that error message pops up when there is no /bundle, $BUNDLE_URL or /build_app path in the container.
And -v /Users/me/build/bundle isn't enough to declare a /bundle path in the container: you need to map it (mount a host directory):
-v /Users/me/build/bundle:/bundle
-v /Users/me/build/bundle alone declares a data volume; it doesn't mount anything from the host.
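Putting that together, the run command from the question with the host directory actually mounted onto /bundle would look roughly like this (paths and URLs taken from the question):
$ docker run -it \
  -e ROOT_URL=http://localhost \
  -e MONGO_URL=mongodb://192.168.99.101:27017/meteor \
  -v /Users/me/build/bundle:/bundle \
  -p 8080:80 \
  meteorhacks/meteord:base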
