Chronos: setting an environment variable leads to error - mesos

I tried this both on the chronos config and in my job definition:
"environmentVariables": [
  {
    "name": "DOCKER_API_VERSION",
    "value": "$(docker version --format '{{.Server.Version}}')"
  }
],
It always fails with:
docker: Error response from daemon: 404 page not found.
See 'docker run --help'.
The reason I'm trying to set that variable is that I'm running Docker-in-Docker, and the Docker client API sometimes has a different version than the server, so the client has to be started with the DOCKER_API_VERSION environment variable set in order to work.
I suspect it's because the value is set as a literal string rather than being evaluated.
In the logs I can see the command runs as expected, and to be honest I don't know why it crashes:
docker run ... -e DOCKER_API_VERSION=$(docker version --format '{{.Server.Version}}') ...
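A plain-shell sketch of what is likely going wrong: nothing between Chronos, Mesos, and `docker run -e` ever evaluates the `$( )`, so the container receives the literal text. One workaround is to compute the value inside the container at start-up (the `echo 1.41` below is a stand-in for the real `docker version` call available there):

```shell
# The JSON value is passed verbatim: docker run -e hands the container the
# literal text, because no shell expands the $( ) along the way.
env DOCKER_API_VERSION='$(docker version --format "{{.Server.Version}}")' \
    sh -c 'echo "container sees: $DOCKER_API_VERSION"'

# Workaround sketch: let a shell inside the container compute the value
# ("1.41" is a placeholder for the real `docker version` output).
sh -c 'export DOCKER_API_VERSION="$(echo 1.41)"; echo "computed: $DOCKER_API_VERSION"'
```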

NEAR Mainnet Archival Node Setup

I tried setting up the NEAR mainnet archival node using Docker by following this documentation: https://github.com/near/nearup#building-the-docker-image. The docker run command in the document does not specify any port.
So I also ran docker run without any port; when I check with docker ps it does not show any port, but the neard node runs.
I did not find any docs on the node APIs. Can we use the archival APIs (https://docs.near.org/docs/api/rpc) to query the node?
Docker run command used to set up archival mainnet node:
sudo docker run -d -v $PWD:/root/.near --name nearup nearprotocol/nearup run mainnet
The JSON RPC interface on nearcore is exposed on port 3030.
As for running an archival node, you might be interested in this doc page: https://docs.near.org/docs/roles/integrator/exchange-integration#steps-to-start-archive-node
P.S. nearup is considered somewhat dated, though it is still in use.
I have now updated the nearup documentation to specify the port binding for RPC: https://github.com/near/nearup#building-the-docker-image
You can use the following command:
docker run -v $HOME/.near:/root/.near -p 3030:3030 --name nearup nearprotocol/nearup run mainnet
And you can validate nearup is running and the RPC /status endpoint is available by running:
docker exec nearup nearup logs
and
curl 0.0.0.0:3030/status
Also, please make sure that you have changed ~/.near/mainnet/config.json to contain:
{
  ...
  "archive": true,
  ...
}
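To answer the query question: yes, once the node exposes port 3030 you can hit it with ordinary JSON-RPC 2.0 POSTs, as described in the linked RPC docs. A minimal request body for the latest final block might look like this (the curl line assumes the node from above is running locally, so it is left commented):

```shell
# JSON-RPC 2.0 request body for the `block` method (per the NEAR RPC docs).
payload='{"jsonrpc":"2.0","id":"dontcare","method":"block","params":{"finality":"final"}}'
printf '%s\n' "$payload"

# With the archival node running locally:
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" http://localhost:3030
```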

Docker Command Issue Running Outside of Bash

I have a Docker container that handles an application. I am attempting to write tests for my system using npx and NightwatchJS.
I use CI, and to run the tests for my entire suite I run docker-compose build, then run commands from outside the container like so:
Example of backend python test being called (this works and is run as expected):
docker-compose run --rm web sh -c "pytest apps/login/tests -s"
Now I am trying to run an npx command to do some front-end testing, but I am getting an error I cannot seem to diagnose:
Error while running .navigateTo() protocol action: An unknown server-side error occurred while processing the command. – unknown error: net::ERR_CONNECTION_REFUSED
Here is that command:
docker-compose run --rm web sh -c "npx nightwatch apps/login/tests/nightwatch/login_test.js"
The odd part of this is that if I go into bash:
docker-compose exec web bash
And then run:
npx nightwatch apps/login/tests/nightwatch/login_test.js
I don't get that error, since I'm inside bash.
This leads me to believe the problem is with the command itself. Can somebody please help with this?
Think of containers as separate computers.
When you run pytest apps/login/tests -s on your computer and I run npx nightwatch apps/login/tests/nightwatch/login_test.js on my computer, my computer will surely not connect to yours; I will get a "connection refused" kind of error.
With docker run you start a separate new "computer" that runs that command: it has its own PID space, its own network address, and so on. Then, inside "that computer", you can execute another command with docker exec. For your commands to connect via localhost, you have to run them on the same "computer".
So when you run docker run with the client, it does not connect to a separate docker run. Either specify the correct IP address or run both commands inside the same container.
I suggest researching how Docker works; the above is a very crude oversimplification.
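The isolation described above can be seen with two plain sh -c invocations standing in for two docker-compose run commands: state from the first simply does not exist in the second, just as a server listening in one container is unreachable via localhost from another. The practical fix is to start the app and the tests in the same command (the service name web is from the question; the server command is a placeholder):

```shell
# Two separate invocations behave like two separate "computers":
sh -c 'MY_SERVER=up; echo "first run sees: ${MY_SERVER:-nothing}"'
sh -c 'echo "second run sees: ${MY_SERVER:-nothing}"'

# Fix sketch -- one "computer" for both processes:
# docker-compose run --rm web sh -c "./start-server & npx nightwatch apps/login/tests/nightwatch/login_test.js"
```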

Cannot build Docker image after running Minikube docker-env

I'm using Minikube on Windows 10 and I'd like to use locally built Docker images instead of images hosted in a registry, so, according to this tutorial, I have to run the following commands:
Use local kubernetes and images:
> minikube docker-env
The output is:
PS C:\WINDOWS\system32> minikube docker-env
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://10.98.38.126:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\MyUser\.minikube\certs"
# Run this command to configure your shell:
# & minikube docker-env | Invoke-Expression
To configure the shell, run this:
> & minikube docker-env | Invoke-Expression
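For context, minikube docker-env only prints shell statements; the pipe to Invoke-Expression (or eval in POSIX shells) is what actually applies them to the current session. Simulated below with a POSIX shell and the address from the output above:

```shell
# Simulate what `minikube docker-env | Invoke-Expression` does: the command
# prints assignments, and eval executes them in the current shell.
out='export DOCKER_HOST="tcp://10.98.38.126:2376"'
eval "$out"
echo "DOCKER_HOST is now: $DOCKER_HOST"
```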
After that, I need to build a new image:
PS D:\repos\test> docker build -t miImage:v1 .
And I get the following error:
PS D:\repos\test> docker build -t miImage:v1 .
Sending build context to Docker daemon 8.62MB
Step 1/10 : FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
Get https://mcr.microsoft.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
This error started after I configured Docker to use local images. Is there any way to fix it?
It looks like the machine that you're using to build is unable to reach https://mcr.microsoft.com/v2/. To confirm that, try sending a simple GET to the URL:
wget https://mcr.microsoft.com/v2/
If that's the problem, you can use a different machine to pull the image, then save it to a file and load it on the target machine.
#on a machine connected to internet
docker pull mcr.microsoft.com/dotnet/core/sdk:2.2
docker save mcr.microsoft.com/dotnet/core/sdk:2.2 > dotnetsdk2_2.tar
# download the file
# on the target machine
docker load < dotnetsdk2_2.tar
Then your build should work without a problem, using the local version of the image.

Error "docker: invalid publish opts format" running Graphviz Docker container on macOS

I'm completely new to docker and am using it for the first time.
I have installed Docker Desktop for macOS and run the 'hello-world' container successfully. I am now trying to run 'omerio/graphviz-server' from https://hub.docker.com/r/omerio/graphviz-server (which is what I really want Docker for), and although the docker pull omerio/graphviz-server command completes successfully:
devops$ docker pull omerio/graphviz-server
Using default tag: latest
latest: Pulling from omerio/graphviz-server
863735b9fd15: Pull complete
4fbaa2f403df: Pull complete
44be94a95984: Pull complete
a3ed95caeb02: Pull complete
ae092b5d3a08: Pull complete
d0edb8269c6a: Pull complete
Digest: sha256:02cd3e2355526a927e951a0e24d63231a79b192d4716e82999ff80e0893c4adc
Status: Downloaded newer image for omerio/graphviz-server:latest
the command to start the container (given on https://hub.docker.com/r/omerio/graphviz-server): 'docker run -d -p : omerio/graphviz-server' gives me the error message:
devops$ docker run -d -p : omerio/graphviz-server
docker: invalid publish opts format (should be name=value but got ':').
See 'docker run --help'.
Searching for this error message returns no information at all. I see that the container in question was last updated over 3 years ago - could it be an old format that Docker no longer supports?
The -p option of the docker run command binds ports between the host and the container (see the docs), and its usage is most of the time the following:
docker run <other options> \
  -p <port on the host>:<port in the container> \
  <my_image> <args>
As for your example: it seems that running the image requires an argument (the port inside the container). Let's choose 8080, for example (that means port 8080 will be used by the application inside the container).
If you want to access it directly on your host (via localhost), you should bind port 8080 (the container port we chose previously) to any available port on your host (let's say 8081), like this:
docker run \
  -p 8081:8080 \
  omerio/graphviz-server 8080
You should now be able to access the application (port 8080 of the application running in the container) from your host via localhost:8081.

Docker complaining about ALL_PROXY environment variable with "proxy: unknown scheme: http"

I'm facing the following issue with my Docker containers: When I try to enter the container using
docker exec -it container-id /bin/bash
Docker (I assume it's Docker) complains with the following message:
proxy: unknown scheme: http
I have traced this back to the following environment variable that's set on my host machine, since I'm using a proxy server to access the web:
ALL_PROXY=http://myproxy:8080
The error message seems to come from the net/proxy.go file, which can be found here - the error message is issued on the last line of the file. Why would http not be a registered URL scheme in the Docker case?
As soon as I unset ALL_PROXY on the host, I can enter the container without any issues.
Environment:
Mac OS X v10.11.5
Docker v1.11.1
Docker-Machine v0.7.0
Any idea how to fix this (other than unsetting the variable each time)?
I am facing the same issue with Docker 1.11.2. I believe the error is coming from the FromURL method.
After checking the relevant commit, https://github.com/docker/docker/commit/16effc66c028a7800096ed92174ca4bceba229ad, it turns out that versions from v1.11.0-rc1 up to v1.12.0-rc4 include this commit.
So the solution for me was to install a lower version of Docker Toolbox (I used v1.10.3), after which docker run hello-world works.
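If downgrading is not an option, another workaround (our own suggestion, not from the thread) is to strip ALL_PROXY for Docker invocations only, using env -u, rather than unsetting it globally:

```shell
# env -u removes a variable for one child process only; the shell's copy stays.
ALL_PROXY=http://myproxy:8080
export ALL_PROXY
env -u ALL_PROXY sh -c 'echo "docker would see: ${ALL_PROXY:-unset}"'
echo "shell still has: $ALL_PROXY"

# Wrapper sketch (the function name and shape are ours; env execs the real
# docker binary, so the function does not recurse):
# docker() { env -u ALL_PROXY docker "$@"; }
```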
