Failing to start tasks/services in Docker Swarm: hnsCall failed in Win32: The parameter is incorrect - Windows

I am trying the Docker Get Started tutorial, Part 3 (Services). At the point where I init a swarm and deploy a stack, all my services show a Rejected status:
The full error (using --no-trunc) is:
hnsCall failed in Win32: The parameter is incorrect. (0x57)
Here are the steps I am doing:
I ensured my image is correct (docker run works fine; I accessed localhost:4000 successfully), then stopped the container to make sure it does not interfere.
When I init the swarm, it says I have multiple addresses, so I picked one with --advertise-addr (I tried each of them, with the same result).
docker stack deploy works, but when I check the status with docker service ps, none of the tasks are up, and nothing is listening on localhost:4000.
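For reference, the sequence of commands is roughly the following (the stack name getstartedlab and the compose file come from the tutorial; the IP is a placeholder for one of my addresses):
docker swarm init --advertise-addr 192.168.1.10
docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab --no-trunc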
Note: I switched Docker to a Windows container.
I am new to Docker and this is beyond me. Can anyone suggest a solution or a way to debug this?

I tried everything but could not get it to run with Windows containers, so I switched back to Linux containers. Get Started Part 3 then runs fine.

Related

Docker Windows and Linux Containers Simultaneously

How do Docker Windows and Linux containers run simultaneously (how does it work), and why does building not work?
I am trying to run mixed Windows and Linux containers using Docker Compose.
I followed this article and it worked (using Compose file version 2.4 and the latest Docker Desktop).
Can someone explain how this works? Does it work only with WSL, or can I also use it with the Linux Hyper-V backend?
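For context, a minimal Compose file of the kind that article describes might look like this (the service names and images here are placeholders of my own; the per-service platform key is what requires Compose file version 2.4):
version: "2.4"
services:
  web-linux:
    image: nginx:latest                                # runs as a Linux container
    platform: linux
  app-windows:
    image: mcr.microsoft.com/windows/nanoserver:1809   # runs as a Windows container
    platform: windows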
When I try to build a new Linux image (pulling works fine), I get an error: some commands give me Invalid signal: WINCH and some give me
returned a non-zero code: 4294967295:
failed to shutdown container: container ad12191abf0849d5e49bb5dc0570d6ba8eaf2cc5b4e7d77127ed381901fcb672
encountered an error during hcsshim::System::waitBackground: failure in a Windows system call:
The virtual machine or container with the specified identifier is not running. (0xc0370110):
subsequent terminate failed container ad12191abf0849d5e49bb5dc0570d6ba8eaf2cc5b4e7d77127ed381901fcb672
encountered an error during hcsshim::System::waitBackground: failure in a Windows system call:
The virtual machine or container with the specified identifier
Has anyone solved this?

Error syncing pod when starting a Beam - Dataflow pipeline from Docker

We consistently get an error when starting our Beam Golang SDK pipeline (driver program) from a Docker image, although the same program works when started locally or from a VM instance. We use the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set to the service account for our GCP cluster. When running the job locally, it gets submitted to Dataflow and completes successfully.
DOCKER SETUP:
The build image is FROM golang:1.14-alpine. When we package the same program with a Dockerfile and try to run it, it fails with the error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-
01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"),
skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff:
"back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-
1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what's failing. After multiple retries, we were able to rule out any permission or access related issues with the pods. We are not sure what else could be the problem here.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This made us realize that the alpine-based golang image we were using in Docker may not have all the dependencies required to start the job.
On the golang Docker Hub page, we found golang:1.14-buster, where buster is the codename for Debian 10. Using that for the docker build solved the issue. Self-answering here to help anyone else facing the same issue.
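A minimal sketch of the fix, assuming a straightforward single-module build (the /bin/worker path matches the fork/exec error above; everything else is a placeholder):
FROM golang:1.14-buster
WORKDIR /src
COPY . .
# Build the driver binary at the path the fork/exec error points to
RUN go build -o /bin/worker .
ENTRYPOINT ["/bin/worker"]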

Cannot forward ports when running a Linux container on Windows 10 as the host

I'm new to Docker. I have been trying to deploy a Linux container (with Windows as the host) with a Google Cloud image inside, using Docker. I'm able to do everything well; at the end the server is running perfectly. But when I check the server, using localhost in the browser, I get a blank page:
(screenshot: blank page)
This is the Dockerfile:
FROM google/cloud-sdk
ENV PATH /usr/lib/google-cloud-sdk/bin:$PATH
WORKDIR docker_folder
COPY local_folder/ .
RUN pwd
EXPOSE 8080
CMD ["java_dev_appserver.sh", "."]
This is the command I'm using to build my image (in the Windows command prompt):
docker build --tag serverdeploy .
This is the command I'm using to run my container
docker run -p 8080:8080 serverdeploy
This is the stack trace I get when I run the server, which shows that the server is running.
I did some research, and it looks like Docker had a problem with ports when you use a Linux container on Windows (I'm not sure whether it has been solved yet). I've already tried all the possible solutions I found out there (even replacing 'localhost' with each of the IPs I get when I run ipconfig in cmd), but I still get the same error.
As a last hope, I need your help to understand what I'm doing wrong, or whether I'm missing something.
You are running your service bound to localhost, which means no remote connections are accepted (the same goes for binding to 127.0.0.1), and for your container the host is a remote connection.
Change the binding to 0.0.0.0 and enjoy.
By the way, sharing your java_dev_appserver.sh would be helpful for answering the question.
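If the script passes its arguments through to the App Engine dev server, the fix could be as small as this change to the Dockerfile's CMD; note that the --address flag name is an assumption based on the Java dev appserver and may differ for your script:
# --address is an assumption; check the flags your java_dev_appserver.sh accepts
CMD ["java_dev_appserver.sh", "--address=0.0.0.0", "."]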

Docker on Win10 - is it possible to have multiple terminals?

I am new to Docker and installed Docker Toolbox on Windows 10. I managed to run some basic Docker examples, but I cannot figure out how to access the same container from multiple command-line windows.
A quick online search suggested finding the container ID (e.g. 2c7e29b9b666) via
docker ps
then running (I presume in a new command window)
docker exec -it 2c7e29b9b666 bash
but that does not work on Windows 10.
Error message received is:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Edit: in the first window I started the Docker machine via:
docker-machine create --driver virtualbox tombox
and I can see 'tombox' in second window if I do:
docker-machine ls
Do I need to somehow reference 'tombox' when running docker commands in the second window?
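A sketch of what is likely missing (an assumption on my part: with Docker Toolbox, each new shell has to be pointed at the machine before docker commands will reach it), using the command that docker-machine env itself prints for cmd:
docker-machine env tombox
REM then run the line that its output suggests for cmd:
@FOR /f "tokens=*" %i IN ('docker-machine env tombox') DO @%i
docker exec -it 2c7e29b9b666 bash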

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
As Docker runs on Windows, I was wondering if it is possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... but part of me thinks it might be possible to:
1. Start Docker, boot the 'default' machine.
2. Create a Kubernetes container, configured to communicate with the existing Docker 'default' machine.
3. Use Kubernetes to manage the existing Docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, & Kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything Docker-related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentions "Can't run kubernetes within boot2docker".
So even when you follow the instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized Kubernetes.
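Besides the kubelet shown above, the docker-based guide of that era brought up etcd and the service proxy the same way; this is a sketch from memory, so the image tags and flag spellings are assumptions and varied between releases:
# etcd, which the API server uses as its data store
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2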
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.
