I'm using the ansible-semaphore API to handle playbook execution by calling the start task endpoint.
I need the tasks to run in the order they're added to the queue via API.
But apparently they are running in a random order.
In my settings I have enabled the option to run only one task at a time, which stopped it from running multiple tasks at once (also in random order).
In Semaphore's config.json I noticed the "concurrency_mode" setting:
{
"bolt": {
"host": "/home/ubuntu/semaphore.bolt"
},
"mysql": {
"host": "localhost",
"user": "root",
"pass": "*****",
"name": "semaphore",
"options": {}
},
"postgres": {
"host": "localhost",
"user": "postgres",
"pass": "*****",
"name": "semaphore",
"options": {}
},
"dialect": "postgres",
"port": "",
"interface": "",
"tmp_path": "/tmp/semaphore",
"cookie_hash": "*****",
"cookie_encryption": "*****",
"access_key_encryption": "*****",
"email_sender": "",
"email_host": "",
"email_port": "",
"web_host": "",
"ldap_binddn": "",
"ldap_bindpassword": "",
"ldap_server": "",
"ldap_searchdn": "",
"ldap_searchfilter": "",
"ldap_mappings": {
"dn": "",
"mail": "",
"uid": "",
"cn": ""
},
"telegram_chat": "",
"telegram_token": "",
"concurrency_mode": "",
"max_parallel_tasks": 0,
"email_alert": false,
"telegram_alert": false,
"slack_alert": false,
"ldap_enable": false,
"ldap_needtls": false
}
But the docs don't mention the possible options for this field.
Is it possible to run the queue in chronological order? What setting do I have to change?
I tried changing "concurrency_mode" in the config.json to "false", with no results.
I also tried adding a few seconds of delay between adding tasks to the queue, with no results either.
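For context, the tasks are being queued with API calls roughly along these lines; this is only a sketch, with the endpoint path and payload taken from my reading of the Semaphore API documentation for starting a task, and the host, token, project ID, and template ID as placeholders:

curl -s -X POST \
  -H "Authorization: Bearer <api-token>" \
  -H "Content-Type: application/json" \
  -d '{"template_id": 1}' \
  http://localhost:3000/api/project/1/tasks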
I have Consul running locally on a dev machine, and a single Go service running on two different ports on the same machine. Is there a way to register them as one service with two instances in Consul using the Go API (for example, is it possible to specify the node name when registering)?
Here's a very basic example which registers two instances of a service named my-service. Each instance is configured to listen on a different port, 8080 and 8081 respectively.
The key thing to note is that the service instances are also registered with a unique service ID in order to disambiguate between instance A and instance B of my-service which are running on the same agent.
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Create a client for the local agent using the default configuration
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	serviceName := "my-service"
	servicePorts := []int{8080, 8081}

	for idx, port := range servicePorts {
		reg := &api.AgentServiceRegistration{
			// A unique ID per instance lets both registrations coexist on the same agent
			ID:   fmt.Sprintf("%s-%d", serviceName, idx),
			Name: serviceName,
			Port: port,
		}
		if err := client.Agent().ServiceRegister(reg); err != nil {
			panic(err)
		}
	}
}
After running go mod init consul-register (or any module name) and go mod tidy to fetch the Consul API dependency, you can execute the code with go run main.go and see that the service has been registered in the catalog.
$ consul catalog services
consul
my-service
Both service instances are correctly being returned for service discovery queries over DNS or HTTP.
$ dig @127.0.0.1 -p 8600 -t SRV my-service.service.consul +short
1 1 8080 b1000.local.node.dc1.consul.
1 1 8081 b1000.local.node.dc1.consul.
$ curl localhost:8500/v1/health/service/my-service
[
{
"Node": {
"ID": "11113853-a8e0-5787-7482-538078db855a",
"Node": "b1000.local",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"lan_ipv4": "127.0.0.1",
"wan": "127.0.0.1",
"wan_ipv4": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 11,
"ModifyIndex": 13
},
"Service": {
"ID": "my-service-0",
"Service": "my-service",
"Tags": [],
"Address": "",
"Meta": null,
"Port": 8080,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {},
"TransparentProxy": {}
},
"Connect": {},
"CreateIndex": 14,
"ModifyIndex": 14
},
"Checks": [
{
"Node": "b1000.local",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 11,
"ModifyIndex": 11
}
]
},
{
"Node": {
"ID": "11113853-a8e0-5787-7482-538078db855a",
"Node": "b1000.local",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"lan_ipv4": "127.0.0.1",
"wan": "127.0.0.1",
"wan_ipv4": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 11,
"ModifyIndex": 13
},
"Service": {
"ID": "my-service-1",
"Service": "my-service",
"Tags": [],
"Address": "",
"Meta": null,
"Port": 8081,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {},
"TransparentProxy": {}
},
"Connect": {},
"CreateIndex": 15,
"ModifyIndex": 15
},
"Checks": [
{
"Node": "b1000.local",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 11,
"ModifyIndex": 11
}
]
}
]
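If you ever need to remove one of the instances, the same unique service ID is what you deregister by. A minimal sketch against the local agent's HTTP API (the ID matches the instances registered above):

$ curl -X PUT localhost:8500/v1/agent/service/deregister/my-service-0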
Laravel echo server keeps returning failed status on my server
Code
laravel-echo-server.json
{
"authHost": "http://example.com",
"authEndpoint": "/broadcasting/auth",
"clients": [
{
"appId": "311dr094tf98745ce",
"key": "fa43bffb3rth63f5ac9c386916ae28e6"
}
],
"database": "redis",
"databaseConfig": {
"redis": {},
"sqlite": {
"databasePath": "/database/laravel-echo-server.sqlite"
}
},
"devMode": false,
"host": null,
"port": "6001",
"protocol": "http",
"socketio": {},
"secureOptions": 67108864,
"sslCertPath": "",
"sslKeyPath": "",
"sslCertChainPath": "",
"sslPassphrase": "",
"subscribers": {
"http": true,
"redis": true
},
"apiOriginAllow": {
"allowCors": true,
"allowOrigin": "http://localhost:80",
"allowMethods": "GET, POST",
"allowHeaders": "Origin, Content-Type, X-Auth-Token, X-Requested-With, Accept, Authorization, X-CSRF-TOKEN, X-Socket-Id"
}
}
echo-pm2.json
{
"name": "echo",
"script": "laravel-echo-server.json",
"args": "start"
}
Then I run pm2 start echo-pm2.json and the status is online, but when I visit my web page I keep getting:
GET https://www.example.com:6001/socket.io/?EIO=3&transport=polling&t=NA2SNHk net::ERR_TIMED_OUT
Any idea?
"authHost": "http://example.com"
I have this set to the URL of my localhost; not sure if it helps you.
Solved
The problem was that my URL was set to https (GET https://www.example.com) while my actual address runs on http (GET http://www.example.com),
so that extra "s"... :)
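A quick way to catch this kind of mismatch is to hit the polling endpoint directly with curl; with the config above it should answer over plain http and fail over https, since no SSL certificate or key is configured (the hostname is a placeholder):

curl -i "http://www.example.com:6001/socket.io/?EIO=3&transport=polling"
curl -i "https://www.example.com:6001/socket.io/?EIO=3&transport=polling"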
I am new to Docker and want to understand image management better. I just created a new image using:
docker image build -t jefe/mh_db:v.1.1.0 ./
When I try to delete it using:
docker image rm d4c0c9225252
where d4c0c9225252 is the image ID, I get:
Error response from daemon: conflict: unable to delete d4c0c9225252 (cannot be forced) - image has dependent child images
Yes, it's related to other posts about not being able to delete images, but I want to understand why the dependency exists.
How can this image have images that depend on it? I literally just created it.
Here is the Dockerfile used to build the image:
FROM mysql:5.7.27
MAINTAINER jefe
# Specify ports
EXPOSE 3306
Update
docker image ls | grep d4c0c9225252
jefe/mh_db v.1.1.0 d4c0c9225252 2 hours ago 373MB
Additionally
docker inspect d4c0c9225252
[
{
"Id": "sha256:d4c0c922525201d62e49ac73d03e27653e77e2ac5e3f11334a7a09d7c6d977fe",
"RepoTags": [
"jefe/mh_db:v.1.1.0"
],
"RepoDigests": [],
"Parent": "sha256:b0fead29523e498fd0f990abcc2b2bbb46952ad3361fbebcc304e31be69bd840",
"Comment": "",
"Created": "2019-08-08T15:24:57.324861036Z",
"Container": "6abc71375823faeb4819720a09ae348b0da4d9ae213c167c3911ca706d7c8b92",
"ContainerConfig": {
"Hostname": "6abc71375823",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {},
"33060/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.7",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.27-1debian9"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"EXPOSE 3306"
],
"ArgsEscaped": true,
"Image": "sha256:b0fead29523e498fd0f990abcc2b2bbb46952ad3361fbebcc304e31be69bd840",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {}
},
"DockerVersion": "19.03.1",
"Author": "jefe",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {},
"33060/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.7",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.27-1debian9"
],
"Cmd": [
"mysqld"
],
"ArgsEscaped": true,
"Image": "sha256:b0fead29523e498fd0f990abcc2b2bbb46952ad3361fbebcc304e31be69bd840",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 373273403,
"VirtualSize": 373273403,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/11891a42dc63fb6851e3fb12a1dd7e7285d18df83ecfd1f5aa40e44466921c58/diff:/var/lib/docker/overlay2/e0c695335789cba5a8e6524804bd1c2d1836db16650105e6863e7023bc289753/diff:/var/lib/docker/overlay2/2b4e10627f78a0185f8975b62b954a14e79fc6fb71a3caae07180e8e00f51b44/diff:/var/lib/docker/overlay2/f9dd057675e39a3eab99143f07bcb7df8a44eba5d1f1d15e567471a7c3a9e491/diff:/var/lib/docker/overlay2/c28612d7a8d1b187ddb7fc995c5fe733e0c18def13df798f1de5af2bdac9d3f4/diff:/var/lib/docker/overlay2/f8e41886d7ae8d8939ae2dc11b4ff941ef931aa18bc2f8cb0f724cc9e270ab3c/diff:/var/lib/docker/overlay2/8796390ee625b42b56d7822d128cc50bf88fbfd1f7f5ac9e7ecda9e721944946/diff:/var/lib/docker/overlay2/86552fa54367c794979383dbba257f1292f2a137dae0304d1eb036ba2249bc7b/diff:/var/lib/docker/overlay2/168353bdf70d8140026c2cf58da64eee7409f710832be601dad3e0cc6a02c01a/diff:/var/lib/docker/overlay2/ac908915344e5f65df6e0121f77bd48fdaa822974317cdb83a59f0618893ddb2/diff",
"MergedDir": "/var/lib/docker/overlay2/1826972009d9ce18265129a2d4928b708bf6780370530e4c1be8b1efd096b2cd/merged",
"UpperDir": "/var/lib/docker/overlay2/1826972009d9ce18265129a2d4928b708bf6780370530e4c1be8b1efd096b2cd/diff",
"WorkDir": "/var/lib/docker/overlay2/1826972009d9ce18265129a2d4928b708bf6780370530e4c1be8b1efd096b2cd/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:d56055da3352f918f4d8a42350385ea5b10d0906e746a8fbb4b850f9284deee5",
"sha256:b78ec9586b345b0efdb0297261c0044652563045a28a7cc6d27dd314eda1e0eb",
"sha256:c6926fcee1912ebb41215a70b1d0ed77e3b8db38cfe69b936d18b346096e144c",
"sha256:007a7f930352c0fd98663021fb1ee08768462eb5bc9045342da9e9f73fd79a7f",
"sha256:2f1b41b24201f4ae635819b1d7717ab04c000f04e7708de3bb012a60d3ef630b",
"sha256:77737de99484a6e2e2ae4bea0cf7ec4d3063827a6dd49a243694ef00929350d2",
"sha256:7e7fffcdabb3e0655bf46756dd04018ce051f81fbaba8bff3703ac987def88be",
"sha256:83bba64580292cc5af1fd3cabb74b18c143e05cd45d882c9e09edc8ff79a1119",
"sha256:94f63a189eef2bdb32668faa0ce08dc5da01eccb91ad548f28052048e810e5f8",
"sha256:0c3e10ddbe75e0a4efcee6aa06716b651227ceb358e78922b9fe9ea7f5a63992",
"sha256:5572431ce4dea5defe6a0d0586ad3b25a74d59bfbbb05c2a257c5d71a27eba4c"
]
},
"Metadata": {
"LastTagTime": "2019-08-08T15:24:57.393863976Z"
}
}
Docker won't let you delete an image if it has more than one tag on it. To find the tags, you can filter the output of docker image ls:
docker image ls | grep d4c0c9225252
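If the error persists after removing extra tags, the "dependent child images" usually means some other image was built on top of this one. A rough way to track those down, assuming a reasonably recent Docker CLI: the first command lists images created after this one (candidate children, including intermediates), and the second confirms whether a candidate actually lists it as its parent (the child image ID is a placeholder).

docker image ls -a --filter since=d4c0c9225252
docker inspect --format '{{.Parent}}' <child-image-id>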
I'm unable to access content in a container created using docker-compose; it's been suggested to me that this could be because the content folder on the host is not being mounted correctly. (Note: I don't know how to validate this advice, so I must assume that it's correct.)
Here's my docker-compose.yml file:
version: "2.1"
services:
  docs:
    image: docs/docstage
    ports:
      - "4000:4000"
    volumes:
      - "./:/usr/src/app"
Here's the output of my docker-compose command:
D:\Dev\Git\docker.github.io>docker-compose up
Creating dockergithubio_docs_1 ...
Creating dockergithubio_docs_1 ... done
Attaching to dockergithubio_docs_1
docs_1 | Configuration file: none
docs_1 | Configuration file: none
docs_1 | Source: /usr/src/app
docs_1 | Destination: /_site
docs_1 | Incremental build: disabled. Enable with --incremental
docs_1 | Generating...
docs_1 | done in 0.017 seconds.
docs_1 | Auto-regeneration: enabled for '/usr/src/app'
docs_1 | Configuration file: none
docs_1 | Server address: http://0.0.0.0:4000/
docs_1 | Server running... press ctrl-c to stop.
docs_1 | [2017-07-17 20:58:02] ERROR `/favicon.ico' not found.
...and here's the result:
C:\Users\Admin>docker exec -it 863a59969066 bash
root@863a59969066:/usr/src/app# ls
root@863a59969066:/usr/src/app#
As we can see, there's no content in the container. Also, browsing to the URL shows an empty directory.
Here's the result of docker container inspect:
C:\Users\Admin>docker inspect dockergithubio_docs_1
[
{
"Id": "863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779",
"Created": "2017-07-17T20:57:06.7250794Z",
"Path": "/bin/sh",
"Args": [
"-c",
"jekyll serve -d /_site --watch -H 0.0.0.0 -P 4000"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3252,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-07-17T20:57:08.0003358Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:9670258d73f081ef2c7dd476c56fc5945627ee68867e1296fbe19e612ddd29a4",
"ResolvConfPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/hostname",
"HostsPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/hosts",
"LogPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779-json.log",
"Name": "/dockergithubio_docs_1",
"RestartCount": 0,
"Driver": "overlay2",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/D/Dev/Git/docker.github.io:/usr/src/app:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "dockergithubio_default",
"PortBindings": {
"4000/tcp": [
{
"HostIp": "",
"HostPort": "4000"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa-init/diff:/var/lib/docker/overlay2/0772e69f7faba8d149e7d9aed149d4607c905f1d01b28b97f5453772e5326904/diff:/var/lib/docker/overlay2/6d2a854de0c3c7af4e8e3b6ef831af1dde8c400f5aa8fd809d76a06f3ba5c705/diff:/var/lib/docker/overlay2/8a4466b60f60d0141625c1ad32233f3fee49821f534f8709685c2d6514b9d3f6/diff:/var/lib/docker/overlay2/6a4fe33cae424e9a671300332244aa19f5a314d90c945b399f35ea487e01d333/diff:/var/lib/docker/overlay2/de35de0b23cb93e811a7f2ec6b59e3e282faf770131179c60cad588c522551be/diff:/var/lib/docker/overlay2/e7f896a4b4d0da7ddbddd208a9130affea2358f4b1fd147f403b82fe7fe748aa/diff:/var/lib/docker/overlay2/b09694bfeb6b2e7d75de351286d95bf9af18181004f9d3c2d9bf73ea6538ba56/diff:/var/lib/docker/overlay2/4feb0e4dccefd6570fee715baf80ebe6ea77ab133cc3ac15fd850bb737f7e8b2/diff:/var/lib/docker/overlay2/1291c76b0bb03c133b70dad4dd08147f3c753b52f8ac3070d2e0f9bbdd99e874/diff:/var/lib/docker/overlay2/9166f2a32c7b3284fab5a95803ac66c83cba936161083f0405b630178f5dbeb2/diff:/var/lib/docker/overlay2/46499476944e8234be84f662104f3968f8717f3e36a67bb06d814f9c70998d9f/diff:/var/lib/docker/overlay2/fc1f9d566f52e9f994bd02dd73528fb3402a98a2618c5b3a9dbf10c8c5ae554c/diff",
"MergedDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/merged",
"UpperDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/diff",
"WorkDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/D/Dev/Git/docker.github.io",
"Destination": "/usr/src/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "863a59969066",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"4000/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"RUBY_MAJOR=2.3",
"RUBY_VERSION=2.3.3",
"RUBY_DOWNLOAD_SHA256=241408c8c555b258846368830a06146e4849a1d58dcaf6b14a3b6a73058115b7",
"RUBYGEMS_VERSION=2.6.8",
"BUNDLER_VERSION=1.13.6",
"GEM_HOME=/usr/local/bundle",
"BUNDLE_PATH=/usr/local/bundle",
"BUNDLE_BIN=/usr/local/bundle/bin",
"BUNDLE_SILENCE_ROOT_WARNING=1",
"BUNDLE_APP_CONFIG=/usr/local/bundle",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_MAJOR_VERSION=4",
"GITHUB_GEM_VERSION=112"
],
"Cmd": [
"/bin/sh",
"-c",
"jekyll serve -d /_site --watch -H 0.0.0.0 -P 4000"
],
"ArgsEscaped": true,
"Image": "docs/docstage",
"Volumes": {
"/usr/src/app": {}
},
"WorkingDir": "/usr/src/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "f86127819d2d94cf924f8d7ef0fe8579286043aebafc2940e6ca0b1d1b4828b7",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "dockergithubio",
"com.docker.compose.service": "docs",
"com.docker.compose.version": "1.14.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b6ad8a59f8f902f5a2fff0e4d6656bed6b3ecf1904424504886543614524f570",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"4000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "4000"
}
]
},
"SandboxKey": "/var/run/docker/netns/b6ad8a59f8f9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dockergithubio_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"docs",
"863a59969066"
],
"NetworkID": "8c5980632aa0810c818544573e76247a7b27f95e86d137e5f755cbff5b16b6aa",
"EndpointID": "ead13e880ebeede298f16c912d4eac0f5eb89ec5600da208202d54868273927d",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
}
]
At first glance this appears OK, but I must admit I don't know exactly how to interpret these details.
I've opened an issue here, but it seems I've exhausted all resources on that thread.
How can I determine whether there's a mount error occurring, and—if so—how can I fix it?
You need to configure Docker to share your D: drive with the embedded Docker VM. Without that, the VM has nothing at this location, and when you mount a volume in a container onto a directory that doesn't exist (inside the Docker VM, not on your Windows machine), you end up with the empty directory you're seeing.
See the Windows install steps for how to share this drive.
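Once the drive is shared, a quick sanity check is to bind-mount the same folder into a throwaway container and list it (the path should be adjusted to your checkout; any small image such as alpine works). If sharing is set up correctly you should see the repository contents instead of an empty directory:

docker run --rm -v d:/Dev/Git/docker.github.io:/data alpine ls /data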