Windows container on ECS shows RUNNING but doesn't do anything - windows

I have a microsoft/iis task set up on an ECS cluster running on the ECS-optimized Windows Server 2016 AMI.
The task starts fine (it shows RUNNING), but after a few hours, it just stops with: Essential container in task exited.
Looking at the EC2 instance, I can see the container running, but it does not show the regular logs I expect from this particular container.
Another curious thing: if I manually do a docker run <my image> on the EC2 instance, the container starts and the log is populated.
Any idea what could be wrong?
EDIT
Adding the task definition (JSON format):
{
"ipcMode": null,
"executionRoleArn": null,
"containerDefinitions": [
{
"dnsSearchDomains": null,
"environmentFiles": null,
"logConfiguration": null,
"entryPoint": [
"powershell",
"-Command"
],
"portMappings": [
{
"hostPort": 8080,
"protocol": "tcp",
"containerPort": 80
}
],
"command": [
"New-Item -Path C:\\inetpub\\wwwroot\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>' -Force ; C:\\ServiceMonitor.exe w3svc"
],
"linuxParameters": null,
"cpu": 0,
"environment": [],
"resourceRequirements": null,
"ulimits": null,
"dnsServers": null,
"mountPoints": [],
"workingDirectory": null,
"secrets": null,
"dockerSecurityOptions": null,
"memory": null,
"memoryReservation": null,
"volumesFrom": [],
"stopTimeout": null,
"image": "microsoft/iis",
"startTimeout": null,
"firelensConfiguration": null,
"dependsOn": null,
"disableNetworking": null,
"interactive": null,
"healthCheck": null,
"essential": true,
"links": null,
"hostname": null,
"extraHosts": null,
"pseudoTerminal": null,
"user": null,
"readonlyRootFilesystem": null,
"dockerLabels": null,
"systemControls": null,
"privileged": null,
"name": "windows_sample_app"
}
],
"placementConstraints": [],
"memory": "1024",
"taskRoleArn": null,
"compatibilities": [
"EC2"
],
"taskDefinitionArn": "arn:aws:ecs:<my-region>:<my-id>:task-definition/windows-simple-iis:1",
"family": "windows-simple-iis",
"requiresAttributes": [],
"pidMode": null,
"requiresCompatibilities": [],
"networkMode": null,
"cpu": "512",
"revision": 1,
"status": "ACTIVE",
"inferenceAccelerators": null,
"proxyConfiguration": null,
"volumes": []
}

For a Windows task, the container-level CPU shares should not be zero ("cpu": 0 in your container definition).
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota: Windows containers only have access to the specified amount of CPU described in the task definition. Updating the CPU shares should get everything running and the logs populating.
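For example, a minimal change (the numbers here are illustrative, not prescribed) is to set the container-level "cpu" to a non-zero value that fits within the task-level CPU of 512, such as 512, and then register a new task definition revision, e.g. with the AWS CLI (the file name is an assumption for wherever the edited JSON is saved):
aws ecs register-task-definition --cli-input-json file://windows-simple-iis.json
After registering, update the service (or re-run the task) so it picks up the new revision.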

Related

Elasticsearch AccessDeniedException[/usr/share/elasticsearch/data/nodes] on ECS Fargate

I am deploying Elasticsearch 7.16.1 on ECS Fargate using a bind mount. The container terminates with the following error:
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.16.1.jar:7.16.1]
at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.16.1.jar:7.16.1]
Caused by: org.elasticsearch.ElasticsearchException: failed to bind service
at org.elasticsearch.node.Node.<init>(Node.java:1090) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.node.Node.<init>(Node.java:309) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.16.1.jar:7.16.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166) ~[elasticsearch-7.16.1.jar:7.16.1]
It looks like the container does not have write permission on the volume, even though the volume is not read-only. The task definition is below.
{
"ipcMode": null,
"executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"dnsSearchDomains": null,
"environmentFiles": null,
"logConfiguration": {
"logDriver": "awslogs",
"secretOptions": null,
"options": {
"awslogs-group": "/ecs/elasticsearch-dev",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
},
"entryPoint": null,
"portMappings": [
{
"hostPort": 9200,
"protocol": "tcp",
"containerPort": 9200
}
],
"command": null,
"linuxParameters": null,
"cpu": 0,
"environment": [
{
"name": "discovery.type",
"value": "single-node"
},
{
"name": "ES_JAVA_OPTS",
"value": "-Xms512m -Xmx512m"
}
],
"resourceRequirements": null,
"ulimits": [
{
"name": "nofile",
"softLimit": 65536,
"hardLimit": 65536
}
],
"dnsServers": null,
"mountPoints": [
{
"readOnly": null,
"containerPath": "/usr/share/elasticsearch/data",
"sourceVolume": "host-data"
}
],
"workingDirectory": null,
"secrets": null,
"dockerSecurityOptions": null,
"memory": null,
"memoryReservation": null,
"volumesFrom": [],
"stopTimeout": null,
"image": "docker.elastic.co/elasticsearch/elasticsearch:7.16.1",
"startTimeout": null,
"firelensConfiguration": null,
"dependsOn": null,
"disableNetworking": null,
"interactive": null,
"healthCheck": null,
"essential": true,
"links": null,
"hostname": null,
"extraHosts": null,
"pseudoTerminal": null,
"user": null,
"readonlyRootFilesystem": null,
"dockerLabels": null,
"systemControls": null,
"privileged": null,
"name": "elasticsearch-dev"
}
],
"placementConstraints": [],
"memory": "8192",
"taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
"compatibilities": [
"EC2",
"FARGATE"
],
"taskDefinitionArn": "arn:aws:ecs:us-east-1:xxxxxxxxxxxx:task-definition/elasticsearch-dev:7",
"family": "elasticsearch-dev",
"requiresAttributes": [
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "ecs.capability.execution-role-awslogs"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"targetId": null,
"targetType": null,
"value": null,
"name": "ecs.capability.task-eni"
}
],
"pidMode": null,
"requiresCompatibilities": [
"FARGATE"
],
"networkMode": "awsvpc",
"runtimePlatform": null,
"cpu": "1024",
"revision": 7,
"status": "ACTIVE",
"inferenceAccelerators": null,
"proxyConfiguration": null,
"volumes": [
{
"fsxWindowsFileServerVolumeConfiguration": null,
"efsVolumeConfiguration": null,
"name": "host-data",
"host": {
"sourcePath": null
},
"dockerVolumeConfiguration": null
}
]
}
elasticsearch:7.13.0 added a USER directive to the Dockerfile so that the Elasticsearch container runs as the elasticsearch user instead of root. But AWS ECS still mounts the volume owned by root, which is why the elasticsearch user can't write to it, raising the AccessDeniedException.
The ECS docs suggest adding a VOLUME directive after the USER directive in the Dockerfile so that the volume is owned by the non-root user. I ended up building the Elasticsearch image myself with the following Dockerfile and pushing it to ECR to make it work.
FROM docker.elastic.co/elasticsearch/elasticsearch:7.16.1
VOLUME ["/usr/share/elasticsearch/data"]
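As a rough sketch of the build-and-push step (the repository name, account ID, and region below are placeholders in the same style as the question, not confirmed values):
docker build -t elasticsearch-dev:7.16.1 .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker tag elasticsearch-dev:7.16.1 xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/elasticsearch-dev:7.16.1
docker push xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/elasticsearch-dev:7.16.1
Then point the "image" field of the task definition at the pushed tag.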

Ansible to Proxmox returning 500 error

I am trying to use Ansible to provision some VMs on my newly set up Proxmox VE. I have installed proxmoxer and requests with pip on both my local Mac and the Proxmox VE host (Python 2 on Proxmox and Python 3 locally). I am using Ansible 2.4.3.0 and Proxmox 5.1-41.
I do have a VM with ID 100, which was created from a Debian template; the VM is located on local-lvm (pve).
My full playbook can be found at: https://github.com/atwright147/ansible-contact-book-proxmox-provisioner, the specific task is pasted below:
---
- proxmox_kvm:
    api_user: root#pam
    api_password: REDACTED
    api_host: pve
    vmid: 100
    state: current
When running this playbook via ansible-playbook -vvv --connection=local -i hosts site.yml, I get the following error:
The full traceback is:
File "/tmp/ansible_TDEJsZ/ansible_module_proxmox_kvm.py", line 1227, in main
current = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status']
File "/usr/local/lib/python2.7/dist-packages/proxmoxer/core.py", line 84, in get
return self(args)._request("GET", params=params)
File "/usr/local/lib/python2.7/dist-packages/proxmoxer/core.py", line 79, in _request
resp.content))
fatal: [192.168.0.22]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"acpi": true,
"agent": null,
"api_host": "pve",
"api_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_user": "root#pam",
"args": null,
"autostart": false,
"balloon": 0,
"bios": null,
"boot": "cnd",
"bootdisk": null,
"clone": null,
"cores": 1,
"cpu": "kvm64",
"cpulimit": null,
"cpuunits": 1000,
"delete": null,
"description": null,
"digest": null,
"force": null,
"format": "qcow2",
"freeze": null,
"full": true,
"hostpci": null,
"hotplug": null,
"hugepages": null,
"ide": null,
"keyboard": null,
"kvm": true,
"localtime": null,
"lock": null,
"machine": null,
"memory": 512,
"migrate_downtime": null,
"migrate_speed": null,
"name": null,
"net": null,
"newid": null,
"node": null,
"numa": null,
"numa_enabled": null,
"onboot": true,
"ostype": "l26",
"parallel": null,
"pool": null,
"protection": null,
"reboot": null,
"revert": null,
"sata": null,
"scsi": null,
"scsihw": null,
"serial": null,
"shares": null,
"skiplock": null,
"smbios": null,
"snapname": null,
"sockets": 1,
"startdate": null,
"startup": null,
"state": "current",
"storage": null,
"tablet": false,
"target": null,
"tdf": null,
"template": false,
"timeout": 30,
"update": false,
"validate_certs": false,
"vcpus": null,
"vga": "std",
"virtio": null,
"vmid": 100,
"watchdog": null
}
},
"msg": "Unable to get vm None with vmid = 100 status: 500 Internal Server Error: {\"data\":null}"
}
Ansible info:
ansible 2.4.3.0
config file = /Users/andy/Development/proxmox-playbooks/contact-book/ansible.cfg
configured module search path = ['/Users/andy/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.4 (default, Jan 25 2018, 18:48:20) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)]
What am I doing wrong?
It turned out that I was making a couple of mistakes here:
- I should have been using the proxmox module rather than proxmox_kvm.
- I needed to use the storage param to create the container on local-lvm, e.g. storage: local-lvm.
My final, working task looks like this:
- name: "Create a Linux Container (LXC)"
proxmox:
node: pve
api_user: root#pam
api_password: proxmox_password
api_host: pve
password: vm_password
hostname: vm.hostname.local
ostemplate: "local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz"
storage: local-lvm
cores: 2
state: present
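The playbook is then run the same way as before:
ansible-playbook -vvv --connection=local -i hosts site.yml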

Unable to see files inside a container

I'm unable to access content in a container created using docker-compose; it's been suggested to me that this could be because the content folder on the host is not being mounted correctly. (Note: I don't know how to validate this advice, so I must assume that it's correct.)
Here's my docker-compose.yml file:
version: "2.1"
services:
docs:
image: docs/docstage
ports:
- "4000:4000"
volumes:
- "./:/usr/src/app"
Here's the output of my docker-compose command:
D:\Dev\Git\docker.github.io>docker-compose up
Creating dockergithubio_docs_1 ...
Creating dockergithubio_docs_1 ... done
Attaching to dockergithubio_docs_1
docs_1 | Configuration file: none
docs_1 | Configuration file: none
docs_1 | Source: /usr/src/app
docs_1 | Destination: /_site
docs_1 | Incremental build: disabled. Enable with --incremental
docs_1 | Generating...
docs_1 | done in 0.017 seconds.
docs_1 | Auto-regeneration: enabled for '/usr/src/app'
docs_1 | Configuration file: none
docs_1 | Server address: http://0.0.0.0:4000/
docs_1 | Server running... press ctrl-c to stop.
docs_1 | [2017-07-17 20:58:02] ERROR `/favicon.ico' not found.
...and here's the result:
C:\Users\Admin>docker exec -it 863a59969066 bash
root#863a59969066:/usr/src/app# ls
root#863a59969066:/usr/src/app#
As we can see, there's no content in the container. Browsing to the URL also shows an empty directory.
Here's the result of docker container inspect:
C:\Users\Admin>docker inspect dockergithubio_docs_1
[
{
"Id": "863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779",
"Created": "2017-07-17T20:57:06.7250794Z",
"Path": "/bin/sh",
"Args": [
"-c",
"jekyll serve -d /_site --watch -H 0.0.0.0 -P 4000"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3252,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-07-17T20:57:08.0003358Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:9670258d73f081ef2c7dd476c56fc5945627ee68867e1296fbe19e612ddd29a4",
"ResolvConfPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/hostname",
"HostsPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/hosts",
"LogPath": "/var/lib/docker/containers/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779/863a59969066444d0b6e908a46d0f05b68605b7fe72bfd4b0ddf2036847b0779-json.log",
"Name": "/dockergithubio_docs_1",
"RestartCount": 0,
"Driver": "overlay2",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/D/Dev/Git/docker.github.io:/usr/src/app:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "dockergithubio_default",
"PortBindings": {
"4000/tcp": [
{
"HostIp": "",
"HostPort": "4000"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa-init/diff:/var/lib/docker/overlay2/0772e69f7faba8d149e7d9aed149d4607c905f1d01b28b97f5453772e5326904/diff:/var/lib/docker/overlay2/6d2a854de0c3c7af4e8e3b6ef831af1dde8c400f5aa8fd809d76a06f3ba5c705/diff:/var/lib/docker/overlay2/8a4466b60f60d0141625c1ad32233f3fee49821f534f8709685c2d6514b9d3f6/diff:/var/lib/docker/overlay2/6a4fe33cae424e9a671300332244aa19f5a314d90c945b399f35ea487e01d333/diff:/var/lib/docker/overlay2/de35de0b23cb93e811a7f2ec6b59e3e282faf770131179c60cad588c522551be/diff:/var/lib/docker/overlay2/e7f896a4b4d0da7ddbddd208a9130affea2358f4b1fd147f403b82fe7fe748aa/diff:/var/lib/docker/overlay2/b09694bfeb6b2e7d75de351286d95bf9af18181004f9d3c2d9bf73ea6538ba56/diff:/var/lib/docker/overlay2/4feb0e4dccefd6570fee715baf80ebe6ea77ab133cc3ac15fd850bb737f7e8b2/diff:/var/lib/docker/overlay2/1291c76b0bb03c133b70dad4dd08147f3c753b52f8ac3070d2e0f9bbdd99e874/diff:/var/lib/docker/overlay2/9166f2a32c7b3284fab5a95803ac66c83cba936161083f0405b630178f5dbeb2/diff:/var/lib/docker/overlay2/46499476944e8234be84f662104f3968f8717f3e36a67bb06d814f9c70998d9f/diff:/var/lib/docker/overlay2/fc1f9d566f52e9f994bd02dd73528fb3402a98a2618c5b3a9dbf10c8c5ae554c/diff",
"MergedDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/merged",
"UpperDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/diff",
"WorkDir": "/var/lib/docker/overlay2/8f7ba6861640a6fb639f64c475db0260cb4c9ded686711b05625ff37c19737fa/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/D/Dev/Git/docker.github.io",
"Destination": "/usr/src/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "863a59969066",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"4000/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"RUBY_MAJOR=2.3",
"RUBY_VERSION=2.3.3",
"RUBY_DOWNLOAD_SHA256=241408c8c555b258846368830a06146e4849a1d58dcaf6b14a3b6a73058115b7",
"RUBYGEMS_VERSION=2.6.8",
"BUNDLER_VERSION=1.13.6",
"GEM_HOME=/usr/local/bundle",
"BUNDLE_PATH=/usr/local/bundle",
"BUNDLE_BIN=/usr/local/bundle/bin",
"BUNDLE_SILENCE_ROOT_WARNING=1",
"BUNDLE_APP_CONFIG=/usr/local/bundle",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_MAJOR_VERSION=4",
"GITHUB_GEM_VERSION=112"
],
"Cmd": [
"/bin/sh",
"-c",
"jekyll serve -d /_site --watch -H 0.0.0.0 -P 4000"
],
"ArgsEscaped": true,
"Image": "docs/docstage",
"Volumes": {
"/usr/src/app": {}
},
"WorkingDir": "/usr/src/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "f86127819d2d94cf924f8d7ef0fe8579286043aebafc2940e6ca0b1d1b4828b7",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "dockergithubio",
"com.docker.compose.service": "docs",
"com.docker.compose.version": "1.14.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b6ad8a59f8f902f5a2fff0e4d6656bed6b3ecf1904424504886543614524f570",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"4000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "4000"
}
]
},
"SandboxKey": "/var/run/docker/netns/b6ad8a59f8f9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dockergithubio_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"docs",
"863a59969066"
],
"NetworkID": "8c5980632aa0810c818544573e76247a7b27f95e86d137e5f755cbff5b16b6aa",
"EndpointID": "ead13e880ebeede298f16c912d4eac0f5eb89ec5600da208202d54868273927d",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
}
]
At first glance this appears OK, but I must admit I don't know exactly how to interpret the detail.
I've opened an issue here, but it seems I've exhausted all resources on that thread.
How can I determine whether there's a mount error occurring, and—if so—how can I fix it?
You need to configure Docker to share your D drive with the embedded Docker VM. Without that, the VM has nothing at this location, and when you mount a volume in a container onto a directory that doesn't exist (inside the Docker VM, not on your Windows machine), you get the resulting empty directory.
See the Windows install steps for how to share this drive.
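Once the drive is shared (on Docker for Windows this is typically under Settings > Shared Drives), recreate the container and the bind mount should show up. A quick way to verify, reusing the names from the question:
docker-compose down
docker-compose up -d
docker exec -it dockergithubio_docs_1 ls /usr/src/app
If the sharing worked, the ls should now list the repository contents instead of an empty directory.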

panic when start traefik with boltdb backend

I'm trying to start traefik with the boltdb backend. Sadly, I always get a panic. Below are my setup and the terminal output.
Start Command
./traefik_darwin-amd64 -d -c config.toml
config.toml
debug = true
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":8081"
[web]
address = ":8082"
[boltdb]
endpoint = "/Users/tochti/Tmp/traefik.db"
watch = true
prefix = "/traefik"
Terminal Output
After a long wait, the following panic is displayed:
https://pastebin.com/1vUXbgTU
Output of traefik version: (What version of Traefik are you using?)
Version: v1.3.1
Codename: raclette
Go version: go1.8.3
Built: 2017-06-16_11:00:34AM
OS/Arch: darwin/amd64
What is your environment & configuration (arguments, toml, provider, platform, ...)?
{
"GraceTimeOut": 10000000000,
"Debug": true,
"CheckNewVersion": true,
"AccessLogsFile": "",
"TraefikLogsFile": "",
"LogLevel": "ERROR",
"EntryPoints": {
"http": {
"Network": "",
"Address": ":8081",
"TLS": null,
"Redirect": null,
"Auth": null,
"Compress": false
}
},
"Cluster": null,
"Constraints": [],
"ACME": null,
"DefaultEntryPoints": [
"http"
],
"ProvidersThrottleDuration": 2000000000,
"MaxIdleConnsPerHost": 200,
"IdleTimeout": 180000000000,
"InsecureSkipVerify": false,
"Retry": null,
"HealthCheck": {
"Interval": 30000000000
},
"Docker": null,
"File": null,
"Web": {
"Address": ":8082",
"CertFile": "",
"KeyFile": "",
"ReadOnly": false,
"Statistics": null,
"Metrics": null,
"Path": "",
"Auth": null
},
"Marathon": null,
"Consul": null,
"ConsulCatalog": null,
"Etcd": null,
"Zookeeper": null,
"Boltdb": {
"Watch": true,
"Filename": "",
"Constraints": [],
"Endpoint": "/Users/tochti/Tmp/traefik.db",
"Prefix": "/traefik",
"TLS": null,
"Username": "",
"Password": ""
},
"Kubernetes": null,
"Mesos": null,
"Eureka": null,
"ECS": null,
"Rancher": null,
"DynamoDB": null,
"ConfigFile": "/Users/tochti/Tmp/traefik_test/config.toml"
}
Bolt "traefik" Bucket Items
/traefik/backends/srv1_example_com/servers/server_1/url http://127.0.0.1:32001
/traefik/frontends/srv1_example_com/backend srv1_example_com
/traefik/frontends/srv1_example_com/routes/test_1/rule Host:srv1.example.com
Thanks for the help!

What's the container of a base docker image?

I am learning to use docker. I know that each docker image is built on a base image which doesn't have a parent.
Then, from a base image, we can customize things via containers (perhaps a very short-lived container) and commit a new image.
So I understand that the process is like this: base image -> container -> new image 1 -> container -> new image 2
However, when I inspect the JSON data of the base image, I can still see that it has container information:
[{
"Architecture": "amd64",
"Author": "",
"Comment": "",
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": null,
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": null,
"ExposedPorts": null,
"Hostname": "3f37dbc61890",
"Image": "",
"Labels": null,
"MacAddress": "",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Container": "3f37dbc61890b0bb37cc8479db94602bcc2d6e177d76c0f3d7d53346c0dc580c",
"ContainerConfig": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ADD file:777fad733fc954c0c161670c48c10ea1787a6e5d544daa20e55d593279df3fa3 in /"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": null,
"ExposedPorts": null,
"Hostname": "3f37dbc61890",
"Image": "",
"Labels": null,
"MacAddress": "",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2015-04-21T22:18:45.67739694Z",
"DockerVersion": "1.6.0",
"Id": "706766fe101906a1a6628173c2677173a5f8c6c469075083f3cf3a8f5e5eb367",
"Os": "linux",
"Parent": "",
"Size": 188104128,
"VirtualSize": 188104128
}]
3f37dbc61890b0bb37cc8479db94602bcc2d6e177d76c0f3d7d53346c0dc580c is the container ID of the base image.
What is the container of a base image? I feel like it becomes the hostname. What is the hostname, actually?
I'm not sure of your terminology. If a base image is by definition an image that has no parent, then the image in your example is not a base image.
But an image may have no parent, and no originating container. scratch, for example, shows an empty Container field:
$ docker inspect -f '{{.Container}}' ubuntu
a4c15f8c80978475a53f96721f935de5823bc8c29aff14eb00a15f9b9d96cddd
$ docker inspect -f '{{.Container}}' scratch
$
You can also create an image that has no parent using import:
$ echo hello world > foo && tar -cf- foo | docker import -
3e8fc0cb69fae0bd3f9711031df6d3b7bf6a7e8c9745657d9261e7b803718c67
$ docker inspect -f '{{.Container}}' 3e8fc0c
$
Unlike scratch, this image may have files in it. In fact, you can flatten a complex image using this technique.
$ docker create ubuntu # create container from image
6b90bf145c193ef8e4ecb789372d2fd619769a20d96c8f3f586dcfbc501b0611
$ docker export 6b90bf1 > ubuntu.tar # export container fs to tarball
$ docker import - flat_ubuntu < ubuntu.tar
4ef4ffb9514212acf6a19b2eeda8855b8c0445924311043ba5cba6574d40d772
$ docker inspect -f '{{.Container}}' 4ef4ffb
$
It's important to note that while this new image has the exact same files as the original image, it doesn't have the other docker features like environment, volume, entrypoint, etc.
I would not necessarily call this a "base" image. I would call it a "flat" image. I would say a base image is the image you indicate in a Dockerfile FROM directive. In my terminology a base image need not be flat.
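As a side note, the Parent field can be inspected the same way as the Container field above; for the imported "flat" image created earlier it is empty, just like its Container:
$ docker inspect -f '{{.Parent}}' 4ef4ffb
$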
