Docker for Windows 10 and Volume - windows

All,
I'm learning Docker on my Windows 10 desktop. Windows is the Pro edition and Docker is 18.09.
When I run the command below:
docker run -it nanoserver/iis -v C:\ProgramData\Docker\volumes\vol01:C:\vol01 cmd.exe
I get the following error:
docker: Error response from daemon: container
5a1229eca277cbddeefd5637e69554458003c54be3f30cc44ca41c8fa68a4a94
encountered an error during CreateProcess: failure in a Windows system
call: The system cannot find the file specified. (0x2) [Event Detail:
Provider: 00000000-0000-0000-0000-000000000000] extra info:
{"CommandLine":"-v C:\ProgramData\Docker\volumes\vol01:C:\vol01
cmd.exe","WorkingDirectory":"C:\","EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"ConsoleSize":[63,237]}.
The volume does exist -
docker volume inspect vol01
[
{
"CreatedAt": "2018-12-26T03:01:01-05:00",
"Driver": "local",
"Labels": {},
"Mountpoint": "C:\ProgramData\Docker\volumes\vol01\_data",
"Name": "vol01",
"Options": {},
"Scope": "local"
} ]
I don't know what is wrong. Can someone point me in the right direction?
Thanks,
rgn

You should declare volumes (and any other options) before the image name for docker run. Consider:
docker run -it -v C:\ProgramData\Docker\volumes\vol01:C:\vol01 nanoserver/iis cmd.exe
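The CommandLine field in the error message shows what went wrong: everything after the image name was handed to the container as its command, so -v ... was executed inside the container instead of being parsed as a mount. A simplified Python model of that positional split (an illustration only, not Docker's real parser; the option table is partial and hypothetical):

```python
# Simplified model of how `docker run` splits argv: options before the first
# non-option token belong to docker, the first non-option token is the image,
# and everything after the image becomes the container's command.
OPTS_WITH_VALUE = {"-v", "--volume", "-p", "--publish", "--name"}  # partial list

def partition(argv):
    opts, i = [], 0
    while i < len(argv) and argv[i].startswith("-"):
        opts.append(argv[i])
        if argv[i] in OPTS_WITH_VALUE:  # this option consumes the next token
            i += 1
            opts.append(argv[i])
        i += 1
    return opts, argv[i], argv[i + 1:]

# Original (failing) order: -v lands inside the container command,
# matching the CommandLine value in the error message.
print(partition(["-it", "nanoserver/iis", "-v", r"C:\vol01:C:\vol01", "cmd.exe"]))
# Fixed order: -v is consumed as a docker option before the image name.
print(partition(["-it", "-v", r"C:\vol01:C:\vol01", "nanoserver/iis", "cmd.exe"]))
```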

Related

Unable to debug with SAM local invoke when using layers extension in Lambdas

I'm trying a sample Node.js example of AWS Lambda Layers; something during the registration phase breaks the debugger when run locally in VS Code using sam local invoke --port 6767 ... on a Lambda that includes this layer.
The following is a snap of the debug logs when launched from VS Code:
START RequestId: .....-....-....-.....-....... Version: $LATEST
nodejs-example-extension launching extension
Debugger listening on ws://0.0.0.0:6767/.....-....-....-.....-.......
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
hello from extension
register
extensionId .....-....-....-.....-.......
next
Starting inspector on 0.0.0.0:6767 failed: address already in use
invoke
next
...
Notice the Starting inspector on 0.0.0.0:6767 failed: address already in use error line in the logs; after that, all set breakpoints are ignored and it's impossible to debug any of the Lambdas.
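That "address already in use" line is an OS-level bind failure: a second process tried to listen on a port that is already taken. My guess (an assumption, not confirmed by the logs) is that both the handler runtime and the layer's extension process end up trying to start an inspector on the same port 6767. The bind failure itself can be reproduced with plain sockets, independent of Node or SAM:

```python
# Reproduce the "address already in use" failure: two sockets cannot both
# bind the same TCP port, just as two Node processes cannot both start an
# inspector on 6767.
import errno
import socket

first = socket.socket()
first.bind(("127.0.0.1", 0))   # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)                # the first "inspector" now owns the port

second = socket.socket()
try:
    second.bind(("127.0.0.1", port))  # second bind on the same port
    raised = None
except OSError as exc:
    raised = exc.errno                # EADDRINUSE
finally:
    second.close()
    first.close()

print(raised == errno.EADDRINUSE)  # -> True
```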
this is the VS Code debug task setup:
{
"name": "lambda_with_example_layer",
"type": "pwa-node",
"request": "attach",
"address": "localhost",
"port": 6767,
"localRoot": "${workspaceFolder}/lambda_with_example_layer",
"sourceMaps": true,
"remoteRoot": "/var/task",
"protocol": "inspector",
"stopOnEntry": false,
"skipFiles": [
"<node_internals>/**"
]
}
Is there any debugger/SAM/Docker configuration that I missed? Or has something deeper in the layer execution flow broken debugging in a container with SAM local?

Problem connecting Apache Superset running inside a Docker container to Kylin

I have Apache Superset running inside a Docker container, and I want to connect it to a running Apache Kylin (not inside Docker).
I receive the following error whenever I test the connection with this SQLAlchemy URI 'kylin://ADMIN#KYLIN#local:7070/test':
[SupersetError(message='(builtins.NoneType) None\n(Background on this error at: http://sqlalche.me/e/13/dbapi)', error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Apache Kylin', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
"POST /api/v1/database/test_connection HTTP/1.1" 422 -
superset_app | 2021-07-02 18:44:17,224:INFO:werkzeug:172.28.0.1 - - [02/Jul/2021 18:44:17] "POST /api/v1/database/test_connection HTTP/1.1" 422 -
You might need to check your superset_app network first.
Use docker inspect [container name], i.e.:
docker inspect superset_app
In my case, it is running in the superset_default network:
"Networks": {
"superset_default": {
.....
}
}
Next, you need to connect your kylin Docker container to this network, i.e.:
docker network connect superset_default kylin
(kylin is your container name.)
Now your superset_app and kylin containers are exposed within the same network. You can docker inspect your kylin container:
docker inspect kylin
and find the IPAddress
"Networks": {
"bridge": {
....
},
"superset_default": {
...
"IPAddress": "172.18.0.5",
...
}
}
In superset you can now connect your kylin docker container
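If you need that address programmatically, it can be pulled out of the docker inspect JSON; below is a minimal sketch assuming the shape shown above (the bridge entry's value is invented for illustration):

```python
# Extract a container's IP on a specific network from `docker inspect` JSON.
# The path NetworkSettings -> Networks -> <network> -> IPAddress matches the
# (trimmed) inspect output shown above.
import json

inspect_json = """
[{"NetworkSettings": {"Networks": {
    "bridge":           {"IPAddress": "172.17.0.2"},
    "superset_default": {"IPAddress": "172.18.0.5"}
}}}]
"""

data = json.loads(inspect_json)
ip = data[0]["NetworkSettings"]["Networks"]["superset_default"]["IPAddress"]
print(ip)  # -> 172.18.0.5
```

docker inspect can also print this directly with a format template, e.g. docker inspect -f '{{ .NetworkSettings.Networks.superset_default.IPAddress }}' kylin.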
We have hosted Dremio and Superset on an AKS cluster in Azure, and we are trying to connect Superset to the Dremio database (lakehouse) to fetch some dashboards. We have installed all the required drivers (arrowflight, sqlalchemy_dremio and unixodbc-dev) to establish the connection.
Strangely, we are not able to connect to Dremio from the Superset UI using these connection strings:
dremio+flight://admin:password@dremiohostname.westeurope.cloudapp.azure.com:32010/dremio
dremio://admin:adminpass@dremiohostname.westeurope.cloudapp.azure.com:31010/databaseschema.dataset/dremio?SSL=0
Here’s the error:
(builtins.NoneType) None\n(Background on this error at: https://sqlalche.me/e/14/dbapi)", "error_type": "GENERIC_DB_ENGINE_ERROR", "level": "error", "extra": {"engine_name": "Dremio", "issue_codes": [{"code": 1002, "message": "Issue 1002 - The database returned an unexpected error."}]}}]
However, when trying from inside the Superset pod using this Python script [here][1], the connection goes through without any issues.
PS - One point to note is that we have not enabled SSL certificates for our hostnames.

Docker Desktop for windows fails on search/build with corporate proxy

I have installed Docker Desktop for Windows (Docker version 18.09.2, build 6247962), and I'm not able to build an image. Even a docker search does not seem to work.
The error message (for example, when doing a docker search) is:
Error response from daemon: Get https://index.docker.io/v1/search?q=ubuntu&n=25: proxyconnect tcp: dial tcp 172.17.14.133:3128: connect: no route to host
My office is behind a proxy, so in the "Proxies" settings of Docker Desktop I set http://172.17.14.133:3128 for both HTTP and HTTPS. But it still does not seem to work.
I have defined some ENV variables (both user and system) like this:
HTTPS_PROXY=http://proxypmi.tradyso.com:3128
HTTP_PROXY=http://proxypmi.tradyso.com:3128
Also:
C:\Users\my.user\AppData\Roaming\Docker\http_proxy.json:
{
"http": "http://172.17.14.133:3128",
"https": "http://172.17.14.133:3128",
"exclude": null,
"transparent_http_ports": [],
"transparent_https_ports": []
}
C:\Users\my.user\AppData\Roaming\Docker\settings.json:
{
"settingsVersion": 1,
"autoStart": false,
"checkForUpdates": true,
"analyticsEnabled": false,
"displayedWelcomeWhale": true,
"displayed14393Deprecation": false,
"displayRestartDialog": true,
"displaySwitchWinLinContainers": true,
"latestBannerKey": "",
"debug": false,
"memoryMiB": 2048,
"swapMiB": 1024,
"cpus": 2,
"diskPath": null,
"diskSizeMiB": 64000000000,
"networkCIDR": "10.0.75.0/24",
"proxyHttpMode": true,
"overrideProxyHttp": "http://172.17.14.133:3128",
"overrideProxyHttps": "http://172.17.14.133:3128",
"overrideProxyExclude": null,
"useDnsForwarder": true,
"dns": "10.44.24.10",
"kubernetesEnabled": false,
"showKubernetesSystemContainers": false,
"kubernetesInitialInstallPerformed": false,
"cliConfigCreationDate": "03/22/2019 12:23:58",
"linuxDaemonConfigCreationDate": "03/22/2019 12:22:19",
"windowsDaemonConfigCreationDate": null,
"versionPack": "default",
"sharedDrives": {},
"executableDate": "",
"useWindowsContainers": false,
"swarmFederationExplicitlyLoggedOut": false,
"activeOrganizationName": null,
"exposeDockerAPIOnTCP2375": false
}
C:\Users\my.user\.docker\config.json:
{
"stackOrchestrator": "swarm",
"auths": {},
"credsStore": "wincred",
"proxies":
{
"default":
{
"httpProxy": "http://172.17.14.133:3128",
"httpsProxy": "http://172.17.14.133:3128",
"noProxy": ""
}
}
}
I also tried passing build args to docker build:
docker build --build-arg HTTP_PROXY=http://172.17.14.133:3128 --build-arg HTTPS_PROXY=http://172.17.14.133:3128 ...
Finally, in the Docker Desktop network configuration, I have tried the DNS setting both on "Automatic" and on Manual (using my corporate DNS servers).
None of this has worked.
Any hint on what I should do?
Thank you.
A colleague found the problem:
By default, the bridge network that Docker creates uses the same subnet as our office (172.17.0.0/16), and that causes trouble with the proxy IP address (172.17.14.133).
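The overlap is easy to confirm with Python's standard library (172.19.0.0/16 is the replacement subnet used in the fix):

```python
# Show why the proxy was unreachable: its address falls inside Docker's
# default bridge subnet, so packets to it were routed into the bridge
# instead of out to the office network.
import ipaddress

proxy = ipaddress.ip_address("172.17.14.133")             # corporate proxy
default_bridge = ipaddress.ip_network("172.17.0.0/16")    # docker default
relocated_bridge = ipaddress.ip_network("172.19.0.0/16")  # after the fix

print(proxy in default_bridge)    # -> True  (the conflict)
print(proxy in relocated_bridge)  # -> False (proxy reachable again)
```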
To solve this:
[Edit]: for a simpler method, use the following:
In the daemon configuration, add "bip": "new_subnet". For example: "bip": "172.19.0.1/16". Then restart Docker.
Now, you won't even need to pass the extra --network="docker_gwbridge" parameter to the commands.
This shorter solution may not work with older versions of Docker for Windows, so consider the original answer below if this option does not work.
[Edit]: original method for old versions of docker for windows:
The bridge network cannot be deleted, but we can tell Docker not to create it.
Go to Daemon Settings, Advanced, and add "bridge": "none" to the configuration.
After applying the changes, Docker will restart, and now we will be able to create our own custom bridge network.
In this example, we are going to use 172.19.0.0/16.
Open a console and write:
docker network create --subnet=172.19.0.0/16 --gateway 172.19.0.1 -o com.docker.network.bridge.enable_icc=true -o com.docker.network.bridge.name=docker_gwbridge -o com.docker.network.bridge.enable_ip_masquerade=true docker_gwbridge
Then we can run docker network ls to check that the previous command was successful:
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
2a3635a49ffa   docker_gwbridge   bridge    local
4e9ec758ee9f   host              host      local
737c6c5fc82b   none              null      local
Now do a docker search ubuntu (for example). It should be able to connect to the internet and fetch the images.
Important: From now on, some commands that need internet access will need to specify the new docker network with the extra parameter --network="docker_gwbridge"
For example a docker build command could be:
docker build --network="docker_gwbridge" --build-arg http_proxy=http://172.17.14.133:3128 --build-arg https_proxy=http://172.17.14.133:3128 -t foobar .

mup setup/deploy - ECONNREFUSED on 127.0.0.1

I'm trying to mup deploy the todos example of Meteor to a Vagrant VM running Ubuntu 14.04 LTS x64.
Meteor Up supports Windows (I'm on Windows 7):
You can install and use Meteor Up from Linux, Mac and Windows.
This is my c:\code\todos\mup.json:
{
"servers": [
{
"host": "127.0.0.1",
"port": 2222,
"username": "vagrant",
"password": "vagrant"
}
],
"setupMongo": true,
"setupNode": true,
"nodeVersion": "0.12.4",
"setupPhantom": true,
"enableUploadProgressBar": false,
"appName": "todos-app",
"app": "/code/todos",
"env": {
"ROOT_URL": "http://127.0.0.1",
"PORT": "3001", // The port you want to bind to on your server.
"UPSTART_UID": "vagrant" // The user you want to run meteor as.
},
"deployCheckWaitTime": 30
}
My Vagrant VM is up and PuTTYTray is connected via vagrant:vagrant@127.0.0.7:2222. Yet mup deploy fails:
C:\code\todos>mup deploy
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
" Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup "
Building Started: /code/todos
? Can't build for mobile on Windows. Skipping the following platforms:
android, ios
Started TaskList: Deploy app 'todos-app' (linux)
[127.0.0.1] - Uploading bundle
events.js:85
throw er; // Unhandled 'error' event
^
Error: connect ECONNREFUSED
at exports._errnoException (util.js:746:11)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1010:19)
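ECONNREFUSED means the TCP connection reached a host, but nothing there accepted it on that port: typically the service isn't listening, or a port forward isn't delivering the traffic where expected. A minimal stdlib reproduction of the error class:

```python
# Reproduce ECONNREFUSED: connect to a loopback port that nothing listens on.
import socket

placeholder = socket.socket()
placeholder.bind(("127.0.0.1", 0))   # reserve a free port number...
port = placeholder.getsockname()[1]
placeholder.close()                  # ...then free it so nothing listens

try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    refused = False
except ConnectionRefusedError:       # errno ECONNREFUSED
    refused = True

print(refused)  # -> True
```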
Same for mup setup. And in the VM mup deploy encounters a "weird error".
Should I downgrade mup?
Downgrading mup doesn't help:
C:\code\todos>npm install -g mup@0.9.7
C:\Users\Cees.Timmerman\AppData\Roaming\npm\mup -> C:\Users\Cees.Timmerman\AppData\Roaming\npm\node_modules\mup\bin\mup
mup@0.9.7 C:\Users\Cees.Timmerman\AppData\Roaming\npm\node_modules\mup
├── colors@0.6.2
├── underscore@1.7.0
├── uuid@1.4.2
├── async@0.9.2
├── rimraf@2.4.0 (glob@4.5.3)
├── cjson@0.3.1 (jsonlint@1.6.0)
└── nodemiral@0.3.11 (debug@0.7.4, ejs@0.8.8, handlebars@1.0.12)
C:\code\todos>mup -v
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
sshpass required for password based authentication: refer http://git.io/_vHbvQ
C:\code\todos>mup deploy
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
sshpass required for password based authentication: refer http://git.io/_vHbvQ
SSHPass doesn't work on Windows.
Manually setting up my Meteor app worked with help from this answer.

packer building amazon-chroot - simple example does not work

I'm trying to build a CentOS Amazon AMI using Packer. I am using the amazon-chroot builder.
The AMI exists, but I am getting this build error:
[root#ip-10-32-11-16 retel-base]# packer build retel-base.json
amazon-chroot output will be in this color.
==> amazon-chroot: Gathering information about this EC2 instance...
==> amazon-chroot: Inspecting the source AMI...
==> amazon-chroot: Couldn't find root device!
Build 'amazon-chroot' errored: Couldn't find root device!
==> Some builds didn't complete successfully and had errors:
--> amazon-chroot: Couldn't find root device!
==> Builds finished but no artifacts were created.
cat retel-base.json
{
"variables": {
"ACCESS_KEY_ID": "{{env `ACCESS_KEY_ID`}}",
"SECRET_ACCESS_KEY": "{{env `SECRET_ACCESS_KEY`}}"
},
"builders": [{
"type": "amazon-chroot",
"access_key": "{{user `ACCESS_KEY_ID`}}",
"secret_key": "{{user `SECRET_ACCESS_KEY`}}",
"source_ami":"ami-a40df4cc",
"ami_name": "base image built with packer {{timestamp}}"
}]
}
I think this might be due to a mismatch between the name of the root device and the block device mapping.
In the official CentOS AMIs, the root device is named /dev/sda, but the block device mapping only lists /dev/sda1, which is apparently a partition on the root device.
The Aminator by Netflix has a similar problem with partitioned volumes: https://github.com/Netflix/aminator/issues/129
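The suspected mismatch can be checked directly against the image metadata. A sketch using the field names from the EC2 DescribeImages response (the sample values mirror the CentOS AMI layout described above):

```python
# Flag AMIs whose RootDeviceName does not appear among the block device
# mappings, which is the situation amazon-chroot appears to choke on.
def root_device_missing(image):
    mapped = {m["DeviceName"] for m in image["BlockDeviceMappings"]}
    return image["RootDeviceName"] not in mapped

# Layout described above: root device /dev/sda, but only the /dev/sda1
# partition is listed in the mappings.
centos_ami = {
    "RootDeviceName": "/dev/sda",
    "BlockDeviceMappings": [{"DeviceName": "/dev/sda1"}],
}
print(root_device_missing(centos_ami))  # -> True

# A conventional AMI where the names line up:
plain_ami = {
    "RootDeviceName": "/dev/xvda",
    "BlockDeviceMappings": [{"DeviceName": "/dev/xvda"}],
}
print(root_device_missing(plain_ami))  # -> False
```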