Azure IoT Edge: start Docker with --net=host so that I can access my IP - spring-boot

In my Java code I would like to find my machine's IP address. The code runs inside a Docker container, and I always get the container's IP address instead of the host's.
I run the container like this:
docker run -p 8080:8080 --privileged --net=host -d 6b45f71550a3
This is my Java code:
InetAddress addr = InetAddress.getLocalHost();
String hostname = InetAddress.getByName(addr.getHostName()).toString();
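Once the container actually shares the host's network stack, one option is to walk the network interfaces instead of relying on hostname resolution, which may still resolve to a loopback entry in /etc/hosts. A minimal sketch, assuming you want the first site-local IPv4 address (the class and method names here are just for illustration):
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Sketch: return the first non-loopback, site-local address of any
// interface that is up; fall back to getLocalHost() if none is found.
public final class HostAddressResolver {
    public static String resolve() throws Exception {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (!nic.isUp() || nic.isLoopback()) {
                continue; // skip interfaces that are down and the loopback device
            }
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                if (addr.isSiteLocalAddress()) {
                    return addr.getHostAddress();
                }
            }
        }
        return InetAddress.getLocalHost().getHostAddress();
    }
}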
I need to modify the deployment.template.json so that the generated container picks up the machine's IP address:
"modules": {
  "MyModule": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "dev.azurecr.io/dev:0.0.1-arm32v7",
      "createOptions": {
        "ExposedPorts": {"8080/tcp": {}},
        "HostConfig": {
          "PortBindings": {
            "8080/tcp": [
              {
                "HostPort": "8080"
              }
            ]
          }
        }
      }
    }
  }
}

I was going to say that you can't do that, but apparently you can by using:
"createOptions": {
  "NetworkingConfig": {
    "EndpointsConfig": {
      "host": {}
    }
  },
  "HostConfig": {
    "NetworkMode": "host"
  }
}
I haven't tried it. I found it here: https://github.com/Azure/iot-edge-v1/issues/517. Maybe that will help.
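For reference, here is a hedged sketch of what the module in deployment.template.json might look like with that workaround applied. It assumes the option from the linked issue still behaves the same on current IoT Edge releases, and it drops the port bindings because a container on the host network already exposes its ports directly on the host:
"MyModule": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "dev.azurecr.io/dev:0.0.1-arm32v7",
    "createOptions": {
      "NetworkingConfig": {
        "EndpointsConfig": {
          "host": {}
        }
      },
      "HostConfig": {
        "NetworkMode": "host"
      }
    }
  }
}
With NetworkMode set to host, the Java code (or the interface walk sketched above) should see the host's interfaces rather than the Docker bridge address.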

Related

Bash redirection in docker container failing when run in ECS task on Amazon Linux 2 instances

I am trying to run an ECS task that contains three containers: postgres, redis, and an image from a private ECR repository. The custom image's container definition has a command that waits until the postgres container can receive traffic, via a bash command:
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
When I run this locally via docker-compose it works, but on the Amazon Linux 2 EC2 machine this is printed when the while loop runs:
/bin/bash: line 1: postgres: Name or service not known
/bin/bash: line 1: /dev/tcp/postgres/5432: Invalid argument
The postgres container runs without error, and the last log line from that container is:
database system is ready to accept connections
I am not sure if this is a Docker network issue or an issue with Amazon Linux 2's bash not being compiled with --enable-net-redirections, which I found explained here.
Task Definition:
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "POSTGRES_DB",
          "value": "metadeploy"
        },
        {
          "name": "POSTGRES_USER",
          "value": "<redacted>"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "<redacted>"
        }
      ],
      "essential": true,
      "image": "postgres:12.9",
      "mountPoints": [],
      "name": "postgres",
      "memory": 1024,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "metadeploy-postgres",
          "awslogs-region": "us-east-1",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "mdp"
        }
      }
    },
    {
      "essential": true,
      "image": "redis:6.2",
      "name": "redis",
      "memory": 1024
    },
    {
      "command": [
        "/bin/bash",
        "-c",
        "while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
      ],
      "environment": [
        {
          "name": "DJANGO_SETTINGS_MODULE",
          "value": "config.settings.local"
        },
        {
          "name": "DATABASE_URL",
          "value": "<redacted-postgres-url>"
        },
        {
          "name": "REDIS_URL",
          "value": "redis://redis:6379"
        },
        {
          "name": "REDIS_HOST",
          "value": "redis"
        }
      ],
      "essential": true,
      "image": "the private ecr image uri built from here https://github.com/SFDO-Tooling/MetaDeploy",
      "links": [
        "redis"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/node_modules",
          "sourceVolume": "AppNode_Modules"
        }
      ],
      "name": "web",
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 8080
        },
        {
          "containerPort": 8000,
          "hostPort": 8000
        },
        {
          "containerPort": 6006,
          "hostPort": 6006
        }
      ],
      "memory": 1024,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "metadeploy-web",
          "awslogs-region": "us-east-1",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "mdw"
        }
      }
    }
  ],
  "family": "MetaDeploy",
  "volumes": [
    {
      "host": {
        "sourcePath": "/app/node_modules"
      },
      "name": "AppNode_Modules"
    }
  ]
}
The corresponding docker-compose.yml contains:
version: '3'
services:
  postgres:
    environment:
      POSTGRES_DB: metadeploy
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: sample_db_password
    volumes:
      - ./postgres:/var/lib/postgresql/data:delegated
    image: postgres:12.9
    restart: always
  redis:
    image: redis:6.2
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: |
      /bin/bash -c 'while !</dev/tcp/postgres/5432; do echo "Waiting for postgres database to start..."; /bin/sleep 1; done; \
      /bin/sh /app/start-server.sh;'
    ports:
      - '8080:8080'
      - '8000:8000'
      # Storybook server
      - '6006:6006'
    stdin_open: true
    tty: true
    depends_on:
      - postgres
      - redis
    links:
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: config.settings.local
      DATABASE_URL: postgres://postgres:sample_db_password@postgres:5432/metadeploy
      REDIS_URL: redis://redis:6379
      REDIS_HOST: redis
    volumes:
      - .:/app:cached
      - /app/node_modules
Do I need to recompile bash to use --enable-net-redirections, and if so how can I do that?
Without bash's net redirection feature, your best bet is to use something like nc or netcat (if available) to determine if the port is open. If those aren't available, it may be worth modifying your app logic to better handle database failure cases.
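For example, a minimal sketch of the same wait loop using nc, assuming it is available in the image:
while ! nc -z postgres 5432; do
  echo "Waiting for postgres database to start..."
  /bin/sleep 1
done
/bin/sh /app/start-server.sh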
Alternatively, a potentially better approach would be:
Adding a healthcheck to the postgres image.
Modifying the web service's depends_on clause to use the "long syntax" and depend on postgres being service_healthy instead of the default service_started (a sketch follows below).
This approach has two key benefits:
The postgres image likely has the tools to detect if the database is up and running.
The web service no longer needs to manually check if the database is ready or not.
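A hedged sketch of that approach for the docker-compose.yml, using pg_isready from the postgres image (the depends_on condition form requires a Compose version that supports the long syntax); the ECS task definition has analogous healthCheck and dependsOn fields with a HEALTHY condition:
services:
  postgres:
    image: postgres:12.9
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d metadeploy"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started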

Access Caddy server API over HTTP

I'm running Caddy server on an EC2 instance.
Right now I'm able to write the config JSON in a file (vim app.json) and load it from the SSH terminal:
curl localhost:2019/load -H 'Content-Type: application/json' -d @app.json
Now I want to load the configuration from another server over HTTP, so I have added the admin configuration to app.json:
{
  "admin": {
    "disabled": false,
    "enforce_origin": false,
    "origins": ["localhost:2019", "103.55.1.2:2019", "54.190.1.2:2019"]
  },
  "apps": {
    "HTTP": {
      "servers": {
        "scanning": {
          "listen": [":443"],
          "routes": [{
            "handle": [{
              "handler": "file_server",
              "root": "/var/www/html/app-frontend"
            }],
            "match": [{
              "host": ["caddy.example.com"]
            }]
          }]
        }
      }
    }
  }
}
Where the IP addresses are:
103.55.1.2: My ISP IP address
54.190.1.2: The EC2 private IP address
I'm trying to get the config from Postman using the EC2 IP address, but it does not work:
http://54.190.1.2:2019/config/
How can I get and load the config in Caddy over HTTP?
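One detail worth checking (an assumption, since the question doesn't show it): by default Caddy's admin endpoint listens only on localhost:2019, and origins merely restricts which request origins are accepted, so a remote request never reaches the API unless the listener is also bound to a non-loopback address. A hedged sketch of such an admin block:
"admin": {
  "listen": "0.0.0.0:2019",
  "enforce_origin": false,
  "origins": ["localhost:2019", "103.55.1.2:2019", "54.190.1.2:2019"]
}
After loading that config locally once, the remote fetch would look like curl http://54.190.1.2:2019/config/. Exposing the admin API on a public interface is risky, so restricting access with a security group or enforce_origin is advisable.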

How to edit Linux container configs and view their hashes in Docker for Windows?

I installed Docker on Windows. It's switched to Linux containers.
When I type docker inspect e3a934c54979 in my console, I see this information:
[
{
...
"Image": "sha256:2359fa12fdedef2af79d9b836a26175808d4b1433b5e7022d2d73c72b2a43b60",
"ResolvConfPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/hostname",
"HostsPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/hosts",
"LogPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8-json.log",
"Name": "/festive_edison",
...
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "80"
}
]
},
...
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44-init/diff:/var/lib/docker/overlay2/028eac1b0f37fd3be798d222f7d1da48a40f0ef9c4470709e63c4c8f322a477f/diff:/var/lib/docker/overlay2/d15e7ce0f29f82d6d3b9537980b766c32e7f6ffc81374cdb26fede3872afed1e/diff:/var/lib/docker/overlay2/efab543606225e581832ef6e2b732a78c82b2f6d9fe662babe09b188f600dd72/diff:/var/lib/docker/overlay2/263366359e8a86cc6c009f70fa00a158dbcbcfd2a4e31d9538c559dd82e29b10/diff:/var/lib/docker/overlay2/32ea6c48b53f4846284e1baac83dffcfb039a53a8d2f33ac2728691160f5d100/diff:/var/lib/docker/overlay2/685745d44609453debf484b2ccf63035532b334e75b9f18a00c5e1253e18841a/diff:/var/lib/docker/overlay2/e30c0a304544255bc9eba90dfb720c332e168b4972df926a79ef27df707889fd/diff:/var/lib/docker/overlay2/a5743532bc060895f0a495249182787322400a1a33fd187b3210895e1ca83129/diff",
"MergedDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/merged",
"UpperDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/diff",
"WorkDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/work"
},
"Name": "overlay2"
},
...
}
]
But Windows doesn't have those directories. It only has "MobyLinuxVM.vhdx" which, I think, contains this stuff.
My question is: how do I edit "config.json" and "hostconfig.json" in this case? How do I view the <GUID>-json.log? How do I view the container's layer hashes (/var/lib/docker/aufs/diff)?
Information from https://blog.jongallant.com/2017/11/ssh-into-docker-vm-windows/
In a Windows command prompt enter:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
chroot /host
From here you'll have access to the /var/lib/docker/containers/ directories for hostconfig.json and the other files.
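From the chroot, a minimal sketch of looking at one container's files, using the paths from the inspect output above (on recent Docker the main config file is config.v2.json rather than config.json):
cd /var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8
cat hostconfig.json       # the HostConfig section shown by docker inspect
cat config.v2.json        # the container configuration
tail e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8-json.log   # the JSON log stream
Note that this installation uses overlay2 rather than aufs, so the layer contents live under the GraphDriver paths shown above (/var/lib/docker/overlay2/...), and edits to these files typically only take effect after the daemon is restarted.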

Docker disconnect all containers from docker network

I have a docker network "my_network". I want to remove it with docker network rm my_network. Before that, I have to disconnect all my containers from this network. I can use docker network inspect and get output like:
[
  {
    "Name": "my_network",
    "Id": "aaaaaa",
    "Scope": "some_value",
    "Driver": "another_value",
    "EnableIPv6": bool_value,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "10.0.0.0/1"
        }
      ]
    },
    "Internal": false,
    "Containers": {
      "bbb": {
        "Name": "my_container_1",
        "EndpointID": "ENDPOITID1",
        "MacAddress": "MacAddress1",
        "IPv4Address": "0.0.0.0/1",
        "IPv6Address": ""
      },
      "ccc": {
        "Name": "my_container_2",
        "EndpointID": "ENDPOINTID2",
        "MacAddress": "MacAddress2",
        "IPv4Address": "0.0.0.0/2",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]
It is okay to disconnect them manually if I have only a few containers, but with 50 containers I have a problem.
How can I disconnect all containers from this network with a single command or a few commands?
docker network inspect has a format option.
That means you can list all Container names with:
docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' network_name
It should then be easy to script reading each name and calling docker network disconnect.
wwerner proposes the following command in the comments:
for i in ` docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name`; do docker network disconnect -f network_name $i; done;
On multiple lines for readability:
for i in ` docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name`;\
do \
docker network disconnect -f network_name $i; \
done;
Adding:
Note that, unlike the format string in the answer above, this one has a space after {{.Name}} so that the names are split by spaces.
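Putting it together with the original goal, a minimal sketch that detaches every container and then removes the network:
for name in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' my_network); do
  docker network disconnect -f my_network "$name"
done
docker network rm my_network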

Mesos / Marathon: forwarding a port makes deployment fail

I have Marathon/Mesos successfully deploying apps, but if I add a port mapping, it doesn't work anymore.
My slave container is run as:
docker run --privileged -v /data/docker/notebook:/notebook:rw -v /etc/localtime:/etc/localtime:ro --net=host -e NFS_PATH=$NFS_PATH -e IP=$IP -e RESOURCES=$RESOURCES -e ATTRIBUTES=$ATTRIBUTES -e HOSTNAME=$HOSTNAME -e MASTER=$MASTER -e SLAVE_PORT=$SLAVE_PORT -d -p 5151:5151 --name $CONTAINER_NAME $IMAGE_NAME
Then, in the slave container, I have to start the daemon by hand because of a strange error [time="2015-10-17T12:27:40.963674511Z" level=fatal msg="Error starting daemon: error initializing graphdriver: operation not permitted"], so I do:
docker -d -D --insecure-registry=localhost:5000 -g /var/test
Then I see my slave on Mesos as a working resource, and I can post an app to Marathon:
{
  "id": "rstudiorocker2",
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "localhost:5000/rocker/rstudio",
      "privileged": true,
      "parameters": [],
      "forcePullImage": true
    }
  }
}
Here the app is instantaneously deployed on the slave. The issue is that rocker listens on port 8787, and I want to access it on another port, so I try to add a port mapping:
{
  "id": "rstudiorocker",
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "192.168.0.38:5000/rocker/rstudio",
      "privileged": true,
      "parameters": [],
      "forcePullImage": true,
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8787, "hostPort": 2036, "protocol": "tcp" },
        { "containerPort": 8787, "hostPort": 2036, "protocol": "udp" }
      ]
    }
  }
}
And here the problem appears: the app stays in the "Staging" state and is never deployed (even if I delete all other apps first) :(
What could be going wrong?
You've tried to map the same container port twice, which is not allowed by Marathon:
"portMappings": [
  { "containerPort": 8787, "hostPort": 2036, "protocol": "tcp" },
  { "containerPort": 8787, "hostPort": 2036, "protocol": "udp" }
]
Marathon will reject this configuration with a message like
{"message":"Bean is not valid","errors":[{"attribute":"ports","error":"Elements must be unique"}]}
Try changing one of the containerPort values, e.g.:
"portMappings": [
  { "containerPort": 8787, "hostPort": 0, "protocol": "tcp" },
  { "containerPort": 8789, "hostPort": 0, "protocol": "udp" }
]
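Since the question states that rocker only listens on 8787, a single TCP mapping may be all that is needed. A hedged sketch of the full app definition with that change, keeping the requested hostPort 2036 (Marathon can also assign one automatically with hostPort 0, as in the answer above):
{
  "id": "rstudiorocker",
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "192.168.0.38:5000/rocker/rstudio",
      "privileged": true,
      "parameters": [],
      "forcePullImage": true,
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8787, "hostPort": 2036, "protocol": "tcp" }
      ]
    }
  }
}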
