Docker fails to provide credentials for the awslogs logging driver on macOS

My docker-compose file:
version: "2"
services:
app:
build:
# Build an image from the Dockerfile in the current directory
context: .
ports:
- 5000:5000
environment:
PORT: 5000
NODE_ENV: production
And my docker-compose.override.yml:
version: "2"
networks:
# This special network is configured so that the local metadata
# service can bind to the specific IP address that ECS uses
# in production
credentials_network:
driver: bridge
ipam:
config:
- subnet: "169.254.170.0/24"
gateway: 169.254.170.1
services:
# This container vends credentials to your containers
ecs-local-endpoints:
# The Amazon ECS Local Container Endpoints Docker Image
image: amazon/amazon-ecs-local-container-endpoints
volumes:
# Mount /var/run so we can access docker.sock and talk to Docker
- /var/run:/var/run
# Mount the shared configuration directory, used by the AWS CLI and AWS SDKs
# On Windows, this directory can be found at "%UserProfile%\.aws"
- $HOME/.aws/:/home/.aws/
environment:
# define the home folder; credentials will be read from $HOME/.aws
HOME: "/home"
# You can change which AWS CLI Profile is used
AWS_PROFILE: "default"
networks:
credentials_network:
# This special IP address is recognized by the AWS SDKs and AWS CLI
ipv4_address: "169.254.170.2"
# Here we reference the application container that we are testing
# You can test multiple containers at a time, simply duplicate this section
# and customize it for each container, and give it a unique IP in 'credentials_network'.
app:
logging:
driver: awslogs
options:
awslogs-region: eu-west-3
awslogs-group: sharingmonsterlog
depends_on:
- ecs-local-endpoints
networks:
credentials_network:
ipv4_address: "169.254.170.3"
environment:
AWS_DEFAULT_REGION: "eu-west-3"
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
I have added my AWS credentials to my mac using the aws configure command and the credentials are stored correctly in ~/.aws/credentials.
I am using Docker Desktop 2.2.0.4, docker-compose 1.25.4, and docker-machine 0.16.2.
When I run docker-compose up I get the following error:
ERROR: for scraper Cannot start service scraper: Failed to initialize logging driver: NoCredentialProviders: no valid providers in chain.
Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: Encountered errors while bringing up the project.
I believe this is because I need to set the AWS credentials in the Docker daemon, but I cannot work out how this is done on macOS High Sierra.

We need to pass the credentials to the Docker daemon itself, not to the containers. On systemd-based Linux, where the Docker daemon is managed by systemd, this is done with a systemd drop-in for the docker service:
Pass Credentials to the awslogs Docker Logging Driver on Ubuntu
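A minimal sketch of that drop-in (the path follows the usual systemd override convention; the values are placeholders):

# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=..."
Environment="AWS_SECRET_ACCESS_KEY=..."

After saving the file, apply it with sudo systemctl daemon-reload followed by sudo systemctl restart docker. The awslogs driver runs inside the daemon process, which is why the daemon's environment is what matters here.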
For Mac, we need to find a way to do something similar, which requires understanding how the Docker daemon is configured and started on macOS.
Apparently there is an open issue about the Docker for Mac daemon not being able to receive environment variables, which limits the options; the person who posted that issue ended up using docker-cloudwatchlogs instead:
Add ability to set environment variables for docker daemon
A few approaches for Mac are discussed in this Stack Overflow question:
How do I provide credentials to the docker awslogs driver using Docker for Mac?
However, since this setup uses Docker Compose, another option could be to pass the AWS credentials as environment variables using Compose's own features:
Environment variables in Compose
Or simply set the environment variables when running docker-compose on the command line:
AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=... docker-compose up ...
Please refer to Docker for Mac can't use loaded environment variable from file as well.
Regarding IAM permissions, make sure the AWS identity behind the credentials has the permissions specified in the Docker documentation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Related

Unable to connect from Spring Boot to Dockerized Redis from an outside/inside machine

I am connecting to Redis from a Spring Boot app on a machine other than the one where the Redis server Docker container is running. The app cannot connect; it hangs until the request times out. Meanwhile, if I try to connect:
From inside the machine where the Redis container is running, with the host set to localhost, I can connect. I don't know why connecting with a numerical IP or a hostname fails; it only works with "localhost".
From an outside machine where the container is not running, using a Redis GUI management client, I can also connect.
application.properties:
spring.redis.host=pc-1
spring.redis.port=6379
pc-1 is an alias for a numerical IP; I'm using the Windows hosts file to alias/redirect it, as sketched below.
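For example, a hypothetical entry in C:\Windows\System32\drivers\etc\hosts (the IP is illustrative):

192.168.1.50    pc-1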
.env:
REDIS_PORT=6379
docker-compose.yml:
redis:
  image: redis:latest
  ports:
    - "${REDIS_PORT}:6379"
  command:
    # - redis-server
    # - --requirepass "${REDIS_PASSWORD}"
  networks:
    - redis
  healthcheck:
    test: ["CMD-SHELL", "redis-cli ping"]
    interval: 10s
    timeout: 10s
    retries: 3
I need help on this issue.
If you start the service with docker compose run, pass the --service-ports flag so that the ports you've defined in the compose file are published (docker compose up publishes them automatically).
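For example, with the service above:

docker compose run --service-ports redis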
Other debugging tips:
Hardcode the ${REDIS_PORT} variable in case the value is not getting set, or give it a default like ${REDIS_PORT:-6379}
Pass the env file explicitly, like docker compose --env-file ./somedir/.env up, in case the env file is not being picked up
Use docker inspect to get the container status and check the networking info, as sketched below
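For instance (the container name is hypothetical):

docker inspect --format '{{json .NetworkSettings.Ports}}' redis-container
docker inspect --format '{{json .NetworkSettings.Networks}}' redis-container

The first shows which container ports are published to the host; the second shows which networks the container is attached to and its IP on each.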

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the API's container name inside my Laravel .env file and have it dynamically resolve to the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default, Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (mongo's default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies to your setup.
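One caveat for your case: your two containers live in separate compose projects, so they will not share a default network. A sketch of one way to bridge them, assuming you pre-created the network with docker network create my-network: mark the network external in each project's docker-compose.yml and attach the service to it (service and image names are illustrative):

networks:
  my-network:
    external: true

services:
  my-api:
    image: my-api:latest
    networks:
      - my-network

Once both containers are attached this way, docker exec -ti laradock-workspace-1 ping my-api should resolve the service by name.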

Running lambdas in localstack in gitlab-ci

So I have localstack running locally (on my laptop) and can deploy a serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in gitlab-ci.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
Localstack gets started and the deployment works fine. But as soon as the lambda is invoked (in an integration test), localstack tries to create a container for the lambda to run in, and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\\nmust specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output:\\nerror during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host\
With DOCKER_HOST set to tcp://localhost:2375 it complains too:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?\\nmust specify at least one container source
Did anyone ever get lambdas to run within localstack within shared gitlab runners?
Thanks for your help :)
Running Docker-in-Docker is usually a bad idea, since it is a big security hole: granting access to the local Docker daemon equals granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation - https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding -
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section in your runner config, as sketched below.
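In context, the runner's config.toml would look roughly like this (a sketch; unrelated fields omitted):

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]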
The question is, though: why do you need Docker at all? According to https://github.com/localstack/localstack, setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach for your problem, and won't compromise the security of your runner host.
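In the .gitlab-ci.yml above, that is a one-line change (sketch):

variables:
  LAMBDA_EXECUTOR: local  # instead of "docker"; lambdas run inside the localstack service itself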

Error "ssl handshake failure" when use docker container

I have an application written in Java (Spring Boot). When I run it manually (with the java -jar command) it works fine without any problem.
But when I use a Docker container (an image built on Alpine, with containers running in Docker Swarm) it doesn't work: my app can't send requests out and gets the error "SSL handshake failure".
I checked it with --network host and got the same result. I also built a new image and imported the cert file into the Java cacerts and /etc/ssl/certs in Alpine, but it did not work. Note that when running the app manually I don't import any cert file on the host.
Can anyone help in this case?
Thanks,
Hamid
Using network mode bridge or host fixes this SSL error. The error still occurs on a Docker overlay network:
version: '3.7'
services:
  httpstest:
    hostname: httpstest
    container_name: httpstest
    image: httpstest-service:latest
    environment:
      - TZ=Asia/Ho_Chi_Minh
    ports:
      - "8288:8080"
networks:
  default:
    name: kaio_io
    driver: bridge
    # driver: overlay
    # driver: host
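Separately, if the root cause turns out to be a CA certificate missing from the JVM truststore (which the question's cacerts import attempt suggests), a Dockerfile sketch for baking it in (the base image and file names are illustrative):

FROM eclipse-temurin:17-jre-alpine
COPY corp-root-ca.crt /tmp/corp-root-ca.crt
# Import the CA into the bundled JVM truststore (default password "changeit")
RUN keytool -importcert -noprompt -trustcacerts \
      -alias corp-root-ca -file /tmp/corp-root-ca.crt \
      -cacerts -storepass changeit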

How to connect my Spring Boot app to a Redis container on Docker?

I have created a Spring app and I want to connect it to a Redis server deployed with docker-compose. I put in the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a connection exception, so how can I know on which host Redis is running and how to connect to it?
Here is my docker-compose file:
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the Docker Compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name
If you want to access Redis by service name ('redis' in this case), the Spring Boot application must also be deployed as a docker-compose service, but it doesn't appear in the docker-compose file you've provided in the question, so please add it.
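A sketch of what adding the app as a service could look like, nested under services: alongside redis (the build context and port are illustrative; SPRING_REDIS_HOST is Spring Boot's environment-variable form of spring.redis.host):

app:
  build: .
  ports:
    - '8080:8080'
  environment:
    - SPRING_REDIS_HOST=redis
  depends_on:
    - redis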
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' to reach the Redis container, since its port 6379 is published to the host.
Another approach you can use is a user-defined Docker network. Below are the steps to follow:
Create a docker network for redis
docker network create redis-docker
Spin up a Redis container in the redis-docker network.
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network.
docker network inspect redis-docker
Copy the "IPv4Address" IP and paste it into application.yml.
Now build and start your application.
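Note that a container's IP can change when it is recreated, so instead of pasting a raw IP you could run the app container on the same network and address Redis by its container name (a sketch; the app image name is hypothetical):

docker run -d --net redis-docker --name my-app -e SPRING_REDIS_HOST=redis-stack my-app:latest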
