Bucket does not exist with LocalStack and S3 - Go

I'm trying to run LocalStack via docker-compose and create an S3 bucket from Go.
I'm using this docker-compose file:
and connecting to S3 with:
and creating the bucket with: aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket tags
but I keep getting a "Bucket not exists" error every time!
Help, please.

Hi – please update your Docker Compose configuration to match the current LocalStack setup (single gateway port 4566 and the /var/lib/localstack volume):
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- DEBUG=${DEBUG-}
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
You can now create an S3 bucket using the AWS CLI:
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket tags
If you run into trouble, check whether LocalStack is running properly:
curl localhost:4566/_localstack/health
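Since the question connects from Go, here is a minimal sketch of an S3 client pointed at the LocalStack gateway, assuming aws-sdk-go-v2 (BaseEndpoint and UsePathStyle are fields on s3.Options in recent SDK versions; the region and credentials are just placeholders that LocalStack will accept):

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    // LocalStack accepts any static credentials; "test"/"test" is a common choice.
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion("us-east-1"),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("test", "test", "")),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Point the client at the LocalStack gateway and force path-style addressing
    // so the bucket name is not resolved as a virtual-host subdomain.
    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String("http://localhost:4566")
        o.UsePathStyle = true
    })

    // Check that the bucket created with the CLI command above is visible to the SDK.
    if _, err := client.HeadBucket(context.TODO(), &s3.HeadBucketInput{Bucket: aws.String("tags")}); err != nil {
        log.Fatalf("bucket check failed: %v", err)
    }
    log.Println("bucket exists")
}

If the SDK still reports the bucket as missing, the usual suspects are a client that was never pointed at http://localhost:4566, or virtual-host-style addressing being used instead of path-style.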

Related

Cannot open Minio in browser after dockerizing it in Spring Boot App

I have a problem opening MinIO in the browser. I just created a Spring Boot app that uses it.
Here is my application.yaml file:
server:
  port: 8085
spring:
  application:
    name: springboot-minio
minio:
  endpoint: http://127.0.0.1:9000
  port: 9000
  accessKey: minioadmin   # Login Account
  secretKey: minioadmin   # Login Password
  secure: false
  bucket-name: commons    # Bucket Name
  image-size: 10485760    # Maximum size of picture file
  file-size: 1073741824   # Maximum file size
Here is my docker-compose.yaml file:
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
I run it with these commands:
1) docker-compose up -d
2) docker ps -a
3) docker run minio/minio:latest
Here is the result:
C:\Users\host\IdeaProjects\SpringBootMinio>docker run minio/minio:latest
NAME:
minio - High Performance Object Storage
DESCRIPTION:
Build high performance data infrastructure for machine learning, analytics and application data workloads with MinIO
USAGE:
minio [FLAGS] COMMAND [ARGS...]
COMMANDS:
server start object storage server
gateway start object storage gateway
FLAGS:
--certs-dir value, -S value path to certs directory (default: "/root/.minio/certs")
--quiet disable startup information
--anonymous hide sensitive information from logging
--json output server logs and startup information in json format
--help, -h show help
--version, -v print the version
VERSION:
RELEASE.2022-01-08T03-11-54Z
When I open 127.0.0.1:9000 in the browser, I can't reach the MinIO login page.
How can I fix my issue?
The MinIO documentation includes a MinIO Docker Quickstart Guide that has some recipes for starting the container. The important thing here is that you cannot just docker run minio/minio; it needs a command to run, probably server. This also needs to be translated into your Compose setup.
The first example on that page breaks down like so:
docker run \
  -p 9000:9000 -p 9001:9001 \             # publish ports
  -e "MINIO_ROOT_USER=..." \              # set environment variables
  -e "MINIO_ROOT_PASSWORD=..." \
  quay.io/minio/minio \                   # image name
  server /data --console-address ":9001"  # command to run
That final command is important. In your example where you just docker run the image and get a help message, it's because you omitted the command. In the Compose setup you also don't have a command: line; if you look at docker-compose ps I expect you'll see the container is exited, and docker-compose logs minio will probably show the same help message.
You can include that command in your Compose setup with command::
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: "..."
      MINIO_ROOT_PASSWORD: "..."
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
    command: server /data --console-address :9001   # <-- add this
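For a quick connectivity check against the fixed container, here is a minimal sketch using the Go MinIO client (minio-go v7) rather than the Spring Boot app; the credentials and bucket name are taken from the question's application.yaml:

package main

import (
    "context"
    "log"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // Connect to the MinIO API port published by the compose file above.
    client, err := minio.New("127.0.0.1:9000", &minio.Options{
        Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
        Secure: false,
    })
    if err != nil {
        log.Fatal(err)
    }

    // If the server is up, this returns without error
    // (false simply means the bucket has not been created yet).
    exists, err := client.BucketExists(context.Background(), "commons")
    if err != nil {
        log.Fatalf("cannot reach MinIO: %v", err)
    }
    log.Printf("bucket %q exists: %v", "commons", exists)
}

The browser login page itself is the console, which this compose file serves on port 9001 once --console-address is set.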

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached the container to the default network via docker run.
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows no tables when one definitely exists. I had assumed, naively, that since I can reach port 8000 of the same container through different endpoints, I should see the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs, lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
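For reference, the endpoint override described in the question might look roughly like this with aws-sdk-go-v2; newLocalClient, the region, and the dummy credentials are illustrative, and the endpoint switches between http://localhost:8000 (from the host) and http://dynamo-local:8000 (from a container attached to test-net). One common reason the CLI and the SDK can see different tables is that DynamoDB Local keeps separate data per access key and region unless it is started with -sharedDb.

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

// newLocalClient (hypothetical helper) builds a DynamoDB client against a local
// endpoint such as "http://localhost:8000" or "http://dynamo-local:8000".
func newLocalClient(ctx context.Context, endpoint string) (*dynamodb.Client, error) {
    cfg, err := config.LoadDefaultConfig(ctx,
        config.WithRegion("eu-west-1"),
        // Use the same dummy credentials from the SDK and from the AWS CLI so
        // both talk to the same DynamoDB Local data store.
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("local", "local", "")),
    )
    if err != nil {
        return nil, err
    }
    return dynamodb.NewFromConfig(cfg, func(o *dynamodb.Options) {
        o.BaseEndpoint = aws.String(endpoint)
    }), nil
}

func main() {
    client, err := newLocalClient(context.Background(), "http://localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    out, err := client.ListTables(context.Background(), &dynamodb.ListTablesInput{})
    if err != nil {
        log.Fatal(err)
    }
    log.Println("tables:", out.TableNames)
}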

How to fix `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` error

I am developing an application which reads a message off an SQS queue, does some work with that data, and publishes the result to a Kafka topic. In order to test locally, I'd like to set up a Kafka image in my Docker build. I am currently able to spin up aws-cli, LocalStack, and my app's containers locally using docker-compose. Separately, I am able to spin up Kafka and ZooKeeper without a problem as well. I am unable to get my application to communicate with Kafka.
I've tried using two separate compose files, and also fiddled with the networks. Finally, I've referenced https://rmoff.net/2018/08/02/kafka-listeners-explained/.
Here is my docker-compose file:
version: '3.7'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    env_file: .env
    ports:
      # LocalStack endpoints for the various APIs. Format is localhost:container
      - '4563-4584:4563-4584'
      - '8080:8080'
    environment:
      - SERVICES=sns:4575,sqs:4576
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # store data locally in the 'localstack' folder
      - './localstack:/tmp/localstack'
    networks:
      - my_network
  aws:
    image: mesosphere/aws-cli
    container_name: aws-cli
    # copy local JSON_DATA folder contents into the aws-cli container's app folder
    #volumes:
    #  - ./JSON_DATA:/app
    env_file: .env
    # bash entrypoint needed for multiple commands
    entrypoint: /bin/sh -c
    command: >
      " sleep 10;
      aws --endpoint-url=http://localstack:4576 sqs create-queue --queue-name input_queue;
      aws --endpoint-url=http://localstack:4575 sns create-topic --name input_topic;
      aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:example_topic --protocol sqs --notification-endpoint http://localhost:4576/queue/input_queue; "
    networks:
      - my_network
    depends_on:
      - localstack
  my_app:
    build: .
    image: my_app
    container_name: my_app
    env_file: .env
    ports:
      - '9000:9000'
    networks:
      - my_network
    depends_on:
      - localstack
      - aws
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - my_network
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      # For more details see https://rmoff.net/2018/08/02/kafka-listeners-explained/
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network
networks:
  my_network:
I would hope to see no errors as a result of publishing to this topic. Instead, I'm getting:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Any ideas what I may be doing wrong? Thank you for your help.
You've made the broker only resolvable within the Kafka container itself (or from your host to the container) by setting the listeners only to localhost.
If you want another Docker service to be able to reach that container, you'll have to add <some protocol>://kafka:<some port> to the advertised listeners, and make the listeners bind to something other than localhost.
The protocol also has to be added to KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
FWIW, that blog post should cover all of those bases.
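On the application side, the broker list has to use that advertised name as well. A minimal producer sketch, assuming the Go client is Sarama (the error message in the title is Sarama's) and that the broker now advertises kafka:9092 inside my_network; the topic name comes from the compose file above:

package main

import (
    "log"

    "github.com/Shopify/sarama"
)

func main() {
    cfg := sarama.NewConfig()
    cfg.Producer.Return.Successes = true

    // From another container on my_network, dial the broker by its service name;
    // this only works once KAFKA_ADVERTISED_LISTENERS advertises kafka:9092
    // rather than localhost:9092.
    producer, err := sarama.NewSyncProducer([]string{"kafka:9092"}, cfg)
    if err != nil {
        // This is where "client has run out of available brokers" shows up
        // when the advertised listener still points at localhost.
        log.Fatal(err)
    }
    defer producer.Close()

    partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
        Topic: "output_topic",
        Value: sarama.StringEncoder("hello"),
    })
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("wrote to partition %d at offset %d", partition, offset)
}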

Trouble communicating between docker containers

I'm running an "elasticsearch" container. I can curl the container and get results but when I try to communicate with the container from within my "web" container it refuses the connection.
docker-compose up
curl localhost:9200 // works.
docker-compose run web curl localhost:9200 // connection refused.
docker-compose.yml
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/src
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:5.1.2
    ports:
      - "9200:9200"
Dockerfile
FROM python:3.5
ADD . /src
WORKDIR /src
RUN pip install -r requirements.txt
CMD python project/wsgi.py
You cannot use localhost:9200 from within the web container to connect to the elasticsearch container. You could define a link or just use the service name (which is mapped by default):
curl elasticsearch:9200
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
Also see Docker Compose Links
You should be trying to curl elasticsearch:9200, not localhost:9200. The hostname elasticsearch should be in your hosts file on the web container.
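If it helps to see it from code, here is a minimal sketch of the same check in Go (the web service in the question is Python, so this is purely illustrative of which hostname to use from inside the web container):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Inside the web container, "elasticsearch" resolves to the other service;
    // "localhost" would point back at the web container itself.
    resp, err := http.Get("http://elasticsearch:9200")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(body))
}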

linking kibana with elasticsearch

I have the following docker containers running on my box...
CONTAINER ID   IMAGE           COMMAND                  CREATED              STATUS              PORTS                              NAMES
5da7523e527b   kibana          "/docker-entrypoint.s"   About a minute ago   Up About a minute   0.0.0.0:5601->5601/tcp             elated_lovelace
20aea0e545ca   elasticsearch   "/docker-entrypoint.s"   3 hours ago          Up 3 hours          0.0.0.0:9200->9200/tcp, 9300/tcp   sad_meitner
My aim was to get Kibana to link to my Elasticsearch container, but when I hit Kibana it tells me that I do not have any document stores. I know this is not right because I definitely have documents in Elasticsearch. I'm guessing my link command is wrong.
This is the docker command I used to start the kibana container.
docker run -p 5601:5601 --link sad_meitner:elasticsearch -d kibana
Can someone tell me what I've done wrong?
thanks
First of all, linking is a legacy feature. Create a user-defined network first:
docker network create mynetwork --driver=bridge
Now use mynetwork for containers you want to be able to communicate with each other.
docker run -p 5601:5601 --name kibana -d --network mynetwork kibana
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch
Docker runs a DNS server for your user-defined network, so you can ping other containers by name.
docker exec -it kibana /bin/bash
ping elasticsearch
You can use telnet or curl to verify kibana -> elasticsearch connectivity from the kibana container.
P.S. I used the official (library) Docker images for the ELK stack with user-defined networking recently and it worked like a charm.
You can add ENV ELASTICSEARCH_URL=elasticsearch:9200 to your Dockerfile before building Kibana, then use docker-compose to run Elasticsearch together with Kibana like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  kibana:
    image: docker.elastic.co/kibana/kibana:5.3.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
