Start ElasticSearch in Wercker - ruby

We have a Ruby project where we use Wercker for continuous integration.
We need to start an Elasticsearch service in order to run some integration tests.
Locally, we added the Elasticsearch configuration to the docker-compose file and everything runs smoothly:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
In the wercker.yml file, we tried several things, but we cannot reach the Elasticsearch service.
Our wercker.yml contains:
services:
  - id: elasticsearch:6.5.1
    env:
    ports:
      - "9200:9200"
      - "9300:9300"
We get this kind of error when trying to use Elasticsearch in our tests:
Errno::EADDRNOTAVAIL: Failed to open TCP connection to localhost:9200 (Cannot assign requested address - connect(2) for "localhost" port 9200)
Do you have any idea of what we are missing?

So, we found a solution.
In wercker.yml:
services:
  - id: elasticsearch:6.5.1
    cmd: "/elasticsearch/bin/elasticsearch -Ediscovery.type=single-node"
And we added a step to check the connection:
build:
  steps:
    - script:
        name: Test elasticsearch connection
        code: curl http://elasticsearch:9200
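For reference, this is roughly how the two pieces fit together in one wercker.yml. It is only a sketch: the ruby box and the bundle exec rake step are assumptions about a typical Ruby pipeline. The important detail is that the tests must connect to the hostname elasticsearch (the service name), not localhost:

box: ruby
services:
  - id: elasticsearch:6.5.1
    cmd: "/elasticsearch/bin/elasticsearch -Ediscovery.type=single-node"
build:
  steps:
    - script:
        name: Test elasticsearch connection
        code: curl http://elasticsearch:9200
    - script:
        name: Run integration tests
        # assumed test command; point your test config at http://elasticsearch:9200
        code: bundle exec rake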

Related

Facing "Error response from daemon" on Windows

I am trying to run Apache Kafka on Windows using Docker, and my docker-compose.yml is as follows:
version: "3"
services:
spark:
image: jupyter/pyspark-notebook
ports:
- "9092:9092"
- "4010-4109:4010-4109"
volumes:
- ./notebooks:/home/jovyan/work/notebooks/
zookeeper:
image: 'bitnami/zookeeper:latest'
container_name: zookeeper
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
container_name: kakfa
ports:
- '9092:9092'
environment:
- KAFKA_BROKER_ID=1
- KAFKA_LISTENERS=PLAINTEXT://:9092
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
When I execute the command
docker-compose -f docker-compose.yml up
I get an error: Error response from daemon: driver failed programming external connectivity on endpoint kafka-spark-1 (452eae1760b7860e3924c0e630943f825a809272760c8aa8bbb2f58ab2865377): Bind for 0.0.0.0:9092 failed: port is already allocated
I have tried net stop winnat and net start winnat, but unfortunately that didn't work.
I would appreciate any kind of help!
Spark isn't running Kafka. Remove the ports here:
image: jupyter/pyspark-notebook
ports:
  - "9092:9092"
Also, change the advertised listener variable for Kafka to use the proper hostname, otherwise Spark will not be able to work with it:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
Then you can also remove the ports for the Kafka container, since you wouldn't have access from the host anyway, unless you add external listeners.
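Putting those changes together, the relevant services would look roughly like this. It is a sketch of the suggestions above, not a full file, so keep the zookeeper service and everything else as it is:

services:
  spark:
    image: jupyter/pyspark-notebook
    ports:
      # Kafka's 9092 mapping removed; only the Spark UI range remains
      - "4010-4109:4010-4109"
    volumes:
      - ./notebooks:/home/jovyan/work/notebooks/
  kafka:
    image: 'bitnami/kafka:latest'
    # no ports: section, since nothing on the host needs to reach the broker
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      # advertise the service hostname so Spark can resolve the broker
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper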
You may also be interested in an example notebook I use to test PySpark with Kafka.

How to run a Beats container when Elasticsearch requires authentication

The main purpose: I want to use Logstash to collect log files that live on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Then I want to install Filebeat on the target host in order to send logs to the ELK host:
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
-E setup.kibana.host=x.x.x.x:5601 \
-E ELASTIC_PASSWORD="changeme" \
-E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error still persists.
You should disable basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by adding the environment variable below to your ES Docker container and then starting it again.
xpack.security.enabled : false
After this, there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
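As a sketch of what that change looks like in the compose file above, the elasticsearch service's environment block would become something like this (the other settings stay as they are):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
  environment:
    ES_JAVA_OPTS: "-Xmx512m -Xms256m"
    discovery.type: single-node
    # disable basic X-Pack security so no credentials are required
    xpack.security.enabled: "false"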

Why does Elasticsearch on Docker swarm require a transport.host=localhost setting?

I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of my docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indexes require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch Docker container in the transport.host setting, or when I leave out the transport.host=localhost setting altogether.
I think that using a transport.host=localhost setting is wrong. Is there a proper configuration of Elasticsearch in Docker swarm?
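No confirmed fix is given here, but for comparison, a single-node setup is often configured with discovery.type=single-node (as in the Wercker question at the top of this page) instead of transport.host=localhost. A sketch of that variant, purely as an assumption to try, would be:

environment:
  - cluster.name=elasticsearch
  - bootstrap.memory_lock=true
  - ES_JAVA_OPTS=-Xms512m -Xmx512m
  # assumption: replaces transport.host=localhost for a single-node cluster
  - discovery.type=single-node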

Issue with Spring Cloud Data Flow and a remote repository: apps are installed but I can't deploy streams

I'm facing an issue using Spring Cloud Data Flow connected to a remote repository.
I think I managed to connect the Data Flow server to the repository correctly, because at first I couldn't import apps and now I can.
The problem is that when I try to deploy a stream, the Data Flow server doesn't see the remote repository.
Here's an example to make myself clear.
When I try to import a jar that does not exist, the import is successful, but if I try to open the details from the UI I get:
Failed to resolve MavenResource: [JAR-NAME] Configured remote repositories: : [repo1],[springRepo]
So I guess that the system sees "repo1".
But then when I deploy a stream (with all valid apps) I get:
Error Message = [Failed to resolve MavenResource: [JAR-NAME] Configured remote repository: : [springRepo]]
I followed this: https://github.com/spring-cloud/spring-cloud-dataflow/issues/982
And this: https://docs.spring.io/spring-cloud-dataflow/docs/1.1.0.BUILD-SNAPSHOT/reference/html/getting-started-deploying-spring-cloud-dataflow.html
This is my docker-compose.yml:
version: '3'
services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    expose:
      - "9092"
    environment:
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_HOST_NAME=kafka
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    expose:
      - "2181"
  dataflow-server:
    image: springcloud/spring-cloud-dataflow-server:2.0.2.RELEASE
    container_name: dataflow-server
    ports:
      - "9393:9393"
    environment:
      - spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=kafka:9092
      - spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=zookeeper:2181
      - spring.cloud.skipper.client.serverUri=http://skipper-server:7577/api
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.enabled=true
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.db=myinfluxdb
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.uri=http://influxdb:8086
      - spring.cloud.dataflow.grafana-info.url=http://localhost:3000
      - maven.localRepository=null
      - maven.remote-repositories.repo1.url= [URL]
      - maven.remote-repositories.repo1.auth.username=***
      - maven.remote-repositories.repo1.auth.password=***
    depends_on:
      - kafka
    volumes:
      - ~/.m2/repository:/m2repo
  app-import:
    image: springcloud/openjdk:latest
    depends_on:
      - dataflow-server
    command: >
      /bin/sh -c "
      while ! nc -z dataflow-server 9393;
      do
        sleep 1;
      done;
      wget -qO- 'http://dataflow-server:9393/apps' --post-data='uri=https://repo.spring.io/libs-release/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Einstein.RELEASE/spring-cloud-stream-app-descriptor-Einstein.RELEASE.stream-apps-kafka-maven&force=true';
      echo 'Stream apps imported'
      wget -qO- 'http://dataflow-server:9393/apps' --post-data='uri=https://repo.spring.io/libs-release-local/org/springframework/cloud/task/app/spring-cloud-task-app-descriptor/Dearborn.SR1/spring-cloud-task-app-descriptor-Dearborn.SR1.task-apps-maven&force=true';
      echo 'Task apps imported'"
  skipper-server:
    image: springcloud/spring-cloud-skipper-server:2.0.1.RELEASE
    container_name: skipper
    ports:
      - "7577:7577"
      - "9000-9010:9000-9010"
  influxdb:
    image: influxdb:1.7.4
    container_name: 'influxdb'
    ports:
      - '8086:8086'
  grafana:
    image: springcloud/spring-cloud-dataflow-grafana-influxdb:2.0.2.RELEASE
    container_name: 'grafana'
    ports:
      - '3000:3000'
volumes:
  scdf-targets:
You need to set the Maven remote repository configuration for the Skipper server as well. It is the Skipper server that handles the deployment request from the SCDF server, and hence the Skipper server requires similar configuration:
- maven.remote-repositories.repo1.url= [URL]
- maven.remote-repositories.repo1.auth.username=***
- maven.remote-repositories.repo1.auth.password=***
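Concretely, in the docker-compose.yml above, that means adding the same entries under the skipper-server service, along these lines:

skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.0.1.RELEASE
  container_name: skipper
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
  environment:
    # same remote repository settings as the dataflow-server service
    - maven.remote-repositories.repo1.url= [URL]
    - maven.remote-repositories.repo1.auth.username=***
    - maven.remote-repositories.repo1.auth.password=***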

cadvisor, elasticsearch, docker: no Elasticsearch node available

I'm trying to connect cadvisor to elasticsearch with docker and I'm getting the error:
cadvisor.go:113] Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: "elasticsearch:2.3.3"
    container_name: "elasticsearch"
    ports:
      - "9200:9200"
  kibana:
    image: "kibana:4.5.1"
    container_name: "kibana"
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  cadvisor:
    image: "google/cadvisor:latest"
    container_name: "cadvisor"
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    links:
      - elasticsearch
    restart: always
    command: -storage_driver="elasticsearch" -storage_driver_es_host="http://elasticsearch:9200"
If I change the command to
command: -storage_driver="elasticsearch" -storage_driver_es_host="http://172.22.0.5:9200"
everything works just fine. Any ideas?
What you are missing is an index in Elasticsearch; unfortunately, this is not well documented.
Go to your Kibana dashboard, open Dev Tools, and send this request:
PUT /.kibana/index-pattern/cadvisor
{"title" : "cadvisor", "timeFieldName": "container_stats.timestamp"}
