Could not connect Logstash to Kafka via compose file

I'm using a Compose file to create a data pipeline between Logstash and Kafka, but the following message keeps showing up in the Logstash container. Could someone help me out?
The message:
[WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=logstash] Connection to node 2 could not be established. Broker may not be available.
My compose file:
version: "3"
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.2.0
container_name: zookeeper
ports:
- "2181:2181"
networks:
- kafkanet
environment:
ZOOKEEPER_CLIENT_PORT: "2181"
ZOOKEEPER_TICK_TIME: "2000"
ZOOKEEPER_SYNC_LIMIT: "2"
kafkaserver:
image: confluentinc/cp-kafka:6.2.0
container_name: kafka
ports:
- "9092:9092"
networks:
- kafkanet
environment:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
KAFKA_BROKER_ID: "2"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
depends_on:
- zookeeper
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
container_name: elasticsearch
ports:
- 9200:9200
- 9300:9300
networks:
- kafkanet
kibana:
image: docker.elastic.co/kibana/kibana:6.4.0
container_name: kibana
ports:
- 5601:5601
networks:
- kafkanet
depends_on: [ 'elasticsearch' ]
# Logstash Docker Image
logstash:
image: docker.elastic.co/logstash/logstash:6.4.0
container_name: logstash
networks:
- kafkanet
depends_on: [ 'elasticsearch', 'kafkaserver' ]
volumes:
- './logstash/config:/usr/share/logstash/pipeline/'
networks:
kafkanet:
driver: bridge
./logstash/config/logstash.conf:
input {
  kafka {
    bootstrap_servers => "kafkaserver:9092"
    topics => ["sit.catalogue.item","uat.catalogue.item"]
    auto_offset_reset => "earliest"
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[indexPrefix]}-logs-%{+YYYY.MM.dd}"
  }
}

Your advertised listener in Kafka is not right: it should be kafkaserver, the hostname that other containers on the kafkanet network can resolve, not localhost.
So instead of
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
You need
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafkaserver:9092"
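One caveat: once the broker advertises kafkaserver, containers on the kafkanet network connect fine, but clients on your host machine can no longer use localhost:9092, because the broker now hands back a hostname the host can't resolve. If you need both, a common pattern is to declare two listeners on the kafkaserver service. A minimal sketch (the INTERNAL/EXTERNAL names and port 29092 are arbitrary choices of mine, not required values):
environment:
  KAFKA_LISTENERS: "INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092"
  KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafkaserver:29092,EXTERNAL://localhost:9092"
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
  KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
With that layout, Logstash's bootstrap_servers would become kafkaserver:29092, while host-side tools keep using localhost:9092.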
For more details, see this blog that I wrote.
BTW if you're pushing data from Kafka to Elasticsearch you should check out Kafka Connect as another option.
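For example, a minimal sink configuration for the Confluent Elasticsearch connector might look like this (a sketch assuming the kafka-connect-elasticsearch plugin is installed on a Connect worker; topic names and the URL are taken from the question's setup):
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=sit.catalogue.item,uat.catalogue.item
connection.url=http://elasticsearch:9200
type.name=_doc
key.ignore=true
schema.ignore=true
This removes the Logstash hop entirely: Connect consumes the topics and writes the documents straight into Elasticsearch.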

Related

Eureka Server is working but not the services (Spring Boot microservices, docker-compose)

I have created microservices using Spring Boot and Eureka, with an API gateway in front of them. All the microservices (Eureka clients) are visible on the Eureka server, but they give an error like the one below.
api-gateway port: 8999
product-service port: 9001
product-detail-service port: 9002
eureka-server port: 8761
api-gateway application.properties:
server.port=8999
spring.application.name=api-gateway
eureka.client.instance.preferIpAddress=true
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/product/**
spring.cloud.gateway.routes[1].id=product-detail-service
spring.cloud.gateway.routes[1].uri=lb://product-detail-service
spring.cloud.gateway.routes[1].predicates[0]=Path=/productDetail/**
eureka-server application.properties:
server.port=8761
eureka.client.register-with-eureka=false
eureka.server.waitTimeInMsWhenSyncEmpty=0
product-detail-service application.properties:
server.port=9002
spring.application.name=product-detail-service
eureka.instance.preferIpAddress=true
product-service application.properties:
server.port=9001
spring.application.name=product-service
eureka.client.instance.preferIpAddress=true
docker-compose.yml:
version: '3.8'
services:
  api-server:
    build: ../apigateway
    ports:
      - 8999:8999
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-service
      - product-detail-service
  eureka-server:
    build: ../eureka_server
    ports:
      - 8761:8761
    depends_on:
      - product-service
      - product-detail-service
  product-service:
    build: ../product_service
    ports:
      - 9001:9001
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-detail-service
  product-detail-service:
    build: ../product_details_service
    ports:
      - 9002:9002
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
My Docker images build successfully and the containers run fine without docker-compose. I have tried networks and much more, but the issue is still not resolved. Please help; I have been trying to solve this for three days.

Laravel Dusk is not working on Docker / docker-compose.yaml

I am working on a Laravel project and started writing browser tests using Dusk. I am using Docker as my development environment. When I run the tests, I get a "Connection refused" error.
This is my docker-compose.yaml file.
version: '3'
services:
  apache:
    container_name: res_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: restaurant.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - res-network
    ports:
      - "8081:80"
      - "443:443"
  php-fpm:
    container_name: res_php
    image: jguyomard/laravel-php:7.3
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ./composer.json:/var/www/composer.json
      - ./composer.lock:/var/www/composer.lock
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - res-network
  db:
    container_name: res_db
    image: mariadb:10.2
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: restaurant
      MYSQL_USER: restaurant
      MYSQL_PASSWORD: secret
    volumes:
      - res-data:/var/lib/mysql
    networks:
      - res-network
    ports:
      - "33060:3306"
  chrome:
    image: robcherry/docker-chromedriver
    networks:
      - res-network
    environment:
      CHROMEDRIVER_WHITELISTED_IPS: ""
      CHROMEDRIVER_PORT: "9515"
    ports:
      - 9515:9515
    cap_add:
      - "SYS_ADMIN"
networks:
  res-network:
    driver: "bridge"
volumes:
  res-data:
    driver: "local"
The following is the driver function in the DuskTestCase.php class:
/**
 * Create the RemoteWebDriver instance.
 *
 * @return \Facebook\WebDriver\Remote\RemoteWebDriver
 */
protected function driver()
{
    $options = (new ChromeOptions)->addArguments([
        '--disable-gpu',
        '--headless',
        '--window-size=1920,1080',
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515', DesiredCapabilities::chrome()->setCapability(
            ChromeOptions::CAPABILITY, $options
        )
    );
}
I run the tests with the following command:
docker-compose exec php-fpm php artisan dusk
Then I get the following error.
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"capabilities":{"firstMatch":[{"browserName":"chrome","goog:chromeOptions":{"args":["--disable-gpu","--headless","--windo
w-size=1920,1080"]}}]},"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}}
Failed to connect to localhost port 9515: Connection refused
/var/www/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:331
What is wrong with my configuration and how can I fix it?

SpringData Elasticsearch NoNodeAvailableException

I am using Spring Data to connect my application to a local Elasticsearch instance. When I do a regular curl to get ES info, it works fine, but I am unable to connect to it from the Spring Boot application.
Local Elasticsearch version (./elasticsearch -V): 7.6.0
Spring Data Elasticsearch version: 3.1.11
> curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "1_8HMIK5QDug_xH80VZLgQ",
  "version" : 54,
  "state_uuid" : "YEe1FSwfRUuw0uw-T69fJQ",
  "master_node" : "Nbktx7KrREetbyfL7v0Fog",
  "blocks" : { },
  "nodes" : {
    "Nbktx7KrREetbyfL7v0Fog" : {
      "name" : "k***-macOS",
      "ephemeral_id" : "pqMw40oPTUmBoHsyTAz9cg",
      "transport_address" : "127.0.0.1:9301",
      "attributes" : {
        "ml.machine_memory" : "17179869184",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      }
    }
  },
@Value("${ELASTIC_HOST}")
private String EsHost;

@Value("${ELASTIC_PORT}")
private String EsPort;

@Bean
public ElasticsearchOperations elasticsearchTemplate() throws UnknownHostException {
    return new ElasticsearchTemplate(elasticsearchClient());
}

@Bean
public Client elasticsearchClient() throws UnknownHostException {
    Settings settings = Settings.builder()
            .put("client.transport.sniff", true).build();
    TransportClient client = new PreBuiltTransportClient(settings);
    client.addTransportAddress(new TransportAddress(InetAddress.getByName(EsHost), Integer.valueOf(EsPort)));
    return client;
}
I tried all the above ways to set the host and port (also tried with 9300), but still no luck. My elasticsearch.yml is the default file; I did not add any explicit host or ports.
docker-compose.yml:
version: '3'
services:
  elastic:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - cluster.name=elasticsearch
      - node.name=es01
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9201:9200"
      - "9301:9300"
  db:
    restart: always
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: 'xxx'
      POSTGRES_USER: 'xx'
      POSTGRES_DB: 'xx'
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      ENVIRONMENT_NAME: "dev"
      REGION_NAME: "local"
      POSTGRES_PASSWORD: "xx"
      POSTGRES_USER: "xx"
      POSTGRES_HOST: "db"
      ELASTIC_HOST: "elastic"
      ELASTIC_PORT: "9200"
    depends_on:
      - db
      - elastic
ERROR:
"failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{JjFZc4y-RBCYbdELAsgaAQ}{elastic}{172.20.0.2:9200}]"}
It works if I change this to:
environment:
  ENVIRONMENT_NAME: "dev"
  REGION_NAME: "local"
  POSTGRES_PASSWORD: "xxx"
  POSTGRES_USER: "xx"
  POSTGRES_HOST: "db"
  ELASTIC_HOST: "elastic"
  ELASTIC_PORT: "9300"  # changed from 9200
and:
client.addTransportAddress(new TransportAddress(InetAddress.getLocalHost(), 9201));
No idea why!
Spring Data Elasticsearch 3.1.11 is built with the Elasticsearch client libraries in version 6.2.2, so even if you manage to get a connection to the cluster, the chances are very high that the client and the cluster can't communicate properly.
As for the setup of the connection: you should add the name of the cluster you want to connect to into the settings:
Settings settings = Settings.builder()
        .put("client.transport.sniff", true)
        .put("cluster.name", "elasticsearch")
        .build();
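Putting that together with the bean from the question, a sketch of the full client setup (assuming the 6.x transport client API; note that the transport client talks to the transport port, 9300/9301 here, never the HTTP port 9200 — which is why only the 9300 variant worked):
@Bean
public Client elasticsearchClient() throws UnknownHostException {
    // cluster.name must match the cluster.name of the ES node ("elasticsearch" above).
    Settings settings = Settings.builder()
            .put("client.transport.sniff", true)
            .put("cluster.name", "elasticsearch")
            .build();
    TransportClient client = new PreBuiltTransportClient(settings);
    // Use the transport port: 9300 inside the Docker network, 9301 from the host
    // (per the "9301:9300" mapping in the compose file above).
    client.addTransportAddress(new TransportAddress(InetAddress.getByName(EsHost), Integer.parseInt(EsPort)));
    return client;
}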

Elasticsearch high level rest client, connection reset error in Kubernetes

I am using a single-node Elasticsearch server and a Java application based on the Elasticsearch high-level REST client. Both run in a Kubernetes cluster.
@Bean(destroyMethod = "close")
public RestHighLevelClient client() {
    Logger.getLogger(getClass().getName()).info("Connecting to elasticsearch on host : " + host);
    return new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http")));
}
This works fine until the service has been idle for about 10 minutes. When the Java service then tries to query the Elasticsearch server, an exception is thrown:
java.io.IOException: Connection reset
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:948) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1448) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1388) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:930) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
When I send the request three times, the service works again, but after another ~10 minutes of idle time it throws the same exception. I have a docker-compose setup with the same images, and there is no such issue there.
My Elasticsearch deployment:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  ports:
    - name: client
      port: 9200
      targetPort: 9200
    - name: nodes
      port: 9300
      targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          name: elasticsearch
          env:
            - name: cluster.name
              value: "docker-cluster"
            - name: 'ES_JAVA_OPTS'
              value: "-Xms512m -Xmx512m"
            - name: discovery.type
              value: "single-node"
          ports:
            - containerPort: 9200
            - containerPort: 9300
              name: mysql
          volumeMounts:
            - name: elasticsearch-persistent-storage
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elasticsearch-persistent-storage
          persistentVolumeClaim:
            claimName: elasticsearch-claim
      initContainers:
        - image: alpine:3.6
          command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
          name: elasticsearch-init
          securityContext:
            privileged: true
My Java service:
apiVersion: v1
kind: Service
metadata:
  name: search
spec:
  ports:
    - port: 9099
      targetPort: 9099
  selector:
    app: search
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
spec:
  selector:
    matchLabels:
      app: search
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: search
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - image: search-service:0.0.1-SNAPSHOT
          name: search
          env:
            - name: ELASTIC_SEARCH_HOST
              value: elasticsearch
            - name: ELASTIC_SEARCH_PORT
              value: "9200"
            - name: ELASTIC_SEARCH_CLUSTER
              value: docker-cluster
          ports:
            - containerPort: 9099

Logstash not producing output although pipeline main starts

I'm trying to ingest Apache logs using Kibana, Elasticsearch, and Logstash, but Logstash doesn't create an index in Elasticsearch, so I'm not able to visualize the data in Kibana.
This is my docker-compose file:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
  container_name: elasticsearch
  hostname: elasticsearch
  environment:
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
    - http.cors.enabled=true
    - http.cors.allow-origin= "*"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
logstash:
  image: docker.elastic.co/logstash/logstash-oss:6.2.4
  restart: unless-stopped
  depends_on:
    - elasticsearch
  volumes:
    - ./logstash-apache.conf:/opt/logstash/logstash-apache.conf
    - ./logs:/logs/access_log
  links:
    - elasticsearch
  command: logstash -f /opt/logstash/logstash-apache.conf
kibana:
  image: docker.elastic.co/kibana/kibana-oss:6.2.4
  container_name: kibana
  volumes:
    - esdata2:/usr/share/kibana/config/data
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  links:
    - elasticsearch
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
This is my logstash-apache.conf:
input {
  file {
    type => "apache_access"
    path => "/var/log/httpd/access_log"
    start_position => beginning
  }
}
filter {
  if [type] == "apache_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}( \*\*%{POSINT:responsetime}\*\*)?" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "apache_logstash-%{+YYYY.MM.dd}"
  }
}
This is the output message:
logstash_1| [2018-07-18T23:55:25,926][INFO ][logstash.agent] Pipelines running {:count=>1, :pipelines=>["main"]}
I have no errors in the output, but the problem is that Logstash is not producing data.
What should I do? Could anyone help me, please?
