Apache Storm could not find leader nimbus from seed hosts

I have installed Apache Storm using Docker Compose. Here is my docker-compose.yml:
kafka:
  image: spotify/kafka
  ports:
    - "9092:9092"
    - "2181:2181"
  environment:
    ADVERTISED_HOST: 172.16.8.37
    ADVERTISED_PORT: 9092
nimbus:
  command: --daemon nimbus drpc
  image: fhuz/docker-storm
  ports:
    - 3773:3773
    - 3772:3772
    - 6627:6627
  links:
    - kafka:zk
supervisor:
  command: --daemon supervisor logviewer
  image: fhuz/docker-storm
  ports:
    - 8000:8000
    - 6700:6700
    - 6701:6701
    - 6702:6702
    - 6703:6703
  links:
    - kafka:zk
ui:
  command: --daemon ui
  image: fhuz/docker-storm
  ports:
    - 8080:8080
  links:
    - kafka:zk
elasticsearch:
  image: elasticsearch:2.4.1
  ports:
    - 9300:9300
    - 9200:9200
Everything starts up without problems, but when I access the Storm UI at XXXX.XX.XX.XX:8080 the page loads and then throws the error below. I have tried answers and solutions from other Stack Overflow users, but it is still failing.
Internal Server Error:
org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts ["127.0.0.1"]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:90)
at org.apache.storm.ui.core$nimbus_summary.invoke(core.clj:388)
at org.apache.storm.ui.core$fn__12489.invoke(core.clj:937)
at org.apache.storm.shade.compojure.core$make_route$fn__4604.invoke(core.clj:93)
at org.apache.storm.shade.compojure.core$if_route$fn__4592.invoke(core.clj:39)
at org.apache.storm.shade.compojure.core$if_method$fn__4585.invoke(core.clj:24)
at org.apache.storm.shade.compojure.core$routing$fn__4610.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at org.apache.storm.shade.compojure.core$routes$fn__4614.invoke(core.clj:111)
at org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__11958.invoke(json.clj:56)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__5680.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__11140.invoke(reload.clj:22)
at org.apache.storm.ui.helpers$requests_middleware$fn__5907.invoke(helpers.clj:46)
at org.apache.storm.ui.core$catch_errors$fn__12679.invoke(core.clj:1224)
at org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__5611.invoke(keyword_params.clj:27)
at org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__5651.invoke(nested_params.clj:65)
at org.apache.storm.shade.ring.middleware.params$wrap_params$fn__5582.invoke(params.clj:55)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__5680.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__5866.invoke(flash.clj:14)
at org.apache.storm.shade.ring.middleware.session$wrap_session$fn__5854.invoke(session.clj:43)
at org.apache.storm.shade.ring.middleware.cookies$wrap_cookies$fn__5782.invoke(cookies.clj:160)
at org.apache.storm.shade.ring.util.servlet$make_service_method$fn__5488.invoke(servlet.clj:127)
at org.apache.storm.shade.ring.util.servlet$servlet$fn__5492.invoke(servlet.clj:136)
at org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown Source)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320)
at org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47)
at org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.apache.storm.shade.org.eclipse.jetty.server.Server.handle(Server.java:369)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.apache.storm.shade.org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)

Your Supervisor and Nimbus are running in different containers. By default, Storm looks for Nimbus on localhost via this setting:
nimbus.seeds: ["localhost"]
That is the error you are getting: the UI cannot find Nimbus on the local machine. You need to set that option on every node that has to reach Nimbus (the UI container, which is what is throwing this error, as well as the Supervisor), pointing it at the host running the Nimbus process.
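As a minimal sketch (not verified against the fhuz/docker-storm image), a storm.yaml on the UI and Supervisor nodes could look like the following. The hostname "nimbus" is an assumption: it only resolves if those containers are linked to, or share a network with, the nimbus service, and "zk" matches the kafka:zk link alias from the compose file above:
nimbus.seeds: ["nimbus"]
storm.zookeeper.servers:
  - "zk"
storm.zookeeper.port: 2181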

Also make sure you have started your ZooKeeper server, and that you can connect to it with the client, before running Storm:
$ bin/zkServer.sh start
$ bin/zkCli.sh
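Once inside the zkCli shell, you can also check that Storm has registered its znodes (assuming Storm's default storm.zookeeper.root of /storm):
ls /storm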

Related

Facing error response from daemon - Windows

I am trying to run Apache Kafka on Windows using Docker, and my docker-compose.yml is as follows:
version: "3"
services:
spark:
image: jupyter/pyspark-notebook
ports:
- "9092:9092"
- "4010-4109:4010-4109"
volumes:
- ./notebooks:/home/jovyan/work/notebooks/
zookeeper:
image: 'bitnami/zookeeper:latest'
container_name: zookeeper
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
container_name: kakfa
ports:
- '9092:9092'
environment:
- KAFKA_BROKER_ID=1
- KAFKA_LISTENERS=PLAINTEXT://:9092
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
When I execute
docker-compose -f docker-compose.yml up
I get this error:
Error response from daemon: driver failed programming external connectivity on endpoint kafka-spark-1 (452eae1760b7860e3924c0e630943f825a809272760c8aa8bbb2f58ab2865377): Bind for 0.0.0.0:9092 failed: port is already allocated
I have tried net stop winnat and net start winnat, but unfortunately that didn't work.
I would appreciate any kind of help!
Spark isn't running Kafka, so remove the ports here; this mapping of 9092 on the Spark service is what collides with Kafka's:
image: jupyter/pyspark-notebook
ports:
  - "9092:9092"
Also, change the variable for Kafka to use the proper hostname, otherwise Spark will not be able to work with it:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
Then you can also remove the ports for the Kafka container, since you wouldn't have access from the host anyway. Unless you add external listeners.
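Putting that together, the Kafka service might look like the sketch below; keeping it internal-only (no ports:) is an assumption that only other containers, such as Spark, need to connect:
kafka:
  image: 'bitnami/kafka:latest'
  container_name: kafka
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_LISTENERS=PLAINTEXT://:9092
    # advertise the compose service name so other containers can resolve it
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
  depends_on:
    - zookeeper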
You may also be interested in an example notebook I use to test PySpark with Kafka.

How to fix `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` error

I am developing an application which reads a message off of an SQS queue, does some work on that data, and publishes the result to a Kafka topic. In order to test locally, I'd like to set up a Kafka image in my Docker build. I am currently able to spin up the aws-cli, localstack, and my app's containers locally using docker-compose. Separately, I am able to spin up Kafka and ZooKeeper without a problem as well. What I cannot do is get my application to communicate with Kafka.
I've tried using two separate compose files, and also fiddled with the networks. Finally, I've referenced https://rmoff.net/2018/08/02/kafka-listeners-explained/.
Here is my docker-compose file:
version: '3.7'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    env_file: .env
    ports:
      # Localstack endpoints for various APIs. Format is localhost:container
      - '4563-4584:4563-4584'
      - '8080:8080'
    environment:
      - SERVICES=sns:4575,sqs:4576
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # store data locally in the 'localstack' folder
      - './localstack:/tmp/localstack'
    networks:
      - my_network
  aws:
    image: mesosphere/aws-cli
    container_name: aws-cli
    # copy local JSON_DATA folder contents into the aws-cli container's app folder
    #volumes:
    #  - ./JSON_DATA:/app
    env_file: .env
    # bash entrypoint needed for multiple commands
    entrypoint: /bin/sh -c
    command: >
      " sleep 10;
      aws --endpoint-url=http://localstack:4576 sqs create-queue --queue-name input_queue;
      aws --endpoint-url=http://localstack:4575 sns create-topic --name input_topic;
      aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:example_topic --protocol sqs --notification-endpoint http://localhost:4576/queue/input_queue; "
    networks:
      - my_network
    depends_on:
      - localstack
  my_app:
    build: .
    image: my_app
    container_name: my_app
    env_file: .env
    ports:
      - '9000:9000'
    networks:
      - my_network
    depends_on:
      - localstack
      - aws
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - my_network
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      # For more details see https://rmoff.net/2018/08/02/kafka-listeners-explained/
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network
networks:
  my_network:
I would expect to see no errors when publishing to this topic. Instead, I'm getting:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Any ideas what I may be doing wrong? Thank you for your help.
You've made the broker resolvable only from within the Kafka container itself (or from your host to the container) by setting the listeners to localhost.
If you want another Docker service to be able to reach that container, you'll have to add <some protocol>://kafka:<some port> to the advertised listeners, and bind the listeners to something other than localhost, where that protocol is also added to KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
FWIW, that blog post should cover all those bases.
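As a sketch of what that could look like, following the listener pattern from the linked post (the INSIDE/OUTSIDE names and the 29092 internal port are assumptions, not the asker's exact setup):
kafka:
  image: confluentinc/cp-kafka:5.0.0
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # INSIDE is for other containers, which resolve the "kafka" service name;
    # OUTSIDE is what host-side clients reach through the published port
    KAFKA_LISTENERS: INSIDE://:29092,OUTSIDE://:9092
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
  networks:
    - my_network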

Start ElasticSearch in Wercker

We have a Ruby project where we use Wercker for continuous integration.
We need to start an Elasticsearch service in order to run some integration tests.
Locally, we added the Elasticsearch configuration to our docker-compose file and everything runs smoothly:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
In the wercker.yml file, we tried several things, but we cannot reach the Elasticsearch service.
Our wercker.yml contains:
services:
  - id: elasticsearch:6.5.1
    env:
      ports:
        - "9200:9200"
        - "9300:9300"
We get this kind of error when trying to use Elasticsearch in our tests:
Errno::EADDRNOTAVAIL: Failed to open TCP connection to localhost:9200 (Cannot assign requested address - connect(2) for "localhost" port 9200)
Do you have any idea of what we are missing?
So, we found a solution:
In wercker.yml:
services:
  - id: elasticsearch:6.5.1
    cmd: "/elasticsearch/bin/elasticsearch -Ediscovery.type=single-node"
And we added a step to check the connection:
build:
  steps:
    - script:
        name: Test elasticsearch connection
        code: curl http://elasticsearch:9200
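If the service is up, that curl should return Elasticsearch's JSON banner, roughly of this shape (the field values here are placeholders):
{
  "name" : "node-name",
  "cluster_name" : "docker-cluster",
  "version" : { "number" : "6.5.1" },
  "tagline" : "You Know, for Search"
}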

Running zeppelin on spark cluster mode

I am using this tutorial spark cluster on yarn mode in docker container to launch zeppelin in spark cluster in yarn mode. However I am stuck at step 4. I can't find conf/zeppelin-env.sh in my docker container to put further configuration. I tried putting these conf folder of zeppelin but yet now successful. Apart from that zeppelin notebook is also not running on localhost:9001.
I am very new to distributed system, it would be great if someone can help me start zeppelin on spark cluster in yarn mode.
Here is my docker-compose file to enable zeppelin talk with spark cluster.
version: '2'
services:
  sparkmaster:
    build: .
    container_name: sparkmaster
    ports:
      - "8080:8080"
      - "7077:7077"
      - "8888:8888"
      - "8081:8081"
      - "8082:8082"
      - "5050:5050"
      - "5051:5051"
      - "4040:4040"
  zeppelin:
    image: dylanmei/zeppelin
    container_name: zeppelin-notebook
    env_file:
      - ./hadoop.env
    environment:
      ZEPPELIN_PORT: 9001
      CORE_CONF_fs_defaultFS: "hdfs://namenode:8020"
      HADOOP_CONF_DIR_fs_defaultFS: "hdfs://namenode:8020"
      SPARK_MASTER: "spark://spark-master:7077"
      MASTER: "yarn-client"
      SPARK_HOME: spark-master
      ZEPPELIN_JAVA_OPTS: >-
        -Dspark.driver.memory=1g
        -Dspark.executor.memory=2g
    ports:
      - 9001:9001
    volumes:
      - ./data:/usr/zeppelin/data
      - ./notebooks:/usr/zeppelin/notebook
This is the Dockerfile you used to launch the standalone Spark cluster:
https://github.com/apache/zeppelin/blob/master/scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile
But there is no Zeppelin instance inside that container, so you have to run Zeppelin on your local machine.
Please download it and use it from there.
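For example, a rough sketch of running a local Zeppelin against the dockerized master (the version number is an assumption, and the master URL must match the port the sparkmaster container publishes):
# download and unpack a Zeppelin binary distribution
wget https://archive.apache.org/dist/zeppelin/zeppelin-0.7.3/zeppelin-0.7.3-bin-all.tgz
tar -xzf zeppelin-0.7.3-bin-all.tgz
cd zeppelin-0.7.3-bin-all
# run the UI on 9001, since 8080 is already taken by the sparkmaster container
export ZEPPELIN_PORT=9001
# point the Spark interpreter at the dockerized master
export MASTER=spark://localhost:7077
bin/zeppelin-daemon.sh start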

hostname in docker-compose.yml fails to be recognized on Mac (but works on Linux)

I am using the docker-compose 'recipe' below to bring up a container that runs a component of the Storm stream-processing framework. I am finding that on Macs, when I enter the container (once it is up and running, via docker exec -t -i <container-id> bash) and do ping storm-supervisor, I get the error 'unknown host'. However, when I run the same docker-compose script on Linux, the host is recognized and ping succeeds.
The failure to resolve the host leads to problems with the Storm component, but what that component is doing can be ignored for this question. I'm pretty sure that if I figured out how to make the Mac's docker-compose behavior match Linux's, I would have no problem.
I think I am experiencing the issue mentioned in this post:
https://forums.docker.com/t/docker-compose-not-setting-hostname-when-network-mode-host/16728
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    network_mode: host
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
Thanks in advance for any leads or tips!
"network_mode: host" will not work well on docker mac. I experienced the same issue where I had few of my containers in bridge network and the others in host network.
However, you can move all your containers to a custom bridge network. It solved for me.
You can edit your docker-compose.yml file to have a custom bridge network.
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
    networks:
      - storm
networks:
  storm:
    external: true
Also, execute the command below to create the custom network:
docker network create storm
You can verify it with:
docker network ls
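And once the stack is up on that network, you can confirm that name resolution now works from inside a container, mirroring the check from the question (the container ID is a placeholder):
docker exec -t -i <container-id> ping storm-supervisor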
Hope it helped.