I have set up Kafka on an Amazon EC2 instance.
I have done the following, in this order:
(1) SSH into the instance
(2) Start ZooKeeper
(3) Start Kafka
(4) Execute producer and consumer programs.
Everything works fine up to this point. However, once I close the SSH window from which I started Kafka, the Kafka service stops and I can no longer execute the producer and consumer programs.
How can I keep the Kafka server permanently up for all requests, even after I close the SSH window?
Thank you.
This is now officially supported in the Kafka and ZooKeeper startup scripts, so if you are on a recent Kafka release (since Aug 2015) you can use -daemon as follows:
# ./kafka-server-start.sh
USAGE: ./kafka-server-start.sh [-daemon] server.properties
# ./zookeeper-server-start.sh
USAGE: ./zookeeper-server-start.sh [-daemon] zookeeper.properties
Try bin/kafka-server-start.sh -daemon config/server.properties.
Or try the upstart script linked here: upstart script for kafka.
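In case that link goes stale, a minimal upstart job for Kafka might look like the sketch below; the /opt/kafka install path and the kafka user are assumptions, so adjust them to your layout, and save the file as /etc/init/kafka.conf:
# /etc/init/kafka.conf -- minimal sketch; install path and user are assumptions
description "Apache Kafka broker"
start on runlevel [2345]
stop on runlevel [016]
respawn
# setuid needs upstart 1.4+; on older versions drop it and switch user in the exec line instead
setuid kafka
exec /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
With that in place, sudo start kafka brings the broker up, and the respawn stanza restarts it if it crashes.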
nohup is required at the beginning of the command so that the output goes to a file rather than the screen, and & is required at the end of the command to start the server in the background:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
become:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup bin/kafka-server-start.sh config/server.properties &
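After logging out and back in, you can confirm both processes survived; a quick check (assuming you started them from the Kafka directory, where nohup appends both servers' output to nohup.out):
ps aux | grep -i '[k]afka'    # the brackets keep grep itself out of the listing
tail -f nohup.out             # follow the combined server output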
I have configured kafka_2.11-2.3.0 and apache-zookeeper-3.5.5-bin on Windows 10, but while running the topic creation command I am getting the error below:
C:\kafka_2.11-2.3.0>.\bin\windows\kafka-topics.bat --create --bootstrap-server 127.0.0.1:2181 --partitions 1 --replication-factor 1 --topic testD1
Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2019-10-14 16:42:40,603] ERROR java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:178)
at kafka.admin.TopicCommand$TopicService$class.createTopic(TopicCommand.scala:149)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:172)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:60)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
I read somewhere on Stack Overflow that I should add listeners=PLAINTEXT://127.0.0.1:9092 to the server.properties file, but that didn't work out as expected.
ZooKeeper runs on 2181, not Kafka; the bootstrap server is the Kafka broker. By default, Kafka runs on port 9092, as below:
kafka-topics --bootstrap-server 127.0.0.1:9092 --topic first_topic --create --partitions 3 --replication-factor 1
I've struggled with the same issue on Linux. The recommended way to create topics is still via the broker; you shouldn't need to connect directly to ZooKeeper.
It turned out that the shell scripts need a little more configuration when connecting to a TLS endpoint:
Copy the certs linked to by your JDK to a temporary location:
cp /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts /tmp/kafka.client.truststore.jks
Make a properties file (e.g. client.properties):
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
Then try running the script again, passing the --command-config option with your properties file, e.g.:
./kafka-topics.sh --bootstrap-server <server>:<port> --list --command-config client.properties
Note that the option name is not consistent across the different scripts; for the console consumer and producer you'll need --consumer.config and --producer.config respectively.
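For example, a console-consumer call against the same TLS endpoint would look something like this (the topic name is just a placeholder):
./kafka-console-consumer.sh --bootstrap-server <server>:<port> --topic my-topic --consumer.config client.properties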
Replacing --bootstrap-server with --zookeeper fixed the issue for me.
For version 2.*, you have to create the topic using ZooKeeper, with its default port 2181, as a parameter.
For version 3.*, ZooKeeper is no longer a parameter; you should use --bootstrap-server with localhost or the server's IP address and the default port 9092.
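Side by side, the two forms look like this (the topic name and counts are just examples):
# Kafka 2.x: topic commands go through ZooKeeper
bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic my-topic
# Kafka 3.x: topic commands go through the broker
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1 --topic my-topic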
Documentation
Check your broker after you get this error; the Kafka broker prints the correct IP address in the console running in the other terminal.
In my case I replaced 127.0.0.1:2181 with 192.168.0.21:9092 and was able to create a new topic successfully.
Note: use the bootstrap server instead of ZooKeeper.
On Mac, it worked for me when I used the bootstrap server with the Kafka server port. It initially failed when I tried with ZooKeeper:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic firsttopic --create --partitions 3 --replication-factor 1
I faced the same issue, even though everything had been working fine before.
So I changed the data directory in both properties files, zookeeper.properties and server.properties, and it started working fine again.
It could be because I hadn't shut down the broker and ZooKeeper properly before.
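For reference, the data-directory settings involved are these two properties; the paths shown are just examples, so point them at whatever fresh directories you like:
# in config/zookeeper.properties
dataDir=/tmp/zookeeper-fresh
# in config/server.properties
log.dirs=/tmp/kafka-logs-fresh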
I run Apache Storm in a cluster, and I was looking for ways to stop and/or restart Nimbus, the Supervisor, and the UI. Would writing a service help? What should I write in the service file, and where should I place it? Thank you in advance.
Yes, writing a service is the recommended way to run Storm. The commands you want to run are storm nimbus to start Nimbus (minimum 1 per cluster), storm supervisor to run a supervisor (1 per worker machine), storm ui (1 per cluster), and storm logviewer (1 per worker machine). There are other commands you can also run; you can find them by simply running storm, which prints a list.
Regarding how to write the service, take a look at the upstart cookbook: http://upstart.ubuntu.com/cookbook/
There's an example script here that you can probably use to get started: https://unix.stackexchange.com/a/84289
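As a starting point, an upstart job for Nimbus might look like the sketch below; the storm binary path and the storm user are assumptions, and the supervisor, UI, and logviewer jobs would be identical apart from the exec line:
# /etc/init/storm-nimbus.conf -- sketch only; adjust path and user to your install
description "Storm Nimbus"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid storm
exec /usr/local/storm/bin/storm nimbus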
You can set them up as services so that they start when the node boots; the same mechanism can be used to stop them:
/etc/rc.d/SERVICE start or stop or restart
We can also stop them by finding the process with "ps aux | grep nimbus" (or supervisor, etc.), then taking the process ID and killing it with the kill command.
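In shell form that approach looks like this, with nimbus as the example (substitute supervisor, ui, etc.):
ps aux | grep '[n]imbus'   # the brackets keep the grep process itself out of the output
kill <PID>                 # <PID> is the second column of the ps output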
We have a small Pivotal Hadoop cluster, in which we use Spring XD as the data ingestion tool.
Tried:
When the following command is executed from the Spring XD admin machine:
[root@host ~]# service spring-xd-admin status
xd-admin dead but pid file exists
Outcome:
Both spring-xd-admin and the container stopped responding.
Hence, the cluster's data pipeline has stopped completely.
Thanks in advance for any help.
Looks like the service crashed and left a pid file behind. You would have to remove the pid file manually. Look for a file named xd-admin.pid in the /var/run/ directory.
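Concretely, the cleanup and restart would look something like this (the pid file path follows the convention above; run as root, as in the question):
rm /var/run/xd-admin.pid
service spring-xd-admin start
service spring-xd-admin status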
I have searched all over and couldn't find the cause of the error.
I have checked this Stack Overflow question, but it is not the problem in my case.
I started a ZooKeeper server.
The command to start the server was:
bin/zookeeper-server-start.sh config/zookeeper.properties
Then I SSHed into the VM using PuTTY and started the Kafka server using:
$ bin/kafka-server-start.sh config/server.properties
Then I created a Kafka topic, and when I list the topics, it appears.
Then I opened another PuTTY session, started kafka-console-producer.sh, and typed a message (even just Enter), and I get this long, repetitive exception.
The configuration files zookeeper.properties, server.properties, and kafka-producer.properties are as follows (respectively):
The version of Kafka I am running is 8.2.2-something, as I saw it in the kafka/libs folder.
P.S. I get no messages in the consumer.
Can anybody figure out the problem?
The tutorial I was following was this: http://www.bogotobogo.com/Hadoop/BigData_hadoop_Zookeeper_Kafka_single_node_single_broker_cluster.php
On the Hortonworks sandbox, have a look at the server configuration:
$ less /etc/kafka/conf/server.properties
In my case it said
...
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
...
This means you have to use the following command to successfully connect with the console producer:
$ cd /usr/hdp/current/kafka-broker
$ bin/kafka-console-producer.sh --topic test --broker-list sandbox.hortonworks.com:6667
It won't work if you use --broker-list 127.0.0.1:6667 or --broker-list localhost:6667. See also http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kafka.html
To consume the messages, use:
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
As you mentioned in your question, you are using HDP 2.3, so when running the console producer you need to provide sandbox.hortonworks.com:6667 as the broker list.
Please use the same address when running the console consumer.
Please let me know if you still face any issues.
Within Kafka, there is an internal conversation that goes on between clients (producers and consumers) and the broker (server). During those conversations, clients often ask the server for the address of the broker that's managing a particular partition, and the answer is always a fully qualified host name. Without going into specifics: if you ever refer to a broker with an address that is not that broker's fully qualified host name, there are situations where the Kafka implementation runs into trouble.
Another mistake that's easy to make, especially with the sandbox, is referring to a broker by an address that's not defined in DNS. That's why every node in the cluster has to be able to address every other node by its fully qualified host name. It's also why, when accessing the sandbox from another virtual image running on the same machine, you have to add sandbox.hortonworks.com to that image's hosts file.
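The hosts-file entry in question is a single line mapping the sandbox's IP to its fully qualified name; the address below is only an example, so use whatever your VM actually reports:
# /etc/hosts on the machine or image that talks to the sandbox
192.168.56.101   sandbox.hortonworks.com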
From what I know, I am able to set up the Mesos master, slave, ZooKeeper, and Marathon on a single node.
But once I execute the command to start mesos-master, it occupies the shell and I have no way to continue executing other commands (such as starting mesos-slave) elsewhere. If I stop it so I can run the next command, the problem is that mesos-master is no longer running.
Don't execute the commands directly from your shell; you want to start all of those components (zookeeper, mesos-master, mesos-slave, and marathon) as services:
/etc/init.d/zookeeper start
start mesos-master
start mesos-slave
start marathon
I forget whether ZooKeeper creates the init script as part of the install for you; you may have to find it in the Hadoop docs.
As for the other three, they all use upstart, and you can find the configuration files in /etc/init/.
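Once they're running as upstart services, the standard commands let you check and control them from any shell (job names can vary slightly between package versions):
sudo status mesos-master
sudo status mesos-slave
sudo status marathon
sudo restart mesos-slave   # example: bounce just the slave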