We are executing a Storm topology in pseudo-distributed mode.
The topology is executing fine, and we are able to connect to the Storm UI (port 8080).
But the Storm UI is not displaying the running topology's information.
We restarted the Storm UI process as well, but to no avail.
Does Storm need any special configuration to display a running topology in the Storm UI?
You only have to provide a port via the ui.port option in storm.yaml, e.g. ui.port: 8080, and make sure that the port is not already in use. Also, you don't need to run a supervisor just to check whether the Storm UI is up; running Nimbus and then starting the UI is enough.
Provide ui.port in the storm.yaml file; the default port is 8080.
Start the Storm UI with bin/storm ui.
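For concreteness, a minimal sketch of those two steps (8080 is just the default; any free port works):

# conf/storm.yaml -- note the space after the colon, YAML requires it
ui.port: 8080

# then, from the Storm installation directory:
bin/storm ui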
I was facing the same issue: my port was already in use, so I set the port number manually.
Just add ui.port: 8090 to your storm.yaml file, which is inside the conf folder of your Apache Storm installation, and re-run the command storm ui.
Then open http://localhost:8090/ in Google Chrome or any other browser.
What version of Storm are you running?
Check to make sure both Nimbus and a Supervisor are running. I have seen that if a topology is deployed with no supervisor running, nothing is displayed.
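A quick way to verify (a sketch; jps ships with the JDK, and storm list is part of the standard Storm CLI):

# list the Java processes -- you should see both 'nimbus' and 'supervisor'
jps

# ask Nimbus which topologies are running and their status
bin/storm list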
I was also facing the same issue. Since the default port 8080 was already in use, you might be getting a 404 there.
As suggested above, just use ui.port: 8081 or anything other than 8080 that is not in use.
Mind the space between the colon and 8081; YAML requires it, and I ran into a problem when it was missing.
If you still face issues after this, run the zkCli command from your ZooKeeper installation's bin directory against your hostname and try again.
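A sketch of that connectivity check (2181 is ZooKeeper's default client port; replace the hostname with your own; a successful connection drops you into an interactive zk shell prompt):

# from the ZooKeeper installation directory
bin/zkCli.sh -server yourhostname:2181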
Good luck !!
When running in pseudo mode, we often forget to give the topology a name. If no name is provided when the topology is submitted, it won't show up in the Storm UI.
Check the following:
Supervisor is running
Nimbus is running
ZooKeeper is running
you gave the topology a name (see the sketch below)
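A minimal Java sketch of a named submission (the class and topology names here are hypothetical, and the spout/bolt wiring is elided; note that pre-1.0 Storm releases use the backtype.storm package prefix instead of org.apache.storm):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class NamedSubmission {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... builder.setSpout(...) and builder.setBolt(...) wiring goes here ...

        Config conf = new Config();
        // the first argument is the name the Storm UI lists the topology under
        StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
    }
}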
Thanks
I have a Hadoop cluster set up using Ambari, with services like HDFS, YARN, and Spark running on the hosts.
When I run the sample SparkPi example in cluster mode with master yarn, the application executes successfully and I can view it in the ResourceManager logs.
But when I click on the history link, it does not show the Spark history UI. How do I enable/view it?
First, check whether your Spark history server is already configured by looking for spark.yarn.historyServer.address in the spark-defaults.conf file.
If not configured, this link should help you configure the server: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.6/bk_installing_manually_book/content/ch19s04s01.html
If it is already configured, check that the history server host is accessible from all the nodes in the cluster, and that the port is open.
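If you end up configuring it by hand, a hedged sketch of the relevant spark-defaults.conf entries (the hostname and HDFS path are assumptions to adjust for your cluster; 18080 is the history server's usual default port):

# spark-defaults.conf
spark.yarn.historyServer.address   historyserver.example.com:18080
spark.eventLog.enabled             true
spark.eventLog.dir                 hdfs:///spark-history
spark.history.fs.logDirectory      hdfs:///spark-history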
The ZooKeeper used by Storm has stopped working, and because of this the topologies stop working. Is there any mechanism so that ZooKeeper will start again automatically?
You will have to define some supervision over ZooKeeper; try daemontools or Puppet.
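As an illustration, a minimal supervisord entry (one alternative to daemontools) that restarts ZooKeeper whenever the process dies; the installation path is an assumption:

; /etc/supervisor/conf.d/zookeeper.conf
[program:zookeeper]
command=/opt/zookeeper/bin/zkServer.sh start-foreground
autostart=true
autorestart=true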
What do you mean by "ZooKeeper stopped working"? Did you set up ZooKeeper in reliable distributed mode? If yes, ZooKeeper should be available all the time and Storm topologies should keep running.
However, if one of your ZK nodes dies, you need to start up a new one manually.
See "Setup up a Zookeeper cluster" in https://storm.apache.org/documentation/Setting-up-a-Storm-cluster.html
See also https://storm.apache.org/documentation/images/storm-cluster.png from https://storm.apache.org/tutorial.html
I have installed Hadoop 2.7.1 in pseudo-distributed mode (all daemons on a single machine). It's up and running: I'm able to access HDFS through the command line, run jobs, and see the output.
I can access http://localhost:50070/dfshealth.html#tab-overview; it shows the version and cluster status, and I can access the Hadoop file system.
I found one link and applied its accepted solution, but that did not work for me. When I try to access http://127.0.0.1:54310, I get the error message below:
It looks like you are making an HTTP request to a Hadoop IPC port. This is
not the correct port for the web interface on this daemon.
Any help is appreciated.
Thanks.
I am using MR2 and am not able to track my job on port 8088. When I run a MapReduce job, it submits the job with the tracking URL http://localhost:8080, and that URL does not open.
Use port 50030 (the JobTracker web UI) if you are using MRv1; for YARN, use port 8088 to access the ResourceManager.
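If 8088 does not respond, it may have been overridden; a sketch of the yarn-site.xml property that controls the ResourceManager web UI address (the hostname here is an assumption, 8088 is the default):

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>localhost:8088</value>
</property>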
I could access most functionality of the Hadoop admin site.
But when I tried to visit the history of each application, I had no luck any more.
Does anybody know what happened to my environment? Where should I check?
By the way, when I run "netstat -a" on my VM, I find no records for port 8088 or 19888, which seems very unreasonable to me, because 8088 leads to the Hadoop main page and works well.
In this web interface, you can see your jobs in real time if they are running, or their history.
Once a MapReduce job finishes, the ResourceManager no longer keeps track of it. That is the job of the JobHistoryServer.
Your JobHistoryServer (an optional part of Hadoop YARN) does not seem to be running.
It is this service that listens on port 19888.
You can launch it with the command /etc/init.d/hadoop-mapreduce-historyserver start.
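If Hadoop was installed from the plain Apache tarball rather than a package, a sketch of the equivalent steps (paths assumed relative to the Hadoop installation directory):

# start the JobHistoryServer
sbin/mr-jobhistory-daemon.sh start historyserver

# confirm it is now listening on 19888
netstat -an | grep 19888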
I have installed Storm correctly, but I am struggling with how to run an example on it. Can anyone please give me a link or a suggestion for executing an example? Also, what is the benefit of running Storm under supervision?
Assuming you have installed Storm on your local machine, an example Storm project is bundled along with it, which you can find in examples/storm-starter of your Storm repository.
To run this example, follow the steps in the README.markdown file in the root of the storm-starter folder. The steps can also be found at https://github.com/apache/storm/tree/v0.10.0/examples/storm-starter
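In outline, the steps look like the following sketch (the jar name and topology class are taken from the v0.10.0 storm-starter README; double-check them against your version):

cd examples/storm-starter
mvn clean install -DskipTests=true

# run a sample topology; WordCountTopology runs in local mode when
# no topology name argument is given
storm jar target/storm-starter-*-jar-with-dependencies.jar storm.starter.WordCountTopology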
Regarding running Storm under supervision: since Storm and ZooKeeper have a fail-fast policy, the daemons will shut down whenever there is an error. Running them under a supervisor process brings the servers back up automatically when they exit because of errors.