NiFi version used is 1.11.4.
I am using FetchHBaseRow and PutHBaseJSON and have configured the HBase controller service successfully.
Since we migrated NiFi from on-premise to AWS, the FetchHBaseRow and PutHBaseJSON processors have been throwing errors.
The image below shows the error for the FetchHBaseRow processor. It occurs frequently, at least once per day.
Whenever this issue occurs, I have rectified it by restarting the controller service.
The image below shows the controller service for HBase.
But I could not find the root cause, or why restarting the controller service fixes it. Any explanation, please?
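For context, the restart I do by hand is just disabling and re-enabling the service, and the same thing can be scripted against the NiFi REST API. A rough sketch of that workaround, assuming an unsecured instance on localhost, a placeholder service ID, and that I have the run-status endpoint right for 1.11.x:

# Fetch the controller service to read its current revision version
curl -s http://localhost:8080/nifi-api/controller-services/<service-id>

# Disable it, plugging the current revision version into "version"
curl -s -X PUT -H 'Content-Type: application/json' \
  -d '{"revision":{"version":1},"state":"DISABLED"}' \
  http://localhost:8080/nifi-api/controller-services/<service-id>/run-status

# Once referencing processors have stopped, enable it again
curl -s -X PUT -H 'Content-Type: application/json' \
  -d '{"revision":{"version":2},"state":"ENABLED"}' \
  http://localhost:8080/nifi-api/controller-services/<service-id>/run-status

Cycling the service this way appears to reset the HBase client connection, which is presumably why the processors recover, but it does not explain why the connection goes stale in the first place.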
Related
I have an issue where I have multiple host dashboards for the same Elasticsearch server. Each dashboard has its own name and way of collecting data. One is connected to the installed datadog-agent and the other is somehow connected to the Elasticsearch service directly.
The weird thing is that I cannot seem to find a way to turn off the agent connected directly to the ES service, other than turning off the Elasticsearch service completely.
I have tried deleting the datadog-agent completely. This stops the dashboard connected to it from receiving data (of course), but the other dashboard keeps receiving data somehow. I cannot find what is sending this data and therefore am not able to stop it. We have multiple master and data nodes, and this is an issue for all of them. The ES version is 7.17.
Another of our clusters is running ES 6.8; we have not finalized the monitoring configuration for that cluster, but for now it does not have this issue.
Just as extra information:
The dashboard connected to the agent has the same name as the host server, while the other only has the internal IP as its host name.
Does anyone have any idea what it is that is running and how to stop it? I have tried almost everything I could think of.
I finally found the reason. The datadog-agents on all master and data nodes were configured not to use the node name as the host name, and cluster stats was turned on in the Elasticsearch integration for Datadog. As a result, as long as even one of the datadog-agents in the cluster was running, data kept coming in to the dashboard that was not named correctly. Leaving the answer here in case anyone hits the same situation in the future.
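For anyone checking their own setup: the relevant settings live in the Elasticsearch check config on every node running the agent. A rough sketch of how to verify it, assuming the default Agent v6/7 config path and that the option names match your integration version (cluster_stats is the one I am sure of; confirm node_name_as_host against the integration's example config):

# Inspect the Elasticsearch check config on each master/data node
grep -nE "cluster_stats|node_name_as_host" /etc/datadog-agent/conf.d/elastic.d/conf.yaml

# After correcting the config, restart the agent so the check picks it up
sudo systemctl restart datadog-agent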
I am trying to set up a 3-node cluster on a cloud server with Cloudera Manager, but at the cluster installation step it gets stuck at 64%. Please guide me on how to proceed and where to find the logs.
The following image shows the installation screen.
Some cloud providers have a policy of removing an IP from public hosting for some time if a lot of data requests are coming in. This is done to prevent DDoS attacks.
A solution can be to ask them to raise the data transfer limit.
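As for where to find logs: assuming a default package install, installation progress usually shows up in the Cloudera Manager server log on the CM host and in the agent log on each node being added:

# On the Cloudera Manager host
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

# On each node being added to the cluster
sudo tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log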
I am creating a flow of tweets using NiFi and analyzing them in Solr. Tweets are coming into NiFi, but nothing is happening in Solr. The PutSolrContentStream processor in NiFi shows an error: could not connect to localhost:2181/solr, cluster not found/not ready.
PutSolrContentStream processor error:
Are you running in clustered mode?
I just set up a local (Standard mode) Solr core, and in the Solr Location property I used http://localhost:8983/solr/myDemoCore. Might you be forgetting to mention the core's name?
If you haven't created a core:
cd path/to/solr/bin/
./solr create -c myDemoCore
./solr restart
Then use http://localhost:8983/solr/myDemoCore in the Solr Location property and try again.
Edit: I see that you're using Windows, so just change your path notation accordingly.
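One way to sanity-check the core before pointing NiFi at it is the core admin API (this assumes standalone Solr on the default port and the myDemoCore name from above):

# Should report myDemoCore under "status" if the core exists
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=myDemoCore"

If that returns the core, PutSolrContentStream configured with the full core URL in Standard mode (rather than Cloud mode with a ZooKeeper string) should stop complaining about localhost:2181.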
I have created the Node-RED boilerplate and bound the Analytics for Apache Hadoop service.
So it clearly appears as a bound service in the dashboard.
But when I launch the Node-RED app and add an HDFS node, I get the following message:
"Unbounded Service: Big Insights service not bound. This node wont work"
Any idea of what I am doing wrong? It used to work well for me a few weeks ago.
You will need to attach the BigInsights for Apache Hadoop service to your app.
Please attach the service and restage your app.
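Assuming the app is running on Bluemix/Cloud Foundry, the CLI equivalent is roughly the following (the app and service instance names are placeholders; use the ones shown in your dashboard):

# Bind the BigInsights / Analytics for Apache Hadoop instance to the app
cf bind-service my-nodered-app my-hadoop-service

# Restage so the new binding shows up in VCAP_SERVICES
cf restage my-nodered-app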
I have a question regarding deploying a Spark application on a standalone EC2 cluster. I followed the Spark tutorial and was able to successfully deploy a standalone EC2 cluster; I verified this by connecting to the cluster UI and making sure everything is as it is supposed to be. I developed a simple application and tested it locally, and everything works fine. When I submit it to the cluster (just changing --master local[4] into --master spark://....), I get the following error: ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up. Does anyone know how to overcome this problem? My deploy-mode is client.
Make sure that you have provided the correct URL to the master.
Basically, the exact Spark master URL is displayed on the page when you connect to the web UI.
The URL on the page is something like: Spark Master at spark://IPAddress:port
Also, note that the web UI port and the port the Spark master is listening on may be different.
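In other words, copy the spark:// URL exactly as shown in the header of the master web UI (typically the UI is on port 8080 while the master itself listens on 7077) and pass it to spark-submit. A rough sketch with placeholder values:

# Master URL taken from the web UI header, not the UI's own address
spark-submit \
  --master spark://<master-ip>:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar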