Node-RED boilerplate and Analytics for Apache Hadoop binding issue

I have created the Node-RED boilerplate and I have bound the Analytics for Apache Hadoop service.
It clearly appears as a bound service in the dashboard.
But when I launch the Node-RED app and add an HDFS node, I get the following message:
"Unbounded Service: Big Insights service not bound. This node wont work"
Any idea what I am doing wrong? It used to work well for me a few weeks ago.

You will need to attach the BigInsights for Apache Hadoop service to your app.
Please attach the service and then restage your app.
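With the Cloud Foundry CLI, the bind and restage steps typically look like the sketch below (the app and service instance names are placeholders, adjust them to your own):

# bind the BigInsights / Analytics for Apache Hadoop instance to the Node-RED app
cf bind-service my-nodered-app my-hadoop-instance
# restage so the app picks up the new VCAP_SERVICES credentials
cf restage my-nodered-app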

Related

How to easily publish a multi-container ASP.NET Core web app and web API to a remote Kubernetes cluster

So I recently got into Docker and Kubernetes, and I have a Kubernetes cluster set up on a remote VM (Linux, kubeadm). I'm wondering if there is a production-ready solution I can easily use to deploy my multi-container ASP.NET Core web application. I have been trying to solve this for the past week and have found nothing that suits my needs. I tried Bridge to Kubernetes, but I can only get it to work locally on my Windows machine and not remotely against my Linux VM. This is the layout of my application.
Ask me if you need any additional information as I'm still new to this stuff.
Thanks for your help.
I found that Jenkins is just what I needed!
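If you want to script the deployment yourself, whether manually or inside a Jenkins pipeline, a rough sketch under assumed names (the registry address, image tag, kubeconfig path, and manifest directory are all placeholders) could be:

# build and push each container image to a registry the remote cluster can pull from
docker build -t registry.example.com/mywebapp:1.0 ./WebApp
docker push registry.example.com/mywebapp:1.0
# point kubectl at the remote cluster and apply the Deployment/Service manifests
kubectl --kubeconfig ~/.kube/remote-config apply -f k8s/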

NiFi: HBase controller service restart

The NiFi version used is 1.11.4.
I am using FetchHBaseRow and PutHBaseJSON and have configured the HBase controller service successfully.
Since we migrated NiFi from on-premises to AWS, the FetchHBaseRow and PutHBaseJSON processors have been throwing errors.
The image below shows the error for the FetchHBaseRow processor. It occurs frequently, at least once per day.
Whenever this issue occurs, I have rectified it by restarting the controller service.
The image below shows the controller service for HBase.
But I could not find the root cause of why restarting the controller service fixes it. Any explanation, please?

How to configure Application Logging Service for SCP application

I have created the hello world application from the SAP Cloud SDK archetypes and pushed this to the cloud foundry environment, binding it to an application logging service instance. My understanding is that this should already provide me with the ability to analyze all logs in the Kibana dashboard of the cloud platform and previously it also worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.
The problem was not SDK-related but seems to have been an incident on the SCP; it now works correctly without any changes.
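For reference, the binding described above is usually done with the Cloud Foundry CLI roughly as follows (the service offering, plan, and instance/app names are assumptions, check your marketplace for the exact names):

# create an Application Logging service instance and bind it to the app
cf create-service application-logs lite my-app-logs
cf bind-service my-helloworld-app my-app-logs
# restage so the binding takes effect
cf restage my-helloworld-app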

Unsuccessful deployment on Spring Cloud Dataflow with Apache YARN

I have installed a single-node Apache YARN setup with Kafka and ZooKeeper and Spring Cloud Dataflow 1.0.3.
Everything works fine, but when I try some example deployments, like:
stream create --name "ticktock" --definition "time | hdfs --rollover=100" --deploy
http --port=8000 | log --name=logtest --level=INFO
the stream's status never stays at "deployed". It keeps cycling through "undeployed" -> "partial" -> "deployed" in a constant loop.
However, the YARN application itself deploys successfully; it is as if the communication between the Spring Cloud Dataflow server instance and Apache Hadoop is constantly failing.
What could be causing this?
Thanks in advance!
Well, there's not a lot of information to go on here, but you may want to check that you have the necessary base directory created in HDFS and that your YARN user has read/write permission to it.
spring.cloud.deployer.yarn.app.baseDir=/dataflow
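If that directory is missing or not writable, creating it up front along these lines may help (the owning user and group here are assumptions, use whatever user runs your YARN containers):

# create the Dataflow base directory in HDFS and hand it to the YARN user
hdfs dfs -mkdir -p /dataflow
hdfs dfs -chown -R yarn:hadoop /dataflow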
Thanks to all for your answers! I've been busy with other high-priority projects, but I was able to build everything using Ambari (including the plugin mentioned by Sabby Adannan). Now all is working great!

Spark EC2 deployment error: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up

I have a question regarding deploying a Spark application on a standalone EC2 cluster. I followed the Spark tutorial and was able to successfully deploy a standalone EC2 cluster. I verified that by connecting to the cluster UI and making sure that everything is as it is supposed to be. I developed a simple application and tested it locally, and everything works fine. But when I submit it to the cluster (just changing --master local[4] into --master spark://....), I get the following error: ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up. Does anyone know how to overcome this problem? My deploy mode is client.
Make sure that you have provided the correct URL for the master.
The exact Spark master URL is displayed on the page when you connect to the web UI.
The URL on the page looks like: Spark Master at spark://IPAddress:port
Also note that the web UI port and the port Spark actually runs on may be different.
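A hedged example of the submit command with the exact master URL copied from the web UI (the IP address, port, class, and jar name below are placeholders):

# submit against the standalone master shown as "Spark Master at spark://..." in the UI
spark-submit \
  --master spark://172.31.0.10:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar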
