Node-RED admin page not loading - VPS

I noticed today that I can't load the Node-RED admin page. I've checked the status of port 1880: it's still open, and it even shows Node-RED as the listener on 1880.
Given that I'm working on Ubuntu (a VPS), does anyone know how I can solve this problem, or how I can restart Node-RED?
Thanks.

As per the docs:
To remotely administer a Node-RED instance, the tool must first be pointed at the Node-RED instance you want it to access. By default, it assumes http://localhost:1880. To change that, use the target command:
node-red-admin target http://node-red.example.com/admin
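
The docs quote above covers pointing the admin tool at a remote instance; to actually restart Node-RED on an Ubuntu VPS, the right command depends on how it was installed. A sketch, assuming it was installed with the official Debian/Ubuntu install script, which registers a systemd service named nodered:

sudo systemctl restart nodered   # restart the service
sudo systemctl status nodered    # confirm it is running again
node-red-log                     # tail the log if the editor still does not load

If Node-RED was started manually instead (e.g. plain node-red, or under pm2), restart it the same way it was originally launched.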

Related

Apache NiFi localhost login problem - cannot see login GUI after using it for the first time

Problem:
I'm using Apache NiFi on Ubuntu 18.04 in VirtualBox 6.1. I managed to use Apache NiFi once without any problems: the login page at localhost:8443 worked the first time. But later, when I start Apache NiFi again (e.g. after a reboot of the machine) and go to localhost:8443, I no longer get a page to log into NiFi.
All that appears are some symbols, and I cannot log into NiFi as I did the first time. Basically, I want to be able to log into Apache NiFi; I'm not sure why the symbols appear instead of the login page.
Here's what I do:
I start Apache nifi-1.16.3 from its installation directory with its start command:
bin/nifi.sh start
bin/nifi.sh status
NiFi appears to start correctly, and the status command shows that it is running.
I then enter localhost:8443/nifi/login in the Firefox web browser and am presented with a page that contains only symbols.
What I've tried:
I downloaded NiFi again and started another instance from the fresh download. It behaves the same way, i.e. it shows the login page correctly the first time, then, when I try to access the login page again later via localhost, it shows the symbols instead.
I've checked whether port 8443 is being used by something else, but it seems free: while NiFi is running I check the port, then I shut NiFi down, and once it is shut down no other service is using port 8443. With NiFi shut down, trying to access localhost:8443 shows "Unable to connect" instead of the symbols.
I'm not sure what else to explore to solve this issue where I can't access the login GUI through localhost.
Just use the secure HTTPS protocol, like this: https://localhost:8443/nifi/login. Recent NiFi versions serve port 8443 over HTTPS by default, so requesting the page over plain http:// on that port leaves the browser rendering the server's raw TLS response, which is why you see unreadable symbols instead of the login page.
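
A quick way to confirm this from the shell (a sketch: the property name comes from a stock nifi.properties, and curl's -k flag skips verification of NiFi's self-signed certificate):

grep 'nifi.web.https' conf/nifi.properties   # should show nifi.web.https.port=8443
curl -k https://localhost:8443/nifi/         # returns HTML over HTTPS
curl http://localhost:8443/nifi/             # plain HTTP on the TLS port fails or returns unreadable bytes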

How to access the Flink web UI when running inside a container (WSL2)

In the First Steps instructions for Flink, it says you can connect to the web UI via a localhost link. I have been searching for a way to make this work on Windows 10 when running Flink inside WSL2; I followed all the steps from the linked First Steps page, but the connection is refused every time.
I did eventually figure this out. Edit the ./conf/flink-conf.yaml file and change:
rest.bind-address: localhost
to:
rest.bind-address: 0.0.0.0
then stop and restart the cluster. You can then access the web UI via http://localhost:8081.
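
A minimal sketch of the whole sequence, assuming the standalone-cluster scripts that ship with the Flink distribution:

# in ./conf/flink-conf.yaml:
#   rest.bind-address: 0.0.0.0
./bin/stop-cluster.sh                  # stop the standalone cluster
./bin/start-cluster.sh                 # start it so the new bind address takes effect
curl http://localhost:8081/overview    # the REST API answers once the UI is reachable

Binding to 0.0.0.0 makes the REST endpoint listen on all interfaces, which is what lets the Windows-side browser reach the server running inside WSL2.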

Why do I lose all the graphical configuration I've made in Node-RED when I clone the machine it runs on (Ubuntu Amazon server)?

I'm running an Ubuntu server on the Amazon EC2 service, and I'm using Node-RED to create an IoT project in the cloud.
I succeeded in configuring one machine so that it works for my project. My problem is when I clone this machine (creating an Amazon Machine Image of my original server and launching it as a new machine): I don't know why all the nodes I created in the Node-RED graphical interface disappear when I clone my Ubuntu server. On the cloned server I just see a blank page when I access Node-RED, as if I had never created any node on the original server.
I think this is a problem with Node-RED, because I'm also running a Kibana instance on the same server and all of Kibana's graphical configuration is preserved on the cloned server.
Does anyone know why this is happening? Is there a specific Node-RED configuration I have to change to allow its graphical interface to be cloned?
Note: I know I could just export everything I did on the original server to the cloned server using Node-RED's import/export tools, but I'm planning to clone my original server many times, so it would be better if everything were exactly the same on each clone, without manual work.
Node-RED stores the flow in a file in the ~/.node-red/ directory of the user running that instance; the file name is based on the host name of the machine.
For example, on a Raspberry Pi the default flow file is called:
/home/pi/.node-red/flows_raspberrypi.json
So, assuming the host name changes when you "clone" the machine, Node-RED will not find a flow file matching the new host name and will start with an empty flow.
There are a few ways to work around this:
if you start Node-RED manually from the command line, you can specify the flow file as the last argument: node-red flow.json
if you are running Node-RED as a service, you can edit ~/.node-red/settings.js to include a flowFile key holding the name of the flow file to use (see the sketch below).
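
A minimal sketch of both options; the file name flows.json is illustrative:

# option 1: name the flow file explicitly on the command line
node-red flows.json

// option 2: in ~/.node-red/settings.js, pin the flow file name so it no
// longer depends on the machine's host name (flowFile is a documented key)
module.exports = {
    flowFile: 'flows.json',
    // ...keep the rest of the existing settings unchanged
};

Either way, every clone then looks for the same file name regardless of its host name.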

Ambari metrics collector error

We have a 5-node Hortonworks cluster with Ambari monitors installed on all nodes and the Metrics Collector installed on the master node.
I am getting: Connection failed: [Errno 111] Connection refused to 0.0.0.0:6188
Please find the error attached:
https://drive.google.com/file/d/0B85rPUe3-QPXbXJSRzJmdUwwQU0/view?usp=sharing
I followed the document below and tried removing the service and adding it back:
https://cwiki.apache.org/confluence/display/AMBARI/Moving+Metrics+Collector+to+a+new+host
First of all, I am not able to find the origin of the error. Please share your experience if you have ever faced this problem.
It sometimes happens that the port is already in use by another process when we try to move the collector to a new host with the curl commands specified on the Apache wiki.
Instead of using those, you can leverage the feature Ambari provides in its GUI to move components from one host to another: the 'Move Master Wizard'.
Follow the steps documented for the Move Master Wizard and Ambari will take care of the rest for you.
I fixed this issue by killing the process running on that port and restarting the service. A manual reboot of the machine would also fix it.
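
A sketch of that fix from the shell; 6188 is the Metrics Collector port from the error above, and <PID> stands for whatever process ID lsof reports:

sudo lsof -i :6188    # find the process holding the port
sudo kill -9 <PID>    # free the port (substitute the real PID)

Then restart the Metrics Collector service from the Ambari UI.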

3-layer application non-networked?

I have a system named Windchill which runs in a CentOS 5.7 VirtualBox VM.
The system consists of Apache, Oracle 11g, the listener, WindchillDS (OpenLDAP), and the application core (method server), which is a Java process, all installed in the same VM (monolithic).
My question is related to the network: the system runs smoothly on the network, but once I remove the network cable it stops working and the method server keeps restarting with a socket timeout error.
I'm not an IT specialist; I manage the internal configuration of the system and I don't have a specialist to help me right now, but I desperately need to make it run non-networked on a laptop to show it to a customer.
I just want a hint of where the problem may be:
Does Oracle run non-networked? What configuration do I need to make it run without a network?
Maybe the problem is the listener?
I guess the problem is Oracle, because of the socket timeout error with the database, but I'm not sure...
Sorry, this is long and probably needs more explanation; please ask whatever you want!
I found this tip in another forum specific to the Windchill product (login needed):
http://portal.ptcuser.org/p/fo/st/thread=52005&posted=1
It is related to the Linux configuration for resolving IPs:
"Edit your /etc/resolv.conf to look like this when not connected to a network:
domain localdomain
search localdomain
nameserver 127.0.0.1"
Worked perfectly!
Thank you for your prompt answers guys!!!
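
For reference, the complete file from that tip (a sketch; localdomain is the stock suffix on many Linux installs, and note that DHCP clients may rewrite this file once the cable is plugged back in):

# /etc/resolv.conf for offline demo use
domain localdomain
search localdomain
nameserver 127.0.0.1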
