How can I create a custom Chrome node with all the properties that the Zalenium default Chrome node has - zalenium

How can I create a custom Chrome node on Docker with all the properties (like video recording) that the Zalenium default Chrome node has?
I am able to create a node, but it has no live-execution VNC and cannot record video the way the default Zalenium Chrome node can.
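There is no answer in the thread, but a common approach is to base the custom image on the one Zalenium launches by default, since it already ships the VNC server and the video-recording tooling. A minimal sketch, assuming the elgalu/selenium base image and Zalenium's --seleniumImageName start option (verify both against your Zalenium version's docs; mycompany/selenium is a placeholder name):

# Dockerfile: extend the default node image so VNC and video recording keep working
#   FROM elgalu/selenium
#   ... your customizations (extra fonts, certificates, tools) ...
docker build -t mycompany/selenium:latest .

# Start Zalenium and point it at the custom image
docker run --rm -ti --name zalenium -p 4444:4444 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/videos:/home/seluser/videos \
  --privileged dosel/zalenium start --seleniumImageName "mycompany/selenium:latest"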

Related

In which scenarios is the firewall not supported by the nodes?

I'm trying to run a MinIO cluster using the Jelastic template; however, when attempting to activate the firewall on one of the nodes I'm greeted with the message "Firewall is not supported on this node."
Firewall is only supported on official images from Jelastic.
When you use a custom Docker image for your node, you won't be able to use the Jelastic firewall on it; you have to configure iptables manually.
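A minimal sketch of such manual rules (the ports are examples for a MinIO node, not Jelastic specifics; adjust for your services and persist the rules with your distro's tooling, e.g. iptables-persistent on Debian/Ubuntu):

iptables -A INPUT -i lo -j ACCEPT                                      # allow loopback
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # allow replies
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                          # SSH
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT                        # MinIO API (default port)
iptables -P INPUT DROP                                                 # drop everything else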

Why do I lose all the graphical configuration I've made in Node-RED when I clone the machine it runs on (Ubuntu Amazon Server)?

I'm running an Ubuntu Server on Amazon EC2, and I'm using Node-RED to create an IoT project in the cloud.
I succeeded in configuring one machine in a way that works for my project. My problem is when I clone this machine (creating an Amazon Machine Image of my original server and launching it as a new machine). I don't know why all the nodes that I created in the Node-RED graphical interface disappear when I clone my Ubuntu Server. On my cloned server I just see a blank page when I access Node-RED, as if I had never created any node on the original server.
I think this is a problem with Node-RED, because I'm also running a Kibana instance on the same server and all of Kibana's graphical configuration is preserved on the cloned server.
Does anyone know why this is happening? Is there a specific configuration in Node-RED that I have to change to allow its graphical interface to be cloned?
Note: I know I could just export everything I did on the original server to my cloned server using the Node-RED import/export tools, but I'm planning to clone my original server many times, so it would be better if everything were exactly the same when I clone the machine, without the need for manual work.
Node-RED stores the flow in a file in the ~/.node-red/ directory of the user running that instance; the file name is based on the host name of the machine.
E.g. on a Raspberry Pi the default flow file is called:
/home/pi/.node-red/flows_raspberrypi.json
So, assuming that the host name gets changed when you "clone" the machine, Node-RED will not be able to find a flow file that matches the new host name, and as such it starts with an empty flow.
There are a few ways to work around this (see the sketch after this list).
If you start Node-RED manually from the command line, you can specify the flow file as the last argument: node-red flow.json
If you are running Node-RED as a service, you can edit ~/.node-red/settings.js to include a flowFile key that holds the name of the flow file to use.
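A sketch of both workarounds, assuming the original host was named oldhostname (a placeholder; check which flows_*.json file actually exists in ~/.node-red/):

# Option A: rename the old flow file to match the clone's host name
cp ~/.node-red/flows_oldhostname.json ~/.node-red/flows_"$(hostname)".json

# Option B: pin the flow file name in ~/.node-red/settings.js so the
# host name no longer matters, by adding a line such as:
#   flowFile: 'flows.json',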

Node app (Meteor) does not accept XHR connections

I have just moved an old Node app (Meteor, to be honest), previously running on the Red Hat OpenShift PaaS, onto a new Linux VPS box.
The problem is that the Node server seems to refuse (block, not provide, not service) the XHR-type connections from the browser directed at the port usually defined using the
DDP_DEFAULT_CONNECTION_URL
environment variable.
As I understand it, this is used for the Ajax-like responsiveness built into Meteor apps.
From the browser's point of view, I just see failed XHR-type connections to the DDP URL.
The firewall seems to be set up correctly.
HTTP communication (port 80) works fine, so I can get the static part of the web page and even navigate to other static pages, but there is no dynamic data (like from the DB).
Any ideas?
You forgot to put export before setting the environment variable.
Run this command; I hope it solves your problem.
export DDP_DEFAULT_CONNECTION_URL
So it was just the DDP_DEFAULT_CONNECTION_URL setting. When the app was deployed to the RH OpenShift PaaS, the value used there was :8000. My fault was that I assumed it had to be the same everywhere. Changing it to :8080 (the port used by Node) made the app work.
I had just thought they had to be separate ports (one for WWW and one for DDP).
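For reference, a sketch of how such a deployment is typically started (example.com, port 8080, and main.js stand in for your own host, port, and bundle entry point; MONGO_URL omitted):

export PORT=8080
export ROOT_URL=http://example.com
export DDP_DEFAULT_CONNECTION_URL=http://example.com:8080   # same port Node listens on
node main.js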

curl can get web content hosted in a local Docker container, but the browser cannot open the page

I installed Docker on my Mac and have a Laravel container running. After publishing the port using docker -p, I can successfully get the web content using cURL. However, I am not able to use Safari or any other browser to view the page. The URL looks like this: http://191.168.99.100 and I am sure the port is successfully mapped between Docker and the container.
I am wondering if there is a specific troubleshooting method to find the reason for the issue.
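There is no answer in the thread, but a few quick checks, sketched under the assumption of Docker Toolbox / docker-machine on macOS ("default" is the usual machine name; its usual default address is 192.168.99.100, which may explain the 191.168.99.100 vs 192.168.99.100 discrepancy above):

docker-machine ip default                     # print the VM's real IP; compare with the URL in the browser
docker ps                                     # confirm the published port, e.g. 0.0.0.0:80->80/tcp
curl -v http://$(docker-machine ip default)   # retry against the reported IP, watching the headers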

Access Hadoop nodes' web UI from multiple links

I am using the following setting for access to the web UI of Hadoop's nodes:
dfs.namenode.http-address : 127.0.0.1:50070
With this, I am able to access the node's web UI link only from the local machine, as:
http://127.0.0.1:50070
Is there any way I can make it accessible from outside as well? Say, like:
http://<Machine-IP>:50070
Thanks in advance!
You can use the hostname or IP address instead of localhost/127.0.0.1 (see the sketch after these steps).
Make sure you can ping the hostname or IP from the remote machine. If you can ping it, then you should be able to access the web UI.
To ping it:
Open cmd/terminal
Type the below command on the remote machine:
ping hostname/ip
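A sketch of the corresponding change, assuming the property lives in hdfs-site.xml on your setup (0.0.0.0 binds the UI to all interfaces, so make sure a firewall limits who can reach port 50070; restart the NameNode afterwards):

dfs.namenode.http-address : 0.0.0.0:50070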
From http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html
The following table lists web interfaces that you can view on the core
and task nodes. These Hadoop interfaces are available on all clusters.
To access the following interfaces, replace slave-public-dns-name in
the URI with the public DNS name of the node. For more information
about retrieving the public DNS name of a core or task node instance,
see Connecting to Your Linux/Unix Instances Using SSH in the Amazon
EC2 User Guide for Linux Instances. In addition to retrieving the
public DNS name of the core or task node, you must also edit the
ElasticMapReduce-slave security group to allow SSH access over TCP
port 22. For more information about modifying security group rules,
see Adding Rules to a Security Group in the Amazon EC2 User Guide for
Linux Instances.
YARN ResourceManager
YARN NodeManager
Hadoop HDFS NameNode
Hadoop HDFS DataNode
Spark HistoryServer
Because there are several application-specific interfaces available on
the master node that are not available on the core and task nodes, the
instructions in this document are specific to the Amazon EMR master
node. Accessing the web interfaces on the core and task nodes can be
done in the same manner as you would access the web interfaces on the
master node.
There are several ways you can access the web interfaces on the master
node. The easiest and quickest method is to use SSH to connect to the
master node and use the text-based browser, Lynx, to view the web
sites in your SSH client. However, Lynx is a text-based browser with a
limited user interface that cannot display graphics. The following
example shows how to open the Hadoop ResourceManager interface using
Lynx (Lynx URLs are also provided when you log into the master node
using SSH).
lynx http://ip-###-##-##-###.us-west-2.compute.internal:8088/
There are two remaining options for accessing web interfaces on the
master node that provide full browser functionality. Choose one of the
following:
Option 1 (recommended for more technical users): Use an SSH client to connect to the master node, configure SSH tunneling with local port
forwarding, and use an Internet browser to open web interfaces hosted
on the master node. This method allows you to configure web interface
access without using a SOCKS proxy.
To do this, use the command
$ ssh -gnNT -L 9002:localhost:8088 user@example.com
where user@example.com is your user name and the master node's address. Note the use of -g to open access to external IP addresses (beware: this is a security risk).
You can check this is running using
nmap localhost
To close this SSH tunnel when done, use
ps aux | grep 9002
to find the PID of your running ssh process and kill it.
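Or, as a one-line sketch of that find-and-kill step (pgrep -f matches against the full command line; the pattern assumes the tunnel command above):

kill $(pgrep -f "ssh -gnNT -L 9002")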
Option 2 (recommended for new users): Use an SSH client to connect to the master node, configure SSH tunneling with dynamic port
forwarding, and configure your Internet browser to use an add-on such
as FoxyProxy or SwitchySharp to manage your SOCKS proxy settings. This
method allows you to automatically filter URLs based on text patterns
and to limit the proxy settings to domains that match the form of the
master node's DNS name. The browser add-on automatically handles
turning the proxy on and off when you switch between viewing websites
hosted on the master node, and those on the Internet. For more
information about how to configure FoxyProxy for Firefox and Google
Chrome, see Option 2, Part 2: Configure Proxy Settings to View
Websites Hosted on the Master Node.
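For Option 2, the dynamic-forwarding command follows the same pattern. A sketch after the AWS guide's example (the key file, port 8157, and DNS name are placeholders):

ssh -i mykeypair.pem -N -D 8157 hadoop@master-public-dns-name
# -D starts a local SOCKS proxy on port 8157; point FoxyProxy/SwitchySharp at localhost:8157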
This seems like insanity to me, but I have been unable to find out how to configure access in core-site.xml to override the web interface for the ResourceManager, which by default is available at localhost:8088/. If Amazon thinks this is the way, then I tend to go along with it.
