Running Selenium Grid through Vagrant - macos

I'm trying to migrate from running my Selenium server and clients all on my Mac to having the server run in a Vagrant VM while the clients keep running locally on my Mac.
I'm using Vagrant 1.4.3 running on Mac OS X 10.9.1 to launch an Ubuntu 13.10 VM. Once the VM is launched, I install Java, Node.js and a few other dependencies that are required for my testing environment. After installing Selenium 2.39.0 (the latest as of this writing), here are the relevant configurations.
I SSH into my Vagrant VM and run the following:
java -jar /usr/local/bin/selenium-server-standalone-*.jar \
-role hub \
-trustAllSSLCertificates \
-hubConfig /vagrant/hub.json
/vagrant on the VM maps to the root of my project directory on my local Mac. Here's the relevant config from my Vagrantfile.
config.vm.box = "saucy64"
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/saucy/20140202/saucy-server-cloudimg-amd64-vagrant-disk1.box"
# ...
config.vm.define "testing" do | test |
test.vm.network :forwarded_port, guest: 3444, host: 4444
test.vm.network :private_network, ip: "192.168.50.6"
# ...
end
Here is the Hub config that the Selenium Grid Hub is using on the Vagrant VM. Selenium Hub uses port 3444 inside the VM, which is portmapped to 4444 outside the VM, facing my Mac.
{
  "browserTimeout": 180000,
  "capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
  "cleanUpCycle": 2000,
  "maxSession": 5,
  "newSessionWaitTimeout": -1,
  "nodePolling": 2000,
  "port": 3444,
  "throwOnCapabilityNotPresent": true,
  "timeout": 30000
}
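If the hub starts cleanly, its Grid console should answer on port 3444 inside the VM and on the forwarded port 4444 from the Mac. A quick curl from each side (a minimal sanity check of the port mapping, not part of my setup scripts) confirms that before worrying about node registration:

# inside the Vagrant VM (the hub listens on 3444 per hub.json)
curl -I http://localhost:3444/grid/console

# on the Mac (guest port 3444 is forwarded to host port 4444 per the Vagrantfile)
curl -I http://localhost:4444/grid/console

Both requests should come back with an HTTP 200 from the hub.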
Here's how I launch Selenium on my Mac as a node.
java -jar selenium-server-standalone-*.jar \
-role node \
-trustAllSSLCertificates \
-nodeConfig node.mac.json
And here's the node config which tries to talk to the Hub running inside Vagrant.
{
  "capabilities": [
    {
      "platform": "MAC",
      "seleniumProtocol": "WebDriver",
      "browserName": "firefox",
      "maxInstances": 1
    },
    {
      "platform": "MAC",
      "seleniumProtocol": "WebDriver",
      "browserName": "chrome",
      "maxInstances": 1
    }
  ],
  "configuration": {
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "hubHost": "127.0.0.1",
    "hubPort": 4444,
    "hub": "http://127.0.0.1:4444/grid/register",
    "maxSession": 1,
    "port": 4445,
    "register": true,
    "registerCycle": 2000,
    "remoteHost": "http://127.0.0.1:4445",
    "role": "node",
    "url": "http://127.0.0.1:4445"
  }
}
Here's what I get in the Terminal on the Mac side.
Feb 02, 2014 9:29:07 PM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a selenium grid node
21:29:18.706 INFO - Java: Oracle Corporation 24.51-b03
21:29:18.706 INFO - OS: Mac OS X 10.9.1 x86_64
21:29:18.713 INFO - v2.39.0, with Core v2.39.0. Built from revision ff23eac
21:29:18.773 INFO - Default driver org.openqa.selenium.ie.InternetExplorerDriver registration is skipped: registration capabilities Capabilities [{platform=WINDOWS, ensureCleanSession=true, browserName=internet explorer, version=}] does not match with current platform: MAC
21:29:18.802 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4445/wd/hub
21:29:18.803 INFO - Version Jetty/5.1.x
21:29:18.804 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
21:29:18.804 INFO - Started HttpContext[/selenium-server,/selenium-server]
21:29:18.804 INFO - Started HttpContext[/,/]
21:29:18.864 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@593aa24f
21:29:18.864 INFO - Started HttpContext[/wd,/wd]
21:29:18.866 INFO - Started SocketListener on 0.0.0.0:4445
21:29:18.867 INFO - Started org.openqa.jetty.jetty.Server@48ef85f3
21:29:18.867 INFO - using the json request : {"class":"org.openqa.grid.common.RegistrationRequest","capabilities":[{"platform":"MAC","seleniumProtocol":"WebDriver","browserName":"firefox","maxInstances":1},{"platform":"MAC","seleniumProtocol":"WebDriver","browserName":"chrome","maxInstances":1},{"platform":"MAC","seleniumProtocol":"WebDriver","browserName":"iphone","maxInstances":1},{"platform":"MAC","seleniumProtocol":"WebDriver","browserName":"ipad","maxInstances":1}],"configuration":{"nodeConfig":"node.mac.json","port":4445,"host":"192.168.50.1","hubHost":"127.0.0.1","registerCycle":2000,"trustAllSSLCertificates":"","hub":"http://127.0.0.1:4444/grid/register","url":"http://127.0.0.1:4445","remoteHost":"http://127.0.0.1:4445","register":true,"proxy":"org.openqa.grid.selenium.proxy.DefaultRemoteProxy","maxSession":1,"role":"node","hubPort":4444}}
21:29:18.868 INFO - Starting auto register thread. Will try to register every 2000 ms.
21:29:18.868 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:30:25.079 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:31:31.254 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:32:35.416 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:33:41.581 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:34:47.752 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:35:51.908 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:36:56.045 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
21:38:00.189 INFO - Registering the node to hub :http://127.0.0.1:4444/grid/register
Lastly, here's what I get in the Terminal on the Vagrant VM side.
Feb 03, 2014 5:28:53 AM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a selenium grid server
2014-02-03 05:28:54.780:INFO:osjs.Server:jetty-7.x.y-SNAPSHOT
2014-02-03 05:28:54.811:INFO:osjsh.ContextHandler:started o.s.j.s.ServletContextHandler{/,null}
2014-02-03 05:28:54.823:INFO:osjs.AbstractConnector:Started SocketConnector@0.0.0.0:3444
Feb 03, 2014 5:29:20 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:22 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:22 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy onEvent
WARNING: Marking the node as down. Cannot reach the node for 2 tries.
Feb 03, 2014 5:29:24 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:26 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:28 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:30 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 03, 2014 5:29:32 AM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Googling turns up nothing useful for this situation. Can anybody help me figure out why the Hub and the Node can't talk to each other?

I have a similar setup where my Selenium server (a.k.a. hub) is on a remote VM and a client (a.k.a. node) is on my local machine. I've been seeing the same error:
Feb 04, 2014 5:29:22 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Feb 04, 2014 5:29:22 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy onEvent
WARNING: Marking the node as down. Cannot reach the node for 2 tries.
I talked to our Ops team and they told me that my VM is on a different network in a different location. Even though the node machine can reach the hub, the hub can never reach the node, so they suggested getting another VM on the same network. It's like a one-way street.
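A quick way to confirm the one-way reachability is to test each direction by hand. A rough sketch, with <hub-host> and <node-ip> standing in for the real addresses and 4444/4445 being the hub and node ports from the question:

# from the node machine: can the node reach the hub?
curl -I http://<hub-host>:4444/grid/console

# from the hub machine: can the hub reach back to the node?
curl -I http://<node-ip>:4445/wd/hub/status

If the first works and the second is refused or times out, you are in exactly this one-way situation.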
Hope it helps.

I don't know too much about Selenium, but I suspect the issue is the use of 127.0.0.1. In particular, the VM has no way to connect back to the host at that address, and port 4445 isn't forwarded.
Since you already assign a private_network address (192.168.50.6), you could try using it directly, without any port forwarding.
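For example, the node could register against the private address and advertise an address the VM can actually reach back to. This is only a sketch; it assumes 192.168.50.1 is the Mac's address on the host-only network, which is what shows up as "host" in the registration request logged above:

java -jar selenium-server-standalone-*.jar \
  -role node \
  -trustAllSSLCertificates \
  -hub http://192.168.50.6:3444/grid/register \
  -host 192.168.50.1 \
  -port 4445

With the private network there is no port forwarding in play, so the hub is addressed on its real port 3444; the same values could equally be set via hubHost, hubPort, host and remoteHost in node.mac.json.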

The first answer was partially correct. You do have to ensure that the communication path from the node to the hub and from the hub back to the node is clear and can connect on the specific ports: technically you are running two servers, one on the node listening on one port and one on the hub listening on another.
Try this:
I had the same problem, but fixed it by adding the host field:
"host": [ip or hostname of node],
Here is my node config file (10.50.10.101 is the IP of my node and 10.50.10.100 is the IP of my grid hub):
{
  "capabilities": [
    {
      "platform": "MAC",
      "browserName": "firefox",
      "version": "28",
      "maxInstances": 1
    },
    {
      "platform": "MAC",
      "browserName": "chrome",
      "version": "34",
      "maxInstances": 1
    }
  ],
  "configuration": {
    "port": 5556,
    "hubPort": 5555,
    "host": "10.50.10.101",
    "hubHost": "10.50.10.100",
    "nodePolling": 2500,
    "registerCycle": 10500,
    "register": true,
    "cleanUpCycle": 2500,
    "maxSession": 5,
    "role": "node"
  }
}

Related

Cannot start Oracle NoSQL Database on localhost

Trying to install Oracle NoSQL 18.1.27 on Mac
Setup:
$ java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
$ echo $KVROOT
/Users/sn/Software/oraclenosql/kvroot
$ echo $KVHOME
/Users/sn/Software/oraclenosql/kv-18.1.27
Used this command to install:
java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host localhost -storagedir $KVHOME/kvdata/ -harange 5010,5030 -storagedirsize "1 gb" -store-security none
Test using jps:
$ jps -m
8866 Jps -m
8826 kvstore.jar start -root /Users/sn/Software/oraclenosql/kvroot
8831 ManagedService -root /Users/sn/Software/oraclenosql/kvroot -class Admin -service BootstrapAdmin.5000 -config config.xml
Trying to start the db
$ java -jar $KVHOME/lib/kvstore.jar ping -host localhost -port 5000
Could not connect to registry at localhost:5000 Unable to connect to the storage node agent at host localhost, port 5000, which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
Can't find store topology: Could not contact any RepNode at: [localhost:5000]
And when trying to ping:
SNA at hostname: localhost, registry port: 5000 is not registered.
No further information is available
Can't find store topology: Could not contact any RepNode at: [localhost:5000]
Logs show these:
adminboot.log
2020-03-26 20:05:07.344 UTC INFO [BootstrapAdmin] Starting in bootstrap mode
2020-03-26 20:05:07.348 UTC INFO [BootstrapAdmin] Starting commandService on rmi://localhost:5000/commandService
2020-03-26 20:05:07.448 UTC INFO [BootstrapAdmin] Successfully created a secure proxy for commandService
2020-03-26 20:05:07.531 UTC INFO [BootstrapAdmin] Starting admin:CLIENT_ADMIN on rmi://localhost:5000/admin:CLIENT_ADMIN
2020-03-26 20:05:07.640 UTC INFO [BootstrapAdmin] Successfully created a secure proxy for admin:CLIENT_ADMIN
2020-03-26 20:05:07.713 UTC INFO [BootstrapAdmin] Started AdminService
What am I missing?
You are not starting the NoSQL store you are trying to ping. To start it:
$ jps -m   (this will not show any service if it is not started)
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
Press Enter again to get out of the nohup.
$ jps -m   (run it again; it will now show the running process)
Note: if it is configured properly there will be no issue; otherwise it will throw errors, in which case follow the documentation and search for the specific error.
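Once the storage node agent is running, the ping from the question should be able to connect (same $KVROOT, $KVHOME and port 5000 as above):

# the agent must already be running (started with kvstore.jar start as above)
java -jar $KVHOME/lib/kvstore.jar ping -host localhost -port 5000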
Thanks,

glassfish-4 start failed after upgrade Mac OS Sierra

I have already upgraded to macOS Sierra, and my NetBeans 8.0.2 throws an error when I try to run GlassFish.
Please check server admin user name and password properties.
Also please check the server log file for other possible causes.
I tried all the possible solutions that I found on Stack Overflow, but nothing worked.
Glassfish 4 Admin not running from Netbeans 7.4 (Password Incorrect)
This is the log of Glassfish
objc[35340]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/bin/java (0x1000a54c0) and /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre/lib/libinstrument.dylib (0x1001b84e0). One of the two will be used. Which one is undefined.
Listening for transport dt_socket at address: 9009
Launching GlassFish on Felix platform
Nov 02, 2016 11:28:52 AM com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner createBundleProvisioner
INFO: Create bundle provisioner class = class com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner.
Nov 02, 2016 11:28:53 AM com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner$DefaultCustomizer getLocations
WARNING: Skipping entry because it is not an absolute URI.
Nov 02, 2016 11:28:53 AM com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner$DefaultCustomizer getLocations
WARNING: Skipping entry because it is not an absolute URI.
Registered com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime@c76ff05 in service registry.
Found populator: com.sun.enterprise.v3.server.GFDomainXml
#!## LogManagerService.postConstruct : rootFolder=/Applications/NetBeans/glassfish-4.1/glassfish
#!## LogManagerService.postConstruct : templateDir=/Applications/NetBeans/glassfish-4.1/glassfish/lib/templates
#!## LogManagerService.postConstruct : src=/Applications/NetBeans/glassfish-4.1/glassfish/lib/templates/logging.properties
#!## LogManagerService.postConstruct : dest=/Applications/NetBeans/glassfish-4.1/glassfish/domains/domain4/config/logging.properties
Info: Running GlassFish Version: GlassFish Server Open Source Edition 4.1 (build 13)
Info: Server log file is using Formatter class: com.sun.enterprise.server.logging.ODLLogFormatter
Info: Realm [admin-realm] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
Info: Realm [file] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
Info: Realm [certificate] of classtype [com.sun.enterprise.security.auth.realm.certificate.CertificateRealm] successfully created.
Info: Authorization Service has successfully initialized.
Info: Registered org.glassfish.ha.store.adapter.cache.ShoalBackingStoreProxy for persistence-type = replicated in BackingStoreFactoryRegistry
Info: Grizzly Framework 2.3.15 started in: 58ms - bound to [/0.0.0.0:9090]
Info: Grizzly Framework 2.3.15 started in: 12ms - bound to [/0.0.0.0:9191]
Info: Grizzly Framework 2.3.15 started in: 2ms - bound to [/0.0.0.0:4848]
Info: Grizzly Framework 2.3.15 started in: 1ms - bound to [/0.0.0.0:3700]
Info: GlassFish Server Open Source Edition 4.1 (13) startup time : Felix (37,175ms), startup services(1,405ms), total(38,580ms)
Info: Creating a SecureRMIServerSocketFactory @ 0.0.0.0 with ssl config = GlassFishConfigBean.org.glassfish.grizzly.config.dom.Ssl
Info: SSLParams =org.glassfish.admin.mbeanserver.ssl.SSLParams@5baca86
Warning: All SSL cipher suites disabled for network-listener(s). Using SSL implementation specific defaults
Info: SSLParams =org.glassfish.admin.mbeanserver.ssl.SSLParams@5baca86
Warning: All SSL cipher suites disabled for network-listener(s). Using SSL implementation specific defaults
Info: Grizzly Framework 2.3.15 started in: 11ms - bound to [/0.0.0.0:7676]
Info: Registered com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishImpl@3baf6936 as OSGi service registration: org.apache.felix.framework.ServiceRegistrationImpl@4acb2510.
Info: visiting unvisited references
Info: Created HTTP listener http-listener-1 on host/port 0.0.0.0:9090
Info: Created HTTP listener http-listener-2 on host/port 0.0.0.0:9191
Info: Created HTTP listener admin-listener on host/port 0.0.0.0:4848
Info: Created virtual server server
Info: Created virtual server __asadmin
Info: Setting JAAS app name glassfish-web
Info: Virtual server server loaded default web module
Info: Java security manager is disabled.
Info: Entering Security Startup Service.
Info: Loading policy provider com.sun.enterprise.security.provider.PolicyWrapper.
Info: Security Service(s) started successfully.
Info: visiting unvisited references
Info: visiting unvisited references
Info: visiting unvisited references
Info: Initializing Mojarra 2.2.7 ( 20140610-1547 https://svn.java.net/svn/mojarra~svn/tags/2.2.7#13362) for context ''
Info: HV000001: Hibernate Validator 5.0.0.Final
Info: SSLServerSocket /0.0.0.0:8686 and [SSL: ServerSocket[addr=/0.0.0.0,localport=8686]] created
Info: Loading application [__admingui] at [/]
Info: Loading application __admingui done in 15,743 ms
Info: JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://10.57.116.239:8686/jndi/rmi://10.57.116.239:8686/jmxrmi
I don't know what else to do.
Please help me with this problem.
You can set the system Java version, and this can be done with jenv; for reference see
http://boxingp.github.io/blog/2015/01/25/manage-multiple-versions-of-java-on-os-x/
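Roughly, that looks like the following. It is only a sketch; the exact version name to pass to jenv global comes from whatever jenv versions lists after the JDK has been added, and the JDK path is the one from the log above:

# register the JDK with jenv, then make it the default
jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
jenv versions
jenv global 1.8
java -version   # should now report the selected JDK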

Unable to access Cloudera Manager 5 web console after installation

I am setting up a Hadoop (2.6) cluster on CentOS 7 machines with three nodes, and the cluster is running fine now. However, I am not able to access the Cloudera Manager (5.6) web console after completing the CM installation, even though its services seem to be running.
Below are my findings; please help me figure out the possible reasons.
All processes are up and running!
[root@vm-txxxxxx1 ~]# jps
27978 ResourceManager
15368 Main
27052 Jps
27400 DataNode
27639 SecondaryNameNode
28106 NodeManager
27258 NameNode
Firewall stopped
[root@vm-txxxxx1 ~]# service iptables stop
Redirecting to /bin/systemctl stop iptables.service
[root@vm-txxxxxx1 ~]# service iptabes status
Redirecting to /bin/systemctl status iptabes.service
iptabes.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
Mar 24 19:24:05 vm-txxxxx1 systemd[1]: Stopped IPv4 firewall with iptables.
Listening on port 7180 and tested the same locally
[root@vm-txxxxxx1 ~]# netstat -tulpn | grep 7180
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 15368/java
[root@vm-txxxxx1 ~]# telnet localhost 7180
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SELINUX Disabled:
[root@vm-txxxxxx1 ~]# getenforce
Disabled
Hostfile entries
[root@vm-txxxxxx1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
172.16.xx.x1 vm-txxxxxx1
172.16.xx.x2 vm-xxxxxxx2
172.16.xx.x4 del1-vm-poc04
Verify if Cloudera Manager is running:
[root@vm-txxxxxx1 ~]# service cloudera-scm-server status
cloudera-scm-server.service - LSB: Cloudera SCM Server
Loaded: loaded (/etc/rc.d/init.d/cloudera-scm-server)
Active: active (exited) since Tue 2016-03-22 17:09:55 IST; 2 days ago
Process: 15344 ExecStart=/etc/rc.d/init.d/cloudera-scm-server start (code=exited, status=0/SUCCESS)
Mar 22 17:09:50 vm-txxxxxx1 systemd[1]: Starting LSB: Cloudera SCM Server...
Mar 22 17:09:50 vm-txxxxx1 su[15366]: (to cloudera-scm) root on none
Mar 22 17:09:55 vm-txxxxxx1 cloudera-scm-server[15344]: Starting cloudera-scm-server:...]
Mar 22 17:09:55 vm-txxxxxx1 systemd[1]: Started LSB: Cloudera SCM Server.
Hint: Some lines were ellipsized, use -l to show in full.
Below are lines from the Cloudera server logs:
[root@vm-txxxxx1 ~]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
2016-03-24 18:21:00,398 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Reaped total of 0 deleted commands
2016-03-24 18:21:00,400 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Found no commands older than 2014-03-25T12:51:00.399Z to reap.
2016-03-24 18:21:00,400 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Wizard is active, not reaping scanners or configurators
I am accessing the Cloudera Manager page at http://172.16.xx.1x:7180.
In the end it says "The connection has timed out", so it looks like my HTTP request never reaches the server, which is why nothing shows up in the logs. Please suggest if I am missing something.
Thanks in advance!
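(As a sanity check, the same tests can be repeated from the machine the browser runs on rather than from the CM host, since everything above was tested against localhost; 172.16.xx.1x is the masked address used above.)

# from the client machine, not from the CM host
telnet 172.16.xx.1x 7180
curl -v http://172.16.xx.1x:7180/

If these also time out, the request is being dropped somewhere in the network path (routing, an external firewall, or a security group) rather than by Cloudera Manager itself.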
@Havnar: Thanks for the suggestion. I can confirm SSL is not enabled, and here is the curl result.
[root@vm-txxxx1 ~]# curl -i -u 'admin:admin' http://localhost:7180/api/v1/tools/echo
HTTP/1.1 200 OK
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: CLOUDERA_MANAGER_SESSIONID=1etaj5o42vprlndf43ua7rbaf;Path=/;HttpOnly
Content-Type: application/json
Date: Fri, 25 Mar 2016 05:50:36 GMT
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{
"message" : "Hello, World!"
I stopped and restarted the Cloudera service and found nothing suspicious. There was one warning that looks a little suspicious, but searching Google for it turned up nothing relevant.
[root@vm-txxxxx1 ~]# vi /var/log/cloudera-scm-server/cloudera-scm-server.log
2016-03-24 20:22:29,002 WARN main:org.hibernate.cache.ehcache.AbstractEhcacheRegionFactory: HHH020003: Could not find a specific ehcache configuration for cache named [org.hibernate.cache.internal.StandardQueryCache]; using defaults.
2016-03-24 20:22:28,581 INFO main:org.hibernate.engine.jdbc.internal.LobCreatorBuilder: HHH000424: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
@Havnar: I didn't get what you meant by "try a cat on the machine running the CM"; let me know if anything else needs to be checked.
Thanks

One node in hadoop cluster failure

I recently configured a 10-node HDP Hadoop cluster; each node runs SLES 11.
On the master node I have configured all the master services and clients, as well as the Ambari server. The remaining nodes run the slave services and their clients.
NTP sync is on, and the other prerequisites are also fine.
I am experiencing weird behavior on the cluster: after starting all the services, one of the nodes goes down within a few hours.
When I experienced this the first time, I restarted that particular node and added it back to the cluster.
Now my master node is showing the same issue, which takes the whole cluster down. I have checked the logs, but there are no indications related to the failure.
I am clueless about the root cause of these node failures in the cluster.
Below are the logs.
From the system that went down:
/var/log/messages
Apr 23 05:22:43 lnx1863 SuSEfirewall2: SuSEfirewall2 not active
Apr 23 05:23:49 lnx1863 SuSEfirewall2: SuSEfirewall2 not active
Apr 23 05:24:17 lnx1863 sudo: root : TTY=pts/0 ; PWD=/ ; USER=root ; COMMAND=/usr/bin/du -h /
Apr 23 05:24:55 lnx1863 SuSEfirewall2: SuSEfirewall2 not active
Apr 23 05:25:22 lnx1863 kernel: [248531.127254] megasas: Found FW in FAULT state, will reset adapter.
Apr 23 05:25:22 lnx1863 kernel: [248531.127260] megaraid_sas: resetting fusion adapter.
Apr 23 05:25:22 lnx1863 kernel: [248531.127427] megaraid_sas: Reset not supported, killing adapter.
namenode logs:
INFO 2015-04-23 05:27:43,665 Heartbeat.py:78 - Building Heartbeat: {responseId = 7607, timestamp = 1429781263665, commandsInProgress = False, componentsMapped = True}
INFO 2015-04-23 05:28:44,053 security.py:135 - Encountered communication error. Details: SSLError('The read operation timed out',)
ERROR 2015-04-23 05:28:44,053 Controller.py:278 - Connection to http://localhost was lost (details=Request to https://localhost:8441/agent/v1/heartbeat/localhostip failed due to Error occured during connecting to the server: The read operation timed out)
INFO 2015-04-23 05:29:16,061 NetUtil.py:48 - Connecting to https://localhost:8440/connection_info
INFO 2015-04-23 05:29:16,118 security.py:93 - SSL Connect being called.. connecting to the server

Handshaking not happening between master and slave in jenkins

How do I solve this error? It started occurring after I changed the master's IP to a public one and assigned a DNS name.
Jul 27, 2012 12:44:17 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Hudson agent is running in headless mode.
Jul 27, 2012 12:44:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://10.10.1.162:8080/jenkins/, http://dem
Jul 27, 2012 12:44:38 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to demo.sigmainfo.in:8050
Jul 27, 2012 12:44:38 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jul 27, 2012 12:44:58 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection:
java.lang.Exception: The server rejected the connection:
at hudson.remoting.Engine.onConnectionRejected(Engine.java:258)
at hudson.remoting.Engine.run(Engine.java:233)
I have seen many threads about this but never found a proper answer.
I connected using a headless slave agent and put HOST:PORT in the advanced settings of the slave's configuration. The master is Linux and the slave is Windows 7.
From Comments:
=================================
Since you are having problems with the public IP and DNS, make sure that routing for the public IP and DNS is allowed on your network, just to be sure this is not a firewall issue. Are you on a corporate network? In that case, your corporate firewall may be blocking certain ports on all IP addresses.
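A quick way to check that from the slave side (a sketch; the host, the JNLP port 8050 and the Jenkins URL are the ones appearing in the log above, and any TCP test available on the Windows slave will do in place of telnet/curl):

# from the slave machine: is the TCP port used for JNLP agents reachable?
telnet demo.sigmainfo.in 8050

# is the Jenkins web UI itself reachable from the slave?
curl -I http://10.10.1.162:8080/jenkins/

If either of these is blocked, the agent cannot complete the handshake; if both succeed, the rejection is coming from Jenkins itself rather than from the network.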
