Hazelcast tries ports other than specified - amazon-ec2

I have two EC2 instances forming a Hazelcast cluster.
The Hazelcast I use comes from the vertx-hazelcast:3.9.1 package, which runs Hazelcast 3.12.2.
I also use the hazelcast-aws:2.4 plugin.
My cluster.xml is:
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright 2017 Red Hat, Inc.
~
~ Red Hat licenses this file to you under the Apache License, version 2.0
~ (the "License"); you may not use this file except in compliance with the
~ License. You may obtain a copy of the License at:
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
~ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
~ License for the specific language governing permissions and limitations
~ under the License.
-->
<hazelcast
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-3.12.xsd">
<network>
<port port-count="1" auto-increment="false">5701</port>
<public-address>x.x.x.x</public-address>
<join>
<multicast enabled="false"/>
<aws enabled="true">
<security-group-name>security-group-name</security-group-name>
</aws>
</join>
</network>
</hazelcast>
Both instances have the same cluster.xml, but with different entries in <public-address></public-address>.
What happens on cluster startup, and what I'd like to avoid, is that Hazelcast tries to connect to instances in the same security group using ports 5701-5708, even though I thought I had configured just one port.
This fills the log with unnecessary output, which looks like this:
2021-04-27 10:51:28,671 INFO com.hazelcast.nio.tcp.TcpIpConnector:65 - [x.x.x.x]:5701 [dev] [3.12.2] Connecting to /x.x.x.x:5703, timeout: 10000, bind-any: true
2021-04-27 10:51:28,682 INFO com.hazelcast.nio.tcp.TcpIpConnector:65 - [x.x.x.x]:5701 [dev] [3.12.2] Could not connect to: /x.x.x.x:5703. Reason: SocketException[Connection refused to address /x.x.x.x:5704]
2021-04-27 10:51:28,717 INFO com.hazelcast.internal.cluster.impl.DiscoveryJoiner:65 - [x.x.x.x]:5701 [dev] [3.12.2] [x.x.x.x]:5703 is added to the blacklist.
...
It writes the same output for every port in that range.
I seem to have done as suggested here.
How do I stop it trying to use ports other than 5701?

If you want to use only a specific port, set auto-increment to false in the "port" tag. When auto-increment is false, the port-count attribute is ignored, so remove it:
<port auto-increment="false">5701</port>
Also add the following line just below the previous one:
<reuse-address>true</reuse-address>
When you shut down a cluster member, its server socket port stays in the TIME_WAIT state for the next couple of minutes. If you start the member right after shutting it down, you may not be able to bind to the same port because it is still in TIME_WAIT. If you set reuse-address to true, the TIME_WAIT state is ignored and you can bind the member to the same port again. The default value is false; setting it to true lets Hazelcast reuse the same port when you restart a member right after shutting it down.
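Putting both changes together, the network section of cluster.xml would look something like this (a sketch based on the question's config; the placeholders are kept as-is):

```xml
<network>
    <port auto-increment="false">5701</port>
    <reuse-address>true</reuse-address>
    <public-address>x.x.x.x</public-address>
    <join>
        <multicast enabled="false"/>
        <aws enabled="true">
            <security-group-name>security-group-name</security-group-name>
        </aws>
    </join>
</network>
```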

Related

High CPU usage on idle AMQ Artemis cluster, related to locks with shared-store HA

I have an AMQ Artemis cluster with shared-store HA (master-slave), version 2.17.0.
I noticed that all my clusters (active servers only) that are idle (no one is using them) use 10% to 20% CPU, except one, which uses around 1% (totally normal). I started investigating...
Long story short: only one cluster has completely normal CPU usage. The only difference I've managed to find is that if I connect to that normal cluster's master node and run telnet slave 61616, it shows as connected. If I do the same in any other cluster (the ones with high CPU usage), it shows as rejected.
To better understand what is happening, I enabled DEBUG logs in instance/etc/logging.properties. Here is what the master node is spamming:
2021-05-07 13:54:31,857 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Backup is not active, trying original connection configuration now.
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Trying reconnection attempt 0/1
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@6cf71172, connectorConfig=TransportConfiguration(name=slave-connector, factory=org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory) ?trustStorePassword=****&port=61616&keyStorePassword=****&sslEnabled=true&host=slave.com&trustStorePath=/path/to/ssl/truststore.jks&keyStorePath=/path/to/ssl/keystore.jks
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Connector NettyConnector [host=slave.com, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=true, useNio=true] using native epoll
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client] AMQ211002: Started EPOLL Netty Connector version 4.1.51.Final to slave.com:61616
2021-05-07 13:54:32,358 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Remote destination: slave.com/123.123.123.123:61616
2021-05-07 13:54:32,358 DEBUG [org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory] Creating SSL context with configuration
trustStorePassword=****
port=61616
keyStorePassword=****
sslEnabled=true
host=slave.com
trustStorePath=/path/to/ssl/truststore.jks
keyStorePath=/path/to/ssl/keystore.jks
2021-05-07 13:54:32,448 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Added ActiveMQClientChannelHandler to Channel with id = 77c078c2
2021-05-07 13:54:32,448 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Connector towards NettyConnector [host=slave.com, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=true, useNio=true] failed
This is what the slave is spamming:
2021-05-07 14:06:53,177 DEBUG [org.apache.activemq.artemis.core.server.impl.FileLockNodeManager] trying to lock position: 1
2021-05-07 14:06:53,178 DEBUG [org.apache.activemq.artemis.core.server.impl.FileLockNodeManager] failed to lock position: 1
If I attempt to telnet from the master node to the slave node (same result if I do it from slave to slave):
[root@master]# telnet slave.com 61616
Trying 123.123.123.123...
telnet: connect to address 123.123.123.123: Connection refused
However, if I attempt the same telnet in the only working cluster, I can successfully "connect" from master to slave...
Here is what I suspect:
The master acquires the lock in instance/data/journal/server.lock.
The master keeps trying to connect to the slave server.
The slave is unable to start because it cannot acquire the same server.lock on the shared storage.
The master burns CPU because it keeps retrying the connection to the slave, which is not running.
What am I doing wrong?
EDIT: This is what my NFS mounts look like (taken from the mount command):
some_server:/some_dir on /path/to/artemis/instance/data type nfs4 (rw,relatime,sync,vers=4.1,rsize=65536,wsize=65536,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,soft,noac,proto=tcp,timeo=50,retrans=1,sec=sys,clientaddr=123.123.123.123,local_lock=none,addr=123.123.123.123)
It turns out the issue was in the broker.xml configuration. In static-connectors I had somehow listed only the "other" server (e.g. with srv0 and srv1, srv0's config listed only srv1's connector and vice versa).
What it used to be (on 1st master node):
<cluster-connections>
<cluster-connection name="abc">
<connector-ref>srv0-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>srv1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
How it is now (on 1st master node):
<cluster-connections>
<cluster-connection name="abc">
<connector-ref>srv0-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>srv0-connector</connector-ref>
<connector-ref>srv1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
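For symmetry, the second master (srv1) would carry the mirrored configuration, assuming the same connector names as above:

```xml
<cluster-connections>
    <cluster-connection name="abc">
        <connector-ref>srv1-connector</connector-ref>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
            <connector-ref>srv0-connector</connector-ref>
            <connector-ref>srv1-connector</connector-ref>
        </static-connectors>
    </cluster-connection>
</cluster-connections>
```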
After listing all of the cluster's nodes, the CPU usage normalized and is now only ~1% on the active node. The issue was not related to AMQ Artemis connection spamming or file locks after all.

YARN complains java.net.NoRouteToHostException: No route to host (Host unreachable)

Attempting to run h2o on an HDP 3.1 cluster and running into an error that appears to be about YARN resource capacity...
[ml1user@HW04 h2o-3.26.0.1-hdp3.1]$ hadoop jar h2odriver.jar -nodes 3 -mapperXmx 10g
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.122.1]
[Possible callback IP address: 172.18.4.49]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.18.4.49:46015
(You can override these with -driverif and -driverport/-driverportrange and/or specify external IP using -extdriverif.)
Memory Settings:
mapreduce.map.java.opts: -Xms10g -Xmx10g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 11264
Hive driver not present, not generating token.
19/07/25 14:48:05 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:48:06 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
19/07/25 14:48:07 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/ml1user/.staging/job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: number of splits:3
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Executing with tokens: []
19/07/25 14:48:08 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
19/07/25 14:48:08 INFO impl.YarnClientImpl: Submitted application application_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.Job: The url to track the job: http://HW01.ucera.local:8088/proxy/application_1564020515809_0006/
Job name 'H2O_47159' submitted
JobTracker job ID is 'job_1564020515809_0006'
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Waiting for H2O cluster to come up...
ERROR: Timed out waiting for H2O cluster to come up (120 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
Attempting to clean up hadoop job...
19/07/25 14:50:19 INFO impl.YarnClientImpl: Killed application application_1564020515809_0006
Killed.
19/07/25 14:50:23 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:50:23 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://HW03.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW04.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW02.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
----- Queues -----
Queue name: default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 1.00
Maximum capacity: 1.00
Application count: 0
Queue 'default' approximate utilization: 0.0 / 45.0 GB used, 0 / 9 vcores used
----------------------------------------------------------------------
ERROR: Unable to start any H2O nodes; please contact your YARN administrator.
A common cause for this is the requested container size (11.0 GB)
exceeds the following YARN settings:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Looking at the YARN configs in the Ambari UI, these properties are nowhere to be found. But checking the YARN logs in the YARN ResourceManager UI, in the logs for the killed application I see what appear to be unreachable-host errors...
Container: container_e05_1564020515809_0006_02_000002 on HW03.ucera.local_45454_1564102219781
LogAggregationType: AGGREGATED
=============================================================================================
LogType:stderr
LogLastModifiedTime:Thu Jul 25 14:50:19 -1000 2019
LogLength:2203
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/11/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/ml1user/appcache/application_1564020515809_0006/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.YarnChild).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.net.NoRouteToHostException: No route to host (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
....
at java.net.Socket.<init>(Socket.java:211)
at water.hadoop.EmbeddedH2OConfig$BackgroundWriterThread.run(EmbeddedH2OConfig.java:38)
End of LogType:stderr
***********************************************************************
Taking note of "java.net.NoRouteToHostException: No route to host (Host unreachable)". However, I can access all the other nodes from each other and they can all ping each other, so I'm not sure what is going on here. Any suggestions for debugging or fixing this?
I think I found the problem. TL;DR: firewalld (the nodes run CentOS 7) was still running, when it should be disabled on HDP clusters.
From another community post:
For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:
systemctl disable firewalld
service firewalld stop
So apparently iptables and firewalld need to be disabled across the cluster (supporting docs can be found here; I had only disabled them on the Ambari installation node). After stopping these services across the cluster (I recommend using clush), I was able to run the YARN job without incident.
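A minimal sketch of fanning the same commands out to every node with clush (the node names are hypothetical; shown as a dry run that only prints the command):

```shell
# Hypothetical node list; replace with your cluster's hostnames.
NODES="hw01,hw02,hw03"

# Build the command clush would run on every node.
CMD="clush -w $NODES 'systemctl stop firewalld && systemctl disable firewalld'"

# Dry run: print it first; execute it for real with: eval "$CMD"
echo "$CMD"
```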
Normally, this problem is either due to bad DNS configuration, firewalls, or network unreachability. To quote this official doc:
The hostname of the remote machine is wrong in the configuration files
The client's host table /etc/hosts has an invalid IPAddress for the target host.
The DNS server's host table has an invalid IPAddress for the target host.
The client's routing tables (In Linux, iptables) are wrong.
The DHCP server is publishing bad routing information.
Client and server are on different subnets, and are not set up to talk to each other. This may be an accident, or it is to deliberately lock down the Hadoop cluster.
The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6.
The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs
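For the JVM DNS-caching item in the last bullet, the cache lifetimes can also be tuned instead of restarting the JVM. A sketch of the relevant entries in the JRE's java.security file (the values shown are suggestions, not defaults):

```properties
# $JAVA_HOME/lib/security/java.security
# Cache successful DNS lookups for 30 seconds instead of indefinitely
networkaddress.cache.ttl=30
# Do not cache failed (negative) lookups at all
networkaddress.cache.negative.ttl=0
```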
For me, the problem was that the driver was inside a Docker container, which made it impossible for the workers to send data back to it. In other words, the workers and the driver were not on the same subnet. The solution, as given in this answer, was to set the following configurations:
spark.driver.host=<container's host IP accessible by the workers>
spark.driver.bindAddress=0.0.0.0
spark.driver.port=<forwarded port 1>
spark.driver.blockManager.port=<forwarded port 2>

Failed to resolve interface public in jboss in EC2

I am working with JBoss AS 7.1.1.Final on an Amazon EC2 server (Red Hat). I changed my IP from 127.0.0.1 to 52.32.0.197 (the public EC2 server IP), and whenever I run JBoss it throws:
Services which failed to start:service jboss.network.public:org.jboss.msc.service.StartException in service jboss.network.public: JBAS015810: failed to resolve interface public
After googling, I changed the entries in "/etc/hosts", which currently looks like:
52.32.0.197 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Then I found this link and changed my "/etc/sysconfig/network-scripts/ifcfg-lo" to:
DEVICE=lo
IPADDR=52.32.0.197
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback-1
but I am still getting the same error. Please help me resolve this.
My standalone.xml contains
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address:52.32.0.197}"/>
</interface>
<!-- TODO - only show this if the jacorb subsystem is added -->
<interface name="unsecure">
<!--
~ Used for IIOP sockets in the standard configuration.
~ To secure JacORB you need to setup SSL
-->
<inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>
</interface>
</interfaces>
I am not familiar with JBoss, but this is clearly a bad IP binding problem.
First, you must have a valid IP address; I am surprised you didn't mention an error thrown by the OS. Your public IP address cannot sit on the wrong network:
DEVICE=lo
IPADDR=52.32.0.197
NETMASK=255.255.255.0
NETWORK=52.32.0.0
Then comes the binding. As pointed out in the link JBAS015810: failed to resolve interface public:
This kind of error could occur if you happen to have specified bind
address for JAVA_OPTS in your configs in standalone.conf
-Djboss.bind.address=192.168.xxx.xxx -Djboss.bind.address.management=192.168.xxx.xxx -Djboss.bind.address.unsecure=192.168.xxx.xxx
Open standalone.conf and change the IP addresses you see there (they should be 127.0.0.1) to 52.32.0.197, then restart.
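Concretely, the JAVA_OPTS lines in bin/standalone.conf would end up looking something like this (a sketch using the question's IP; adjust the addresses to your own host):

```shell
# bin/standalone.conf -- append the bind addresses (IP taken from the question).
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=52.32.0.197"
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.management=52.32.0.197"
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.unsecure=52.32.0.197"
echo "$JAVA_OPTS"
```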
I had the same issue and resolved it by updating my firewall settings, which had been blocking public IP access to the application.

Unable to open sonar in browser

I installed SonarQube and started the Sonar service, but Sonar does not open in the browser on port 9000. I haven't made any changes to the sonar.properties file (all of its contents are commented out), yet the log shows the web server started and the HTTP connector enabled on port 9000. How is that possible when everything is commented out?
port status
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN -
Sonar logs
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2015.09.22 10:09:31 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[search]: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/usr/local/sonarqube-5.1.2/temp -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /tmp/sq-process7377450394324020959properties
2015.09.22 10:09:32 INFO es[o.s.p.ProcessEntryPoint] Starting search
2015.09.22 10:09:32 INFO es[o.s.s.SearchServer] Starting Elasticsearch[sonarqube] on port 9001
2015.09.22 10:09:32 INFO es[o.elasticsearch.node] [sonar-1442930970839] version[1.4.4], pid[27953], build[c88f77f/2015-02-19T13:05:36Z]
2015.09.22 10:09:32 INFO es[o.elasticsearch.node] [sonar-1442930970839] initializing ...
2015.09.22 10:09:32 INFO es[o.e.plugins] [sonar-1442930970839] loaded [], sites []
2015.09.22 10:09:35 INFO es[o.elasticsearch.node] [sonar-1442930970839] initialized
2015.09.22 10:09:35 INFO es[o.elasticsearch.node] [sonar-1442930970839] starting ...
2015.09.22 10:09:35 INFO es[o.e.transport] [sonar-1442930970839] bound_address {inet[/0.0.0.0:9001]}, publish_address {inet[/10.246.236.55:9001]}
2015.09.22 10:09:35 INFO es[o.e.discovery] [sonar-1442930970839] sonarqube/xTJRTzNESlunLbRSr4pkYA
2015.09.22 10:09:38 INFO es[o.e.cluster.service] [sonar-1442930970839] new_master [sonar-1442930970839][xTJRTzNESlunLbRSr4pkYA][usboss-sdijenkins.aaitg.com][inet[/10.246.236.55:9001]]{rack_id=sonar-1442930970839}, reason: zen-disco-join (elected_as_master)
2015.09.22 10:09:38 INFO es[o.elasticsearch.node] [sonar-1442930970839] started
2015.09.22 10:09:40 INFO es[o.e.gateway] [sonar-1442930970839] recovered [6] indices into cluster_state
2015.09.22 10:09:41 INFO app[o.s.p.m.Monitor] Process[search] is up
2015.09.22 10:09:41 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/usr/local/sonarqube-5.1.2/temp -cp ./lib/common/*:./lib/server/*:/usr/local/sonarqube-5.1.2/lib/jdbc/h2/h2-1.3.176.jar org.sonar.server.app.WebServer /tmp/sq-process4883903207582149281properties
2015.09.22 10:09:42 INFO web[o.s.p.ProcessEntryPoint] Starting web
2015.09.22 10:09:42 INFO web[o.s.s.app.Webapp] Webapp directory: /usr/local/sonarqube-5.1.2/web
2015.09.22 10:09:43 INFO web[o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2015.09.22 10:09:43 INFO web[o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2015.09.22 10:09:44 INFO web[o.e.plugins] [sonar-1442930970839] loaded [], sites []
2015.09.22 10:09:45 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 5.1.2 / 2a52a7106b2bfbd659c591c2d6fc09ad0ab2db5c
2015.09.22 10:09:46 INFO web[o.s.s.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://localhost:9092/sonar
2015.09.22 10:09:46 INFO web[o.s.s.d.EmbeddedDatabase] Embedded database started. Data stored in: /usr/local/sonarqube-5.1.2/data
2015.09.22 10:09:46 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:h2:tcp://localhost:9092/sonar
2015.09.22 10:09:47 WARN web[o.s.s.d.DatabaseChecker] H2 database should be used for evaluation purpose only
2015.09.22 10:09:49 INFO web[o.s.s.p.DefaultServerFileSystem] SonarQube home: /usr/local/sonarqube-5.1.2
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Install plugins
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Git / 1.0 / 9ce9d330c313c296fab051317cc5ad4b26319e07
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin SVN / 1.0 / 213fc8a8b582ff530b12dd4a59a6512be1071234
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Core / 5.1.2 / 2a52a7106b2bfbd659c591c2d6fc09ad0ab2db5c
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Java / 3.0 / 65396a609ddface8b311a6a665aca92a7da694f1
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin English Pack / 5.1.2 / 2a52a7106b2bfbd659c591c2d6fc09ad0ab2db5c
2015.09.22 10:09:49 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Email notifications / 5.1.2 / 2a52a7106b2bfbd659c591c2d6fc09ad0ab2db5c
2015.09.22 10:09:49 INFO web[o.s.s.p.RailsAppsDeployer] Deploy Ruby on Rails applications
2015.09.22 10:09:49 INFO web[o.s.j.s.AbstractDatabaseConnector] Initializing Hibernate
2015.09.22 10:09:50 INFO web[o.s.s.p.UpdateCenterClient] Update center: http://update.sonarsource.org/update-center.properties (no proxy)
2015.09.22 10:09:51 INFO web[o.s.s.n.NotificationService] Notification service started (delay 60 sec.)
2015.09.22 10:09:52 INFO web[o.s.s.s.IndexSynchronizer] Index rules
2015.09.22 10:09:52 INFO web[o.s.s.s.IndexSynchronizer] Index activeRules
2015.09.22 10:09:52 INFO web[o.s.s.s.RegisterMetrics] Register metrics
2015.09.22 10:09:53 INFO web[o.s.s.s.RegisterMetrics] Cleaning quality gate conditions
2015.09.22 10:09:53 INFO web[o.s.s.s.RegisterDebtModel] Register technical debt model
2015.09.22 10:09:53 INFO web[o.s.s.r.RegisterRules] Register rules
2015.09.22 10:09:54 INFO web[o.s.s.q.RegisterQualityProfiles] Register quality profiles
2015.09.22 10:09:55 INFO web[o.s.s.s.RegisterNewMeasureFilters] Register measure filters
2015.09.22 10:09:55 INFO web[o.s.s.s.RegisterDashboards] Register dashboards
2015.09.22 10:09:55 INFO web[o.s.s.s.RegisterPermissionTemplates] Register permission templates
2015.09.22 10:09:55 INFO web[o.s.s.s.RenameDeprecatedPropertyKeys] Rename deprecated property keys
2015.09.22 10:09:55 INFO web[o.s.s.s.IndexSynchronizer] Index activities
2015.09.22 10:09:55 INFO web[o.s.s.s.IndexSynchronizer] Index issues
2015.09.22 10:09:55 INFO web[o.s.s.s.IndexSynchronizer] Index source lines
2015.09.22 10:09:55 INFO web[o.s.s.s.IndexSynchronizer] Index users
2015.09.22 10:09:55 INFO web[o.s.s.s.IndexSynchronizer] Index views
2015.09.22 10:09:55 INFO web[jruby.rack] jruby 1.7.9 (ruby-1.8.7p370) 2013-12-06 87b108a on OpenJDK 64-Bit Server VM 1.7.0_85-mockbuild_2015_07_25_13_10-b00 [linux-amd64]
2015.09.22 10:09:55 INFO web[jruby.rack] using a shared (threadsafe!) runtime
2015.09.22 10:10:26 INFO web[jruby.rack] keeping custom (config.logger) Rails logger instance
2015.09.22 10:10:26 INFO web[o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2015.09.22 10:10:26 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
2015.09.22 10:10:26 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2015.09.22 10:10:27 INFO app[o.s.p.m.Monitor] Process[web] is up
sonar.properties file
# See http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Properties.html#load(java.io.InputStream)
#
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
#sonar.jdbc.username=sonar
#sonar.jdbc.password=sonar
#----- Embedded Database (default)
# It does not accept connections from remote hosts, so the
# server and the analyzers must be executed on the same host.
#sonar.jdbc.url=jdbc:h2:tcp://localhost:9092/sonar
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.x
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported.
#sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
#----- Oracle 10g/11g
# - Only thin client is supported
# - Only versions 11.2.* of Oracle JDBC driver are supported, even if connecting to lower Oracle versions.
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:#localhost/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2008/2012
# Only the bundled jTDS driver is supported.
# Collation must be case-sensitive (CS) and accent-sensitive (AS).
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
#sonar.jdbc.maxActive=50
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 768Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment: http://docs.oracle.com/javase/7/docs/technotes/guides/vm/server-class.html
#
# Set min and max memory (respectively -Xms and -Xmx) to the same value to prevent heap
# from resizing at runtime.
#
#sonar.web.javaOpts=-Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
#sonar.web.host=localhost
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Disabled when value is -1.
#sonar.web.port=9000
# Recommendation for HTTPS
# SonarQube natively supports HTTPS. However using a reverse proxy
# infrastructure is the recommended way to set up your SonarQube installation
# on production environments which need to be highly secured.
# This allows to fully master all the security parameters that you want.
# TCP port for incoming HTTPS connections. Disabled when value is -1 (default).
#sonar.web.https.port=-1
# HTTPS - the alias used to for the server certificate in the keystore.
# If not specified the first key read in the keystore is used.
#sonar.web.https.keyAlias=
# HTTPS - the password used to access the server certificate from the
# specified keystore file. The default value is "changeit".
#sonar.web.https.keyPass=changeit
# HTTPS - the pathname of the keystore file where is stored the server certificate.
# By default, the pathname is the file ".keystore" in the user home.
# If keystoreType doesn't need a file use empty value.
#sonar.web.https.keystoreFile=
# HTTPS - the password used to access the specified keystore file. The default
# value is the value of sonar.web.https.keyPass.
#sonar.web.https.keystorePass=
# HTTPS - the type of keystore file to be used for the server certificate.
# The default value is JKS (Java KeyStore).
#sonar.web.https.keystoreType=JKS
# HTTPS - the name of the keystore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the keystore type is used (see sonar.web.https.keystoreType).
#sonar.web.https.keystoreProvider=
# HTTPS - the pathname of the truststore file which contains trusted certificate authorities.
# By default, this would be the cacerts file in your JRE.
# If truststoreFile doesn't need a file use empty value.
#sonar.web.https.truststoreFile=
# HTTPS - the password used to access the specified truststore file.
#sonar.web.https.truststorePass=
# HTTPS - the type of truststore file to be used.
# The default value is JKS (Java KeyStore).
#sonar.web.https.truststoreType=JKS
# HTTPS - the name of the truststore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the truststore type is used (see sonar.web.https.truststoreType).
#sonar.web.https.truststoreProvider=
# HTTPS - whether to enable client certificate authentication.
# The default is false (client certificates disabled).
# Other possible values are 'want' (certificates will be requested, but not required),
# and 'true' (certificates are required).
#sonar.web.https.clientAuth=false
# HTTPS - comma separated list of encryption ciphers to support for HTTPS connections.
# If specified, only the ciphers that are listed and supported by the SSL implementation will be used.
# By default, the default ciphers for the JVM will be used. Note that this usually means that the weak
# export grade ciphers, for instance RC4, will be included in the list of available ciphers.
# The ciphers are specified using the JSSE cipher naming convention (see
# https://www.openssl.org/docs/apps/ciphers.html)
# Example: sonar.web.https.ciphers=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
#sonar.web.https.ciphers=
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50 for each
# enabled connector.
#sonar.web.http.maxThreads=50
#sonar.web.https.maxThreads=50
# The minimum number of threads always kept running. The default value is 5 for each
# enabled connector.
#sonar.web.http.minThreads=5
#sonar.web.https.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25 for each enabled connector.
#sonar.web.http.acceptCount=25
#sonar.web.https.acceptCount=25
# TCP port for incoming AJP connections. Disabled if value is -1. Disabled by default.
#sonar.ajp.port=-1
#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process.
# JVM options of Elasticsearch process
# Recommendations:
#
# Use HotSpot Server VM. The property -server should be added if server mode
# is not enabled by default on your environment: http://docs.oracle.com/javase/7/docs/technotes/guides/vm/server-class.html
#
# Set min and max memory (respectively -Xms and -Xmx) to the same value to prevent heap
# from resizing at runtime.
#
#sonar.search.javaOpts=-Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# This port must be private and must not be exposed to the Internet.
#sonar.search.port=9001
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request http://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# proxy authentication. The 2 following properties are used for HTTP and SOCKS proxies.
#http.proxyUser=
#http.proxyPassword=
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of logs. Supported values are INFO, DEBUG and TRACE
#sonar.log.level=INFO
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
# Rolling policy of log files
# - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
# or by month ("time:yyyy-MM")
# - based on size if value starts with "size:", for example "size:10MB"
# - disabled if value is "none". That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd
# Maximum number of files to keep if a rolling policy is enabled.
# - maximum value is 20 on size rolling policy
# - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7
# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as for
# sonar.log (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true
# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Value is:
# - "common" is the Common Log Format (shortcut for: %h %l %u %user %date "%r" %s %b)
# - "combined" is another format widely recognized (shortcut for: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}")
# - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout
#sonar.web.accessLogs.pattern=combined
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Uncomment to enable the Elasticsearch HTTP connector, so that ES can be directly requested through
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=9010
The default port is 9000.
So if you want any port other than 9000, uncomment the line below and specify the exact port you want. You can also uncomment it and leave the port as 9000; that does no harm.
sonar.web.port=9000
Can you please let us know the URL you are typing in the browser? It should be something like
http://yourhost:9000/
For more information, see http://docs.sonarqube.org/display/SONAR/Installing
Please check whether port 9000 is open.
You can check it on a Linux machine with this command:
netstat -tulpn | grep LISTEN
Port 9000 should appear in the LISTEN list, as in the image below.
Once the port is open, restart the SonarQube server and try opening it in the browser again.
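If you'd rather check from a script than read netstat output, here is a minimal sketch in Python (stdlib only; the host and port values are assumptions — adjust them to your setup):

```python
import socket

def port_is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. some process is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Example: probe SonarQube's default web port on the local machine.
print(port_is_listening("127.0.0.1", 9000))
```

This only tells you that something accepted the connection on that port; it does not confirm it is SonarQube itself.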

Datastax Opscenter - Agent not connecting

I set up Cassandra, OpsCenter, and the required DataStax agent on my Amazon EC2 machine. At the moment it's only one machine.
Everything seems to be running fine, except that the node list is empty and so are the keyspaces in OpsCenter. The Cassandra, DataStax agent, and OpsCenter logs show no errors, and I followed the installation / configuration carefully, then tried all the suggested fixes.
My guess is that the problem lies in the communication between the agent and OpsCenter.
After a while these requests fail:
etc/cassandra/cassandra.yaml: (simplified)
cluster_name: 'CassandraCluster'
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "1.2.3.4"
listen_address: 1.2.3.4
rpc_address: 0.0.0.0
endpoint_snitch: Ec2Snitch
etc/opscenter/opscenterd.conf: (simplified)
[webserver]
port = 81
interface = 0.0.0.0
[authentication]
enabled = False
[stat_reporter]
[agents]
use_ssl = false
var/lib/datastax-agent/conf/address.yaml: (simplified)
stomp_interface: 1.2.3.4
local_interface: 1.2.3.4
use_ssl: 0
nodetool status output:
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: eu-west_1_cassandra
===============================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns    Host ID                               Rack
UN  1.2.3.4  2.06 MB  256     100.0%  8a121c12-7cbf-4a2a-b111-4ad111c111d8  1a
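As a quick sanity check, the nodetool status output can also be parsed programmatically; a minimal Python sketch (the status-code set below is derived from nodetool's Up/Down and Normal/Leaving/Joining/Moving legend, and the column indices assume the load is printed as "value unit", e.g. "2.06 MB"):

```python
def parse_nodetool_status(output):
    """Extract status, address and ownership from `nodetool status` output."""
    # Status codes combine U/D (Up/Down) with N/L/J/M (Normal/Leaving/Joining/Moving).
    codes = {a + b for a in "UD" for b in "NLJM"}
    nodes = []
    for line in output.splitlines():
        parts = line.split()
        if parts and parts[0] in codes:
            # parts: status, address, load-value, load-unit, tokens, owns, host-id, rack
            nodes.append({"status": parts[0], "address": parts[1], "owns": parts[5]})
    return nodes

sample = "UN  1.2.3.4  2.06 MB  256  100.0%  8a121c12-7cbf-4a2a-b111-4ad111c111d8  1a"
print(parse_nodetool_status(sample))
```

A healthy single-node cluster should report exactly one UN entry owning 100.0% of the data, as in the output above.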
Nothing really strange shows up in the logs except for the repeated occurrence of the following line in agent.log:
INFO [install-location-finder] 2015-03-11 15:26:04,690 New JMX connection (127.0.0.1:7199)
INFO [install-location-finder] 2015-03-11 15:27:04,698 New JMX connection (127.0.0.1:7199)
INFO [install-location-finder] 2015-03-11 15:28:04,709 New JMX connection (127.0.0.1:7199)
INFO [install-location-finder] 2015-03-11 15:29:04,716 New JMX connection (127.0.0.1:7199)
INFO [install-location-finder] 2015-03-11 15:30:04,724 New JMX connection (127.0.0.1:7199)
INFO [install-location-finder] 2015-03-11 15:31:04,731 New JMX connection (127.0.0.1:7199)
To supply all the info, here are the logs:
opscenterd.log
agent.log
cassandra/system.log
In certain environments the persistent connection between the browser and opscenterd may fail. We're working on implementing a more robust connection that will work in all environments, but in the meantime you can use the following workaround:
http://www.datastax.com/documentation/opscenter/5.1/opsc/troubleshooting/opscTroubleshootingZeroNodes.html
The minimal configuration I found to work was setting the options below in address.yaml:
stomp_interface: [opscenter-ip]
stomp_port: 61620
use_ssl: 0
cassandra_conf: /etc/cassandra/cassandra.yaml
jmx_host: [cassandra-node-ip]
jmx_port: 7199
Also make sure you have sysstat installed.
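To catch a missing setting before restarting the agent, you can sanity-check address.yaml with a small script; a stdlib-only sketch (the required-key list simply mirrors the options above — it is an assumption for illustration, not an official schema):

```python
REQUIRED_KEYS = ("stomp_interface", "jmx_host", "jmx_port")

def missing_agent_keys(config_text):
    """Return the required top-level keys absent from an address.yaml body."""
    present = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Skip blank lines and comments; take the key left of the first colon.
        if line and not line.startswith("#") and ":" in line:
            present.add(line.split(":", 1)[0].strip())
    return [key for key in REQUIRED_KEYS if key not in present]

sample = """\
stomp_interface: 1.2.3.4
use_ssl: 0
"""
print(missing_agent_keys(sample))  # jmx_host and jmx_port are missing here
```

If the list comes back non-empty, add the missing options to address.yaml and restart the agent.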

Resources