Why won't the Kibana Node server start up?

I just upgraded to Elasticsearch and Kibana 6.0.1 from 5.6.4 and I'm having trouble getting the Kibana server running. The service appears to be running, but nothing is binding to the port and I don't see any errors in the logs.
Verifying the version I have running:
root@my-server:/var/log# /usr/share/kibana/bin/kibana --version
6.0.1
Checking the service status:
root@my-server:/var/log# sudo service kibana start
root@my-server:/var/log# sudo service kibana status
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-12-08 21:17:53 UTC; 1s ago
 Main PID: 17766 (node)
    Tasks: 6 (limit: 4915)
   Memory: 86.6M
      CPU: 1.981s
   CGroup: /system.slice/kibana.service
           └─17766 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
The contents of my /etc/kibana/kibana.yml config file:
elasticsearch.password: mypassword
elasticsearch.url: http://my-server:9200
elasticsearch.username: elastic
logging.dest: /var/log/kibana.log
logging.verbose: true
server.basePath: /kibana
server.host: localhost
server.port: 5601
The contents of my log file:
root@my-server:/var/log# tail /var/log/kibana.log -n1000
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/plugins","message":"Scanning `/usr/share/kibana/plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/src/core_plugins","message":"Scanning `/usr/share/kibana/src/core_plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/plugins/x-pack/index.js","message":"Found plugin at /usr/share/kibana/plugins/x-pack/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/console/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/console/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/elasticsearch/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/elasticsearch/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_doc_views/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_doc_views/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kibana/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kibana/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/markdown_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/markdown_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/metrics/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/metrics/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/region_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/region_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/spy_modes/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/spy_modes/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/status_page/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/status_page/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/table_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/table_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tagcloud/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tagcloud/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tile_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tile_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/timelion/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/timelion/index.js"}
{"type":"ops","#timestamp":"2017-12-08T21:17:18Z","tags":[],"pid":17712,"os":{"load":[1.03271484375,1.29541015625,2.1494140625],"mem":{"total":2094931968,"free":763858944},"uptime":10018},"proc":{"uptime":16.017,"mem":{"rss":269451264,"heapTotal":239005696,"heapUsed":200227592,"external":489126},"delay":3.2269310001283884},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 191.0MB uptime: 0:00:16 load: [1.03 1.30 2.15] delay: 3.227"}
{"type":"log","#timestamp":"2017-12-08T21:17:18Z","tags":["info","optimize"],"pid":17712,"message":"Optimizing and caching bundles for graph, monitoring, ml, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page. This may take a few minutes"}
Confirming that ES is up and running:
root@my-server:/var/log# curl -u elastic:"mypassword" http://my-server:9200/
{
  "name" : "vf9xM-O",
  "cluster_name" : "my-server",
  "cluster_uuid" : "pdwwLfCOTgehc_5B8oB-8g",
  "version" : {
    "number" : "6.0.1",
    "build_hash" : "601be4a",
    "build_date" : "2017-12-04T09:29:09.525Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
When I try to CURL the Kibana Node server (which should be running on 5601 as per the config):
root@my-server:/var/log# curl 'localhost:5601'
curl: (7) Failed to connect to localhost port 5601: Connection refused
Indeed when I list the open ports, I see lots of things, but nothing on 5601:
root@my-server:/var/log# netstat -ntlp | grep LISTEN
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1413/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15924/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1519/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1591/postgres
tcp6 0 0 :::5355 :::* LISTEN 1413/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 16108/java
tcp6 0 0 :::9300 :::* LISTEN 16108/java
tcp6 0 0 :::22 :::* LISTEN 1519/sshd
tcp6 0 0 :::5432 :::* LISTEN 1591/postgres
I'm not sure what else to try to troubleshoot Kibana; any ideas are really appreciated!

Well I'm not really sure why, but this seemed to work after I rebooted the machine:
root@my-server:~# sudo reboot
After a minute I SSHed back in and voilà:
root@my-server:~# netstat -ntlp | grep LISTEN
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 1428/node
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1427/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1467/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1510/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1578/postgres
tcp6 0 0 :::5355 :::* LISTEN 1427/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 2011/java
tcp6 0 0 :::9300 :::* LISTEN 2011/java
tcp6 0 0 :::22 :::* LISTEN 1510/sshd
tcp6 0 0 :::5432 :::* LISTEN 1578/postgres
¯\_(ツ)_/¯

Related

Specify local IP address in net.DialTCP throwing error bind: address already in use

I have written a TCP client that creates 10 concurrent TCP connections (held open for a long time) and restricted it to 10 ephemeral ports (49001-49010, via the ip_local_port_range file).
func createConnection(c int, desAddr, desPort string) (brokerCon net.Conn, err error) {
    localips := GetLocalIP()
    maxRetry := len(localips)
    for retry := 0; retry < maxRetry; retry++ {
        // Bind to the candidate local IP with port 0 so the kernel picks an ephemeral port.
        lIPPort := fmt.Sprintf("%s:0", strings.Split(localips[retry].String(), "/")[0])
        fmt.Println("\n---", lIPPort)
        laddr, lerr := net.ResolveTCPAddr("tcp4", lIPPort)
        if lerr != nil {
            fmt.Println("ResolveTCPAddr failed for local address:", lerr)
        }
        raddr, rerr := net.ResolveTCPAddr("tcp4", desAddr+desPort)
        if rerr != nil {
            fmt.Println("ResolveTCPAddr failed for remote address:", rerr)
        }
        brokerCon, err = net.DialTCP("tcp", laddr, raddr)
        if err != nil {
            fmt.Printf("Failed to connect connection %d (retry %d), retrying with secondary IP, err: %v\n", c, retry, err)
            time.Sleep(1 * time.Second)
            continue
        }
        break
    }
    return
}
Scenario 1:
It's able to create all 10 connections if the laddr value passed to DialTCP is nil. DialTCP can use ports 49009 and 49007 even though they are already in use by connections to a different destination (because a TCP connection is determined by the 5-tuple [local IP, local port, remote IP, remote port, protocol]).
sudo netstat -anpl --tcp --udp | grep 490
tcp 0 0 10.50.1.245:49005 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49010 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49008 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49009 XXX.XXX.XXX.X44:443 ESTABLISHED 3250/XXXX
tcp 0 0 10.50.1.245:49006 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49002 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49004 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49001 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49007 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49003 10.50.1.41:9999 ESTABLISHED 23408/client
tcp 0 0 10.50.1.245:49007 XXX.XXX.XXX.X29:443 ESTABLISHED 2806/XXXX
tcp 0 0 10.50.1.245:49009 10.50.1.41:9999 ESTABLISHED 23408/client
Scenario 2:
If I pass the local IP (10.50.1.245) with port 0, it's able to create only 8 connections and throws bind: address already in use for the remaining 2. If the local address is not nil, why is DialTCP not able to use ports 49005 and 49007, which are already in use by connections to a different destination?
sudo netstat -anpl --tcp --udp | grep 490
tcp 0 0 10.50.1.245:49010 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49005 XXX.XXX.XXX.X66:443 ESTABLISHED 2510/XXX
tcp 0 0 10.50.1.245:49008 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49005 XXX.XXX.XXX.X44:443 ESTABLISHED 3250/XXX
tcp 0 0 10.50.1.245:49006 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49002 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49004 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49001 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49003 10.50.1.41:9999 ESTABLISHED 25841/client
tcp 0 0 10.50.1.245:49007 XXX.XXX.XXX.X29:443 ESTABLISHED 2806/XXX
tcp 0 0 10.50.1.245:49009 10.50.1.41:9999 ESTABLISHED 25841/client
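The difference comes down to when the local port is allocated: with a nil laddr the kernel picks the source port at connect() time, when the full 4-tuple is known, so a port already used towards a different destination can be shared; with an explicit laddr the port is allocated at bind() time, before the destination is known, so it must be completely free. Below is a minimal sketch (my own illustration, not from the post) of one way to keep a fixed source IP while still deferring port selection to connect time; it assumes Linux 4.2+ and the golang.org/x/sys/unix package for the IP_BIND_ADDRESS_NO_PORT socket option, and reuses the addresses from the question as placeholders.
package main

import (
    "fmt"
    "net"
    "syscall"

    "golang.org/x/sys/unix"
)

// dialFromIP dials raddr with srcIP as the source address, but defers
// ephemeral-port selection to connect() so the kernel can reuse a local
// port that is already taken by a connection to a different destination.
func dialFromIP(srcIP, raddr string) (net.Conn, error) {
    d := net.Dialer{
        // Port 0 here; IP_BIND_ADDRESS_NO_PORT below stops bind() from
        // grabbing an ephemeral port immediately.
        LocalAddr: &net.TCPAddr{IP: net.ParseIP(srcIP)},
        Control: func(network, address string, c syscall.RawConn) error {
            var serr error
            if err := c.Control(func(fd uintptr) {
                serr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP, unix.IP_BIND_ADDRESS_NO_PORT, 1)
            }); err != nil {
                return err
            }
            return serr
        },
    }
    return d.Dial("tcp4", raddr)
}

func main() {
    conn, err := dialFromIP("10.50.1.245", "10.50.1.41:9999")
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    fmt.Println("connected from", conn.LocalAddr())
    conn.Close()
}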

Elasticsearch on LAN not connecting

I have an Elasticsearch instance running on a server (Ubuntu) hosted on a local machine in our network. We have used it for testing and want to connect from local computers. The machine's LAN IP is 192.168.1.100. My IP is 192.168.1.54. It is running when I do
curl -X GET 'http://localhost:9200'
{
  "name" : "node-1",
  "cluster_name" : "norrath",
  "cluster_uuid" : "0EqCQH1ZTSGzOOdq_Sf7EQ",
  "version" : {
    "number" : "6.2.1",
    "build_hash" : "7299dc3",
    "build_date" : "2018-02-07T19:34:26.990113Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
When I try from my machine..
curl 'http://192.168.1.100:9200'
curl: (7) Failed to connect to 192.168.1.100 port 9200: Connection refused
The first thing I did was follow DigitalOcean's instructions and change
network.host: 0.0.0.0
Using netstat -atun
tcp6 0 0 :::9200 :::* LISTEN
tcp6 0 0 :::9300 :::* LISTEN
UFW status
sudo ufw status
Status: inactive
I have tried multiple config file changes..
#http.cors.enabled: true
#http.cors.allow-origin: "/.*/"
#transport.host: 0.0.0.0
#transport.tcp.port: 9300
#http.port: 9200
network.host: 0.0.0.0
#network.bind_host: 0.0.0.0
#network.publish_host: 0.0.0.0
systemctl restart elasticsearch
Still not able to connect over LAN.
After examining my netstat output, I realized it is listening for tcp6 requests but not IPv4. Changing my curl request to the inet6 address, and setting up tcp/udp rather than tcp only, fixed our issue.

Enabling debug on Wildfly domain mode in Docker - port already in use

I'm providing full docker environments for a team of developers, comprising Wildfly, MySQL and Apache primarily.
I preconfigure all images according to production, and a developer has now requested one more option: to be able to use IntelliJ to debug a running WildFly slave.
The setup:
I set up a virtual machine to host Docker, as people use different OSes.
I forward ports that must be reachable from the local machine that hosts the VM. This works, they can access the DB, wildfly management etc. Screenshot of the VM configuration and ports here:
debian machine hosting docker
Dockerfile for the host with debugging on (which isn't working):
FROM ourerpo/wildfly:base
ARG VERSION=8.2.0
WORKDIR $JBOSS_USER_HOME
ENV JAVA_OPTS='-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n'
ADD srv srv/
RUN mkdir -p $JBOSS_CONF \
&& mv srv/wildfly.conf.slave $JBOSS_CONF/wildfly.conf \
&& chown $JBOSS_USER:$JBOSS_USER $JBOSS_CONF \
&& chmod 644 $JBOSS_CONF \
&& chown $JBOSS_USER:$JBOSS_USER srv/ -R \
&& chmod 744 srv/*.sh
USER $JBOSS_USER
# Move in template host configuration and insert slave key
RUN mv srv/host-slave-${VERSION}.tmpl $JBOSS_DOMAIN/configuration/host-slave.xml \
&& cat $JBOSS_DOMAIN/configuration/host-slave.xml | sed -e"s#<secret value=\".*\"/>#<secret value=\"somevalue\"/>#" >$JBOSS_DOMAIN/configuration/host-slave.xml.new \
&& mv $JBOSS_DOMAIN/configuration/host-slave.xml.new $JBOSS_DOMAIN/configuration/host-slave.xml
ENTRYPOINT exec /app/wildfly/bin/domain.sh --domain-config=domain.xml --host-config=host-slave.xml -Djboss.domain.master.address=stsdomain -Djboss.bind.address=0.0.0.0
The image when spawned as a container logs the following:
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /app/wildfly
JAVA: /app/java/bin/java
JAVA_OPTS: -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
=========================================================================
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Listening for transport dt_socket at address: 8787
14:58:27,755 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
14:58:27,875 INFO [org.jboss.as.process.Host Controller.status] (main) JBAS012017: Starting process 'Host Controller'
[Host Controller] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[Host Controller] ERROR: transport error 202: bind failed: Address already in use
[Host Controller] ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
[Host Controller] JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [debugInit.c:750]
[Host Controller] FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
14:58:28,000 INFO [org.jboss.as.process.Host Controller.status] (reaper for Host Controller) JBAS012010: Process 'Host Controller' finished with an exit status of 134
Two things to note:
-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
ERROR: transport error 202: bind failed: Address already in use
So the port should be in use, but using netstat I can't see it:
me@machine:~/mapped$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::9999 :::* LISTEN -
tcp6 0 0 :::8050 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::13080 :::* LISTEN -
tcp6 0 0 :::15672 :::* LISTEN -
tcp6 0 0 :::9990 :::* LISTEN -
tcp6 0 0 :::5671 :::* LISTEN -
tcp6 0 0 :::5672 :::* LISTEN -
tcp6 0 0 :::2376 :::* LISTEN -
tcp6 0 0 :::3306 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 172.17.0.1:123 0.0.0.0:* -
udp 0 0 172.10.12.1:123 0.0.0.0:* -
udp 0 0 10.0.2.15:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp6 0 0 fe80::1053:e1ff:fed:123 :::* -
udp6 0 0 fe80::2c88:1cff:fe9:123 :::* -
udp6 0 0 fe80::42:3dff:fe28::123 :::* -
udp6 0 0 fe80::58c3:fdff:fe3:123 :::* -
udp6 0 0 fe80::d435:6fff:fee:123 :::* -
udp6 0 0 fe80::8091:1aff:fe7:123 :::* -
udp6 0 0 fe80::2459:65ff:fe0:123 :::* -
udp6 0 0 fe80::94b2:9fff:fe6:123 :::* -
udp6 0 0 fe80::42:19ff:fe2f::123 :::* -
udp6 0 0 fe80::a00:27ff:fef4:123 :::* -
udp6 0 0 ::1:123 :::* -
udp6 0 0 :::123 :::* -
Docker inspect on container:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9ac8dad9fd93a0fb9bdff4c068b8e925aa9ff941df4f81033ce910a093f36a78",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"11080/tcp": null,
"8787/tcp": null,
"8899/tcp": null
Things I have tried:
Changing -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787
To -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=8787
Change port from 8787 to something else.
Exposed the port, not exposing the port.
Server=y, Server=n
I'm running:
Docker version 1.11.2,
Wildfly 8.2
Docker network inspect:
me@machine:~/mapped$ docker network inspect compose_stsdevnet
[
{
"Name": "compose_thenet",
"Id": "9a17953da5f9698f3f27cf18d9d41751d049774439a53629fdcd69a996e370db",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.10.12.0/24"
}
]
},
"Internal": false,
"Containers": {
<other containers here>
<failing container> "9094b4136707e643df69fdff7dc04432a8d9c36275c3ae6dc6f2286393d3753a": {
"Name": "stupefied_stonebraker",
"EndpointID": "0c425d16334ecf3127233156d9770dc286bf72f57d778efe01fafb4696a17012",
"MacAddress": "02:42:ac:0a:0c:03",
"IPv4Address": "172.10.12.3/24",
"IPv6Address": ""
},
<the domain> "e4dd4f67f33df6643c691aa74a71dc4a8d69738004dfbe09b20c3061bd3bc614": {
"Name": "stsdomain",
"EndpointID": "0c89e70edbddb34f7be6b180a289480e1ac57ef482a651f0addce167eaa1110a",
"MacAddress": "02:42:ac:0a:0c:18",
"IPv4Address": "172.10.12.24/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Any ideas or suggestions would be much appreciated. Thanks in advance.
By placing the value in the JAVA_OPTS environment variable it will be used for both the process controller and the host controller. You're seeing the error because the host controller already has a debug agent listening on port 8787 when the process controller tries to bind to it.
My guess would be that you actually want to debug your application on the servers. If that is the case, in your host-slave.xml you'd need to add something like the following to a specific server.
<jvm name="default">
  <jvm-options>
    <option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
  </jvm-options>
</jvm>
Example:
<servers>
  <server name="server-one" group="main-server-group">
    <jvm name="default">
      <jvm-options>
        <option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
      </jvm-options>
    </jvm>
  </server>
  <server name="server-two" group="other-server-group">
    <!--
      ~ server-two avoids port conflicts by incrementing the ports in
      ~ the default socket-group declared in the server-group
    -->
    <socket-bindings port-offset="150"/>
  </server>
</servers>

DNS configuration for accessing consul remotely

I have installed consul on AWS EC2, with 3 servers and 1 client.
server IPs = 11.XX.XX.1,11.XX.XX.2,11.XX.XX.3.
client IP = 11.XX.XX.4
consul config: /etc/consul.d/server/config.json
{
  "bootstrap": false,
  "server": true,
  "datacenter": "abc",
  "advertise_addr": "11.XX.XX.1",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "addresses": {
    "http": "0.0.0.0"
  },
  "start_join": ["11.XX.XX.2", "11.XX.XX.3"]
}
netstat output on server:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8400 0.0.0.0:* LISTEN 29720/consul
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1006/sshd
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 29720/consul
tcp6 0 0 :::8301 :::* LISTEN 29720/consul
tcp6 0 0 :::8302 :::* LISTEN 29720/consul
tcp6 0 0 :::8500 :::* LISTEN 29720/consul
tcp6 0 0 :::22 :::* LISTEN 1006/sshd
tcp6 0 0 :::8300 :::* LISTEN 29720/consul
curl is working fine from the remote machine, but dig only works on the local machine.
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40873
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 11.XX.XX.4
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Dec 30 08:21:41 UTC 2016
;; MSG SIZE rcvd: 52
But dig is not working from the remote machine:
dig @11.XX.XX.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @11.XX.XX.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
How do I make it work?
By default, Consul only listens for DNS connections on the loopback interface. Best practice is to install a Consul client on any remote machine that needs to consume Consul DNS, but that is not always practical.
I have seen people expose DNS (consul port 8600) on all interfaces via the Consul configuration JSON like so:
{
  "server": true,
  "addresses": {
    "dns": "0.0.0.0"
  }
}
You can also expose all of the ports that listen on loopback with the client_addr field in the JSON config, or pass it on the command line with:
consul agent -client 0.0.0.0
There are more controls and knobs available to tweak (see docs):
https://www.consul.io/docs/agent/options.html
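If you would rather query the Consul DNS endpoint programmatically than shell out to dig, here is a minimal sketch (my own illustration, not part of the original answer) that points Go's resolver at port 8600. It assumes the agent has been made reachable as described above, and reuses the question's placeholder server address 11.XX.XX.1.
package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

func main() {
    // Route all lookups from this resolver to the Consul agent's DNS
    // endpoint instead of the system resolver. PreferGo makes sure the
    // custom Dial hook is actually used.
    r := &net.Resolver{
        PreferGo: true,
        Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
            d := net.Dialer{Timeout: 2 * time.Second}
            // 11.XX.XX.1 is the placeholder server IP from the question.
            return d.DialContext(ctx, "udp", "11.XX.XX.1:8600")
        },
    }

    ips, err := r.LookupHost(context.Background(), "web.service.consul")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Println("web.service.consul resolves to", ips)
}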

logstash-snmptrap not showing any logs in logstash

We are trying to implement Logstash with snmptrap, but the logs are not arriving in Logstash. In netstat the Logstash UDP port is not open on all interfaces; could that be the issue?
logstash.conf
input {
  snmptrap {
    type => "snmptrap"
    community => "public"
    port => "1062"
  }
}
snmptrapd.conf
authCommunity log,net public
forward default localhost:1062
Is there any issue with the configuration? Netstat output:
udp 0 0 0.0.0.0:162 0.0.0.0:*
udp 0 0 :::1062 :::*
