Elasticsearch on LAN not connecting

I have an Elasticsearch instance running on an Ubuntu server hosted on a local machine in our network. We have used it for testing and now want to connect to it from local computers. The machine's LAN IP is 192.168.1.100; my IP is 192.168.1.54. It is running when I do
curl -X GET 'http://localhost:9200'
{
  "name" : "node-1",
  "cluster_name" : "norrath",
  "cluster_uuid" : "0EqCQH1ZTSGzOOdq_Sf7EQ",
  "version" : {
    "number" : "6.2.1",
    "build_hash" : "7299dc3",
    "build_date" : "2018-02-07T19:34:26.990113Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
When I try from my machine:
curl 'http://192.168.1.100:9200'
curl: (7) Failed to connect to 192.168.1.100 port 9200: Connection refused
The first thing I did was follow DigitalOcean's instructions and change
network.host: 0.0.0.0
Using netstat -atun
tcp6 0 0 :::9200 :::* LISTEN
tcp6 0 0 :::9300 :::* LISTEN
UFW status
sudo ufw status
Status: inactive
I have tried multiple config file changes:
#http.cors.enabled: true
#http.cors.allow-origin: "/.*/"
#transport.host: 0.0.0.0
#transport.tcp.port: 9300
#http.port: 9200
network.host: 0.0.0.0
#network.bind_host: 0.0.0.0
#network.publish_host: 0.0.0.0
systemctl restart elasticsearch
I'm still not able to connect over the LAN.

After examining my netstat output, I realized Elasticsearch is listening for tcp6 requests but not IPv4. Changing my curl request to the inet6 address, and opening the port for TCP/UDP rather than TCP only, fixed our issue.
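For anyone double-checking the same symptom, here is a small verification sketch; it assumes a Linux host, and the bracketed IPv6 address below is purely illustrative. A socket shown as :::9200 is normally dual-stack and also accepts IPv4 unless net.ipv6.bindv6only is 1, so a bind address or firewall is usually the first suspect.

# On the Elasticsearch host: confirm what the HTTP port is bound to and
# whether IPv6-only binding is in effect.
sudo ss -tlnp | grep 9200
sysctl net.ipv6.bindv6only

# From the remote machine: try the LAN IPv4 address and, if the host has one,
# a routable IPv6 address (the bracketed address here is made up).
curl http://192.168.1.100:9200
curl -g 'http://[fd00::100]:9200'

# If curl still reports "connection refused", look for firewall rules beyond
# ufw (e.g. raw iptables rules added by another tool):
sudo iptables -L INPUT -n | grep 9200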


Send log data from fluentd in a Kubernetes cluster to Elasticsearch on a remote standalone server outside the cluster but inside the host network?

As a follow-up to this question, I want to know how I can reach my external service (Elasticsearch) from inside a Kubernetes pod (fluentd) when the external service is not reachable via the internet but only from the host network in which my Kubernetes cluster is also hosted.
Here is the external service kubernetes object I applied:
kind: Service
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
spec:
  type: ExternalName
  externalName: 192.168.57.105
  ports:
  - port: 9200
So now I have this service:
ubuntu@controller:~$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ext-elastic ExternalName <none> 192.168.57.105 9200/TCP 2s
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1d
The elasticsearch is there:
ubuntu@controller:~$ curl 192.168.57.105:9200
{
  "name" : "6_6nPVn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ZmxYHz5KRV26QV85jUhkiA",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
But from my fluentd pod I can neither resolve the service name with nslookup nor ping the plain IP. Neither of these commands works:
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system ping 192.168.57.105
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system nslookup ext-elastic
Here is a description of my network topology:
The VM hosting my Elasticsearch has 192.168.57.105 and the VM hosting my Kubernetes controller has 192.168.57.102. As shown above, the connection between them works well.
The controller node also has the IP 192.168.56.102. This is the network it shares with the other worker nodes (also VMs) of my Kubernetes cluster.
My fluentd pod sees itself as 172.17.0.2. It can easily reach 192.168.56.102 but not 192.168.57.102, although both addresses belong to its host, i.e. one and the same node.
Edit
The routing table of the fluentd-pod looks like this:
ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.244.0.1 0.0.0.0 UG 0 0 0 eth0
10.244.0.0 * 255.255.255.0 U 0 0 0 eth0
The /etc/resolv.conf of the fluentd pod looks like this:
ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- cat /etc/resolv.conf
nameserver 10.96.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
The routing table of the VM that hosts the Kubernetes controller and can reach the desired Elasticsearch service looks like this:
ubuntu@controller:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3
10.0.2.0 * 255.255.255.0 U 0 0 0 enp0s3
10.244.0.0 * 255.255.255.0 U 0 0 0 kube-bridge
10.244.1.0 192.168.56.103 255.255.255.0 UG 0 0 0 enp0s8
10.244.2.0 192.168.56.104 255.255.255.0 UG 0 0 0 enp0s8
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.56.0 * 255.255.255.0 U 0 0 0 enp0s8
192.168.57.0 * 255.255.255.0 U 0 0 0 enp0s9
Basically, your pod needs a route to the endpoint IP, or a default route to a router that can forward this traffic to the destination.
The destination endpoint also needs a route (or a default route) back to the source of the traffic in order to send a reply.
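With the addresses from the question (pod networks 10.244.x.0/24, controller at 192.168.57.102, Elasticsearch VM at 192.168.57.105, external interface enp0s9), a hedged sketch of what that can look like; the aggregate 10.244.0.0/16 pod CIDR and anything not shown in the question are assumptions:

# 1) From inside the pod, see which route the traffic to Elasticsearch would
#    take (assumes iproute2 is available in the fluentd image):
kubectl exec -ti fluentd-5lqns -n kube-system -- ip route get 192.168.57.105

# 2) On the Elasticsearch VM, add a return route for the pod network via the
#    controller's address on the 192.168.57.0/24 network:
sudo ip route add 10.244.0.0/16 via 192.168.57.102

# 3) Or, on the controller, masquerade pod traffic leaving via enp0s9 so the
#    Elasticsearch VM only ever sees 192.168.57.102 as the source and needs
#    no extra route:
sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -o enp0s9 -j MASQUERADE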
Check out this article for details about routing in AWS cloud as an example.
In a general sense, a route table tells network packets which way they
need to go to get to their destination. Route tables are managed by
routers, which act as “intersections” within the network — they
connect multiple routes together and contain helpful information for
getting traffic to its final destination. Each AWS VPC has a VPC
router. The primary function of this VPC router is to take all of the
route tables defined within that VPC, and then direct the traffic flow
within that VPC, as well as to subnets outside of the VPC, based on
the rules defined within those tables.
Route tables consist of a list of destination subnets, as well as
where the “next hop” is to get to the final destination.

Why won't Kibana Node server start up?

I just upgraded to Elasticsearch and Kibana 6.0.1 from 5.6.4 and I'm having trouble getting the Kibana server running. The service appears to be running, but nothing is binding to the port and I don't see any errors in the logs.
Verifying the version I have running:
root@my-server:/var/log# /usr/share/kibana/bin/kibana --version
6.0.1
Checking the service status:
root@my-server:/var/log# sudo service kibana start
root@my-server:/var/log# sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-08 21:17:53 UTC; 1s ago
Main PID: 17766 (node)
Tasks: 6 (limit: 4915)
Memory: 86.6M
CPU: 1.981s
CGroup: /system.slice/kibana.service
└─17766 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
The contents of my /etc/kibana/kibana.yml config file:
elasticsearch.password: mypassword
elasticsearch.url: http://my-server:9200
elasticsearch.username: elastic
logging.dest: /var/log/kibana.log
logging.verbose: true
server.basePath: /kibana
server.host: localhost
server.port: 5601
The contents of my log file:
root@my-server:/var/log# tail /var/log/kibana.log -n1000
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/plugins","message":"Scanning `/usr/share/kibana/plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/src/core_plugins","message":"Scanning `/usr/share/kibana/src/core_plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/plugins/x-pack/index.js","message":"Found plugin at /usr/share/kibana/plugins/x-pack/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/console/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/console/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/elasticsearch/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/elasticsearch/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_doc_views/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_doc_views/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kibana/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kibana/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/markdown_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/markdown_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/metrics/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/metrics/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/region_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/region_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/spy_modes/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/spy_modes/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/status_page/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/status_page/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/table_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/table_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tagcloud/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tagcloud/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tile_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tile_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/timelion/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/timelion/index.js"}
{"type":"ops","#timestamp":"2017-12-08T21:17:18Z","tags":[],"pid":17712,"os":{"load":[1.03271484375,1.29541015625,2.1494140625],"mem":{"total":2094931968,"free":763858944},"uptime":10018},"proc":{"uptime":16.017,"mem":{"rss":269451264,"heapTotal":239005696,"heapUsed":200227592,"external":489126},"delay":3.2269310001283884},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 191.0MB uptime: 0:00:16 load: [1.03 1.30 2.15] delay: 3.227"}
{"type":"log","#timestamp":"2017-12-08T21:17:18Z","tags":["info","optimize"],"pid":17712,"message":"Optimizing and caching bundles for graph, monitoring, ml, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page. This may take a few minutes"}
Confirming that ES is up and running:
root@my-server:/var/log# curl -u elastic:"mypassword" http://my-server:9200/
{
  "name" : "vf9xM-O",
  "cluster_name" : "my-server",
  "cluster_uuid" : "pdwwLfCOTgehc_5B8oB-8g",
  "version" : {
    "number" : "6.0.1",
    "build_hash" : "601be4a",
    "build_date" : "2017-12-04T09:29:09.525Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
When I try to curl the Kibana Node server (which should be listening on 5601 as per the config):
root@my-server:/var/log# curl 'localhost:5601'
curl: (7) Failed to connect to localhost port 5601: Connection refused
Indeed when I list the open ports, I see lots of things, but nothing on 5601:
root@my-server:/var/log# netstat -ntlp | grep LISTEN
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1413/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15924/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1519/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1591/postgres
tcp6 0 0 :::5355 :::* LISTEN 1413/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 16108/java
tcp6 0 0 :::9300 :::* LISTEN 16108/java
tcp6 0 0 :::22 :::* LISTEN 1519/sshd
tcp6 0 0 :::5432 :::* LISTEN 1591/postgres
I'm not sure what else to try to troubleshoot Kibana, any ideas are really really appreciated!
Well I'm not really sure why, but this seemed to work after I rebooted the machine:
root@my-server:~# sudo reboot
After a minute I SSHed back in and voilà:
root@my-server:~# netstat -ntlp | grep LISTEN
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 1428/node
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1427/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1467/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1510/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1578/postgres
tcp6 0 0 :::5355 :::* LISTEN 1427/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 2011/java
tcp6 0 0 :::9300 :::* LISTEN 2011/java
tcp6 0 0 :::22 :::* LISTEN 1510/sshd
tcp6 0 0 :::5432 :::* LISTEN 1578/postgres
¯\_(ツ)_/¯
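For what it's worth, the "Optimizing and caching bundles ... This may take a few minutes" line in the log suggests Kibana may simply not have finished its post-upgrade optimize pass yet; the port is typically not bound until startup completes. A sketch of non-reboot checks, using the unit name and log path from the question:

# Follow the Kibana log and wait for the optimize step to finish:
tail -f /var/log/kibana.log

# Poll for the listener on the configured port (5601):
watch -n 5 'netstat -ntlp | grep 5601'

# Check whether systemd restarted or killed the process in the meantime:
journalctl -u kibana.service --since "1 hour ago"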

Enabling debug on Wildfly domain mode in Docker - port already in use

I'm providing full Docker environments for a team of developers, primarily comprising WildFly, MySQL and Apache.
I preconfigure all images to match production, and a developer has now requested one more option: to be able to use IntelliJ to debug a running WildFly slave.
The setup:
I set up a virtual machine to host Docker, since people use different OSes.
I forward the ports that must be reachable from the local machine hosting the VM. This works; they can access the DB, WildFly management, etc. Screenshot of the VM configuration and ports here:
debian machine hosting docker
Dockerfile for the host with debugging enabled (which isn't working):
FROM ourerpo/wildfly:base
ARG VERSION=8.2.0
WORKDIR $JBOSS_USER_HOME
ENV JAVA_OPTS='-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n'
ADD srv srv/
RUN mkdir -p $JBOSS_CONF \
&& mv srv/wildfly.conf.slave $JBOSS_CONF/wildfly.conf \
&& chown $JBOSS_USER:$JBOSS_USER $JBOSS_CONF \
&& chmod 644 $JBOSS_CONF \
&& chown $JBOSS_USER:$JBOSS_USER srv/ -R \
&& chmod 744 srv/*.sh
USER $JBOSS_USER
# Move in template host configuration and insert slave key
RUN mv srv/host-slave-${VERSION}.tmpl $JBOSS_DOMAIN/configuration/host-slave.xml \
&& cat $JBOSS_DOMAIN/configuration/host-slave.xml | sed -e"s#<secret value=\".*\"/>#<secret value=\"somevalue\"/>#" >$JBOSS_DOMAIN/configuration/host-slave.xml.new \
&& mv $JBOSS_DOMAIN/configuration/host-slave.xml.new $JBOSS_DOMAIN/configuration/host-slave.xml
ENTRYPOINT exec /app/wildfly/bin/domain.sh --domain-config=domain.xml --host-config=host-slave.xml -Djboss.domain.master.address=stsdomain -Djboss.bind.address=0.0.0.0
The image when spawned as a container logs the following:
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /app/wildfly
JAVA: /app/java/bin/java
JAVA_OPTS: -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
=========================================================================
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Listening for transport dt_socket at address: 8787
14:58:27,755 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
14:58:27,875 INFO [org.jboss.as.process.Host Controller.status] (main) JBAS012017: Starting process 'Host Controller'
[Host Controller] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[Host Controller] ERROR: transport error 202: bind failed: Address already in use
[Host Controller] ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
[Host Controller] JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [debugInit.c:750]
[Host Controller] FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
14:58:28,000 INFO [org.jboss.as.process.Host Controller.status] (reaper for Host Controller) JBAS012010: Process 'Host Controller' finished with an exit status of 134
Two things to note:
-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
ERROR: transport error 202: bind failed: Address already in use
So the port should be in use; using netstat, though, I can't see it:
me@machine:~/mapped$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::9999 :::* LISTEN -
tcp6 0 0 :::8050 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::13080 :::* LISTEN -
tcp6 0 0 :::15672 :::* LISTEN -
tcp6 0 0 :::9990 :::* LISTEN -
tcp6 0 0 :::5671 :::* LISTEN -
tcp6 0 0 :::5672 :::* LISTEN -
tcp6 0 0 :::2376 :::* LISTEN -
tcp6 0 0 :::3306 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 172.17.0.1:123 0.0.0.0:* -
udp 0 0 172.10.12.1:123 0.0.0.0:* -
udp 0 0 10.0.2.15:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp6 0 0 fe80::1053:e1ff:fed:123 :::* -
udp6 0 0 fe80::2c88:1cff:fe9:123 :::* -
udp6 0 0 fe80::42:3dff:fe28::123 :::* -
udp6 0 0 fe80::58c3:fdff:fe3:123 :::* -
udp6 0 0 fe80::d435:6fff:fee:123 :::* -
udp6 0 0 fe80::8091:1aff:fe7:123 :::* -
udp6 0 0 fe80::2459:65ff:fe0:123 :::* -
udp6 0 0 fe80::94b2:9fff:fe6:123 :::* -
udp6 0 0 fe80::42:19ff:fe2f::123 :::* -
udp6 0 0 fe80::a00:27ff:fef4:123 :::* -
udp6 0 0 ::1:123 :::* -
udp6 0 0 :::123 :::* -
Docker inspect on container:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9ac8dad9fd93a0fb9bdff4c068b8e925aa9ff941df4f81033ce910a093f36a78",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"11080/tcp": null,
"8787/tcp": null,
"8899/tcp": null
Things I have tried:
Changing -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787
to -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=8787
Changing the port from 8787 to something else.
Exposing the port, not exposing the port.
server=y, server=n
I'm running:
Docker version 1.11.2,
Wildfly 8.2
Docker network inspect:
me@machine:~/mapped$ docker network inspect compose_stsdevnet
[
{
"Name": "compose_thenet",
"Id": "9a17953da5f9698f3f27cf18d9d41751d049774439a53629fdcd69a996e370db",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.10.12.0/24"
}
]
},
"Internal": false,
"Containers": {
<other containers here>
<failing container> "9094b4136707e643df69fdff7dc04432a8d9c36275c3ae6dc6f2286393d3753a": {
"Name": "stupefied_stonebraker",
"EndpointID": "0c425d16334ecf3127233156d9770dc286bf72f57d778efe01fafb4696a17012",
"MacAddress": "02:42:ac:0a:0c:03",
"IPv4Address": "172.10.12.3/24",
"IPv6Address": ""
},
<the domain> "e4dd4f67f33df6643c691aa74a71dc4a8d69738004dfbe09b20c3061bd3bc614": {
"Name": "stsdomain",
"EndpointID": "0c89e70edbddb34f7be6b180a289480e1ac57ef482a651f0addce167eaa1110a",
"MacAddress": "02:42:ac:0a:0c:18",
"IPv4Address": "172.10.12.24/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Any ideas or suggestions would be much appreciated. Thanks in advance.
By placing the value in the JAVA_OPTS environment variable, it is used for both the process controller and the host controller. You're seeing the error because the process controller already has a debug agent listening on port 8787 when the host controller tries to bind to it.
My guess is that you actually want to debug your application on the servers. If that is the case, in your host-slave.xml you'd need to add something like the following to a specific server.
<jvm name="default">
  <jvm-options>
    <option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
  </jvm-options>
</jvm>
Example:
<servers>
  <server name="server-one" group="main-server-group">
    <jvm name="default">
      <jvm-options>
        <option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
      </jvm-options>
    </jvm>
  </server>
  <server name="server-two" group="other-server-group">
    <!--
      ~ server-two avoids port conflicts by incrementing the ports in
      ~ the default socket-group declared in the server-group
    -->
    <socket-bindings port-offset="150"/>
  </server>
</servers>
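With that in place, the Dockerfile from the question would also drop the jdwp agent from JAVA_OPTS (otherwise the controllers still compete for 8787) and make the debug port reachable from outside the container. A sketch; the image tag in the run example is hypothetical:

# Debug agent removed from JAVA_OPTS; the per-server <jvm-options> above now
# carries it instead.
ENV JAVA_OPTS='-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true'
EXPOSE 8787

# Publish the port when starting the container, then attach an IntelliJ
# "Remote" debug configuration to <docker-host>:8787, e.g.:
#   docker run -p 8787:8787 ourerpo/wildfly:slave-debug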

DNS configuration for accessing consul remotely

I have installed consul on AWS EC2, with 3 servers and 1 client.
server IPs = 11.XX.XX.1,11.XX.XX.2,11.XX.XX.3.
client IP = 11.XX.XX.4
consul config: /etc/consul.d/server/config.json
{
  "bootstrap": false,
  "server": true,
  "datacenter": "abc",
  "advertise_addr": "11.XX.XX.1",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "addresses": {
    "http": "0.0.0.0"
  },
  "start_join": ["11.XX.XX.2", "11.XX.XX.3"]
}
netstat output on server:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8400 0.0.0.0:* LISTEN 29720/consul
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1006/sshd
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 29720/consul
tcp6 0 0 :::8301 :::* LISTEN 29720/consul
tcp6 0 0 :::8302 :::* LISTEN 29720/consul
tcp6 0 0 :::8500 :::* LISTEN 29720/consul
tcp6 0 0 :::22 :::* LISTEN 1006/sshd
tcp6 0 0 :::8300 :::* LISTEN 29720/consul
curl works fine from a remote machine, but dig only works on the local machine:
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40873
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 11.XX.XX.4
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Dec 30 08:21:41 UTC 2016
;; MSG SIZE rcvd: 52
But dig is not working from the remote machine:
dig @11.XX.XX.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @11.XX.XX.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
-----------------------------
How to make it work?
By default, Consul only listens for DNS connections on the loopback device. Best practice is to install a Consul client on any remote machine that wants to consume Consul DNS, but this is not always practical.
I have seen people expose DNS (Consul port 8600) on all interfaces via the Consul configuration JSON, like so:
{
  "server": true,
  "addresses": {
    "dns": "0.0.0.0"
  }
}
You can also expose all of the client interfaces that listen on loopback with the client_addr field in the JSON, or pass it on the command line with:
consul agent -client 0.0.0.0
There are more controls and knobs available to tweak (see docs):
https://www.consul.io/docs/agent/options.html
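As a quick check after restarting the agent with one of the settings above (and, since this is EC2, after allowing TCP/UDP 8600 in the security group), the remote dig from the question should now get an answer:

# On the server, the DNS port should now be bound to all interfaces instead of
# 127.0.0.1:8600:
netstat -tulpn | grep 8600

# From the remote machine (address pattern as in the question):
dig @11.XX.XX.1 -p 8600 web.service.consul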

logstash-snmptrap not showing any logs in logstash

We are trying to implement Logstash with snmptrap, but the logs are not arriving in Logstash; in netstat the Logstash UDP port is not open for all, can that be the issue?
logstash.conf
input {
snmptrap {
type => "snmptrap"
community => "public"
port => "1062"
}
}
snmptrapd.conf
authCommunity log,net public
forward default localhost:1062
Is there any issue with the configuration? Netstat output:
udp 0 0 0.0.0.0:162 0.0.0.0:*
udp 0 0 :::1062 :::*
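One way to narrow this down, as a sketch: the :::1062 socket is an any-address (IPv6 wildcard) listener that normally also accepts IPv4, so sending a test trap end-to-end is a quick check. This assumes the net-snmp command-line tools are installed on the Logstash host; the OIDs are the standard net-snmp example notification:

# Send a test v2c trap to the local snmptrapd (UDP 162); per snmptrapd.conf it
# should be forwarded to the Logstash snmptrap input on port 1062:
snmptrap -v 2c -c public localhost '' 1.3.6.1.4.1.8072.2.3.0.1 \
    1.3.6.1.4.1.8072.2.3.2.1 i 123

# Watch whether snmptrapd actually forwards the trap to 1062:
sudo tcpdump -ni any udp port 1062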
