DNS configuration for accessing consul remotely - amazon-ec2

I have installed consul on AWS EC2, with 3 servers and 1 client.
server IPs = 11.XX.XX.1,11.XX.XX.2,11.XX.XX.3.
client IP = 11.XX.XX.4
consul config: /etc/consul.d/server/config.json
{
"bootstrap": false,
"server": true,
"datacenter": "abc",
"advertise_addr": "11.XX.XX.1",
"data_dir": "/var/consul",
"log_level": "INFO",
"enable_syslog": true,
"addresses": {
"http": "0.0.0.0"
},
"start_join": ["11.XX.XX.2", "11.XX.XX.3"]
}
netstat output on server:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8400 0.0.0.0:* LISTEN 29720/consul
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1006/sshd
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 29720/consul
tcp6 0 0 :::8301 :::* LISTEN 29720/consul
tcp6 0 0 :::8302 :::* LISTEN 29720/consul
tcp6 0 0 :::8500 :::* LISTEN 29720/consul
tcp6 0 0 :::22 :::* LISTEN 1006/sshd
tcp6 0 0 :::8300 :::* LISTEN 29720/consul
curl works fine from a remote machine, but dig only works on the local machine:
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40873
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 11.XX.XX.4
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Dec 30 08:21:41 UTC 2016
;; MSG SIZE rcvd: 52
but dig does not work from a remote machine:
dig @11.XX.XX.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @11.XX.XX.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
-----------------------------
How to make it work?

By default Consul only listens for DNS connections on the instance's loopback interface. Best practice is to install a Consul client (agent) on any remote machine that wants to consume Consul DNS, but this is not always practical.
I have seen people expose DNS (consul port 8600) on all interfaces via the Consul configuration JSON like so:
{
"server": true,
"addresses": {
"dns": "0.0.0.0"
}
}
You can also expose all of the ports that listen on loopback by setting the client_addr field in the JSON config, or by passing it on the command line:
consul agent -client 0.0.0.0
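Either way, once the agent is restarted with one of the settings above, the remote query from the question should work (assuming the EC2 security group also allows inbound TCP and UDP on port 8600), e.g.:
dig @11.XX.XX.1 -p 8600 web.service.consul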
There are more controls and knobs available to tweak (see docs):
https://www.consul.io/docs/agent/options.html

Related

Elasticsearch on LAN not connecting

I have Elasticsearch running on an Ubuntu server hosted on a local machine in our network. We use it for testing and want to connect to it from other computers on the LAN. The machine's LAN IP is 192.168.1.100; my IP is 192.168.1.54. It is running, as I can see when I do
curl -X GET 'http://localhost:9200'
{
"name" : "node-1",
"cluster_name" : "norrath",
"cluster_uuid" : "0EqCQH1ZTSGzOOdq_Sf7EQ",
"version" : {
"number" : "6.2.1",
"build_hash" : "7299dc3",
"build_date" : "2018-02-07T19:34:26.990113Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
When I try from my machine:
curl 'http://192.168.1.100:9200'
curl: (7) Failed to connect to 192.168.1.100 port 9200: Connection refused
The first thing I did was follow DigitalOcean's instructions and change
network.host: 0.0.0.0
Using netstat -atun
tcp6 0 0 :::9200 :::* LISTEN
tcp6 0 0 :::9300 :::* LISTEN
UFW status
sudo ufw status
Status: inactive
I have tried multiple config file changes..
#http.cors.enabled: true
#http.cors.allow-origin: "/.*/"
#transport.host: 0.0.0.0
#transport.tcp.port: 9300
#http.port: 9200
network.host: 0.0.0.0
#network.bind_host: 0.0.0.0
#network.publish_host: 0.0.0.0
systemctl restart elasticsearch
Still not able to connect over the LAN.
After examining my netstat output, I realized it is listening for tcp6 connections but not IPv4. Changing my curl request to use the inet6 address, and opening both TCP and UDP rather than TCP only, fixed our issue.
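For example (a sketch; the server's actual inet6 address would come from ip addr or ifconfig, the one below is just a placeholder):
curl -g 'http://[2001:db8::100]:9200'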

Configuring 3proxy Socks5 behind NAT network - error 12

I'm trying to configure a 3proxy server using this guide (I've already used it on OVH hosting and it works just fine!). Now I'm trying to start 3proxy behind NAT, and I get error 12 from 3proxy, which means "12 - failed to bind()".
Where is the mistake, and what am I doing wrong?
Internal IP:
172.16.20.50
External IP:
82.118.227.155
NAT Ports:
5001-5020
Here is my entire config:
######################
##3Proxy.cfg Content##
######################
##Main##
#Starting 3proxy as a service/daemon
daemon
#DNS Servers to resolve domains and for the local DNS cache
#that provides faster resolution for cached entries
nserver 8.8.8.8
nserver 1.1.1.1
nscache 65536
#Authentication
#CL = Clear Text, CR = Encrypted Passwords (MD5)
#Add MD5 users with MD5 passwords with "" (see below)
#users "user:CR:$1$lFDGlder$pLRb4cU2D7GAT58YQvY49."
users 3proxy:CL:hidden
#Logging
log /var/log/3proxy/3proxy.log D
logformat "- +_L%t.%. %N.%p %E %U %C:%c %R:%r %O %I %h %T"
#logformat "-""+_L%C - %U [%d/%o/%Y:%H:%M:%S %z] ""%T"" %E %I"
rotate 30
#Auth type
#auth strong = username & password
auth strong
#Binding address
external 82.118.227.155
internal 172.16.20.50
#SOCKS5
auth strong
flush
allow 3proxy
maxconn 1000
socks -p5011
User 3proxy created, access to 3proxy granted.
Logs, which show that connections are established but no traffic is transferred (0/0):
[root@bgvpn113 ~]# tail -f /var/log/3proxy/3proxy.log.2018.05.14
1526329023.448 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21151 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329023.458 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21154 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329023.698 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21158 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329037.419 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21162 195.201.201.32:443 0 0 0 CONNECT_195.201.201.32:443
1526329037.669 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21164 195.201.201.32:443 0 0 0 CONNECT_195.201.201.32:443
The mistake was in the external (outside) IP.
I set both IPs to 172.16.20.50 and it started to work!
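In other words, behind NAT the proxy should bind to the internal address for both directives and let the NAT translate it to 82.118.227.155. A minimal sketch of the corrected part of the config above:
external 172.16.20.50
internal 172.16.20.50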

Why won't Kibana Node server start up?

I just upgraded to Elasticsearch and Kibana 6.0.1 from 5.6.4 and I'm having trouble getting the Kibana server running. The service appears to be running, but nothing is binding to the port and I don't see any errors in the logs.
Verifying the version I have running:
root@my-server:/var/log# /usr/share/kibana/bin/kibana --version
6.0.1
Checking the service status:
root@my-server:/var/log# sudo service kibana start
root@my-server:/var/log# sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-08 21:17:53 UTC; 1s ago
Main PID: 17766 (node)
Tasks: 6 (limit: 4915)
Memory: 86.6M
CPU: 1.981s
CGroup: /system.slice/kibana.service
└─17766 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
The contents of my /etc/kibana/kibana.yml config file:
elasticsearch.password: mypassword
elasticsearch.url: http://my-server:9200
elasticsearch.username: elastic
logging.dest: /var/log/kibana.log
logging.verbose: true
server.basePath: /kibana
server.host: localhost
server.port: 5601
The contents of my log file:
root@my-server:/var/log# tail /var/log/kibana.log -n1000
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/plugins","message":"Scanning `/usr/share/kibana/plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:04Z","tags":["plugins","debug"],"pid":17712,"dir":"/usr/share/kibana/src/core_plugins","message":"Scanning `/usr/share/kibana/src/core_plugins` for plugins"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/plugins/x-pack/index.js","message":"Found plugin at /usr/share/kibana/plugins/x-pack/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/console/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/console/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/elasticsearch/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/elasticsearch/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_doc_views/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_doc_views/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:16Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kbn_vislib_vis_types/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/kibana/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/kibana/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/markdown_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/markdown_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/metrics/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/metrics/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/region_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/region_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/spy_modes/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/spy_modes/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/state_session_storage_redirect/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/status_page/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/status_page/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/table_vis/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/table_vis/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tagcloud/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tagcloud/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/tile_map/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/tile_map/index.js"}
{"type":"log","#timestamp":"2017-12-08T21:17:17Z","tags":["plugins","debug"],"pid":17712,"path":"/usr/share/kibana/src/core_plugins/timelion/index.js","message":"Found plugin at /usr/share/kibana/src/core_plugins/timelion/index.js"}
{"type":"ops","#timestamp":"2017-12-08T21:17:18Z","tags":[],"pid":17712,"os":{"load":[1.03271484375,1.29541015625,2.1494140625],"mem":{"total":2094931968,"free":763858944},"uptime":10018},"proc":{"uptime":16.017,"mem":{"rss":269451264,"heapTotal":239005696,"heapUsed":200227592,"external":489126},"delay":3.2269310001283884},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 191.0MB uptime: 0:00:16 load: [1.03 1.30 2.15] delay: 3.227"}
{"type":"log","#timestamp":"2017-12-08T21:17:18Z","tags":["info","optimize"],"pid":17712,"message":"Optimizing and caching bundles for graph, monitoring, ml, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page. This may take a few minutes"}
Confirming that ES is up and running:
root@my-server:/var/log# curl -u elastic:"mypassword" http://my-server:9200/
{
"name" : "vf9xM-O",
"cluster_name" : "my-server",
"cluster_uuid" : "pdwwLfCOTgehc_5B8oB-8g",
"version" : {
"number" : "6.0.1",
"build_hash" : "601be4a",
"build_date" : "2017-12-04T09:29:09.525Z",
"build_snapshot" : false,
"lucene_version" : "7.0.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
When I try to curl the Kibana Node server (which should be listening on 5601 as per the config):
root@my-server:/var/log# curl 'localhost:5601'
curl: (7) Failed to connect to localhost port 5601: Connection refused
Indeed when I list the open ports, I see lots of things, but nothing on 5601:
root@my-server:/var/log# netstat -ntlp | grep LISTEN
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1413/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15924/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1519/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1412/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1591/postgres
tcp6 0 0 :::5355 :::* LISTEN 1413/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 16108/java
tcp6 0 0 :::9300 :::* LISTEN 16108/java
tcp6 0 0 :::22 :::* LISTEN 1519/sshd
tcp6 0 0 :::5432 :::* LISTEN 1591/postgres
I'm not sure what else to try to troubleshoot Kibana; any ideas are really appreciated!
Well I'm not really sure why, but this seemed to work after I rebooted the machine:
root@my-server:~# sudo reboot
after a minute I SSHed back in and voila:
root@my-server:~# netstat -ntlp | grep LISTEN
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 1428/node
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1427/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1467/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1510/sshd
tcp 0 0 0.0.0.0:3031 0.0.0.0:* LISTEN 1414/uwsgi
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1578/postgres
tcp6 0 0 :::5355 :::* LISTEN 1427/systemd-resolv
tcp6 0 0 :::9200 :::* LISTEN 2011/java
tcp6 0 0 :::9300 :::* LISTEN 2011/java
tcp6 0 0 :::22 :::* LISTEN 1510/sshd
tcp6 0 0 :::5432 :::* LISTEN 1578/postgres
¯\_(ツ)_/¯

Enabling debug on Wildfly domain mode in Docker - port already in use

I'm providing full docker environments for a team of developers, comprising Wildfly, MySQL and Apache primarily.
I preconfigure all images to match production, and a developer has now requested one more option: to be able to use IntelliJ to debug a running WildFly slave.
The setup:
I set up a virtual machine to host Docker, as people use different OSes.
I forward the ports that must be reachable from the local machine that hosts the VM. This works: they can access the DB, WildFly management, etc. Screenshot of the VM configuration and ports here:
debian machine hosting docker
Dockerfile for the host with debugging enabled (which isn't working):
FROM ourerpo/wildfly:base
ARG VERSION=8.2.0
WORKDIR $JBOSS_USER_HOME
ENV JAVA_OPTS='-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n'
ADD srv srv/
RUN mkdir -p $JBOSS_CONF \
&& mv srv/wildfly.conf.slave $JBOSS_CONF/wildfly.conf \
&& chown $JBOSS_USER:$JBOSS_USER $JBOSS_CONF \
&& chmod 644 $JBOSS_CONF \
&& chown $JBOSS_USER:$JBOSS_USER srv/ -R \
&& chmod 744 srv/*.sh
USER $JBOSS_USER
# Move in template host configuration and insert slave key
RUN mv srv/host-slave-${VERSION}.tmpl $JBOSS_DOMAIN/configuration/host-slave.xml \
&& cat $JBOSS_DOMAIN/configuration/host-slave.xml | sed -e"s#<secret value=\".*\"/>#<secret value=\"somevalue\"/>#" >$JBOSS_DOMAIN/configuration/host-slave.xml.new \
&& mv $JBOSS_DOMAIN/configuration/host-slave.xml.new $JBOSS_DOMAIN/configuration/host-slave.xml
ENTRYPOINT exec /app/wildfly/bin/domain.sh --domain-config=domain.xml --host-config=host-slave.xml -Djboss.domain.master.address=stsdomain -Djboss.bind.address=0.0.0.0
The image when spawned as a container logs the following:
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /app/wildfly
JAVA: /app/java/bin/java
JAVA_OPTS: -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
=========================================================================
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Listening for transport dt_socket at address: 8787
14:58:27,755 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
14:58:27,875 INFO [org.jboss.as.process.Host Controller.status] (main) JBAS012017: Starting process 'Host Controller'
[Host Controller] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[Host Controller] ERROR: transport error 202: bind failed: Address already in use
[Host Controller] ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
[Host Controller] JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [debugInit.c:750]
[Host Controller] FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
14:58:28,000 INFO [org.jboss.as.process.Host Controller.status] (reaper for Host Controller) JBAS012010: Process 'Host Controller' finished with an exit status of 134
Two things to note:
-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787,server=y,suspend=n
ERROR: transport error 202: bind failed: Address already in use
So the port should be in use, though using netstat I can't see it:
me#machine:~/mapped$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::9999 :::* LISTEN -
tcp6 0 0 :::8050 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::13080 :::* LISTEN -
tcp6 0 0 :::15672 :::* LISTEN -
tcp6 0 0 :::9990 :::* LISTEN -
tcp6 0 0 :::5671 :::* LISTEN -
tcp6 0 0 :::5672 :::* LISTEN -
tcp6 0 0 :::2376 :::* LISTEN -
tcp6 0 0 :::3306 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 172.17.0.1:123 0.0.0.0:* -
udp 0 0 172.10.12.1:123 0.0.0.0:* -
udp 0 0 10.0.2.15:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp6 0 0 fe80::1053:e1ff:fed:123 :::* -
udp6 0 0 fe80::2c88:1cff:fe9:123 :::* -
udp6 0 0 fe80::42:3dff:fe28::123 :::* -
udp6 0 0 fe80::58c3:fdff:fe3:123 :::* -
udp6 0 0 fe80::d435:6fff:fee:123 :::* -
udp6 0 0 fe80::8091:1aff:fe7:123 :::* -
udp6 0 0 fe80::2459:65ff:fe0:123 :::* -
udp6 0 0 fe80::94b2:9fff:fe6:123 :::* -
udp6 0 0 fe80::42:19ff:fe2f::123 :::* -
udp6 0 0 fe80::a00:27ff:fef4:123 :::* -
udp6 0 0 ::1:123 :::* -
udp6 0 0 :::123 :::* -
Docker inspect on container:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9ac8dad9fd93a0fb9bdff4c068b8e925aa9ff941df4f81033ce910a093f36a78",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"11080/tcp": null,
"8787/tcp": null,
"8899/tcp": null
Things I have tried:
Changing -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=0.0.0.0:8787
to -Djava.awt.headless=true -agentlib:jdwp=transport=dt_socket,address=8787
Changing the port from 8787 to something else.
Exposing the port, and not exposing the port.
server=y, server=n
I'm running:
Docker version 1.11.2,
Wildfly 8.2
Docker network inspect:
me#machine:~/mapped$ docker network inspect compose_stsdevnet
[
{
"Name": "compose_thenet",
"Id": "9a17953da5f9698f3f27cf18d9d41751d049774439a53629fdcd69a996e370db",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.10.12.0/24"
}
]
},
"Internal": false,
"Containers": {
<other containers here>
<failing container> "9094b4136707e643df69fdff7dc04432a8d9c36275c3ae6dc6f2286393d3753a": {
"Name": "stupefied_stonebraker",
"EndpointID": "0c425d16334ecf3127233156d9770dc286bf72f57d778efe01fafb4696a17012",
"MacAddress": "02:42:ac:0a:0c:03",
"IPv4Address": "172.10.12.3/24",
"IPv6Address": ""
},
<the domain> "e4dd4f67f33df6643c691aa74a71dc4a8d69738004dfbe09b20c3061bd3bc614": {
"Name": "stsdomain",
"EndpointID": "0c89e70edbddb34f7be6b180a289480e1ac57ef482a651f0addce167eaa1110a",
"MacAddress": "02:42:ac:0a:0c:18",
"IPv4Address": "172.10.12.24/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Any ideas or suggestions would be much appreciated. Thanks in advance.
By placing the value in the JAVA_OPTS environment variable, it is used by both the process controller and the host controller. You're seeing the error because the process controller already has a debug agent listening on port 8787 when the host controller tries to bind to it.
My guess would be that you actually want to debug your application on the servers. If that is the case, in your host-slave.xml you'd need to add something like the following to a specific server.
<jvm name="default">
<jvm-options>
<option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
</jvm-options>
</jvm>
Example:
<servers>
<server name="server-one" group="main-server-group">
<jvm name="default">
<jvm-options>
<option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
</jvm-options>
</jvm>
</server>
<server name="server-two" group="other-server-group">
<!--
~ server-two avoids port conflicts by incrementing the ports in
~ the default socket-group declared in the server-group
-->
<socket-bindings port-offset="150"/>
</server>
</servers>
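One additional note, based on the docker inspect output in the question (which shows "8787/tcp": null, i.e. the port is not published): for IntelliJ on a developer's machine to attach, the debug port also has to be published from the container and forwarded through the VM, along the lines of:
docker run -p 8787:8787 ourerpo/wildfly:slave
(the image tag here is just a placeholder for however the slave container is normally started, e.g. a ports entry in the compose file.)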

logstash-snmptrap not showing any logs in logstash

We are trying to implement Logstash with snmptrap, but the logs are not arriving in Logstash. In netstat, the Logstash UDP port is not open on all interfaces; can that be the issue?
logstash.conf
input {
snmptrap {
type => "snmptrap"
community => "public"
port => "1062"
}
}
snmptrapd.conf
authCommunity log,net public
forward default localhost:1062
Is there any issue with the configuration? Netstat output:
udp 0 0 0.0.0.0:162 0.0.0.0:*
udp 0 0 :::1062 :::*
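A quick way to exercise the chain (an assumption, using the standard net-snmp snmptrap utility, which is not shown in the question) is to send a test trap at the snmptrapd listener and watch whether it is forwarded to Logstash on 1062:
snmptrap -v 2c -c public 127.0.0.1:162 '' 1.3.6.1.4.1.8072.2.3.0.1
If nothing appears in Logstash, the IPv6-only binding seen above (:::1062) is worth checking, since the forward target is localhost.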
