I installed Kibana and Elasticsearch on CentOS. My VM is in GCP; I created a firewall rule to expose ports 5601, 5602, 9200, and 9300 externally, and also opened the same ports in firewalld.
Both services are running.
When I check with the netstat command I can see the ports are listening:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 1591/node
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1105/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1269/master
tcp6 0 0 :::9200 :::* LISTEN 2273/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 2273/java
tcp6 0 0 ::1:9300 :::* LISTEN 2273/java
tcp6 0 0 :::22 :::* LISTEN 1105/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1269/master
When I inspect the request in the browser, this is the header section:
Request URL: http://X.X.X.X:5601/
Referrer Policy: strict-origin-when-cross-origin
Provisional headers are shown
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
and this is the response section.
The browser is Chrome and is up to date.
Can anyone tell me what the issue is?
I found the issue.
Change the following line in the "/etc/kibana/kibana.yml" file:
server.host: "localhost" => server.host: "0.0.0.0"
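For reference, the relevant part of kibana.yml would end up looking something like this (a sketch; server.port is shown only for context and is the default, not a required change):

```yaml
# /etc/kibana/kibana.yml
# Bind to all interfaces so the VM's external IP can reach Kibana
server.host: "0.0.0.0"
server.port: 5601
```

Restart Kibana afterwards (e.g. sudo systemctl restart kibana) so the change takes effect; netstat should then show 0.0.0.0:5601 instead of 127.0.0.1:5601.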
Related
I have set up Kafka on my Amazon EC2 machine running Ubuntu 18 following this blog post, and this is how it is exposing the ports:
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 772/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1220/sshd
tcp 0 0 0.0.0.0:3004 0.0.0.0:* LISTEN 1041/mongod
tcp6 0 0 :::45827 :::* LISTEN 2059/java
tcp6 0 0 :::9092 :::* LISTEN 2136/java
tcp6 0 0 :::2181 :::* LISTEN 2059/java
tcp6 0 0 :::32851 :::* LISTEN 2136/java
tcp6 0 0 :::22 :::* LISTEN 1220/sshd
how can I bind it to 0.0.0.0:9092?
>how can I bind it to 0.0.0.0:9092
:::9092 should be all you need for binding on IPv6.
If you want to force IPv4, refer to kafka binding to ipv6 port even though ipv4 address specified in config.
You can also add this to server.properties to explicitly bind to all interfaces:
listeners=PLAINTEXT://0.0.0.0:9092
But when listeners is set, you also need to set (and uncomment) advertised.listeners to the external interface address (IP or hostname) that clients should use to communicate with that broker, as mentioned in the property file.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092
More details are here if you need something more complex: https://www.confluent.io/blog/kafka-listeners-explained
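Putting the two properties together, the relevant section of server.properties would look something like this (a sketch; the advertised hostname is a placeholder you must replace with your own externally reachable address):

```properties
# Bind the broker socket to all interfaces
listeners=PLAINTEXT://0.0.0.0:9092

# Address that clients are told to connect back to; it must be reachable
# from the client side (e.g. the EC2 public DNS name). Placeholder below.
advertised.listeners=PLAINTEXT://your.public.hostname:9092
```

The distinction matters because the broker hands the advertised address back to clients during metadata exchange; a client outside the machine cannot use 0.0.0.0.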
I am doing this on AWS EC2 running Ubuntu 18; the blog post shared in the first answer provides detailed information on how to approach this kind of challenge. The main problem was failing to get a broker connection from the machine.
What worked was adding the machine's public DNS (ec2......com) to the advertised listeners.
I made the edit in the server.properties file like this:
advertised.listeners=PLAINTEXT://public DNS(ec2......com):9092
My Elasticsearch is running on server A on ports 9200 and 9300.
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 23489/java
tcp6 0 0 ::1:9200 :::* LISTEN 23489/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 23489/java
tcp6 0 0 ::1:9300 :::* LISTEN 23489/java
When I try to connect to Elasticsearch from server B, which is on the same LAN, I get a connection refused error. I am also unable to telnet to the server on port 9200 or 9300. Please suggest what I am missing.
This is because your ES is bound to localhost (127.0.0.1).
You need to change the network.host property in elasticsearch.yml in order to be able to connect from remote hosts. Basically, this does the trick; 0 is shorthand for 0.0.0.0, which binds to all available network interfaces:
network.host: 0
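The "connection refused" symptom is exactly what a loopback-only bind produces: the kernel resets any connection addressed to an address the socket is not bound to. A minimal Python sketch of the effect (Linux-specific: it uses 127.0.0.2, a second loopback address, to stand in for a "remote-facing" address, and an arbitrary test port):

```python
import socket

def can_connect(bind_host, connect_host, port=9214):
    """Bind a listener to bind_host, then try to reach it via connect_host."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((bind_host, port))
    server.listen(1)
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.settimeout(1)
    try:
        client.connect((connect_host, port))
        return True
    except OSError:  # connection refused: nothing listens on that address
        return False
    finally:
        client.close()
        server.close()

# Bound to loopback only: unreachable via any other local address
print(can_connect("127.0.0.1", "127.0.0.2"))  # False
# Bound to all interfaces: reachable via 127.0.0.2 as well
print(can_connect("0.0.0.0", "127.0.0.2"))    # True
```

The same mechanism explains every netstat dump in this thread: a Local Address column of 127.0.0.1:PORT (or ::1:PORT) means only the machine itself can connect, while 0.0.0.0:PORT (or :::PORT) accepts connections on any interface.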
So I'm in over my head, but I nearly have a working Clojure app with a WebSocket connection deployed to a prod-like environment; there are just a few things that I can't seem to work out. When I hit the endpoint with curl from localhost I get the response I was hoping for, so far so good. But when I try to access it from my domain, I don't get a connection. When I checked netstat to make sure the port (8001) is open, I saw the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3283/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3218/sshd
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3283/nginx: master
tcp6 0 0 :::8001 :::* LISTEN 3260/java
tcp6 0 0 :::80 :::* LISTEN 3283/nginx: master
tcp6 0 0 :::22 :::* LISTEN 3218/sshd
tcp6 0 0 :::443 :::* LISTEN 3283/nginx: master
I am running ufw but I have already allowed the port, so I'm not sure if there is something I need to do when running the app to make it available as a normal TCP service. Or am I barking up the wrong tree?
Here is the command that is currently running:
java -jar test.jar 8001
Also, I was torn between here and Server Fault, but opted for Stack Overflow because I think this is Clojure-centric. Please feel free to correct me.
It looks like you have a good setup where nginx is providing a reverse proxy to your app; this is a solid way to go about it.
A tcp6 listener on :::8001 is likely also listening on IPv4 as well as IPv6; netstat just doesn't print both of them.
At least at first glance, it looks like nginx is missing a forward rule for /* to localhost:8001/*.
It looks something like this in nginx.conf:
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    # WebSocket connections additionally need the HTTP/1.1 Upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:8001;
}
This is driving me crazy. I have been working on this for days and just can't seem to solve the issue. I have a private cloud running on Eucalyptus for testing, with 4 VMs running Ubuntu 12.04. I am trying to get Cloudera to run HDFS and MapReduce, but when I start it up, the datanodes never seem to be able to communicate with the namenode. It installs fine and passes all the pre-launch checks. The hosts files are all set up with 127.0.0.1 localhost plus the IPs and hostnames of the other VMs, the firewalls are all disabled, and the security groups are set to allow everything. I can connect from the datanodes to port 8022 on the namenode with telnet, and netstat on the namenode looks like this:
tcp 0 0 172.31.254.119:9000 0.0.0.0:* LISTEN 6519/python
tcp 0 0 0.0.0.0:7432 0.0.0.0:* LISTEN 5672/postgres
tcp 0 0 127.0.0.1:9001 0.0.0.0:* LISTEN 6538/python
tcp 0 0 172.31.254.119:50090 0.0.0.0:* LISTEN 8694/java
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 5680/java
tcp 0 0 0.0.0.0:7182 0.0.0.0:* LISTEN 5680/java
tcp 0 0 172.31.254.119:8020 0.0.0.0:* LISTEN 8689/java
tcp 0 0 172.31.254.119:50070 0.0.0.0:* LISTEN 8689/java
tcp 0 0 172.31.254.119:8022 0.0.0.0:* LISTEN 8689/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 576/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 5486/postgres
tcp6 0 0 :::7432 :::* LISTEN 5672/postgres
tcp6 0 0 :::22 :::* LISTEN 576/sshd
yet the error I keep getting is:
Failed to publish event: SimpleEvent{attributes={STACKTRACE=[org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode: DatanodeRegistration(172.31.254.110, storageID=DS-1259113373-172.31.254.110-50010-1378398035331, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster9;nsid=46459994;c=0)
I would greatly appreciate any advice from anyone with more Linux/Cloudera/Eucalyptus experience than I have.
Thanks, all.
You have specified that you are using loopback, but the DN is identifying itself as 172.31.254.110. Use the proper hostname instead of 127.0.0.1. To be on the safe side, add the hostname and IP of each machine to the /etc/hosts file of every other machine. If the problem still persists, show me your config files.
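As a sketch, the /etc/hosts file on every node would look something like this (the hostnames are placeholders; the IPs are the namenode and datanode addresses taken from the netstat and error output above):

```
127.0.0.1        localhost
172.31.254.119   namenode.example.internal    namenode
172.31.254.110   datanode1.example.internal   datanode1
```

The important point is that 127.0.0.1 maps only to localhost, so reverse lookups of each node resolve to its real LAN address rather than loopback.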
My security group has the following:
>22 (SSH) 0.0.0.0/0
>80 (HTTP) 0.0.0.0/0
>143 (IMAP) 0.0.0.0/0
>443 (HTTPS) 0.0.0.0/0
>995 (POP3S) 0.0.0.0/0
>465 (SMTPS) 0.0.0.0/0
>25 (SMTP) 0.0.0.0/0
Running a netstat on the server shows the following:
>Active Internet connections (servers and established)
>Proto Recv-Q Send-Q Local Address Foreign Address State
>tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN
>tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
>tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
>tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
>tcp 0 0 10.211.30.202:44025 194.14.0.125:6667 ESTABLISHED
>tcp6 0 0 :::995 :::* LISTEN
>tcp6 0 0 :::110 :::* LISTEN
>tcp6 0 0 :::143 :::* LISTEN
>tcp6 0 0 :::22 :::* LISTEN
>tcp6 0 0 :::25 :::* LISTEN
>tcp6 0 0 :::993 :::* LISTEN
And when I try and access the box from the outside world, I get nothing.
>thedude:~ root$ telnet mail.sd0a.com 25
>Trying 107.20.235.215...
>telnet: connect to address 107.20.235.215: Operation timed out
>telnet: Unable to connect to remote host
Does anyone have any positive experiences with Amazon EC2 instances and getting mail to a state where it will work? It's worth noting that via the command line, mail seems to go through. The system is Ubuntu 12.04.1 LTS, if that matters.
It might be your ISP filtering outbound connections to port 25/tcp in order to prevent botnet spam.
To eliminate the obvious, have you tried
connecting to a port other than 25?
connecting to another new EC2 instance on port 25? (it is a straightforward task to duplicate it on EC2)
connecting from another machine (or a friend's PC) to sd0a.com:25?
running traceroute to identify where the packets are dropped?
setting up postfix on port 2525? (remember to add that port to the Security Groups)
checking ufw on Ubuntu? (the default is off... but good to check)
As far as I can tell, all IP addresses on Amazon EC2 are blacklisted in spamhaus.com (and a lot of other anti-spam lists). Hence, most likely your ISP is blocking these packets; if so, is it an IP block or a port block?