Kibana web interface not loading - elasticsearch

Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Curls against localhost return 200, but the browser console reports timeouts after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local machine (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost, and the staging environment is also reachable over its public IP address and every domain pointed at it. The reverse proxy also works, and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've attached the exact same volume from the working environment to the production instance, so everything is identical on the two volumes, except that the staging environment's Apache vhost uses a subdomain while the production environment's ServerName is the base domain. Both are configured for wildcard SSL. The two instances are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see whether the domain mattered, but the error remains.
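For reference, the vhost in question is roughly this shape; the hostname and certificate paths below are placeholders, and I'm only sketching the Kibana proxy piece (mod_proxy, mod_proxy_http, and mod_ssl enabled):

<VirtualHost *:443>
    ServerName kibana.example.com          # subdomain on staging, base domain on production
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/wildcard.crt     # placeholder path
    SSLCertificateKeyFile /etc/ssl/private/wildcard.key   # placeholder path

    # Reverse proxy to the local Kibana instance
    ProxyPreserveHost On
    ProxyPass        / http://localhost:5601/
    ProxyPassReverse / http://localhost:5601/
</VirtualHost>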
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0, but I'm unable to come to a resolution. All ElasticSearch and Kibana logs and configurations are identical between the two servers.
I've tried deleting and re-creating the kibana index, tried alternate settings for the host and elasticsearch URL, extended the max ping and request timeouts and max retries, raised the Apache limits, and set http.cors to allow different origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine while the non-working version has 100% packet loss when I ping its IP, although I can't imagine why that would be, since I can reach the website on port 80 just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?

Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 with:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address, as in the example below (see man iptables),
or use the Shield plugin for Elasticsearch.
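A minimal sketch, where 203.0.113.10 is a placeholder for the client address you want to allow:
sudo iptables -I INPUT -p tcp -s 203.0.113.10 --dport 5601 -j ACCEPT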

Sorry, I forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before with their internal DNS/IP conflicts and have had to do this in the past, so this turned out to be the quickest and cleanest solution, albeit one that provides no definitive insight into the cause.

Related

Elasticsearch - Collecting logs from devices not on server LAN

I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack via Digital Ocean. Everything was successful, and the server itself (localhost) is producing logs. It's been interesting to tinker with visualizations and that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install filebeat onto, say, my laptop and configure the .yml file in a similar way to the server (comment out the Elasticsearch output, uncomment the Logstash output), it is not able to connect. Basically I just set the hosts to serverip:logstash-port and enabled filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts" error.
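For concreteness, the change on the laptop is roughly this (the IP is a placeholder; 5044 is the default Beats port, substitute whatever port the Logstash beats input actually listens on):

# filebeat.yml on the laptop
output.logstash:
  hosts: ["203.0.113.10:5044"]

# the Elasticsearch output stays commented out
#output.elasticsearch:
#  hosts: ["localhost:9200"]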
Instead of a direct answer, can someone explain to me generally what I need to be considering for this process? What is happening when connecting from outside the server's LAN? And how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface,
ensuring that it is not accessible to the rest of the network. Because the API
includes neither authentication nor authorization and has not been hardened or
tested for use as a publicly-reachable API, binding to publicly accessible IPs
should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better if your laptop already has a fixed public IP so you can open the port only for it).
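If the droplet uses ufw, restricting that opening to your laptop looks roughly like this; the address is a placeholder, and the port is whichever one the laptop actually connects to - in your setup the Logstash Beats input, which defaults to 5044:
sudo ufw allow from 198.51.100.25 to any port 5044 proto tcp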
For authentication, you have to investigate the X-Pack security features.
BR Alexey.

ERR_CONNECTION_RESET from EKS nodes

I had an EC2 server running my existing application. The EC2 instance was in a private subnet, and an ELB was created in a public subnet with access restricted to a particular VPN IP. So whenever I was on the VPN I was able to access my application, and when I was outside that VPN IP I wasn't.
Now I have created an EKS cluster and am deploying my application with kubectl using its Docker image. The weird thing is that the application works fine whenever I am NOT on the VPN (I tweaked the security group to allow all traffic from all IPs), but whenever I am on the VPN I receive "ERR_CONNECTION_RESET" in Chrome, and curl reports an empty response from the server.
So far I have tried the things below. As I am relatively new to EKS, I am not able to find much.
1. Same security group applied - Not resolving
2. Checked logs of all pods - whichever pods I received from "kubectl get po --all-namespaces" - No issues showing up
3. Checked /var/log/messages
4. Tried to change application port
5. kubectl get events is not showing anything about why the server sends back an empty response.
6. SSHed to a node and curled localhost:30080, which works fine, but when I curl from my machine (which is on the VPN), it fails with an empty response.
Please note again that the application runs totally fine when I am outside the VPN. Further, my old application (on EC2) runs fine over the VPN.
Thanks in advance!
Finally found that the issue was with the corporate VPN, which was blocking all ports other than 80 and 443. When I was creating the service, I wanted the ELB to expose port 5000, thinking elb-host:5000 would point to the dev service's nodePort, which was 30080. This worked perfectly when I was NOT on the VPN, but when I connected to the site over the VPN, the corporate network blocked port 5000 on the ELB. After I changed the port to 80, it started working as expected.
While using nginx, it created the ELB with port 80 instead of my intended port 5000. I didn't notice that port change and thought this was happening due to IP blocking.
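For anyone else hitting this, the relevant piece is just the Service port mapping. A rough sketch of the kind of manifest involved (the service name, label, and container port are placeholders; 30080 was the nodePort mentioned above):

apiVersion: v1
kind: Service
metadata:
  name: dev-service              # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app                  # placeholder label
  ports:
    - name: http
      port: 80                   # ELB listener port; 5000 here was what the corporate VPN blocked
      targetPort: 5000           # container port, placeholder
      nodePort: 30080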

Can't connect to Tigase server running on EC2 Instance: Connection Refused

After installing Tigase on an AWS EC2 instance I keep getting the error message 'connection refused' when I try to connect to it using an xmpp client.
The instance is attached to a security group with rules to allow traffic to the necessary ports (tigase needs 5223 primarily and some others for more exotic features). I've also tried it with rules allowing all traffic to all ports from all sources but I still get the same message.
I've also checked iptables, because I noticed some people needed to configure those as well in specific cases; I made sure it allows all connections, but I still can't connect to Tigase.
Yes Tigase is running, there are no relevant errors in the Tigase logs
SSH (port 22) and HTTP (port 80) work fine
Enabling ICMP (ping) on all ports works fine
I've tried several xmpp clients, same problem
I've deleted and recreated instances several times
Re-installed Tigase on fresh instances several times with various configuration options
Tried using the domain name associated with the Elastic IP, the normal IP, and the public DNS directly.
Configured the DNS in the way necessary for Tigase as described here
I've looked everywhere and have not been able to find anything to fix this. Networking isn't my main area of expertise and I'd really appreciate any advice.
Wow, in case anyone runs into the same problem in the future, it turns out that this was related to the AMI. I was using an Amazon Linux AMI and switched to Ubuntu Server 14.04 LTS. I wish I had tried this sooner, but I didn't really consider it a possible solution earlier. Apparently Amazon Linux doesn't play well with Tigase.

Google Compute Engine IIS Webfarm

I'm trying to set up a Win2008R2 IIS webfarm on Google Compute Engine.
I've got the machine set up; however, when I try to add it to a network load balancer pool, the balancer consistently reports the machine as unhealthy - even if I disable health checks. I have a single forwarding rule set up for port 80.
I've tried different size instances in different regions/zones to no avail. Traffic into the load balancer never makes it to my instance, and the instance is always reported as unhealthy.
For the firewall I went ahead and added a blanket rule so 0.0.0.0/0 can access all local network services (ICMP; TCP:1-65535; UDP:1-65535), roughly as shown below, and I've disabled the Windows firewall.
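(Roughly the gcloud equivalent of that blanket rule, for reference; the rule name is arbitrary and the default network is an assumption.)
gcloud compute firewall-rules create allow-everything --network default --source-ranges 0.0.0.0/0 --allow tcp:1-65535,udp:1-65535,icmp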
Anyone have any experience getting this working?
Spoke with Google support. "Known issue with Windows instances - check back in 6 months." In the meantime, use Linux or set up your own NLB within your project.
Strange that it is not working for you. I replicated your situation and I am reaching the machine with no issues. The load balancer is forwarding traffic as expected, and it reaches the system, which is marked as healthy in the LB pool.
You may want to add the following rule in Windows Firewall with Advanced Security (make sure you use the "Advanced Security" console and not the default one):
Inbound rule > New port > port 80
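If you prefer the command line, the rough equivalent from an elevated prompt is (the rule name is arbitrary):
netsh advfirewall firewall add rule name="Open Port 80" dir=in action=allow protocol=TCP localport=80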
Once this is done, from your machine you can curl or telnet to the address while running netstat on the Windows system, and you should see the LB forwarding rule IP making requests:
$ curl IP (locally)
$ netstat (on the windows machine)
Hope this helps!

elasticsearch on EC2 cannot hit public IP (timeout)

I have elasticsearch running on EC2.
I can hit it from the local IP address (e.g. curl -XGET localhost:9200).
I cannot hit it from the public IP address, whether on the same machine or from our network; it always times out.
iptables is allowing it
the port is open (to itself as well as the private network)
Elasticsearch http.cors is enabled and allows "*"
Aside from iptables, the Amazon security config, and the elasticsearch config, could there be anything I am overlooking? (We can access 443 and get Kibana up; it just times out on the elasticsearch AJAX call or if I try to access 9200 directly.)
I've been working on this for over a day, so I humbly come to you all!
Thank you.
I had exactly the same issue.
I managed to solve it as follows:
Do what TJ said in his comment, and restart the instance. I wasn't sure if this was/is necessary, but I did it for good measure.
I made sure that the following is set in the elasticsearch.yml file:
a. http.enabled: true
b. http.cors.enabled: true
c. http.cors.allow-origin: "*"
Restarted elasticsearch (service elasticsearch restart)
Then when I tried to access elasticsearch from the public IP it worked - http://[PUBLIC IP OF INSTANCE]:9200
Hope this helps.
I just spent lots of time trying to get this working and just succeeded.
Setup: Elasticsearch 6.2.4, running on a Windows Server 2012 EC2 instance.
I also installed the discovery-ec2 plugin. I'm not sure now whether it is required; my assumption is that it is, although some of the settings it enables were not necessary to get this working.
Config (.yml): I tried tons of different .yml config settings, which in the end did not help. I think the main setting is:
network.host: 0.0.0.0
I tried setting the network.host to ec2:privateIpv4 and ec2:publicIpv4 (plugin settings) but they didn't help.
I had added the required Custom TCP Rules (allowing 9200 and 9300...not sure if 9300 is needed).
Either it failed to start (usually with a binding to 9300 error) or started but was not publicly accessible.
The fix: what got it working in the end is that you must also open the port in the Windows firewall. As soon as I added the inbound rule, boom, it connected :)
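On Windows Server 2012 that inbound rule can also be added from an elevated PowerShell prompt, roughly like this (the rule name is arbitrary; add a second rule for 9300 if you need the transport port reachable from outside):
New-NetFirewallRule -DisplayName "Elasticsearch HTTP 9200" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 9200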
I then stripped out all the extra configs I had been trying, restarted Elasticsearch... and it still worked!
