How to set up Metricbeat with Amazon Elasticsearch - elasticsearch

I have two servers, one for production and one for test, and I've been trying to install Metricbeat on both.
I set it up on my test server to send metrics to my Amazon Elasticsearch Service domain, and I can now see all the data from that server in Kibana. I followed the same process on my production server, but when I run sudo metricbeat -e it starts and then fails with:
ERROR pipeline/output.go:74 Failed to connect: Get https://amazon-endpoint: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The config for both servers is the same.
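Since this is a client-side timeout rather than an authentication error, the usual suspects are network reachability and the domain's access policy. A minimal first check from the production server, keeping the question's elided amazon-endpoint placeholder in place of the real domain endpoint:
# run on the production server; amazon-endpoint is a placeholder for the real domain endpoint
curl -v --max-time 10 "https://amazon-endpoint/_cluster/health?pretty"
# for reference, the matching output section in metricbeat.yml (a sketch, not the asker's exact config)
output.elasticsearch:
  hosts: ["https://amazon-endpoint:443"]
If the curl also times out, the problem lies in the security group, VPC placement, or access policy of the domain rather than in Metricbeat itself.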

Related

How to expose elasticsearch setup using eck externally

Hi, I would like to expose my Elasticsearch cluster in Kubernetes, created using ECK (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html), so it can be accessed externally.
I have a requirement to set up Functionbeat to ship AWS Lambda CloudWatch logs to Elasticsearch.
Please see Step 2: Connect to the Elastic Stack: https://www.elastic.co/guide/en/beats/functionbeat/current/functionbeat-installation-configuration.html
Attempt:
I have an elastic load balancer running HAProxy, which I use to expose other k8s services externally, such as frontends. I've attempted to modify it to also expose Elasticsearch.
haproxy config:
frontend elasticsearch
    bind *:9200
    acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
    use_backend elasticsearchApp if host_data_elasticsearch

backend elasticsearchApp
    server data-es data-es-es-http:9200 check rise 1 ssl verify none
I'm attempting to connect with the following curl command:
curl -u "elastic:$ELASTIC_PASSWORD" -k "https://elasticsearch.acme.com:9200"
However, I get the following error:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
In the browser, if I navigate to the URL, I get:
This site can’t provide a secure connection
elasticsearch.acme.com sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
Posting this answer as community wiki, based on @Joao Morais' comment:
You added ssl to the server line, which instructs HAProxy to perform SSL offload, but you didn't add SSL to the frontend. You should either remove the ssl+verify from the server line, add ssl to the frontend, or send a plain HTTP request.
Additional information:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number indicates that you are sending an HTTPS request to an endpoint that is answering with plain HTTP.
To access it, replace https: with http: in your curl command, so it looks like this:
curl -u "elastic:$ELASTIC_PASSWORD" -k "http://elasticsearch.acme.com:9200"
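Alternatively, if the external connection must stay on HTTPS, the frontend bind needs a certificate so HAProxy can terminate TLS itself. A sketch, assuming a hypothetical combined certificate/key file at /etc/haproxy/certs/elasticsearch.pem:
frontend elasticsearch
    # terminate TLS here; the .pem bundles the certificate and private key
    bind *:9200 ssl crt /etc/haproxy/certs/elasticsearch.pem
    acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
    use_backend elasticsearchApp if host_data_elasticsearch

backend elasticsearchApp
    # re-encrypt to the ECK-managed service, which serves HTTPS by default
    server data-es data-es-es-http:9200 check rise 1 ssl verify none
With this in place, the original https curl command should work as written.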

How can I troubleshoot/fix an issue interacting with a running Kubernetes pod (timeout error)?

I have two EC2 instances, one running a Kubernetes master node and the other running the worker node. I can successfully create a pod from a deployment file that pulls a docker image, and it starts with a status of "Running". However, when I try to interact with it I get a timeout error.
Ex: kubectl logs <pod-name> -v6
Output:
Config loaded from file /home/ec2-user/.kube/config
GET https://<master-node-ip>:6443/api/v1/namespaces/default/pods/<pod-name> 200 OK in 11 milliseconds
GET https://<master-node-ip>:6443/api/v1/namespaces/default/pods/<pod-name>/log 500 Internal Server Error in 30002 milliseconds
Server response object: [{"status": "Failure", "message": "Get https://<worker-node-ip>:10250/containerLogs/default/<pod-name>/<container-name>: dial tcp <worker-node-ip>:10250: i/o timeout", "code": 500 }]
I can get information about the pod by running kubectl describe pod <pod-name> and confirm the status as Running. Any ideas on how to identify exactly what is causing this error and/or how to fix it?
Probably, you didn't install a network add-on in your Kubernetes cluster. It's not included in a kubeadm installation, but it's required for communication between pods scheduled on different nodes. The most popular add-ons are Calico and Flannel. As you already have a cluster, you may want to choose the network add-on that uses the same subnet as the one you specified with kubeadm init --pod-network-cidr=xx.xx.xx.xx/xx during cluster initialization.
192.168.0.0/16 is the default for the Calico network add-on
10.244.0.0/16 is the default for the Flannel network add-on
You can change it by downloading the corresponding YAML file and replacing the default subnet with the subnet you want, then applying it with kubectl apply -f filename.yaml, as sketched below.
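For example, with Calico (the manifest URL and CIDR handling here are illustrative; use the manifest version that matches your cluster):
# download the Calico manifest
curl -LO https://docs.projectcalico.org/manifests/calico.yaml
# edit the CALICO_IPV4POOL_CIDR value in the manifest to match your --pod-network-cidr, then apply it
kubectl apply -f calico.yaml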

Kafka: client has run out of available brokers to talk to

I'm trying to wrap up changes to our Kafka setup, but I'm in over my head and am having a hard time debugging the issue.
I have multiple servers funneling their Ruby on Rails logs to 1 Kafka broker using Filebeat; from there the logs go to our Logstash server and are then stashed in Elasticsearch. I didn't set up the original system, but I tried taking us down from 3 Kafka servers to 1, as they weren't needed. I updated the IP address configs in these files in our setup to remove the 2 old Kafka servers and restarted the appropriate services.
# main (filebeat)
sudo vi /etc/filebeat/filebeat.yml
sudo service filebeat restart
# kafka
sudo vi /etc/hosts
sudo vi /etc/kafka/config/server.properties
sudo vi /etc/zookeeper/conf/zoo.cfg
sudo vi /etc/filebeat/filebeat.yml
sudo service kafka-server restart
sudo service zookeeper-server restart
sudo service filebeat restart
# elasticsearch
sudo service elasticsearch restart
# logstash
sudo vi /etc/logstash/conf.d/00-input-kafka.conf
sudo service logstash restart
sudo service kibana restart
When I tail the Filebeat logs I see this:
2018-04-23T15:20:05Z WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA *net.OpError=dial tcp 172.16.137.132:9092: getsockopt: connection refused)
2018-04-23T15:20:05Z WARN kafka message: client/metadata no available broker to send metadata request to
2018-04-23T15:20:05Z WARN client/brokers resurrecting 1 dead seed brokers
2018-04-23T15:20:05Z WARN kafka message: Closing Client
2018-04-23T15:20:05Z ERR Kafka connect fails with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
to 1 Kafka broker... I tried taking us down from 3 Kafka servers to 1 as they weren't needed. I updated the IP address configs in these files in our setup to remove the 2 old Kafka servers and restarted the appropriate services
I think you are misunderstanding: Kafka is only a highly available system if you have more than one broker, so the other 2 were needed, even though you may have listed only a single broker in the Logstash config.
Your errors state that the single broker refused the connection, and therefore no logs will be sent to it.
At a minimum, I would recommend 4 brokers and a replication factor of 3 on all your critical topics for a useful Kafka cluster. That way, you can tolerate broker outages as well as distribute the load across your Kafka brokers.
It would also be beneficial to make the partition count a factor of your total number of logging servers, and to key each Kafka message on the application type, for example. That way you are guaranteed log order for those applications; see the sketch below.
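A sketch of both suggestions, with illustrative names (rails-logs, fields.app_name) rather than anything from the original setup:
# create a topic with 3 replicas (older Kafka versions use --zookeeper; newer ones use --bootstrap-server)
kafka-topics.sh --create --zookeeper localhost:2181 --topic rails-logs --partitions 6 --replication-factor 3
# filebeat.yml: key each event by application so one app's logs land in one partition, preserving order
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: "rails-logs"
  key: '%{[fields.app_name]}'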

Kubernetes proxy connection

I am trying to play around with Kubernetes, and specifically the REST API. The steps to connect to the cluster API are listed here. However, I'm stuck on the first step, i.e. running kubectl proxy.
I try running this:
kubectl --context='vagrant' proxy --port=8080 &
which returns error: couldn't read version from server: Get https://172.17.4.99:443/api: dial tcp 172.17.4.99:443: i/o timeout
What does this mean? How do I overcome it and connect to the API?
Check that your docker, proxy, kube-apiserver, and kube-controller-manager services are running without errors. Check their status using systemctl status your-service-name. If a service is loaded but not running, restart it with systemctl restart your-service-name, as in the sketch below.
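A quick loop over the services named above (unit names vary by installation method, and on kubeadm clusters most control-plane components run as static pods rather than systemd units):
# check each service; restart any that are loaded but inactive
for svc in docker kube-proxy kube-apiserver kube-controller-manager; do
  systemctl status "$svc" --no-pager | head -n 3
done
sudo systemctl restart kube-apiserver   # example restart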

How to setup Elasticsearch server on remote Ubuntu server?

I have purchased a 1 GB Ubuntu server to deploy my Elasticsearch application.
I followed the guide below to deploy Elasticsearch server.
Link to guide
Now whenever I try to access the Elasticsearch server using a curl command, I get the following error:
curl: (7) Failed to connect to 0.0.0.0 port 9200: Connection refused
Here is the curl command I tried:
curl -XGET '0.0.0.0:9200/?pretty'
Which step could I have missed, or is not shown in the guide?
Thank you
Is your elasticsearch service running?
Check with the following command:
systemctl status elasticsearch
If it is not running, try to start it with:
systemctl start elasticsearch
After a few minutes, check whether it is still running or has crashed by running systemctl status elasticsearch again. If it has crashed, please add more details to your question; the journal sketch below is a good place to pull them from.
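A minimal way to collect those details, assuming the standard systemd unit name from the Debian/Ubuntu packages:
# show the last 50 log lines from the elasticsearch unit
sudo journalctl -u elasticsearch --no-pager -n 50
On a 1 GB server, the default JVM heap size is a common cause of startup crashes, so the heap settings in /etc/elasticsearch/jvm.options are also worth checking.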
