Do we need to have Metricbeat installed in the remote Prometheus cluster to pull Prometheus data to the ELK cluster using the Metricbeat Prometheus module?

Reference:
Configuring Metricbeat
Metricbeat Prometheus Module
From the second link, I got the following Metricbeat Prometheus module configuration:
- module: prometheus
  period: 10s
  hosts: ["localhost:9090"]
  metricsets: ["query"]
  queries:
    - name: 'up'
      path: '/api/v1/query'
      params:
        query: "up"
Regarding my use case, I want to pull data from a remote Prometheus host, which is outside my network, into my ELK cluster using Metricbeat Prometheus queries.
To that end, I added my remote Prometheus host name to the hosts section of the above config file for the Metricbeat Prometheus module.
Now my question: do we need to install Metricbeat on the remote Prometheus cluster as well to pull the data (Ref: Configuring Metricbeat), or is adding the remote Prometheus host name to the hosts section of the Metricbeat configuration enough to do the trick?

You are not required to configure Metricbeat on the remote Prometheus host as well. You can use the same configuration you have given in the question. But you cannot give localhost:9090, as you are not running Metricbeat on the same host where Prometheus is running. Hence, update the configuration to something like prometheus_ip:9090.
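For example, a minimal sketch of the updated module configuration, assuming the remote Prometheus is reachable at prometheus.example.com:9090 (a placeholder name):

- module: prometheus
  period: 10s
  # Placeholder host: replace with your actual remote Prometheus host and port.
  hosts: ["prometheus.example.com:9090"]
  metricsets: ["query"]
  queries:
    - name: 'up'
      path: '/api/v1/query'
      params:
        query: "up"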
Also, you need to make sure that connectivity is allowed between the host where you have installed Metricbeat and the host where you are running Prometheus.
You can use Elastic Agent and Fleet as well, instead of Metricbeat, because they provide centralized configuration management and are easy to configure. You can read more about Elastic Agent and Fleet here; they provide a Prometheus integration.

Related

Elastic.co APM with gke network policies

I have GKE clusters, and I have Elasticsearch deployments on elastic.co. On my GKE cluster I have network policies for each pod with egress and ingress rules. My issue is that in order to use Elastic APM I need to allow egress to my Elastic deployment.
Does anyone have an idea how to do that? I am thinking of either a list of elastic.co IPs for the GCP instances, so I can whitelist them in my egress rules, or some kind of proxy between my GKE cluster and Elastic APM.
I know a solution could be to run a local Elastic cluster on GCP, but I would prefer not to go this way.
Regarding the possibility of using some kind of proxy between your GKE cluster and Elastic APM: you can check the following link [1] to see if it fits your needs.
[1] https://cloud.google.com/vpc/docs/special-configurations#proxyvm
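If you go the IP-whitelisting route instead, a minimal egress NetworkPolicy sketch could look like the following; the pod label and the CIDR are placeholders, since Elastic Cloud IPs are specific to your deployment and region:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-elastic-apm
spec:
  podSelector:
    matchLabels:
      app: my-app                 # placeholder: the pods that ship APM data
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # placeholder: IP range of your Elastic deployment
      ports:
        - protocol: TCP
          port: 443               # HTTPS endpoint of the Elastic APM deployment

Note that with a default-deny egress policy you would also need a rule allowing DNS traffic to kube-dns, or name resolution of the Elastic endpoint will fail.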

APM Server has still not connected to Elasticsearch

I have installed Elasticsearch, Kibana and Logstash (all version 7.5.2).
After this I installed APM Server (7.5.2) and configured apm-server.yml with Elasticsearch as the output, with X-Pack monitoring as well. The apm-server service is running fine. I am able to see the APM application in the Kibana Stack Monitoring page as well.
But when I go to https://kibana_server:5601/app/kibana#/home/tutorial/apm and click on Check APM Server status, I get the following:
APM Server has still not connected to Elasticsearch
apm-server.yml
apm-server:
  host: "lxapm1001:8200"
output.elasticsearch:
  hosts: ["http://lxecs2001:9200"]
  username: apm_system
  password: "${ES_PWD}"
monitoring.enabled: true
monitoring.elasticsearch:
Has anyone faced a similar issue? Please advise. Let me know if any additional details are required.

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine with one master and, at the moment, zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (from the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but without success. The way I am setting up the monitoring is:
Open grafana-server at port 3000 and install the Kubernetes app.
Install stable/prometheus from Helm, using this custom YAML file I found in another guide.
Add the Prometheus data source to Grafana with the IP of the Kubernetes Prometheus service (or pod; I tried both and both work) and use TLS Client Auth.
Start the proxy with kubectl proxy.
Fill in all the information needed in the Kubernetes Grafana app and then deploy it. No errors.
All Kubernetes metrics show, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does someone have a clue what I am doing wrong?

tunnel or proxy from app in one kubernetes cluster (local/minikube) to a database inside a different kubernetes cluster (on Google Container Engine)

I have a large read-only Elasticsearch database running in a Kubernetes cluster on Google Container Engine, and am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud Elasticsearch instance so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fall-back is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service like
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
If you need to expose the Elasticsearch database, there are two ways of exposing applications outside the cluster:
Creating a Service of type LoadBalancer, which would load balance the traffic for all instances of your Elasticsearch database (see the sketch after this list). Once the load balancer is created on GKE, just add the load balancer's DNS name as the value for the elastic-db ExternalName created above.
Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster. Use that IP as the ExternalName for the elastic-db created above.
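As a rough sketch of the LoadBalancer option, assuming the Elasticsearch pods carry a label such as app: elasticsearch (a placeholder):

kind: Service
apiVersion: v1
metadata:
  name: elastic-db-lb
  namespace: prod
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch   # placeholder: the label on your Elasticsearch pods
  ports:
    - port: 9200         # Elasticsearch HTTP port
      targetPort: 9200

Since the database must not be publicly visible, the safer variant on GKE is an internal load balancer (the cloud.google.com/load-balancer-type: "Internal" annotation), which keeps the endpoint inside the VPC.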

Packetbeat dashboard for Application logs

Can Packetbeat be used to monitor Tomcat server logs and Windows logs, or will it only monitor the database, i.e., network monitoring?
Packetbeat only does network monitoring, but you can use it together with Logstash or Logstash Forwarder to also get visibility into your logs.
It will only do network monitoring. You can use the ELK stack for Tomcat server logs.
#tsg is correct, but with the Beats 1.x release, Logstash Forwarder is deprecated in favor of another Beat called Filebeat. They also added Topbeat, which allows you to monitor server load and processes in your cluster.
See:
* https://www.elastic.co/blog/beats-1-0-0
You will likely want to install the package repo for your OS, then install each with:
{package manager cmd} install packetbeat
{package manager cmd} install topbeat
{package manager cmd} install filebeat
They are each installed in common directories. For example, on Ubuntu (Linux) the config files are in /etc/<beat name>/<beat name>.yml, where beat name is one of the three above. Each file is similar: you can disable the direct Elasticsearch export and instead export to Logstash (comment out the Elasticsearch output and uncomment the Logstash output), then add a beats input to your Logstash config. From there on, Logstash listens for any Beats on that port and can redistribute (or queue) events, using the [@metadata][beat] field to tell where each one came from.
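As a sketch of that switch for Filebeat, using the Beats 1.x config layout mentioned above (the log paths and Logstash host are placeholders, and the exact keys differ in later Beats versions):

filebeat:
  prospectors:
    - paths:
        - /var/log/*.log                       # placeholder: logs to ship
output:
  # elasticsearch:                             # direct ES export, commented out
  #   hosts: ["localhost:9200"]
  logstash:                                    # export to Logstash instead
    hosts: ["logstash.example.com:5044"]       # placeholder host:port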
Libbeat also provides a framework to build your own Beat, so you can send any data you want to Logstash, and it can queue and/or index it. ;-)
Packetbeat is mainly used for network analysis. It currently supports the following protocols:
ICMP (v4 and v6)
DNS
HTTP
MySQL
PostgreSQL
Redis
Thrift-RPC
MongoDB
Memcache
However, for visualizing Tomcat logs, you can configure Tomcat to use log4j, configure Logstash to take input from log4j, and then use Elasticsearch and Kibana to visualize the logs.
To monitor Windows logs you can use another Beat, Winlogbeat.
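A minimal winlogbeat.yml sketch in the same era's config layout, assuming a local Elasticsearch (the event log names and host are assumptions to adapt):

winlogbeat:
  event_logs:
    - name: Application          # Windows Application event log
    - name: System               # Windows System event log
output:
  elasticsearch:
    hosts: ["localhost:9200"]    # placeholder: your Elasticsearch host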
