Elastic.co APM with GKE network policies - elasticsearch

I have GKE clusters, and I have Elasticsearch deployments on elastic.co. On my GKE cluster I have network policies for each pod, with egress and ingress rules. My issue is that in order to use Elastic APM I need to allow egress to my Elastic deployment.
Does anyone have an idea how to do that? I am thinking of either a list of IPs for elastic.co that I could whitelist in the egress rules on my GCP instances, or some kind of proxy between my GKE cluster and Elastic APM.
I know a solution would be to run a local Elasticsearch cluster on GCP, but I would prefer not to go that way.

Regarding the possibility of using some kind of proxy between your GKE cluster and Elastic APM: you can check the following link [1] to see if it fits your needs.
[1] https://cloud.google.com/vpc/docs/special-configurations#proxyvm
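
For illustration, here is a minimal sketch of what the egress rule could look like once you have a stable address to allow, e.g. a proxy VM as described in [1]. The policy name, pod label, and proxy IP are assumptions, not values from your environment:

```yaml
# Hypothetical NetworkPolicy: allow instrumented pods to reach a proxy VM
# (assumed internal IP 10.128.0.10) that forwards traffic to Elastic APM.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-apm-proxy
spec:
  podSelector:
    matchLabels:
      app: my-instrumented-app      # assumed pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.0.10/32    # the proxy VM, not elastic.co itself
      ports:
        - protocol: TCP
          port: 8200                # default APM Server port
```

Keep in mind that if the pods resolve the proxy by hostname, they will also need an egress rule allowing DNS (port 53 to kube-dns).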

Related

Receiving logs from Filebeat to ECK on GKE

I built a cluster on GKE with the ECK operator and am trying to send logs from an on-premises Filebeat installation to the cloud.
Elasticsearch has a LoadBalancer IP. I specified the certificate, password, and other necessary settings, but I couldn't make it work. Is there a tutorial?
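
For reference, a minimal sketch of the filebeat.yml output section for this kind of setup; the host IP, CA path, and password variable are assumptions (ECK puts the elastic user's password in the <cluster>-es-elastic-user secret and the CA in <cluster>-es-http-certs-public):

```yaml
# Sketch: ship logs to an ECK-managed Elasticsearch exposed via a LoadBalancer.
output.elasticsearch:
  hosts: ["https://203.0.113.10:9200"]   # LoadBalancer external IP (assumed)
  username: "elastic"
  password: "${ELASTIC_PASSWORD}"        # from the <cluster>-es-elastic-user secret
  ssl:
    certificate_authorities: ["/etc/filebeat/ca.crt"]   # extracted from <cluster>-es-http-certs-public
```

One common pitfall with this setup is that the default self-signed certificate may not include the LoadBalancer's external IP in its subject alternative names, in which case TLS verification fails until the SAN list is extended.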

Standard way to monitor ingress traffic in K8s or EKS

Is there a standard way to monitor the traffic of a K8s Ingress? We are trying to extract metrics like:
Requests per second
HTTP errors
Response time
etc...
Specific Environment
AWS EKS
Nginx ingress
AWS Elasticsearch Service [store and search metrics]
Kibana and Easy Alert [dashboards and alerting]
Solutions tried
Service mesh like Istio (we do not want the heaviness of Istio; service-to-service traffic is very low)
Custom nginx solution (https://sysdig.com/blog/monitor-nginx-kubernetes/)
Looking for something generic to K8s. Any hints?
The ingress-nginx project has a monitoring guide that describes how to send basic statistics to Prometheus, which can then be viewed in Grafana. If your cluster already has these tools installed, then it is just a matter of configuring Prometheus to scrape the nginx ingress controller pods.
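As an illustration (not taken from the guide itself), a Prometheus scrape job for the controller pods might look like the following sketch; the pod label and the 10254 metrics port assume a standard ingress-nginx installation:

```yaml
# Sketch: scrape ingress-nginx controller pods on their metrics port.
scrape_configs:
  - job_name: ingress-nginx
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods carrying the controller's app label (assumed).
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        regex: ingress-nginx
        action: keep
      # Point the scrape target at the controller's metrics port.
      - source_labels: [__address__]
        regex: ([^:]+)(?::\d+)?
        replacement: $1:10254
        target_label: __address__
```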
First of all, there is no "standard way" to monitor a cluster. People and companies have different needs, so you should implement the solution that fits yours best.
If you are managing all ingress traffic through the Nginx ingress controller, then you can implement Nginx-based solutions. That said, Linkerd is also a great tool to monitor and manage your network stack, especially if it is simple. It has a dashboard where you can check all of your requirements, and Linkerd's components are not as heavy as Istio's.

aws-alb-ingress metrics on EKS?

I'm using the official aws-alb-ingress-controller for ingress + load balancing to my services hosted in an EKS cluster.
Does this offer metrics of any kind, preferably Prometheus metrics, to show things like traffic volume, etc.?
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
I don't see any mention of metrics in the docs, but metrics seem like a necessary part of any production load balancer.
Each AWS ALB Ingress Controller pod exposes Prometheus /metrics on the same port (10254) where it responds to /healthz checks. Both endpoints are currently served by the same mux.
This is the closest option, AFAIK:
https://github.com/prometheus/cloudwatch_exporter
The AWS ALB ingress controller is integrated with CloudWatch and provides various metrics there. In CloudWatch you can do monitoring and alerting based on these metrics.
If your system uses Prometheus for this, you can use an exporter to pull the metrics into Prometheus. Another possible exporter would be YACE (Yet Another CloudWatch Exporter).
Here you can find an article from the AWS open source blog on how to set up an ELB with Prometheus metrics and Grafana on top with a custom dashboard. The configuration for an ALB is pretty similar. Here you have the information on how to achieve this for an ALB.
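
For instance, a minimal configuration for the CloudWatch exporter linked above could look like this sketch; the region and the selection of metrics are assumptions:

```yaml
# Sketch: export a few AWS/ApplicationELB CloudWatch metrics to Prometheus.
region: us-east-1
metrics:
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: RequestCount
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_Target_5XX_Count
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: TargetResponseTime
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Average]
```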

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine, with one master and at the moment zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (from the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but this is not working. The way I am setting up the monitoring is:
Open grafana-server on port 3000 and install the Kubernetes app.
Install stable/prometheus from Helm, using a custom YAML file I found in another guide.
Add the Prometheus data source to Grafana with the IP from the Kubernetes Prometheus service (or pods; I tried both and both work) and use TLS Client Auth.
Start the proxy with kubectl proxy.
Fill in all the information needed in the Kubernetes Grafana app and then deploy it. No errors.
All Kubernetes metrics show, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does anyone have a clue what I am doing wrong?
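
For comparison, here is a minimal sketch of a Grafana data source provisioning file that points at the in-cluster Prometheus service with server-side access, so Grafana's backend queries Prometheus directly instead of going through a kubectl proxy on the workstation; the service name and namespace are assumptions for a default stable/prometheus install:

```yaml
# Sketch: provision the Prometheus data source with server-side ("proxy") access.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy    # Grafana's backend makes the request, not the browser
    url: http://prometheus-server.monitoring.svc.cluster.local
    isDefault: true
```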

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IPs are hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes).
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want access logs, you need either option 2 or 3, but all of the solutions above will allow you to secure your ES environment.
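
As a concrete illustration of option 2, Shield's audit trail can record who is hitting the cluster, including the origin address of each request. A sketch of the relevant elasticsearch.yml settings, to be verified against your Shield version:

```yaml
# Sketch: enable Shield's audit trail in elasticsearch.yml.
# Audit events include the origin address of each request.
shield.audit.enabled: true
shield.audit.outputs: [logfile, index]   # write to the audit log file and/or an ES index
```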
