aws-alb-ingress metrics on eks?

I'm using the official aws-alb-ingress-controller for ingress + load balancing to my services hosted in an EKS cluster.
Does this offer metrics of any kind, preferably Prometheus metrics, to show things like request volume, etc.?
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
I don't see any mention of metrics in the docs, but metrics seem like a necessary part of any production load balancer.

Each AWS ALB Ingress Controller pod exposes Prometheus /metrics on port 10254, the same port where it responds to /healthz checks. Both endpoints are currently served by the same mux.
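A quick way to verify this, as a sketch (the deployment name and namespace below are assumptions and depend on how the controller was installed):

# Port-forward to the controller and inspect its Prometheus metrics
# (deployment/namespace names are assumptions; adjust to your install)
kubectl port-forward deployment/alb-ingress-controller 10254:10254 \
  --namespace=kube-system &
# Both endpoints are served on the same port
curl -s http://localhost:10254/healthz
curl -s http://localhost:10254/metrics | head

Point your Prometheus scrape configuration (or a ServiceMonitor, if you use the Prometheus Operator) at that port on each controller pod.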

This is the closest option, AFAIK:
https://github.com/prometheus/cloudwatch_exporter

The ALBs created by the AWS ALB ingress controller are integrated with CloudWatch and publish various metrics, which you can use for monitoring and alerting in CloudWatch.
If your system uses Prometheus for this, you can use an exporter to pull those metrics into Prometheus. Another possible exporter would be YACE (Yet Another CloudWatch Exporter).
Here you can find an article from the AWS open source blog on how to set up an ELB with Prometheus metrics and Grafana on top with a custom dashboard. The configuration for an ALB is pretty similar, and there is also guidance on how to achieve this for an ALB.
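As a rough sketch of the cloudwatch_exporter route (the region, metric selection, and file names are assumptions; ALB metrics live in the AWS/ApplicationELB CloudWatch namespace):

# Minimal cloudwatch_exporter config pulling two ALB metrics (values are assumptions)
cat > alb-cloudwatch.yml <<'EOF'
region: us-east-1
metrics:
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: RequestCount
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: TargetResponseTime
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Average]
EOF
# Run the exporter; it exposes Prometheus metrics on port 9106 by default
docker run -d -p 9106:9106 \
  -v "$PWD/alb-cloudwatch.yml:/config/config.yml" \
  prom/cloudwatch-exporter

Prometheus then scrapes the exporter on :9106 like any other target; YACE follows the same pattern with its own configuration format.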

Related

Access GCP Managed Prometheus metrics from Grafana on Windows

I have installed Grafana (running at localhost:3000) and Prometheus (running at localhost:9090) on Windows 10, and am able to add the latter as a valid data source to the former. However, I want to create Grafana dashboards for data from Google's Managed Prometheus service. How do I add Google's Managed Prometheus as a data source in Grafana, running on Windows 10? Is there a way to accomplish this purely with native Windows binaries, without using Linux binaries via Docker?
I've not done this myself yet.
I'm also using Google's (very good) Managed Service for Prometheus.
It's reasonably well documented under Managed Prometheus: Grafana.
There's an important caveat under Authenticating Google APIs: "Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI as an authentication proxy."
Step #1: use the Prometheus UI
The Prometheus UI is deployed to a GKE cluster and so, if you want to use it remotely, you have a couple of options:
Hacky: port-forward
Better: expose it as a service
Step #2: Hacky
NAMESPACE="..." # Where you deployed the Prometheus UI
PORT="9090"     # Local port to forward to (any free local port works)
kubectl port-forward deployment/frontend \
  --namespace=${NAMESPACE} \
  ${PORT}:9090
Step #3: From the host where you're running the port-forward, you should now be able to configure Grafana to use the Prometheus UI datasource on http://localhost:${PORT}. localhost because it's port-forwarding to your (local)host and ${PORT} because that's the port it's using.
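For the "better" option of exposing it as a Service, a minimal sketch (the Service name below is an assumption; the Deployment name and port match the port-forward above):

# Expose the Prometheus UI frontend as an in-cluster Service
kubectl expose deployment frontend \
  --namespace=${NAMESPACE} \
  --name=prometheus-ui \
  --port=9090 \
  --target-port=9090
# A Grafana instance running in the cluster can then use
# http://prometheus-ui.<namespace>.svc:9090 as its Prometheus data source URL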
Now we can connect GCP Managed Prometheus directly from Grafana using a service account. This feature is available from Grafana version 9.1.x.
I have tested GMP with standalone Grafana on GKE and it is working as expected.
https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/google-authentication/

Elastic.co APM with gke network policies

I have gke clusters, and I have elasticsearch deployments on elastic.co. Now on my gke cluster I have network policies for each pod with egress and ingress rules. My issue is that in order to use elastic APM I need to allow egress to my elastic deployment.
Does anyone have an idea how to do that? I am thinking of either a list of IPs for elastic.co so that I can whitelist them in my egress rules, or some kind of proxy between my GKE cluster and Elastic APM.
I know a solution can be to have a local elastic cluster on gcp, but I prefer not to go this way.
Regarding the possibility of using some kind of proxy between your GKE cluster and Elastic APM: you can check the following link [1] to see if it fits your needs.
[1] https://cloud.google.com/vpc/docs/special-configurations#proxyvm
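If you go the whitelist route instead, the shape of the egress rule would look roughly like this. It is only a sketch; the namespace, labels, CIDR, and port are placeholders rather than Elastic's real addresses:

# Allow egress from instrumented pods to the APM endpoint
# (namespace, labels, CIDR, and port are placeholders/assumptions)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-elastic-apm
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder; substitute the deployment's IP ranges
      ports:
        - protocol: TCP
          port: 443
EOF

The catch, as the question notes, is keeping that CIDR list current, which is why a fixed proxy address is attractive.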

Standard way to monitor ingress traffic in K8 or EKS

Is there a standard way to monitor the traffic of a Kubernetes Ingress? We were trying to extract metrics like
Requests per second
HTTP errors
Response time
etc...
Specific Environment
AWS EKS
Nginx ingress
AWS Elasticsearch Service [store and search metrics]
Kibana and Easy Alert [Dashboard and alerting]
Solutions tried
A service mesh like Istio (we do not want the heaviness of Istio; service-to-service traffic is very low)
Custom nginx solution (https://sysdig.com/blog/monitor-nginx-kubernetes/)
Looking for something generic to Kubernetes. Any hints?
The ingress-nginx project has a monitoring guide that describes how to send basic statistics to Prometheus, which can then be viewed in Grafana. If your cluster already has these tools installed, then it is just a matter of configuring Prometheus to scrape the NGINX ingress controller pods.
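A sketch of that approach with the ingress-nginx Helm chart (release and namespace names and the values below are assumptions and vary by chart version):

# Enable the controller's Prometheus metrics endpoint via Helm values
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.metrics.enabled=true \
  --set controller.metrics.serviceMonitor.enabled=true  # only if the Prometheus Operator is installed

The exposed metrics include request counts with status-code labels and request-duration histograms, which covers the requests-per-second, error, and response-time requirements above.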
First of all, there is no "standard way" to monitor a cluster. People and companies have different needs, so you should implement the solution that fits yours best.
If you are managing all ingress traffic through the NGINX ingress controller, then you can implement NGINX-based solutions. That said, Linkerd is also a great tool to monitor and manage your network stack, especially if it's simple. There is a dashboard where you can check all of your requirements, and Linkerd's components are not as heavy as Istio's.
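For reference, a minimal sketch of the Linkerd route (CLI commands; the exact flow varies by Linkerd version, and the namespace is a placeholder):

# Install Linkerd and its viz extension (metrics + dashboard)
linkerd install | kubectl apply -f -
linkerd viz install | kubectl apply -f -
# Golden metrics per workload: request rate, success rate, latency
linkerd viz stat deploy -n my-namespace
# Web dashboard backed by the extension's bundled Prometheus
linkerd viz dashboard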

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine with one master and, at the moment, zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (from the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but this is not working. The way I am setting up the monitoring is:
Open grafana-server at port 3000 and install the Kubernetes app.
Install stable/prometheus from helm and using this custom YAML file I found in another guide.
Adding the Prometheus data source to Grafana with the IP from the Kubernetes Prometheus service (or pods; tried both and both work) and using TLS Client Auth.
Starting proxy port with kubectl proxy
Filling in all information needed in the Kubernetes Grafana app and then deploy it. No errors.
All the Kubernetes metrics show, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP when kubectl proxy is running. Does anyone have a clue what I am doing wrong?
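For reference, the steps above correspond roughly to the following commands (a sketch only; the plugin, chart, and values file names are assumptions, using the Helm v2 syntax that the stable/prometheus chart dates from):

# Install the Grafana Kubernetes app plugin (Grafana already running on :3000)
grafana-cli plugins install grafana-kubernetes-app
# Install Prometheus from the (then-current) stable repo with the custom values file
helm install stable/prometheus --name prometheus -f custom-values.yaml
# Proxy used while filling in the cluster details in the Kubernetes app
kubectl proxy --port=8001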

how to add google compute instances behind aws elb

I deployed web instances on Google Compute Engine and am currently managing load balancing with an Nginx load balancer, but I want it to be handled by Amazon's ELB.
Can somebody tell me how we could do it?
Thanks
This is not possible.
AWS ELB only works for load-balancing traffic among EC2 instances.
Please refer to the following AWS documentation for more details:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elastic-load-balancing.html
Also, I am not clear why you would not use Google Cloud's load-balancing capability for your Google Compute Engine instances. You should avoid the additional hops between networks when routing traffic from one network to the other.
Please refer to the following docs for more information:
https://cloud.google.com/compute/docs/load-balancing-and-autoscaling#policies
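A minimal sketch of what that looks like with a Google Cloud network load balancer (instance names, region, and zone below are placeholders):

# Group the web instances behind a target pool
gcloud compute target-pools create web-pool --region=us-central1
gcloud compute target-pools add-instances web-pool \
  --instances=web-1,web-2 --instances-zone=us-central1-a
# External forwarding rule sending port 80 traffic to the pool
gcloud compute forwarding-rules create web-lb \
  --region=us-central1 --ports=80 --target-pool=web-pool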
