I have installed Grafana (running at localhost:3000) and Prometheus (running at localhost:9090) on Windows 10, and am able to add the latter as a valid data source to the former. However, I want to create Grafana dashboards for data from Google's Managed Prometheus service. How do I add Google's Managed Prometheus as a data source in Grafana, running on Windows 10? Is there a way to accomplish this purely with native Windows binaries, without using Linux binaries via Docker?
I've not done this myself yet.
I'm also using Google's (very good) Managed Service for Prometheus.
It's reasonably well documented under Managed Prometheus: Grafana.
There's an important caveat under Authenticating Google APIs: "Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI as an authentication proxy."
Step #1: use the Prometheus UI
The Prometheus UI is deployed to a GKE cluster and so, if you want to use it remotely, you have a couple of options:
Hacky: port-forward
Better: expose it as a Service (a sketch follows Step #3 below)
Step #2: Hacky
NAMESPACE="..." # Where you deployed the Prometheus UI
PORT="..."      # Local port you want Grafana to use

kubectl port-forward deployment/frontend \
  --namespace=${NAMESPACE} \
  ${PORT}:9090
Step #3: From the host where you're running the port-forward, you should now be able to configure Grafana to use the Prometheus UI data source at http://localhost:${PORT}. It's localhost because the port-forward terminates on your (local) host, and ${PORT} because that's the local port you chose.
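For the "better" option, here's a minimal sketch of exposing the frontend as an in-cluster Service. It assumes the Deployment is named frontend (as in the port-forward above) and that its Pods carry an app: frontend label; verify the actual labels with kubectl get deployment frontend --output=yaml.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ...   # same namespace as the Prometheus UI deployment
spec:
  selector:
    app: frontend  # assumed label; check against your deployment
  ports:
    - port: 9090
      targetPort: 9090

Grafana could then use http://frontend.<namespace>.svc:9090 from inside the cluster; if Grafana runs elsewhere, you'd need type: LoadBalancer or an Ingress in front (and keep in mind the Prometheus UI has no authentication of its own).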
Grafana can now connect to GCP Managed Service for Prometheus directly using a service account; the feature is available from Grafana version 9.1.x.
I have tested GMP with a standalone Grafana instance on GKE, and it works as expected.
https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/google-authentication/
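A hedged provisioning sketch of the JWT (service account key) flow described in the linked doc; the data source type and jsonData keys follow Grafana's provisioning docs for Google Cloud Monitoring, and every value below is a placeholder to replace with your own project and service account details:

apiVersion: 1
datasources:
  - name: Google Cloud Monitoring
    type: stackdriver
    access: proxy
    jsonData:
      authenticationType: jwt
      defaultProject: my-project                               # placeholder
      tokenUri: https://oauth2.googleapis.com/token
      clientEmail: grafana@my-project.iam.gserviceaccount.com  # placeholder
    secureJsonData:
      privateKey: |
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----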
Reference:
Configuring Metricbeat
Metricbeat Prometheus Module
From the second link, the Metricbeat Prometheus module configuration is as follows:
- module: prometheus
  period: 10s
  hosts: ["localhost:9090"]
  metricsets: ["query"]
  queries:
    - name: 'up'
      path: '/api/v1/query'
      params:
        query: "up"
For my use case, I want to pull data from a remote Prometheus host that is outside my network into my ELK cluster using Metricbeat Prometheus queries.
To that end, I added my remote Prometheus host name to the hosts section of the above Metricbeat Prometheus module configuration.
My question is: do we need to install Metricbeat on the remote Prometheus cluster as well to pull the data (Ref: Configuring Metricbeat), or is adding the remote Prometheus host name to the hosts section of the Metricbeat configuration enough to do the trick?
You are not required to configure Metricbeat again on the remote Prometheus host; you can use the same configuration you have given in the question. But you cannot use localhost:9090, since Metricbeat is not running on the same host as Prometheus. Instead, update the configuration to something like prometheus_ip:9090.
Also, you need to make sure that connectivity is allowed between the host where you have installed Metricbeat and the host where Prometheus is running.
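A quick way to verify that from the Metricbeat host is to query the Prometheus HTTP API directly (prometheus_ip is a placeholder for your actual host):

curl -s "http://prometheus_ip:9090/api/v1/query?query=up"

If this returns JSON with "status":"success", Metricbeat should be able to reach Prometheus as well.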
You can also use Elastic Agent and Fleet instead of Metricbeat, as they provide centralized configuration management and are easy to configure. You can read more about Elastic Agent and Fleet in the Elastic documentation; they also provide a Prometheus integration.
I built a cluster on GKE with the ECK operator and am trying to send logs from an on-premises Filebeat installation to the cloud.
Elasticsearch has a LoadBalancer IP. I specified the certificate, the password, and the other necessary settings, but I couldn't make it work. Is there a tutorial?
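For context, the on-premises side of such a setup would look roughly like the sketch below; the LoadBalancer IP, password, and CA path are placeholders. With ECK, the password and CA certificate can typically be extracted from the <cluster-name>-es-elastic-user and <cluster-name>-es-http-certs-public secrets:

filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/*.log                                  # whatever you want to ship

output.elasticsearch:
  hosts: ["https://<LOADBALANCER_IP>:9200"]
  username: "elastic"
  password: "<PASSWORD>"                                 # from the elastic-user secret
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]  # CA copied from the certs secret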
I am using the Helm Go client to install the release. The link is:
https://github.com/helm/helm/tree/55193089e67207363e6f448054113b5a36649d74/pkg/helm
What I didn't get is how I can give my kubeconfig file to this Go client so that it can communicate with the Kubernetes cluster. I went through the documentation, but it doesn't have details on this.
Is there any other way to connect to the cluster from the Helm Go client?
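Note that the linked pkg/helm package is the old Helm 2 client, which talks to Tiller. With the current Helm 3 Go SDK (helm.sh/helm/v3), the kubeconfig is passed through cli.EnvSettings; here is a minimal sketch, with the kubeconfig path and namespace as placeholders:

package main

import (
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	// EnvSettings honours the KUBECONFIG/HELM_* env vars; the field can
	// also be set explicitly to point at a specific kubeconfig file.
	settings := cli.New()
	settings.KubeConfig = "/path/to/kubeconfig" // placeholder

	// action.Configuration is the entry point for install/upgrade/list etc.
	actionConfig := new(action.Configuration)
	if err := actionConfig.Init(settings.RESTClientGetter(), "default",
		os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
		log.Fatal(err)
	}

	// Sanity check: list releases in the namespace.
	releases, err := action.NewList(actionConfig).Run()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range releases {
		log.Printf("release: %s (namespace %s)", r.Name, r.Namespace)
	}
}

For an install, you would then build an action.NewInstall(actionConfig), load a chart with the chart loader package, and call Run on it.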
I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect?
I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example.
The service/load balancer spec is similar to this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "guestbook",
    "labels": {
      "app": "guestbook"
    }
  },
  "spec": {
    "ports": [
      {
        "port": 3000,
        "targetPort": "http-server"
      }
    ],
    "selector": {
      "app": "guestbook"
    },
    "type": "LoadBalancer"
  }
}
As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2.
Kubernetes Info
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods
NAME            READY     STATUS    RESTARTS   AGE
busybox-sleep   1/1       Running   0          18m
web-s0s5w       1/1       Running   0          12h
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   10.3.0.1     <none>        443/TCP   <none>     1d
web          10.3.0.171
Here is the primary debugging document for Services:
http://kubernetes.io/docs/user-guide/debugging-services/
LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead).
Both Google and Amazon support external load balancers.
Overall, when asking these questions it's extremely helpful to know whether you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, DigitalOcean, Vagrant, or something else, because the answer depends on that. Showing all your configs and all your existing Kubernetes resources (kubectl get pods, kubectl get services), along with your Dockerfiles or the images you are using, will also help.
For Google (GKE or GCE), you would verify the load balancer exists:
gcloud compute forwarding-rules list
The external load balancer will map port 80 to an arbitrary node; the Kubernetes proxy on that node then maps the request to an ephemeral port on the node that actually has a Pod with the matching label, and from there to the container port. So you have to figure out which step along the way isn't working.

Unfortunately, all those kube-proxy and iptables jumps are quite difficult to follow, so I would usually first double-check that all my Pods exist and have labels that match the selector of the Service, that my container is exposing the right port, that I am using the right name for the port, and so on. You might also want to create some other Pods that just make calls to the Service (using the environment variables or KubeDNS; see the Kubernetes service documentation if you don't know what I'm referring to) and verify it's accessible internally before debugging the load balancer.
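A quick sketch of such an internal check, assuming cluster DNS and a reasonably recent kubectl (the service name and port come from the spec above):

kubectl run -it --rm debug --image=busybox --restart=Never -- \
  wget -qO- http://guestbook:3000

If that returns your app's response, the Service and Pods are fine and the problem is in the external load balancer layer.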
Some other good debugging steps:
Verify that your Kubernetes Service exists:
kubectl get services
kubectl get pods
Check the logs of your pod:
kubectl logs <pod name>
Check that your Service is created internally by printing the environment variable for it:
kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST
Try creating a new pod and see if the Service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT.
kubectl describe pod <pod name>
will give you the instance the pod is running on; you can SSH to it, run Docker to verify your container is running, attach to it, etc. If you really want to get into the iptables debugging, try
sudo iptables-save
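The full dump is large; in iptables mode, kube-proxy writes the Service's namespace and name into rule comments, so you can narrow the output to the service in question, e.g.:

sudo iptables-save | grep guestbook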
The target port of the LoadBalancer needs to be the port INSIDE the container. So in my case I needed to set the targetPort to 3000 instead of 80 on the LoadBalancer, even though on the pod itself I had already mapped port 80 to 3000.
This is very counterintuitive to me, and it isn't clearly mentioned in the LoadBalancer docs.
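Concretely, the fixed ports section of the Service spec would look like this (port is what the load balancer exposes, targetPort is what the container actually listens on):

"ports": [
  {
    "port": 80,
    "targetPort": 3000
  }
]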