How to query a Kubernetes custom API (networking.gke.io/v1beta1) using the Go client?

I want to play with the Kubernetes API on GKE, but GKE uses a special API group (networking.gke.io/v1beta1). I want to be able to query it, but the Kubernetes Go client doesn't ship a typed client for this API. How can I query it?
I tried the REST API, but I don't really know how to use it, and the documentation is not clear.

The GKE networking APIs and their clients are in this repo: gke-managed-certs
The clients are in this package: /pkg/clients
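If you don't want to import that package, client-go's dynamic client can query any API group registered on the cluster, including networking.gke.io/v1beta1. Here is a minimal sketch; the ManagedCertificate resource, its plural name managedcertificates, the default namespace and the kubeconfig path are assumptions to adapt to your case:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the local kubeconfig (adjust the path if needed).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// The dynamic client works with unstructured objects, so it does not
	// need generated typed clients for the custom API group.
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GroupVersionResource for GKE ManagedCertificate objects (assumed plural name).
	gvr := schema.GroupVersionResource{
		Group:    "networking.gke.io",
		Version:  "v1beta1",
		Resource: "managedcertificates",
	}

	list, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}

The typed alternative is to import the clients from /pkg/clients of the gke-managed-certs repo mentioned above.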

Related

Access GCP Managed Prometheus metrics from Grafana on Windows

I have installed Grafana (running at localhost:3000) and Prometheus (running at localhost:9090) on Windows 10, and am able to add the latter as a valid data source to the former. However, I want to create Grafana dashboards for data from Google's Managed Prometheus service. How do I add Google's Managed Prometheus as a data source in Grafana, running on Windows 10? Is there a way to accomplish this purely with native Windows binaries, without using Linux binaries via Docker?
I've not done this myself (yet).
I'm also using Google's (very good) Managed Service for Prometheus.
It's reasonably well documented: Managed Prometheus: Grafana
There's an important caveat under Authenticating Google APIs: "Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI as an authentication proxy."
Step #1: use the Prometheus UI
The Prometheus UI is deployed to a GKE cluster and so, if you want to use it remotely, you have a couple of options:
Hacky: port-forward
Better: expose it as a service
Step #2: Hacky
NAMESPACE="..." # Where you deployed the Prometheus UI
PORT="..."      # Local port to forward to the Prometheus UI

kubectl port-forward deployment/frontend \
  --namespace=${NAMESPACE} \
  ${PORT}:9090
Step #3: From the host where you're running the port-forward, you should now be able to configure Grafana to use the Prometheus UI datasource on http://localhost:${PORT}. localhost because it's port-forwarding to your (local)host and ${PORT} because that's the port it's using.
Grafana can now connect to GCP Managed Prometheus directly using a service account; the feature is available from version 9.1.x.
I have tested GMP with standalone Grafana on GKE and it is working as expected.
https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/google-authentication/

Expose Strapi Prometheus Metrics on another Port

I'm using Strapi v4 along with the Prometheus plugin, and right now my app metrics are exposed on http://localhost:1337/api/metrics
But I need them to be on another port, like http://localhost:9090/metrics (also removing the api prefix).
So Strapi and the rest of the backend would still run on port 1337, and only the metrics would be served on 9090.
I've been through the documentation but it seems like there is no configuration for that. Can anybody help think of a way to do this?
Right now the metrics don't run on a separate server. They run on the same server as Strapi.
This is so the user-permissions plugin and the Strapi API token can be used to secure the endpoint.
In the future, I could look into making it an option to create a separate server.

Get back data from ElasticSearch

I'm new to ELK. We have a Spring Boot backend on a dedicated AWS instance. We have the ELK stack on another instance (to the outside world only Kibana is available). Data is shipped to ELK via Amazon SQS.
This data includes logs and some business history about users (registration, any other action, etc.).
Given this setup, I have a question: is it possible to get the data back by action or by user and use it in the backend responses?
I am guessing you want data present in Elasticsearch to be available to the Spring Boot application. It is definitely possible. You will need to open the Elasticsearch port on the Elasticsearch machine specifically to the EC2 instance on which Spring Boot is running. How to open the port depends on whether they are in the same VPC, different VPCs, different AWS accounts, etc. Once the port is open, you can either use Spring Data Elasticsearch or plain REST calls to access the Elasticsearch API.
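If you go the plain-REST route, a search is just an HTTP call to the _search endpoint. Here is a minimal sketch of the shape of that call (shown in Go; from Spring Boot the same request can be sent with RestTemplate or WebClient); the host, the user-actions index and the userId field are hypothetical:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical Elasticsearch host and index; replace with your own.
	esURL := "http://your-es-host:9200/user-actions/_search"

	// Term query: all documents recorded for one user.
	query := `{"query": {"term": {"userId": "42"}}}`

	req, err := http.NewRequest(http.MethodPost, esURL, strings.NewReader(query))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response body is the usual Elasticsearch JSON with hits.hits[].
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}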

Go http api server and socket.io

Currently I'm working on a real-time online game. First I implemented a Go server with socket.io for handling messages between the client and my game world, and it works fine. Now, for managing user data, I need an HTTP API for functionality such as login. I want to use the awesome net/http package for that purpose. Should I serve the HTTP API on a different port?
My next question is about deployment: I want to use Google Container Engine. Can I use pods with two open ports?
As far as I understood from your explanation, you need two ports open for two different APIs running in your application. Regarding Exposing two ports in Google Container Engine, you can read the discussion here that describes ways to expose ports in a pod.
Moreover, I invite you to read this tutorial, which involves deploying an API in a GKE cluster with a containerPort in a pod, creating a Kubernetes Service to allow internal cluster traffic to your pods (routing requests on an incoming port to your API's targetPort), and creating an Ingress to define what traffic is allowed into your cluster and where it goes. You can define different APIs with different targetPorts and run them on different pods. You can try that as an alternative. For more documentation on exposing applications using Services, you can read this GKE doc.
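On the first question: yes, you can serve the HTTP API on a different port in the same Go process by starting two listeners. A minimal standard-library sketch; the ports 8080 and 8081 and the handler paths are placeholders, and the stub on /socket.io/ is where the handler from your socket.io package would be mounted:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Mux for the user-data HTTP API (login, etc.).
	apiMux := http.NewServeMux()
	apiMux.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "login endpoint")
	})

	// Mux for the real-time side; mount your socket.io handler here
	// instead of this stub.
	rtMux := http.NewServeMux()
	rtMux.HandleFunc("/socket.io/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "socket.io stub")
	})

	// Serve the real-time endpoint on its own port in a goroutine.
	go func() {
		log.Fatal(http.ListenAndServe(":8081", rtMux))
	}()

	// Serve the HTTP API on the main port.
	log.Fatal(http.ListenAndServe(":8080", apiMux))
}

In a GKE pod you would then list both ports as containerPorts and expose them through the Service, as described in the links above.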

Using ReactiveSearch with plain elasticsearch

I'd like to use reactivesearch with my own plain vanilla Elasticsearch cluster. While the example and documentation describe that this should be possible (ReactiveBase, see the url param), I get connection errors and a WebSocket call to wss://.., which looks like ReactiveBase is trying to connect to an appbase.io-hosted Elasticsearch instead. It also passes a credentials code along with the call to Elasticsearch, which is not specified in my code.
Is it possible to connect to a normal elastic and where can I find the documentation on how to do this?
This is my definition of ReactiveBase:
<ReactiveBase app="documents" url="https://search-siroop-3jjelqkbvwhzqzsolxt5ujxdxm.eu-central-1.es.amazonaws.com/">
To implement this example I followed the ReactiveSearch Quickstart
Yes, it's possible to connect to a normal Elasticsearch cluster (docs) with reactivesearch. It seems you're using the correct props. Sample code:
<ReactiveBase
  app="your-elasticsearch-index"
  url="http://your-elasticsearch-cluster"
>
  <Component1 .. />
  <Component2 .. />
</ReactiveBase>
The app prop refers to the index name. Looks like you're using this with AWS. Since AWS doesn't allow you to configure ES settings, you might need to use a middleware proxy server. From the docs:
If you are using Elasticsearch on AWS, then the recommended approach
is to connect via the middleware proxy as they don’t allow setting the
Elasticsearch configurations.
The docs also explain how you can write your own proxy server.
TLDR:
Proxy server
Using the proxy server in client app with reactivesearch
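As a sketch of the proxy-server idea (the linked docs show their own implementation; the Elasticsearch URL and the local port 8000 below are placeholders), a minimal reverse proxy in Go could look like this; a real one would also add authentication or request signing before forwarding:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder AWS Elasticsearch endpoint.
	target, err := url.Parse("https://search-your-domain.eu-central-1.es.amazonaws.com")
	if err != nil {
		log.Fatal(err)
	}

	// Forward every incoming request to the Elasticsearch endpoint.
	proxy := httputil.NewSingleHostReverseProxy(target)
	defaultDirector := proxy.Director
	proxy.Director = func(req *http.Request) {
		defaultDirector(req)
		req.Host = target.Host // make the Host header match the ES endpoint
	}

	// ReactiveBase's url prop would then point at http://localhost:8000.
	log.Fatal(http.ListenAndServe(":8000", proxy))
}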
The connection error related to websockets you see here isn't causing the issue. It's used for streaming which works on appbase.io. This has been fixed in the 2.2.0 release. Hope this helps :)
