I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the below code to watch the changes:
func (c *Client) watchPods(namespace string, restartLimit int) {
    fmt.Println("Watch Kubernetes Pods")
    watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(),
        metav1.ListOptions{
            FieldSelector: "",
        })
    if err != nil {
        fmt.Printf("error create pod watcher: %v\n", err)
        return
    }
    for event := range watcher.ResultChan() {
        pod, ok := event.Object.(*corev1.Pod)
        if !ok || !checkValidPod(pod) {
            continue
        }
        owner := getOwnerReference(pod)
        for _, c := range pod.Status.ContainerStatuses {
            if reflect.ValueOf(c.RestartCount).Int() >= int64(restartLimit) {
                if c.State.Waiting != nil && c.State.Waiting.Reason == "CrashLoopBackOff" {
                    doSomething()
                }
                if c.State.Terminated != nil {
                    doSomethingElse()
                }
            }
        }
    }
}
The code watches changes to the Pods, but it exits after some time. I want it to run continuously. I also want to know how much load this puts on the API server, and what the best way is to run a control loop that watches for changes.
With Watch, a long-poll connection is established with the API server. Once the connection is up, the API server sends an initial batch of events followed by any subsequent changes. The connection is dropped after a timeout, which is why your loop eventually exits.
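If you stay with a raw watch, you have to re-establish the connection yourself whenever the result channel closes. A minimal sketch, reusing the watchPods function from your question (the backoff value is illustrative):

// Hypothetical wrapper: restart the watch whenever it ends, with a small
// pause so a failing API server is not hammered with reconnects.
func (c *Client) watchPodsForever(namespace string, restartLimit int) {
    for {
        c.watchPods(namespace, restartLimit) // returns when the watch channel closes
        time.Sleep(2 * time.Second)
    }
}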
I would suggest using an Informer instead of setting up a raw watch, as it is more optimized and easier to set up. When creating an informer, you register handler functions that are invoked when pods are created, updated, or deleted. As with a watch, you can target specific pods using a label or field selector. You can also use shared informers, so that multiple controllers in the same process share a single watch connection and local cache, which reduces the load on the API server.
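A minimal sketch of a shared informer watching Pods, assuming c.Clientset is the same kubernetes.Clientset as in your code; the handler bodies and resync period are illustrative:

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/tools/cache"
)

func (c *Client) runPodInformer(namespace string, stopCh <-chan struct{}) {
    factory := informers.NewSharedInformerFactoryWithOptions(
        c.Clientset,
        30*time.Second, // resync period
        informers.WithNamespace(namespace),
    )
    podInformer := factory.Core().V1().Pods().Informer()
    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            pod := obj.(*corev1.Pod)
            fmt.Printf("pod added: %s\n", pod.Name)
        },
        UpdateFunc: func(oldObj, newObj interface{}) {
            pod := newObj.(*corev1.Pod)
            // inspect pod.Status.ContainerStatuses here, e.g. for CrashLoopBackOff
            fmt.Printf("pod updated: %s\n", pod.Name)
        },
        DeleteFunc: func(obj interface{}) {
            pod := obj.(*corev1.Pod)
            fmt.Printf("pod deleted: %s\n", pod.Name)
        },
    })

    factory.Start(stopCh)            // runs the informer in background goroutines
    factory.WaitForCacheSync(stopCh) // wait for the initial list to populate the cache
    <-stopCh                         // block until the caller closes stopCh
}

The informer keeps a local cache in sync, so your handlers never need to query the API server directly; one long-lived list+watch per resource type is essentially all the load it adds.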
Below are a few links to get you started:
https://aly.arriqaaq.com/kubernetes-informers/
https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/
https://pkg.go.dev/k8s.io/client-go/informers
I have a Cloud Run gRPC service and I want it to listen to a Pub/Sub topic.
The only way to do this according to the official documentation is to use a trigger, but that only works with a REST server that accepts HTTP requests.
I couldn't find any good example on the web of how to use Pub/Sub with Cloud Run gRPC services.
My service is built with Go, following the official gRPC instructions.
main.go:
func main() {
    ctx := context.Background()
    pubSubClient, err := pubsub.NewClient(ctx, "project-id")
    if err != nil {
        log.Fatal(err)
    }
    pubSubSubscription := pubSubClient.Subscription("subscription-id")

    go func(sub *pubsub.Subscription) {
        err := sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
            defer m.Ack()
            log.Printf("sub.Receive called %+v", m)
        })
        if err != nil {
            // handle error
        }
    }(pubSubSubscription)

    ....

    lis, err := net.Listen("tcp", ":"+port)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s, _ := server.New(ctx, ...)
    grpcServer := grpc.NewServer()
    grpcpkg.RegisterReportServiceServer(grpcServer, s)
    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %s", err)
    }
}
This works as long as there is at least one instance up and listening. But if no instance is active and a new Pub/Sub message is published, Cloud Run won't "wake up" an instance, because its autoscaling is driven by incoming HTTP requests.
The only way to make the above code work is to set min-instances to 1 in Cloud Run, but that is not efficient for a lot of use cases (e.g. when the service is idle at night).
Is there any workaround for this?
Is there any way to trigger a gRPC Cloud Run service from a Pub/Sub message?
The sub.Receive method you are attempting to use from the client library is meant for pull delivery, not push delivery. You can read about how these two delivery types compare to make a choice.
If you wish to receive messages as incoming requests, you must use push delivery and can follow the Cloud Run Pub/Sub usage guide. Note that Cloud Pub/Sub only delivers push messages as HTTPS POST requests with a specified JSON payload. To handle these requests from a gRPC server, you can use the gRPC Gateway plugin to generate a reverse proxy that exposes an HTTP API.
Once your Cloud Run instance accepts HTTP traffic through the reverse proxy, it will be autoscaled with incoming requests, so you won't need to keep one instance always running. Note that this may mean message processing latency of up to 10s on a cold start.
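For reference, whether the HTTP endpoint comes from gRPC Gateway or a hand-written handler, the push request body follows the documented envelope. A minimal sketch of decoding it (the route path and field selection are illustrative):

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// pushEnvelope mirrors the JSON body Pub/Sub sends on push delivery.
type pushEnvelope struct {
    Message struct {
        Data []byte `json:"data"` // base64 on the wire, decoded automatically by encoding/json
        ID   string `json:"messageId"`
    } `json:"message"`
    Subscription string `json:"subscription"`
}

func pushHandler(w http.ResponseWriter, r *http.Request) {
    var env pushEnvelope
    if err := json.NewDecoder(r.Body).Decode(&env); err != nil {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
    }
    log.Printf("received message %s: %s", env.Message.ID, env.Message.Data)
    w.WriteHeader(http.StatusNoContent) // any 2xx response acks the message
}

func main() {
    http.HandleFunc("/pubsub/push", pushHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}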
I'm using the following code, which works as expected. From the CLI I run gcloud auth application-default login and enter my credentials, and I was able to run the code successfully from my MacBook.
Now I need to run this code in my CI, and we need a different approach there. What should the approach be for getting the client_secret and client_id, or a service account / some env variable? What is the way to do it in Go code?
import "google.golang.org/api/compute/v1"
project := "my-project"
region := "my-region"
ctx := context.Background()
c, err := google.DefaultClient(ctx, compute.CloudPlatformScope)
if err != nil {
log.Fatal(err)
}
computeService, err := compute.New(c)
if err != nil {
log.Fatal(err)
}
req := computeService.Routers.List(project, region)
if err := req.Pages(ctx, func(page *compute.RouterList) error {
for _, router := range page.Items {
// process each `router` resource:
fmt.Printf("%#v\n", router)
// NAT Gateways are found in router.nats
}
return nil
}); err != nil {
log.Fatal(err)
}
Since you're using Jenkins, you probably want to start with how to create a service account. That guide walks you through creating a service account and exporting a key to be set as a variable in another CI/CD system.
Then refer to the client library docs on how to create a new client with a source credential.
e.g.
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
If you provide no source, it will attempt to read the credentials locally and act as the service account running the operation (not applicable in your use case).
Many CI systems support exporting specific environment variables, or your script/config can do it too.
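A minimal sketch of adapting this to the compute client from your snippet, assuming the CI writes a service-account key file from a secret variable (the path is illustrative):

import (
    "context"
    "log"

    "google.golang.org/api/compute/v1"
    "google.golang.org/api/option"
)

ctx := context.Background()

// Explicit key file, e.g. materialized by the CI from a secret.
computeService, err := compute.NewService(ctx,
    option.WithCredentialsFile("/path/to/service-account-key.json"))
if err != nil {
    log.Fatal(err)
}
req := computeService.Routers.List(project, region)

Alternatively, set GOOGLE_APPLICATION_CREDENTIALS to the key file path in the CI environment and call compute.NewService(ctx) with no options; Application Default Credentials will pick the key up automatically.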
But if you want to run this in CI, why do you need such a configuration? Integration tests?
Some services can be used locally for unit/smoke testing. Pub/Sub, for example, has a fake/local emulator you can run to perform some tests.
Or perhaps I did not understand your question; in that case, can you provide an example?
I am trying to upload a custom schema for all users of a company in GSuite. This custom schema contains their GitHub usernames, which I extracted with the GitHub API.
The problem is that after running the code, the custom schema is not added to the accounts in GSuite.
Relevant code (a connection to GSuite with admin authentication is established, and the map has all user entries; if you want more code, I can provide it, I'm just trying to keep it simple):
for _, u := range allUsers.Users {
    if u.CustomSchemas != nil {
        log.Printf("%v", string(u.CustomSchemas["User_Names"]))
    } else {
        u.CustomSchemas = map[string]googleapi.RawMessage{}
    }
    nameFromGsuite := u.Name.FullName
    if githubLogin, ok := gitHubAccs[nameFromGsuite]; ok {
        userSchemaForGithub := GithubAcc{GitHub: githubLogin}
        jsonRaw, err := json.Marshal(userSchemaForGithub)
        if err != nil {
            log.Fatalf("Something went wrong logging: %v", err)
        }
        u.CustomSchemas["User_Names"] = jsonRaw
        adminService.Users.Update(u.Id, u)
    } else {
        log.Printf("User not found for %v\n", nameFromGsuite)
    }
}
This is the struct for the json encoding:
type GithubAcc struct {
    GitHub string `json:"GitHub"`
}
For anyone stumbling upon this.
Everything in the code snippet is correct. From the way the method is named, I expected adminService.Users.Update() to actually perform the update. Instead, it returns a UsersUpdateCall.
You need to execute that update by calling .Do() on it.
From the API:
Do executes the "directory.users.update" call.
So the solution is to change adminService.Users.Update(u.Id, u)
into adminService.Users.Update(u.Id, u).Do()
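A minimal sketch of the corrected call with error handling, reusing the variables from the question (the log wording is illustrative):

updatedUser, err := adminService.Users.Update(u.Id, u).Do()
if err != nil {
    log.Printf("failed to update user %v: %v", u.Id, err)
    continue
}
log.Printf("updated custom schema for %v", updatedUser.PrimaryEmail)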
In one of my services, which happens to be my load balancer, I am getting the following error when calling a server method in one of my deployed services:
rpc error: code = Unimplemented desc = unknown service
fooService.FooService
I have a few other services set up with gRPC and they work fine. It just seems to be this one, and I am wondering if that is because it is the load balancer.
func GetResult(w http.ResponseWriter, r *http.Request) {
    conn, errOne := grpc.Dial("redis-gateway:10006", grpc.WithInsecure())
    defer conn.Close()

    rClient := rs.NewRedisGatewayClient(conn)
    result, errTwo := rClient.GetData(context.Background(), &rs.KeyRequest{Key: "trump", Value: "trumpVal"})

    fmt.Fprintf(w, "print result: %s \n", result)    // prints nil
    fmt.Fprintf(w, "print error one: %v \n", errOne) // prints nil
    fmt.Fprintf(w, "print error two: %s \n", errTwo) // prints error
}
The error says there is no service called fooService.FooService, which is true, because the DNS name of the service I am calling is foo-service. However, it is the exact same setup as my other services that use gRPC and work fine. Also, my proto files are correctly configured, so that is not the issue.
server I am calling:
func main() {
    lis, err := net.Listen("tcp", ":10006")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    grpcServer := grpc.NewServer()
    newServer := &RedisGatewayServer{}
    rs.RegisterRedisGatewayServer(grpcServer, newServer)

    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
The function I am trying to access from the client:
func (s *RedisGatewayServer) GetData(ctx context.Context, in *rs.KeyRequest) (*rs.KeyRequest, error) {
    return in, nil
}
My Docker and YAML files are also all correct, with the right naming and ports.
I had this exact problem, and it was because of a very simple mistake: I had put the call to the service registration after the server start. The code looked like this:
err = s.Serve(listener)
if err != nil {
    log.Fatalf("[x] serve: %v", err)
}
primepb.RegisterPrimeServiceServer(s, &server{})
Obviously, the registration should have been called before the server was started:
primepb.RegisterPrimeServiceServer(s, &server{})
err = s.Serve(listener)
if err != nil {
    log.Fatalf("[x] serve: %v", err)
}
Thanks to #dolan for the comment, it solved the problem.
Basically, we have to make sure the full method name is the same on both the server and the client (you can even copy the method name from the pb.go file generated on the server side).
func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error {
This Invoke function is called inside every client method generated for your gRPC service, and the method string it is passed must match what the server registered.
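For illustration, a generated client method typically looks like the sketch below (using the fooService.FooService name from the question); the full method path passed to Invoke is what has to match on both sides:

func (c *fooServiceClient) GetData(ctx context.Context, in *KeyRequest, opts ...grpc.CallOption) (*KeyRequest, error) {
    out := new(KeyRequest)
    // "/<package>.<Service>/<Method>" must be identical on client and server.
    err := c.cc.Invoke(ctx, "/fooService.FooService/GetData", in, out, opts...)
    if err != nil {
        return nil, err
    }
    return out, nil
}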
For me, my client was connecting to the wrong server port.
My server listens on localhost:50052.
My client was connecting to localhost:50051.
Hence I saw this error.
I had the same problem: I was able to perform an RPC call from Postman to the service, and the API gateway was able to connect to the service, but on the method call it gave error code 12, unknown service. The reason was that in my proto files the client was under package X whereas the server was under package Y.
Quite silly, but making the proto package the same for both the API gateway and the service solved my problem.
There are many scenarios in which this can happen. The underlying issue seems to be the same: the gRPC server is reachable but cannot service client requests.
The two scenarios I faced that are not documented in previous answers are:
1. Client and server are not running the same contract version
A breaking change introduced in the contract can cause this error.
In my case, the server was running the older version of the contract while the client was running the latest one.
A breaking change meant that the server could not resolve the service my client was asking for thus returning the unimplemented error.
2. The client is connecting to the wrong gRPC server
The client reached a server that doesn't implement the contract.
Consider this scenario if you're running multiple different gRPC services. You might be mistakenly dialing the wrong one.
I'm wondering if it's a good idea to push data from a gRPC server to a client. Basically, I want to use a pub/sub pattern with gRPC.
The way I do it is to return a response stream in the server implementation that I never close. The client then has a never-ending goroutine in charge of reading this stream.
Here is an example:
service Service {
    rpc RegularChanges (Void) returns (stream Change) {}
}
On the server side:
func (self *MyServiceImpl) RegularChanges(in *pb.Void, stream pb.Service_RegularChangesServer) error {
    for {
        d, err := time.ParseDuration("1s")
        if err != nil {
            log.Fatalf("Cannot parse duration")
            break
        }
        time.Sleep(d)
        // returning the Send error ends the stream when the client goes away
        if err := stream.Send(&pb.Change{Name: "toto", Description: "status changed"}); err != nil {
            return err
        }
    }
    return nil
}
On the client side:
for {
    change, err := streamChanges.Recv()
    if err != nil {
        log.Fatalf("Error retrieving change: %v", err)
    } else {
        log.Println(change)
    }
}
I just started with Go and gRPC, but I know gRPC is based on HTTP/2, so it should support pushing data. However, I'm not sure this is the way gRPC is meant to be used.
gRPC is intended to be used in this way.
You should still consider how the client should behave on failures and how you may want to re-balance across backends. If your connection goes across the Internet, you may also want to enable keepalive to detect connection breakages, by providing KeepaliveParams to the client and server.
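A minimal sketch of configuring keepalive on both sides; the intervals are illustrative, not recommendations:

import (
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
)

// Client side: ping the server when the connection is idle.
conn, err := grpc.Dial(address,
    grpc.WithInsecure(),
    grpc.WithKeepaliveParams(keepalive.ClientParameters{
        Time:                30 * time.Second, // send a ping after 30s of inactivity
        Timeout:             10 * time.Second, // wait 10s for the ack before closing
        PermitWithoutStream: true,             // ping even when no RPC is active
    }),
)

// Server side: ping idle clients and allow their pings.
srv := grpc.NewServer(
    grpc.KeepaliveParams(keepalive.ServerParameters{
        Time:    30 * time.Second,
        Timeout: 10 * time.Second,
    }),
    grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
        MinTime:             15 * time.Second, // reject client pings more frequent than this
        PermitWithoutStream: true,
    }),
)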