Creating a Kubernetes Pod with DeletionGracePeriodSeconds is not respected - Go

I am creating a Kubernetes Pod with Go. I am trying to set DeletionGracePeriodSeconds, but after the pod is created this field is 30 even though I set 25.
The name of the pod is fine, so after creation the Pod has the name I assigned in the code.
func setupPod(client *Client, ns string, name string, labels map[string]string) (*v1.Pod, error) {
	seconds := func(i int64) *int64 { return &i }(25)
	pod := &v1.Pod{}
	pod.Name = name
	pod.Namespace = ns
	pod.SetDeletionGracePeriodSeconds(seconds) // it is 25 seconds under the debugger
	pod.DeletionGracePeriodSeconds = seconds
	pod.Spec.Containers = []v1.Container{{Name: "ubuntu", Image: "ubuntu", Command: []string{"sleep", "30"}}}
	pod.Spec.NodeName = "node1"
	if labels != nil {
		pod.Labels = labels
	}
	_, err := client.client.CoreV1().Pods(ns).Create(client.context, pod, metav1.CreateOptions{})
	return pod, err
}

DeletionGracePeriodSeconds is read-only, so you cannot change it. You should instead set terminationGracePeriodSeconds, and Kubernetes will set DeletionGracePeriodSeconds accordingly. You can verify that by getting the pod back and printing the value.
From the API docs
Number of seconds allowed for this object to gracefully terminate
before it will be removed from the system. Only set when
deletionTimestamp is also set. May only be shortened. Read-only.
podSpec := &v1.Pod{
	Spec: v1.PodSpec{
		TerminationGracePeriodSeconds: <Your-Grace-Period>,
	},
}
_, err = clientset.CoreV1().Pods("namespacename").Create(context.TODO(), podSpec, metav1.CreateOptions{})
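For completeness, here is a minimal sketch of the suggested fix, reusing the client and variables from the question's setupPod; the 25-second value and the verification print are only illustrative:

gracePeriod := int64(25)
pod := &v1.Pod{}
pod.Name = name
pod.Namespace = ns
// terminationGracePeriodSeconds is what you control; Kubernetes derives
// deletionGracePeriodSeconds from it when the pod is actually deleted.
pod.Spec.TerminationGracePeriodSeconds = &gracePeriod
pod.Spec.Containers = []v1.Container{{Name: "ubuntu", Image: "ubuntu", Command: []string{"sleep", "30"}}}
created, err := client.client.CoreV1().Pods(ns).Create(client.context, pod, metav1.CreateOptions{})
if err != nil {
	return nil, err
}
fmt.Println(*created.Spec.TerminationGracePeriodSeconds) // prints 25
return created, nil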

Related

kubectl get pod <pod> -n <namespace> -o yaml in kubernetes client-go

Now I have the Pods as Kubernetes structs with the help of the command
pods, err := clientset.CoreV1().Pods("namespace_String").List(context.TODO(), metav1.ListOptions{})
Now how do I get them as individual YAML files? Which command should I use?
for i, pod := range pods.Items {
	if i == 0 {
		t := reflect.TypeOf(&pod)
		for j := 0; j < t.NumMethod(); j++ {
			m := t.Method(j)
			fmt.Println(m.Name)
		}
	}
}
This will print the list of methods on the pod item; which one should I use?
Thanks for the answer.
The YAML is just a representation of the Pod object as it is held in Kubernetes' internal storage in etcd. With client-go what you have got is the Pod instance, of type v1.Pod. So you should be able to work with this object itself and get whatever you want, for example pod.GetLabels() etc. But if for some reason you are insisting on getting YAML, you can do that via:
import (
	"sigs.k8s.io/yaml"
)

b, err := yaml.Marshal(pod)
if err != nil {
	// handle err
}
log.Printf("Yaml of the pod is: %q", string(b))
Note that the yaml package here does not come from the client-go library. Its documentation can be found at: https://pkg.go.dev/sigs.k8s.io/yaml#Marshal
If you want JSON instead of YAML, you can marshal the Pod with the standard encoding/json package (json.Marshal(pod)), since v1.Pod carries JSON struct tags; the generated Marshal method documented at https://pkg.go.dev/k8s.io/apiserver/pkg/apis/example/v1#Pod.Marshal produces the protobuf encoding rather than JSON.
To get an individual pod using client-go:
pod, err := clientset.CoreV1().Pods("pod_namespace").Get(context.TODO(), "pod_name", metav1.GetOptions{})
if err != nil {
	log.Fatalln(err)
}
// do something with pod
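Putting the two snippets together, a rough sketch of dumping each pod from the List call to its own YAML file could look like this; the file-naming scheme is just an example, and it uses os.WriteFile plus the same sigs.k8s.io/yaml import as above:

pods, err := clientset.CoreV1().Pods("namespace_String").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	log.Fatalln(err)
}
for _, pod := range pods.Items {
	b, err := yaml.Marshal(pod)
	if err != nil {
		log.Fatalln(err)
	}
	// one YAML file per pod, named after the pod
	if err := os.WriteFile(pod.Name+".yaml", b, 0o644); err != nil {
		log.Fatalln(err)
	}
}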

How to add a condition to test an HTTP request using httptest in Go

I am a beginner in Go and started working on a backend RBAC application to manage access to a Kubernetes cluster. We have a monitoring stack behind a proxy that serves the Prometheus, Thanos and Grafana URLs. I am not able to add conditions to check the HTTP status using httptest. I have to add a condition: if the pods are up and running, continue, else print the error.
rq := httptest.NewRequest("GET", "/", nil)
rw := httptest.NewRecorder()
proxy.ServeHTTP(rw, rq)
if rw.Code != 200 && monitoringArgs.selector == "PROMETHEUS" {
	fmt.Printf("Target pods are in error state, please check with 'oc get pods -n %s -l %s'", monitoringArgs.namespace, monitoringArgs.selector)
}
How can I add conditions for all three: Prometheus, Grafana and Thanos?
You can also use the restart count of the pods in your logic, something like:
pods, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{
	LabelSelector: "app=myapp",
})
if err != nil {
	// handle err
}
// check status for the pods - to see probe status
for _, pod := range pods.Items {
	_ = pod.Status.Conditions // use your custom logic on the conditions here
	for _, container := range pod.Status.ContainerStatuses {
		_ = container.RestartCount // use this number in your logic
	}
}
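To cover all three targets with the httptest-based check, one option is to loop over a small table of proxy paths and label selectors; the paths and selector values below are assumptions and need to match your actual proxy routing and pod labels:

// hypothetical mapping from proxy path to the pods' label selector
targets := map[string]string{
	"/prometheus": "app=prometheus",
	"/grafana":    "app=grafana",
	"/thanos":     "app=thanos",
}
for path, selector := range targets {
	rq := httptest.NewRequest("GET", path, nil)
	rw := httptest.NewRecorder()
	proxy.ServeHTTP(rw, rq)
	if rw.Code != http.StatusOK {
		fmt.Printf("Target pods are in error state, please check with 'oc get pods -n %s -l %s'\n",
			monitoringArgs.namespace, selector)
	}
}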

Kafka logs in a Go service

I need the Kafka consumer logs for debugging. I do the following:
chanLogs := make(chan confluentkafka.LogEvent)
go func() {
	for logEv := range chanLogs {
		logger.Debug("KAFKA: " + logEv.String())
	}
}()
configMap["go.logs.channel.enable"] = true
configMap["go.logs.channel"] = chanLogs
consumer, err := confluentkafka.NewConsumer(&configMap)
err = consumer.SubscribeTopics(Topics, nil)
And I never get a line. I tried it with the Kafka channel (consumer.Logs()) with the same result. What am I doing wrong?
UPD
In the initial post I had set the parameter name wrongly. The correct one is go.logs.channel.enable. But sometimes this still doesn't work.
As described in the doc, you should enable that feature:
go.logs.channel.enable (bool, false) - Forward log to Logs() channel.
go.logs.channel (chan kafka.LogEvent, nil) - Forward logs to application-provided channel instead of Logs(). Requires go.logs.channel.enable=true.
So change your code like this:
configMap["go.logs.channel"] = chanLogs
configMap["go.logs.channel.enable"] = true
consumer, err := confluentkafka.NewConsumer(&configMap)
See also the documentation or the sample in the code repository.
The solution was to add
configMap["debug"] = "all"
I found it here
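Pulling these pieces together, a minimal sketch of a consumer with log forwarding enabled could look like the following; the broker address and group id are placeholders, and logger and Topics are assumed to be the same variables as in the question:

configMap := confluentkafka.ConfigMap{
	"bootstrap.servers":      "localhost:9092", // placeholder
	"group.id":               "my-group",       // placeholder
	"debug":                  "all",            // verbose librdkafka logging
	"go.logs.channel.enable": true,             // forward logs to the Logs() channel
}
consumer, err := confluentkafka.NewConsumer(&configMap)
if err != nil {
	// handle err
}
go func() {
	// consumer.Logs() is the default destination when go.logs.channel is not set
	for logEv := range consumer.Logs() {
		logger.Debug("KAFKA: " + logEv.String())
	}
}()
if err = consumer.SubscribeTopics(Topics, nil); err != nil {
	// handle err
}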

Leader election - pod is not selected as leader

I have implemented leader election using the kubernetes/client-go leader election package. I have 2 replicas. The first time, a pod is elected as leader, but after that the same pod is not elected again, and the leader election stops after some time. I tried deleting one pod; the newly created pod was then elected as leader. Again, once that pod stopped leading, no pod acts as leader. I am using a ConfigMap for the resource lock. Please help me solve this issue.
func NewElectorWithCallbacks(namespace, configMapName, identity string, ttl time.Duration, client cli.CoreV1Interface, callbacks *leaderelection.LeaderCallbacks) (*leaderelection.LeaderElector, error) {
	hostname, err := os.Hostname()
	if err != nil {
		return nil, err
	}
	broadcaster := record.NewBroadcaster()
	broadcaster.StartLogging(log.Printf)
	broadcaster.StartRecordingToSink(&cli.EventSinkImpl{Interface: client.Events(namespace)})
	recorder := broadcaster.NewRecorder(scheme.Scheme, api.EventSource{Component: identity, Host: hostname})
	cmLock := &resourcelock.ConfigMapLock{
		Client: client,
		ConfigMapMeta: meta.ObjectMeta{
			Namespace: namespace,
			Name:      configMapName,
		},
		LockConfig: resourcelock.ResourceLockConfig{
			Identity:      identity,
			EventRecorder: recorder,
		},
	}
	if callbacks == nil {
		callbacks = NewDefaultCallbacks()
	}
	config := leaderelection.LeaderElectionConfig{
		Lock:          cmLock,
		LeaseDuration: ttl,
		RenewDeadline: ttl / 2,
		RetryPeriod:   ttl / 4,
		Callbacks:     *callbacks,
	}
	return leaderelection.NewLeaderElector(config)
}
config, err = rest.InClusterConfig()
v1Client, err := v1.NewForConfig(config)
callbacks := &leaderelection.LeaderCallbacks{
	OnStartedLeading: func(context.Context) {
		// do the work
		fmt.Println("selected as leader")
		// Wait forever
		select {}
	},
	OnStoppedLeading: func() {
		fmt.Println("Pod stopped leading")
	},
}
elector, err := election.NewElectorWithCallbacks(namespace, electionName, hostname, ttl, v1Client, callbacks)
elector.Run(context.TODO())
You can deploy the pods as a StatefulSet with a headless Service. Please refer to the docs.
Why?
Pods are created sequentially. You can define the first pod that is launched as the master and the rest as slaves.
Pods in a StatefulSet have a unique ordinal index and a stable network identity. For example:
kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
Even if the pod web-0 restarts, its name and FQDN never change.
web-0.nginx.default.svc.cluster.local
<pod_name>.<service_name>.<namespace>.svc.cluster.local
I have only highlighted a few points; please go through the docs completely.
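If you go the StatefulSet route, a rough sketch of how the application could decide whether it is the master, based on the ordinal suffix of the stable pod name (following the web-0 example above), might look like this:

hostname, err := os.Hostname() // e.g. "web-0" inside a StatefulSet pod
if err != nil {
	log.Fatalln(err)
}
if strings.HasSuffix(hostname, "-0") {
	// the first (ordinal 0) pod acts as the master
	fmt.Println("acting as master")
} else {
	// all other ordinals act as slaves
	fmt.Println("acting as slave")
}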

How to get GroupID in Golang Kafka 10?

I am using Kafka 0.10.0 and https://github.com/Shopify/sarama.
I am trying to get the offset of the latest message that a consumer processed.
To do so I found the method NewOffsetManagerFromClient(group string, client Client), which requires the group name.
How do I get the consumer group name?
offsets := make(map[int32]int64)
config := sarama.NewConfig()
config.Consumer.Offsets.CommitInterval = 200 * time.Millisecond
config.Version = sarama.V0_10_0_0
// config.Consumer.Offsets.Initial = sarama.OffsetNewest
cli, _ := sarama.NewClient(kafkaHost, config)
defer cli.Close()
offsetManager, _ := sarama.NewOffsetManagerFromClient(group, cli)
for _, partition := range partitions {
	partitionOffsetManager, _ := offsetManager.ManagePartition(topic, partition)
	offset, _ := partitionOffsetManager.NextOffset()
	offsets[partition] = offset
}
return offsets
I created a consumer with
consumer, err := sarama.NewConsumer(connections, config)
but I do not know how to create a consumer group and get its group name.
You are attempting to create your own offset manager to find the current offsets:
offsetManager, _ := sarama.NewOffsetManagerFromClient(group, cli)
Similarly, the consumer that was consuming your topic's messages would have used an offset manager with a specific group id. Use that group id.
I think you can use any string as the group id. Please look at the example from the sarama GoDoc:
// Start a new consumer group
group, err := NewConsumerGroupFromClient("my-group", client)
if err != nil {
	panic(err)
}
defer func() { _ = group.Close() }()
You can give it any string, but you should make sure the other consumers use the same group id so that they join the same group.
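As a rough sketch of how the same group id string could be shared between the consumer group and the offset manager; the group name is a placeholder, kafkaHost is the broker list from the question, and, as far as I know, sarama's consumer groups require config.Version to be at least V0_10_2_0:

const groupID = "my-group" // placeholder; every consumer in the group must use the same id

config := sarama.NewConfig()
config.Version = sarama.V0_10_2_0 // consumer groups need at least this protocol version

cli, err := sarama.NewClient(kafkaHost, config)
if err != nil {
	panic(err)
}
defer cli.Close()

// the consumer group and the offset manager share the same group id
group, err := sarama.NewConsumerGroupFromClient(groupID, cli)
if err != nil {
	panic(err)
}
defer func() { _ = group.Close() }()

offsetManager, err := sarama.NewOffsetManagerFromClient(groupID, cli)
if err != nil {
	panic(err)
}
defer func() { _ = offsetManager.Close() }()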
