I am a beginner in Go and have started working on a backend RBAC application to manage access to a Kubernetes cluster. We have a monitoring stack behind a proxy that serves the Prometheus, Thanos, and Grafana URLs. I am not able to add conditions to check the HTTP status using httptest. I need to add a condition that checks whether the pods are up and running, and prints an error otherwise.
rq := httptest.NewRequest("GET", "/", nil)
rw := httptest.NewRecorder()
proxy.ServeHTTP(rw, rq)

if rw.Code != 200 && monitoringArgs.selector == "PROMETHEUS" {
    fmt.Printf("Target pods are in error state, please check with 'oc get pods -n %s -l %s'", monitoringArgs.namespace, monitoringArgs.selector)
}
How can I add this condition for all three: Prometheus, Grafana, and Thanos?
You can also use the restart count of the pod in your logic, something like:
pods, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{
    LabelSelector: "app=myapp",
})
if err != nil {
    // handle err
}

// check the status of the pods - to see the probe status
for _, pod := range pods.Items {
    _ = pod.Status.Conditions // use your custom logic here, e.g. check the "Ready" condition
    for _, container := range pod.Status.ContainerStatuses {
        _ = container.RestartCount // use this number in your logic
    }
}
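Applied to the three components from the question, a rough sketch could look like the following. The label selectors and the corev1 ("k8s.io/api/core/v1") and log imports are assumptions; substitute whatever labels your Prometheus, Thanos, and Grafana pods actually carry.

selectors := map[string]string{ // placeholder label selectors, adjust to your deployment
    "PROMETHEUS": "app=prometheus",
    "THANOS":     "app=thanos",
    "GRAFANA":    "app=grafana",
}
for component, selector := range selectors {
    pods, err := clientset.CoreV1().Pods(monitoringArgs.namespace).List(context.TODO(), metav1.ListOptions{
        LabelSelector: selector,
    })
    if err != nil {
        log.Println(err)
        continue
    }
    for _, pod := range pods.Items {
        // A simple health check: the pod phase must be Running; you could also
        // inspect pod.Status.Conditions or the restart counts shown above.
        if pod.Status.Phase != corev1.PodRunning {
            fmt.Printf("%s: target pods are in error state, please check with 'oc get pods -n %s -l %s'\n",
                component, monitoringArgs.namespace, selector)
            break
        }
    }
}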
Related
Now I have the pods as Kubernetes structs, obtained with this call:
pods, err := clientset.CoreV1().Pods("namespace_String").List(context.TODO(), metav1.ListOptions{})
Now how do I get them as individual YAML files? Which command should I use?
for i, pod := range pods.Items {
    if i == 0 {
        t := reflect.TypeOf(&pod)
        for j := 0; j < t.NumMethod(); j++ {
            m := t.Method(j)
            fmt.Println(m.Name)
        }
    }
}
This loop prints the list of methods on the pod item. Which one should I use?
Thanks for the answer.
The YAML is just a representation of the Pod object held in Kubernetes' internal storage in etcd. With client-go, what you have got is the Pod instance itself, of type v1.Pod. So you should be able to work with this object directly and get whatever you want, for example pod.Labels or pod.GetLabels(). But if for some reason you insist on getting YAML, you can do that via:
import (
    "sigs.k8s.io/yaml"
)

b, err := yaml.Marshal(pod)
if err != nil {
    // handle err
}
log.Printf("Yaml of the pod is: %q", string(b))
Note that the yaml library used here does not come from client-go. Its documentation can be found at https://pkg.go.dev/sigs.k8s.io/yaml#Marshal.
If you want JSON instead of YAML, you can simply marshal the v1.Pod struct with the standard encoding/json package (https://pkg.go.dev/encoding/json#Marshal), like any other Go object.
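If the goal from the question is one YAML file per pod, a minimal sketch along those lines (the function name, the output file naming, and the v1 import alias for k8s.io/api/core/v1 are assumptions):

import (
    "fmt"
    "os"

    v1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml"
)

// writePodYAMLs writes every pod in the list to its own YAML file in dir.
func writePodYAMLs(pods *v1.PodList, dir string) error {
    for _, pod := range pods.Items {
        b, err := yaml.Marshal(pod)
        if err != nil {
            return err
        }
        // File name pattern is just an example: <namespace>_<name>.yaml
        path := fmt.Sprintf("%s/%s_%s.yaml", dir, pod.Namespace, pod.Name)
        if err := os.WriteFile(path, b, 0o644); err != nil {
            return err
        }
    }
    return nil
}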
To get an individual pod using client-go:
pod, err := clientset.CoreV1().Pods("pod_namespace").Get(context.TODO(), "pod_name", metav1.GetOptions{})
if err != nil {
    log.Fatalln(err)
}
// do something with pod
I want to use the Go client to describe a node; specifically, I want to list the node condition types with their statuses, and also the events.
Edit: I was able to describe the node and get the node conditions, but not the events or CPU/memory.
I found the code below to get the node conditions and statuses, but not the events.
nodes, _ := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
for _, node := range nodes.Items {
    fmt.Printf("%s\n", node.Name)
    for _, condition := range node.Status.Conditions {
        fmt.Printf("\t%s: %s\n", condition.Type, condition.Status)
    }
}
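For the events and CPU/memory part, one possible sketch that would sit inside the loop over nodes.Items above (assuming the same clientset plus the log package; the field selector keys follow what kubectl describe uses for node events):

// CPU and memory come straight from the node status.
fmt.Printf("\tcpu: %s allocatable / %s capacity\n",
    node.Status.Allocatable.Cpu().String(), node.Status.Capacity.Cpu().String())
fmt.Printf("\tmemory: %s allocatable / %s capacity\n",
    node.Status.Allocatable.Memory().String(), node.Status.Capacity.Memory().String())

// Events that reference this node; nodes are cluster-scoped, so search all namespaces ("").
events, err := clientset.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{
    FieldSelector: "involvedObject.kind=Node,involvedObject.name=" + node.Name,
})
if err != nil {
    log.Println(err)
} else {
    for _, e := range events.Items {
        fmt.Printf("\t%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
    }
}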
I am trying to create alert policies for Kubernetes Clusters in Google Cloud Platform. The following is the sample code.
service, err := monitoring.NewService(context.Background())
if err != nil {
    log.Panicln(err)
    return
}

mqlCondition := &monitoring.MonitoringQueryLanguageCondition{
    Duration: "60s",
    Query: `fetch k8s_pod
| metric 'kubernetes.io/pod/volume/utilization'
| filter
(resource.cluster_name == 'test'
&& resource.pod_name =~ 'server.*')
&& (metric.volume_name =~ 'dat.*')
| align mean_aligner()
| window 10m
| condition value.utilization > 0.001 '10^2.%'
`,
    Trigger: &monitoring.Trigger{
        Count: 1,
    },
}

condition := monitoring.Condition{
    DisplayName:                      "MQL-based Condition",
    ConditionMonitoringQueryLanguage: mqlCondition,
}

alertpolicy := &monitoring.AlertPolicy{
    DisplayName:          "Prakash1",
    Combiner:             "OR",
    Conditions:           []*monitoring.Condition{&condition},
    NotificationChannels: []string{"projects/abc-app/notificationChannels/16000000099515524778"},
}

p, err := service.Projects.AlertPolicies.Create("projects/abc-app", alertpolicy).Context(context.Background()).Do()
if err != nil {
    log.Panicln(err)
    return
}
When I create two or more alert policies concurrently, I get the following error:
"googleapi: Error 409: Too many concurrent edits to the project configuration. Please try again., aborted"
Can you please tell me how I can resolve this error?
As the error says, you can't edit the project configuration concurrently. I would suggest generating the creation requests in parallel and serializing the actual API calls. You can achieve this with a buffered channel and multiple goroutines. For example:
var (
    backlogSize = 3 // change as per your needs
    requests    = make(chan *monitoring.AlertPolicy, backlogSize)
)

func createPolicies() {
    ...
    // single consumer goroutine: serializes the actual API calls
    go func() {
        // init the service
        ...
        for policy := range requests {
            _, err := service.Projects.AlertPolicies.Create("projects/abc-app", policy).Context(context.Background()).Do()
            if err != nil {
                log.Println(err)
            }
        }
    }()

    // producer goroutines: build the policies in parallel and queue them
    go func() {
        newPolicy := &monitoring.AlertPolicy{}
        // fill policy
        ...
        requests <- newPolicy
    }()
    ...
    // wait for completion and close the requests channel
}
Another solution would be to retry each failed concurrent request with exponential backoff plus jitter.
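A minimal sketch of that retry approach, assuming the same google.golang.org/api/monitoring/v3 service from the question; the helper name, attempt count, and base delay are placeholders:

import (
    "context"
    "log"
    "math/rand"
    "time"

    "google.golang.org/api/googleapi"
    monitoring "google.golang.org/api/monitoring/v3"
)

// createWithRetry retries the Create call on HTTP 409 with exponential backoff + jitter.
func createWithRetry(service *monitoring.Service, project string, policy *monitoring.AlertPolicy) (*monitoring.AlertPolicy, error) {
    const maxAttempts = 5
    backoff := time.Second

    var lastErr error
    for attempt := 0; attempt < maxAttempts; attempt++ {
        p, err := service.Projects.AlertPolicies.Create(project, policy).Context(context.Background()).Do()
        if err == nil {
            return p, nil
        }
        lastErr = err

        // Only retry the "Too many concurrent edits" case (HTTP 409).
        if gerr, ok := err.(*googleapi.Error); !ok || gerr.Code != 409 {
            return nil, err
        }

        sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter
        log.Printf("attempt %d failed with 409, retrying in %s", attempt+1, sleep)
        time.Sleep(sleep)
        backoff *= 2
    }
    return nil, lastErr
}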
I am creating a Kubernetes pod with Golang. I am trying to set DeletionGracePeriodSeconds, but after creating the pod it has 30 in this field while I am setting 25.
The name of the pod is OK, so after creating the pod it has the name that I assigned in the code.
func setupPod(client *Client, ns string, name string, labels map[string]string) (*v1.Pod, error) {
    seconds := func(i int64) *int64 { return &i }(25)
    pod := &v1.Pod{}
    pod.Name = name
    pod.Namespace = ns
    pod.SetDeletionGracePeriodSeconds(seconds) // it is 25 seconds under the debugger
    pod.DeletionGracePeriodSeconds = seconds
    pod.Spec.Containers = []v1.Container{{Name: "ubuntu", Image: "ubuntu", Command: []string{"sleep", "30"}}}
    pod.Spec.NodeName = "node1"
    if labels != nil {
        pod.Labels = labels
    }
    _, err := client.client.CoreV1().Pods(ns).Create(client.context, pod, metav1.CreateOptions{})
    return pod, err
}
DeletionGracePeriodSeconds is read-only, so you cannot change it. You should instead set terminationGracePeriodSeconds, and Kubernetes will set DeletionGracePeriodSeconds accordingly. You can verify that by getting the value back and printing it.
From the API docs
Number of seconds allowed for this object to gracefully terminate
before it will be removed from the system. Only set when
deletionTimestamp is also set. May only be shortened. Read-only.
gracePeriod := int64(25) // your value; the field is *int64, so it needs a pointer
podSpec := &v1.Pod{
    Spec: v1.PodSpec{
        TerminationGracePeriodSeconds: &gracePeriod,
    },
}

_, err = clientset.CoreV1().Pods("namespacename").Create(context.TODO(), podSpec, metav1.CreateOptions{})
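To verify, a quick sketch along the lines the answer suggests (the pod name "mypod" is a placeholder; note that DeletionGracePeriodSeconds only gets populated once a deletion is actually in progress):

created, err := clientset.CoreV1().Pods("namespacename").Get(context.TODO(), "mypod", metav1.GetOptions{})
if err != nil {
    log.Fatalln(err)
}
if created.Spec.TerminationGracePeriodSeconds != nil {
    log.Printf("terminationGracePeriodSeconds: %d", *created.Spec.TerminationGracePeriodSeconds)
}
if created.DeletionGracePeriodSeconds != nil {
    log.Printf("deletionGracePeriodSeconds: %d", *created.DeletionGracePeriodSeconds)
}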
I have implemented leader election using the kubernetes/client-go leader election package. I have 2 replicas. Initially a pod is elected as leader, but after that the same pod is not elected again, and leader election stops after some time. I tried deleting one pod; the newly created pod was then elected as leader, but again, once that pod stopped leading, no pod acts as leader. I am using a ConfigMap for the resource lock. Please help me solve the issue.
func NewElectorWithCallbacks(namespace, configMapName, identity string, ttl time.Duration, client cli.CoreV1Interface, callbacks *leaderelection.LeaderCallbacks) (*leaderelection.LeaderElector, error) {
    hostname, err := os.Hostname()
    if err != nil {
        return nil, err
    }

    broadcaster := record.NewBroadcaster()
    broadcaster.StartLogging(log.Printf)
    broadcaster.StartRecordingToSink(&cli.EventSinkImpl{Interface: client.Events(namespace)})
    recorder := broadcaster.NewRecorder(scheme.Scheme, api.EventSource{Component: identity, Host: hostname})

    cmLock := &resourcelock.ConfigMapLock{
        Client: client,
        ConfigMapMeta: meta.ObjectMeta{
            Namespace: namespace,
            Name:      configMapName,
        },
        LockConfig: resourcelock.ResourceLockConfig{
            Identity:      identity,
            EventRecorder: recorder,
        },
    }

    if callbacks == nil {
        callbacks = NewDefaultCallbacks()
    }

    config := leaderelection.LeaderElectionConfig{
        Lock:          cmLock,
        LeaseDuration: ttl,
        RenewDeadline: ttl / 2,
        RetryPeriod:   ttl / 4,
        Callbacks:     *callbacks,
    }

    return leaderelection.NewLeaderElector(config)
}
config, err = rest.InClusterConfig()
v1Client, err := v1.NewForConfig(config)

callbacks := &leaderelection.LeaderCallbacks{
    OnStartedLeading: func(context.Context) {
        // do the work
        fmt.Println("selected as leader")
        // Wait forever
        select {}
    },
    OnStoppedLeading: func() {
        fmt.Println("Pod stopped leading")
    },
}

elector, err := election.NewElectorWithCallbacks(namespace, electionName, hostname, ttl, v1Client, callbacks)
elector.Run(context.TODO())
You can deploy the pods as a StatefulSet with a headless Service. Please refer to the docs.
Why?
Pods are created sequentially, so you can define the first pod launched as the master and the rest as slaves.
Pods in a StatefulSet have a unique ordinal index and a stable network identity. For example:
kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
Even if the pod web-0 restarts, its name and FQDN never change.
web-0.nginx.default.svc.cluster.local
<pod_name>.<service_name>.<namespace>.svc.cluster.local
I have only highlighted a few points; please go through the docs completely.