How do I programmatically determine whether a pod is in CrashLoopBackOff? - go

Is there a way to programmatically determine whether a pod is in CrashLoopBackOff?
I tried the following:
pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
if err != nil {
    return err
}
for _, item := range pods.Items {
    log.Printf("found pod %v with state %v reason %v and phase %v that started at %v",
        item.Name, item.Status.Message, item.Status.Reason, item.Status.Phase, item.CreationTimestamp.Time)
}
However, this just prints blanks for the state and reason, though it does print the phase.

To clarify, I am posting a community wiki answer.
It's hiding in ContainerStateWaiting.Reason:
kubectl get po -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'
although be aware that it only shows up there intermittently, since the backoff is a transient state of the container; a more robust programmatic approach is to also examine the restartCount and the error state, as in the sketch below.
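A minimal sketch of that check, building on the List call from the question (treating the literal waiting reason "CrashLoopBackOff" plus the restart count as "crashlooping" is an assumption, not an official API contract):
for _, pod := range pods.Items {
    for _, cs := range pod.Status.ContainerStatuses {
        // The waiting reason is only populated while the container sits in backoff.
        if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
            log.Printf("pod %s container %s is in CrashLoopBackOff (restarts so far: %d)",
                pod.Name, cs.Name, cs.RestartCount)
        }
    }
}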
See also this repository.


How should we handle errors when returning them in Go? [closed]

Which is better?
1
err := MakeFood()
if err != nil {
    return err
}
2
err := MakeFood()
if err != nil {
    logs.Errorf("failed to make food, error=%v", err)
    return err
}
3
err := MakeFood()
if err != nil {
    return fmt.Errorf("failed to make food, error=%w", err)
}
4
var ErrMakeFood = errors.New("failed to make food")

err := MakeFood()
if err != nil {
    return ErrMakeFood // we discard err
}
In my practice, return fmt.Errorf("xxx, error=%w", err) is my favorite: it builds a cascaded error string as the error propagates back up the call chain.
But it seems that in the Go standard library source, a plain return err is normal and tidy.
And sometimes we are advised to use static error declarations (example 4 above); this is the rule enforced by the golangci-lint linter goerr113.
None of them is better; they are all different and each can be appropriate in different cases.
The 1st one,
err := MakeFood()
if err != nil {
    return err
}
is OK when MakeFood is known to provide its own context for all errors it returns (or wraps and "bubbles up"); for instance, when any error it ever returns is already wrapped in the "failed to make food: " context.
It also fits other cases where there is no need to provide additional context.
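For illustration, here is a hypothetical MakeFood written that way (preheatOven is an invented helper, not part of the question):
func MakeFood() error {
    if err := preheatOven(); err != nil {
        // every error leaves MakeFood already wrapped in its own context,
        // so the caller's bare `return err` loses nothing
        return fmt.Errorf("failed to make food: %w", err)
    }
    // ...
    return nil
}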
The second and third are appropriate in the context of a bigger function that performs a conceptually single (atomic) task consisting of multiple steps (making food being one such step); in this case it's customary to wrap the errors reported by each of the steps in the same context, leading to context "chaining", where each call in the call chain adds its own context.
For instance, if our bigger function processes a food delivery order, it could look like this:
func ProcessFoodDeliveryOrder(address string) error {
    meal, err := MakeFood()
    if err != nil {
        return fmt.Errorf("failed to process order: %w", err)
    }
    err = Deliver(meal, address)
    if err != nil {
        return fmt.Errorf("failed to process order: %w", err)
    }
    return nil
}
The question of whether one should use the newer Go facility of actually nesting errors via the %w formatting verb, as opposed to merely providing textual (message) context, is an open one: not all errors need to be wrapped (and then later tested with errors.Is or handled with errors.As). There's no need to use that facility just because you know it exists and looks cool: it has its runtime cost.
Otherwise, the error message is better formatted using the plain ": %s" combo, as in fmt.Errorf("new context: %s", err), because chaining the errors this way produces trivially understandable, easily read "action: reason" text. By the way, this is what The Book recommends.
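A small, self-contained illustration of the practical difference, in case you do need to inspect the cause later: %w preserves the wrapped error value for errors.Is, while %s keeps only its text:
package main

import (
    "errors"
    "fmt"
)

func main() {
    base := errors.New("boom")
    wrapped := fmt.Errorf("context: %w", base) // nests the error value
    textual := fmt.Errorf("context: %s", base) // copies only its message

    fmt.Println(errors.Is(wrapped, base)) // true
    fmt.Println(errors.Is(textual, base)) // false
}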
The fourth approach is called "a sentinel error". It's okay to use if the users of your package will actually make use of such an error variable, either directly or by testing it with errors.Is.
But also consider that it may be better to assert the behaviour of errors rather than compare them to a sentinel variable; consider reading this and have a look at net.Error.
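As a sketch of what asserting behaviour looks like (err stands for whatever error your call returned; net.Error is the standard-library interface mentioned above):
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
    // the operation timed out; a retry might make sense
}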

What causes 'no such file or directory' after adding a test case and running go test?

The problem
After adding another test function to an existing test file, running go test -v ./... fails with several no such file or directory build errors. The error messages are seemingly unrelated to the changes, however.
The added test case can be found in the relevant code section at the bottom.
The error messages are:
open /tmp/go-build842273301/b118/vet.cfg: no such file or directory
open /tmp/go-build842273301/b155/vet.cfg: no such file or directory
# tornadowarnung.xyz/riotwatch/riot/static
vet: in tornadowarnung.xyz/riotwatch/riot/static, can't import facts for package "encoding/json": open $WORK/b036/vet.out: no such file or directory
# tornadowarnung.xyz/riotwatch/web/server/endpoint/static
vet: open $WORK/b121/vet.cfg: no such file or directory
open /tmp/go-build842273301/b115/vet.cfg: no such file or directory
open /tmp/go-build842273301/b001/vet.cfg: no such file or directory
# tornadowarnung.xyz/riotwatch/web/server
vet: open $WORK/b152/vet.cfg: no such file or directory
# tornadowarnung.xyz/riotwatch/web/server/endpoint/static
vet: open $WORK/b159/vet.cfg: no such file or directory
Because of that, some packages report a failed build:
FAIL tornadowarnung.xyz/riotwatch/riot/static [build failed]
FAIL tornadowarnung.xyz/riotwatch/web/server [build failed]
FAIL tornadowarnung.xyz/riotwatch/web/server/endpoint [build failed]
FAIL tornadowarnung.xyz/riotwatch/web/server/endpoint/static [build failed]
Relevant code
func TestLoader_ProfileIcon(t *testing.T) {
    tempDir := os.TempDir()
    l := Loader{
        profileIconPath: tempDir,
    }
    defer os.RemoveAll(tempDir)
    t.Run("returns expected content", func(t *testing.T) {
        want := bytes.NewBufferString("image data")
        fileName := "123456"
        if err := createTestFile(t, tempDir, fileName, want); err != nil {
            t.Fatal(err)
        }
        got, err := l.ProfileIcon(123456)
        if err != nil {
            t.Error(err)
        }
        if !reflect.DeepEqual(got, want) {
            t.Errorf("got %v, want %v", got, want)
        }
    })
    t.Run("does not panic on missing file", func(t *testing.T) {
        res, err := l.ProfileIcon(-1)
        if err == nil {
            t.Errorf("Expected an error but got error %v and result %v", nil, res)
        }
    })
}

func createTestFile(t *testing.T, tempDir string, fileName string, content *bytes.Buffer) error {
    t.Helper()
    f, err := os.Create(path2.Join(tempDir, fmt.Sprintf("%v.png", fileName)))
    if err != nil {
        return err
    }
    _, err = f.Write(content.Bytes())
    if err != nil {
        return err
    }
    return nil
}
Reproducing the error is difficult
On my Ubuntu machine with Go 1.15 installed, the error only occurs sometimes, e.g. when I clone the repository again or clean the test cache.
When I run the image used in the GitLab job (golang:alpine) locally with the same commands, I cannot reproduce the error reliably: sometimes it occurs, but most of the time it doesn't.
What I've tried
I have tried switching between Go versions 1.13, 1.14, and 1.15, but every version yields the same result.
Switching to other images like golang:latest, golang:1.14, or golang:1.13 doesn't help either.
I've tried googling for the error, but I haven't found any results that are relevant or contain a solution I haven't already tried.
Reverting the offending commit makes the tests pass again. I've also reverted it and slowly reintroduced the changes manually; this makes the problems occur again.
os.TempDir doesn't create a new temporary directory for you, it returns the system's temp dir. By calling os.RemoveAll on it, you're blowing away the entire thing, including some scratch files used by the build and test process.
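A minimal sketch of the fix, assuming Go 1.15+ (where t.TempDir was added): let the testing package hand the test its own private directory and clean it up automatically, instead of reusing and then deleting the system temp dir:
func TestLoader_ProfileIcon(t *testing.T) {
    tempDir := t.TempDir() // fresh per-test directory, removed automatically
    l := Loader{
        profileIconPath: tempDir,
    }
    // no defer os.RemoveAll(tempDir) needed anymore
    // ... rest of the test as before ...
}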
I could reproduce the behaviour on macOS.
It seems like there is something wrong with os.TempDir():
your tests passed when I created the directory myself with os.Mkdir(...).
You should consider creating an issue in the Go repository.

unstructured.UnstructuredList caused lots of reflect.go trace

I'm trying to use unstructured.UnstructuredList to reuse some logic for ConfigMaps and Secrets.
However, after adding ListAndDeployReferredObject, I started to see tons of trace lines like Starting reflector *unstructured.Unstructured in my log file.
Am I doing something odd, or am I missing some setting for using unstructured.Unstructured?
Thanks in advance.
func (r *ReconcileSubscription) ListAndDeployReferredObject(instance *appv1alpha1.Subscription, gvk schema.GroupVersionKind, refObj referredObject) error {
    insName := instance.GetName()
    insNs := instance.GetNamespace()

    uObjList := &unstructured.UnstructuredList{}
    uObjList.SetGroupVersionKind(gvk)

    opts := &client.ListOptions{Namespace: insNs}
    err := r.Client.List(context.TODO(), uObjList, opts)
    if err != nil && !errors.IsNotFound(err) {
        klog.Errorf("Failed to list referred objects with error %v ", err)
        return err
    }
    // other logic...
}
I0326 23:05:58.955589 95169 reflector.go:120] Starting reflector *unstructured.Unstructured (10m0s) from pkg/mod/k8s.io/client-go#v0.0.0-20191016111102-bec269661e48/tools/cache/reflector.go:96
...
I0326 23:15:18.718932 95169 reflector.go:158] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/client-go#v0.0.0-20191016111102-bec269661e48/tools/cache/reflector.go:96
I figured out that these prints are normal: they appear because we are using the dynamic client on our controller for caches.

Deleting a job with client-go

Does anyone have any examples of how to delete k8s jobs using client-go?
I've tried the following, which deletes the job but leaves the pods behind:
policy := metav1.DeletePropagationBackground
deleteOptions := metav1.DeleteOptions{PropagationPolicy: &policy}
if err := clientset.BatchV1().Jobs("default").Delete("example-job", &deleteOptions); err != nil {
    fmt.Println(err)
}
I've tried using the DeletePropagationForeground option, but that resulted in nothing being deleted.
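For comparison, a hedged sketch against newer client-go versions (where Delete takes a context and DeleteOptions by value), using foreground propagation so the dependent pods are removed before the Job itself disappears:
policy := metav1.DeletePropagationForeground
err := clientset.BatchV1().Jobs("default").Delete(
    context.TODO(),
    "example-job",
    metav1.DeleteOptions{PropagationPolicy: &policy},
)
if err != nil {
    fmt.Println(err)
}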

Linux Network namespaces unexpected behavior

So I've been playing around with Network namespaces recently.
I put together some simple code, built it, and noticed something very weird happening.
The code is as follows:
package main

import (
    "fmt"
    "log"
    "net"
    "os"
    "path"
    "syscall"
)

const (
    NsRunDir  = "/var/run/netns"
    SelfNetNs = "/proc/self/ns/net"
)

func main() {
    netNsPath := path.Join(NsRunDir, "myns")
    os.Mkdir(NsRunDir, 0755)

    if err := syscall.Mount(NsRunDir, NsRunDir, "none", syscall.MS_BIND, ""); err != nil {
        log.Fatalf("Could not create Network namespace: %s", err)
    }

    fd, err := syscall.Open(netNsPath, syscall.O_RDONLY|syscall.O_CREAT|syscall.O_EXCL, 0)
    if err != nil {
        log.Fatalf("Could not create Network namespace: %s", err)
    }
    syscall.Close(fd)

    if err := syscall.Unshare(syscall.CLONE_NEWNET); err != nil {
        log.Fatalf("Could not clone new Network namespace: %s", err)
    }
    if err := syscall.Mount(SelfNetNs, netNsPath, "none", syscall.MS_BIND, ""); err != nil {
        log.Fatalf("Could not Mount Network namespace: %s", err)
    }
    if err := syscall.Unmount(netNsPath, syscall.MNT_DETACH); err != nil {
        log.Fatalf("Could not Unmount new Network namespace: %s", err)
    }
    if err := syscall.Unlink(netNsPath); err != nil {
        log.Fatalf("Could not Unlink new Network namespace: %s", err)
    }

    ifcs, _ := net.Interfaces()
    for _, ifc := range ifcs {
        fmt.Printf("%#v\n", ifc)
    }
}
Now, when you run this code on Trusty 14.04 several times in a row, you will see something weird happening.
Sometimes it prints all of the host's interfaces, and sometimes it prints just a loopback interface; in other words, the range loop at the end of the program sometimes executes while the new namespace is still attached and sometimes after it has already been detached.
I'm totally confused about why this is happening; I'm thinking it's either my code, or I'm missing something about the program's execution or some kernel detail.
Any help would be massively appreciated.
Thanks
Update1:
So it seems the "strange" behaviour has to do with how Go schedules goroutines across OS threads: namespaces apply per OS thread, so if the scheduler moves the goroutine to a different thread partway through, the remaining syscalls run in a different namespace. If you lock the code's execution to one OS thread, you get consistent results. You can do this by adding the following runtime package call:
runtime.LockOSThread()
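A minimal sketch of the placement (pinning at the top of main is an assumption; the essential point is that the lock is taken on the same goroutine before any of the namespace syscalls):
func main() {
    // Unshare and the bind mounts affect only the calling OS thread,
    // so pin this goroutine to its thread for the whole sequence.
    runtime.LockOSThread()
    defer runtime.UnlockOSThread()

    // ... namespace setup and interface listing from above ...
}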
However, this still does not solve my problem; I now think it all comes down to understanding namespaces properly. I need to look more into that.
Update2:
To give you a little more context on why you should use the OS thread lock above when running a bunch of syscalls and seeing similar "strangeness", give this blog post a read. It describes the Go runtime and scheduler. It was written for Go 1.1, but it still gives a very good overview.
