Go client: How to describe a node using client-go

I want to use the Go client to describe a node. To be specific, I want to list the node condition types and their statuses, and also events.
Edit: I was able to describe the node and get the node conditions, but not events or cpu/memory.

I found the code below to get node conditions and their statuses, but not events.
nodes, _ := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
for _, node := range nodes.Items {
    fmt.Printf("%s\n", node.Name)
    for _, condition := range node.Status.Conditions {
        fmt.Printf("\t%s: %s\n", condition.Type, condition.Status)
    }
}
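For the cpu/memory and events parts, here is a minimal sketch that extends the loop above. It assumes the same clientset, and the event field selector is my assumption of how node events are usually recorded; adjust it if your cluster stores them differently.

for _, node := range nodes.Items {
    // Allocatable (and Capacity) hold cpu and memory as resource.Quantity values.
    fmt.Printf("%s\tcpu: %s\tmemory: %s\n",
        node.Name,
        node.Status.Allocatable.Cpu().String(),
        node.Status.Allocatable.Memory().String())

    // List events that reference the node; "" lists across all namespaces to be safe.
    events, err := clientset.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{
        FieldSelector: "involvedObject.kind=Node,involvedObject.name=" + node.Name,
    })
    if err != nil {
        fmt.Printf("\tcould not list events: %v\n", err)
        continue
    }
    for _, e := range events.Items {
        fmt.Printf("\t%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
    }
}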

Related

Extract Prometheus Metrics in Go

I'm new to Golang. What I am trying to do is query Prometheus and save the query result in an object (such as a map) that holds all the timestamps and values of the metric.
I started from this example code with only a few changes: https://github.com/prometheus/client_golang/blob/master/api/prometheus/v1/example_test.go
func getFromPromRange(start time.Time, end time.Time, metric string) model.Value {
    client, err := api.NewClient(api.Config{
        Address: "http://localhost:9090",
    })
    if err != nil {
        fmt.Printf("Error creating client: %v\n", err)
        os.Exit(1)
    }

    v1api := v1.NewAPI(client)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    r := v1.Range{
        Start: start,
        End:   end,
        Step:  time.Second,
    }
    result, warnings, err := v1api.QueryRange(ctx, metric, r)
    if err != nil {
        fmt.Printf("Error querying Prometheus: %v\n", err)
        os.Exit(1)
    }
    if len(warnings) > 0 {
        fmt.Printf("Warnings: %v\n", warnings)
    }
    fmt.Printf("Result:\n%v\n", result)
    return result
}
The result that is printed is for example:
"TEST{instance="localhost:4321", job="realtime"} =>\n21 #[1597758502.337]\n22 #[1597758503.337]...
These are actually the correct values and timestamps that are on Prometheus. How can I insert these timestamps and values into a map object (or another type of object that I can then use in code)?
The result coming from QueryRange has the type model.Matrix, which is a slice of pointers of type *model.SampleStream. As your example contains only one SampleStream, we can access the first one directly.
A SampleStream has a Metric and Values of type []model.SamplePair. What you are aiming for is the slice of sample pairs, which we can iterate over to build, for instance, a map:
mapData := make(map[model.Time]model.SampleValue)
for _, val := range result.(model.Matrix)[0].Values {
    mapData[val.Timestamp] = val.Value
}
fmt.Println(mapData)
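If the query can return more than one series, you can extend the same idea and key an outer map by each metric's string form. This is my own extension of the answer's snippet, not part of the original:

data := make(map[string]map[model.Time]model.SampleValue)
if matrix, ok := result.(model.Matrix); ok {
    for _, stream := range matrix {
        series := make(map[model.Time]model.SampleValue)
        for _, pair := range stream.Values {
            series[pair.Timestamp] = pair.Value
        }
        // stream.Metric.String() gives e.g. TEST{instance="localhost:4321", job="realtime"}
        data[stream.Metric.String()] = series
    }
}
fmt.Println(data)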
You have to know the type of result you are getting back. For example, model.Value can be of type Scalar, Vector, Matrix or String. Each of these types has its own way of exposing the data and timestamps. For example, a Vector is a slice of Sample values, which contain the data you're looking for. The godocs and the GitHub repo for the Prometheus Go client have really great documentation if you want to dive deeper.
Maybe you can find your answer in this issue:
https://github.com/prometheus/client_golang/issues/194
switch val.Type() {
case model.ValScalar:
    scalarVal := val.(*model.Scalar)
    // handle scalar stuff
    _ = scalarVal
case model.ValVector:
    vectorVal := val.(model.Vector)
    for _, elem := range vectorVal {
        // do something with each element in the vector
        _ = elem
    }
}

golang errgroup example without channels

I saw the errgroup example in the godoc, and it confuses me that it simply assigns each result into the shared results slice instead of using a channel in each search goroutine. Here's the code:
Google := func(ctx context.Context, query string) ([]Result, error) {
    g, ctx := errgroup.WithContext(ctx)

    searches := []Search{Web, Image, Video}
    results := make([]Result, len(searches))
    for i, search := range searches {
        i, search := i, search // https://golang.org/doc/faq#closures_and_goroutines
        g.Go(func() error {
            result, err := search(ctx, query)
            if err == nil {
                results[i] = result
            }
            return err
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err
    }
    return results, nil
}
I'm not sure whether there is any reason or implicit rule that guarantees this is correct. Thanks!
The intent here is to make searches and results congruent. The result for the Web search is always at results[0], the result for the Image search always at results[1], etc. It also makes for a simpler example, because there is no need for an additional goroutine that consumes a channel.
If the goroutines sent their results into a channel, the result order would be unpredictable. If predictable result order is not a property you care about, feel free to use a channel.
There is secret sauce in this code that creates siloing:
results := make([]Result, len(searches))
^^^^ ^^^^^^^^^^^^^
for i, search := ... {
i, search := i, search
^^^^^^^^^^
g.Go {
results[i] = result
^^^^^^^^^^
}
We know how big the result set is going to be, so we pre-allocate all the slots before starting any goroutines. This eliminates any contention over the slice object itself:
make(.., len(searches))
^^^^ ^^^^^^^^^^^^^
We then copy the index number and search value into per-iteration variables captured by each closure, so there is no contention over the variables shared between the loop and the goroutines:
i, search := i, search
And finally, each worker operates on a singular slot in the pre-sized slice:
results[i] = result
The workers only read the shared "results" slice header to locate their own element (results[i]); each one writes exclusively to its own slot, so no extra synchronization is needed beyond g.Wait().
This particular pattern is limiting: you can't use any of the results until all the workers have completed. So ask yourself what you're going to do next when deciding whether to use this or a channel-based pipeline workflow.
results := getSearchResults(searches)
statistics := analyzeResults(results)
for stats := range statistics {
    fmt.Printf("{%s}\n", stats.String())
}
If the analysis of a given result is independent of any other, this is a good candidate for a channel-based workflow.
But if the analysis depends on ordering, or the results depend on each other, then you may not have any choice but to serialize the flow.
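For comparison, here is a minimal sketch of the channel-based variant. Search, Result, Web, Image and Video are the same placeholders as in the errgroup example above; note that the result order now depends on which worker finishes first:

GoogleCh := func(ctx context.Context, query string) ([]Result, error) {
    g, ctx := errgroup.WithContext(ctx)
    searches := []Search{Web, Image, Video}
    // Buffered so workers never block, even though we only drain after Wait.
    resultCh := make(chan Result, len(searches))

    for _, search := range searches {
        search := search
        g.Go(func() error {
            result, err := search(ctx, query)
            if err != nil {
                return err
            }
            resultCh <- result
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err
    }
    close(resultCh)

    // Drain the channel; unlike the slice version, the order is not fixed.
    results := make([]Result, 0, len(searches))
    for r := range resultCh {
        results = append(results, r)
    }
    return results, nil
}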

golang net module (LookupSRV)

I'm very new to golang; I have some experience with Python, but not on this level per se. I am creating an application called "digall" that makes it easy for a user to see the active DNS records when checking a domain name.
In the application I am using LookupSRV, which I seem to have some issues with:
func srvRecord(query string) {
    service := "sipfederationtls"
    protocol := "tcp"
    fmt.Printf("\n[+] SRV Record(s)\n")
    //srvMap := ["sipfederationtls", "autodiscover", "VLMCS"]
    cname, addresses, err := net.LookupSRV(service, protocol, query)
    if err != nil {
        fmt.Printf("[!] This feature is currently under development, thus not ready yet.\n")
    }
    fmt.Printf("cname : %s \n", cname)
    for i := 0; i < len(addresses); i++ {
        fmt.Printf("addrs[%d].Target : %s \n", i, addresses[i].Target)
        fmt.Printf("addrs[%d].Port : %d \n", i, addresses[i].Port)
        fmt.Printf("addrs[%d].Priority : %d \n", i, addresses[i].Priority)
        fmt.Printf("addrs[%d].Weight : %d \n", i, addresses[i].Weight)
    }
}
As you can see, the variable "service" serves as the prefix of the SRV record. My only problem is that I want to check multiple prefixes of this record, namely "sipfederationtls", "autodiscover" and "VLMCS".
What I am asking is: how do I make this function sift through these prefixes and return the ones that work? (The ones that error out will be handled by err and my fantastic error message.)
I am aware that this is a noob question, and like I said, I am very new to golang. I would appreciate any tips you guys could give me.
Here is the full source of the application: http://dpaste.com/3X24ZYR
Thank you.
You can't query multiple services at once with LookupSRV, just as you can't use dig to query several services at once.
You are better off creating a slice of the service names:
services := [...]string{"service1", "service2", "service3"}
And then iterate over it and call LookupSRV for each service:
for _, service := range services {
    cname, addrs, err := net.LookupSRV(service, "tcp", "your.domain.name")
    // error handling
}
Also when iterating over the lookup result, it is better to use the range keyword:
for _, record := range addrs {
    fmt.Printf("Target: %s:%d\n", record.Target, record.Port)
}
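Putting it together, a minimal sketch of the question's function extended to sift through the three prefixes might look like this. The name srvRecords is hypothetical; the service names come from the question, and lookups for prefixes without a record simply produce an error that we report and skip:

func srvRecords(domain string) {
    services := []string{"sipfederationtls", "autodiscover", "VLMCS"}
    fmt.Printf("\n[+] SRV Record(s)\n")
    for _, service := range services {
        cname, addrs, err := net.LookupSRV(service, "tcp", domain)
        if err != nil {
            // Prefixes with no record error out; report and move on.
            fmt.Printf("[!] no SRV record for _%s._tcp.%s: %v\n", service, domain, err)
            continue
        }
        fmt.Printf("cname : %s\n", cname)
        for i, record := range addrs {
            fmt.Printf("addrs[%d]: %s:%d (priority %d, weight %d)\n",
                i, record.Target, record.Port, record.Priority, record.Weight)
        }
    }
}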

Convert array of strings to field name

Newbie question:
I want to print various fields of a type that comes from a library (is "library" the correct term? reflect.TypeOf(servers) gives []lib.Server).
I want to do something like this, but it obviously does not work:
servers, err := GetClient().GetServers() // call to external API
serverVariables := []string{}
serverVariables = append(serverVariables, "Name")
serverVariables = append(serverVariables, "IPAddress")

for _, server := range servers {
    for _, element := range serverVariables {
        fmt.Println(server.element)
    }
}
What I can already do is the following (though I want to do it using the approach above):
servers, err := GetClient().GetServers() // call to external API
for _, server := range servers {
    fmt.Println(server.Name)
    fmt.Println(server.IPAddress)
}
giving the following output:
ServerNameOne
192.168.0.1
ServerNameTwo
192.168.0.2
Reflection is what you probably want to use:
for _, server := range servers {
    v := reflect.ValueOf(server)
    for _, element := range serverVariables {
        fmt.Println(v.FieldByName(element))
    }
}
You should also change serverVariables initialization to be serverVariables := []string{}
Playground example: https://play.golang.org/p/s_kzIJ7-B7
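One pitfall to be aware of: FieldByName returns the zero reflect.Value when the name doesn't match any field, so a misspelled entry in serverVariables prints an "invalid value" placeholder instead of failing loudly. A small, hypothetical sketch that guards against that (Server here just stands in for lib.Server):

package main

import (
    "fmt"
    "reflect"
)

// Server stands in for lib.Server from the question.
type Server struct {
    Name      string
    IPAddress string
}

func printFields(server Server, names []string) {
    v := reflect.ValueOf(server)
    for _, name := range names {
        f := v.FieldByName(name)
        if !f.IsValid() {
            fmt.Printf("%s: no such field\n", name)
            continue
        }
        fmt.Printf("%s: %v\n", name, f.Interface())
    }
}

func main() {
    s := Server{Name: "ServerNameOne", IPAddress: "192.168.0.1"}
    printFields(s, []string{"Name", "IPAddress", "Hostname"}) // "Hostname" doesn't exist
}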
It seems to me that you have experience in a dynamic language like Python or JavaScript. Go is compiled and statically typed. Besides reflection being slower, when you use it the compiler can't help you find basic errors in your code, and, most importantly, you lose the type of the accessed variable.
More info at http://blog.golang.org/laws-of-reflection
So I strongly recommend that you keep your current approach:
for _, server := range servers {
    fmt.Println(server.Name)
    fmt.Println(server.IPAddress)
}

Insert a thousand nodes into neo4j using golang

I am importing data into neo4j using neoism, and I have some issues importing big data: 1000 nodes take about 8s. Here is the part of the code that imports 100 nodes.
It's quite basic code and needs improvement; can anyone help me improve it?
var wg sync.WaitGroup
for _, itemProps := range items {
    wg.Add(1)
    go func(i interface{}) {
        defer wg.Done()
        s := time.Now()
        cypher := neoism.CypherQuery{
            Statement: fmt.Sprintf(`
                CREATE (%v)
                SET i = {Props}
                RETURN i
            `, ItemLabel),
            Parameters: neoism.Props{"Props": i},
        }
        if err := database.ExecuteCypherQuery(cypher); err != nil {
            utils.Error(fmt.Sprintf("error ImportItemsNeo4j! %v", err))
            return
        }
        utils.Info(fmt.Sprintf("import Item success! took: %v", time.Since(s)))
    }(itemProps)
}
wg.Wait()
AFAIK neoism still uses the old APIs; you should use cq instead: https://github.com/go-cq/cq
You should also batch your creates, i.e. either send multiple statements per request (e.g. 100 statements per request), or, even better, send a list of parameters to a single Cypher query, e.g. where {data} is [{id:1},{id:2},...]:
UNWIND {data} AS props
CREATE (n:Label) SET n = props
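Applied to the code from the question, a minimal sketch of the single-query UNWIND approach might look like this. It reuses the question's items, ItemLabel, database.ExecuteCypherQuery and utils helpers, so treat it as an illustration under those assumptions rather than a drop-in replacement:

// Collect all item properties into one parameter list.
batch := make([]interface{}, 0, len(items))
for _, itemProps := range items {
    batch = append(batch, itemProps)
}

s := time.Now()
cypher := neoism.CypherQuery{
    Statement: fmt.Sprintf(`
        UNWIND {data} AS props
        CREATE (n:%v)
        SET n = props
    `, ItemLabel),
    Parameters: neoism.Props{"data": batch},
}
if err := database.ExecuteCypherQuery(cypher); err != nil {
    utils.Error(fmt.Sprintf("error ImportItemsNeo4j! %v", err))
}
utils.Info(fmt.Sprintf("import Items success! took: %v", time.Since(s)))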
