Extract Prometheus Metrics in Go

I'm new to Go. What I am trying to do is query Prometheus and save the query result in an object (such as a map) that holds all timestamps and values of the metric.
I started from this example code with only a few changes (https://github.com/prometheus/client_golang/blob/master/api/prometheus/v1/example_test.go)
func getFromPromRange(start time.Time, end time.Time, metric string) model.Value {
    client, err := api.NewClient(api.Config{
        Address: "http://localhost:9090",
    })
    if err != nil {
        fmt.Printf("Error creating client: %v\n", err)
        os.Exit(1)
    }
    v1api := v1.NewAPI(client)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    r := v1.Range{
        Start: start,
        End:   end,
        Step:  time.Second,
    }
    result, warnings, err := v1api.QueryRange(ctx, metric, r)
    if err != nil {
        fmt.Printf("Error querying Prometheus: %v\n", err)
        os.Exit(1)
    }
    if len(warnings) > 0 {
        fmt.Printf("Warnings: %v\n", warnings)
    }
    fmt.Printf("Result:\n%v\n", result)
    return result
}
The result that is printed is for example:
"TEST{instance="localhost:4321", job="realtime"} =>\n21 #[1597758502.337]\n22 #[1597758503.337]...
These are actually the correct values and timestamps that are on Prometheus. How can I insert these timestamps and values into a map object (or another type of object that I can then use in code)?

The result coming from QueryRange has the type model.Matrix.
A Matrix is a slice of pointers of type *SampleStream. Since your example contains only one SampleStream, we can access the first one directly.
A SampleStream then has a Metric and Values of type []SamplePair. What you are aiming for is that slice of sample pairs, which we can iterate over to build, for instance, a map.
mapData := make(map[model.Time]model.SampleValue)
for _, val := range result.(model.Matrix)[0].Values {
    mapData[val.Timestamp] = val.Value
}
fmt.Println(mapData)
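If the query can return more than one series, the same idea extends to a nested map keyed by the series' label set; a minimal sketch, assuming the result value from the question above:
data := make(map[string]map[model.Time]model.SampleValue)
matrix, ok := result.(model.Matrix)
if !ok {
    fmt.Println("unexpected result type:", result.Type())
}
for _, stream := range matrix {
    series := make(map[model.Time]model.SampleValue, len(stream.Values))
    for _, pair := range stream.Values {
        series[pair.Timestamp] = pair.Value
    }
    // key the outer map by the series' label set, e.g. TEST{instance="localhost:4321", job="realtime"}
    data[stream.Metric.String()] = series
}
fmt.Println(data)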

You have to know the type of the result you're getting back. For example, model.Value can be of type Scalar, Vector, Matrix or String. Each of these types has its own way of exposing the data and timestamps. For example, a Vector holds a slice of Sample values which contain the data you're looking for. The godocs and the GitHub repo for the Prometheus Go client have really great documentation if you want to dive deeper.

Maybe you can find your answer in this issue:
https://github.com/prometheus/client_golang/issues/194
switch {
case val.Type() == model.ValScalar:
    scalarVal := val.(*model.Scalar)
    _ = scalarVal // handle scalar stuff
case val.Type() == model.ValVector:
    vectorVal := val.(model.Vector)
    for _, elem := range vectorVal {
        _ = elem // do something with each element in the vector
    }
    // etc.
}

Related

Passing []net.Conn to io.MultiWriter

I have many net.Conn values and I want to implement a multi-writer, so I can send data to every available net.Conn.
My current approach is to use io.MultiWriter.
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst[0], dst[1])
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
But the problem is that I must specify every net.Conn index in io.MultiWriter, and that is a problem because the slice size is dynamic.
When I try another approach and pass the []net.Conn to io.MultiWriter, like the code below,
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst...)
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
I get the error "cannot use dst (variable of type []net.Conn) as []io.Writer value in argument to io.MultiWriter".
Is there a proper way to handle this case, so I can pass the net.Conn slice to io.MultiWriter?
Thank you.
io.MultiWriter() has a parameter of type ...io.Writer, so you may only expand a slice of type []io.Writer into it.
So first create a slice of the proper type, copy the net.Conn values into it, and pass that:
ws := make([]io.Writer, len(dst))
for i, c := range dst {
    ws[i] = c
}
targets := io.MultiWriter(ws...)
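Putting it together, a sketch of route with that conversion applied (assuming conn is the single source connection being read from):
func route(conn net.Conn, dst []net.Conn) {
    // convert []net.Conn to []io.Writer so it can be expanded into io.MultiWriter
    ws := make([]io.Writer, len(dst))
    for i, c := range dst {
        ws[i] = c
    }
    targets := io.MultiWriter(ws...)
    if _, err := io.Copy(targets, conn); err != nil {
        log.Println(err)
    }
}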

JSON Marshalling losing struct reference

I was working through an example on marshalling and decided to get crafty with it. I was hoping I could pass in a generic interface reference, similar to the json.Unmarshal function, and handle my other test cases by input instead of repeating the same code (yes, I'm coming from Java). Instead, I get back a map instead of a struct. It also appears I have no way to convey down the call what the 'type' of the struct is.
Questions:
How would I be able to get the appropriate struct type back?
Is it even reasonable to take this kind of utility-method approach?
Code:
//test 1 - simple dto
var country models.Country
var outCountry, errorCountry = attemptParse("Simple DTO", "./json/country.json", country)
fmt.Println("Return - ", outCountry, " - ", errorCountry)
...
func attemptParse(test string, dataFile string, v interface{}) (interface{}, error) {
    ...
    err := json.Unmarshal(data, &v)
    if err != nil {
        fmt.Println("Test - ", test, " - fail", err)
        return nil, err
    }
    fmt.Println("Test - ", test, " - result:")
    fmt.Println(v)
    return v, nil
}
I have also tried unmarshalling into nptr := reflect.New(reflect.TypeOf(v)), but that left me with an address I could not resolve.
Output:
Test - Simple DTO - start
Test - Simple DTO - result:
map[Capital:Washington DC Continent:North America Name:USA]
Return - map[Capital:Washington DC Continent:North America Name:USA] - <nil>
Try passing a pointer to the attemptParse() function:
var outCountry, errorCountry = attemptParse("Simple DTO", "./json/country.json", &country)
and change
err := json.Unmarshal(data, &v)
to
err := json.Unmarshal(data, v)
Please see this playground for more details: https://go.dev/play/p/XEN5cZce0tG
I'm not sure what exactly you are trying to test. But I would rather try table-driven tests first: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests.
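For completeness, a minimal sketch of the corrected helper, with the elided parts filled in by assumption (os.ReadFile for loading the file; models.Country is assumed to have exported fields):
func attemptParse(test string, dataFile string, v interface{}) (interface{}, error) {
    data, err := os.ReadFile(dataFile) // assumption: the elided part reads the JSON file from disk
    if err != nil {
        return nil, err
    }
    // v is already a pointer, so pass it straight through to Unmarshal
    if err := json.Unmarshal(data, v); err != nil {
        fmt.Println("Test - ", test, " - fail", err)
        return nil, err
    }
    fmt.Println("Test - ", test, " - result:")
    fmt.Println(v)
    return v, nil
}
Called as attemptParse("Simple DTO", "./json/country.json", &country), it prints and returns the populated struct (through its pointer) instead of a map.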

Golang code is not inserting on BigQuery's table after I have created it from code

I have a BigQuery table with this schema:
name STRING NULLABLE
age INTEGER NULLABLE
amount INTEGER NULLABLE
And I can successfully insert into the table with this code:
ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    log.Fatal(err)
}
u := client.Dataset(dataSetID).Table("test_user").Uploader()
savers := []*bigquery.StructSaver{
    {Struct: test{Name: "Sylvanas", Age: 23, Amount: 123}},
}
if err := u.Put(ctx, savers); err != nil {
    log.Fatal(err)
}
fmt.Printf("rows inserted!!")
This works fine because the table has already been created in BigQuery. What I want to do now is delete the table if it exists and create it again from code:
type test struct {
    Name   string
    Age    int
    Amount int
}

if err := client.Dataset(dataSetID).Table("test_user").Delete(ctx); err != nil {
    log.Fatal(err)
}
fmt.Printf("table deleted")
t := client.Dataset(dataSetID).Table("test_user")
// Infer table schema from a Go type.
schema, err := bigquery.InferSchema(test{})
if err != nil {
    log.Fatal(err)
}
if err := t.Create(ctx,
    &bigquery.TableMetadata{
        Name:   "test_user",
        Schema: schema,
    }); err != nil {
    log.Fatal(err)
}
fmt.Printf("table created with the test schema")
This also works really nicely: it deletes the table and creates it again with the schema inferred from my struct test.
The problem comes when I try to do the above insert after the delete/create process. No error is thrown, but no data is inserted (and the insert works fine if I comment out the delete/create part).
What am I doing wrong?
Do I need to commit the create-table transaction somehow before inserting, or maybe close the database connection?
According to this old answer, it might take up to 2 min for a BigQuery streaming buffer to be properly attached to a deleted and immediately re-created table.
I have run some tests, and in my case it only took a few seconds until the table was available, instead of the 2~5 min reported in other questions. The resulting code is quite different from yours, but the concepts should apply.
What I tried is, instead of inserting the rows directly, to add them to a buffered channel and wait until you can verify that the current table is properly saving values before you start sending them.
I used a much simpler struct to run my tests (so it was easier to write the code):
type row struct {
    ByteField []byte
}
I generated my rows the following way:
func generateRows(rows chan<- *row) {
    for {
        randBytes := make([]byte, 100)
        _, _ = rand.Read(randBytes)
        rows <- &row{randBytes}
        time.Sleep(time.Millisecond * 500) // use whatever frequency you need to insert rows at
    }
}
Notice how I'm sending the rows to the channel. Instead of generating them, you just have to get them from your data source.
The next part is finding a way to check whether the table is properly saving the rows. What I did was try to insert one of the buffered rows into the table, read that row back, and verify that everything is OK. If the row is not properly returned, send it back to the buffer.
func unreadyTable(rows chan *row) bool {
    client, err := bigquery.NewClient(context.Background(), project)
    if err != nil { return true }
    r := <-rows // get a row to try to insert
    uploader := client.Dataset(dataset).Table(table).Uploader()
    if err := uploader.Put(context.Background(), r); err != nil { rows <- r; return true }
    i, err := client.Query(fmt.Sprintf("select * from `%s.%s.%s`", project, dataset, table)).Read(context.Background())
    if err != nil { rows <- r; return true }
    var testRow []bigquery.Value
    if err := i.Next(&testRow); err != nil { rows <- r; return true }
    if reflect.DeepEqual(&row{testRow[0].([]byte)}, r) { return false } // there's probably a better way to check if it's equal
    rows <- r
    return true
}
With a function like that, we only need to add for ; unreadyTable(rows); time.Sleep(time.Second) {} to block until it's safe to insert the rows.
Finally, we put everything together:
func main() {
    // initialize a channel where the rows will be sent
    rows := make(chan *row, 1000) // make it big enough to hold several minutes of rows
    // start generating rows to be inserted
    go generateRows(rows)
    // create the BigQuery client
    client, err := bigquery.NewClient(context.Background(), project)
    if err != nil { /* handle error */ }
    // delete the previous table
    if err := client.Dataset(dataset).Table(table).Delete(context.Background()); err != nil { /* handle error */ }
    // create the new table
    schema, err := bigquery.InferSchema(row{})
    if err != nil { /* handle error */ }
    if err := client.Dataset(dataset).Table(table).Create(context.Background(), &bigquery.TableMetadata{Schema: schema}); err != nil { /* handle error */ }
    // wait for the table to be ready
    for ; unreadyTable(rows); time.Sleep(time.Second) {}
    // once it's ready, upload indefinitely
    for {
        if len(rows) > 0 { // if there are uninserted rows, create a batch and insert them
            uploader := client.Dataset(dataset).Table(table).Uploader()
            insert := make([]*row, min(500, len(rows))) // create a batch of all the rows on buffer, up to 500
            for i := range insert {
                insert[i] = <-rows
            }
            go func(insert []*row) { // do the actual insert async
                if err := uploader.Put(context.Background(), insert); err != nil { /* handle error */ }
            }(insert)
        } else { // if there are no rows waiting to be inserted, wait and check again
            time.Sleep(time.Second)
        }
    }
}
Note: since math.Min() works on float64 rather than int, I had to include func min(a, b int) int { if a < b { return a }; return b }.
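For readability, the same helper written out as a block; note that Go 1.21 and later ship a built-in min, so this is only needed on older toolchains:
// min returns the smaller of two ints (math.Min only accepts float64).
func min(a, b int) int {
    if a < b {
        return a
    }
    return b
}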
Here's my full working example.

Is there a simple way to convert data base rows to JSON in Golang

From what I've seen so far, converting database rows to JSON or to []map[string]interface{} is not simple. I have to create two slices and then loop through the columns and create keys every time.
...Some code
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
    for i := 0; i < count; i++ {
        valuePtrs[i] = &values[i]
    }
    rows.Scan(valuePtrs...)
    entry := make(map[string]interface{})
    for i, col := range columns {
        var v interface{}
        val := values[i]
        b, ok := val.([]byte)
        if ok {
            v = string(b)
        } else {
            v = val
        }
        entry[col] = v
    }
    tableData = append(tableData, entry)
}
Is there any package for this? Or am I missing some basics here?
I'm dealing with the same issue; as far as my investigation goes, it looks like there is no other way.
All the packages that I have seen use basically the same method.
A few things you should know that will hopefully save you time:
- the database/sql package converts all the data to the appropriate types;
- if you are using the mysql driver (go-sql-driver/mysql), you need to add a parameter to your DB string for it to return type time.Time instead of a string (use ?parseTime=true; the default is false); see the DSN sketch after this answer.
You can use tools written by the community to offload the overhead:
- sqlx, a minimalistic wrapper around database/sql, which internally uses a similar method with reflection;
- if you need more functionality, try using an "ORM": gorp, gorm.
If you're interested in diving deeper, check out:
- the use of reflection in the sqlx package, sqlx.go line 560;
- data type conversion in the database/sql package, convert.go line 86.
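To illustrate the parseTime note above, a minimal DSN sketch for go-sql-driver/mysql (credentials, host and database name are placeholders):
// With parseTime=true, DATE/DATETIME/TIMESTAMP columns scan into time.Time
// instead of []byte/string.
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}
defer db.Close()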
One thing you could do is create a struct that models your data.
Note: I am using MS SQL Server.
So let's say you want to get a user:
type User struct {
    ID       int    `json:"id,omitempty"`
    UserName string `json:"user_name,omitempty"`
    ...
}
Then you can do this:
func GetUser(w http.ResponseWriter, req *http.Request) {
    var u User
    params := mux.Vars(req)
    db, err := sql.Open("mssql", "server=ServerName")
    if err != nil {
        log.Fatal(err)
    }
    err1 := db.QueryRow("select Id, UserName from [Your Database].dbo.Users where Id = ?", params["id"]).Scan(&u.ID, &u.UserName)
    if err1 != nil {
        log.Fatal(err1)
    }
    err = json.NewEncoder(w).Encode(&u)
    if err != nil {
        log.Fatal(err)
    }
}
Here are the imports I used
import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"

    _ "github.com/denisenkom/go-mssqldb"
    "github.com/gorilla/mux"
)
This allowed me to get data from the database and get it into JSON.
This takes a while to code, but it works really well.
Not in the Go distribution itself, but there is the wonderful jmoiron/sqlx:
import "github.com/jmoiron/sqlx"
tableData := make([]map[string]interface{}, 0)
for rows.Next() {
entry := make(map[string]interface{})
err := rows.MapScan(entry)
if err != nil {
log.Fatal("SQL error: " + err.Error())
}
tableData = append(tableData, entry)
}
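From there, getting JSON out of the slice of maps is a single call (a short sketch):
jsonBytes, err := json.Marshal(tableData)
if err != nil {
    log.Fatal("JSON error: " + err.Error())
}
fmt.Println(string(jsonBytes))
Keep in mind that columns scanned as []byte are encoded as base64 strings by encoding/json, so you may still want the string conversion shown in the question.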
If you know the data type that you are reading, then you can read into that data type without using a generic interface.
Otherwise, there is no solution, regardless of the language used, due to the nature of JSON itself.
JSON does not carry a description of composite data structures. In other words, JSON is a generic key-value structure. When a parser encounters what is supposed to be a specific structure, there is no identification of that structure in JSON itself. For example, if you have a structure User, the parser would not know how a set of key-value pairs maps to your structure User.
The problem of type recognition is usually addressed with document schema (a.k.a. XSD in XML world) or explicitly through passed expected data type.
One quick way to get an arbitrary, generic []map[string]interface{} from these query libraries is to populate a slice of interface pointers with the same size as the number of columns in the query, and then pass it to the Scan function:
// For example, for the go-mssqldb lib:
queryResponse, err := d.pool.Query(query)
if err != nil {
    return nil, err
}
defer queryResponse.Close()

// Holds all the end-results
results := []map[string]interface{}{}

// Getting details about all the fields from the query
fieldNames, err := queryResponse.Columns()
if err != nil {
    return nil, err
}

// Creating interface-type pointers within a slice of the same
// size as the number of columns we have, so that we can properly
// pass this to the "Scan" function and get all the query parameters back :)
var scanResults []interface{}
for range fieldNames {
    var v interface{}
    scanResults = append(scanResults, &v)
}

// Parsing the query results into the result map
for queryResponse.Next() {
    // This variable will hold the value for all the columns, named by the column name
    rowValues := map[string]interface{}{}

    // Cleaning up old values just in case
    for _, column := range scanResults {
        *(column.(*interface{})) = nil
    }

    // Scan into the array of pointers
    err := queryResponse.Scan(scanResults...)
    if err != nil {
        return nil, err
    }

    // Map the pointers back to their value and the associated column name
    for index, column := range scanResults {
        rowValues[fieldNames[index]] = *(column.(*interface{}))
    }

    results = append(results, rowValues)
}

return results, nil

Go — handling multiple errors elegantly?

Is there a way to clean up this (IMO) horrific-looking code?
aJson, err1 := json.Marshal(a)
bJson, err2 := json.Marshal(b)
cJson, err3 := json.Marshal(c)
dJson, err4 := json.Marshal(d)
eJson, err5 := json.Marshal(e)
fJson, err6 := json.Marshal(f)
gJson, err7 := json.Marshal(g)
if err1 != nil {
    return err1
} else if err2 != nil {
    return err2
} else if err3 != nil {
    return err3
} else if err4 != nil {
    return err4
} else if err5 != nil {
    return err5
} else if err6 != nil {
    return err6
} else if err7 != nil {
    return err7
}
Specifically, I'm talking about the error handling. It would be nice to be able to handle all the errors in one go.
var err error
f := func(dest *[]byte, src interface{}) bool {
    *dest, err = json.Marshal(src)
    return err == nil
}
_ = f(&aJson, a) &&
    f(&bJson, b) &&
    f(&cJson, c) &&
    f(&dJson, d) &&
    f(&eJson, e) &&
    f(&fJson, f) &&
    f(&gJson, g)
return err
Put the results in a slice instead of separate variables, put the initial values in another slice to iterate over, and return during the iteration if there's an error.
var result [][]byte
for _, item := range []interface{}{a, b, c, d, e, f, g} {
    res, err := json.Marshal(item)
    if err != nil {
        return err
    }
    result = append(result, res)
}
You could even reuse an array instead of having two slices.
var values, err = [...]interface{}{a, b, c, d, e, f, g}, error(nil)
for i, item := range values {
    if values[i], err = json.Marshal(item); err != nil {
        return err
    }
}
Of course, this'll require a type assertion to use the results.
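For example, to pull one of the encoded values back out (a sketch):
// values[0] now holds a's JSON encoding as an interface{}; assert it back to []byte.
aJson := values[0].([]byte)
fmt.Println(string(aJson))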
Define a function:
func marshalMany(vals ...interface{}) ([][]byte, error) {
    out := make([][]byte, 0, len(vals))
    for i := range vals {
        b, err := json.Marshal(vals[i])
        if err != nil {
            return nil, err
        }
        out = append(out, b)
    }
    return out, nil
}
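A possible call site for it (a sketch):
encoded, err := marshalMany(a, b, c, d, e, f, g)
if err != nil {
    return err
}
aJson, bJson := encoded[0], encoded[1] // and so on for the rest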
You didn't say anything about how you'd like your error handling to work. Fail one, fail all? First to fail? Collect successes or toss them?
I believe the other answers here are correct for your specific problem, but more generally, panic can be used to shorten error handling while still behaving like a well-mannered library (i.e., not panicking across package boundaries).
Consider:
func mustMarshal(v interface{}) []byte {
    bs, err := json.Marshal(v)
    if err != nil {
        panic(err)
    }
    return bs
}

func encodeAll() (err error) {
    defer func() {
        if r := recover(); r != nil {
            var ok bool
            if err, ok = r.(error); ok {
                return
            }
            panic(r)
        }
    }()
    ea := mustMarshal(a)
    eb := mustMarshal(b)
    ec := mustMarshal(c)
    _, _, _ = ea, eb, ec // do something with the encoded values here
    return nil
}
This code uses mustMarshal to panic whenever there is a problem marshaling a value. But the encodeAll function will recover from the panic and return it as a normal error value. The client in this case is never exposed to the panic.
But this comes with a warning: using this approach everywhere is not idiomatic. It can also be worse, since it doesn't lend itself well to handling each individual error specially; it more or less treats every error the same. But it has its uses when there are tons of errors to handle. As an example, I use this kind of approach in a web application, where a top-level handler can catch different kinds of errors and display them appropriately to the user (or to a log file) depending on the kind of error.
It makes for terser code when there is a lot of error handling, but at the loss of idiomatic Go and handling each error specially. Another down-side is that it could prevent something that should panic from actually panicing. (But this can be trivially solved by using your own error type.)
You can use go-multierror by HashiCorp.
var merr error
if err := step1(); err != nil {
    merr = multierror.Append(merr, err)
}
if err := step2(); err != nil {
    merr = multierror.Append(merr, err)
}
return merr
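If you'd rather avoid the dependency: since Go 1.20 the standard library has errors.Join, which serves the same purpose. A sketch of the same pattern:
var errs []error
if err := step1(); err != nil {
    errs = append(errs, err)
}
if err := step2(); err != nil {
    errs = append(errs, err)
}
return errors.Join(errs...) // returns nil if every element is nil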
You can create a reusable method to handle multiple errors. This implementation only returns the first error encountered, but you could return every error message combined by modifying the following code:
func hasError(errs ...error) error {
    for i := range errs {
        if errs[i] != nil {
            return errs[i]
        }
    }
    return nil
}

aJson, err := json.Marshal(a)
bJson, err1 := json.Marshal(b)
cJson, err2 := json.Marshal(c)
if e := hasError(err, err1, err2); e != nil {
    return e
}
Another perspective on this is to ask not "how" to handle the abhorrent verbosity, but whether we actually "should". This advice is heavily dependent on context, so be careful.
In order to decide whether handling the json.Marshal error is worth it, we can inspect its implementation to see when errors are returned. In order to return errors to the caller and preserve code terseness, json.Marshal uses panic and recover internally in a manner akin to exceptions. It defines an internal helper method which, when called, panics with the given error value. By looking at each call of this function, we learn that json.Marshal errors in the given scenarios:
calling MarshalJSON or MarshalText on a value/field of a type which implements json.Marshaler or encoding.TextMarshaler returns an error—in other words, a custom marshaling method fails;
the input is/contains a cyclic (self-referencing) structure;
the input is/contains a value of an unsupported type (complex, chan, func);
the input is/contains a floating-point number which is NaN or Infinity (these are not allowed by the spec, see section 2.4);
the input is/contains a json.Number string that is an incorrect number representation (for example, "foo" instead of "123").
Now, a usual scenario for marshaling data is creating an API response, for example. In that case, you will 100% have data types that satisfy all of the marshaler's constraints and valid values, given that the server itself generates them. In situations where user-provided input is used, the data should be validated beforehand anyway, so it should still not cause issues for the marshaler. Furthermore, we can see that, apart from the custom marshaler errors, all the other errors occur at runtime because Go's type system cannot enforce the required conditions by itself. With all these points given, the question becomes: given our control over the data types and values, do we need to handle json.Marshal's error at all?
Probably no. For a type like
type Person struct {
    Name string
    Age  int
}
it is now obvious that json.Marshal cannot fail. It is trickier when the type looks like
type Foo struct {
    Data any
}
(any is a new Go 1.18 alias for interface{}) because there is no compile-time guarantee that Foo.Data will hold a value of a valid type—but I'd still argue that if Foo is meant to be serialized as a response, Foo.Data will also be serializable. Infinity or NaN floats remain an issue, but, given the JSON standard limitation, if you want to serialize these two special values you cannot use JSON numbers anyway, so you'll have to look for another solution, which means that you'll end up avoiding the error anyway.
To conclude, my point is that you can probably do:
aJson, _ := json.Marshal(a)
bJson, _ := json.Marshal(b)
cJson, _ := json.Marshal(c)
dJson, _ := json.Marshal(d)
eJson, _ := json.Marshal(e)
fJson, _ := json.Marshal(f)
gJson, _ := json.Marshal(g)
and live fine with it. If you want to be pedantic, you can use a helper such as:
func must[T any](v T, err error) T {
    if err != nil {
        panic(err)
    }
    return v
}
(note the Go 1.18 generics usage) and do
aJson := must(json.Marshal(a))
bJson := must(json.Marshal(b))
cJson := must(json.Marshal(c))
dJson := must(json.Marshal(d))
eJson := must(json.Marshal(e))
fJson := must(json.Marshal(f))
gJson := must(json.Marshal(g))
This will work nicely when you have something like an HTTP server, where each request is wrapped in a middleware that recovers from panics and responds to the client with status 500. It's also where you would care about these unexpected errors: when you don't want the program/service to crash at all. For one-time scripts you'll probably want the operation halted and a stack trace dumped.
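For illustration, a minimal sketch of such a middleware (assuming net/http; the handler name is made up):
func recoverMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if rec := recover(); rec != nil {
                // log the panic and turn it into a 500 instead of crashing the server
                log.Printf("panic serving %s: %v", r.URL.Path, rec)
                http.Error(w, "internal server error", http.StatusInternalServerError)
            }
        }()
        next.ServeHTTP(w, r)
    })
}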
If you're unsure of how your types will be changed in the future, you don't trust your tests, data may not be in your full control, the codebase is too big to trace the data or whatever other reason which causes uncertainty over the correctness of your data, it is better to handle the error. Pay attention to the context you're in!
P.S.: Pragmatically ignoring errors is something you should generally seek out. For example, the Write* methods on bytes.Buffer and strings.Builder never return errors; fmt.Fprintf, with a valid format string and a writer that doesn't return errors, also returns no error; bufio.Writer likewise doesn't, as long as the underlying writer doesn't. You will find that some types implement interfaces with methods that return errors but never actually return any. In these cases, if you know the concrete type, handling errors is unnecessarily verbose and redundant. What do you prefer,
var sb strings.Builder
if _, err := sb.WriteString("hello "); err != nil {
    return err
}
if _, err := sb.WriteString("world!"); err != nil {
    return err
}
or
var sb strings.Builder
sb.WriteString("hello ")
sb.WriteString("world!")
(of course, ignoring that it could be a single WriteString call)?
The given examples write to an in-memory buffer, which can never fail unless the machine is out of memory, an error you cannot handle in Go anyway. Other such situations will surface in your code; blindly handling errors adds little to no value! Caution is key: if an implementation changes and does return errors, you may be in trouble. Standard library or well-established packages are good candidates for eliding error checking, if possible.
