I'm using the Operator SDK to build a custom Kubernetes operator. I have created a custom resource definition and a controller using the respective Operator SDK commands:
operator-sdk add api --api-version example.com/v1alpha1 --kind=Example
operator-sdk add controller --api-version example.com/v1alpha1 --kind=Example
Within the main reconciliation loop (for the example above, the auto-generated ReconcileExample.Reconcile method) I have some custom business logic that requires me to query the Kubernetes API for other objects of the same kind that have a certain field value. It's occurred to me that I might be able to use the default API client (that is provided by the controller) with a custom field selector:
func (r *ReconcileExample) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	ctx := context.TODO()
	listOptions := client.ListOptions{
		FieldSelector: fields.SelectorFromSet(fields.Set{"spec.someField": "someValue"}),
		Namespace:     request.Namespace,
	}
	otherExamples := v1alpha1.ExampleList{}
	if err := r.client.List(ctx, &listOptions, &otherExamples); err != nil {
		return reconcile.Result{}, err
	}
	// do stuff...
	return reconcile.Result{}, nil
}
When I run the operator and create a new Example resource, the operator fails with the following error message:
{"level":"info","ts":1563388786.825384,"logger":"controller_example","msg":"Reconciling Example","Request.Namespace":"default","Request.Name":"example-test"}
{"level":"error","ts":1563388786.8255732,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"example-controller","request":"default/example-test","error":"Index with name field:spec.someField does not exist","stacktrace":"..."}
The most important part being
Index with name field:spec.someField does not exist
I've already searched the Operator SDK's documentation on the default API client and learned a bit about the inner workings of the client, but found no detailed explanation of this error or how to fix it.
What does this error message mean, and how can I create this missing index to efficiently list objects by this field value?
The default API client that is provided by the controller is a split client -- it serves Get and List requests from a locally-held cache and forwards other methods like Create and Update directly to the Kubernetes API server. This is also explained in the respective documentation:
The SDK will generate code to create a Manager, which holds a Cache and a Client to be used in CRUD operations and communicate with the API server. By default a Controller's Reconciler will be populated with the Manager's Client which is a split-client. [...] A split client reads (Get and List) from the Cache and writes (Create, Update, Delete) to the API server. Reading from the Cache significantly reduces request load on the API server; as long as the Cache is updated by the API server, read operations are eventually consistent.
To query values from the cache using a custom field selector, the cache needs to have a search index for this field. This indexer can be defined right after the cache has been set up.
To register a custom indexer, add the following code into the bootstrapping logic of the operator (in the auto-generated code, this is done directly in main). This needs to be done after the controller manager has been instantiated (manager.New) and also after the custom API types have been added to the runtime.Scheme:
package main

import (
	k8sruntime "k8s.io/apimachinery/pkg/runtime"

	"example.com/example-operator/pkg/apis/example/v1alpha1"
	// ...
)

func main() {
	// ...
	cache := mgr.GetCache()
	indexFunc := func(obj k8sruntime.Object) []string {
		return []string{obj.(*v1alpha1.Example).Spec.SomeField}
	}
	if err := cache.IndexField(&v1alpha1.Example{}, "spec.someField", indexFunc); err != nil {
		panic(err)
	}
	// ...
}
Once this indexer function is registered, field selectors on spec.someField will work from the local cache as expected.
I was reading this blog recently and I saw something interesting. The object instance is initialized in the file itself and then accessed everywhere. I found it pretty convenient and was wondering if it's the best practice.
https://dev.to/hackmamba/build-a-rest-api-with-golang-and-mongodb-gin-gonic-version-269m#:~:text=setup.go%20file%20and%20add%20the-,snippet%20below,-%3A
I'm more used to a pattern where we first create a struct like so:
type Server struct {
	config     util.Config
	store      db.Store
	tokenMaker token.Maker
	router     *gin.Engine
}
and then set everything in main:
func NewServer(config util.Config, store db.Store) (*Server, error) {
	tokenMaker, err := token.NewPasetoMaker(config.TokenSymmetricKey)
	if err != nil {
		return nil, fmt.Errorf("cannot create token maker: %w", err)
	}

	server := &Server{
		config:     config,
		store:      store,
		tokenMaker: tokenMaker,
	}

	server.setupRouter()
	return server, nil
}
and then the server object is passed everywhere.
What's best? Is it okay to use the pattern mentioned in that blog?
Thank you.
I tried to implement both patterns. The pattern mentioned in the blog seems very convenient to use, as I'm not passing around objects and can easily access the object I'm interested in.
You can follow either of those patterns, but I think it's better to pass the object pointer everywhere it's needed. It saves a lot of work and ensures that the object is always up to date.
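For illustration, here is a minimal sketch of the second pattern, where handlers are methods on *Server and reach the shared dependencies through the receiver rather than through package-level globals. The /users route and the createUser handler are hypothetical, and the usual gin and net/http imports are assumed:

func (server *Server) setupRouter() {
	router := gin.Default()

	// Handlers are methods on *Server, so they can use server.store,
	// server.config and server.tokenMaker without any global state.
	router.POST("/users", server.createUser)

	server.router = router
}

func (server *Server) createUser(ctx *gin.Context) {
	// server.store was injected once in NewServer and is available here
	// through the receiver.
	_ = server.store
	ctx.JSON(http.StatusOK, gin.H{"status": "ok"})
}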
I have the following models
type UsersModel struct {
	db *pgx.Conn
}

func (u *UsersModel) SignupUser(ctx context.Context, payload SignupRequest) (SignupQueryResult, error) {
	_, err := u.db.Exec("...")
	return SignupQueryResult{}, err
}

type SessionsModel struct {
	db *pgx.Conn
}

func (s *SessionsModel) CreateSession(ctx context.Context, payload CreateSessionRequest) error {
	_, err := s.db.Exec("...")
	return err
}
and my service calls UsersModel.SignupUser as follows
type SignupService struct {
	userModel signupServiceUserModel
}

func (ss *SignupService) Signup(ctx context.Context, request SignupRequest) (SignupQueryResult, error) {
	return ss.userModel.SignupUser(ctx, request)
}
Now I need to tie SignupUser and CreateSession together in a transaction instead of running them as isolated operations. I'm not sure what the best way to structure this is, or how to pass the transaction around while keeping DB-specific details abstracted away from the services. Or should I just run the sessions-table insert query (which I'm putting in *SessionsModel.CreateSession) directly in *UsersModel.SignupUser?
For reference, transactions in pgx are started by calling *pgx.Conn.Begin(), which returns a concrete pgx.Tx on which you execute the same functions as you would on *pgx.Conn, followed by *pgx.Tx.Commit() or *pgx.Tx.Rollback().
Questions I have are:
Where to start transaction - model or service?
If in service, how do I do that while abstracting that there's an underlying DB from service?
How do I pass transaction between models?
There is no right or wrong answer here, since there are multiple ways to do it. However, I'll share how I'd do it and why.
Make sure to keep the service layer clean of any concrete DB implementation, so that if you switch to a completely different DB you don't need to change other pieces.
As for the solution, I would create a completely new method called SignupUserAndCreateSession that encloses all the logic you need. I wouldn't worry about having the two original methods combined into one; as I understand it, in this scenario both of them are tightly coupled by design, so this would not be an anti-pattern (a sketch follows below).
I would avoid passing the *pgx.Tx around between methods, since you would then depend on another layer to make sure the transaction is committed or rolled back, and this might cause errors in future implementations.
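A minimal sketch of what that combined method could look like, assuming pgx v4 (with pgx v3 the ctx arguments are dropped); the SQL statements are placeholders:

// SignupUserAndCreateSession creates the user and its session inside a single
// transaction. The transaction never leaves the model layer, so the service
// stays free of any database-specific types.
func (u *UsersModel) SignupUserAndCreateSession(ctx context.Context, payload SignupRequest) (SignupQueryResult, error) {
	tx, err := u.db.Begin(ctx)
	if err != nil {
		return SignupQueryResult{}, err
	}
	// Rollback is a no-op if the transaction has already been committed.
	defer tx.Rollback(ctx)

	if _, err := tx.Exec(ctx, "INSERT INTO users ..."); err != nil {
		return SignupQueryResult{}, err
	}
	if _, err := tx.Exec(ctx, "INSERT INTO sessions ..."); err != nil {
		return SignupQueryResult{}, err
	}

	if err := tx.Commit(ctx); err != nil {
		return SignupQueryResult{}, err
	}
	return SignupQueryResult{}, nil
}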
I am trying to integrate Elastic APM and Sentry into my website using Buffalo. The interesting files are as follows:
handlers/sentryHandler.go
package handlers

import (
	sentryhttp "github.com/getsentry/sentry-go/http"
	"github.com/gobuffalo/buffalo"
)

func SentryHandler(next buffalo.Handler) buffalo.Handler {
	handler := buffalo.WrapBuffaloHandler(next)
	sentryHandler := sentryhttp.New(sentryhttp.Options{})
	return buffalo.WrapHandler(sentryHandler.Handle(handler))
}
handlers/elasticAPMHandler.go
package handlers

import (
	"fmt"

	"github.com/gobuffalo/buffalo"
	"go.elastic.co/apm/module/apmhttp"
)

func ElasticAPMHandler(next buffalo.Handler) buffalo.Handler {
	fmt.Println("AAA")
	handler := apmhttp.Wrap(buffalo.WrapBuffaloHandler(next))
	return buffalo.WrapHandler(handler)
}
actions/app.go
package actions

import (
	"github.com/gobuffalo/buffalo"
	"github.com/gobuffalo/envy"
	forcessl "github.com/gobuffalo/mw-forcessl"
	paramlogger "github.com/gobuffalo/mw-paramlogger"
	"github.com/unrolled/secure"

	"my_website/handlers"
	"my_website/models"

	"github.com/gobuffalo/buffalo-pop/pop/popmw"
	csrf "github.com/gobuffalo/mw-csrf"
	i18n "github.com/gobuffalo/mw-i18n"
	"github.com/gobuffalo/packr/v2"
)

func App() *buffalo.App {
	if app == nil {
		app = buffalo.New(buffalo.Options{
			Env:         ENV,
			SessionName: "_my_website_session",
		})

		// Automatically redirect to SSL
		app.Use(forceSSL())

		// Catch errors and send them to Sentry.
		app.Use(handlers.SentryHandler)

		// Get tracing information and send it to Elastic.
		app.Use(handlers.ElasticAPMHandler)

		// Other Buffalo middleware stuff goes here...

		// Routing stuff goes here...
	}

	return app
}
The problem I'm running into is that if I have the Sentry/APM handlers at the top, I get errors like application.html: line 24: "showPagePath": unknown identifier. However, if I move them to just before I set up the routes, I get a no transaction found error. So I'm guessing that the handler wrappers are dropping the buffalo.Context information. What would I need to do to integrate Sentry and Elastic APM in Buffalo, aside from trying to reimplement their wrappers?
So, I'm guessing that the handler wrappers are dropping the buffalo.Context information.
That's correct. The problem is that buffalo.WrapHandler (Source) throws away all of the context other than the underlying http.Request/http.Response:
// WrapHandler wraps a standard http.Handler and transforms it
// into a buffalo.Handler.
func WrapHandler(h http.Handler) Handler {
	return func(c Context) error {
		h.ServeHTTP(c.Response(), c.Request())
		return nil
	}
}
So, what would I need to do to be able to integrate Sentry and Elastic in Buffalo asides from trying to reimplement their wrappers?
I can see two options:
1. Reimplement buffalo.WrapHandler/buffalo.WrapBuffaloHandler to stop throwing away the buffalo.Context. This would involve storing the buffalo.Context in the underlying http.Request's context, and then pulling it out again on the other side instead of creating a whole new context (a sketch of this approach follows below).
2. Implement Buffalo-specific middleware for Sentry and Elastic APM without using the Wrap* functions.
There's an open issue in the Elastic APM agent for the latter option: elastic/apm#39.
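For the first option, here is a minimal sketch of a generic adapter; the WrapHTTPMiddleware name, the ctxKey type, and the exact wiring are my own assumptions, not existing Buffalo API. The idea is to stash the buffalo.Context in the request context before calling the wrapped net/http middleware and to pull it out again on the inside, so the original context is never replaced:

package handlers

import (
	"context"
	"net/http"

	"github.com/gobuffalo/buffalo"
)

// ctxKey is a hypothetical private key type used to stash the buffalo.Context
// in the *http.Request's context.
type ctxKey struct{}

// WrapHTTPMiddleware adapts standard net/http middleware (such as the Sentry
// handler's Handle method) into Buffalo middleware while keeping the original
// buffalo.Context alive across the http.Handler boundary.
func WrapHTTPMiddleware(mw func(http.Handler) http.Handler) buffalo.MiddlewareFunc {
	return func(next buffalo.Handler) buffalo.Handler {
		return func(c buffalo.Context) error {
			var err error

			// The innermost http.Handler recovers the original buffalo.Context
			// and calls the next buffalo.Handler with it instead of building a
			// brand-new context.
			inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				if bc, ok := r.Context().Value(ctxKey{}).(buffalo.Context); ok {
					err = next(bc)
				}
			})

			// Stash the buffalo.Context before handing the request to the
			// wrapped net/http middleware.
			r := c.Request().WithContext(context.WithValue(c.Request().Context(), ctxKey{}, c))
			mw(inner).ServeHTTP(c.Response(), r)
			return err
		}
	}
}

It could then replace the current wrappers, e.g. app.Use(handlers.WrapHTTPMiddleware(sentryHandler.Handle)); for the APM case a small closure around apmhttp.Wrap is needed, since apmhttp.Wrap also takes variadic options.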
I am trying to configure a RabbitMQ broker using the go-micro framework. I have noticed that the broker interface in go-micro has a broker.SubscriberOptions struct which allows configuring the parameters I am looking for (AutoAck, queue name, and so on); however, I am unable to figure out how to pass this when starting a broker.
This is what a simple RabbitMQ go-micro setup looks like:
package main

import (
	"log"

	"github.com/micro/go-micro/server"
	"github.com/micro/go-plugins/broker/rabbitmq"

	micro "github.com/micro/go-micro"
)

func main() {
	// Create a new service. Optionally include some options here.
	service := micro.NewService(
		micro.Name("go-micro-rabbit"),
		micro.Broker(rabbitmq.NewBroker()),
	)

	// Init will parse the command line flags.
	service.Init()

	// Register handler
	proto.RegisterGreeterHandler(service.Server(), new(Greeter))

	micro.RegisterSubscriber("micro-exchange", service.Server(), myFunc, server.SubscriberQueue("my-queue"))

	// Run the server
	if err := service.Run(); err != nil {
		log.Fatal(err)
	}
}
The micro.RegisterSubscriber function takes a list of server.SubscriberOption values but does not allow me to set broker.SubscriberOptions, and rabbitmq.NewBroker allows setting broker.Options but, once again, not broker.SubscriberOptions.
I have dug into the go-micro code but have been unable to figure out how the broker.Subscribe method (which exposes the correct struct) is called, or by whom.
Is this possible at all? Is it maybe something not yet fully fleshed out in the API?
I'm using Go with a Couchbase integration package called go-couchbase. It lets me connect to Couchbase and retrieve data. However, I have a problem sending a start key, a skip value, and a limit value with this API, because I couldn't find any functionality for this myself.
URL: github.com/couchbaselabs/go-couchbase
Is there any way to send these values to Couchbase and retrieve data?
That start key is only mentioned once, as a parameter to a Couchbase view:
// View executes a view.
//
// The ddoc parameter is just the bare name of your design doc without
// the "_design/" prefix.
//
// Parameters are string keys with values that correspond to couchbase
// view parameters. Primitive should work fairly naturally (booleans,
// ints, strings, etc...) and other values will attempt to be JSON
// marshaled (useful for array indexing on view keys, for example).
//
// Example:
//
//   res, err := couchbase.View("myddoc", "myview", map[string]interface{}{
//     "group_level": 2,
//     "start_key":   []interface{}{"thing"},
//     "end_key":     []interface{}{"thing", map[string]string{}},
//     "stale":       false,
//   })
func (b *Bucket) View(ddoc, name string, params map[string]interface{}) (ViewResult, error) {
I suppose the skip one (mentioned in "Pagination with Couchbase") is just another parameter to add to the params map[string]interface{}.
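For example, a minimal sketch; "myddoc", "myview", and the concrete values are placeholders, and the parameter names follow the standard Couchbase view query options:

// bucket is a *couchbase.Bucket obtained elsewhere (connection setup elided).
// List up to 20 rows starting from the given key, skipping the first 10 matches.
res, err := bucket.View("myddoc", "myview", map[string]interface{}{
	"start_key": []interface{}{"thing"},
	"skip":      10,
	"limit":     20,
	"stale":     false,
})
if err != nil {
	log.Fatal(err)
}
for _, row := range res.Rows {
	fmt.Printf("%+v\n", row)
}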