I am building a Go Kubernetes operator. I have used kubebuilder to create it.
I want to store some internal details in the CRD status. I have tried the following.
To update the whole resource:
if err = r.Client.Update(ctx, upCRD); err != nil {
return ctrl.Result{}, client.IgnoreNotFound(err)
}
And to update only the status:
if err = r.Status().Update(ctx, upCRD); err != nil {
return reconcile.Result{}, client.IgnoreNotFound(err)
}
The status struct is defined as follows:
type HAAuditStatus struct {
	ChaosStrategyCron  cron.EntryID       `json:"chaosStrategyCron,omitempty"`
	TestStatus         TestStatus         `json:"testStatus,omitempty"`
	MetricStatus       MetricStatus       `json:"metricStatus,omitempty"`
	RoundRobinStrategy RoundRobinStrategy `json:"roundRobinStrategy,omitempty"`
	FixedStrategy      FixedStrategy      `json:"fixedStrategy,omitempty"`
	NextChaosDateTime  int64              `json:"nextChaosDateTime,omitempty"`
	Created            bool               `json:"created"` // note: encoding/json has no "default" tag option
}
No error is raised, and the spec fields that are modified are actually persisted, but not the status fields, whose values remain at their defaults at the next reconciling step.
I have looked at other issues on GitHub and Stack Overflow, but none of the suggestions solved my issue, and I can't figure out what the problem is. For a bigger picture, you can refer to the repo where the operator is located.
Any suggestion is very welcomed :)
I might have found the reason why the status was not updated.
Before updating the status, I was also updating the spec fields (to give the user some feedback on created resources).
The issue is that the spec update triggers a new reconciliation, so the instructions after this update (among them the status update) were never executed.
I realized that using the spec to give feedback to the user is not suitable, and that events are more appropriate for this purpose.
I'm using the gqlgen package to create a GraphQL server. However, I can't limit the number of aliases in a query. FixedComplexityLimit limits the complexity of the query, which is not what I need. The JS community has this thanks to the graphql-no-alias npm package; I need that kind of thing.
I want to limit the number of aliases to prevent a batching attack. Let me explain with an example.
query {
  productsByIds(productIds: "353573855") {
    active {
      id
      path
      title
    }
    productsByIds2: productsByIds(productIds: "353573855") {
      active {
        id
        path
        title
      }
    }
  }
}
The above query should give an error, while the one below should work. This is just an example; I have more complex schemas, which is why the complexity limit didn't work for me.
query {
  productsByIds(productIds: "353573855") {
    active {
      id
      path
      title
    }
    products {
      active {
        id
        path
        title
      }
    }
  }
}
I'm afraid you have to come up with something of your own for that. If you think the request itself or the response could become too large, you can limit them in your router config. For example, with fiber you could do:
routerConfig := fiber.Config{ReadBufferSize: maxRequestSize, WriteBufferSize: maxResponseSize}
router := fiber.New(routerConfig)
router.Post("/graphql", adaptor.HTTPHandler(gqlHandler))
If it's really just the aliases you want to prevent, you need to parse the request. You can do so in custom middleware before the request gets passed to the gqlHandler (advantage: you can stop processing the request entirely as soon as you see an alias; disadvantage: you're basically duplicating code from a library, and the request gets parsed again later if you keep the standard gqlHandler). Or, and that's what I propose, you check the already-parsed request:
import gqlLib "github.com/99designs/gqlgen/graphql"
...
oCtx := gqlLib.GetOperationContext(ctx)
fragmentToSelections := getFragmentsSelectionsByName(oCtx) // helper that maps fragment names to their selections
selectionSet := oCtx.Operation.SelectionSet
An alias can be detected by having an Alias that differs from the Name:
In this example the selection set is the query root: selectionSet[0] is an unaliased field, while selectionSet[1] is an aliased one.
I want to read all entities from a Datastore kind (around 6 entities/records).
I have a Datastore kind that is keyed on a weird type that I am trying to understand. I can't find anything unique in the key to perform a query on.
The table looks like this:
[Screenshot: the GCP Datastore records I want to read into my Go app]
When I click on a record, it looks like this:
[Screenshot: the key literal for one record, used from here on to try to get the records in the Go app]
I can perform an ancestor query in the console like this:
[Screenshot: the GCP Datastore queried using an ancestor query]
Great! So now I want to retrieve this data from my Golang app. But how?
I see a lot of solutions online that use q.Get(...) // where q is a *Query struct
None of these solutions work for me because they import google.golang.org/appengine/datastore. I understand that this package is legacy and deprecated, so I want a solution that imports cloud.google.com/go/datastore instead.
I tried something along these lines but didn't get much luck:
[Screenshot: first attempt, using GetAll with a query]
I tried this next:
[Screenshot: second attempt, trying an ancestor query, not ready yet]
Lastly, I tried to get a single record directly:
[Screenshot: third attempt, getting the record directly]
In all cases, my err is not nil, and the dts that should be populated by the Datastore query is also nil.
Any guidance to help me understand how to query on this key type? Am I missing something fundamental about the way this table is keyed and queried?
Thank you
It seems you are just missing your Namespace:
// Merchant struct
type MerchantDetails struct {
	MEID   string
	LinkTo *datastore.Key
	Title  string
}

// Slice to read the results into
var tokens []MerchantDetails

// Ancestor key to filter by
parentKey := datastore.NameKey("A1_1113", "activate", nil)
parentKey.Namespace = "Devs1"

// The call using the new datastore client: basically query.Run(),
// but via client.GetAll()
keys, err := helpers.DatastoreClient.GetAll(
	helpers.Ctx,
	datastore.NewQuery("A1_1112").Ancestor(parentKey).Namespace("Devs1"),
	&tokens,
)
if err != nil {
	return "", err
}

// Print all keys from the found values
fmt.Printf("keys: %v", keys)
I am using Golang and Firego to connect to Firebase. I am trying to search for an admin with Email: john#gmail.com. The following is my database structure.
For this I have tried:
dB.Child("CompanyAdmins").Child("Info").OrderBy("Email").EqualTo("john#gmail.com").Value(&result)
but it does not produce the expected result. How can I do this?
While #dev.bmax has identified the problem correctly, the solution is simpler. You can specify the path of a property to order on:
dB.Child("CompanyAdmins")
.OrderBy("Info/Email")
.EqualTo("john#gmail.com")
.Value(&result)
Update (2017-02-10):
Full code I just tried:
f := firego.New("https://stackoverflow.firebaseio.com", nil)
var result map[string]interface{}
if err := f.Child("42134844/CompanyAdmins").OrderBy("Info/Email").EqualTo("john#gmail.com").Value(&result); err != nil {
log.Fatal(err)
}
fmt.Printf("%s\n", result)
This prints:
map[-K111111:map[Info:map[Email:john#gmail.com]]]
Which is the exact place where I put the data.
Update (2017-02-13):
This is the index I have defined:
"CompanyAdmins": {
".indexOn": "Info/Email"
}
If this doesn't work for you, please provide a similarly complete snippet that I can test.
Can you put the Info data directly into the CompanyAdmins structure? That way, your query will work.
CompanyAdmins
  - id
    - Email: "johndon#gmail.com"
    - Settings:
        - fields
The problem with your query is that Info is not a direct child of CompanyAdmins.
You could use the email as the key instead of an auto-generated one when you insert values. That way, you can access the admin directly:
dB.Child("CompanyAdmins").Child("john#gmail.com").Child("Info")
Otherwise, you need to restructure the database. Your order-by field (the email) should be one level higher, as Rodrigo Vinicius suggests. Then your query will change to:
dB.Child("CompanyAdmins").OrderBy("Email").EqualTo("john#gmail.com")
I want to allow users to sign up using their GitHub account and display all of their private and public repositories. I am able to get the token from GitHub and fetch repositories (both public and private), but the problem is that not all repositories are returned (i.e. some repositories are not fetched).
I am using golang for the server-side implementation.
I am using this method to get the repositories.
By default, all the commands that accept a ListOptions argument have a PerPage attribute. To get all the data, you'll have to iterate through the pages using the Page attribute until the number of results you get is less than PerPage.
In Go-ish pseudo-code, it'd look like this:
totalResults := []Result{}
for page := 0; ; page++ {
	results := fetchPage(page) // fetch the current page
	totalResults = append(totalResults, results...)
	if len(results) < perPage {
		break
	}
}
You can see the ListOptions struct defined here.
As robbrit pointed out, to get all repos we have to use the PerPage option, because by default only 30 repos are returned. That solved my problem.
I have the following two structs with a many-to-many relationship.
type Message struct {
	gorm.Model
	Body         string     `tag:"body" schema:"body"`
	Locations    []Location `tag:"locations" gorm:"many2many:message_locations;"`
	TimeSent     time.Time  `tag:"timesent"`
	TimeReceived time.Time  `tag:"timereceived"`
	User         User
}

type Location struct {
	gorm.Model
	PlaceID string  `tag:"loc_id" gorm:"unique"`
	Lat     float64 `tag:"loc_lat"`
	Lng     float64 `tag:"loc_lng"`
}
If I create a Message with DB.Create(my_message), everything works fine: the Message is created in the DB, along with a Location, and the join table message_locations is filled with the respective IDs of the Message and the Location.
What I was expecting, though, was that if the Location already exists in the DB (based on the place_id field, which is passed on), gorm would create the Message, retrieve the Location ID, and populate message_locations. That's not what is happening.
Since the PlaceID must be unique, gorm finds that a duplicate key value violates unique constraint "locations_place_id_key" and aborts the transaction.
If, on the other hand, I make the PlaceID not unique, gorm creates the message all right, with the association, but that creates another, duplicate entry for the Location.
I can test whether the location already exists before trying to save the message:
existsLoc := Location{}
DB.Where("place_id = ?", mssg.Locations[0].PlaceID).First(&existsLoc)
then, if it does exist, switch the association off:
DB.Set("gorm:save_associations", false).Create(mssg)
DB.Create(mssg)
The message is saved without gorm complaining, but then message_locations is not filled.
I could fill it "manually" since I've retrieved the Location ID when testing for its existence, but it seems to me it kind of defeats the purpose of using gorm in the first place.
I'm not sure what the right way to proceed might be. I might be missing something obvious, I suspect maybe something's wrong with the way I declared my structs? Hints welcome.
UPDATE 2016/03/25
I ended up doing the following, which I'm pretty sure is not optimal. If you have a better idea, please chime in.
After testing if the location already exists and it does:
// in a transaction
tx := DB.Begin()
// create the message with association saving disabled
if errMssgCreate := tx.Set("gorm:save_associations", false).Create(mssg).Error; errMssgCreate != nil {
tx.Rollback()
log.Println(errMssgCreate)
}
// then create the association with existing location
if errMssgLocCreate := tx.Model(&mssg).Association("Locations").Replace(&existsLoc).Error; errMssgLocCreate != nil {
tx.Rollback()
log.Println(errMssgLocCreate)
}
tx.Commit()
In my situation I was using a UUID for the ID, and I coded a BeforeCreate hook to generate it. When saving a new association there was no need for the hook to create a new ID, but it did so anyway (which feels like it could be a bug?).
Note that it did this even when using "association mode" to append a new relationship; the behaviour was not limited to calling Create with a nested association.
It took me several hours to debug this because when I inspected the contents of the associated records they matched exactly the instances that I had just created.
In other words:
I made a bunch of Foo
I made some Bar, and tried to attach just certain Foo to each
The Foo in the Bar relationship had the same reference as the objects I had just created.
Removing the BeforeCreate hook makes the code behave as I'd expect. Happily, I was already in the habit of manually creating a UUID whenever needed instead of relying on the hook, so removing it didn't hurt.
I've pasted a minimally reproducible example at https://pastebin.com/wV4h38Qz
package models

import (
	"time"

	"github.com/google/uuid"
	"gorm.io/gorm"
)

type Model struct {
	ID        uuid.UUID `gorm:"type:char(36);primary_key"`
	CreatedAt time.Time
	UpdatedAt time.Time
	DeletedAt gorm.DeletedAt
}

// BeforeCreate will set a UUID rather than a numeric ID.
func (m *Model) BeforeCreate(tx *gorm.DB) (err error) {
	m.ID = uuid.New()
	return
}
gorm:"many2many:message_locations;save_association:false"

This gets closer to what you would like to have. You need to place it in your struct definition for Message. The field is then assumed to already exist in the db, and only the associations table will be populated with data.