I'm using the oauth package "code.google.com/p/goauth2/oauth" with Revel, and it creates a few structures with quite a bit of information in them. I need this information to be persistent throughout the session, but session values can only be of type string. Is there a better way of doing this than the following?
c.Session["AccessToken"] = t.Token.AccessToken
c.Session["RefreshToken"] = t.Token.RefreshToken
...
If not, how do I reassemble the strings into another structure so that I can call Client.Get()?
You can use the json package to "convert" structs to strings and vice versa. Just know that only exported fields are serialized this way.
Since oauth.Token has only exported fields, this will work:
if data, err := json.Marshal(t.Token); err == nil {
c.Session["Token"] = string(data)
} else {
panic(err)
}
And this is how you can reconstruct the token from the session:
if err := json.Unmarshal([]byte(c.Session["Token"]), &t.Token); err != nil {
panic(err)
}
Alternatively, you can save a short string ID in the Session and put the object you actually need into the Cache:
c.Session["id"] = id
go cache.Set("token_"+id, t.Token, 30*time.Minute)
And then use it as follows:
var token oauth.Token
id := c.Session["id"]
if err := cache.Get("token_"+id, &token); err != nil {
// TODO: token not found, do something
}
// TODO: use your token here...
The advantage of this approach is that you do not have to work with the json package explicitly, and the Cache has a few different back-ends out of the box: memory, Redis, and Memcached. Moreover, you do not have the 4K size limit of the cookie-based Session.
https://revel.github.io/manual/cache.html
To give some context, I am trying to verify a JWT signature using a public key. The public key expires after some number of hours, which could change. Now the problem is that if the validation fails, I don't know whether it failed because of an invalid token or because the public key expired.
To solve this, I am doing the following:
Fetch a new key from some URL when there is a validation error
Use the updated key to validate the token
If it still fails, then the JWT validation fails
Code
I have an HTTP handler inside which a new instance of tokenValidator is created and then validate() is called on it. The keystore that is used to create the tokenValidator is initialized outside the handler:
type tokenValidator struct {
keyStore *keystore.KeyStore
}
func (t *tokenValidator) validate() error {
	// get token
	// .....

	// validate token
	err := t.validateSignature(token, publicKeyName)
	if err != nil {
		// fetch new keys and try again
		if err := t.keyStore.UpdateKeys(); err != nil {
			return err
		}
		if err := t.validateSignature(token, publicKeyName); err != nil {
			return fmt.Errorf("pf token signature validation failed: %w", err)
		}
	}
	return nil
}
Keystore
Now, I don't want every failed validation request to fetch the key from the URL; instead, I want all the requests waiting to acquire the lock to use the already-updated key.
So, for example, if there are 100 parallel requests, I want only the request that acquires the lock to update the key. All the others should use the updated key.
type KeyStore struct {
Keys jwks.JWKPublicKeys // map[string]rsa.PublicKey
mu sync.Mutex
isKeyUpdated bool
}
func (k *KeyStore) UpdateKeys() error {
	k.isKeyUpdated = false
	k.mu.Lock()
	defer k.mu.Unlock()
	var err error
	if !k.isKeyUpdated {
		var keysMap *jwks.JWKPublicKeys
		keysMap, err = retrieveKeysFromURL()
		if err == nil {
			k.Keys = *keysMap // update the Keys map
			k.isKeyUpdated = true
		}
	}
	return err
}
I am new to concurrency-related topics and was wondering if this could be improved. Or if there is some other better solution to this problem?
Thanks
The idea that you reload keys when token validation fails is a big security risk. Someone can bombard your API with invalid tokens to launch a DoS attack by forcing you to reload keys in a never-ending loop. Ideally, you should find a way to determine whether a key has expired or not.
You can keep a cache using:
cache map[string]*Key
where
type Key struct {
once sync.Once
key rsa.PublicKey
}
When you expire a key, you can simply do:
cache.cache[id] = &Key{}
which will insert an empty key in the cache. To get the key value from the cache:
func GetPublicKey(id string) rsa.PublicKey {
	cache.Lock()
	k := cache.cache[id]
	if k == nil {
		// Insert an empty key so every caller shares the same sync.Once.
		k = &Key{}
		cache.cache[id] = k
	}
	cache.Unlock()
	k.once.Do(func() {
		k.key = loadPublicKey()
	})
	return k.key
}
This way, everybody will wait until one of the goroutines initializes the key.
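For completeness, here is a sketch of the cache wrapper the snippets above assume; the embedded sync.Mutex guards the map itself, while each Key's sync.Once guards the one-time load:

var cache = struct {
	sync.Mutex
	cache map[string]*Key
}{
	cache: make(map[string]*Key),
}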
I was reading this blog recently and I saw something interesting: the object instance is initialized in the file itself and then accessed everywhere. I found it pretty convenient and was wondering whether it's a best practice.
https://dev.to/hackmamba/build-a-rest-api-with-golang-and-mongodb-gin-gonic-version-269m#:~:text=setup.go%20file%20and%20add%20the-,snippet%20below,-%3A
I'm more used to a pattern where we first create a struct like so:
type Server struct {
config util.Config
store db.Store
tokenMaker token.Maker
router *gin.Engine
}
and then set everything in main:
func NewServer(config util.Config, store db.Store) (*Server, error) {
tokenMaker, err := token.NewPasetoMaker(config.TokenSymmetricKey)
if err != nil {
return nil, fmt.Errorf("cannot create token maker: %w", err)
}
server := &Server{
config: config,
store: store,
tokenMaker: tokenMaker,
}
server.setupRouter()
return server, nil
}
and then the server object is passed everywhere.
What's best? Is it okay to use the pattern mentioned in that blog?
Thank you.
I tried to implement both patterns. The pattern mentioned in the blog seems very convenient to use, as I'm not passing objects around and can easily access the object I'm interested in.
You can follow either of those patterns, but I think it's better to pass the object pointer wherever it's needed. It saves a lot of work and ensures that the object is always up to date.
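As an illustration of the second pattern, handlers can be methods on *Server, so the dependencies travel with the receiver instead of living in package-level globals. A sketch, assuming the Server struct from the question (createUser is a hypothetical handler name):

// setupRouter wires the routes to methods on *Server, so every handler
// reaches config, store, and tokenMaker through its receiver.
func (server *Server) setupRouter() {
	router := gin.Default()
	router.POST("/users", server.createUser)
	server.router = router
}

func (server *Server) createUser(ctx *gin.Context) {
	// server.store, server.tokenMaker, and server.config are all
	// available here without any globals.
}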
I'm trying to write an application in Go that will get all the image vulnerabilities inside a GCP project for me using the Container Analysis API.
The Go client library for this API has the function findVulnerabilityOccurrencesForImage() to do this; however, it requires you to pass the URL of the image you want the vulnerability report for, in the form resourceURL := "https://gcr.io/my-project/my-repo/my-image", plus the projectID. This means that if there are multiple images in your project, you have to list and store them first, and only after that can you call findVulnerabilityOccurrencesForImage() repeatedly to get ALL of the vulnerabilities.
So I need a way to get and store all of the images' URLs inside all of the repos inside a given GCP project, but so far I couldn't find a solution. I can easily do that in the CLI by running the gcloud container images list command, but I don't see how that can be done with an API.
Thank you in advance for your help!
You can use the Cloud Storage package and the Objects method to do so. For example:
import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func GetURLs() ([]string, error) {
	ctx := context.Background()
	// In real code you would typically create one client per process
	// and reuse it rather than creating it on every call.
	client, err := storage.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("creating storage client: %w", err)
	}
	defer client.Close()

	bucket := "bucket-name"
	urls := []string{}
	results := client.Bucket(bucket).Objects(ctx, nil)
	for {
		attrs, err := results.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("iterating results: %w", err)
		}
		urls = append(urls, fmt.Sprint("https://storage.googleapis.com/", bucket, "/", attrs.Name))
	}
	return urls, nil
}
I am trying to make use of this golang package: https://github.com/jefflaplante/sensulib
I want to get all the events from the sensu API. I've followed the example code and modified it slightly so it works:
config := sensu.DefaultConfig()
config.Address = "sensu-url:port"
config.Username = "admin"
config.Password = "password"
// Create a new API Client
sensuAPI, err := sensu.NewAPIClient(config)
if err != nil {
// do some stuff
}
Now I want to grab all the events from the API, and there's a neat function to do that, GetEvents.
However, the function expects a parameter, out, which is an interface. Here's the function itself:
func (c *API) GetEvents(out interface{}) (*http.Response, error) {
resp, err := c.get(EventsURI, out)
return resp, err
}
What exactly is it expecting me to pass here? I guess the function wants to write the results to something, but I have no idea what I'm supposed to call the function with.
I've read a bunch of stuff about interfaces, but it's not getting any clearer. Any help would be appreciated!
The empty interface interface{} is just a placeholder for anything. It's roughly the equivalent of object in Java or C# for instance. It means the library doesn't care about the type of the parameter you are going to pass. For hints about what the library does with that parameter, I suggest you look at the source code.
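In practice, an out parameter like this usually means "a pointer to whatever you want the response decoded into". A minimal sketch, assuming GetEvents unmarshals the JSON response into out; the Event fields below are assumptions for illustration, so adjust them to whatever your Sensu API actually returns:

// Event mirrors (a subset of) the JSON the Sensu events API returns.
// These field names are an assumption.
type Event struct {
	Client string `json:"client"`
	Check  string `json:"check"`
	Output string `json:"output"`
}

var events []Event
// Pass a pointer so GetEvents can write the results into the slice.
resp, err := sensuAPI.GetEvents(&events)
if err != nil {
	// handle the error
}
_ = resp // the raw *http.Response is also returned if you need status codes

for _, e := range events {
	fmt.Println(e.Client, e.Check)
}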
After reading about Google Datastore concepts/theory, I started using the Go datastore package.
Scenario:
Kinds User and LinkedAccount require that every user has one or more linked accounts (yay 3rd party login). For strong consistency, LinkedAccounts will be children of the associated User. New User creation then involves creating both a User and a LinkedAccount, never just one.
User creation seems like the perfect use case for transactions. If, say, LinkedAccount creation fails, the transaction rolls back and fails. This doesn't currently seem possible. The goal is to create a parent and then a child within a transaction.
According to the docs:
All Datastore operations in a transaction must operate on entities in
the same entity group if the transaction is a single group transaction
We want a new User and LinkedAccount to be in the same group, so to me it sounds like Datastore should support this scenario. My fear is that the intended meaning is that only operations on existing entities in the same group can be performed in a single transaction.
tx, err := datastore.NewTransaction(ctx)
if err != nil {
return err
}
incompleteUserKey := datastore.NewIncompleteKey(ctx, "User", nil)
pendingKey, err := tx.Put(incompleteUserKey, user)
if err != nil {
return err
}
incompleteLinkedAccountKey := datastore.NewIncompleteKey(ctx, "GithubAccount", incompleteUserKey)
// also tried PendingKey as parent, but it's a separate struct type
_, err = tx.Put(incompleteLinkedAccountKey, linkedAccount)
if err != nil {
return err
}
// attempt to commit
if _, err := tx.Commit(); err != nil {
return err
}
return nil
From the library source it's clear why this doesn't work: PendingKeys aren't Keys, and incomplete keys can't be used as parents.
Is this a necessary limitation of Datastore or of the library? For those experienced with this type of requirement, did you just sacrifice the strong consistency and make both kinds global?
For Google-ability:
datastore: invalid key
datastore: cannot use pendingKey as type *"google.golang.org/cloud/datastore".Key
One thing to note is that transactions in the Cloud Datastore API can operate on up to 25 entity groups, but this doesn't answer the question of how to create two entities in the same entity group as part of a single transaction.
There are a few ways to approach this (note that this applies to any use of the Cloud Datastore API, not just the gcloud-golang library):
Use a (string) name for the parent key instead of having Datastore automatically assign a numeric ID:
parentKey := datastore.NewKey(ctx, "Parent", "parent-name", 0, nil)
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
Make an explicit call to AllocateIds to have the Datastore pick a numeric ID for the parent key:
incompleteKeys := []*datastore.Key{datastore.NewIncompleteKey(ctx, "Parent", nil)}
completeKeys, err := datastore.AllocateIDs(ctx, incompleteKeys)
if err != nil {
// ...
}
parentKey := completeKeys[0]
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
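Either way, once the parent key is complete, the transaction from the question goes through. A sketch tying it together, where user and linkedAccount are the entities from the question:

tx, err := datastore.NewTransaction(ctx)
if err != nil {
	return err
}
// Both Puts touch the same entity group, since childKey's parent is parentKey.
if _, err := tx.Put(parentKey, user); err != nil {
	return err
}
if _, err := tx.Put(childKey, linkedAccount); err != nil {
	return err
}
if _, err := tx.Commit(); err != nil {
	return err
}
return nil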