golang/gorilla: updating session expiry? - session

I know how to use gorilla under golang to manage sessions. But what I'm trying to accomplish is to optionally set the session expiry time to a later date at run time, depending upon various application conditions. I haven't been able to figure out how to update this expiry time.
Consider the following code fragment ...
skey := "some sort of secret key"
sname := "some sort of session name"
session_store := sessions.NewCookieStore([]byte(skey))
session_store.Options = &sessions.Options{
    MaxAge: 300,
}
// `r` is previously defined as the current *http.Request
sess, err := session_store.Get(r, sname)
As written, sess will expire 300 seconds after it was initialized. But how can I extend the lifetime of sess before this much time passes, so that its expiry will then occur at a later time?
Thank you in advance.

Related

How to start & stop heartbeat per session using context.WithCancel?

I'm currently implementing the Golang client for TypeDB and am struggling with their session-based heartbeat convention. Usually, you implement heartbeat per client, so that's relatively easy: just run a goroutine in the background and send a heartbeat every few seconds.
TypeDB, however, chose to implement heartbeat (they call it pulse) on a per-session basis, which means every time a new session gets created, I have to start monitoring that session with a separate goroutine. Conversely, if the client closes a session, I have to stop the monitoring. What's particularly ugly: I also have to check for stalled sessions every once in a while. There is a GH issue to switch over to per-client heartbeat, but no ETA, so I have to make session heartbeat work to prevent server-side session termination.
So far, my solution:
Create a new session
Open that session & check for error
If no error, add session to a hashmap keyed by session ID
This seems to work for now. Code, just for context is here:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session.go
For monitoring each session, I am mulling over two issues:
Channel close across multiple goroutines is a bit tricky and may lead to race conditions.
I would need some kind of error group to catch heartbeat failures, e.g. in case the server shuts down or a network link fails.
With all that in mind, I believe a context.WithCancel might be a safe & sane solution.
What I came up so far is this:
Pass the global context as parameter to the heartbeat function
Create a new context WithCancel for each session calling heartbeat
Run heartbeat in a goroutine until either cancel gets called (by stopMonitoring) or an error occurs
What's not so clear to me is: how do I track all the cancel functions returned from each tracked session, to ensure I am cancelling the right goroutine matching the session to close?
Thank you for any hint to solve this.
The code:
func (s SessionManager) startMonitorSession(sessionID []byte) {
    // How do I track each goroutine per session?
}

func (s SessionManager) stopMonitorSession(sessionID []byte) {
    // How do I call the correct cancel function to stop the goroutine matching the session?
}

func (s SessionManager) runHeartbeat(ctx context.Context, sessionID []byte) context.CancelFunc {
    // Create a new context, with its cancellation function, from the original context
    ctx, cancel := context.WithCancel(ctx)
    go func() {
        select {
        case <-ctx.Done():
            fmt.Println("Stopped monitoring session: ")
        default:
            err := s.sendPulseRequest(sessionID)
            // If this operation returns an error,
            // cancel all operations using the local context created above
            if err != nil {
                cancel()
            }
            fmt.Println("done")
        }
    }()
    // return the cancel function for the call site to invoke at a later stage
    return cancel
}
func (s SessionManager) sendPulseRequest(sessionID []byte) error {
    mtd := "sendPulse: "
    req := requests.GetSessionPulseReq(sessionID)
    res, pulseErr := s.client.client.SessionPulse(s.client.ctx, req)
    if pulseErr != nil {
        dbgPrint(mtd, "Heartbeat error. Close session")
        return pulseErr
    }
    if !res.Alive {
        dbgPrint(mtd, "Server not alive anymore. Close session")
        closeErr := s.CloseSession(sessionID)
        if closeErr != nil {
            return closeErr
        }
    }
    // no error
    return nil
}
Update:
Thanks to the comment(s), I managed to solve the bulk of the issue by wrapping the session & CancelFunc in a dedicated struct, called TypeDBSession.
That way, the stop function simply pulls the CancelFunc from the struct, calls it, and stops the monitoring goroutine. With some more tweaking, tests seem to pass, although this is not concurrency safe for the time being.
That being said, this was a non-trivial issue to solve; again, thanks to the comments!
If anyone is open to suggesting some code improvements, especially w.r.t. making this concurrency safe, feel free to comment here or file a GH issue / PR.
SessionType:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_type.go
SessionMonitoring:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_monitor.go
Tests:
https://github.com/marvin-hansen/typedb-client-go/tree/main/test/client/session
My two cents:
You may need to run the heartbeat repeatedly. Use a for loop with a time.Ticker around the select.
Store a map of session ID -> func() to track all the cancellable contexts. Since a []byte cannot be a map key, convert the ID to a string.

Should there be a new datastore.Client per HTTP request?

The official Go documentation on the datastore package (client library for the GCP datastore service) has the following code snippet for demonstration:
type Entity struct {
    Value string
}

func main() {
    ctx := context.Background()
    // Create a datastore client. In a typical application, you would create
    // a single client which is reused for every datastore operation.
    dsClient, err := datastore.NewClient(ctx, "my-project")
    if err != nil {
        // Handle error.
    }

    k := datastore.NameKey("Entity", "stringID", nil)
    e := new(Entity)
    if err := dsClient.Get(ctx, k, e); err != nil {
        // Handle error.
    }

    old := e.Value
    e.Value = "Hello World!"
    if _, err := dsClient.Put(ctx, k, e); err != nil {
        // Handle error.
    }

    fmt.Printf("Updated value from %q to %q\n", old, e.Value)
}
As one can see, it states that the datastore.Client should ideally only be instantiated once in an application. Now given that the datastore.NewClient function requires a context.Context object does it mean that it should get instantiated only once per HTTP request or can it safely be instantiated once globally with a context.Background() object?
Each operation requires a context.Context object again (e.g. dsClient.Get(ctx, k, e)) so is that the point where the HTTP request's context should be used?
I'm new to Go and can't really find any online resources which explain something like this very well with real world examples and actual best practice patterns.
You may use any context.Context for the datastore client creation; it may be context.Background(), and that's completely fine. Client creation may be lengthy: it may require connecting to a remote server, authenticating, fetching configuration, etc. If your use case has limited time, you may pass a context with a timeout to abort the operation. Likewise, if creation takes longer than the time you have, you may use a context with cancel and abort at will. These are just options which you may or may not use, but the "tools" are given to you via context.Context.
Later, when you use the datastore.Client while serving (HTTP) client requests, using the request's context is reasonable: if a request gets cancelled, then so will its context, and so will the datastore operation you issue. Rightfully so, because if the client cannot see the result, there's no point completing the query. By terminating the query early, you avoid consuming certain resources (e.g. datastore reads) and lower the server's load (by aborting jobs whose results will never be sent back to the client).

When create JWT token inside loop getting same token in jwt-go

I am creating JWT tokens using the jwt-go library. Later I wrote a script to load test, and I noticed that when I send many concurrent requests I get the same token. To investigate, I created tokens inside a for loop, and the result is the same.
The library that I use is https://github.com/dgrijalva/jwt-go, go version is 1.12.9.
expirationTime := time.Now().Add(time.Duration(60) * time.Minute)
for i := 1; i < 5; i++ {
    claims := &jwt.StandardClaims{
        ExpiresAt: expirationTime.Unix(),
        Issuer:    "telescope",
    }
    _token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
    var jwtKey = []byte("secret_key")
    auth_token, _ := _token.SignedString(jwtKey)
    fmt.Println(auth_token)
}
A JWT contains three parts: a mostly-fixed header, a set of claims, and a signature. RFC 7519 has the actual details. If the header is fixed and the claims are identical between two tokens, then the signature will be identical too, and you can easily get duplicated tokens. The two timestamp claims "iat" and "exp" are only at a second granularity, so if you issue multiple tokens with your code during the same second you will get identical results (even if you move the expirationTime calculation inside the loop).
The jwt-go library exports the StandardClaims listed in RFC 7519 §4.1 as a structure, which is what you're using in your code. Digging through the library code, there's nothing especially subtle here: StandardClaims uses ordinary "encoding/json" annotations, and then when a token is written out, the claims are JSON encoded and then base64-encoded. So given a fixed input, you'll get a fixed output.
If you want every token to be "different" in some way, the standard "jti" claim is a place to provide a unique ID. This isn't part of the StandardClaims, so you need to create your own custom claim type that includes it.
type UniqueClaims struct {
    jwt.StandardClaims
    TokenId string `json:"jti,omitempty"`
}
Then when you create the claims structure, you need to generate a unique TokenId yourself.
import (
    "crypto/rand"
    "encoding/base64"
)

bits := make([]byte, 12)
_, err := rand.Read(bits)
if err != nil {
    panic(err)
}
claims := UniqueClaims{
    StandardClaims: jwt.StandardClaims{...},
    TokenId:        base64.StdEncoding.EncodeToString(bits),
}
https://play.golang.org/p/zDnkamwsCi- has a complete example; every time you run it you will get a different token, even if you run it multiple times in the same second. You can base64 decode the middle part of the token by hand to see the claims, or use a tool like the https://jwt.io/ debugger to decode it.
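The determinism described above is easy to see with a hand-rolled HS256 token built from the standard library alone (this is a sketch of the JWT mechanics, not the jwt-go API): encoding/json sorts map keys, so identical claims always serialize identically, and HMAC over identical input is identical.

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// sign builds header.payload.signature by hand, exactly as RFC 7519
// describes: base64url(JSON header) + "." + base64url(JSON claims),
// then an HMAC-SHA256 signature over that string.
func sign(claims map[string]interface{}, key []byte) string {
	enc := base64.RawURLEncoding
	header, _ := json.Marshal(map[string]string{"alg": "HS256", "typ": "JWT"})
	payload, _ := json.Marshal(claims) // map keys are sorted: deterministic
	signingInput := enc.EncodeToString(header) + "." + enc.EncodeToString(payload)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil))
}

func main() {
	key := []byte("secret_key")
	claims := map[string]interface{}{"exp": 1567247048, "iss": "telescope"}
	a := sign(claims, key)
	b := sign(claims, key)
	fmt.Println(a == b) // true: identical claims, identical token

	// Adding a random "jti" claim makes every token unique.
	bits := make([]byte, 12)
	rand.Read(bits)
	claims["jti"] = base64.StdEncoding.EncodeToString(bits)
	c := sign(claims, key)
	fmt.Println(a == c) // false: the jti changed the payload and signature
}
```

There is no hidden randomness anywhere in the pipeline, which is why only a changing claim such as "jti" (or a timestamp crossing a second boundary) changes the output.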
I changed your code:
Moved the calculation of expirationTime into the loop
Added a 1-second delay on each step of the loop
for i := 1; i < 5; i++ {
    expirationTime := time.Now().Add(time.Duration(60) * time.Minute)
    claims := &jwt.StandardClaims{
        ExpiresAt: expirationTime.Unix(),
        Issuer:    "telescope",
    }
    _token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
    var jwtKey = []byte("secret_key")
    auth_token, _ := _token.SignedString(jwtKey)
    fmt.Println(auth_token)
    time.Sleep(time.Duration(1) * time.Second)
}
In this case we get different tokens:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NjcyNDcwNDgsImlzcyI6InRlbGVzY29wZSJ9.G7wV-zsCYjysLEdgYAq_92JGDPsgqqOz9lZxdh5gcX8
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NjcyNDcwNDksImlzcyI6InRlbGVzY29wZSJ9.yPNV20EN3XJbGiHhe-wGTdiluJyVHXj3nIqEsfwDZ0Q
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NjcyNDcwNTAsImlzcyI6InRlbGVzY29wZSJ9.W3xFXEiVwh8xK47dZinpXFpKuvUl1LFUAiaLZZzZ2L0
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NjcyNDcwNTEsImlzcyI6InRlbGVzY29wZSJ9.wYUbzdXm_VQGdFH9RynAVVouW9h6KI1tHRFJ0Y322i4
Sorry, I am not a big expert in JWT, and I hope somebody who is will explain this behavior to us from the RFC point of view.
I want to get different tokens, e.g. the same person logs in to the system using different browsers, so I want to keep many tokens.
It is the same user, so we can give him the same token. If we want to give him another one, we need to revoke the previous one, or the client must refresh it.

Reload tensorflow model in Golang app server

I have a Golang app server wherein I keep reloading a saved tensorflow model every 15 minutes. Every API call that uses the tensorflow model takes a read mutex lock, and whenever I reload the model, I take a write lock. Functionality-wise this works fine, but during the model load my API response time increases, as the request threads keep waiting for the write lock to be released. Could you please suggest a better approach to keep the loaded model up to date?
Edit, Code updated
Model Load Code:
tags := []string{"serve"}

// load from updated saved model
var m *tensorflow.SavedModel
var err error
m, err = tensorflow.LoadSavedModel("/path/to/model", tags, nil)
if err != nil {
    log.Errorf("Exception caught while reloading saved model %v", err)
    destroyTFModel(m)
}

if err == nil {
    ModelLoadMutex.Lock()
    defer ModelLoadMutex.Unlock()
    // destroy existing model
    destroyTFModel(TensorModel)
    TensorModel = m
}
Model Use Code(Part of the API request):
config.ModelLoadMutex.RLock()
defer config.ModelLoadMutex.RUnlock()
scoreTensorList, err = TensorModel.Session.Run(
    map[tensorflow.Output]*tensorflow.Tensor{
        UserOp.Output(0): uT,
        DataOp.Output(0): nT,
    },
    []tensorflow.Output{config.SumOp.Output(0)},
    nil,
)
Presumably destroyTFModel takes a long time. You could try this:
ModelLoadMutex.Lock()
old := TensorModel // read the old model under the lock to avoid a race
TensorModel = m    // m is the freshly loaded model
ModelLoadMutex.Unlock()
go destroyTFModel(old)
So destroy after the assignment, and/or destroy on another goroutine if it needs to clean up resources and would otherwise block this response for a long time. I'd look into what you're doing in destroyTFModel and why it is slow, though: does it make network requests to the db or involve the file system? Are you sure there isn't another lock, external to your app, that you're not aware of (for example, if it had to open a file and lock it for reads while destroying this model)?
Instead of using if err == nil { around it, consider returning on error.
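The swap-then-destroy pattern generalizes beyond tensorflow; here is a minimal, self-contained sketch where model stands in for *tensorflow.SavedModel, and reload and use are hypothetical helper names:

```go
package main

import (
	"fmt"
	"sync"
)

// model stands in for *tensorflow.SavedModel in this sketch.
type model struct{ version int }

var (
	mu      sync.RWMutex
	current = &model{version: 1}
)

// reload holds the write lock only for the pointer swap, then tears down
// the old model outside the critical section so readers are barely blocked.
func reload(next *model, destroy func(*model)) {
	mu.Lock()
	old := current
	current = next
	mu.Unlock() // write lock held only for a few instructions

	// Destroy outside the lock (optionally: go destroy(old)) so that
	// slow cleanup never stalls API readers.
	destroy(old)
}

// use is what an API request does: take the read lock, use the model.
func use() int {
	mu.RLock()
	defer mu.RUnlock()
	return current.version
}

func main() {
	fmt.Println(use())
	reload(&model{version: 2}, func(m *model) { fmt.Println("destroyed", m.version) })
	fmt.Println(use())
}
```

The design point is that readers only ever contend with a pointer assignment, never with model loading or destruction.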

Go Gorilla Mux Session Name

I am having a hard time understanding Gorilla mux's session name.
http://www.gorillatoolkit.org/pkg/sessions#CookieStore.Get
var store = sessions.NewCookieStore([]byte("something-very-secret"))

func MyHandler(w http.ResponseWriter, r *http.Request) {
    // Get a session. We're ignoring the error resulted from decoding an
    // existing session: Get() always returns a session, even if empty.
    session, _ := store.Get(r, "session-name")
    // Set some session values.
    session.Values["foo"] = "bar"
    session.Values[42] = 43
    // Save it.
    session.Save(r, w)
}
I want to use session to avoid using global variables between two handlers. So I save the key-value in the shared session and retrieve the value from the session.
And I wonder if I want each user to have its own unique session and its Values, do I need to assign unique session name(session id)? Or the gorilla session handles by itself that each user gets its own session and values?
I wonder if I need to generate session names with unique identifiers.
Thanks
The session data is stored in the client's cookies. So the session you retrieve with store.Get(r, "session-name") is read from that particular client's (request's) cookies. You do not need unique names. The name in this case is the name of the cookie, and since cookies are stored per client, each user automatically gets their own session and values.