This seems like it should be a common problem.
My Flask application has middleware that commits the SQLAlchemy session after each request (if the request was processed normally) or rolls it back (if request processing failed). I need to implement a cache for some objects in the DB, let's say User.
I'd like to implement manual invalidation: when I change a User object, I remove it from the cache. If the next request needs a User and it's not found in the cache, it retrieves it from the DB and caches it.
OK, so I change a User object and flush it (no commit yet, because the next part of the code can fail and the transaction would have to be rolled back). It seems I can't invalidate the object in the cache immediately, because the transaction's operations are still pending: if somebody else tries to access the User's info, they won't find it in the cache, will retrieve the outdated User from the DB, and will cache that stale copy. So I need to remove the User from the cache only after the transaction commits.
I thought about recording which Users need to be removed from the cache and executing the removal in the middleware, after the DB transaction commits:
class DbSessionMiddleware:
    def __init__(self, flask_app):
        self.app = flask_app.wsgi_app

    def __call__(self, environ, start_response):
        try:
            response = self.app(environ, start_response)
        except BaseException:
            dbs().rollback()
            raise
        else:
            dbs().commit()
            # Check if an error occurred, and reset the cache if not?
            return response
Am I thinking in the right direction? How do I pass data to the middleware?
Maybe use Flask-SQLAlchemy signals for this?
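To make the ordering I have in mind concrete, here is a rough, framework-agnostic sketch of the idea, written in Go purely for illustration (the PendingInvalidations and Cache names are made up and are not part of any library): keys are only collected while the transaction is open, and the cache entries are deleted only once the commit has succeeded.

package main

import "fmt"

// Cache stands in for whatever cache backend is actually used (Redis, memcached, ...).
type Cache interface {
    Delete(key string)
}

// mapCache is a trivial in-memory cache, only for this example.
type mapCache map[string]string

func (c mapCache) Delete(key string) { delete(c, key) }

// PendingInvalidations accumulates cache keys that become stale during a transaction.
type PendingInvalidations struct {
    keys []string
}

// Mark records a key whose cached value will be stale once the transaction commits.
func (p *PendingInvalidations) Mark(key string) { p.keys = append(p.keys, key) }

// AfterCommit is what the "middleware" would call only when the commit succeeded.
func (p *PendingInvalidations) AfterCommit(c Cache) {
    for _, k := range p.keys {
        c.Delete(k)
    }
    p.keys = nil
}

func main() {
    cache := mapCache{"user:42": "cached user"}
    pending := &PendingInvalidations{}

    // ... request handling: user 42 is modified and flushed, but not yet committed ...
    pending.Mark("user:42")

    // ... the transaction commits successfully; only now is the cache touched ...
    pending.AfterCommit(cache)
    fmt.Println(cache) // map[] – the stale entry is removed only after the commit
}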
I placed this code inside a Route::get() closure only so I could test it quickly. This is how it looks:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    $lock = Cache::lock('test', 4);

    if ($lock->get()) {
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        // dump('inside get');
    } else {
        dump('locked');
    }
    // $lock->release();
});
If you hit this route from two browsers at (almost) the same time, they both respond with the result of dump(Cache::get('name'));. Shouldn't the second browser's response be "locked"? When it calls $lock->get(), that call is supposed to return false, because when the second browser reaches this route the lock should still be set.
The same code works just fine if the time required to execute the code after $lock = Cache::lock('test', 4) is less than 4 seconds. If you use sleep($sec) with $sec < 4, you will see that the first browser reaching this route responds with the result of Cache::get('name') and the second browser responds with "locked", as expected.
Can anyone explain why this is happening? Isn't every get() call on that lock, except the first one, supposed to return false for the duration the lock has been set? I used two different browsers, but it works the same with two tabs of the same browser too.
Quote from the 5.6 docs https://laravel.com/docs/5.6/cache#atomic-locks:
To utilize this feature, your application must be using the memcached or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 5.8 docs https://laravel.com/docs/5.8/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, dynamodb, or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 8.0 docs https://laravel.com/docs/8.x/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, redis, dynamodb, database, file, or array cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Apparently, they have been adding support for more drivers for this lock functionality. Check which cache driver you are using and whether it is in the supported list for your Laravel version.
There is likely an atomicity issue here, where the cache driver you are using is not able to lock a file atomically. What should happen is that while one process (i.e. one PHP request) is writing to the lock file, all other processes that require the lock file should at least wait until the lock file is available to be read again. If not, they read the lock file before it has been written to, which obviously causes a race condition.
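To make the atomicity point concrete, here is a small, language-neutral sketch (in Go, just for illustration; the TTLLock type is made up and is not how Laravel implements its locks): the expiry check and the ownership claim happen as a single step under a mutex, and once the TTL has passed, the next caller can take the lock again.

package main

import (
    "fmt"
    "sync"
    "time"
)

// TTLLock is an illustrative lock that is considered free once its TTL has expired.
type TTLLock struct {
    mu        sync.Mutex
    expiresAt time.Time
}

// TryAcquire returns true only for the caller that claims the lock while it is free
// or expired. The check and the claim happen under one mutex, which is what makes
// the operation atomic: two concurrent callers can never both see the lock as free.
func (l *TTLLock) TryAcquire(ttl time.Duration) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    if time.Now().Before(l.expiresAt) {
        return false // still held
    }
    l.expiresAt = time.Now().Add(ttl)
    return true
}

func main() {
    var lock TTLLock
    fmt.Println(lock.TryAcquire(4 * time.Second)) // true  – first caller wins
    fmt.Println(lock.TryAcquire(4 * time.Second)) // false – second caller is "locked"
    time.Sleep(5 * time.Second)                   // sleep past the TTL...
    fmt.Println(lock.TryAcquire(4 * time.Second)) // true  – the lock has expired again
}

Note that the last call succeeds precisely because the sleep outlasted the TTL, which is the situation described in the next answer.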
Coming back to this question I asked: I can now say that the problem I was trying to solve here was not caused by the atomic lock. The problem is the sleep call. If the time passed to sleep is longer than the time the lock will live, then by the time the next request is able to hit the route, the lock will have expired (been released). To see why, say you have defined a route like this:
Route::get('case/{value}', function ($value) {
    if ($value) {
        dump('hit-1');
    } else {
        sleep(5);
        dump('hit-0');
    }
});
and you open two browser tabs with URLs that hit this route, something like:
127.0.0.1:8000/case/0
and
127.0.0.1:8000/case/1
You will see that the first request takes 5 seconds to finish executing, and even if the second request is sent at almost the same time as the first one, it still waits for the first one to finish before it runs. This means the second request lasts 5 seconds (from the first request) plus its own execution time.
Back to the original question: the lock will have expired by the time the second request gets to it, or, put differently, by the time it runs the $lock->get() statement.
I am using Spring Boot (1.4.1) with Hibernate (5.0.1.Final). I noticed that when I try to write to the DB from within a @TransactionalEventListener handler, the call is simply ignored. A read call works just fine.
When I say ignored, I mean there is no write in the DB and there are no logs. I even enabled log4jdbc and there are still no logs, which means no Hibernate session was created. From this I reckon that somewhere in Spring Boot it is identified that this is a transaction event handler, and the write call is ignored.
Here is an example.
// This function is defined in a class marked with @Service
@TransactionalEventListener
open fun handleEnqueue(event: EnqueueEvent) {
    // some code to obtain encodeJobId
    this.uploadService.saveUploadEntity(uploadEntity, encodeJobId)
}

@Service
@Transactional
class UploadService {
    // ..... code

    open fun saveUploadEntity(uploadEntity: UploadEntity, encodeJobId: String): UploadEntity {
        // some code
        return this.save(uploadEntity)
    }
}
Now, if I force a new transaction by annotating saveUploadEntity with @Transactional(propagation = Propagation.REQUIRES_NEW), a new transaction with its own connection is made and everything works fine.
I don't like that there is complete silence in the logs when this write is dropped (again, reads succeed). Is this a known bug?
How can I make the handler start a new transaction? If I put Propagation.REQUIRES_NEW on my handleEnqueue event listener, it does not work.
Besides enabling log4jdbc (which successfully logs reads/writes), I have the following settings in Spring.
Thanks
I ran into the same problem. This behavior is actually mentioned in the documentation of TransactionSynchronization#afterCompletion(int), which is referred to by TransactionPhase.AFTER_COMMIT (the default TransactionPhase attribute of @TransactionalEventListener):
The transaction will have been committed or rolled back already, but the transactional resources might still be active and accessible. As a consequence, any data access code triggered at this point will still "participate" in the original transaction, allowing to perform some cleanup (with no commit following anymore!), unless it explicitly declares that it needs to run in a separate transaction. Hence: Use PROPAGATION_REQUIRES_NEW for any transactional operation that is called from here.
Unfortunately, this seems to leave no other option than to enforce a new transaction via Propagation.REQUIRES_NEW. The problem is that the transactional event listeners are implemented as transaction synchronizations and are hence bound to the transaction. When the transaction is closed and its resources are cleaned up, so are the listeners. There might be a way to use a customized EntityManager which stores events and then publishes them after its close() was called.
Note that you can use TransactionPhase.BEFORE_COMMIT on your @TransactionalEventListener, which will run before the commit of the transaction. This will write your changes to the database, but you won't know whether the transaction you're listening on was actually committed or is about to be rolled back.
I'm having a problem associating a browser_id with a session when using Products.BeakerSessionDataManager. I'm working on Plone 5.
As far as I understand Zope sessions, as soon as .getSessionData() is called on a session data manager, a session data container is created if it did not already exist. Furthermore, this data will contain a token, which is the same as the browser_id associated with the browser making the request. And finally, a cookie is set on the response with the name _ZopeId (and the value is the same as the token). Thus, when I use the default session data manager that comes with Zope, I get this:
ipdb> context.session_data_manager.getSessionData()
id: 14737473151418102847, token: 38878600A7nh90DE9ao, content keys: []
However, when I use Products.BeakerSessionDataManager, the same call gives me this:
ipdb> context.session_data_manager.getSessionData()
{'_accessed_time': 1473745441.437582, '_creation_time': 1473745441.437582}
Moreover, no cookie is set.
Perusing some old Zope docs, I found a reference to getContainerKey(), so I thought that might get me the browser_id. However, the returned value is different on every request, so that does not work. Also, calling .getBrowserIdManager().getBrowserId() on the session_data_manager throws an error, because Beaker does not support browser id managers.
I want to set a cookie, and I want a token. I'm doing this so that I can identify anonymous clients in a voting application, so that they will not cast multiple votes (at least not in the same session - there are other mechanisms to allow voting only when certain other conditions are met).
Am I misunderstanding the machinery, or am I missing something?
Is there a way to delete a value from the session with Revel, the Go web framework?
I have a function to validate a captcha for user input. I set the captcha value in the session, and I want to delete the captcha from the session if the client has done nothing after 1 minute. The code looks like:
time.AfterFunc(time.Minute, func() {
    delete(this.Session, CSecurityCode)
})
But I can still get the old value of the captcha. Why, and how do I fix this?
Can anybody help me?
The Session value is valid only while processing a client request, i.e. between the time you receive the request and the time you respond to it. Its contents are kept in a cookie on the client side, and you get a new Session every time the client connects to your server. Thus, if you keep it for later use (as your AfterFunc suggests, triggered by a timer), anything you do with it will be lost on the next client connection.
In order to achieve what you want to do, you'll need to add a "lastSeen" timestamp to the Session. When a request comes in, check Session["lastSeen"], and if it's older than 1 minute, then you can discard CSecurityCode from it.
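For example, assuming the Session really is just a map[string]string (which is what the delete(this.Session, CSecurityCode) call in the question suggests), a rough sketch of this approach could look like the following; the helper names storeCaptcha and expireCaptchaIfStale, and the captchaIssuedAt key, are made up for illustration:

package main

import (
    "fmt"
    "strconv"
    "time"
)

const CSecurityCode = "captcha"

// storeCaptcha saves the captcha value and records when it was issued.
func storeCaptcha(session map[string]string, code string) {
    session[CSecurityCode] = code
    session["captchaIssuedAt"] = strconv.FormatInt(time.Now().Unix(), 10)
}

// expireCaptchaIfStale is meant to be called at the start of request handling:
// it drops the captcha if it was issued more than a minute ago, instead of
// relying on a timer that fires long after the session cookie has been sent.
func expireCaptchaIfStale(session map[string]string) {
    issued, err := strconv.ParseInt(session["captchaIssuedAt"], 10, 64)
    if err != nil || time.Since(time.Unix(issued, 0)) > time.Minute {
        delete(session, CSecurityCode)
        delete(session, "captchaIssuedAt")
    }
}

func main() {
    session := map[string]string{}
    storeCaptcha(session, "X7K2P")
    expireCaptchaIfStale(session) // freshly issued: the captcha survives this check
    fmt.Println(session[CSecurityCode])
}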
I'm new to the Go language (Golang) and I'm writing a web-based application. I'd like to use session variables, like the kind in PHP (variables that are available from one page to the next and unique for a user session). Is there something like that in Go? If not, how would I go about implementing them myself? Or what alternative methods are there?
You probably want to take a look at gorilla. It has session support as documented here.
Other than that, or possibly one of the other web toolkits for Go, you would have to roll your own.
Possible solutions might be:
goroutine per user session to store session variables in memory.
store your variables in a session cookie.
use a database to store user session data.
I'll leave most of the implementation details of each of those to the reader, but a rough sketch of the in-memory variant follows below.
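For instance, a very rough sketch of the in-memory variant (a random session ID in a cookie, with the per-session variables kept in a server-side map) might look like this; the memoryStore type and the sid cookie name are made up, and a real implementation would also need session expiry, finer-grained locking, Secure cookies, and so on:

package main

import (
    "crypto/rand"
    "encoding/hex"
    "log"
    "net/http"
    "sync"
)

// memoryStore maps a session ID (kept in a cookie) to that session's variables.
type memoryStore struct {
    mu       sync.Mutex
    sessions map[string]map[string]string
}

func newMemoryStore() *memoryStore {
    return &memoryStore{sessions: make(map[string]map[string]string)}
}

// get returns the variables of the session identified by the "sid" cookie,
// creating a new session (and setting the cookie) if none exists yet.
func (s *memoryStore) get(w http.ResponseWriter, r *http.Request) map[string]string {
    s.mu.Lock()
    defer s.mu.Unlock()

    if c, err := r.Cookie("sid"); err == nil {
        if vars, ok := s.sessions[c.Value]; ok {
            return vars
        }
    }
    buf := make([]byte, 16)
    if _, err := rand.Read(buf); err != nil {
        panic(err)
    }
    id := hex.EncodeToString(buf)
    vars := make(map[string]string)
    s.sessions[id] = vars
    http.SetCookie(w, &http.Cookie{Name: "sid", Value: id, Path: "/", HttpOnly: true})
    return vars
}

func main() {
    store := newMemoryStore()
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        sess := store.get(w, r)
        sess["visits"] += "x" // crude per-session visit counter
        w.Write([]byte("visits this session: " + sess["visits"]))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}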
Here's an alternative in case you just want session support without a complete web toolkit.
https://github.com/bpowers/seshcookie
Here's another alternative (disclosure: I'm the author):
https://github.com/icza/session
Quoting from its doc:
This package provides an easy-to-use, extensible and secure session implementation and management. Package documentation can be found at godoc.org:
https://godoc.org/github.com/icza/session
This is "just" an HTTP session implementation and management, you can use it as-is, or with any existing Go web toolkits and frameworks.
Overview
There are 3 key players in the package:
Session is the (HTTP) session interface. We can use it to store and retrieve constant and variable attributes from it.
Store is a session store interface which is responsible to store sessions and make them retrievable by their IDs at the server side.
Manager is a session manager interface which is responsible to acquire a Session from an (incoming) HTTP request, and to add a Session to an HTTP response to let the client know about the session. A Manager has a backing Store which is responsible to manage Session values at server side.
Players of this package are represented by interfaces, and various implementations are provided for all these players.
You are not bound by the provided implementations, feel free to provide your own implementations for any of the players.
Usage
Usage can't be simpler than this. To get the current session associated with the http.Request:
sess := session.Get(r)
if sess == nil {
    // No session (yet)
} else {
    // We have a session, use it
}
To create a new session (e.g. on a successful login) and add it to an http.ResponseWriter (to let the client know about the session):
sess := session.NewSession()
session.Add(sess, w)
Let's see a more advanced session creation: let's provide a constant attribute (for the lifetime of the session) and an initial, variable attribute:
sess := session.NewSessionOptions(&session.SessOptions{
    CAttrs: map[string]interface{}{"UserName": userName},
    Attrs:  map[string]interface{}{"Count": 1},
})
And to access these attributes and change value of "Count":
userName := sess.CAttr("UserName")
count := sess.Attr("Count").(int) // Type assertion, you might wanna check if it succeeds
sess.SetAttr("Count", count+1) // Increment count
(Of course variable attributes can be added later on too with Session.SetAttr(), not just at session creation.)
To remove a session (e.g. on logout):
session.Remove(sess, w)
Check out the session demo application which shows all these in action.
Google App Engine support
The package provides support for Google App Engine (GAE) platform.
The documentation doesn't include it (due to the +build appengine build constraint), but here it is: gae_memcache_store.go
The implementation stores sessions in the Memcache and also saves sessions to the Datastore as a backup in case data would be removed from the Memcache. This behaviour is optional, Datastore can be disabled completely. You can also choose whether saving to Datastore happens synchronously (in the same goroutine) or asynchronously (in another goroutine), resulting in faster response times.
We can use NewMemcacheStore() and NewMemcacheStoreOptions() functions to create a session Store implementation which stores sessions in GAE's Memcache. Important to note that since accessing the Memcache relies on Appengine Context which is bound to an http.Request, the returned Store can only be used for the lifetime of a request! Note that the Store will automatically "flush" sessions accessed from it when the Store is closed, so it is very important to close the Store at the end of your request; this is usually done by closing the session manager to which you passed the store (preferably with the defer statement).
So in each request handling we have to create a new session manager using a new Store, and we can use the session manager to do session-related tasks, something like this:
ctx := appengine.NewContext(r)
sessmgr := session.NewCookieManager(session.NewMemcacheStore(ctx))
defer sessmgr.Close() // This will ensure changes made to the session are auto-saved
                      // in Memcache (and optionally in the Datastore).

sess := sessmgr.Get(r) // Get current session
if sess != nil {
    // Session exists, do something with it.
    ctx.Infof("Count: %v", sess.Attr("Count"))
} else {
    // No session yet, let's create one and add it:
    sess = session.NewSession()
    sess.SetAttr("Count", 1)
    sessmgr.Add(sess, w)
}
Expired sessions are not automatically removed from the Datastore. To remove expired sessions, the package provides a PurgeExpiredSessFromDSFunc() function which returns an http.HandlerFunc. It is recommended to register the returned handler function to a path which then can be defined as a cron job to be called periodically, e.g. every 30 minutes or so (your choice). As cron handlers may run for up to 10 minutes, the returned handler will stop at 8 minutes to complete safely even if there are more expired, undeleted sessions. It can be registered like this:
http.HandleFunc("/demo/purge", session.PurgeExpiredSessFromDSFunc(""))
Check out the GAE session demo application which shows how it can be used.
cron.yaml file of the demo shows how a cron job can be defined to purge expired sessions.