For Yii's auth manager I used CachedDbAuthManager. Once the SQL for a specific role executes against a user, it caches the result; the next time, the records are fetched from the cache. Now, once an admin deletes the role for a particular user, it still remains in the cache.
What is the solution to this problem?
Have a look at Yii's Cache Dependency Implementation.
You could, e.g., invalidate the cache when the admin edits an auth table; see also the database cache dependency. Often this is done simply by looking up the latest modified_at time, but this column is not part of the standard auth tables.
From the database cache man page:
CDbCacheDependency represents a dependency based on the query result of a SQL statement.
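To make the mechanism concrete, here is a minimal, framework-agnostic sketch (written in Python rather than Yii's PHP) of what a query-based dependency does: it stores a snapshot of the dependency query's result next to the cached value and treats the entry as stale when that result changes. The cache, fetch_one, and load_roles_from_db helpers, as well as the modified_at column, are all assumptions for illustration.

# Dependency query; assumes you add a modified_at column to the auth tables.
DEP_SQL = "SELECT MAX(modified_at) FROM auth_assignment"

def get_cached_roles(user_id):
    entry = cache.get(("roles", user_id))   # (value, dep_snapshot) or None
    current = fetch_one(DEP_SQL)            # re-evaluate the dependency query
    if entry is not None and entry[1] == current:
        return entry[0]                     # dependency unchanged: cache is valid
    roles = load_roles_from_db(user_id)     # stale or missing: reload from the DB
    cache.set(("roles", user_id), (roles, current))
    return roles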
There is another extension, SingleDbAuthManager, which does nearly the same thing: it reads the whole auth tree at once and caches it.
The performance of SingleDbAuthManager and CachedDbAuthManager varies: CachedDbAuthManager takes less time, but fails to update the cache in my case.
I'm now working on a big project. We decided to use Redis as the cache in our system, so when we put some data in the cache and the original data later changes, how can we know? And what is the best practice in this case: delete the old data and replace it with the new? Is there any mechanism to replace just the changed part?
A few things to keep in mind when caching for a large application using Redis:
1) Localise your cache as much as you can. For example, if you have five pieces of information for every user that need to be cached, cache each piece separately instead of accessing them all together.
2) Choose the right data structure. Use Redis' set, hash, sorted set, and bit operations wherever possible.
3) Make sure your system still works even if Redis is not available (to survive downtime). That is, check Redis and serve from it if the value is there; if not, get it from the DB and populate the cache. That way, even if Redis is not available, you will still get values from the DB. (A sketch of this pattern follows the list.)
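As a rough illustration of points 1 and 3, here is a sketch using the redis-py client. The user:{id} hash layout and get_user_field_from_db() are assumptions for illustration, not part of any framework.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_field(user_id, field):
    key = f"user:{user_id}"        # one hash per user, one field per piece of info
    try:
        value = r.hget(key, field)         # localized read: only the field you need
        if value is not None:
            return value
    except redis.exceptions.ConnectionError:
        pass                               # Redis is down: fall through to the DB
    value = get_user_field_from_db(user_id, field)   # hypothetical DB accessor
    try:
        r.hset(key, field, value)          # repopulate the cache for next time
        r.expire(key, 3600)
    except redis.exceptions.ConnectionError:
        pass                               # still fine: the DB served the read
    return value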
To answer your question, you can do it in three ways:
1) You can maintain the cache alongside your DB: on success of a transaction in the DB, update the cache as well, so that you never lose any information. Implementing this is a bit difficult, though.
2) Whenever a transaction begins, drop the cache entries that belong to it, so that the values are removed from the cache and fetched from the DB on the next read request.
3) Maintain a last-accessed or created time in both the cache and the DB. On every read, compare them and decide. This is the most reliable solution. (A sketch of options 2 and 3 follows below.)
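Here is a rough sketch of options 2 and 3 with the redis-py client; the DB helpers (update_user_in_db, get_modified_at_from_db, get_user_from_db) are hypothetical stand-ins for your data layer.

import redis

r = redis.Redis(decode_responses=True)

# Option 2: drop the affected keys when a transaction writes to the DB.
def update_user(user_id, data):
    update_user_in_db(user_id, data)    # hypothetical DB write
    r.delete(f"user:{user_id}")         # next read repopulates from the DB

# Option 3: keep a write timestamp next to the value and compare on read.
def get_user(user_id):
    cached = r.hgetall(f"user:{user_id}")
    db_modified = get_modified_at_from_db(user_id)   # cheap timestamp lookup
    if cached and float(cached["modified_at"]) >= db_modified:
        return cached                                # cache is still current
    fresh = get_user_from_db(user_id)                # full reload from the DB
    fresh["modified_at"] = str(db_modified)
    r.hset(f"user:{user_id}", mapping=fresh)
    return fresh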
I'm trying to use a transaction process with a wizard in Apex 5.0.
I want to register a new student in the database. In the first page of the wizard I want to create a savepoint (s1) and then insert the student's information into the table "STUDENT", and in the second page I want to insert the info of the student's superior.
When the user clicks the Previous button, I want to roll back to savepoint s1 and undo the insert statement.
I tried to create a process, but it seems that the rollback statement in the second page can't see the savepoint I declared in the first page.
Can anyone help with that?
Apex uses connection pooling. Unlike client-server environments such as Oracle Forms, Apex is stateless: DB connections are extremely short-lived and fleeting, and are not tied to one Apex session. The session in Apex is a construct of Apex itself.
This means that transactional control does not work the way you might expect. A page render is a short DB connection/session that ends when the page has rendered; submitting the page will use another session.
Oracle Apex Documentation link
2.6.2 What Is a Session?
A session is a logical construct that establishes persistence (or stateful behavior) across page views. Each session is assigned a unique identifier. The Application Express engine uses this identifier (or session ID) to store and retrieve an application's working set of data (or session state) before and after each page view.
Because sessions are entirely independent of one another, any number of sessions can exist in the database at the same time. A user can also run multiple instances of an application simultaneously in different browser programs.
Sessions are logically and physically distinct from Oracle database sessions used to service page requests. A user runs an application in a single Oracle Application Express session from sign in to sign out with a typical duration measured in minutes or hours. Each page requested during that session results in the Application Express engine creating or reusing an Oracle database session to access database resources. Often these database sessions last just a fraction of a second.
Can you still use savepoints? Yes. But not just anywhere. You could use one within a single process. Can you set one in one page and then roll back to it from another? No. The technology just does not allow it. Even if it did, you'd have to deal with implicit commits, as outlined in Cristian_I's answer.
For the same reason you cannot use global temporary tables.
What CAN you use?
You could use Apex collections. You can compare them to temporary tables, in that they hold data for the duration of one Apex session.
Simply store your information in collections and then process the data in them once you get to the end.
The other thing you can do is simply keep the data stored in your page items. Session state is in effect, so you can still access the session state of the pages' items on the final step. If for some reason you wish to move back a step and then "auto-clear" that page, all you need to do is clear the cache for that page. This is more difficult if you wish to use a tabular form somewhere, since you'd have to build it on a collection, but I'd recommend a repeatable step in that case.
I think your problem is that Apex issues commit statements when you switch from one page to another.
A simple rollback or commit erases all savepoints. (You can find out more here.) According to a post by Dan McGhan, Apex issues implicit commits in the following situations:
On load, after a page finishes rendering
On submit, before branching to another page
On submit, if one or more validations fail, before re-rendering the page
After a PL/SQL process that contains one or more bind variables has completed
After a computation
When APEX_UTIL.SET_SESSION_STATE is called
When APEX_MAIL.PUSH_QUEUE is called
Maybe you can simulate savepoint functionality by using some temporary tables.
Since Apex is stateless and the results of each page request are always either fully committed or fully rolled back (i.e. no inter-page savepoints are possible), you need to make a choice between two strategies:
Option 1: allow the intermediate info to be committed to the table. One way to do this is to add a flag to the table, e.g. "status", which is set to "provisional" on the first page, and updated to "complete" on the second page. This may require changes to other parts of your application so they know how to deal with any abandoned records that are left in "provisional" status.
Option 2: save the intermediate results in an Apex Collection. This data is available for the scope of the user's Apex session and is not accessible to other sessions, so would be ideal for this scenario. https://docs.oracle.com/database/121/AEAPI/apex_collection.htm#AEAPI531
Questions
Does the user option preload refer to caching on the client or on the server?
Are there any ways to make this occur asynchronously so that users don't take a large performance hit when first requesting data from a table?
More Info
In Dynamics AX 2012, under File > User Options > Preload, a user can select which tables are preloaded the first time they're accessed.
I've not found anything to say whether this behaviour relates to caching on the client or the AOS.
The fact it's a user setting implies that it's the client.
But it could be an AOS setting where users with this option take the initial hit of preloading the entire table, whilst those without would benefit from any caching caused by other users, but wouldn't trigger the load themselves.
If it's the latter we could improve performance by removing this option from all (human) users, leaving it enabled only on our batch user account, having scheduled jobs on each AOS to request a record from each table, thus triggering the preload without any user being negatively impacted.
Ref: http://dynamicbusinesssolutions.ru/axshared.en/html/9cd36702-2fa7-470c-a627-08
If a table is large or frequently changed, it is not a candidate for the EntireTable cache. This applies to ordinary users and batch users alike.
The EntireTable cache is located on the server, but the load is initiated by the user; the first user doing the select takes the performance hit.
To disable preloading for a table, you can disable it as the Admin user, which applies to all users, or you can let each user disable it themselves.
Personally I never change the user setup. If a table is large I change the table CacheLookup property as a customization.
See Set-based Caching:
When you set a table's CacheLookup property to EntireTable, all the records in the table are placed in the cache after the first select. This type of caching follows the rules of single record caching. This means that the SELECT statement WHERE clause must include equality tests on all fields of the unique index that is defined in the table's PrimaryIndex property.
The EntireTable cache is located on the server and is shared by all connections to the Application Object Server (AOS). If a select is made on the client tier to a table that is EntireTable cached, it first looks in its own cache and then searches the server-side EntireTable cache.
An EntireTable cache is created for each table for a given company. If you have two selects on the same table for different companies the entire table is cached twice.
Note: Avoid using EntireTable caches for large tables because once the cache size reaches 128 KB the cache is moved from memory to disk. A disk search is much slower than an in-memory search.
I have never used memcached before and I am confused on the following basic question.
Memcached is a cache, right? And I assume we cache data from a DB for faster access. So when the DB is updated, who is responsible for updating the cache? Our code, or does memcached "understand" when the DB has been updated?
Memcached is a cache, right? And I assume we cache data from a DB for faster access
Yes, it is a cache, but you have to understand that a cache speeds up access when you are often accessing the same data. If you make a thousand accesses to data/objects that are always different from each other, a cache doesn't help.
To answer your question:
So when the DB is updated who is responsible to update the cache?
Always you, but you don't have to worry about whether you are doing the right thing.
Our code, or does memcached "understand" when the DB has been updated?
memcached doesn't know about your database (actually, the client doesn't even know about the servers). So when you use an object from your database, you should check whether it is present in the cache; if it isn't, you put it in the cache, otherwise you are fine. That is all. When the moment comes, memcached will free the memory used by old data, or you can tell memcached to free data after a time you choose (read the API for details).
You (or some plugin) are responsible for updating the cache.
What happens is that the query is reduced to some key features, and these are hashed. The hash is tested against the cache: if the value is in the cache, the data is returned directly from the cache; otherwise the query is executed, and its result is stored in the cache and returned to the user.
In pseudocode:
key = query_key(your_sql_query)    # hash the query's key features
if key in cache:
    return cache.get(key)          # cache hit: serve directly from the cache
else:
    results = execute(your_sql_query)        # cache miss: run the query
    cache.set(key, results, time_to_live)    # store the result for later requests
    return results
The cache is cleared once in a while: you can give a time to live to a key, after which your cached results are refreshed.
This is the most simple model, but can cause some inconsistencies.
One strategy is that if your code is also the only app that updates data, then your code can also refresh memcached as a second step after it has updated the database. Or at least evict the stale data from memcached, so the next time an app wants to read it, it will be forced to re-query the current data from the database and restore that latest data to memcached.
Another strategy is to store data in memcached with an expiration time, so memcached automatically purges that data element after a certain time. You pick the expiration time, based on your knowledge of how frequently the data might be updated, and how tolerant your app is of reading stale data.
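As a sketch of both strategies with the pymemcache client (the key scheme and the DB helpers are illustrative assumptions):

import json
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def update_user(user_id, data):
    save_user_to_db(user_id, data)      # hypothetical DB write
    mc.delete(f"user:{user_id}")        # first strategy: evict so the next read re-queries

def get_user(user_id):
    cached = mc.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached.decode("utf-8"))
    user = load_user_from_db(user_id)   # hypothetical DB read
    mc.set(f"user:{user_id}", json.dumps(user).encode("utf-8"), expire=300)  # second strategy: 5-minute TTL
    return user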
So ultimately, you are the one responsible for putting data into memcached. Only you know what data is worth storing in the cache, what format you want to store it in, how frequently you expect to query it, and when to refresh it. You make this judgment on a case-by-case basis, because you know better than any automatic system the likely behavior of your data and your app.
I have a user object represented in JPA which has specific sub-types. Eg, think of User and then a subclass Admin, and another subclass Power User.
Let's say I have 100k users. I have successfully implemented the second level cache using Ehcache in order to increase performance and have validated that it's working.
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-cache
I know it does work (ie, you load the object from the cache rather than invoke an sql query) when you call the load method. I've verified this via logging at the hibernate level and also verifying that it's quicker.
However, I actually want to select a subset of all the users. For example, let's say I want to count how many Power Users there are.
Furthermore, my users have an associated ZipCode object, and the ZipCode objects are also second-level cached. What I'd like to do is ask queries like: how many Power Users do I have in New York state?
My question is: how do I write a query for this that will hit the second-level cache and not the database? Note that my second-level cache is configured as read/write, so as new users are added to the system they should automatically be added to the cache. Also note that I have briefly investigated the query cache, but I'm not sure it's applicable, since that is for queries that are run multiple times. My problem is more a case of: the data should be in the second level cache anyway so what do I have to do so that the database doesn't get hit when I write my query.
cheers,
Brian
(...) the data should be in the second level cache anyway so what do I have to do so that the database doesn't get hit when I write my query.
If the entities returned by your query are cached, have a look at Query#iterate(). This will trigger a first query to retrieve the list of IDs, and then a subsequent lookup for each ID; those lookups would hit the L2 cache.