EclipseLink caching problem (one database for two systems)

I have two online systems running. Both of them use EclipseLink.
The first system is an administration system, where the prices for the second application are managed.
The second system is an online shop, where customers can buy articles.
Both of them run on the same server and use the same Oracle database.
To provide fast access, the price objects are cached by EclipseLink.
If I change the value of a price in the administration system, the shop system should flush its cache in order to pick up the new price value.
What is the best way to solve this problem?

I have a similar problem but it's with user credentials.
1) Configure caching on the shop side
You can configure EclipseLink caching with an expiry: either a time-to-live or an expire-at-time-of-day value. For example, you could configure prices to expire after 1, 5 or 10 minutes. Not instant, but pretty quick and very easy to implement. Check out the @Cache annotation in EclipseLink (see the sketch below). This is what I ended up using.
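As a sketch, a five-minute expiry on a price entity might look like this (the Price class and its fields are invented here for illustration):

    import java.math.BigDecimal;

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.eclipse.persistence.annotations.Cache;

    // Cached Price instances expire five minutes after being read into the
    // shared cache, so the shop picks up admin-side changes within that window.
    @Entity
    @Cache(expiry = 300000) // milliseconds; an expiryTimeOfDay is also supported
    public class Price {

        @Id
        private long id;

        private BigDecimal value;

        // getters and setters omitted
    }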
2) Have the admin application communicate with the shop application
It might be worth creating a web service that lives on the shop side and invalidates the cache when called. Kind of fragile, but it might be necessary depending on your setup.
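A sketch of what such a shop-side endpoint could do with the standard JPA Cache API (the JAX-RS path is an invented name, and this assumes a container that injects the EntityManagerFactory):

    import javax.persistence.Cache;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.PersistenceUnit;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;

    // Hypothetical endpoint the admin application calls after changing a price.
    @Path("/cache/prices")
    public class PriceCacheResource {

        @PersistenceUnit
        private EntityManagerFactory emf;

        @POST
        public void invalidatePrices() {
            // Evict all Price instances from the shared (L2) cache;
            // the next read will go to the database.
            Cache cache = emf.getCache();
            cache.evict(Price.class);
        }
    }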
3) Use cache coordination
EclipseLink has functionality for cache coordination. I have never used it, but it looks like it might be the best fit for you. You can check the EclipseLink documentation for more information.
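Cache coordination is configured through persistence-unit properties; a JMS-based setup could look roughly like this (the topic and connection-factory JNDI names are placeholders):

    import java.util.HashMap;
    import java.util.Map;

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.eclipse.persistence.config.PersistenceUnitProperties;

    public class CoordinatedFactory {

        public static EntityManagerFactory create() {
            Map<String, String> props = new HashMap<>();
            // Broadcast cache changes between the admin and shop units over JMS.
            props.put(PersistenceUnitProperties.COORDINATION_PROTOCOL, "jms");
            props.put(PersistenceUnitProperties.COORDINATION_JMS_TOPIC,
                    "jms/EclipseLinkTopic");                    // placeholder
            props.put(PersistenceUnitProperties.COORDINATION_JMS_FACTORY,
                    "jms/EclipseLinkTopicConnectionFactory");   // placeholder
            return Persistence.createEntityManagerFactory("shop", props);
        }
    }

Both applications would need the same coordination settings, and each entity can tune how its changes are propagated via the coordinationType attribute of @Cache.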

Related

Handling dictionary values stored in DB - Spring

I am developing an SPA with a backend written in Java (Spring Boot). In the relational DB that the backend connects to, there is a table with some dictionary values. The values can be edited by users of the app, but that happens really, really rarely (almost never).
Those dictionary values are used on a lot of pages in the UI, and because of that I would like to cache them in some way. What I want to achieve is loading the dictionary values on startup, to avoid asking the DB for them during every request between the UI and the backend.
At first I thought about just loading them in the UI part of the app when the user enters the page for the first time. Then I ruled that out, since when one of the users changes the values, they should be reloaded.
What I think might work is loading them on startup of the backend into some collection (one that can be safely used in a concurrent environment, probably a ConcurrentMap) and then, during GET requests, asking that collection for the values (instead of the DB). When the values are changed, that request just updates the DB table and reloads them into the collection.
Then I realized the collection solution won't be enough once my backend is scaled up to more than one instance. In that case only one of the instances would be updated and the others would serve outdated data. We could avoid that by forcing a refresh, e.g. every 15 minutes (instead of on demand when the values are updated).
But what I think is the best solution is to run a Redis service on the side, load the dictionary values into it, and after every DB update of the values just update the Redis instance with the new ones. Every instance of the backend would use the same Redis instance, which seems quicker than executing a query (select * from _ where _ = _) against the DB.
What do you think? Is my thought process correct? Do you have any ideas that could help solve my issue?
If you are using Spring, you could check out the Spring Cache Abstraction. That way your cache will be kept up to date whenever a change occurs.
Out of the box, a few implementations are supported by Spring:
Spring provides a few implementations of that abstraction: JDK java.util.concurrent.ConcurrentMap based caches, Ehcache 2.x, Gemfire cache, Caffeine, and JSR-107 compliant caches (such as Ehcache 3.x). See Plugging-in Different Back-end Caches for more information on plugging in other cache stores and providers.
If you decide to use a Memcached implementation, you can check out this library (it uses Xmemcached under the hood) here.
You could also check a small demo app of how to use Spring Cache Abstraction in your project (link).
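To make this concrete, here is a minimal sketch of the abstraction applied to the dictionary values (all names are invented, and it assumes @EnableCaching on a configuration class plus a shared store such as Redis or Memcached behind the abstraction, so that every backend instance sees the same cache):

    import java.util.List;

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class DictionaryService {

        private final DictionaryRepository repository;

        public DictionaryService(DictionaryRepository repository) {
            this.repository = repository;
        }

        // The first call hits the DB; later calls are served from the cache.
        @Cacheable("dictionary")
        public List<DictionaryValue> findAll() {
            return repository.findAll();
        }

        // A (rare) edit writes to the DB and clears the cache,
        // so the next read reloads fresh values.
        @CacheEvict(value = "dictionary", allEntries = true)
        public DictionaryValue save(DictionaryValue value) {
            return repository.save(value);
        }
    }

    // Minimal stubs so the sketch compiles; in a real app these would be
    // a Spring Data repository and an entity.
    interface DictionaryRepository {
        List<DictionaryValue> findAll();
        DictionaryValue save(DictionaryValue value);
    }

    class DictionaryValue { /* id, key, label, ... */ }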
I think you're on the right path with your approach in terms of caching. I suggest you also check out Memcached for its simplicity. Redis is a good choice, but it depends on your requirements and whether you need that many features. Just my 2 cents:
https://aws.amazon.com/elasticache/redis-vs-memcached/
https://devcenter.heroku.com/articles/spring-boot-memcache#add-caching-to-spring-boot

How to update a local memory cache in all server instances

I have a web server cluster that contains many running web server instances. Each instance caches some configurations in its local memory; the original configurations are stored in the database.
These configurations are used for every request, so the cache may be necessary for performance reasons.
I want to provide an admin page in which the administrator can change the configurations. How do I update all the caches in every server instance?
Right now I have two solutions for this:
Set an expiry time for the cache.
When the administrator updates the configuration, notify each instance via some pub/sub mechanism (e.g. using Redis).
For solution 1, the drawback is that changes cannot take effect immediately.
For solution 2, I'm wondering whether the pub/sub will have an impact on the performance of the web server.
Which one is better? Or is there a common solution for this problem?
Another drawback of option 1 is that you'll periodically hit your database unnecessarily.
If you're already using Redis then option 2 is a good solution. I've used it successfully and can't imagine how there could be a performance impact just because you're using pubsub.
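For illustration, a minimal sketch of option 2 with the Jedis client (the channel name and the reload hook are assumptions, not something from the original posts):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class ConfigCacheInvalidator {

        private static final String CHANNEL = "config-updated"; // hypothetical

        // Admin side: publish after writing the new configuration to the DB.
        public static void notifyInstances() {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.publish(CHANNEL, "reload");
            }
        }

        // Each web server instance: run this in a dedicated thread, since
        // subscribe() blocks for the lifetime of the subscription.
        public static void listenForUpdates(ConfigCache cache) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        cache.reloadFromDatabase(); // assumed reload hook
                    }
                }, CHANNEL);
            }
        }
    }

    interface ConfigCache {
        void reloadFromDatabase();
    }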
Another option is to create a cache invalidation URL on each website, e.g. /admin/cache-reset/, and have your administration tool call the cache-reset URL on each individual server. The drawback of this solution is that you need to maintain a list of servers. If you're not already using Redis it could just be the simple/practical/low-tech solution that you're looking for.
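If you go the URL route, the endpoint itself can be trivial. A servlet sketch (the path and cache holder are invented, and the endpoint should be restricted so only the administration tool can call it):

    import java.io.IOException;

    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/admin/cache-reset")
    public class CacheResetServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            LocalConfigCache.INSTANCE.reload(); // assumed in-memory cache holder
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }

    // Hypothetical singleton wrapping the in-memory configuration map.
    enum LocalConfigCache {
        INSTANCE;
        void reload() {
            // re-read the configuration rows from the DB and swap the map
        }
    }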

Create an LDAP cache using the UnboundID LDAP SDK?

I would like to build an LDAP cache with the following goals:
Decrease connection attempts to the LDAP server
Read the local cache if an entry exists and is valid in the cache
Fetch from LDAP if there was no such request before or the entry in the cache is invalid
Currently I am using the UnboundID LDAP SDK to query LDAP, and it works.
After doing some research, I found a persistent search example that may work. An updated entry in the LDAP server is passed to searchEntryReturned, so updating the cache is possible.
https://code.google.com/p/ldap-sample-code/source/browse/trunk/src/main/java/samplecode/PersistentSearchExample.java
http://www.unboundid.com/products/ldapsdk/docs/javadoc/com/unboundid/ldap/sdk/AsyncSearchResultListener.html
But I am not sure how to do this since it is async. Or is there a better way to implement the cache? Examples and ideas are greatly welcome.
The LDAP server is Apache DS and it supports persistent search.
The program is a JSF2 application.
I believe that Apache DS supports the use of the content synchronization controls as defined in RFC 4533. These controls may be used to implement a kind of replication or data synchronization between systems, and caching is a somewhat common use of that. The UnboundID LDAP SDK supports these controls (http://www.unboundid.com/products/ldap-sdk/docs/javadoc/index.html?com/unboundid/ldap/sdk/controls/ContentSyncRequestControl.html). I'd recommend looking at those controls and the information contained in RFC 4533 to determine whether that might be more appropriate.
Another approach might be to see if Apache DS supports an LDAP changelog (e.g., in the format described in draft-good-ldap-changelog). This allows you to retrieve information about entries that have changed so that they can be updated in your local copy. By periodically polling the changelog to look for new changes, you can consume information about changes at your own pace (including those which might have been made while your application was offline).
Although persistent search may work in your case, there are a few issues that might make it problematic. The first is that you don't get any control over the rate at which updated entries are sent to your client, and if the server can apply changes faster than the client can consume them, then this can overwhelm the client (which has been observed in a number of real-world cases). The second is that a persistent search will let you know what entries were updated, but not what changes were made to them. In the case of a cache, this may not have a huge impact because you'll just replace your copy of the entire entry, but it's less desirable in other cases. Another big problem is that a persistent search will only return information about entries updated while the search was active. If your client is shut down or the connection becomes invalid for some reason, then there's no easy way to get information about any changes while the client was in that state.
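Since the question asks for an example, here is a rough sketch of wiring a persistent search to a local cache with the UnboundID LDAP SDK (the base DN and filter are placeholders, and the caveats above still apply):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    import com.unboundid.ldap.sdk.AsyncRequestID;
    import com.unboundid.ldap.sdk.AsyncSearchResultListener;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.SearchRequest;
    import com.unboundid.ldap.sdk.SearchResult;
    import com.unboundid.ldap.sdk.SearchResultEntry;
    import com.unboundid.ldap.sdk.SearchResultReference;
    import com.unboundid.ldap.sdk.SearchScope;
    import com.unboundid.ldap.sdk.controls.PersistentSearchChangeType;
    import com.unboundid.ldap.sdk.controls.PersistentSearchRequestControl;

    public class LdapEntryCache implements AsyncSearchResultListener {

        private final ConcurrentMap<String, SearchResultEntry> cache =
                new ConcurrentHashMap<>();

        public AsyncRequestID start(LDAPConnection connection) throws LDAPException {
            SearchRequest request = new SearchRequest(this,
                    "ou=people,dc=example,dc=com",      // placeholder base DN
                    SearchScope.SUB, "(objectClass=*)");
            // changesOnly=false so the initial result set also fills the cache.
            request.addControl(new PersistentSearchRequestControl(
                    PersistentSearchChangeType.allChangeTypes(), false, true));
            return connection.asyncSearch(request);
        }

        @Override
        public void searchEntryReturned(SearchResultEntry entry) {
            // Insert or replace on every change. A real cache would inspect the
            // entry change notification control so that deletes remove the entry.
            cache.put(entry.getDN(), entry);
        }

        @Override
        public void searchReferenceReturned(SearchResultReference reference) {
            // referrals are ignored in this sketch
        }

        @Override
        public void searchResultReceived(AsyncRequestID requestID, SearchResult result) {
            // the persistent search ended (connection closed or search abandoned)
        }

        public SearchResultEntry get(String dn) {
            return cache.get(dn);
        }
    }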
Client-side caching is generally a bad thing, for many reasons. It can serve stale data to applications, which has the potential to cause incorrect behavior or in some cases pose a security risk, and it's absolutely a huge security risk if you're using it for authentication. It could also pose a security risk if not all of the clients have the same level of access to the data contained in the cache. Further, implementing a cache for each client application isn't a scalable solution, and if you were to try to share a cache across multiple applications, then you might as well just make it a full directory server instance. It's much better to use a server that can simply handle the desired load without the need for any additional caching.

What is the best practice to build 4 public websites on the same database?

We have four public websites running on the same database with different schemas (Oracle). All of them are 'AAA' applications with roughly 200,000~500,000 PV daily. 90% of the data in the websites is read-only and updated daily (by batch). Less than 10% of the data, such as announcements, is updated manually. We are looking for best practices to address the following concerns.
Improve website availability. Though we have a BCP database, it might take 1~2 hours to recover the 4 websites in case the database server goes down.
Since most data is read-only, we are considering an in-memory DB (HSQLDB) or a cache component (Ehcache) to improve performance. By default we are using iBATIS and Hibernate. Ehcache might be used not only as a level-2 cache, but also as a page cache.
We tend towards building a RESTful web services framework instead of a Java-only solution, since mobile applications might reuse it. We are not sure whether it is a good idea to run the websites and the web services on the same web application server. We have active-active HTTP and web servers.
Online shopping is in the future plan.
Add database processes; make it at least 4, one serving each website.
Consider Memcached (see the sketch after this list).
The same application server can run multiple applications. That is not a problem if there is a good amount of RAM. However, if the number of users becomes overwhelming, you can always move particular applications to a different server. But a better idea is to wait and see which service is worth that privilege.
The future online shop is another web application with a lot of security and state management; better to put it on a new server.
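As referenced above, a bare-bones sketch of fronting the read-mostly lookups with Memcached via the spymemcached client (the host, expiry, and loader are placeholders):

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import net.spy.memcached.MemcachedClient;

    public class ReadMostlyCache {

        private final MemcachedClient client;

        public ReadMostlyCache() throws IOException {
            client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        }

        // Since 90% of the data changes only via the daily batch, a long
        // expiry (one hour here) is usually safe for these pages.
        public Object get(String key) {
            Object value = client.get(key);
            if (value == null) {
                value = loadFromDatabase(key);   // fall back to Oracle
                client.set(key, 3600, value);    // expiry in seconds
            }
            return value;
        }

        private Object loadFromDatabase(String key) {
            // query via iBATIS/Hibernate as the sites already do
            return "...";
        }
    }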

Doctrine 2 Caching Workflow

I am new to caching.
What should I cache?
E.g., do I cache user info, since it is frequently used throughout the application (like in the header saying "welcome {username}")?
But most things are used quite frequently anyway. E.g., users have projects. These projects don't belong to everyone, but they will be frequently used by specific users. Do I cache them too? Won't I be caching nearly everything then?
Also, regarding CRUD: with Doctrine queries I can just use $query->useResultCache(true), but what happens when I update or delete an entity? I need to somehow update my cache too. How?
The basic principle of caching is to hold frequently used data that doesn't change often in memory, to reduce database work.
It's more convenient to use PHP session variables to hold basic things like the username.
In the case of projects, if they don't change often and are retrieved frequently by users, it would be a good idea to cache them. How long project info stays cached depends on how frequently it changes.
Also note that if the info you present to users is vital or time-sensitive, you should use caching cautiously.
Check this reference page for basic information on caching: http://www.doctrine-project.org/docs/orm/2.0/en/reference/dql-doctrine-query-language.html#cache-related-api
Or check http://www.doctrine-project.org/docs/orm/2.0/en/reference/caching.html for detailed explanation.
