I am using Ehcache with a Spring architecture.
Right now, I am refreshing the cache from the database at a FIXED interval of every 15 minutes.
@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
//Running a database query to fetch the data.
}
Instead of a time-based cache refresh, I want a CONDITION-BASED cache refresh. There are two reasons behind this:
1. The database doesn't update very frequently (about 15 times a day, but NOT at fixed intervals), and 2. the data fetched and cached is huge.
So I decided to maintain two versions: one in the database (version_db) and one in the cache (version_cache). I want a condition such that the cache is refreshed only if version_db > version_cache; otherwise it is not refreshed. Something like:
@Cacheable(cacheName = "fpodcache", conditionforrefresh = version_db > version_cache, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
//Running a database query to fetch the data.
}
What is the right syntax for conditionforrefresh = version_db > version_cache in the above code?
How do I achieve this?
You can add a check at the beginning of your refresh logic to make the comparison you need: if it is true, load from the DB; otherwise, use the data that is already loaded.
@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
// Fetch version_db
// Fetch version_cache
// Check if version_db > version_cache
// If true --> run a database query to fetch the data
// Else --> return the existing data
}
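For illustration, here is a minimal sketch of that check inside the cached method. fetchDbVersion() (a cheap query such as reading a version column) and runAccountQuery() are hypothetical helpers, and the two maps are ordinary fields on the same Spring bean; none of these names come from the ehcache-spring-annotations API.
// Hypothetical bookkeeping on the bean (java.util.Map / java.util.concurrent.ConcurrentHashMap)
private final Map<String, Long> versionCache = new ConcurrentHashMap<>();
private final Map<String, List<Account>> lastLoaded = new ConcurrentHashMap<>();

@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
    long versionDb = fetchDbVersion();              // cheap query, e.g. SELECT version FROM data_version
    Long loadedVersion = versionCache.get(key);
    if (loadedVersion == null || versionDb > loadedVersion) {
        lastLoaded.put(key, runAccountQuery(key));  // the expensive query, only when the DB has moved on
        versionCache.put(key, versionDb);
    }
    return lastLoaded.get(key);                     // otherwise hand back the previously loaded data
}
The cache still refreshes on its own interval, but the expensive query only runs when the database version has actually advanced.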
So I have an Apache Camel route that reads Data elements from a JPA endpoint, converts them to DataConverted elements, and stores them in a different database via a second JPA endpoint. Both endpoints are Oracle databases.
Now I want to set a flag on the original Data element that it got copied successfully. What is the best way to achieve that?
I tried it like this: saving the ID as a global option on the context, then reading it and calling a DAO method in .onCompletion().onCompleteOnly():
from("jpa://Data")
.onCompletion().onCompleteOnly().process(ex -> {
var id = Long.valueOf(getContext().getGlobalOption("id"));
myDao().setFlag(id);
}).end()
.process(ex -> {
Data data = ex.getIn().getBody(Data.class);
DataConverted dataConverted = convertData(data);
ex.getMessage().setBody(dataConverted);
var globalOptions = getContext().getGlobalOptions();
globalOptions.put("id", data.getId().toString());
getContext().setGlobalOptions(globalOptions);
})
.to("jpa://DataConverted").end();
However, this seems to trigger a deadlock: the DAO method stalls on the commit of the update. The only explanation I can see is that the Data object gets locked by Camel and is still locked in the .onCompletion().onCompleteOnly() part of the route, so it can't be updated there.
Is there a better way to do it?
Have you tried using the Recipient List EIP, where the first destination is the jpa:DataConverted endpoint and the second destination is an endpoint that sets the flag? This way both get the same message and are executed sequentially.
https://camel.apache.org/components/3.17.x/eips/recipientList-eip.html
from("jpa://Data")
.process(ex -> {
Data data = ex.getIn().getBody(Data.class);
DataConverted dataConverted = convertData(data);
ex.getIn().setBody(dataConverted);
})
.recipientList(constant("direct:DataConverted,direct:updateFlag"))
.end();
from("direct:DataConverted")
.to("jpa://DataConverted")
.end();
from("direct:updateFlag")
.process(ex -> {
var id = ((DataConverted) ex.getIn().getBody()).getId();
myDao().setFlag(id);
})
.end();
Keep in mind that you might want to make the route transactional by adding .transacted().
https://camel.apache.org/components/3.17.x/eips/transactional-client.html
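As a rough sketch (assuming a Spring PlatformTransactionManager or transaction policy bean is available in the Camel registry, which is not shown here), .transacted() would sit right after the consumer:
from("jpa://Data")
    .transacted()   // wraps the rest of the route in one transaction; requires a transaction manager in the registry
    .process(ex -> {
        Data data = ex.getIn().getBody(Data.class);
        ex.getIn().setBody(convertData(data));
    })
    .recipientList(constant("direct:DataConverted,direct:updateFlag"))
    .end();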
Project: Spring Boot
I'm updating my Elasticsearch document in the following way:
@Override
public Document update(DocumentDTO document) {
try {
Document doc = documentMapper.documentDTOToDocument(document);
Optional<Document> fetchDocument = documentRepository.findById(document.getId());
if (fetchDocument.isPresent()) {
fetchDocument.get().setTag(doc.getTag());
Document result = documentRepository.save(fetchDocument.get());
final UpdateRequest updateRequest = new UpdateRequest(Constants.INDEX_NAME, Constants.INDEX_TYPE, document.getId().toString());
updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
log.info("ES result : "+ updateResponse.status());
return result;
}
} catch (Exception ex) {
log.info(ex.getMessage());
}
return null;
}
Using this, my document is updated successfully and the version is incremented, but once the version gets past 20 or so, it takes a long time to retrieve the data (around 14 seconds).
I'm still confused about how versioning works. What happens in the update and delete scenarios? At search time, does it process all versions of the data and return the latest one? Is that so?
Elasticsearch internally uses Lucene, which stores data in immutable segments. Because these segments are immutable, every update in Elasticsearch internally marks the old document as deleted (a soft delete) and inserts a new document (with a new version).
The old document is later cleaned up during a background segment-merging process.
A newly updated document should be available within 1 second (the default refresh interval), but this can be disabled or changed, so please check this setting on your index. I can see you are using the WAIT_UNTIL refresh policy in your code; please remove it and you should see the updated document quickly, provided you have not changed the default refresh_interval.
Note: update and delete operations work similarly; the only difference is that a delete does not create a new document: the old document is marked soft-deleted and is permanently removed later during segment merging.
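Based on that advice, a minimal sketch of the two changes: drop the WAIT_UNTIL policy on the update, and (optionally) confirm the index is back on the default 1s refresh interval. The constants are the ones from the question; UpdateSettingsRequest and Settings come from the Elasticsearch High Level REST Client (org.elasticsearch.action.admin.indices.settings.put and org.elasticsearch.common.settings), assuming a client version that still ships them.
final UpdateRequest updateRequest = new UpdateRequest(Constants.INDEX_NAME, Constants.INDEX_TYPE, document.getId().toString());
// No setRefreshPolicy(WAIT_UNTIL) here: let the normal ~1s refresh make the change searchable
updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);

// Optionally reset the refresh interval to its default in case it was changed on the index
UpdateSettingsRequest settingsRequest = new UpdateSettingsRequest(Constants.INDEX_NAME);
settingsRequest.settings(Settings.builder().put("index.refresh_interval", "1s").build());
client.indices().putSettings(settingsRequest, RequestOptions.DEFAULT);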
In my WCF service's business logic, in most of the places where I need to locate an entity I use this syntax:
public void UpdateUser(Guid userId, String notes)
{
using (ProjEntities entities = new ProjEntities())
{
User currUser = entities.Users.SingleOrDefault(us => us.Id == userId);
if (currUser == null)
throw new Exception("User with ID " + userId + " was not found");
}
}
I have recently discovered the Find method, and I understand I can now do this:
public void UpdateUser(Guid userId, String notes)
{
using (ProjEntities entities = new ProjEntities())
{
User currUser = entities.Users.Find(userId);
if (currUser == null)
throw new Exception("User with ID " + userId + " was not found");
}
}
Note: the 'userId' property is the primary key for the table.
I read that when using the Find method, Entity Framework first checks whether the entity is already in local memory and, if so, returns it from there; otherwise a trip is made to the database (vs. SingleOrDefault, which always makes a trip to the database).
I was wondering: if I now convert all my uses of SingleOrDefault to Find, is there any potential danger?
Is there a chance I could get stale data that has not been updated, if I use Find and it fetches the data from memory instead of the database?
What happens if I have the user in memory and someone changes the user in the database? Won't it be a problem if I now always use this in-memory replica instead of always fetching the latest version from the database?
Is there a chance I could get some old data that has not been updated if I use Find and it fetches the data from memory instead of the database?
I think you have sort of answered your own question here. Yes, there is a chance that using Find you could end up having an entity returned that is out of sync with your database because your context has a local copy.
There isn't much more anyone can tell you without knowing more about your specific application: do you keep a context alive for a long time, or do you open it, do your updates, and close it? Obviously, the longer you keep your context around, the more susceptible you are to retrieving an out-of-date entity.
I can think of two strategies for dealing with this. The first is outlined above: open your context, do what you need, and then dispose of it:
using (var ctx = new MyContext())
{
var entity = ctx.EntitySet.Find(123);
// Do something with your entity here...
ctx.SaveChanges();
}
Secondly, you could retrieve the DbEntityEntry for your entity and use the GetDatabaseValues method to update it with the values from the database. Something like this:
var entity = ctx.EntitySet.Find(123);
// This could be a cached version so ensure it is up to date.
var entry = ctx.Entry(entity);
entry.CurrentValues.SetValues(entry.GetDatabaseValues());
I am having a problem with SysCache/SysCache2 on my MVC application. My configuration seems to be correct. I have set it up just like countless examples on the web.
On my class I have put: Cache.Region("LongTerm").NonStrictReadWrite().IncludeAll();
Here is a test I made for the application cache:
[Test]
public void cache()
{
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
var acc = session.QueryOver<Log>().Cacheable().List();
tx.Commit();
}
var test = sessionFactory.Statistics.SecondLevelCacheHitCount;
using (var session = sessionFactory.OpenSession())
{
var acc = session.QueryOver<Log>().List();
}
var test1 = sessionFactory.Statistics.SecondLevelCacheHitCount;
}
The first statement is cached, as I can see in the session factory statistics (for example, 230 records).
If I understand it correctly, the second statement below shouldn't hit the DB but the cache.
The problem is that it goes to the DB anyway; I checked with the profiler to be 100% sure.
I don't know what I am doing wrong here. Does anyone have an idea?
I have managed to solve this problem. It had to do with how I created sessions: I didn't use session-per-request, which is what prevented the cache from being hit. I created a transaction at the beginning, and it lasted through the entire session. I could trigger a cache hit if I opened the session again within a using block, like using (var sess = session.SessionFactory.OpenSession()), but that was only a workaround that didn't suit me, so I changed how I create sessions in the first place, and it works fine now! :)
When data is entered, it ultimately needs to be saved remotely on a server. I do want the app to work if there is no data connection at the time also, so I need to save everything locally on the phone too. The app can then sync with the server when it gets a connection.
This brings up a little issue. I'm used to saving everything on the server and then getting the records back with IDs generated by the server. If there is no connection, the app will save locally to the phone but not to the server. When syncing with the server, I don't see a way for the phone to know which local record a returned record is associated with. There isn't enough unique data to figure this out.
What is the best way to handle this?
One way I've been considering is to change the records' IDs to GUIDs and let the phone set the ID. This way, all records will have an ID locally, and when saving to the server it should still be a unique ID.
I'd like to know what other people have been doing, and what works and what doesn't from experience.
This is how we did it in our first Windows Phone 7 app, which my friend and I finished a few days ago.
It might not be the best solution, but until further refactoring it works just fine.
It's a client for a mint.com-like web app called slamarica.
For a feature like saving a transaction, we first check whether we have an internet connection.
// Check if application is in online or in offline mode
if (NetworkDetector.IsOnline)
{
// Save through REST API
_transactionBl.AddTransaction(_currentTransaction);
}
else
{
// Save to phone database
SaveTransactionToPhone(_currentTransaction);
}
If the transaction is successfully saved via REST, the response contains the transaction object, and we then save it to the local database. If the REST save fails, we save the data to the local phone database.
private void OnTransactionSaveCompleted(bool isSuccessful, string message, Transaction savedTransaction)
{
MessageBox.Show(message);
if(isSuccessful)
{
// save new transaction to local database
DatabaseBl.Save(savedTransaction);
// save to observable collection Transactions in MainViewModel
App.ViewModel.Transactions.Add(App.ViewModel.TransactionToTransactionViewModel(savedTransaction));
App.ViewModel.SortTransactionList();
// Go back to Transaction List
NavigationService.GoBack();
}
else
{
// if REST is failed save unsent transaction to Phone database
SaveTransactionToPhone(_currentTransaction);
// save to observable collection Transactions in MainViewModel
App.ViewModel.Transactions.Add(App.ViewModel.TransactionToTransactionViewModel(_currentTransaction));
App.ViewModel.SortTransactionList();
}
}
Every Transaction object has an IsInSync property. It is false by default, until we get confirmation from the REST API that it was saved successfully on the server.
The user can refresh transactions by clicking the Refresh button to fetch new data from the server. We do the syncing in the background like this:
private void RefreshTransactions(object sender, RoutedEventArgs e)
{
if (NetworkDetector.IsOnline)
{
var notSyncTransactions = DatabaseBl.GetData<Transaction>().Where(x => x.IsInSync == false).ToList();
if(notSyncTransactions.Count > 0)
{
// we must Sync all transactions
_isAllInSync = true;
_transactionSyncCount = notSyncTransactions.Count;
_transactionBl.AddTransactionCompleted += OnSyncTransactionCompleted;
if (_progress == null)
{
_progress = new ProgressIndicator();
}
foreach (var notSyncTransaction in notSyncTransactions)
{
_transactionBl.AddTransaction(notSyncTransaction);
}
_progress.Show();
}
else
{
// just refresh transactions
DoTransactionRefresh();
}
}
else
{
MessageBox.Show(ApplicationStrings.NETWORK_OFFLINE);
}
}
private void DoTransactionRefresh()
{
if (_progress == null)
{
_progress = new ProgressIndicator();
}
// after all data is sent do full reload
App.ViewModel.LoadMore = true;
App.ViewModel.ShowButton = false;
ApplicationBl<Transaction>.GetDataLoadingCompleted += OnTransactionsRefreshCompleted;
ApplicationBl<Transaction>.GetData(0, 10);
_progress.Show();
}
In OnTransactionsRefreshCompleted we delete all transaction data in the local database and get the latest 10 transactions. We don't need all the data, and this way the user has synced data. They can always load more by tapping 'load more' at the end of the transaction list, similar to those Twitter apps.
private void OnTransactionsRefreshCompleted(object entities)
{
if (entities is IList<Transaction>)
{
// save transactions
var transactions = (IList<Transaction>)entities;
DatabaseBl.TruncateTable<Transaction>();
DatabaseBl.Save(transactions);
((MainViewModel) DataContext).Transactions.Clear();
//reset offset
_offset = 1;
//update list with new transactions
App.ViewModel.LoadDataForTransactions(transactions);
App.ViewModel.LoadMore = false;
App.ViewModel.ShowButton = true;
}
if (entities == null)
{
App.ViewModel.ShowButton = false;
App.ViewModel.LoadMore = false;
}
// hide progress
_progress.Hide();
// remove event handler
ApplicationBl<Transaction>.GetDataLoadingCompleted -= OnTransactionsRefreshCompleted;
}
Caveat: I haven't tried this with Windows Phone development, but using GUID identities is something I usually do when faced with similar situations, e.g. creating records when I only have a one-way connection to the database, such as via a message bus or queue.
It works fine, albeit with a minor penalty in record size, and it can also lead to less performant indexes. I suggest you just give it a shot.