Spring @CachePut response is unexpected - caching

I am using EhCache with Spring Boot.
I perform a GET operation to retrieve a list and save the response in the cache. That works as expected.
Then I perform a PUT operation on one element of the list. The PUT succeeds for that element and the cache is also updated.
But when I perform the GET operation again to check whether my cache update is working,
the response contains only the updated element instead of the whole list.
Here is the code snippet:
@Override
@Cacheable(value = "practiceId", key = "#practiceId", unless = "#result == null")
public List<ExposedLocation> getLocations(String practiceId) {
    // getLocations list logic
}

@CachePut(value = "practiceId", key = "#practiceId")
public List<ExposedLocation> updateLocation(List<LocationDB> locationList, String practiceId) {
    // update location logic
}
ehcache.xml:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="false">
    <cache name="practiceId" maxEntriesLocalHeap="200" eternal="false"
           memoryStoreEvictionPolicy="LFU" timeToLiveSeconds="600"
           timeToIdleSeconds="200" />
</ehcache>
Based on my analysis of the responses and some test cases,
I concluded that because the results of the GET and PUT methods differ, @CachePut replaces the existing cache entry,
putting the response of the update method into the new entry.
So can anyone help me retrieve the full list containing the updated element?
Where am I going wrong in configuring @CachePut?

The Spring caching abstraction does exactly what you ask by caching the result of either method under the same key.
Now, if your methods indeed return different results, that's the root of the problem.
There is no built-in support for telling the cache that this list should be merged with the currently cached one.
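Since the abstraction won't merge for you, the update method itself has to return the complete list so that @CachePut stores it whole. Below is a minimal, framework-free sketch of that merge step; the ExposedLocation record and its id field are hypothetical stand-ins for the types in the question:

```java
import java.util.ArrayList;
import java.util.List;

public class LocationCacheMerge {

    // Hypothetical stand-in for the ExposedLocation type in the question.
    record ExposedLocation(String id, String name) {}

    // Replace the matching element and return the FULL list; returning this
    // from updateLocation lets @CachePut store the whole list, not one element.
    static List<ExposedLocation> merge(List<ExposedLocation> cached, ExposedLocation updated) {
        List<ExposedLocation> merged = new ArrayList<>();
        for (ExposedLocation loc : cached) {
            merged.add(loc.id().equals(updated.id()) ? updated : loc);
        }
        return merged;
    }

    public static void main(String[] args) {
        List<ExposedLocation> cached = List.of(
                new ExposedLocation("1", "Main St"),
                new ExposedLocation("2", "Oak Ave"));
        List<ExposedLocation> merged = merge(cached, new ExposedLocation("2", "Oak Avenue"));
        System.out.println(merged.size() + " locations, second is " + merged.get(1).name());
    }
}
```

In the real service, updateLocation would load (or receive) the current list, apply the update, and return the merged list, so the cached value stays complete.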

Related

Can't serialize `PaginatedScanList` - potentially after adding awssdk libraries

We have two versions of a Spring Boot server, both with a generic CrudController, simply relaying GET, PUT, POST and DELETE requests to the relevant DynamoDB table in the data layer.
The get handler is super-simple:
@GetMapping
public ResponseEntity<Iterable<U>> get() {
    return ResponseEntity.ok(service.get());
}
Using the implementation of the service/CrudRepository as (e.g.)
public Iterable<U> get() {
    return repository.findAll();
}
In the older version of the server, which doesn't have the additional awssdk libraries (namely s3, sts, cognitoidentity and cognitoidentityprovider), the response gets serialized perfectly fine - as an array of the response objects.
In the new version, however, it gets serialized as an empty object - {}.
I'm guessing this comes down to losing the ability to serialize PaginatedScanList, since the return value is exactly the same in both server versions.
It's entirely possible that the libraries are a red herring but comparing the two versions, there aren't any other relevant changes on these code paths.
Any idea what could be causing this and how to fix it?
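The question is unanswered here, but one common workaround when a lazy wrapper type such as PaginatedScanList fails to serialize is to materialize it into a plain List before returning it. This is only a sketch of that idea, not a confirmed fix for the library change described above:

```java
import java.util.ArrayList;
import java.util.List;

public class Materialize {

    // Copy a lazy Iterable (e.g. a PaginatedScanList) into a plain ArrayList,
    // which any JSON serializer can handle as an ordinary array.
    static <T> List<T> toList(Iterable<T> iterable) {
        List<T> list = new ArrayList<>();
        iterable.forEach(list::add);
        return list;
    }

    public static void main(String[] args) {
        Iterable<String> lazy = List.of("a", "b", "c"); // stands in for repository.findAll()
        System.out.println(toList(lazy));
    }
}
```

In the controller, returning `ResponseEntity.ok(toList(service.get()))` would then hand the serializer a concrete list instead of the paginated wrapper.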

Can I store sensitive data in a Vert.x context in a Quarkus application?

I am looking for a place to store some request-scoped attributes, such as a user id, using a Quarkus request filter. I later want to retrieve these attributes in a log handler and put them in the MDC logging context.
Is Vertx.currentContext() the right place to put such request attributes? Or can the properties I set on this context be read by other requests?
If this is not the right place to store such data, where would be the right place?
Yes ... and no :-D
Vertx.currentContext() can provide two types of objects:
the root context, shared between all the concurrent processing executed on this event loop (so do NOT share data in it)
duplicated contexts, which are local to the processing and its continuation (you can share data in these)
In Quarkus 2.7.2, we have done a lot of work to improve our support of duplicated contexts. While before they were only used for HTTP, they are now used for gRPC and @ConsumeEvent. Support for Kafka and AMQP is coming in Quarkus 2.8.
Also, in Quarkus 2.7.2, we introduced two new features that could be useful:
you cannot store data in a root context. We detect that for you and throw an UnsupportedOperationException. The reason is safety.
we introduced a new utility class (io.smallrye.common.vertx.ContextLocals) to access the context locals.
Here is a simple example:
AtomicInteger counter = new AtomicInteger();

public Uni<String> invoke() {
    Context context = Vertx.currentContext();
    ContextLocals.put("message", "hello");
    ContextLocals.put("id", counter.incrementAndGet());

    return invokeRemoteService()
            // Switch back to our duplicated context:
            .emitOn(runnable -> context.runOnContext(runnable))
            .map(res -> {
                // Can still access the context local data
                String msg = ContextLocals.<String>get("message").orElseThrow();
                Integer id = ContextLocals.<Integer>get("id").orElseThrow();
                return "%s - %s - %d".formatted(res, msg, id);
            });
}

Saving entities with user defined primary key values

I'm quite new to the Spring Data JDBC library, but so far I'm really impressed.
Unfortunately, the JDBC driver for my database (SAP HANA) doesn't support retrieving generated keys after an INSERT (the implementation of PreparedStatement.getGeneratedKeys() throws UnsupportedOperationException).
Therefore I decided that I won't use generated keys and will define the PK values before saving (and implement Persistable.isNew()). However, even though the PK values are defined before saving, whenever an INSERT operation is triggered it fails with an error saying that the generated keys can't be retrieved.
After investigating the source code of the affected method (DefaultDataAccessStrategy.insert), I recognized that the JdbcOperations update method is always invoked with a KeyHolder parameter.
I (naively) tweaked the code with the following changes and it started to work:
if the PK is already defined, the JdbcOperations update method without the KeyHolder is invoked
such a PK is then immediately returned from the method
The following code snippet from the tweaked insert method illustrates the changes.
Object idValue = getIdValueOrNull(instance, persistentEntity);
if (idValue != null) {
    RelationalPersistentProperty idProperty = persistentEntity.getRequiredIdProperty();
    addConvertedPropertyValue(parameterSource, idProperty, idValue, idProperty.getColumnName());

    /* --- tweak start --- */
    String insertSql = sqlGenerator.getInsert(new HashSet<>(parameterSource.getIdentifiers()));
    operations.update(insertSql, parameterSource);
    return idValue;
    /* --- tweak end --- */
}
So the question is whether a similar change could be implemented in Spring Data JDBC to support a use case like mine.
The question can be considered closed, as the related feature request is registered in the issue tracker.
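The Persistable.isNew() approach mentioned in the question can be sketched as follows. The Persistable interface below is a simplified stand-in for org.springframework.data.domain.Persistable, and the Customer entity is hypothetical:

```java
// Simplified stand-in for org.springframework.data.domain.Persistable<ID>.
interface Persistable<ID> {
    ID getId();
    boolean isNew();
}

public class Customer implements Persistable<String> {

    private final String id;      // user-defined PK, assigned before save
    private boolean isNew = true; // true until the entity has been persisted/loaded

    public Customer(String id) {
        this.id = id;
    }

    @Override
    public String getId() {
        return id;
    }

    // Spring Data consults this to decide between INSERT and UPDATE,
    // instead of checking whether the id is null.
    @Override
    public boolean isNew() {
        return isNew;
    }

    // To be called after a save or load (e.g. from a lifecycle callback).
    public void markNotNew() {
        this.isNew = false;
    }

    public static void main(String[] args) {
        Customer c = new Customer("CUST-001");
        System.out.println(c.isNew()); // new entity -> would INSERT
        c.markNotNew();
        System.out.println(c.isNew()); // existing entity -> would UPDATE
    }
}
```

With a pre-assigned PK and isNew() returning true, Spring Data knows to issue an INSERT even though the id is already populated.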

ODataController erroring after return

I have an OData service implemented with several MVC controllers deriving from ODataController. I am having an issue with all but one of the controllers, where an internal 500 error is returned with nothing helpful, raised after my return statement:
/// <summary><see cref="ODataController" /> reacting to queries relating to <see cref="Contract" /></summary>
[CustomExceptionFilter]
public class ContractsController : ODataController
{
    // GET: odata/Contracts
    [EnableQuery]
    public IQueryable<Contract> GetContracts()
    {
        return DataAccess.GetContracts();
    }

    // ... other methods
}
/// <summary>Single point of reference to access data</summary>
public static class DataAccess
{
    /// <summary>Gets the queryable collection of <see cref="Contract" /></summary>
    /// <returns>The queryable collection of <see cref="Contract" /></returns>
    public static IQueryable<Contract> GetContracts()
    {
        IQueryable<Contract> results = null;
        using (EntityFrameworkContext context = new EntityFrameworkContext())
            results = context.Contracts.ToArray().AsQueryable();
        return results;
    }
}
Another controller using the same DataAccess class returns data just fine. All that is returned by every other controller is:
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
    <m:code/>
    <m:message xml:lang="en-US">An error has occurred.</m:message>
</m:error>
The error appears to be raised after my return statement, and if I step through after the return (F10), I hit each individual { get; } property on the returned entities of the collection, after which a result with the above error is returned to the browser. I can't get actual error information (innererror) to appear for the life of me, and it's odd that one controller works while the rest fail without any detail.
Does anyone have an idea of what might be causing this, or how to turn on error detail after the return statement?
GlobalConfiguration.Configuration.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;
in the global.asax.cs does not help, nor does either of the following in web.config:
<system.web>
    <customErrors allowNestedErrors="true" mode="On" />
</system.web>

<system.webServer>
    <httpErrors existingResponse="PassThrough" />
</system.webServer>
Any idea how I can get to the actual exception being raised?
Here's a simple guide on how to debug the Web API OData and ODL libraries:
http://odata.github.io/WebApi/10-01-debug-webapi-source/
It is hard to say exactly what the cause of this issue is, but here are a couple of things you can try. Hopefully one of these will get you more information on your exception.
Debugging all code
It may seem obvious, but make sure that you are not debugging only your own code. You can check this in the debugging options under the Debug -> Options and Settings... menu: make sure Enable Just My Code is not checked. Leaving it on is usually fine, but when there are exceptions you can't seem to get to, this setting can trip you up.
Debugging exceptions
Another thing that can be both useful and painful at the same time is the Debug -> Exceptions... settings. These are sets of exceptions that Visual Studio can be configured to break on automatically, allowing you to see their moving parts. For debugging MVC and .NET in general, you would want to enable Common Language Runtime Exceptions.
With exception breaking turned on, you will probably get a ton of exceptions that you don't care about. Just ignore them and keep running until you get to the exceptions that relate to OData.

Best strategy to cache expensive web service call in Grails

I have a simple Grails application that needs to make a periodic call to an external web service several times during a user's session (while they use the interface).
I'd like to cache the web service response, but the results from the service change every few days, so I'd like to cache it for a short time (perhaps with daily refreshes).
The Grails cache plugin doesn't appear to support "time to live" settings, so I've been exploring a few possible solutions. I'd like to know what plugin or programmatic solution would best solve this problem.
Example:
BuildConfig.groovy
plugins {
    compile ':cache:1.0.0'
}
MyController.groovy
def getItems() {
    def items = myService.getItems()
    [items: items]
}
MyService.groovy
@Cacheable("itemsCache")
class MyService {
    def getItems() {
        def results
        // expensive external web service call
        return results
    }
}
UPDATE
There were many good options. I decided to go with the plugin approach that Burt suggested. I've included a sample answer with minor changes to the code example above to help others wanting to do something similar. This configuration expires the cache after 24 hours.
BuildConfig.groovy
plugins {
    compile ':cache:1.1.7'
    compile ':cache-ehcache:1.0.1'
}
Config.groovy
grails.cache.config = {
    defaultCache {
        maxElementsInMemory 10000
        eternal false
        timeToIdleSeconds 86400
        timeToLiveSeconds 86400
        overflowToDisk false
        maxElementsOnDisk 0
        diskPersistent false
        diskExpiryThreadIntervalSeconds 120
        memoryStoreEvictionPolicy 'LRU'
    }
}
The core plugin doesn't support TTL, but the Ehcache plugin does. See http://grails-plugins.github.com/grails-cache-ehcache/docs/manual/guide/usage.html#dsl
The http://grails.org/plugin/cache-ehcache plugin depends on http://grails.org/plugin/cache but replaces the cache manager with one that uses Ehcache (so you need both installed)
A hack/workaround would be to use a combination of @Cacheable("itemsCache") and @CacheFlush("itemsCache").
Tell the getItems() method to cache the results.
@Cacheable("itemsCache")
def getItems() {
}
and then another service method to flush the cache, which you can call frequently from a Job.
@CacheFlush("itemsCache")
def flushItemsCache() {}
After several hours of failed battles with SpEL, I have won the war in the end!
As you know, the Grails cache does not have TTL out of the box. You can stick with Ehcache and do some fancy configuration, or, worse, add logic that flushes the cache on save/update etc. But my solution is:
@Cacheable(value = 'domainInstance', key = "#someId.concat((new java.util.GregorianCalendar().getTimeInMillis()/10000))")
def getSomeStuffOfDb(String someId) {
    // extract something of db
}
One more thing to point out: you can skip the configuration in Config.groovy and the cache will be created and registered automatically.
However, if your app is under load straight after startup, that will cause some exceptions.
2017-03-02 14:05:53,159 [http-apr-8080-exec-169] ERROR errors.GrailsExceptionResolver - CacheException occurred when processing request: [GET] /some/get
Failed to get lock for campaignInstance cache creation. Stacktrace follows:
so to avoid that, add the config up front so the cache facilities are ready beforehand:
grails.cache.enabled = true
grails.cache.clearAtStartup = true
grails.cache.config = {
    defaults {
        maxElementsInMemory 10000
        overflowToDisk false
    }
    cache {
        name 'domainInstance'
    }
}
GregorianCalendar().getTimeInMillis()/10000 makes the key change roughly every 10 seconds, giving an effective TTL of ~10 s; /1000 gives ~1 s. Pure maths here.
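The time-bucket trick above can be sketched in plain Java: appending the bucket number to the key means all calls within the same window share one cache entry, and the key (and thus the entry) rolls over when the window passes. The key format here is illustrative:

```java
public class TimeBucketKey {

    // Append a time bucket to the cache key; entries effectively expire
    // when the bucket number rolls over (every windowMillis).
    static String key(String id, long nowMillis, long windowMillis) {
        return id + ":" + (nowMillis / windowMillis);
    }

    public static void main(String[] args) {
        long t = 1_700_000_000_000L;
        // Same 10-second window -> same key (cache hit).
        System.out.println(key("practice-1", t, 10_000).equals(key("practice-1", t + 5_000, 10_000)));
        // Next window -> different key, so the old entry is simply never read again.
        System.out.println(key("practice-1", t, 10_000).equals(key("practice-1", t + 10_000, 10_000)));
    }
}
```

Note that stale entries are not actually evicted; they just become unreachable, so a bounded cache (maxElementsInMemory plus an eviction policy) is still needed.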
From the grails-cache unit tests (look for timeToLiveSeconds), I see that you can configure caching at the cache level, not per method call or similar. Using this method, you would configure the settings under grails.cache.config.
You would create a dedicated cache with your time-to-live settings and then reference it in your service.
