Apache Ignite: SqlQuery doesn't seem to work as expected with TouchedExpiryPolicy - caching

We set the TouchedExpiryPolicy to 10 seconds and expected the cache content to stay available as long as consecutive accesses are no more than 10 seconds apart, with the content expiring after 10 seconds of non-use.
But we find that the cache content is removed at the 10-second mark after creation, even when a SqlQuery accesses the data before then. It works fine if we use a ScanQuery or cache.getAll(), but it fails for SqlQuery.
Is it possible that TouchedExpiryPolicy/getExpiryForAccess (the JCache method that resets the expiry on access) is not applied for SqlQuery?
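For reference, a minimal sketch of the kind of setup described, assuming an already-started Ignite instance and a hypothetical Person class with a query-enabled age field:
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.configuration.CacheConfiguration;

// Cache whose entries expire 10 seconds after the last touch (create or access).
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("persons");
cfg.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));
cfg.setIndexedTypes(Long.class, Person.class);
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

// Expected: any read within 10s resets the TTL. In practice only ScanQuery
// and getAll() behave that way; reads via SqlQuery do not touch the entries.
cache.query(new SqlQuery<Long, Person>(Person.class, "age > ?").setArgs(18)).getAll();
cache.query(new ScanQuery<Long, Person>()).getAll();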

You're right, SQL SELECT indeed seems to work without updating the access-based TTLs. Filed https://issues.apache.org/jira/browse/IGNITE-7687 for that.

Related

af:poll timeout is not working as expected

I'm using af:poll in a page:
<af:poll binding="#{backingBeanScope.backing_pages_xyzView.p2}"
         id="p2"
         interval="#{sessionScope.manage_Template.dataRefreshRate}"
         pollListener="#{backingBeanScope.backing_pages_xzy.pollTablexyzView}"
         partialTriggers="resId1" timeout="36000000"/>
I have set a large value for the timeout parameter because I don't want this page to time out after the default 10-minute period.
However, the page still times out after 10 minutes.
Why is this happening? If I set the timeout to a duration of less than 10 minutes, it works.
Here is the documentation I referred to: http://docs.oracle.com/html/E12419_09/tagdoc/af_poll.html
Note: this Oracle web app is configured to run on Tomcat 7.0.16 (ADF version: 11.1.1.7).
UPDATE: when I remove partialTriggers="resId1", it works. Why?

Oracle Coherence Refresh-Ahead: refresh doesn't work if the cache is queried earlier than the soft-expiration period

I'm seeing strange behaviour with the refresh-ahead functionality.
Here is my configuration:
<cache-config>
  <defaults>
    <serializer>pof</serializer>
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>sample</cache-name>
      <scheme-name>extend-near-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near-distributed</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>20000</high-units>
          <expiry-delay>10s</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <service-name>sample</service-name>
      <thread-count>20</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <expiry-delay>10s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.sample.CustomCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
If I request my service with a period of 6s (10s * 0.5), everything is fine: there is no delay in the response (except for the first request). But if I change the period to 3 seconds, for example, I start getting delays every 10 seconds, and I have no idea why. It looks like if I request my service before the expected refresh window (from 5 to 10 seconds), asynchronous loading doesn't happen, even if I request it again afterwards. Is there an explanation for this, and how can I work around this behaviour?
Thanks
The problem has been solved. The reason I got into this situation is that the front scheme didn't notify the back scheme, because both had the same expiration time. In short, to use the refresh-ahead functionality with a near cache you have to set the expiration time of the front scheme equal to the soft-expiration time of the back scheme (in this case 10s * 0.5 = 5s).
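In other words, a sketch of the corrected front-scheme from the configuration above (only the expiry-delay changes):
<front-scheme>
  <local-scheme>
    <high-units>20000</high-units>
    <!-- match the back scheme's soft-expiration: 10s * 0.5 = 5s -->
    <expiry-delay>5s</expiry-delay>
  </local-scheme>
</front-scheme>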

How to Fix Read timed out in Elasticsearch

I used Elasticsearch-1.1.0 to index tweets.
The indexing process is okay.
Then I upgraded the version. Now I use Elasticsearch-1.3.2, and I get this message randomly:
Exception happened: Error raised when there was an exception while talking to ES.
ConnectionError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)).
Snapshot of the randomness:
Happened --33s-- Happened --27s-- Happened --22s-- Happened --10s-- Happened --39s-- Happened --25s-- Happened --36s-- Happened --38s-- Happened --19s-- Happened --09s-- Happened --33s-- Happened --16s-- Happened
--XXs-- = after XX seconds
Can someone point out how to fix the Read timed out problem?
Thank you very much.
It's hard to give a direct answer, since the error you're seeing might be associated with the client you are using. However, a solution might be one of the following:
1. Increase the default timeout globally when you create the ES client by passing the timeout parameter. Example in Python:
es = Elasticsearch(timeout=30)
2. Set the timeout per request made by the client. Taken from the Elasticsearch Python docs:
# only wait for 1 second, regardless of the client's default
es.cluster.health(wait_for_status='yellow', request_timeout=1)
Passing request_timeout per call overrides the client's default, so you can give the cluster extra time to respond on slow operations.
Try this:
es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
It may not fully prevent ReadTimeoutError, but it minimizes the occurrences.
Read timeouts can also happen when the query size is large. For example, with a pretty large ES index (> 3M documents), a search for a query with 30 words took around 2 seconds, while a search for a query with 400 words took over 18 seconds. So for a sufficiently large query, even timeout=30 won't save you. An easy solution is to crop the query to a size that can be answered below the timeout.
For what it's worth, I found that this seems to be related to a broken index state.
It's very difficult to reliably recreate this issue, but I've seen it several times: operations run as normal except for certain ones (specifically, refreshing an index, it seems) that periodically appear to hang ES.
Deleting an index (curl -XDELETE http://localhost:9200/foo) and reindexing from scratch fixed this for me.
I recommend periodically clearing and reindexing if you see this behaviour.
Increasing various timeout options may immediately resolve issues, but does not address the root cause.
Provided the Elasticsearch service is available and the indexes are healthy, try increasing the Java minimum and maximum heap sizes: see https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html.
TL;DR: edit /etc/elasticsearch/jvm.options and set -Xms1g and -Xmx1g.
You should also check that Elasticsearch itself is fine. Some shards can be unavailable; here is a good doc about the possible reasons for unassigned shards: https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/
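For example, overall cluster and shard health can be checked with the standard health endpoint:
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
A yellow status means some replica shards are unassigned; red means some primary shards are unassigned.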

Solr performance with commitWithin does not make sense

I am running a very simple performance experiment where I post 2000 documents to my application, which in turn persists them to a relational DB and sends them to Solr for indexing (synchronously, in the same request).
I am testing 3 use cases:
1. No indexing at all: ~45 sec to post 2000 documents
2. Indexing included, commit after each add: ~8 minutes (!) to post and index 2000 documents
3. Indexing included, commitWithin 1ms: ~55 seconds (!) to post and index 2000 documents
The 3rd result does not make any sense; I would expect behavior similar to that of case 2. At first I thought the documents were not really committed, but I could actually see them being added by executing some queries during the experiment (via the Solr web UI).
I am worried that I am missing something very big. Is it possible that committing after each add degrades performance by a factor of 400?!
The code I use for point 2:
SolrInputDocument doc = // get doc
SolrServer solrConnection = // get connection
solrConnection.add(doc);
solrConnection.commit();
Whereas the code for point 3:
SolrInputDocument doc = // get doc
SolrServer solrConnection = // get connection
solrConnection.add(doc, 1); // According to API documentation I understand there is no need to call an explicit commit after this
According to this wiki:
https://wiki.apache.org/solr/NearRealtimeSearch
the commitWithin is a soft commit by default. Soft commits are very efficient at making added documents immediately searchable, but they are not on disk yet: the documents are committed into RAM. In this setup you would use the updateLog to make the Solr instance crash-tolerant.
What you do in point 2 is a hard commit, i.e. flushing the added documents to disk. Doing this after each document add is very expensive. So instead, post a bunch of documents and issue a single hard commit, or set autoCommit to some reasonable value, like 10 minutes or 1 hour (depending on your users' expectations).
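For illustration, a sketch of that batching approach, reusing the same solrConnection as above (the batch size of 500 is an arbitrary choice):
// Buffer documents and issue one hard commit per run instead of one per document.
List<SolrInputDocument> batch = new ArrayList<>();
for (SolrInputDocument doc : docs) { // docs = the 2000 documents to index
    batch.add(doc);
    if (batch.size() == 500) {
        solrConnection.add(batch); // one request for the whole batch
        batch.clear();
    }
}
if (!batch.isEmpty())
    solrConnection.add(batch);
solrConnection.commit(); // single hard commit, flushing everything to disk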

Zend Framework Cache

I'm trying to make an ajax autocomplete search box that of course uses SQL (minimum 3 characters), and I already have a SQL view of the relevant fields set up and indexed in the DB. The CPU still spikes when searching, which I expected, as it runs a query for every character typed. I want to use the Zend shm cache to speed up results and reduce CPU usage. The results are stored in an array which is to be cached like this:
while ($row = db2_fetch_row($stmt)) {
    $fSearch[trim($row[0]).trim($row[1])] = array(/*array built here*/);
}
if (zend_shm_cache_store('fSearch', $fSearch, 10 * 60) === false) {
    error_log('Failed to store search cache!');
}
Of course there's actual data inside the array instead of comments; I just shortened the code for simplicity. Rows 0 and 1 form the PK, and this has been tested to work properly. It's zend_shm_cache_store that fails: the error log gets flooded with 'Failed to store search cache!'. I read that zend_shm_cache_store can store any array that can be serialized - how can I tell whether my data can be serialized? Are there any other potential causes? I made a test page that only stored a string, and that succeeded, so I know caching is on.
Solved: the cache size was too small for the array - I increased the cache size and it worked fine. Sorry for the trouble.
