I'm looking for examples for testing a couple of Caffeine caches I have implemented (one has a cache-wide timed expiry and the other has a per-entry timed expiry using the Expiry interface). I've searched for good examples online but have not found any well-documented ones.
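For what it's worth, one testable pattern is to inject Caffeine's Ticker so the test controls time instead of sleeping. Below is a minimal sketch; the key/value types, and the idea of the value doubling as its own TTL in the per-entry case, are purely illustrative assumptions:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;

public class CaffeineExpiryTest {
    public static void main(String[] args) {
        // A manual clock: the test advances it instead of sleeping.
        AtomicLong nanos = new AtomicLong();

        // Cache-wide expiry: every entry lives 10 minutes after write.
        Cache<String, String> fixed = Caffeine.newBuilder()
            .ticker(nanos::get)                       // inject the fake clock
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .build();

        fixed.put("k", "v");
        nanos.addAndGet(TimeUnit.MINUTES.toNanos(11)); // "wait" 11 minutes
        if (fixed.getIfPresent("k") != null) throw new AssertionError("entry should have expired");

        // Per-entry expiry via the Expiry interface. For illustration only,
        // the value itself encodes its lifetime in seconds.
        Cache<String, Integer> variable = Caffeine.newBuilder()
            .ticker(nanos::get)
            .expireAfter(new Expiry<String, Integer>() {
                @Override public long expireAfterCreate(String key, Integer ttlSeconds, long currentTime) {
                    return TimeUnit.SECONDS.toNanos(ttlSeconds);
                }
                @Override public long expireAfterUpdate(String key, Integer ttlSeconds, long currentTime, long currentDuration) {
                    return TimeUnit.SECONDS.toNanos(ttlSeconds);
                }
                @Override public long expireAfterRead(String key, Integer ttlSeconds, long currentTime, long currentDuration) {
                    return currentDuration; // reads do not extend the lifetime
                }
            })
            .build();

        variable.put("short", 5);   // expires 5 seconds after create
        variable.put("long", 60);   // expires 60 seconds after create
        nanos.addAndGet(TimeUnit.SECONDS.toNanos(10));
        if (variable.getIfPresent("short") != null) throw new AssertionError("short entry should have expired");
        if (variable.getIfPresent("long") == null) throw new AssertionError("long entry should survive");
        System.out.println("ok");
    }
}
```

The point of the injected ticker is that the assertions are deterministic: no Thread.sleep, no flaky timing.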
I am building an application using Couchbase as my primary DB.
I want the application to be scalable enough to handle multiple concurrent requests.
How do you create connection pools for couchbase in Go?
Postgres has pgxpool.
I'll give a bit more detail about how gocb works. Under the hood of gocb is another SDK called gocbcore (direct usage of gocbcore is not supported), which is a fully asynchronous API. Gocb provides a synchronous API over gocbcore, as well as making the API quite a lot more user-friendly.
What this means is that if you issue requests across multiple goroutines then you can get multiple requests written to the network at a time. This is effectively how the gocb bulk API works - https://github.com/couchbase/gocb/blob/master/collection_bulk.go. Both of these approaches are documented at https://docs.couchbase.com/go-sdk/current/howtos/concurrent-async-apis.html.
If you still don't get enough throughput, then you can look at using one of these approaches alongside increasing the number of connections that the SDK makes to each node, via the kv_pool_size query string option in your connection string, e.g. couchbases://10.112.212.101?kv_pool_size=2. However, I'd recommend only changing this if the above approaches are not providing the throughput that you need; the SDK is designed to be highly performant anyway.
go-couchbase already has a connection pool mechanism: conn_pool.go (even though there are a few issues linked to it, like issue 91).
You can see it tested in conn_pool_test.go, and in pools.go itself.
dnault points out in the comments that the more recent couchbase/gocb uses a Cluster instead of a pool of connections.
In order to stay within the rate limits imposed by the Foursquare API, they previously recommended caching the data requested from it. However, after the recent site redesign, information on how long data should be cached is nowhere to be found.

According to archive.org's Wayback Machine, the documentation for the venues/categories endpoint previously said that the data for that endpoint should be cached for no more than a week, so I've implemented that in my app. That information is no longer on that documentation page.

I'm now looking to cache the data from the venues/ endpoint (all the data of specific places), and likewise, no information about cache age is given, and I don't remember if there was any before. Would the one week previously recommended for the venues/categories endpoint be a reasonable cache lifetime for data from venues/? If not, what would be?

The API Terms of Use say that no data can be cached more than 30 days without being updated, but that seems like a long time to keep data from a constantly updated, crowdsourced platform. What cache age has worked well for you in the past?
According to their new documentation, as of May 15, 2018, Foursquare requires that all data be cached for no more than 24 hours.
I've come to you today in hopes of getting some support regarding the Google Distance Matrix API. Currently I'm using it in a very simple way, with Web Service requests through an HTTP interface, and am having no problems getting results. Unfortunately, my project seems to be running into query limits due to the 2,500-query quota. I have added billing to the project to allow for going over 2,500 queries, and the increased quota is reflected in my project. What's funky, though, is that the console is not showing any usage, so I'm not sure whether these requests are being run against what I have set up.
I am using a single API key for the project, which is present in my requests, and as I said before, the requests ARE working. But I'm hoping someone can shed some light on why my queries might not be reflected in my usage, and on how I can verify that my requests are being run under the project to which I have attached billing.
If there is any information I can provide to help assist in finding an answer, please feel free to let me know and I'll be happy to give what information I can.
After doing some digging I was able to find the following relevant thread to answer my question:
Google API Key hits max request limit despite billing enabled
I added GA to my site about 14 hours ago and have been visiting the site from different platforms and IPs. I still haven't seen any data populated for sessions, or any data in the Audience tab in GA. But when I head over to the Real-Time tab in GA while I'm connected to my site, I see that GA is tracking me and recording my page views.
Is there something wrong or how long does it take for sessions to take effect (it's been 14 hours since my first connect)?
For brand new accounts or properties, it usually takes about 24 hours to see data.
This usually also applies to Real Time data, so it's strange you're seeing that already, but I wouldn't worry about it unless you're still not seeing data tomorrow.
Please let me know if cache eviction can be done at a particular time of day instead of via a TTL. I am using the Spring Framework, so if any API provides this feature I can use it by plugging it into Spring.
I did search to see whether a similar question had already been asked, but failed to find one.
If a similar question has been asked, please share the link.
Thanks, Amitabh
According to GemFire docs:
You configure for eviction based on entry count, percentage of available heap, and absolute memory usage. You also configure what to do when you need to evict: destroy entries or overflow them to disk. See Persistence and Overflow.
http://gemfire.docs.pivotal.io/latest/userguide/index.html#developing/eviction/configuring_data_eviction.html
But you may be able to get something closer to what you need through Custom expiration. Please check the following link:
http://gemfire.docs.pivotal.io/latest/userguide/index.html#developing/expiration/configuring_data_expiration.html
Ehcache expiration does not offer such a feature out of the box.
You still have some options:
Configure the TTL when creating the Element with a computed value.
Use refresh-ahead, or even better, scheduled refresh-ahead.
Have a look at the following question. Note that this may not work with all configurations, as sometimes the Element gets re-created internally.
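For the computed-TTL option, one way to approximate "evict at a fixed time of day" is to set each Element's TTL to the number of seconds remaining until the next eviction point. A sketch of the calculation using java.time; the Ehcache 2.x calls at the end are shown only as comments, and the 02:00 eviction time is just an example:

```java
import java.time.Duration;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class MidnightTtl {
    // Seconds from 'now' until the next occurrence of 'evictionTime'.
    static long secondsUntil(ZonedDateTime now, LocalTime evictionTime) {
        ZonedDateTime next = now.with(evictionTime);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already past today's slot, use tomorrow's
        }
        return Duration.between(now, next).getSeconds();
    }

    public static void main(String[] args) {
        // At 22:00 UTC, an entry meant to die at 02:00 gets a 4-hour TTL.
        ZonedDateTime now = ZonedDateTime.of(2024, 1, 1, 22, 0, 0, 0, ZoneId.of("UTC"));
        long ttl = secondsUntil(now, LocalTime.of(2, 0));
        System.out.println(ttl); // 4 hours -> 14400

        // With Ehcache 2.x you would then do (not compiled here):
        //   Element element = new Element(key, value);
        //   element.setTimeToLive((int) ttl);
        //   cache.put(element);
    }
}
```

Entries put closer to the eviction time simply get shorter TTLs, so everything expires at roughly the same wall-clock moment.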