To stay within the rate limits imposed by the Foursquare API, Foursquare previously recommended caching the data requested from it. However, after the recent site redesign, information on how long data should be cached is nowhere to be found. According to archive.org's Wayback Machine, the documentation for the venues/categories endpoint previously said that data from that endpoint should be cached for no more than a week, so I've implemented that in my app. That information is no longer on the documentation page.

I'm now looking to cache data from the venues/ endpoint (all the data for specific places), and likewise there is no information about cache age; I don't remember whether there was any before. Would the one week previously recommended for the venues/categories endpoint be a reasonable cache lifetime for data from venues/? If not, what would be? The API Terms of Use say that no data can be cached for more than 30 days without being updated, but that seems like a long time to keep data from a constantly updated, crowdsourced platform. What cache age has worked well for you in the past?
According to their new documentation, as of May 15, 2018, Foursquare requires that all data be cached for no more than 24 hours.
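If it helps, here is a minimal sketch (in Python) of enforcing that 24-hour limit by storing a fetched-at timestamp next to each cached venue. Note that fetch_venue is a placeholder for whatever call you already make to the venues/ endpoint; it is not part of any Foursquare SDK.

    import time

    CACHE_TTL_SECONDS = 24 * 60 * 60  # the 24-hour maximum from the May 15, 2018 policy
    _cache = {}  # venue_id -> (fetched_at, data)

    def get_venue(venue_id, fetch_venue):
        """Return venue data, refetching if the cached copy is older than the TTL.

        fetch_venue is a stand-in for your existing venues/ API call.
        """
        now = time.time()
        cached = _cache.get(venue_id)
        if cached is not None and now - cached[0] < CACHE_TTL_SECONDS:
            return cached[1]            # still fresh, serve from cache
        data = fetch_venue(venue_id)    # stale or missing, refetch
        _cache[venue_id] = (now, data)
        return data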
I added GA to my site about 14 hours ago and have been visiting the site from different platforms and IPs. I still haven't seen any data populated for sessions, or any data in the Audience tab in GA. But when I head over to the Real-Time tab in GA while I'm connected to my site, I see that GA is tracking me and recording my page views.
Is something wrong, or how long does it take for session data to appear (it's been 14 hours since my first visit)?
For brand new accounts or properties, it usually takes about 24 hours to see data.
This usually also applies to Real Time data, so it's strange you're seeing that already, but I wouldn't worry about it unless you're still not seeing data tomorrow.
Okay, so I have an old ASP Classic website. I've determined I can reduce a huge number of DB calls by caching the data daily. Our site data is read-only and changes very slowly. I think, based on our site usage, I would be able to cache pages by query string for every visit each day without a hit to our server.
My first thought was to use Output Caching, but the problem I discovered right away was that it wasn't until the third page request was generated that I gained any performance. I verified this using SQL profiler, but I'm not sure why.
My second thought was to add the ObjPageCache include file from https://web.archive.org/web/20211020131054/https://www.4guysfromrolla.com/webtech/032002-1.shtml
After some research, I discovered that this could cause more issues than it solves: http://support.microsoft.com/kb/316451
I'm hoping someone on here will tell me that the issue with sending ServerXMLHTTP or WinHTTP requests to the same server has been resolved by Microsoft since 2002.
Depending on how your data is maintained you could choose from a number of ways to cache it.
If your data is changed and saved in one single place, you could choose to generate an HTML file which you save to the server's disk and refer to in your links. This will require write access for the process running your site, though (e.g. NETWORK SERVICE). It will produce fast pages, as the server serves them without any scripting engine getting involved.
Another option is reading the data into a DOMDocument which you store in the Application object and refer to on the page that needs it (hence saving the round trip to the database). You could keep two timestamps together with the cached data: one for the caching time and one for the time the data was last changed in the database. Timestamps allow a fast check for staleness of the cached data: cached timestamp <> database timestamp => refresh data; otherwise use the cached data. One thing to note about this approach is that the Application object does not accept anything other than free-threaded (multithreaded) objects, so you will have to use MSXML2.FreeThreadedDomDocument.6.0.
Personally I prefer the last one, as it allows for more dynamic usage and I don't have to worry about write-access permissions for the process running my site (which would probably pose security risks anyway).
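To make the second option concrete, the staleness check boils down to comparing the cached change timestamp against the database's change timestamp. A rough sketch of that pattern follows, written in Python purely for illustration; in ASP Classic the dictionary below would be the Application object, and load_from_db / get_db_change_time are placeholders for your own data-access calls, not library functions.

    import time

    # Stand-in for the ASP Application object: data plus the two timestamps.
    app_cache = {"data": None, "cached_at": None, "data_changed_at": None}

    def get_data(load_from_db, get_db_change_time):
        """Return cached data, refreshing it when the database has changed."""
        db_changed_at = get_db_change_time()
        if app_cache["data"] is None or app_cache["data_changed_at"] != db_changed_at:
            # cached timestamp <> database timestamp => refresh data
            app_cache["data"] = load_from_db()
            app_cache["data_changed_at"] = db_changed_at
            app_cache["cached_at"] = time.time()
        return app_cache["data"]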
I am caching items in Azure AppFabric Cache, and in some cases I need to know when an item was cached (i.e. the date and time of caching). Does Azure Cache provide any function that exposes this timing information? Using it, I would like to find out whether an item was cached on the current day or on some other day.
I don't think WAAF Caching exposes this information. You might need to store the time in your data yourself.
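In other words, wrap the value together with a timestamp before putting it in the cache, then compare dates when you read it back. A small sketch of that wrapper pattern (in Python for illustration only; AppFabric itself is consumed from .NET, and the field names here are made up):

    import datetime

    def wrap_for_cache(value):
        """Bundle the value with its caching time before putting it in the cache."""
        return {"cached_at": datetime.datetime.utcnow().isoformat(), "value": value}

    def cached_today(entry):
        """Check whether a retrieved entry was cached on the current (UTC) day."""
        cached_at = datetime.datetime.fromisoformat(entry["cached_at"])
        return cached_at.date() == datetime.datetime.utcnow().date()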
For an intranet portal, we need to convert MS Office documents to PDF in real time when someone follows a link to a document, after checking whether the user is authorized to view it. We also need to cache the converted documents based on the last-modified date of the source document: we should not convert a document again if another user requests it and its content has not changed since it was last converted.
I have some basic questions on how we can implement this, and would like to check whether anyone has experience with, or thoughts on, how this could be implemented.
For example, if we choose J2EE as the technology and one of the open-source Java libraries for PDF conversion, I have the following questions.
If there is a 100 MB document, we would need to download the entire document from the system where it is hosted before we start converting it. That raises major concerns about response time, given that this needs to be real-time viewing. Is there an option to read the first page of a document without downloading the whole thing, so that we can convert it page by page?
How can we cache a document? I do not think we can store the document on the server or in a database, because anyone with access to either could then read the document content. Any thoughts?
Or would you suggest an out-of-the-box product for this instead of custom development?
Thanks
I work for a company that creates a product that does exactly what you are trying to do using Java / .NET Web service calls, so let me see if I can answer your questions without bias.
The whole document will need to be downloaded as it will need to be interpreted before PDF Conversion (e.g. for page numbering purposes) can take place. I am sure you are just giving an example, but 100MB is very large for an MS-Office document, although we do see it from time to time.
You can implement caching based on your exact security requirements. If you don't want to store the converted files in a (secured) DB or file system then perhaps you want to store them on a different server behind a firewall. Depending on the number of documents and size you anticipate you may want to cache them in memory. I am sure there are many J2EE caching libraries available, I know there are plenty in .NET. Just keep the most frequently requested documents in your cache.
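As a rough illustration of that caching idea, here is a sketch (in Python rather than Java, purely for brevity) of an in-memory cache keyed by document id and last-modified date, with least-recently-used eviction so that only the most frequently requested documents stay cached. convert_to_pdf is a placeholder for whichever conversion library or web-service call you choose; it is not a real API name.

    import collections

    class ConvertedPdfCache:
        """In-memory cache of converted PDFs, keyed by (doc_id, last_modified)."""

        def __init__(self, convert_to_pdf, max_entries=100):
            self._convert = convert_to_pdf
            self._max = max_entries
            self._entries = collections.OrderedDict()  # (doc_id, last_modified) -> PDF bytes

        def get_pdf(self, doc_id, last_modified, source_bytes):
            key = (doc_id, last_modified)
            if key in self._entries:
                self._entries.move_to_end(key)       # mark as recently used
                return self._entries[key]
            pdf_bytes = self._convert(source_bytes)  # only convert on a cache miss
            self._entries[key] = pdf_bytes
            if len(self._entries) > self._max:
                self._entries.popitem(last=False)    # evict the least recently used entry
            return pdf_bytes

Because the key includes the last-modified date, a second user requesting an unchanged document gets the already-converted PDF, and a modified document naturally misses the cache and gets re-converted.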
Depending on your budget you may go for an out of the box product (hint hint :-). I know there are free libraries available for Java that leverage Open Office, but you get the same formatting limitations when opening MS-Office Files in OO. Be careful when trying to do your own MS-Office integration / automation. It is possible to make it reliable and scalable (we did), but it takes a long time and a lot of work.
I hope this helps.
We're running our blog from a shared hosting account. Our host limits the allowed inodes/number of files on the account to 150,000. We've implemented a caching mechanism that caches all pages in full as soon as they are accessed, so that subsequent requests are served from the cache. Unfortunately, the inode limit will soon prevent us from storing more pages.
Fortunately, we have SQLite on our server. We have MySQL too, but our shared hosting account only allows a maximum of 25 concurrent connections from the Apache web server to the MySQL server. That's a major constraint! SQLite is said to be "serverless", so I believe it won't have that kind of limitation.
With that, should I, and can I, use an SQLite table to store full cached copies of all the dynamic pages of our blog? The average cached page size is around 125 KB, and I have around 125,000 cached pages and growing.
Will that introduce any bottlenecks that slow down delivery of cached pages out of the SQLite database file?
Will I be able to write more pages to the cache_table in the SQLite database while simultaneously delivering requested pages from the cache_table to site visitors?
It's not a good idea, because SQLite usage may impact your website's performance (at least the response time).
I recommend using Memcached or a NoSQL DB instead (you would need to test for any rise in response time).
But if you have no other choice, SQLite will be better than MySQL, because its SELECT operations are faster.
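If you do go the SQLite route, a minimal sketch of the cache table and its read/write helpers might look like the following (Python's built-in sqlite3 module is used here purely for illustration; the table and column names are made up). Enabling WAL mode is one way to keep serving readers while a writer inserts new pages, which speaks to your last question.

    import sqlite3
    import time

    conn = sqlite3.connect("page_cache.db")
    conn.execute("PRAGMA journal_mode=WAL")  # WAL lets readers proceed while a writer is active
    conn.execute("""
        CREATE TABLE IF NOT EXISTS cache_table (
            url        TEXT PRIMARY KEY,
            html       TEXT NOT NULL,
            cached_at  INTEGER NOT NULL
        )
    """)

    def put_page(url, html):
        """Insert or refresh the fully rendered page for a given URL/query string."""
        conn.execute(
            "INSERT OR REPLACE INTO cache_table (url, html, cached_at) VALUES (?, ?, ?)",
            (url, html, int(time.time())),
        )
        conn.commit()

    def get_page(url):
        """Return the cached page, or None on a cache miss."""
        row = conn.execute("SELECT html FROM cache_table WHERE url = ?", (url,)).fetchone()
        return row[0] if row else None

Note that 125,000 pages at roughly 125 KB each is on the order of 15 GB in a single database file, so it is worth testing read performance at that size before committing to this approach.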
I haven't calculated that, because there has never been a need to calculate the maximum page generation time. I cache all pages statically in full, and with that it has always been a stable process without any trouble.
Our server load varies from 400 to 8,000 page requests per hour.