AppFabric using more and more memory - caching

We are using AppFabric 1.1 for caching in a multi services application.
The base functionality works just fine, but now we're fine-tuning and we've stumbled across some memory issues.
We noticed that AppFabric uses more and more memory as time goes by, even though the volume of stored objects is supposed to be finite.
I spent some time reading online and checking my code, but I couldn't find the cause. So before going to war against a "memory leak" (i.e. storage of data that is no longer referenced), I'd like to know whether this is standard behavior in AppFabric.
Could heavy use of GetLock/PutUnlock, combined with massive reading and writing, cause memory to climb up to the limit allowed for the cache cluster?
We used different size limits (1024, 1536, 2048) and different watermarks, but memory still grows up to either the maximum size limit or the high watermark (we tried 75, 80 and 90).
If you need any more information, please tell me.
Thanks in advance.
EDIT
We decided we had to abandon AppFabric, as there seems to be no valid workaround for this issue. Instead we'll use a Windows Azure hosted Redis cache; hopefully Redis will give us everything we need.
If a fix or a solution is known, I'd still like to hear about it.
Thanks.

Related

Concurrent Connections with Apache and Laravel

I'm a bit confused by a problem that has only become more apparent lately, and I'm hoping someone might be able to point me in the direction of either where I might look for the appropriate settings, or whether I'm running into a problem they have come across before.
I have a Laravel application and a private server that I use for our little museum. As the application has become more complex, the lag has become noticeable, and you can see how it almost queues the connections, finishing one request before moving along to the next, whether it's an API call, AJAX, a view response, whatever.
I am running Apache 2.4.29 on Ubuntu Server 18.04.1.
I have been looking around, but not much has helped with regard to connection settings. If I look at my phpinfo() I see Max Requests Per Child: 0, Keep Alive: on, Max Per Connection: 100, but I believe these are fine the way they are.
If I check my memory, it says I have 65 GB available, with 5 GB used for caching. Watching the live data, memory usage never crosses into GB territory and stays solely in the MB range. This server is used exclusively for this Laravel project, so I don't have to worry about affecting other projects; I'd just like to make sure this application is getting the best use it can for its purpose.
I'd appreciate any suggestions. I know there's a chance the terms I'm searching for are incorrect, or maybe just outdated, so if there are any potentially useful resources out there, I'd appreciate those as well.
Thank you so much!
It's really hard to tell since a lot of details are missing, but here are some things that can give you a direction of where to look:
Try installing htop via apt-get and watch what happens to your CPU/RAM load with each request to the server.
Do you use php-fpm to manage the PHP requests? This might help in finding out whether the problem lies in your PHP code or in the Apache configuration.
Did you try deploying to a different server? Do you still see the lag on the other server as well? If not, this points to a configuration problem rather than an issue with your code.
Do you have other processes running in the background that might slow things down? Cron? A Laravel queue?
If you install another app on the server (say, phpMyAdmin), is it slow as well or does it work fine?
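One quick way to confirm whether requests really are being handled one at a time is to fire a few in parallel and compare the timings. A minimal sketch (the URL is a placeholder for one of your own endpoints):

```python
# Hedged sketch: fire N requests in parallel and compare against a single request.
# If the wall time for N parallel requests is roughly N x the single-request time,
# the requests are being serialized (e.g. PHP session locking, too few workers).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost/api/ping"  # placeholder -- use one of your own routes
N = 8

def timed_fetch(_):
    start = time.perf_counter()
    with urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

single = timed_fetch(0)
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=N) as pool:
    durations = list(pool.map(timed_fetch, range(N)))
wall = time.perf_counter() - start

print("single request: %.2fs" % single)
print("%d parallel requests took %.2fs wall time" % (N, wall))
print("per-request times: %s" % ", ".join("%.2fs" % d for d in durations))
```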
Try to take it from here. Best of luck.

How to speed up the TYPO3 Backend?

Given: each call to a BE module takes several seconds, even with an SSD drive. (A well-configured setup runs below 1 second for general BE tasks.)
What are likely bottlenecks?
How to check for them?
What are the options to speed it up?
On purpose, I'm not giving a specific configuration but asking for a general checklist, so that the answer is suitable for many people as a first entry point.
General tips on performance tuning for TYPO3 can be found here: https://wiki.typo3.org/Performance_tuning
However, in my experience most general performance problems are due to one of a few reasons:
Bad/no caching. Usually this is a problem with one or more extensions (partly) disabling the cache. Try disabling all third-party extensions and re-enabling them one by one to see which one slows the site down the most. $GLOBALS['TSFE']->set_no_cache() will disable all caching, so you could search for that. USER_INT and COA_INT objects in TypoScript also disable the cache for anything configured inside them.
A lot of data. Check the database for tables containing a lot of data. How much constitutes "a lot" depends on many factors, but generally anything below a million records shouldn't be too much of a problem, unless, for example, you run queries with things like LIKE '%...%' on fields containing a lot of data.
Not enough resources on the server. To fix this, add more memory and/or CPU cores to the server. Or, if it's a shared server, reduce the number of sites running on it.
Heavy traffic. No matter how many resources a server has, there is always a limit to the number of requests it can process in a given time. If this is your problem, you will have to look into load balancing and caching servers. If you don't (normally) have a lot of visitors, high traffic can still be caused by robots crawling your site too quickly. These are usually easy to block by IP address in your firewall or web server configuration.
A slow backend on a server without any other traffic (you're the only one who can access it) rules out 1 (caching problems can only cause a slow backend if users are hitting the frontend and generating high server load) and 4 (no other traffic).
One further aspect you could inspect: a lot of things are stored in the backend user record, for example the settings you used in the log module.
One setting that could consume a lot of memory (and time to serialize and deserialize) is the state of the page tree (which pages are expanded and which are not).
Cleaning the user settings could make the backend faster for this user.
If you have a large page tree and the user has to navigate through many pages, the effect will quickly wear off. Another drawback: you lose all settings, as there is still no selective cleaning.
I cannot comment here, but I need to say: the TSFE object does absolutely nothing in the TYPO3 backend. The backend is always uncached. The TYPO3 backend is a standalone module to edit and maintain the frontend output. There are tons of Google search results that ignore this fact.
Possible performance bottlenecks are poorly written extensions that do rendering or data processing. Hooks into core functions are usually no big deal, but rendering many elements for edit forms (especially in TYPO3's Fluid template engine) can cause performance problems.
The Extbase DBAL layer can also cause massive performance problems. The reason is that the database model knows nothing about indexes. It's simple but stupid. An SQL join on a big table of 2000+ records will delay the output perceptibly, depending on the data model.
Also, the TYPO3 backend does not really depend on the TypoScript configuration, but because it is used to control some output, or is loaded by extensions, the full parsing of the *.ts files is still needed. And this parser is very slow.
If you want to speed things up, you need to know what goes wrong. The only way to debug this behaviour is to inspect the runtime with a PHP profiling tool like Xdebug, because the TYPO3 framework is very complex. It uses some kind of Doctrine framework and loads tons of files on every request, so a well-configured OPcache is a must.
The main reason the whole thing is slow is that it is poorly written. You can confirm that by inspecting the runtime.
In addition to what has already been said, put the runtime environment on your checklist:
Memory:
If a heavy IDE and other tools are open at the same time, available memory can become an issue. To check the memory profile, you can start a tool that monitors the memory usage of the machine.
If virtualization is used, check the memory assigned to the box. Try whether assigning more memory improves behaviour.
If required and possible, give your machine more memory. This should not be used as a fix for poorly written code, though; bad code can blow through any amount of memory.
File access:
TYPO3 reads and writes thousands of files. If you work with a contemporary SSD, this is surprisingly fast. I did measure this. Loading all class files of TYPO3 takes just a fraction of a second.
However this may look different if you do not work with a standard setup. Many factors may slow you down:
USB sticks as storage.
Memory cards as storage.
All kinds of external storage may be limited by slow drivers.
Virtualization can become an issue. Again, it's a question of drivers.
If in doubt, store your files and DB on a different drive and compare the behaviour.
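If you want to put a number on file access, a crude check is to time reading all PHP files from the drive in question and repeat the test on another drive. A minimal sketch (the path is just an example):

```python
# Hedged sketch: time how long it takes to read every *.php file under a directory.
# Run it once per drive/setup you want to compare. Note that a second run on the
# same files mostly measures the OS page cache, not the disk.
import os
import time

ROOT = "/var/www/typo3"  # example path -- point this at the installation to test

start = time.perf_counter()
files = 0
total_bytes = 0
for dirpath, _dirs, names in os.walk(ROOT):
    for name in names:
        if name.endswith(".php"):
            with open(os.path.join(dirpath, name), "rb") as fh:
                total_bytes += len(fh.read())
            files += 1
elapsed = time.perf_counter() - start
print("read %d files (%.1f MB) in %.2fs" % (files, total_bytes / 1e6, elapsed))
```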
Routing:
The database itself may be fast, but bad routing of your requests may still slow you down. Think of firewalls, proxies, etc., even on your local machine and especially if virtualization is used.
Database connection:
A fast database connection is crucial. If database access is slow, TYPO3 can't be fast.
Especially due to Extbase, TYPO3 often queries much more data than is really required, and more often than really required, because a lot of relations are resolved in the PHP layer instead of in the DB itself. Loading data structures like the rootline may cause a lot of ping-pong between the PHP and DB layers.
I can't give advice on how to measure your DB connection; you will have to ask your admin for that. What you can always do is test and compare with another DB from a completely different environment.
The speed of the database may also depend on the type of database itself. Typically you use MySQL/MariaDB, which should be fast. It also depends on the factors mentioned above: memory, file access and routing.
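To get at least a rough number for the DB connection without admin tools, you can time a trivial query over many round trips from the web server host. A minimal sketch using the third-party PyMySQL package (credentials are placeholders):

```python
# Hedged sketch: measure the round-trip time of a trivial query. With "SELECT 1"
# the time is dominated by connection/network latency, not query cost.
import time
import pymysql  # third-party: pip install pymysql

# placeholder credentials -- adapt to your setup
conn = pymysql.connect(host="localhost", user="typo3", password="secret", database="typo3")
cur = conn.cursor()

N = 200
start = time.perf_counter()
for _ in range(N):
    cur.execute("SELECT 1")
    cur.fetchall()
elapsed = time.perf_counter() - start
print("%d round trips in %.0f ms (%.2f ms each)" % (N, elapsed * 1000, elapsed * 1000 / N))

cur.close()
conn.close()
```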
Strategy:
Even without being an admin or knowing all the performance tools, you can always swap out parts of your system and check whether things improve. With this approach you can localise the culprit without being an expert. Once you have spotted the culprit, Google may help you find more information.
When it comes to a clean and performant setup of routing or virtualisation, it's still best to ask an experienced admin.
Summary
This is all in addition to what others have already pointed out.
What would be really helpful is a BE plugin that analyses and measures the environment. Maybe there are some out there that I don't know of.

Fetching operations from the NCache server are taking more time than previously

In my office, we have a server with NCache installed for storing and retrieving data, and our applications are hosted there as well.
There was an issue where the application was timing out. Digging deeper, I found that the get-from-cache call to NCache is taking 8-9 seconds, whereas it previously took 0.5 seconds. The application hasn't changed recently and was working fine before; this issue appeared all of a sudden. Someone told me there was an incident where all the clustered caches were suddenly deleted from NCache Manager, and we recovered by setting basic values from a tutorial available online. But this issue never seems to get solved. Can anyone throw some light on what we can do to overcome this timeout issue?
This seems like an application/environment-related issue, where a working application is now showing slow fetch times even though it was fine previously. Also, if your console app gets results in less than a second, that again suggests the issue is not on the NCache server end but is isolated to the application.
I suggest starting by reviewing what has changed in the application. You can also profile your application to see which calls are taking more time now. NCache client-side Windows performance counters can also be reviewed to determine whether it is slow because of NCache or because of some application-related issue.
Moreover, caching a huge object is generally not recommended. You should always break bigger objects into smaller objects and then cache those; this reduces network and storage overhead for your application. If you have to use bigger objects, consider using compression.
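To illustrate the idea of breaking a big object into smaller cached pieces (a generic Python sketch against a plain dict standing in for the cache, not the NCache client API):

```python
# Hedged sketch of the idea only: split one big object into small compressed chunks,
# each stored under its own key. `cache` is a plain dict standing in for your cache client.
import pickle
import zlib

CHUNK_SIZE = 64 * 1024  # 64 KB per entry; tune for your environment
cache = {}

def put_large(key, obj):
    blob = zlib.compress(pickle.dumps(obj))  # serialize and compress once
    chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
    for i, chunk in enumerate(chunks):
        cache["%s:%d" % (key, i)] = chunk    # each chunk is a small cache entry
    cache[key] = len(chunks)                 # index entry: number of chunks

def get_large(key):
    count = cache.get(key)
    if count is None:
        return None
    blob = b"".join(cache["%s:%d" % (key, i)] for i in range(count))
    return pickle.loads(zlib.decompress(blob))

# usage
put_large("report:2024", {"rows": list(range(100000))})
print(len(get_large("report:2024")["rows"]))  # 100000
```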
NCache default settings are already tuned for optimum performance and should not slow things down. You should check the firewall between the client and the NCache server to rule out any environmental issues.

Google App Engine memcache extremely slow

I recently launched my app for iPhone/Android, running with an App Engine backend. This is my first experience using App Engine in production.
As I get more traffic, I am starting to experience serious latency issues. Currently the minimum number of idle instances is 1 and max_pending_latency is 1s.
Yes, there is room for optimization on my side, however I do not understand:
Why the latency is not correlated with requests/sec, traffic, memory usage, memcache usage, or anything else. I do not understand why there was no significant latency on Sep 21.
Why a call to memcache needs to be as slow as 500 ms (usually it is 10 times faster). I am using NDB and a 1 GB dedicated memcache; increasing it to 5 GB had no effect.
Is this simply how App Engine works? I would like to get your insight.
Thanks
I forgot to update this... I remember the issue was caused by me creating datastore keys myself. Basically, poorly distributed keys introduced a "hot tablet" problem. Once I stopped creating my own keys and let App Engine create them, the issue seemed to be resolved.
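To make that concrete, a rough sketch with the (legacy) Python NDB client; the Reading model and the sequential-id scheme are made up for illustration:

```python
# Hedged sketch: self-assigned sequential ids vs. datastore-allocated ids.
from google.appengine.ext import ndb

class Reading(ndb.Model):  # made-up example model
    value = ndb.FloatProperty()

# Anti-pattern: monotonically increasing ids cluster writes on one "hot tablet".
def store_with_own_key(counter, value):
    return Reading(id="reading-%010d" % counter, value=value).put()

# Letting the datastore assign the id gives well-scattered keys.
def store_with_auto_key(value):
    return Reading(value=value).put()  # key id allocated by App Engine
```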
We experienced very long deserialization times when we stored a lot of entities under the same memcache key. It can take a long time if you store a big array of entities with a lot of structured properties.
You cannot store an object larger than 1 MB under a single cache key. You can use Titan for App Engine to split your cache key into several other cache keys using its sharded memcache; it's transparent.
I hope it will help you.
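For reference, the underlying idea (independent of Titan, whose actual API may differ) looks roughly like this with the App Engine Python memcache API; key names and the chunk size are just examples:

```python
# Hedged sketch: store a value larger than memcache's ~1 MB per-entry limit by
# splitting it across several keys (the idea behind "sharded memcache").
import pickle

from google.appengine.api import memcache

CHUNK = 900 * 1024  # stay safely below the per-entry limit

def set_sharded(key, obj, time=0):
    blob = pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
    shards = {"%s:%d" % (key, i): blob[i * CHUNK:(i + 1) * CHUNK]
              for i in range((len(blob) + CHUNK - 1) // CHUNK)}
    shards[key] = len(shards)        # index entry: number of shards, under the bare key
    return memcache.set_multi(shards, time=time) == []  # empty list => all keys stored

def get_sharded(key):
    count = memcache.get(key)
    if count is None:
        return None
    keys = ["%s:%d" % (key, i) for i in range(count)]
    shards = memcache.get_multi(keys)
    if len(shards) != count:
        return None                  # a shard was evicted; treat the whole value as a miss
    return pickle.loads(b"".join(shards[k] for k in keys))
```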

How does caching work in OpenCPU?

This question is directed towards Jeroen and is a follow-up to this answer: https://stackoverflow.com/a/12482918/177984
Jeroen wrote "the server does caching" .. "so if enough memory is available it will automatically be available from memory."
How can I confirm if an object is cached 'in-memory' or not? From what I can tell (by performance) all of my objects are being read from disk. I'd like to have things read from memory to speed up data load times. Is there a way to view what's in the in-memory cache? Is there a way to force caching objects in-memory?
Thanks for your help.
The OpenCPU project is rapidly evolving. Things have changed in OpenCPU 1.0. Have a look at the website for the latest information: http://www.opencpu.org.
The answer that you cited is outdated. Currently, all caching is indeed done on disk. In a previous version, OpenCPU used Varnish for caching, which is completely in-memory. However, this turned out to make things more complicated (especially HTTPS), and performance was a bit disappointing (especially in comparison with today's fast disks). So now we're back to nginx, which caches on disk but is much more mature and configurable as a web server, and has other performance benefits.
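If you want to observe the caching yourself, one simple check is to request the same resource twice and compare timings and cache-related response headers. A minimal sketch (the URL is only an example endpoint; adjust it to your server):

```python
# Hedged sketch: request the same resource twice. A much faster second response and
# cache-related headers (ETag / Cache-Control) hint at the server-side cache doing its job.
import time
from urllib.request import urlopen

URL = "http://localhost/ocpu/library/stats/man/rnorm/text"  # example endpoint -- adjust to your server

for attempt in (1, 2):
    start = time.perf_counter()
    with urlopen(URL) as resp:
        body = resp.read()
        etag = resp.headers.get("ETag")
        cache_control = resp.headers.get("Cache-Control")
    print("attempt %d: %.3fs, %d bytes, ETag=%s, Cache-Control=%s"
          % (attempt, time.perf_counter() - start, len(body), etag, cache_control))
```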
