In my office, we have a server with NCache installed for storing and retrieving data, and our applications are hosted there as well.
There was an issue where the application was timing out. Digging deeper, I found that the cache Get call to NCache was taking 8-9 seconds, where it previously took about 0.5 seconds. The application hasn't been changed recently and it was working fine before; this issue appeared all of a sudden. Someone told me there was an earlier incident where all the clustered caches were suddenly deleted from NCache Manager, and we resolved it by setting basic values from a tutorial available online. But this issue never seems to get fully solved. Can anyone shed some light on what we can do to overcome this timeout issue?
This seems like an application/environment-related issue, where a working application is now showing slow fetch times while it worked fine previously. Also, if your console app is getting results in less than a second, that again suggests the issue is not on the NCache server end but is isolated to the application.
I would suggest reviewing what has changed in the application as a starting point. You can also profile your application to see which calls are taking more time now. NCache client-side Windows performance counters can also be reviewed to rule out whether it is slow because of NCache or due to some application-related issue.
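As a sketch of the kind of per-call profiling I mean, here is a minimal timing wrapper; the `cache` object and its `get` method are placeholders for whatever client API you call, not the actual NCache SDK:

```python
import time

def timed_get(cache, key, warn_after_ms=1000):
    """Time a single cache lookup and flag slow ones.

    `cache` is a placeholder for your cache client object;
    swap in the real call you want to profile.
    """
    start = time.perf_counter()
    value = cache.get(key)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > warn_after_ms:
        print(f"SLOW cache get for {key!r}: {elapsed_ms:.0f} ms")
    return value
```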
Moreover, caching objects that are huge in size is generally not recommended. You should always break your bigger objects into smaller ones and then cache those. This reduces network and storage overhead for your application. If you have to cache a bigger object, consider using compression.
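For illustration, a minimal sketch of compressing a payload before caching it, again with a placeholder `cache` client rather than the real NCache API:

```python
import json
import zlib

def put_compressed(cache, key, obj):
    """Serialize and compress a large object before caching it.

    `cache` is a placeholder client with set/get methods.
    """
    raw = json.dumps(obj).encode("utf-8")
    cache.set(key, zlib.compress(raw))

def get_compressed(cache, key):
    blob = cache.get(key)
    if blob is None:
        return None
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```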
NCache default settings are already tuned for optimum performance and should not slow things down. You should also check for a firewall between the client and the NCache server to rule out any environmental issues.
Given: each call to a BE module takes several seconds, even with an SSD drive. (A well-configured setup runs below 1 second for general BE tasks.)
What are likely bottlenecks?
How to check for them?
What options to speed up?
On purpose I don't give a specific configuration, but ask for a general checklist, so that the answer is suitable for many people as a first entry point.
General tips on performance tuning for TYPO3 can be found here: https://wiki.typo3.org/Performance_tuning
However, in my experience most general performance problems are due to one of a few reasons:
1. Bad/no caching. Usually this is a problem with one or more extensions (partly) disabling the cache. Try disabling all third-party extensions and re-enabling them one by one to see which causes the site to slow down the most. $GLOBALS['TSFE']->set_no_cache() will disable all caching, so you could search for that (see the scan sketch after this list). USER_INT and COA_INT objects in TypoScript also disable the cache for anything configured inside them.
2. A lot of data. Check the database for tables containing a lot of data. How much constitutes "a lot" depends on many factors, but generally anything below a million records shouldn't be too much of a problem, unless for example you run queries with things like LIKE '%...%' on fields containing a lot of data.
3. Not enough resources on the server. To fix this, add more memory and/or CPU cores to the server. Or, if it's a shared server, reduce the number of sites running on it.
4. Heavy traffic. No matter how many resources a server has, it will always have a limit on the number of requests it can process in a given time. If this is your problem, you will have to look into load balancing and caching servers. If you don't (normally) have a lot of visitors, high traffic can still be caused by robots crawling your site too quickly. These are usually easy to block by IP address in your firewall or webserver configuration.
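For point 1, a quick way to hunt for cache-disabling code is to scan the codebase for the patterns mentioned above. A minimal sketch (the file extensions and the typo3conf/ext path are assumptions about a typical installation):

```python
import os
import re

# Patterns from the list above that commonly disable the TYPO3 cache.
PATTERN = re.compile(r"set_no_cache\s*\(|USER_INT|COA_INT")

def scan(root):
    """Walk an extension directory and report cache-disabling lines."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".php", ".ts", ".typoscript", ".txt")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, start=1):
                    if PATTERN.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")

scan("typo3conf/ext")  # adjust to your extension directory
```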
A slow backend on a server without any other traffic (you're the only one who can access it) rules out 1 (which can only cause a slow backend if users are hitting the frontend and causing a high server load) and 4 (no other traffic).
One further aspect you could inspect: a lot of things are stored in the backend user record, for example the settings you used in the log module.
One setting which can consume a lot of memory (and time to serialize and deserialize) is the state of the page tree (which pages are expanded and which are not).
Cleaning the user settings could make the backend faster for this user.
If you have a large page tree and the user has to navigate through many pages, the effect will wear off again. Another drawback: you lose all settings, as there is still no selective cleaning.
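If you want to try that cleanup directly on the database, here is a minimal sketch, assuming a MySQL database, the standard be_users table and the pymysql package; note that it wipes all settings for that user, so back up first:

```python
import pymysql

# Reset the serialized user-settings blob ("uc") for one backend user.
# WARNING: this removes ALL of that user's settings, not only the
# page-tree state; there is no selective cleaning.
conn = pymysql.connect(host="localhost", user="typo3", password="...",
                       database="typo3")
with conn.cursor() as cur:
    cur.execute("UPDATE be_users SET uc = '' WHERE username = %s", ("editor",))
conn.commit()
conn.close()
```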
I cannot comment here, but I need to say: the TSFE object does absolutely nothing in the TYPO3 backend. The backend is always uncached. The TYPO3 backend is a standalone module for editing and maintaining the frontend output. There are tons of Google search results that ignore this fact.
Possible performance bottlenecks are poorly written extensions that do rendering or data processing. Hooks into core functions are usually no big deal, but rendering many elements for edit forms (especially in TYPO3's Fluid template engine) can cause performance problems.
The Extbase DBAL layer can also cause massive performance problems. The reason is that the database model knows nothing about indexes. It's simple but stupid. An SQL join on a big table of 2000+ records will delay the output perceptibly, depending on the data model.
Also, the TYPO3 backend does not really depend on the TypoScript configuration, but to control some output, or when TypoScript is loaded by extensions, the full parsing of the *.ts files is needed. And this parser is very slow.
If you want to speed things up, you need to know what goes wrong. The only way to debug this behaviour is to inspect the runtime with a PHP profiling tool like Xdebug, because the TYPO3 framework is very complex. It uses some kind of Doctrine framework and loads tons of files on every request, so a well-configured OPcache is a must.
The main reason the whole thing is slow is that it is poorly written. You can confirm that by inspecting the runtime.
In addition to what already has been said, put the runtime environment onto your checklist:
Memory:
If a heavy IDE and other tools are open at the same time, available memory can become an issue. To check the memory profile, you can start a tool that monitors the memory usage of the machine.
If virtualization is used, check the memory assigned to the box. Try whether assigning more memory improves behaviour.
If required and possible, give your machine more memory. This should not be a bugfix for poorly written code, though; bad code can blow up any amount of memory.
File access:
TYPO3 reads and writes thousands of files. If you work with a contemporary SSD, this is surprisingly fast. I measured this: loading all class files of TYPO3 takes just a fraction of a second (a small benchmark sketch follows this list).
However, this may look different if you do not work with a standard setup. Many factors may slow you down:
USB-Sticks as storage.
Memory cards as storage.
All kinds of external storage may be limited due to slow drivers.
Virtualization can become an issue. Again it's a question of drivers.
If in doubt, test: store your files and DB on a different drive and compare the behaviour.
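To put a number on file access, here is the kind of benchmark I mean: a minimal sketch that times reading every PHP file under a directory (the path is an assumption; point it at your own installation):

```python
import pathlib
import time

def benchmark_reads(root):
    """Read every .php file under `root` and report count and duration."""
    start = time.perf_counter()
    count = total_bytes = 0
    for path in pathlib.Path(root).rglob("*.php"):
        total_bytes += len(path.read_bytes())
        count += 1
    elapsed = time.perf_counter() - start
    print(f"read {count} files ({total_bytes / 1e6:.1f} MB) in {elapsed:.2f} s")

benchmark_reads("typo3/sysext")  # adjust to your installation
```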
Routing:
The database itself may be fast, but bad routing of your request may still slow you down. Think of firewalls, proxies etc., even on your local machine and especially if virtualization is used.
Database connection:
A fast database connection is crucial. If database access is slow, TYPO3 can't be fast.
Especially due to Extbase, TYPO3 often queries much more data than is really required, and more often than really required, because a lot of relations are resolved in the PHP layer instead of the DB layer itself. Loading data structures like the root line may cause a lot of ping-pong between the PHP and DB layers.
I can't give advice on how to measure your DB connection; you will have to ask your admin for that. What you can always do is test and compare with another DB from a completely different environment.
The speed of the database may also depend on the type of database itself. Typically you use MySQL/MariaDB, which should be fast. It also depends on the factors mentioned above: memory, file access and routing.
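One thing you can do yourself, without admin rights, is time the raw round trip with a trivial query. A minimal sketch, assuming a MySQL database and the pymysql package (credentials are placeholders):

```python
import time
import pymysql

conn = pymysql.connect(host="localhost", user="typo3", password="...",
                       database="typo3")
samples = []
with conn.cursor() as cur:
    for _ in range(100):
        start = time.perf_counter()
        cur.execute("SELECT 1")  # trivial query: measures the round trip
        cur.fetchone()
        samples.append((time.perf_counter() - start) * 1000)
samples.sort()
print(f"median round trip: {samples[len(samples) // 2]:.2f} ms")
conn.close()
```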
Strategy:
Even without being an admin and knowing all the performance tools, you can always exchange parts of your system and check whether matters improve. With this approach you can localize the culprit without being an expert. Once you have spotted the culprit, Google may help you find more information.
When it comes to a clean and performant setup of routing or virtualization, it's still the best idea to ask an experienced admin.
Summary
This is all in addition to what others have already pointed to.
What would be really helpful would be a BE plugin that analyses and measures the environment. Maybe there are some out there that I don't know of.
I recently launched my app for iPhone/Android running on an AppEngine backend. This is my first experience using AppEngine in production.
As I get more traffic, I am starting to experience serious latency issues. Currently minimum idle instances is 1 and max_pending_latency is 1s.
Yes, there is room for optimization on my side; however, I do not understand:
Why the latency is not correlated with requests/sec, traffic, memory usage, memcache usage, or anything else. I do not understand why there was no significant latency on Sep 21.
Why a call to memcache needs to be as slow as 500ms (usually it is 10 times faster). I am using NDB and 1GB of dedicated memcache. Increasing it to 5GB had no effect.
Is this simply how AppEngine works? I would like to get your insight.
Thanks
I forgot to update this... I remember the issue was caused by me creating datastore keys myself. Basically, keys that were not well distributed introduced a "hot tablet" problem. Once I stopped creating my own keys and let AppEngine create them, the issue seemed to be resolved.
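For anyone who hits the same problem, here is a minimal sketch of the difference, using the Python NDB API; the Photo model and the sequence counter are made up for illustration:

```python
from google.appengine.ext import ndb

class Photo(ndb.Model):  # hypothetical model, for illustration only
    owner = ndb.StringProperty()

# Anti-pattern: monotonically increasing key names cluster writes
# onto a single Bigtable tablet (the "hot tablet" problem).
seq = 12345  # imagine a counter incremented on every upload
Photo(id="photo-%010d" % seq, owner="alice").put()

# Better: omit the id and let the datastore allocate a scattered one,
# which spreads the write load across tablets.
Photo(owner="alice").put()
```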
We experienced very long deserialization times when we stored a lot of entities under the same memcache key. It can take a long time if you store a big array of entities with a lot of structured properties.
You cannot store an object bigger than 1 MB under a single cache key. You can use Titan for App Engine to split your cache key across several other cache keys using its sharded memcache. It's transparent.
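If you would rather shard by hand instead of pulling in Titan, here is a minimal sketch against the App Engine Python memcache API; the key naming scheme and the chunk size are my own assumptions:

```python
import pickle
from google.appengine.api import memcache

CHUNK = 1000000  # stay safely under the ~1 MB per-value limit

def set_sharded(key, obj):
    """Pickle `obj` and spread it over as many keys as needed."""
    blob = pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
    chunks = {"%s:%d" % (key, i): blob[i:i + CHUNK]
              for i in range(0, len(blob), CHUNK)}
    memcache.set(key + ":count", len(chunks))
    memcache.set_multi(chunks)

def get_sharded(key):
    count = memcache.get(key + ":count")
    if count is None:
        return None
    keys = ["%s:%d" % (key, i * CHUNK) for i in range(count)]
    parts = memcache.get_multi(keys)
    if len(parts) != count:
        return None  # a shard was evicted; treat it as a cache miss
    return pickle.loads(b"".join(parts[k] for k in keys))
```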
I hope it will help you.
We are using AppFabric 1.1 for caching in a multi-service application.
The base functionality works just fine, but now we're fine-tuning and we've stumbled across some memory issues.
We saw that AppFabric uses more and more memory as time goes by, yet we're supposed to have a finite volume of stored objects.
I spent some time reading online and checking my code, but I couldn't find out why. So before engaging in a war against a "memory leak" (I mean storage of data that is not referenced, or not referenced anymore), I'd like to know whether this is standard behavior in AppFabric.
Could the massive use of GetLock/PutUnlock and heavy reading and writing cause the memory to grow up to the limit allowed for the cache cluster?
We used different size limits (1024, 1536 and 2048 MB) and different watermarks, but it still grows up to either the maximum size limit or the high watermark (we tried 75, 80 and 90).
If you need anymore information please tell me.
Thanks in advance.
EDIT
We decided that we had to abandon AppFabric, as there seems to be no valid workaround for this issue. Instead we'll use a Windows Azure-hosted Redis cache. Hopefully Redis will have everything we need.
If a fix or a solution is known, I'd still like to hear about it, if anyone has it.
Thanks.
I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic. Maybe a DoS attack or exceptionally high traffic. The interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought about this problem. I have almost no idea how running a server/website works, so if someone could highlight a few approaches, that would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said that when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should figure out at this point is what the difference is between you and the user who reports that your site is slow. It might be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted at the same webhoster? If so, this could indicate a network problem. Normally it could also indicate a resource problem on the webserver, but in that case the site would be slow for you as well.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use antivirus or similar software that might cause problems.
This is a great tool to find the speed of web pages and tells you what makes it slow: https://developers.google.com/speed/pagespeed/insights
I think one important thing missing from the above answers is server location, which can play a vital role in web performance.
When someone says it is taking a long time to open a web page, that means high latency. High latency can be caused by server location.
Let's assume that you, as the owner of the web page, and the server are co-located, so you will see low latency.
But if the client is across the border, the latency will increase drastically, and hence slow performance.
Another factor is caching, which drastically affects latency.
Taking Facebook as an example: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on hard disk, so it takes longer to load an older photo or post.
So the user might have complained because they were trying to load some cold data.
I can think of these few reasons (first two are already mentioned above):
High Latency due to location of client
Server memory might need to be increased
The number of service calls made from the page.
If a service was down at the time of the complaint, it could prevent the page from loading.
The server load might have been too high at the time of the poor experience. The server might need more resources (e.g. adding another server/web server to the cluster).
Check whether any background job was running on the server at that time.
It is important to check the logs and the schedules of batch jobs to determine everything that was running at the time.
Hope this helps.
Normally the user takes the page loading time as the measure of whether the site is slow. But if you really want to know what is taking the most time, you can open the browser's debugger by pressing F12. If your browser is Chrome, click on Network and see what calls your application is making and which take the longest. If you are using Firefox, you need to install Firebug; if you have that, then again press F12 and click on Net.
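If you would rather measure from a script than from the browser, here is a minimal sketch using Python and the third-party requests package (the URL is a placeholder); it separates time-to-first-response from total download time:

```python
import time
import requests

url = "https://example.com/"  # placeholder: the page reported as slow
start = time.perf_counter()
resp = requests.get(url)
total = time.perf_counter() - start

# resp.elapsed covers the time until the response started arriving;
# the remainder is body download plus local processing.
print(f"time to first response: {resp.elapsed.total_seconds():.2f} s")
print(f"total fetch time:       {total:.2f} s ({len(resp.content)} bytes)")
```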
One reason could be that the user's role is different from your role. You might have, say, administrator privileges (something like a superuser role), and the code might simply allow everything for such a role, meaning it does not really do much conditional checking of what is allowed or not. Sometimes it is considerable overhead to fetch all the privileges of a user and evaluate the permission checks; of course it depends on how the authorization is implemented. That means the page might be really slow only for specific roles. Hence, you should find out the role of the user and see whether that is the reason.
Obviously an issue with the connection of the person connecting to your site, OR it's possible it was a temporary issue and by the time you checked your site, everything was dandy. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is usually a memory issue, and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Though your question is quite clear, website optimisation is a very extensive subject.
The majority of the popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code structure of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem a bit of a simple answer, but: the less code you run at the server and the more code you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application which represents W3C best practices and runs the minimum amount of code at the server and the maximum amount of code at the client: http://developersfound.com/W3C_MVC_EX.zip. This codebase is also MVC compliant.
This codebase comes with a MySQL database dump, PHP and client-side code. To see this code in action you will need to restore the SQL dump to a MySQL instance (the SQL dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), setting the user to have execute permissions on the schema.
If you contrast this codebase against all of the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Also, most popular Node frameworks run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly, unless the framework supports AJAX dumps such as Yii 2.
Two of the most important rules to follow for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good excuse, such as SEO), and religiously never write code to run at the server if it can be run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits, they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give much control over what code runs at the client or the server and are very inefficient, but you can get prototypes up and running quicker with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional bit of advice for optimisation: avoid pass-through queries and triggers and instead opt for stored procedures. Historically, stored procedures hadn't been invented when MVC was first presented as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
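For illustration, a minimal sketch of calling a stored procedure from application code instead of sending pass-through query text; it assumes MySQL, the pymysql package, and a made-up procedure name:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app", password="...",
                       database="shop")
with conn.cursor() as cur:
    # get_orders_for_customer is a hypothetical stored procedure:
    # the SQL lives in the database tier, not in the application code.
    cur.callproc("get_orders_for_customer", (42,))
    for row in cur.fetchall():
        print(row)
conn.close()
```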
Hope this advice helps.
My website is http://secretpassagesbooks.com/. It runs on the latest version of WordPress and is hosted via GoDaddy on a shared web server.
My website takes anywhere from ten seconds to one minute to load, and I don't understand why. I have tested in IE, Firefox, and Chrome, and the page speed is the same. I performed several speed tests at various online speed-test sites and got an average load time of 5-6 seconds. Yet when I click on a link to my URL or enter it directly, it takes in excess of 30 seconds (sometimes more than a minute) to load the index page.
Here is what I have done so far to troubleshoot the issue:
I have the YSlow and Page Speed extensions installed in Firebug
The YSlow test gives me a "Grade A - Overall performance score 90"
My Page Speed score is 94/100
I have the W3Cache WordPress plugin installed and am using page, browser, and database object caching
I've tried minifying as much CSS and JavaScript as possible
The site is using HTTP compression
Is there anything more I can do with this design, or is it a case of my shared web server being overloaded? Thanks in advance for all your help.
YSlow etc. detect problems in the HTML, JavaScript and CSS parts, and these are probably OK. It looks like your hosting is to blame.
If those plug-in results are correct (and I've no reason to doubt they are), then it's most likely a case of your virtual server simply being overloaded.
I presume you have no such issues running an identical site in a "local" production environment either, although you might want to try this to confirm it if you've not already done so.
Incidentally, a tell-tale sign of an overloaded VPS/shared hosting solution is if the first page load is incredibly slow but subsequent loads are "normal"; a common reason is that your "dedicated" sandbox is being woken from a sleep/low-resource state. (This also seems to be the case as far as your site is concerned.) As such, it's possible (I don't know the details of this server, such as whether you have a "guaranteed" resource level for CPU, memory, etc.) that other sites on this particular server are using more than their fair share of bandwidth until your site kicks in.
Based on some tests with a tool that I built (the Performance Grader at JoomlaPerformance.com), wow, is it bad...
Notice that the HTML took approximately 21.83 seconds to download (from the initial request to the last object being downloaded). Not to mention that the page is nearly 300 KB (which is fairly large for having only 7 images)...
This is where the issue is. Notice that the connection and DNS phases are fine, but the generation phase is really, REALLY slow. That's where your problem is: it's server-side. So you need to debug why it's slow. Some areas to look at are the SQL queries being executed (and whether they are slow), any slow plugins, etc. Try disabling things one at a time to see whether each makes a measurable difference.
My "hunch" is that your database is either overloaded or your queries are very expensive. So, in short, you can try another host to see if that helps (which is the solution more often than you'd think)...
As most of you pointed out, the issue seemed to be with the server. I contacted GoDaddy and explained the situation. It turns out that my site was hosted on one of their legacy servers, which was most likely overloaded. They switched me over to one of their grid servers (at no cost) and now everything loads quickly. Thanks for all the responses. I spent a lot of time tweaking the design, removing plugins one by one, reducing as many HTTP requests as possible, and generally going crazy trying to figure out how best to optimize my site. After a few days and a lot of tests, I could not accept that the problem was client-side, especially after all the optimization tests I ran showed my site was OK. So good to have it settled... for now, at least.
GoDaddy's web hosting is the bottleneck for your website; you should probably go for a VPS if you have an advanced website with loads of lookups!