CodeIgniter APC cache performance

Today, I installed APC on an Ubuntu server (virtual machine) to test it with my CodeIgniter code. I enabled profiling. All it did was reduce the time from about 0.4 sec to 0.32 sec.
APC disabled:
Loading Time: Base Classes 0.0444
Controller Execution Time 0.3630
Total Execution Time 0.4079
APC enabled:
Loading Time: Base Classes 0.0439
Controller Execution Time 0.2822
Total Execution Time 0.3265
Should I expect only that much from APC, or is there something more I should gain with better configuration?
Should I change anything in my CodeIgniter code?
As far as I know, APC is an opcode (bytecode) cache, not a page cache and not a cache for database rows, yet the author of the user guide strangely seems to be caching pages with it. I also found no user guide on how to run CodeIgniter alongside APC.
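For what it's worth, the opcode cache needs no CodeIgniter changes at all; it only speeds up script compilation. If you also want to cache data, CodeIgniter's Cache driver ships an apc adapter. A minimal sketch of a controller method (the key name and the query are made up for illustration):

// Somewhere inside a CodeIgniter controller method.
// Load the cache driver with APC as the adapter and files as a fallback.
$this->load->driver('cache', array('adapter' => 'apc', 'backup' => 'file'));

// Try the cache first; on a miss, rebuild and store the result for 5 minutes.
$rows = $this->cache->get('front_page_rows');           // hypothetical key
if ($rows === FALSE) {
    $rows = $this->db->get('posts')->result();          // example of an expensive query
    $this->cache->save('front_page_rows', $rows, 300);
}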

Related

Sitecore page load slowness

I'm using Sitecore instance 9.1, Solr 7.2.1, and SXA 1.8.
I have deployed the environment on Azure and while monitoring incoming requests (to CD instance), I've noticed slowness in loading some pages at specific times.
I've explored App Insights and found behavior I can't explain: the request takes 28.7 seconds, while its breakdown shows executions of only milliseconds. How is that possible, and how can I explain what happens during the extra 28 seconds on the App Service?
I've also checked the profiler, and it shows the thread taking only 1042.48 ms. How is that possible?
This is an intermittent issue that happens during the day; regular requests are served within 3 to 4 seconds.
I noticed that Azure often shows a profile trace for a "similar", but completely different request when clicking from the End-to-end transaction view. You can check this by comparing the timestamp and URL of the profile trace and the transaction you clicked from.
For example, I see a transaction logged at 8:58:39 PM, 2021-09-25 with 9.1 s response time:
However, when I click the profile trace icon, Azure takes me to a trace that was captured 10 minutes earlier, at 08:49:20 PM, 2021-09-25 and took only 121.64 ms:
So, if the issue you experience is intermittent and you cannot replicate it easily, try looking at the profile traces with the Slowest wall clock time by going to Application Insights → Performance → Drill into profile traces:
This will show you the worst-performing requests captured by the profiler at the top of the list:
In order to figure out why it is slow, you'll need to understand what happens internally, e.g.:
How is the wall clock time spent while processing the request?
Are there any locks internally?
The source of that data is dynamic profiling, which Azure can do on demand.
The IIS stats report would show you the slowest requests, so you could look at the Thread Time distribution to see where those 28 seconds are spent:
In Sitecore, when the application starts, the initial prefetch configuration allows the prefetch caches to be pre-populated. Pre-heated prefetch caches help reduce the processing time of incoming requests, but loading them takes time during the initial stage.
A Sitecore XP instance can also take too long to load because of a performance issue in the CatalogRepository.GetCatalogItems method; this will be fixed in upcoming updates.
See the Sitecore Knowledge Base.
In Sitecore XP 9.0 the initial prefetch configuration was revised. The prefetch cache for the core database was configured to include items that are used to render the Sitecore Client interface.
The Sitecore Client interface is not used on Content Delivery instances, so disabling the initial prefetch configuration for the Core database helps avoid excessive resource consumption on the SQL Server hosting the Core database.
Change the configuration of the Core database in the \App_Config\Sitecore.config file:
Refer to the Sitecore Knowledge Base.

Yii2 basic application page - load testing with apache jmeter freezes server

I configured the Yii2 basic application template on a Windows server (dual-core processor, 8 GB RAM), with no extra code written beyond installing it. When load testing with Apache JMeter using 100 concurrent users over 10 minutes, CPU usage hits 99% and the server freezes. A plain static PHP page placed outside the framework works without any issues under the same test and takes around 2-3 percent CPU utilisation.
If you're allowing 100 concurrent PHP processes to run on a 2-core CPU, this is more likely an issue with your server configuration: each process gets less than 1% of your CPU, which makes everything really slow. You should limit the number of PHP processes (in the php-fpm config, for example) and queue them at the webserver level; it is better to process 20 concurrent requests at the same time and do it fast than to process 100 and do it slowly.
You should start with the official guide's section on Yii performance tuning.
Definitely disable debug mode.
Use a more efficient backend for cache (like APCu or Redis/Memcached) and for sessions.
Disable the Yii autoloader and use the optimized autoloader from Composer: https://www.yiiframework.com/doc/guide/2.0/en/concept-autoloading#using-other-autoloaders
You may also look at the application prepared for the basic benchmark and compare configs.
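To make the first two suggestions concrete, here is roughly what they look like in the basic application template (file paths follow the default skeleton; adjust to your setup):

// web/index.php - turn debug mode off for load tests / production
defined('YII_DEBUG') or define('YII_DEBUG', false);
defined('YII_ENV') or define('YII_ENV', 'prod');

// config/web.php - replace the default FileCache with APCu
'components' => [
    'cache' => [
        'class' => 'yii\caching\ApcCache',
        'useApcu' => true,   // requires the apcu PHP extension
    ],
    // ...
],

With debug off, the debug toolbar and verbose logging no longer run on every request, which alone makes a noticeable difference under load.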
As per Patrick's comment, you're comparing a simple PHP page with a much more complex framework.
There are tons of possible reasons for your issues:
Misconfiguration of Yii2
An unrealistic JMeter test (you don't give any information about it)
An issue in Yii2 itself

Remote API server slowness

On our server we reach api.twitter.com and use Twitter's REST API. Until 3 days ago we had no problems, but since then we have had a slowness problem.
According to the Twitter API status page there is no problem, but we are seeing very big delays.
We make 350-400 requests per minute.
Before, we had a performance of 600-700 ms per request. (Snapshot image)
Now it has become 3600-4000 ms per request. (Snapshot image)
It doesn't look like temporary slowness, because it has persisted for nearly 3 days.
What I checked:
- I didn't make any big code changes in our repo. Even when we make a minimal request with just one line of code, we still see this slowness.
- I checked server speed with Ookla's Speedtest. It looks good: 800 Mb/s download, 250 Mb/s upload.
- We don't have any CPU, RAM or disk problems. CPU average is 30%, RAM is 50% used, disk IO is around 4-5%.
So what would the probable causes be?
I can check them and update the question.
(Centos 6.5, PHP 5.4.36, Nginx 1.6, Apache/2.2.15, Apache run PHP as PHP module, XCache 3.2.0)
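One way to narrow this down (a suggested diagnostic, not something from the original post) is to split a single request into its DNS, connect, TLS and transfer phases with cURL; any Twitter endpoint will do, even one that only returns an auth error:

<?php
// probe.php - time the phases of one HTTPS request to api.twitter.com
$ch = curl_init('https://api.twitter.com/1.1/application/rate_limit_status.json');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);

printf("DNS lookup:    %.3f s\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME));
printf("TCP connect:   %.3f s\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME));
printf("TLS handshake: %.3f s\n", curl_getinfo($ch, CURLINFO_APPCONNECT_TIME));
printf("First byte:    %.3f s\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
printf("Total:         %.3f s\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));
curl_close($ch);

If DNS or the TLS handshake dominates, the problem is on the network/resolver side; if most of the time sits in "first byte", the delay is on the remote end.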

Performance of memcache on a shared server

Lately I've been experimenting with increasing performance on my blog, and not just one-click fixes but also looking at code in addition to other things like CDN, cache, etc.
I talked to my host about installing memcache so I can enable it in W3 Total Cache and he seems to think it will actually hinder my site as it will instantaneously max out my RAM usage (which is 1GB).
Do you think he is accurate, and should I try it anyway? My blog and forum (MyBB) get a combined 200,000 pageviews a month.
In fact, with 200,000 pageviews a month I would move away from a 'shared' host and buy a VPS or dedicated server or something. Memcache(d) is a good tool indeed, but there are lots of other ways you can get better performance.
Memcached is good if you know how to use it correctly (the W3 Total Cache memcached integration doesn't really do the job).
As a performance engineer I think a lot about speed, but also about server load. I work a lot with WordPress sites, and the way I push performance to the maximum on my servers is to generate static HTML pages of my WordPress sites; this results in zero or minimal access to the PHP handler itself, which increases performance a lot.
What you can then do is add a caching proxy in front of the web server, e.g. Varnish, which caches the results, so you never touch the web server either.
What it does is this: when a client requests your page, the proxy serves the already processed page directly from memory, which is very fast. Your cached files then have a TTL, which can be as low as the default of 50 seconds. 50 seconds doesn't sound like a lot, but with 200k pageviews a month that works out to roughly 4.5 pageviews per minute if traffic were spread evenly, and that figure doesn't account for peak hours.
When you do one page view, there is a lot of processing going on:
making the request to the web server, starting the PHP process, processing data, grabbing data from the DB, processing the data, rendering the PHP page, and so on. If we only have to do all of this for a few requests, it speeds up performance considerably.
Often you should be able to generate HTML files of your forum too, regenerated every 1-2 minutes when there is a request for the file; that way only 1 request gets fully processed instead of 4-9 (if not more).
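As a rough sketch of that "render once, serve static HTML" idea (the cache directory, the 60-second TTL and the wrapped index.php are all illustrative; real plugins such as WP Super Cache do this more carefully):

<?php
// cache.php - naive full-page cache in front of the real front controller
$cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
$ttl = 60;  // regenerate the page at most once a minute

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile);        // serve the pre-rendered page: no DB, no templating
    exit;
}

ob_start();
require __DIR__ . '/index.php';  // run the normal, expensive PHP application
file_put_contents($cacheFile, ob_get_contents(), LOCK_EX);
ob_end_flush();                  // send the freshly rendered page to the client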
You can limit the amount of memory that memcached uses. If the memory is maxed out, the oldest entries are pruned. On CentOS/Debian there is /etc/default/memcached, and you can set the maximum memory with the -m flag.
In my experience 64 MB or even 32 MB of memcached memory is enough for WordPress and makes a huge difference. Be sure not to cache whole pages (that fills the cache pretty fast); instead use memcache for the WordPress Object Cache.
For general performance: make sure you have a recent PHP version (5.3+) and have APC installed. For database queries I would skip W3TC and go directly for the MySQL query cache.
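On the object-cache point: WordPress exposes the wp_cache_* functions, and with a persistent backend (for example the memcached object-cache drop-in) the cached values survive between requests. A small sketch, with a made-up key, group and query:

<?php
// In a theme or plugin: cache an expensive query result for 5 minutes.
$popular = wp_cache_get('popular_posts', 'myblog');    // hypothetical key/group
if (false === $popular) {
    $popular = new WP_Query(array(
        'posts_per_page' => 10,
        'orderby'        => 'comment_count',
    ));
    wp_cache_set('popular_posts', $popular, 'myblog', 300);
}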

My Drupal site uses approx 30mb per page (node & user profiles) load. Acceptable or not?

I am using Drupal, and my website uses approximately 30 MB per page load for nodes and user profiles. My website has around 150 contributed modules in addition to a few optional core modules, but most of them are small and installed to improve the user experience.
My PHP memory limit is 128 MB.
Is 30 MB per page acceptable? And how many page loads can easily be handled within 128 MB?
Any idea?
Honestly, at 30MB your app is just sipping on memory. The PHP memory limits are set pretty low.
As far as how many "page loads can be handled by 128 MB" of memory, that's not really how it works. When a request comes in, Apache (or whatever server you're using) hands the request to mod_php or FCGI, and your PHP code is interpreted, compiled, run, and then exits. The "application" doesn't act like a daemon waiting for requests to come in, so the memory it consumes is used for the duration of the request and then released for use by other requests/processes.
That 128 MB limit is per request. That means that, as long as you have enough memory (and Apache child processes, etc.), you can handle additional requests. If you want to see how your application performs under load, check out ApacheBench (ab).
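If you want actual numbers instead of a guess, a tiny shutdown hook can log the peak memory of each request (generic PHP, not a Drupal module; where you include it and the log destination are up to you):

<?php
// Log the peak memory used by every request, e.g. from an include loaded early on.
register_shutdown_function(function () {
    $peakMb = memory_get_peak_usage(true) / 1048576;   // real allocated memory, in MB
    error_log(sprintf('%s peaked at %.1f MB', $_SERVER['REQUEST_URI'], $peakMb));
});

Running this while hitting the site with ApacheBench (e.g. ab -n 500 -c 10 http://example.com/) shows both throughput and the per-request footprint.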
