Phalcon Volt gets really slow after chaining different models - performance

For some reporting pages I have to use expressions such as
{{ Model1.Model2.Model3.name }}
inside for loops. I know it is not the best way (maybe the worst), but things happened and now I have to figure out some way to make this load faster, because although there are only 300 rows it takes nearly 10 seconds to load.
My question is: how can I cache some of these results, which are not actually queries on the backend? Or would you suggest another way to make the page load faster?

Have you tried using a different strategy to get the model metadata? If you didn't set anything up in your config file, then every time you make a query Phalcon must first query the database to learn the "metadata" of the table (columns, column types, nullable, etc.).
You can change your strategy to annotations, or at least cache the table metadata.
Please check the Phalcon documentation.
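For reference, a minimal sketch of what a cached metadata setup can look like (the cache directory, the files-based cache, and the annotations strategy below are illustrative choices, not the only ones; see the documentation for the options that fit your setup):

<?php
// Register the metadata service in the DI container ($di is assumed to exist).
use Phalcon\Mvc\Model\MetaData\Files as FilesMetaData;
use Phalcon\Mvc\Model\MetaData\Strategy\Annotations as AnnotationsStrategy;

$di->setShared('modelsMetadata', function () {
    // Persist table metadata to files so it is not re-read from the DB on every request
    $metaData = new FilesMetaData([
        'metaDataDir' => __DIR__ . '/../cache/metadata/',
    ]);

    // Read the metadata from model annotations instead of querying the database
    $metaData->setStrategy(new AnnotationsStrategy());

    return $metaData;
});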

Tabular Model Not Caching Results

Can anyone point me in the direction of how to troubleshoot why a Tabular model that I have built does not seem to want to cache query results?
It is my understanding that MDX queries against a Tabular model are cached, however with our model they never seem to be, and I can't figure out why.
My best guess is that it's memory pressure, and the system is clearing down the RAM, but even that is a guess.
Are there any counters, DMVs, or other Perfmon stats I can use to actually see what is going on and check?
Thanks.
Plenty of places to look, but I'd recommend starting with a Profiler/xEvent trace. Below is an example of two runs of the same MDX query.
The first run is on a cold cache...
The second run is on a warm cache, and you can see that it is resolving the query from cache...
This is much easier to see if you can isolate the query on a non-production server (e.g. a test/dev environment). There are quite a few reasons why a particular query may not be taking advantage of the cache, but you first need to confirm that it is not using the cache.

How to decrease the startup time of this LINQ-EF query? Precompiled EF Queries?

I am using MVC3, .NET4.5, C#, EF5.0, MSSQL2008 R2
My web application can take between 30 and 60 seconds to warm up, i.e. to get through the first page load. Subsequent page loads are very quick.
I have done a little more analysis, using DotTrace.
I have discovered that some of my LINQ queries, particularly the .Count() and .Any() ones, take a long time to execute the first time, e.g.:
if (!Queryable.Any<RPRC_Product>((IQueryable<RPRC_Product>) this.db.Class.OfType<RPRC_Product>(), (Expression<Func<RPRC_Product, bool>>) (c => c.ReportId == (int?) myReportId)))
takes around 12 seconds on first use.
Can you provide pointers on how I can get these times down? I have heard about precompiled EF queries.
I have a feeling the answer lies in using precompilation rather than altering this specific query.
Many thanks in advance
EDIT
I have just read up on EF5's auto-compile feature, so the second time round compiled queries are served from the cache; the first time round, compilation to EF's intermediate form is still required. I have also read up on pre-generation of views, which may help generally as well?
It is exactly as you said - you do not have to worry about compiling queries, as they are cached automatically by EF. Pre-generated views might help you, or they might not: the automatic tools for scaffolding them do not generate all of the required views, and the missing ones still need to be created at run time. When I developed my solution and ran into the same problem, the only thing that helped was simplifying the query. It turns out that if the first query to run is very complex and involves complicated joins, EF needs to generate many views to execute it. So I simplified my queries: instead of loading whole joined entities, I loaded only ids (grouped, filtered, or whatever), and then, when I needed the individual entities, I loaded them by id one by one. This allowed me to avoid the long execution time of my first query.

Does Wordpress load-cache functions such as wp_get_nav_menu_object?

I've been asking myself this question for quite a while. Maybe someone has already done some digging (or is involved in WP) to know the answer.
I'm talking about storing objects from WP-functions in PHP variables, for the duration of a page load, e.g. to avoid having to query the database twice for the same result set.
I don't mean caching in the sense of pre-rendering dynamic pages and saving them in HTML format for faster retrieval.
Quite a few "template tags" (Wordpress functions) may be used multiple times in a theme during one page load. When a theme or plugin calls such a function, does WP run a database query every time to retrieve the necessary data, and does it parse this data every time to return the desired object?
Or does the function store its result in a PHP variable the first time it runs, and check whether that result already exists before querying the database or parsing again?
Examples include:
wp_get_nav_menu_object()
wp_get_nav_menu_items()
wp_list_categories()
wp_tag_cloud()
wp_list_authors()
...but also such important functions as bloginfo() or wp_nav_menu().
Of course, it wouldn't make much sense to cache any and all queries like post-related ones. But for the above examples (there are more), I believe it would.
So far, I've been caching these generic functions myself when a theme required the same function to be called more than once on a page, by writing my own functions or classes and caching in global or static variables. I don't see why I should add to the server load by running the exact same generic query more than a single time.
Does this sort of caching already exist in Wordpress?
Yes, for some queries and functions. See WP Object Cache. The relevant functions are wp_cache_get, wp_cache_set, wp_cache_add, and wp_cache_delete. You can find these functions used in many places throughout the WordPress code to do exactly what you are describing.
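As a minimal sketch of the pattern those functions enable (the function name, cache key, and cache group below are hypothetical, chosen only for illustration):

function mytheme_get_menu_object( $menu_name ) {
    $cache_key = 'mytheme_menu_' . $menu_name; // hypothetical key
    $menu      = wp_cache_get( $cache_key, 'mytheme' );

    if ( false === $menu ) {
        // Not cached yet for this page load: query once and store the result.
        $menu = wp_get_nav_menu_object( $menu_name );
        wp_cache_set( $cache_key, $menu, 'mytheme' );
    }

    return $menu;
}

Note that by default the object cache is non-persistent, i.e. it only lives for the duration of a single page load (which is exactly the scope you describe); with a persistent object cache drop-in (Memcached, Redis, etc.) the same calls survive across requests.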

efficient serverside autocomplete

First of all, I know:
Premature optimization is the root of all evil
But I think a badly implemented autocomplete can really blow up your site.
I would like to know if there are any libraries out there which can do autocomplete efficiently (server-side), preferably fitting into RAM for best performance. So no browser-side JavaScript autocomplete (YUI/jQuery/Dojo); there are enough topics about that on Stack Overflow. But I could not find a good thread about server-side autocomplete on Stack Overflow (maybe I did not look hard enough).
For example autocomplete names:
names:[alfred, miathe, .., ..]
What I can think of:
A simple SQL LIKE query, for example: SELECT name FROM users WHERE name LIKE 'al%'.
I think this implementation will blow up with a lot of simultaneous users or a large data set, but maybe I am wrong, so numbers (of what it can handle) would be cool.
Using something like Solr's terms component, for example: http://localhost:8983/solr/terms?terms.fl=name&terms.sort=index&terms.prefix=al&wt=json&omitHeader=true.
I don't know the performance of this, so users with big sites, please tell me.
Maybe something like an in-memory Redis trie, which I also haven't tested the performance of.
I also read in this thread about how to implement this in Java (Lucene and a library created by shilad).
What I would like to hear about are implementations used by real sites and numbers for how well they can handle load, preferably with:
A link to the implementation or code.
Numbers to which you know it can scale.
It would be nice if it could be accessed over HTTP or sockets.
Many thanks,
Alfred
Optimising for Auto-complete
Unfortunately, the resolution of this issue will depend heavily on the data you are hoping to query.
LIKE queries will not put too much strain on your database, as long as you spend time using 'EXPLAIN' or the profiler to show you how the query optimiser plans to perform your query.
Some basics to keep in mind:
Indexes: Ensure that you have indexes set up. (Yes, in many cases LIKE does use indexes. There is an excellent article on the topic at myitforum: SQL Performance - Indexes and the LIKE clause.)
Joins: Ensure your JOINs are in place and are optimized by the query planner. SQL Server Profiler can help with this. Look out for full index or full table scans.
Auto-complete sub-sets
Auto-complete queries are a special case, in that they usually work as ever-decreasing sub-sets.
'name' LIKE 'a%' (may return 10000 records)
'name' LIKE 'al%' (may return 500 records)
'name' LIKE 'ala%' (may return 75 records)
'name' LIKE 'alan%' (may return 20 records)
If you return the entire result set for query 1, then there is no need to hit the database again for the following result sets, as they are sub-sets of your original query.
Depending on your data, this may open a further opportunity for optimisation.
I won't be able to comply with all of your requirements, and obviously the numbers for scale will depend on hardware, the size of the DB, the architecture of the app, and several other items; you must test it yourself.
But I will tell you the method I've used with success:
Use simple SQL, for example: SELECT TOP 100 name FROM users WHERE name LIKE 'al%', using TOP 100 to limit the number of results.
Cache the results and maintain a list of the terms that are cached.
When a new request comes in, first check the list to see whether you have the term (or a prefix of the term) cached.
Keep in mind that your cached results are limited (capped at the TOP 100), so you may still need to run a SQL query if the term remains valid at the end of the cached result set (I mean valid in the sense that the last cached result still matches the term, so further rows may be missing).
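To make the steps concrete, here is a minimal PHP sketch of that approach (the question is language-agnostic; PDO, the users table, and an in-process array cache are assumptions made purely for illustration):

<?php
// Prefix-cache autocomplete sketch: reuse a previously cached result set
// when the new term extends an already-cached prefix, otherwise hit the DB.
class AutocompleteCache
{
    private const LIMIT = 100;

    /** @var array<string, string[]> cached term => list of matching names */
    private array $cache = [];

    public function __construct(private PDO $db) {}

    /** @return string[] */
    public function suggest(string $term): array
    {
        foreach ($this->cache as $cachedTerm => $names) {
            if (str_starts_with($term, $cachedTerm) && count($names) < self::LIMIT) {
                // The cached set for the shorter prefix was complete (not truncated),
                // so the answer for the longer term is just a subset of it.
                return array_values(array_filter(
                    $names,
                    fn (string $n): bool => stripos($n, $term) === 0
                ));
            }
        }

        // Cache miss, or the cached set was truncated at the limit: query the DB.
        $stmt = $this->db->prepare(
            'SELECT TOP 100 name FROM users WHERE name LIKE ? ORDER BY name'
        );
        $stmt->execute([$term . '%']);
        $names = $stmt->fetchAll(PDO::FETCH_COLUMN);

        $this->cache[$term] = $names;

        return $names;
    }
}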
Hope it helps.
Using SQL versus Solr's terms component is really not a comparison. At their core they solve the problem the same way by making an index and then making simple calls to it.
What I would want to know is what you are trying to autocomplete.
Ultimately, the easiest and most surefire way to scale a system is to make a simple solution and then just scale it by replicating data. Trying to cache calls or predict results just makes things complicated and doesn't get to the root of the problem (i.e. you can only take them so far, for example if each request misses the cache).
Perhaps a little more info about how your data is structured and how you want to see it extracted would be helpful.

Can browsers [especially IE6] handle big JSON objects?

I am returning a big JSON object [5000 records and 10 elements per record]
from the controller [ASP.NET MVC] using jQuery and an Ajax POST. Until now I was dealing with just 20 records [for testing] and it works fine, but in production there are 5000 records, so I am wondering whether the browser can handle that amount of data, especially IE6. I have to display all 5000 records on a single page. I am confused, and I don't have that much data to test with right now. I need your expert advice on whether or not to use a jQuery JSON Ajax POST to return a huge amount of data. Thank you.
You should really page this data. You might want to have a look at this:
PagedList
A simple way to find out, and probably good practice anyway, is to make sure you have that data to test with; it will make it much easier to choose between the different options. Paging is one option, but there are others. You may be surprised how quickly modern browsers can load that amount of data.
It should be easy enough to write a VB.NET page that generates all the data you need using random data. This is really essential if you want to be able to investigate what to do.
