Can browsers [especially IE6] handle a big JSON object? - ajax

I am returning a big JSON object [5000 records and 10 elements per record]
from the controller [ASP.NET MVC] using jQuery and an AJAX POST. Until now I have been dealing with just 20 records [testing] and it works fine. But in production there will be 5000 records, so I am wondering whether the browser can handle that amount of data, especially IE6. I have to display all 5000 records on a single page. I am confused, and I don't have that much data to test with right now. I need your expert advice on whether to use a jQuery JSON AJAX POST to return a huge amount of data. Thank you.

You should really page this data. You might want to have a look at this:
PagedList

A simple way to find out, and probably good practice anyway, is to make sure you have that much data to test with; it will make it much easier to choose between the different options. Paging is one option, but there are others, and you may be surprised how quickly modern browsers can load that amount of data.
It should be easy enough to write a VB.NET page that generates all the data you need using random values. This is really essential if you want to be able to investigate what to do; a rough client-side stand-in is sketched below.
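Not the VB.NET generator the answer suggests, but a quick client-side stand-in: a minimal sketch that fabricates 5000 records in the browser and times a naive render, so you can see how your display code copes. The element id, field names, and record shape are all invented for illustration.

```javascript
// Hedged sketch: generate 5000 fake records in the browser to stress-test rendering.
// The record shape (id, field1, ...) and the #grid element are made up.
function makeTestData(rows, cols) {
    var data = [];
    for (var i = 0; i < rows; i++) {
        var record = { id: i };
        for (var j = 1; j < cols; j++) {
            record['field' + j] = 'value_' + i + '_' + j;
        }
        data.push(record);
    }
    return data;
}

var records = makeTestData(5000, 10);
var start = new Date().getTime();
var html = [];
for (var k = 0; k < records.length; k++) {
    html.push('<tr><td>' + records[k].id + '</td><td>' + records[k].field1 + '</td></tr>');
}
$('#grid tbody').html(html.join(''));   // jQuery assumed, as in the question
// IE6 has no console, so use alert() there instead of console.log
console.log('rendered in ' + (new Date().getTime() - start) + ' ms');
```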

Related

SAPUI5: Does Using Formatters Impact Performance?

I am working on a custom SAPUI5 app using an ODataModel, for which I have to format some of the fields that I will be displaying in a List control.
I need to know which of the approaches mentioned below is better w.r.t. the performance of the app.
1) Is it a good idea to use a Formatter.js file and write a method for each field that needs formatting?
Example -
There are 2 fields which should be formatted before showing them in the UI, and hence 2 formatter functions.
2) Before binding the model to the List, do the formatting in a loop over each row.
Example -
Loop at OData.
--do formatting here for both the fields
move data to model.
Endloop.
Bind the new model to the UI
Is there any other way by which we can improve performance, apart from code minification or using grunt?
Appreciate your help.
Thanks,
Rahul
Replacing formatters with other solutions is definitely NOT the place to start when optimizing performance, not to mention that you would lose a lot of the convenience the ODataModel comes with by manually manipulating the data in it.
Formatter Performance
Anyway, using a formatter is of course less performant than pre-formatting your data once after they have loaded. A formatter is executed on every rerendering of your control, so you might not want to do heavy calculations or excessive looping in a formatter that is executed frequently. But with normal usage, formatters are absolutely nothing you should worry about and do not noticeably affect the end-user experience. Keep enjoying the convenience of formatters (and take a look at the cool Expression Binding); a minimal formatter sketch follows.
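As a minimal sketch of approach 1), a formatter is just a function that the binding calls per row; the property name (Price), the module path, and the currency logic are assumptions, not taken from the question.

```javascript
// Hedged sketch of a formatter module - the property and logic are made up.
// e.g. webapp/model/formatter.js
sap.ui.define([], function () {
    "use strict";
    return {
        // Called by the binding on every (re)rendering, once per bound row.
        formatPrice: function (sPrice) {
            if (!sPrice) {
                return "";
            }
            return parseFloat(sPrice).toFixed(2) + " EUR";
        }
    };
});
```

In an XML view this could then be referenced as text="{ path: 'Price', formatter: '.formatter.formatPrice' }", assuming the controller exposes the module as this.formatter.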
General Performance Considerations
To improve performance it is first of all very important to identify the real bottleneck. In many cases this is simply the backend; there is usually much more to gain there with much less effort. Always keep that in mind: optimizing UI code is pointless while the main backend call runs for, say, 3 seconds.
Things to improve your UI performance might be:
serve SAPUI5 from a CDN
use a Component-preload, which can be generated with grunt-openui5 or gulp-ui5-preload (I think it does not minify XML yet, so you could do that additionally before creating the Component-preload)
try to reduce the number of SAPUI5 libraries you are using
be aware of which SAPUI5 libraries you are NOT using and consistently remove those (don't forget the dependencies section in the Component metadata resp. manifest.json)
be aware that sap.ui.layout is a separate independent library (not registering it as such will result in a lot of extra requests)
if you use an ODataModel, make sure you set useBatch to true (the default in v2.ODataModel); a short sketch follows after this list
intelligently design your OData service (if you can influence it)
intelligently use $expands: sometimes it can make sense to preload $expand data on a parent binding that does not actually use it e.g. if you most probably need the data later on
think about bundling your app as native app and benefit from improved caching (Kapsel)
Check Performance: Speed Up Your App and Performance Issues
squeeze out some more bytes and save some requests by minifying/combining custom css or other resources if you have some
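A minimal sketch of the useBatch point from the list above; the service URL is a placeholder, and in v2.ODataModel batching is already the default, so this just makes the intent explicit.

```javascript
// Hedged sketch: create a v2.ODataModel with request batching enabled.
// The service URL is a placeholder.
sap.ui.define(["sap/ui/model/odata/v2/ODataModel"], function (ODataModel) {
    "use strict";
    return new ODataModel("/sap/opu/odata/sap/ZMY_SRV/", {
        useBatch: true   // group read/change requests into $batch calls
    });
});
```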
If you are generally interested in Web Performance, I can recommend Steve Souders' books.
I'm totally open for more ideas on SAPUI5 performance improvements! Anyone?
BR
Chris
The best practice is to do it this way. A formatter allows you to receive an input and return an output. The formatter function is called at runtime, once for each of the rows displayed in your list. The reason it is called for every row is that you cannot guarantee the input will be the same for all of the rows in the list.
The concept of binding is to let the framework loop over your data model and update the UI accordingly. It is much better to use binding for many reasons: maintainability, performance, separation of the data layer from the presentation layer, core optimizations, and more. A short binding sketch follows.
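As a hedged illustration of binding a List instead of looping over the data manually; the entity set /ProductSet and the field names are invented.

```javascript
// Hedged sketch: bind the items aggregation and let the framework handle each row.
// Entity set and property names are made up.
var oList = new sap.m.List();
oList.bindItems({
    path: "/ProductSet",
    template: new sap.m.StandardListItem({
        title: "{Name}",
        info: {
            path: "Price",
            // per-row formatting handled by the binding, no manual loop required
            formatter: function (sPrice) {
                return sPrice ? parseFloat(sPrice).toFixed(2) + " EUR" : "";
            }
        }
    })
});
oList.setModel(oModel);   // oModel: an ODataModel such as the one sketched earlier
```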

Make MagicSuggest in Bootstrap faster

I used MagicSuggest in Bootstrap for autocomplete search.
Now I want to make it faster.
I have a lot of data, so I cache the data from the database in order to access it faster without querying the database each time.
My source URL is a Spring MVC controller that returns the data as JSON.
But I want to see the search results faster. How can I do this with MagicSuggest?
I think it is slow because there is so much data.
For example, when I type 'm' in the text box it is slow to suggest data, and sometimes the browser hangs.
Would setting the maximum number of suggestions to a small integer in MagicSuggest help? Please help me improve the performance.
Before, I was showing all of the items at once, and that made my system slow.
Now that I have set the maximum number of suggestions to 10, the suggestion performance has improved, as sketched below.
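A minimal sketch of that setting; the option names (maxSuggestions, minChars) and the /suggest endpoint are assumptions to check against the MagicSuggest documentation for your version.

```javascript
// Hedged sketch: cap the number of rendered suggestions so typing one character
// does not render the whole dataset. Option names and endpoint are assumed.
$('#search').magicSuggest({
    data: '/suggest',      // Spring MVC controller returning JSON
    maxSuggestions: 10,    // render at most 10 matches at a time
    minChars: 2            // wait for at least 2 characters before suggesting
});
```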

Ajax returned data size

I've been searching for some info on how much data an AJAX call can receive/handle and haven't found anything.
The scenario is a common one: a call to the back-end to retrieve some rows from a database. The call can return any number of rows. The question is, how much data can I safely return to the front end?
Hope someone could shed some light on the subject.
The user can digest far fewer rows than you can safely return. Either page the data for the user's benefit or provide searching/filtering options.
For a real answer, it's going to vary considerably for each browser, especially for mobile users, and depend a lot on each user's bandwidth and hardware. This is one of those things where if you have to ask, you're doing it wrong.
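To make the paging advice concrete, here is a minimal jQuery sketch; the /rows endpoint, its page/pageSize parameters, and the field names are all invented.

```javascript
// Hedged sketch: fetch one page of rows at a time instead of the whole table.
// Endpoint, parameters, and fields are placeholders.
function loadPage(page, pageSize) {
    $.ajax({
        url: '/rows',
        data: { page: page, pageSize: pageSize },
        dataType: 'json',
        success: function (rows) {
            var html = [];
            for (var i = 0; i < rows.length; i++) {
                html.push('<tr><td>' + rows[i].id + '</td><td>' + rows[i].name + '</td></tr>');
            }
            $('#grid tbody').html(html.join(''));
        }
    });
}

loadPage(1, 50);   // show the first 50 rows; load more on demand
```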

IE performance for rendering huge html data

I have a few Perl CGIs which query almost the whole table, with more than 5000 rows in the result, and send that data to browsers. The size of the generated HTML is around 1 MB.
Earlier I was using tables (which should be the ideal approach).
Unfortunately most users use IE, and it does not display the data until it receives the closing table tag. Can we do something about that?
To push output as soon as it is generated, I used another approach with printf and <pre>, which reduced the response size by 200 KB and appears faster on display.
Again, IE (and no other browser) eats up CPU and hangs for a couple of seconds... :-( ..
Can we do something about that too?
FYI, I am using IE8.
Perhaps it would be wise from a UX perspective to use some sort of pagination method. Having a single page with thousands upon thousands of rows sounds entirely unfriendly for the end user. Something like a simple means of pagination ("Skip to page =dropdown=") would certainly solve your problem, as well as decrease load times and increase usability.
There are also several solutions which are pre-built and would likely integrate rather easily. One which comes to mind almost immediately is Sencha's Paging Grid:
http://dev.sencha.com/deploy/dev/examples/grid/paging.html
http://www.sencha.com/products/js/
It's pretty nifty and you'll likely get some kudos for using a hip new technology. There are other options, too:
Ingrid: http://www.reconstrukt.com/ingrid/src/example3.html
YUI Data Grid: http://developer.yahoo.com/yui/datatable/#data
MyTableGrid: http://pabloaravena.info/mytablegrid/
Hope this helps!
Do more people really use IE?
http://www.w3schools.com/browsers/browsers_stats.asp
Anyway, why do you still use tables?
We all know they are easy to get confused by, not that easy to maintain, and, in your case, slow...
Output the data as divs, which show the data as it arrives and do not need to wait for a closing table tag; div and span also have the fewest properties. If IE still hangs while rendering, build the rows in small chunks, as sketched below.
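A hedged sketch of that chunked approach in plain JavaScript (the rows array and markup are placeholders): rendering in small batches with setTimeout gives IE8's single thread a chance to breathe instead of blocking for seconds.

```javascript
// Hedged sketch: append rows as divs in small batches so old IE does not freeze.
// The rows array and row markup are placeholders.
function renderInChunks(rows, container, chunkSize) {
    var index = 0;
    function renderChunk() {
        var html = [];
        var end = Math.min(index + chunkSize, rows.length);
        for (; index < end; index++) {
            html.push('<div class="row">' + rows[index] + '</div>');
        }
        container.insertAdjacentHTML('beforeend', html.join(''));
        if (index < rows.length) {
            setTimeout(renderChunk, 0);   // yield to the browser between chunks
        }
    }
    renderChunk();
}

renderInChunks(myRows, document.getElementById('results'), 200);   // myRows: your 5000 rows
```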

efficient serverside autocomplete

First of all, I know:
Premature optimization is the root of all evil
But I think a badly implemented autocomplete can really blow up your site.
I would like to know if there are any libraries out there which can do autocomplete efficiently (server-side) and preferably fit into RAM (for best performance). So no browser-side JavaScript autocomplete (YUI/jQuery/Dojo); I think there are enough topics about that on Stack Overflow. But I could not find a good thread about this on Stack Overflow (maybe I did not look hard enough).
For example, autocompleting names:
names: [alfred, miathe, .., ..]
What I can think of:
A simple SQL LIKE, for example: SELECT name FROM users WHERE name LIKE 'al%'.
I think this implementation will blow up with a lot of simultaneous users or a large data set, but maybe I am wrong, so numbers (of what it could handle) would be cool.
Using something like Solr's terms component, for example: http://localhost:8983/solr/terms?terms.fl=name&terms.sort=index&terms.prefix=al&wt=json&omitHeader=true.
I don't know the performance of this, so users with big sites, please tell me.
Maybe something like an in-memory Redis trie, which I also haven't tested the performance of.
I also read in this thread about how to implement this in Java (Lucene and a library created by shilad).
What I would like to hear about are implementations used by real sites and numbers for how well they handle load, preferably with:
A link to the implementation or code.
Numbers up to which you know it can scale.
It would be nice if it could be accessed by HTTP or sockets.
Many thanks,
Alfred
Optimising for Auto-complete
Unfortunately, the resolution of this issue will depend heavily on the data you are hoping to query.
LIKE queries will not put too much strain on your database, as long as you spend time using 'EXPLAIN' or the profiler to show you how the query optimiser plans to perform your query.
Some basics to keep in mind:
Indexes: Ensure that you have indexes setup. (Yes, in many cases LIKE does use the indexes. There is an excellent article on the topic at myitforum. SQL Performance - Indexes and the LIKE clause ).
Joins: Ensure your JOINs are in place and are optimized by the query planner. SQL Server Profiler can help with this. Look out for full index or full table scans
Auto-complete sub-sets
Auto-complete queries are a special case, in that they usually work on ever-decreasing sub-sets.
'name' LIKE 'a%' (may return 10000 records)
'name' LIKE 'al%' (may return 500 records)
'name' LIKE 'ala%' (may return 75 records)
'name' LIKE 'alan%' (may return 20 records)
If you return the entire result set for query 1, then there is no need to hit the database again for the following result sets, as they are a sub-set of your original query.
Depending on your data, this may open a further opportunity for optimisation.
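A hedged browser-side sketch of that sub-set idea (the /autocomplete endpoint and response shape are assumptions): fetch the wide result once, then narrow it locally as the user keeps typing.

```javascript
// Hedged sketch: query the server for the first prefix, then filter the cached
// result set locally while the prefix only grows. Endpoint and shape are made up.
var cachedPrefix = null;
var cachedNames = [];

function suggest(prefix, callback) {
    // If the new prefix extends the cached one, filter locally - no new query needed.
    if (cachedPrefix && prefix.indexOf(cachedPrefix) === 0) {
        callback($.grep(cachedNames, function (name) {
            return name.toLowerCase().indexOf(prefix.toLowerCase()) === 0;
        }));
        return;
    }
    // Otherwise hit the server once and cache the wider result set.
    $.getJSON('/autocomplete', { q: prefix }, function (names) {
        cachedPrefix = prefix;
        cachedNames = names;
        callback(names);
    });
}
```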
I will not be able to comply with all of your requirements; obviously the scaling numbers will depend on the hardware, the size of the DB, the architecture of the app, and several other things. You must test it yourself.
But I will tell you the method I've used with success:
Use a simple SQL LIKE query, for example SELECT name FROM users WHERE name LIKE 'al%', but use TOP 100 to limit the number of results.
Cache the results and maintain a list of the terms that are cached.
When a new request comes in, first check the list to see whether you have the term (or part of the term) cached.
Keep in mind that your cached results are limited; you may still need to do a SQL query when a longer term is not fully covered by the cached result (i.e. the cached result was cut off at the limit, so it may be missing matches for the longer term). A rough sketch of this caching follows.
Hope it helps.
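A rough sketch of that caching method (plain JavaScript; the queryDatabase callback and storage are invented). The important detail is remembering whether a cached result was truncated at the limit, because a truncated list cannot safely answer longer terms.

```javascript
// Hedged sketch: prefix cache in front of a TOP-100-style query.
// queryDatabase is a placeholder for your real data access.
var LIMIT = 100;
var cache = {};   // prefix -> { names: [...], truncated: true/false }

function lookup(prefix, queryDatabase, callback) {
    // Try to answer from a cached shorter prefix first.
    for (var p = prefix; p.length > 0; p = p.slice(0, -1)) {
        var entry = cache[p];
        if (entry && !entry.truncated) {
            callback(entry.names.filter(function (n) {
                return n.indexOf(prefix) === 0;
            }));
            return;
        }
    }
    // Fall back to the database, e.g. SELECT TOP 100 name FROM users WHERE name LIKE 'prefix%'.
    queryDatabase(prefix, LIMIT, function (names) {
        cache[prefix] = { names: names, truncated: names.length === LIMIT };
        callback(names);
    });
}
```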
Using SQL versus Solr's terms component is really not a comparison. At their core they solve the problem the same way by making an index and then making simple calls to it.
What I would want to know is what you are trying to auto-complete.
Ultimately, the easiest and most surefire way to scale a system is to make a simple solution and then scale it by replicating data. Trying to cache calls or predict results just makes things complicated and doesn't get to the root of the problem (i.e. you can only take them so far, for example if each request misses the cache).
Perhaps a little more info about how your data is structured and how you want to see it extracted would be helpful.
