Server-side pagination in a MEAN stack application

I am working on a MEAN stack application with Angular 4. I am worried about pagination on the server side; I am not able to understand the logic. Please help!

My guess is that you are using MongoDB, since you mentioned the MEAN stack. For implementing pagination, you can use the find, limit and skip functions.
Example (page size is 10 records):
// Page 1
db.document.find().limit(10);
// Page 2
db.document.find().skip(10).limit(10);
// Page 3
db.document.find().skip(20).limit(10);
This is native to MongoDB; however, this approach has a drawback, as the MongoDB manual states:
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.
You can also use any indexed field to achieve this (preferably the _id field).
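An alternative that avoids the growing cost of skip is range ("keyset") pagination on an indexed field. A minimal sketch in the mongo shell, assuming the default ObjectId _id and the same document collection as above:

// Page 1: the first 10 documents ordered by _id.
var page = db.document.find().sort({_id: 1}).limit(10).toArray();

// Remember the last _id returned to the client.
var lastId = page[page.length - 1]._id;

// Next page: start strictly after the last seen _id instead of skipping.
db.document.find({_id: {$gt: lastId}}).sort({_id: 1}).limit(10);

Because each query seeks directly to lastId via the index, a deep page costs roughly the same as page 1.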

Related

Make MagicSuggest in Bootstrap faster

I used MagicSuggest in Bootstrap for autocomplete search, but now I want to make it faster.
I have a lot of data, so I cache the data from the database in order to access it without querying the database every time.
My source URL is a Spring MVC controller that returns the data as JSON.
I want to see the search results faster. How can I do this with MagicSuggest?
I think it is slow because of the large amount of data: for example, when I type 'm' in the text box it is slow to suggest data, and sometimes the browser hangs. Help me improve the performance.
Set the maximum number of suggestions to a small integer in MagicSuggest.
Before, I showed all of the items at once and that made my system slow.
Now that I have set max suggestions = 10, the suggestion performance has improved.
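For reference, a sketch of such a configuration; the option names used here (maxSuggestions, minChars, typeDelay) are assumed from MagicSuggest's option list, and the endpoint is a placeholder:

var ms = $('#search').magicSuggest({
  data: '/users/suggest',  // placeholder Spring MVC endpoint returning JSON
  maxSuggestions: 10,      // render at most 10 suggestions at a time
  minChars: 2,             // do not query on the very first keystroke
  typeDelay: 300           // wait 300 ms after typing pauses before filtering
});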

How do I preload items in a SproutCore ListView?

I've got a relatively fast SproutCore app that I'm trying to make just a tad bit faster.
Right now, when the user scrolls my SC.ListView and they scroll into view some list items that have not been loaded from the server (say from a relationship), the app automatically makes a call to the server to load these records. Even though this is fast, there is still a short period of time where my list items are blank.
I know that I can make them say "Loading..." or something like that (and I have), but I was wondering: is there was a way to pre-load my "off-screen" records so that as the user scrolls, the list items are already loaded?
My ListItemViews will be fairly large (pixel-wise), so even loading double the amount of data is not going to be killer from an AJAX perspective, and it would be nice if as the user scrolled, the content was always loaded (unless they scroll SUPER-SUPER-fast, in which case I'm okay with them seeing a loading indicator).
I currently found a solution by adding the following to my SC.ListView, but I've noticed some major performance issues on mobile and they are directly related to making this change, so I was wondering if there was a better way.
contentIndexesInRect: function(rect) {
  // Double the height of the viewport rect so off-screen rows are rendered too.
  rect.height = rect.height * 2;
  return sc_super();
}
Overriding contentIndexesInRect is the way I would do this. I would do less than double it though – I might get the result from sc_super() and then add a few extra items to the resulting index set. (I believe it comes back frozen, so you may have to clone-edit-freeze.) One or two extra may give you enough breathing room to get the stuff loaded, without contributing nearly as much to the apparent performance issue.
I'm surprised that it results in major performance issues though. It sounds to me like your list items themselves may be heavier-weight than they need to be – for example, they may have a lot of bindings to hook and unhook. If that's what's going on, you may benefit more from improving their efficiency.
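A rough sketch of the clone-edit-freeze idea mentioned above; the exact SC.IndexSet calls used here (copy, get('max'), add, freeze) are assumptions that may need adjusting to your SproutCore version:

contentIndexesInRect: function(rect) {
  var visible = sc_super();          // the indexes SproutCore would render by default
  var padded = visible.copy();       // the returned set may be frozen, so copy it first
  padded.add(padded.get('max'), 2);  // ask for two extra rows past the last visible one
  padded.freeze();
  return padded;
}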
I think you would be better served to load the additional data outside of the context of what the list is actually displaying. For instance, forcing more list items to render in order to trigger additional requests does result in having the extra data available, but also adds several unnecessary elements to the DOM, which is actually detrimental to overall performance. In fact these extra elements are most likely the cause of the major slowdown on mobile once you get to a sufficient number of extras.
Instead, I would first ensure that your list item views are properly pooling so that only the visible items are updating in place with as little DOM manipulation as possible. Then second, I would lazily load in additional data only after the required data is requested. There are quite a few ways to do this depending on your setup. You might want to add some logic to a data source to trigger an additional request on each filled request range or you might want to do something like override itemViewForContentIndex in SC.CollectionView as the point to trigger the extra data. In either case, I imagine it could look something like this,
// …
prefetchTriggered: function (lastIndex) {
  // A query that will fetch more data (this depends totally on your setup).
  var query = SC.Query.remote(MyApp.Record, {
    // Parameters to pass to the data source so it knows what to request.
    lastIndex: lastIndex
  });

  // Run the query.
  MyApp.store.find(query);
},
// …
As I mention in the comments above, the structure of the request depends totally on your setup and your API so you'll have to modify it to meet your needs. It will work better if you are able to request a suitable range of items, rather than one-at-a-time.

Transferring lots of objects with Guid IDs to the client

I have a web app that uses Guids as the PK in the DB for an Employee object and an Association object.
One page in my app returns a large amount of data showing, for every Employee, all of the Associations they may be a part of.
So right now, I am sending to the client essentially a bunch of objects that look like:
{association_id: guid, employees: [guid1, guid2, ..., guidN]}
It turns out that many employees belong to many associations, so I am sending down the same Guids for those employees over and over again in these different objects. For example, it is possible that I am sending down 30,000 total guids across all associations in some cases, of which there are only 500 unique employees.
I am wondering if it is worth me building some kind of lookup index that I also send to the client like
{ 1: Guid1, 2: Guid2 ... }
and replacing all of the Guids in the objects I send down with those ints,
or if simply gzipping the response will compress it enough that this extra effort is not worth it?
Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).
You wrote the following at the end of your question:
Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).
I think that is your main problem. If you don't solve the main problem, you may be able to reduce the size of the transferred data by a factor of 10, for example, but you still won't have solved the main problem. Let us think about the question: why does so much data have to be sent to the client (to the web browser)?
The data on the client side is needed to display some information to the user. A monitor is not large enough to show 30,000 items on one page, and no user can grasp that much information, so I am sure that you display only a small part of it. In that case you should send only the small part of the information that you actually display.
You don't describe how the GUIDs will be used on the client side. If you need the information only during row editing, for example, you can transfer the data when the user starts editing; then you only need to transfer the data for one association.
If you need to display the GUIDs directly, then you can't display all of the information at once anyway, so you can send the information for one page only. When the user scrolls or presses a "next page" button, you can send the next portion of data. In this way you can really dramatically reduce the size of the transferred data.
If you have no possibility to redesign that part of the application, you can implement your original suggestion: by replacing a GUID like "{7EDBB957-5255-4b83-A4C4-0DF664905735}" or "7EDBB95752554b83A4C40DF664905735" with a number like 123, you reduce its size from 34 characters to 3. If you additionally send an array of "GUID mapping" elements like
123:"7EDBB95752554b83A4C40DF664905735",
you can reduce the original data size of 30000*34 = 1020000 characters (about 1 MB) to 500*39 + 30000*3 = 19500 + 90000 = 109500 characters (about 107 KB), i.e. roughly a 10x reduction. Enabling compression of dynamic data on the web server can reduce the size further.
In any case you should examine why your page is so slow. If the program runs on a LAN, then transferring even 1 MB of data can be quick enough. Probably the page is slow while placing the data on the web page. I mean the following: if you modify an element on the page, the positions of all existing elements have to be recalculated. If you instead work with disconnected DOM objects first and then place the whole portion of data on the page at once, you can improve the performance dramatically. You didn't post which technology you use in your web application, so I don't include any examples; if you use jQuery, for example, I could give an example that makes clearer what I mean.
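For what it's worth, a minimal sketch of the "build off-DOM, insert once" idea with jQuery; the names used here (associations for the data, #list for the target element) are hypothetical:

var parts = [];
$.each(associations, function (i, assoc) {
  parts.push('<li>' + assoc.association_id + ' (' +
             assoc.employees.length + ' employees)</li>');
});
// A single insertion into the live DOM instead of one insertion per association:
$('#list').html(parts.join(''));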
The lookup index you propose is nothing more than a "custom" compression scheme. As amdmax stated, this will increase your performance if you have many repeated GUIDs, but so will gzip.
IMHO, the extra effort of writing the custom coding will not be worth it.
Oleg states correctly, that it might be worth fetching the data only when the user needs it. But this of course depends on your specific requirements.
if simply gzipping the response will compress it enough that this extra effort is not worth it?
The answer is: Yes, it will.
Compressing the data will remove the redundant parts as well as the algorithm allows until decompression on the client.
To be sure, just generate the data both uncompressed and compressed and compare the results. You can count the duplicate GUIDs to calculate how big your data block would be with the dictionary compression method. But I guess gzip will do better, because it can also compress syntactic elements like braces, colons, etc. inside your data object.
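A quick way to estimate the payoff of the dictionary approach before building it is to count the duplicate GUIDs in a real response; a sketch with hypothetical names (associations is the decoded payload):

var total = 0, unique = {};
associations.forEach(function (assoc) {
  assoc.employees.forEach(function (guid) {
    total++;
    unique[guid] = true;
  });
});
console.log(total + ' guids sent, ' + Object.keys(unique).length + ' unique');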
So what you are trying to accomplish is Dictionary compression, right?
http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression
Instead of GUIDs, which are 16 bytes long, you will get ints, which are 4 bytes long, plus a dictionary of key-value pairs associating each GUID with some int value, right?
It will decrease your transfer time when many objects share the same id, but it will spend CPU time before the transfer to compress and after the transfer to decompress. So what is the amount of data you transfer: MB, GB, TB? And is there any good reason to compress it before sending?
I do not know how dynamic your data is, but I would:
On a first call, send two directories/dictionaries mapping short ids to the long GUIDs, one for your associations and one for your employees, e.g. {1: AssoGUID1, 2: AssoGUID2, ...} and {1: EmpGUID1, 2: EmpGUID2, ...}. These directories may also contain additional information on the Association and Employee instances; I suspect you do not simply display GUIDs.
On subsequent calls, just send the index of employees per association, e.g. {1: [2,4,5], 3: [2,4], ...}, the key being the association short id and the array values the short ids of its employees. Given your description, building the reverse index (employee to associations) may give a better result size-wise, at the cost of more processing.
Then it's all down to associative-array manipulation, which is straightforward in JS (see the sketch below).
Again, if your data is (very) dynamic on the server side, the two directories will soon become obsolete, and maintaining synchronization may cost you a lot.
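A sketch of the client-side decoding under this scheme; all names here (assoDir, empDir, assocIndex) are hypothetical:

var assoDir = {1: 'AssoGUID1', 2: 'AssoGUID2'};              // association short id -> GUID
var empDir = {2: 'EmpGUID2', 4: 'EmpGUID4', 5: 'EmpGUID5'};  // employee short id -> GUID
var assocIndex = {1: [2, 4, 5], 2: [2, 4]};                  // association -> employee short ids

// Rebuild the original {association_id, employees} shape only when it is needed:
var associations = Object.keys(assocIndex).map(function (shortId) {
  return {
    association_id: assoDir[shortId],
    employees: assocIndex[shortId].map(function (empId) { return empDir[empId]; })
  };
});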
I would start by answering the following questions:
What are the performance requirements? Are there size requirements? Speed requirements? What is the minimum performance that is truly needed?
What are the current performance metrics? How far are you from the requirements?
You characterized the data as possibly being mostly repeats. Is that the normal case? If not, what is?
The 2 options you listed above sound reasonable and trivial to implement. Try creating a look-up table and see what performance gains you get on actual queries. Try zipping the results (with look-ups and without), and see what gains you get.
In my experience if you're not TOO far from the goal, performance requirements are often trial and error.
If those options don't get you close to the requirements, I would take a step back and see if the requirements are reasonable in the time you have to solve the problem.
What you do next depends on which performance goals are lacking. If it is size, you're starting to be limited if you're required to send the entire association list every time. Is that truly a requirement? Can you send the entire list once, and then just updates?

Most performant live search technique for mobile safari

I am building a mobile web application that targets webkit. I have a requirement to perform a live search (on keypress) against a database of ~5000 users.
I've tried a number of different techniques:
On page load, making an AJAX call which loads an in-memory representation of all 5000 users, and querying them on the client. I tried sending JSON, which proved to be too large, and also a custom delimited string, which was then parsed using split(). This was better, but ultimately searches against this array of users was slow.
I tried using a conventional AJAX call, which would return users based on a query, also using the custom delimited string technique. This was better, but I was forced to tune it so that searches were only performed with a minimum of 3 characters. This is not optimal, as I would like to be able to start filtering after 1 character. I could also throttle the calls so that not every keystroke within a certain threshold triggered a request. This could help with performance, but I'd rather not have to fiddle with that sort of thing.
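(For reference, the throttling I have in mind is just a debounce, sketched below; the input element, doSearch function, and 250 ms delay are placeholders.)

var searchTimer = null;
input.addEventListener('keyup', function (e) {
  clearTimeout(searchTimer);
  searchTimer = setTimeout(function () {
    doSearch(e.target.value);  // placeholder for the actual AJAX search call
  }, 250);                     // only fire once typing pauses for 250 ms
});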
Facebook mobile does this very well if you try their friend search. Searches happen instantaneously, and are triggered after 1 character.
My question is, does anyone have any suggestions for faster live searches for a mobile app? Should I be looking at localStorage? Is this reliable, feasible?
Is there any reason you can't use a binary search? If the data is sorted by first name, the names matching what you've typed so far will sit in one contiguous block. If you want first and last name search, you could create a second copy of the data sorted by last name and look in both sets.
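A sketch of prefix lookup via binary search on a pre-sorted array (all names and variables here are made up):

function firstWithPrefix(sorted, prefix) {
  var lo = 0, hi = sorted.length;
  while (lo < hi) {
    var mid = (lo + hi) >> 1;
    if (sorted[mid] < prefix) lo = mid + 1;
    else hi = mid;
  }
  return lo; // index of the first name >= prefix
}

function searchPrefix(sorted, prefix) {
  var results = [], i = firstWithPrefix(sorted, prefix);
  while (i < sorted.length && sorted[i].lastIndexOf(prefix, 0) === 0) {
    results.push(sorted[i++]);
  }
  return results;
}

// Sort once when the 5000 users arrive, then every keystroke is a cheap lookup:
var names = ['alice', 'bob', 'mary', 'maya', 'mike'];
searchPrefix(names, 'ma'); // -> ['mary', 'maya']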
Some helpful but more complicated data structures that address this type of problem include:
http://en.wikipedia.org/wiki/Directed_acyclic_word_graph
http://en.wikipedia.org/wiki/Trie

MongoDB embedded vs. reference from performance perspective

I read that embedding is better from a performance point of view:
"If performance is an issue, embed." (http://www.mongodb.org/display/DOCS/Schema+Design), and most guides say that "contains" relationships should be embedded.
However I am not sure this is the case. Suppose we have two objects: Blog and Post. Blog contains posts.
Now making all posts embedded in blog will have the following issues:
Paging. Since it's not possible to filter embedded objects, we will always get all posts and need to filter them out in the application.
Filtering. Same as before, when searching for word inside posts, it will not be possible to filter the embedded collection from MongoDB.
Insert. I assume inserting into a collection is faster than inserting into an embedded array. Is this correct? Is this written anywhere?
Update. Same as before, updating a field inside the smaller document (Post) might be faster than updating the post inside the big Blog document. Is this correct?
Taking all of the above, I would go for having posts in a separate collection referencing Blog. Is this the correct conclusion?
(Note: Please do not factor document size limit in the response, let's assume each blog will have at most 1000 posts)
1. Paging is possible with the $slice operator:
db.blogs.find({}, {posts: {$slice: [10, 10]}}) // skip 10, limit 10
2. Filtering is also possible:
db.blogs.find({"posts.title": "Mongodb!"}, {posts: {$slice: 1}}) // take one post
3, 4. Generally I guess you are speaking about a small performance difference. It's not rocket science; it's just a blog with at most 1000 posts.
You said:
Is this the correct conclusion?
No, not if you care about performance (in general, if the system will stay small, you can go with separate documents).
I've done a small performance test regarding 3 and 4; here are the results:
---------------------------------------------------------
| Count | Inserting posts | Adding to nested collection |
---------------------------------------------------------
|     1 |            1 ms |                       28 ms |
|  1000 |           81 ms |                      590 ms |
| 10000 |          759 ms |                     2723 ms |
---------------------------------------------------------
As for 3 & 4, if you are inserting into a nested document, it is basically an update.
This can be terribly bad for your performance because inserts are generally appended to the end of the data which works fine and fast. Updates, on the other hand, can be much trickier.
If your update does not change the size of a document (meaning that you had a key/value pair and simply changed the value to a new value that takes up the same amount of space), then you will be OK, but when you start modifying documents and adding new data, a problem arises.
The problem is that while MongoDB allots more space than it needs for each document, it may not be enough. If you insert a document that is 1k large, MongoDB may allot 1.5k for the document to ensure that minor changes to the document have enough space to grow. If you use more than the allocated space, MongoDB has to fetch the entire document and re-write it at the tail end of the data.
There is obviously a performance implication in fetching and re-writing the data which will be amplified by the frequency of such an operation. To make matters worse, when this happens you end up leaving holes or pockets of unused space in your data files.
This ultimately gets copied into memory which means that you may end up using 2GB of RAM to store your data set, while in reality the data itself only takes up 1.5GB because there are .5GB worth of pockets. This fragmentation can be avoided by doing inserts as opposed to updates. It can also be fixed by doing a database repair.
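As a rough illustration in the mongo shell of that era (treat the exact stats field and helper as assumptions about your MongoDB version):

db.blogs.stats().paddingFactor  // noticeably above 1.0 means documents keep being moved
db.repairDatabase();            // rewrites the data files and reclaims the unused holes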
In the next version of MongoDB there will be an online compaction function.
You can do paging with $slice on the embedded array.
You can search with "field1.field2": /aRegex/, where aRegex is the word you are searching for; but take care of performance.
About 3 and 4 I have no proof data.
BTW, two collections can be easier to code/use/manage, and you can simply store a blogId in each post document and add "blogId": "1234ABCD" to all your queries.
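For instance, a minimal sketch of the referenced design in the mongo shell, assuming hypothetical posts/blogs collections and the "1234ABCD" id from above:

// Each post carries a reference back to its blog.
db.posts.insert({blogId: "1234ABCD", title: "Hello", body: "..."});

// Paging the posts of one blog (page 2, 10 per page), newest first:
db.posts.find({blogId: "1234ABCD"}).sort({_id: -1}).skip(10).limit(10);

// Filtering the same blog's posts by a word in the title:
db.posts.find({blogId: "1234ABCD", title: /Mongodb/});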
