I want to get the first 20 records; the total number of records is around 200.
After some time (by calling the same service) I want another 20 records.
On each call I want to get 20 records. How can I implement this?
I am using Spring and Hibernate, with Angular as the front-end.
Please provide a solution.
Thanks in advance.
Use spring-data-rest; it lets you expose your Hibernate entity to the user in a RESTful way. Using the auto-generated endpoints you can perform POST/PUT/GET/DELETE, and when you expose the entity, pagination is available by default.
With spring-data-rest, your scenario can be handled by supplying a page size in the (GET) REST request.
Example:
For instance, assume that you have 200 user records in your DB and you want to serve 20 records per request; the GET REST URL will then look like this:
http://localhost:8080/users?page=0&size=20
There are two key parameters to note:
page - the page number to access (0 indexed, defaults to 0).
size - the page size requested (defaults to 20).
So to get the first 20 records, the user will issue a request like:
http://localhost:8080/users or http://localhost:8080/users?page=0&size=20
To access the next 20 items, change the page number alone: http://localhost:8080/users?page=1
Since the default size is 20, in your case you can omit it; but if you decide to change the size to, say, 25 or 30, you supply that via the size parameter.
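For reference, a minimal sketch of what this could look like, assuming a Spring Boot 2.x application with spring-boot-starter-data-jpa and spring-boot-starter-data-rest on the classpath; the User entity and repository names here are illustrative, not taken from the question:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@Entity
class User {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

@RepositoryRestResource(collectionResourceRel = "users", path = "users")
interface UserRepository extends PagingAndSortingRepository<User, Long> {
    // No code needed: Spring Data REST exposes GET /users and honours
    // the ?page= and ?size= parameters shown above.
}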
I have 100 rows of data in DynamoDB and an API with the path api/get/{number}.
Now when I pass number=1 the API should return the first 10 values; when I pass number=2 it should return the next 10 values. I implemented something like this with a query, lastEvaluatedKey, and a sort on createdOn. The problem is: if the user passes number=10 after number=2, the lastEvaluatedKey is still that of page 2, so the result is the data of page 3. How can I get the requested page's data directly? Likewise, if the user goes from number=3 back to number=1, the data returned will not be that of page 1.
I am using this to make API calls for pagination in HTML.
I am using Java 1.8 and aws-java-sdk-dynamodb.
Non-sequential pagination in DynamoDB is tough - you have to design your data model around it, if it's an operation that needs to be efficient at all times. For a recommendation in your specific case I'd need more details about the data and access patterns.
In general you have the option of setting the ExclusiveStartKey attribute in the query call, which is similar to an offset in relational databases, but only similar and not identical. The ExclusiveStartKey is the key after which the query will continue, meaning it is actual key data from your table and not just a number.
That means you usually can't guess it, unless it's a sequential number - which isn't ideal.
For sequential pagination, i.e. the user goes from page 1 to page 2, page 2 to page 3, etc., you can pass that key along in the request as a token, but that won't work if the user moves in the other direction (page 3 to page 2) or just randomly navigates to page 14.
In your case you only have a limited amount of data - 100 items - so my solution for your specific case would be to query all items and limit the number of items in the response to n * 10, where n is the requested page. Then you return the last 10 items from that result to your client.
This solution would get expensive at scale (time + cost), though; fortunately, not many people use pagination to go to page 7 or 8 (you could bury a body on page 2 of the Google search results).
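A minimal sketch of that "read n * 10, keep the last 10" idea with the v1 aws-java-sdk-dynamodb on Java 8; the table name, partition key, and the fixed page size of 10 are assumptions for illustration, not taken from your table design:

import java.util.Collections;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

public class PageFetcher {

    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

    // Returns the items for page n (1-based), 10 items per page, by reading the
    // first n * 10 items sorted by the createdOn sort key and keeping the last 10.
    public List<Map<String, AttributeValue>> getPage(int page) {
        QueryRequest request = new QueryRequest()
                .withTableName("items")                                   // assumed table name
                .withKeyConditionExpression("pk = :pk")                   // assumed partition key
                .withExpressionAttributeValues(
                        Collections.singletonMap(":pk", new AttributeValue("ALL")))
                .withScanIndexForward(true)                               // ascending by createdOn
                .withLimit(page * 10);

        QueryResult result = client.query(request);
        List<Map<String, AttributeValue>> items = result.getItems();
        int from = Math.max(0, items.size() - 10);
        return items.subList(from, items.size());
    }
}

Note that this keeps the API stateless (no LastEvaluatedKey bookkeeping), which is what makes random page access possible, at the cost of re-reading earlier pages on every request.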
Yan Cui has written an interesting post on this problem on Hackernoon, you might want to check it out.
I have a model called DemoModel with 1000 records in the DB. I am paginating it using the Paginator in Django (assume 15 records per page, so I have 67 pages).
I want to get the records of pages 3, 4 and 5 and append those records to a single list.
Can I get the object list based on a page range, or is there something else I should do?
Example:
records.page(1)
Here I am getting only one page of records at a time, but how can I get multiple pages of records, i.e. from the first page to the third page?
Assuming you are asking about the API request to get the paginated resources, and you have rest_framework.pagination.LimitOffsetPagination configured as your pagination class, then you can make a request such as:
https://api.example.org/accounts/?limit=30&offset=15
which in turn gives you the 2nd and 3rd "page".
The limit indicates the maximum number of items to return, and is equivalent to the page_size in other styles. The offset indicates the starting position of the query in relation to the complete set of unpaginated items. doc link
Need some advice and help from you!
Two questions.
How can I retrieve a list of Patient resources, 30 at a time (_count=30), sorted by last-modified date? I don't have any search parameters such as identifier, family, or given.
Since my application in the browser is a single-page application, when the user scrolls down and all of the first 30 patients have been shown, I will make another call to get the next 30 patients. I don't need the first 30 patients again and just want the records from 31 to 60. What parameters should I use in this paging search? Do we have something like "?_count=30&_page=2"? Similarly, if I need page 100, I don't want the server to send me the first 99 pages.
Thanks in advance.
GET [baseUrl]/Patient?_count=30&_sort=_lastUpdated
The response will be a Bundle. Look at the Bundle.link with a Bundle.link.relation of "next". The Bundle.link.url will be the URL to use to get the next "page" of content. The format of the URL is undefined and will be server-specific.
Be aware that _count only constrains the base resource. If you query Patient and do a _revinclude on Observation, you'll get 30 patients - but you'll also get all the observations for all 30 of those patients - which could be 10k+ rows in your result set - so be careful with _include and _revinclude.
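If you happen to be doing this from Java, here is a hedged sketch of that flow using the HAPI FHIR R4 client (an assumption on my part; the question doesn't name a client library, and the base URL below is just a public test server used for illustration):

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;

public class PatientPaging {

    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        // Assumed base URL, purely for illustration.
        IGenericClient client = ctx.newRestfulGenericClient("http://hapi.fhir.org/baseR4");

        // First page: 30 Patient resources sorted by last-modified date.
        Bundle page = client.search()
                .byUrl("Patient?_count=30&_sort=_lastUpdated")
                .returnBundle(Bundle.class)
                .execute();

        // The response Bundle carries a link with relation "next"; loadPage()
        // follows that server-specific URL to fetch records 31-60.
        if (page.getLink("next") != null) {
            Bundle nextPage = client.loadPage().next(page).execute();
            System.out.println("Fetched " + nextPage.getEntry().size() + " more patients");
        }
    }
}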
This is a problem I have been thinking about for a long time but I haven't written any code yet because I first want to solve some general problems I am struggling with. This is the main one.
Background
A single page web application makes requests for data to some remote API (which is under our control). It then stores this data in a local cache and serves pages from there. Ideally, the app remains fully functional when offline, including the ability to create new objects.
Constraints
Assume a server-side database of products containing ±50,000 products (50 MB)
Assume no specific DB type; we interact with it via a REST/GraphQL interface
Assume a single product record is < 1 kB
Assume a max payload for a result set of 256 kB
Assume max 5 MB storage on the client
Assume search result sets ranging between 0 ... 5000 items per search
Challenge
The challenge is to define a stateless but (network-)efficient way to fetch pages from a result set, so that it is deterministic which results we will get.
Example
In traditional paging, when getting the next 100 results for some query using this url:
https://example.com/products?category=shoes&firstResult=100&pageSize=100
the search result may look like this:
{
  "totalResults": 2458,
  "firstResult": 100,
  "pageSize": 100,
  "results": [
    {"some": "item"},
    {"some": "other item"},
    // 98 more ...
  ]
}
The problem with this is that there is no way, based on this information, to get exactly the objects that are on a certain page, because by the time we request the next page the result set may have changed (due to changes in the DB), influencing which items are part of the result set. Even a small change can have a big impact: one item removed from the DB that happened to be on page 0 of the result set will change the results we get when requesting all subsequent pages.
Goal
I am looking for a mechanism to make the definition of the result set independent of future database changes, so that if someone searched for shoes and got a result set of 2458 items, they could reliably fetch all pages of that result set even if it was influenced by later changes in the DB (for this purpose, I plan to not really delete items but to set a removed flag on them instead).
Ideas so far
I have seen a solution where the result set included a "pages" property, which was an array with the first and last ID of the items on each page. Assuming your IDs keep going up in number and you never really delete items from the DB, the number of items between two IDs is constant, meaning the app could get all items between those two IDs and always get the exact same items back. The problem with this solution is that it only works if the list is sorted in ID order... and I need custom sorting options.
The only way I have come up with so far is to just send a list of all IDs in the result set... That way pages can be fetched by doing a SELECT * FROM products WHERE id IN (3,4,6,9,...)... but this feels rather inelegant...
Anyway, I am hoping this is not too broad or theoretical. I have a web-based DB, just no good idea of how to do paging with it. I am looking for answers that point me in a direction to learn, not full solutions.
A versioned DB is the answer for result-set consistency.
Each record has a primary id, a modification counter (version number), and a modification/creation timestamp. Instead of modifying record r, you add a new record with the same id, the version number incremented by 1, and sysdate as the modification time.
In the fetch response you include the DB request_time (do not use a client timestamp, due to possible clock differences between client and server). The first page is served normally, but you return sysdate as request_time. Subsequent pages are served differently: you add a condition like modification_time <= request_time for each versioned table.
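A rough sketch of what such a versioned fetch could look like over plain JDBC, assuming a database that supports LIMIT/OFFSET; the table and column names (products, version, modified_at, removed) are assumptions, and the sort order is just an example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

public class VersionedPageQuery {

    // For each id, pick the newest version whose modification time is <= request_time,
    // skip removed items, then page through that frozen snapshot with LIMIT/OFFSET.
    private static final String SQL =
        "SELECT p.id FROM products p " +
        "JOIN (SELECT id, MAX(version) AS v FROM products " +
        "      WHERE modified_at <= ? GROUP BY id) latest " +
        "  ON p.id = latest.id AND p.version = latest.v " +
        "WHERE p.removed = FALSE " +
        "ORDER BY p.id " +            // any stable sort works here
        "LIMIT ? OFFSET ?";

    public List<Long> fetchPage(Connection con, Timestamp requestTime,
                                int pageSize, int page) throws SQLException {
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setTimestamp(1, requestTime);
            ps.setInt(2, pageSize);
            ps.setInt(3, page * pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
        }
        return ids;
    }
}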
You can cache the result set of IDs on the server side when a query arrives for the first time and return a unique ID to the frontend. This unique ID corresponds to the result set for that query. Now the frontend can request something like next_page with the unique ID that it got the first time it made the query. You should still go ahead with your approach of turning the DELETE operation into setting a removed flag, because it ensures that none of the entries from the result set is deleted. You can discard the result set of the query from the cache when the frontend reaches the end of the result set, or you can set a time limit on the lifetime of the cache entry.
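A minimal sketch of that server-side result-set token, in Java for illustration; all class and method names are made up, and a real implementation would also need eviction/TTL handling:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ResultSetCache {

    private final Map<String, List<Long>> cache = new ConcurrentHashMap<>();

    // Store the full, already-sorted list of matching IDs and hand a token back to the client.
    public String register(List<Long> matchingIds) {
        String token = UUID.randomUUID().toString();
        cache.put(token, matchingIds);
        return token;
    }

    // Return the IDs for one page of a previously registered result set;
    // the caller then loads the actual rows with WHERE id IN (...).
    public List<Long> page(String token, int page, int pageSize) {
        List<Long> ids = cache.get(token);
        if (ids == null) {
            return Collections.emptyList();
        }
        int from = Math.min(page * pageSize, ids.size());
        int to = Math.min(from + pageSize, ids.size());
        return ids.subList(from, to);
    }

    // Drop the entry when the client is done with the result set.
    public void release(String token) {
        cache.remove(token);
    }
}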
Just trying to get some clarity regarding the parse.com requests-per-second limit.
Do they count a 'get' from the data browser as a request? Or is it only
requests that start from the client side?
I'm working on a project that retrieves a combination of images and text, and it's over 30 per second (around 45).
Thanks
Any request to the Parse server is counted, and the limitation is described on the website (with the new pricing, the default is 30 req/s, i.e. 1800 requests per minute).
Basically you should retrieve as many elements as possible/needed with as few requests to Parse as possible (balanced against necessity and performance, of course), up to the limit of 1000 rows per query.
Anyway, imagine you already have the list of photo objects (let's say, a list of Photo Parse objects) and each one keeps the image file in a field; any call to the direct image URL is not counted towards the burst limit.
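As an illustration, a hedged sketch of such a batched query with the Parse Android (Java) SDK; the class name "Photo" and the file field "image" are assumptions, since the question doesn't say which client SDK is in use:

import java.util.List;

import com.parse.FindCallback;
import com.parse.ParseException;
import com.parse.ParseFile;
import com.parse.ParseObject;
import com.parse.ParseQuery;

public class PhotoLoader {

    public void loadPhotos() {
        // One request that fetches up to 1000 Photo rows at once,
        // instead of one request per item.
        ParseQuery<ParseObject> query = ParseQuery.getQuery("Photo");
        query.setLimit(1000);
        query.findInBackground(new FindCallback<ParseObject>() {
            @Override
            public void done(List<ParseObject> photos, ParseException e) {
                if (e != null) {
                    return; // handle the error
                }
                for (ParseObject photo : photos) {
                    ParseFile image = photo.getParseFile("image");
                    // Downloading the file from its direct URL does not count
                    // against the requests-per-second limit.
                    String url = image.getUrl();
                    // ... load url into the UI
                }
            }
        });
    }
}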
Hope it helps