I am using the YUI DataTable. When a user searches and types 1234, it sends four requests, one per keystroke. That works fine when the server responds quickly, but in some places the server takes around 30 seconds to respond. Sometimes the response to the 4th request arrives before the 3rd, which makes the table look unpredictable. Does YUI provide any way to cancel the previous request for the DataTable, so that only the results of the last request are shown?
I've been looking for answers and have tried many approaches. I need to automate a website whose country field contains 230 entries. When I run the script, items within the first 50 are selected easily, but for items from 50 onward the list only scrolls down to the last loaded entry. This is the code I am using now; it works fine when the item I want is within the first 50:
ObjPage.WebElement("xpath=some xpath data").Click
ObjPage.WebList("xpath=some xpath data").Select strCountry
Most probably the list items are loaded asynchronously (lazy loading) via AJAX, on demand. Your best bet is to look into the JavaScript, or ask the developer, to find the events that make the list load the rest of its values. Once that is clear, send those events to the list via UFT to trigger the loading.
(Under the hood you could try calling the JavaScript method directly with a run-JS command; however, the event-based solution is more realistic.)
I have a requirement to display a long table. It doesn't have to be displayed all at once, so AJAX loading it is (load the first 50 records, then fetch another 50 rows every time the user scrolls to or past the tenth row from the end).
But I'm not sure which of the two, pagination or infinite scrolling, is better. I'd like the user to be able to skip back to the last scrolled-to point when returning to the page (via the Back button, definitely; if I can do that however the user revisits the page, even better!), with the previous rows visible as well. At the same time, for performance, I want to keep the number of AJAX calls as low as I can.
Any thoughts?
To implement such a scenario, first consume an API that takes the page number and the number of records as request parameters.
For example: 'www.abc.com/v1/tableData?pageId=1&noOfRecords=50'
This returns the first 50 records. The response should also include the total number of records available in the database.
When the user scrolls down, increment the pageId by 1:
For example: 'www.abc.com/v1/tableData?pageId=2&noOfRecords=50'
Keep incrementing the pageId until the number of records you have fetched so far equals the total reported by the API.
That is how you can implement it.
Talking about performance, it does not matter much whether you use pagination or infinite scroll, since either way you are restricting the number of records displayed.
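The loop described above can be sketched in Python. The endpoint, parameter names, and response shape ({"rows": [...], "total": N}) are assumptions based on the example URLs, so the actual request is passed in as a callable:

```python
def fetch_all_pages(fetch_page, page_size=50):
    """Keep requesting pages until the rows collected equal the reported total.

    fetch_page(page_id, n) is assumed to return {"rows": [...], "total": N},
    mirroring the hypothetical 'pageId'/'noOfRecords' API above.
    """
    rows = []
    page_id = 1
    while True:
        data = fetch_page(page_id, page_size)
        rows.extend(data["rows"])
        # Stop once we have everything (or the server returns an empty page).
        if len(rows) >= data["total"] or not data["rows"]:
            return rows
        page_id += 1

# Fake backend with 120 records, standing in for the real HTTP call:
dataset = list(range(120))
def fake_fetch(page_id, n):
    start = (page_id - 1) * n
    return {"rows": dataset[start:start + n], "total": len(dataset)}
```

In a real client, fetch_page would wrap an HTTP GET to the paged endpoint; in an infinite-scroll UI you would call it once per scroll event instead of in a loop.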
I am a little confused by Elasticsearch's scroll functionality.
In Elasticsearch, is it possible to call the search API every time the user scrolls through the result set?
From documentation
"search_type" => "scan", // use search_type=scan
"scroll" => "30s", // how long between scroll requests. should be small!
"size" => 50, // how many results *per shard* you want back
Does that mean it will perform the search every 30 seconds and return sets of results until there are no more records?
For example, my ES index has 500 records in total, and I receive them as two sets of 250 records each. Is there any way I can display the first set of 250 records, and then the second set of 250 when the user scrolls? Please suggest.
What you are looking for is pagination.
You can achieve your objective by querying for a fixed size and setting the from parameter. Since you want to set display in batches of 250 results, you can set size = 250 and with each consecutive query, increment the value of from by 250.
GET /_search?size=250 ---- return first 250 results
GET /_search?size=250&from=250 ---- next 250 results
GET /_search?size=250&from=500 ---- next 250 results
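A tiny helper makes the from/size arithmetic explicit (a sketch; the helper name is mine, but the size and from keys match the Search API parameters used above):

```python
def page_params(page, size=250):
    """Return the size/from query parameters for a zero-based page number."""
    return {"size": size, "from": page * size}

# page 0 -> from=0, page 1 -> from=250, page 2 -> from=500,
# matching the three GET requests above.
```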
On the contrary, Scan & scroll lets you retrieve a large set of results with a single search and is ideally meant for operations like re-indexing data into a new index. Using it for displaying search results in real-time is not recommended.
To explain Scan & scroll briefly, what it essentially does is that it scans the index for the query provided with the scan request and returns a scroll_id. This scroll_id can be passed to the next scroll request to return the next batch of results.
Consider the following example-
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch()

# Initialize the scroll
page = es.search(
    index='yourIndex',
    doc_type='yourType',
    scroll='2m',
    search_type='scan',
    size=1000,
    body={
        # Your query's body
    }
)
sid = page['_scroll_id']
scroll_size = page['hits']['total']

# Start scrolling
while scroll_size > 0:
    print("Scrolling...")
    page = es.scroll(scroll_id=sid, scroll='2m')
    # Update the scroll ID
    sid = page['_scroll_id']
    # Get the number of results returned in the last scroll
    scroll_size = len(page['hits']['hits'])
    print("scroll size: " + str(scroll_size))
    # Do something with the obtained page
In the above example, the following events happen:
The scroller is initialized. This returns the first batch of results along with the scroll_id.
For each subsequent scroll request, the updated scroll_id (received in the previous scroll request) is sent and next batch of results is returned.
Scroll time is basically the time for which the search context is kept alive. If the next scroll request is not sent within the set timeframe, the search context is lost and results will not be returned. This is why it should not be used for real-time results display for indexes with a huge number of docs.
You are misunderstanding the purpose of the scroll property. It does not mean that Elasticsearch will fetch the next page of data after 30 seconds. When you make the first scroll request, you specify when the scroll context should be closed; scroll=30s tells it to close the scroll context after 30 seconds.
The first scroll request returns a scroll_id parameter in its response. To get the next page you pass that value in the next scroll request. If you do not make the next scroll request within 30 seconds, the scroll context will be closed and you will not be able to get further pages for that scroll.
What you described as an example use case is actually search-result pagination, which is available for any search query and is limited to 10k results. Scroll requests are needed when you have to go beyond that 10k limit; with a scroll query you can fetch even the entire collection of documents.
Probably the source of confusion here is that the term scroll is ambiguous: it means the type of query, and it is also the name of a parameter of such a query (as mentioned in other comments, it is the time ES will keep waiting for you to fetch the next chunk).
Scroll queries are heavy and should be avoided unless absolutely necessary. In fact, the docs say:
Scrolling is not intended for real time user requests, but rather for processing large amounts of data, ...
Now, regarding your other question:
In elasticsearch is it possible to call search API everytime whenever the user scrolls on the result set?
Yes, even several parallel scroll requests are possible:
Each scroll is independent and can be processed in parallel like any scroll request.
The documentation of the Scroll API at Elastic also explains this behaviour.
The result size of 10k is a default value and can be overridden at runtime if necessary:
PUT /yourIndex/_settings
{ "index" : { "max_result_window" : 500000 } }
The life time of the scroll id is defined in each scroll request with the parameter "scroll", e.g.
..
"scroll" : "5m"
..
In recent versions of Elasticsearch, you would use search_after instead (typically together with a point-in-time). The keep_alive you set there, much like the scroll timeout, only needs to cover the time it takes you to process one page.
That's because Elasticsearch keeps your search context alive for that amount of time and then removes it. Also, Elasticsearch won't fetch the next page for you automatically; you have to request it yourself, passing along the ID from the last request.
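A minimal sketch of how a search_after request body is built up page by page (the helper name is mine; the body keys query, size, sort, and search_after are from the Search API):

```python
def next_page_body(base_query, size, sort, last_hit_sort=None):
    """Build a search body; pass the previous page's sort values to continue.

    last_hit_sort is the "sort" array of the final hit on the previous page;
    omitting it produces the body for the first page.
    """
    body = {"query": base_query, "size": size, "sort": sort}
    if last_hit_sort is not None:
        body["search_after"] = last_hit_sort
    return body
```

The first page omits search_after; each subsequent request copies the sort values of the last hit it received, so no server-side context has to outlive a single page fetch.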
It is wise to use the scroll API, as you cannot get more than 10k results at a time from Elasticsearch otherwise.
I am using jQuery Bootgrid to display a few hundred records. I am returning rowCount=10 from the server side, but it is not working and keeps showing all the rows.
My source looks like this:
HTML:
<th data-column-id='ItemID' data-type='numeric' data-identifier='true'>ID</th>
<th data-column-id='ItemNumber'>Item Number</th>
<th data-column-id='ItemDescription'>Description</th>
<th data-column-id='ItemStatus'>Status</th>
<th data-column-id='DateReceived'>Received Date</th>
<th data-column-id='ItemNotes' data-formatter='text' data-sortable='false'>Text Description</th>
<!-- <th data-column-id='NoOfItems' data-formatter='select' data-sortable='false'>No. of Items</th> -->
<th data-column-id='commands' data-formatter='commands' data-sortable='false'>Actions</th>
Ajax Request:
current: "1"
rowCount: "10"
searchPhrase: ""
Ajax Response:
{
    "current": 1,
    "rowCount": 10,
    "rows": [ /* 12 objects, indices 0 through 11 */ ],
    "total": 12
}
Any help will be appreciated. Thanks
This confuses many people at first, as it did me.
It's important to remember that pagination is not done by JQ-BG. It's done by the server. JQ-BG only tells the server what page the user is requesting and details like rows per page, search strings, sorted columns, etc. It's up to the server to first filter by the search string (if applicable), sort according to the sorted column, and then do the math about which rows in that result set make up the page that the user is requesting. In the end, the server sends no more than one page's worth of rows. The server also feeds back the total number of pages that are available so that JQ-BG can arrange the tiled page numbers at the bottom for the user to click on.
In the end this makes sense because, no matter the size of the data, it isn't all sent over the wire in one giant transaction that would, at some point, overwhelm the browser and make the network appear "slow".
But, it does create some challenges, like temporarily storing the filtered, sorted data across ajax requests and doing the pagination within the cached results.
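A minimal server-side sketch of that contract, assuming the rows are already filtered and sorted (the field names current, rowCount, rows, and total mirror the Bootgrid request/response shown in the question):

```python
def bootgrid_page(all_rows, current, row_count):
    """Slice the filtered, sorted result set into the page Bootgrid asked for.

    current   -- 1-based page number from the request
    row_count -- rows per page; Bootgrid sends -1 for "show all"
    """
    total = len(all_rows)
    if row_count == -1:
        page_rows = all_rows
    else:
        start = (current - 1) * row_count
        page_rows = all_rows[start:start + row_count]
    return {"current": current, "rowCount": row_count,
            "rows": page_rows, "total": total}
```

For the 12-row example above, page 1 with rowCount=10 should return 10 rows and total=12, leaving Bootgrid to render a second page for the remaining 2; returning all 12 rows is what makes the grid appear to ignore rowCount.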
I am using the search.list method of the YouTube Data API v3, doing a keyword search with maxResults=50 per page. totalResults reports a value of more than 13000, and I can pass the nextPageToken from the second query onward to fetch subsequent pages. But beyond 10-12 pages I no longer get a nextPageToken in the response at all. (Since totalResults is more than 13000, I should get around 260 pages.)
How do I get the results for the remaining pages? Is this something to do with the quota?