I am using server-side processing (AJAX requests) to populate my table with data. I am getting the correct data, but pagination is not working. The table info on the bottom left says "Showing 1 to 10 of 182 entries" and the bottom right shows the page numbers, yet the table shows all of the records on the first page. Here's the code I use:
$(tableId).DataTable({
    "paging": true,
    "scrollX": true,
    "filter": false,
    "serverSide": true,
    "columns": [
        {"data": 'transaction_id'},
        {"data": 'merchant_id'},
        {"data": 'merchant_provider_id'},
        {"data": 'transaction_uuid'},
        {"data": 'transaction_status_type'},
        {"data": 'transaction_payment_method'},
        {"data": 'transaction_amount'},
        {"data": 'transaction_amount_aud'},
        {"data": 'transaction_aud_exchange_rate'},
        {"data": 'transaction_amount_usd'},
        {"data": 'transaction_usd_exchange_rate'},
        {"data": 'transaction_currency'},
        {"data": 'transaction_created'},
        {"data": 'transaction_processed'},
        {"data": 'transaction_settled'}
    ],
    "ajax": {
        "url": requestUrl,
        "data": values
    }
});
When using server-side processing with DataTables, the server is responsible for dividing the entries into pages. Each AJAX request includes parameters for the offset (start) and the page length (length), and the server must use them to select and return only the rows for the requested page, together with the draw, recordsTotal and recordsFiltered values DataTables expects.
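As an illustration, here is a minimal sketch of what the endpoint behind requestUrl has to do with those parameters. It assumes a Laravel controller and a Transaction Eloquent model, neither of which appears in the question, so treat it as a pattern rather than a drop-in fix:

// Hypothetical Laravel controller action behind requestUrl
// (assumes use Illuminate\Http\Request and a Transaction Eloquent model)
public function transactions(Request $request)
{
    $start  = (int) $request->input('start', 0);    // row offset sent by DataTables
    $length = (int) $request->input('length', 10);  // page size sent by DataTables

    $total = Transaction::count();

    // Return only the rows of the requested page
    $rows = Transaction::query()
        ->orderBy('transaction_id')
        ->skip($start)
        ->take($length)
        ->get();

    return response()->json([
        'draw'            => (int) $request->input('draw', 0), // echoed back to DataTables
        'recordsTotal'    => $total,
        'recordsFiltered' => $total, // no search/filter handling in this sketch
        'data'            => $rows,
    ]);
}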
I'm working on integrating an external API that paginates its results with page links, and that also has some limits we should respect, for example no more than 1 request per second.
So I've created a command that will be scheduled every hour. The goal of this command is to fetch all the data from the external API and dispatch it to a queued job, so that I can throttle the requests with Redis.
The main problem is that this API paginates its responses like this:
"links": [
{
"href": "https://testapi.io/v1/datas?size=50&limit=2000&page=1",
"rel": "first",
"method": "GET"
},
{
"href": "https://testapi.io/v1/datas?size=50&limit=2000&page=1",
"rel": "previous",
"method": "GET"
},
{
"href": "https://testapi.io/v1/datas?size=50&limit=2000&page=2",
"rel": "next",
"method": "GET"
},
{
"href": "https://testapi.io/v1/datas?size=50&limit=2000&page=4",
"rel": "last",
"method": "GET"
}
]
So I've tried to code an algorithm which:
Creates a job to fetch the first page of results
This job makes a request to the external API and saves the first results
Loops over the pages until it reaches the page given by the "last" entry in the links
Each page calls the same job to save its data into my database
The main problem is that my jobs will not wait for each other before being fired. Once the first job has been dispatched, I can't get the current page back in the command, so there is no way to know when I'm at the end of the results without returning data from that first executed job.
I think this is not a problem related to Laravel but more a logic problem. This is the code I'm trying to implement in my command:
$testService = new testService();

// We fetch all the data from the external API
$page = 1;
$lastPage = 0;

// We get the data of the first page of the API results
$results = MyJob::dispatch($this->size, $page, $testService)->onConnection('medium');
$page++;

// After the first request, we get the "last" page from the "links" of the response
// This can't work, because dispatch() queues the job and does not return its data
$pages = $results->links;
foreach ($pages as $singlePage)
{
    if ($singlePage->rel == "last")
    {
        $lastPage = substr($singlePage->href, -1, 1);
    }
}

// While we still have data and pages to explore, we fetch them
while ($page <= $lastPage)
{
    MyJob::dispatch($this->size, $page, $testService)->onConnection('medium');
    $page++;
}
The problem is that I can only know the total number of pages from inside the first executed job, which means my code would either run into an infinite loop or not work at all, because I can't return data from a dispatched job.
Is there a logical way to fetch and loop over an external API with one job / one request per page, while being sure to respect its rate limits?
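One pattern that would fit this constraint, sketched below purely as an illustration (the fetchPage and saveResults helpers, the job properties and the response shape are assumptions, not code from the question): let the page-1 job itself read the "last" link from the response it just fetched and dispatch the jobs for the remaining pages, so the command only ever dispatches page 1 and never needs data back from a queued job.

// Hypothetical handle() method of MyJob; helper names are made up
public function handle()
{
    $response = $this->testService->fetchPage($this->size, $this->page);

    // Persist the rows of the page this job is responsible for
    $this->saveResults($response->data);

    // Only the first page has to work out how many pages exist
    if ($this->page === 1) {
        $lastPage = 1;
        foreach ($response->links as $link) {
            if ($link->rel === "last") {
                // Read the page number from the query string instead of the
                // last character of the URL, so pages above 9 still work
                parse_str(parse_url($link->href, PHP_URL_QUERY), $query);
                $lastPage = (int) ($query['page'] ?? 1);
            }
        }

        // Queue one job per remaining page; the Redis throttle mentioned
        // above keeps the actual requests to 1 per second
        for ($page = 2; $page <= $lastPage; $page++) {
            MyJob::dispatch($this->size, $page, $this->testService)
                ->onConnection('medium');
        }
    }
}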
While developing a pipeline that uses Elasticsearch as a source, I ran into an issue with paging. I am using the Elasticsearch SQL API. I started by making the request in Postman, and it works well. The body of the request looks like this:
{
    "query": "SELECT Id,name,ownership,modifiedDate FROM \"core\" ORDER BY Id",
    "fetch_size": 20,
    "cursor": ""
}
After the first run, the response body contains a cursor string, which is a pointer to the next page. If I send the request again in Postman and provide the cursor value from the previous response, it returns the data for the second page, and so on. I am trying to achieve the same result in Azure Data Factory. For this I am using a Copy activity, which stores the response to an Azure blob. The setup for the source is the following:
copy activity source configuration
This is the expression for the body:
{
    "query": "SELECT Id,name,ownership,modifiedDate FROM \"#{variables('TableName')}\" WHERE ORDER BY Id",
    "fetch_size": #{variables('Rows')},
    "cursor": ""
}
I have no idea how to correctly set up the pagination rule. The pipeline works properly, but only for the first request. I've tried setting up Headers.cursor with the expression $.cursor, but that setup leads to an infinite loop, and the pipeline fails against the Elasticsearch restriction.
I've also tried reading the documentation at https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support but it seems pretty limited in terms of usage examples and difficult to understand.
Could somebody help me understand how to build the pipeline so that it uses paging?
The response with the cursor looks like:
{
"columns": [
{
"name": "companyId",
"type": "integer"
},
{
"name": "name",
"type": "text"
},
{
"name": "ownership",
"type": "keyword"
},
{
"name": "modifiedDate",
"type": "datetime"
}
],
"rows": [
[
2,
"mic Inc.",
"manufacture",
"2021-03-31T12:57:51.000Z"
]
],
"cursor": "g/WuAwFaAXNoRG5GMVpYSjVWR2hsYmtabGRHTm9BZ0FBQUFBRUp6VGxGbUpIZWxWaVMzcGhVWEJITUhkbmJsRlhlUzFtWjNjQUFBQUFCQ2MwNWhaaVIzcFZZa3Q2WVZGd1J6QjNaMjVSVjNrdFptZDP/////DwQBZgljb21wYW55SWQBCWNvbXBhbnlJZAEHaW50ZWdlcgAAAAFmBG5hbWUBBG5hbWUBBHRleHQAAAABZglvd25lcnNoaXABCW93bmVyc2hpcAEHa2V5d29yZAEAAAFmDG1vZGlmaWVkRGF0ZQEMbW9kaWZpZWREYXRlAQhkYXRldGltZQEAAAEP"
}
I finally found the solution; hopefully it will be useful for the community.
Basically, what needs to be done is to split the solution into a few steps.
Step 1: Make the first request as in the question description and stage the file to blob storage.
Step 2: Read the blob file, get the cursor value and set it to a variable.
Step 3: Keep requesting data with a changed body:
{"cursor" : "#{variables('cursor')}" }
The pipeline looks like this:
pipeline
The configuration of the pagination rule looks like this:
pagination
It is a workaround, since the server ignores this header, but we need something that allows sending the request in a loop.
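For reference, the cursor protocol that the pipeline reproduces is easy to express outside ADF. The following is only a conceptual sketch in plain PHP with cURL (the localhost URL and the absence of authentication are assumptions; /_sql is the standard Elasticsearch SQL endpoint):

// Conceptual sketch of Elasticsearch SQL cursor paging, not an ADF pipeline
$url  = 'http://localhost:9200/_sql?format=json';
$body = [
    'query'      => 'SELECT Id,name,ownership,modifiedDate FROM "core" ORDER BY Id',
    'fetch_size' => 20,
];

do {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS     => json_encode($body),
    ]);
    $page = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // ... stage $page['rows'] somewhere (blob, file, database) ...

    // Every page except the last returns a cursor; the next request
    // only needs that cursor, not the original query
    $body = isset($page['cursor']) ? ['cursor' => $page['cursor']] : null;
} while ($body !== null);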
How can I limit the number of results per page in Yajra DataTables? Currently I'm using the code below:
Controller
return Datatables::of(collect($results))->make(true);
The $results variable is just an array of data from the database.
JS
$('table.dataTableAjax').DataTable({
    "processing": true, // Set to true to show the "Processing" indicator while loading
    "serverSide": true,
    "paging": true,
    "pageLength": 10,
    "ordering": false,  // Set to false to disable sorting
    "ajax": "..."
});
Example data from server
{
"data":[
{ "name": "Bob", "Age": 30 },
{ "name": "Billy", "Age": 33 },
{ "name": "Megan", "Age": 31 }
]
}
So for example, the first page should load 10 rows, the next page 10 rows again, and so on. But what's happening is that it loads all 5000+ rows and just cuts them into pages of 10 rows on the client side, which affects the performance of the application. Any idea?
I ended up just adding the code below, and not using Yajra for this functionality.
// Use the offset (start) and page length (length) that DataTables
// sends with every request to page the query on the database side
$limit = request('length');
$start = request('start');
$query->offset($start)->limit($limit);

// Respond in the format DataTables expects
return response()->json([
    "draw" => intval(request('draw')),
    "recordsTotal" => intval(User::count()),
    "recordsFiltered" => intval($total_filtered),
    "data" => $results
]);
Everything works fine and loads faster. I hadn't noticed that DataTables actually sends these requests to the Laravel endpoint (route).
This works for me, with shorter code:
return Datatables::of($data)->setTotalRecords(500)->make(true);
setTotalRecords() is the answer you're looking for.
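A related option, sketched here under the assumption of a recent yajra/laravel-datatables version and a User Eloquent model (neither is named in the answers above): pass a query builder instead of an already-loaded collection, so the package can apply the start/length parameters as SQL LIMIT/OFFSET itself rather than slicing an in-memory array.

// Hypothetical controller method; User is a stand-in for your model
// (uses the Yajra\DataTables\Facades\DataTables facade)
public function index()
{
    // Passing a query (rather than collect($results)) lets the package
    // page, search and count on the database side
    return DataTables::of(User::query())->make(true);
}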
My question is about the parent-child DataTable used in Vue.js. I want to show the data in descending order, but it only allows filtering/ordering by the name column.
this.datatable = ele.DataTable({
    "data": [],
    "columns": [
        {
            "className": 'details-control',
            "orderable": false,
            "data": "name",
            "defaultContent": ''
        }
    ]
});
Above, name is passed as the column data, so it only filters through the name. I also want it to work with one more column from my database table, so that the names are also displayed in descending order.
I have a bit of a bizarre problem happening in IE11. I'm running DataTables with server-side processing, so I had to create a custom button for exporting the full data set, since the default buttons only export the visible data set.
Here's the Yajra DataTables for Laravel configuration for one of my buttons:
'buttons' => [
    ['extend' => 'csv',
     'text'   => '<i class="fa fa-file-excel-o"></i> CSV',
     'action' => 'function(e, dt, node, config){
         var data = $.extend(
             true,
             dt.context[0].oSavedState,
             {
                 columns: dt.context[0].aoColumns.map(function(col){
                     return {"data": col.data};
                 })
             }
         );
         window.location.href = window.location.href +
             "?action=csv&" +
             $.param(data);
     }'
    ],
...
The button works fine: it compiles a list of the columns and filters and sends the user to a Laravel route that handles the action=csv request and generates an Excel download, which triggers automatically in Chrome and Firefox.
In IE11, however, the browser redirects to the Excel download route but throws up a "Can’t reach this page" error message. I can see in the address bar that the URL is correct, and what's odd is that if I just hit Refresh in the browser, the CSV download is triggered and I am given the option to save.
This happens every time I click the download link. What might cause IE11 to think the page can't be reached, when it can?
I tried looking at the request/response headers in network tools and everything seems to be just fine. Any ideas?
Also, I tried rewriting my window.location logic to create a hyperlink element, attach it to the DOM and trigger a click, and it still yields the same result.
More Information
I tried a few other things and was able to bail out of the download process at any point in the code, right up until the final response to the browser. The response comes back with a 200 status code, and when I look at the Network tab and view the response body, I can see my CSV content right there with the appropriate Content-Disposition and Content-Length headers. The Content-Type header is text/plain, but changing it to text/csv didn't solve the problem.
If IE's Network tab shows everything correctly, why might IE's renderer show a "Page can't be displayed" error?
Well, I was finally able to resolve the IE/Edge exporting issue, and it comes down to query-string length.
The DataTables grid makes a GET request with a whole bunch of properties related to the query (which columns are visible, which filters were applied, etc.).
Sample Request Params:
{
    "action": "csv",
    "time": "1529689896632",
    "start": "0",
    "length": "10",
    "order": [
        ["2","asc"],
        ["1","asc"]
    ],
    "search": {
        "search": "",
        "smart": "true",
        "regex": "false",
        "caseInsensitive": "true"
    },
    "columns": [
        {
            "visible": "true",
            "search": {
                "search": "",
                "smart": "true",
                "regex": "false",
                "caseInsensitive": "false"
            },
            "data": "programs"
        },
        // ...
Because there are so many columns, these request params create a very long query string (in the realm of 3,000-4,000 characters). IE and Edge appear to handle query strings only up to a certain length, as I've seen some bugs come through where the query-string data was truncated.
I ended up reducing the query-string length by omitting unnecessary and default property values that aren't needed for the export. Now IE and Edge both immediately respond with the file download instead of throwing a "page not found" or other bizarre error.