I'm fetching a large amount of data into my Kendo Grid; it can reach 700,000 rows. I'm then hitting a bug in the dataSource -> parse method: the (response) object is null.
var dataSource = new kendo.data.DataSource({
    ....
    schema: {
        parse: function(response) {
            // response is NULL
        }
    }
});
I know I should implement server-side data processing (which would take a lot of time, since filtering, sorting, etc. all have to be handled), which is why I'm looking for a quick workaround for now.
Also, why does this happen? Is there a Kendo limitation here?
Related
This might not be a single question, but rather the list of doubts that comes up when learning NativeScript from scratch.
I have a list of 1000 or more records stored in a data table. I want to display them in a list view, but I don't want to read all the data at once, because I also have images stored in another directory that need to be read as well. For 20 to 30 records the performance is quite good, but for 1000 records it takes more than 15 minutes to read the data and the associated images, since I'm storing some high-quality images.
Therefore I decided to read only 20 records with their respective images and display them in the list. When the user reaches the 15th item of the list, I want to read 10 more records from the server.
When I searched for this, I came across "RadListView Load on Demand".
Then I looked at the code below.
public addMoreItemsFromSource(chunkSize: number) {
    let newItems = this._sourceDataItems.splice(0, chunkSize);
    this.dataItems.push(newItems);
}

public onLoadMoreItemsRequested(args: LoadOnDemandListViewEventData) {
    const that = new WeakRef(this);
    const listView: RadListView = args.object;
    if (this._sourceDataItems.length > 0) {
        setTimeout(function () {
            that.get().addMoreItemsFromSource(2);
            listView.notifyLoadOnDemandFinished();
        }, 1500);
        args.returnValue = true;
    } else {
        args.returnValue = false;
        listView.notifyLoadOnDemandFinished(true);
    }
}
In NativeScript, if I want to access a binding element in the XML, I must use observables in the view model or exports.com_name in the associated JS file.
But in this example it starts with public..! How do I use this in JavaScript?
What is new WeakRef(this)?
Why is it needed?
How do I identify that the user has scrolled to the 15th item, since I want to load more data at that point?
After getting the data, how do I update the array backing the list and show it in the list view?
Finally, I just want to know how to use load on demand.
I tried to create a Playground sample of what I have tried, but it gives an error: it cannot find the RadListView module.
Remember I'm a fresher, so kindly keep this in mind when answering. Thank you.
Please modify the question if you feel it is not up to standards.
You can check the updated answer here:
https://play.nativescript.org/?template=play-js&id=1Xireo
TypeScript to JavaScript
You may use any TypeScript compiler to convert the source code to JavaScript. There are even online compilers, like the official TypeScript Playground.
In my opinion, it's hard to expect ES5 examples any more. ES6-ES9 introduced a lot of new features that make JavaScript development much easier, and TypeScript takes JavaScript to the next level, moving from an interpreted to a compiled workflow.
To answer your question, you will use the prototype chain to define methods on your class in ES5.
YourClass.prototype.addMoreItemsFromSource = function (chunkSize) {
    var newItems = this._sourceDataItems.splice(0, chunkSize);
    this.dataItems.push(newItems);
};

YourClass.prototype.onLoadMoreItemsRequested = function (args) {
    var that = new WeakRef(this);
    var listView = args.object;
    if (this._sourceDataItems.length > 0) {
        setTimeout(function () {
            that.get().addMoreItemsFromSource(2);
            listView.notifyLoadOnDemandFinished();
        }, 1500);
        args.returnValue = true;
    } else {
        args.returnValue = false;
        listView.notifyLoadOnDemandFinished(true);
    }
};
If you are using the fromObject syntax for your Observable, then these functions can be passed inside:
addMoreItemsFromSource: function (chunkSize) {
    ....
},
WeakRef: It helps manage your memory efficiently by keeping a loose reference to the target; read more in the docs.
How to load more:
If you set loadOnDemandMode to Auto, then the loadMoreDataRequested event will be triggered whenever the user reaches the end of scrolling.
loadOnDemandBufferSize decides how many items before the end of the scroll the event should be triggered.
Read more in the docs.
How to update the array:
That's exactly what is showcased in the addMoreItemsFromSource function. Use .push(item) on the ObservableArray that is linked to your list view.
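Since you also asked how to do this in plain JavaScript, here is a minimal sketch of the same pattern using a fromObject view model and an ObservableArray. The file name, the sourceDataItems variable and the chunk size are assumptions for illustration; the XML is expected to bind items="{{ dataItems }}" and loadMoreDataRequested="{{ onLoadMoreItemsRequested }}" on the RadListView.

// main-page.js (assumed file name) - plain JS version of the load-on-demand pattern
var fromObject = require("tns-core-modules/data/observable").fromObject;
var ObservableArray = require("tns-core-modules/data/observable-array").ObservableArray;

// sourceDataItems stands in for your full dataset (e.g. the rows read from your data table)
var sourceDataItems = [];

var viewModel = fromObject({
    dataItems: new ObservableArray(),

    addMoreItemsFromSource: function (chunkSize) {
        var newItems = sourceDataItems.splice(0, chunkSize);
        this.dataItems.push(newItems);
    },

    onLoadMoreItemsRequested: function (args) {
        var listView = args.object;
        if (sourceDataItems.length > 0) {
            viewModel.addMoreItemsFromSource(10);
            listView.notifyLoadOnDemandFinished();
            args.returnValue = true;
        } else {
            args.returnValue = false;
            listView.notifyLoadOnDemandFinished(true); // no more data, disable load on demand
        }
    }
});

exports.pageLoaded = function (args) {
    args.object.bindingContext = viewModel;
};

With loadOnDemandMode="Auto" and, say, loadOnDemandBufferSize="5" set on the RadListView in the XML, the event fires automatically when the user scrolls within 5 items of the end, which covers the "load more around the 15th item" requirement.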
When resolving large amounts of data I notice very slow performance, from the moment the result is returned from my resolver until it reaches the client.
I assume apollo-server iterates over my result and checks the types... either way, the operation takes too long.
In my product I have to return a large amount of data all at once, since it's all used at once to draw a chart in the UI. There is no pagination option for me where I can slice the data.
I suspect the slowness is coming from apollo-server and not from my resolver's object creation.
Note that I log the time the resolver takes to create the object; it's fast, and not the bottleneck.
The later operations performed by apollo-server, which I don't know how to measure, take a lot of time.
Now, I have a version where I return a custom scalar type JSON, and the response is much, much faster. But I really prefer to return my Series type.
I measure the difference between the two types (Series and JSON) by looking at the network panel.
when AMOUNT is set to 500 and the type is Series, it takes ~1.5s (that is, seconds)
when AMOUNT is set to 500 and the type is JSON, it takes ~150ms (fast!)
when AMOUNT is set to 1000 and the type is Series, it's very slow...
when AMOUNT is set to 10000 and the type is Series, I'm getting "JavaScript heap out of memory" (which is unfortunately what we experience in our product)
I've also compared apollo-server performance to express-graphql; the latter works faster, yet still not as fast as returning a custom scalar JSON.
when AMOUNT is set to 500, apollo-server, network takes 1.5s
when AMOUNT is set to 500, express-graphql, network takes 800ms
when AMOUNT is set to 1000, apollo-server, network takes 5.4s
when AMOUNT is set to 1000, express-graphql, network takes 3.4s
The Stack:
"dependencies": {
"apollo-server": "^2.6.1",
"graphql": "^14.3.1",
"graphql-type-json": "^0.3.0",
"lodash": "^4.17.11"
}
The Code:
const _ = require("lodash");
const { performance } = require("perf_hooks");
const { ApolloServer, gql } = require("apollo-server");
const GraphQLJSON = require('graphql-type-json');

// The GraphQL schema
const typeDefs = gql`
  scalar JSON

  type Unit {
    name: String!
    value: String!
  }

  type Group {
    name: String!
    values: [Unit!]!
  }

  type Series {
    data: [Group!]!
    keys: [Unit!]!
    hack: String
  }

  type Query {
    complex: Series
  }
`;

const AMOUNT = 500;

// A map of functions which return data for the schema.
const resolvers = {
  Query: {
    complex: () => {
      let before = performance.now();
      const result = {
        data: _.times(AMOUNT, () => ({
          name: "a",
          values: _.times(AMOUNT, () => (
            {
              name: "a",
              value: "a"
            }
          )),
        })),
        keys: _.times(AMOUNT, () => ({
          name: "a",
          value: "a"
        }))
      };
      let after = performance.now() - before;
      console.log("resolver took: ", after);
      return result;
    }
  }
};

const server = new ApolloServer({
  typeDefs,
  resolvers: _.assign({ JSON: GraphQLJSON }, resolvers),
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
The gql Query for the Playground (for type Series):
query {
  complex {
    data {
      name
      values {
        name
        value
      }
    }
    keys {
      name
      value
    }
  }
}
The gql Query for the Playground (for custom scalar type JSON):
query {
  complex
}
Here is a working example:
https://codesandbox.io/s/apollo-server-performance-issue-i7fk7
Any leads/ideas would be highly appreciated!
There's a related open issue here. Lee Byron summed it up pretty well:
I think the TL;DR of this issue is that GraphQL has some overhead and that reducing that overhead is non-trivial and removing it completely may not be an option. Ultimately GraphQL.js is still responsible for making API boundary guarantees about the shape and type of the returned data and by design does not trust the underlying systems. In other words GraphQL.js does runtime type checking and sub-selection and this has some cost.
The benefits that GraphQL offers (validation, sub-selection, etc.) inevitably incur some overhead as they require additional processing of the data you're returning. And unfortunately, this overhead scales with the size of the data. I imagine if you were to implement a REST endpoint that supported partial responses and did response validation using something like Swagger or Joi, you'd encounter a similar issue.
The "heap out of memory" error means exactly what it says -- you're running out of memory on the heap. You can try to alleviate this by manually increasing the limit.
Typically, large datasets like this should be broken up by implementing pagination. If that's not an option, utilizing a custom scalar will be the next best approach. The biggest downside to this approach is that clients consuming your API will not be able to request specific fields inside the JSON object you return. Outside of patching GraphQL.js, there's really no other alternative to speed up the responses and reduce your memory usage.
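For completeness, here is a rough sketch of what breaking the query up could look like, assuming the chart could be drawn from pages of groups. The field and argument names and the buildAllGroups helper are made up for illustration and are not part of the original schema.

const typeDefs = gql`
  type Unit {
    name: String!
    value: String!
  }

  type Group {
    name: String!
    values: [Unit!]!
  }

  type GroupPage {
    items: [Group!]!
    total: Int!
  }

  type Query {
    groups(offset: Int = 0, limit: Int = 100): GroupPage!
  }
`;

const resolvers = {
  Query: {
    // buildAllGroups() is a hypothetical helper returning the same data the original
    // "complex" resolver produced; only one slice is validated and sent per request
    groups: (_root, { offset, limit }) => {
      const all = buildAllGroups();
      return { items: all.slice(offset, offset + limit), total: all.length };
    }
  }
};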
Comment summary
This data structure/types:
are not individual entities;
are just a series of [grouped] data;
don't need normalization;
won't be normalized properly in the Apollo cache (no id fields).
So this dataset is not what GraphQL was designed for. Of course GraphQL can still be used for fetching this data, but type parsing/matching should be disabled.
Using custom scalar types (graphql-type-json) can be a solution. If you need a hybrid solution, you can type Group.values as JSON (instead of the entire Series). Groups should still have an id field if you want to use normalized cache [access].
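A sketch of that hybrid idea, assuming you can live without field selection inside each group's values (the id field is added so groups can be normalized in the cache):

const typeDefs = gql`
  scalar JSON

  type Unit {
    name: String!
    value: String!
  }

  type Group {
    id: ID!
    name: String!
    values: JSON   # the large inner payload skips per-field type checking
  }

  type Series {
    data: [Group!]!
    keys: [Unit!]!
  }

  type Query {
    complex: Series
  }
`;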
Alternative
You can use apollo-link-rest for fetching 'pure' JSON data (file), leaving type parsing/matching to the client side only.
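A minimal client-side sketch of that idea, assuming a plain REST endpoint serving the raw JSON (the /series path and the port are made up); apollo-link-rest fetches it without any server-side type checking:

import { ApolloClient } from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { RestLink } from "apollo-link-rest";
import gql from "graphql-tag";

const client = new ApolloClient({
  link: new RestLink({ uri: "http://localhost:4000" }), // assumed REST endpoint base
  cache: new InMemoryCache()
});

// The whole payload comes back as plain JSON; no per-field validation on the server.
const SERIES_QUERY = gql`
  query Series {
    series @rest(type: "Series", path: "/series") {
      data
      keys
    }
  }
`;

client.query({ query: SERIES_QUERY }).then(({ data }) => console.log(data.series));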
More advanced alternative
If you want to use one GraphQL endpoint ...
write your own link - use directives - 'ask for json, get typed' - a mix of the two above. Something like the rest link with de-/serializers.
In both alternatives - why do you really need it? Just for drawing? It's probably not worth the effort. There is no pagination, but hopefully streaming (live updates?) ... no cursors ... load more (subscriptions/polling) by ... last update time? Doable, but it doesn't feel right.
I'm writing an Angular app which uses the ReactiveX API to handle asynchronous operations. I used the API before in an Android project and I really like how it simplifies concurrent task handling. But there is one thing I'm not sure how to solve in the right way.
How do I update an observer from an ongoing task? The task in this case takes time to load/create a complex/large object, and I'm able to report intermediate progress, but not the object itself. An observable can only return one data type. Therefore I know of two possibilities.
Create an object which has a progress field and a data field. This object can simply be returned with Observable.onNext(object). The progress field is updated on every onNext, while the data field stays empty until the last onNext, which sets it to the loaded value (a rough sketch of this follows below).
Create two observables, a data observable and a progress observable. The observer has to subscribe to the progress observable for progress updates and to the data observable to be notified when the data is finally loaded/created. These can also optionally be zipped together for one subscription.
I used both techniques, and they both work, but I want to know if there is a unified standard, a clean way, to solve this task. It can, of course, also be a completely new approach. I'm open to any solution.
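For illustration, option 1 roughly looks like this. This is just a sketch with made-up timing and payload, assuming RxJS; a single stream emits { progress, data } objects and data stays null until the final emission.

import { Observable } from "rxjs";

function loadWithProgress() {
    return new Observable(observer => {
        let progress = 0;
        const timer = setInterval(() => {
            progress += 0.25;
            if (progress < 1) {
                observer.next({ progress: progress, data: null });          // intermediate updates
            } else {
                observer.next({ progress: 1, data: { result: "the large object" } }); // final emission carries the data
                observer.complete();
                clearInterval(timer);
            }
        }, 500);
        return () => clearInterval(timer); // teardown on unsubscribe
    });
}

loadWithProgress().subscribe(update => {
    if (update.data) {
        console.log("finished:", update.data);
    } else {
        console.log("progress:", Math.round(update.progress * 100) + "%");
    }
});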
After careful consideration I used a solution similar to option two in my question.
The main observable is concerned with the actual result of the operation, an HTTP request in this case (the file-iteration example is similar). It is returned by the "work" function.
A second Observer/Subscriber can be added through a function parameter. This subscriber is concerned only with the progress information. This way all operations are null-safe and no type checks are needed. A second version of the work function, without the progress observer, can be used if no progress UI update is needed.
// rxjs imports (exact import paths may differ depending on your RxJS version)
import { Observable, Observer, Subject } from "rxjs";

export class FileUploadService {

    doWork(formData: FormData, url: string): Subject<Response> {
        return this.privateDoWork(formData, url, null);
    }

    doWorkWithProgress(formData: FormData, url: string, progressObserver: Observer<number>): Subject<Response> {
        return this.privateDoWork(formData, url, progressObserver);
    }

    private privateDoWork(formData: FormData, url: string, progressObserver: Observer<number> | null): Subject<Response> {
        return Observable.create(resultObserver => {
            let xhr: XMLHttpRequest = new XMLHttpRequest();
            xhr.open("POST", url);
            xhr.onload = (evt) => {
                if (progressObserver) {
                    progressObserver.next(1);
                    progressObserver.complete();
                }
                resultObserver.next((<any>evt.target).response);
                resultObserver.complete();
            };
            xhr.upload.onprogress = (evt) => {
                if (progressObserver) {
                    progressObserver.next(evt.loaded / evt.total);
                }
            };
            xhr.onabort = (evt) => resultObserver.error("Upload aborted by user");
            xhr.onerror = (evt) => resultObserver.error("Error");
            xhr.send(formData);
        });
    }
}
Here is a call of the function, including the progress subscriber. With this solution the caller of the upload function must create/handle/tear down the progress subscriber.
this.fileUploadService.doWorkWithProgress(this.chosenSerie.formData, url, new Subscriber((progress) => console.log(progress * 100))).subscribe(
    (result) => console.log(result),
    (error) => console.log(error),
    () => console.log("request Completed")
);
Overall I preferred this solution to a "Pair" object with a single subscription. There is no null handling necessary, and I got a clean separation of concerns.
The example is written in Typescript, but similar solutions should be possible with other ReactiveX implementations.
I have some data stored on the client side by Session.set(...) (which is then rendered into a template).
This data changes dynamically... on the server side. How can I synchronize it, so the client updates its templates any time the data changes on the server? The best method would be publish/subscribe, but it's designed for use with a database.
This is what I have ended up with so far:
if (Meteor.isClient) {
    Session.setDefault('dynamicArray', [{text: "item1"}, {text: "item2"}]);

    Template.body.helpers({
        dynamicData: function(){
            return Session.get('dynamicArray');
        }
    });

    // place for code to sync dynamicArray with server
}

if (Meteor.isServer) {
    Meteor.startup(function () {
        var dynamicArray = [{text: "item3"}, {text: "item4"}, {text: "item5"}];
        // place for code to publish dynamicArray for client
    });
}
Regarding your comment, you will need to create a DynamicData collection first, located outside the .isClient and .isServer conditionals. From there, .find() will allow you to fetch data from the server in the form of a cursor, which can be iterated through using {{#each dynamicData}}. An example of how you might set up the collection and the helper is as follows:
DynamicData = new Mongo.Collection('dynamicData'); // sets up the new collection

if (Meteor.isClient) {
    Template.body.helpers({
        dynamicData: function(){
            // project only the dynamicArray field, e.g. from a document shaped like
            // { dynamicArray: [item1, item2, item3] }
            return DynamicData.find({}, {fields: {dynamicArray: 1}});
        }
    });
}
Of course, this depends on how the document(s) you are retrieving are structured and what you are using them for. For instance, if you're only looking to return a single dynamicArray you might be better off using:
return DynamicData.findOne({}, {fields: {dynamicArray: 1}}).dynamicArray;
...since this will return the array [item1, item2, item3] directly. This seems to be what you're looking for, since I used the same method to replace an initial over-reliance on session data to sync information. Either way, the key point is to make server info available to the client through the helpers, which bypasses the need to sync via session data. Hope this helps.
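To close the loop on the "place for code to publish/sync" comments in the question, a minimal publish/subscribe sketch could look like the following. The 'main' document id and the publication name are assumptions for illustration; if the autopublish package is still installed, the explicit publish/subscribe calls are not even needed.

DynamicData = new Mongo.Collection('dynamicData'); // the same collection declared above

if (Meteor.isServer) {
    Meteor.startup(function () {
        // keep one document holding the dynamic array; any later update to it
        // is pushed to subscribed clients automatically
        DynamicData.upsert({ _id: 'main' }, {
            $set: { dynamicArray: [{text: "item3"}, {text: "item4"}, {text: "item5"}] }
        });
    });

    Meteor.publish('dynamicData', function () {
        return DynamicData.find({});
    });
}

if (Meteor.isClient) {
    Meteor.subscribe('dynamicData');

    Template.body.helpers({
        dynamicData: function () {
            var doc = DynamicData.findOne({ _id: 'main' });
            return doc ? doc.dynamicArray : [];
        }
    });
}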
I have a mobile app that receives a push notification; when it does, I know there is a row that needs to be updated. I have tried two methods for retrieving just that one row:
var row = data_source.get(<id>);
row.dirty = true;
data_source.sync();
This tries to fire an UPDATE, which I could shoe-horn into doing what I want, but it's conceptually wrong.
The other option that I have tried:
data_source.read( { id : <id> } );
This fires off a READ request, with the ID in options.data. When I try to hook this up, Kendo complains about not getting an array back, and when I make the response into an array, it doesn't seem to work either.
How am I supposed to do this? Should I GET it outside the context of the DataSource, set the relevant parts, and then set the dirty bit to false?
Kendo doesn't do a single-row read out of the box. If it's not feasible to do a full read(), you have to do what you proposed. Try the following:
1) Make your server-side read method accept requests with or without an id: without one, it reads all items; with one, it reads your single row.
2) Use parameterMap together with an isPushNotification flag to control the read request params:
parameterMap: function(data, type) {
    if (type == "read") {
        if (isPushNotification)
            return { id: yourId };
        else
            return { id: 0 }; // get all
    }
}
3) Use requestEnd to decide how to deal with the read result:
requestEnd: function(e) {
    var type = e.type;
    if (e.type == 'read') {
        // examine the response (by count / additional flags added to the response object)
        var isFullReadResponse = ...;
        if (!isFullReadResponse) {
            // DIY
            e.preventDefault();
            UpdateSingleRow(e.response, e.sender.data());
        }
    }
}
4) Implement the UpdateSingleRow method as you proposed: "set the relevant parts, and then set the dirty bit to false" (see Refresh a single Kendo grid row).
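For step 4, a rough sketch of what UpdateSingleRow could look like; the id field name and the shape of the single-row response are assumptions:

function UpdateSingleRow(response, data) {
    var updated = response[0];                      // assume the single-row read returns one item
    for (var i = 0; i < data.length; i++) {         // data is dataSource.data()
        if (data[i].id === updated.id) {
            for (var field in updated) {
                data[i].set(field, updated[field]); // set() keeps bound widgets (e.g. the grid) in sync
            }
            data[i].dirty = false;                  // don't re-send this row as an UPDATE on the next sync
            break;
        }
    }
}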