I have an infinite ListView in Flutter, and each item in that list needs to make an HTTP call to get some data from the server. I want to optimize this: while the user is scrolling, I want to prevent items from making HTTP calls.
Any ideas on how to do that?
A common solution, used by most mobile apps, is to fetch your cells on demand as the user scrolls. For instance, load 20 cells initially and, when the user reaches the end of the list, make another call to load 20 more cells.
You can detect the user's scroll position like so:
var scrollController = ScrollController();
scrollController.addListener(scrollListener);

void scrollListener() {
  double maxScroll = scrollController.position.maxScrollExtent;
  double currentScroll = scrollController.position.pixels;
  if (currentScroll >= maxScroll) {
    // fetch more cells
  }
}
Two ways to optimize could be:
ListView.builder(): use the builder constructor to build items lazily. This avoids a burst of HTTP calls at the beginning, since items are built only as the user scrolls.
Cache: cache the HTTP responses for a duration based on your needs.
I found a little solution to my problem.
I used the visibility_detector package (https://pub.dev/packages/visibility_detector).
In each item, I wait for 2 seconds; if the item is still visible, I make the HTTP call, otherwise I do nothing.
I am trying to develop a webapp using Google Apps Script to be embedded into a Google Site which simply displays the contents of a Google Sheet and filters it using some simple parameters. For the time being, at least. I may add more features later.
I got a functional app, but found that filtering could often take a while as the client sometimes had to wait up to 5 seconds for a response from the server. I decided that this was most likely due to the fact that I was loading the spreadsheet by ID using the SpreadsheetApp class every time it was called.
I decided to cache the spreadsheet values in my doGet function using the CacheService and retrieve the data from the cache each time instead.
However, for some reason this has meant that what was a 2-dimensional array is now treated as a 1-dimensional array. And, so, when displaying the data in an HTML table, I end up with a single column, with each cell being occupied by a single character.
This is how I have implemented the caching; as far as I can tell from the API reference I am not doing anything wrong:
function doGet() {
  CacheService.getScriptCache().put('data', SpreadsheetApp
      .openById('####')
      .getActiveSheet()
      .getDataRange()
      .getValues());
  return HtmlService
      .createTemplateFromFile('index')
      .evaluate()
      .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

function getData() {
  return CacheService.getScriptCache().get('data');
}
This is my first time developing a proper application using GAS (I have used it in Sheets before). Is there something very obvious I am missing? I didn't see any type restrictions on the CacheService reference page...
CacheService stores Strings, so objects such as your two-dimensional array will be coerced to Strings, which may not meet your needs.
Use the JSON utility to take control of the results.
myCache.put( 'tag', JSON.stringify( myObj ) );
...
var cachedObj = JSON.parse( myCache.get( 'tag' ) );
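To see why the raw put() loses the structure, here is a runnable illustration in plain JavaScript; a simple object stands in for CacheService (a hypothetical stand-in, since the real service only exists inside Apps Script and only stores Strings):

```javascript
// A stand-in cache that coerces values to String, like CacheService does.
const cache = {
  store: {},
  put(key, value) { this.store[key] = String(value); },
  get(key) { return this.store[key]; },
};

const values = [["Name", "Score"], ["Ada", "99"]]; // like getValues()

// Without JSON the 2-D array is coerced to "Name,Score,Ada,99", and indexing
// the returned string yields single characters -- the single-column symptom.
cache.put("raw", values);
console.log(cache.get("raw")[0]); // "N"

// With JSON the two-dimensional structure survives the round trip.
cache.put("data", JSON.stringify(values));
const restored = JSON.parse(cache.get("data"));
console.log(restored[1][0]); // "Ada"
```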
The cache expires. Without an expirationInSeconds parameter, the put method expires entries after 10 minutes. If you need your data to stay alive for more than 10 minutes, you need to specify an expirationInSeconds, and the maximum is 6 hours. So if you specifically do NOT want the data to expire, the cache might not be the best fit.
You can use Cache for something like controlling how long a user can be logged in.
You could also try using a global variable, which some people would tell you never to use. To declare a global variable, define the variable outside of any function.
I have the following setup: I'm listing all items on the frontpage (publication frontpageItems) and listing selected items on a userpage (publication userpageItems). I'm sorting the items on both pages by a lastactivity field, and I'm doing this server-side in the publication rather than on the client, since I want the frontpage to be static once loaded.
Whenever the page initially loads, everything sorts fine, i.e. 1,2,3.
When I navigate from the frontpage to the userpage, I get a subset, for example 2,3.
When I navigate back to the frontpage, the sort order is 2,3,1.
I assume this is because Meteor caches the items, but the sort order is definitely wrong here. Refreshing the frontpage makes the sort correct again.
Is there any way to fix this, e.g. clear the subscription on page switch? I'm using iron-router, by the way, to subscribe to the publications before page load. Adding client-side sorting plus reactive: false on the client solves my problem, but I can't use that since I DO need reactivity on the limit of the subscription for pagination/infinite scrolling.
Or, as a workaround, is it possible to disable reactivity on the client for sort, but keep it for limit?
As David mentioned below, I did need sorting on the client, so I held on to that and tried some different directions in my publication to achieve a sort of partial reactivity on the client.
I ended up implementing a publication with an observeChanges pattern and sorting on lastactivity on the client side. This publication ensures that:
Initially, all items are sent to the client (within the limit, of course)
Whenever an item is changed, the lastactivity field is removed so it is not updated, but all other attributes are updated and sent to the client
Whenever an item is added, it gets a later lastactivity value than the beforeLastactivity variable and thus is not added
Increasing the limit for infinite scrolling keeps working
When a client refreshes, everything is sent down to the client again because beforeLastactivity gets updated
Meteor.publish('popularPicks', function(limit, beforeLastactivity) {
  var init = true;
  var self = this;
  if (!beforeLastactivity) {
    beforeLastactivity = new Date().getTime();
  }
  if (!limit) {
    limit = 18;
  }
  var query = Picks.find({}, {
    limit: limit,
    sort: { lastactivity: -1 }
  });
  var handle = query.observeChanges({
    added: function(id, doc) {
      // added fires synchronously for the initial result set while init is
      // still true; items added later (newer lastactivity) are ignored.
      if (init && doc.lastactivity < beforeLastactivity) {
        self.added('picks', id, doc);
      }
    },
    changed: function(id, fields) {
      // Strip lastactivity so updates to it never reach the client,
      // keeping the client-side sort order stable.
      delete fields.lastactivity;
      self.changed('picks', id, fields);
    }
  });
  init = false;
  self.ready();
  self.onStop(function() {
    handle.stop();
  });
});
As I explained in the answer to this question, sorting in the publish function has no effect on the order of documents on the client. Because you are using a limit, however, sorting on the server does affect which items end up on the client.
I've read the rest of your question a dozen times, and I'm unclear exactly which situations require reactivity and which do not. Based on the phrase: "I want the frontpage to be static once loaded", I'd recommend using a method (instead of a subscription) to load the required items (maybe when the client connects?) and insert the data into a session variable.
Based on our discussion, if you could get an array of ids which you want to have appear on the page you can reactively update them by subscribing to a publish function like this:
Meteor.publish('itemsWithoutLastActivity', function(ids) {
  check(ids, [String]);
  return Items.find({_id: {$in: ids}}, {fields: {lastActivity: 0}});
});
This will publish all items with ids in the given array, but it will not publish the lastActivity property.
However, the documents received from your initial subscription will also still be reactive - here's where it gets really tricky. If you leave that first subscription running, your sort order will change when items get updated.
One way to deal with this is to not subscribe to the data to begin with. Make a method call to get an ordered set of ids and then subscribe to itemsWithoutLastActivity with those ids. You will need to come up with a creative way to order them on the client - maybe just an {{#each}} which iterates over the ids and each sub-template loads the required item by id.
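That last step can be sketched in plain JavaScript, outside of Meteor (the ids and docs here are made up for illustration): since the order of a publish function is not preserved on the client, order the documents yourself from the id array the method returned.

```javascript
// Order client-side documents by a server-provided array of ids.
const idOrder = ["b", "c", "a"];                         // e.g. returned by a Meteor method
const docs = [{ _id: "a" }, { _id: "b" }, { _id: "c" }]; // e.g. Items.find().fetch()

// Index the documents by _id, then walk the ordered id list.
const byId = new Map(docs.map((d) => [d._id, d]));
const ordered = idOrder
  .map((id) => byId.get(id))
  .filter((d) => d !== undefined); // skip ids whose doc hasn't arrived yet

console.log(ordered.map((d) => d._id)); // ["b", "c", "a"]
```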
I would like to create a web application.
There are cases when a user's request would result in refreshing the content of 2-3 or sometimes even more containers. The simplest way would be to run the requests one after the other in a sequence and handle the results one by one. However, I would like to have only one request if possible, collecting all the results in a single response.
Eg.: let's say, there is a main container (M), and some other (smaller) containers (A, B, C, etc.)
The simple way would be:
request main function from server for M, show it in M container
fetch some other content from server dedicated for A_container, show it in A_container
fetch some other content from server dedicated for B_container, show it in B_container
fetch some other content from server dedicated for C_container, show it in C_container
Depending on the main function requested by the user, only some of the containers have to refresh their content, not all of them.
I would like to minimise the number of requests, ideally to one. Of course, it could be done with JSON or XML.
I would like to collect all the responses from the server, encapsulate them (A_container: "this is content for A_container", C_container: "Other content for C_container"), and send them back to the client, where the client would unpack them and, based on the content, delegate each piece to the appropriate container.
My question is: how would you do this, knowing that the returned data is highly variable? In some cases it might contain HTML elements, quotes, double quotes, etc. If I am not mistaken, in some special cases this could break JSON and XML as well.
My idea was to divide the different contents with special separators (for an easy example, let's use xxx and yyy). It is a very ugly solution, but you cannot corrupt the content if the separators are unique enough.
The previous Json-like solution would look like this:
"A_containeryyythis is content for A_containerxxxC_containeryyyOther content for C_container"
Then, on the client side, split the string first by xxx, which in this case results in two array elements (strings), and split each of those again by yyy. The first element gives the container's id, the second the content for it. After this, the last step is to walk through the new arrays and display each result in the appropriate container.
Is there anybody who could suggest a nicer way to do this, or show a safe way to do it with JSON or XML?
The previous Json-like solution would look like this: "A_containeryyythis is content for A_containerxxxC_containeryyyOther content for C_container"
That would seem more like a "text/plain" response from the server that you would handle via a JavaScript split. What you really want to do is return a JSON object from the server, e.g.
{"A_container": "this is content for A_container", "C_container": "Other content for C_container"}
Then parse that on the client side with JSON.parse (eval() also works but is unsafe with untrusted input). From there I would populate each one of your "containers".
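A minimal runnable sketch of that JSON approach (container names are taken from the question; the actual DOM update is left as a comment so the snippet runs anywhere):

```javascript
// The server returns one JSON object keyed by container id. Stringifying a
// local object here stands in for the HTTP response body.
const responseText = JSON.stringify({
  A_container: "this is content for A_container",
  C_container: "Other content for C_container",
});

// JSON.parse is safe even when values contain quotes, HTML, etc.,
// because the serializer escapes them.
const updates = JSON.parse(responseText);

const applied = [];
for (const [id, html] of Object.entries(updates)) {
  // document.getElementById(id).innerHTML = html; // in the browser
  applied.push(id);
}
console.log(applied); // ["A_container", "C_container"]
```

Only the containers present in the response get touched, so the server can send updates for any subset of M, A, B, C.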
Format your response using XML:
<response>
<m>Content</m>
<a>Content</a>
<b>Content</b>
<c>Content</c>
</response>
Then parse the XML in your XMLHttpRequest callback:
var xmldoc = req.responseXML;
var root = xmldoc.getElementsByTagName('response').item(0);
var contentm = root.getElementsByTagName('m');
var contenta = root.getElementsByTagName('a');
var contentb = root.getElementsByTagName('b');
var contentc = root.getElementsByTagName('c');
// getElementsByTagName returns a NodeList, which is always truthy,
// so check its length instead.
if (contentm.length) {
  /* Update Container M from contentm[0] */
}
if (contenta.length) {
  /* Update Container A from contenta[0] */
}
if (contentb.length) {
  /* Update Container B from contentb[0] */
}
if (contentc.length) {
  /* Update Container C from contentc[0] */
}
I am displaying a list of items using a SAP ABAP column tree model, basically a tree of folder and files, with columns.
I want to load the sub-nodes of folders dynamically, so I'm using the EXPAND_NO_CHILDREN event which is firing correctly.
Unfortunately, after I add the new nodes and items to the tree, the folder is automatically collapsing again, requiring a second click to view the sub-nodes.
Do I need to call a method when handling the event so that the folder stays open, or am I doing something else wrong?
* Set up event handling.
LS_EVENT-EVENTID = CL_ITEM_TREE_CONTROL=>EVENTID_EXPAND_NO_CHILDREN.
LS_EVENT-APPL_EVENT = GC_X.
APPEND LS_EVENT TO LT_EVENTS.

CALL METHOD GO_MODEL->SET_REGISTERED_EVENTS
  EXPORTING
    EVENTS = LT_EVENTS
  EXCEPTIONS
    ILLEGAL_EVENT_COMBINATION = 1
    UNKNOWN_EVENT             = 2.

SET HANDLER GO_APPLICATION->HANDLE_EXPAND_NO_CHILDREN
  FOR GO_MODEL.

...

* Add new data to tree.
CALL METHOD GO_MODEL->ADD_NODES
  EXPORTING
    NODE_TABLE = PTI_NODES[]
  EXCEPTIONS
    ERROR_IN_NODE_TABLE = 1.

CALL METHOD GO_MODEL->ADD_ITEMS
  EXPORTING
    ITEM_TABLE = PTI_ITEMS[]
  EXCEPTIONS
    NODE_NOT_FOUND      = 1
    ERROR_IN_ITEM_TABLE = 2.
It's been a while since I've played with SAP, but I always found the SAP Library to be particularly helpful when I got stuck...
I managed to come up with this one for you:
http://help.sap.com/saphelp_nw04/helpdata/en/47/aa7a18c80a11d3a6f90000e83dd863/frameset.htm, specifically:
When you add new nodes to the tree model, set the flag ITEMSINCOM to 'X'.
This informs the tree model that you want to load the items for that node on demand.
Hope it helps?
Your code looks fine.
I would use the method ADD_NODES_AND_ITEMS myself if I were to add nodes and items ;)
Beyond that, try calling EXPAND_NODE after you have added the items/nodes and see if that helps.