React Native performance hints for FlatList

I'm having trouble scrolling lists smoothly on relatively old hardware (iPad 3 with iOS 9.3.5), although performance on newer devices is OK. I'm testing with approximately 8,000 items per list.
Any hint to improve performance is welcome. To keep the demo simple I'm using images from the web, but in my real project I load them locally (although that doesn't improve scrolling at all). Performance seems to degrade as I increase the number of columns: with one column the response is acceptable, though there is still some jitter, and my application requires four columns.
Here is my app link:
https://snack.expo.io/Hk46ErZrb
Thanks!!

This might help with the list options: http://www.reactnativeexpress.com/lists.
You can also optimise the data you are loading, for example by compressing the images and serving them via a CDN.
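As a starting point for those list options, here is a minimal sketch of common FlatList tuning props (the item size, data shape, and four-column layout are assumptions based on the question, not taken from the linked snack):

```tsx
import React from 'react';
import { FlatList, Image, View } from 'react-native';

// Assumed fixed cell height; adjust to your real layout.
const ITEM_SIZE = 90;

type Item = { id: string; uri: string };

// With a fixed row height, FlatList can skip measuring items entirely.
// When numColumns > 1, FlatList virtualizes by row, so `index` counts rows.
const getItemLayout = (_data: unknown, index: number) => ({
  length: ITEM_SIZE,
  offset: ITEM_SIZE * index,
  index,
});

export const Grid = ({ items }: { items: Item[] }) => (
  <FlatList
    data={items}
    numColumns={4}
    keyExtractor={(item) => item.id}
    getItemLayout={getItemLayout}
    initialNumToRender={12}  // roughly one screenful up front
    windowSize={5}           // shrink the offscreen render window
    maxToRenderPerBatch={8}  // smaller batches keep the JS thread responsive
    removeClippedSubviews    // detach offscreen views; helps most on old devices
    renderItem={({ item }) => (
      <View style={{ flex: 1, height: ITEM_SIZE }}>
        <Image source={{ uri: item.uri }} style={{ flex: 1 }} />
      </View>
    )}
  />
);
```

On hardware like the iPad 3, getItemLayout and a small windowSize usually make the biggest difference, since they cut both layout measurement and the number of live views.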

Related

Ideal Number of Hostnames to Parallelize Downloads

We are setting up a CDN to serve CSS, JS, and images, and we are wondering what the ideal number of hostnames to distribute the resources across would be. This technique is used by many websites to increase download parallelization and speed up page loading. However, DNS lookups slow down page loading, so it is not the case that the more hostnames you have, the better the performance.
I've read somewhere that the ideal number is between 2 and 4. I wonder if there is a rule of thumb that applies to all webpages, or one that depends on the number and size of the resources being served.
Specific case: our websites are composed of two kinds of pages. One kind serves a list of thumbnails (15-20 or so images, varying) and the other serves a Flash or Shockwave application (mostly games) with far fewer images. Obviously, we have regular JS, images, and CSS on all pages. By regular I mean correctly optimized elements: one CSS file, a few UI images, two or three JS files...
I would love to have answers for our specific case, but I would also be very interested in general answers!
We (Yahoo!) did a study back in 2007 that showed that 2 is a good number; anything more than that doesn't improve page performance, and in some cases having more than 2 domains degraded it.
What I would recommend is: if you have an A/B testing infrastructure, go ahead and try it out on your site; use different numbers of domains and measure the page load time from your end users.
If you don't have an A/B testing framework, then just try a different value for a few days, measure it, try a new one, measure that... do this until you find the point where performance starts to degrade.
There is no silver bullet for this recommendation. It depends a lot on how many assets there are on each page and which browsers (and hence how many parallel downloads) your end users use. Hope that helps.
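Whichever number you settle on, each asset should map to the same hostname on every page view so browser and CDN caches stay warm. A minimal sketch of that deterministic sharding (the hostnames and hash function are illustrative assumptions, not from the answer):

```ts
// Deterministically shard asset URLs across N CDN hostnames.
// The hostnames below are placeholders; use your own CDN aliases.
const HOSTS = ['static1.example.com', 'static2.example.com'];

// Cheap string hash so the same path always maps to the same host,
// which keeps browser and CDN caches effective.
function hash(path: string): number {
  let h = 0;
  for (const ch of path) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

export function assetUrl(path: string): string {
  const host = HOSTS[hash(path) % HOSTS.length];
  return `https://${host}${path}`;
}

// Example: assetUrl('/img/thumb-42.jpg') always yields the same hostname.
```

Picking a host at random per request would defeat caching, since the same image could end up downloaded once per hostname.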

How to handle large numbers of pushpins in Bing Maps

I am using Bing Maps with the AJAX control, and I have about 80,000 locations to show as pushpins. The purpose of the feature is to allow a user to search for restaurants in Louisiana and click a pushpin to see the health inspection information.
Obviously it doesn't do much good to have 80,000 pins on the map at one time, but I am struggling to find the best solution to this problem. Another problem is that the distance between these locations is very small (all 80,000 are in Louisiana). I know I could use clustering to keep from cluttering the map, but it seems like that would still cause performance problems.
What I am currently trying to do is simply not show any pins until a certain zoom level, and then only show the pins within the current view. The way I am attempting to do that is by using the viewchangeend event to find the zoom level and the boundaries of the map, and then querying the database (through a web service) for any points in that range.
It feels like I am going about this the wrong way. Is there a better way to manage this large amount of data? Would it be better to load all points initially and have the data on hand without having to hit my web service every time the map moves? If so, how would I go about it?
I haven't been able to find answers to my questions, which usually means I am asking the wrong questions. If anyone could help me figure out the right questions, it would be greatly appreciated.
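For reference, the viewchangeend approach described above might look like this minimal sketch against the Bing Maps V7 AJAX API (the /pins web service and the zoom threshold are hypothetical):

```ts
// Assumes the V7 AJAX control is loaded and `map` is a Microsoft.Maps.Map.
declare const Microsoft: any;
declare const map: any;

const MIN_ZOOM = 13; // hypothetical threshold below which no pins are shown

Microsoft.Maps.Events.addHandler(map, 'viewchangeend', async () => {
  map.entities.clear();
  if (map.getZoom() < MIN_ZOOM) return; // zoomed too far out: show nothing

  // Query only the points inside the current view.
  const b = map.getBounds(); // a Microsoft.Maps.LocationRect
  const url = `/pins?n=${b.getNorth()}&s=${b.getSouth()}` +
              `&e=${b.getEast()}&w=${b.getWest()}`; // hypothetical web service
  const points: { lat: number; lon: number }[] = await (await fetch(url)).json();

  for (const p of points) {
    map.entities.push(
      new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(p.lat, p.lon)),
    );
  }
});
```

Debouncing the handler, or discarding responses that arrive after a newer request has started, avoids hammering the web service while the user pans.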
Well, I've implemented a slightly different approach to this. It was just a fun exercise, but I'm displaying all my data (about 140,000 points) in Bing Maps using the HTML5 canvas.
I load all the data to the client up front. Then I optimized the drawing process enough that I could attach it to the viewchange event (which fires continuously while the view changes).
I've blogged about this. You can check it here.
My example does not have interaction on it, but that could easily be added (it should be a nice topic for a blog post). You would then have to handle the events manually and search for the corresponding points yourself or, if the number of points to draw and/or the zoom level were below some threshold, show regular pushpins.
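To make the idea concrete, here is a stripped-down sketch of that canvas approach (the overlay wiring is simplified, and `map` and `points` are assumed to already exist):

```ts
// Draw preloaded points onto a canvas overlaid on a Bing Maps V7 map.
declare const Microsoft: any;
declare const map: any;
declare const points: { lat: number; lon: number }[];

const canvas = document.createElement('canvas');
canvas.style.position = 'absolute';
canvas.style.pointerEvents = 'none'; // the map underneath keeps handling input
map.getRootElement().appendChild(canvas);

function redraw(): void {
  canvas.width = map.getWidth();
  canvas.height = map.getHeight();
  const ctx = canvas.getContext('2d')!;
  ctx.fillStyle = 'rgba(200, 30, 30, 0.8)';
  for (const p of points) {
    // Project each geographic location to pixel coordinates in the control.
    const px = map.tryLocationToPixel(
      new Microsoft.Maps.Location(p.lat, p.lon),
      Microsoft.Maps.PixelReference.control,
    );
    if (px) ctx.fillRect(px.x - 1, px.y - 1, 3, 3); // one small square per point
  }
}

// Redrawing on every 'viewchange' tick is what makes panning feel live.
Microsoft.Maps.Events.addHandler(map, 'viewchange', redraw);
redraw();
```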
Anyway, another option, if you're not restricted to Bing Maps, is to use the likes of Leaflet. It allows you to create a canvas layer: a tile-based layer rendered client-side using the HTML5 canvas. It opens up a new range of possibilities; see for example this map in GisCloud.
Yet another option, although more suitable for static data, is a technique called UTFGrid. The lads who developed it can certainly explain it better than I can, but it scales to as many points as you want with phenomenal performance. It consists of a tile layer with your info plus an accompanying JSON file containing something like an "ascii-art" description of the features on the tiles. Then a library called wax provides full mouse-over and mouse-click events on it, without any performance impact whatsoever.
I've also blogged about it.
I think clustering would be your best bet if you can get away with using it. You say you're concerned that clustering would still cause performance problems? I tested it with 80,000 data points in the V7 Interactive SDK and it seems to perform fine. Test it yourself by going to the link and changing the line in the Load module - clustering tab:
TestDataGenerator.GenerateData(100,dataCallback);
to
TestDataGenerator.GenerateData(80000,dataCallback);
then hit the Run button. The performance seems acceptable to me with that many data points.

The best realtime search platform for realtime indexing of a large DB?

I am building a site at the moment which requires realtime indexing of results (not 10,000 docs per second; I mean millisecond updates). I went about researching the different technologies and originally came up with dozens of different platforms. I have been able to narrow my choices down to about three by process of elimination (documentation complexity, the types of support available, etc.):
Lucene
Xapian
Sphinx
I originally tried to choose between them based on the sites using them, but then, to my surprise, found that lots and lots of high-profile sites trust all three. I also found that all three allow millisecond updates.
I thought about Sphinx originally because it is the only one of the three to claim full realtime indexing instead of near-realtime indexing, only to find that its realtime indexing is still in beta (not sure how reliable it would be, to be honest).
I am leaning towards Lucene since, when Solr gets realtime indexing, moving my schema to Solr will be insanely easy.
I am also leaning towards Xapian because a number of sites I know implement it very well.
I am having huge problems deciding between these technologies and which one would be best suited.
I am looking at a site with millions, maybe even tens of millions, of records that needs an index that can be appended to, deleted from, and updated in realtime.
Can anyone share their experiences working with realtime search platforms to help me choose the right one? I am open to suggestions that are not listed here :).
P.S. I use MongoDB, so please don't post SQL-only search platforms :).
I am answering this question with what I found, after a couple of weeks, to be the best option.
I actually found Lucene the best, since Zoie's user base was, and is.....**. I wanted to post a topic on the Google group (the only form of support), and a couple of weeks later it still has not been moderated and approved for display.
That really put me off Zoie, so in the end I decided to give Lucene a try.
Thanks anyway :).
I would recommend Zoie, which is based on Lucene.

Performance issue with QGraphicsScene::createItemGroup

I'm using the Qt graphics API to display layers in some GIS software. Each layer is a group containing graphic primitives. I have a performance issue when loading fairly large data sets; for example, this is what happens when making a group composed of ~96k circular paths (points from a shapefile):
Callgrind image: http://f.imagehost.org/0750/profile-createItemGroup.png
The complete callgrind dump is here.
The QGraphicsScene::createItemGroup() call takes about 150 seconds to complete on my 2.4 GHz Core 2, and it seems all this time is spent in QGraphicsItemPrivate::updateEffectiveOpacity(), which itself spends 37% of its time calling QGraphicsItem::flags() 4 billion times (the data comes from a test script with no GUI, just a scene, not even tied to a view).
All the rest is pretty much instantaneous (creating the items, reading the file, etc.). I tried disabling the scene's index before creating the group and got similar results.
What could I do to improve performance in this case? If I can't, is there a way to create the groups faster?
After studying the source code a little, I found that updateEffectiveOpacity() is O(n²) in the number of children of the item's parent (search for the method qt_allChildrenCombineOpacity). This is probably also the reason the method disappeared in Qt 4.6, apparently replaced by something else. Anyway, you should try setting the ItemDoesntPropagateOpacityToChildren flag on the group item (i.e. you'll have to create the group item yourself), at least while adding all the items.

What is the general consensus on preemptively loading images on web pages?

So I've been wondering this for a while. I'm currently building a website which is very image-oriented. What do people think of preloading images? How do they do it (JavaScript versus display:none CSS)?
As users, what do you think of it? Does the speed gained while using the website justify the extra time you have to spend waiting for it to load?
From a programmer's standpoint, what is better practice?
If you really have to preload (e.g. for rollover images), definitely use CSS. JavaScript can be blocked, and you can't rely on it.
Rather than preloading multiple images, I recommend you use CSS sprites:
http://css-tricks.com/css-sprites/
This is a technique where you consolidate multiple images into a single image (and use background-position to select the correct portion), which reduces the number of HTTP requests made to the web server and the overall page load time.
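For contrast, here is a minimal sketch of the JavaScript route mentioned in the question (the image URLs are placeholders); the caveat above still applies, since scripts can be blocked:

```ts
// Preload images by creating detached Image objects; the browser caches
// the responses, so later <img> tags or CSS rollovers load instantly.
function preload(urls: string[]): Promise<void[]> {
  return Promise.all(
    urls.map(
      (url) =>
        new Promise<void>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve();
          img.onerror = () => reject(new Error(`Failed to preload ${url}`));
          img.src = url; // starts the download immediately
        }),
    ),
  );
}

// Placeholder URLs; kick this off after the page's own assets have loaded
// so preloading doesn't compete with the initial render.
window.addEventListener('load', () => {
  preload(['/img/hover-1.png', '/img/hover-2.png']).catch(console.error);
});
```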
