Large dataset crashes my browser - d3.js

I am loading a dataset of ~ 80,000 rows into a time series chart object I've created, and it's crashing my browser.
I don't think this should be a problem for d3; this Crossfilter example handles a dataset of several hundred thousand rows. (Although there the data are aggregated, whereas I am plotting every point.)
I am not sure how to debug this. Chrome isn't giving me any useful messages and Google results are scarce. Any ideas?

If you are loading huge remote data in Chrome, this is a known issue: Chrome can crash when receiving large datasets via XHR. To work around it, you can either receive the data in chunks or receive it over WebSockets.
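For illustration, here is a minimal sketch of the chunked approach, assuming a hypothetical /data endpoint that serves the rows in pages (the endpoint, its page/size parameters, and drawChart are made up, and d3.json is used with its (error, data) callback):

// Fetch the dataset in pages instead of one giant XHR.
// The /data endpoint, its parameters, and drawChart() are
// assumptions for illustration, not part of the question.
function loadInChunks(page, size, rows, done) {
    d3.json('/data?page=' + page + '&size=' + size, function(error, chunk) {
        if (error || !chunk || !chunk.length) { done(rows); return; } // no more data
        loadInChunks(page + 1, size, rows.concat(chunk), done);       // next page
    });
}

loadInChunks(0, 5000, [], function(rows) {
    drawChart(rows); // render once everything has arrived
});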

It depends. If you are appending 80,000 elements to the DOM, that's huge, and I'm not surprised it's crashing the browser. The Crossfilter example does indeed have several hundred thousand rows, but it performs minimal DOM manipulation because of the aggregation (as you mentioned). You might take a look at canvas instead.
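As a rough sketch of the canvas approach (the data shape and the x/y scales here are assumptions for illustration):

// Draw every point onto a single <canvas> instead of appending one
// DOM/SVG node per row. x and y are assumed to be pre-built d3 scales,
// and data an array of {date, value} rows.
var canvas = d3.select('body').append('canvas')
    .attr('width', 800)
    .attr('height', 400)
    .node();
var ctx = canvas.getContext('2d');
ctx.fillStyle = 'steelblue';
data.forEach(function(d) {
    ctx.fillRect(x(d.date), y(d.value), 2, 2); // one tiny mark per point, no DOM nodes
});

Even with 80,000 rows this is a single element in the DOM, so layout and memory costs stay flat.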

You can disable browser extensions and try again. If your results are rendered in Flash or Java, disable Chrome's Java plugin; if they are shown as PDF, disable Chrome's PDF plugin and let the OS decide which program to use. It will still show up in Chrome but won't crash. See chrome://plugins/.
Sometimes Chrome has two plugins for one program; disable one of them.

Related

How to prevent rendering artifacts in PDFKit/NSImage?

I'm trying to create a tool to rasterise vector images (stored in PDF files) on macOS, but the resulting images contain artifacts around the edges of some shapes. Preview.app, on the other hand, always renders the same PDF flawlessly.
I've tried:
Loading the PDF document using PDFKit, and rendering the page using both draw(with:to:) and thumbnail(of:for:)
Loading the PDF document into an NSImage (which creates an NSPDFImageRep), and using cgImage(forProposedRect:context:hints:)
In both cases I get these aliasing-like artifacts. The PDF file is out of my control, so it can't be changed to fix any issues it might have. I'm currently trying to migrate away from Cairo (which renders correctly) to Apple's PDF rendering for performance reasons (PDFKit renders much more quickly, albeit with these artifacts).
Is there anything I've missed which would fix the output?
So it looks like the issue was caused by me rasterising PDFs on multiple threads (specifically, my tool rasterises PDFs at multiple resolutions, so I thought: why not do them simultaneously?).
Performing the operations sequentially on the main thread instead fixed it. I thought I had come up with a way to run it concurrently by initialising the CGContext manually (instead of using NSImage's lockFocus()/unlockFocus() and NSGraphicsContext.current), but alas, as soon as I add a context.scaleBy (to generate the images at different sizes), it fails again.
So for now I'm just doing it on the main thread until another solution comes along.

KnockoutJS operates extremely slowly in Internet Explorer

In brief: I've got a page with Knockout code that works absolutely fine in Google Chrome, Firefox, Safari, etc., but the performance is gone in Internet Explorer. I tried IE10 and IE11. It takes from 10 to 25 seconds to render about 150 rows.
Details: the page represents a work queue for users, where their tasks are shown. The requirement is not to use any paging on that page. Each row of the table has at least a dozen display variants (links, buttons, inputs, CSS styling, user-event handling, custom JS plugins, etc.). The average number of rows in production is 100-200+. Users can apply different filters and sort orders.
Things I've already tried:
reduced the number of computed properties (changed them to pureComputed where possible; a simplified sketch follows below)
reduced the use of the template, if and ifnot bindings (according to the profiler they are the most time-consuming) - I use visible where possible
tried to use the knockout-fast-foreach custom binding (https://github.com/brianmhunt/knockout-fast-foreach)
profiled the code with the IE and Chrome dev tools to eliminate memory leaks
profiled the code with ko.bindingReport.js (https://gist.github.com/kamranayub/65399fa247a6c182bc65)
The approaches specified above made the code almost twice as fast in Chrome (according to ko.bindingReport.js). But IE is still too slow - about 10 seconds for rendering.
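For reference, the computed-to-pureComputed change from the first item looks roughly like this (simplified, with made-up names):

// pureComputed suspends dependency tracking while nothing subscribes
// to it, which is cheaper when many rows are filtered out or off-screen.
// RowModel and fullName are illustrative names only.
function RowModel(data) {
    this.firstName = ko.observable(data.firstName);
    this.lastName = ko.observable(data.lastName);
    this.fullName = ko.pureComputed(function() {
        return this.firstName() + ' ' + this.lastName();
    }, this);
}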
[Profiler screenshots: Chrome vs. Internet Explorer]
Folks, any ideas?
"The table binding provides a fast method for displaying tables of data using Knockout. table is about ten times faster than nested foreach bindings."
This claims to be 10x faster.
https://github.com/mbest/knockout-table
You reduced the number of computed observables, but did you also reduce the number of plain observables? I'm not seeing a lot of editable fields. The ones that are not being edited on the page probably don't need to be observables. This has boosted my performance quite a few times.
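A minimal sketch of what I mean (TaskRow and its field names are made up):

// Make only the fields the user actually edits observable; leave
// read-only fields as plain properties with no dependency tracking.
// TaskRow and its fields are illustrative names, not from the question.
function TaskRow(data) {
    this.id = data.id;                            // never edited: plain property
    this.title = data.title;                      // never edited: plain property
    this.createdOn = data.createdOn;              // never edited: plain property
    this.status = ko.observable(data.status);     // editable: stays observable
    this.assignee = ko.observable(data.assignee); // editable: stays observable
}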

Web page loading time

I'm having many problems with the loading time of my web page:
www.alvaromillan.es.
I've tried minifying the JS and the images, but the problem is, as you can see, that my web site is just this one page, so every image and script lives in this document...
The loading time is really high, and even the smooth scrolling takes a long time and doesn't work properly the first time you use it...
Please, can any of you help me?
I took a quick look at the page using the Chrome developer tools, and while there are probably several things you could do that YSlow would suggest, I think the biggest gain would come from optimizing and spriting your images. 131 of the 156 requests on your page are for images. That's a lot of images, and many are fairly small. Also, a lot of the images seem quite large in bytes for their dimensions. Here is what I would do:
Combine the images into several sprite sheets, about 50k-100k per sheet.
Use the PNG format.
Quantize the sprite sheets to 8-bit PNGs. My guess is that you will not experience perceptible quality loss by doing this. You could use something like pngquant for this.
Use something like optipng to apply lossless compression to the quantized image (a sketch of both steps follows below).
I think this will yield dramatic improvements.
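For the quantize and optimize steps, a rough Node.js sketch shelling out to the two tools (the flags shown are the basic documented ones; check each tool's --help before relying on them):

// Quantize the sprite sheet to an 8-bit PNG, then recompress it
// losslessly. Assumes pngquant and optipng are on the PATH.
var execSync = require('child_process').execSync;
execSync('pngquant 256 --output sprites-8bit.png sprites.png'); // 8-bit quantization
execSync('optipng -o2 sprites-8bit.png');                       // lossless recompression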
As skaffman suggests, do run YSlow and/or Google PageSpeed for more thorough suggestions. I also like using webpagetest.org, which provides great metrics for optimizing pages.
Give the YSlow Firefox plugin a try. It will analyze the load times of your site and advise you on the best course of action to take to fix it.
OK, here's some quick initial thoughts...
Flush the page after the head so that the browser can start downloading those resources sooner.
Remove the iframe
jQuery appears to be loaded twice - once directly and once via google.load
Can you defer loading the JavaScript until later, e.g. put it at the bottom of the page or load it asynchronously?
Rather than preloading the slideshow images - why not load them on demand when they are clicked, or lazy load them after the page has finished loading (see the sketch below)?
Also, do you really want IE to emulate IE6?
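For the lazy-loading idea, a minimal jQuery sketch (the data-src attribute is a common convention I'm assuming here, not something already on your page):

// Keep the real URL in data-src so the browser doesn't fetch the image
// up front, then swap it into src once the page has finished loading.
$(window).on('load', function() {
    $('img[data-src]').each(function() {
        this.src = $(this).data('src'); // the download starts here
    });
});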

Can't find why some small images are taking 10 seconds to load

I am trying to optimize my site to score at least 90 on YSlow and PageSpeed.
I am doing pretty well, but in the following result there are 4 images that take 9-10 seconds to load. If you look at the detail, it shows that roughly 99% of those 10 seconds is spent connecting.
This is a Magento store, and I am not sure what I should do to fix this problem, because the images are not really big.
http://gtmetrix.com/reports/www.theprinterdepo.com/FyZjLbUX
Thank you
Interesting - even WebPageTest.org shows the same issues. There's nothing obvious (at least to me) as to why this is occurring, but I can make some guesses:
you're referencing an image from within the .css file on www.printerdepot.net that points to an image on www.printerdepot.com (an additional DNS lookup)? A sharding issue?
but then why does it only affect some images?
a bunch of other possibly related issues that are cascading through?
I'd suggest trying to convert them to Base64 data URIs and updating the CSS to see if that improves the performance. See this article for more.
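If you want to try that, here is a rough Node.js sketch for generating the data URI (the file name and selector are placeholders):

// Read a small image and print a CSS rule that embeds it as a Base64
// data URI, saving one HTTP request. File name and selector are
// placeholders for illustration.
var fs = require('fs');
var bytes = fs.readFileSync('icon.png');
var dataUri = 'data:image/png;base64,' + bytes.toString('base64');
console.log('.icon { background-image: url("' + dataUri + '"); }');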
Instead of having many small images, why don't you use a single image sprite? It will be much faster (http://www.w3schools.com/css/css_image_sprites.asp) and you will only need to send one image to the client.

Mobile webapp performance issues

I'm building a mobile web application, and even though I'm still at the prototyping stage, I'm having a hard time fixing certain performance problems.
The application itself (it works smoothly in desktop browsers, but is significantly sluggish in Mobile Safari): Hancards webapp prototype. You may log in as mifeng:wangwang or create a new user.
The overall clumsy performance would be tolerable though, except for one thing: the browser simply crashes (!) when you open a set page, tap 'view' (to enlarge all cards) and then try to go back to the previous page.
The code that gets executed when ‘view’ is tapped is this (very sluggish by itself as well; any way to improve it?):
if ($(this).hasClass('big')) {
    // Shrink: remove the wrapper and restore the small card styles.
    $('.card').unwrap().removeClass('big flippable').addClass('small');
    $(this).removeClass('big');
} else {
    // Enlarge: wrap each card and apply the big, flippable styles.
    $('.card').wrap('<div class="bigCardWrap" />').removeClass('small').addClass('big flippable');
    $(this).addClass('big');
}
And another thing, a pretty weird bug. Very often the 'word of the day' block won't display the text node for the last element (<div class="meaning">), even though it's in the code. The text will not show unless you 'shake' the DOM somehow (unticking and re-ticking one of the associated CSS properties can also achieve that). This happens in both desktop and mobile Safari.
The code that writes it in there is this:
// While we are here, also display the Word of the day
$.post('ajax.php', {action: 'stuff:showWotd'}, function(data) {
    // Decode the received data
    var msg = decodeResponse(data);
    // Insert the values
    $('.wotd .hanzi').text(msg.content[0]['hanzi']);
    $('.wotd .pinyin').text(msg.content[0]['pinyin']);
    $('.wotd .meaning').text(msg.content[0]['meaning']);
});
I don't expect you to advise me on how to fix the performance of the whole application (I will probably have to revise the overall scope of the project instead of trying to find workarounds), but I would at least like to see how to solve these two problems. Thank you!
The only performance issue I see in the script is the wrap/unwrap calls - adding and removing elements from the DOM tends to be fairly slow, and you can probably get the same effect by always having a wrapper element and changing its class rather than adding or removing it.
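Something along these lines (a sketch only; it assumes you add a permanent wrapper with a neutral class such as cardWrap to the markup):

// Toggle a class on a permanent wrapper instead of calling
// wrap()/unwrap(), so no DOM nodes are added or removed.
// The .cardWrap element is an assumed addition to the markup.
if ($(this).hasClass('big')) {
    $('.cardWrap').removeClass('bigCardWrap');
    $('.card').removeClass('big flippable').addClass('small');
    $(this).removeClass('big');
} else {
    $('.cardWrap').addClass('bigCardWrap');
    $('.card').removeClass('small').addClass('big flippable');
    $(this).addClass('big');
}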
However, the performance issues you are seeing are most likely in your css:
3D transforms can be much faster than 2D due to hardware acceleration. It looks like you already have this, though you do need to be careful about which elements they are applied to.
Shadows have real performance issues, especially when animated. Removing them will probably fix most of the slowness.
Rearranging background images can help - a single background image under transparent pages is faster than having a background image for each page.
