I'm using an infinite loop in Lottie.
I wonder whether this will degrade device or browser performance.
https://airbnb.io/lottie/
Related
Images don't block initial render. I believe this to be mostly true: requesting/downloading the image from the network does not happen on the main thread. I'm assuming decoding/rasterizing an image also happens off the main thread in some browsers (but I might be wrong).
Often I've heard people simply say "just let the images download in the background". Taking that advice at face value, images should have zero impact on the performance of your web app as measured by Time to Interactive or Time to First Meaningful Paint. In my experience, they do have an impact: on an image-heavy page, lazy loading images (using IntersectionObserver, as sketched below) improves those metrics by 2-4 seconds versus "just letting them download in the background".
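For reference, here's a minimal sketch of the lazy-loading approach I mean (TypeScript; the data-src attribute and the 200px rootMargin are just my illustrative choices):

    // Minimal lazy-loading sketch: real URLs live in data-src and are only
    // promoted to src once the element approaches the viewport.
    const lazyObserver = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? "";  // kick off the real download now
        obs.unobserve(img);               // each image only needs this once
      }
    }, { rootMargin: "200px" });          // begin shortly before it's visible

    document.querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach(img => lazyObserver.observe(img));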
What specific browser internals/steps to decode/paint an image causes performance regressions when loading a web page? What steps take resources from the main thread?
That's a bit broad; many things can affect the rest of the page, depending on a lot of different factors.
Network requests are not handled on the main thread, so there should be little overhead there.
Parsing the metadata is implementation-dependent; it could use the same process or a dedicated one, but generally it's not a heavy call.
Decoding the image data and rasterization are implementation-dependent too: some browsers will do it directly when they get the data, while others will wait until it's really needed, though there are ways to force it to happen synchronously (on the same thread).
Painting the image may be hardware-accelerated (done on the GPU) or done in software (on the CPU), and in the latter case it can impact general performance, but modern renderers will probably skip images that are not in the current viewport.
And finally, having your <img> elements be resized by their content will cause a complete reflow of your page. That's usually the most noticeable performance hit when loading images in a web page, so be sure to set the sizes of your images through CSS (or width/height attributes) to prevent that reflow; see the sketch below.
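If you prefer to do it from script, here's a minimal sketch of reserving that space up front (the 640×360 dimensions and the URL are placeholders):

    // Reserve the layout box before the image arrives so its load event
    // can't reflow the page; equivalent to width/height attributes in HTML.
    const img = document.createElement("img");
    img.width = 640;            // intrinsic width known ahead of time
    img.height = 360;           // intrinsic height known ahead of time
    img.src = "photo.jpg";      // hypothetical URL
    document.body.append(img);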
When we load a very heavy web page with a huge HTML form and lots of event-handler code on it, the page gets very laggy for a while, responding to any user interaction (like changing input values) with a 1-2 second delay.
Interestingly, after a while (depending on the size of the page and the code to parse, but around 1-2 minutes) it gets as snappy as it normally is with average-size web pages. We tried the profiler in the dev tools to see what could be running in the background, but nothing surprising is happening.
No network traffic is taking place after the page load, no blocking code is running, and according to the profiler, HTML parsing is long gone by that time.
My questions are:
do browsers do any kind of indexing on the DOM in the background to speed up queries of elements?
any other type of optimization like caching repeated function call results?
what causes it to "catch up" after a while?
Note: it is obvious that our frontend is quite outdated and inefficient but we'd like to squeeze out everything from it before a big rewrite.
Yes, modern browsers, or more precisely modern JavaScript runtimes, perform many optimisations during load and, more importantly, during the page lifecycle. One of them is lazy / just-in-time compilation, which in general means the runtime observes demanding or frequently executed patterns and translates them to a faster, "closer to the metal" format, often at the cost of higher memory consumption. An amusing fact is that such optimisations often make "seemingly ugly but predictable" code faster than well-thought-out, complex, "hand-crafted, optimised" code.
But I'm not completely sure this is the main cause of the phenomenon you are describing. Initial slowness and unresponsiveness is more often caused by the combined load of network requests, blocking code, HTML and CSS parsing, and CPU/GPU rendering, i.e. the wire/cache -> memory -> CPU/GPU loop, which is not that dependent on the JavaScript optimisations mentioned above.
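To illustrate the kind of pattern a JIT rewards, here's a sketch (the names are mine, not from any particular engine): a hot function that always sees one object shape stays monomorphic, so the runtime can specialise it; feeding it mixed shapes would deoptimise it back to a slower generic path.

    // A hot function fed a single consistent object shape stays
    // "monomorphic", letting the JIT compile its property loads down to
    // direct offsets instead of generic dictionary lookups.
    interface Point { x: number; y: number; }

    function lengthSquared(p: Point): number {
      return p.x * p.x + p.y * p.y;
    }

    let total = 0;
    for (let i = 0; i < 1_000_000; i++) {
      total += lengthSquared({ x: i, y: i + 1 });  // same shape every call
    }
    console.log(total);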
Further reading:
http://creativejs.com/2013/06/the-race-for-speed-part-3-javascript-compiler-strategies/
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool
Infinite loops are taught as evil. Is there ever a good use?
When you code one by accident, CPU usage peaks, and I imagine memory does too, especially if you're assigning variables inside the loop.
If there is a good use, how are those issues prevented?
Basically every operating system or server spins in an infinite loop.
To avoid the memory issues, you normally wouldn't allocate memory inside the loop unless it can be freed within the same iteration. For example, you would allocate memory for a request and free it once the request has been served.
To avoid CPU peaks, you would wait for interrupts (in the case of an OS), or call a blocking function like poll(), which waits for a new event, once per iteration; see the sketch below.
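In TypeScript terms the same shape looks like this (nextRequest is a hypothetical event source standing in for poll(); the loop blocks each iteration instead of spinning, and per-request memory dies with the iteration):

    // "Infinite" server loop that blocks instead of spinning: CPU use stays
    // near zero while idle, and allocations are scoped to one iteration.
    async function serve(nextRequest: () => Promise<string>): Promise<void> {
      while (true) {
        const request = await nextRequest();  // sleeps until work arrives
        const buffer = new Uint8Array(1024);  // allocated for this request only
        // ... handle `request` using `buffer` ...
        // both go out of scope here, so the GC can reclaim them
      }
    }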
First of all, the word "infinite" in this phrase should be taken a bit more loosely. I am presuming you are talking about a while (true) loop with a break instruction, which will eventually end, as opposed to a loop which will run until the end of time and all humanity.
In the former sense, yes, there are use cases where it's appropriate:
Games use infinite game loops.
Embedded programs use infinite main loops.
Windows applications use infinite message loops.
One example where they might be used inappropriately is when they are used to create time delays by spinning the CPU, which is what novice programmers tend to do to avoid dealing with timer interrupts (or timer events, or other non-procedural constructs). However, when spinning the CPU is done to acquire a shared resource, then the "infinite loop" is also a perfectly valid implementation choice. Even the .NET CLR Monitor, for example, tries spinning for several hundred cycles before issuing a true wait on a kernel event handle and creating a more expensive thread switch.
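This isn't the CLR's actual implementation, but a sketch of the same spin-then-block shape using JavaScript's Atomics on a SharedArrayBuffer (it assumes a worker context where Atomics.wait is permitted, and the spin count of 300 is arbitrary):

    // slot[0] is 0 = unlocked, 1 = locked, backed by a SharedArrayBuffer.
    function lock(slot: Int32Array): void {
      for (let i = 0; i < 300; i++) {                  // bounded spin first
        if (Atomics.compareExchange(slot, 0, 0, 1) === 0) return;
      }
      while (Atomics.compareExchange(slot, 0, 0, 1) !== 0) {
        Atomics.wait(slot, 0, 1);                      // block while locked
      }
    }

    function unlock(slot: Int32Array): void {
      Atomics.store(slot, 0, 0);
      Atomics.notify(slot, 0, 1);                      // wake one waiter
    }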
In addition to programs that run on event loops (like the system processes that @Christoph mentions), some languages have a concept known as a generator, which allows and even encourages you to write an infinite loop. The trick is that the generator only runs for a finite time when it "yields" (returns) some expression. After that, its state is "frozen" until it is needed again. For example, in Python you can have an object that alternates between LEFT and RIGHT:
    def side():
        while True:
            yield "LEFT"
            yield "RIGHT"

    a = side()
    print(next(a))
    print(next(a))
    print(next(a))
This prints LEFT, RIGHT, LEFT. The side function looks like an infinite loop because of the while True: statement, but it only ever runs for a finite amount of time per call.
All the applications on your handset run in infinite event loops.
There are plenty of examples in Windows of applications triggering code at fairly high and stable framerates without spiking the CPU.
WPF/Silverlight/WinRT applications can do this, for example. So can browsers and media players. How exactly do they do this, and what API calls would I make to achieve the same effect from a Win32 application?
Clock polling doesn't work, of course, because that spikes the CPU. Neither does Sleep(), because you only get around 50ms granularity at best.
They are using multimedia timers. You can find information on MSDN here.
On each multimedia timer event, only the view is invalidated (e.g. with InvalidateRect); the actual drawing happens in the WM_PAINT / OnPaint handler.
Actually, there's nothing wrong with Sleep().
You can use a combination of QueryPerformanceCounter/QueryPerformanceFrequency to obtain very accurate timings, and you can create a loop which, on average, ticks forward exactly when it's supposed to.
I have never seen Sleep() miss its deadline by as much as 50 ms. However, I've seen plenty of naive timers that drift, i.e. they accumulate a small delay and consequently update at noticeably irregular intervals. This is what causes uneven framerates.
If you play a very short beep on every nth frame, this is very audible.
Also, logic and rendering can run independently of each other. The CPU might not appear to be that busy, but I bet the GPU is hard at work.
Now, about not hogging the CPU: CPU usage is just a breakdown of CPU time spent by a process during a given sample (the thread scheduler actually tracks this). If you have a target of 30 Hz for your game, you're limited to 33 ms per frame; if you can't hit that target (too slow a CPU or too slow code) you won't be running at 30 Hz, and if you finish under 33 ms you can yield processor time, effectively freeing up resources.
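Here's a sketch of that drift-free pattern in browser terms, using performance.now() as the monotonic clock (the analogue of QueryPerformanceCounter); the 30 Hz target is just an example. Each deadline is derived from the loop's start time rather than from "now + interval", so scheduling jitter can't accumulate:

    // Drift-free 30 Hz ticker: deadlines come from the start time, so a
    // late frame shortens the next wait instead of shifting every frame.
    const intervalMs = 1000 / 30;
    const start = performance.now();
    let frame = 0;

    function tick(): void {
      frame++;
      // ... update and render here ...
      const nextDeadline = start + frame * intervalMs;
      const delay = Math.max(0, nextDeadline - performance.now());
      setTimeout(tick, delay);    // yield the CPU until the next deadline
    }
    tick();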
This might be an interesting read for you as well.
On a side note, instead of yielding that time you could effectively be doing prep work for future computations. Some games, when they are not under the heaviest of loads, actually do things such as sorting and memory defragmentation; a little bit here and there adds up in the end.
I browse to a web page that has a JavaScript memory leak. If I refresh the page multiple times, it will eventually use up a significant amount of memory, and the JavaScript on the page will slow down. On this particular page, I notice a very significant slowdown once IE gets up to 100 MB of RAM, even though I have multiple GB free.
My question is: why should leaked objects cause JavaScript to run slowly? Does anyone have any insight into how the JS interpreter in IE is designed, such that this happens?
Even without swapping, that's caused by the "stupid" implementation of the garbage collector for JavaScript in IE. It uses heuristics that run the GC more often when there are more objects.
There's no way to avoid this, other than avoiding memory leaks like hell and not creating too many JavaScript objects.
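For what it's worth, here's a sketch of the classic IE-era pattern behind such leaks (the element id and handler are illustrative): a circular reference between a DOM node and a JS closure, which IE's separately reference-counted DOM could not collect, so memory grew with every page load.

    function attach(): void {
      const el = document.getElementById("button");   // hypothetical element
      if (!el) return;
      // The handler closes over `el`, and `el` references the handler:
      // a DOM -> JS -> DOM cycle that old IE never broke, even on refresh.
      el.onclick = function () { el.style.color = "red"; };
    }
    // Fix of the era: break the cycle yourself, e.g. set el.onclick = null
    // on unload, or avoid closing over the element itself.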
I would imagine that a memory leak could result in some memory fragmentation, which could slow the application down. I'm not sure how this works, but is it possible that parts of the JS code are still running in the background as orphaned processes? That could explain the slowdown: the page gets busier and busier while you're not actually seeing the old copies running.
I could be pulling that out of my ass though.