I'm writing my first complex Redux/React SPA and I expect to end up with at least 500 components.
I was told that I should merge some components because the fewer components there are, the faster the app will be (I guess that's a rendering issue?). On the other hand, everywhere I go (like the official docs) it says that components should be as small as possible.
Which approach is best?
I think you should read this article: https://medium.com/dailyjs/react-is-slow-react-is-fast-optimizing-react-apps-in-practice-394176a11fba
The main steps for improving your application:
1) small reusable components
2) using shouldComponentUpdate
3) using PureComponent
Big components are the wrong way to structure your application, because React spends more time rendering the screen (see the sketch below for points 2 and 3).
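To make points 2) and 3) concrete, here is a minimal sketch (the component and prop names are made up for illustration) of how React.PureComponent and shouldComponentUpdate let small components skip re-rendering:

```jsx
import React from 'react';

// Small presentational component: PureComponent shallowly compares
// props and state and skips render() when nothing changed.
class UserRow extends React.PureComponent {
  render() {
    return <li>{this.props.name}</li>;
  }
}

// Hand-written check: only re-render when the items array reference changes.
class UserList extends React.Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.items !== this.props.items;
  }
  render() {
    return (
      <ul>
        {this.props.items.map(item => (
          <UserRow key={item.id} name={item.name} />
        ))}
      </ul>
    );
  }
}
```

With many small components like UserRow, React can skip whole subtrees cheaply; one big component has to re-render everything whenever any part of its state changes.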
This is from this issue: https://github.com/sveltejs/svelte/issues/2546
Why is the statement "surely the incremental cost of a component is significantly larger?" true?
Can someone unpack that for me?
Further to hackape's answer, here's a chart that I hope will illustrate the difference.
On the left, we have traditional frameworks that take a black-box approach. The components work like an instruction set for the framework. As a result, the framework usually includes all the functionality available, since it doesn't know what may be used (caveat: some frameworks like Vue 3 now allow creating a bundle that only includes the parts needed). Svelte, on the other hand, compiles the components and injects the parts needed into each component. The tipping point happens when all the functionality added to each component surpasses the size of the framework (React, Vue, etc.). Given that the size of the injected Svelte script depends on the contents of the component, it is hard to tell, based on the number of components alone, when this tipping point may occur.
To be clear, when Rich said “cost” he was referring to the bundle size of the compiled code. In a web app's context that's obviously a cost. For illustration purposes, let's compare Svelte against React.
React needs both the 'react' runtime lib and the 'react-dom' renderer lib (aka the VDOM runtime) in order to work properly. So you pay the upfront cost of the size of these runtime libs. But for each component you add to your app's bundle, the “incremental cost” in terms of bundle size is just the size of the component's code, almost as-is if you don't use JSX. Even if you do, the inflation rate of code size, from source code to compiled code, is close to 1.
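As a rough illustration of that near-1 inflation (a made-up component, and roughly what Babel's classic JSX transform emits):

```jsx
import React from 'react';

// Source written with JSX:
const Greeting = ({ name }) => <div className="greeting">Hello, {name}!</div>;

// Roughly what the classic JSX transform produces: plain function calls,
// about the same number of bytes as the JSX above.
const GreetingCompiled = ({ name }) =>
  React.createElement("div", { className: "greeting" }, "Hello, ", name, "!");
```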
With Svelte, you don't need a VDOM runtime, so there's little upfront cost besides the tiny Svelte runtime lib. But for each component you add, your .svelte source code is compiled (and inevitably inflated) by the compiler into .js code, and the inflation rate is much greater than 1. That's why Rich said “the incremental cost of a component is significantly larger”.
Someone has done the math. According to halfnelson, we have these two equations:
Svelte Bundle Bytes = 0.493 * Source_Size + 2811
React Bundle Bytes = 0.153 * Source_Size + 43503
His calculation was done using minified code, so both multipliers are less than 1. You can tell the “incremental cost” per component of Svelte is roughly 3x that of React, but the upfront runtime cost is about 1/15 of React's.
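A quick back-of-the-envelope sketch using only the two formulas above shows where the lines cross:

```js
// halfnelson's fitted formulas (bytes of minified source -> bytes of bundle)
const svelteBundle = src => 0.493 * src + 2811;
const reactBundle  = src => 0.153 * src + 43503;

// Break-even: 0.493*S + 2811 = 0.153*S + 43503
const breakEven = (43503 - 2811) / (0.493 - 0.153);
console.log(Math.round(breakEven));               // ~119682 bytes (~120 KB) of source
console.log(Math.round(reactBundle(breakEven)));  // ~61814 bytes; svelteBundle gives the same here
```

So below roughly 120 KB of minified component source the Svelte bundle stays smaller, and above it React's lower per-component cost starts to win.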
Context:
We built a data-intensive app for the US region for just one client using ASP.NET MVC, and now we are slowly moving to ASP.NET Core. We have a requirement to develop a similar version for Canada; our approach was to maintain two different code bases even though the UI is 70% the same.
Problem:
Two code bases seem maintainable, but we end up doing double work whenever a generic component has to change. Now we have multiple clients coming from multiple regions, the UI can differ a little by client and region, and we are a bit confused about how to architect such an app with just one code base.
I am not sure what a maintainable and scalable approach would be.
One approach is a UI powered by a rules engine that is capable of showing and hiding components. How maintainable is this approach from a deployment perspective?
What other approaches could solve this problem?
The main approaches I can think of are:
1) Separate code bases and release pipelines. This seems to be your current approach.
Pros:
independent releases - no surprises like releasing a change to Canada which the other team made for US
potentially simpler code base - less configuration, fewer "if (region == 'CANADA')..."
independent QA - it's much simpler to automate testing if you're just testing one environment
Cons:
effort duplication as you've already noticed
2) One code base, changes configuration-driven.
Pros:
making a change in one place
Cons:
higher chance of many devs working on the same code at the same time
you're likely to end up with horrible 'ifs' (see the sketch after this list)
separating release pipelines can be very tricky. If you have a change for Canada, you need to test everything for US - this can be a significant amount of effort depending on the level of QA automation and the complexity of your test scenarios. Also, do you release US just because someone in Canada wanted to change the button color to green? If you do then you waste time. If you don't then potentially untested changes pile up for the next US release.
if you have other regions coming, this code quickly becomes complex - many people just throw stuff in to make their region work and you end up with spaghetti code.
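For illustration, here is a tiny hypothetical sketch of what "configuration driven" can look like when it stays declarative: region differences live in a config object instead of scattered ifs (the field names are invented):

```js
// Hypothetical example: region differences live in data, not in branches
// scattered through the UI code.
const regionConfig = {
  US:     { currency: 'USD', showTaxIdField: false, dateFormat: 'MM/DD/YYYY' },
  CANADA: { currency: 'CAD', showTaxIdField: true,  dateFormat: 'DD/MM/YYYY' },
};

function renderOrderForm(region) {
  const cfg = regionConfig[region];
  return `
    <form>
      <label>Amount (${cfg.currency})</label><input name="amount">
      ${cfg.showTaxIdField ? '<label>Tax ID</label><input name="taxId">' : ''}
      <label>Delivery date (${cfg.dateFormat})</label><input name="deliveryDate">
    </form>`;
}

// renderOrderForm('CANADA') includes the Tax ID field; renderOrderForm('US') does not.
```

Kept as data, the per-region differences are at least visible in one place, but as noted above this still gets unwieldy once many regions and clients pile in.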
3) Separate code bases using common, configurable modules.
These could contain anything you decide is unlikely to differ across regions: NuGet packages with core logic, npm packages with JavaScript, front-end styling, etc.
Pros:
if done right you can get the best of both worlds - separate release pipelines and separate (simple) region specific code
you can make a change to the common module and decide when/if to update each region to the newest version separately
Cons:
more infrastructure effort - you need a release pipeline per app and one for each package
debugging and understanding packaged code when used in an app is tricky
changing something in common module and testing it in your app is a pain - you have to go to the common repository, make a change, test it, create a PR, merge it, wait for the package to build and get released, upgrade in your app... and then you discover the change was wrong.
I've worked with such projects and there are always problems - if you make it super configurable it becomes unreadable and overengineered. If you make it separated you have to make changes in many places and maintaining things like unit tests in many places is a nightmare.
Since you already started with approach 1 and since you mentioned other regions are coming, I'd suggest going with your current strategy and slowly abstracting common pieces to separate repos (moving towards 3rd approach).
I think the most important piece that will make such changes easier is a decent level of test automation - both for your apps and for your common modules when you create them.
One piece of advice I can give you is to be pragmatic. Some level of duplication is fine, especially if the alternative is a complex rule engine that no one understands and no one wants to touch because it's used everywhere.
Allow me to explain, so that this doesn't just get marked as an "opinion-based" question.
I'm learning processing.js right now, and I can't help but notice many similarities in functionality with what already exists in the Canvas API of vanilla JS. Perhaps writing a set of large-scale animations is much more complicated in plain old Canvas than it is in Processing?
I'm asking this because, as I continue to learn more about the vanilla APIs, I'm seeing a lot of new functionality added in JS over the years that is starting to (VERY SLOWLY) make certain aspects of popular frameworks, no longer necessary (jQuery being a great example). I'm curious as to whether or not this is the case with Canvas and processing.js as well.
Personally, I'm trying to determine whether or not I should still be spending a lot of time in processing.js (I'm not asking you to make that decision for me though, but I just want some information that can help me decide what's best for me).
Stack Overflow allows specific non-coding questions about programming tools like ProcessingJS, but your question seems likely to be closed as too broad.
Even so, here are my thoughts...
Native Canvas versus ProcessingJS
HTML5 canvas was born with a rich set of possibilities rivaling Photoshop itself. However, native canvas is a relatively low-level tool where you must handle structuring, eventing, serialization and animation with your own code.
ProcessingJS adds structure, eventing, serialization, animation & many (amazing!) mathematical functions to native canvas. IMHO, ProcessingJS is a higher-level tool that's well worth learning.
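To make the difference concrete, here is a minimal sketch of the same moving dot written both ways (the element ids and variable names are made up, and the Processing.js usage follows its documented JavaScript API, so treat the details as approximate):

```js
// --- Processing.js: the setup/draw structure and frame loop are provided ---
function sketchProc(p) {
  let x = 0;
  p.setup = function () { p.size(200, 100); };
  p.draw = function () {            // called every frame by Processing.js
    p.background(230);
    p.ellipse(x, 50, 20, 20);
    x = (x + 2) % 200;
  };
}
new Processing(document.getElementById('sketch'), sketchProc);

// --- Native canvas: you write the loop, the clearing and the timing yourself ---
const ctx = document.getElementById('plain').getContext('2d');
let dotX = 0;
(function frame() {
  ctx.clearRect(0, 0, 200, 100);
  ctx.beginPath();
  ctx.arc(dotX, 50, 10, 0, Math.PI * 2);
  ctx.fill();
  dotX = (dotX + 2) % 200;
  requestAnimationFrame(frame);
})();
```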
Extending native canvas into a higher level tool instead of a low-level tool
With about 500 lines of JavaScript, you can add a reusable framework to native canvas that adds these features within a higher-level structure: eventing (including drag/drop, scaling, rotating, hit testing, etc.) and serialization/deserialization.
With about 100 more lines you can add a reusable framework to native canvas that does animation with easing.
Even though native canvas was born with most of the capabilities needed to present even complex content, a PathObject is sorely needed in native canvas. The PathObject would serialize paths to make them reusable. With about 50 lines you can create a reusable PathObject.
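For instance, a bare-bones sketch of such a PathObject might look like this (hypothetical, and much smaller than the ~50-line version referred to above):

```js
// Hypothetical minimal PathObject: records drawing commands so a path can be
// serialized, stored and replayed on any 2D context.
class PathObject {
  constructor(commands = []) {
    this.commands = commands;            // e.g. [['moveTo', 10, 10], ['lineTo', 50, 80]]
  }
  moveTo(x, y) { this.commands.push(['moveTo', x, y]); return this; }
  lineTo(x, y) { this.commands.push(['lineTo', x, y]); return this; }
  serialize() { return JSON.stringify(this.commands); }
  static deserialize(json) { return new PathObject(JSON.parse(json)); }
  draw(ctx, style = {}) {
    Object.assign(ctx, style);           // e.g. { strokeStyle: 'teal', lineWidth: 2 }
    ctx.beginPath();
    for (const [cmd, ...args] of this.commands) ctx[cmd](...args);
    ctx.stroke();
  }
}

// Usage: build once, serialize to a string, replay later on some ctx
// (ctx is assumed to be a CanvasRenderingContext2D obtained elsewhere).
const tri = new PathObject().moveTo(10, 10).lineTo(90, 10).lineTo(50, 80).lineTo(10, 10);
const saved = tri.serialize();
// PathObject.deserialize(saved).draw(ctx, { strokeStyle: 'teal' });
```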
Here's a fairly useless recommendation :-p
Try to use the right tool for the job (yeah, not specifically helpful).
Learning native canvas alone will let you do, maybe 70% of pixel display tasks.
Coding the extensions (above) will get you to 90%.
Using a tool like ProcessingJS will get you to 98%.
Yes, there are always about 2% edge cases where you either "can't get there" or must reduce your design requirements to accommodate coding limitations.
A slightly more specific recommendation
Since ProcessingJS merely extends native canvas, IMHO it's well worthwhile to take a few days and learn native canvas. This knowledge will let you determine the right tool for the job.
Interesting note on the performance of these technologies. What are you saying? Which one would you choose for a project? I'm looking for one of these technologies for a project.
http://paulhammant.com/2012/04/12/performance-testing-knockout-angular-and-backbone-with-selenium2/
I don't think that this post is conclusive in downgrading angular.js due to performance problems. So your question basically leads to comparing these three technologies...
They solve very different kinds of problems, e.g. backbone.js is in fact only a library for building event-based MV* architectures, while knockout.js and angular.js are more opinionated frameworks. So it's really comparing apples to oranges... But people try anyway: http://codebrief.com/2012/01/the-top-10-javascript-mvc-frameworks-reviewed/
None of the frameworks are made for performance. They are made to give direction to the developer.
Backbone is by far the least performant but even with Backbone if it's tuned right you can get a high FPS on tablets, mobiles and desktop.
Rendering Performance means:
only create DOM elements once, then update the DOM with new model contents (see the sketch after this list)
use object pooling as much as possible
minimize image loading/parsing until the last minute possible
watch out for JavaScript that triggers CSS relayout
tie your render loop to the browser's paint loop
be smart about when to use GPU layers and compositing
avoid triggering the garbage collector as much as possible to maintain high frame rates
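As a small illustration of the first and last few points (all names below are invented): create DOM elements once, reuse them, mark the view dirty on model changes, and render at most once per paint via requestAnimationFrame:

```js
const pooledRows = [];                      // reuse <li> elements instead of recreating them
function getRow(list, i) {
  if (!pooledRows[i]) {
    pooledRows[i] = document.createElement('li');
    list.appendChild(pooledRows[i]);        // created exactly once
  }
  return pooledRows[i];
}

let models = [];
let dirty = false;
function setModels(next) {                  // model changes only mark the view dirty
  models = next;
  dirty = true;
}

(function frame() {                         // render at most once per browser paint
  if (dirty) {
    const list = document.getElementById('items');
    models.forEach((m, i) => { getRow(list, i).textContent = m.label; });
    dirty = false;
  }
  requestAnimationFrame(frame);
})();
```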
I have a PerfView on github that extends Backbone to make rendering performant. https://github.com/puppybits/BackboneJS-PerfView It can maintain 120FPS in Chrome and 56FPS on iPad with some real world examples.
I was naturally drawn to Ember's nice API/design/syntax compared to the competitors, but was very saddened to see that the performance was significantly worse. (For example, see the now well-known http://jsfiddle.net/samdelagarza/ntMdB/167/ .) My eyes tell me it's at least 4 times slower than Backbone in Chrome.
The version 0.9.6 of EmberJS apparently has many performance fixes, in particular around bindings and rendering. However the above benchmark still performs poorly when using this version of Ember.
I see the above benchmark as demonstrative of one framework's binding cost. I come from Flex, where bindings perform well enough that you don't have to constantly think about whether the 5 bindings per renderer (multiplied by maybe 20 renderers) you want to use are going to be too much of an overhead. Ease of use is nice, but only if good enough performance is maintained. (Even more so since HTML5 also often targets mobiles.)
As it stands, I tend to think the beauty of Ember is not worth the performance hit compared to some of its competitors, as we're talking about big apps with many bindings here, else you wouldn't need such framework in the first place. I could live with Ember performing slightly less well; after all it brings more to the table.
So my questions are fairly general and open:
Is the Ember part of the benchmark written well enough that it shows a genuine issue?
Are the 0.9.6 performance updates maybe very low key?
Are the areas of bad performance identified by the main contributors?
This isn't really an issue of bindings being slow, but doing more DOM updates than necessary. We have been doing some investigation into this particular issue and we have some ideas for how to coalesce these multiple operations into one, so I do expect this to improve in the future.
That said, I can't see that this is a realistic benchmark. I would never recommend doing heavy animation in Ember (or with Backbone, for that matter). In standard app development, you shouldn't ever have to update that many different views simultaneously with that frequency.
If you can point out slow areas in a normal app, we would be very happy to investigate. Performance is of great concern to us, and if things are truly slow during normal operation, we would consider that a bug. But, like I said, performant binding-driven animation is not one of our goals, nor do I know of anyone for whom it is. Ember generally plays well with other libraries, so it should be possible to plug in an animation library to do the animations outside of Ember.