Good day,
We just moved from ASP.NET 1.1 to ASP.NET 2.0, and we are using AJAX UpdatePanels.
In an Apress book (Pro ASP.NET 2008), I've read that when you use an UpdatePanel, you don't reduce the amount of bandwidth sent, because the entire page is still sent.
With that in mind, I've also read on many websites that it is better to use multiple UpdatePanels instead of a single one containing the entire page, precisely to 'reduce the amount of bandwidth sent'. In my opinion, that contradicts the Apress book, and I was wondering what you guys think.
Is it better to use only one UpdatePanel containing the entire page, or several smaller ones? Performance is my main concern.
It depends on how many parts of your page you want to be able to load separately; you'll need a panel for each part.
The complete viewstate is always sent to the server with the postback.
Regards
K
Context
I am learning Next.js, a framework for developing React applications quickly by providing many functionalities, such as Server Side Rendering and Fast Refresh, out of the box without any configuration. It also provides the ability to optionally generate some web pages statically, pre-rendering them at build time instead of rendering on demand, by querying the data required for the page at build time. Next.js also accepts an optional argument, expressed in seconds, after which the data is re-queried and the page re-rendered. All of this happens at the page level rather than rebuilding the entire website.
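To make that concrete, here is a minimal sketch of the revalidation feature described above (the getProducts() helper and the 60-second interval are illustrative assumptions, not part of the question):

```javascript
// pages/products.js — minimal Incremental Static Regeneration sketch.
// getProducts() is a hypothetical helper that queries the database or an
// external API for the data the page needs.
export async function getStaticProps() {
  const products = await getProducts();

  return {
    props: { products },
    // Next.js re-queries the data and re-renders this page in the
    // background at most once every 60 seconds.
    revalidate: 60,
  };
}

export default function ProductsPage({ products }) {
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}
```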
Problems
1. We cannot know in advance how frequently the data will change; it may change after 1 second or after 10 minutes, and that is impossible to know in advance and extremely hard to predict. It is most certainly not a constant number of seconds, though. With this approach, I might show outdated information because the interval is too long, or I might end up querying the database unnecessarily even though the data hasn't changed.
2. Suppose I have implemented some sort of pagination and I want to exploit the fact that most users only visit the first few pages before following a different link. I could statically pre-render the first 1000 pages, so the most visited pages are served statically without going to the database, whereas the rest are server-side rendered. Now, if my data changes frequently, I would have to re-render the first 1000 pages at regular intervals, and each page would issue a separate query against the same database or external API, which would cause too many round trips. I am not aware of the internals of Next.js, but I suspect this is the case because Next.js does not assume anything about the function that pulls the data, and a generic implementation would necessitate it. (A sketch of this scenario follows below.)
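For reference, the pagination scenario could look roughly like this (the page count, the getPage() helper and the use of a "blocking" fallback are illustrative assumptions):

```javascript
// pages/items/[page].js — sketch of pre-rendering only the first 1000 pages.
export async function getStaticPaths() {
  // Pre-render pages 1..1000 at build time; any other page is rendered
  // on its first request ("blocking" fallback) and then cached.
  const paths = Array.from({ length: 1000 }, (_, i) => ({
    params: { page: String(i + 1) },
  }));
  return { paths, fallback: 'blocking' };
}

export async function getStaticProps({ params }) {
  // getPage() is a hypothetical helper; each invocation is a separate
  // query, which is exactly the round-trip concern described in (2).
  const items = await getPage(Number(params.page));
  return { props: { items }, revalidate: 60 };
}
```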
Attempted Solution
Both problems can be solved by client-side or server-side rendering, because the data would be fetched on demand, but we lose the benefits of static generation, specifically serving static assets instead of querying the database. I believe static generation is useful when mutations to my data happen infrequently most of the time, but we still want to show the updated information as fast as we can once it becomes available.
If I forget about Next.js for a while, both problems can be solved by spawning a new process that listens for mutations to the relevant data and rebuilds only those static assets which need to be updated, somewhat like the way React updates components, but on the server side. However, Next.js offers a lot of functionality that would be difficult to replicate, so I cannot use this approach.
If I want to use Next.js, problem (1) seems impossible to solve due to a (perceived?) limitation of Next.js, which offers only one way to rebuild static pages: periodically re-render them after a predetermined time. However, (2) can be solved with some sort of in-memory cache that pulls all the required data from the data store in one round trip and structures it for every page. Every page would then pull data from this cache instead of the database. It looks like a hack to me, though.
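A sketch of that cache idea, assuming a hypothetical fetchEverything() that grabs all the required data in a single round trip, might look like this:

```javascript
// lib/cache.js — module-level cache shared by every page's getStaticProps.
// fetchEverything() is a hypothetical function that pulls all required data
// from the data store in one round trip.
let cache = null;
let fetchedAt = 0;
const TTL_MS = 60 * 1000; // how long the cached snapshot is considered fresh

export async function getData() {
  if (!cache || Date.now() - fetchedAt > TTL_MS) {
    cache = await fetchEverything();
    fetchedAt = Date.now();
  }
  return cache;
}
```

Each page's getStaticProps would call getData() and pick out the slice it needs, so regenerating many pages within the same interval would hit the data store only once, at least within a single Next.js process (separate build workers would each keep their own copy).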
Questions
Are there other ways to deal with these problems that I might have missed?
Is there a built-in way to deal with problems (1) and (2) in Next.js?
Is my assessment of attempted solutions and their viability correct?
If this is not the right place to ask this, please direct me to the correct place.
My question is this. Which is better:
A. A page that uses a lot of resources and a lot of RAM, but only one Apache connection.
B. That same page, loaded from cache, with several AJAX calls to populate the parts of it that need live data.
I'm thinking B. The only negative to B is that I'm adding 4-5 Apache connections per page load, but these connections are very small resource-wise.
Thanks
This is broad and open-ended, so it will likely be closed. The answer depends a lot on what type of page you are serving and what experience you want the user to have. Both options will use a lot of resources in a short period of time, although option B may be easier to load-balance. The downside is that option B also uses a lot more connections, resulting in slower load and render times, not to mention a little more load on Apache itself.
For the last 6 months I have been developing Ember.js components.
I ran into my first performance problems when I tried to develop a table component. Each cell in this table was an Ember.View, and each cell had a binding to an object property. When the table had 6 columns and I tried to list about 100 items, the browser froze for a while. So I discovered that it is better to write a function which returns a string, instead of using Handlebars, and to handle the bindings manually with observers.
So, is there any good practice for using a minimum of bindings, or for writing bindings without losing a lot of performance? For example, should bindings be avoided for large amounts of data?
How many Ember.View objects can be appended to a page?
Thanks for any response.
We've got an Ember application that displays hundreds of complex items on a single page. Each item uses out-of-the-box Handlebars bindings to output about a dozen properties. It renders very quickly (< 100ms) and very little of that time is spent on bindings.
If you find the browser is hanging at 6 columns and 100 items, something else is almost certainly wrong.
So, is there any good practice for using a minimum of bindings, or for writing bindings without losing a lot of performance?
Try setting Ember.LOG_BINDINGS = true to see what is going on. It could be that the properties you are binding to have circular dependencies or are costly to resolve. See this post for great tips on debugging:
http://www.akshay.cc/blog/2013-02-22-debugging-ember-js-and-ember-data.html
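For reference, the logging flag is just a global you set before the application boots, for example:

```javascript
// Turn on binding/observer logging before creating the application so that
// every binding that fires is reported in the console.
Ember.LOG_BINDINGS = true;

var App = Ember.Application.create();
```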
How many Ember.View objects can be appended to a page?
I don't believe there is a concrete limit; it will certainly depend on the browser. I wouldn't consider optimizing unless there were more than a few thousand.
Check out this example to see how Ember can be optimized to handle very large tables:
http://addepar.github.com/ember-table/
Erik Bryn recently gave a talk about Ember.ListView which would drastically cut down the number of views on a page by reusing list cells which have gone out of the viewport.
Couple this technique with the group helper to reduce the number of metamorph script tags if your model's properties update at the same time. Additionally, consider using {{unbound}} if you don't need any of the properties to live-update.
What 3 things would you tell developers new to XPages to do to help maximize the performance of their XPages apps?
Tim Tripcony has given a bunch of suggestions here:
http://www-10.lotus.com/ldd/xpagesforum.nsf/topicThread.xsp?action=openDocument&documentId=365493C31B0352E3852578FD001435D2#AEFBCF8B111E149B852578FD001E617B
Not sure if this tip is for beginners, but use one of the LifeCyclePhaseListeners from the OpenNTF Snippets to see what is going on in your datasources during a complete or partial refresh (http://openntf.org/XSnippets.nsf/snippet.xsp?id=a-simple-lifecyclelistener-).
Use the Extension Library. Report bugs (or what you consider a bug) at OpenNTF.
Use the SampleDb from the ExtLib. You can easily modify the samples to your own needs. It is also good for testing whether an issue you encounter is reproducible in that DB.
Use Firebug (or a similar tool that comes with the browser of your choice). If you see an error in the error tab, go and fix it.
Since you're asking for only 3, here are the tips I feel make the biggest difference:
Determine what your users / customers mean by "performance", and set the page persistence option accordingly. If they mean scalability (max concurrent users), keep pages on disk. If they mean speed, keep pages in memory. If they want an ideal mixture of speed and scalability, keep the current page in memory. This latter option really should be the server default (set in the server's xsp.properties file), overridden only as needed per application.
Set value bindings to compute on page load (denoted by a $ in the source XML) wherever possible instead of compute dynamically (denoted by a #). $ bindings are only evaluated once, while # bindings are recalculated over and over again, so changing computations that only need to be loaded once per page to $ bindings speeds up both initial page load and any events fired against the page once loaded.
Minimize the use of SSJS. Wherever possible, use standard EL instead (e.g. ${database.title} instead of ${javascript:return database.getTitle();}). Every SSJS expression must be parsed into an abstract syntax tree to be evaluated, which is incrementally slower than the standard EL resolver.
There are many other ways to maximize performance, of course, but in my opinion these are the easiest ways to gain noticeable improvement.
1. Use a Script Library instead of writing a bulk of code directly in the XPage.
2. Use a theme, or a separate CSS class for related elements.
3. Moreover, try to keep your SSJS code under control, because server-side requests only reduce system performance.
4. As a final point, and as a sub-point of 3, try to use the direct functions available in SSJS; don't use while and for loops for things like document collections, counts and similar operations.
The basics, like:
Use the immediate flag (or one of the other flags) on server-side events if possible.
Check the flag (I forget its name) which generates the CSS and JS as one big file at runtime, thereby minimizing the number of requests.
Choose your scope wisely. Don't put everything in your sessionScope; define when, where and how you are using the data and, based on that, use the correct scope. This can lead to better memory usage (a small SSJS sketch follows after these tips).
And of course the most important one: read the Mastering XPages book.
Other tips I would add:
When retrieving data, use ViewEntryCollections or the ViewNavigator.
Upgrade to 8.5.3
Use default HTML tags if possible. If you don't need the functionality of an xp:div or xp:panel, use a plain <div> instead so you don't generate an extra UIComponent on the component tree.
Define what page persistence mode you need.
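As a small illustration of the scope tip above, here is an SSJS sketch (the loadCustomers() script-library function is hypothetical): data that is only needed while the current page is open belongs in viewScope rather than sessionScope.

```javascript
// SSJS value binding sketch: cache page-local data in viewScope so it is
// discarded when the user leaves the XPage, instead of living in
// sessionScope for the whole user session.
if (viewScope.customers == null) {
    viewScope.customers = loadCustomers(); // hypothetical script-library call
}
return viewScope.customers;
```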
It depends a lot on what you mean by performance. For performance of the app:
Use compute on page load wherever feasible. It significantly improves performance.
In larger XPages particularly, combine code into single controls where possible. E.g. Use a single Computed Field control combining literal strings, EL and SSJS rather than one control for each language. On that point, EL performs better than SSJS, and SSJS on the XPage performs better than SSJS in a Script Library.
Use dataContexts for properties that are calculated more than once on an XPage.
Partial Execution mode is a very strong recommendation, but probably beyond new XPages developers at this point. Java will also perform better than SSJS in a Script Library, but again beyond new developers. XPages controls you've created with the Extensibility Framework should perform better, because they should run fewer lines of Java than multiple controls, but I haven't tested that.
If you mean performance of the developer:
Get the Extension Library.
Use themes to set default properties, e.g. a standard style for all your pagers.
Use Firebug. If you're developing for the Notes Client or IE, still use Firebug. You'll spend longer suffering through the Client/IE than you will fixing the few quirks that will remain.
Hoping someone can chime in on an ideal methodology.
I don't want to run my site through a crawler every month to add new pages to my sitemap; I'd like some robust, systematic method to do so, because maintaining it by hand seems very prone to, ahem, human forgetfulness. Is there some sort of way to programmatically register new controllers, controller methods, views, etc. with some special controller? What I'm picturing is some mechanism that enforces updating the sitemap whenever you create a new controller method or view. I work in a LAMP stack, if that's relevant. This guy here is doing it through the file system, and that's not what I want for a public-facing sitemap.
Perhaps there's another best practice for this type of maintenance other than the concept I'm proposing. I'd love to hear how everyone else does this! :)
If your site is content-based, the best practice is to read the database periodically and generate a link for each piece of content. With this method you can also mark some subjects as higher or lower priority in the sitemap.
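A rough sketch of that approach (shown here in Node.js-style JavaScript for brevity, although the same idea works as a PHP cron job on a LAMP stack; the getPublishedArticles() helper, the URL scheme and the field names are assumptions):

```javascript
// generate-sitemap.js — run periodically (e.g. from cron) to rebuild
// sitemap.xml from the content stored in the database.
const fs = require('fs');

async function buildSitemap() {
  // getPublishedArticles() is a hypothetical helper returning rows like
  // { slug, updatedAt, priority } from the database.
  const articles = await getPublishedArticles();

  const urls = articles.map((a) => [
    '  <url>',
    `    <loc>https://www.example.com/articles/${a.slug}</loc>`,
    `    <lastmod>${a.updatedAt.toISOString().slice(0, 10)}</lastmod>`,
    `    <priority>${a.priority}</priority>`,
    '  </url>',
  ].join('\n')).join('\n');

  const xml = '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls + '\n</urlset>\n';

  fs.writeFileSync('sitemap.xml', xml);
}

buildSitemap();
```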
That method was already mentioned in the topic that you linked.
Otherwise, you can hold a list of visited pages (static) on the server side, or just log them. After recording your site traffic, check the sitemap and add your page links there without blocking the user experience, i.e. asynchronously. You can specify priority with this method too, based on how often each page is visited and some statistical logic.