What are your top 3 XPages performance tips for new XPages developers?

What 3 things would you tell developers new to XPages to do to help maximize the performance of their XPages apps?

Tim Tripcony has given a bunch of suggestions here:
http://www-10.lotus.com/ldd/xpagesforum.nsf/topicThread.xsp?action=openDocument&documentId=365493C31B0352E3852578FD001435D2#AEFBCF8B111E149B852578FD001E617B

Not sure if this tip is for beginners, but use one of the LifeCyclePhaseListeners from the OpenNTF XSnippets to see what is going on in your data sources during a complete or partial refresh (http://openntf.org/XSnippets.nsf/snippet.xsp?id=a-simple-lifecyclelistener-).
Use the Extension Library. Report bugs (or what you consider a bug) at OpenNTF.
Use the sample DB from the ExtLib. You can easily modify the samples to your own needs. It is also good for testing whether an issue you encounter is reproducible in that DB.
Use Firebug (or a similar tool that comes with the browser of your choice). If you see an error in the error tab, go and fix it.

Since you're asking for only 3, here are the tips I feel make the biggest difference:
Determine what your users / customers mean by "performance", and set the page persistence option accordingly. If they mean scalability (max concurrent users), keep pages on disk. If they mean speed, keep pages in memory. If they want an ideal mixture of speed and scalability, keep the current page in memory. This latter option really should be the server default (set in the server's xsp.properties file), overridden only as needed per application.
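For reference, a sketch of that server-wide default in xsp.properties (the property name and values here are from memory, so treat them as an assumption and verify against your server documentation):

    # xsp.properties - server-wide default, overridden per application only where needed
    # Keep the current page in memory, older pages on disk (balanced speed/scalability):
    xsp.persistence.mode=fileex
    # Alternatives: basic (all pages in memory, fastest) or file (all pages on disk, most scalable)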
Set value bindings to compute on page load (denoted by a $ in the source XML) wherever possible instead of computing dynamically (denoted by a #). $ bindings are only evaluated once, while # bindings are recalculated over and over again, so changing computations that only need to run once per page to $ bindings speeds up both the initial page load and any events fired against the page once loaded.
Minimize the use of SSJS. Wherever possible, use standard EL instead (e.g. ${database.title} instead of ${javascript:return database.getTitle();}). Every SSJS expression must be parsed into an abstract syntax tree to be evaluated, which is incrementally slower than the standard EL resolver.
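To make the last two tips concrete, here is a small XSP sketch (the control IDs are arbitrary):

    <!-- Computed once, on page load: EL with a $ binding -->
    <xp:text id="title1" value="${database.title}" />

    <!-- Recomputed on every partial or full refresh: SSJS with a # binding -->
    <xp:text id="title2" value="#{javascript:database.getTitle()}" />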
There are many other ways to maximize performance, of course, but in my opinion these are the easiest ways to gain noticeable improvement.

1. Use a Script Library instead of writing the bulk of your code directly in the XPage.
2. Use a Theme or a separate CSS class for each element.
3. Keep your SSJS code under control, because unnecessary server-side processing only reduces performance.
4. (Consider this a sub-point of 3.) Use the direct functions SSJS provides; don't use while and for loops for things like walking a document collection just to get a count when a direct call will do.

The basics, like:
Use the immediate flag (or one of the other flags) on server-side events if possible.
Check the flag (I forget its name) that generates the CSS and JS as one big file at runtime, thereby minimizing the number of requests.
Choose your scope wisely. Don't put everything in your sessionScope; define when, where and how you are using the data and, based on that, use the correct scope (a small sketch follows this list). This can lead to better memory usage.
And of course the most important one: read the Mastering XPages book.
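A minimal SSJS sketch of the scope advice above (myResults and myPrefs are placeholders for whatever you actually store):

    // requestScope < viewScope < sessionScope < applicationScope: pick the narrowest that works
    viewScope.put("searchResults", myResults);   // discarded when the user leaves the page
    sessionScope.put("userPrefs", myPrefs);      // kept for the whole session - use sparingly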
Other tips I would add:
When retrieving data, use ViewEntryCollections or the ViewNavigator.
Upgrade to 8.5.3.
Use default HTML tags if possible. If you don't need the functionality of an xp:div or xp:panel, use a plain <div> instead so you don't generate an extra UIComponent in the tree (a sketch follows this list).
Define which page persistence mode you need.
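For instance, a minimal sketch of the pass-through-HTML tip above:

    <!-- Creates an extra UIComponent in the server-side component tree -->
    <xp:panel styleClass="wrapper"> ... </xp:panel>

    <!-- Pass-through markup: rendered as-is, no extra component -->
    <div class="wrapper"> ... </div>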

Depends a lot what you mean by performance. For performance of the app:
Use compute on page load wherever feasible. It significantly improves performance.
In larger XPages particularly, combine code into single controls where possible. E.g. Use a single Computed Field control combining literal strings, EL and SSJS rather than one control for each language. On that point, EL performs better than SSJS, and SSJS on the XPage performs better than SSJS in a Script Library.
Use dataContexts for properties that are calculated more than once on an XPage.
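For example, a dataContext lets you compute a value once and reuse it anywhere on the page (the variable and field names below are made up, and "doc" is assumed to be a document data source):

    <xp:this.dataContexts>
      <xp:dataContext var="fullName"
        value="#{javascript:doc.getItemValueString('FirstName') + ' ' + doc.getItemValueString('LastName')}" />
    </xp:this.dataContexts>
    <!-- Reuse the computed value anywhere on the page -->
    <xp:text value="#{fullName}" />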
Partial Execution mode is a very strong recommendation, but probably beyond new XPages developers at this point. Java will also perform better than SSJS in a Script Library, but again beyond new developers. XPages controls you've created with the Extensibility Framework should perform better, because they should run fewer lines of Java than multiple controls, but I haven't tested that.
If you mean performance of the developer:
Get the Extension Library.
Use themes to set default properties, e.g. a standard style for all your pagers (a sketch follows below).
Use Firebug. If you're developing for the Notes Client or IE, still use Firebug: you'll spend longer suffering through the Client/IE than you will fixing the few quirks that remain.
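A theme fragment for the pager example might look roughly like this (the themeId "Pager" and the class name are assumptions; check the control's themeId in Designer):

    <theme extends="webstandard">
      <control>
        <name>Pager</name>
        <property>
          <name>styleClass</name>
          <value>myStandardPager</value>
        </property>
      </control>
    </theme>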

Related

SAPUI5: Does Using Formatters Impact Performance?

I am working on a custom SAPUI5 app using an ODataModel, and I have to format some of the fields which I will be displaying in a List control.
I need to know which of the approaches mentioned below is better with respect to app performance.
1) Is it a good idea to use a Formatter.js file and write a formatter method for each field?
Example -
There are 2 fields which should be formatted before showing them in the UI, and hence 2 formatter functions.
2) Before binding the model to the List, do the formatting in a loop over each row.
Example -
Loop at OData.
--do formatting here for both the fields
move data to model.
Endloop.
Bind new model to UI
Is there any other way by which we can improve performance, apart from code minification or using grunt?
Appreciate your help.
Thanks,
Rahul
Replacing formatters with other solutions is definitely NOT the place to start when optimizing performance, not to mention that you will lose a lot of the convenience the ODataModel comes with if you manually manipulate the data in it.
Formatter Performance
Anyway, using a formatter is of course less performant than pre-formatting your data once after they are loaded. A formatter will be executed on every re-rendering of your control, so you might not want to do heavy calculations or excessive looping in a formatter that is executed frequently. But with normal usage, formatters are absolutely nothing you should worry about, and they do not noticeably affect the end-user experience. Keep enjoying the convenience of formatters (and take a look at the cool Expression Binding).
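For illustration, a typical formatter setup looks something like this (the file path, function name and bound property are made up for the example):

    // webapp/model/formatter.js
    sap.ui.define([], function () {
        "use strict";
        return {
            // Runs on every (re)rendering of the bound control, so keep it cheap
            formatPrice: function (sValue) {
                return sValue ? parseFloat(sValue).toFixed(2) + " EUR" : "";
            }
        };
    });

    // In the XML view, assuming the controller exposes it as "formatter":
    //   <Text text="{ path: 'Price', formatter: '.formatter.formatPrice' }" />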
General Performance Considerations
To improve performance it is first of all very important to identify the real bottleneck. In many cases this is simply the backend; there is usually much more to win there with much less effort. Always keep that in mind: optimizing UI code is ridiculous as long as the main backend call runs for, say, 3 seconds.
Things to improve your UI performance might be:
serve SAPUI5 from a CDN
use a Component-preload; it can be generated with grunt-openui5 or gulp-ui5-preload (I think it does not minify XML yet, so you could do that additionally before creating the Component-preload)
try to reduce the number of SAPUI5 libraries you are using
be aware of which SAPUI5 libraries you are NOT using and rigorously remove those (don't forget the dependencies section in the Component metadata resp. manifest.json)
be aware that sap.ui.layout is a separate independent library (not registering it as such will result in a lot of extra requests)
if you use an ODataModel, make sure you set useBatch to true (the default in v2.ODataModel; see the snippet after this list)
intelligently design your OData service (if you can influence it)
intelligently use $expand: sometimes it can make sense to preload $expand data on a parent binding that does not actually use it, e.g. if you will most probably need the data later on
think about bundling your app as native app and benefit from improved caching (Kapsel)
Check Performance: Speed Up Your App and Performance Issues
squeeze out some more bytes and save some requests by minifying/combining custom css or other resources if you have some
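As a sketch of the useBatch point above (the service URL is a placeholder):

    // The v2.ODataModel batches requests by default; be explicit if you are unsure:
    var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/MY_SRV/", {
        useBatch: true
    });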
If you are generally interested in Web Performance I can recommend Steve Souders books.
I'm totally open for more ideas on SAPUI5 performance improvements! Anyone?
BR
Chris
The best practice is to do it this way. The formatter allows you to receive an input and return an output. The formatter function will be called at runtime, and it will be called for each of the rows displayed in your list. The reason it is called for each row is that you cannot guarantee that the input will be the same for all of the rows in the list.
The concept of binding is to loop over your data model and update the UI accordingly. It is much better to use binding for a lot of reasons: maintainability, performance, separation of the data layer from the presentation layer, core optimizations and more.

What is the biggest performance issue in Emberjs?

For the last 6 months I have been developing Ember.js components.
I ran into my first performance problems when I tried to develop a table component. Each cell in this table was an Ember.View, and each cell had a binding to an object property. When the table had 6 columns and I tried to list about 100 items, the browser froze for a while. So I discovered that it is better to write a function which returns a string, instead of using Handlebars, and to handle the bindings manually with observers.
So, is there any good practice for using a minimum of bindings? Or for writing bindings without losing a lot of performance? For example, not using bindings for a big amount of data?
How many Ember.View objects can you append to a page?
Thanks for response
We've got an ember application that displays 100s of complex items on a single page. Each item uses out-of-box handlebars bindings to output about a dozen properties. It renders very quickly (< 100ms) and very little of that time is spent on bindings.
If you find the browser is hanging at 6 columns and 100 items, something else is for sure wrong.
So, is there any good practice how to use minimum of bindings? Or how to write bindings and not lose a lot of performance.
Try setting Ember.LOG_BINDINGS = true to see what is going on. It could be that the properties you are binding to have circular dependencies or are costly to resolve. See this post for great tips on debugging:
http://www.akshay.cc/blog/2013-02-22-debugging-ember-js-and-ember-data.html
How many Ember.View objects is possible append to page?
I don't believe there is a concrete limit; it will certainly depend on the browser. I wouldn't consider optimizing unless there were more than a few thousand.
Check out this example to see how ember can be optimized to handle very large tables:
http://addepar.github.com/ember-table/
Erik Bryn recently gave a talk about Ember.ListView which would drastically cut down the number of views on a page by reusing list cells which have gone out of the viewport.
Couple this technique with the group helper to reduce the number of metamorph script tags if your models update multiple properties at the same time. Additionally, consider using {{unbound}} if you don't need any of the properties to live-update.
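For illustration, the {{unbound}} point in Ember 1.x-era Handlebars (the property name is just an example):

    {{! Bound: creates metamorph script tags and observers, updates live }}
    <td>{{title}}</td>

    {{! Unbound: rendered once, no observers - fine for values that never change }}
    <td>{{unbound title}}</td>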

Why Play! framework chose Groovy for template engine

From their website http://www.playframework.org/documentation/1.0/faq
"
The biggest CPU consumer in the Play stack at this time is the template engine based on Groovy. But as Play applications are easily scalable, it is not really a problem if you need to serve a very high traffic: you can balance the load between several servers. And we hope for a performance gain at this level with the new JDK7 and its better support of dynamic languages.
"
So there are no better choices? What about JSP?
JSP is not feasible, as every JSP compiles to a Servlet, and the servlet API provides things like the server-side session which are not compatible with the RESTful paradigm. We don't want to go back to the dark ages of badly scalable server-side sessions, back-button problems in the browser, reposts, etc.
Japid templates are interesting, but they are not backed by a great community and perhaps didn't even exist at the time Play was created (I don't know for sure, though). I tried Japid as a replacement for the Groovy templates in my own application and found out in a JMeter test that the benefit would be only marginal: 10% to at most 25% improvement.
I guess in the end it all depends on your scalability requirements and the structure of your pages. I picked the 90% use case of my application and did the test. To me, the little improvement did not justify for the additional costs of the extra dependency (I like to keep dependencies to a minimum for maintainability).
Groovy templates are not bad or slow in general. Use typed variables wherever possible (instead of "def"), even in closures! Keep values of accessed properties in local variables. Do reasonable results paging. Then keep your fingers crossed that GSP might be able to run on groovy++ in the future and you're done ;)
To me, the question would not be why they used Groovy in the views; rather, I miss it so much in the controller layer. Groovy would make coding the controller behaviour a lot easier, IMHO.
First off, JSP was not a valid option for Play since it chose not to go down the Java EE route (which JSP is part of). Instead, Play chose to use Groovy as an intuitive, simple but powerful templating engine.
However, one of Play's greatest features is that it is a pluggable system, meaning that many parts of the core system can simply be replaced. This includes the template engine, and there are a couple that are already available.
The most popular is Japid. It claims to be 2-20x faster than the standard templating engine, and has been around for a while. For more info, see here http://www.playframework.org/modules/japid.
A second option is Cambridge; although it has only been out for a little while, it is reasonably active on the message boards (see https://groups.google.com/forum/?hl=en#!searchin/play-framework/cambridge/play-framework/IxSei-9BGq8/X-3pF5tWAKAJ).
I tend to stick to Groovy, as I like the way it works, and have not found it to be too bad in terms of performance, but every application is individual, so your own performance tests should lead you down your own particular path.
Yes, there is Japid, which is much, much faster.
http://www.playframework.org/modules/japid
I totally agree with the choice of ease over speed the Play Framework designers made here. My guess is that if the templating starts getting in the way in terms of performance, you can (and should!) measure the slow bits, and refactor them into fast tags where possible. With that, you're likely to save 80% of CPU by moving 20% into fast tags, leaving you with flexibility and adequate speed.
Having said that, I'm looking forward to an experiment I'm planning to see how well the new Scala templates (loosely "borrowed" from Razor.NET - awesome clean syntax) work with Java controllers/models. The Scala backend isn't there yet in terms of comparative ease, but the templating language certainly rocks.
I may be late to the party in 2016. Play 2 is out, and the JDK (not to mention the hardware) has drastically improved. I am not using Play or Spring Boot, since my platform doesn't need them - just pure runtime text/HTML generation from templates.
First, when talking about Groovy templates, there is more than one. I use the original Groovy SimpleTemplateEngine for anything from emails to rich web pages, whereas most people nowadays favor the "advanced" MarkupTemplateEngine with its non-HTML builder syntax. I did not go that route because of the IDE (e.g. IntelliJ) support for JSP-ish HTML files with JavaScript - catching unclosed tags, braces, etc. Besides, how would you include JavaScript in a curly-brace-based "builder" style template?
Whichever you choose, both SimpleTemplateEngine and MarkupTemplateEngine statically compile their templates, even though the Groovy doc only mentions it for the latter. Why wouldn't it generate a class for the former? I didn't compare them against each other, but I'd expect SimpleTemplateEngine to be faster, since it is... well, simpler - it doesn't translate any syntax into String concatenations with ifs and loops in between.
And it is indeed very fast. I was concerned about invoking it in a loop. Doesn't make any difference. There is no overhead, as the template is compiled once.
I use multiple small templates responsible for generating individual form control markup (HTML + JS) to generate a composite form, included in a higher-level container, included in another container, and so on, until the entire page is formed. Decomposing your view like that makes it, as you already guessed, modular, encapsulated, and "object-oriented" - composed of many individual MVC components building upon each other. Sort of like good old custom JSP tags, only evaluated at runtime and compatible with technologies like Spring Boot, if you cannot resist trendy resume-boosting stuff like that.
A test form with 100 fields (all complex "smart" controls with encapsulated state management and behavior) renders in 150ms the first time, and then in 10-14ms thereafter - in an IDE debugger on my memory-starved 4-year-old notebook. I also verified it is thread-safe, since the Groovy docs never mention it explicitly. Why wouldn't it be, if it is compiled into a stateless Groovy class like any other? Call createTemplate() once, store the Template somewhere, and use it (call Template.make()) in your servlet or another concurrent context. Obviously I'll never have a 100-field form in a real application; anyone who does needs to reconsider his/her UX.
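A minimal sketch of that compile-once / render-many pattern with SimpleTemplateEngine (the template text and variable names are invented):

    import groovy.text.SimpleTemplateEngine

    // Compile the template once (e.g. at application startup)...
    def engine = new SimpleTemplateEngine()
    def template = engine.createTemplate('Hello ${user}, you have ${count} new messages.')

    // ...then reuse the compiled Template from any request or loop.
    (1..3).each { i ->
        println template.make([user: 'Chris', count: i]).toString()
    }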
The performance is quite adequate. I'd even accept one second to render a 100-field page. Think of it: you don't need nanotrading or nuclear-missile-tracking performance in a web app. If you do, pick Jamon (http://www.jamon.org/Overview.html), which generates the Java class you'd normally write by hand to concatenate Strings. I didn't test it, as I don't like extra compilation steps (automatically executed by Maven, but still). Compiled Groovy bytecode is good enough for me, compared to compiled, yes, strongly-typed Java. The difference would be marginal unless you are doing something complex, which you shouldn't inside a template (see below). Playing with typed Groovy variables vs. def, as suggested in this thread, only saved me a couple of milliseconds on that 100-template run.
Templates should not have much procedural logic (internal variables, ifs and loops) anyway - that's the controller's, not view's responsibility. That said, ifs and loops are a must for a template engine. Why would one use Handlebars/Mustache, if he/she can simply call String.replace()?
The rest of the template engines are also largely irrelevant. No String-concatenation technology (e.g. Velocity or Freemarker) or interpreted JS-based technology (e.g. Jade) would ever beat Jamon's most direct approach performance-wise. And being a Java programmer, you want to use your favorite language/syntax: either directly (Jamon) or something 90% close to Java, which Groovy is (being a concise, scripting-centric take on Java). I won't comment on Scala - that is a matter of preference, and its allegedly "concise" syntax (less and less relevant with Java 8+) comes at a price. It only matters for complex loops anyway; you do not want to write your entire app inside one template, as I already said - a couple of loops and up to ten if statements max.
And, as everyone mentioned, intuitive syntax and ease of use are the key. They drastically reduce the number of bugs. A good (additional) server costs $1,000, while the developer salaries needed to fix all the bugs stemming from the complexity of marginal performance optimization are 100 times higher.

Web site performance - what is it about? Readability and ignorance Vs. Performance

Let me cut to the chase...
On one hand, much of the programming advice given (here and elsewhere) emphasizes the notion that code should always be as readable and as clear as possible, at (almost?!) any performance cost.
On the other hand, there are SO many slow web sites (at least one of which I know from personal experience).
Obviously round trips and DB access are issues a web developer should always keep in mind. But the trade-off between readability and what not to do because it slows things down is, for me, very unclear.
My questions are: 1. What else? 2. Is there a rule (preferably simple, but probably quite general) one should adhere to in order to make sure one's code does not slow things down too much?
General best practices as well as specific advices would be much appreciated. Advices based on experience would be especially appreciated.
Thanks.
Edit: A little clarification: general performance advice isn't hard to find. That's not what I'm looking for. I'm asking about two things: 1. While trying to make my code as readable as possible, when should I stop and say, "Now I'm hurting performance too much"? 2. Little, less-known things, like: is selecting just one column faster than selecting all? (Thanks Otávio)... Thanks again!
See the Stack Overflow discussion here:
What is the most important effect on performance in a database-backed web application?
The top voted answer was, "write it clean, and use a profiler to identify real problems and address them."
In my experience, the biggest mistake (using C#/asp.net/linq) is over-querying due to LINQ's ease-of-use. One huge query is usually much faster than 10000 small ones.
The other ASP.NET gotcha I see a lot is when the viewstate gets extremely fat and bloated. EnableViewState=false is your best friend, start every new project with it!
For web applications that have a database back end, it is extremely important that:
indexing is done properly
retrieval is done for what is needed (avoid select * when selecting specific fields will do - even more so if they are part of a covered index)
Also, whenever possible, an appropriate caching strategy can help performance.
Optimizing your code.
While making your code as readable as possible is very important, optimizing it is equally important. I've listed some items that will hopefully point you in the right direction.
For example in regards to Databases:
When you define the schema of your database, you should make sure that it is normalized and the indexes of fields are defined properly.
When running a query, specifically SELECT, only select the fields you need.
You should only make one connection to the database per page load.
Re-factor. This is probably the most important factor in producing clean, optimized code. Always go back and look at your code and see what can be done to improve it.
PHP Code:
Always test your work with a tool like PHPUnit.
echo is faster than print.
Wrapping your strings in single quotes (') instead of double quotes (") is faster, because PHP searches for variables inside "…" but not inside '…'. Use single quotes when your string does not contain variables that need evaluating.
Use echo's multiple parameters (or stacked parameters) instead of string concatenation (see the snippet after this list).
Unset or null your variables to free memory, especially large arrays.
Use strict code and avoid suppressing errors, notices and warnings; this results in cleaner code and less overhead. Consider having error_reporting(E_ALL) always on.
Incrementing an undefined local variable is 9-10 times slower than a pre-initialized one.
Methods in derived classes run faster than ones defined in the base class.
Error suppression with # is very slow.
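A tiny sketch of the echo and quoting tips above:

    <?php
    $name  = 'Rahul';
    $count = 3;

    // Single quotes: no variable interpolation, slightly cheaper to parse.
    // Multiple echo parameters instead of string concatenation:
    echo 'Hello ', $name, ', you have ', $count, ' new messages.', PHP_EOL;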
Website Optimization
A good place to start is here (http://developer.yahoo.com/performance/rules.html)
Performance is a huge topic and there are a lot of things that you can do to help improve the performance of your website. It's something that takes time and experience.
Best,
Richard Castera
Scott and Rcastera did a good job covering DB and querying optimization. To address your question from a HTML / CSS / JavaScript standpoint:
CSS:
Readability is key. CSS is rendered so fast that you should never feel it is necessary to sacrifice readability for performance. As such, focus on adding as many comments as necessary to document the code, why certain rules (like hacks) are there, and whatever else floats your comment boat. In CSS there are a few obvious rules to follow: 1) Use external stylesheets. 2) Limit the number of external stylesheets to limit GET requests.
HTML: Like CSS, HTML is read so fast by the browser that you should really only focus on writing clean code. Use whitespace, indentation, and comments to properly document your work. The only major things to remember in HTML are: 1) declare the <meta charset /> early within the head section; 2) follow this guy's advice to minimize browser reflows (this rule actually applies to CSS as well).
JavaScript: Most optimizations for JavaScript are really well known by now, so these will seem obvious: initializing variables outside of loops, pushing JavaScript to the bottom of the body so the DOM loads before scripts start tying up all of the resources, avoiding costly statements like eval() or with(). Not to sound like a broken record, but keeping a well-commented and easily readable script should still be a priority when developing JavaScript code, especially since you can just minify and compress away all the excess when you deploy it.
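For example, the "initialize outside the loop" advice in plain JavaScript (the selector and class names are arbitrary):

    // Cache the collection and its length once instead of re-reading them on every iteration
    var rows = document.querySelectorAll('.row');
    var len = rows.length;
    for (var i = 0; i < len; i++) {
        rows[i].className += ' ready';
    }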

Generating UI from DB - the good, the bad and the ugly?

I've read a statement somewhere that generating UI automatically from the DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this work.
However, I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - by "generating UI automatically" I mean that all forms, with all their controls, are generated completely automatically (at runtime or compile time), based perhaps on some metadata hints about how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored proc as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted it to. It worked great though and if you manage to follow your design you can create customizations of your software really easily. We also did have a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercially sold banking product and there is no source available.
It's OK for something tiny where all you need is a utilitarian way to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: a UI generated from code/DB is fine as a starting point; it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all. It all depends on your project's needs. A lot of software products (mind you, not projects but products) depend upon this model - so they don't have to rewrite their code / UI logic for different client needs. Clients can customize their UI the way they want using a designer form in the admin system.
I have used XML to preserve the metadata for this sort of thing. Some of the attributes which I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-check-box list)
multiselect (if yes then check box list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning - I did not care much and simply generated table/tr/td tags one below the other. However, if you want to implement this as well, you can have one more attribute called CssClass where you can define UI-specific properties (look and feel, positioning, etc.).
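A sketch of what one such field definition could look like in XML, using the attributes from the list above (the exact schema is up to you):

    <field name="country"
           friendlyname="Country"
           datatype="string" maxlength="50" required="yes"
           haspredefinedvalues="yes" multiselect="no"
           enabled="yes" sortkey="3" cssclass="form-row-left" />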
UPDATE: also note that a lot of e-commerce products follow this kind of dynamic UI when you enter product information - as their clients can be selling everything under the sun, from furniture to sex toys ;-) So instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend looking at the Entity-Attribute-Value model - it has its own pros and cons, but I feel it can be used quite well for your requirements.
In my opinion, there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", then the effort could be worth it, because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reporting, check the values, or add other things that are always built up with the same pattern. But if you have too many of these things, elements or attributes, then performance can go down rapidly.
Another thing that becomes interesting in bigger projects is that changes that have to occur in every form only have to be made in the engine, not in each form. This can save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI but an output file for one of the applications.
We use XML to define the structure, how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between, to interpret the DB information for the user. Another thing is that when you generate the UI from the DB, you end up displaying the inner workings of the system - something you normally don't want to do.
But it depends on where the DB came from. If it was created to exactly reflect the users' goals for the system - if the users' mental model of what the application should help them with is stored in the DB - then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist - trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it all in DB scripts. With such an approach, what kind of composite system will you end up with? When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent from the business layer. When so much is stored in the DB, there is a hard dependency on the UI and the system becomes a monolith. How do you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by even the most futuristic UI generator - unless your development platform is Notepad only.
There is a hybrid approach where the forms and so on are described in a database to ensure consistency server-side, and then compiled to ensure efficiency client-side on deploy.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, used by Microsoft in .NET).
This way is suitable for enterprise-wide software and can handle large customizations. But keep in mind that this approach sets a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generates the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions (naming, etc.), which made it possible for the UI to expect particular columns and infer information from the naming of the columns and tables. There is a need for meta-information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database, then there is probably a lot of room to improve. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically, if you want to be cheap and build a quick-and-dirty interface which mirrors your DB, then go for it. The main challenge is to find good-quality code that can do this, or to write it yourself.
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we had to spend too much time rewriting the CRUD forms instead of developing the useful stuff, like processing / reporting / analyzing data, giving alerts for decisions, etc.
For this reason, a long time ago I made a code generator. It became easier to re-generate the forms, with one simple restriction: keep the CSS class names. As simple as that!
The UI was always based on very "standard" code, controlled by custom CSS.
Whenever I needed to change the database structure, and therefore update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed: changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having a lot of work done by the code-generator was great!
I initially did it for the 2000s Microsoft ASP (Active Server Pages) & Microsoft SQL Server... so when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be way more helpful (and this approach does not exclude the SAVED generated code): no worries about changing the database, etc.
So the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I worked to add details on how to use it, but if something is still missing please let me know here!
You can change the database structure, and without a single line of code you can start editing data; more than that, you have an API available for CRUD operations.
I am still a fan of the "code-generator" approach, and I think it is just a flavor of the XML/XSLT approach that I used for DATA-SEED. I plan to add code-generator functionality.
