How to optimize a large Logic App with poor designer performance

I have built a large logic app in the Azure portal (about 485 actions), but when I open the Logic App Designer, scrolling is very slow and creating any new action takes a long time; overall performance is bad. Any advice?

You should consider separating the 485 actions into multiple (in your case, many) logic apps, each with a single responsibility. You can then chain them together using appropriate triggers.

You can split your logic app into several parts and implement different functions in each. For example, if we want to combine a user's name and email, we can do that in a child logic app ("childLogic") first. Then add childLogic to your parent logic app by creating a nested logic app.
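If you chain child logic apps through an HTTP Request trigger instead of (or in addition to) the designer's nested-workflow action, any caller can invoke the child by POSTing to its callback URL. Below is only a hedged sketch in Python; the URL and field names are placeholders, not taken from the question.

```python
# Minimal sketch: invoking a child logic app that exposes an HTTP Request trigger.
# The URL is a placeholder; use the callback URL shown on the child's trigger.
import requests

CHILD_CALLBACK_URL = "https://<child-logic-app-request-trigger-callback-url>"  # placeholder

def combine_name_and_email(name: str, email: str) -> dict:
    """Send the inputs to the child workflow and return its JSON response."""
    response = requests.post(
        CHILD_CALLBACK_URL,
        json={"name": name, "email": email},  # hypothetical payload shape
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```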

Related

Novice question about structuring events in Tcl/Tk

If one is attempting to build a desktop program with a semi-complex GUI, especially one in which users can open multiple instances of identical GUI components (such as having a "project" GUI and permitting users to open multiple projects concurrently within the main window), is it good practice to push the event listeners further up the widget hierarchy and use the event detail to determine which widget the event took place on, as opposed to placing event listeners on each individual widget?
For example, in doing something similar in a web browser, there were no event listeners on any individual project GUI elements. The listeners were on the parent container that held the multiple instances of each project GUI. A project had multiple tabs within its GUI, but only one tab was visible within a project at a time and only one project was visible at any one time; so it was fairly easy to use classes on the HTML elements and then the e.matches() method on event.target to act upon the currently visible tab within the currently visible project, independently of which project happened to be visible. Without any real performance testing, my unqualified impression as an amateur was that having as few event listeners as possible was more efficient, and I got most of that impression by reading information that wasn't very precise.
I read recently in John Ousterhout's book that Tk applications can have hundreds of event handlers and wondered whether or not attempting to limit the number of them as described above really makes any difference in Tcl/Tk.
My purpose in asking this question is solely to understand events better in order to start off the coding of my Tcl/Tk program correctly and not have to re-code a bunch of poorly structured event listeners. I'm not attempting to dispute anything written in the mentioned book and don't know enough to do so if I wanted to.
Thank you for any guidance you may be able to provide.
Having hundreds of event handlers is usually just a sign that there are a lot of different events potentially being sent around. Since you usually (but not always) try to make each binding as specific as possible, the actual event handler is usually really small, but might call a procedure to do the work. That tends to work out well in practice. My own rule of thumb is that if it is not a simple call, I'll put the work in a helper procedure; it's easier to debug that way. (The main exception to my rule is if I want to generate a break.)
There are four levels you can usually bind on (plus more widget-specific ones for canvas and text):
1. The individual widget. This is the one that you'll use most.
2. The widget class. This is mostly used by Tk; you'll usually not want to change it because it may alter the behaviour of code that you just use. (For example, don't alter the behaviour of buttons!)
3. The toplevel containing the widget. This is ideal for hotkeys. (Be very careful though; some bindings at this level can be trouble. <Destroy> is the one that usually bites.) Toplevel widgets themselves don't have this, because of rule 1.
4. all, which does what it says, and which you almost never need.
You can define others with bindtags… but it's usually not a great plan as it is a lot of work.
The other thing to bear in mind is that Tk supports virtual events, <<FooBarHappened>>. They have all sorts of uses, but the main one worth noting in a complex application is defining higher-level events that are occasionally triggered by a sequence of low-level events, and which widgets other than the originator may wish to take note of.
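The same binding levels and virtual events are visible from Tkinter, Tk's Python wrapper, which is used here purely for illustration; the Tcl bind and event generate commands map onto it directly. A minimal sketch:

```python
# Illustrative sketch using Tkinter (Tk's Python wrapper); the Tcl `bind` and
# `event generate` commands work the same way at each level.
import tkinter as tk

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()

# 1. Individual widget: the level you'll use most.
entry.bind("<Return>", lambda e: print("Return pressed in this entry"))

# 2. Widget class: mostly used by Tk itself; shown here only for completeness.
entry.bind_class("Entry", "<FocusIn>", lambda e: print("Some Entry got focus"))

# 3. Toplevel containing the widget: ideal for hotkeys.
root.bind("<Control-s>", lambda e: print("Save hotkey"))

# 4. 'all': rarely needed.
root.bind_all("<F1>", lambda e: print("Global help"))

# Virtual event: a higher-level event that widgets other than the originator can observe.
root.bind("<<ProjectOpened>>", lambda e: print("A project was opened"))
entry.bind("<Button-1>", lambda e: root.event_generate("<<ProjectOpened>>"))

root.mainloop()
```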

CF Project getting too big, what shall one do?

A simple billing system (on top of ColdBox MVC) is ballooning into a semi-enterprisey inventory + provisioning + issue-tracking + profit-tracking app. These areas each seem to be doing their own thing, yet they share many things, including a common pool of Clients and Staff (logins), and other intermingled data and business logic.
How do you keep such a system modular, from a maintenance, testability and re-usability standpoint?
single monolithic app? (i.e. new package for the base app)
ColdBox module? not sure how to make it 'installable' or what benefits it brings yet.
Java Portlet? no idea, just thinking outside the box
SOA architecture? through webservice API calls?
Any idea and/or experience you'd like to share?
I would recommend you break the app into modular pieces using ColdBox Modules. You can also investigate separating business logic into a RESTful ColdBox layer and joining the system that way. Again, it all depends on your requirements and needs at the moment.
Modules are designed to break monolithic applications into more manageable parts that can be standalone or coupled together.
Stop thinking about technology (e.g. Java Portals, ColdBox modules, etc.) and focus on architecture. By this I mean imagine how you can explain your system to an observer. Start by drawing a set of boxes on a whiteboard that represent each piece (inventory, clients, issue tracking, etc.) and then use lines to show interactions between those systems. This focuses you on separation of concerns, that is, grouping like functionality together. To start, don't worry about the UI; instead focus on algorithms and data.
If we're talking about MVC, that step focuses on the model. With that activity complete comes the hard part: modifying code to conform to that diagram (i.e. the model). To really understand what this model should look like, I suggest reading Domain Driven Design by Eric Evans. The goal is arriving at a model whose relationships are manageable via dependency injection. Presumably this leaves you with a set of high-level CFCs (services, if you will) with underlying business entities and persistence management. Their relationships are best managed by some sort of bean container / service locator, of which I believe ColdBox has its own; another example is ColdSpring.
The upshot of this effort is a model that's unit testable, independent of the user interface. If all of this is confusing, I'd suggest taking a look at Working Effectively with Legacy Code for some ideas on how to make this transition.
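As a language-agnostic illustration of "relationships manageable via dependency injection", here is a hedged sketch in Python (the class names are hypothetical, not from the question): the service depends on an abstraction passed into its constructor, so a fake can be injected in a unit test.

```python
# Hypothetical sketch of constructor (dependency) injection, so the model layer
# can be unit tested without the real database.
class ClientGateway:
    """Persistence abstraction for the shared pool of clients."""
    def find(self, client_id):
        raise NotImplementedError

class BillingService:
    """High-level service: depends on the abstraction, not a concrete gateway."""
    def __init__(self, clients: ClientGateway):
        self._clients = clients

    def invoice(self, client_id, amount):
        client = self._clients.find(client_id)
        return {"client": client, "amount": amount}

# In a unit test, inject a fake gateway instead of one backed by the real database.
class FakeClientGateway(ClientGateway):
    def find(self, client_id):
        return {"id": client_id, "name": "Test Client"}

assert BillingService(FakeClientGateway()).invoice(1, 99.0)["amount"] == 99.0
```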
Once you have this in place it's now possible to think about a controller (e.g. ColdBox) and linking the model to views through it. However, study whatever controller you choose carefully and pick it because of some capability it brings to the table that your application needs (caching is one example that comes to mind). Your views will likely need to be reimagined as well to interact with this new design, but what you should end up with is a system where the algorithms are divorced from the UI, making the views' job easy.
Realistically, the way you tackle this problem is iteratively. Find one system that can easily be teased out in the fashion I describe, get it under unit tests, validate it with people as well, and continue to the next system. While it is a tedious process, I can assure you it's much less work than trying to rewrite everything, which invites disaster unless you have a very good set of automated validation in place ahead of time.
Update
To reiterate, the tech is not going to solve your problem. Continued iteration toward more cohesive objects will.
Now as far as coupled data goes, with an ORM you've made a trade-off, and monolithic systems do have their benefits. Another approach would be giving one stateful entity a reference to another's service object via DI, so that you retrieve it through that. This would enable you to mock it for the purpose of unit testing and replace it with a similar service object and corresponding entity to facilitate reuse in other contexts.
In terms of solving business problems (e.g. accounting) reuse is an emergent property where you write multiple systems that do roughly the same thing and then figure out how to generalize. Rarely if ever in my experience do you start out writing something to solve some business problem that becomes a reusable component.
I'd suggest you invest some time in looking at Modules. It will help with partitioning your code into logical features whilst retaining the integration with the Model.
Being ColdBox, there are loads of docs and examples...
http://wiki.coldbox.org/wiki/Modules.cfm
http://experts.adobeconnect.com/p21086674/
You need to get rid of the MVC and replace it with an SOA architecture; that way the only thing joining the two halves is the service requests.
So on the server side you have the DAO and FACADE layers, and the client side can be an MVC or whatever architecture you want to use, sitting somewhere else. You can even have an individual client for each distinct business.
Even on the server side you can break the project down into multiple servers: what's common to all businesses, and then what's distinct to each of them.
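To make the DAO + facade split concrete, here is a hedged sketch in Python with hypothetical names: the DAOs own data access, the facade exposes coarse-grained operations, and each business-specific client only ever talks to the facade over service requests.

```python
# Sketch of the suggested server-side split (names are illustrative only).
class ClientDAO:
    def get(self, client_id):
        return {"id": client_id, "name": "Acme"}           # stand-in for a DB lookup

class InvoiceDAO:
    def list_for_client(self, client_id):
        return [{"client_id": client_id, "total": 120.0}]  # stand-in for a DB query

class BillingFacade:
    """The coarse-grained entry point each client application would call."""
    def __init__(self, clients: ClientDAO, invoices: InvoiceDAO):
        self._clients = clients
        self._invoices = invoices

    def account_summary(self, client_id):
        return {
            "client": self._clients.get(client_id),
            "invoices": self._invoices.list_for_client(client_id),
        }

print(BillingFacade(ClientDAO(), InvoiceDAO()).account_summary(1))
```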
The problem you're facing here luckily isn't unique.
The issue here seems not to be the code itself, or how to break it apart, but rather understanding that you're now into ERP design and development.
Knowing how best to develop and grow an ERP which manages the details of this organization in a logical manner is the deeper question I think you're trying to get at. The design and architecture of the code then flow from an understanding of the core functional areas you need.
Luckily we can study some existing ERP systems you can get hold of to see how they tackled some of the problems. There are a few good open-source ERPs, and what brought this tip to my mind is a full-cycle install of SAP Business One I oversaw (a small-to-mid-size ERP that bypasses the challenges of the big SAP).
What you're looking for is how others have solved the same ERP architecture problems you're facing. At the very least you'll get an idea of the trade-offs of modularization, where to draw the line between modules, and why.
Typically an ERP system handles everything from the quote, to production (if required), to billing, shipping, and the resulting accounting work all the way throughout.
ERPs handle two main worlds:
Production of goods
Delivery of service
Some businesses are widget factories, others are service businesses. A full-featured, out-of-the-box ERP will have one continuous chain/lifecycle of an "order", which gets serviced by a number of steps.
If we read a rough list of the steps an ERP can cover, you'll see the ones that apply to you. Those are probably the modules you have or should be breaking your app into. Imagine the following steps where each is a different document, all connected to the previous one in the chain.
Lead Generation --> Sales Opportunities
Sales Opportunities --> Quote/Estimate
Quote/Estimate --> Sales Order
Sales Order --> Production Order (build it, or schedule someone to do the work)
Production Order --> Purchase Orders (order required materials or specialists to arrive when needed)
Production Order --> Production Scheduling (what will be built, when, or who will get this done, when?)
Production Schedule --> Produce! (do the work)
Produced Service/Good --> Inventory Adjustments (convert any raw inventory to finished goods if needed, or get it ready to ship)
Finished Good/Service --> Packing Slip
Packing Slip items --> Invoice
Where system integrators come in is using the steps required, and skipping over the ones that aren't used. This leads to one thing for your growing app:
Get a solid data security strategy in place. Make sure you're comfortable that everyone can only see what they should. Assuming that is in place, it's a good idea to break the app apart into its major sections. Modules are our friends. The order you break them up in, however, will likely have a larger effect on what you do than anything else.
See which sections are general, (reporting, etc) that could be re-used between multiple apps, and which are more specialized to the application itself. The features that are tied to the application itself will likely be more tightly coupled already and you may have to work around that.
For an ERP, I have always preferred a transactional "core" module, which all the other transaction providers plug into (billing pushing the process along once it is defined).
When I converted a Lotus Notes ERP from the '90s to the SAP ERP, the Lotus Notes app was excellent; it handled everything as it should. There were some mini-apps built on the side that weren't integrated as modules, which was the main reason to get rid of it.
If you re-wrote the app today, with today's requirements, how would you have done it differently? See if there are any major differences from what you have. Let the app fight for your attention to decide what needs overhauling / modularization first. ColdBox is wonderful for modularization; whether you're using plugin-type modules or just well-separated code, you won't go wrong with it. It's just a function of the developer time and money available to get it done.
The first modules I'd build / automate unit testing on are the most programmatically complex. Chances are, if you're a decent dev, you don't need end-to-end unit testing as of yesterday. Start with the most complex, move on to the core parts of the app, and then spread into any other areas that may keep you up at night.
Hope that helped! Share what you end up doing if you don't mind, if anything I mentioned needs further explanation hit me up on here or twitter :)
#JasPanesar

Repository pattern with "modern" data access strategies

So I was searching the web looking for best practices when implementing the repository pattern with multiple data stores when I found my entire way of looking at the problem turned upside down. Here's what I have...
My application is a BI tool pulling data from (as of now) four different databases. Due to internal constraints, I am currently using LINQ-to-SQL for data access but require a design that will allow me to change to Entity Framework or NHibernate or the next data access du jour. I also hold steadfast to decoupled layers in my apps using an IoC framework (Castle Windsor in this case).
As such, I've used the Repository pattern to abstract the actual data access code from my business layer. As a result, my business object is coded against some I<Entity>Repository interface and the IoC Container is used to manage the actual implementation. In this case, I would expect to have a concrete Linq<Entity>Repository that implements the interface using LINQ-to-SQL to do the work. Later I could replace this with an EF<Entity>Repository with no changes required to my business layer.
Also, because I'm coding against the interface, I can easily mock the repository for unit testing purposes.
So the first question that I have as I begin coding the application is whether I should have one repository per DataContext or per entity (as I've typically done)? Let's say one database contains Customers and Sales with the expected relationship. Should I have a single OrderTrackingRepository with methods that work with both entities or have a separate CustomerRepository and a different SalesRepository?
Next, as a BI tool, the primary interface is for reporting, charting, etc and often will require a "mashup" of data across multiple sources. For instance, the reality is that one database contains customer information while another handles sales information and a third holds other financial information but one of my requirements is to display aggregated information that spans all three. Plus, I have to support dynamic filtering in the UI. Obviously working directly against the LINQ-to-SQL or EF DataContext objects (Table<Entity>, for instance) will allow me to pretty much do anything. What's the best approach to expose that same functionality to my business logic when abstracting the DAL with a repository interface?
This article (link text) indicates that EF4 has turned this approach around and that the repository is nothing more than an IQueryable returned from the EF DataContext, which brings up a whole other set of questions.
But, I think I've rambled on enough...
UPDATE (Thanks, Steven!)
Okay, let me put a more tangible (for me, at least) example on the table and clarify a few points that will hopefully lead to an approach I can better wrap my head around.
While I understand what Steven has proposed, I have a team of developers I have to consider when implementing such things and I'm afraid they will get lost in the complexity (yes, a real problem here!).
So, let's remove any direct tie-in with LINQ-to-SQL because I don't want a solution that is dependent upon the way L2S works, or even EF, for that matter. My intent has been to abstract away the data access technology being used so that I can change it as needed without requiring collateral changes to the consuming code in my business layer. I've accomplished this in the past by presenting the business layer with IRepository interfaces to work against. Perhaps these should have been named IUnitOfWork or, more to my liking, IDataService, but the goal is the same. These interfaces typically exposed methods such as Add, Remove, Contains and GetByKey, for example.
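As a language-agnostic illustration of that kind of interface (the question's actual context is C#/.NET; Python is used here purely for illustration, and the member names simply follow the ones listed above):

```python
# Sketch of a generic repository interface plus one concrete, in-memory implementation.
from typing import Protocol, TypeVar

TEntity = TypeVar("TEntity")
TKey = TypeVar("TKey")

class Repository(Protocol[TEntity, TKey]):
    def add(self, entity: TEntity) -> None: ...
    def remove(self, entity: TEntity) -> None: ...
    def contains(self, entity: TEntity) -> bool: ...
    def get_by_key(self, key: TKey) -> TEntity: ...

class InMemoryRepository:
    """A concrete implementation the business layer never sees directly; a
    LINQ-to-SQL or EF backed version would swap in behind the same interface."""
    def __init__(self):
        self._items = {}
    def add(self, entity): self._items[entity["id"]] = entity
    def remove(self, entity): self._items.pop(entity["id"], None)
    def contains(self, entity): return entity["id"] in self._items
    def get_by_key(self, key): return self._items[key]
```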
Here's my situation. I have three databases to work with. One is DB2 and contains all of the business information for a customer (franchise), such as their info and their Products, Orders, etc. Another, a SQL Server database, contains their financial history, while a third SQL Server database contains application-specific information. The first two databases are shared by multiple applications.
Through my application, the customer may enter/upload their financial information for a given time period. When entered, I have to perform the following steps:
1. Validate the entered data against a set of static rules. For example, the data must contain a legitimate customer ID value (in the case of an upload). This requires a lookup in the DB2 database to verify that the supplied customer ID exists and is current.
2. Next, I have to validate the data against a set of dynamic rules which are contained in the third (SQL Server) database. An example may be that a given value cannot exceed a certain percentage of another value.
3. Once validated, I persist the data to the second SQL Server database containing the financial data.
All the while, my code must have loosely-coupled dependencies so I may mock them in my unit tests.
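To make the shape of that workflow concrete, here is a rough sketch (Python, hypothetical names, not a proposed design) of an orchestrator that takes its three data dependencies through the constructor so each can be mocked in a unit test:

```python
# Hypothetical sketch of the three-step flow with injectable, mockable dependencies.
class FinancialDataImporter:
    def __init__(self, customer_lookup, rule_store, financial_store):
        self._customers = customer_lookup    # DB2: customer master data
        self._rules = rule_store             # SQL Server: application-specific dynamic rules
        self._financials = financial_store   # SQL Server: financial history

    def import_data(self, customer_id, entries):
        # 1. Static validation: the customer must exist and be current.
        if not self._customers.is_current(customer_id):
            raise ValueError(f"Unknown or inactive customer: {customer_id}")
        # 2. Dynamic validation: apply rules stored in the application database.
        for rule in self._rules.rules_for(customer_id):
            rule.check(entries)              # raises on violation
        # 3. Persist to the financial database.
        self._financials.save(customer_id, entries)
```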
As part of the analysis, I know that I have three distinct data stores to work with and about a half-dozen or so entities (at this time) that I am working with. In generic terms, I presume that I would have three DataContexts in my application, one per data store, with the entities exposed by the appropriate data context.
I could then create a separate I{repository|unit of work|service} for each entity that would be consumed by my business logic, with a concrete implementation that knows which data context to use. But this seems a risky proposition: as the number of entities increases, so does the number of individual repository/UoW/service types.
Then, take the case of my validation logic which works with multiple entities and, thereby, multiple data contexts. I'm not sure this is the most efficient way to do this.
The other requirement that I have yet to mention is on the reporting side where I will need to execute some complex queries on the data stores. As of right now, these queries will be limited to a single data store at a time, but the possibility is there that I might need to have the ability to mash data together from multiple sources.
Finally, I am considering the idea of pulling out all of the data access stuff for the first two (shared) databases into their own project and have been looking at WCF Data Services as a possible approach. This would give me the basis for a consistent approach for any application making use of this data.
How does this change your thinking?
In your case I would recommend returning IEnumerables for your data queries from the repo. I usually aggregate calls from multiple repos through a service class that represents the domain problem and encapsulates my business logic. To keep it clean, I try to keep my repos focused on the domain problem. I liken my DataContext to a repo, and extract an interface using a T4 template to make life easier for mocking. But there is nothing stopping you from using a traditional repo that encapsulates your calls. Doing it this way will allow you to switch ORMs at any stage.
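A rough illustration of that shape (Python rather than C#, with hypothetical names): the repositories return plain materialized sequences, and a domain service composes several of them and holds the business logic.

```python
# Sketch: repositories return materialized sequences (the IEnumerable idea), and a
# service class aggregates several repositories behind one domain operation.
class CustomerRepository:
    def active_customers(self):                     # returns a list, not a live query
        return [{"id": 1, "name": "Acme"}]

class SalesRepository:
    def sales_for(self, customer_id):
        return [{"customer_id": customer_id, "total": 120.0}]

class SalesReportService:
    def __init__(self, customers: CustomerRepository, sales: SalesRepository):
        self._customers = customers
        self._sales = sales

    def totals_by_customer(self):
        return {
            c["name"]: sum(s["total"] for s in self._sales.sales_for(c["id"]))
            for c in self._customers.active_customers()
        }

print(SalesReportService(CustomerRepository(), SalesRepository()).totals_by_customer())
```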
EDIT: IQueryable IS NOT THE ANSWER! :-)
I have also done a lot of work in this area, and INITIALLY came to the same conclusion; however, it is NOT a good solution. The point of the Repo is to abstract queries into discrete chunks of work. Exposing IQueryable is too ad hoc and raises some issues later down the line. You lose your ability to scale. You lose your ability to optimize queries (let's say I want to move to a highly optimized stored proc). You lose your ability to use IoC for the repo to switch out data access layers (switch the project from SQL to Mongo). You lose your ability to provide effective data caching in the Repo (which is a major strength of the Repo pattern). I would recommend taking a CLOSE look at WHY we have a Repo pattern. It isn't simply an "ORM" mapping layer. What made this really clear to me was the CQRS pattern.
Further to this, allowing the ad-hoc nature of IQueryable opens you up to ill-fitting reuse of queries. It is GENERALLY not a good idea to reuse queries, since from query to query you see slight deviations, which ends up with two by-products: queries become too broad and inefficient, and queries become riddled with unmaintainable IF THEN statements to cater for the deviations.
IQueryable is easy, but opens you up to an unmaintainable mess.
Look at this SO answer. I think it shows a simplified model of what you want. IQueryable<T> is indeed our new Repository :-). DataContext and ObjectContext are our Unit of Work.
UPDATE 2:
Here is a blog post that describes the model you might be looking for.
UPDATE 3
It would be wise to hide the shared databases behind a service. This will solve several problems:
This will make the database private to the service, which makes it much easier to change the implementation when needed.
You can put the needed validation logic (for database 1) in that service and can create tests for that validation logic in that project.
Clients accessing that service can assume correctness of the service, and its validation logic.
The result of this is that your application will send data to the service to validate it, call the service to fetch data, query its own private database (database 3), and join the data from the three data sources together locally. I've never been a fan of using cross-database or even cross-server (in your situation) database calls and letting the database join everything together. Transactions will be promoted to distributed transactions and it's hard to predict how much data the servers will exchange.
When you abstract the shared databases behind the service, things get easier (at least from your application's point of view). Your application calls services it trusts, which limits the amount of code in that application and the amount of tests. You still want to mock the calls to such a service, but that would be pretty easy. It should also solve the problem of validating over multiple data sources.
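A hedged sketch of that shape (Python, hypothetical names): the shared databases sit behind a service the application trusts, the application queries only its own private database, and the results are joined locally.

```python
# Sketch: shared data reached only through a service; private data queried directly.
class SharedDataService:
    """Stand-in for the client of the real service fronting the two shared databases."""
    def validate(self, entries):
        return True                                   # validation logic lives in the service
    def customer(self, customer_id):
        return {"id": customer_id, "name": "Acme"}    # shared data fetched via the service

class LocalAppDatabase:
    """The application's own private database (database 3)."""
    def rules_for(self, customer_id):
        return ["value_a <= 10% of value_b"]

def build_report(service, local_db, customer_id, entries):
    service.validate(entries)                         # 1. validate through the service
    customer = service.customer(customer_id)          # 2. fetch shared data from the service
    rules = local_db.rules_for(customer_id)           # 3. query the private database directly
    return {"customer": customer, "rules": rules, "entries": entries}   # 4. join locally
```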
Validation is always a hard part. I'm very familiar with the Validation Application Block, and love it for its flexibility. It isn't an easy framework, however, but you might take a peek at what you can do with it. For instance, I've written several articles about integration with O/RM tools and how to 'embed' a context (context as in DataContext/Unit of Work) in the Validation Application Block.
Please have a look at my IRepository pattern implementation using EF 4.0.
My solution has the following features:
supports connections to multiple dbs
One repository per entity
Support for execution of queries
Unit of work pattern implementation
Support for validating entities using VAB guidance
Common operations are kept at the base class level. Heavy use of OOP techniques for code re-usability and ease of maintenance.

What should my ASP.NET MVC Controllers represent - "real world" application

I have an established web application built as an ASP.NET 3.5 Web App. We recently modified it to mix MVC into the app for some new functionality.
Now that it's in there, we want to leverage MVC wherever possible to begin to "transform" the app from clunky webforms to a more maintainable and testable MVC app.
The question that just came up in adding some new functionality is what controller should be responsible for a certain action.
Let me be more detailed.
The scenario involves at least three major conceptual areas in our app. Users need to be able to set their PREFERENCE for a default MAP view while they are on a SEARCH screen. Preferences, Maps and Search are all major concepts in our system. Furthermore, this preference setting (basically, where should the map start out) may be used to set the initial map on more than one search page (it's basically a search preference).
The existing MVC controller in the app is a MAPCONTROLLER, with 3 actions that are responsible for generating HTML or JSON data to put on a map.
What we need to do now, is add an MVC route (controller + action) to allow the client view to save some information as their preference. Basically, whenever they are on the search page looking at a map, they can click a button that says "remember this as my default map view", and from then on, their map will always start with that view.
My question is (and I apologize, but I wanted to be very, very clear; I see too many questions without enough context to be helpful): what should my controller represent? I obviously have 3 major system areas involved. Would it be proper to create a new SEARCH or PREFERENCES controller with a SaveDefaultMapView action (no view required), or piggyback on the existing MAP controller, even though this new function is more about search and preferences than actual map generation? Should an MVC controller be aligned mostly with the screen (search page/search subsystem), the domain / data being manipulated (preferences), or the very specific visual element under scrutiny at the time the action is taken (the map)?
All of the examples and bootcamp projects are all well and good, but they are far too clean and simplified to apply to a huge legacy app. How does one design their MVC components around a system that incorporates many domain concerns into a single webpage?
Thanks all!
There are no hard and fast rules for how the controllers are organized. You organize them the way it makes most logical sense to you. This will require a bit of experimentation as you see how the routing works out, and you find the cleanest, most elegant design.
ASP.NET MVC is brilliantly agnostic in this respect. It doesn't care how you design your controller/route substructure, and it is flexible enough to handle most any design.
Your application design should be heavy on the Model side. Your controllers should be relatively small; if you find that you are stuffing a large amount of logic into the controllers, you should refactor that logic into the model, or add a service layer to contain it. Your controller layer is best thought of as a "patch panel": it is the place where you connect your incoming URLs, via routes, to your model/service layer and your view models/views.
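Framework aside, the "patch panel" idea looks roughly like this. This is a hedged sketch in Python with hypothetical names (it is not ASP.NET MVC code): the controller only translates the request into a call on the service layer and picks a result to return.

```python
# Sketch of the skinny controller / fat model split, using the question's
# "save default map view" scenario as the example.
class MapPreferenceService:                      # fat model / service layer
    def save_default_view(self, user_id, bounds):
        # validation and persistence would live here, not in the controller
        return {"user_id": user_id, "bounds": bounds}

class PreferencesController:                     # skinny controller: a patch panel
    def __init__(self, preferences: MapPreferenceService):
        self._preferences = preferences

    def save_default_map_view(self, request):
        result = self._preferences.save_default_view(request["user_id"], request["bounds"])
        return {"status": 200, "json": result}   # stand-in for a JsonResult / view model

controller = PreferencesController(MapPreferenceService())
print(controller.save_default_map_view({"user_id": 7, "bounds": [0, 0, 10, 10]}))
```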
You should definitely check out Project Areas, as this might be an appropriate mechanism to contain your three different system areas.
Thanks, Robert.
I guess I could rephrase a bit...what guidelines have others found to be useful for keeping their controller responsibilities organized and logical?
While my example above only touches 3 of our areas, I expect to eventually replace most/all of the application with MVC.
Furthermore, each of the 3 areas I mentioned has relationships to multiple other areas (e.g., maps can be used to plot several location-based entities, preferences can apply to any area of the system, and search, like maps, can work with several kinds of business entities, one at a time, not all together).
So the lines are blurry. I'm interested in hearing how others have found workable guidelines for controller organization.
Oh, and at the very least, we are sticking to the skinny controller/fat model paradigm!

Generating UI from DB - the good, the bad and the ugly?

I've read a statement somewhere that generating UI automatically from DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this.
However, I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering: is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify, by "generating UI automatically" I mean that all forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on some hints in metadata on how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored procs as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted. It worked great, and if you manage to follow your design you can create customizations of your software really easily. We also had a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercial banking product, so the source is not available.
It's OK for something tiny where all you need is a utilitarian method to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: UI generated from code/DB is fine as a starting point; it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all. It all depends on your project's needs. A lot of software products (mind you, not projects but products) depend upon this model, so they don't have to rewrite their code / UI logic for different clients' needs. Clients can customize their UI the way they want using a designer form in the admin system.
I have used XML for preserving metadata for this sort of thing. Some of the attributes which I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-check-box list)
multiselect (if yes then check-box list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning, I did not care much and simply generated table/tr/td tags one below the other. However, if you want to implement this as well, you can have one more attribute called CssClass where you can define UI-specific properties (look and feel, positioning, etc.).
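To make the idea concrete, here is a small hedged sketch in Python of rendering a form from field metadata like the attributes listed above; the field names and markup are illustrative only, not taken from the original admin system.

```python
# Minimal sketch: generate form markup from per-field metadata.
FIELDS = [
    {"name": "email", "friendlyname": "Email address", "datatype": "text",
     "maxlength": 100, "required": True, "enabled": True, "sortkey": 1},
    {"name": "country", "friendlyname": "Country", "haspredefinedvalues": True,
     "multiselect": False, "values": ["US", "UK", "IN"], "enabled": True, "sortkey": 2},
]

def render_field(f):
    if f.get("haspredefinedvalues"):
        options = "".join(f'<option>{v}</option>' for v in f["values"])
        return f'<label>{f["friendlyname"]}</label><select name="{f["name"]}">{options}</select>'
    required = "required" if f.get("required") else ""
    return (f'<label>{f["friendlyname"]}</label>'
            f'<input name="{f["name"]}" maxlength="{f.get("maxlength", "")}" {required}>')

def render_form(fields):
    # Only enabled fields, ordered by sortkey, one row below the other.
    visible = sorted((f for f in fields if f.get("enabled")), key=lambda f: f["sortkey"])
    return "\n".join(render_field(f) for f in visible)

print(render_form(FIELDS))
```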
UPDATE: Also note that a lot of e-commerce products follow this kind of dynamic UI when you want to enter product information, as their clients can be selling everything under the sun from furniture to sex toys ;-) so instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend you look at the Entity-Attribute-Value model; it has its own pros and cons, but I feel it can work quite well with your requirements.
In my opinion there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", then the effort could be worth it because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reporting, check values, or add other things that always follow the same pattern. But if you have too many of these things, elements or attributes, then performance can go down rapidly.
Another thing that becomes interesting in bigger projects is that changes that would otherwise have to be made in each form only have to be made in the engine, not in each form. This can save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI, but an output file for one of the applications.
We use XML to define the structure and how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB you will end up displaying the inner workings of the system, something that you normally don't want to do.
But it depends on where the DB came from: if it was created to exactly reflect the users' goals for the system, and the users' mental model of what the application should help them with is stored in the DB, then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist, trying to solve everything by writing stored procedures. Why have a UI at all, then? Try doing it all in DB scripts, and consider what kind of composite system you would end up with. When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent of the business layer. When you store so much in the DB, the UI has a hard dependency on it and the system becomes a monolith. How would you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by even the most futuristic UI generator, unless your development platform is Notepad only.
There is a hybrid approach where forms and everything else are described in a database to ensure consistency server-side, and then compiled on deploy to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, something used by Microsoft in .NET).
This approach is suitable for enterprise-wide software and can handle large customizations. But keep in mind that it sets up a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generated the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions (naming, etc.), which made it possible for the UI to expect particular columns and to infer information from the naming of the columns and tables. There is a need for meta-information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database, then there is probably a lot of room for improvement. A good UI should do much more than mirror a database; it should be built around human interaction patterns and preferences, not around the database structure.
So basically if you want to be cheap and do a quick-and-dirty interface which mirrors your DB then go for it. The main challenge would be to find good quality code that can do this or write it yourself.
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling that we had to spend too much time rewriting the CRUD forms instead of developing the useful stuff, like processing / reporting / analyzing data, giving alerts for decisions, etc.
For this reason, I made a code generator a long time ago. It became easier to re-generate the forms, with one simple restriction: keep the CSS class names. Simple as that!
The UI was always based on very "standard" code, controlled by a custom CSS.
Whenever I needed to change the database structure, and thus update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having a lot of work done by the code-generator was great!
I initially did it for the 2000s-era Microsoft ASP (Active Server Pages) and Microsoft SQL Server... so, when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be way more helpful (and this approach does not exclude SAVED generated code): no worries about changing the database, etc.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I worked to add details on how to use it, but if something is still missing please let me know here!
You can change the database structure, and without writing a line of code you can start editing data; on top of that, an API for CRUD operations is available.
I am still a fan of the "code generator" approach, and I think it is just another flavor of the XML/XSLT approach that I used for DATA-SEED. I plan to add code generator functionality.
