Is it better practice to AJAX every form element separately (e.g. send a request onChange, etc.) or to collect all the data and then submit it with one click of Save?
Essentially, auto-save or user-initiated-save?
I would generally say that a user-initiated save is the way to go for most web applications. If for nothing else, this is how users are used to interacting with web apps; familiarity and ease of use are extremely important in web applications. Not to mention it can cut down on unnecessary traffic.
This is not to say that auto-saving does not have its place, but it can often cause unnecessary traffic. For example, if I am auto-saving a contact form and I fill out my name, then my email, then go back to the name to change it, that is already 3 requests sent with no benefit - extra work for no added advantage.
Once again, I think it has a lot to do with your application and where you are planning on using it. Inline editing is something that often uses auto-saving, and there I think it is useful, whereas for a contact form or signup form it would not be a good idea.
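If you do go the auto-save route, debouncing each field's requests avoids most of that redundant traffic. A minimal sketch in plain JavaScript, assuming a hypothetical /api/save-field endpoint and a form with id contact-form:

// Debounced per-field auto-save: only the last change within one second is sent.
const timers = {};

function autoSave(field) {
  clearTimeout(timers[field.name]);
  timers[field.name] = setTimeout(() => {
    fetch('/api/save-field', {                 // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: field.name, value: field.value })
    });
  }, 1000);
}

document.querySelectorAll('#contact-form input').forEach((input) => {
  input.addEventListener('input', () => autoSave(input));
});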
I'd say that depends on the nature of your application and whether "auto-save" is a behaviour desired by your users.
"User initiated save" is what a user would expect from their experience with web forms nowadays - I would not deviate from that unless there's a good reason.
It depends on the following factors:
What kind of data are you trying to save? E.g. is it okay to save the data partially, or does it need to be saved all at once?
How much data do you want to save? If you have many fields, you might want to send the data in chunks (in the case of wizards) or save everything at once.
It's also a good idea to have data saved in the background, in a temporary way, for large forms that the user may take a long time to fill in (e.g. emails saved as drafts) - see the sketch after this list.
It also depends on your web app and the way you have designed your forms. In some forms you may allow certain fields to be modified and saved in place, so that you can fetch additional data, for example.
In most cases it would be good to have an explicit "Save" action for your data forms.
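For the drafts idea mentioned in the list, a simple approach (sketch only) is to persist the whole form periodically instead of hitting the server on every change; localStorage is used here, but a POST to a hypothetical /api/drafts endpoint would work the same way:

// Periodically store the whole form as a draft.
const form = document.querySelector('#big-form');   // hypothetical form id

setInterval(() => {
  const draft = Object.fromEntries(new FormData(form));
  localStorage.setItem('big-form-draft', JSON.stringify(draft));
}, 5000);

// Restore the draft on page load, if one exists.
const saved = localStorage.getItem('big-form-draft');
if (saved) {
  Object.entries(JSON.parse(saved)).forEach(([name, value]) => {
    const field = form.elements[name];
    if (field) field.value = value;
  });
}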
I can't seem to find any information on how best to put data into a component. To define the problem, let's say we have a user table in a database, and this table has an ID and maybe 30 fields with details about the user.
Now if I want to create a Vue component that shows a list of many users' details, let's just call it <user-details>. To show this on a page, would you:
1) Call the database to get all the users you want to show and get their IDs, then do a for loop with <user-details id="xxx"> and make Vue do an AJAX call to some API to get the details?
2) OR, use the inline version <user-details id="xxx" name="user name" ...> with 30+ fields?
3) OR, have some specific Vue component for this user list, maybe it's users who did not validate their email or something, then <users-not-validated> and use AJAX?
The problem I see is that in case 1, you already called the database for the IDs, then call the database once again via AJAX with pretty much the same SQL.
In case 2, it's just annoying to fill so many fields out each time you use the component.
In case 3, you will end up with a TON of components...
How do you approach this?
You won't find such information because it's not Vue related. Vue doesn't care what you use it for and how you structure your data. It aims to allow you to do anything you want.
Just as it doesn't care what your folder structure looks like (because, at its core, all it needs in order to render is a single DOM element), it also doesn't care how you organize your API, how you structure your application, your pages or even your components.
Obviously, having this amount of freedom is not always a good thing. If you look around, you'll notice people who use Vue professionally have embraced certain patterns/structures which allow for better code reuse and more flexibility. Nuxt is one such good example.
To anyone just starting with Vue, I recommend trying Nuxt as soon as possible, even if it's overkill for their little project, because they will likely pick up some good patterns.
Getting down to your specific question, in terms of data API architecture, you always have to ask yourself: what's the underlying principle?
The underlying principle is to make your application as fast as possible. In order to do that, ideally, you want to fetch exactly as much data as you want to display, but no more. Therefore:
when getting the same data, if you have a choice, always try to lower the number of requests. You don't want each item in the list to initiate a call to the server when it is rendered. Make a single call for the entire list (only fetching what you display in the list view) and call for details if the user requests it (presses the details button) - see the sketch after this list.
adjust your pagination to cater to how many items you can display on a screen, but also according to how long it takes to load a page. If it takes too long, lower the pageSize and allow your items more padding. If you think about it, most people prefer a snappy app with fewer items per page (and generously padded items) to one which takes seconds to load each page and displays items so crammed they're hard to click/tap on, or hard to follow in the list without losing the row.
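To illustrate the first point, a minimal sketch; the /api/users endpoints and the renderList/renderDetails helpers are hypothetical:

// One request for the entire list, fetching only the fields shown in the list view.
fetch('/api/users?fields=id,name,email')
  .then((res) => res.json())
  .then((users) => renderList(users));

// A second request only when the user asks for the details of one item.
function showDetails(id) {
  return fetch('/api/users/' + id)
    .then((res) => res.json())
    .then((details) => renderDetails(details));
}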
However, you have to take these guidelines with a grain of salt. In the vast majority of cases, fetching the full data in one call makes little to no difference in user experience. Many times the delays have to do with server cold starts (the first call to a server takes longer, as the server needs to "wake up" - but all subsequent calls of the same type are faster), with unoptimized images, or with bad internet connectivity (as in, it works poorly regardless of whether you receive only the names or the full list of details).
Another aspect to keep in mind is that getting all the data at once is a trade-off. You do get a slower initial call, but afterwards you are able to do seamless animations between the list view and the detail view, as the data is already fetched and no more loading is required. If you handle the loading state gracefully, it's a viable option in many scenarios.
Last, but not least, your 2nd point's drawback does not exist. You can always bind all the details in one go:
<user-details v-bind="user" />
is equivalent to
<user-details :id="user.id" :name="user.name" :age="user.age" ... />
To give you a very basic example, the typical markup for your use-case would be:
<div v-if="isLoadingUsers" />
<user-list v-else :users="users">
  <user-list-item
    v-for="(user, key) in users"
    :key="key"
    v-bind="user"
    @click="selectedUser = user" />
</user-list>
<user-details-modal v-bind="selectedUser" />
It's obviously a simplification; you might opt not to have a user details modal but instead a cool transform on the list item, making it grow and display more details, etc...
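For completeness, the script side backing that markup might look roughly like this (a sketch only; the /api/users endpoint is an assumption):

export default {
  data() {
    return {
      isLoadingUsers: true,
      users: [],
      selectedUser: null
    };
  },
  async created() {
    // Single call for the whole list; the template above toggles on isLoadingUsers.
    const res = await fetch('/api/users');     // hypothetical endpoint
    this.users = await res.json();
    this.isLoadingUsers = false;
  }
};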
When in doubt, simplify. For example, only showing details for one selected item (and closing it when selecting another) will solve a lot of UI problems right off the bat.
As for the last question: whether or not to have different components for different states, the answer should come from answering a different question: how large should you allow your component to get? The upper limit is generally considered around 300 lines, although I know developers who don't go above 200 and others who don't have a problem with 500+ lines in a component.
When it becomes too large, you should extract a part of it (let's say the user-not-validated functionality) into a sub-component and end up with this inside the <user-detail> component:
<user-detail>
  ... common details (title, description, etc...)
  <div v-if="user.isValidated">
    ... normal case
  </div>
  <user-not-validated v-else v-bind="user" />
  ... common functionality (action bar, etc...)
</user-detail>
But these are sub-components of your <user-detail> component, extracted to help you keep the code organized. They shouldn't replace <user-detail> in its entirety. Similarly, you could extract user-detail header or footer components, whatever makes sense. Your goal should be to keep your code neat and organized. Follow whatever principles make the most sense to you.
Finally, if I had to single out one helpful guideline when taking code architecture decisions, it would definitely be the DRY principle. If you end up not having to write the same code in multiple places in the same application, you're doing it right.
Hope you'll find some of the above useful.
I want to know if it is possible to pass data to another page without Querystrings or Session.
In other words:
Can I do this using delegates or any other way?
You can POST data to another page (this is slightly different than using querystrings but may be too similar for your liking). Any data POSTED to another web form can be read with Request.Form["name_of_control"].
In certain cases I've had to develop my own approach involving generating a GUID and passing that around from page-to-page. Then any page can pull my data structures associated with a given GUID from a static key/value structure... Similar to Sessions I suppose but I had more control over how it worked. It allowed for any user to have multiple simultaneous windows/tabs open to my application and each one would work without affecting or being affected by the others (because each were passing around a different GUID). Whatever approach you choose I do urge you to consider the fact that users may want to use your application via multiple windows/tabs at the same time.
The right tool for you depends on your needs. Remember, your challenge lies in making HTTP, which is inherently stateless, more stateful. This thread has a very good discussion on this topic: Best Practices for Passing Data Between Pages
The title pretty much sums up my question.
When is it more efficient to generate a static page that a user can access, as opposed to using dynamically generated pages that query a database? As in, in what situations would one be better than the other?
To serve up a static page, your web server just needs to read the page off the disk and send it. Virtually no processing will be required. If the page is frequently accessed, it will probably be cached in memory, so even the disk access will not be needed.
Generating pages dynamically obviously has more overhead. There is a cost for every DB access you make, no matter how simple the query is. (On a project I worked on recently, I measured a minimum overhead of 0.7ms for each query, even for SELECT 1;) So if you can just generate a static page and save it to disk, page accesses will be faster. How much faster? It just depends on how much work is being done to generate the page dynamically. We don't know what you are doing, so we can't comment on that.
Now, if you generate a static page and save it to disk, that means you need to re-generate it every time the data which went into generating that page changes. If the data changes more often than the page is actually accessed, you could be doing more work rather than less! But in most cases, that's a very unlikely situation.
More likely, the biggest problem you will experience from generating static pages and saving them to disk is coding (and maintaining) the logic for re-generating the pages whenever necessary. You will need to keep track of exactly what data goes into each page, and in every place in the code where data can be changed, you will need to invoke re-generation of all the relevant pages. If you forget just one, then your users may be looking at stale data some of the time.
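A rough sketch of that pattern in Node (the renderPage function and the cache location are assumptions, not from the original answer):

// Cache rendered pages on disk and invalidate them whenever the underlying data changes.
const fs = require('fs');
const path = require('path');

const CACHE_DIR = '/tmp/page-cache';           // hypothetical location

function cachedPath(pageId) {
  return path.join(CACHE_DIR, pageId + '.html');
}

// Serve from disk if a cached copy exists, otherwise render and store it.
function getPage(pageId, renderPage /* hypothetical: builds the HTML from the DB */) {
  const file = cachedPath(pageId);
  if (fs.existsSync(file)) {
    return fs.readFileSync(file, 'utf8');
  }
  const html = renderPage(pageId);
  fs.mkdirSync(CACHE_DIR, { recursive: true });
  fs.writeFileSync(file, html);
  return html;
}

// Call this from every code path that modifies data used by the page.
function invalidatePage(pageId) {
  fs.rmSync(cachedPath(pageId), { force: true });
}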
If you mix dynamic generation per-request and caching generated pages on disk, then your code will be harder to read and maintain, because of mixing the two styles.
And you can't really cache generated pages on disk in certain situations -- like responding to POST requests which come from a form submission. Or imagine that when your users invoke certain actions, you have to send a request to a 3rd party API, and the data which comes back from that API will be used in the page. What comes back from the API may be different each time, so in this case, you need to generate the page dynamically each time.
Static pages (or better: resources) are filled with content that does not change, or at least not often, and do not allow further queries on them: About page, Contact, ...
In this case it doesn't make any sense to query a database for these pages. On the other side, we have data (e.g. in a database) and want to query it, or give the user the opportunity to query it. In this case you give the user a page with the possibility to specify the query, and return a rendered page with the dynamically generated data.
In my opinion it depends on the result you want to present to the user. Either it is just information, or it is the possibility to query a data source. The first result is known before you do anything; the second (queried data) is only known once you have the query parameters, which means you don't know the result beforehand (it could be empty or invalid).
It depends on your architecture, but when you consider that GET requests should be idempotent, it should also be easy to cache dynamic pages with a proxy and invalidate the cache when something happens to the data displayed on the cached path. In this case one could save a lot of time, because the system behaves as if the cached pages were static, except that instead of coming from the filesystem they come from memory, which is really fast.
Cheers
Laidback
For context: this is an HTML app, with little or no browser-side JavaScript. I can't easily change that, so I need to do this on the server.
CouchDB is built to not have side effects. This is fair enough. But there seems to be no method that I can conceive of, with shows, views, or lists, to change what is shown to a user across subsequent requests, or based on user objects, without writing data.
And can a GET request for a document result in the creation of a new record? I'm guessing not, as that would be a side effect.
But if you could, you could just create a log and then have a view that picks an advert from a set of documents describing adverts, and which is affected by the change in the log when a previous ad was shown.
I'm not actually going to show adverts on my site; I'm going to have tips, article summaries and minor features that vary from page load to page load.
Any suggestions appreciated.
I've wrapped my head around how to work with the grain for the rest of the functionality I need, but this bit seems contrary to the way CouchDB works.
I think you're going to need a list function that receives a set of documents from the view and then chooses only one to return, either at random or some other method. However, because you're inside a list function you gain access to the user's request details, including cookies (which you can also set, btw.) That sounds more like what you want.
In addition, you could specify different Views for the list function to use at query-time. This means you could, say, have only random articles show up on the homepage, but any type of content show up on all others.
Note: You can't get access to the request in a map/reduce function and you'll run into problems if you do something like Math.random() inside a map function.
So a list function is the way to go.
http://guide.couchdb.org/draft/transforming.html
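A minimal sketch of such a list function (assuming a view whose rows carry the ad/tip documents as their values; the design-doc details are up to you):

function (head, req) {
  // Collect all rows emitted by the view.
  var rows = [];
  var row;
  while ((row = getRow())) {
    rows.push(row);
  }
  // Pick one per request. Unlike map functions, list functions run at
  // request time, so Math.random() is acceptable here.
  var pick = rows.length ? rows[Math.floor(Math.random() * rows.length)] : null;
  start({ headers: { 'Content-Type': 'application/json' } });
  send(JSON.stringify(pick ? pick.value : null));
}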
Look into the various methods of selecting a random document from a view. That should enable you to choose a random document (presumably representing an ad, tip, etc.) to display.
I've read a statement somewhere that generating UI automatically from DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this work.
However, I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - by "generating UI automatically" I mean that all forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on some hints in the metadata about how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored proc as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted it to. It worked great though and if you manage to follow your design you can create customizations of your software really easily. We also did have a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercially sold banking product, so there is no source available.
It's OK for something tiny where all you need is a utilitarian method to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that this machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: UI generated from code/db is fine as a starting point, it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all. It all depends on your project's needs. A lot of software products (mind you, not projects but products) depend upon this model - so they don't have to rewrite their code / UI logic for different client needs. Clients can customize their UI the way they want using a designer form in the admin system.
I have used XML for preserving the metadata for this sort of stuff. Some of the attributes which I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop down list / multi check box list)
multiselect (if yes then check box list, if no then drop down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning - I did not care much and simply generated table tr td tags one below the other - however, if you want to implement this as well, you can have one more attribute called CssClass where you can define UI-specific properties (look and feel, positioning, etc.).
UPDATE: Also note that a lot of e-commerce products follow this kind of dynamic UI when you want to enter product information - as their clients can be selling everything under the sun, from furniture to sex toys ;-) So instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend you look at the Entity-Attribute-Value model - it has its own pros and cons, but I feel it can work quite well with your requirements.
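To make the idea concrete, here is a rough sketch (not the original author's code) of turning that kind of field metadata into form controls with plain JavaScript; the field definitions are made up for illustration:

// Hypothetical metadata, mirroring the attributes listed above.
const fields = [
  { name: 'firstName', friendlyname: 'First name', datatype: 'string',
    maxlength: 50, required: true, enabled: true, sortkey: 1 },
  { name: 'country', friendlyname: 'Country', haspredefinedvalues: true,
    multiselect: false, values: ['US', 'UK', 'DE'], enabled: true, sortkey: 2 }
];

function renderForm(fields, form) {
  fields
    .filter((f) => f.enabled)
    .sort((a, b) => a.sortkey - b.sortkey)
    .forEach((f) => {
      const label = document.createElement('label');
      label.textContent = f.friendlyname;
      let input;
      if (f.haspredefinedvalues) {
        // A multiselect field would become a checkbox list; single select a drop-down.
        input = document.createElement('select');
        (f.values || []).forEach((v) => {
          const opt = document.createElement('option');
          opt.value = opt.textContent = v;
          input.appendChild(opt);
        });
      } else {
        input = document.createElement('input');
        if (f.maxlength) input.maxLength = f.maxlength;
      }
      input.name = f.name;
      input.required = !!f.required;
      form.appendChild(label);
      form.appendChild(input);
    });
}

// Assumes a <form> element exists on the page.
renderForm(fields, document.querySelector('form'));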
In my opinion, there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", then the effort could be worth it because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reporting, check the values, or add other things that are always built up with the same pattern. But if you have too many of these things, elements or attributes, then performance can go down rapidly.
Another thing that becomes interesting in bigger projects is that changes which have to occur in every form only have to be made in the engine, not in each form. This can save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI, but an output file for one of the applications.
We use XML to define the structure, how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB, you end up displaying the inner workings of the system, something you normally don't want to do.
But it depends on where the DB came from. If it was created to exactly reflect the users' goals for the system - if the users' mental model of what the application should help them with is stored in the DB - then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist - trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it in a DB script. With such an approach, what kind of composite system will you end up with? When a system serves different businesses - try modularization, selectively discovered components, restricted sharing of references. The UI should be replaceable and independent from the business layer. When you store so much in the DB, there is a hard dependency on the UI and the system becomes a monolith. How do you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by even the most futuristic UI generator - unless your development platform is Notepad only.
There is a hybrid approach where forms and the rest are described in a database to ensure consistency server-side, and then compiled on deploy to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to consist of dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, something used by Microsoft in .NET).
This way is suitable for enterprise-wide software and can handle large customizations. But keep in mind that this approach sets up a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generates the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions, such as naming, which made it possible for the UI to expect particular columns and infer information from the naming of the columns and tables. There is a need for meta information to help the UI display the data.
Generally it can work; however, the thing is that if your UI just mirrors the database, there is probably a lot of room to improve. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically, if you want to be cheap and do a quick-and-dirty interface which mirrors your DB, then go for it. The main challenge would be finding good-quality code that can do this, or writing it yourself.
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we have to spend too much time on rewriting the CRUD forms instead of developing the useful stuff, like processing / reporting / analyzing data, giving alerts for decisions etc...
For this reason, a long time ago I made a code generator. So it became easier to re-generate the forms, with one simple restriction: keep the CSS class names. Simple as that!
The UI was always based on very "standard" code, controlled by custom CSS.
Whenever I needed to change the database structure, and therefore update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having a lot of work done by the code-generator was great!
I initially did it for 2000s-era Microsoft ASP (Active Server Pages) & Microsoft SQL Server... so, when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be way more helpful (and this approach does not exclude saving the generated code): no worries about changing the database, etc.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I have worked to add details on how to use it, but if something is still missing, please let me know here!
You can change the database structure and, without writing a line of code, start editing data; on top of that, an API for CRUD operations is available.
I am still a fan of the "code-generator" approach, and I think it is just another flavor of the XML/XSLT approach I used for DATA-SEED. I plan to add code-generator functionality.