ColdFusion automatically save data - AJAX

Instead of the traditional posting of a form (with a save button) to save data to a database using ColdFusion, is there a sensible way of having information saved as the user exits each field?
Is this even good practice?

All you need to do, via JavaScript, is assign a change event to every field and have that event make an Ajax call to save the data in that particular field. You only need a single target URL that takes a primary key and the name of the field in question.
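A minimal sketch of that wiring, assuming a hypothetical save endpoint, a form id, and a data attribute holding the record's primary key (all three names are placeholders):

```javascript
// Attach a change handler to every field in the form; on change, POST the
// record's primary key, the field name, and the new value to one endpoint.
// '/saveField.cfm', '#myForm', and the data-id attribute are invented here.
document.querySelectorAll('#myForm input, #myForm select, #myForm textarea')
  .forEach(function (field) {
    field.addEventListener('change', function () {
      var payload = new URLSearchParams({
        id: document.getElementById('myForm').dataset.id, // primary key
        field: field.name,
        value: field.value
      });
      fetch('/saveField.cfm', { method: 'POST', body: payload })
        .catch(function (err) {
          console.error('Autosave failed for ' + field.name, err);
        });
    });
  });
```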
What you really need to consider, though, is the bandwidth required to support such a process. What is your current load? Concurrent users? Concurrent form usage?
If you have 100 people filling out a 10-field form, you currently have 100 HTTP POST requests to deal with. Can you handle 1,000 HTTP POST calls if every field saves on its own? What about 1,000 people at a time? 10k? 100k? And how many larger forms do you have?
The functionality is fairly trivial to implement; what is not trivial is the potential impact on your infrastructure.

Related

What is the best practice to get the required data into a VueJS component?

I'm building my first VueJS application, which is intended to be used by hundreds of people in the future. I tried to make the individual components as reusable and independent as possible. To achieve this I decided to let every component fetch its required data itself. This works fine, but I'm not sure if it's best practice. I could also pass the data between the components, or even use the two-way data binding functionality.
The sketch below describes one of the situations I have. Note that 1 account has 1..* users. As you can see, I need to fetch the accounts to display them in the accountOverviewComponent.
Currently I only fetch the accounts in the accountOverviewComponent, and fetch the users by the passed accountId when an account's edit button in the accountOverviewComponent is clicked. This way I don't fetch data I don't need at the time.
I could also include the users (who knows which data/relations will be added in future) in the fetch-accounts response as well, so I can pass all required data to the accountShowComponent when an account edit button is clicked. This way I save requests to the server, with the side note that I also fetch users of accounts I don't need. A possible disadvantage is that the account is updated in the accountShowComponent and not in the accountOverviewComponent (for example when the accountShowComponent is a modal on top of the accountOverviewComponent), so I need to pass the updated account back or re-fetch the accounts after a save.
As a third option I could do the same as in option 2, but with two-way data binding handling the data synchronization between the components. This will probably restrict the usage of the accountShowComponent to cases where it is used "on top" of a parent which contains the data.
I could also store the data in a Vuex store and update the store all the time. I have read that this is bad practice, as Vuex should only be used for data which is required across the SPA. I think Vuex is overkill in "simple" situations like this?
What is the best practice for the described situation? I have a bunch of comparable situations/scenarios in my application. Performance (also on mobile devices), scalability, and being "future proof" (extensibility/modularity) are important to me. Can someone help me out, because I'm a bit lost in the options I have?
UPDATE
The reason I think Vuex is overkill comes from this article, which makes total sense to me from a software engineering perspective (I may be wrong). As my components have a kind of "parent-child" relation, I can solve my "issue" easily by passing data (or using two-way data binding) and callback events.
The number one use case for storing data in a centralized store like Vuex is that the data must be accessible in multiple places of your application, by components which oftentimes are not related in any way (they are neither parents nor children of each other). An example of this would be certain user settings that configure how your application looks or what date format should be used, to name a concrete example.
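For the parent-child case described in the question, a minimal Vue 2-style sketch of option 2/3: data passed down via a prop and pushed back up via an emitted event (component, field, and handler names are all made up):

```javascript
// Parent fetches accounts once and passes the selected account down;
// the child edits a local copy and emits the result back up, so the
// overview stays in sync without needing a Vuex store.
Vue.component('account-show', {
  props: ['account'],
  data: function () {
    return { draft: Object.assign({}, this.account) }; // local working copy
  },
  methods: {
    save: function () {
      // Persist via your API here, then notify the parent.
      this.$emit('saved', this.draft);
    }
  },
  template:
    '<div><input v-model="draft.name"><button @click="save">Save</button></div>'
});

// In the accountOverviewComponent's template:
//   <account-show :account="selectedAccount" @saved="onSaved"></account-show>
// and in its methods:
//   onSaved(updated) { Object.assign(this.selectedAccount, updated); }
```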

Should I make multiple Ajax requests or combine into one

I am building some html reports. The user can choose to view additional data for individual elements of the report, or choose to view all additional data.
To view a single line of additional data, an Ajax request is made.
My question is: if a user clicks "View all additional data", should I make 20 or so asynchronous Ajax calls, or just make a single Ajax call that might take a little longer?
Aside from usability, are there any best practices as far as making lots of smaller Ajax requests vs one larger one?
I would say that normally you would want to make one call. You're sending a request to the server; while you are there, just get all the data you need before coming back. Depending on the situation, you could always cache some of the data (by storing it in a variable) to limit the amount of information you are retrieving.
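A sketch of the single-call approach with simple client-side caching (the endpoint URL and the shape of the response are placeholders):

```javascript
// Fetch details for all rows in one request and cache the result,
// so "View all" never refires the 20 individual calls.
var detailsCache = null;

function loadAllDetails(rowIds) {
  if (detailsCache) {
    return Promise.resolve(detailsCache); // reuse the earlier response
  }
  return fetch('/report/details?ids=' + rowIds.join(','))
    .then(function (res) { return res.json(); })
    .then(function (data) {
      detailsCache = data; // e.g. { "42": {...}, "43": {...} }
      return data;
    });
}

// Usage: one round trip instead of rowIds.length requests.
loadAllDetails([1, 2, 3]).then(function (details) {
  /* render each row's additional data */
});
```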

Attribute Based Access Control (ABAC) in a microservices architecture for lists of resources

I am investigating options to build a system to provide "Entity Access Control" across a microservices based architecture to restrict access to certain data based on the requesting user. A full Role Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints), however nothing has been implemented to restrict those actions against one data entity over another. Hence a desire for an Attribute Based Access Control (ABAC) system.
Given the requirement that the system be fit for purpose, and my own priority of following best practice by keeping security logic in a single location, I decided on the creation of an externalised "Entity Access Control" API.
The end result of my design was something similar to the following image I have seen floating around (I think from axiomatics.com).
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also provide ABAC on each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
1. Get the first page of results, send all of those to the EAC API, get the responses, drop the rejected ones, get more customers from the DB, check those... and repeat until you either fill a page of results or run out of customers in the DB (sketched after this list). Testing showed that for 14,000 records (absolutely within reason in my situation) it would take 30 seconds to get an API response for someone who had zero permission to view any customers.
2. On every request to the all-customers endpoint, ask the EAC API about every customer available to the original requesting user. Testing showed that for 14,000 records the response payload would be over half a megabyte for someone who had permission to view all customers. I could split it into multiple requests, but then you are just trading payload size for request spam, and the performance penalty doesn't go anywhere.
3. Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
4. Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite, considering the domain I am working within.
Note: I tested with 14,000 records just because it's the benchmark of our current state of data. It is entirely feasible that a single API could serve 100,000 or 1 million records, so anything that involves iterating over the whole data set, or transferring the whole data set over the wire, is entirely unsustainable.
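For clarity, here is roughly what the first rejected approach looked like (a sketch only; fetchCustomersFromDb and callEacApi are stand-ins for the real DB query and EAC API call):

```javascript
// Filter-after-fetch pagination: keep pulling batches from the DB and
// asking the EAC API which ones the user may see, until a full page of
// permitted customers is assembled. With few (or no) permitted records,
// this loops over the entire table - hence the 30-second worst case.
async function getPermittedPage(userId, pageSize) {
  var permitted = [];
  var offset = 0;
  while (permitted.length < pageSize) {
    var batch = await fetchCustomersFromDb(offset, pageSize); // hypothetical
    if (batch.length === 0) break; // ran out of customers entirely
    var decisions = await callEacApi(userId,                   // hypothetical
      batch.map(function (c) { return c.id; }));
    batch.forEach(function (c) {
      if (decisions[c.id] === 'PERMIT') permitted.push(c);
    });
    offset += batch.length; // one DB query + one EAC round trip per loop
  }
  return permitted.slice(0, pageSize);
}
```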
So, here lies the question: how do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst also being able to service requests that respond with multiple entities, with query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, it was decided that this is an entirely unsolvable problem and is simply a side effect of microservices (and more importantly, segregated entity storage).
If you want the benefits of a maintainable (as in, a single piece of externalised infrastructure) entity-level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.

Handling forms with many fields

I have a very large webform that is the center of my Yii web application. The form actually consists of multiple html form elements, many of which are loaded via AJAX as needed.
Because of the form's size and complexity, having multiple save or submit buttons isn't really feasible. I would rather update each field in the database as it is edited, by asynchronously AJAXing the new value to the server using jeditable or jeditable-like functionality.
Has anyone done anything like this? In theory I'm thinking I could set up an AJAX endpoint and have each control pass in its name, its new value, and the CRUD operation to perform. The endpoint can then route the request appropriately based on some kind of map and return the result. It just seems like someone must have solved this problem before, and I don't want to waste hours reinventing the wheel.
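Something like this dispatch map is what I have in mind, sketched in JavaScript for brevity (the real endpoint would be a Yii controller action; all names here are invented):

```javascript
// Dispatch table mapping an incoming control name to the model and
// attribute it should update; the endpoint looks up the entry and
// applies the requested CRUD operation.
var fieldMap = {
  'customer-name':  { model: 'Customer', attribute: 'name' },
  'customer-email': { model: 'Customer', attribute: 'email' }
};

function handleSave(request) { // request: { name, value, op, id }
  var target = fieldMap[request.name];
  if (!target) throw new Error('Unknown field: ' + request.name);
  // e.g. op === 'update' -> load the model by id, set the attribute, save
  return applyCrud(target.model, target.attribute, request); // hypothetical
}
```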
Your thoughts on architecture/implementation are appreciated, thanks for your time.
In a similar situation I decided to use CActiveForm only for easy validation by Yii standards (it can use Ajax validation), avoiding the "required" attribute, and of course to keep the logical structure of the form in good shape.
In general you're right. I manually used jQuery to generate AJAX requests (and any other actions) to the controller and processed them there as needed.
So you can use CRUD in the controller (analyzing the parameters in requests) and in your custom jQuery (using group selectors), but you can hardly do it in CActiveForm directly (and that's good: compactness mustn't always beat the logic and structure of your models).
Any composite solution mixing JavaScript into PHP will reduce the flexibility of a non-trivial application.
After sleeping on it last night, I found this post:
jQuery focus/blur on form, not individual inputs
I'm using a modified version of this on the client to update each form via AJAX, instead of updating each field. Each form automatically submits its data after two seconds of inactivity. The downside is that the client might lose some data if their browser crashes, but the benefit is that I can mostly use Yii's built-in controller actions and I don't have to write a lot of custom PHP. Since my forms are small, but there are many of them, this seems to be working well so far.
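Roughly, the client-side piece looks like this (a sketch; the autosave class name is invented, the two-second window is as described above):

```javascript
// Submit each form via AJAX after two seconds of inactivity: any input
// resets the timer, so a burst of edits produces a single request.
document.querySelectorAll('form.autosave').forEach(function (form) {
  var timer = null;
  form.addEventListener('input', function () {
    clearTimeout(timer);
    timer = setTimeout(function () {
      fetch(form.action, {
        method: 'POST',
        body: new FormData(form) // the controller sees an ordinary POST
      }).catch(function (err) { console.error('Autosave failed', err); });
    }, 2000);
  });
});
```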
Thanks Alexander for your excellent input and thanks Afnan for your help :)

Best practice for combining requests with possible different return types

Background
I'm working on a web application utilizing AJAX to fetch content/data and what have you - nothing out of the ordinary.
On the server side, certain events can happen that the client-side JavaScript framework needs to be notified about, and vice versa. These events are not always related to the user's immediate actions. It is not an option to wait for the next page refresh to include them in the document, or to stick them in some hidden fields, because the user might never submit a form.
Right now it is designed in such a way that events to and from the server ride along with the user's requests. For instance, if the user clicks a 'view details' link, this fires a request to the server to fetch some HTML or JSON with details about the clicked item. Along with this request, or rather the response, a server-side (invoked) event will return with the content.
Question/issue 1:
I'm unsure how to control the queue of events going to the server. They can ride along with user-invoked requests, but if those do not occur, the events will get lost. I imagine having a timer set up to send these events to the server in case the user does not perform some action. What do you think?
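Something like this is what I imagine (a sketch only; the flush interval and endpoint are arbitrary):

```javascript
// Outbound event queue: events ride along with the next user-driven
// request, and a fallback timer flushes the queue if the user is idle.
var pendingEvents = [];

function queueEvent(evt) {
  pendingEvents.push(evt);
}

function sendRequest(url, data) {
  var body = JSON.stringify({ data: data, events: pendingEvents });
  pendingEvents = []; // these events are now in flight
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: body
  });
}

// Fallback: flush queued events every 30 seconds if nothing else fired.
setInterval(function () {
  if (pendingEvents.length > 0) {
    sendRequest('/events/flush', null); // endpoint name is invented
  }
}, 30000);
```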
Question/issue 2:
With regard to the responses, some being requested as HTML and some as JSON, it is a bit tricky, as I would have to somehow wrap all this data to allow for both formalized (and unrelated) events and perhaps HTML content, depending on the request, to return to the client. Any suggestions? Anything I should be aware of, for instance when returning HTML content wrapped in a JSON bundle?
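For illustration, a sketch of the kind of envelope I mean, where every response has the same shape regardless of content type (all field names are invented):

```javascript
// Every response uses one envelope: formal events for the framework,
// plus an optional HTML fragment and/or structured data for the view.
// Example server response:
//   { "events": [ { "type": "sessionExpiring", "in": 120 } ],
//     "html": "<div class=\"details\">...</div>",
//     "data": null }

function handleResponse(envelope, container) {
  (envelope.events || []).forEach(handleServerEvent); // hypothetical dispatcher
  if (envelope.html) {
    container.innerHTML = envelope.html; // HTML rides inside the JSON
  }
  return envelope.data; // structured payload, if any
}
```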
Update:
Do you know of any framework that uses an approach like this that I can look at for inspiration (that is, a framework that wraps events/requests in a package along with data)?
I am tackling a similar problem to yours at the moment. On your first question, I was thinking of implementing some sort of timer on the client side that makes an asynchronous call for the content on expiry.
On your second question, I normally just return JSON representing the data I need, and then present it by manipulating the DOM. I prefer to keep things consistent.
As for best practices, I can't say for sure that what I am doing complies with any best practice, but it works for our present requirements.
You might also want to consider the performance impact of having multiple clients making asynchronous calls to your web server at regular intervals.
