For context: this is an HTML app with little or no browser-side JavaScript. I can't easily change that, so I need to do this on the server.
CouchDB is built to avoid side effects, which is fair enough. But I can't conceive of any method, using shows, views, or lists, to change what is shown to a user on subsequent requests, or based on user objects, without writing data.
And can a GET request for a document result in the creation of a new record? I'm guessing not, as that would be a side effect.
But if you could, you could just create a log and then have a view that picks an advert from a set of documents describing adverts, where the pick is affected by the log entry written when a previous ad was shown.
I'm not actually going to show adverts on my site; I'm going to have tips, article summaries, and minor features that vary from page load to page load.
Any suggestions appreciated.
I've wrapped my head around how to work with the grain for the rest of the functionality I need, but this bit seems contrary to the way CouchDB works.
I think you're going to need a list function that receives a set of documents from the view and then chooses only one to return, either at random or by some other method. Better still, because you're inside a list function you gain access to the user's request details, including cookies (which you can also set, by the way). That sounds a lot more like what you want.
In addition, you could specify different Views for the list function to use at query-time. This means you could, say, have only random articles show up on the homepage, but any type of content show up on all others.
Note: You can't get access to the request in a map/reduce function and you'll run into problems if you do something like Math.random() inside a map function.
So a list function is the way to go.
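As a rough illustration, a list function along these lines could buffer the view rows and return a single one per request. It assumes the view's map function emits the HTML snippet to show as the row value; the design doc, view, and field names are placeholders, not from the question:
function (head, req) {
  // start() must be called before any getRow()/send(); it also lets you set
  // headers, including Set-Cookie if you want per-user variation via req.cookie.
  start({ headers: { "Content-Type": "text/html" } });
  var rows = [], row;
  while ((row = getRow())) {
    rows.push(row);
  }
  // Math.random() is fine here: list output is computed per request and never
  // stored, unlike the output of a map function.
  var pick = rows[Math.floor(Math.random() * rows.length)];
  send(pick ? pick.value : "<!-- nothing to show -->");
}
Queried as, say, /db/_design/content/_list/random-one/ads, it returns one advert, tip, or summary per page load without writing anything.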
http://guide.couchdb.org/draft/transforming.html
Look into the various methods of selecting a random document from a view. That should enable you to choose a random document (presumably representing an ad, tip, etc.) to display.
I can't seem to find any information on how best to get data into a component. To define the problem, let's say we have a user table in a database, and this table has an ID and maybe 30 fields with details about the user.
Now if I want to create a Vue component that shows a list of many users' details, let's just call it <user-details>. To show this on a page, would you:
1) Call the database to get all users you want to show along with their IDs, then do a for loop with <user-details id="xxx"> and have Vue make an AJAX call to some API to get the details?
2) OR, use the inline version <user-details id="xxx" name="user name" ...> with 30+ fields?
3) OR, have a specific Vue component for this user list, maybe it's users who did not validate their email or something, then <users-not-validated> and use AJAX?
The problem I see is that in case 1 you have already called the database for the IDs, then you call it again via AJAX with pretty much the same SQL.
In case 2, it's just annoying to fill out so many fields each time you use the component.
In case 3, you will end up with a TON of components...
How do you approach this?
You won't find such information because it's not Vue-related. Vue doesn't care what you use it for or how you structure your data; it aims to let you do anything you want.
Just as it doesn't care what your folder structure looks like (because, at its core, all it needs in order to render is a single DOM element), it also doesn't care how you organize your API, how you structure your application, your pages or even your components.
Obviously, having this amount of freedom is not always a good thing. If you look around, you'll notice that people who use Vue professionally have embraced certain patterns and structures which allow for better code reuse and more flexibility. Nuxt is one good example.
To anyone just starting with Vue, I recommend trying Nuxt as soon as possible, even if it's overkill for their little project, because they will likely pick up some good patterns.
Getting down to your specific question, in terms of data API architecture, you always have to ask yourself: what's the underlying principle?
The underlying principle is to make your application as fast as possible. To do that, you ideally want to fetch exactly as much data as you need to display, and no more. Therefore:
When getting the same data, if you have a choice, always try to lower the number of requests. You don't want each item in the list to initiate a call to the server when it is rendered. Make a single call for the entire list (only fetching what you display in the list view) and call for details only if the user requests them (presses the details button); see the sketch further down.
Adjust your pagination to suit how many items you can display on a screen, but also how long it takes to load a page. If it takes too long, lower the pageSize and give your items more padding. If you think about it, most people prefer a snappy app with fewer items per page (and generously padded items) to one which takes seconds to load each page and displays items so crammed they're hard to click or tap, or hard to follow across the row.
However, you have to take these guidelines with a grain of salt. In the vast majority of cases fetching full data in one call makes little to no difference in user experience. Many times the delays have to do with server cold-starts (first call to a server takes longer, as it needs to "wake it up" - but all subsequent calls of the same type are faster), with unoptimized images or with bad internet connectivity (as in, it works poorly regardless of whether you receive only the names or the full list of details).
Another aspect to keep in mind is that getting all the data at once is a trade-off. You do get a slower initial call, but afterwards you are able to do seamless animations between list view and detail view, as the data is already fetched and no more loading is required. If you handle the loading state gracefully, it's a viable option in many scenarios.
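To make the first guideline above concrete, here is a rough sketch of "one call for the list, details on demand" (Vue 2 Options API with fetch; the /api/users endpoints and field names are assumptions):
export default {
  data() {
    return { users: [], selectedUser: null, isLoadingUsers: true };
  },
  async created() {
    // One request that returns only the fields shown in the list view.
    this.users = await fetch('/api/users?fields=id,name').then(r => r.json());
    this.isLoadingUsers = false;
  },
  methods: {
    async showDetails(user) {
      // Fetch the full 30-field record only when the user asks for it.
      this.selectedUser = await fetch('/api/users/' + user.id).then(r => r.json());
    }
  }
};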
Last, but not least, your 2nd point's drawback does not exist. You can always bind all the details in one go:
<user-details v-bind="user" />
is equivalent to
<user-details :id="user.id" :name="user.name" :age="user.age" ... />
To give you a very basic example, the typical markup for your use-case would be:
<div v-if="isLoadingUsers" />
<user-list v-else :users="users">
  <user-list-item v-for="(user, key) in users"
    :key="key"
    v-bind="user"
    @click="selectedUser = user" />
</user-list>
<user-details-modal v-bind="selectedUser" />
It's obviously a simplification; you might opt not to have a user-details modal but instead a nice transform on the list item, making it grow and display more details, etc.
When in doubt, simplify. For example, only showing details for one selected item (and closing it when selecting another) will solve a lot of UI problems right off the bat.
As for the last question, whether or not to have different components for different states, the answer should come from answering a different question: how large should you allow your component to get? The upper limit is generally considered to be around 300 lines, although I know developers who don't go above 200 and others who have no problem with 500+ lines in a component.
When it becomes too large, you should extract a part of it (let's say the user-not-validated functionality) into a sub-component and end up with something like this inside the <user-detail> component:
<user-detail>
  ... common details (title, description, etc...)
  <div v-if="user.isValidated">
    ... normal case
  </div>
  <user-not-validated v-bind="user" v-else />
  ... common functionality (action bar, etc...)
</user-detail>
But these are sub-components of your <user-detail> component, extracted to help you keep the code organized. They shouldn't replace <user-detail> in its entirety. Similarly, you could extract user-detail header or footer components, whatever makes sense. Your goal should be to keep your code neat and organized; follow whatever principles make the most sense to you.
Finally, if I had to single out one helpful guideline for making code-architecture decisions, it would definitely be the DRY principle. If you end up not having to write the same code in multiple places in the same application, you're doing it right.
Hope you'll find some of the above useful.
I've got a relatively fast SproutCore app that I'm trying to make just a tad bit faster.
Right now, when the user scrolls my SC.ListView and brings into view some list items that have not yet been loaded from the server (say, through a relationship), the app automatically makes a call to the server to load those records. Even though this is fast, there is still a short period of time where my list items are blank.
I know that I can make them say "Loading..." or something like that (and I have), but I was wondering: is there a way to pre-load my "off-screen" records so that as the user scrolls, the list items are already loaded?
My ListItemViews will be fairly large (pixel-wise), so even loading double the amount of data is not going to be killer from an AJAX perspective, and it would be nice if as the user scrolled, the content was always loaded (unless they scroll SUPER-SUPER-fast, in which case I'm okay with them seeing a loading indicator).
I did find a workaround by adding the following to my SC.ListView, but I've noticed some major performance issues on mobile that are directly related to this change, so I was wondering if there is a better way.
contentIndexesInRect: function (rect) {
  rect.height = rect.height * 2;
  return sc_super();
}
Overriding contentIndexesInRect is the way I would do this. I would do less than double it though – I might get the result from sc_super() and then add a few extra items to the resulting index set. (I believe it comes back frozen, so you may have to clone-edit-freeze.) One or two extra may give you enough breathing room to get the stuff loaded, without contributing nearly as much to the apparent performance issue.
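That suggestion might look roughly like this; it's only a sketch, and it assumes SC.IndexSet's copy()/add()/freeze() API and an arbitrary two extra items:
contentIndexesInRect: function (rect) {
  var indexes = sc_super();
  if (!indexes) return indexes;
  // The set may come back frozen, so clone it before editing (clone-edit-freeze).
  if (indexes.get('isFrozen')) indexes = indexes.copy();
  // Add a couple of extra indexes past the visible range as breathing room.
  indexes.add(indexes.get('max'), 2);
  return indexes.freeze();
}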
I'm surprised that it results in major performance issues though. It sounds to me like your list items themselves may be heavier-weight than they need to be – for example, they may have a lot of bindings to hook and unhook. If that's what's going on, you may benefit more from improving their efficiency.
I think you would be better served to load the additional data outside of the context of what the list is actually displaying. For instance, forcing more list items to render in order to trigger additional requests does result in having the extra data available, but also adds several unnecessary elements to the DOM, which is actually detrimental to overall performance. In fact these extra elements are most likely the cause of the major slowdown on mobile once you get to a sufficient number of extras.
Instead, I would first ensure that your list item views are properly pooling, so that only the visible items are updating in place with as little DOM manipulation as possible. Second, I would lazily load additional data only after the required data has been requested. There are quite a few ways to do this depending on your setup. You might add some logic to a data source to trigger an additional request on each filled request range, or you might override itemViewForContentIndex in SC.CollectionView as the point at which to trigger the extra data. In either case, I imagine it could look something like this:
// …
prefetchTriggered: function (lastIndex) {
  // A query that will fetch more data (this depends totally on your setup).
  var query = SC.Query.remote(MyApp.Record, {
    // Parameters to pass to the data source so it knows what to request.
    lastIndex: lastIndex
  });

  // Run the query.
  MyApp.store.find(query);
},
// …
As I mention in the comments above, the structure of the request depends totally on your setup and your API so you'll have to modify it to meet your needs. It will work better if you are able to request a suitable range of items, rather than one-at-a-time.
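If you take the itemViewForContentIndex route, one hypothetical placement for the trigger (the threshold of five items, the _lastPrefetchAt guard, and the controller name are all assumptions) could be:
itemViewForContentIndex: function (idx, rebuild) {
  var loaded = this.get('content').get('length');
  // When rendering nears the end of the loaded range, prefetch the next chunk
  // (only once per loaded length, so duplicate requests aren't fired).
  if (idx >= loaded - 5 && this._lastPrefetchAt !== loaded) {
    this._lastPrefetchAt = loaded;
    MyApp.usersController.prefetchTriggered(loaded);
  }
  return sc_super();
}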
The title pretty much sums up my question.
When is it more efficient to generate a static page that a user can access, as opposed to using dynamically generated pages that query a database? In what situations would one be better than the other?
To serve up a static page, your web server just needs to read the page off the disk and send it. Virtually no processing will be required. If the page is frequently accessed, it will probably be cached in memory, so even the disk access will not be needed.
Generating pages dynamically obviously has more overhead. There is a cost for every DB access you make, no matter how simple the query is. (On a project I worked on recently, I measured a minimum overhead of 0.7ms for each query, even for SELECT 1;) So if you can just generate a static page and save it to disk, page accesses will be faster. How much faster? It just depends on how much work is being done to generate the page dynamically. We don't know what you are doing, so we can't comment on that.
Now, if you generate a static page and save it to disk, that means you need to re-generate it every time the data which went into generating that page changes. If the data changes more often than the page is actually accessed, you could be doing more work rather than less! For most sites, though, that's an unlikely situation.
More likely, the biggest problem you will experience from generating static pages and saving them to disk is coding (and maintaining) the logic for re-generating the pages whenever necessary. You will need to keep track of exactly what data goes into each page, and in every place in the code where data can be changed, you will need to invoke re-generation of all the relevant pages. If you forget just one, then your users may be looking at stale data some of the time.
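To illustrate the bookkeeping involved, here is a rough Node.js-flavoured sketch; the cache directory, helper names, and renderFromDb callback are assumptions, not something from the answer:
const fs = require('fs');
const path = require('path');

const CACHE_DIR = '/var/cache/pages';                    // where rendered pages live
const pageFile = slug => path.join(CACHE_DIR, slug + '.html');

async function getPage(slug, renderFromDb) {
  const file = pageFile(slug);
  if (fs.existsSync(file)) {
    return fs.readFileSync(file, 'utf8');                // cheap path: read it off disk
  }
  const html = await renderFromDb(slug);                 // expensive path: query + template
  fs.writeFileSync(file, html);                          // save it for the next request
  return html;
}

// Every code path that changes data feeding a page must remember to call this,
// or users will see stale content:
function invalidatePage(slug) {
  fs.rmSync(pageFile(slug), { force: true });
}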
If you mix dynamic generation per-request and caching generated pages on disk, then your code will be harder to read and maintain, because of mixing the two styles.
And you can't really cache generated pages on disk in certain situations -- like responding to POST requests which come from a form submission. Or imagine that when your users invoke certain actions, you have to send a request to a 3rd party API, and the data which comes back from that API will be used in the page. What comes back from the API may be different each time, so in this case, you need to generate the page dynamically each time.
Static pages (or better: static resources) are filled with content that does not change, or at least not often, and that does not support further queries: an About page, a Contact page, and so on.
In that case it doesn't make sense to hit a database to build these pages. On the other side we have data (e.g. in a database) that we want to query, or that we want to give the user the opportunity to query. In that case you give the user a page where they can specify the query, and you return a rendered page with the dynamically generated data.
In my opinion it depends on the result you want to present to the user: either it is fixed information, or it is the ability to query a data source. The first result is known before you do anything; the second (queried data) is known only once you have the query parameters, which means you don't know the result beforehand (it could be empty or invalid).
It also depends on your architecture, but given that GET requests should be idempotent, it should be easy to cache dynamic pages with a proxy and invalidate the cache whenever the data displayed on the cached path changes. In that case you can save a lot of time, because the system behaves as if the cached pages were static, except that instead of coming from the filesystem they come from memory, which is really fast.
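As a rough illustration of that idea (Express-style handlers; renderArticle, saveArticle, and purgeProxyCache are hypothetical helpers standing in for your own DB and proxy logic):
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.get('/articles/:slug', async (req, res) => {
  const html = await renderArticle(req.params.slug);      // hypothetical: query + template
  res.set('Cache-Control', 'public, max-age=300');        // let the proxy keep it for 5 minutes
  res.send(html);
});

app.post('/articles/:slug', async (req, res) => {
  await saveArticle(req.params.slug, req.body);           // hypothetical write
  await purgeProxyCache('/articles/' + req.params.slug);  // e.g. an HTTP PURGE to the proxy
  res.redirect('/articles/' + req.params.slug);
});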
Cheers
Laidback
Is it better practice to AJAX every form element separately (e.g. send a request onChange) or to collect all the data and submit it with a single click of a save button?
Essentially, auto-save or user-initiated-save?
I would generally say that a user-initiated save is the way to go for most web applications. If for no other reason, this is how users are used to interacting with web apps; familiarity and ease of use are extremely important. Not to mention it can cut down on unnecessary traffic.
This is not to say that auto-saving does not have its place, but it can often cause unnecessary traffic. For example, if I am auto-saving a contact form and fill out my name, then my email, then go back to my name to change it, that is already 3 requests sent with no benefit: extra work for no added advantage.
Once again, it has a lot to do with your application and where you are planning on using it. Inline edits often use auto-saving, and there I think it is useful, whereas for a contact or signup form it would not be a good idea.
I'd say that depends on the nature of your application and whether "auto-save" is a behaviour desired by your users.
"User initiated save" is what a user would expect from their experience with web forms nowadays - I would not deviate from that unless there's a good reason.
It depends on the following factors:
What kind of data are you trying to save? E.g. is it okay to save the data partially, or do you need to save it all at once?
How much data do you want to save? If you have many fields, you might want to send data in chunks (in the case of wizards) or save everything at once.
It's also a good idea to save data in the background in a temporary way for large forms, if the user may take a long time to fill them in (e.g. emails saved as drafts); see the sketch after this list.
It also depends on your web app and the way you have designed your forms. In some forms you may allow certain fields to be modified and saved in place, for example so that you can fetch additional data.
In most cases it would be good to have an explicit "Save" action for your data forms
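For the draft-saving idea mentioned above, a small sketch might look like this (the /api/drafts endpoint, form id, and two-second delay are assumptions):
let draftTimer = null;

function scheduleDraftSave(form) {
  clearTimeout(draftTimer);
  draftTimer = setTimeout(() => {
    // Batch edits: only save after the user pauses typing for two seconds.
    fetch('/api/drafts', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(Object.fromEntries(new FormData(form)))
    });
  }, 2000);
}

document.querySelector('#big-form')
  .addEventListener('input', e => scheduleDraftSave(e.currentTarget));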
I have a Perl script that generates a web page. It takes a non-trivial amount of time to run. I would like to be able to render a complete HTML table to the user so they know what results to expect, but fill in the details slowly as the Perl script generates them.
What approach should I be taking here?
My initial assumption was that I would be able to assign an ID to my various table data elements and then adjust their innerHTML properties as and when I got the results in. But it doesn't seem like I can perform such manipulations whilst the page is still loading.
There's no consistently reliable way to modify a web page as it's loading.
You can create the effect by initially loading a compact loading page, and then loading the rest of the content via AJAX calls back to the server to get the individual components.
You can then load those components as your AJAX calls are completed.
EDIT
As the comments have pointed out...while this would achieve the results you want, it's a terrible idea.
Search engine indexing is the primary reason. You're also relying on JavaScript to do a lot of heavy lifting... and it might not always be enabled.
One solution would be to progressively load the data via AJAX. You would need to do something like this:
Load the webpage
Using JavaScript, query the web server for table values
Populate table with received values
Loop until table filled
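In browser terms, the loop above might look roughly like this (the /results.json endpoint, the cell ids, the complete flag, and the one-second interval are all assumptions about how the Perl script could expose partial results):
function fillTable() {
  fetch('/results.json')
    .then(r => r.json())
    .then(data => {
      // Fill in whichever cells have values so far; the table skeleton with
      // placeholder cells was rendered up front by the Perl script.
      data.rows.forEach(row => {
        const cell = document.getElementById('cell-' + row.id);
        if (cell) cell.textContent = row.value;
      });
      // Keep polling until the script reports it has finished.
      if (!data.complete) setTimeout(fillTable, 1000);
    });
}

document.addEventListener('DOMContentLoaded', fillTable);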
Obviously this solution presents problems if the data is meant to be crawled, since crawlers don't take dynamic data loaded via JavaScript into account.
The other issue to consider is usability. Web users are not used to this type of progressive loading, so informing them that the data is still being loaded is very important. Some kind of accurate progress indicator would also help.
As suggested, AJAX is probably the way to go.
Create a basic HTML page with an empty div to hold your data, then fill in the div using repeated AJAX calls.
This page describes how to do this:
link text