I'm about to embark on a project where a user will be able to create their own custom fields. My question: what's the best approach for something like this?
Use case: we have medical records with attributes like first_name, last_name, etc. However, we also want a user to be able to log into their account and create custom fields. For instance, they may want to create a field called 'second_phone'. They will then map their CRM fields to the fields in this app so they can import their data.
I'm thinking of creating tables like 'field_sets' (has_many fields), 'fields', 'field_values', etc.
This seems like a fairly common requirement, hence why I thought I would first ask for opinions and/or existing examples.
This is where some modern schemaless databases can help you. My favourite is MongoDB. In short: you take whatever data you have and stuff a document with it. No hard thinking required.
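For example, a record with an extra user-defined field is nothing special to store; here's a minimal pymongo sketch (the connection string, database and collection names are just placeholders):

# Minimal pymongo sketch; 'second_phone' is a user-defined field and needs no schema change.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
records = client["crm"]["medical_records"]

records.insert_one({
    "first_name": "Jane",
    "last_name": "Doe",
    "second_phone": "555-0100",  # custom field, stored as-is
})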
If, however, you are in relational land, EAV (entity-attribute-value) is one of the classic approaches (there's a rough sketch of the table layout after the list below).
I have also seen people do these things:
predefine some "optional" fields in the schema and use them if necessary.
serialize this optional data to a string (using JSON, for example) and write it to a text blob.
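For the EAV route, the table layout from your question (field_sets / fields / field_values) roughly looks like the sketch below. I'm writing it with SQLAlchemy Core purely for concreteness (and leaving out field_sets); the column names are my guesses at what you'd need, and the shape translates directly to migrations in any stack:

# Rough EAV sketch; 'account_id', 'data_type', etc. are assumed names, adjust to taste.
from sqlalchemy import MetaData, Table, Column, Integer, String, Text, ForeignKey

metadata = MetaData()

medical_records = Table("medical_records", metadata,
    Column("id", Integer, primary_key=True),
    Column("first_name", String(100)),
    Column("last_name", String(100)),
)

fields = Table("fields", metadata,
    Column("id", Integer, primary_key=True),
    Column("account_id", Integer, nullable=False),     # which customer defined the field
    Column("name", String(100), nullable=False),       # e.g. 'second_phone'
    Column("data_type", String(20), nullable=False),   # 'string', 'date', ...
)

field_values = Table("field_values", metadata,
    Column("id", Integer, primary_key=True),
    Column("field_id", Integer, ForeignKey("fields.id"), nullable=False),
    Column("record_id", Integer, ForeignKey("medical_records.id"), nullable=False),
    Column("value", Text),   # stored as text, cast according to fields.data_type on read
)

The classic EAV trade-off applies: writes are easy, but reporting across many custom fields means lots of joins or pivoting.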
I'm working on a portal based on Orchard CMS. We're using Orchard to manage the "normal" content of the site, as well as to model what's essentially data for a small application embedded in it.
We figured that doing it that way is "recommended" for working in Orchard, and that it would save us duplicating a bunch of effort in features that Orchard already provides, mainly generating a good enough admin UI. This is also why we're using fields wherever possible.
However, for said application, the client wants to be able to display the data in the regular UI in a garden-variety datagrid that can be filtered, sorted, and paged.
I first tried to implement this by cobbling together a page with a bunch of form elements for the filtering, above a projection with filters bound to query string parameters. However, I ran into the following issues with this approach:
Filters for numeric fields crash when the value is missing - as would be pretty common to indicate that the given field shouldn't be considered when filtering. (This I could achieve by changing the implementation in the Orchard source, which would however make upgrading trickier later. I'd prefer to keep anything I haven't written untouched.)
It seems the sort order can only be defined in the administration UI; it doesn't seem to support tokens that would allow the sort field to be changed when querying.
So I decided to dump that approach and switched to trying to do this with just MVC controllers that access data using IContentQuery. However, there I found out that:
I have no clue how, if at all, it's possible to sort the query based on field values.
Or, for that matter, how / if I can filter.
I did take a look at the code of Orchard.Projections, however, how it handles sorting is pretty inscrutable to me, and there doesn't seem to be a straightforward way to change the sort order for just one query either.
So, is there any way to achieve what I need here with the rest of the setup (which isn't little) unchanged, or am I in a trap here, and I'll have to move every single property I wish to use for sorting / filtering into a content part and code the admin UI myself? (Or do something ludicrous, like create one query for every sortable property and direction.)
EDIT: Another thought I had was having my custom content part duplicate the fields that are displayed in the datagrids into NHibernate-backed properties accessible to query code, and whenever the content item is updated, copy values from these fields into the properties before saving. However, again, I'm not sure if this is feasible, or how I would be able to modify a content item just before it's saved on update.
Right, so I have actually done something similar here. I ended up going down both approaches, creating some custom filters for projections so I could manage filters on the frontend. It turned out pretty cool, but in the end projections lacked the raw querying power I needed (I had to filter and sort based on joins to aggregated tables, and I couldn't work out whether projections' style of query building would allow that). I then decided to move all my data into a record so I could query and filter it. That felt like the right way to go about it, since if I was building a UI to filter records it made sense that those records should be defined in code. However, I was sorting on users, where each site had different registration data associated with users, and (I think the following is a terrible affliction many Orchard devs suffer from) I wanted to build a reusable, modular system so I wouldn't have to change anything, ever!
It didn't really work out quite like I'd hoped, but to finally answer the question in your title: yes, you can query fields. Orchard.Projections builds an index that it uses for querying fields. You can access it in HQL, get the ids of the content items, then call GetMany to fetch them all. I did this several years ago and I can't remember much, but I do remember having a distinctly unenjoyable time with it, haha. So, once you have an NHibernate session, you can write your HQL:
select distinct civr.Id
from Orchard.ContentManagement.Records.ContentItemVersionRecord civr
join civr.ContentItemRecord cir
join cir.FieldIndexPartRecord fipr
join fipr.StringFieldIndexRecord sfir
This just shows you how to join to the field indexes. There are a few of them, one for each data type; the string one is what I'm joining here. They are all basically the same, with a PropertyName and a Value field. HQL allows you to add conditions to your join, so we can use that to join only the relevant field index records. If you have a field called Group attached directly to your content type, then it would be like this:
join fipr.StringFieldIndexRecord sfir
with sfir.PropertyName = 'MyContentType.Group.'
where sfir.Value = 'HR'
If your field is attached to a part, replace MyContentType with the name of your part. HQL is pretty awesome; you can learn more here: https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/queryhql.html But I dunno, it gave me a headache, haha. At least HQL has documentation, though, unlike Orchard's query layer. You can also always fall back to pure SQL when HQL won't do what you want; there is an option to run SQL queries from the NHibernate session.
Your other option is to index your content types with Lucene (easy if you are using fields) and then filter and search on that. I quite liked using that approach, although sometimes indexes get corrupted or need to be rebuilt, etc., so I've found it risky to rely on it for something that populates pages regularly.
And pretty much whatever you do, accept that the way to go is one query to filter and sort, then another call to GetMany on the ContentManager to fetch the content items. Good luck!
You can use indexing and the Orchard Search API for this. Sebastien demoed something similar to what you're trying to achieve at Orchard Harvest recently: https://www.youtube.com/watch?v=7v5qSR4g7E0
I just ran into an interesting situation regarding relationships and databases. I am writing a Ruby app, and for my database I am using PostgreSQL. I have a parent object "user" and a related object "thingies", where a user can have one or more thingies. What would be the advantage of using a separate table vs. just embedding the data within a field in the parent table?
Example from ActiveRecord:
using a related table:
def change
  create_table :users do |t|
    t.text :name
  end
  create_table :thingies do |t|
    t.references :user   # foreign key needed for the has_many / belongs_to association
    t.integer :thingie
    t.text :description
  end
end
class User < ActiveRecord::Base
  has_many :thingies
end
class Thingie < ActiveRecord::Base
  belongs_to :user
end
using an embedded data structure (multidimensional array) method:
def change
  create_table :users do |t|
    t.text :name
    t.text :thingies, array: true # example contents: [[thingie, description], [thingie, description]]
  end
end
class User < ActiveRecord::Base
end
Relevant Information
I am using Heroku and Heroku Postgres as my database. I am on their free tier, which limits me to 10,000 rows. This seems to push me toward the multidimensional array approach, but I don't really know.
Embedding a data structure in a field can work for simple cases, but it prevents you from taking advantage of relational databases. Relational databases are designed to find, update, delete and protect your data. With an embedded field containing its own wad-o-data (array, JSON, XML, etc.), you wind up writing all the code to do this yourself.
There are cases where the embedded field might be more suitable, but for this question I will use an example that highlights the advantages of the related table approach.
Imagine a User and Post example for a blog.
For an embedded post solution, you would have a table something like this (pseudocode - this is probably not valid DDL):
create table Users {
  id int auto_increment,
  name varchar(200),
  posts text[][]
}
With related tables, you would do something like
create table Users {
  id int auto_increment,
  name varchar(200)
}
create table Posts {
  id int auto_increment,
  user_id int,
  content text
}
Object Relational Mapping (ORM) tools: With the embedded post, you will be writing the code manually to add posts to a user, navigate through existing posts, validate them, delete them etc. With the separate table design, you can leverage the ActiveRecord (or whatever object relational system you are using) tools for this which should keep your code much simpler.
Flexibility: Imagine you want to add a date field to the post. You can do it with an embedded field, but you will have to write code to parse your array, validate the fields, update the existing embedded posts, etc. With the separate table, this is much simpler. In addition, let's say you want to add an Editor to your system who approves all the posts. With the relational example this is easy. For example, to find all posts edited by 'Bob' with ActiveRecord, you would just need:
Editor.find_by(name: 'Bob').posts
For the embedded side, you would have to write code to walk through every user in the database, parse every one of their posts and look for 'Bob' in the editor field.
Performance: Imagine that you have 10,000 users with an average of 100 posts each. Now you want to find all posts made on a certain date. With the embedded field, you must loop through every record, parse the entire array of all posts, extract the dates and check each against the one you want. This will chew up both CPU and disk I/O. With the database table, you can easily index the date field and pull out the exact records you need without parsing every post from every user.
Standards: Using a vendor specific data structure means that moving your application to another database could be a pain. Postgres appears to have a rich set of data types, but they are not the same as MySQL, Oracle, SQL Server etc. If you stick with standard data types, you will have a much easier time swapping backends.
These are the main issues I see off the top of my head. I have made this mistake and paid the price for it, so unless there is a super-compelling reason to do otherwise, I would use the separate table.
What if users John and Ann have the same thingies? The records will be duplicated, and if you decide to change the name of a thingie you will have to change two or more records. If the thingie is stored in a separate table, you only have to change one record. FYI: https://en.wikipedia.org/wiki/Database_normalization
Benefits of one to many:
Easier ORM (Object Relational Mapping) integration. You can use an ORM either way, but with the embedded approach you have to define and query the structure with native SQL. Having distinct tables is easier, and you can make use of the auto-generated mappings.
Your space limitation of 10,000 rows will go further with the one to many relationship in the case that 2 or more people can have the same "thingies."
Handle users and thingies separately. In some cases, you might only care about users or thingies, not their relationship with each other. Some examples: updating a username or thingy description, or getting a list of all thingies (or all users). Selecting from the single table can make this harder to work with.
Maintenance and manipulation are easier. In the case that a user or a thingy is updated (name change, email address update, etc.), you only need to update one record in its own table instead of writing update statements with "where user_id=?".
Enforceable database constraints. What if a thingy is not owned by anyone? Is the user column now nullable? It would have to be in the single table case, so you could not enforce a simple "not nullable" username, for example.
There are a lot of reasons, of course. If you are using a relational database, you should make use of the one-to-many relationship by separating your objects (users and thingies) into separate tables. Considering your limitation on the number of records and that your dataset is small (under 10,000 rows), you shouldn't feel the downside of normalized data.
The short truth is that there are benefits of both. You could, for example, get faster read times from the single table approach because you don't need complicated joins.
Here is a good reference with the pros/cons of both (normalized is the multiple table approach and denormalized is the single table approach).
http://www.ovaistariq.net/199/databases-normalization-or-denormalization-which-is-the-better-technique/
Besides the benefits others mentioned, there is also the matter of standards. If you are working on this app alone, then that's not a problem, but if someone else ever wants to change something, then the nightmare starts.
It may take that person a lot of time just to understand how it works, and modifying something like this will take even more time. That way, some simple improvement may become really time-consuming. And at some point you will be working with other people. So always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.
I am in the midst of designing an application following the MVC paradigm. I'm using the SQLAlchemy expression language (not the ORM), and Pyramid, if anyone is curious.
So, for a user class that represents a user on the system, I have several accessor methods for various pieces of data like the avatar_url, name, about, etc. I have a method called getuser which looks up a user in the DB (by name or id), retrieves the user's row, and encapsulates it in the user class.
However, should I have to make this lookup every time I create a user object? What if a user is viewing her control panel and wants to change avatars, and sends an XHR: isn't it a waste to create a user object and look up the user's row when they won't even be using the data retrieved, but simply want to change a subset of the columns? I doubt this lookup is negligible, despite indexing, because of the wait for I/O, correct?
More generally, isn't it inefficient to have to query a database and load all of a model class's data to make any change (even small ones)?
I'm thinking I should just create a separate form class (since every change made is via some form), and have specific form classes inherit from it, where these setter methods will be implemented. What do you think?
EX: Class: Form <- Class: Change_password_form <- function: change_usr_pass
I'd really appreciate some advice on creating a proper design; thanks.
SQLAlchemy ORM has some facilities which would simplify your task. It looks like you're having to reinvent quite a few wheels already present in the ORM layer: "I have a method called getuser which looks up a user in the DB (by name or id), retrieves the user's row, and encapsulates it in the user class" - this is what the ORM does.
With the ORM, you have a Session which, among other things, serves as a cache for ORM objects, so you can avoid loading the same model more than once per transaction. You'll find that you need to load the User object to authenticate the request anyway, so not querying the table at all is probably not an option.
You can also configure some attributes to be lazily loaded, so rarely-needed or bulky properties are only loaded when you access them.
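For illustration, deferring a bulky column with the ORM looks roughly like this (a made-up declarative User model, not your actual code):

# Sketch: 'about' is only fetched from the database the first time it's accessed.
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.orm import deferred
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    avatar_url = Column(String(255))
    about = deferred(Column(Text))  # rarely needed, so load it lazily

# Within one Session the identity map also kicks in:
# calling session.query(User).get(1) twice returns the same object and hits the DB once.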
You can also configure relationships to be eagerly loaded in a single query, which may save you from doing hundreds of small separate queries. I mean, in your current design, how many queries would the below code initiate:
for user in get_all_users():
    print(user.get_avatar_uri())
    print(user.get_name())
    print(user.get_about())
From your description it sounds like it may require 1 + (num_users * 3) queries. With SQLAlchemy ORM you could load everything in a single query.
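For example, with an ORM-mapped User (like the sketch above) and an already-configured session, the same loop becomes a single SELECT; and if some of that data lived in a related table, an eager-load option keeps it to one round trip (User.photos below is a made-up relationship, just to show the call):

# One query fetches every user row; plain column access is then free.
for user in session.query(User).all():
    print(user.avatar_url, user.name)

# For data in a related table, eager loading avoids one extra query per user:
# from sqlalchemy.orm import joinedload
# session.query(User).options(joinedload(User.photos)).all()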
The conclusion is: fetching a single object from a database by its primary key is a reasonably cheap operation, and you should not worry about it unless you're building something the size of Facebook. What you should worry about is making hundreds of small separate queries where one larger query would suffice. This is the area where SQLAlchemy ORM is very, very good.
Now, regarding "isn't it a waste to create a user object and look up the user's row when they won't even be using the data retrieved, but simply want to change a subset of the columns" - I understand you're thinking about something like
class ChangePasswordForm(...):
    def _change_password(self, user_id, new_password):
        session.execute("UPDATE users ...", user_id, new_password)

    def save(self, request):
        self._change_password(request['user_id'], request['password'])
versus
class ChangePasswordForm(...):
    def save(self, request):
        user = getuser(request['user_id'])
        user.change_password(request['password'])
The former example will issue just one query; the latter will have to issue a SELECT and build a User object, and then issue an UPDATE. The former may seem to be "twice as efficient", but in a real application the difference may be negligible. Moreover, you will often need to fetch the object from the database anyway, either to do validation (the new password can not be the same as the old password), permission checks (is user Molly allowed to edit the description of Photo #12343?) or logging.
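(If you do end up needing to skip the load on some hot path, the expression language you're already using covers it; users_table, user_id, hash_password and connection below are assumed names:)

# Sketch: targeted UPDATE via the SQLAlchemy expression language, no object loaded.
from sqlalchemy import update

stmt = (
    update(users_table)
    .where(users_table.c.id == user_id)
    .values(password=hash_password(new_password))
)
connection.execute(stmt)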
If you think that the difference of doing the extra query is going to be important (millions of users constantly editing their profile pictures) then you probably need to do some profiling and see where the bottlenecks are.
Read up on the SOLID principles, paying particular attention to the S (single responsibility), as it answers your question.
Create a single class to perform the user existence check, and inject it into any class that requires that functionality.
Also, you need to create a data persistence class to store the user's data, so that the database doesn't have to be queried every time.
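As a rough sketch of what that separation plus injection could look like (the class and method names here are just illustrative, not from Pyramid or SQLAlchemy):

# Hypothetical sketch: one class owns the existence check, consumers get it injected.
class UserLookup:
    def __init__(self, connection):
        self.connection = connection

    def exists(self, user_id):
        row = self.connection.execute(
            "SELECT 1 FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row is not None

class ChangePasswordForm:
    def __init__(self, user_lookup):
        self.user_lookup = user_lookup  # injected, so the form never queries the DB directly

    def save(self, request):
        if not self.user_lookup.exists(request['user_id']):
            raise ValueError("unknown user")
        # ... update the password here ...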
How should one structure validation, preparation and arrangement (etc) of data before dealing with the DB?
The data I expect to be passed might need to be validated (ex: category books actually exists) or contain conditional values (ex: sale price should only be set if ad = sale) or values that must be converted to ids (ex: category books must be converted to category_id 123).
I imagine that there are numerous ways to go about this, like clumping everything together, grouping by field (doing validation, prep, etc. together per field), or separating by action (validation, prep, etc.) and field.
Are there any established concepts for this topic, just like the concept of MVC exists? Something for achieving flexibility, ease of maintenance, or the like?
Anything relating to commonly used components of the model?
(I'm not sure if it helps but I'm currently using CodeIgniter / PHP)
In CodeIgniter, you can use the Form_Validation class with a callback method that you create: http://codeigniter.com/user_guide/libraries/form_validation.html#callbacks.
In your callback method you can check to see if things exist in the database, etc.
I am wondering how the models in CodeIgniter are supposed to be used.
Let's say I have a couple of tables in a menu items database, and I want to query information from each table in different controllers. Do I make a different model class for each of the tables and lay out the functions within them?
Thanks!
Models should contain all the functionality for retrieving and inserting data into your database. A controller will load a model:
$this->load->model('model_name');
The controller then fetches any data needed by the view through the abstract functions defined in your model.
It would be best to create a different model for each table, although it is not essential.
You should read up on the MVC design pattern; it is used by CodeIgniter and many other frameworks because it is efficient and allows code reuse. More info about models can be found in the CodeIgniter docs:
http://codeigniter.com/user_guide/general/models.html
CodeIgniter is flexible, and leaves this decision up to you. The user's guide does not say one way or the other how you should organize your code.
That said, to keep your code clean and easy to maintain I would recommend an approach where you try to limit each model to dealing with an individual table, or at least a single database entity. You certainly want to avoid having a single model to handle all of your database tables.
For my taste, CodeIgniter is too flexible here - I'd rather call it vague. A CI "model" has no spec and no interface; it can be things as different as:
An entity domain object, where each instance basically represents a record of a table. Sometimes it's an "anemic" domain object: each property maps directly to a DB column, with little behaviour and little or no understanding of object relationships and "graphs" (say, foreign keys in the DB are just integer ids in PHP). Or it can be a "rich" (or true) domain object with all the business intelligence, which also knows about relations: say, instead of $person->getAccountId() (returning an int) we have $person->getAccount(); perhaps it also knows how to persist itself (and perhaps the full graph of related objects, with some notion of "dirtiness").
A service object, related to object persistence and/or general DB querying: a DataMapper, a DAO, etc. In this case we typically have a single instance (singleton) of the object, with little or no state, usually one per DB table or per domain class.
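To make the contrast concrete, here's a rough sketch of the two flavours side by side (in Python rather than PHP just to keep it short; all names are mine):

# Entity / domain object: one instance roughly equals one row (the "anemic" variant).
class Person:
    def __init__(self, id, name, account_id):
        self.id = id
        self.name = name
        self.account_id = account_id  # just the raw foreign key

# Service object (DAO-ish): one stateless instance that loads and saves Persons.
class PersonDAO:
    def __init__(self, connection):
        self.connection = connection

    def find(self, person_id):
        row = self.connection.execute(
            "SELECT id, name, account_id FROM persons WHERE id = ?", (person_id,)
        ).fetchone()
        return Person(*row) if row else None

    def save(self, person):
        self.connection.execute(
            "UPDATE persons SET name = ?, account_id = ? WHERE id = ?",
            (person.name, person.account_id, person.id),
        )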
When you read, in CI docs or forums, about, say, the Person model, you can never know which kind of pattern you are dealing with. Worse: frequently it's an ugly mix of those fundamentally different patterns.
This informality/vagueness is not specific to CI; it's common to PHP frameworks, in my experience.