I'm creating a Laravel app but come from a WordPress background. In WordPress, it's reasonably straightforward to create custom fields that are repeatable, e.g. I have a field called "Task" that can be repeated X number of times and it will be stored in the database.
Is there a best practice way of doing this in Laravel?
I understand that JavaScript can be used to create repeatable form fields, and I could store that data as JSON in a MySQL database (using a recent version of MySQL), but I'd also like this repeatable data to hold relationships, e.g. relating a task to a day of the week (stored in another table).
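To make it concrete, here's roughly the relational structure I have in mind; the table, column, and model names are just examples I'm imagining, not anything I've settled on (and they'd live in separate files, shown together here only for illustration):

```php
// In a migration's up() method: a tasks table with a foreign key to a days table.
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('tasks', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->unsignedInteger('day_id'); // the day of the week this task belongs to
    $table->foreign('day_id')->references('id')->on('days');
    $table->timestamps();
});

// Eloquent models tying the repeatable tasks to a day:
class Day extends \Illuminate\Database\Eloquent\Model
{
    public function tasks()
    {
        return $this->hasMany(Task::class);
    }
}

class Task extends \Illuminate\Database\Eloquent\Model
{
    public function day()
    {
        return $this->belongsTo(Day::class);
    }
}
```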
Any advice or thoughts are much appreciated.
I'm endeavoring to develop an application that uses Oracle as the database back-end. The application will calculate several statistics from the various tables in the database. The front-end will most likely be a web application, and this front-end will display various charts and calculated statistics. Now, I imagine that it would be more efficient to perform the calculations in the database rather than in the service layer, because said calculations would need to be performed for every web request. That being the case, I'm not sure which mechanism to use (e.g. stored procedure, function, view).
To illustrate what I'm going for, suppose I want to keep statistics of student grades for many students. I would like to have a web interface that lets me view those statistics on a student-by-student basis and also on an all-inclusive basis. Some of the stats depend on aggregates (e.g. average, min, max) of all of the student grades, and some depend only on an individual student. In this situation, every time a record is added or updated, the aggregates would have to be recalculated.
So I am speculating that if I had a special table that held all of the calculated values I need, and a trigger (or triggers) to recalculate everything when a record is added or updated, then all I would need to do from a web-request point of view is have the service layer pull the desired values from this special table. I'm just not sure if this is the best way to go, so I am asking the community for any input/advice.
Note: Although I'm using Oracle, I'm open to using PostgreSQL or MySQL.
Thanks in advance
The scenario you are describing would be ideal for using materialized views. They can be designed to refresh automatically (and incrementally) every time the source data is updated by your application. The calculations would be built in to the view definition. No triggers required, and likely no stored procedures unless your calculations involve multiple steps. Check here: https://oracle-base.com/articles/misc/materialized-views and here: https://medium.com/oracledevs/lightning-fast-sql-with-real-time-materialized-views-12-things-developers-will-love-about-oracle-54bcc9eac358 for more info.
I have two different apps with two separate Laravel installs. The 1st one has some models, and its data is indexed with Scout (using the TNT search driver, which created index files on disk, if it's relevant).
Now I would like to use the first app's models and data (that part is done; I use the same model with a different connection to the first app's database), but I would also like to use the first app's index when calling the search() method.
Is that at all possible? If so, what configuration would be required?
At the moment I need to re-index all of my data in the second app, and it's a bit of a waste, because if I make any change in app 1, the index won't update in app 2.
The only idea I have at the moment (and I don't like it at all) would be to rsync the index files between the two apps, but there has to be a better way!
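For reference, the only search-related configuration I can see in app 2 is the TNT driver section of config/scout.php, where the index files get written; I'm guessing this storage path is the piece that would somehow have to point at a location shared with app 1 (the path below is just a placeholder):

```php
// config/scout.php (app 2) - TNTSearch driver section
'tntsearch' => [
    // Index files are written here; at the moment each app keeps its own copy.
    'storage' => storage_path('indexes'),
],
```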
tl;dr: I have two apps, and I want to use the model/data/search index of one from the other app. Is it at all possible?
In Laravel we have the concept of seeding data, used in conjunction with model factories, to populate data for the testing environment.
How should we proceed (where to put the code) when we need to populate data for production? For example, I might have a permission table and need to add some default permissions along with the schema creation. After a while I might need to add a new permission to my app. Should these data insertions live together with the migrations?
What mechanism should we use for inserting data? Models or plain data arrays? My problem with data arrays is that none of the model's business logic (such as casts or relationships) is applied.
I know that there are two discussions about this subject, but for me the solutions do not cover all the problems:
Laravel : Migrations & Seeding for production data
Laravel DB Seeds - Test Data v Sample Data
How should we proceed (where to put the code) when we need to populate data for production?
Our team makes a brand new migration for inserting production seeds (separate from the migration that creates the table). That way, if you need to add more seeds to your production data in the future, you can simply make a new standalone migration.
For example, your first migration could be 2016_03_05_213904_create_permissions_table.php, followed by your production seeds: 2016_03_05_214014_seed_permissions_table.php.
You could put this data in the same migration as your table creation, but in my opinion, the migration becomes less readable and arguably violates SRP. If you needed to add more seeds in the future, you would have two different "standards" (one group of seeds in your original migration, and another in a separate migration).
To answer your second question:
What mechanism should we use for inserting data?
I would always use your model's create() method for inserting production seeds. This ensures that any event listeners that are listening for your model's creation event properly fire. As you said, there could be extra code that needs to fire when your model is created.
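For example, a seed migration along those lines might look something like this (a rough sketch; it assumes a Permission model on top of the permissions table, with name mass assignable):

```php
<?php
// database/migrations/2016_03_05_214014_seed_permissions_table.php

use App\Permission; // adjust to wherever your model lives
use Illuminate\Database\Migrations\Migration;

class SeedPermissionsTable extends Migration
{
    public function up()
    {
        // Using the model's create() so casts, mutators and model events
        // (e.g. "created" listeners) fire as usual.
        Permission::create(['name' => 'posts.view']);
        Permission::create(['name' => 'posts.edit']);
    }

    public function down()
    {
        Permission::whereIn('name', ['posts.view', 'posts.edit'])->delete();
    }
}
```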
What is the purpose of $table->json('options'); as a field type in Laravel's database schema builder? I tried searching hard but couldn't find any relevant information on it. Could someone please explain its purpose with an example?
Some database engines - PostgreSQL being a major example - have JSON-friendly data types (MySQL only gained a native JSON type in 5.7; on older versions it is simply stored as TEXT). This can be handy for working with data (like the options example you cite) that might contain a large amount of schema-less or loosely-structured data.
http://www.postgresql.org/docs/9.4/static/datatype-json.html
http://www.postgresql.org/docs/9.3/static/functions-json.html
Instead of having 100+ columns for a bunch of on/off options for a model, you could store them in a JSON object in the database.
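For instance, a minimal sketch of such a column in a migration (the table and column names are just for illustration):

```php
// Inside a migration's up() method:
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('user_settings', function (Blueprint $table) {
    $table->increments('id');
    $table->unsignedInteger('user_id');
    // A native JSON column on PostgreSQL; plain text storage on MySQL
    // versions without a JSON type.
    $table->json('options');
    $table->timestamps();
});
```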
Sometimes it is useful, even with MySQL, to store data as JSON.
If you are building an application with user settings and you only require a handful of them, a few columns in your users or settings table will do the trick nicely. But what about when you have dozens and dozens of configuration options? Well, in these cases, you might consider encoding a bit of JSON and saving it to a single column.
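A rough sketch of that idea, continuing the hypothetical user_settings table and options column from the previous answer: cast the column to an array on the model and Laravel handles the JSON encoding and decoding for you.

```php
use Illuminate\Database\Eloquent\Model;

class UserSetting extends Model
{
    protected $fillable = ['user_id', 'options'];

    // json_encode on save, json_decode on access.
    protected $casts = ['options' => 'array'];
}

// Usage:
$settings = UserSetting::firstOrCreate(['user_id' => $user->id]);
$settings->options = ['notifications' => true, 'theme' => 'dark', 'digest' => 'weekly'];
$settings->save();
```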
I need to synchronize my relational database (Oracle or MySQL) to CouchDB. Does anyone have any idea how this is possible? If it is possible, how can we notify CouchDB of any changes that happen in the relational DB?
Thanks in advance.
First of all, you need to change the way you think about database modeling. Synchronizing to CouchDB is not just creating documents of all your tables, and pushing them to Couch.
I'm using CouchDB for a site in production, I'll describe what I did, maybe it will help you:
From the start, we have been using MySQL as our primary database. I had entities mapped out, including their relations. In an attempt to speed up the front-end, I decided to use CouchDB as a content repository. The benefit was to have fully prepared documents that contained all the relational data, so data could be fetched with much less overhead.
Because the documents can contain related entities - say a question document that contains all answers - I first decided what top-level entities I wanted to push to Couch. In my example, only questions would be pushed to Couch, and those documents would contain the answers, and possibly some metadata, such as tags, user info, etc. When requesting a question on the frontend, I would only need to fetch one document to have all the information I need at that point.
Now for your second question: how to notify CouchDB of changes. In our case, all the changes in our data are done using a CMS. I have a single point in my code which all edit actions call. That's the place where I hooked in a function that persists the object being saved to CouchDB. The function determines if this object needs persisting (i.e. is it a top-level entity), then creates a document of this object (think of some sort of toArray function), and fetches all its relations, recursively. The complete document is then pushed to CouchDB.
Now, in your case, the variables here may be completely different, but the basic idea is the same: figure out what documents you want saved and what they look like. Then write a function that composes these documents, and make sure it is called when changes are made to your relational database.
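As a very rough illustration of that flow (all names are made up and error handling is omitted): compose the document from the relational data, then push it to CouchDB with an HTTP PUT.

```php
// Sketch only: $question, $answers and $tags are assumed to be plain arrays
// already loaded from the relational database by whatever data layer you use.
function buildQuestionDocument(array $question, array $answers, array $tags): array
{
    return [
        '_id'     => 'question:' . $question['id'],
        'type'    => 'question',
        'title'   => $question['title'],
        'body'    => $question['body'],
        // Related entities are embedded so the frontend needs only a single fetch.
        'answers' => $answers,
        'tags'    => $tags,
    ];
}

function pushToCouch(array $doc, string $couchUrl = 'http://localhost:5984/content'): void
{
    // Note: overwriting an existing document also requires its current _rev
    // (see the update cycle sketched further down).
    $ch = curl_init($couchUrl . '/' . rawurlencode($doc['_id']));
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => 'PUT',
        CURLOPT_POSTFIELDS     => json_encode($doc),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}
```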
Notifying CouchDB of a change
CouchDB is very simple. Probably the easiest thing is directly updating an existing document. Two ways to implement this come to mind:
The easiest way is a normal CouchDB update: fetch the current document by id, modify it, then send it back to Couch with HTTP PUT or POST (a bare-bones sketch of this cycle follows below the second option).
If you have clear application-specific changes (e.g. "the views value was incremented") then writing an _update function seems prudent. Update functions are very simple: they receive an HTTP request and a document; they modify the document; and then CouchDB stores the new version. You write update functions in JavaScript and they run on the server. It is a great way to "compress" common actions into simpler (and fewer) HTTP queries.
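To make the first option concrete, here's a bare-bones PHP sketch of the fetch-modify-PUT cycle (the database name, document id, and the incremented field are placeholders; error and conflict handling are omitted):

```php
$base = 'http://localhost:5984/content';
$id   = 'question:42';

// 1. Fetch the current document; it carries the _rev we must send back.
$doc = json_decode(file_get_contents($base . '/' . rawurlencode($id)), true);

// 2. Modify it.
$doc['views'] = ($doc['views'] ?? 0) + 1;

// 3. PUT it back; CouchDB rejects the write with a 409 if the _rev is stale.
$context = stream_context_create([
    'http' => [
        'method'  => 'PUT',
        'header'  => 'Content-Type: application/json',
        'content' => json_encode($doc),
    ],
]);
file_get_contents($base . '/' . rawurlencode($id), false, $context);
```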