Recommended / Standard handling of Laravel Data Migrations

Laravel ships with database migrations for managing changes to the structure of a database, but what is the appropriate/recommended/standardized way to handle migration of actual data?
My question is, should the data migration take place directly inside the database migration file? Should it be a seeder? Should it be a job that is dispatched from within the database migration? Where should such logic go? These data migrations can become incredibly complex depending on what the database migration does, and in the spirit of maximizing readability and keeping responsibilities separate, I feel like the logic belongs somewhere else.
This question, I suppose, is more about OOP structure and practice as a whole than about Laravel specifically, but Laravel is the framework I'm working in right now, so I'm framing my question in that regard.

I've done this several times, and I do it right there in the migration up() and down() functions unless we're talking about millions of records. I agree with you, it feels like there should be a clearly defined function in the migration for this. We want the data changed before another migration on the table is triggered, so I feel it needs to be done right away.
Using your example, this is what a simple migration would look like for splitting the name into a first_name and last_name in the up() function:
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
use Illuminate\Support\Facades\DB;

class Test extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->string('last_name')->after('name');
            $table->string('first_name')->after('name');
        });

        DB::statement("UPDATE users SET first_name = SUBSTRING_INDEX(name, ' ', 1), last_name = SUBSTRING(name FROM INSTR(name, ' ') + 1)");

        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('name');
        });
    }
...
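For completeness, the matching down() could look roughly like this. This is a sketch, assuming you accept that the reverse is best-effort (e.g. multi-part last names won't be reconstructed exactly):

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::table('users', function (Blueprint $table) {
            // Nullable so the column can be added to a table that already has rows.
            $table->string('name')->nullable();
        });

        // Rebuild the single column from the two parts (best effort).
        DB::statement("UPDATE users SET name = CONCAT(first_name, ' ', last_name)");

        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn(['first_name', 'last_name']);
        });
    }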
If you have complex data changes, take a look at the $table->temporary() option to create temporary tables for doing data manipulation with SQL, and/or write console commands that are called from within the migration using Artisan::call().
$table->temporary(): https://laravel.com/docs/8.x/migrations#database-connection-table-options
Artisan::call(): https://laravel.com/docs/8.x/artisan#programmatically-executing-commands
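As a rough sketch of the console-command approach (the command name app:backfill-names is hypothetical, not part of the original answer), calling a command from within a migration can be as simple as:

use Illuminate\Support\Facades\Artisan;

// Inside the migration's up() method, after the schema changes:
Artisan::call('app:backfill-names'); // hypothetical command that backfills the new columns in chunks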

I prefer to separate data and structure migrations. I think that migration files should include only schema-related queries.
A migration could conditionally contain data changes if:
Data is dependent on the time of deployment/migration (I can't really think of a case, but I am sure there are some :)).
We are making a schema change that directly affects the data. For example: changing the type of a column, or creating a new key that has to be seeded before future migrations take place.
Additional reasons why I prefer to have data in seeder files:
Running migrations on production always carries certain risks. You can lower the risk of losing data by testing the deployment process and using some fancy CD processes, but the risk is always present.
Static data that you think will never change, will change. For example, you start a new project in 2010 and the project's database contains a 'countries' table with a list of countries and their properties. But in 2011 you get a new country: South Sudan. Will you create a new migration or just update the seeder?
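For the 'countries' example, a seeder that is safe to re-run could use updateOrInsert, so adding South Sudan is just another entry in the array. A sketch; the table and column names are illustrative:

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class CountrySeeder extends Seeder
{
    public function run()
    {
        $countries = [
            ['code' => 'SS', 'name' => 'South Sudan'],
            // ...
        ];

        foreach ($countries as $country) {
            // Idempotent: re-running the seeder updates existing rows instead of duplicating them.
            DB::table('countries')->updateOrInsert(['code' => $country['code']], $country);
        }
    }
}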

Adding to the answer by jon__o: temporary tables are basically identical to normal tables in the database and have many features that are useful for migrations. One useful pattern is a temporary mapping table based on a hash_id:
Schema::create('temp_mappings', function (Blueprint $table) {
    $table->temporary(); // thanks, Laravel
    $table->integer('id')->primary();
    $table->string('hash_id');
});

Related

What is the best way to copy data from one field to another when creating a migration for a new field?

I have a database table with a field of boolean type. As per a new requirement, the field should be changed to a small integer type.
To achieve this, I created a migration and added a script to the same migration file to copy the value from the old field to the new field. However, I think this is not the best approach. Can someone please advise on the best way to handle this scenario?
public function up()
{
    Schema::table('skills', function (Blueprint $table) {
        $table->tinyInteger('skill_type_id')->nullable()->comment('1 for advisory skills, 2 for tools, 3 for language & framework');
    });

    $skill_object = new \App\Model\Skill();
    $skills = $skill_object->get();

    if (count($skills)) {
        foreach ($skills as $skill) {
            $skill_type = 1;
            if ($skill->is_tool) {
                $skill_type = 2;
            }
            $skill_object->whereId($skill->id)->update(['skill_type_id' => $skill_type]);
        }
    }
}
You can do it with two migrations: the first one creates the new field, as you already did. The second is a migration with a raw statement to copy the value from the old field to the new field.
If you no longer need the old field, you can create a third migration that deletes it.
public function up()
{
    DB::statement('UPDATE skills SET skill_type_id = IF(is_tool, 2, 1)');
}
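And the optional third migration, if the old field is no longer needed, could be a minimal sketch along these lines (assuming the old column is is_tool, as in the question):

public function up()
{
    Schema::table('skills', function (Blueprint $table) {
        $table->dropColumn('is_tool');
    });
}

public function down()
{
    // Recreate the old column so the migration can be rolled back (the data itself is not restored).
    Schema::table('skills', function (Blueprint $table) {
        $table->boolean('is_tool')->default(false);
    });
}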
You can update the data in the following ways in your scenario:
Create separate routes and update the data after the migrations.
Create a seeder (with the same query as in the migration file above) and run the seeder.
But both of the above solutions are a little risky if you are doing this with your production database. If someone mistakenly hits the URL or runs the seeder multiple times, it's difficult to manage.
I believe the best way to solve your problem is to seed (modify) the data in the same migration file after modifying the schema, because a migration won't run again (even by mistake) once it has been migrated.
I believe you are doing it the correct way.
You are free to develop your own way to achieve this task, but as far as migrations are concerned, these are meant for controlling and sharing the application's database schema among the team, not the actual data ;)
You can create a separate seeder for this task.
It will keep your migration clean and easy to roll back if needed.
NOTE: Don't include this seeder class in DatabaseSeeder.
This kind of seeder class is only meant to update existing data after fixing the current functionality (I am assuming you have already updated the code as per your new requirement), so there is no need to worry about re-running the same seeder class.
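A minimal sketch of such a one-off seeder (the class name SkillTypeBackfillSeeder is illustrative), run explicitly with php artisan db:seed --class=SkillTypeBackfillSeeder rather than from DatabaseSeeder:

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class SkillTypeBackfillSeeder extends Seeder
{
    public function run()
    {
        // Same backfill as in the migration, kept out of DatabaseSeeder on purpose.
        DB::statement('UPDATE skills SET skill_type_id = IF(is_tool, 2, 1)');
    }
}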
Considering what I've seen on Laracasts and Stack Overflow, I would prefer to go with your way over the suggestions provided above, as I neither have to maintain an extra route nor an additional (third) migration.
The only improvement I can suggest here is to wrap the data changes in a database transaction, something like this:
// Create the new column first (schema change).
DB::transaction(function () {
    // Copy the data into the new column.
    DB::statement('UPDATE skills SET skill_type_id = IF(is_tool, 2, 1)');
    // Drop the old column.
    Schema::table('skills', function (Blueprint $table) {
        $table->dropColumn('is_tool');
    });
});

Current model is incompatible with old migrations

I have the following situation (I will describe it as a timeline):
I set up the project with a User model (and users table) in migration file A.
Some time later I added a many-to-many user_modules table, and I was forced to initialize this data during the schema update in migration file B. I did it with:
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
Some time later I needed to update the User model and table by adding soft deletes (a deleted_at column) in migration file C and the $dates = ['deleted_at'] field in the User model.
Then I kept developing the system and added more migrations, but at some point a new developer joined our team and had to build the DB schema from scratch. He ran php artisan migrate but got an error in migration file B:
[Illuminate\Database\QueryException (42S22)]
SQLSTATE[42S22]: Column not found: 1054 Unknown column
'users.deleted_at' in 'where clause' (SQL: select * from users
where users.deleted_at is null order by users.id asc limit 100
offset 0)
So the current User model is incompatible with migration file B.
How do I deal with this situation?
Where did I make a mistake, and what should I do to prevent such a situation in the future?
This is because of Soft Deletes. When you add the trait SoftDeletes to a model, it will automatically add where users.deleted_at is null to all queries. The best way to get around this is to add withTrashed() to your query in migration B.
To do this, change your query in migration B to look like the following. This removes the part where it tries to access the non-existent deleted_at column. This migration, after all, is not aware that you want to add soft deletes later on, so accessing all users, including those that are trashed, makes perfect sense.
User::withTrashed()->chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
You could also comment out the SoftDeletes trait on the User model before running the migrations, but that's a temporary fix since you'd need to explain it to all future developers. It can also be very handy to run php artisan migrate:fresh sometimes, and you don't want to have to remember to comment out the trait each time, so adding withTrashed() seems like the most desirable solution to me.
As a final note, I highly suggest NOT adding seeds to your migrations. Migrations should ONLY be used for schema changes. In cases like this, I would use a console command, or a combination of console commands.
For example, you could make a console command that gets triggered by php artisan check:user-modules. Within this command, you could have the following which will create a user module only if one does not yet exist.
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        if (!$user->userModule()->exists()) {
            $user->userModule()->create();
        }
    }
});
You should be able to run this command at any time since it won't overwrite existing user modules.
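A rough sketch of such a command class (the signature check:user-modules comes from the answer above; the class name and the App\User namespace are assumptions based on older Laravel conventions):

use Illuminate\Console\Command;
use App\User;

class CheckUserModules extends Command
{
    protected $signature = 'check:user-modules';
    protected $description = 'Create a user module for any user that is missing one';

    public function handle()
    {
        User::chunk(100, function ($users) {
            foreach ($users as $user) {
                // Only create a module if one does not exist, so the command is safe to re-run.
                if (!$user->userModule()->exists()) {
                    $user->userModule()->create();
                }
            }
        });
    }
}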
Alternative answer: in a situation where we need to generate or transform some data after a DB schema change, we should NOT use models (which can change independently in the future) but instead use plain inserts/updates:
DB::table('users')->chunkById(100, function ($users) {
    foreach ($users as $user) {
        DB::table('user_modules')->insert(
            ['user_id' => $user->id, 'module_id' => 1]
        );
    }
});
As written in the Laravel documentation, seeders are designed for seeding test data, not for data transformation, so migration files are probably a good place to put transformation code (which can generate or change production data in the DB after a schema update):
Laravel includes a simple method of seeding your database with test data using seed classes.
Add this to the query in your old migration:
use Illuminate\Database\Eloquent\SoftDeletingScope;

User::withoutGlobalScope(new SoftDeletingScope())->chunk(100, /* ... as before ... */);

Laravel Database migrations Schema builder custom column type

I am trying to create migrations with Laravel, but I am in a situation where I need a custom column type, POLYGON, which isn't included in the schema builder. So I want to know how I can create my own custom column type, beyond those that are already in the schema builder.
What I want would look like this in SQL statement:
alter table xxx add polygon POLYGON not null
Is it possible to do it myself, or am I forced to use a third-party library?
I know that I can do it like this:
DB::statement('ALTER TABLE country ADD COLUMN polygon POLYGON');
but it leads me to the error that the table doesn't exist.
There is no built in way to do this but you can achieve a good result with minimal code.
<?php

use Illuminate\Database\Schema\Grammars\Grammar;
use Illuminate\Support\Fluent;

// Put this in a service provider's register() function.
Grammar::macro('typePolygon', function (Fluent $column) {
    return 'POLYGON';
});

// This belongs in a migration.
Schema::create('my_table', function (Blueprint $table) {
    $table->bigIncrements('id');
    $table->addColumn('polygon', 'my_foo'); // addColumn($type, $name)
});
The key is to add a function with the name typePolygon to the Grammar class because this function is what determines the actual type used by the particular DBMS. We achieve this by adding a macro to the Grammar.
I have written a blog post about how to extend this solution to any custom type: https://hbgl.dev/add-columns-with-custom-types-in-laravel-migrations/
I assume you require spatial fields in your DB. I would look on Packagist.org and search for laravel-geo (or an equivalent package), which supports spatial column types including Polygon. You could then use standard Laravel migration syntax for your custom fields, e.g.
$table->polygon('column_name');
in the up() function of your migration file.

What to put in down() function if up() drops table?

Just starting to learn Laravel, so go easy. I made a couple of migration files to try things out. The first creates a table, the second adds a column, and the third drops the table. I'm curious to know what I should put in the down() function of the third migration, since you can't "undrop" a table. How do you handle rolling back a migration that drops a table?
The point of the down function is to restore the database to the same state it was in before you ran the up function. So if up() drops a table, then down() should recreate that table.
It is important to note that you will probably lose data if you do this. But migrations are intended to manage the schema of the database, not the contents. If you want to preserve the data, that's a backup.
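A minimal sketch of what that looks like (the widgets table and its columns are purely illustrative):

public function up()
{
    Schema::dropIfExists('widgets');
}

public function down()
{
    // Recreate the table with the same structure it had before being dropped.
    Schema::create('widgets', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->string('name');
        $table->timestamps();
    });
}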

Adding new columns to an Existing Doctrine Model

First of all, hats off to Stack Overflow for their great service, and to you guys for taking the time to answer our questions.
I am using Doctrine ORM 1.2.4 with CodeIgniter 1.7.3. I created a site with some required tables and populated them with data, only to realize at a later point that a specific table needs one more column.
The way I created the tables was by writing the models as PHP classes which extend Doctrine_Record.
Now I am wondering whether I just need to add the column to the setTableDefinition() method of the model that requires it and recreate that table, or whether there is another way that does this more easily. The former method requires me to drop the current table along with its data and recreate the table, which I do not want. Since Doctrine seems to be a very well-architected database framework, I believe this is a lack of knowledge on my part, and surely there must be a way to add new columns easily.
PS: I am not trying to alter a column with relations to other tables, just add a new column which is not related to any other table. Also, I create the tables in the database using Doctrine::createTablesFromModels(); when I alter a table by adding a new column and run this method, it shows errors.
Since you don't want to drop & recreate, use a Doctrine Migration.
The official docs here show many examples:
http://www.doctrine-project.org/projects/orm/1.2/docs/manual/migrations/en
Since you just want to add a field, look at their second code example as the most relevant; it looks like this:
// migrations/2_add_column.php
class AddColumn extends Doctrine_Migration_Base
{
    public function up()
    {
        $this->addColumn('migration_test', 'field2', 'string');
    }

    public function down()
    {
        $this->removeColumn('migration_test', 'field2');
    }
}
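Since your models define the schema via setTableDefinition(), you will probably also want to declare the matching column there so the model stays in sync with the database. A sketch, assuming a Doctrine 1.2 model (hasColumn() is the standard way to declare columns on a Doctrine_Record):

class MigrationTest extends Doctrine_Record
{
    public function setTableDefinition()
    {
        // Existing column definitions...
        $this->hasColumn('field2', 'string', 255);
    }
}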
