Laravel - Flush Schema Commands In Migration

I am writing a migration in Laravel to try and change the primary key of a table to another column. In order to work around the fact that the dropPrimary method doesn't work when one has a custom defined constraint name in PostgreSQL, I currently have the following line in my migration:
DB::statement('ALTER TABLE users DROP CONSTRAINT "idx_16618_primary";');
However, I am hitting issues with the fact that DB::statement commands are executed immediately, whereas all the Schema:: commands essentially build queries that are put into a commands array and executed later. I am assuming this because when I look at something like the renameColumn method in Blueprint.php, I can see that it runs addCommand, which puts the command into an array:
protected function addCommand($name, array $parameters = [])
{
    $this->commands[] = $command = $this->createCommand($name, $parameters);

    return $command;
}
This means none of the preparation I do before the DB::statement call has actually executed yet. This preparation includes going to other tables and removing foreign key constraints that point to the primary key I am trying to drop.
Is there a way to trigger the execution of all of the built Schema commands midway through a migration (e.g. something like Schema::flush()), or do I have to either convert all my prep work into DB::statement commands, or split my current migration code across two migrations, so that the DB::statement command is executed after the prep work has been done?
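Worth noting: the commands array referred to above lives on a single Blueprint, and each Schema::table call builds and executes its blueprint as soon as that call returns, not at the end of the migration. So one option is to group the prep work into its own Schema::table block ahead of the raw statement. A minimal sketch of that idea (other_table, user_id, and new_key_column are placeholders, not names from the question):

public function up()
{
    // Executes immediately: the blueprint's commands run when this call returns.
    Schema::table('other_table', function ($table) {
        // Prep work, e.g. dropping a foreign key that references users.id.
        $table->dropForeign(['user_id']);
    });

    // By this point the prep commands above have already been run.
    DB::statement('ALTER TABLE users DROP CONSTRAINT "idx_16618_primary";');

    // Any follow-up schema changes can go in another Schema::table block.
    Schema::table('users', function ($table) {
        $table->primary('new_key_column');
    });
}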

Related

How to run Laravel Migrations, then Seeders, then more Migrations

I am re-building my Laravel app and, in the process, redesigning the data model.
I have a subset of my Migrations (35) I need to run to create the data model in the new app.
Then, I need to run some Seeders to populate the new tables.
Some of the new tables (12) have a column "old_id" where I place the "id" from the old data model to handle foreign keys/joins. I run a series of Update statements to change the foreign key values from the "old_id" to the new id.
Then, I want to run additional Migrations (12) that drop the "old_id" columns.
Here are the commands I'm currently running that do everything for me: clear the DB, run the migrations, populate data, and update keys.
php artisan migrate:reset
php artisan migrate:fresh --seed --seeder=DatabaseSeeder
I'm trying to find a way to only run a portion of my Migrations prior to executing DatabaseSeeder, and then run the remaining Migrations after (or as the last step of) the DatabaseSeeder.
Contents of DatabaseSeeder::class:
public function run()
{
    $this->call([
        // Seeders to populate data
        UserSeeder::class,
        AssociationSeeder::class,
        // ... lots more classes ...

        // Last Seeder class executes Update statements to update foreign keys
        DatabaseUpdateSeeder::class,
    ]);
}
Thank you!
I ended up following the advice of the comment on the original post. While my previous "answer" also works, this is the right way to do it: separate the round-one and round-two Migrations, and don't execute Seeders from a Migration.
Here are my two commands (it could be three if I wanted a separate command in the middle that only runs the Seeder, but appending the Seeder to the first command is supported syntax):
php artisan migrate:fresh --path=/database/migrations/batch-one --seed --seeder=DatabaseSeeder
php artisan migrate --path=/database/migrations/batch-two
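For this to work, the migration files themselves have to live under the folders passed to --path (the paths are relative to the project root), roughly:

database/migrations/batch-one/   <- round-one migrations (create tables, including the old_id columns)
database/migrations/batch-two/   <- round-two migrations (drop the old_id columns)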
I was able to achieve this by creating a Migration that calls my DatabaseSeeder. Here is how: https://owenconti.com/posts/calling-laravel-seeders-from-migrations
Then, after that Migration, I just have one or more additional Migrations to drop the "old_id" columns.
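For reference, the pattern from that article boils down to invoking the seeder from a migration's up() method. A rough sketch (not the article's exact code; --force is needed so the seeder also runs in production):

use Illuminate\Support\Facades\Artisan;   // at the top of the migration file

public function up()
{
    // ... schema changes for this migration ...

    // Run the seeder once the schema changes above are in place.
    // Assumes the seeder class is imported (e.g. Database\Seeders\DatabaseUpdateSeeder).
    Artisan::call('db:seed', [
        '--class' => DatabaseUpdateSeeder::class,
        '--force' => true,
    ]);
}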
I also want to call out the comment from Tim on my original post, which offers an alternative that would also work but requires multiple statements from the command line.

Laravel: Running a database migration on a production server

I have a local repo for development and a production server in Laravel.
When I work locally, as the project grows, so does the DB. Therefore I keep adding new migrations that sometimes change existing tables. Locally I can refresh / recreate tables and seed without worrying.
However, when I want to update the production DB where actual data is stored, what's the best method? If I need to update an existing table I cannot just drop and recreate it, as data would be lost. And if I run the migration directly, I get an error like "table already exists". So I end up manually adding the fields in the DB, which I don't think is the best way to go.
As already mentioned, you can create migrations that update columns without dropping the tables; the 'Modifying columns' section of the docs explains this clearly. If you want to modify tables rather than individual columns, see the 'Updating tables' section of the docs instead.
This uses the schema builder to do various things, for example adding a column to an existing table:
Schema::table('table_name', function ($table) {
    $table->string('new_column')->after('existing_column');
    // You can leave the ->after() part out; it just specifies that the
    // new column should be inserted after the specified column.
});
Or dropping a column from an existing table:
Schema::table('table_name', function ($table) {
    $table->dropColumn('new_column');
});
You can also rename but I'll leave it to you to explore the docs further.
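For completeness, renaming a column follows the same pattern (note that on older Laravel versions renameColumn requires the doctrine/dbal package to be installed):

Schema::table('table_name', function ($table) {
    $table->renameColumn('old_column', 'new_column');
});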

Current model is incompatible with old migrations

I have the following situation (described as a timeline):
I set up the project with a User model (and users table) in migration file A.
Some time later I added a many-to-many user_modules table, and I was forced to initialize this relationship during the schema update in migration file B. I did it with:
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
Later I needed to update the User model and table by adding soft deletes (a deleted_at column) in migration file C, plus $dates = ['deleted_at'] on the User model.
I then kept developing the system and adding more migrations, but at some point a new developer joined our team and had to build the DB schema from scratch. When he ran php artisan migrate he got an error in migration file B:
[Illuminate\Database\QueryException (42S22)]
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'users.deleted_at' in 'where clause'
(SQL: select * from users where users.deleted_at is null order by users.id asc limit 100 offset 0)
So the current User model is incompatible with migration file B.
How should I deal with this situation?
Where did I make a mistake, and what can I do to prevent this in the future?
This is because of soft deletes. When you add the SoftDeletes trait to a model, it automatically adds where users.deleted_at is null to all queries. The best way to get around this is to add withTrashed() to your query in migration B.
To do this, change the query in migration B to look like the following. This removes the part where it tries to access the non-existent deleted_at column. This migration, after all, is not aware that you want to add soft deletes later on, so accessing all users, including trashed ones, makes perfect sense.
User::withTrashed()->chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
You could also comment out the SoftDeletes trait on the User model before running the migrations, but that's a temporary fix, since you'd need to explain it to all future developers. It can also be very handy to run php artisan migrate:fresh sometimes, and you don't want to have to remember to comment out the trait each time, so adding withTrashed() seems like the most desirable solution to me.
As a final note, I highly suggest NOT adding seeds to your migrations. Migrations should ONLY be used for schema changes. In cases like this, I would use a console command, or a combination of console commands.
For example, you could make a console command that gets triggered by php artisan check:user-modules. Within this command, you could have the following which will create a user module only if one does not yet exist.
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        if (! $user->userModule()->exists()) {
            $user->userModule()->create();
        }
    }
});
You should be able to run this command at any time since it won't overwrite existing user modules.
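A rough sketch of what such a command class could look like (the class name CheckUserModules and the App\Models\User namespace are assumptions; the class itself can be generated with php artisan make:command CheckUserModules):

<?php

namespace App\Console\Commands;

use App\Models\User;
use Illuminate\Console\Command;

class CheckUserModules extends Command
{
    // Invoked with: php artisan check:user-modules
    protected $signature = 'check:user-modules';

    protected $description = 'Create a user module for every user that does not have one yet';

    public function handle()
    {
        User::chunk(100, function ($users) {
            foreach ($users as $user) {
                if (! $user->userModule()->exists()) {
                    $user->userModule()->create();
                }
            }
        });

        return 0;
    }
}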
Alternative answer: in situations like this, when we need to generate or transform data after a DB schema change, we should NOT use Models (which can change independently in the future) but instead use plain inserts/updates:
DB::table('users')->chunkById(100, function ($users) {
    foreach ($users as $user) {
        DB::table('user_modules')->insert([
            'user_id' => $user->id,
            'module_id' => 1,
        ]);
    }
});
As the Laravel documentation puts it, seeders are designed for seeding test data, not for data transformation, so migration files are probably a good place to put transformation code (which can generate or change production data in the DB after a schema update):
Laravel includes a simple method of seeding your database with test data using seed classes.
Add this to the query in your old migration:
use Illuminate\Database\Eloquent\SoftDeletingScope;

User::withoutGlobalScope(new SoftDeletingScope())->chunk(100, function ($users) {
    // ... same loop body as before ...
});

How to create a new record without executing a database query?

I have Question and Answer models. The Question hasMany Answers. The following commands, run in php artisan tinker, invoke a database query for no apparent reason:
$q = new Question;
$q->answers[] = new Answer; // invokes the below query
// the executed query
select * from `answers` where `answers`.`question_id` is null and `answers`.`question_id` is not null
As you can see, there is no need for a database call whatsoever. How can I prevent it?
When you do $q->answers, Laravel tries to load all of the answers on the question object - regardless of whether they exist or not.
Any time you access a relationship on a model instance, you can either call it like answers() and get a query builder back, or you can call it like answers without parentheses and Laravel will fetch a collection for you based on the relationship.
You can prevent it easily by doing this:
$q = new Question;
$a = new Answer;
And then, when you're ready to save them, associate them with each other. In its simplest form, that looks like this:
$q->save();
$q->answers()->save($a);
It's doing that because you're assigning to a relationship attribute on the Question object, so Eloquent loads the relation to see whether you're referencing existing records. All Eloquent models use magic methods for property access, and using those properties as temporary data storage is a really bad idea unless you've defined a property on the model ahead of time for that specific purpose.
Just use a regular array instead and then associate them after the models have been prepared.
Documentation on one-to-many relationships:
https://laravel.com/docs/5.1/eloquent-relationships#one-to-many
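Putting the two answers together, a minimal sketch of the "regular array" approach (saveMany() is the relationship helper for persisting several related models at once):

$q = new Question;

// Keep the unsaved answers in a plain array; no queries are run here.
$answers = [new Answer, new Answer];

// Persist the question first so it has an id, then attach the answers.
$q->save();
$q->answers()->saveMany($answers);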

EntityFramework code-first, run a database update script after DropCreate

I'm trying to find some nice work arounds for the issues of computed columns in code first. Specifically, I have a number of CreatedAt datetime columns that need to be set to getdate().
I've looked at doing this via the POCO constructors, but to do that I must remove the Computed option (or it won't persist the data); however, there is no easy way to ensure the column is only set when inserting a record, so this would overwrite CreatedAt on every update.
I'm looking to create an alter script that can be called after the DropCreate that would go through and alter various columns to include the default value of getdate().
Is there an event to hook into, something like OnDropCreateCompleted, where I could then run additional SQL?
What would be the best way to handle the alter script? I am thinking of just sending raw SQL to the server to run.
Is there another way to handle the getdate() issue that might be more graceful and more in line with code first that I'm missing?
Thanks
You can just make a custom initializer derived from your desired one and override its Seed method, where you can execute any SQL you want.
If you are using migrations, you can just add the custom SQL to the Up method.
