I'm using the development version of CodeIgniter 3.0.
I can see that there is a schema property in the $db config, but it doesn't look like it's being used for anything at the moment, as my script won't connect to multiple schemas.
Does anyone know how to set up CI with multiple schemas in PostgreSQL?
Here is the solution by the CI Developers.
https://github.com/EllisLab/CodeIgniter/commit/485a348a7a633d38f69a963e9f77e23077f75d11
I recommend downloading only the 'database' folder from that link into your CI system folder, and then adding this line to your application/config/database.php:
$db['default']['schema'] = 'NAME_OF_YOUR_SCHEMA';
or, if you are using an array, something like this:
$db['default'] = array('schema'=>'NAME_OF_YOUR_SCHEMA');
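For context, a minimal sketch of what the full entry in application/config/database.php might look like; only the 'schema' key relates to this change, the surrounding keys are the usual CI 3 database settings and the values are placeholders to adjust for your own setup:

$db['default'] = array(
    'hostname' => 'localhost',
    'username' => 'your_db_user',        // placeholder credentials
    'password' => 'your_db_password',
    'database' => 'your_database',
    'dbdriver' => 'postgre',             // PostgreSQL driver in CodeIgniter
    'schema'   => 'NAME_OF_YOUR_SCHEMA', // the key added by the linked commit
    // ... other standard options ...
);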
Something like this was driving me crazy, especially with migrations.
What worked for me was:
$this->db->query('SET search_path TO <schema>');
For example, at the beginning and end of the up() method:
public function up()
{
    $this->db->query('SET search_path TO <custom_schema>');
    ...
    # Database stuff here
    ...
    $this->db->query('SET search_path TO public'); // where the migrations table exists
}
Try this to avoid pg_query() errors with the schema.table syntax and errors like relation "migrations" does not exist.
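The same trick applies to the down() method if your rollback also touches tables in the custom schema; a minimal sketch, using the same placeholder schema name as above:

public function down()
{
    $this->db->query('SET search_path TO <custom_schema>');
    // ... reverse the database changes here ...
    $this->db->query('SET search_path TO public'); // back to where the migrations table exists
}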
As I've updated my Postgres database schema, I wanted to use a migration to move all existing data to the new tables, so that it runs automatically when I deploy to my host.
Unfortunately, that doesn't work. You'll find my code below. When I copy the exact same code into an artisan command, it works. When I copy it into artisan tinker, it works. When I run it via the migration, it doesn't. While DB#insert returns true, nothing gets inserted into the database.
I also tried calling the previously created, working artisan command from the migration, but it doesn't insert any data then either.
I've truncated the tables between the tests. The connection is also correct, as DB#getDefaultConnection returns the correct connection name, and all other migrations work as expected, including another migration with DB#insert in it.
The weird thing: it worked a few weeks ago. Since then I have only made changes in the VueJS frontend, not in the backend. I also haven't updated any packages or the Postgres database. I am absolutely not sure what's going on here.
Also: no constraints are violated.
public function up()
{
    $platform_id = DB::table('platforms')->where('slug', '[...]')->first()->id;

    DB::table('apps')->chunkById(20, function ($apps) use ($platform_id) {
        $transformed = [];
        foreach ($apps as $app) {
            echo "{$app->app_id} - {$app->name}\n";
            $transformed[] = [
                'uuid' => (string) Uuid::generate(4),
                'platform_id' => $platform_id,
                'remote_id' => $app->app_id,
                // [...]
            ];
        }
        echo "Inserting...\n\n";
        DB::table('products')->insert($transformed);
    });

    // [...]
    // The same thing again with another table.
}
I have a database table with a field of boolean type. As per a new requirement, the field should be changed to a small integer type.
To achieve this, I created a migration and added a script in the same migration file to copy the value from the old field to the new field. However, I don't think this is the best approach. Can someone please advise on the best way to handle this scenario?
public function up()
{
    Schema::table('skills', function (Blueprint $table) {
        $table->tinyInteger('skill_type_id')->nullable()->comment('1 for advisory skills, 2 for tools, 3 for language & framework');
    });

    $skill_object = new \App\Model\Skill();
    $skills = $skill_object->get();

    if (count($skills)) {
        foreach ($skills as $skill) {
            $skill_type = 1;
            if ($skill->is_tool) {
                $skill_type = 2;
            }
            $skill_object->whereId($skill->id)->update(['skill_type_id' => $skill_type]);
        }
    }
}
You can do it with two migrations: the first one creates the new field, as you already did. The second is a migration with a raw statement to copy the value from the old field to the new field.
If you no longer need the old field, you can create a third migration that deletes it.
public function up()
{
    Schema::table('skills', function (Blueprint $table) {
        DB::statement('UPDATE skills SET skill_type_id = IF(is_tool, 2, 1)');
    });
}
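And if you go for the third migration that removes the old field, a minimal sketch (assuming the old boolean column is called is_tool, as in the code from the question) could be:

public function up()
{
    Schema::table('skills', function (Blueprint $table) {
        // drop the old boolean column once its values have been copied over
        $table->dropColumn('is_tool');
    });
}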
You can do this (update the data) in the following ways in your scenario:
Create separate routes and update the data after the migrations.
Create a seeder (with the same query as above in the migration file) and run it.
But both of the above solutions are a little risky if you are doing this with your production database. If someone mistakenly hits the URL or runs the seeder multiple times, it's difficult to manage.
I believe the best way to solve your problem is to seed (modify) the data in the same migration file after modifying the schema, because a migration won't run again (even mistakenly) once it has been migrated.
So I believe you are doing it the correct way.
You are free to develop your own way to achieve this task, but as far as migrations are concerned, they are meant for controlling and sharing the application's database schema among the team, not the actual data ;)
You can create a separate seeder for this task.
It will keep your migration clean and easy to roll back if needed.
NOTE: Don't include this seeder class in DatabaseSeeder.
This kind of seeder class is only meant for updating existing data after fixing the current functionality (I am assuming you have already fixed the code as per your new requirement), so there is no need to worry about re-running the same seeder class.
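A minimal sketch of such a one-off seeder (the class name SkillTypeSeeder is mine, and I'm assuming the old boolean column is called is_tool, as in the code from the question):

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class SkillTypeSeeder extends Seeder
{
    public function run()
    {
        // default every row to type 1, then mark tools as type 2,
        // mirroring the logic from the original migration
        DB::table('skills')->update(['skill_type_id' => 1]);
        DB::table('skills')->where('is_tool', true)->update(['skill_type_id' => 2]);
    }
}

You would then run it once with php artisan db:seed --class=SkillTypeSeeder.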
Considering the Laracasts and Stack Overflow discussions, I would prefer to go your way over the suggestions provided above, as I neither have to maintain an extra route nor an additional (third) migration.
The only improvement I can suggest here is to wrap it in a database transaction, something like this:
// create the new column first via a Schema::table() call, then:
DB::transaction(function () {
    // copy the data into the new column
    DB::statement('UPDATE skills SET skill_type_id = IF(is_tool, 2, 1)');

    // drop the old column
    Schema::table('skills', function (Blueprint $table) {
        $table->dropColumn('is_tool');
    });
});
I have the following situation (I will describe it as a history line):
I set up a project with a User model (and users table) in migration file A.
After some time I added a user_modules many-to-many table, and I was forced to initialize this data during the schema update in migration file B. I did it with:
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
After some time I needed to update the User model and table by adding soft deletes (a deleted_at column) in migration file C and the property $dates = ['deleted_at'] in the User model.
Then I developed the system further and added more migrations, but at some point a new developer joined our team and had to build the DB schema from scratch. So he ran php artisan migrate, but he got an error in migration file B:
[Illuminate\Database\QueryException (42S22)]
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'users.deleted_at' in 'where clause'
(SQL: select * from users where users.deleted_at is null order by users.id asc limit 100 offset 0)
So the current User model is incompatible with migration file B.
How do I deal with that situation?
Where did I make a mistake, and what should I do to prevent such a situation in the future?
This is because of soft deletes. When you add the SoftDeletes trait to a model, it will automatically add where users.deleted_at is null to all queries. The best way to get around this is to add withTrashed() to your query in migration B.
To do this, change your query in migration B to look like the following. This should remove the part where it's trying to access the non-existent deleted_at column. This migration, after all, is not aware that you want to add soft deletes later on, so accessing all users, including those that are trashed, makes perfect sense.
User::withTrashed()->chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->userModule()->create();
    }
});
You could also comment out the SoftDeletes trait on the User model before running the migrations, but that's a temporary fix since you'll need to explain it to all future developers. Also, it can be very handy to run php artisan migrate:fresh sometimes. You don't want to have to remember to comment out the trait each time, so adding withTrashed() seems like the most desirable solution to me.
As a final note, I highly suggest NOT adding seeds to your migrations. Migrations should ONLY be used for schema changes. In cases like this, I would use a console command, or a combination of console commands.
For example, you could make a console command that gets triggered by php artisan check:user-modules. Within this command, you could have the following which will create a user module only if one does not yet exist.
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        if (!$user->userModule()->exists()) {
            $user->userModule()->create();
        }
    }
});
You should be able to run this command at any time since it won't overwrite existing user modules.
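A minimal sketch of such a command class (the class name is mine; the signature is the hypothetical check:user-modules from above):

use Illuminate\Console\Command;
use App\User;

class CheckUserModules extends Command
{
    protected $signature = 'check:user-modules';

    protected $description = 'Create a user module for every user that does not have one yet';

    public function handle()
    {
        User::chunk(100, function ($users) {
            foreach ($users as $user) {
                // only create a module when the user does not have one yet
                if (!$user->userModule()->exists()) {
                    $user->userModule()->create();
                }
            }
        });
    }
}

Depending on your Laravel version, commands placed in app/Console/Commands are picked up automatically; otherwise register the class in app/Console/Kernel.php.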
Alternative answer: in such a situation, when we need to generate or transform some data after a DB schema change, we should NOT use models (which can change independently in the future) but instead use plain inserts/updates:
DB::table('users')->chunkById(100, function ($users) {
    foreach ($users as $user) {
        DB::table('user_modules')->insert(
            ['user_id' => $user->id, 'module_id' => 1]
        );
    }
});
As written in the Laravel documentation, seeders are designed for seeding test data, not for data transformation, so migration files are probably a good place to put transformation code (which can generate or change some production data in the DB after a schema update):
Laravel includes a simple method of seeding your database with test data using seed classes.
Add this to your old migration queries:
use Illuminate\Database\Eloquent\SoftDeletingScope;

User::withoutGlobalScope(new SoftDeletingScope())->chunk(100, /* ... */);
What I am doing is: I want to create a table in another database which is not set in the .env file, and I want to do this from controller functionality. I am using Eloquent models throughout my project. How can I create a table in this scenario?
In addition to the linked question that @szebasztian provided in the comment, take a look at the documentation. You can specify which connection Schema will use to create the table when running a migration in this way:
Schema::connection('foo')->create('users', function ($table) {
    $table->increments('id');
});
foo is the new connection you define in config/database.php. I would personally still put the connection details (host, username, password, etc.) in .env.
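A minimal sketch of what that could look like, using hypothetical FOO_DB_* variable names, would be an extra entry inside the 'connections' array of config/database.php:

// config/database.php, inside the 'connections' array
'foo' => [
    'driver'   => 'mysql', // assumption: swap in whatever driver the other database uses
    'host'     => env('FOO_DB_HOST', '127.0.0.1'),
    'database' => env('FOO_DB_DATABASE', ''),
    'username' => env('FOO_DB_USERNAME', ''),
    'password' => env('FOO_DB_PASSWORD', ''),
],

with the matching FOO_DB_* values added to your .env file.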
I'm learning Laravel.
My question is about a simple way to display a model's structure. I have a little experience with Django, and as I remember, the structure for each model was placed inside the model file.
Yet in Laravel, I need to put the starting structure inside a migration file:
$table->increments('id');
$table->timestamps();
$table->string('name')->default('');
Then if I want to add some new field, I place it in the next migration file, and so on.
So, is there any way to see some kind of summary for a model? Maybe some bash command or something for tinker?
There are a bunch of options for you to choose from.
If you would like to show a summary of a model while you are in tinker, you can call toArray() on an instance of your model.
Ex:
$ php artisan tinker
>>> $user = new App\User(['email' => 'john@doe.com', 'password' => 'password']);
>>> $user->toArray();
If you are trying to see a summary of a model displayed on your webpage, just var_dump or dd(...) an instance of your model after calling toArray() on it, and you'll get the same result as above, just in your web browser.
If you are looking for a way to show the table structure without creating any model instances, you can display the table structure in your terminal; the exact command depends on what database you are using.
For example in MySQL you would do something like:
mysql> SHOW COLUMNS FROM users;
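If you are on PostgreSQL instead, the equivalent inside the psql shell would be:

\d users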
It might also be a good idea to get a GUI app, I like Sequel Pro (for Mac).
P.S. I would just add that you should only have separate migrations for adding new fields when you are already in production and can't lose data from your database. While you are still in development and don't care about your data, it is much better to run php artisan migrate:rollback, add the new field to your create migration, and then run php artisan migrate again, rather than making tons of new migration files.