I'm using Laravel and as part of my deploy routine I have the command
RUN php artisan migrate
Since I'm in production, I get the error
Application in production, Command Cancelled!
The fix is easy: RUN php rankbot/artisan migrate --force. But this doesn't feel like the right way to do it. What's the best way to ensure the DB schema is always up to date?
This is the right way to go about it.
When you run a migration on production, you had best be sure what it's going to do to your database, as some actions might not be reversible.
The confirmation prompt is there to make you stop and think twice before potentially causing harm.
Some migration operations are destructive, which means they may cause you to lose data. In order to protect you from running these commands against your production database, you will be prompted for confirmation before the commands are executed. To force the commands to run without a prompt, use the --force flag:
https://laravel.com/docs/5.5/migrations#running-migrations
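In practice that means the production deploy step just adds the flag. A minimal sketch (the working directory is an assumption; substitute your own app root):

cd /var/www/app            # hypothetical app root
php artisan migrate --force

Interactive runs on the server still get the confirmation prompt; only the scripted call skips it.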
I deployed a Keystone app to Heroku, but when I tried to open the app I got the following error:
An error occurred handling a request for the Admin UI: Error: Prisma error: The table main.User does not exist in the current database.
I tried to locate the database and create the User table myself. What are the steps to solve this issue?
It looks like your DB hasn't been initialised properly. The error you've included is failing to count the items in the User list, which (if you don't have sessions configured) is likely the first query to run – a count of items in each list is shown on the Admin UI landing page, so that's the first thing it does.
So something about how your migrations are being generated or applied in production isn't set up right. Most of the relevant docs on how this works are in the CLI guide; specifically, see the section about database migrations and the db.useMigrations flag.
Having db.useMigrations turned off can be handy if you're just playing around in dev. Keystone will automatically sync your DB structure to what's defined in your list configs whenever it starts, and does so without creating any physical migration files. If you're prototyping some change or just mucking around, this may be what you want, but if you're deploying somewhere it's better to turn db.useMigrations on. Then, if Keystone detects changes to the DB when it runs, it'll prompt you to create a migration file, which can be tweaked to protect existing data if needed, tracked under version control (e.g. git) and deployed.
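With useMigrations on, the day-to-day flow looks roughly like this (a sketch; exact prompts depend on your Keystone version):

keystone dev                     # detects list changes, prompts to create a migration file
keystone prisma migrate deploy   # applies the committed migration files to the target database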
Getting these migrations to run in an environment like Heroku is a little weird as (assuming it's enabled for your app) Heroku can auto-scale. Migrations, on the other hand, need to be run exactly once. You also can't just lock the DB and run migrations when the first instance of the app starts – this delays the start-up of the HTTP server so, if the migrations run for too long, Heroku may think the deployment has failed.
The way we suggest getting around this is to run migrations in the build stage. Fans of the 12-factor app methodology will notice this violates the separation of build and release stages but, for a simple Heroku deploy, it works fine. For larger/more serious apps, creating and applying migrations is usually an area that needs significant thought and attention. The specific infrastructure and rollout processes required will be project dependent.
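Concretely, that usually means chaining the migration onto the build script in package.json, something like this (a sketch; the example repo mentioned below shows a real version):

"build": "keystone build && keystone prisma migrate deploy"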
I'd also encourage you to check out the Keystone 6 Heroku example codebase if you haven't already. It's a little out of date but it shows the migrations and package.json scripts in action.
I am aware of some potential solutions, but they all feel awful to me.
1. In pipeline (GitHub Actions), run a one-off task on Fargate to migrate the DB before the deployments.
2. Publish some kind of CloudFormation event as a deploy hook and use it as a Lambda trigger, and the Lambda will do the migration.
3. Leverage Laravel crons with onOneServer() to continually check whether a migration is necessary.
4. Docker entrypoint command to run DB migrations on task startup. (Problem, no good: all instances will try to migrate the DB in quick succession.)
Each of these has various things I dislike.
Option 1 will migrate the DB and then deploy. If the deploy fails, the DB is now migrated, and to fix it I would have to somehow run a DB migration rollback in the pipeline after a failure. It also feels bad to rely on one-off tasks through the pipeline in general.
Option 2 has more moving parts than should be necessary, with multiple points of failure: the CloudFormation event and the Lambda function. Also, the deploy seems like it would be the event trigger, which means the deploy could succeed while the Lambda DB migration fails, leaving the pipeline unaware and requiring a manual rollback of the deployment.
Option 3 feels hacky, and yet it seems to have the fewest moving parts and the least entropy (see the sketch below). The major downside is that it essentially requires a once-per-minute cron running php artisan migrate (usually with nothing to migrate) so that it catches deploys that include migrations. The benefit is that onOneServer() should actually address the concern: we don't want multiple instances to all try to migrate the database on a deploy, just one. It also links the deploy and the migrations: if the deploy fails, there is no migration yet, and if the migration fails, it is at least easy to roll the task back to the older task version. Fewer moving parts are involved. The resource overhead of running php artisan migrate each minute with nothing to do should be negligible, but it still bothers me how inefficient it is.
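For reference, a minimal sketch of what option 3 looks like in Laravel's scheduler. This assumes a shared cache store such as Redis or the database, since onOneServer() relies on a cache lock:

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Apply any pending migrations once per minute. onOneServer()
    // takes a cache lock so only one instance runs the command, and
    // withoutOverlapping() stops back-to-back runs from colliding.
    $schedule->command('migrate --force')
             ->everyMinute()
             ->onOneServer()
             ->withoutOverlapping();
}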
Is there another solution out there? I anticipate someone may suggest controlling instances with env variables, but I don't want to do that either. If we deploy and have 3 instances running, they should all be updated and all be in 'the same' instance state. Otherwise, I'd have to make a second service that also runs 24/7 to check for migrations as its own special job. I guess that is solution 5:
5. Have a separate service task, apart from the request-handling instances, that runs 24/7 and whose only job is to run crons and migrate the DB after deploys. This also sucks, though, because you have a task running 24/7 to check for deploys, which are not that frequent.
I think solution 3 is my preferred solution, despite its resource overhead. I would love to hear some insight from others on this problem. I am in a situation where this pipeline really should be easy for non-ops people to deal with if I get hit by a bus. Keeping it simple inside the Laravel app code seems to fit that requirement. I know there are scheduled task / CloudFormation event solutions, but keep in mind I have a big goal of as little entropy and as few moving parts as possible, within reason.
I have read every blog post and every Google hit I can find on this subject and have not found a clear, obvious answer. I came up with solution 3 myself and don't see it suggested anywhere.
Possibly automating DB migrations in all circumstances is too ambitious, and a manual process should be developed and followed instead; especially if a DB migration contains a change which won't work on the old instances, since migrating before the deploy would break those temporarily.
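For example, renaming a column would break old instances; a backwards-compatible migration would add the new column first and only drop the old one in a later deploy, once no old code is running. A hypothetical sketch (column names are made up):

// Inside a migration's up() method: the "expand" step of expand/contract.
// The new column is nullable, so old instances that never write it keep working.
Schema::table('users', function (Blueprint $table) {
    $table->string('full_name')->nullable(); // hypothetical new column
});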
Running database migrations before deployment (option 1) is the industry standard & what you should be doing, regardless of your cloud platform, database engine or application language.
The short and long answer is that DB migrations are there for fault tolerance: if for whatever reason you need to reverse your deployment, you know exactly what has happened and can roll it back.
Most (if not all) migration tools, e.g. Entity Framework for .NET or Liquibase for Java, allow you to roll back a migration with a simple command. Laravel likewise lets you roll back migrations using php artisan migrate:rollback.
A step in your pipeline before deployment should apply the database migrations. If the deployment then fails for any reason, you should manually roll back.
This is the intersection of your application and the database at an infrastructure level; unfortunately, expect some manual work to be needed if something fails.
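A sketch of what that step can look like in a CI job (script names are placeholders):

php artisan migrate --force               # apply pending migrations first
if ! ./deploy.sh; then                    # hypothetical deployment step
    php artisan migrate:rollback --force  # recover the schema on failure
    exit 1
fi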
Use the database migration commands, but note what each one actually does:
php artisan migrate:fresh
drops all tables and then re-runs every migration
php artisan migrate:refresh
rolls back all of your migrations and then re-runs them
php artisan migrate:rollback
rolls back the last batch of migrations
The first two are destructive and will lose data, so they are only appropriate in development.
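For a production database that just needs to be kept up to date, the non-destructive path is:

php artisan migrate:status   # list which migrations have and haven't run
php artisan migrate --force  # apply only the pending migrations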
I'm looking at Sqitch and so far it seems like a great tool; however, I have an existing project that I want to use it with. Is there a way to create a baseline?
For example, I take a backup of my schema and add it to the deploy script. I then want to run a command that will not run this script on the database, as the schema already exists, but would apply everything after this point.
I need the full base schema in there so that we can re-deploy the whole schema if required
You can use the --log-only option of the sqitch deploy command.
From the docs: https://sqitch.org/docs/manual/sqitch-deploy/
--log-only
Log the changes as if they were deployed, but without actually running the deploy scripts. Useful for an existing database that is being converted to Sqitch, and you need to log changes as deployed because they have been deployed by other means in the past.
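Putting that together with your baseline idea, the workflow looks roughly like this (the change name and note are examples):

sqitch add baseline -n 'Full existing schema as a baseline'
# Put your schema dump into deploy/baseline.sql, then on the existing database:
sqitch deploy --log-only
# Fresh databases run everything for real, baseline included:
sqitch deploy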
I have two Laravel apps. The first is used as a Server Management System (SMS) that creates a host on the server. When a host is created, it does a git clone to bring in the second Laravel app, which is used as a CMS for that host.
What I am trying to do is create a plugin within the SMS so that you can just select a checkbox and it will install the CMS for you when you create a new host. I have most of the code in place and I am testing locally, and everything works grand until the end, when I try to install the migrations by running:
php artisan cms:update
I also tried:
php artisan migrate
What ends up happening is that, rather than the command being run against the CMS database, it affects the SMS's database, adding a couple of tables and breaking the SMS database. I have done a pwd and checked to make sure I am in the correct directory:
'/Directory/Directory/host/cms'
As it makes more sense for someone to read all of the code rather than snippets, here is a link to the plugin:
CODE LINK
So to clarify: I need to be able to test the plugin locally, so I need the migrations to install into the correct database, and to make sure the CMS works before I push to production. If anyone could shed some light on why the migrations are affecting the SMS rather than the CMS, it would be greatly appreciated.
I have been working with Laravel 3 on my local server. I have been using terminal and Artisan to perform my migrations.
I want to install my site on my production server, but I want to create a sort of 'install/migration' script that will perform all the migrations and guide a user through configuration.
I have found where all the migration methods are (the ones used by artisan) but I'm struggling to use them. Does anyone know how?
I think you are confusing some things (I'm not sure, so I'll explain just in case).
Migrations are meant for developers; your end users don't run migrations directly. So migrations are for you and your fellow developers. If you want your users to run migrations, then you just create a normal page with a link or a button that the user presses, which runs an action (a function) on your controller (if you have routes set up this way). In this function, you run the migration.
Running migrations from PHP: you can use the Command class to run tasks.
Command::run(array('migrate'));
This will run the migrate task, obviously.
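Putting it together, a minimal sketch of a one-off install route in Laravel 3 (the route name and return message are hypothetical):

// routes.php
Route::get('install', function()
{
    // Same effect as running `php artisan migrate` from the terminal.
    Command::run(array('migrate'));

    return 'Migrations complete';
});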
Is this what you're after?