So, let's say on the first of this month, I create branch A, with a migration file named 2020_04_01_113108_modify_request_logs_table.php
But let's say I don't merge this branch into my Master branch yet, and then I start working on branch B 2 days later, with a migration file named 2020_04_03_113108_create_label_logs_table.php
So on the 4th, I merge branch B into master and run php artisan migrate, and it runs the second migration.
And then on the 6th, I'm finally ready to merge branch A into master and run php artisan migrate. Is there anything that's going to go wrong with this migration? Does the migration system care that the dates of the files happened out of order? Will it ignore the A-branch file because it's already run a migration with a date later than that file?
Migrations that haven't been executed yet will be.
To check this beforehand, you can run php artisan migrate:status to see which ones have already been executed ('Yes') and which haven't ('No').
The output will look like this:
+------+----------------------------------------+-------+
| Ran? | Migration                              | Batch |
+------+----------------------------------------+-------+
| Yes  | 2019_12_12_184629_create_users_table   | 1     |
| Yes  | 2020_03_27_153830_create_another_table | 1     |
| No   | 2020_04_01_090622_modify_user_table    |       |
| Yes  | 2020_04_11_102846_update_level         | 1     |
| No   | 2020_04_22_094132_dummy_migration      |       |
+------+----------------------------------------+-------+
Actually, Laravel will resolve this out of the box. All previously run migrations are stored in your database, in the migrations table. When running new migrations, Laravel compares them against the migrations that have already been run for this application by looking in that table.
I did some research and actually found the methods in the framework that implement the logic described above. You can check them here.
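For the concrete scenario in the question, here is a minimal sketch of what to expect (timestamps taken from the question):
# after merging branch A on the 6th, the older-dated file is simply still pending
php artisan migrate:status   # 2020_04_01_113108_modify_request_logs_table shows 'No'
php artisan migrate          # runs it now, in a new batch; file dates are not
                             # compared against already-executed migrations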
Related
We use Liquibase with Oracle, so we have some packages with procedures inside.
In our case, the changelog structure looks like this:
master.xml
| - release1
| - | - release-master.xml
| - release2
| - | - release-master.xml
| - softObjects
| - | - package-master.xml
| - | - packages
| - | - | - somePackage.pkb
| - | - | - somePackage.pks
Change sets inside "releases" use runOnChange=false, and "softObjects" (which can be created using CREATE OR REPLACE) use runOnChange=true, as in the best practices:
Try to maintain separate changelog for Stored Procedures and use
runOnChange=”true”. This flag forces LiquiBase to check if the
changeset was modified. If so, liquibase executes the change again.
So every update installs some "delta" change sets from "release" and AFTERWARDS reinstalls all "softObjects" that were changed, and in the usual case everything is OK.
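As a sketch of that flow (assuming connection settings in liquibase.properties; the flag is spelled --changelog-file in newer Liquibase versions):
# pending release change sets run once; runOnChange change sets are re-run
# whenever their checksum differs from the one stored in DATABASECHANGELOG
liquibase --changeLogFile=master.xml update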
But if I need to set up a new DB, I run into a problem:
In a change set from the second release I use somePackage (v1), but in the next release I need to change the logic/API of somePackage, so I end up with somePackage (v2) that cannot be used in previously created change sets. So now I have a change set that, on a fresh update, will try to use the wrong package version.
To avoid this, I can add soft objects directly inside the release folder, without runOnChange=true, when I create them. When I need to change one, I just copy the previous version of the file into my new release and make the changes inside the COPY.
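For illustration, the copy-per-release workflow described above might look like this (the paths are hypothetical):
# copy the previous package version into the new release and edit the COPY;
# the new change set is then a plain delta with runOnChange=false
cp release2/somePackage.pkb release3/somePackage.pkb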
This approach has some disadvantages:
- you end up with a lot of copied files, which can consist of thousands of lines of code (yes, I know that isn't good)
- the version control system recognizes each copy as a new file (a nice challenge for reviewers)
What am I getting wrong? How should I work with "softObjects" if these objects might change?
Recently I started to work with DB2 and created a few databases.
To drop a single DB I can use db2 drop db demoDB; is there a way to drop all DBs at once?
Thanks
Building on the previous answer, this one-liner does the same without creating a script:
db2 list db directory | tail -n +6 | sed 'N;N;N;N;N;N;N;N;N;N;N;s/\n/ /g' | awk '$28 = /Indirect/ {print "db2 drop database "$7}' | source /dev/stdin
This filters the local databases and executes the generated drop commands; to preview the commands first, omit the final source /dev/stdin stage.
(It only works in an English-language environment, because it matches the English output of db2 list db directory.)
First, I don't think there is a native DB2 way to do that, but here is what I usually do. To start, you can see all the databases on your instance with one of the following:
db2 list db directory
db2 list active databases
Pick one depending on your need (all DBs or just the active ones). I'm sure there are more DB lists you can get (see the DB2 user guide).
The way I usually drop all my DBs is with a shell script:
1. Create a new script using 'vi db2_drop_all.sh' or any other way you want.
2. Paste the code:
#!/bin/bash -x
# list the local database directory, keep the "Database name = ..." lines,
# and take the value after the '=' sign
for db_name in $(db2 list db directory | grep Database | \
    grep name | cut -d= -f2); do
  db2 drop db $db_name || true   # '|| true' keeps going if one drop fails
done
exit 0
3. Save the changes.
4. Run the script (after you have switched to your instance, of course): sh db2_drop_all.sh
Note that in step 2 you can change the list of DBs as you wish (for example, to db2 list active databases).
Hope it helped you. :)
I've got a simple Laravel app, and I've just been told that the public dir has to be in quite a different place from the core code.
I need to have something like this:
folder-root
| - site
| - | - someFolder
| - | - | - codeFolder
| - | - | - | - app
| - | - | - | - bootstrap
| - | - | - | - config (etc etc etc)
| - newPublicFolder
| - | - index.php
I've copied all my files into this structure locally and altered my public index.php file to point to the bootstrap/autoload.php file, and I can echo out test variables from that file, so I know it is pointing to it correctly.
Is there a guide anywhere to do this, or is there a config file I'm missing?
UPDATE: This is from my apache error log:
PHP Fatal error: Call to a member function make() on a non-object in /var/www/test/site/public/index.php on line 49
UPDATE 2:
I've ONLY copied the files over. The DB remains the same and I've not run any composer updates or anything.. if that makes a difference?
Cheers!
To all in the same boat and here because of a Google search..
I'd copied line 22 and pasted it into line 36.
I'd missed that the first pointed to 'autoload.php' and the second to 'app.php'.
Rookie mistake, but if it helps you.. all good..
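If you hit the same error, here is a quick sanity check (a sketch assuming a Laravel 5-era public/index.php and the folder layout from the question):
# the two bootstrap requires must point at different files:
# one at bootstrap/autoload.php, the other at bootstrap/app.php
grep -n "bootstrap/" newPublicFolder/index.php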
Here's the structure of my templates:
master
  @yield('master1')
  @yield('master2')
dashboard
  @extends('master')
  @section('master1')
    @include('sub-1')
submaster
  @yield('submaster1')
sub-1
  @extends('submaster')
  @section('submaster1')
  @section('master2') <-- this is what I am trying to do
Here's a more visual representation:
master                            submaster
  @yield('master1')                 @yield('submaster1')
  @yield('master2')                      |
      |                                  |
      |___ dashboard                     |
             @extends('master')          |
             @section('master1')         |
               @include('sub-1')         |
                    |                    |
                    |______ sub-1 _______|
                      @extends('submaster')
                      @section('submaster1') <-- from the right
                      @section('master2')    <-- from the left
Is this kind of thing possible with Blade? When I remove the implementation of master2 from sub-1, everything works fine. When I add it back in, the code from submaster continues to render, and the code in master2 seems to work and gets included in the expected place, but the code in submaster1 stops getting included in the appropriate sections of submaster.
Is it possible to create aliases when I enter a certain folder?
What I want:
I use Composer (a PHP package manager) a lot, and it installs binaries in ./vendor/bin. I would like to run those binaries directly from the project directory.
For example:
/path/to/project
| - composer.json    // dictates dependencies for the project
| - vendor           // libs folder for composer, is created by composer
| | - bin            // if lib has bin, composer creates this folder
| | | - phpunit      // binary
| | | - phinx        // binary
| | - somelib1       // downloaded by composer
| | - somelib2       // downloaded by composer
Is it possible to get this to work:
> cd /path/to/project
> phpunit
And get phpunit to execute?
Something like "sensing" the composer.json file and dynamically find the binaries in ./vendor/bin and then do something like alias="./vendor/bin/<binary-name> $#" automatically?
I use OS X 10.9 and the boxed in Terminal app.
You can override cd, trap my_function DEBUG to run something on every command, or add a command into PS1 or PROMPT_COMMAND.
These have different behaviour and caveats, and I can't recommend doing any of them for this use case (after having used each of them at some point). They are bad solutions to X-Y problems.
An alternative which is much less likely to break things horribly is to create a custom function to do both things:
cdp() {
  # "$@" passes the arguments through to cd ("$#" would be the argument count);
  # then run the project-local phpunit from ./vendor/bin
  cd "$@" && ./vendor/bin/phpunit
}
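Usage is then simply (assuming the function is defined in your ~/.bashrc or equivalent):
# jump into the project and immediately run its locally installed test runner
cdp /path/to/project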