Flyway repeatable migrations - Checksum doesn't change but Flyway still executes scripts - oracle

I'm using Flyway 4.2.0 Community Edition. It's possible that this issue is resolved in a newer release, but our DB upgrade project is still a ways out, and we haven't had any luck getting licensing approved to move to Enterprise.
We've been successfully using Flyway for migrations on our Oracle databases for about a year (11.2.0.4 and 11.2.0.2) using standard migrations and with the default prefix (V) and suffix (.sql). We had a homegrown approach to handling our source, but we'd like to move to Repeatable Migrations to simplify things.
We have previously exported all of our PL/SQL into a git repository, using specific suffixes for different object types (trigger=.trg, procedure=.prc, etc.). We'd like to keep these suffixes, but the version we're on doesn't support the newer flyway.sqlMigrationSuffixes parameter, so we're trying a solution with a list of suffixes and a for-loop. This solution is mostly working in my testing, with two very notable exceptions: package specs and package bodies (stored separately as .pks and .pkb).
Here's the script we're using to do our migrations (I know it needs work):
###Determine deployment environment for credential extraction
echo "Determining the deployment environment"
case "${bamboo_deploy_environment}" in
prod)
path=prod
dbs=( db1 db2 db3 )
;;
stage)
path=stage
dbs=( db1 db2 db3 )
;;
*)
path=dev
dbs=( db1 db2 db3 )
;;
esac
echo "Environment for credentials unlock is ${path}"
packages=( .sql .trg .pks .pkb .fnc .typ .java .class .properties .config .txt .dat )
echo "Packages to loop through when deploying flyway are ${packages[*]}"
echo "Databases to run against in this environment are ${dbs[*]}"
###Flyway execution stuff
for db in "${dbs[@]}"
do
if [ -z "${db}" ]; then
echo "No db specified"
exit 2
else
echo "Working on db ${db}"
case "${db}" in
db1)
sid=db1
host=db1.fqdn
port=$portnm
;;
db2)
sid=db2
host=db2.fqdn
port=$portnm
;;
db3)
sid=db3
host=db3.fqdn
port=$portnm
;;
esac
fi
echo "Current directory is $(pwd)"
echo "Contents of current directory:"
ls -l
echo "Executing Flyway against ${db}"
for pkg in "${packages[@]}"
###Target the specific migrations starting folder (it goes recursively)
do
case "${pkg}" in
.sql)
loc=filesystem:${db}/migrations
;;
*)
loc=filesystem:${db}
migrateParams="-repeatableSqlMigrationPrefix=RM -table=REPEATABLE_MIGRATIONS_HISTORY"
;;
esac
echo "Running flyway for package type ${pkg} against ${db} db with location set to ${loc}"
baseParams="-configFile=${db}/migrations/base.conf -locations=${loc} -url=jdbc:oracle:thin:@${host}:${port}:${sid}"
migrateParams="${migrateParams} -sqlMigrationSuffix=${pkg} ${baseParams}"
addParams="-ignoreMissingMigrations=true"
# Leave the parameter strings unquoted so the shell splits them into separate arguments
flyway repair ${migrateParams}
flyway migrate ${migrateParams} ${addParams}
echo "Finished with ${pkg} against ${db} db"
unset baseParams
unset migrateParams
unset addParams
done
done
echo "Finished with the migration runs"
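One quoting pitfall in the script above: `flyway "repair" "${migrateParams}"` passes the entire option string to flyway as a single argument. A bash-array sketch of the same two calls avoids that; the values here are placeholders standing in for the loop variables, and flyway is stubbed as a function so the sketch runs standalone:

```shell
#!/bin/bash
# Placeholder values standing in for the loop variables above
db=db1; loc=filesystem:db1/migrations; host=db1.fqdn; port=1521; sid=db1; pkg=.pkb

# Stub so the sketch is runnable without a Flyway install
flyway() { echo "flyway $*"; }

# Each option is one array element, so it survives as exactly one argument
baseParams=(
  "-configFile=${db}/migrations/base.conf"
  "-locations=${loc}"
  "-url=jdbc:oracle:thin:@${host}:${port}:${sid}"
)
migrateParams=( "-repeatableSqlMigrationPrefix=RM" "-table=REPEATABLE_MIGRATIONS_HISTORY"
                "-sqlMigrationSuffix=${pkg}" "${baseParams[@]}" )

flyway repair "${migrateParams[@]}"
flyway migrate "${migrateParams[@]}" -ignoreMissingMigrations=true
```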
My approach has been to run the deployment in an environment, export the data from the REPEATABLE_MIGRATIONS_HISTORY table (custom table for the repeatable migrations) as insert statements, then truncate the table, execute the inserts, and run the deployment again using the same deployment artifact. On every file type Flyway is correctly evaluating that the checksum has not changed and skipping the files. For the package spec (.pks) and package body (.pkb) files, however, Flyway is executing the repeatable migration every time. I've run queries to verify, and I'm getting incremented executions on all .pks and .pkb files but staying at one execution for every other suffix.
select "description", "script", "checksum", count(1)
from FLYWAY.repeatable_migrations_history
group by "description", "script", "checksum"
order by count(1) desc, "script";
Does anyone else out there have any ideas? I know that these source files should be idempotent, and largely they are, but some of this PL/SQL has been around for 20-plus years. We've seen a couple of objects that throw an error on the first execution post-compile before working perfectly thereafter, and we've never been able to track down a cause or solution. We will need to prevent these unnecessary executions in order to promote this to production.


Cloud Sql Instance Promote with using shell

I am trying to promote a Cloud SQL replica instance to primary from a script. I can promote one instance at a time, but I want to promote all available replicas to primary in parallel, at the same time rather than one by one.
Please suggest corrections to the script.
#!/bin/bash
#Set project variables
gcloud auth login
read -p 'Please provide Project ID for the project that your instance is located in:' project
gcloud config set project $project
#Make a temp directory and file to store the JSON output from Gcloud
mkdir tempFiles
touch tempFiles/instanceDetails.json
touch tempFiles/instanceDetails-dr.json
touch tempFiles/replica1.json
touch tempFiles/replica2.json
touch tempFiles/replica3.json
touch tempFiles/replica4.json
touch tempFiles/replica5.json
touch tempFiles/primaryReplacementReplica.json
#Prompt the user for the primary instance and target failover replica
##read -p 'Enter the primary Instance ID: ' primaryInstance
read -p 'Enter the Instance ID of the first target replica: ' drInstance
read -p 'Enter the Instance ID of the second target replica: ' drInstance2
#Pull all data from primary instance needed for scripting
echo "Pulling Data from your SQL instances..."
##echo $(gcloud sql instances describe $primaryInstance --format="json") > tempFiles/instanceDetails.json
echo $(gcloud sql instances describe $drInstance --format="json") > tempFiles/instanceDetails-dr.json
echo $(gcloud sql instances describe $drInstance2 --format="json") > tempFiles/instanceDetails-dr2.json
#ask user to confirm the action since it is irreversible
read -p "You are attempting to failover from $primaryInstance in $primaryRegion to $drInstance in $drRegion. This is an irreversible action, please type Yes to proceed: " acceptance
read -p "You are attempting to failover from $primaryInstance in $primaryRegion to $drInstance2 in $drRegion. This is an irreversible action, please type Yes to proceed: " acceptance2
if [ "$acceptance" = "Yes" ] && [ "$acceptance2" = "Yes" ]
then
#Promote the read replica in the DR region
echo "Promoting the replica to a standalone instance..."
gcloud sql instances promote-replica $drInstance && gcloud sql instances promote-replica $drInstance2
echo "Instance promoted."
else
echo "You did not confirm with a Yes. No changes have been made."
fi
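To promote several replicas at the same time rather than one by one, a common shell pattern is to launch each promote call as a background job with `&` and then `wait` for all of them. A minimal sketch of the pattern: the gcloud call is stubbed as a function so the sketch runs anywhere, and the instance IDs are hypothetical; in the real script the function body would be `gcloud sql instances promote-replica "$1"`.

```shell
#!/bin/bash
drInstance=replica-b        # hypothetical instance IDs
drInstance2=replica-c

# Stand-in for: gcloud sql instances promote-replica "$1"
promote_replica() {
  echo "promoting $1"
  sleep 1                   # simulates the long-running promotion
  echo "promoted $1"
}

# Launch every promotion concurrently, then block until all of them finish
for r in "$drInstance" "$drInstance2"; do
  promote_replica "$r" &
done
wait
echo "All promotions finished."
```

With the real gcloud call this launches the promotions concurrently instead of chaining them with `&&`, which is what forces them to run one by one.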

Does JanusGraph run on Windows 10

Windows 10, janusgraph-0.2.0-hadoop2.
I have put the winutils.exe in the bin folder.
P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\bin>gremlin-server.bat
Error: Could not find or load main class Files
I had a quick look at the bat script and added some echo statements:
echo "%1"
IF "%1" == "-i" (
GOTO install
) else (
GOTO server
)
:: Start the Gremlin Server
:server
IF "%1" == "" (
SET GREMLIN_SERVER_YAML=%JANUSGRAPH_HOME%\conf\gremlin-server\gremlin-server.yaml
) ELSE (
SET GREMLIN_SERVER_YAML=%1
)
java %JAVA_OPTIONS% %JAVA_ARGS% -cp %CP% org.apache.tinkerpop.gremlin.server.GremlinServer %GREMLIN_SERVER_YAML%
echo %JAVA_OPTIONS%
echo %JAVA_ARGS%
echo %CP%
echo %GREMLIN_SERVER_YAML%
echo "call to GremlinServer"
The output:
P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\bin>gremlin-server.bat
.;C:\Program Files (x86)\QuickTime\QTSystem\QTJava.zip;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\slf4j-log4j12-1.7.12.jar;;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-all-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-berkeleyje-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-bigtable-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-cassandra-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-core-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-cql-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-es-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-hadoop-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-hbase-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-lucene-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-solr-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\*;
""
Error: Could not find or load main class Files
-Xms32m -Xmx512m -Djanusgraph.logdir=P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\log -Dtinkerpop.ext=P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\ext -Dlogback.configurationFile=conf\logback.xml -Dlog4j.configuration=file:/P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\conf\gremlin-server\log4j-server.properties -Dlog4j.debug=true -Dgremlin.log4j.level=WARN -javaagent:P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\jamm-0.3.0.jar -Dgremlin.io.kryoShimService=org.janusgraph.hadoop.serialize.JanusGraphKryoShimService
ECHO is off.
.;C:\Program Files (x86)\QuickTime\QTSystem\QTJava.zip;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\slf4j-log4j12-1.7.12.jar;;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-all-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-berkeleyje-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-bigtable-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-cassandra-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-core-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-cql-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-es-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-hadoop-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-hbase-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-lucene-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\janusgraph-solr-0.2.0.jar;P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\lib\*;
P:\Software\DB\NoSQL\janusgraph-0.2.0-hadoop2\conf\gremlin-server\gremlin-server.yaml
"call to GremlinServer"
This question was also asked on the janusgraph-users Google Group, and I've copied my answers here:
JanusGraph does run on Windows 10. The user experience is not ideal, and could use some help from people with Windows expertise. I've opened up issue 950 to track making the prepackaged distribution more Windows-friendly.
Your problem is probably coming from your CLASSPATH variable, and I'd think that ".;C:\Program Files (x86)\QuickTime\QTSystem\QTJava.zip;" is messing it up because of the spaces in the path. Try unsetting the CLASSPATH before running gremlin-server.bat.
The default Gremlin Server configuration uses janusgraph-cassandra-es-server.properties because the pre-packaged distribution's bin/janusgraph.sh start will start a single-node Cassandra, a single-node Elasticsearch, and the Gremlin Server. If you want to run with Cassandra, you could get a version directly from the Apache Cassandra site, or a DataStax distribution if you want an MSI installer. If you're not interested in using Cassandra, you could change the gremlin-server.yaml to use janusgraph-berkeleyje-server.properties, which is pretty good for getting started.

ksh - exit script if last alphabet in variable is p

I'm writing a ksh script to refresh a schema from prod to a dev/test/qa environment. I would like to have a disaster check in place: I'm asking the user to input the source and target databases as well as schema names. If the user accidentally enters a prod database as the target database name, I would like the script to exit. In our environment, production database names end with p, sometimes followed by 01, 02, 03, etc.
example names:
dbp
dbpp
dbpp01
dbpp02
cdp01
sedpbp
retpp01
PORP01
PORPP01
How can I check whether the last letter (not digit) of my variable string is p or P?
Try the following :
SCHEMA=dbp
case $SCHEMA in
*[pP] | *[pP]0[0-9] ) echo OK
;;
* ) echo Error
;;
esac
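The same pattern can be wrapped in a small function and exercised against the example names from the question. A sketch: `is_prod_db` is a name made up for illustration.

```shell
#!/bin/sh
# Succeeds (exit 0) when the last letter of the name is p/P, optionally
# followed by two digits -- i.e. it looks like a prod SID.
is_prod_db() {
  case "$1" in
    *[pP] | *[pP]0[0-9] ) return 0 ;;
    * ) return 1 ;;
  esac
}

for sid in dbp dbpp01 PORP01 sedpbp dev01 orcl; do
  if is_prod_db "$sid"; then
    echo "$sid: looks like prod - refusing"
  else
    echo "$sid: ok"
  fi
done
```

Note that dev01 passes even though it ends in digits, because the character before the trailing digits must be p or P for the second pattern to match.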
I have added another check which verifies that the source and target database names are not the same as well.
case "$tarSID" in
"$srcSID" )
    echo "Warning: Target Database Cannot be the same as Source"
    echo "Re-Enter Target Database Name"
    ;;
*[pP] | *[pP]0[0-9] )
    echo "Warning: Target Database Cannot be Prod"
    echo "Re-Enter Target Database Name"
    ;;
* )
    ;;
esac
Thanks again Alvin

Bash case not properly evaluating value

The Problem
I have a script with a case statement which I'm expecting to execute based on the value of a variable. The case statement appears to either ignore the value or not properly evaluate it, instead dropping to the default.
The Scenario
I pull a specific character out of our server hostnames which indicates where in our environment the server resides. We have six different locations:
Management(m): servers that are part of the infrastructure such as monitoring, email, ticketing, etc
Development(d): servers that are for developing code and application functionality
Test(t): servers that are used for initial testing of the code and application functionality
Implementation(i): servers that the code is pushed to for pre-production evaluation
Production(p): self-explanatory
Services(s): servers that the customer needs to integrate that provide functionality across their project. These are separate from the Management servers in that these are customer servers while Management servers are owned and operated by us.
After pulling the character from the hostname I pass it to a case block. I expect the case block to evaluate the character and add a couple lines of text to our rsyslog.conf file. What is happening instead is that the case block returns the default which does nothing but tell the person building the server to manually configure the entry due to an unrecognized character.
I've tested this manually against a server I recently built and verified that the character I am pulling from the hostname (an 's') is expected and accounted for in the case block.
The Code
# Determine which environment our server resides in
host=$(hostname -s)
env=${host:(-8):1}
OLDFILE=/etc/rsyslog.conf
NEWFILE=/etc/rsyslog.conf.new
# This is the configuration we need on every server regardless of environment
read -d '' common <<- EOF
...
TEXT WHICH IS ADDED TO ALL CONFIG FILES REGARDLESS OF FURTHER CODE EXECUTION
SNIPPED
....
EOF
# If a server is in the Management, Dev or Test environments send logs to lg01
read -d '' lg01conf <<- EOF
# Relay messages to lg01
*.notice @@xxx.xxx.xxx.100
#### END FORWARDING RULE ####
EOF
# If a server is in the Imp, Prod or is a non-affiliated Services zone server send logs to lg02
read -d '' lg02conf <<- EOF
# Relay messages to lg02
*.notice @@xxx.xxx.xxx.101
#### END FORWARDING RULE ####
EOF
# The general rsyslog configuration remains the same; pull it out and write it to a new file
head -n 63 $OLDFILE > $NEWFILE
# Add the common language to our config file
echo "$common" >> $NEWFILE
# Depending on which environment ($env) our server is in, add the appropriate
# remote log server to the configuration with the $common settings.
case $env in
m) echo "$lg01conf" >> $NEWFILE;;
d) echo "$lg01conf" >> $NEWFILE;;
t) echo "$lg01conf" >> $NEWFILE;;
i) echo "$lg02conf" >> $NEWFILE;;
p) echo "$lg02conf" >> $NEWFILE;;
s) echo "$lg02conf" >> $NEWFILE;;
*) echo "Unknown environment; Manually configure"
esac
# Keep a dated backup of the original rsyslog.conf file
cp $OLDFILE $OLDFILE.$(date +%Y%m%d)
# Replace the original rsyslog.conf file with the new version
mv $NEWFILE $OLDFILE
An Aside
I've already determined that I can combine the different groups of code from the case block onto single lines (a total of two) using the | operator. I've listed it in the manner above since this is how it is coded while I'm having issues with it.
I can't see what's wrong with your code. Maybe add another ;; to the default clause. To find the problem, add set -vx as the first line; it will show you lots of debug information.
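When a case like this unexpectedly falls through to the default, a frequent culprit is an invisible character (trailing space or a carriage return) riding along in the variable. A sketch of a probe plus the condensed case using the | operator, with a made-up hostname:

```shell
#!/bin/bash
host="corpapp1s-web-01"      # hypothetical hostname for illustration
env=${host:(-8):1}           # 8th character from the end, as in the script

# cat -A makes stray characters visible: env=[s^M]$ would reveal a
# carriage return sneaking into the comparison
printf 'env=[%s]\n' "$env" | cat -A

case $env in
  m|d|t) echo "relay to lg01" ;;
  i|p|s) echo "relay to lg02" ;;
  *)     echo "Unknown environment; Manually configure" ;;
esac
```

If the probe shows anything besides the bare letter, stripping it with something like `env=${env//[$'\r\t ']/}` before the case would confirm the diagnosis.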

mysqldump problem with case sensitivity? Win->linux

When I dump a table with uppercase letters using mysqldump, it comes out as lowercase in my > dump.sql file. I found a report of this from 2006, almost 4 years old: http://bugs.mysql.com/bug.php?id=19967
One suggested solution is making Linux case-insensitive, which I'd rather avoid if possible. What's the easiest way to copy a win32 db onto Linux?
According to the MySQL manual, you only have a limited number of options:
Use lower_case_table_names=1 on all systems. The main disadvantage with this is that when you use SHOW TABLES or SHOW DATABASES, you do not see the names in their original lettercase.
Use lower_case_table_names=0 on Unix and lower_case_table_names=2 on Windows. This preserves the lettercase of database and table names. The disadvantage of this is that you must ensure that your statements always refer to your database and table names with the correct lettercase on Windows. If you transfer your statements to Unix, where lettercase is significant, they do not work if the lettercase is incorrect.
Exception: If you are using InnoDB tables and you are trying to avoid these data transfer problems, you should set lower_case_table_names to 1 on all platforms to force names to be converted to lowercase.
See: http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html for full details.
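For reference, a sketch of where the all-lowercase option lives in the server configuration; the file path varies by platform (/etc/my.cnf on many Linux systems, my.ini on Windows), so treat the location as an assumption:

```ini
[mysqld]
lower_case_table_names=1
```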
Today I've had to make it so. I already have a Windows db in lower case and need to import it into a Linux db with case-sensitive table names, so playing with the lower_case_table_names option is not an option :)
It looks like 'show tables' displays appropriately sorted table names, and the dump has table names escaped with the ` character. I've successfully imported the database with the following algorithm:
I have mydb.sql with the lowercase Windows dump.
I started the application to create the database schema in Linux, with case-sensitive names.
That left me with lowercase names in the dump and case-sensitive names in the MySQL database. I converted the dump using sed & awk with the following script:
#!/bin/bash
# -N (--skip-column-names) keeps the "Tables_in_mydb" header row out of the loop
MYSQL="mysql -N -u root -p mydb"
FILE=mydb.sql
TMP1=$(mktemp)
TMP2=$(mktemp)
cp "$FILE" "$TMP1"
for TABLE in $(echo "show tables" | $MYSQL); do
  LCTABLE=$(echo "$TABLE" | awk '{print tolower($0)}')
  echo "$LCTABLE --> $TABLE"
  # /g replaces every backquoted occurrence on a line, not just the first
  sed "s/\`$LCTABLE\`/\`$TABLE\`/g" "$TMP1" > "$TMP2"
  cp "$TMP2" "$TMP1"
done
cp "$TMP1" "$FILE.conv"
rm "$TMP1" "$TMP2"
And the dump has been converted properly. Everything works after import in Linux.
