Flyway command line: filesystem migrations from actual command line input - Windows

I'm trying out Flyway as a possible way to do database migrations.
Because I am supporting multiple databases and potentially running migration jobs for different environments and databases simultaneously, I need the ability to use the command line tool without referencing the config (properties) file for my SQL migrations location. Is there a way to do this? It appears to only read from the default location or from the location specified in the flyway.properties file.
My install directory is C:\flyway. I'm running this on a Windows server with the command below:
flyway.cmd migrate -url=jdbc:sqlserver://%URL%;databaseName=%DB% -user=%USER% -password=%PW% -schemas=dbo -initOnMigrate=true -locations=filesystem:C:/Steve -jarDir=C:/flyway/jars
It's a great tool. I really hope that I will be able to use it.

flyway.cmd -configFile=/path/to/other/configFile.conf should do what you want. I just checked, and it seems I forgot to document this on the website (it is in the usage description of the tool itself).
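For example, you could keep one config file per environment or database, each with its own flyway.locations entry, and select it on the command line. A rough sketch (the file name and path are hypothetical):
flyway.cmd migrate -configFile=C:/flyway/conf/steve.conf -url=jdbc:sqlserver://%URL%;databaseName=%DB% -user=%USER% -password=%PW% -schemas=dbo
where steve.conf would contain a line such as:
flyway.locations=filesystem:C:/Steve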
Could you file an issue against the website, requesting this to be added?

Configuring settings for Last Participant Support with wsadmin/WebSphere

I've recently run into the issue of configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it, but with Jython or wsadmin commands I am not able to find how to do it on the application itself, so the post does not help me. Any ideas?
There is no command assistance available for the action of changing Last Participant Support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the action creates/modifies the application's IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi".
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes won't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) in wsadmin or your Jython script, do the following (note: in this example the .xmi file you created is in the current directory; if not, include the full path to it in the createDocument call):
# Config-repository URI where the PME extension document will live
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
# Create the document in the repository from the local .xmi file, then persist the change
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server

Create an H2 Database from the console

I would like to run H2 on my local host (Windows), and create a new database.
To do so, I have downloaded the zip file from http://www.h2database.com/html/main.html, unzipped it, then run the bin/h2.bat script. I have not used the Windows installer, as the machine I will be running on later does not grant me installation privileges.
The console was successfully opened on port 8082, but I could not create any database; every attempt failed with Database "C:/Users/PC/test" not found [90013-198]. I have tried many variations, to no avail.
The documentation states that a database should be created automatically if it does not exist, but indicates that this cannot be done from the H2 console (http://www.h2database.com/html/tutorial.html#creating_new_databases).
However, the documentation does not provide an alternate way to create a database, either by running the jar with additional parameters, or by another utility.
I feel pretty dazed right now. How do I properly create a new database in H2? I would like a normal database, persisted on disk, not an in-memory one.
It seems this does not work in version 1.4.198.
You could download an older version (I used 1.4.196) to create the database and then switch back to 1.4.198 to open it.
I have managed to run it by using the following command line:
java -cp h2-1.4.198.jar org.h2.tools.Server -tcp -pg -web
I must have missed something in the documentation, sorry about this.
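If you would rather stay on 1.4.198, another option that may work (a sketch I have not verified on this version): H2 also ships a command-line Shell tool, and an embedded connection opened through it against a local file URL should create the database if it does not exist:
java -cp h2-1.4.198.jar org.h2.tools.Shell -url jdbc:h2:C:/Users/PC/test -user sa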

How to run spark-jobs outside the bin folder of spark-2.1.1-bin-hadoop2.7

I have an existing spark-job whose functionality is to connect to a Kafka server, get the data, and then store the data in Cassandra tables. This spark-job currently runs on the server inside spark-2.1.1-bin-hadoop2.7/bin, but whenever I try to run this spark-job from another location, it does not run. This spark-job contains some JavaRDD-related code.
Is there any chance I can run this spark-job from outside as well, by adding a dependency in the pom or something else?
whenever I try to run this spark-job from another location, it does not run
spark-job is a custom launcher script for a Spark application, perhaps with some additional command-line options and packages. Open it, review the content and fix the issue.
If it's too hard to figure out what spark-job does and there's no one nearby to help you out, it's likely time to throw it away and replace with the good ol' spark-submit.
Why don't you use it in the first place?!
Read up on spark-submit in Submitting Applications.
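For reference, an invocation for this kind of job might look something like the sketch below (the class name, jar path, and master URL are hypothetical placeholders, and the connector packages are ones commonly paired with Spark 2.1.x; adjust them to your build):
spark-submit \
  --class com.example.KafkaToCassandraJob \
  --master spark://your-master:7077 \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.1,com.datastax.spark:spark-cassandra-connector_2.11:2.0.2 \
  /path/to/your-job.jar
Because spark-submit lives in spark-2.1.1-bin-hadoop2.7/bin, adding that directory to your PATH (or calling it by its full path) lets you launch the job from any location.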

flyway clean is not dropping scheduler jobs or programs

I recently added a scheduler job and program to my development schema. When I tried to refresh the schema, I did a flyway clean, and then a flyway migrate.
I got the following error:
ERROR: Found non-empty schema "TESTDATA" without metadata table! Use init() or set initOnMigrate to true to initialize the metadata table.
When I dropped the job and program by hand, I was then able to run migrate again.
I've been using flyway for a while, and it's always been very straightforward - but I'm not sure how to convince it to properly clean my schema, now that I have an overnight batch job.
Note: I see the option -initOnMigrate, but this causes me two problems:
I have a lot of batch files which would be sensitive to adding another command-line option.
I use flyway both to update existing schemas and to refresh schemas from scratch. If I need to modify the job or program, I could either include initOnMigrate (and have it bomb on the update) or not include it (and have it bomb on the refresh, which is my current problem).
Thank you
You can work around this by implementing FlywayCallback.afterClean() and doing the cleanup yourself.
Also, please file an issue in the issue tracker so we can fix this in time for 3.1.
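A minimal sketch of such a callback (assuming the Flyway 3.x FlywayCallback interface, where each hook receives a JDBC Connection; only afterClean is shown, and the interface's other methods would need empty implementations; the job and program names are hypothetical, and the drop calls assume Oracle's DBMS_SCHEDULER):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import org.flywaydb.core.api.callback.FlywayCallback;

public class DropSchedulerObjectsCallback implements FlywayCallback {
    @Override
    public void afterClean(Connection connection) {
        // Drop the scheduler objects that "flyway clean" leaves behind
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("BEGIN DBMS_SCHEDULER.DROP_JOB('NIGHTLY_BATCH_JOB'); END;");
            stmt.execute("BEGIN DBMS_SCHEDULER.DROP_PROGRAM('NIGHTLY_BATCH_PROGRAM'); END;");
        } catch (SQLException e) {
            throw new RuntimeException("Failed to drop scheduler objects after clean", e);
        }
    }
    // ... the remaining FlywayCallback methods can be implemented as no-ops ...
}
The callback then needs to be registered, e.g. via the flyway.callbacks property, or Flyway.setCallbacks() when using the API.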

Check a directory of Oracle logs

I'm using the check_logfiles nagios plugin to monitor Oracle alert logs. It works wonderfully for that purpose.
However, I also need to monitor an entire directory of Oracle trace logs for errors, because the Oracle database is always creating new log files with different names.
What I need to know is the best way to scan an entire directory of Oracle trace logs to find out which ones match the patterns that indicate Oracle alerts.
Using check_logfiles, I tried specifying these options:
--criticalpattern='ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
and, to specify the directory of logs:
--logfile='/global/cms/u01/app/orahb/admin/opbhb/udump/'
and
--logfile="/global/cms/u01/app/orahb/admin/opbhb/udump/*"
Neither of these has any effect: the check runs but returns OK. Does anyone know if the check_logfiles Nagios plugin can monitor a directory of files rather than just a single file? Or perhaps there is another, better way to achieve the same goal of monitoring a bunch of files that can't be specified ahead of time?
Use a script (see the sketch after this list) which:
Opens each file
Copies entries which match the pattern
Outputs the matches to a file
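A minimal sketch of such a script, usable directly as a Nagios check (the pattern and log directory are taken from the question; the output path and the .trc file suffix are assumptions):
#!/bin/sh
# Scan every Oracle trace file in the directory for critical patterns
# and collect the matching lines in one file for later inspection.
PATTERN='ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
LOGDIR='/global/cms/u01/app/orahb/admin/opbhb/udump'
OUTFILE='/tmp/oracle_trace_alerts.log'

# -E: extended regex for the alternation; -H: prefix each match with its file name
grep -E -H "$PATTERN" "$LOGDIR"/*.trc > "$OUTFILE" 2>/dev/null

# Nagios exit codes: 0 = OK, 2 = CRITICAL
if [ -s "$OUTFILE" ]; then
    echo "CRITICAL: Oracle alerts found, see $OUTFILE"
    exit 2
fi
echo "OK: no Oracle alerts found"
exit 0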
