I have already implemented Liquibase with Maven. We are currently using a single database (DB2), but now we need to add a new database to the application, which will have different objects.
I've seen that I can define a new profile in Maven, but I couldn't find out how to control which objects get created on which database.
Is there a solution to this? Can I support two different databases with different objects using Liquibase?
As you can see in the documentation, you can use two different executions, like this:
<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>3.0.5</version>
  <executions>
    <execution>
      <phase>process-resources</phase>
      <configuration>
        <changeLogFile>PATH_TO_CHANGELOG_1</changeLogFile>
        <!-- ... connection properties ... -->
      </configuration>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
    <execution>
      <phase>process-resources</phase>
      <configuration>
        <changeLogFile>PATH_TO_CHANGELOG_2</changeLogFile>
        <!-- ... connection properties ... -->
      </configuration>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The only problem with this approach is that you need two different changelog.xml files, one per database.
Alternatively, you can add preconditions to your changelog file to choose which changesets will be processed by each database.
For example:
<changeSet id="1" author="bob">
  <preConditions onFail="MARK_RAN">
    <dbms type="oracle" />
  </preConditions>
  <comment>Comments should go after preConditions; if they are placed before, Liquibase usually reports an error.</comment>
  <dropTable tableName="oldtable"/>
</changeSet>
The onFail="MARK_RAN" setting makes Liquibase skip the changeset but mark it as run, so it will not be attempted again on the next update. See the customPrecondition tag in the documentation for more complex preconditions.
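For instance, here is a minimal sketch of a combined precondition; the or/dbms tags are standard Liquibase, while the custom class name below is purely a hypothetical illustration:
<changeSet id="2" author="bob">
  <preConditions onFail="MARK_RAN">
    <!-- run only on DB2 or Oracle -->
    <or>
      <dbms type="db2"/>
      <dbms type="oracle"/>
    </or>
    <!-- hypothetical class implementing liquibase.precondition.CustomPrecondition -->
    <customPrecondition className="com.example.TableIsEmptyPrecondition">
      <param name="tableName" value="oldtable"/>
    </customPrecondition>
  </preConditions>
  <dropTable tableName="oldtable"/>
</changeSet>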
You may want to have 2 separate changelogs to manage the two databases, even if they are both used by the same application.
As Arturo says, you can have two or more execution nodes, but you must give every execution node a separate id.
<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>3.0.5</version>
  <executions>
    <execution>
      <id>db1-update</id>
      <phase>process-resources</phase>
      <configuration>
        <changeLogFile>src/main/resources/org/liquibase/db1.xml</changeLogFile>
        <driver>org.postgresql.Driver</driver>
        <url>jdbc:postgresql://localhost/db1</url>
        <username>..</username>
        <password>..</password>
      </configuration>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
    <execution>
      <id>db2-update</id>
      <phase>process-resources</phase>
      <configuration>
        <changeLogFile>src/main/resources/org/liquibase/db2.xml</changeLogFile>
        <driver>org.postgresql.Driver</driver>
        <url>jdbc:postgresql://localhost/db2</url>
        <username>...</username>
        <password>...</password>
      </configuration>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
    <execution>
      <id>db3-update</id>
      <phase>process-resources</phase>
      <configuration>
        <changeLogFile>src/main/resources/org/liquibase/db3.xml</changeLogFile>
        <driver>org.postgresql.Driver</driver>
        <url>jdbc:postgresql://localhost/db3</url>
        <username>...</username>
        <password>...</password>
      </configuration>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
  </executions>
</plugin>
You can also use preconditions inside a changeset or changelog and make them conditional on the database:
<preConditions onFail="WARN">
  <dbms type="oracle" />
  <runningAs username="SYSTEM"/>
</preConditions>
This way, each database processes only the changesets intended for it. See the Liquibase preconditions documentation for further details.
Old question, but I'll still answer because I had the same requirement today, and I opted for another solution.
As already proposed in the other answers, I would recommend separate changelogs if you can use them.
But if you want to keep the changelogs unified, as I need to in my specific case, you can use labels instead of preconditions to filter the changesets to be executed.
<changeSet id="0001:1" author="oz" labels="clickhouse">
  <sql>...SOMESQL...</sql>
</changeSet>
<changeSet id="0001:2" author="oz" labels="mongodb">
  <ext:createCollection collectionName="myCollection">
    ...SOMEJSON....
  </ext:createCollection>
</changeSet>
This prevents polluting the DATABASECHANGELOG table of each database with the executions of the other database's changesets.
Note, however, that this causes problems (at least in the current release, 4.6.1) for any Liquibase operation that uses tags, such as rollbackToTag or updateToTag.
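For completeness, here is a sketch of how an execution could select its labels in the Maven plugin. Treat this as an assumption about your setup, and note that the parameter name varies with the Liquibase version (labels in older releases, labelFilter from 4.4 on):
<execution>
  <id>clickhouse-update</id>
  <phase>process-resources</phase>
  <configuration>
    <changeLogFile>src/main/resources/changelog.xml</changeLogFile>
    <!-- only changesets labelled "clickhouse" run against this connection -->
    <labels>clickhouse</labels>
    <!-- ClickHouse connection properties go here -->
  </configuration>
  <goals>
    <goal>update</goal>
  </goals>
</execution>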
I use the latest swagger-maven-plugin from io.swagger.core.v3 to generate my static Swagger API documentation.
In my project, I have two separate APIs, so I want to get a JSON and YAML representation of each API within one package run.
<plugin>
  <groupId>io.swagger.core.v3</groupId>
  <artifactId>swagger-maven-plugin</artifactId>
  <version>2.2.6</version>
  <configuration>
    <outputPath>${basedir}/target/</outputPath>
    <outputFormat>JSONANDYAML</outputFormat>
    <prettyPrint>true</prettyPrint>
  </configuration>
  <executions>
    <execution>
      <id>1</id>
      <goals>
        <goal>resolve</goal>
      </goals>
      <configuration>
        <resourcePackages>
          <resourcePackage>de.test.rest</resourcePackage>
        </resourcePackages>
        <outputFileName>swagger</outputFileName>
        <configurationFilePath>${basedir}/src/main/resources/openApiConfig.yml</configurationFilePath>
      </configuration>
    </execution>
    <execution>
      <id>2</id>
      <goals>
        <goal>resolve</goal>
      </goals>
      <configuration>
        <resourcePackages>
          <resourcePackage>de.test.secondAPI</resourcePackage>
        </resourcePackages>
        <outputFileName>secondAPI</outputFileName>
        <configurationFilePath>${basedir}/src/main/resources/secondOpenApiConfig.yml</configurationFilePath>
      </configuration>
    </execution>
  </executions>
</plugin>
PROBLEM:
The build creates the expected JSON and YAML files for each execution:
swagger.yml
swagger.json
secondAPI.yml
secondAPI.json
The problem is that the secondAPI files are a copy of the swagger files.
I've read the documentation, and my understanding is that configuration in the plugin root is shared between multiple executions, while configuration within the execution tag is used individually per execution.
Is there a way to run the executions in parallel with individual configurations?
Or is it a problem with the plugin itself?
EDIT:
Each execution works as expected when there is only one execution defined in the executions tag.
My application (Maven + Spring Boot) has Liquibase, but I need to execute it only in the dev environment.
In the other environments (prod and CI, for example), I need to block the execution.
Can I do this?
Thanks.
There are two options:
For your prod environment you can set the spring.liquibase.enabled=false property. This disables Liquibase altogether, and no changeSet will be executed.
Use Liquibase contexts. When executing Maven goals you can add the -Dliquibase.contexts=dev_context property (for Spring Boot it is spring.liquibase.contexts=dev_context).
And in your changeSets you can specify the context attribute:
<changeSet id="foo" author="bar" context="dev_context">
  <!-- your logic here -->
</changeSet>
This way your changeSet will be executed only for dev_context.
Thanks for the answer.
I tried this, but it didn't work.
I think the problem is in my pom.xml.
If I delete the tag, Liquibase doesn't run at all.
<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>3.8.4</version>
  <configuration>
    <promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
    <propertyFile>src/main/resources/liquibase.properties</propertyFile>
  </configuration>
  <executions>
    <execution>
      <phase>process-resources</phase>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
  </executions>
</plugin>
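One thing worth checking, sketched here under the assumption that this execution is what triggers the update during your build: the liquibase-maven-plugin has its own contexts parameter, so the filter can be wired directly into the configuration instead of being passed on the command line:
<configuration>
  <promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
  <propertyFile>src/main/resources/liquibase.properties</propertyFile>
  <!-- only changesets with this context (or with no context at all) will run -->
  <contexts>dev_context</contexts>
</configuration>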
I want my "pre-integration-test" phase to consist of the following goal executions, in this specific order:
PHASE: pre-integration-test
get a Spring Boot jar (maven-dependency-plugin:copy)
get-a-port (build-helper-maven-plugin:reserve-network-port)
display-port (maven-antrun-plugin:run #1)
start-server (exec-maven-plugin)
wait-for-startup (maven-antrun-plugin:run #2)
Is there any way to do this using Maven 3?
The problem I am facing is that "maven-antrun-plugin:run" #1 and #2 will always run one after the other, because they are defined in the same plugin element:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>display-port</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <echo>Displaying value of 'tomcat.http.port' property</echo>
          <echo>[tomcat.http.port] ${tomcat.http.port}</echo>
        </target>
      </configuration>
    </execution>
    <execution>
      <id>wait-for-startup</id>
      <phase>pre-integration-test</phase>
      <configuration>
        <target>
          <sleep seconds="10" />
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Right now, the only way I have found to do this is to duplicate the "maven-antrun-plugin" plugin element in the POM file.
But this gets me a warning:
'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration
For the scope of this question, I am not looking for a work-around, such as changing the plugin used for "display-port" or "wait-for-startup", or changing the phase of one of the goals.
I just want to understand whether what I am trying to do is possible or not.
If multiple executions are bound to the same phase, the first one to execute will be the built-in one (e.g. maven-compiler-plugin) whose id is default-something; after that, the remaining executions take place in the order they appear in your POM file.
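As an illustration of that rule, here is a sketch assuming Maven 3.0.3+, where same-phase executions follow plugin declaration order; this is exactly why the two antrun executions above cannot be interleaved with another plugin's goal:
<build>
  <plugins>
    <!-- declared first, so its pre-integration-test execution runs first -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <version>3.4.0</version>
      <executions>
        <execution>
          <id>get-a-port</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>reserve-network-port</goal>
          </goals>
          <configuration>
            <portNames>
              <portName>tomcat.http.port</portName>
            </portNames>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <!-- declared second, so both of its executions run afterwards, back to back -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.8</version>
      <!-- display-port and wait-for-startup executions as shown above -->
    </plugin>
  </plugins>
</build>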
I have a file in my build that I want to put a UUID in every time a build is run. I use Maven's #my.property# replacement to do this for other properties like project.version. What's the simplest way to have Maven insert a universally unique identifier (UUID), similar to what java.util.UUID does? (I'd rather not write a plugin if I can avoid it.)
Edit: Actually, the buildNumber is not random per build but rather the git commit hash of the build. Coincidentally, I also needed this so it's helpful, but does not answer the question.
It appears the 'buildNumber' property does what I need. I'm going forward with that.
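For reference, a minimal sketch of how that property is typically produced, assuming buildnumber-maven-plugin and an <scm> section configured in the POM:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>buildnumber-maven-plugin</artifactId>
  <version>1.4</version>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- skip checking for local modifications and updating from SCM -->
    <doCheck>false</doCheck>
    <doUpdate>false</doUpdate>
  </configuration>
</plugin>
With that in place, ${buildNumber} resolves to the SCM revision, i.e. the git commit hash shown below.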
Maven property:
[echoproperties] buildNumber=fb08b44b310c489f5b170842d3aac3c5eb5a6f7b
In my file:
"uid": "fb08b44b310c489f5b170842d3aac3c5eb5a6f7b",
I used this plugin, which prints all available properties (careful, as some, like env.*, might differ between environments):
<build>
  <plugins>
    <plugin>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.6</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <property environment="env" />
              <echoproperties />
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
https://florianlr.wordpress.com/2012/04/24/16/
I'm at the point in my project where I'm moving data connections to the beta and production databases for testing. Obviously, having the alpha database credentials stored in the source repository is fine, but for the beta and production credentials I'd be put in front of a firing squad.
I know Maven can have a {userdir}/build.properties file. This is the file I want to use to keep the DB credentials out of the source repository. But I can't seem to get Maven to figure out that for file x.cfg.xml it has to replace values.
So I have this line in one of my hibernate.cfg.xml files:
<property name="hibernate.connection.url">#ssoBetaUrl#</property>
Now, how do I get Maven to replace that variable with the value that's in the {userdir}/build.properties file?
EDIT:
I've been playing with the properties-maven-plugin, but I can't seem to get it to fire. I put this in my parent POM:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>properties-maven-plugin</artifactId>
  <version>1.0-alpha-2</version>
  <executions>
    <execution>
      <id>read-properties</id>
      <phase>initialize</phase>
      <goals>
        <goal>read-project-properties</goal>
      </goals>
    </execution>
  </executions>
</plugin>
But when it builds, it does not fire. If I'm reading http://maven.apache.org/maven-1.x/reference/properties.html right, it should find the build properties file at ~/build.properties and go from there, but I'm not sure.
I think you are approaching this the wrong way around. Instead of having the build process bake the appropriate connection details into the JAR file, you should have the program look for a configuration file at startup.
Typically, my Hibernate-based apps look for a file under ${user.home}/.appname/config.properties and load DB credentials and other deployment-specific data from there. If the file is missing, a default version can be included in the JAR and copied to this location on initial startup (so you don't have to copy-paste the file to new systems), then edited with the appropriate settings.
This way, you can use the same build to produce JAR (or WAR) files for test and production servers; the differences live in the (presumably already deployed) configuration files. This also makes it possible to have multiple production deployments, each talking to a different database, without any complications in the build process.
You could use two plugins.
properties-maven-plugin
replacer
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>properties-maven-plugin</artifactId>
  <version>1.0-alpha-1</version>
  <executions>
    <execution>
      <phase>initialize</phase>
      <goals>
        <goal>read-project-properties</goal>
      </goals>
      <configuration>
        <files>
          <!-- ${user.home} is the Maven/Java property for the user directory -->
          <file>${user.home}/build.properties</file>
        </files>
      </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.2</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>target/**/*.*</include>
    </includes>
    <replacements>
      <replacement>
        <token>#ssoBetaUrl#</token>
        <!-- a literal value; once read-project-properties has run,
             a property reference such as ${ssoBetaUrl} works here too -->
        <value>http://[anyURL]</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>