Liquibase with spring boot and cassandra - spring-boot

My team is building a simple app that runs as a job and initializes the application's database based on a Java argument and profile, which swaps between a Postgres and a Cassandra database. There is no problem with Postgres, but Cassandra won't budge. The connection to the Cassandra database is successful, but the changelog is not applied.
We are using library/cassandra:4.1.0
Dependencies of the project:
implementation 'org.hibernate:hibernate-core'
implementation 'com.zaxxer:HikariCP'
implementation 'org.springframework:spring-jdbc'
implementation 'org.liquibase:liquibase-core'
implementation 'org.postgresql:postgresql'
implementation "org.springframework.data:spring-data-cassandra"
implementation ('org.liquibase.ext:liquibase-cassandra:4.18.0'){
exclude group: 'org.slf4j', module: 'slf4j-jdk14'
}
implementation 'org.yaml:snakeyaml'//TODO ?
implementation "org.projectlombok:lombok"
implementation "org.springframework.boot:spring-boot"
implementation "org.springframework.boot:spring-boot-autoconfigure"
implementation "org.springframework.cloud:spring-cloud-config-client"
implementation "org.springframework:spring-beans"
implementation "org.springframework:spring-context"
implementation "org.springframework:spring-core"
implementation "org.slf4j:slf4j-api"
implementation "ch.qos.logback:logback-classic"
implementation "org.slf4j:jul-to-slf4j"
implementation "org.slf4j:log4j-over-slf4j"
application-cassandra.yaml file
server:
  port: 8083
spring:
  config:
    liquibase.change-log: classpath:/${java-param}/changelog.xml
  spring:
    autoconfigure:
      exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
    data:
      cassandra:
        port: 9042
        contact-points: 127.0.0.1
        local-datacenter: datacenter1
        keyspace-name: hs360_pokus
and just a sample changelog.xml from docs https://docs.liquibase.com/start/install/tutorials/cassandra.html
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext"
    xmlns:pro="http://www.liquibase.org/xml/ns/pro"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd
        http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd
        http://www.liquibase.org/xml/ns/pro http://www.liquibase.org/xml/ns/pro/liquibase-pro-latest.xsd">
    <changeSet id="1" author="Liquibase">
        <createTable tableName="test_table">
            <column name="test_id" type="int">
                <constraints primaryKey="true"/>
            </column>
            <column name="test_column" type="varchar"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
What are we missing? Something in config or in dependencies?

I suspect the issue comes from your application.yaml file. If you look at the key hierarchy, I notice you put spring in twice, i.e. your keys end up looking like spring.spring.data.cassandra instead of spring.data.cassandra.
The yaml file should then look like this:
server:
  port: 8083
spring:
  config:
    liquibase.change-log: classpath:/${java-param}/changelog.xml
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
  data:
    cassandra:
      port: 9042
      contact-points: 127.0.0.1
      local-datacenter: datacenter1
      keyspace-name: hs360_pokus
Connectivity is currently successful because you happen to connect to a node with the default Spring Data Cassandra values (127.0.0.1 / 9042 / datacenter1), but it probably writes the tables to the default system keyspace instead of the one you set in your configuration.
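One more thing worth double-checking, though this is an assumption on my part rather than something your logs confirm: Spring Boot's own Liquibase properties live under spring.liquibase, not under spring.config, so a change-log key placed under spring.config would be silently ignored as well. A minimal sketch of that part of the file:

```yaml
spring:
  liquibase:
    change-log: classpath:/${java-param}/changelog.xml
```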

Related

apply migration in all schema that i have using liquibase

I developed a website using Spring Boot, and in this application I'm using a multi-tenant architecture to manage my database. I want to use Liquibase as a DB migration tool. The problem is that when I run a migration, the new modifications (by which I mean adding new columns to different tables and also adding new tables) are only applied to the public schema and not to the other schemas. What I want is for the new modifications to be applied to all schemas when I run a migration.
PS: I'm using Hibernate to create the new schemas.
Liquibase allows dynamic substitution of properties in changelog files. We can configure multiple properties inside a file and then use them wherever required. In your case, we can configure the properties "schema1" and "schema2" with some values and then use them in the changelog file via the ${schema1} or ${schema2} syntax as needed.
In liquibase.properties file, we will configure these properties as follows:
schema1=ABC
schema2=PQR
Liquibase resolves the value of a configured property in the following order of precedence:
1. As an attribute passed to your Liquibase runner.
2. As a JVM system property.
3. As an environment variable.
4. As a CLI attribute, if you are running Liquibase through the command line.
5. In the liquibase.properties file.
6. In the parameters block (a property element of the changelog file).
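As an illustration of that last option (the schema names and values here are just placeholders), the parameters can be declared directly at the top of a changelog with property elements, which serve as defaults that any of the higher-precedence sources above can override:

```xml
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <!-- Default values; overridden by JVM system properties, env variables, etc. -->
    <property name="schema1" value="ABC"/>
    <property name="schema2" value="PQR"/>
</databaseChangeLog>
```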
You can do it as below example code snippet:
1. Adding a column to some table in schema ABC:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
    <changeSet author="authorName" id="some-unique-id" dbms="${dbType}" context="some-context">
        <sql endDelimiter=";" splitStatements="true" stripComments="true">
            -- My SQL query / transactional logic goes here
            ALTER TABLE "${schema1}"."TableName" ADD COLUMN COLUMNNAME DATATYPE;
        </sql>
    </changeSet>
</databaseChangeLog>
2. Creating a table in PQR schema:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
    <changeSet author="authorName" id="some_unique_id" dbms="${dbType}" context="some_context">
        <createTable tableName="TableName" schemaName="${schema2}">
            <column name="id" type="VARCHAR(200)" />
            <column name="name" type="VARCHAR(255)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
Note: the example above uses two properties (schema1 and schema2). You can use fewer, or even more than that.
If you need help with creating "liquibase.properties" file, visit this link
Cheers!

Liquibase loadData changeset ignores context

I have a jhipster project, with Java, Spring and Liquibase as always. I want to load mock users on "dev" mode only, but when deploying to Heroku using the "prod" profile, they are ALSO loaded. It's like the context is ignored completely by liquibase. What am I doing wrong here?
The application-prod.yml file has a liquibase context set to "prod"
spring:
  liquibase:
    contexts: prod
and the application-dev.yml is setting liquibase context to "dev":
spring:
  liquibase:
    contexts: dev
I have some mock user data that I want to load only on dev (when running on localhost), and the liquibase changeset looks like this:
<changeSet author="me" id="mock-data-1" context="dev">
    <loadData encoding="UTF-8"
              file="config/liquibase/mock_users.csv"
              separator=";"
              tableName="jhi_user">
        <column name="activated" type="boolean"/>
        <column name="created_date" type="timestamp"/>
    </loadData>
    ...
</changeSet>
All other changesets have no context applied.
(Probably not relevant but) my mock_users.csv looks like this:
id;login;password_hash;first_name;last_name;email;image_url;activated;lang_key;created_by;last_modified_by;created_date;team_id
5;user1;$2a$10$VEjxo0jq2YG9Rbk2HmX9S.k1uZBGYUHdUcid3g/vfiEl7lwWgOH/K;;;user1@localhost.com;;true;sv;system;system;2019-12-03T09:21:06Z;1
Here is my Procfile for Heroku deployment:
web: java $JAVA_OPTS -jar target/*.war --spring.profiles.active=prod,heroku --server.port=$PORT
release: cp -R src/main/resources/config config && ./mvnw liquibase:update -Pheroku
When deploying to Heroku, however, the logs say that no context is set at all:
Liquibase settings:
...
2019-12-03T13:17:08.821255+00:00 app[release.7646]: [INFO] context(s): null
...
And the entire changeLog is executed, I can see that in the Heroku logs too.
How do I make sure the liquibase contexts from my application-prod.yml file are used correctly?
EDIT: I can make Heroku run Liquibase with the prod context by editing the pom file: under the "heroku" profile, in the liquibase-maven-plugin configuration, I set the "contexts" tag:
<profile>
    <id>heroku</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.liquibase</groupId>
                <artifactId>liquibase-maven-plugin</artifactId>
                <configuration combine.self="override">
                    <changeLogFile>src/main/resources/config/liquibase/master.xml</changeLogFile>
                    <diffChangeLogFile>src/main/resources/config/liquibase/changelog/${maven.build.timestamp}_changelog.xml</diffChangeLogFile>
                    <driver></driver>
                    <url>${env.JDBC_DATABASE_URL}</url>
                    <defaultSchemaName></defaultSchemaName>
                    <username>${env.JDBC_DATABASE_USERNAME}</username>
                    <password>${env.JDBC_DATABASE_PASSWORD}</password>
                    <referenceUrl>hibernate:spring:se.axesslab.respekttrappan.domain?dialect=org.hibernate.dialect.PostgreSQL82Dialect&amp;hibernate.physical_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy&amp;hibernate.implicit_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy</referenceUrl>
                    <verbose>true</verbose>
                    <contexts>prod</contexts>
                    <logging>debug</logging>
                    <promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
But should this really be required? Then what's the point of having the application-*.yml files hold different liquibase contexts?
The execution of Liquibase is triggered from within the Maven plugin, ./mvnw liquibase:update -Pheroku. The maven plugin doesn't know about the Liquibase context you set in Spring's property file.
Like you figured out yourself, you have to either set the context in the pom.xml or let Spring execute Liquibase.
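If you prefer not to hard-code the context in the pom, the plugin's contexts parameter can usually also be passed on the command line as a system property (this assumes the standard liquibase.contexts property name of the liquibase-maven-plugin), e.g. in the Procfile's release command:

```shell
release: cp -R src/main/resources/config config && ./mvnw liquibase:update -Pheroku -Dliquibase.contexts=prod
```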

Intellij JPA console: configure persistence.xml for H2 in-memory database

I am trying to use the JPA console in Intellij Idea Ultimate for testing queries. The project is generated with JHipster 5.7.0 and uses an H2 in-memory database with Hazelcast.
generated application-dev.yml:
...
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:h2:mem:appointmentservice;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
  username: appointmentservice
  password:
  hikari:
    auto-commit: false
h2:
  console:
    enabled: true
jpa:
  database-platform: io.github.jhipster.domain.util.FixedH2Dialect
  database: H2
  show-sql: false
  properties:
    hibernate.id.new_generator_mappings: true
    hibernate.connection.provider_disables_autocommit: true
    hibernate.cache.use_second_level_cache: true
    hibernate.cache.use_query_cache: false
    hibernate.generate_statistics: true
    hibernate.cache.region.factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory
    hibernate.cache.hazelcast.instance_name: appointmentservice
    hibernate.cache.use_minimal_puts: true
    hibernate.cache.hazelcast.use_lite_member: true
...
I created the following persistence.xml in my resources directory:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="appointmentservice" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <properties>
            <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:h2:mem:appointmentservice"/>
            <property name="javax.persistence.jdbc.user" value="appointmentservice"/>
            <property name="hibernate.id.new_generator_mappings" value="true"/>
            <property name="hibernate.cache.use_second_level_cache" value="true"/>
            <property name="hibernate.cache.use_query_cache" value="false"/>
            <property name="hibernate.generate_statistics" value="true"/>
            <property name="hibernate.cache.region.factory_class" value="com.hazelcast.hibernate.HazelcastCacheRegionFactory"/>
            <property name="hibernate.cache.hazelcast.instance_name" value="appointmentservice"/>
            <property name="hibernate.cache.use_minimal_puts" value="true"/>
            <property name="hibernate.cache.hazelcast.use_lite_member" value="true"/>
        </properties>
    </persistence-unit>
</persistence>
The same configuration works when used in the H2 web console.
The problem:
In the IntelliJ persistence view all entities show up when clicking on appointmentservice. However, when I right-click it and open a JPA console, all queries fail, claiming that the tables were not found.
e.g.
jpa-ql> select a from Address a
[2019-03-04 16:03:57] [42S02] Table "ADDRESS" not found; SQL statement:
[2019-03-04 16:03:57] select address0_.id as id1_1_, address0_.active as active2_1_, address0_.city as city3_1_, address0_.clientAccount_id as clientAc9_1_, address0_.country as country4_1_, address0_.institution_id as institu10_1_, address0_.location_id as locatio11_1_, address0_.jhi_number as jhi_numb5_1_, address0_.street as street6_1_, address0_.supplement as suppleme7_1_, address0_.zip as zip8_1_ from address address0_ [42102-197]
I would much appreciate if someone could give me a hint what I'm doing wrong, or if there are any good example persistence.xml files for my case.
Thanks in advance
Edit:
Thanks for all the responses!
- I followed the suggestions of @Gaël Marziou, deleted persistence.xml, and used the TCP URL to connect to the datasource in IntelliJ. There I can now browse the table contents.
I then had to assign the datasource to the entityManagerFactory in the IntelliJ persistence view. Furthermore, I needed to use the same naming strategy as in application.yml.
JHipster creates the H2 server with a TCP port (see h2TCPServer() method in DatabaseConfiguration.java), so your in-memory database is accessible from an external client using a tcp JDBC url which is different from the one configured in your application.yml.
The external client should use jdbc:h2:tcp://localhost:18080/mem:appointmentservice
The port 18080 is based on the web port (e.g. 8080) + 10000 (see h2TCPServer() method) and is logged at application startup as "H2 database is available on port xxxxx".
Personally I use DBeaver to access the H2 database in my JHipster apps.
As advised by M. Deinum, you should delete persistence.xml.

RabbitMQ high availability with Spring AMQP

I'm trying to configure a RabbitMQ cluster in Spring, so I followed the Spring AMQP docs (http://docs.spring.io/spring-amqp/reference/html/amqp.html), but I get an error when adding addresses:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
           http://www.springframework.org/schema/rabbit
           http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">
    <rabbit:connection-factory id="connectionFactory" addresses="host1,host2" />
The dependencies I defined in gradle:
compile group: 'org.springframework.amqp', name: 'spring-amqp', version:'1.2.0.RELEASE'
compile group: 'org.springframework.amqp', name: 'spring-rabbit', version:'1.2.0.RELEASE'
Anyone has any idea why this is happening?
Thanks!
Edit:
The error I get is:
cvc-complex-type.3.2.2: Attribute 'addresses' is not allowed to appear in element 'rabbit:connection-factory'.
host1 & host2 are IPs of virtual machines.
This happens because you declared spring-rabbit XSD file in schemaLocation for version 1.0. Just change:
http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd
to
http://www.springframework.org/schema/rabbit/spring-rabbit-1.2.xsd
to match your spring-rabbit version, and it should work.
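Applied to the snippet above, the header's schemaLocation would then read:

```xml
xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
    http://www.springframework.org/schema/rabbit
    http://www.springframework.org/schema/rabbit/spring-rabbit-1.2.xsd"
```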

Hsqldb has no data when populated via maven liquibase plugin

I've created a schema and populated it via the Maven Liquibase plugin:
<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>2.0.5</version>
    <configuration>
        <propertyFile>src/main/resources/db/config/db.config.properties</propertyFile>
        <changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
    </configuration>
</plugin>
Properties file:
driver: org.hsqldb.jdbcDriver
#HSQLDB Embedded in file
url: jdbc:hsqldb:file:src/main/resources/db/hsqldb/dataFile
username: SA
password:
As I can see in the output when I invoke mvn liquibase:update:
[INFO] Executing on Database: jdbc:hsqldb:file:src/main/resources/db/hsqldb/dataFile
INFO 24.04.13 10:00:liquibase: Successfully acquired change log lock
INFO 24.04.13 10:00:liquibase: Creating database history table with name: DATABASECHANGELOG
INFO 24.04.13 10:00:liquibase: Reading from DATABASECHANGELOG
INFO 24.04.13 10:00:liquibase: Reading from DATABASECHANGELOG
INFO 24.04.13 10:00:liquibase: ChangeSet src/main/resources/db/changelog/db.changelog-1.0.xml::1::sav ran successfully in 7ms
INFO 24.04.13 10:00:liquibase: ChangeSet src/main/resources/db/changelog/db.changelog-1.0.xml::2::sav ran successfully in 3ms
INFO 24.04.13 10:00:liquibase: Successfully released change log lock
INFO 24.04.13 10:00:liquibase: Successfully released change log lock
db.changelog-master.xml contains:
<include file="src/main/resources/db/changelog/db.changelog-1.0.xml"/>
db.changelog-1.0.xml contains:
<changeSet id="1" author="sav">
    <createTable tableName="testTable">
        <column name="id" type="int">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="name" type="varchar(50)">
            <constraints nullable="false"/>
        </column>
        <column name="active" type="boolean" defaultValueBoolean="true"/>
    </createTable>
</changeSet>
<changeSet id="2" author="sav">
    <insert tableName="testTable">
        <column name="id" value="1"/>
        <column name="name" value="First String"/>
    </insert>
    <insert tableName="testTable">
        <column name="id" value="2"/>
        <column name="name" value="Second String"/>
        <column name="active" value="false"/>
    </insert>
</changeSet>
It seems everything is OK. Now I go to the src/main/resources/db/hsqldb/ folder and see three files:
dataFile.log
dataFile.properties
dataFile.script
But I don't see a CREATE TABLE testTable DDL statement in the dataFile.script.
Next, in IntelliJ IDEA I configure the datasource plugin (set the HSQL JDBC driver, url: jdbc:hsqldb:file:/src/main/resources/db/hsqldb/dataFile, user: sa). I connect it and invoke the query:
SELECT * FROM INFORMATION_SCHEMA.SYSTEM_TABLES;
I cannot see the table I try to create.
The value of the hsqldb_type field for each table is MEMORY. I expect it to be FILE or something similar.
Any ideas?:)
PS:
1. The Maven repository returns HSQLDB as the first search result, and its latest version is 1.8.0.10. I actually had to use HSQLDB DATABASE at version 2.2.9. That solved the problem of table creation.
2. I had to use an absolute path to the file AND the ';ifexists=true' property to connect to the existing database in the IDEA datasource plugin. As a result, my connection URL in the property file differs from the one used in the plugin.
You need to be careful about the use of file paths:
This is a relative path:
driver: org.hsqldb.jdbcDriver
#HSQLDB Embedded in file
url: jdbc:hsqldb:file:src/main/resources/db/hsqldb/dataFile
This is an absolute path within the current drive:
Next in Intelli IDEA I configure datasource plugin (set jdbc hsql driver, url: jdbc:hsqldb:file:/src/main/resources/db/hsqldb/dataFile, user: sa )
Try using absolute paths.
Apart from that, when checking things in an existing database, connect with the explicit requirement that the database exists by adding ;ifexists=true to the connection URL.
As we are not sure whether Liquibase shuts down the database correctly, you can also add ;hsqldb.write_delay=false to the Liquibase connection URL to ensure data is fully written out. (We are assuming you are using HSQLDB 2.x for this property.)
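Putting both suggestions together, the two URLs might look like this (the absolute path is purely illustrative):

```properties
# Liquibase side (db.config.properties): flush writes immediately
url: jdbc:hsqldb:file:/home/user/project/db/hsqldb/dataFile;hsqldb.write_delay=false

# IDEA datasource side: fail instead of silently creating a new, empty database
# jdbc:hsqldb:file:/home/user/project/db/hsqldb/dataFile;ifexists=true
```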
With HSQLDB 1.8.x, Liquibase <= 3.2.0 never reads tables from the database.
This is because the catalog should be set to null when retrieving JDBC metadata.
We could not force the catalog to be null, because the method AbstractJdbcDatabase.correctSchema() should not set catalog=schema if supportsCatalogs() returns false.
I will post an issue for this.
