How to properly use liquibase `searchPath` option to indicate the respective resource folders? - bash

I'm trying to invoke the update command of liquibase as follows:
liquibase update --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml \
--url="jdbc:postgresql://localhost:5432/sigma"
This results in:
[...]
Starting Liquibase at 23:44:47 (version 4.17.2 #5255 built at 2022-11-01 18:07+0000)
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Unexpected error running Liquibase: The file classpath:/changelog/db.changelog-master.xml was not found in the configured search path:
- /Users/ikaerom/Dev/sigma-backend
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/liquibase-core.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/lib
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaybird.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/ojdbc8.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/snakeyaml.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/snowflake-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/picocli.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-runtime.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-api.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-core.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/hsqldb.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/connector-api.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/mssql-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/h2.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/mariadb-java-client.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/liquibase-commercial.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-lang3.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/postgresql.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/sqlite-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/opencsv.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-text.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-collections4.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jcc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib
More locations can be added with the 'searchPath' parameter.
The db.changelog-dev.xml essentially includes db.changelog-master.xml, which in turn references some SQL scripts. The two XML files lie in the same resource folder $PROJECT_ROOT/persistence/src/main/resources/changelog. The included SQL files referenced within the changelog XML all lie in subfolders of that resource folder.
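For reference, the layout is roughly the following (the sql/ subfolder name is only illustrative; the actual subfolder names don't matter as long as the include paths inside the changelogs match):
persistence/src/main/resources/changelog/
├── db.changelog-dev.xml        <- includes db.changelog-master.xml
├── db.changelog-master.xml     <- includes the SQL scripts below
└── sql/
    └── ... (SQL scripts referenced from the master changelog)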
Every way of specifying this via the searchPath or even the --search-path parameter (as indicated in the documentation) seems to fail spectacularly:
$> liquibase update --searchPath="./persistence/src/main/resources/" --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml --url="jdbc:postgresql://localhost:5432/sigma"
Unexpected argument(s): --searchPath=./persistence/src/main/resources/
So let's try the other indicated syntax:
$> liquibase update --search-path="./persistence/src/main/resources/" --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml --url="jdbc:postgresql://localhost:5432/sigma"
Unexpected argument(s): --search-path=./persistence/src/main/resources/
If I attempt to use the LIQUIBASE_SEARCH_PATH environment variable instead, I end up with this:
[...]
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Liquibase Community detected and ignored the following environment variables:
- LIQUIBASE_SEARCH_PATH
To configure Liquibase with environment variables requires a Liquibase Pro or Liquibase Labs license. Get a free trial at https://liquibase.com/trial. Options include the liquibase.licenseKey in the defaults file, adding a flag in the CLI, and more. Learn more at https://docs.liquibase.com.
[...]
I don't really want to buy a pro version just to get this feature working ;).
My question is: how do I specify the search path for liquibase to pick it up in my bash shell?
I find it hard to believe that this wouldn't work, given how well documented liquibase is and how it usually gives you the correct hints and pointers when you use it incorrectly. What did I miss?
Update: I have a suspicion that the order of arguments matters, i.e. the update command should come last. However, no luck so far:
$> liquibase \
--changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchpath="./persistence/src/main/resources/changelog/" \
update
[...]
Starting Liquibase at 14:29:51 (version 4.17.2 #5255 built at 2022-11-01 18:07+0000)
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Unexpected error running Liquibase: The file ./persistence/src/main/resources/changelog/db.changelog-dev.xml was not found in the configured search path:
- /Users/ikaerom/Dev/sigma-backend/persistence/src/main/resources/changelog
More locations can be added with the 'searchPath' parameter.
For more information, please use the --log-level flag

Found the solution myself, after digging through the liquibase source code.
In my db.changelog-dev.xml I had a line which included db.changelog-master.xml as follows. The classpath:/ prefix has to be removed:
- <include file="classpath:/changelog/db.changelog-master.xml"/>
+ <include file="changelog/db.changelog-master.xml"/>
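If more files are affected, a quick way to find any remaining classpath: prefixes in the changelogs is a simple grep (path as above):
grep -rn "classpath:" ./persistence/src/main/resources/changelog/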
Then this invocation finally works (note the adapted searchPath and the changelog-file path, which is now given relative to it):
liquibase \
--hub-mode=off \
--headless=true \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchPath="./persistence/src/main/resources" \
--changelog-file=changelog/db.changelog-dev.xml \
update 2>&1 | grep -Ev -- "^##"
The --hub-mode=off prevents liquibase from asking whether you want to connect to Liquibase Hub. The rest is sugar-coating.
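As a quick sanity check, the same global arguments can also be used with the validate command to confirm that the changelog and all of its includes resolve against the search path without applying anything (a sketch, same parameters as in the update call above):
liquibase \
--searchPath="./persistence/src/main/resources" \
--changelog-file=changelog/db.changelog-dev.xml \
--url="jdbc:postgresql://localhost:5432/sigma" \
validate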
The only remaining problem is that when liquibase is invoked from the shell CLI, the user who ends up owning the changelog/lock tables is the user invoking the liquibase command:
ikaerom@/tmp:sigma> \dt databasechangeloglock
+--------+-----------------------+-------+---------+
| Schema | Name | Type | Owner |
|--------+-----------------------+-------+---------|
| public | databasechangeloglock | table | ikaerom |
+--------+-----------------------+-------+---------+
SELECT 1
Time: 0.011s
However, when the liquibase update is run by the Spring Boot application, the table owner is the user configured in the application context (in my case sigma):
ikaerom@/tmp:sigma> \dt databasechangeloglock
+--------+-----------------------+-------+-------+
| Schema | Name | Type | Owner |
|--------+-----------------------+-------+-------|
| public | databasechangeloglock | table | sigma |
+--------+-----------------------+-------+-------+
SELECT 1
Time: 0.010s
ikaerom@/tmp:sigma> \dt databasechangelog
+--------+-------------------+-------+-------+
| Schema | Name | Type | Owner |
|--------+-------------------+-------+-------|
| public | databasechangelog | table | sigma |
+--------+-------------------+-------+-------+
SELECT 1
Time: 0.009s
This clashes if you run the liquibase update from the CLI first (as your own user) and then let the Spring Boot application run its update:
Caused by: liquibase.exception.DatabaseException: ERROR: relation "databasechangeloglock" already exists [Failed SQL: (0) CREATE TABLE public.databasechangeloglock (ID INTEGER NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT databasechangeloglock_pkey PRIMARY KEY (ID))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:397)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:83)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:151)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:135)
at liquibase.lockservice.StandardLockService.init(StandardLockService.java:115)
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:286)
... 94 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "databasechangeloglock" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:94)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:393)
This again can be solved by a proper GRANT for the sigma user or by reassigning ownership to the rightful user (see the psql sketch after the command below). Or simply by setting the --username property to the database user the Spring Boot application context uses:
liquibase \
--hub-mode=off \
--headless=true \
--username="sigma" \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchPath="./persistence/src/main/resources" \
--changelog-file=changelog/db.changelog-dev.xml \
update 2>&1 | grep -Ev -- "^##"
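Alternatively, the GRANT / ownership reassignment mentioned above can be done directly in PostgreSQL after the fact. A minimal sketch via psql, assuming superuser access and the user/table names from above:
psql "postgresql://localhost:5432/sigma" -c "ALTER TABLE public.databasechangelog OWNER TO sigma;"
psql "postgresql://localhost:5432/sigma" -c "ALTER TABLE public.databasechangeloglock OWNER TO sigma;"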

Related

Unable to obtain table lock - another Flyway instance may be running

I'm using the integration of Spring Boot and Flyway (6.5.5) to run updates for a CockroachDB cluster. When several instances of the service start at the same time, all of them try to lock the flyway_schema_history table to validate migrations. However, the following exception occurs:
2020-09-09 00:00:00.013 ERROR 1 --- [ main] o.s.boot.SpringApplication :
Application run failed org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]:
Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException:
Unable to obtain table lock - another Flyway instance may be running
I could not find any config property to tweak this. Maybe someone has faced the same issue and solved it somehow?
Workaround: restart the service.
After debugging the issue, it appears to be caused by rather peculiar Flyway behaviour:
org.flywaydb.core.internal.database.cockroachdb.CockroachDBTable:
CockroachDB-specific table.
Note that CockroachDB doesn't support table locks. We therefore use a row in the schema history as a lock indicator;
if another process has inserted such a row we wait (potentially indefinitely) for it to be removed before
carrying out a migration.
So, in my case the service was restarted while a migration was being applied, and this pseudo-lock record was left behind forever.
The workaround was to delete the "lock" row manually (see the sketch after the row dump below):
installed_rank | version | description | type | script | checksum | installed_by | installed_on | execution_time | success
-----------------+----------------------------------+------------------------------------------+------+--------------------------------------------------+-------------+--------------------+----------------------------------+----------------+----------
-100 | d9ab17626a4d66a4d8a89fe9bdca98e9 | flyway-lock | | | 0 | | 2020-09-14 11:25:02.874838+00:00 | 0 | true
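A minimal sketch of that manual cleanup, assuming the cluster is reachable over the PostgreSQL wire protocol on CockroachDB's default port and that the default flyway_schema_history table is used (the connection string and database name mydb are placeholders; the installed_rank and description values are taken from the row dump above):
psql "postgresql://root@localhost:26257/mydb" -c "DELETE FROM flyway_schema_history WHERE installed_rank = -100 AND description = 'flyway-lock';"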
Hope it will help someone.
The appropriate ticket has been created: https://github.com/flyway/flyway/issues/2932

Distributed OSGi example with Apache Karaf Cellar - Client bundle can't activate because it can't find the distributed service

I am using Apache Karaf 4.1.1 and Karaf Cellar. I have written two bundles. The first bundle provides a service of type ITrackerManager. The second bundle has a component that references ITrackerManager. My end goal is to witness the component in the second bundle successfully get a reference to the ITrackerManager service in the first bundle which is running on a different node. This is all part of my exploration of distributed OSGi.
What is actually happening when I install that second bundle is that it gets installed but fails to activate due to a missing service reference. I must be conducting my test incorrectly. Any ideas on how I would go about demonstrating my end goal: a component in a bundle on Node B successfully using a service on Node A?
Here is how I have run my test so far.
Node A
karaf@root()> cluster:node-list
| Id | Alias | Host Name | Port
--+-------------------+-------+--------------+-----
x | 159.4.251.58:5701 | | 159.4.251.58 | 5701
| 159.4.251.58:5702 | | 159.4.251.58 | 5702
Node B
karaf@root()> cluster:node-list
| Id | Alias | Host Name | Port
--+-------------------+-------+--------------+-----
| 159.4.251.58:5701 | | 159.4.251.58 | 5701
x | 159.4.251.58:5702 | | 159.4.251.58 | 5702
So far so good. I am running two karaf instances on my computer. Both instances see each other. Now I want to install that first bundle onto Node A ONLY. To accomplish that, I install the bundle into the cluster, then specifically remove it from Node B.
Node A
karaf@root()> cluster:bundle-install -s default mvn:myCompany/dosgi-example-part1/1.0-SNAPSHOT
karaf@root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+--------------------------------------------------------------
0 | Active | | cluster/local | | 5.6.2 | System Bundle
...
67 | Active | | cluster/local | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf@root()> cluster:service-list
Service Class | Provider Node
--------------------------+------------------
myCompany.ITrackerManager | 159.4.251.58:5701
| 159.4.251.58:5702
Still looking good. My bundle is in the cluster, is local on Node A (and Node B at this point), and the service is recognized by the cluster and is available on both Node A and Node B. Now to remove the bundle from Node B.
Node B
karaf@root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+-------------------------------------------------------------
67 | Active | | cluster/local | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf@root()> bundle:list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
---+--------+-----+----------------+-----------------------------------------------
75 | Active | 80 | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf@root()> bundle:uninstall 75
karaf@root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+--------------------------------------------------------------
67 | Active | | cluster | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf@root()> cluster:service-list
Service Class | Provider Node
--------------------------+------------------
myCompany.ITrackerManager | 159.4.251.58:5701
Excellent. The first bundle has been removed from Node B but still shows up as being in the cluster. Both nodes agree that my service is only available on Node A now (since the bundle was removed from Node B). Now I will load my second bundle on Node B only. This is where I run into problems. I don't load the second bundle using the cluster:bundle-install command because I don't want it ending up on Node A. So instead I install my second bundle using the normal bundle:install command. This results in an error about an unsatisfied reference.
Node B
karaf@root()> bundle:install -s mvn:otherCompany/dosgi-example-part2/1.0-SNAPSHOT
Bundle ID: 76
Error executing command: Error installing bundles:
Unable to start bundle mvn:otherCompany/dosgi-example-part2/1.0-SNAPSHOT: org.osgi.framework.BundleException: Unable to resolve otherCompany.dosgi-example-part2 [76](R 76.0): missing requirement [otherCompany.dosgi-example-part2 [76](R 76.0)] osgi.wiring.package; (&(osgi.wiring.package=myCompany)(version>=1.0.0)(!(version>=2.0.0))) Unresolved requirements: [[otherCompany.dosgi-example-part2 [76](R 76.0)] osgi.wiring.package; (&(osgi.wiring.package=myCompany)(version>=1.0.0)(!(version>=2.0.0)))]
karaf@root()> bundle:list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
---+-----------+-----+----------------+-----------------------------------------------------------------------------------------------------
76 | Installed | 80 | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 2
So there it is. I install the second bundle on Node B only, expecting that it is able to successfully use the required service which resides on Node A only. Unfortunately that does not happen. Instead I get an error message stating there are unresolved requirements. It seems to behave as if DOSGi is not available. If I install both bundles on the same node, the second bundle activates without any errors. Any insights you may have would be appreciated.
My problem was two-fold.
Stuff to be sent over DOSGi needs to be serializable. In my case, I was calling a method on a remote service that took an argument. That argument was a class type defined in a common API. That class type was not serializable. Once I made it serializable, I started getting different errors. Which brings me to...
Normal namespace rules apply. I will elaborate below.
My API defined two interfaces.
ITracker
ITrackerManager
That API bundle was installed into the cluster so it is available on all nodes. My Service bundle had a concrete implementation of ITrackerManager. When that bundle is installed locally on Node A, the cluster:service-list command correctly shows that Node A has a service of type ITrackerManager.
My Client bundle has a concrete implementation of ITracker that had a reference to ITrackerManager which was installed on Node B. The first thing the ITracker instance did in its activate method was call ITrackerManager.addTracker(this). What should have happened was that the instance of ITracker on Node B provided itself to the ITrackerManager running on Node A. Initially this failed because ITracker was not serializable. Once that was solved, I started seeing classNotFound exceptions on Node A.
Node A was trying to deserialize the ITracker instance locally. It was attempting to deserialize a concrete class (TheirTracker) which was not defined locally; it was only defined on Node B in the client bundle. This failed.
So the normal namespace rules apply. Even though the client bundle on Node B has a reference to a service running in a bundle Node A, the service bundle in Node A cannot create (i.e. deserialize) an instance of a class that only exists in the client bundle on Node B.
I changed my interfaces so that the ITrackerManager method does not take an ITracker argument. Instead it takes a string. Invoking that method over DOSGi works fine.
While I understand why this problem exists, this undermines a core capability I was hoping to use with DOSGi. I want clients to be able to register with a central controller which will actively control them. This won't work because even though the clients implement the interface the central controller is looking for, the specific serialization fails at the central controller. The client concrete classes exist in a namespace unknown to the central controller, hence the client cannot successfully pass itself to the central controller.
There must be a way to achieve what I am looking for in DOSGi without making each of the multiple clients an exported DOSGi service. Any ideas?

"Method not supported" when creating table via Thrift Server JDBC in Spark 1.5

I have an instance of Spark 1.5 running a Thrift server. My database manager (DBeaver) successfully connects to this Thrift server. However, when I try to run the following piece of code:
CREATE TABLE test(
id int
)
I receive: DBCException: SQL Error: Method not supported
java.sql.SQLException: SQLException: Method not supported
The interesting thing is, the table is in fact created. When I try:
beeline> show tables;
+------------+--------------+--+
| tableName  | isTemporary  |
+------------+--------------+--+
| test       | false        |
+------------+--------------+--+
If I try to create a similar table from beeline, it is created without any error messages.
0: jdbc:hive2://localhost:10000> CREATE TABLE test02( id INT );
+---------+--+
| result |
+---------+--+
+---------+--+
The question is, how to create tables via JDBC without receiving this error message?

Unable to install logstash contrib plugins?

I want to use the logstash contrib plugin riemann in my config file. On running logstash, this error comes up:
+---------------------------------------------------------+
| An unexpected error occurred. This is probably a bug.   |
| You can find help with this problem in a few places:    |
|                                                         |
|   * chat: #logstash IRC channel on freenode irc.        |
|     IRC via the web: http://goo.gl/TI4Ro                |
|   * email: logstash-users@googlegroups.com              |
|   * bug system: https://logstash.jira.com/              |
|                                                         |
+---------------------------------------------------------+
The error reported is:
Couldn't find any output plugin named 'riemann'. Are you sure this is correct? Trying to load the riemann output plugin resulted in this error: no such file to load -- logstash/outputs/riemann
I have a folder inside which both the logstash and contrib tarballs are present and extracted.
I am using logstash 1.4.1 and logstash-contrib-1.4.1.
I tried the manual installation for contrib too, by:
./bin/plugin install contrib
but nothing appears on the console on running the command.
Any help?
EDIT
On ls the following is my directory structure:
ls
elasticsearch-1.1.1 kibana-3.1.0.tar.gz logstash-1.4.1.tar.gz logstash-contrib-1.4.1.tar.gz
elasticsearch-1.1.1.tar.gz logstash-1.4.1 logstash-contrib-1.4.1 riemann-0.2.5.tar.bz2
Thus I have untarred contrib in the same directory as logstash. Any ideas?
You should extract logstash-contrib-1.4.1.tar.gz inside the logstash-1.4.1 directory with these commands:
cd logstash-1.4.1
tar zxvf logstash-contrib-1.4.1.tar.gz --strip 1
Then check that lib/logstash/outputs/riemann.rb exists under logstash-1.4.1.
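For example, a quick check that the plugin file is now in place (paths as in the listing above):
ls logstash-1.4.1/lib/logstash/outputs/riemann.rb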

Vectorwise integration Pentaho

I have a problem regarding the integration of Vectorwise with Pentaho.
Pentaho was working fine with other databases, but the integration with Vectorwise always gives the same error.
AdhocWebService.ERROR_0012 - Failed to generate the report preview. Please check the server log for details of the error.
When I checked the error log, it said this:
ERROR [org.pentaho.platform.plugin.services.connections.metadata.sql.SqlMetadataQueryExec]
SqlMetadataQueryExec.ERROR_0002 - Query execution failed:
QueryModelMetaData.ERROR_0001 - !QueryModelMetaData.ERROR_0001_MetadataColumnNotFound!
Generated SQL:
SELECT
  "BT_TIME_DIMENSION_TIME_DIMENSION"."cl_year" AS "COL0",
  AVG("BT_TIME_DIMENSION_TIME_DIMENSION"."cl_week") AS "COL1"
FROM "time_dimension" "BT_TIME_DIMENSION_TIME_DIMENSION"
GROUP BY "BT_TIME_DIMENSION_TIME_DIMENSION"."cl_year"
ORDER BY "COL0"
ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine]
db143504-d7c1-11e1-b334-355c9beece81:SOLUTION-ENGINE:preview.xaction:
Action Sequence execution failed, see details below
  | Error Time: Friday, July 27, 2012 1:35:40 PM IST
  | Session ID: joe
  | Instance Id: db143504-d7c1-11e1-b334-355c9beece81
  | Action Sequence: preview.xaction
  | Execution Stack: EXECUTING ACTION: rule (MQLRelationalDataComponent)
  | Action Class: MQLRelationalDataComponent
  | Action Desc: rule
  | Loop Index (1-based): 0
When I run the same query directly on the Vectorwise server, the output is obtained correctly.
Please help me. Thanks in advance :)
