Unable to obtain table lock - another Flyway instance may be running - spring-boot

I'm using the Spring Boot integration of Flyway (6.5.5) to run updates on a CockroachDB cluster. When several instances of the service start at the same time, all of them try to lock the flyway_schema_history table to validate migrations. However, the following exception occurs:
2020-09-09 00:00:00.013 ERROR 1 --- [ main] o.s.boot.SpringApplication :
Application run failed org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]:
Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException:
Unable to obtain table lock - another Flyway instance may be running
I could not find any config property to tweak this. Has anyone faced the same issue and solved it somehow?
Workaround: restart the service.

After debugging the issue, it turned out to be caused by a rather odd Flyway behaviour, documented in org.flywaydb.core.internal.database.cockroachdb.CockroachDBTable:
/**
 * CockroachDB-specific table.
 *
 * Note that CockroachDB doesn't support table locks. We therefore use a row in the schema history as a lock indicator;
 * if another process has inserted such a row we wait (potentially indefinitely) for it to be removed before
 * carrying out a migration.
 */
So in my case the service was restarted while a migration was being applied, and this pseudo-lock record was left behind forever.
The workaround was to delete the "lock" row manually:
installed_rank | version | description | type | script | checksum | installed_by | installed_on | execution_time | success
-----------------+----------------------------------+------------------------------------------+------+--------------------------------------------------+-------------+--------------------+----------------------------------+----------------+----------
-100 | d9ab17626a4d66a4d8a89fe9bdca98e9 | flyway-lock | | | 0 | | 2020-09-14 11:25:02.874838+00:00 | 0 | true
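The stale row can be removed with a plain SQL statement; a sketch, assuming the default flyway_schema_history table and the marker row shown above (Flyway tags it with a negative installed_rank and the description flyway-lock):

```sql
-- Delete the pseudo-lock row left behind by the interrupted migration.
-- Run this only when you are sure no other Flyway instance is actually migrating.
DELETE FROM flyway_schema_history
WHERE description = 'flyway-lock' AND installed_rank < 0;
```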
Hope it helps someone.
The appropriate ticket has been created: https://github.com/flyway/flyway/issues/2932

Related

How to properly use liquibase `searchPath` option to indicate the respective resource folders?

I'm trying to invoke the update command of liquibase as follows:
liquibase update --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml \
--url="jdbc:postgresql://localhost:5432/sigma"
This results in:
[...]
Starting Liquibase at 23:44:47 (version 4.17.2 #5255 built at 2022-11-01 18:07+0000)
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Unexpected error running Liquibase: The file classpath:/changelog/db.changelog-master.xml was not found in the configured search path:
- /Users/ikaerom/Dev/sigma-backend
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/liquibase-core.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/lib
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaybird.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/ojdbc8.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/snakeyaml.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/snowflake-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/picocli.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-runtime.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-api.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jaxb-core.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/hsqldb.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/connector-api.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/mssql-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/h2.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/mariadb-java-client.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/liquibase-commercial.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-lang3.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/postgresql.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/sqlite-jdbc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/opencsv.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-text.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/commons-collections4.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib/jcc.jar
- /opt/homebrew/Cellar/liquibase/4.17.2/libexec/internal/lib
More locations can be added with the 'searchPath' parameter.
The db.changelog-dev.xml is essentially including db.changelog-master.xml, which then also references some SQL scripts. The two XML files lie in the same resource folder $PROJECT_ROOT/persistence/src/main/resources/changelog. The imported/included SQL files referenced within the changelog XML all lie in the resource folder's subfolders.
Every way of specifying this via the elusive searchPath or even --search-path parameter (as indicated in the documentation) seems to fail spectacularly:
$> liquibase update --searchPath="./persistence/src/main/resources/" --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml --url="jdbc:postgresql://localhost:5432/sigma"
Unexpected argument(s): --searchPath=./persistence/src/main/resources/
So let's try the other indicated syntax:
$> liquibase update --search-path="./persistence/src/main/resources/" --changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml --url="jdbc:postgresql://localhost:5432/sigma"
Unexpected argument(s): --search-path=./persistence/src/main/resources/
If I attempt to use LIQUIBASE_SEARCH_PATH=, I end up with this:
[...]
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Liquibase Community detected and ignored the following environment variables:
- LIQUIBASE_SEARCH_PATH
To configure Liquibase with environment variables requires a Liquibase Pro or Liquibase Labs license. Get a free trial at https://liquibase.com/trial. Options include the liquibase.licenseKey in the defaults file, adding a flag in the CLI, and more. Learn more at https://docs.liquibase.com.
[...]
I don't really want to buy a pro version just to get this feature working ;).
My question is: how do I specify the search path for liquibase to pick it up in my bash shell?
I find it hard to believe that this wouldn't work, given that Liquibase is so well documented and usually gives you the correct hints and pointers when you use it incorrectly. What did I miss?
Update: I have a suspicion that the order of invocation matters. So, the update command should be last in the list. However, no luck so far:
$> liquibase \
--changelog-file=./persistence/src/main/resources/changelog/db.changelog-dev.xml \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchpath="./persistence/src/main/resources/changelog/" \
update
[...]
Starting Liquibase at 14:29:51 (version 4.17.2 #5255 built at 2022-11-01 18:07+0000)
Liquibase Version: 4.17.2
Liquibase Community 4.17.2 by Liquibase
Unexpected error running Liquibase: The file ./persistence/src/main/resources/changelog/db.changelog-dev.xml was not found in the configured search path:
- /Users/ikaerom/Dev/sigma-backend/persistence/src/main/resources/changelog
More locations can be added with the 'searchPath' parameter.
For more information, please use the --log-level flag
Found the solution myself, after digging through the Liquibase source code.
In my db.changelog-dev.xml I had a line that included db.changelog-master.xml as follows. The classpath:/ prefix has to be removed:
- <include file="classpath:/changelog/db.changelog-master.xml"/>
+ <include file="changelog/db.changelog-master.xml"/>
Then this invocation finally works (note the adapted searchPath and the now-relative --changelog-file setting):
liquibase \
--hub-mode=off \
--headless=true \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchPath="./persistence/src/main/resources" \
--changelog-file=changelog/db.changelog-dev.xml \
update 2>&1 | grep -Ev -- "^##"
The --hub-mode=off will prevent liquibase from asking if you want to connect to the liquibase hub. The rest is sugar-coating.
The only remaining problem is that when liquibase is invoked from the shell CLI, the user who ends up owning the changelog/lock tables is the database user invoking the liquibase command:
ikaerom#/tmp:sigma> \dt databasechangeloglock
+--------+-----------------------+-------+---------+
| Schema | Name | Type | Owner |
|--------+-----------------------+-------+---------|
| public | databasechangeloglock | table | ikaerom |
+--------+-----------------------+-------+---------+
SELECT 1
Time: 0.011s
However, when the liquibase update is run by the Spring Boot application, the table owner is the user configured in the application context (in my case sigma):
ikaerom#/tmp:sigma> \dt databasechangeloglock
+--------+-----------------------+-------+-------+
| Schema | Name | Type | Owner |
|--------+-----------------------+-------+-------|
| public | databasechangeloglock | table | sigma |
+--------+-----------------------+-------+-------+
SELECT 1
Time: 0.010s
ikaerom#/tmp:sigma> \dt databasechangelog
+--------+-------------------+-------+-------+
| Schema | Name | Type | Owner |
|--------+-------------------+-------+-------|
| public | databasechangelog | table | sigma |
+--------+-------------------+-------+-------+
SELECT 1
Time: 0.009s
This clashes if you run your CLI liquibase update first and then start the application:
Caused by: liquibase.exception.DatabaseException: ERROR: relation "databasechangeloglock" already exists [Failed SQL: (0) CREATE TABLE public.databasechangeloglock (ID INTEGER NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT databasechangeloglock_pkey PRIMARY KEY (ID))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:397)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:83)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:151)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:135)
at liquibase.lockservice.StandardLockService.init(StandardLockService.java:115)
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:286)
... 94 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "databasechangeloglock" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:94)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:393)
This again can be solved by a proper GRANT for sigma, by reassigning ownership to the rightful user, or simply by setting the --username property to the database user of the Spring Boot application context:
liquibase \
--hub-mode=off \
--headless=true \
--username="sigma" \
--url="jdbc:postgresql://localhost:5432/sigma" \
--searchPath="./persistence/src/main/resources" \
--changelog-file=changelog/db.changelog-dev.xml \
update 2>&1 | grep -Ev -- "^##"
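The GRANT/ownership route can also be done directly in the database; a minimal sketch, assuming the tables live in the public schema and sigma is the application's database user:

```sql
-- Hand the Liquibase bookkeeping tables over to the application user
ALTER TABLE public.databasechangelog OWNER TO sigma;
ALTER TABLE public.databasechangeloglock OWNER TO sigma;
```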

flyway upgrade from 4.x to 5.x runs already-executed migration scripts again

I was upgrading from Spring Boot 1.5 to 2. For that, I upgraded Flyway from 4.x to 5.2.4. When I run the Spring Boot application after that, it executes scripts that were already executed. Below are the logs I am seeing (project-specific names removed, as I am not allowed to post them):
myproject INFO 2019-03-11T16:08:11-0400 main [org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory] Creating Schema History table: "PUBLIC"."flyway_schema_history"
myproject INFO 2019-03-11T16:08:11-0400 main [org.flywaydb.core.internal.command.DbMigrate] Current version of schema "PUBLIC": << Empty Schema >>
myproject INFO 2019-03-11T16:08:11-0400 main [org.flywaydb.core.internal.command.DbMigrate] Migrating schema "PUBLIC" to version 1 - CREATE mything
myproject ERROR 2019-03-11T16:08:11-0400 main [org.flywaydb.core.internal.command.DbMigrate] Migration of schema "PUBLIC" to version 1 - CREATE mything failed! Please restore backups and roll back database and code!
myproject WARN 2019-03-11T16:08:11-0400 main [org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext] Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.command.DbMigrate$FlywayMigrateException:
Migration V1__CREATE_mything.sql failed
---------------------------------------
SQL State : 42509
Error Code : -5509
Message : type not found or user lacks privilege: SERIAL
I guess you are using the default value for the flyway.table parameter, which changed in version 5.0.0.
Refer to https://flywaydb.org/documentation/releaseNotes:
Issue 1848: The default for flyway.table has been changed from schema_version to flyway_schema_history
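If you would rather keep the old table than migrate its name, you can point Flyway back at it; a sketch for a Spring Boot 2 application.properties (spring.flyway.* is Boot 2's property namespace; use flyway.table directly when configuring Flyway standalone):

```properties
# Keep using the pre-5.0 history table name instead of flyway_schema_history
spring.flyway.table=schema_version
```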

Distributed OSGi example with Apache Karaf Cellar - Client bundle can't activate because can't find distributed service

I am using Apache Karaf 4.1.1 and Karaf Cellar. I have written two bundles. The first bundle provides a service of type ITrackerManager. The second bundle has a component that references ITrackerManager. My end goal is to witness the component in the second bundle successfully get a reference to the ITrackerManager service in the first bundle which is running on a different node. This is all part of my exploration of distributed OSGi.
What is actually happening when I install that second bundle is that it gets installed but fails to activate due to missing the service reference. I must be conducting my test incorrectly. Any ideas on how I would go about demonstrating my end goal; component in bundle on Node B successfully uses service on Node A?
Here is how I have run my test so far.
Node A
karaf#root()> cluster:node-list
| Id | Alias | Host Name | Port
--+-------------------+-------+--------------+-----
x | 159.4.251.58:5701 | | 159.4.251.58 | 5701
| 159.4.251.58:5702 | | 159.4.251.58 | 5702
Node B
karaf#root()> cluster:node-list
| Id | Alias | Host Name | Port
--+-------------------+-------+--------------+-----
| 159.4.251.58:5701 | | 159.4.251.58 | 5701
x | 159.4.251.58:5702 | | 159.4.251.58 | 5702
So far so good. I am running two karaf instances on my computer. Both instances see each other. Now I want to install that first bundle onto Node A ONLY. To accomplish that, I install the bundle into the cluster, then specifically remove it from Node B.
Node A
karaf#root()> cluster:bundle-install -s default mvn:myCompany/dosgi-example-part1/1.0-SNAPSHOT
karaf#root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+--------------------------------------------------------------
0 | Active | | cluster/local | | 5.6.2 | System Bundle
...
67 | Active | | cluster/local | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf#root()> cluster:service-list
Service Class | Provider Node
--------------------------+------------------
myCompany.ITrackerManager | 159.4.251.58:5701
| 159.4.251.58:5702
Still looking good. My bundle is in the cluster, is local on Node A (and Node B at this point), and the service is recognized by the cluster and is available on both Node A and Node B. Now to remove the bundle from Node B.
Node B
karaf#root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+-------------------------------------------------------------
67 | Active | | cluster/local | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf#root()> bundle:list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
---+--------+-----+----------------+-----------------------------------------------
75 | Active | 80 | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf#root()> bundle:uninstall 75
karaf#root()> cluster:bundle-list default
Bundles in cluster group default
ID | State | Lvl | Located | Blocked | Version | Name
---+----------+-----+---------------+---------+----------------+--------------------------------------------------------------
67 | Active | | cluster | | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 1
karaf#root()> cluster:service-list
Service Class | Provider Node
--------------------------+------------------
myCompany.ITrackerManager | 159.4.251.58:5701
Excellent. The first bundle has been removed from Node B but still shows up as being in the cluster. Both nodes agree that my service is only available on Node A now (since the bundle was removed from Node B). Now I will load my second bundle on Node B only. This is where I run into problems. I don't load the second bundle using the cluster:bundle-install command because I don't want it ending up on Node A. So instead I install my second bundle using the normal bundle:install command. This results in an error about an unsatisfied reference.
Node B
karaf#root()> bundle:install -s mvn:otherCompany/dosgi-example-part2/1.0-SNAPSHOT
Bundle ID: 76
Error executing command: Error installing bundles:
Unable to start bundle mvn:otherCompany/dosgi-example-part2/1.0-SNAPSHOT: org.osgi.framework.BundleException: Unable to resolve otherCompany.dosgi-example-part2 [76](R 76.0): missing requirement [otherCompany.dosgi-example-part2 [76](R 76.0)] osgi.wiring.package; (&(osgi.wiring.package=myCompany)(version>=1.0.0)(!(version>=2.0.0))) Unresolved requirements: [[otherCompany.dosgi-example-part2 [76](R 76.0)] osgi.wiring.package; (&(osgi.wiring.package=myCompany)(version>=1.0.0)(!(version>=2.0.0)))]
karaf#root()> bundle:list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
---+-----------+-----+----------------+-----------------------------------------------------------------------------------------------------
76 | Installed | 80 | 1.0.0.SNAPSHOT | Distributed OSGi Example Part 2
So there it is. I install the second bundle on Node B only, expecting that it is able to successfully use the required service, which resides on Node A only. Unfortunately that does not happen. Instead I get an error message stating there are unresolved requirements. It behaves as if DOSGi is not available. If I install both bundles on the same node, the second bundle activates without any errors. Any insights would be appreciated.
My problem was two-fold.
Anything sent over DOSGi needs to be serializable. In my case, I was calling a method on a remote service that took an argument. That argument was a class type defined in a common API, and it was not serializable. Once I made it serializable, I started getting different errors. Which brings me to...
Normal namespace rules apply. I will elaborate below.
My API defined two interfaces.
ITracker
ITrackerManager
That API bundle was installed into the cluster so it is available on all nodes. My Service bundle had a concrete implementation of ITrackerManager. When that bundle is installed locally on Node A, the cluster:service-list command correctly shows that Node A has a service of type ITrackerManager.
My Client bundle has a concrete implementation of ITracker that had a reference to ITrackerManager which was installed on Node B. The first thing the ITracker instance did in its activate method was call ITrackerManager.addTracker(this). What should have happened was that the instance of ITracker on Node B provided itself to the ITrackerManager running on Node A. Initially this failed because ITracker was not serializable. Once that was solved, I started seeing classNotFound exceptions on Node A.
Node A was trying to deserialize the ITracker instance locally. It was attempting to deserialize a concrete class (TheirTracker) which was not defined locally; it was only defined on Node B in the client bundle. This failed.
So the normal namespace rules apply. Even though the client bundle on Node B has a reference to a service running in a bundle Node A, the service bundle in Node A cannot create (i.e. deserialize) an instance of a class that only exists in the client bundle on Node B.
I switched up my interfaces so that the ITrackerManager method does not take an ITracker argument. Instead it takes a string. Invoking that method over DOSGi works fine.
While I understand why this problem exists, this undermines a core capability I was hoping to use with DOSGi. I want clients to be able to register with a central controller which will actively control them. This won't work because even though the clients implement the interface the central controller is looking for, the specific serialization fails at the central controller. The client concrete classes exist in a namespace unknown to the central controller, hence the client cannot successfully pass itself to the central controller.
There must be a way to achieve what I am looking for in DOSGi without making each of the multiple clients an exported DOSGi service. Any ideas?
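The serialization constraint described above can be sketched in plain Java. TrackerInfo is a hypothetical payload class standing in for the argument type; the roundtrip method simulates the marshalling a DOSGi remoting layer performs when a call crosses nodes, which is why the concrete class must both implement Serializable and be resolvable on both sides (e.g. live in a shared API bundle):

```java
import java.io.*;

// Hypothetical payload type: anything passed across a DOSGi boundary is
// serialized on the caller's node and deserialized on the provider's node.
class TrackerInfo implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String trackerId;

    TrackerInfo(String trackerId) { this.trackerId = trackerId; }
    String getTrackerId() { return trackerId; }

    // Simulates the serialize/deserialize roundtrip the remoting layer performs.
    static TrackerInfo roundtrip(TrackerInfo in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(in);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (TrackerInfo) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        TrackerInfo copy = roundtrip(new TrackerInfo("tracker-42"));
        System.out.println(copy.getTrackerId());
    }
}
```

If TrackerInfo did not implement Serializable, the writeObject call would throw a NotSerializableException, mirroring the first failure described above; and if the receiving node could not resolve the concrete class, readObject would throw a ClassNotFoundException, mirroring the second.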

Exception occurs at the end of test cases

I'm using Maven for dependency management. When I run the test cases, an exception occurs at the end of the test cases, though the test cases pass successfully.
Following is my stack trace:
2013-10-08 16:04:22,839 [Thread-15] ERROR plugins.DefaultGrailsPlugin - Error configuration scaffolding: Error creating bean with name 'instanceControllersApi': Singleton bean creation not allowed while the singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
Message: Error creating bean with name 'instanceControllersApi': Singleton bean creation not allowed while the singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
Line | Method
->> 662 | run in java.lang.Thread
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
I'm using Grails 2.1.3. I have tried both "static" and "dynamic" scaffolding, but it did not resolve the issue.
I also referred to What does this exception mean? issue but no luck.
One user with a similar error fixed it by deleting the project folder in his ~/.grails directory.
http://grails.1312388.n4.nabble.com/Database-migration-plugin-Running-dbm-gorm-diff-results-Error-creating-bean-with-name-instanceContro-td4637567.html
A good ol' grails clean might help too, and be less invasive.
Also, if you can share the project through source control (git, mercurial, svn), you might try reproducing the issue on another machine. If you can't, that's a good sign that the issue is peculiar to your environment, and could be resolved through some sort of cleanup.
I have resolved my issue. I am not sure why it was occurring, but I had many controllers where scaffold=true. I generated all the controllers and views, and that resolved my issue.

Unsatisfied dependency on javax.sql.DataSource

I have a maven + spring based application which I built and deployed in Servicemix.
However, when I tried to start the bundle, it remained in the Waiting status for a long time before generating the following exception:
16:25:52,219 | DEBUG | Timer-0 | DependencyServiceManager | startup.DependencyServiceManager 339 | 72 - org.springframework.osgi.extender - 1.2.0 | Deregistering service dependency dependencyDetector for OsgiBundleXmlApplicationContext(bundle= abc, config=osgibundle:/META-INF/spring/*.xml)
16:25:52,219 | ERROR | Timer-0 | WaiterApplicationContextExecutor | WaiterApplicationContextExecutor 432 | 72 - org.springframework.osgi.extender - 1.2.0 | Unable to create application context for [abc], unsatisfied dependencies: Dependency on [(objectClass=javax.sql.DataSource)] (from bean [&dataSource])
org.springframework.context.ApplicationContextException: Application context initialization for 'com.vetstreet.pet_mailer' has timed out
Appreciate any help or suggestion.
Spring Extender holds back the startup of an application (setting its status to Waiting) if a service that you're referencing is unavailable at startup. The reason is that the availability attribute of every referenced service is set to mandatory, and
there's a default-timeout global attribute, which by default is set to 5 seconds. If the service you're referring to doesn't appear in that amount of time, Spring Extender will throw an exception like the one you have.
So I think something is wrong with the service publication of your DataSource. Do you have the corresponding tag in your other bundle?
<osgi:service ...>
Check out this link: http://static.springsource.org/osgi/docs/2.0.0.M1/reference/html/service-registry.html. It contains a lot of examples. Ensure that in both osgi:service and osgi:reference you have the javax.sql.DataSource interface set. And be careful not to publish the same interface from two different bundles.
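A minimal sketch of the matching pair of declarations (the bean id dataSource is a placeholder; the osgi:service goes in the bundle that creates the DataSource, the osgi:reference in the consuming bundle):

```xml
<!-- Publishing bundle: export the DataSource bean as an OSGi service -->
<osgi:service ref="dataSource" interface="javax.sql.DataSource"/>

<!-- Consuming bundle: bind to whichever javax.sql.DataSource service is registered -->
<osgi:reference id="dataSource" interface="javax.sql.DataSource"/>
```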
One more thing: just to be sure, import the javax.sql package in your manifest:
<Import-Package>javax.sql</Import-Package>
Hope this helps, Gergely