I have a Spring Boot application with Flyway. I have the following SQL script:
src/main/resources/db/migration/V1__init.sql
but the script is not executed.
In the application.properties file I have:
spring.datasource.url = jdbc:mysql://localhost:3306/carorderprocess?useSSL=false
spring.datasource.username = root
spring.datasource.password = ...
spring.flyway.baselineOnMigrate = true
When I run the application, in the DB I only see:
mysql> select * from flyway_schema_history;
+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
| installed_rank | version | description           | type     | script                | checksum | installed_by | installed_on        | execution_time | success |
+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
|              1 | 1       | << Flyway Baseline >> | BASELINE | << Flyway Baseline >> |     NULL | root         | 2019-11-19 10:47:52 |              0 | 1       |
+----------------+---------+-----------------------+----------+-----------------------+----------+--------------+---------------------+----------------+---------+
So the script is not executed. Why?
This script won't run because the version in the filename is not higher than the highest version stored in the flyway_schema_history table. There are two ways to fix it:
Clear the flyway_schema_history table
Rename your file to V2__init.sql (recommended solution)
Then simply restart your Spring Boot app, and the changes should be applied out of the box.
One note: the first solution probably also requires removing the spring.flyway.baselineOnMigrate = true property. I would also consider whether you really need it; what it does can be found here.
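If you go with the first option, clearing the history in MySQL is a single statement (a sketch; it assumes Flyway's default history table name, adjust if spring.flyway.table is customized):
-- Remove the recorded baseline so Flyway re-applies migrations from V1 on the next start.
DELETE FROM flyway_schema_history;
Renaming the file instead requires no database changes at all, which is why it is the recommended solution.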
When you create a Spring Boot application that uses Flyway, you need to do the following steps:
First. Add Flyway to your pom.xml if you are using Maven, for example via the flyway-maven-plugin:
<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>4.0.3</version>
</plugin>
Second. Create a flyway.properties file; it should reside in the same directory as the pom.xml file. The basic configuration looks like this:
flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schemaName
flyway.url=jdbc:h2:mem:DATABAS
flyway.locations=filesystem:db/migration
Third. Create your migration files under the db/migration folder, for example V1_1_0__my_first_migration.sql.
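A minimal migration file could look like this (the table and columns below are just placeholders for illustration):
-- V1_1_0__my_first_migration.sql (hypothetical schema, adjust to your domain)
CREATE TABLE car_order (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    model VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);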
Fourth. Run the application: mvn spring-boot:run.
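With the plugin and properties file above in place, you could presumably also trigger the migrations directly from Maven instead of waiting for application startup:
mvn flyway:migrate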
I have just started exploring Substrate and I am following this tutorial: https://docs.substrate.io/tutorials/v3/add-a-pallet/.
Under the step "Implement the Nicks pallet Config trait" -> "Add Nicks to the construct_runtime! macro", I added the Nicks pallet to my runtime, as shown in the screenshot.
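Roughly, the line I added inside construct_runtime! looks like this (a sketch following the tutorial; other pallets are omitted and the exact pallet parts may differ for your template version):
construct_runtime!(
    pub enum Runtime where
        Block = Block,
        NodeBlock = opaque::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        System: frame_system::{Pallet, Call, Config, Storage, Event<T>},
        // ... existing pallets ...
        Nicks: pallet_nicks::{Pallet, Call, Storage, Event<T>},
    }
);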
Now, when I run cargo check -p node-template-runtime, I get a lot of errors;
asad#asad-Z440:~/Dev/Blockchain/polkadot/substrate-node-template$ cargo check -p node-template-runtime
warning: /home/asad/Dev/Blockchain/polkadot/substrate-node-template/node/Cargo.toml: version requirement `3.0.0-monthly-2021-09+1` for dependency `node-template-runtime` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
warning: /home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime/Cargo.toml: version requirement `3.0.0-monthly-2021-09+1` for dependency `pallet-template` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
Updating git repository `https://github.com/paritytech/substrate.git`
Updating crates.io index
Updating git repository `https://github.com/paritytech/substrate.git`
Compiling node-template-runtime v3.0.0-monthly-2021-09+1 (/home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime)
error: failed to run custom build command for `node-template-runtime v3.0.0-monthly-2021-09+1 (/home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime)`
Caused by:
process didn't exit successfully: `/home/asad/Dev/Blockchain/polkadot/substrate-node-template/target/debug/build/node-template-runtime-c9c48352bd7ed47e/build-script-build` (exit status: 1)
--- stdout
Information that should be included in a bug report.
Executing build command: "rustup" "run" "nightly" "cargo" "rustc" "--target=wasm32-unknown-unknown" "--manifest-path=/home/asad/Dev/Blockchain/polkadot/substrate-node-template/target/debug/wbuild/node-template-runtime/Cargo.toml" "--color=always" "--release"
Using rustc version: rustc 1.57.0-nightly (aa7aca3b9 2021-09-30)
--- stderr
warning: /home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime/Cargo.toml: version requirement `3.0.0-monthly-2021-09+1` for dependency `pallet-template` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
Updating git repository `https://github.com/paritytech/substrate.git`
Compiling node-template-runtime v3.0.0-monthly-2021-09+1 (/home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime)
warning: unused doc comment
--> /home/asad/Dev/Blockchain/polkadot/substrate-node-template/runtime/src/lib.rs:253:1
|
253 | /// Nicks Config:
| ^^^^^^^^^^^^^^^^^ rustdoc does not generate documentation for macro invocations
|
= note: `#[warn(unused_doc_comments)]` on by default
= help: to document an item produced by a macro, the macro must produce the documentation as part of its expansion
error: duplicate lang item in crate `sp_io` (which `frame_support` depends on): `panic_impl`.
.........
310 | / construct_runtime!(
311 | | pub enum Runtime where
312 | | Block = Block,
313 | | NodeBlock = opaque::Block,
... |
327 | | }
328 | | );
| |__^
note: required by a bound in `sp_runtime::generic::UncheckedExtrinsic`
--> /home/asad/.cargo/git/checkouts/substrate-7e08433d4c370a21/20a9bbb/primitives/runtime/src/generic/unchecked_extrinsic.rs:39:40
|
39 | pub struct UncheckedExtrinsic<Address, Call, Signature, Extra>
| ^^^^ required by this bound in `sp_runtime::generic::UncheckedExtrinsic`
= note: this error originates in the macro `construct_runtime` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0277`.
warning: `node-template-runtime` (lib) generated 1 warning
error: could not compile `node-template-runtime` due to 112 previous errors; 1 warning emitted
Things I have tried:
Removing the Cargo.lock file and then running cargo check -p node-template-runtime again.
PS: I am just starting with Rust and Substrate, so spare me if I am asking something obvious here.
So I'm just trying to get a project set up using JDK 15 / sbt 1.5.4 / Scala 2.13.6, all of which have been installed via brew on macOS.
However, before I even attempt to build via IntelliJ etc., I'm unable to connect to the sbt shell from a normal terminal.
[info] welcome to sbt 1.5.4 (AdoptOpenJDK Java 15.0.2)
[info] loading global plugins from /Users/user/.sbt/1.0/plugins
[warn] Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? (default: r)
Exception in thread "Thread-0" java.lang.NoClassDefFoundError: Could not initialize class com.swoval.runtime.ShutdownHooks
at com.swoval.runtime.ShutdownHooks$1.run(ShutdownHooks.java:25)
I've tried reinstalling all three dependencies a few times and cleared the cache, but still nothing. I'm guessing something isn't configured right, because it's unable to even connect to the shell.
Thanks!
EDIT: I've just followed the install steps in the sbt docs, and the only requirements are JDK 11 or 8 and sbt itself. I've removed Scala / JDK 15 and re-downloaded JDK 11 via SDKMAN this time, and I'm still having the same issue.
EDIT 2: I've removed the .sbt directory and reinitialised, and now the sbt command is showing more output; hopefully this helps a little more.
java.lang.NoClassDefFoundError: Could not initialize class com.swoval.runtime.ShutdownHooks
at com.swoval.runtime.NativeLoader.loadPackaged(NativeLoader.java:143)
at com.swoval.runtime.NativeLoader.loadPackaged(NativeLoader.java:174)
at com.swoval.files.apple.FileEventMonitorImpl.<clinit>(FileEventMonitors.java:127)
at com.swoval.files.apple.FileEventMonitors.get(FileEventMonitors.java:47)
at com.swoval.files.ApplePathWatcher.<init>(ApplePathWatcher.java:258)
at com.swoval.files.ApplePathWatcher.<init>(ApplePathWatcher.java:194)
at com.swoval.files.ApplePathWatchers.get(ApplePathWatcher.java:331)
at com.swoval.files.PathWatchers.get(PathWatchers.java:84)
at com.swoval.files.FileTreeRepositories.get(FileTreeRepositories.java:64)
at com.swoval.files.FileTreeRepositories.get(FileTreeRepositories.java:32)
at sbt.internal.nio.FileTreeRepositoryImpl.<init>(FileTreeRepositoryImpl.scala:46)
at sbt.internal.nio.FileTreeRepository$.default(FileTreeRepository.scala:40)
at sbt.BuiltinCommands$.$anonfun$setupGlobalFileTreeRepository$1(Main.scala:985)
at sbt.BuiltinCommands$.$anonfun$doLoadProject$5(Main.scala:974)
at sbt.Project$.setProject(Project.scala:501)
at sbt.BuiltinCommands$.doLoadProject(Main.scala:974)
at sbt.BuiltinCommands$.$anonfun$loadProjectImpl$2(Main.scala:912)
at sbt.Command$.$anonfun$applyEffect$4(Command.scala:150)
at sbt.Command$.$anonfun$applyEffect$2(Command.scala:145)
at sbt.Command$.process(Command.scala:189)
at sbt.MainLoop$.$anonfun$processCommand$5(MainLoop.scala:245)
at scala.Option.getOrElse(Option.scala:189)
at sbt.MainLoop$.process$1(MainLoop.scala:245)
at sbt.MainLoop$.processCommand(MainLoop.scala:278)
at sbt.MainLoop$.$anonfun$next$5(MainLoop.scala:163)
at sbt.State$StateOpsImpl$.runCmd$1(State.scala:289)
at sbt.State$StateOpsImpl$.process$extension(State.scala:325)
at sbt.MainLoop$.$anonfun$next$4(MainLoop.scala:163)
at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:23)
at sbt.MainLoop$.next(MainLoop.scala:163)
at sbt.MainLoop$.run(MainLoop.scala:144)
at sbt.MainLoop$.$anonfun$runWithNewLog$1(MainLoop.scala:119)
at sbt.io.Using.apply(Using.scala:27)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:112)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:66)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:51)
at sbt.MainLoop$.runLogged(MainLoop.scala:42)
at sbt.StandardMain$.runManaged(Main.scala:218)
at sbt.xMain$.$anonfun$run$11(Main.scala:133)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at scala.Console$.withIn(Console.scala:230)
at sbt.internal.util.Terminal$.withIn(Terminal.scala:560)
at sbt.internal.util.Terminal$.$anonfun$withStreams$1(Terminal.scala:350)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at scala.Console$.withOut(Console.scala:167)
at sbt.internal.util.Terminal$.$anonfun$withOut$2(Terminal.scala:550)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at scala.Console$.withErr(Console.scala:196)
at sbt.internal.util.Terminal$.withOut(Terminal.scala:550)
at sbt.internal.util.Terminal$.withStreams(Terminal.scala:350)
at sbt.xMain$.withStreams$1(Main.scala:87)
at sbt.xMain$.run(Main.scala:121)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at sbt.internal.XMainConfiguration.run(XMainConfiguration.java:56)
at sbt.xMain.run(Main.scala:46)
at xsbt.boot.Launch$.$anonfun$run$1(Launch.scala:149)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:176)
at xsbt.boot.Launch$.run(Launch.scala:149)
at xsbt.boot.Launch$.$anonfun$apply$1(Launch.scala:44)
at xsbt.boot.Launch$.launch(Launch.scala:159)
at xsbt.boot.Launch$.apply(Launch.scala:44)
at xsbt.boot.Launch$.apply(Launch.scala:21)
at xsbt.boot.Boot$.runImpl(Boot.scala:78)
at xsbt.boot.Boot$.run(Boot.scala:73)
at xsbt.boot.Boot$.main(Boot.scala:21)
at xsbt.boot.Boot.main(Boot.scala)
[error] java.lang.NoClassDefFoundError: Could not initialize class com.swoval.runtime.ShutdownHooks
[error] Use 'last' for the full log.
[warn] Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? (default: r)
Exception in thread "Thread-0" java.lang.NoClassDefFoundError: Could not initialize class com.swoval.runtime.ShutdownHooks
at com.swoval.runtime.ShutdownHooks$1.run(ShutdownHooks.java:25)
EDIT 3: Tried creating a new user and installing just sbt, and I'm still getting the same issue. I'll attempt completely removing Java, as it seems to be the culprit, and reinstalling.
Similar to Boris's suggestion, the issue lay in the installed version of Java.
There are two variants of AdoptOpenJDK, .j9 and .hs, as seen below if you are using SDKMAN as a package manager:
 AdoptOpenJDK  | >>> | 16.0.1.j9    | adpt    | installed  | 16.0.1.j9-adpt
               |     | 16.0.1.hs    | adpt    |            | 16.0.1.hs-adpt
               |     | 11.0.11.j9   | adpt    |            | 11.0.11.j9-adpt
               |     | 11.0.11.hs   | adpt    |            | 11.0.11.hs-adpt
For some reason the .hs-adpt builds show this issue on every version, but the .j9-adpt builds work fine.
TL;DR: Install the .j9-adpt build of the Java version you need; .hs-adpt is not working for me with sbt 1.5.4.
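With SDKMAN, picking the .j9 build explicitly looks like this (use whatever identifier sdk list java shows on your machine):
sdk install java 16.0.1.j9-adpt
sdk default java 16.0.1.j9-adpt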
I assume the reason is that you are using the wrong Java JDK for macOS on an M1 chip. You need to install a proper JDK version; I use SDKMAN for that.
Install SDKMAN:
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
Set up SDKMAN to not use Rosetta 2 compatible versions:
vim ~/.sdkman/etc/config
and set sdkman_rosetta2_compatible=false
Choose a Java JDK from the list:
sdk list java
================================================================================
Available Java Versions
================================================================================
 Vendor        | Use | Version      | Dist    | Status     | Identifier
--------------------------------------------------------------------------------
 Azul Zulu     |     | 16.0.1       | zulu    |            | 16.0.1-zulu
               |     | 11.0.11      | zulu    |            | 11.0.11-zulu
               |     | 8.0.292      | zulu    |            | 8.0.292-zulu
 BellSoft      |     | 16.0.1       | librca  |            | 16.0.1-librca
               |     | 11.0.11      | librca  |            | 11.0.11-librca
               |     | 8.0.292      | librca  |            | 8.0.292-librca
 Java.net      |     | 18.ea.3      | open    |            | 18.ea.3-open
               |     | 18.ea.2      | open    |            | 18.ea.2-open
               |     | 18.ea.1      | open    |            | 18.ea.1-open
               |     | 17.ea.28     | open    |            | 17.ea.28-open
               |     | 17.ea.27     | open    |            | 17.ea.27-open
               |     | 17.ea.26     | open    |            | 17.ea.26-open
               |     | 17.ea.25     | open    |            | 17.ea.25-open
================================================================================
and install it using the command sdk install java IDENTIFIER, e.g.:
sdk install java 16.0.1-zulu
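After installing, you can switch to it and check what sbt will pick up (sdk use affects only the current shell, sdk default makes it permanent):
sdk use java 16.0.1-zulu
sdk default java 16.0.1-zulu
java -version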
In a GitLab CI configuration it is possible to specify stages, where jobs are grouped by stage and the jobs within a stage are executed in parallel. Imagine that I'd like to do the following:
Build a release binary.
Build a release Docker image for release binary.
Build a debug binary.
Build a debug Docker image for debug binary.
Without nested stages, I can build the release and debug binaries at the same time, and later build both images. But this is terribly inefficient, because one of the builds takes a lot longer than the other, yet I cannot start creating an image for the build that finishes first.
If only it were possible to arrange for an image-building job to start as soon as its binary build finished, it would be perfect. One way this might have been possible is if I could specify nested stages where, say, a stage build-all had two nested stages, build-release and build-debug, each composed of two jobs: build-release-binary and build-release-image, and, similarly, build-debug-binary and build-debug-image.
Since I'm new to GitLab, I would also appreciate a negative answer, i.e. knowing that it is not possible is also useful.
Problem
To first confirm your problem, I imagine you have a setup like this:
.gitlab-ci.yml:
stages:
  - build-binaries
  - build-images

# Binaries
build-release-binary:
  stage: build-binaries
  script:
    - make release

build-debug-binary:
  stage: build-binaries
  script:
    - make debug

# Docker Images
build-release-image:
  stage: build-images
  dependencies:
    - build-release-binary
  script:
    - docker build -t wvxvw:release .

build-debug-image:
  stage: build-images
  dependencies:
    - build-debug-binary
  script:
    - docker build -t wvxvw:debug .
And that should produce a pipeline like this:
     build-binaries                      build-images
 ______________________               _____________________
|                      |            |                     |
| build-release-binary |----+--+--->| build-release-image |
|______________________|   /    \   |_____________________|
                           |    |
 ______________________    |    |     _____________________
|                      |   |    |   |                     |
|  build-debug-binary  |---/    \-->|  build-debug-image  |
|______________________|            |_____________________|
Assessment
You are correct that no job from the build-images stage will begin until all jobs from the build-binaries stage complete (even though a given job's dependencies are already met).
There is a GitLab issue open that discusses this:
gitlab-org/gitlab-ce#49964: Allow running a CI job if its dependencies succeeded
I've added a comment pointing out the improvements that could be made in this case. In the future, the pipeline might then look like this (note the separate connecting lines):
     build-binaries                      build-images
 ______________________               _____________________
|                      |            |                     |
| build-release-binary |----------->| build-release-image |
|______________________|            |_____________________|
 ______________________               _____________________
|                      |            |                     |
|  build-debug-binary  |----------->|  build-debug-image  |
|______________________|            |_____________________|
Workaround
Sometimes, when you have sequential tasks, it's easier to simply run them in a single job. This avoids the overhead of firing up another job when you already have everything ready to go in the first one.
As a work-around, you could simply flatten your pipeline into a single stage which would build both the binary and the Docker image:
.gitlab-ci.yml:
stages:
  - build

build-release:
  stage: build
  script:
    - make release
    - docker build -t wvxvw:release .

build-debug:
  stage: build
  script:
    - make debug
    - docker build -t wvxvw:debug .
Your pipeline would then of course look like this:
      build
 _______________
|               |
| build-release |
|_______________|
 _______________
|               |
|  build-debug  |
|_______________|
I've worked with a team to simplify their pipeline in a similar manner, and we were pleased with the results.
As of GitLab 12.2, this was fixed with the needs keyword, so arbitrary DAGs are now allowed. You can visualize the graph as of GitLab 13.1 (Beta).
For example, imagine you want to run pylint and unit tests in parallel, and then check the coverage of your unit tests, but without waiting for pylint to finish.
stages:
  - Checks
  - SecondaryChecks

pylint:
  stage: Checks
  script: pylint

unittests:
  stage: Checks
  script: coverage run -m pytest -rs --verbose

testcoverage:
  stage: SecondaryChecks
  needs: ["unittests"]
  script: coverage report -m | grep -q "TOTAL.*100%"
Note that needs only works for jobs defined in earlier stages, hence the two stages here.
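Applied to the original release/debug scenario, a needs-based pipeline could look like this (a sketch; job names and image tags follow the question's example):
stages:
  - build-binaries
  - build-images

build-release-binary:
  stage: build-binaries
  script:
    - make release

build-debug-binary:
  stage: build-binaries
  script:
    - make debug

# Each image job starts as soon as its own binary job finishes,
# instead of waiting for the whole build-binaries stage.
build-release-image:
  stage: build-images
  needs: ["build-release-binary"]
  script:
    - docker build -t wvxvw:release .

build-debug-image:
  stage: build-images
  needs: ["build-debug-binary"]
  script:
    - docker build -t wvxvw:debug .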
When I try to install my application as an OSGi bundle with the install command in the Karaf command line, everything seems fine. When I then type start <id>, everything still seems fine, but my application does not seem to accept requests. When I then type log:display, I get this:
2016-04-20 13:49:38,251 | INFO | Thread-19 | bundle | 37 - org.apache.aries.spifly.dynamic.bundle - 1.0.1 | Bundle Considered for SPI providers: oms-integrations
2016-04-20 13:49:38,251 | INFO | Thread-19 | bundle | 37 - org.apache.aries.spifly.dynamic.bundle - 1.0.1 | No 'SPI-Provider' Manifest header. Skipping bundle: oms-integrations
I'm new to this and have no clue what this means ("No 'SPI-Provider' Manifest header.") or how to solve it.
This is not a problem. It just means that you have Apache Aries SPI Fly installed. It scans all bundles for this header and enhances the ones that have it so they can use the ServiceLoader in OSGi. If you do not use the ServiceLoader, you can safely ignore these messages.
You can also set this logger to WARN to suppress the messages.
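For example, from the Karaf console (assuming the messages come from the org.apache.aries.spifly logger, as the bundle name above suggests; adjust the logger name if yours differs):
log:set WARN org.apache.aries.spifly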
I have a Mac application (e.g. Sample.pkg containing Sample.app) along with a few pkg dependencies (e.g. A.pkg and B.pkg). Whenever the user runs the dmg/product archive bundled with these three packages, A.pkg and B.pkg have to be installed before Sample.pkg is installed.
Is there a way to specify this dependency while packaging the Mac application, without needing the user to manually check and install them in the right order?
Solution
There is a way.
You can add an entry like this to your distribution.xml:
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<installer-gui-script minSpecVersion="1">
    <title>Application name</title>
    <organization>com.organization</organization>
    ....
    <volume-check>
        <required-bundles description="Some message which UI Installer doesn't show :(">
            <!-- bundle 1 -->
            <bundle id="com.organization.app1" path="Applications/App1.app" />
            <!-- bundle 2 -->
            <bundle id="com.organization.app2" path="Applications/App2.app" />
        </required-bundles>
    </volume-check>
    ....
</installer-gui-script>
This is documented here (required-bundles).
Some examples can be found on GitHub.
Disadvantage
There is a bug in the Apple Installer; the required-bundles documentation for description says:
Attributes
| Attribute name | Type                     | Description                                                                                                          |
|----------------|--------------------------|----------------------------------------------------------------------------------------------------------------------|
| all            | Boolean                  | Optional. Values: true (default) to require all of the specified bundles, or false to require at least one of them.   |
| description    | String, localization key | Optional. A description of the required bundles, displayed to the user if the requirement is not met.                 |
So the message from description should be shown, but I can't see it anywhere, so the user can be confused about why they are unable to install the application.
It just warns: "You can't install <your application> here, <your application> does not allow it." (sorry, translated from my localization back to English).
Alternative
I've seen some installation packages that run a custom script from installation-check, invoking it from the installer JavaScript using system.run('script_name').
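A rough sketch of what that can look like in distribution.xml (the script name and messages are placeholders; system.run returns the exit status of the program it launches):
<installation-check script="checkDependencies()"/>
<script>
<![CDATA[
function checkDependencies() {
    // Hypothetical helper bundled with the installer; exits 0 when A.pkg and B.pkg are present.
    if (system.run('check_dependencies.sh') != 0) {
        my.result.title = 'Missing prerequisites';
        my.result.message = 'Please install A.pkg and B.pkg before installing Sample.pkg.';
        my.result.type = 'Fatal';
        return false;
    }
    return true;
}
]]>
</script>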