Why doesn't Spring Boot use application.properties in test resources first? - spring-boot

I have a special properties file in the test resources directory:
└── test
    ├── java
    │   └── com
    │       └── inter3i
    │           ├── dao
    │           │   └── FooMapperTest.java
    └── resources
        └── application.properties
In this application.properties file I specify the MySQL URL.
spring.datasource.url=jdbc:mysql://139.224.xxx.xxx/foo?useSSL=false
Then I execute a test:
mvn test -Dtest=com.foo.reportapi.dao.FooMapperTest
but it fails with:
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:289) ~[spring-jdbc-4.3.10.RELEASE.jar:4.3.10.RELEASE]
But the MySQL URL is actually fine, so why does it produce this error? From Wireshark I know it actually connected to another URL,
spring.datasource.url=jdbc:mysql://192.168.0.25/foo
which is configured in application-default.properties:
src
├── main
│   └── resources
│       ├── application-default.properties
So why is it so counterintuitive? I think test classes should use application.properties in test resources first.
In addition, I had to use Wireshark to find out which URL it was connecting to. How can I get Spring Boot to print the MySQL URL explicitly?

As jonrsharpe already mentioned, a specific profile takes precedence over the application.properties file. Here you can find the documentation of the PropertySource order:
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html
You can fix it in several ways:
1. Rename main/resources/application-default.properties to main/resources/application.properties.
2. Rename test/resources/application.properties to test/resources/application-default.properties.
3. Rename test/resources/application.properties to test/resources/application-integrationtest.properties and enable that profile with the following annotation on your test class: @ActiveProfiles({"integrationtest"})
I would recommend option 3 because it does not depend on the classpath priority of the main and test trees and states clearly which file is used.
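A minimal sketch of option 3 (JUnit 4 and SpringRunner are assumed here to match the Spring 4.3 / Spring Boot 1.5 versions in your stack trace; class and method names are only illustrative):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
// activates the 'integrationtest' profile, so src/test/resources/application-integrationtest.properties is loaded
@ActiveProfiles({"integrationtest"})
public class FooMapperIntegrationTest {

    @Test
    public void usesTheIntegrationTestDatasource() {
        // test code that hits the database configured for the integrationtest profile
    }
}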
Now to the logging part of your question.
If you increase the Spring log level to "debug", you can see which config files are loaded. You can also log a specific property in your own code:
@Component
@Slf4j
public class LogSpringDatasourceUrlProperty {

    @Autowired
    public LogSpringDatasourceUrlProperty(@Value("${spring.datasource.url}") String jdbcUrl){
        log.info( "application uses '{}' as jdbcUrl", jdbcUrl );
    }
}
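As for raising the log level itself, a sketch of what you could put in src/test/resources/application.properties (the exact logger name is an assumption; adjust the granularity to taste):

# enables DEBUG for a selection of Spring Boot core loggers (same as passing --debug)
debug=true
# or target the config-file processing machinery directly
logging.level.org.springframework.boot.context.config=DEBUG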

Related

How to tell Gradle and IntelliJ that the project's folder structure is different?

I'm using Gradle with the wrapper, and the folder structure by default is like so:
.
├── settings.gradle
├── build.gradle
├── gradle.properties
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
└── gradlew.bat
However, I would like to change it to this:
.
├── gradle
│   ├── build.gradle
│   ├── settings.gradle
│   ├── gradle.properties
│   └── wrapper
│       ├── gradlew
│       ├── gradlew.bat
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
└── src
    ├── main
    └── test
Other than the fact that I don't know how to tell IntelliJ about the folder structure, I don't know how to change it for Gradle, since the Environment Options related to changing the folder structure are deprecated:
-b, --build-file (deprecated)
Specifies the build file. For example: gradle --build-file=foo.gradle. The default is build.gradle, then build.gradle.kts.
-c, --settings-file (deprecated)
Specifies the settings file. For example: gradle --settings-file=somewhere/else/settings.gradle
You can't tell Gradle and IntelliJ IDEA that you use a non-standard Gradle build layout. And in all honesty, you shouldn't even consider it unless you have strong reasons to do so. There are mainly two reasons for that:
Developers familiar with one Gradle project feel immediately at home when starting with your Gradle project.
A non-standard build file and directory layout requires additional logic in IDEs (which is not present) and requires extra parameters when building on the command line.
To put things into context, please have a look at Gradle issue #16402.
Deprecate command-line options that describe the build layout
The -b and -c command-line options are effectively used to describe a non-standard build layout to Gradle. This is problematic because it means that a specific combination of options must be used whenever Gradle is used on that build, for example whenever invoked from the IDE, CI, command-line or some other tool. These command-line options also have some potentially surprising behaviours, such as running a settings script present in the target directory.
We don't think there are any use cases that are strong enough to justify keeping these options, and we should remove them (via deprecation). If we discover there are some use cases, we might consider replacing the options with more self-describing contracts, for example conventions for build script names.

Dockerize multiple Maven projects (not multi-module)

In my Maven application I have multiple projects:
Core
Application 1
Application 2
Application 1 and Application 2 are two projects that use the core (for example, they are built for two different customers).
In order to dockerize all of them, the simplest way would be to create a multi-module project, but the downside is that I would have everything inside a single project (core + Application 1 + Application 2).
I would like to keep the core separated from them.
The main problem with this configuration is that the core project needs to be built before the other two, and App 1 and App 2 use it as a Maven dependency:
App 1
<dependency>
    <groupId>it.myorg</groupId>
    <artifactId>core-project</artifactId>
    <version>1.12.0-SNAPSHOT</version>
</dependency>
If I try to dockerize App 1, it fails when I package it, because inside the Docker container core-project 1.12.0-SNAPSHOT does not exist.
I was thinking of setting up a "local Maven repo", pushing the core there so that App 1 would "pull" the jar from the repo and not from the .m2 folder, but I don't like this solution.
I can provide more information; sorry that I don't provide examples, but I'm starting from a blank page right now :(
Folder structure
+- Core
   --- pom.xml
   --- src
+- Application1
   --- pom.xml
   --- src
The solution I'm trying now is to create a Dockerfile for the core project (FROM maven:latest), build the image with a tag, and use that image in the Dockerfile of App 1 (so, a multi-stage build, but done in two separate steps).
The best would be
FROM maven:latest as core-builder
## build the core
FROM maven:latest
## Copy jar from builder
Because the projects are in separate folders, I can't build the core this way. I need to build the core BEFORE (running docker build -t) and copy from it later.
UPDATE
After the correct answer from @mihai, I'm asking whether a structure like this is possible:
-- myapp-docker
- Dockerfile
- docker-compose.yml
-- core-app
-- application_1
Having the Dockerfile at the same level as core-app and application_1 is totally fine and 100% working. The only "problem" is that I would have to put all the projects in the same repo.
This is the proposed solution with multi-stage builds.
To replicate your setup I created this structure:
.
├── Dockerfile-app1
├── application1
│   ├── pom.xml
│   └── src
│       └── main
│           ├── resources
│           └── webapp
│               ├── WEB-INF
│               │   └── web.xml
│               └── index.jsp
├── core
│   ├── pom.xml
│   └── src
│       ├── main
│       │   └── java
│       │       └── com
│       │           └── test
│       │               └── App.java
│       └── test
│           └── java
│               └── com
│                   └── test
│                       └── AppTest.java
In the pom.xml file of Application 1, I added the dependency on core:
<dependency>
    <groupId>com.test</groupId>
    <artifactId>core</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
I named the Dockerfile Dockerfile-app1; this way you can have more than one of them.
This is the Dockerfile-app1:
FROM maven:3.6.0-jdk-8 as build
WORKDIR /apps
COPY ./core .
RUN mvn clean install
FROM maven:3.6.0-jdk-8
# If you comment this out then the build fails because it cannot find the dependency to 'core'
COPY --from=build /root/.m2 /root/.m2
COPY ./application1 ./
RUN mvn clean install
You should probably add an entrypoint at the end to run your project, or even better add a third stage that only copies the generated artifacts and runs your project (this way the final image will not have your sources in it).
The first stage only builds the core submodule.
The second stage uses the results of the first stage, copies only the source for application1 and builds it.
You can easily replicate this for application2 by creating a similar file Dockerfile-app2.
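As a rough sketch of that extra third stage: here the second stage gets a name so the final stage can copy from it, and the base image, paths and jar name are assumptions to adjust to your actual build output (a war packaging would need a servlet container image instead):

FROM maven:3.6.0-jdk-8 as app1-build
COPY --from=build /root/.m2 /root/.m2
WORKDIR /apps/application1
COPY ./application1 .
RUN mvn clean install

# final stage: only the packaged artifact, no sources and no Maven
FROM openjdk:8-jre
WORKDIR /app
# the jar name below is an assumption - adjust it to the artifact produced above
COPY --from=app1-build /apps/application1/target/application1-1.0-SNAPSHOT.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]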
Since you're using Maven, try dockerfile-maven to build the image. You don't want any of your build information inside your image (like what the dependencies are); you should just add the jar at the end. I usually use it together with spring-boot-maven-plugin and repackage to get a fully self-contained jar.
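A hedged sketch of that pom.xml setup (the plugin coordinates are the usual ones for spring-boot-maven-plugin and Spotify's dockerfile-maven-plugin, but double-check the version; the image name is a placeholder):

<build>
  <plugins>
    <!-- repackage turns the plain jar into a self-contained executable jar -->
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <!-- builds the Docker image from a Dockerfile next to the pom during the Maven build -->
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.4.13</version>
      <configuration>
        <repository>myorg/application1</repository>
        <tag>${project.version}</tag>
      </configuration>
    </plugin>
  </plugins>
</build>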

spring boot does not read the correct property file

[root@xx ~]# tree /data/portal/
/data/portal/
├── portal.jar
└── config
   └── application.properties // I wish it read this
java -jar portal.jar does not read the properties in the config folder; it stubbornly reads the same file inside the jar package. However, according to 24.3 Application Property Files, the jar should read the configuration in the config folder. It is a Maven project, too. For now I have to configure the file location manually with --spring.config.location.
EDIT:
I have just found that using --spring.config.location=file:/absolutepath/config/application.yml does not actually load any included property files. For example, this:
spring.profiles.include: 'routes'
does not load application-routes.yml, even though the log says The following profiles are active: routes.
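For what it's worth, the reference documentation notes that when spring.config.location points at specific files, profile-specific variants of those files are not considered; pointing it at the directory instead (note the trailing slash) keeps the application-{profile} lookup working, e.g. something like:

java -jar portal.jar --spring.config.location=file:/data/portal/config/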

Execute custom script before integration tests

My application uses Spring Boot, with the two usual source trees, main and test. In another directory on the same level I have a folder which contains a script with database triggers that I need to create in the database before the tests run.
Here is my project structure:
src/
├── main
├── scripts (this is not a module, only a plain folder)
│   └── custom_script.sql
└── test
    └── persistent
        └── TestConfiguration.java
TestConfiguration is just an interface where I set some configuration for tests; it currently contains the following code:
@Sql("../../scripts/custom_script.sql")
public interface TestConfiguration {
}
This code doesn't work, and custom_script.sql isn't executed. Can you tell me why, or what a better way to execute it would be?
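For what it's worth, a relative @Sql path is resolved as a classpath resource relative to the test class's package, and src/scripts is not on the test classpath at all, which would explain why nothing runs. A minimal sketch that works once the script is moved to src/test/resources/custom_script.sql (JUnit 4 assumed; class and method names are made up):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.jdbc.Sql;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
// a leading slash resolves the script from the classpath root,
// i.e. src/test/resources/custom_script.sql after moving it there
@Sql("/custom_script.sql")
public class TriggerScriptIT {

    @Test
    public void triggersAreCreatedBeforeTheTest() {
        // assertions against behaviour that depends on the triggers
    }
}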

Hyperledger Fabric unit test cross-chaincode invocation without collapsing vendor folder

I have been running into compilation issues when trying to unit test in Go locally, while instantiating and invoking another chaincode through the MockStub object. Below is my file hierarchy:
├── transaction-chaincode
│   ├── transaction.go
│   ├── transaction_test.go
│   └── vendor
└── user-chaincode
    ├── user.go
    ├── user_test.go
    └── vendor
The scenario basically involves one of the chaincodes, for example user.go, calling the other chaincode, transaction.go. The vendor folders in both directories contain exactly the same content.
The problem occurs when I try to instantiate a new instance of the transaction chaincode through shim.NewMockStub in user_test.go, as the transaction mock object looks for the Init method from within transaction-chaincode/vendor/ instead of user-chaincode/vendor/, despite the vendor folders having the same packages (and thus the same method).
I was able to get rid of this error by having a single vendor folder at the parent directory of transaction-chaincode & user-chaincode, but I cannot do so for developmental purposes. How would you suggest I solve this unit testing problem while keeping the vendor folders in their respective locations?
If I understood correctly, you are putting shim and the other dependencies in each vendor folder. user_test.go then does something like NewMockStub(..., &transaction_chaincode.transaction{}). You want transaction_chaincode.transaction to bind to user-chaincode/vendor?
I don't think that'll happen. The shim import in transaction_chaincode.transaction will bind to its transaction_chaincode/vendor.
If the above understanding is correct, why do you think it's a "problem"?
