Jenkins is updating my applicationContext.xml file with its system properties? - spring

This is pretty crazy...
I have an applicationContext.xml that configures a bean using system properties.
In my case I am configuring a bean to inject values from a properties file, or, if the file is not there, to fall back to the system properties (I expect this only to happen at run time!).
I have:
<bean id="myBean" class="foo.foo.fooBarImpl">
<property name="keyStoreFile" value="${javax.net.ssl.KeyStore}"/>
...
...
</bean>
So when my Java application that uses this applicationContext.xml (it resides inside the jar on the classpath) starts, it will pull ${javax.net.ssl.KeyStore} from a properties file or, if the properties file is not there, attempt to get it from the system properties.
What is happening that I cannot explain is this: when Jenkins pulls from the repository and builds,
it is modifying my applicationContext.xml! It is actually writing in what is present in the system properties and saving the file before building the .jar! My jar now has hardcoded SSL information in it (like the password...):
<bean id="myBean" class="foo.foo.fooBarImpl">
<property name="keyStoreFile" value="/mympath/keystore.jks"/>
...
...
</bean>
The modified applicationContext.xml above is now in my .jar!?
Is there a setting in Spring, or maybe in Jenkins, to prevent my applicationContext.xml from being modified and resaved into the .jar?

Are you using Maven for your build?
It could be that Maven resource filtering is taking place.
Please try something like this in your pom.xml:
<project>
...
<build>
...
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>false</filtering>
<includes>
<include>**/applicationContext.xml</include>
</includes>
</resource>
...
</resources>
...
</build>
...
</project>
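If other resources in the project do rely on filtering, a common variant (a sketch; it assumes everything lives under src/main/resources) is to declare two resource sets: one filtered set that excludes the Spring context, and one unfiltered set that includes only it:
<build>
  <resources>
    <!-- everything else keeps being filtered -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
      <excludes>
        <exclude>**/applicationContext.xml</exclude>
      </excludes>
    </resource>
    <!-- the Spring context is copied verbatim, placeholders left for Spring to resolve at run time -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>false</filtering>
      <includes>
        <include>**/applicationContext.xml</include>
      </includes>
    </resource>
  </resources>
</build>
Newer versions of the maven-resources-plugin also support an escapeString parameter, so individual \${...} tokens can be escaped and kept literal if you prefer to leave filtering on for that file.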

Related

How to configure Helidon application to use in-memory database for integration tests?

We are setting up a Helidon MP application that connects to a SQL database and exposes some endpoints for CRUD operations. I am facing issues implementing the integration tests. Our objective is to have the application use the SQL database, but to use an in-memory database when running the tests.
I've used this type of setup in other frameworks and programming languages. The initial solution was to access the dependency injection container and change the configuration of the ORM (Hibernate in this case) to use an in-memory database. Unfortunately I did not manage to do this.
The second approach was to configure another persistence.xml file in the test folder that would override the one from the main folder. By using a different jta-data-source in each of them, I would be able to configure separate connection credentials. I found out that this causes an ambiguous dependency and fails.
content of src/resources/META-INF/persistence.xml
<persistence>
<persistence-unit name="dservice" transaction-type="JTA">
<jta-data-source>dsource</jta-data-source>
<class>.....</class>
.
.
.
<class>.....</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.OracleDialect"/>
</properties>
</persistence-unit>
</persistence>
content of test/resources/META-INF/persistence.xml
<persistence>
<persistence-unit name="dservice" transaction-type="JTA">
<jta-data-source>dsource_test</jta-data-source>
<class>.....</class>
.
.
.
<class>.....</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="jakarta.persistence.sql-load-script-source" value="META-INF/init_script.sql"/>
<property name="jakarta.persistence.schema-generation.database.action" value="drop-and-create"/>
</properties>
</persistence-unit>
</persistence>
content of src/resources/META-INF/microprofile-config.properties
# used for build
oracle.ucp.jdbc.PoolDataSource.dsource.URL=jdbc:oracle:something
oracle.ucp.jdbc.PoolDataSource.dsource.connectionFactoryClassName=oracle.jdbc.pool.OracleDataSource
oracle.ucp.jdbc.PoolDataSource.dsource.user=some_user
oracle.ucp.jdbc.PoolDataSource.dsource.password=some_password
# used for in-memory testing
oracle.ucp.jdbc.PoolDataSource.dsource_test.URL=jdbc:h2:mem:depServerDb;DB_CLOSE_DELAY=-1
oracle.ucp.jdbc.PoolDataSource.dsource_test.connectionFactoryClassName=org.h2.jdbcx.JdbcDataSource
oracle.ucp.jdbc.PoolDataSource.dsource_test.user=db_user
oracle.ucp.jdbc.PoolDataSource.dsource_test.password=user_password
As a third approach, I added another persistence unit with a different name to main/resources/META-INF/persistence.xml, tried to create the entity manager manually using Persistence.createEntityManagerFactory(), and used a provider class to access the entity manager. Unfortunately this attempt also failed.
content of src/resources/META-INF/persistence.xml
<persistence>
<persistence-unit name="dservice" transaction-type="JTA">
<jta-data-source>dsource</jta-data-source>
<class>.....</class>
.
.
.
<class>.....</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.OracleDialect"/>
</properties>
</persistence-unit>
<persistence-unit name="dservice_test" transaction-type="JTA">
<jta-data-source>dsource_test</jta-data-source>
<class>.....</class>
.
.
.
<class>.....</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="jakarta.persistence.sql-load-script-source" value="META-INF/init_script.sql"/>
<property name="jakarta.persistence.schema-generation.database.action" value="drop-and-create"/>
</properties>
</persistence-unit>
</persistence>
I ended up with a solution that I am not satisfied with. I kept the persistence.xml from the third approach and added an entity manager provider class in which I inject two entity managers, one for each persistence unit. In the test class I added @AddConfig(key = "app.testing", value = "true"). This makes my entity manager provider deliver the "dservice" entity manager when the application is running and "dservice_test" when I run the "mvn test" command.
content of provider class
// imports assume Helidon 3+ (jakarta.* namespaces); on Helidon 2.x use the javax.* equivalents
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class PersistenceUnitProvider {

    @PersistenceContext(unitName = "dservice")
    private EntityManager em_application;

    @PersistenceContext(unitName = "dservice_test")
    private EntityManager em_test;

    private String testing = "false";

    @Inject
    public PersistenceUnitProvider(@ConfigProperty(name = "app.testing") String testing) {
        this.testing = testing;
    }

    // returns the test unit when app.testing=true, the regular unit otherwise
    public EntityManager getPersistenceUnit() {
        if ("true".equals(this.testing)) {
            return em_test;
        }
        return em_application;
    }
}
content of the JUnit test class that sets the app.testing property to true and uses the in-memory database
@HelidonTest
@AddConfig(key = "app.testing", value = "true")
class MainTest {
.
..
...
....
}
The issue is that this makes the application establish two connections when it's run or tested. Is there a better way to achieve this?
..............................................................................................................................................
UPDATE:
Following the solution Laird mentioned in the accepted answer, we are using the Maven build process, which has phase-bound executions that copy in whatever config files are needed to change the behavior of the Helidon application.
We created a folder to store all the config files used for development, test and production, with the following structure:
_config/
  development/   (files that use a local SQL database)
  tests/         (files that use an in-memory database)
  production/    (files that use a development SQL database)
In the pom.xml file we switch the config files as needed for the different phases
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-resources-dev</id>
<phase>compile</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/META-INF</outputDirectory>
<resources>
<resource>
<directory>_config/development</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>copy-resources-test</id>
<phase>test-compile</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/META-INF</outputDirectory>
<resources>
<resource>
<directory>_config/tests</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>copy-resources-packaging</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/META-INF</outputDirectory>
<resources>
<resource>
<directory>_config/production</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
This enables us to use a localhost database when we run "helidon dev", an in-memory database when we execute the tests, and whatever database we need when we package the application and run it on a server somewhere.
There are many, many, many things going on here. I'll try to keep it short; a full JPA tutorial is beyond the scope of this question and this website.
The short answer is: (1) in JPA, a persistence.xml is definitionally environment-specific, and (2) persistence.xmls don't "override" each other. When seen this way, the problem reduces to: I want two environments in the same project and can't figure out how to turn them on and off selectively.
There are a variety of (non-Helidon-specific) ways you can do this sort of thing:
Use the maven-resources-plugin to defer copying src/main/resources/META-INF/persistence.xml into target/classes/META-INF/ until after unit tests have run (so exclude persistence.xml from <resources> in your pom.xml and then bind the maven-resources-plugin:copy-resources goal to the prepare-package phase; see the sketch after this list). Now src/test/resources/META-INF/persistence.xml will be in effect at unit test time, and your (untested) src/main/resources/META-INF/persistence.xml will be the one you deploy with.
Do amazing things with MicroProfile Config configuration profiles if the only thing you need to change is data source information, which is already external to a container-mode-JPA persistence.xml file, but from your example it seems that you need to change <property> elements as well.
Recognize that since persistence.xmls are inherently environment-specific, just don't include a src/main/resources/META-INF/persistence.xml at all in your library project, since a library project, by definition, is supposed to be used in a variety of environments. Instead, place a persistence.xml in its own "thin" project and combine that project with your library project to form an application. You can of course test these combinations in all sorts of other ways.
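For the first option, a minimal sketch (the execution id and paths here are illustrative): exclude the main persistence.xml from the normal resources so that src/test/resources/META-INF/persistence.xml wins during the test phase, then copy the main one in only at prepare-package:
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <excludes>
        <!-- keep the deployment persistence.xml out of target/classes while tests run -->
        <exclude>META-INF/persistence.xml</exclude>
      </excludes>
    </resource>
  </resources>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
      <executions>
        <execution>
          <id>copy-persistence-xml</id>
          <phase>prepare-package</phase>
          <goals>
            <goal>copy-resources</goal>
          </goals>
          <configuration>
            <outputDirectory>${project.build.outputDirectory}/META-INF</outputDirectory>
            <resources>
              <resource>
                <directory>src/main/resources/META-INF</directory>
                <includes>
                  <include>persistence.xml</include>
                </includes>
              </resource>
            </resources>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>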
I would strongly recommend against adding test code to production bits, like you are doing with em_application and em_test. This may look simple at the start but can quickly get out of hand before you realize it.
Instead, try to keep a single code path for testing and production. Have a single JPA persistence unit (em_application) in your PersistenceUnitProvider, persistence.xml and microprofile-config.properties. Stick to OracleDialect in the tests as well. Make a second copy of your META-INF/microprofile-config.properties under your JUnit test directory and keep the H2 configuration in that test copy. You would additionally have to shift your init_script.sql into the H2 connection URL to orchestrate the setup, since persistence.xml can no longer carry the H2 configuration.
The @HelidonTest application should pick up the test copy of the MP Config properties file, and the persistence unit will then automatically work against H2.
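A sketch of what that test-only copy might look like (it reuses the file names from the question; the MODE and INIT settings are assumptions, to keep the Oracle dialect usable against H2 and to run the init script, since persistence.xml no longer carries them):
# test/resources/META-INF/microprofile-config.properties
# same pool name as production (dsource), but pointed at H2;
# MODE=Oracle keeps OracleDialect workable, INIT runs the setup script on connect
oracle.ucp.jdbc.PoolDataSource.dsource.URL=jdbc:h2:mem:depServerDb;MODE=Oracle;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'classpath:META-INF/init_script.sql'
oracle.ucp.jdbc.PoolDataSource.dsource.connectionFactoryClassName=org.h2.jdbcx.JdbcDataSource
oracle.ucp.jdbc.PoolDataSource.dsource.user=db_user
oracle.ucp.jdbc.PoolDataSource.dsource.password=user_password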

Java cannot read an external properties file in the same directory as the .jar file

I created a jar application and use the maven-shade-plugin to package it.
There are some properties files in it, and I use PropertyPlaceholderConfigurer to read them:
<bean id="configurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/properties/core.properties</value>
</list>
</property>
</bean>
Because I want to be able to update the properties file without rebuilding the jar, I exclude it when packaging:
<resources>
<resource>
<directory>src/main/resources</directory>
<includes>
<include>**/*</include>
</includes>
<excludes>
<exclude>properties/core.properties</exclude>
</excludes>
</resource>
</resources>
After packaging the jar, I create a properties directory containing core.properties and put it in the same directory as the jar file.
When I run the jar file:
java -jar test.jar
It can read the properties in the core.properties file.
But if I create a Spring Boot application, follow the same steps, and package it with the spring-boot-maven-plugin,
then run the jar with the same command:
java -jar test.jar
an error occurs:
Caused by: java.io.FileNotFoundException: class path resource [application.properties] cannot be opened because it does not exist
at org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:180)
at org.springframework.core.io.support.EncodedResource.getInputStream(EncodedResource.java:159)
at org.springframework.core.io.support.PropertiesLoaderUtils.fillProperties(PropertiesLoaderUtils.java:99)
at org.springframework.core.io.support.PropertiesLoaderUtils.fillProperties(PropertiesLoaderUtils.java:73)
at org.springframework.core.io.support.PropertiesLoaderUtils.loadProperties(PropertiesLoaderUtils.java:59)
at org.springframework.core.io.support.ResourcePropertySource.<init>(ResourcePropertySource.java:67)

Setting Spring applicationContext properties from POM file

I have the following configurations in my applicationContext.xml file:
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:application.properties</value>
<value>classpath:database.properties</value>
</list>
</property>
</bean>
<bean id="javaMailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
<property name="host" value="${smtp.host}" />
</bean>
with smtp.host being set in my POM file like so:
<build>
<defaultGoal>install</defaultGoal>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
...
</build>
<profile>
<id>local</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<application.env>local</application.env>
<profile.scope>compile</profile.scope>
<skip.test>true</skip.test>
<smtp.host>my.smtp.server</smtp.host>
</properties>
</profile>
Upon deploying my application, I ran into an error message saying that Spring was not able to resolve smtp.host. I added the following mapping to my application.properties file:
smtp.host=${smtp.host}
But Spring started to complain that I had a circular placeholder reference on the property. Is there anything I am missing?
Thanks!
You are mixing two things up here. The pom.xml is for building the application. Properties you define there normally have nothing to do with your application properties. And Maven profiles have nothing to do with Spring profiles; they merely share a name.
You should configure your Spring application as described here. You could use your pom as a property source, but I would not suggest it.
The normal way would be to read it from externalized configuration. As I do not know whether you use Spring Boot, you can have a look at the Spring Boot way and adapt it if you use Spring without Boot.
So add an application.properties file to src/main/resources like:
smtp.host=my.smtp.server
If you use Boot, you are done; otherwise you have to add
@PropertySource("classpath:/application.properties")
to your @Configuration class.
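For plain Spring (no Boot), a minimal sketch of that configuration class (the class name is illustrative) could look like this; note that you also need a static PropertySourcesPlaceholderConfigurer bean so that ${...} placeholders in @Value are resolved:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.mail.javamail.JavaMailSenderImpl;

@Configuration
@PropertySource("classpath:/application.properties")
public class MailConfig {

    // Resolves ${...} placeholders against the registered property sources
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public JavaMailSenderImpl javaMailSender(@Value("${smtp.host}") String smtpHost) {
        JavaMailSenderImpl sender = new JavaMailSenderImpl();
        sender.setHost(smtpHost);
        return sender;
    }
}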
Here you are mixing the build and runtime phases of the application, which are mutually exclusive.
Maven's role ends once the build is complete, so any properties used there perish with it. Moreover, application startup is agnostic to the tool/process used to build it, so no information is shared between them. Thus the idea of using properties specified in pom.xml at runtime is not feasible.
Regarding the circular reference: the line smtp.host=${smtp.host} is loosely similar to the Java code int i = i;, which has no effect because i is defined and assigned to itself.

Using Maven settings.xml properties inside Spring context

I've got a Maven settings.xml file in my ~/.m2 directory; it looks like this:
<settings>
<profiles>
<profile>
<id>mike</id>
<properties>
<db.driver>org.postgresql.Driver</db.driver>
<db.type>postgresql</db.type>
<db.host>localhost</db.host>
<db.port>5432</db.port>
<db.url>jdbc:${db.type}://${db.host}:${db.port}/dbname</db.url>
</properties>
</profile>
</profiles>
<activeProfiles>
<activeProfile>mike</activeProfile>
</activeProfiles>
<servers>
<server>
<id>server_id</id>
<username>mike</username>
<password>{some_encrypted_password}</password>
</server>
</servers>
</settings>
I'd like to use these properties twice:
Once inside Maven's integration-test phase to set up and tear down my database. Using Maven filtering, this is working perfectly.
A second time when running my Spring application, which means I need to substitute these properties into my servlet-context.xml file during Maven's process-resources phase (the resources:resources goal). For properties in the upper section of settings.xml, such as ${db.url}, this works fine. I cannot figure out how to substitute my database username and (decrypted) password into the Spring servlet-context.xml file.
The pertinent part of my servlet-context.xml file looks like:
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName"><value>${db.driver}</value></property>
<property name="url"><value>${db.url}</value></property>
<property name="username"><value>${username}</value></property>
<property name="password"><value>${password}</value></property>
</bean>
The end goal here is for each developer to have their own Maven settings (and a database on their own machine for integration testing), and a similar setup on the Jenkins server. We do not want to share a common username/password/etc.
There is a way of filtering web resources through the configuration of the Maven War Plugin. Look at this page for a snippet from the official plugin docs.
And by the way, I strongly recommend reconsidering this filtering-based way of providing what is effectively run-time configuration at build time. Notice that you have to rebuild the same code just to prepare the package for another environment (or, alternatively, edit the package contents). You can use application-server-specific mechanisms for this (at least JBoss has one), or use Spring, which AFAIR can also be configured this way.
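For reference, the relevant bit of that war-plugin configuration (a sketch; it assumes the Spring context sits under src/main/webapp/WEB-INF and that you want filtering applied only to it) looks roughly like:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <webResources>
      <resource>
        <!-- filter only the Spring context; other web resources are copied as-is -->
        <directory>src/main/webapp/WEB-INF</directory>
        <targetPath>WEB-INF</targetPath>
        <filtering>true</filtering>
        <includes>
          <include>**/servlet-context.xml</include>
        </includes>
      </resource>
    </webResources>
  </configuration>
</plugin>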
I recommend using a property file in the middle: the Spring application loads property values from the property file using context:property-placeholder, and Maven replaces the ${...} variables with values from settings.xml via filtering.
Your property file:
db.driver=${db.driver}
db.url=${db.url}
username=${username}
password=${password}
Your servlet-context.xml file
<context:property-placeholder location="classpath:your-property-file.properties" />
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName"><value>${db.driver}</value></property>
<property name="url"><value>${db.url}</value></property>
<property name="username"><value>${username}</value></property>
<property name="password"><value>${password}</value></property>
</bean>
In your pom.xml
<resources>
...
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
...
</resources>
I haven't tried it, but as per this Maven wiki page, you should be able to refer to properties in settings.xml using the settings. prefix. So ${settings.servers.server.username} should ideally return the username from settings.xml.

Maven: include resource file based on profile

I'm converting an Ant webapp project over to Maven. I have most of it working, but I'm stuck trying to figure out how to copy some resource files from different sources based on the profile.
I have src/main/resources/persistence-{dev, prod}.xml. One of these needs to be included in the war file as WEB-INF/classes/META-INF/persistence.xml.
I would like the dev version to be copied when the dev profile is active, and the prod version when prod is active.
Just use the Maven Resources Plugin as shown at http://maven.apache.org/plugins/maven-resources-plugin/examples/include-exclude.html and have a property for the file name or extension set in a profile (see the sketch below).
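One way to wire that up (a sketch, not the linked example verbatim; it assumes you move each variant into its own folder, e.g. src/main/config/dev/persistence.xml and src/main/config/prod/persistence.xml, and the env property name is arbitrary) is to let each profile set a property that selects the resource directory:
<profiles>
  <profile>
    <id>dev</id>
    <properties>
      <env>dev</env>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <env>prod</env>
    </properties>
  </profile>
</profiles>

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <!-- copies src/main/config/${env}/persistence.xml to target/classes/META-INF/,
         which ends up in the war as WEB-INF/classes/META-INF/persistence.xml -->
    <resource>
      <directory>src/main/config/${env}</directory>
      <targetPath>META-INF</targetPath>
    </resource>
  </resources>
</build>
Building with mvn -Pdev or mvn -Pprod then picks the matching file.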
If you are not wedded to the paradigm of having separate persistence.xml files and copying one or the other selectively, you can use Maven profiles with filtering like this (I just implemented this the other day and came across your post today):
In persistence.xml:
<property name="hibernate.show_sql" value="${hibernate.debug}" />
<property name="hibernate.format_sql" value="${hibernate.debug}" />
In pom.xml create a profile and define the variable:
<profiles>
<profile>
<id>hib-debug</id>
<properties>
<hibernate.debug>true</hibernate.debug>
</properties>
</profile>
</profiles>
define a default for when you build without specifying a profile:
<properties>
<hibernate.debug>false</hibernate.debug>
</properties>
And make sure you turn on resource filtering:
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
Then you build with mvn -Phib-debug and voila! Substitution is done.
