Neo4J ogm testing create temporary database - spring

I'm using Spring, and at the moment running my tests creates new objects in my "real" embedded database. I want to use a separate or temporary database just for testing. I'm new to Spring and Neo4j, so could anyone please help?
Thanks a lot.

If you are using the embedded driver with SDN/OGM, you just need to configure it without providing a path. It will then create an embedded database in /tmp/.. which gets deleted on JVM exit.
E.g. if you are using Java configuration:
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    return config;
}
See docs for full documentation
http://docs.spring.io/spring-data/data-neo4j/docs/current/reference/html/#_configuring_the_embedded_driver
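For illustration, here is a hedged sketch of a test that picks up such a path-less embedded configuration. TestPersistenceContext, Person and the package names are hypothetical, assuming SDN 4.x with OGM and JUnit 4:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.neo4j.ogm.session.Session;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestPersistenceContext.class) // hypothetical @Configuration exposing the bean above
public class PersonRepositoryTest {

    @Autowired
    private Session session; // OGM session backed by the temporary embedded store

    @Test
    public void savesAndLoadsNodes() {
        session.save(new Person("Alice")); // Person is a hypothetical @NodeEntity
        assertNotNull(session.loadAll(Person.class));
    }
}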

Related

Spring Boot application properties load process change programmatically to improve security

I have a Spring Boot micro-service with database credentials defined in the application properties.
spring.datasource.url=<<url>>
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
We do not create the data source or connection manually; only Spring creates the database connection, through JPA (org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration).
We only provide the application properties, and Spring creates the connections automatically for the database connection pool.
Our requirement is to enhance security without keeping the DB properties in clear text. There are two possible methods:
Encrypt the database credentials
Use the AWS Secrets Manager (then fetch the credentials when the application loads)
For option 1, Jasypt can be used. Since we only provide the properties and do not want to create the data source manually, the problem is how to make the decrypted values understood by the Spring framework. A working sample or approach would help.
Regarding option 2:
first we need to define a secretName,
then use the secretName to get the database credentials from AWS Secrets Manager,
then update the application properties programmatically so that the Spring framework picks them up (I need to know this step).
I need to use either option 1 or option 2; the issues with each option are mentioned above.
What you could do is use environment variables for your properties. You can use them like this:
spring.datasource.url=${SECRET_URL}
You could then retrieve these and start your Spring process using a ProcessBuilder (or set the variables any other way).
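For illustration only, a minimal sketch of that ProcessBuilder approach; the jar name, the variable name and the way the secret is resolved are placeholders:

import java.io.IOException;

public class Launcher {
    public static void main(String[] args) throws IOException {
        // Resolve the secret however you like (vault, Secrets Manager, ...) -- placeholder lookup here.
        String secretUrl = System.getProperty("secret.url", "jdbc:mysql://localhost:3306/db");

        // Launch the Boot jar with the secret exposed as an environment variable;
        // application.properties then references it via ${SECRET_URL}.
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", "myservice.jar");
        pb.environment().put("SECRET_URL", secretUrl);
        pb.inheritIO();
        pb.start();
    }
}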
I have found the solution to my problem.
We need to define an org.springframework.context.ApplicationListener in the spring.factories file. It should declare the required application listener like below:
org.springframework.context.ApplicationListener=com.sample.PropsLoader
The PropsLoader class looks like this:
public class PropsLoader implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();
        String appEnv = environment.getProperty("application.env");
        // Set new properties based on the application environment,
        // e.g. by calling other methods that resolve the required values.
        Properties props = new Properties();
        props.put("new_property", "value");
        environment.getPropertySources().addFirst(new PropertiesPropertySource("props", props));
    }
}
The spring.factories file should be placed under the resources directory, inside the META-INF folder.
This sets the new properties on the application environment before any other beans are loaded.
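For the AWS Secrets Manager variant (option 2), a hypothetical sketch of what the listener body could do instead of the hard-coded property above, assuming the AWS SDK v1 Secrets Manager client and a secret that stores the raw password; the secret name and property keys are placeholders, and real secrets are often JSON that you would parse first:

// Needs aws-java-sdk-secretsmanager on the classpath:
// com.amazonaws.services.secretsmanager.AWSSecretsManager,
// AWSSecretsManagerClientBuilder and model.GetSecretValueRequest.
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard().build();
String dbPassword = client
        .getSecretValue(new GetSecretValueRequest().withSecretId("my-db-password")) // placeholder secret name
        .getSecretString();

Properties props = new Properties();
props.put("spring.datasource.password", dbPassword);
environment.getPropertySources().addFirst(new PropertiesPropertySource("dbSecrets", props));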

Environment Configuration Spring Boot

Created a Spring Boot application that will need to migrate from "Local Dev" to "Test", "QA" and "Prod" environments.
Application currently uses a "application.properties" for database connectivity and Kafka configuration.
I want to deploy to "Test" and realized that the properties will not work for that environment. After reading the reference docs, it looks like I can simply copy the application.properties file, add a new application-test.properties, and so on, and then run the standalone jar with -Dspring.profiles.active=test, and that seems to work.
But by the time I am done, that means I have 4 different application-XXXXX.properties files in the jar, which may or may not be bad. I know the ultimate configuration would be to use Spring Config Server, but right now we are not there yet.
Can anyone validate that using multiple properties files is viable and will work for a while, or whether I am looking at this all wrong? I do not want to have configuration on the servers in each environment, as I am thinking these mini-services should be self-contained.
Any input would be appreciated.
In a word: your configuration file should be outside your source code.
@PropertySource(value = {"classpath:system.properties"})
public class EnvironmentConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer properties() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
Let's say it's named "system.properties"; it will be uploaded to the server at deployment time and placed on your application classpath.

How to Access Spring Configuration Outside a JAR File

I am new to Spring Boot development and am currently stuck on how to load my Spring application configuration from outside the jar file.
My existing code looks like this:
private ApplicationContext context;

public static void main(String[] args) {
    SpringApplication.run(SMPPEngine.class);
    new SMPPEngine();
}

public SMPPEngine() {
    loadConfiguration();
    process();
}

private void loadConfiguration() {
    context = new ClassPathXmlApplicationContext("application-context.xml");
}
What I want to achieve is to have the jar file next to application-context.xml in one directory, so that when there are configuration changes I don't need to recompile my code just to pick up the changes to application-context.xml.
Based on what I've read on the internet, this is possible by using 'file://directory/application.xml' instead of the classpath. But my problem with the latter is that when I place the jar and the file in another location, I am required to change the code to reflect the new directory, which does not solve the problem of avoiding recompilation.
I hope I made my issue clear, and I hope to get a quick response from you guys :)
Thanks in advance :)
There are many approaches to do this. In plain Spring, you can use the file: prefix for accessing filesystem paths.
With Spring Boot, you can specify it in application.properties with the
spring.config.location property, or you can pass it on the command line when running the Spring Boot jar file, like:
java -jar myproject.jar --spring.config.location=classpath:/default.properties,classpath:/override.properties
But as for your code: you do not need to re-create the Spring context from the configuration files. If you just want the context instance, you only need to inject it:
@Autowired
private ApplicationContext context;
Another approach, if you have the infrastructure, would be to use Spring Cloud Config. After your Boot application is configured to read from it, properties can be modified at any time without recompilation or restarting.

Completely auto DB upgradable Spring boot application

I am trying to use Flyway for DB migrations, together with Spring Boot's Flyway support for auto-upgrading the DB on application start-up; this database will subsequently be used by my JPA layer.
However, this requires that the schema be present in the DB so that the primary datasource initialization succeeds. What are the options for running a SQL script that creates the required schema before the Flyway migrations happen?
Note that if I use the Flyway Gradle plugin (and give the URL as jdbc:mysql://localhost/mysql), it does create the schema for me. I am wondering if I could make this happen from Java code on application startup.
Flyway does not support a full installation when the schema is empty, just migration-by-migration execution.
You could add schema/user creation scripts as the first migration, but then your migration scripts need to be executed as a sysdba/root/admin user, and you need to set the current schema at the beginning of each migration.
If using Flyway, the least problematic way is to install the schema manually the first time and run a Flyway baseline task (also manually). Then you are ready for the next migrations to be done automatically.
Although Flyway is a great tool for database migrations, it does not cover this particular use case well (installing the schema for the first time).
"Am wondering if I could make this happen from Java code on application startup."
The simple answer is yes, as Flyway supports programmatic configuration from within Java applications. The starting point in the Flyway documentation can be found here:
https://flywaydb.org/documentation/api/
Flyway works with a standard JDBC DataSource, so you can code the database creation process in Java and then have Flyway handle the schema management. In many environments you are likely to require two steps anyway, as the database/schema creation will need admin rights to the database, while the ongoing schema management will need an account with reduced access rights.
What you need is to implement the FlywayCallback interface.
In order to kick-start the migration manually from your code, you can use the migrate() method on the Flyway class.
Tracking the migration progress can be done through the MigrationInfoService exposed by the Flyway class.
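For illustration, a minimal hedged sketch of that programmatic call, assuming the pre-5.x Flyway API used elsewhere in this thread; the URL and credentials are placeholders:

// Run the migrations programmatically at startup (pre-5.x Flyway API).
Flyway flyway = new Flyway();
flyway.setDataSource("jdbc:mysql://localhost:3306/myschema", "user", "password"); // placeholders
flyway.migrate();                                // apply any pending migrations

MigrationInfoService info = flyway.info();       // inspect applied and pending migrations
System.out.println(info.current().getVersion()); // e.g. log the current schema version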
Unfortunately, if your app has a single datasource that expects the schema to exist, Flyway will not be able to use that datasource to create the schema. You must create another datasource that is not bound to the schema and use that unbound datasource by way of a FlywayMigrationStrategy.
In your properties file:
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/myschema
  bootstrapDatasource:
    url: jdbc:mysql://localhost:3306
In your config file:
@Bean
@Primary
@ConfigurationProperties("spring.datasource")
public DataSourceProperties primaryDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("spring.datasource")
public DataSource primaryDataSource() {
    return primaryDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("spring.bootstrapDatasource")
public DataSource bootstrapDataSource() {
    return DataSourceBuilder.create().build();
}
And in your FlywayMigrationStrategy file:
@Inject
@Qualifier("bootstrapDataSource")
public void setBootstrapDataSource(DataSource bootstrapDataSource) {
    this.bootstrapDataSource = bootstrapDataSource;
}

@Override
public void migrate(Flyway flyway) {
    flyway.setDataSource(bootstrapDataSource);
    ...
    flyway.migrate();
}
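For completeness, the strategy also has to be exposed as a bean so Boot applies it instead of the default migration. A hedged sketch of one way to wire it: a lambda works because FlywayMigrationStrategy has a single migrate(Flyway) method, and the setDataSource call assumes a pre-6 Flyway version, matching the snippet above.

@Bean
public FlywayMigrationStrategy flywayMigrationStrategy(
        @Qualifier("bootstrapDataSource") DataSource bootstrapDataSource) {
    return flyway -> {
        flyway.setDataSource(bootstrapDataSource); // migrate with the schema-independent datasource
        flyway.migrate();
    };
}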

Generate schema in dropwizard-hibernate

I followed the tutorial for Dropwizard and Hibernate without problems. Now I have non-trivial annotations in my entities, and I would like Hibernate to generate the tables for me, and things like that. So, how can I change Hibernate's configuration? Can I give it a hibernate.cfg.xml? If I can, do I have to set up the connection again?
I found this PR,
but it doesn't seem to be in a public release yet (there is no hibernateBundle.configure in my jars).
But maybe I'm looking for the wrong thing. So far, I'm just trying to set hibernate.hbm2ddl.auto. After all, there might be another way to enable Hibernate table generation in Dropwizard... So, any help?
Thank you.
Edit: I approached the problem from another angle, to explicitly create the schema instead of using hbm2ddl.auto. See proposed answer.
Edit: Problem solved! Doing this in the YAML config currently works (Dropwizard 0.7.1):
database:
  properties:
    hibernate.dialect: org.hibernate.dialect.MySQLDialect
    hibernate.hbm2ddl.auto: create
(from this answer)
Old answer:
This is what I am currently using: a class that calls Hibernate's SchemaExport to export the schema to a SQL file or to modify the database. I just run it after changing my entities and before running the application.
import java.util.Properties;

import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaExport;

public class HibernateSchemaGenerator {

    public static void main(String[] args) {
        Configuration config = new Configuration();

        Properties properties = new Properties();
        properties.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        properties.put("hibernate.connection.url", "jdbc:mysql://localhost:3306/db");
        properties.put("hibernate.connection.username", "user");
        properties.put("hibernate.connection.password", "password");
        properties.put("hibernate.connection.driver_class", "com.mysql.jdbc.Driver");
        properties.put("hibernate.show_sql", "true");
        config.setProperties(properties);

        config.addAnnotatedClass(MyClass.class);

        SchemaExport schemaExport = new SchemaExport(config);
        schemaExport.setOutputFile("schema.sql");
        schemaExport.create(true, true);
    }
}
I didn't know about Hibernate Tools before. This code example can be used in the service initialization to act like hbm2ddl.auto = create.
I'm currently using it just by running the class (from Eclipse or Maven) to generate and review the output SQL.
