Flyway and jOOQ for AWS Aurora DB/embedded MySQL - Maven

The question I'm going to ask might be a bit more of a 'proof of concept' area, so I fully expect to find out that it's not possible to achieve what I want.
Background:
I have deployed an AWS Aurora database cluster using the Serverless Framework. The cluster connection details are stored as follows:
Endpoint URL - Stored as a String in the AWS Parameter Store and exposed among the CloudFormation outputs.
Master Password - Randomly generated during the Serverless deployment and stored as a SecureString in the AWS Parameter Store; decryptable by designated Lambdas only (in other words, no one knows the password).
Master Username - hardcoded, not going to change
The Serverless build invokes Flyway migration scripts to create the Aurora database schema. This is done using an AWS Lambda-backed custom resource, so it has access to SSM (Parameter Store) to initialise the DB connection.
What I'm trying to achieve:
A Maven build that invokes jOOQ code generation against the Aurora DB (which is MySQL-based) described above.
What I've tried and problems I've faced:
Creating a read-only user for the Aurora DB during the Serverless deployment. The login details could just as well be hardcoded. However, at this stage the Maven build is unaware of the cluster endpoint URL (a sketch of bridging this follows the list below).
Running an embedded MySQL database during the Maven build and replicating the Flyway migrations against it. However, all my attempts to integrate Flyway and jOOQ with the embedded MySQL DB have failed due to connection issues. Additionally, this approach will cause further complications once I integrate my build process into a CI pipeline.
I also considered having an AWS Lambda invoked during the Maven build. It would have to return a packaged JAR containing the jOOQ-generated code, which would then be added to the sources. I haven't tried it, as it seems too much overhead for what is required.
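To make the first approach concrete, what the Maven build would need is roughly the following: resolve the connection details from the Parameter Store at build time and feed them into jOOQ's programmatic code generation. This is only a sketch under assumptions: the parameter names, schema, and package are placeholders, it presumes jOOQ 3.11+ and AWS SDK v1 on the build classpath, and the build machine would still need both network access to the cluster and the right to decrypt the SecureString (which currently only the Lambdas have).

import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.GetParameterRequest;
import org.jooq.codegen.GenerationTool;
import org.jooq.meta.jaxb.Configuration;
import org.jooq.meta.jaxb.Database;
import org.jooq.meta.jaxb.Generator;
import org.jooq.meta.jaxb.Jdbc;
import org.jooq.meta.jaxb.Target;

public class AuroraJooqCodegen {
    public static void main(String[] args) throws Exception {
        AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();

        // Placeholder parameter names; the password is a SecureString, hence withWithDecryption(true)
        String endpoint = ssm.getParameter(new GetParameterRequest()
                .withName("/myapp/aurora/endpoint")).getParameter().getValue();
        String password = ssm.getParameter(new GetParameterRequest()
                .withName("/myapp/aurora/master-password")
                .withWithDecryption(true)).getParameter().getValue();

        // Programmatic equivalent of a jooq-codegen-maven plugin configuration
        GenerationTool.generate(new Configuration()
                .withJdbc(new Jdbc()
                        .withDriver("com.mysql.cj.jdbc.Driver")
                        .withUrl("jdbc:mysql://" + endpoint + ":3306/mydb")
                        .withUser("master_user")
                        .withPassword(password))
                .withGenerator(new Generator()
                        .withDatabase(new Database().withInputSchema("mydb"))
                        .withTarget(new Target()
                                .withPackageName("com.example.db.generated")
                                .withDirectory("target/generated-sources/jooq"))));
    }
}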
Another concern with this design: currently the Aurora cluster runs in a security group that allows ingress from anywhere (for debugging purposes). This will change to allow ingress only from the group itself, restricting access to Lambdas sharing the same security group.
Has anyone tried to achieve anything similar? Do you have any suggestions for my approach?

Related

Can you use Testcontainers to manage service dependencies, like a database, during local development?

Testcontainers can manage dockerized service dependencies, like a database, Kafka, Elasticsearch, and so on for integration testing.
Can I configure my Spring Boot application to manage these service dependencies during local development?
For example, my Spring Boot application needs a MySQL database.
I would like to integrate it with Testcontainers to provide a Docker container with MySQL not only during test execution, but also at application startup during local development.
Testcontainers provides an API to manage applications and services in Docker containers. It's incredibly useful for integration testing, where having a programmatically configured, isolated, repeatable environment is an essential requirement for trustworthy tests.
Because of that, Testcontainers has integrations with frameworks like Spring and Quarkus, and test frameworks like JUnit, Spock, etc., to automatically tie the lifecycle of your containerized dependencies to the lifecycle of your tests.
However, the Testcontainers API is generic and doesn't have to run during tests. For example, Quarkus has a feature called Dev Services which automatically creates a container for your database (or other service dependencies, such as Kafka or Redis) when your application tries to access the database but the configuration is not present.
You can think about it like this: if you have the data access repository classes initialized and wired but no datasource.url in the config, it will spin up the database using Testcontainers and configure the app to use it (just as would happen during tests, but used for local development instead).
Spring Boot doesn't currently have an automated feature like that; there's an open issue to investigate these local development setups with Testcontainers.
If you're open to manually adding a feature for your particular application, you can look at the prototype linked from that issue here: https://github.com/joshlong/testcontainers-auto-services-prototype
It's a bit more involved because it integrates with the Spring DevTools, but here are the essential parts that need to be taken care of (a minimal sketch follows the list):
Check that you need to use the database (in your application it can be a given).
Verify that the configuration to use the database is absent (if the database is already configured, you don't need to spin up a new one).
Create a container using Testcontainers API, either using an appropriate module or the GenericContainer with any Docker image.
Provide the configuration back to the application. For the database that would be the jdbcUrl, username, password, database name, r2dbcUrl and any other relevant properties.
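For illustration, here is a minimal hand-rolled version of those steps, using the Testcontainers MySQL module and plain system properties rather than the DevTools integration from the prototype. MyApplication and the dev-only entry point are assumptions:

import org.springframework.boot.SpringApplication;
import org.testcontainers.containers.MySQLContainer;

public class LocalDevLauncher {
    public static void main(String[] args) {
        // No datasource configured for local dev, so start a throwaway MySQL container
        MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");
        mysql.start();

        // Feed the container's coordinates back to Spring Boot before startup
        System.setProperty("spring.datasource.url", mysql.getJdbcUrl());
        System.setProperty("spring.datasource.username", mysql.getUsername());
        System.setProperty("spring.datasource.password", mysql.getPassword());

        SpringApplication.run(MyApplication.class, args);
    }
}

The container lives as long as the JVM and is cleaned up afterwards by Testcontainers' resource reaper.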
You can take a look at the video with Josh Long where this concept was tried: https://www.youtube.com/watch?v=1PUshxvTbAc&t=2450s
It would also work in production environments, but the usefulness of ephemeral databases there might be limited.

Managing database content on jhipster server

How would one go about managing the data in their PostgreSQL database on their JHipster generated server? My goal is to be able to periodically check the items in the database and perform certain tasks based on the database contents.
I'm new to using JHipster and I'm not sure how I'd go about adding or removing entities as well as adding items to entities on the server. I understand that services facilitate doing these operations for the client-side, but I can't see how I would use the same approach to do what I need on the server (if this is even the correct approach).
To schedule a task on the backend, you can annotate a public method of a service with @Scheduled. You can find an example in the code generated by JHipster: look at the removeNotActivatedUsers() method in the UserService class.
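For illustration, a periodic check of database contents could look like the sketch below. The service and repository names are made up, and this assumes scheduling is enabled via @EnableScheduling (JHipster-generated applications enable it out of the box):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class ContentAuditService {

    private final ItemRepository itemRepository; // hypothetical repository for a generated entity

    public ContentAuditService(ItemRepository itemRepository) {
        this.itemRepository = itemRepository;
    }

    // Runs at the top of every hour; fixedRate/fixedDelay are alternatives to cron
    @Scheduled(cron = "0 0 * * * ?")
    public void checkDatabaseContents() {
        itemRepository.findAll().forEach(item -> {
            // inspect each item and perform the required task
        });
    }
}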

How to do database schema testing across distinct applications that share the same schema

I have a situation where I have two SpringBoot microservices which share the same database schema. The schema is maintained by a liquibase changelog file. One service reads from the database, and the other service is responsible for writing to the database.
Right now the Writing Service owns the Liquibase changelog file, which means the Writing Service owns the schema. The way I validate the Reading Service is to deploy the Writing Service into a test environment first, followed by the Reading Service, and then execute end-to-end tests against the Reading Service.
Is there a way for both services (two separate apps, two separate repos) to share the Liquibase changelog file? I feel this is similar to a contract test, as the changelog file would be the contract for both services, but I wasn't sure if there is something provided by Liquibase, Spring, Pact, etc. that supports this idea.
Thanks for your time!
I think it won't count as a legit answer, but two solutions come to mind:
Since your second service reads from the database, I suppose you have a full set of entities there, and entities are supposed to match your database schema. And since you're using Spring, you can add spring.jpa.hibernate.ddl-auto=validate to the application.properties file. This will validate your database schema against the entities at startup.
You can create a separate library containing all the changelogs. You can then include this library in both services, so Liquibase will verify that all changesets have been executed during each application's deployment. But you should make sure that all your changesets have preConditions, so your deployment won't fail and there won't be any duplicates in the DB schema; a sketch follows below.
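For the second option, a guarded changeset looks roughly like this (the table and author are placeholders); onFail="MARK_RAN" records the changeset as already applied instead of failing the deployment when the table exists:

<changeSet id="001-create-customer" author="shared-changelog">
    <preConditions onFail="MARK_RAN">
        <not>
            <tableExists tableName="customer"/>
        </not>
    </preConditions>
    <createTable tableName="customer">
        <column name="id" type="bigint" autoIncrement="true">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="name" type="varchar(255)"/>
    </createTable>
</changeSet>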

Secure some keys in application.properties - spring boot application

I have a Spring Boot application where I am using some AWS services.
The code is openly available in Git.
I don't want to commit the AWS secret and access keys that are part of application.properties. I can't add the file to .gitignore, as I want to commit the other values in application.properties.
Many people commit to this repo. We add these AWS keys locally and make sure they are not included in any commit.
I want to make sure the AWS keys in application.properties never reach Git at any cost. What is the best way to manage these secret keys?
You shouldn't be placing AWS API keys in application.properties at all. If the application is running on AWS, it should use the IAM role of the server it is running on. If it is not running on AWS, it should probably use environment variables.
Please review the documentation on this subject here.
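If you go the environment-variable route, the SDK's default credentials provider chain picks up the standard variables automatically, with nothing stored in application.properties, for example:

export AWS_ACCESS_KEY_ID=XXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXX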
Thanks to @Mark B. I would prefer using Java system properties, as we can maintain them at the application level. Environment variables live at the system level, which is not really needed here and may lead to conflicts.
While running a Spring Boot jar with mvn, it can be done as below:
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Daws.accessKeyId=XXXXXXXXXXX -Daws.secretKey=XXXXXXXXXXX"
If running from an IDE like Eclipse or IntelliJ, they should be added as VM options:
-Daws.accessKeyId=XXXXXXXXXXX -Daws.secretKey=XXXXXXXXXXXX
After this, AWS client objects can be built as usual.
As an example,
An SNS client can be built with:
AmazonSNS snsClient = AmazonSNSClient.builder().withRegion(Regions.US_EAST_1).build();
An SES client can be built with:
AmazonSimpleEmailService emailClient = AmazonSimpleEmailServiceClientBuilder.standard().withRegion(Regions.US_EAST_1).build();

application.properties configuration for distributed database pattern

I am trying to develop a microservice using Spring and Spring Boot with a PostgreSQL database. I am using a distributed database here: for one particular region I use one DB, and for another region a different DB. Currently I have only tried with one database. I added the datasource URL, username, and password in application.properties.
My doubt is: if I am using multiple distributed databases, how can I mention the different DB source URLs in the configuration (application.properties)? I am currently using the following structure for one database,
spring.datasource.url=jdbc:postgresql://localhost/milleTech_users
spring.datasource.username=postgres
spring.datasource.password=postgresql
spring.jpa.generate-ddl=true
Like above.
So if I am using multiple DBs for multiple regions, how can I provide the configuration conditionally here? I am new to the microservice world and the distributed database design pattern.
Details for multiple databases cannot be managed within a single application.properties file.
Consider using Spring Cloud Config, wherein you can create multiple application.properties files with different profile names for every application.
In your case, the profile names could reflect the region. When you deploy to a particular region, launch the app with that profile name so that the required config is loaded and the appropriate database connection is used.
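For example, with hypothetical region profiles (only the URL shown; username and password would follow the same pattern):

# application-eu.properties
spring.datasource.url=jdbc:postgresql://eu-db-host/milleTech_users

# application-us.properties
spring.datasource.url=jdbc:postgresql://us-db-host/milleTech_users

Then launch with the matching profile:

java -jar your-service.jar --spring.profiles.active=eu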
Edit:
Also, in your case, if you can set environment variables, you can explore the option mentioned in this thread.
