Configuring reactive messaging in Quarkus for the AWS Glue Schema Registry

I'm having an issue integrating the AWS Glue Schema Registry with Quarkus reactive messaging. I have a property defined as:
mp.messaging.outgoing.eligibility.schemaName=<some schema name>
Notice the camel case in schemaName. The Glue Schema Registry looks for a value under the key schemaName, but from the log output Quarkus appears to emit that property in all lower case as schemaname, so the default approach for adding additional Kafka properties doesn't work.
Is there a way to maintain the camel casing in the properties file, or is there another approach to adding Kafka properties to the application?
Thanks

You need to put the property segment into quotes to preserve the original case:
mp.messaging.outgoing.eligibility."schemaName"=<some schema name>
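For context, a minimal sketch of how the quoted key might sit alongside the rest of the channel configuration. The topic name and serializer class are illustrative assumptions, not taken from the question; only the quoting of schemaName is the actual fix:
mp.messaging.outgoing.eligibility.connector=smallrye-kafka
mp.messaging.outgoing.eligibility.topic=eligibility
# Illustrative: the Glue Schema Registry Kafka serializer, if that is the serde in use
mp.messaging.outgoing.eligibility.value.serializer=com.amazonaws.services.schemaregistry.serializers.GlueSchemaRegistryKafkaSerializer
# Quotes preserve the camel case that the Glue serializer expects
mp.messaging.outgoing.eligibility."schemaName"=<some schema name>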

Related

Spring cloud sleuth not appending keys to Hibernate query logs

When I am using Spring Cloud Sleuth, I observe that it appends the application name and tracing keys to all of the application logs.
But this is not happening for Hibernate logs or JPA queries.
Is there any way to achieve this using Sleuth?
You can check out Brave integration with JDBC via p6spy - https://github.com/openzipkin/brave/tree/master/instrumentation/p6spy
Extract from the docs:
brave-instrumentation-p6spy
This includes a tracing event listener for P6Spy (a proxy for calls to your JDBC driver). It reports to Zipkin how long each statement takes, along with relevant tags like the query.
P6Spy requires a spy.properties in your application classpath (ex src/main/resources). brave.p6spy.TracingP6Factory must be in the modulelist to enable tracing.
modulelist=brave.p6spy.TracingP6Factory
url=jdbc:p6spy:derby:memory:p6spy;create=true
In addition, you can specify the following options in spy.properties
remoteServiceName
By default the zipkin service name for your database is the name of the database. Set this property to override it.
remoteServiceName=myProductionDatabase
includeParameterValues
When set to true, the tag sql.query will also include the JDBC parameter values.
Note: if you enable this please also consider enabling 'excludebinary' to avoid logging large blob values as hex (see http://p6spy.readthedocs.io/en/latest/configandusage.html#excludebinary).
includeParameterValues=true
excludebinary=true
spy.properties applies globally to any instrumented jdbc connection. To override this, add the zipkinServiceName property to your connection string.
jdbc:mysql://127.0.0.1:3306/mydatabase?zipkinServiceName=myServiceName
This will override the remoteServiceName set in spy.properties.
The current tracing component is used at runtime. Until you have instantiated brave.Tracing, no traces will appear.
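For completeness, a rough sketch of instantiating brave.Tracing with a Zipkin reporter (the class name, endpoint, and service name are placeholders, and the exact builder methods vary by Brave/zipkin-reporter version):
import brave.Tracing;
import brave.sampler.Sampler;
import zipkin2.Span;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.urlconnection.URLConnectionSender;

public class TracingBootstrap {
    public static Tracing buildTracing() {
        // Send finished spans to a local Zipkin instance (endpoint is a placeholder)
        URLConnectionSender sender =
                URLConnectionSender.create("http://localhost:9411/api/v2/spans");
        AsyncReporter<Span> reporter = AsyncReporter.create(sender);

        // Once this Tracing instance exists, the P6Spy listener can pick up the
        // current tracing component and start reporting JDBC spans
        return Tracing.newBuilder()
                .localServiceName("my-service")       // placeholder service name
                .spanReporter(reporter)
                .sampler(Sampler.ALWAYS_SAMPLE)
                .build();
    }
}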

application.properties configuration for distributed database pattern

I am trying to develop a microservice using Spring and Spring Boot with a PostgreSQL database. I am using a distributed database: for one region I use one DB, and for another region I use a different DB. Currently I have only tried with one database; I added the datasource URL, username, and password in application.properties.
My doubt is: if I am using multiple distributed databases, how can I specify the different DB source URLs in the configuration (application.properties)? I am using the following structure for one database currently:
spring.datasource.url=jdbc:postgresql://localhost/milleTech_users
spring.datasource.username=postgres
spring.datasource.password=postgresql
spring.jpa.generate-ddl=true
Like above.
So if I am using multiple DBs for multiple regions, how can I provide the configuration conditionally here? I am new to the microservice world and the distributed database design pattern.
Multiple database details cannot be managed within a single application.properties.
Consider using Spring Cloud Config, where you can create multiple application.properties files with different profile names for every application.
In your case, the profile names could reflect the region. When you deploy to a particular region, launch the app with that profile name so that the required config is loaded and the appropriate database connection is used (see the sketch at the end of this answer).
Edit:
Also, in your case, if you can set environment variables, you can explore the option mentioned in this thread.
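As a rough sketch of the per-region profile idea above (the region names, hosts, and credentials are illustrative), the profile-specific files could look like this, whether they live next to the app or in the Config Server's Git repo:
# application-region1.properties
spring.datasource.url=jdbc:postgresql://region1-db-host/milleTech_users
spring.datasource.username=postgres
spring.datasource.password=postgresql
spring.jpa.generate-ddl=true
# application-region2.properties
spring.datasource.url=jdbc:postgresql://region2-db-host/milleTech_users
spring.datasource.username=postgres
spring.datasource.password=postgresql
spring.jpa.generate-ddl=true
The deployment for a region then activates the matching profile, for example:
java -jar your-service.jar --spring.profiles.active=region1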

configure database schema in spring boot application using jdbc template

I have a Spring Boot application using the Spring JdbcTemplate in the DAO layer, connecting to an Oracle DB. The DB username is different from the schema the queries will run against, so the queries need to run using a different schema, and I do not want to hard-code the schema as a prefix (for example, select * from user1.table...).
I researched a bit and couldn't find a simple and straightforward way to do that.
For example, if I were using JPA I could have simply configured the property spring.jpa.properties.hibernate.default_schema=<schema name>, but I couldn't find an equivalent way of configuring this when using Spring JDBC.
I ran into a similar problem and didn't find an ideal way to do it. I ended up setting the schema in SQL when the app loaded. At least I was able to reuse spring.jpa.properties.hibernate.default_schema, which I already had set for the JPA default schema.
final String schemaName = jpaProperties.getProperties().get("hibernate.default_schema");
jdbcTemplate.execute("SET SCHEMA '" + schemaName + "'");
This clearly isn't ideal, but it is better than defining your schema in multiple places.
(Note: I autowired both the JpaProperties and JdbcTemplate.)
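To make that concrete, a minimal sketch of the same idea as a startup bean (the class name is illustrative). One caveat: SET SCHEMA is not valid on Oracle, where the equivalent is ALTER SESSION SET CURRENT_SCHEMA = <schema>, and with a connection pool the statement only affects the connection it runs on, so it may need to be applied per connection (e.g. via the pool's init SQL):
import javax.annotation.PostConstruct;
import org.springframework.boot.autoconfigure.orm.jpa.JpaProperties;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class DefaultSchemaInitializer {

    private final JpaProperties jpaProperties;
    private final JdbcTemplate jdbcTemplate;

    public DefaultSchemaInitializer(JpaProperties jpaProperties, JdbcTemplate jdbcTemplate) {
        this.jpaProperties = jpaProperties;
        this.jdbcTemplate = jdbcTemplate;
    }

    @PostConstruct
    public void setDefaultSchema() {
        // Reuse the schema already configured for JPA so it is defined in one place
        String schemaName = jpaProperties.getProperties().get("hibernate.default_schema");
        jdbcTemplate.execute("SET SCHEMA '" + schemaName + "'");
    }
}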
You need to use the Oracle JDBC Driver.
A good example can be found in this Mkyong article:
# Oracle settings
spring.datasource.url=jdbc:oracle:thin:@localhost:1521:xe
spring.datasource.username=system
spring.datasource.password=password
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver

Configurable index name in Spring data Elasticsearch

I am using a Spring Data Elasticsearch repository and I would like to use one default index name for production and another one for testing.
Here is the application.properties for production:
spring.data.elasticsearch.cluster-name=elasticsearch
spring.data.elasticsearch.cluster-nodes=localhost:9300
spring.data.elasticsearch.repositories.enabled=true
Here is the configuration in application-test.properties:
spring.data.elasticsearch.cluster-name=elasticsearch
spring.data.elasticsearch.cluster-nodes=localhost:9300
elasticsearch.index.name=registry-test
spring.data.elasticsearch.repositories.enabled=true
The elasticsearch.index.name does not seem to be taken into account. What is the right setting in the properties if I am using Spring Boot auto-configuration? I cannot find any info in the Spring documentation.
Thanks for your help.
This is the full list of properties Spring Boot understands. You'll need to scroll down to find the Elasticsearch-related properties. Your property is not listed.
There is one, which may help you:
spring.data.elasticsearch.properties.*= # Additional properties used to configure the client.
But I don't know how that works. So I guess the best option for you would be to create the Elasticsearch beans explicitly and not rely on auto-configuration.
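A rough sketch of that explicit configuration, assuming the Elasticsearch 6.x transport client to match the cluster-nodes=localhost:9300 style of setup (the class name and anything not in the question are illustrative):
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;

@Configuration
public class ElasticsearchConfig {

    @Value("${spring.data.elasticsearch.cluster-name}")
    private String clusterName;

    // Injected here only to show where the custom key would be read;
    // Spring Boot itself does not know this property
    @Value("${elasticsearch.index.name}")
    private String indexName;

    @Bean
    public TransportClient client() throws Exception {
        Settings settings = Settings.builder()
                .put("cluster.name", clusterName)
                .build();
        return new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
    }

    @Bean
    public ElasticsearchTemplate elasticsearchTemplate(TransportClient client) {
        return new ElasticsearchTemplate(client);
    }
}
Note that elasticsearch.index.name still has to be injected (e.g. via @Value, as above) wherever the index name is actually used, since it is not a key Spring Boot resolves on its own.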

Having spring bean properties refreshed automatically from properties file

I'm using Spring 2.5.6. I have a bean whose properties are being assigned from a property file via a PropertyPlaceholderConfigurer. I'm wondering whether it's possible to have the bean's properties updated when the property file is modified. There would be, for example, some periodic process that checks the last-modified date of the property file and, if it has changed, reloads the bean.
I'm wondering if there is already something that satisfies my requirements. If not, what would be the best approach to solving this problem?
Thanks for your help.
You might also look into using Spring's PropertyOverrideConfigurer. You could re-read the properties and re-apply them in some polling/scheduler bean (a sketch follows below).
It does depend on how the actually configured beans use these properties. They might, for example, indirectly cache them somewhere themselves.
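With that caveat in mind, a rough, framework-agnostic sketch of the polling idea (the Reloadable type, property handling, and 30-second interval are placeholders); checking the file's last-modified date, as the question suggests, is what drives the reload:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PropertiesReloader {

    // Placeholder for whatever bean should receive the refreshed values
    public interface Reloadable {
        void applyProperties(Properties properties);
    }

    private final File propertiesFile;
    private final Reloadable target;
    private long lastModified;

    public PropertiesReloader(File propertiesFile, Reloadable target) {
        this.propertiesFile = propertiesFile;
        this.target = target;
        this.lastModified = propertiesFile.lastModified();
    }

    public void start() {
        // Poll the file every 30 seconds; the interval is arbitrary
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(this::reloadIfChanged, 30, 30, TimeUnit.SECONDS);
    }

    private void reloadIfChanged() {
        long current = propertiesFile.lastModified();
        if (current <= lastModified) {
            return;                         // unchanged since the last check
        }
        lastModified = current;
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(propertiesFile)) {
            props.load(in);
            target.applyProperties(props);  // push the new values onto the bean
        } catch (Exception e) {
            // keep the previous values if the file cannot be read
        }
    }
}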
If you want dynamic properties at runtime, perhaps another way to do it is JMX.
One way to do this is to embed a Groovy console in your application. Here are some instructions. They were very simple to follow, by the way; it took me very little time even though I'm not that familiar with Groovy.
Once you do that you can simply go into the console and change values inside the live application on the fly.
You might try to use a custom scope for the bean that recreates beans on changes of the properties file. See my more extensive answer here.
Spring Cloud Config has facilities to change configuration properties at runtime via the Spring Cloud Bus and a Cloud Config Server. The configuration (.properties or .yml) files are "externalized" from the Spring app and instead retrieved from a Spring Cloud Config Server that the app connects to on startup. That Config Server retrieves the appropriate .properties or .yml files from a Git repo (there are other storage options, but Git is the most common). You can then change configuration at runtime by changing the contents of the Git repo's configuration files: the Config Server broadcasts the changes to any client Spring applications via the Spring Cloud Bus, and those applications' configuration is updated without needing a restart. You can find a working simple example here: https://github.com/ldojo/spring-cloud-config-examples
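On the client side, the usual pattern is a @RefreshScope bean whose injected values are re-bound when a refresh event arrives over the bus; a minimal sketch (the property key and class name are illustrative):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// Re-created with fresh property values whenever a refresh is triggered
@RefreshScope
@Component
public class GreetingSettings {

    @Value("${app.greeting:hello}")     // illustrative property key with a default
    private String greeting;

    public String getGreeting() {
        return greeting;
    }
}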
