Spring Cloud Sleuth not appending keys to Hibernate query logs

When I am using Spring Cloud Sleuth, I can see that it appends the application name and trace key details to all of the application logs.
But this is not happening for Hibernate logs or JPA queries.
Is there any way to achieve this using Sleuth?

You can check out Brave's integration with JDBC via p6spy: https://github.com/openzipkin/brave/tree/master/instrumentation/p6spy
Extract from the docs:
brave-instrumentation-p6spy
This includes a tracing event listener for P6Spy (a proxy for calls to your JDBC driver). It reports to Zipkin how long each statement takes, along with relevant tags like the query.
P6Spy requires a spy.properties in your application classpath (e.g. src/main/resources). brave.p6spy.TracingP6Factory must be in the modulelist to enable tracing.
modulelist=brave.p6spy.TracingP6Factory
url=jdbc:p6spy:derby:memory:p6spy;create=true
In addition, you can specify the following options in spy.properties
remoteServiceName
By default the Zipkin service name for your database is the name of the database. Set this property to override it.
remoteServiceName=myProductionDatabase
includeParameterValues
When set to true, the tag sql.query will also include the JDBC parameter values.
Note: if you enable this please also consider enabling 'excludebinary' to avoid logging large blob values as hex (see http://p6spy.readthedocs.io/en/latest/configandusage.html#excludebinary).
includeParameterValues=true
excludebinary=true
spy.properties applies globally to any instrumented JDBC connection. To override this, add the zipkinServiceName property to your connection string.
jdbc:mysql://127.0.0.1:3306/mydatabase?zipkinServiceName=myServiceName
This will override the remoteServiceName set in spy.properties.
The current tracing component is used at runtime. Until you have instantiated brave.Tracing, no traces will appear.
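To wire this up, add the io.zipkin.brave:brave-instrumentation-p6spy artifact to your build alongside P6Spy. As a minimal sketch of that last point (Spring Cloud Sleuth normally creates this bean for you; the service name below is a placeholder):
import brave.Tracing;

// Hedged sketch: with Sleuth on the classpath a brave.Tracing bean already exists.
// Outside of Sleuth you would bootstrap Brave yourself, for example:
Tracing tracing = Tracing.newBuilder()
        .localServiceName("my-service") // placeholder service name
        .build();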

Related

Log effective URL for Spring Boot Liquibase

I'm using Spring Boot 2.7. When I run a unit test, it insists on creating the Liquibase change log table twice for what should be an H2 in-memory database. I'd like to have Liquibase log the actual JDBC URL being used. I know what the properties say, but I have an application.properties and an application-h2.properties, and sometimes Spring wants to use one in-memory database even though a different one is configured.
Is there some property like
spring.liquibase.show-effective-jdbc-url=true?
Bonus points for telling me how to log this for regular JPA access.
Thanks,
Woodsman
There is no such flag, but the effective URL is logged at FINE level. There should be a message like Connected to USER#URL, where the URL value is returned by the driver itself, not just what you gave it.
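A minimal sketch of surfacing that message, assuming Spring Boot routes Liquibase output through its standard logging system (where FINE corresponds to DEBUG) and that the logger name is the liquibase package:
# application.properties (hedged: logger name assumed)
logging.level.liquibase=DEBUG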

How to configure maxDegreeOfParallelism for Cosmos DB in Spring Boot?

I want to configure CosmosQueryRequestOptions.maxDegreeOfParallelism while using the CosmosRepository, but I didn't find any documentation about it.
This blog shows how to configure and use this setting through a custom client, but I want to use the repository instead: https://medium.com/@middha.nishant173/improve-query-performance-with-azure-cosmosdb-java-sdk-v4-db1fc54cb484
CosmosQueryRequestOptions is an implementation detail of the Spring Data Cosmos SDK, so customers cannot set it through the Spring application.
This can be implemented as a new feature and exposed through application.properties via query.maxDegreeOfParallelism, which customers can opt into if they want.
The default value for maxDegreeOfParallelism is 0, which is the right value for single-partition queries. For cross-partition queries in the current SDK version, you can get the CosmosClient through the Spring Boot ApplicationContext and run the query directly against the client, as sketched below. This example shows how to do it: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-spring-data-cosmos-test/src/test/java/com/azure/spring/data/cosmos/repository/integration/PageableAddressRepositoryIT.java#L144
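A hedged sketch of that approach, assuming a sync CosmosClient bean is available (a CosmosAsyncClient works analogously) and using placeholder database, container, and entity names:
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.util.CosmosPagedIterable;

// applicationContext is your Spring ApplicationContext.
CosmosClient cosmosClient = applicationContext.getBean(CosmosClient.class);

// Raise the parallelism for a cross-partition query.
CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
options.setMaxDegreeOfParallelism(4); // 0 (the default) suits single-partition queries

CosmosPagedIterable<Address> results = cosmosClient
        .getDatabase("mydatabase")   // placeholder
        .getContainer("addresses")   // placeholder
        .queryItems("SELECT * FROM c", options, Address.class);
results.forEach(System.out::println);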

Disable specific bean types from Cloud Foundry java-buildpack-auto-reconfiguration

I have the problem that in my company there is a workaround for using the DB2 broker in Cloud Foundry. To get this to work, you have to disable the java-buildpack-auto-reconfiguration in your application with this property:
JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
Otherwise you get this error:
DB2 SQL Error: SQLCODE=-142, SQLSTATE=42612, SQLERRMC=null
In the git project (https://github.com/cloudfoundry/java-buildpack-auto-reconfiguration) I read that this property disables rewrites of bean definitions of various types (javax.sql.DataSource, org.springframework.data.mongodb.MongoDbFactory, org.springframework.amqp.rabbit.connection.ConnectionFactory, ...) that connect automatically with services bound to the application. In our application we use MongoDB as well as DB2, so I am worried that this configuration disables something for MongoDB that I do not want disabled.
I hope this gets fixed soon so that I no longer need this configuration. But for now, is it possible to disable the reconfiguration only for specific bean types (in my case javax.sql.DataSource)?
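For reference, a minimal sketch of how the property above is typically set in a Cloud Foundry manifest.yml (the application name is a placeholder):
---
applications:
- name: my-app   # placeholder
  env:
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'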

Custom properties (J2EEResourcePropertySet) of a datasource do not appear when using a user-defined JDBC provider to create the datasource through wsadmin

So basically I have created a user-defined JDBC provider through wsadmin:
AdminTask.createJDBCProvider('[-scope Cluster=MyCluster -databaseType User-defined -providerType "User-defined JDBC Provider" -implementationType User-defined -name "MSSQL JDBC Provider" -description "Microsoft SQL Server JDBC Driver" -classpath [${SQL_PATH}/sql.jar ] -nativePath "" -implementationClassName com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource ]')
After that I want to create a datasource. When I use the UI to create the datasource, it fills three pages with custom properties (a J2EEResourcePropertySet with 55 J2EEResourceProperties).
If I use wsadmin, it does not fill those three pages; for some reason I only see ~8 custom properties (J2EEResourceProperties).
If I look at the command assistance log, the commands when creating through the UI and through the script are the same.
Can someone explain to me what is wrong? I need to have 55 custom properties when running the script as well. Thanks.
Here is my datasource script:
jdbcprovider = AdminConfig.getid('/JDBCProvider:MSSQL JDBC Provider/')
AdminTask.createDatasource(jdbcprovider, '[-name DataSource1 -jndiName DataSource1 -dataStoreHelperClassName com.ibm.websphere.rsadapter.MicrosoftSQLServerDataStoreHelper -containerManagedPersistence true -componentManagedAuthenticationAlias SAmgr/DataSource1 ]')
[Screenshots: creating the datasource through the UI shows 55 custom properties; creating it through the wsadmin script shows only ~8.]
EDIT: When you create a DataSource using a JDBC provider created using one of the pre-defined types, like MS SQL Server JDBC Driver, WAS uses the contents of a template to populate (among other things) the properties of the datasource. There are templates for both the 8 WebSphere properties (like webSphereDefaultQueryTimeout) and the other vendor-specific properties (like applicationName for MSSQL Server). The 8 WebSphere-specific properties are common to all datasources, are maintained by WAS, and are not properties of the JDBC driver. The properties in the vendor-specific template are a subset of all the vendor properties, based on our assessment of whether it is "safe" to set/unset the property in a managed (JEE app server) environment.
Whether you create the datasource from the admin console or wsadmin, when the JDBC provider is based on one of the predefined vendors, the property set is the same since it's coming from standard templates. The behavior difference you're seeing is because you're creating a user-defined JDBC provider and not using one of the predefined ones. Typically, user-defined JDBC providers are only needed when the JDBC driver you want to use is not one of the pre-defined ones.
When you create a datasource from a user-defined JDBC provider using the admin console, behind the scenes it is calling a method to introspect the driver and discover any public javabeans that the JDBC driver exposes as properties. The admin console then adds these properties to the datasource in addition to the 8 WebSphere properties from the template discussed above. The admin console performs the introspection because there is no template available for the vendor properties. However, when you create a datasource from a user-defined JDBC provider using wsadmin, it doesn't perform that introspection, thus the only properties you see on the datasource will be the 8 WebSphere properties from the template discussed above. There are instances where the console executes some steps programmatically vs. via scripting, and differences in behavior like this can arise. So, that's the answer to "why the difference". After investigation, there is no option to have the wsadmin command introspect the driver and add the additional properties.
I see two ways you can address the issue and get the properties added.
1. If the set of driver properties you need is part of those included in the standard template for that driver, change from creating a user-defined JDBC provider to using one created with that driver vendor. Using wsadmin, you'll get all the same properties you would get if you created it from the admin console.
2. If some, but not all, of the properties you need are in the template, you can add those properties via scripting using the AdminConfig.create(…) method, as sketched below. After your config is saved, all objects created via scripting will be available to see in the admin console, including the custom properties.
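A hedged wsadmin (Jython) sketch of the second option; the datasource name, property name, and value are placeholders:
# Locate the datasource and its property set, then add a vendor property.
ds = AdminConfig.getid('/DataSource:DataSource1/')
propSet = AdminConfig.showAttribute(ds, 'propertySet')
AdminConfig.create('J2EEResourceProperty', propSet,
    [['name', 'applicationName'], ['type', 'java.lang.String'], ['value', 'MyApp']])
AdminConfig.save()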

WSO2 Identity Server - Custom JDBC User Store Manager - JDBC Pools

WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session-closed errors and SQL exceptions, whereas the Identity Server itself is still operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manage could reference?
Instead of using your own datasources/connections, you can import Carbon datasources and use those (they come with built-in pooling, with no need to worry about any configuration, etc.). You can either access these programmatically by directly calling the ndatasource component, or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.ndatasource.core</artifactId>
<version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in the sketch below (the @scr.reference tag declares the service you need injected; this uses the Maven SCR plugin to parse these annotations when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle as it uses declarative services (and have to place it in repository/components/dropins). Otherwise the dependencies won't be injected at runtime.
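A hedged sketch of that wiring, using the Felix SCR javadoc annotations (component and method names are placeholders):
import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.userstore.component" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class CustomUserStoreComponent {
    private DataSourceService dataSourceService;

    // Called by the OSGi runtime when the service becomes available.
    protected void setDataSourceService(DataSourceService service) {
        this.dataSourceService = service;
    }

    protected void unsetDataSourceService(DataSourceService service) {
        this.dataSourceService = null;
    }
}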
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
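Or, to look one up via JNDI instead (a hedged sketch; the JNDI name is a placeholder that must match your datasource definition):
import javax.naming.InitialContext;
import javax.sql.DataSource;

DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyCarbonDS");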
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load up that datasource and ignore any other params. Seeing that, I got rid of the four params listed above and forced my custom user store to only allow a DATASOURCE, which I set in code to the default JNDI name I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
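For illustration, a hedged sketch of such an entry in master-datasources.xml (all names and values are placeholders):
<datasource>
    <name>MY_USERSTORE_DB</name>
    <jndiConfig>
        <name>jdbc/MyUserStoreDS</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/userstore</url>
            <username>dbuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>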
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters I would or wouldn't want, and could just manage it all in the XML file easily. This really has advantages for portability of configs and for IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)
