Enterprise Manager 12c JMX operations - Oracle

Wanted to know if OEM Cloud Control 12c supports custom JMX operations as in JConsole. I've instrumented my Java application to add some JMX operations which take a String as a parameter, do some processing, and return the result.
Example: Something like the add() operation
I tried using the jmxcli utility for creating a metadata plug-in, but it looks like the arguments (or parameters) to the JMX operation have to be hardcoded while creating the plug-in. Is there any other way to run JMX operations on user-defined parameters in OEM?
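For reference, an operation like the add() mentioned above can be sketched as follows; the MBean, its name, and the "processing" are all hypothetical stand-ins, and the reflective invoke at the bottom is what JConsole (or a fetchlet) does under the covers:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxOpDemo {
    // Hypothetical MBean exposing a String-parameter operation like the
    // add() in the question; the body is just placeholder processing.
    public interface EchoMBean {
        String add(String input);
    }

    public static class Echo implements EchoMBean {
        public String add(String input) {
            return "processed:" + input;
        }
    }

    // Register the MBean and invoke the operation reflectively,
    // by operation name and parameter signature.
    public static Object invokeAdd(String arg) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=Echo");
        if (!server.isRegistered(name)) {
            server.registerMBean(new Echo(), name);
        }
        return server.invoke(name, "add",
                new Object[] { arg }, new String[] { "java.lang.String" });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeAdd("42")); // prints processed:42
    }
}
```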

This can be done using a custom UI. The scope of the relevant property in the QueryDescriptor should be "USER": the custom UI is used to input the value, and that value is then passed to the JMX fetchlet.
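A rough sketch of what such a QueryDescriptor might look like; the property names used here (MBeanName, operationName, arguments) are illustrative only, so check the EM Extensibility Programmer's Reference for the exact properties the JMX fetchlet expects:

```xml
<!-- Sketch only: the USER-scoped property is supplied at runtime
     through the custom UI rather than hardcoded in the plug-in -->
<QueryDescriptor FETCHLET_ID="JMX">
  <Property NAME="MBeanName" SCOPE="GLOBAL">mydomain:type=MyBean</Property>
  <Property NAME="operationName" SCOPE="GLOBAL">add</Property>
  <Property NAME="arguments" SCOPE="USER">inputValue</Property>
</QueryDescriptor>
```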


Custom properties (J2EEResourcePropertySet) of datasource do not appear when using user-defined JDBC provider to create datasource through wsadmin

So basically I have created a user-defined JDBC provider through wsadmin:
AdminTask.createJDBCProvider('[-scope Cluster=MyCluster -databaseType User-defined -providerType "User-defined JDBC Provider" -implementationType User-defined -name "MSSQL JDBC Provider" -description "Microsoft SQL Server JDBC Driver" -classpath [${SQL_PATH}/sql.jar ] -nativePath "" -implementationClassName com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource ]')
After that I want to create a datasource. When I use the UI to create the datasource, it fills 3 pages with custom properties (J2EEResourcePropertySet) of that datasource (55 J2EEResourceProperties).
If I use wsadmin, it does not fill those 3 pages; for some reason I only see ~8 custom properties (J2EEResourceProperties).
If I look at the command assistance log when creating through the UI and through the script, the commands are the same.
Can someone explain to me what is wrong? I need to have 55 custom properties when running the script as well. Thanks.
Here is my datasource script:
jdbcprovider = AdminConfig.getid('/JDBCProvider:MSSQL JDBC Provider/')
AdminTask.createDatasource(jdbcprovider, '[-name DataSource1 -jndiName DataSource1 -dataStoreHelperClassName com.ibm.websphere.rsadapter.MicrosoftSQLServerDataStoreHelper -containerManagedPersistence true -componentManagedAuthenticationAlias SAmgr/DataSource1 ]')
Adding images to clarify: creating the datasource through the UI shows 55 custom properties, while creating it through the wsadmin script shows only 8.
EDIT: When you create a DataSource using a JDBC provider created using one of the pre-defined types, like MS SQL Server JDBC Driver, WAS uses the contents of a template to populate (among other things) the properties of the datasource. There is a template for both the 8 WebSphere properties (like webSphereDefaultQueryTimeout), and the other vendor-specific properties (like applicationName for MSSQL Server). The 8 WebSphere specific properties are common to all datasources and are maintained by WAS and are not properties of the JDBC driver. The properties in the vendor-specific template are a subset of all the vendor properties based on our assessment of whether it is “safe” to set/unset the property in a managed (JEE app server) environment.
Whether you create the datasource from the admin console or wsadmin, when the JDBC provider is based on one of the predefined vendors, the property set is the same since it's coming from standard templates. The behavior difference you're seeing is because you're creating a user-defined JDBC provider and not using one of the predefined ones. Typically, user-defined JDBC providers are only needed when the JDBC driver you want to use is not one of the pre-defined ones. When you create a datasource from a user-defined JDBC provider using the admin console, behind the scenes it calls a method to introspect the driver and discover any public javabeans that the JDBC driver exposes as properties. The admin console then adds these properties to the datasource in addition to the 8 WebSphere properties from the template discussed above. The admin console performs the introspection because there is no template available for the vendor properties. However, when you create a datasource from a user-defined JDBC provider using wsadmin, it doesn't perform that introspection, so the only properties you see on the datasource will be the 8 WebSphere properties from the template discussed above. There are instances where the console executes some steps programmatically vs. via scripting, and differences in behavior like this can arise. So, that's the answer to "why the difference". After investigation, there is no option to have the wsadmin command introspect the driver and add the additional properties.
I see two ways you can address the issue and get the properties added.
If the set of driver properties you need is part of those included in the standard template for that driver, change from creating a user-defined JDBC provider to using one created with that driver vendor. Using wsadmin, you'll get all the same properties you would get if you created it from the admin console. If some, but not all, of the properties you need are in the template, you can add those properties via scripting using the AdminConfig.create(...) method as you suggested. After your config is saved, all objects created via scripting will be visible in the admin console, including the custom properties.
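A minimal sketch of that AdminConfig.create(...) approach in a wsadmin (Jython) session; the property name and value here are just examples, not required values:

```
# Locate the datasource and its J2EEResourcePropertySet
ds = AdminConfig.getid('/DataSource:DataSource1/')
propSet = AdminConfig.showAttribute(ds, 'propertySet')
# Add one missing vendor property (example values)
AdminConfig.create('J2EEResourceProperty', propSet,
    '[[name "applicationName"] [type "java.lang.String"] [value "MyApp"]]')
AdminConfig.save()
```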

Spring Cloud Sleuth not appending keys to Hibernate query logs

When I am using Spring Cloud Sleuth, I observe that it appends the application name and key details to all the application logs.
But this is not happening for Hibernate logs or JPA queries.
Is there any way to achieve this using Sleuth?
You can check out Brave's integration with JDBC via p6spy - https://github.com/openzipkin/brave/tree/master/instrumentation/p6spy
Extract from the docs:
brave-instrumentation-p6spy
This includes a tracing event listener for P6Spy (a proxy for calls to your JDBC driver). It reports to Zipkin how long each statement takes, along with relevant tags like the query.
P6Spy requires a spy.properties in your application classpath (e.g. src/main/resources). brave.p6spy.TracingP6Factory must be in the modulelist to enable tracing.
modulelist=brave.p6spy.TracingP6Factory
url=jdbc:p6spy:derby:memory:p6spy;create=true
In addition, you can specify the following options in spy.properties
remoteServiceName
By default the Zipkin service name for your database is the name of the database. Set this property to override it:
remoteServiceName=myProductionDatabase
includeParameterValues
When set to true, the tag sql.query will also include the JDBC parameter values.
Note: if you enable this please also consider enabling 'excludebinary' to avoid logging large blob values as hex (see http://p6spy.readthedocs.io/en/latest/configandusage.html#excludebinary).
includeParameterValues=true
excludebinary=true
spy.properties applies globally to any instrumented JDBC connection. To override this, add the zipkinServiceName property to your connection string.
jdbc:mysql://127.0.0.1:3306/mydatabase?zipkinServiceName=myServiceName
This will override the remoteServiceName set in spy.properties.
The current tracing component is used at runtime. Until you have instantiated brave.Tracing, no traces will appear.
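To wire this into a typical Spring Boot application (which is the usual Sleuth setup), one common approach is to route the datasource through the p6spy proxy driver; the property keys below are standard Spring Boot datasource settings, and the URL is an example to adapt:

```properties
# Route Hibernate's JDBC calls through P6Spy so Brave can trace each statement
spring.datasource.driver-class-name=com.p6spy.engine.spy.P6SpyDriver
spring.datasource.url=jdbc:p6spy:mysql://127.0.0.1:3306/mydatabase
```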

Oracle Service Bus Java

I have an EJB which has as input argument and return value a JAXB mapped complex structure (with subclasses etc).
Now I want to deploy this on the Oracle Service Bus 11g. I can create a business proxy invoking the EJB, but only with basic types (int, ...).
How do I tunnel the XML between the EJB and OSB? Any advanced OSB information is appreciated, as I don't know much about it.
After playing around, it turns out that OSB supports (AFAIK only) Apache XMLBeans. Thus if you declare parameters and return values of type org.apache.xmlbeans.XmlObject, it works. I did get some errors about DOM level 3 not being implemented, and some crashes in the Oracle DOM implementation, so I just use the XmlObject to create an XML string and then reparse it.
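The serialize-and-reparse workaround looks roughly like this (a sketch against the XMLBeans API, assuming you already hold an XmlObject named payload):

```java
// Serialize the XmlObject to a plain XML string, then reparse it,
// sidestepping the DOM level-3 issues mentioned above
String xml = payload.xmlText();
XmlObject reparsed = XmlObject.Factory.parse(xml);
```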
@Euclides: I have XMLObject and XmlObject in my classpath. I need the second (lowercase). Thanks for the hint, anyway.

WSO2 Identity Server - Custom JDBC User Store Manager - JDBC Pools

WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session-closed errors and SQL exceptions, whereas the Identity Server itself is still operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon datasources and use those (they come with built-in pooling, and there's no need to worry about any configuration). You can either access them programmatically by directly calling the ndatasource component, or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.ndatasource.core</artifactId>
<version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in this code (the @scr.reference tag refers to the service you need to inject; this uses the Maven SCR plugin to parse these annotations when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle as it uses declarative services (and have to place it in repository/components/dropins). Otherwise the dependencies won't be injected at runtime.
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
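The JNDI route mentioned above can be sketched like this; the JNDI name is hypothetical and would come from the datasource definition in your configuration:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Look up a Carbon-managed, pooled datasource via JNDI
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/MyCustomStoreDS");
try (Connection conn = ds.getConnection()) {
    // use the pooled connection here
}
```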
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load that datasource and ignore any other params set. Seeing that, I got rid of those 4 params listed above and forced my custom user store to only allow a DATASOURCE property, which I set in code to the default JNDI name I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all the params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
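Such a datasource entry in master-datasources.xml looks roughly like this; the name, JNDI name, URL, and credentials are all examples, and the pool-tuning elements at the bottom are the ones that keep the pool alive overnight:

```xml
<!-- Example entry for master-datasources.xml; all values are placeholders -->
<datasource>
    <name>MyCustomStoreDS</name>
    <jndiConfig><name>jdbc/MyCustomStoreDS</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/userstore</url>
            <username>storeuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <!-- pool validation so stale connections are recycled -->
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```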
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to include, and could just manage it all in the XML file easily. This really has advantages for portability of configs and IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)

JMX in Spring: Is MBeanServerConnectionFactoryBean Thread-Safe?

I have a Spring-based web application that needs to get data from ActiveMQ via a JMX connection.
I am using MBeanServerConnectionFactoryBean (in Spring) to get various MBean attributes from ActiveMQ.
I have only one MBeanServerConnectionFactoryBean as a member variable, which is used to get the data. If multiple requests/threads come in concurrently, will there be any issues? Will there be any race conditions?
Please suggest the best way to keep the code thread-safe.
Spring FactoryBean objects are not intended to be used directly from your code; they're supposed to be used in your Spring config. As such, they are designed to be executed once and once only.
If you want to use them, including MBeanServerConnectionFactoryBean, then you need to create them, configure them, use them, and discard them each and every time you want to get the object they create. They are most definitely not thread-safe.
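That create-configure-use-discard pattern looks roughly like this; the service URL is an example, and this is a sketch of the factory-bean lifecycle rather than a recommended design:

```java
// Per-request use of the factory bean: create, configure, use, destroy
MBeanServerConnectionFactoryBean factory = new MBeanServerConnectionFactoryBean();
factory.setServiceUrl("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
factory.afterPropertiesSet();          // opens the JMX connector
try {
    MBeanServerConnection connection = factory.getObject();
    // ... read the ActiveMQ MBean attributes you need ...
} finally {
    factory.destroy();                 // closes the connector
}
```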
Better yet, do it as the design intended and use them in your Spring config.
