WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I end up with session-closed errors and SQL exceptions, while the Identity Server itself keeps operating fine with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon datasources and use those (they come with built-in pooling, so there is no need to worry about any of that configuration). You can either access them programmatically by calling the ndatasource component directly, or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.ndatasource.core</artifactId>
    <version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in the sketch below (the @scr.reference tag declares the service you need injected; the Maven SCR plugin parses these tags when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle, since it uses declarative services, and place it in repository/components/dropins. Otherwise the dependencies won't be injected at runtime.
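A minimal sketch of such a component, assuming the SCR javadoc-tag style (the class name and tag values here are illustrative placeholders, not an exact Carbon component):

import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.user.store.ds.component" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class CustomUserStoreDSComponent {

    private static DataSourceService dataSourceService;

    // Called by the SCR runtime when the service becomes available
    protected void setDataSourceService(DataSourceService service) {
        dataSourceService = service;
    }

    // Called by the SCR runtime when the service goes away
    protected void unsetDataSourceService(DataSourceService service) {
        dataSourceService = null;
    }

    public static DataSourceService getDataSourceService() {
        return dataSourceService;
    }
}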
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
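If you know the datasource name, you should be able to grab a specific one and unwrap it roughly like this (the JNDI name is a placeholder; getDSObject() returns the underlying object, which for an RDBMS datasource is a javax.sql.DataSource - verify against the Carbon version you build against):

// Hedged sketch: fetch one Carbon datasource by name and unwrap it.
// "jdbc/MyCustomUserStoreDS" is a placeholder datasource name.
CarbonDataSource cds = dataSourceService.getDataSource("jdbc/MyCustomUserStoreDS");
javax.sql.DataSource ds = (javax.sql.DataSource) cds.getDSObject();
java.sql.Connection conn = ds.getConnection();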
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would simply load that datasource and ignore any other connection params. Seeing that, I got rid of the four params listed above and forced my custom user store to accept only a DATASOURCE property, which I set in code to the default JNDI name I always give that datasource. With that, I was able to configure the JDBC pool for this datasource with all the params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
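For illustration, the constants class ends up doing something roughly like this (the setMandatoryProperty helper and the JNDI name are placeholders mirroring the description above; the Property constructor is assumed to be org.wso2.carbon.user.api.Property(name, value, description, childProperties)):

import java.util.ArrayList;
import org.wso2.carbon.user.api.Property;
import org.wso2.carbon.user.core.jdbc.JDBCRealmConstants;

public class CustomUserStoreConstants {

    public static final ArrayList<Property> CUSTOM_UM_MANDATORY_PROPERTIES = new ArrayList<Property>();

    static {
        // Only expose DATASOURCE, pre-filled with the JNDI name used in master-datasources.xml
        setMandatoryProperty(JDBCRealmConstants.DATASOURCE, "jdbc/MyCustomUserStoreDS",
                "JNDI name of the datasource defined in master-datasources.xml");
    }

    private static void setMandatoryProperty(String name, String value, String description) {
        CUSTOM_UM_MANDATORY_PROPERTIES.add(new Property(name, value, description, null));
    }
}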
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to support; I can just manage it all easily in the XML file. This has real advantages for portability of configs and for IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)
Related
For a team of developers it is essential that everybody can set up and configure the application server. In our case we are using WebSphere 8.5.
I'm looking for an easy way to do this. Normally you do it using the Profile Management Tool located in WAS_HOME/bin/ProfileManagement, and this tool works quite well. But after installing the WebSphere server one still needs to configure the server profile - creating datasources, JMS queues, buses, variables and so on. So I thought it would be nice if there were a way to apply these configurations to an existing profile.
My first try was to just configure one profile and then make a configuration backup using
%WAS_HOME%/bin/backupConfig.bat
But the configuration contains, for example, the hostname and other host-dependent settings. So the backupConfig.bat tool is not what I'm looking for.
The next thought that came to my mind was that I might be able to create a special profile template, so that others could use the Profile Management Tool with this template. But the template structure does not seem to be made for customization: there are a lot of files and nearly no documentation on how to create your own profile template.
Then I came across augment templates. These templates are used (as the name implies) to add specific configuration to an existing profile. I found a lot of documentation on how to apply an augmentation to an existing profile, but none on how to create an augmentation.
Finally, I think there must be some way of exporting WebSphere datasource, bus, JMS, etc. configuration and applying it to other profiles, because in very big installations the operations team must have this ability.
I know that I can add container-specific descriptors to the EAR, e.g. META-INF/ibmconfig/cells/defaultCell/applications/defaultApp/resources.xml. But I don't want to build environment-specific EAR files, because that couples our builds to the infrastructure, and we would have to rebuild and redeploy whenever operations changes the infrastructure, e.g. hostnames, IPs, passwords.
Does anyone know how to manage the distribution of datasources, buses, JMS, etc. to multiple WebSphere installations?
In addition to wsadmin scripts - which are very good for this kind of task - I'd suggest properties-based configuration. It might be more useful for you, since it allows you to export many configuration objects at one time and then apply them to different environments. It might also be a bit easier, since you work with plain-text files instead of Jython scripts.
Properties file based configuration enables you to:
Extract data out of the configuration repository to create properties files.
Update a properties file to manipulate the configuration, as needed.
Apply the updated data in the properties file to a target configuration repository.
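As a rough sketch of that round trip with wsadmin (Jython; the file and object names are placeholders, check the PropertiesBasedConfiguration command group reference for the exact options):

# Extract the configuration of an existing datasource into a properties file
AdminTask.extractConfigProperties('[-propertiesFileName myDataSource.props -configData DataSource=MyDataSource]')

# ...edit myDataSource.props (hostnames, ports, credentials) for the target environment...

# On the target profile, apply the edited properties file and save
AdminTask.applyConfigProperties('[-propertiesFileName myDataSource.props]')
AdminConfig.save()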
See more details here:
Properties-based configuration
Infocenter documentation
Education assistant
I suggest you use wsadmin scripting and create a script for resource creation. A bonus is that you can run it directly from RAD (right-click, Run As -> Administrative Script).
Here is a complete example, written in Jython, for JDBC resource creation along with the JAAS login information (note: I'm using an Oracle database; your setup could differ depending on the database you are using):
cell=AdminConfig.showAttribute(AdminConfig.list("Cell"), "name")
node=AdminConfig.showAttribute(AdminConfig.list("Node"), "name")
#Add JAAS credentials
print "Adding JAAS credentials"
security = AdminConfig.getid('/Cell:'+cell+'/Security:/')
alias = ['alias', node+'/dbUser']
userid = ['userId', 'DBUSER']
password = ['password', 'PASSWORD']
jaasAttrs = [alias, userid, password]
AdminConfig.create('JAASAuthData', security, jaasAttrs)
AdminConfig.save()
#Add JDBC jar path
print "Adding JDBC jar path"
AdminTask.setVariable('[-variableName ORACLE_JDBC_DRIVER_PATH -variableValue ${WAS_INSTALL_ROOT}/lib/ext -scope Cell='+cell+',Node='+node+']')
AdminConfig.save()
#JDBC Provider
print "Adding JDBC Provider"
AdminTask.createJDBCProvider('[-scope Node='+node+',Server=server1 -databaseType Oracle -providerType "Oracle JDBC Driver" -implementationType "Connection pool data source" -name "Oracle JDBC Driver" -description "Oracle JDBC Driver-compliant Provider." -classpath ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar]')
AdminConfig.save()
#JDBC Datasources
print "Creating Datasource"
AdminJDBC.createDataSourceAtScope("Node="+node+",Server=server1", "Oracle JDBC Driver", "test", "jdbc/test", "com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper", "jdbc:oracle:thin:@10.0.0.1:1521:TEST", [['componentManagedAuthenticationAlias', node+'/dbUser'], ['containerManagedPersistence', 'true']])
AdminConfig.save()
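If you are not running it from RAD, you can also execute the script with the wsadmin client from the command line, along the lines of wsadmin.bat -lang jython -f createResources.py (with createResources.py being whatever you named the file).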
I just remembered the wsadmin tool and I guess it is the best way to implement my requirements.
Fortunately IBM provides sample scripts that show you how to create or modify datasources using Jython or JACL scripts.
An example of how to create datasources can be found in the Administration scripts (1-12) -- Jython version (file ex7.py in the zip).
Hope this helps others who have the same or a similar question.
I am trying to set up a Struts project locally. One way I know to set up the JDBC settings is to go to the administrative console of WebSphere and create the JDBC provider, the JNDI entry and so on. But is there any other way to do it in the code itself?
There is a resource reference in web.xml with a res-ref-name of DataSourceAlias and a res-type of javax.sql.DataSource, etc. I am totally new to Struts. Please help.
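For reference, a resource reference like that is normally consumed from code with a JNDI lookup of the java:comp/env name; a minimal sketch, assuming the res-ref-name DataSourceAlias from the web.xml entry above:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

public class DataSourceLookup {
    public static Connection getConnection() throws Exception {
        // java:comp/env is the standard prefix for resource references declared in web.xml
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/DataSourceAlias");
        return ds.getConnection();
    }
}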
If you configured this for WAS 6.1 and the configuration is good, you need to stop and start the node agents for the changes to be propagated, and then test the JDBC connection after restarting. If it is WAS 8, the changes are propagated automatically, so if they don't show up it means the configuration was done improperly.
I've been looking for someone else doing this same thing, but haven't seen a scenario that's quite like this so I thought I'd see if anyone here has any good ideas on how to accomplish this.
My group builds and maintains an open-source neuroimaging data archive tool called XNAT. Previous versions of our application have always required users to run a builder application that took in a build.properties file and used it to initialize the database server configuration, among other things. We're really trying to get down to a single installable war file that we can make available on the NeuroDebian repository. In order to do this, we need to be able to start the application WITHOUT any database configuration information, run through a configuration wizard a la Wordpress or Drupal installations that includes the user entering the database configuration, and finally store that configuration information SOMEWHERE and re-start or re-initialize the application context so that it gets its data source started up, Hibernate entity scans run, all auto-wired or injected dependencies that require the data source or Hibernate transaction manager resolved, services scanned for @Transactional annotations, and so on.
I can easily see how we can use the new Spring Framework WebApplicationInitializer to detect whether the user has already set up the database configuration and initialize the app properly based on that:
If the database has not been configured, create a servlet that just supports the UI for the initialization wizard
If the database has been configured, create the regular application context
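A minimal sketch of that branching with a WebApplicationInitializer might look like the following (MainAppConfig, SetupWizardConfig and the properties-file location are hypothetical placeholders, not actual XNAT or Spring classes):

import java.io.File;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class ConditionalAppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        if (databaseIsConfigured()) {
            // Full application: datasource, Hibernate, @Transactional services, etc.
            context.register(MainAppConfig.class);
        } else {
            // Minimal context that only serves the setup wizard UI
            context.register(SetupWizardConfig.class);
        }
        ServletRegistration.Dynamic dispatcher =
                servletContext.addServlet("dispatcher", new DispatcherServlet(context));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }

    private boolean databaseIsConfigured() {
        // Hypothetical check: the wizard writes this file once the user supplies DB credentials
        return new File(System.getProperty("user.home"), ".xnat/database.properties").exists();
    }
}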
The problem in the first case is what happens once the user has completed the initialization wizard? We can store the database configuration somewhere and now we're ready to go but... how do we get the regular application context working? Can we just take the code that we'd call in the already initialized scenario and call that? Will that initialize the application properly then, with component scans and so on all being handled or...?
The only solution we have currently is to have the user restart the server manually (it's usually Tomcat) or use the server manager application to restart just our application. That's not very aesthetically pleasing though.
My end goal here will be to write a simple test app that takes in the database credentials and then tries to initialize everything else afterwards, but I'm hoping to see if anyone's thought about this particular issue and/or tried it and has any advice on how to handle it. Any help would be greatly appreciated!
I have a Maven 3 project that uses Hibernate 3. In the Hibernate properties file, there is an entry for hibernate.connection.provider_class with the class corresponding to the C3P0 connection provider (org.hibernate.connection.C3P0ConnectionProvider). Obviously, this class is only used at runtime, so I don't need to add the corresponding dependency in my POM with the compile scope. Now, I want to give the possibility to use any connection pooling framework desired, so I also don't add a runtime dependency to the POM.
What is the best practice?
I thought about adding an entry to the classpath corresponding to the runtime dependency (in this case, hibernate-c3p0) when the application is run (for example, using the command line). But, I don't know if it's possible.
This is almost (maybe exactly) the same problem as with SLF4J. I don't know if Hibernate also uses the facade pattern for connection pooling.
Thanks
Since your code doesn't depend on the connection pooling (neither the main code nor the tests need it), there is no point in mentioning the dependency anywhere.
If anyone should mention it, then that would be Hibernate because Hibernate offers this feature in its config.
But you can add it to your POM with <optional>true</optional> to indicate:
I support this feature
If you use it, then I recommend this framework and this version
That will make life slightly simpler for consumers of your project.
But overall, you should not mention features provided/needed by other projects unless they have some impact on your code (for example, when you offer a simpler way to configure connection pooling for Hibernate).
[EDIT] Your main concern is probably how to configure the project for QA. The technical term for this new movement is "DevOps" - instead of producing a dumb WAR which the customer (QA) has to configure painstakingly, configuration is part of the development process just like everything else. What you pass on is a completely configured, ready-to-run setup.
To implement this, create another Maven module called "project-qa" which depends on your project and everything else you need to turn the dead code into a running application (so it will depend on DBCP plus it will contain all the necessary config files).
Maven supports WAR overlays, which allow you to implement this painlessly.
You can mark your dependency as optional. In that case it will not be packaged into archives, so you have to ensure that your container provides the required library.
You could use a different profile for each connection provider. In each profile, put the runtime dependency that corresponds to the connection provider you want to use and change the hibernate.connection.provider_class property accordingly.
For more details about how to configure dependencies in profiles, see Different dependencies for different build profiles in maven.
To see how to change the value of the hibernate.connection.provider_class property see How can I change a .properties file in maven depending on my profile?
I was wondering: how does one go about configuring WAS to store some confidential information that is not datasource, JMS or mail related? I'm looking for an ad hoc JNDI resource (a few Strings) that can be queried at runtime to get both a username and password for a vendor system I need to connect to.
Not being uber familiar with WAS, I'm sort of lost. In Tomcat it was a breeze. In WAS, I think I'm missing a few concepts, I'm sure it's possible though.
WAS JNDI is open to extension using your own URL provider class. The example here uses that capability to point to property files, but presumably you could instead create a provider that obtains data from a database (or whatever repository you are required to use) instead of a property file.
You can define your own JNDI entries.
Under Environment -> Naming -> Name Space Bindings you can create a String binding type and assign a key and value that can be looked up by applications.
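From application code, such a binding can then be read with a plain JNDI lookup; a minimal sketch, assuming a binding created at cell scope with vendorUser as its name in the name space (adjust the lookup path to your own binding):

import javax.naming.InitialContext;

public class VendorCredentials {
    public static String lookupVendorUser() throws Exception {
        // Cell-scoped name space bindings live under the cell persistent root
        InitialContext ctx = new InitialContext();
        return (String) ctx.lookup("cell/persistent/vendorUser");
    }
}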
Is this what you are after?
Manglu