I am trying to call a remote EJB from my application and I am getting this error:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.RuntimeException: javax.naming.NameNotFoundException: CNTR4009E: The edu.osu.cse5234.ooth.business.view.InventoryService remote interface for the InventoryServiceBean enterprise bean in the OutOfTheHouse-EJB.jar module in the OutOfTheHouse-EJBEAR application could not be obtained for the java:global/OutOfTheHouse-EJBEAR/OutOfTheHouse-EJB/InventoryServiceBean!edu.osu.cse5234.ooth.business.view.InventoryService JNDI name because remote interfaces are not supported by any of the features configured in the server.xml file.
I found that we need to add the following to server.xml:
<featureManager>
<feature>ejbRemote-3.2</feature>
<feature>localConnector-1.0</feature>
</featureManager>
but I am not getting an option to add ejbRemote in my server.xml.
What could be the reason?
Open server.xml and view it in Design mode (as opposed to Source mode), expand Server Configuration and click on Feature Manager. The feature manager details will be listed on the right side. Click on Add and add the following features:
ejbLite-3.2
javaee-7.0
jndi-1.0
localConnector-1.0
Additionally, make sure the EAR project is added to the WAS Liberty profile server. Increase the console log level to INFO (Server Manager --> Add --> Logging, then edit the console log level in the details window). The console output will print the JNDI name exactly as generated by the container. Copy and paste it into your ServiceLocator code.
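For reference, a minimal sketch of what such a ServiceLocator lookup might look like, using the JNDI name from the error above; the class and method names are assumptions, and the actual name should be whatever the console log prints:
import javax.naming.InitialContext;
import javax.naming.NamingException;
import edu.osu.cse5234.ooth.business.view.InventoryService;

public class ServiceLocator {
    // Looks up the remote EJB using the container-generated java:global name.
    public static InventoryService getInventoryService() {
        try {
            InitialContext ctx = new InitialContext();
            return (InventoryService) ctx.lookup(
                "java:global/OutOfTheHouse-EJBEAR/OutOfTheHouse-EJB/InventoryServiceBean!"
                + "edu.osu.cse5234.ooth.business.view.InventoryService");
        } catch (NamingException e) {
            throw new RuntimeException("EJB lookup failed", e);
        }
    }
}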
Praveen
I get this error as soon as I enable the features "webProfile-7.0" or "javaee-7.0". These features load other features, and it seems that one of them prevents the EJB from being bound. Also, there is no log entry containing the JNDI name in this case. To solve this issue I enable only the features I really need (servlet-3.1, localConnector-1.0, ejbLite-3.2, ejbRemote-3.2, jsp-2.3 in my case). This also speeds up server start.
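In server.xml that minimal feature set would look something like this (a sketch based on the list above; adjust to what your application actually needs):
<featureManager>
<feature>servlet-3.1</feature>
<feature>jsp-2.3</feature>
<feature>ejbLite-3.2</feature>
<feature>ejbRemote-3.2</feature>
<feature>localConnector-1.0</feature>
</featureManager>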
The problem was with the version of the Liberty profile I was using. I switched to a different one, was able to add javaee-7.0 as a feature, and after that it worked fine.
I have the problem that in my company there is a workaround for using the DB2 broker in Cloud Foundry. To get this to work you have to disable the java-buildpack-auto-reconfiguration in your application with this property:
JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
Otherwise you get this error:
DB2 SQL Error: SQLCODE=-142, SQLSTATE=42612, SQLERRMC=null
In the git project (https://github.com/cloudfoundry/java-buildpack-auto-reconfiguration) I read that this property disables rewrites of bean definitions of various types (javax.sql.DataSource, org.springframework.data.mongodb.MongoDbFactory, org.springframework.amqp.rabbit.connection.ConnectionFactory, ...) which would otherwise connect automatically with services bound to the application. In our application we use MongoDB as well as DB2, so I am worried that with this configuration I disable something that I do not want to disable for MongoDB.
I hope this will get fixed soon so that I no longer need this configuration. But for now, is it possible to disable the reconfiguration only for specific bean types (in my case javax.sql.DataSource)?
WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session closed errors and SQL exceptions, whereas the Identity Server itself is still operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon Datasources and use those (they come with inbuilt pooling and no need to worry about any configuration, etc.). You can either access these programmatically by directly calling the ndatasource component, or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.ndatasource.core</artifactId>
<version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in the sketch below (the #scr.reference tag refers to the service you need to inject; the maven-scr-plugin parses these tags when building the bundle).
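A rough sketch of such a component; only DataSourceService and the #scr.reference mechanism come from the answer above, the component and method names here are assumptions:
import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.datasource.consumer" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class DataSourceServiceComponent {

    private static DataSourceService dataSourceService;

    // Called by the OSGi container when the DataSourceService becomes available.
    protected void setDataSourceService(DataSourceService service) {
        dataSourceService = service;
    }

    protected void unsetDataSourceService(DataSourceService service) {
        dataSourceService = null;
    }

    public static DataSourceService getDataSourceService() {
        return dataSourceService;
    }
}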
Note that when you follow this approach you'll have to build the jar as an OSGi bundle, since it uses declarative services (and place it in repository/components/dropins); otherwise the dependencies won't be injected at runtime.
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
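Alternatively, for the JNDI route mentioned above: if the datasource is given a JNDI name in master-datasources.xml, a plain lookup works. A minimal sketch, where "jdbc/MyCustomDS" is an assumed name:
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CarbonDataSourceLookup {
    public static DataSource lookup() throws NamingException {
        // Use whatever name is configured under <jndiConfig> for the datasource
        // in master-datasources.xml.
        return (DataSource) new InitialContext().lookup("jdbc/MyCustomDS");
    }
}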
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load that datasource and ignore any other params set. Seeing that, I got rid of the 4 params listed above, forced my custom user store to only allow a DATASOURCE property, and set it in code to the default JNDI name I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
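For illustration, a rough sketch of how that constants class might force the datasource-only configuration; only JDBCRealmConstants.DATASOURCE, setMandatoryProperty and the class name CustomUserStoreConstants come from the text above, the JNDI name and other details are assumptions:
import java.util.ArrayList;
import org.wso2.carbon.user.api.Property;
import org.wso2.carbon.user.core.jdbc.JDBCRealmConstants;

public class CustomUserStoreConstants {

    public static final ArrayList<Property> CUSTOM_UM_MANDATORY_PROPERTIES = new ArrayList<Property>();

    static {
        // Only the datasource is mandatory; the pool itself (testOnBorrow,
        // validationQuery, validationInterval, ...) is configured in
        // master-datasources.xml under this JNDI name.
        setMandatoryProperty(JDBCRealmConstants.DATASOURCE, "jdbc/MyUserStoreDS",
                "JNDI name of the datasource defined in master-datasources.xml");
    }

    private static void setMandatoryProperty(String name, String value, String description) {
        CUSTOM_UM_MANDATORY_PROPERTIES.add(new Property(name, value, description, null));
    }
}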
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to include or exclude, and could just manage it all in the xml file easily. This really has advantages for portability of configs and IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)
I am getting this exception when I am trying to start the WAS server. I have created the JMS providers in the console and set all the jar files in the classpath.
The external initial context factory defined is com.sonicsw.jndi.mfcontext.MFContextFactory, with a valid URL.
I am not sure if the issue is with the WebSphere configuration settings or the code.
Can someone please provide any context to move forward?
Can you add more information? For example, the classpath and the properties set up for your connector.
Check this documentation:
http://documentation.progress.com/output/Sonic/8.5.1/jca_books/resadapwas_guide.pdf
Regards
I cannot control logging levels for my code in a WebSphere Liberty Profile server.
I have configured the server.xml on the server not to log hibernate and spring, since my logs will get flooded with activity from those two frameworks. I commonly do this using log4j and it works fine in standalone WAS.
<logging consoleLogLevel="INFO" copySystemStreams="true" traceFormat="ENHANCED" traceSpecification="org.springframework.*=off:com.ibm.ws.*=off:org.hibernate*=off"/>
In Liberty this does not work.
I get the following log when liberty updates the configuration (when I save server.xml with the changes):
[INFO ] TRAS0040I: The configured trace state included the following specifications that do not match any loggers currently registered in the server: org.hibernate*=off:org.springframework.*=off
Basically this message applies to any of my code and any third party code (Spring, Hibernate, etc).
However the traceSpecification levels work fine for the IBM classes, and I'm able to specify *=off, which effectively turns off all logging.
Has anyone experienced this?
IBM's documentation for TRAS0040I seems simple enough, but I can't seem to figure out why my loggers are not getting registered with the server.
Liberty doesn't have rich control over logging. You should understand the difference between "logging" and "tracing". Check the description of the console.log, messages.log and trace.log files at the beginning of: http://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
Your configuration in "traceSpecification" will actually do nothing, as the Spring and Hibernate messages are JVM logs and don't go to trace, so the trace configuration doesn't affect them.
All you can configure in Liberty for JVM logs is consoleLogLevel (INFO, AUDIT, WARNING, ERROR, and OFF).
If you want to configure log levels for specific components in Liberty, you should use, for example, log4j with its own configuration.
I am trying to set up a Struts project locally. One way I know to set up JDBC settings is to go to the administrative console of WebSphere and create a JDBC provider, a JNDI entry and so on. But is there any other way to do it in the code itself?
There is a resource reference in web.xml along these lines (I am totally new to Struts, please help):
DataSourceAlias
javax.sql.Data...... etc etc
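For reference, a resource-ref like that is typically consumed in code via a JNDI lookup against java:comp/env. A minimal sketch, where "DataSourceAlias" is taken from the fragment above and everything else is an assumption:
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static Connection getConnection() throws NamingException, SQLException {
        // "DataSourceAlias" must match the <res-ref-name> in web.xml; the container
        // maps it to the datasource configured in the WebSphere admin console.
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/DataSourceAlias");
        return ds.getConnection();
    }
}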
If you configured this for WAS 6.1 and the configuration is good, you need to stop and start the node agents for the changes to get propagated, then test the JDBC connection after restarting. If it was WAS 8, the changes would be propagated automatically, which means the configuration is improper.