Spent half a day solving the problem of EJB client lookup in GlassFish 4.
The bean:
@Stateless
@LocalBean
public class TestBean implements TestRemote {

    @PersistenceContext(unitName = "testunit")
    private EntityManager em;

    public String sayHello() { return "hello"; }
}
The remote business interface:
@javax.ejb.Remote
public interface TestRemote {
    public String sayHello();
}
Client lookup code; I tried the following variants (the client can be on another machine, but in this test both are running on the same machine):
new InitialContext().lookup("java:global/myproject/TestBean");
new InitialContext().lookup("java:global/myproject/TestRemote");
new InitialContext().lookup("TestRemote");
new InitialContext().lookup("TestBean");
new InitialContext().lookup(TestBean.class.getName());
The project is deployed as a war (myproject.war).
This works:
new InitialContext().lookup("com.example.TestRemote");
If you're using GlassFish, it normally says what the portable JNDI names are in the logs, e.g.:
INFO: EJB5181:Portable JNDI names for EJB ProductsBean: [java:global/products-ejb/ProductsBean, java:global/products-ejb/ProductsBean!com.sample.product.ejb.ProductsRemote]
so yours will be along the lines of:
java:global/<ejb-module-name>/TestBean
This looks similar to your question but I could not tell whether "myproject" was the Enterprise Application Name or the EJB Module name...
or
java:global/<enterpriseApplicationName>/<ejb-module-name>/TestBean!<package_of_remote>.TestRemote
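For example, a standalone remote-client lookup using the portable name might look like the sketch below; the module name "myproject" and the package "com.example" are assumptions taken from the question, so adjust them to whatever the deployment log prints.

import javax.naming.InitialContext;

public class TestClient {
    public static void main(String[] args) throws Exception {
        // Assumed portable name: WAR module "myproject", remote interface com.example.TestRemote
        InitialContext ctx = new InitialContext();
        TestRemote bean = (TestRemote) ctx.lookup(
                "java:global/myproject/TestBean!com.example.TestRemote");
        System.out.println(bean.sayHello());
    }
}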
Related
I have a Java web application developed on the Spring framework which uses MyBatis. I see that the datasource is defined in beans.xml. Now I want to add a secondary data source as a backup. For example, if the application is not able to connect to the DB and gets some error, or if the server is down, then it should be able to connect to a different datasource. Is there a configuration in Spring to do this, or will we have to code this manually in the application?
I have seen primary and secondary notations in Spring Boot but nothing in Spring. I could achieve this in my code where the connection is created/retrieved, by connecting to the secondary datasource if the connection to the primary datasource fails or times out. But I wanted to know if this can be achieved by making changes just in the Spring configuration.
Let me clarify things one by one:
Spring (and hence Spring Boot) has an @Primary annotation, but there is no @Secondary annotation.
The purpose of the @Primary annotation is not what you have described. Spring does not automatically switch data sources in any way. @Primary merely tells Spring which data source bean to use when we don't explicitly specify one. For more detail, see https://www.baeldung.com/spring-data-jpa-multiple-databases
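As a minimal sketch of that point (the bean names and JDBC URLs are made up), @Primary simply marks the default DataSource when more than one candidate exists:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    // @Primary marks this bean as the default whenever a DataSource is
    // injected without an explicit qualifier.
    @Bean
    @Primary
    public DataSource primaryDataSource() {
        return new DriverManagerDataSource("jdbc:mysql://primary-host/app"); // assumed URL
    }

    @Bean
    public DataSource secondaryDataSource() {
        return new DriverManagerDataSource("jdbc:mysql://secondary-host/app"); // assumed URL
    }
}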
Now, how do we actually switch datasources when one goes down?
Most people don't manage this kind of high availability in code. People usually prefer to run two master database instances in an active-passive mode which are kept in sync. For auto-failover, something like keepalived can be used. This is also a highly subjective and contentious topic, and there are a lot of things to consider here, like whether we can afford replication lag, whether there are slaves running for each master (because then we have to switch slaves too, as the old master's slaves would now become out of sync), etc. If you have databases spread across regions, this becomes even more difficult (read: awesome) and requires yet more engineering, planning, and design.
Now, since the question specifically mentions using application code for this, there is one thing you can do. I don't advise using it in production though. EVER. You can create an AspectJ advice around all your primary transactional methods using your own custom annotation. Let's call this annotation @SmartTransactional for our demo.
Sample code. I did not test it, though:
import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SmartTransactional {}
public class SomeServiceImpl implements SomeService {

    @SmartTransactional
    @Transactional("primaryTransactionManager")
    public boolean someMethod() {
        // call a common method here for code reusability, or create an abstract class
        return true; // placeholder so the sketch compiles
    }
}
public class SomeServiceSecondaryTransactionImpl implements SomeService {

    @Transactional("secondaryTransactionManager")
    public boolean usingTransactionManager2() {
        // call a common method here for code reusability, or create an abstract class
        return true; // placeholder so the sketch compiles
    }
}
@Component
@Aspect
public class SmartTransactionalAspect {

    @Autowired
    private ApplicationContext context;

    @Pointcut("@annotation(...SmartTransactional)")
    public void smartTransactionalAnnotationPointcut() {
    }

    @Around("smartTransactionalAnnotationPointcut()")
    public Object methodsAnnotatedWithSmartTransactional(final ProceedingJoinPoint joinPoint) throws Throwable {
        Method method = getMethodFromTarget(joinPoint);
        Object result = joinPoint.proceed();

        boolean failure = Boolean.TRUE; // check if the result is a failure
        if (failure) {
            // get the class name from the joinPoint and append "SecondaryTransactionImpl"
            // instead of "Impl" to build the fallback bean name
            String secondaryTransactionManagerBeanName = "";
            Object bean = context.getBean(secondaryTransactionManagerBeanName);
            result = bean.getClass().getMethod(method.getName()).invoke(bean);
        }
        return result;
    }
}
I'm trying to set up a project with two data sources, one is MongoDB and the other is Postgres. I have repositories for each data source in different packages and I annotated my main class as follows:
@Import({MongoDBConfiguration.class, PostgresDBConfiguration.class})
@SpringBootApplication(exclude = {
        MongoRepositoriesAutoConfiguration.class,
        JpaRepositoriesAutoConfiguration.class
})
public class TemporaryRunner implements CommandLineRunner {
    ...
}
MongoDBConfiguration:
@Configuration
@EnableMongoRepositories(basePackages = {
        "com.example.datastore.mongo",
        "com.atlassian.connect.spring"})
public class MongoDBConfiguration {
    ...
}
PostgresDBConfiguration:
@Configuration
@EnableJpaRepositories(basePackages = {
        "com.example.datastore.postgres"
})
public class PostgresDBConfiguration {
    ...
}
And even though I specified the base packages as described in the documentation, I still get these messages in the console:
13:10:44.238 [main] [] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
13:10:44.266 [main] [] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.atlassian.connect.spring.AtlassianHostRepository.
I managed to solve this issue for all my repositories by using MongoRepository and JpaRepository, but AtlassianHostRepository comes from an external lib and is a regular CrudRepository (which totally makes sense, because the consumer of the lib can decide what type of DB they would like to use). Anyway, it looks like the basePackages I specified are completely ignored and not used in any way; even though I specified the com.atlassian.connect.spring package only in @EnableMongoRepositories, Spring Data somehow can't figure out which data module should be used.
Am I doing something wrong? Is there any other way I could tell Spring Data to use Mongo for AtlassianHostRepository without changing AtlassianHostRepository.class itself?
The only working solution I found was to let Spring Data ignore AtlassianHostRepository (because it couldn't figure out which data source to use), then create a separate configuration for it and simply create it by hand:
@Configuration
@Import({MongoDBConfiguration.class})
public class AtlassianHostRepositoryConfiguration {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public AtlassianHostRepositoryConfiguration(final MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        RepositoryFactorySupport factory = new MongoRepositoryFactory(mongoTemplate);
        return factory.getRepository(AtlassianHostRepository.class);
    }
}
This solution works fine for a small or limited number of repositories used from a library; it would be rather cumbersome to create all the repositories by hand when there are more of them. But after reading the source code of spring-data I see no way to make it work with basePackages as stated in the documentation (I may be wrong though).
I'm trying to develop a Spring Boot application which offers the user the possibility to create and call some simple workflows.
The steps of the workflows are already written (they all extend the same class), and, when the user creates a workflow, he/she just picks which steps to include in it. The steps and the workflows are saved in a database.
My problem comes when the user calls the workflow: I want to instantiate each step dynamically using the class loader, but with the dependencies injected by Spring!
Here is an example of a plug-in:
public class HelloWorldStepPlugin extends StepPlugin {

    private static final Logger LOG = LogManager.getLogger();

    @Autowired
    private HelloWorldRepository repository;

    public HelloWorldStepPlugin() {
        super(HelloWorldStepPlugin.class.getSimpleName());
    }

    @Override
    public void process() {
        LOG.info("Hello world!");
        this.repository.findAll(); // <= throws a NullPointerException because this.repository is null
    }
}
Here is how I execute a Workflow (in another class):
ClassLoader cl = getClass().getClassLoader();
for (Step s : workflow.getSteps()) {
    StepPlugin sp = (StepPlugin) cl.loadClass(STEP_PLUGIN_PACKAGE + s.getPlugin()).newInstance();
    sp.process();
}
How can I have my HelloWorldRepository injected by Spring?
Is there a much better approach to do what I intend to?
I suggest you declare your steps as prototype beans. Instead of saving class names in the database, save bean names. Then get the steps and the plugins from the spring context (i.e. using getBean()).
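A minimal sketch of that idea follows; the StepPlugin, Step, Workflow and HelloWorldRepository types are taken from the question, and the bean name "helloWorldStepPlugin" is an assumption.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

// Each step is a prototype-scoped bean, so Spring creates a fresh, fully injected
// instance every time it is requested from the context.
@Component("helloWorldStepPlugin")
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
class HelloWorldStepPlugin extends StepPlugin {

    @Autowired
    private HelloWorldRepository repository;

    public HelloWorldStepPlugin() {
        super(HelloWorldStepPlugin.class.getSimpleName());
    }

    @Override
    public void process() {
        repository.findAll(); // repository is now injected by Spring
    }
}

@Component
class WorkflowRunner {

    @Autowired
    private ApplicationContext context;

    // The database stores bean names (e.g. "helloWorldStepPlugin") instead of class names.
    public void run(Workflow workflow) {
        for (Step s : workflow.getSteps()) {
            StepPlugin sp = context.getBean(s.getPlugin(), StepPlugin.class);
            sp.process();
        }
    }
}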
I am developing a Spring MVC web app using Spring 3.2. We will deploy the web app to different customers. Each customer may use one of several implementations of a service interface.
It's possible that the customer may need to reset these values, so we can't just hard-wire the implementation into the application; it needs to be externally configurable.
We are already using customer-specific property files for setting simple properties such as Strings, numbers, etc., but I'm asking how to set a particular implementation of an interface.
E.g.,
class MyClass {

    // this is straightforward
    @Value("${customer.propertyInPropertyFile}")
    private String customerSpecificString;

    // how to set the correct implementation for each customer?
    private ISomeService service;
}
If there are 4 implementations of ISomeService, we can't autowire or explicitly set a bean, as this would then be fixed in the compiled code, and it needs to be configurable after the application is deployed (it would be OK to restart the application if need be).
Does anyone know how to do this? Would this be better done using Spring EL, or profiles?
Thanks!
So, as I wanted to use Java configuration, I used the following solution:
@Configuration
@Profile("prod")
@EnableAsync
public class ProductionConfig extends BaseConfig {

    // inject the property value which identifies the implementation to use
    @Value("${service.impl}")
    private String serviceName;

    @Bean
    public IRepository repository() {
        IRepository rc;
        if (StringUtils.isEmpty(serviceName)) {
            rc = new Impl1();
        } else if ("sword-mets".equals(serviceName)) {
            rc = new Impl2();
        } else {
            rc = new Impl3();
        }
        log.info("Setting in repository implementation " + rc);
        return rc;
    }
}
So, this isn't quite as clean as the suggested reply using aliases (the Config class needs to know about all the possible implementation classes), but it is simple and avoids having to use the XML config just for this bean.
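If knowing every implementation in the Config class becomes a burden, one possible variation (not part of the original answer, and assuming each implementation is itself registered as a Spring bean and that a hypothetical service.impl.bean property holds the chosen bean name) is to resolve the implementation by bean name:

@Bean
public IRepository repository(ApplicationContext context,
                              @Value("${service.impl.bean}") String beanName) {
    // Look the implementation up by bean name, so this config class never
    // references Impl1/Impl2/Impl3 directly.
    return context.getBean(beanName, IRepository.class);
}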
In the following Connector/J reference for JDBC/MySQL, it suggests we cache the instances of InitialContext and DataSource. Would just making it a private static instance solve the caching? Shouldn't one be concerned with thread safety (if at all)? What is the best 'place' to cache this for a web app (Restlet + GlassFish/Java EE + MySQL)?
There is a GenericDAO class that is the root of the data-access classes, so to speak. So would just having static instances actually solve the problem? It would force some of the methods to be static, which we don't want. Suggestions?
Thanks!
public void doSomething() throws Exception {
    /*
     * Create a JNDI Initial context to be able to
     * lookup the DataSource.
     *
     * In production-level code, this should be cached as
     * an instance or static variable, as it can
     * be quite expensive to create a JNDI context.
     *
     * Note: This code only works when you are using servlets
     * or EJBs in a Java EE application server. If you are
     * using connection pooling in standalone Java code, you
     * will have to create/configure datasources using whatever
     * mechanisms your particular connection pooling library
     * provides.
     */
    InitialContext ctx = new InitialContext();

    /*
     * Lookup the DataSource, which will be backed by a pool
     * that the application server provides. DataSource instances
     * are also a good candidate for caching as an instance
     * variable, as JNDI lookups can be expensive as well.
     */
    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MySQLDB");

    /*
     * Remaining code here...
     */
}
If you're using JAX-RS, then you can use the @Context annotation.
E.g.
@Context
private ServletContext context;

@GET
@Path("whatevers")
public List<Whatever> getWhatevers() {
    DataSource dataSource = Config.getInstance(context).getDataSource();
    // ...
}
However, if the @Resource annotation is also supported in your Restlet environment, you could make use of it as well.
@Resource(mappedName="jdbc/MySQLDB")
private DataSource dataSource;
This would in turn technically better be placed in an EJB, which you then inject by @EJB in your web service.
@Stateless
public class WhateverDAO {

    @Resource(mappedName="jdbc/MySQLDB")
    private DataSource dataSource;

    public List<Whatever> list() {
        // ...
    }
}
with
@EJB
private WhateverDAO whateverDAO;

@GET
@Path("whatevers")
public List<Whatever> getWhatevers() {
    return whateverDAO.list();
}
Following up on BalusC's link, I can confirm that we could do the same thing when using Restlet. However, as per the code in the example, to get the Config instance you pass in the ServletContext as an argument. Restlet is like 'another' framework that uses Servlets as an adapter to configure itself, so it'll be tricky to pass the ServletContext as an argument from somewhere else in the code (Restlet uses its own Context object, which is conceptually similar to ServletContext).
For my case, a static method returning the cached DataSource seems 'clean enough', but there could be other design/organization approaches.
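As a rough sketch of that static-caching idea (the JNDI name is taken from the snippet above; the class and method names are hypothetical), a lazily initialized holder keeps the one-time lookup thread-safe without forcing the DAO methods themselves to be static:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Hypothetical helper that caches the DataSource resolved via JNDI.
// The class-holder idiom defers the lookup until first use and lets the
// JVM's class-initialization guarantees handle thread safety.
public final class DataSourceProvider {

    private DataSourceProvider() {}

    private static final class Holder {
        private static final DataSource DS = lookup();

        private static DataSource lookup() {
            try {
                return (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/MySQLDB");
            } catch (NamingException e) {
                throw new IllegalStateException("DataSource lookup failed", e);
            }
        }
    }

    public static DataSource getDataSource() {
        return Holder.DS;
    }
}

A GenericDAO instance could then call DataSourceProvider.getDataSource() from its instance methods, so nothing else in the DAO hierarchy has to become static.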