NullPointerException when injecting a bean into a job created via Quartz (Spring)

The context is as follows:
I have a web app using Spring 2.5 and Struts 1.1.
I create a job dynamically in an Action using Quartz:
JobDetailBean jobDetail = new JobDetailBean();
jobDetail.setBeanName("foo");

Map<String, String> map = new HashMap<String, String>();
map.put("idFeed", "foo");
map.put("idSite", "foo");
jobDetail.setJobDataAsMap(map);
jobDetail.setJobClass(FeedJob.class);
jobDetail.afterPropertiesSet();

CronTriggerBean cronTrigger = new CronTriggerBean();
cronTrigger.setBeanName("foo");
String expression = " * * * * * *";
cronTrigger.setCronExpression(expression);
cronTrigger.afterPropertiesSet();

// add to schedule
scheduler.scheduleJob((JobDetail) jobDetail, cronTrigger);
scheduler is an org.quartz.Scheduler injected into the Action.
The class FeedJob has the method executeInternal(JobExecutionContext ctx), which contains the code the job has to run:
public class FeedJob extends QuartzJobBean {

    private FeedBL feedBL;

    public void setFeedBL(FeedBL feedBL) { this.feedBL = feedBL; }

    public FeedJob() {}

    public String idFeed;
    public String idSite;

    public String getIdFeed() {
        return idFeed;
    }

    public void setIdFeed(String idFeed) {
        this.idFeed = idFeed;
    }

    public String getIdSite() {
        return idSite;
    }

    public void setIdSite(String idSite) {
        this.idSite = idSite;
    }

    protected void executeInternal(JobExecutionContext ctx) throws JobExecutionException {
        try {
            feedBL.sincronizacionProductFeed(idFeed, idSite);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
And when it is about to run, I get a java.lang.NullPointerException on this line:
feedBL.sincronizacionProductFeed(idFeed, idSite);
The reason is that when I create the job in the Action, I only set the job class:
jobDetail.setJobClass(FeedJob.class);
So Spring knows nothing about the FeedJob instance Quartz creates, and that instance never gets the feedBL bean injected.
Any good idea for solving this problem?
I have tried to give the job the context like this:
jobDetail.setApplicationContext(applicationContext);
But it doesn't work.

You may want to check this answer. It solves the same problem you are experiencing.
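In short, the usual fix is to create the Scheduler through Spring so that a Spring-aware job factory instantiates the jobs. A minimal sketch, assuming Spring's SchedulerFactoryBean and SpringBeanJobFactory (the wiring is illustrative and would normally live in your bean configuration, not in the Action):

// SpringBeanJobFactory applies scheduler-context, job-data-map and
// trigger-data-map entries as bean properties on each new job instance,
// so setFeedBL(...), setIdFeed(...) and setIdSite(...) are all called
// when a FeedJob is created.
SchedulerFactoryBean factoryBean = new SchedulerFactoryBean();
factoryBean.setJobFactory(new SpringBeanJobFactory());

// expose the Spring-managed FeedBL bean to every job via the scheduler context
Map<String, Object> schedulerContext = new HashMap<String, Object>();
schedulerContext.put("feedBL", feedBL);
factoryBean.setSchedulerContextAsMap(schedulerContext);

factoryBean.afterPropertiesSet(); // declares throws Exception
Scheduler scheduler = factoryBean.getScheduler();

With this in place, the dynamically created JobDetailBean from the question can be scheduled unchanged, because injection happens when each job instance is created rather than when the job is defined.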

Related

How to run Quarkus programmatically in test mode

I am trying to run acceptance tests with Concordion fixtures in a Quarkus project. Concordion does not work with JUnit 5, so I am using its original @RunWith(ConcordionRunner.class).
I am creating a superclass to start my Quarkus application before the tests, like this:
@RunWith(ConcordionRunner.class)
public abstract class AbstractFixture {

    public static RunningQuarkusApplication application;
    protected static RequestSpecification server;

    protected AbstractFixture() {
        setUp();
    }

    public void setUp() {
        if (application == null) {
            startApplication();
            server = new RequestSpecBuilder()
                    .setPort(8081)
                    .setContentType(ContentType.JSON)
                    .build();
        }
    }

    private void startApplication() {
        try {
            PathsCollection.Builder rootBuilder = PathsCollection.builder();
            Path testClassLocation = PathTestHelper.getTestClassesLocation(getClass());
            rootBuilder.add(testClassLocation);
            final Path appClassLocation = PathTestHelper.getAppClassLocationForTestLocation(
                    testClassLocation.toString());
            rootBuilder.add(appClassLocation);
            application = QuarkusBootstrap.builder()
                    .setIsolateDeployment(false)
                    .setMode(QuarkusBootstrap.Mode.TEST)
                    .setProjectRoot(Paths.get("").normalize().toAbsolutePath())
                    .setApplicationRoot(rootBuilder.build())
                    .build()
                    .bootstrap()
                    .createAugmentor()
                    .createInitialRuntimeApplication()
                    .run();
        } catch (BindException e) {
            e.printStackTrace();
            System.out.println("Address already in use - which is fine!");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
The code above works, but I can't change the default port 8081 to anything else.
If I print the config property in my test class as below, it prints the port correctly, but Quarkus is not running on it:
public class HelloFixture extends AbstractFixture {

    public String getGreeting() {
        Response response = given(server).when().get("/hello");
        System.out.println("Config[port]: " + application.getConfigValue("quarkus.http.port", String.class));
        return response.asString();
    }
}
How can I specify the configuration file or a property programmatically before run?
I found the answer. At first I was referencing the wrong property, "quarkus.http.port", instead of "quarkus.http.test-port".
Despite that, I found a way to override properties before run:
...
StartupAction action = QuarkusBootstrap.builder()
        .setIsolateDeployment(false)
        .setMode(QuarkusBootstrap.Mode.TEST)
        .setProjectRoot(Paths.get("").normalize().toAbsolutePath())
        .setApplicationRoot(rootBuilder.build())
        .build()
        .bootstrap()
        .createAugmentor()
        .createInitialRuntimeApplication();
action.overrideConfig(getConfigOverride());
application = action.run();
...

private Map<String, String> getConfigOverride() {
    Map<String, String> config = new HashMap<>();
    config.put("quarkus.http.test-port", "18082");
    return config;
}

How to accept HTTP requests after a shutdown signal in Quarkus?

I tried this:
void onShutdown(@Observes final ShutdownEvent event) throws InterruptedException {
    log.infof("ShutdownEvent received, waiting for %s seconds before shutting down", shutdownWaitSeconds);
    TimeUnit.SECONDS.sleep(shutdownWaitSeconds);
    log.info("Continue shutting down");
}
But after receiving the ShutdownEvent, Quarkus already responds with 503 to HTTP requests. It looks like this could be done with a ShutdownListener in its preShutdown method. I have implemented this listener, but it does not get called. How do I register a ShutdownListener?
The use case here is OpenShift sending requests to a terminating pod.
Option 1: Create a Quarkus extension
Instructions are here. ShutdownController is my own class implementing ShutdownListener, with a sleep in its preShutdown method (a sketch of it follows the listing below).
class ShutdownControllerProcessor {

    @BuildStep
    FeatureBuildItem feature() {
        return new FeatureBuildItem("shutdown-controller");
    }

    @BuildStep
    ShutdownListenerBuildItem shutdownListener() {
        // Called at build time. Default constructor will be called at runtime.
        // Getting MethodNotFoundException when calling default constructor here.
        return new ShutdownListenerBuildItem(new ShutdownController(10));
    }
}
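For reference, a minimal sketch of what such a ShutdownController might look like (only the class name and the wait time appear in the answer; the body below is an assumption based on Quarkus' ShutdownListener contract):

// Hypothetical ShutdownController: delays preShutdown so the pod keeps
// serving requests while OpenShift drains traffic.
public class ShutdownController implements ShutdownListener {

    private final int waitSeconds;

    public ShutdownController() {
        this(10); // default constructor is what Quarkus calls at runtime
    }

    public ShutdownController(int waitSeconds) {
        this.waitSeconds = waitSeconds;
    }

    @Override
    public void preShutdown(ShutdownNotification notification) {
        try {
            TimeUnit.SECONDS.sleep(waitSeconds);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            notification.done(); // always signal completion
        }
    }

    @Override
    public void shutdown(ShutdownNotification notification) {
        notification.done();
    }
}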
Option 2: Modify the ShutdownRecorder private static final field
A new shutdown listener can be added using reflection. This is a bit of an ugly solution.
registerIfNeeded() needs to be called after Quarkus startup, for example with a timer one second after @PostConstruct (see the wiring sketch after this listing).
@ApplicationScoped
public class ListenerRegisterer {

    public void registerIfNeeded() {
        try {
            tryToRegister();
        } catch (NoSuchFieldException | IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    private void tryToRegister() throws NoSuchFieldException, IllegalAccessException {
        final var field = ShutdownRecorder.class.getDeclaredField("shutdownListeners");
        field.setAccessible(true);
        final var listeners = (List<ShutdownListener>) field.get(null);
        if (listeners != null && !listeners.toString().contains("ShutdownController")) {
            listeners.add(new ShutdownController(10));
            setFinalStatic(field, listeners);
        }
    }

    private static void setFinalStatic(final Field field, final Object newValue) throws NoSuchFieldException, IllegalAccessException {
        field.setAccessible(true);
        final var modifiersField = Field.class.getDeclaredField("modifiers");
        modifiersField.setAccessible(true);
        modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);
        field.set(null, newValue);
    }
}
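For illustration, the delayed registration could be wired as in this sketch (the class and its wiring are hypothetical; it substitutes a StartupEvent observer with a plain java.util.Timer for the "@PostConstruct plus timer" idea above):

@ApplicationScoped
public class ListenerRegistrationStarter {

    @Inject
    ListenerRegisterer registerer;

    // Run registerIfNeeded() one second after startup, by which time Quarkus
    // should have populated ShutdownRecorder's static listener list.
    void onStart(@Observes StartupEvent event) {
        new Timer("shutdown-listener-registerer").schedule(new TimerTask() {
            @Override
            public void run() {
                registerer.registerIfNeeded();
            }
        }, 1000L);
    }
}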

Spring Batch: Assemble a job rather than configuring it (extensible job configuration)

Background
I am working on designing a file reading layer that can read delimited files and load them into a List. I have decided to use Spring Batch because it provides a lot of scalability options, which I can leverage for different sets of files depending on their size.
The requirement
I want to design a generic Job API that can be used to read any delimited file.
There should be a single job structure used for parsing every delimited file. For example, if the system needs to read 5 files, there will be 5 jobs (one for each file). The only way the 5 jobs will differ from each other is that they will use a different FieldSetMapper, column names, directory path and additional scaling parameters such as commit-interval and throttle-limit.
The user of this API should not need to configure a Spring Batch job, step, chunking, partitioning, etc. on his own when a new file type is introduced into the system.
All the user needs to do is provide the FieldSetMapper to be used by the job, along with the commit-interval, throttle-limit and the directory where each type of file will be placed.
There will be one predefined directory per file type. Each directory can contain multiple files of the same type and format. A MultiResourcePartitioner will be used to look inside a directory. The number of partitions = the number of files in the directory.
My requirement is to build a Spring Batch infrastructure that gives me a unique job I can launch once I have the bits and pieces that will make up the job.
My solution:
I created an abstract configuration class that will be extended by concrete configuration classes (there will be one concrete class per file type to be read).
@Configuration
@EnableBatchProcessing
public abstract class AbstractFileLoader<T> {

    private static final String FILE_PATTERN = "*.dat";

    @Autowired
    JobBuilderFactory jobs;

    @Autowired
    ResourcePatternResolver resourcePatternResolver;

    public final Job createJob(Step s1, JobExecutionListener listener) {
        return jobs.get(this.getClass().getSimpleName())
                .incrementer(new RunIdIncrementer()).listener(listener)
                .start(s1).build();
    }

    public abstract Job loaderJob(Step s1, JobExecutionListener listener);

    public abstract FieldSetMapper<T> getFieldSetMapper();

    public abstract String getFilesPath();

    public abstract String[] getColumnNames();

    public abstract int getChunkSize();

    public abstract int getThrottleLimit();

    @Bean
    @StepScope
    public FlatFileItemReader<T> reader(@Value("#{stepExecutionContext['fileName']}") String file) {
        FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
        String path = file.substring(file.indexOf(":") + 1, file.length());
        FileSystemResource resource = new FileSystemResource(path);
        reader.setResource(resource);
        DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
        lineMapper.setFieldSetMapper(getFieldSetMapper());
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
        tokenizer.setNames(getColumnNames());
        lineMapper.setLineTokenizer(tokenizer);
        reader.setLineMapper(lineMapper);
        reader.setLinesToSkip(1);
        return reader;
    }

    @Bean
    public ItemProcessor<T, T> processor() {
        // TODO add transformations here
        return null;
    }

    @Bean
    @JobScope
    public ListItemWriter<T> writer() {
        ListItemWriter<T> writer = new ListItemWriter<T>();
        return writer;
    }

    @Bean
    @JobScope
    public Step readStep(StepBuilderFactory stepBuilderFactory,
            ItemReader<T> reader, ItemWriter<T> writer,
            ItemProcessor<T, T> processor, TaskExecutor taskExecutor) {
        final Step readerStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:slave")
                .<T, T> chunk(getChunkSize()).reader(reader)
                .processor(processor).writer(writer).taskExecutor(taskExecutor)
                .throttleLimit(getThrottleLimit()).build();
        final Step partitionedStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:master")
                .partitioner(readerStep)
                .partitioner(
                        this.getClass().getSimpleName() + " ReadStep:slave",
                        partitioner()).taskExecutor(taskExecutor).build();
        return partitionedStep;
    }

    /*
     * @Bean public TaskExecutor taskExecutor() { return new
     * SimpleAsyncTaskExecutor(); }
     */

    @Bean
    @JobScope
    public Partitioner partitioner() {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        Resource[] resources;
        try {
            resources = resourcePatternResolver.getResources("file:"
                    + getFilesPath() + FILE_PATTERN);
        } catch (IOException e) {
            throw new RuntimeException(
                    "I/O problems when resolving the input file pattern.", e);
        }
        partitioner.setResources(resources);
        return partitioner;
    }

    @Bean
    @JobScope
    public JobExecutionListener listener(ListItemWriter<T> writer) {
        return new JobCompletionNotificationListener<T>(writer);
    }

    /*
     * Use this if you want the writer to have job scope (JIRA BATCH-2269). Also
     * change the return type of writer to ListItemWriter for this to work.
     */
    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor() {
            @Override
            protected void doExecute(final Runnable task) {
                // gets the jobExecution of the configuration thread
                final JobExecution jobExecution = JobSynchronizationManager
                        .getContext().getJobExecution();
                super.doExecute(new Runnable() {
                    public void run() {
                        JobSynchronizationManager.register(jobExecution);
                        try {
                            task.run();
                        } finally {
                            JobSynchronizationManager.close();
                        }
                    }
                });
            }
        };
    }
}
Let's say, for the sake of discussion, I have to read invoice data. I can then extend the above class to create an InvoiceLoader:
@Configuration
public class InvoiceLoader extends AbstractFileLoader<Invoice> {

    private class InvoiceFieldSetMapper implements FieldSetMapper<Invoice> {
        public Invoice mapFieldSet(FieldSet f) {
            Invoice invoice = new Invoice();
            invoice.setNo(f.readString("INVOICE_NO"));
            return invoice;
        }
    }

    @Override
    public FieldSetMapper<Invoice> getFieldSetMapper() {
        return new InvoiceFieldSetMapper();
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/invoices/partitions/";
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "INVOICE_NO", "DATE" };
    }

    @Override
    @Bean(name = "invoiceJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 25254;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}
Let's say I have one more class called InventoryLoader that also extends AbstractFileLoader.
On application startup, I can load these two annotation configurations as follows:
AbstractApplicationContext context1 = new AnnotationConfigApplicationContext(InvoiceLoader.class, InventoryLoader.class);
Somewhere else in my application, two different threads can launch the jobs as follows:
Thread 1:
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("invoiceJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
Thread 2:
JobLauncher jobLauncher2 = context1.getBean(JobLauncher.class);
Job job2 = context1.getBean("inventoryJob", Job.class);
JobExecution jobExecution = jobLauncher2.run(job2, jobParams2);
The advantage of this approach is that whenever there is a new file to be read, all the developer/user has to do is subclass AbstractFileLoader and implement the required abstract methods, without needing to get into the details of how to assemble the job.
The questions:
I am new to Spring Batch, so I may have overlooked some not-so-obvious issues with this approach, such as shared internal objects in Spring Batch that may cause two jobs running together to fail, or obvious issues such as the scoping of the beans.
Is there a better way to achieve my objective?
The fileName attribute of @Value("#{stepExecutionContext['fileName']}") is always assigned the value I:/CK/invoices/partitions/, which is the value returned by the getFilesPath method in InvoiceLoader, even though the getFilesPath method in InventoryLoader returns a different value.
One option is passing them as job parameters. For instance (this answer's snippets are in Groovy):
@Bean
Job job() {
    jobs.get("myJob").start(step1(null)).build()
}

@Bean
@JobScope
Step step1(@Value('#{jobParameters["commitInterval"]}') commitInterval) {
    steps.get('step1')
            .chunk((int) commitInterval)
            .reader(new IterableItemReader(iterable: [1, 2, 3, 4], name: 'foo'))
            .writer(writer(null))
            .build()
}

@Bean
@JobScope
ItemWriter writer(@Value('#{jobParameters["writerClass"]}') writerClass) {
    applicationContext.classLoader.loadClass(writerClass).newInstance()
}
With MyWriter:
class MyWriter implements ItemWriter<Integer> {
    @Override
    void write(List<? extends Integer> items) throws Exception {
        println "Write $items"
    }
}
Then executed with:
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([
        commitInterval: new JobParameter(3),
        writerClass: new JobParameter('MyWriter'),
]))
Output is:
INFO: Executing step: [step1]
Write [1, 2, 3]
Write [4]
Feb 24, 2016 2:30:22 PM org.springframework.batch.core.launch.support.SimpleJobLauncher$1 run
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{commitInterval=3, writerClass=MyWriter}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 0
#1 step1 COMPLETED
Full example here.

Can you think of a better way to load DBUnit only once per test class with Spring?

I realise that best practice may advise loading test data in every @Test method; however, this can be painfully slow with DBUnit, so I have come up with the following solution, which loads it only once per class:
Only load a data set once per test class
Support multiple data sources, including those not named "dataSource", from the ApplicationContext
Rollback of the inserted DBUnit data set is not strictly required
While the code below works, what is bugging me is that my test class has the static method beforeClassWithApplicationContext(), but it cannot belong to an interface because it is static, so my use of reflection is not type safe. Is there a more elegant solution?
/**
 * My Test class
 */
@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class, DbunitLoadOnceTestExecutionListener.class})
@ContextConfiguration(locations = {"classpath:resources/spring/applicationContext.xml"})
public class TestClass {

    public static final String TEST_DATA_FILENAME = "Scenario-1.xml";

    public static void beforeClassWithApplicationContext(ApplicationContext ctx) throws Exception {
        DataSource ds = (DataSource) ctx.getBean("dataSourceXyz");
        IDatabaseConnection conn = new DatabaseConnection(ds.getConnection());
        IDataSet dataSet = DbUnitHelper.getDataSetFromFile(conn, TEST_DATA_FILENAME);
        InsertIdentityOperation.CLEAN_INSERT.execute(conn, dataSet);
    }

    @Test
    public void somethingToTest() {
        // do stuff...
    }
}
/**
 * My new custom TestExecutionListener
 */
public class DbunitLoadOnceTestExecutionListener extends AbstractTestExecutionListener {

    final String methodName = "beforeClassWithApplicationContext";

    @Override
    public void beforeTestClass(TestContext testContext) throws Exception {
        super.beforeTestClass(testContext);
        Class<?> clazz = testContext.getTestClass();
        Method m = null;
        try {
            m = clazz.getDeclaredMethod(methodName, ApplicationContext.class);
        } catch (Exception e) {
            throw new Exception("Test class must implement " + methodName + "()", e);
        }
        m.invoke(null, testContext.getApplicationContext());
    }
}
One other thought I had was to create a static singleton class that holds a reference to the ApplicationContext, populating it from DbunitLoadOnceTestExecutionListener.beforeTestClass(). I could then retrieve that singleton reference from a standard @BeforeClass method defined on TestClass. My code above, calling back into each TestClass, just seems a little messy.
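For illustration, that holder could be as small as this sketch (all names are hypothetical):

// Hypothetical static holder: populated by the listener, read by the tests.
public final class TestApplicationContextHolder {

    private static ApplicationContext applicationContext;

    private TestApplicationContextHolder() {}

    // called from DbunitLoadOnceTestExecutionListener.beforeTestClass()
    public static void set(ApplicationContext ctx) {
        applicationContext = ctx;
    }

    // called from a standard @BeforeClass method on the test class
    public static ApplicationContext get() {
        return applicationContext;
    }
}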
After the helpful feedback from Matt and JB, here is a much simpler solution that achieves the desired result:
/**
 * My Test class
 */
@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class})
@ContextConfiguration(locations = {"classpath:resources/spring/applicationContext.xml"})
public class TestClass {

    private static final String TEST_DATA_FILENAME = "Scenario-1.xml";

    // must be static
    private static volatile boolean isDataSetLoaded = false;

    // use the Qualifier to select a specific dataSource
    @Autowired
    @Qualifier("dataSourceXyz")
    private DataSource dataSource;

    /**
     * For performance reasons, we only want to load the DBUnit data set once per
     * test class rather than before every test method.
     *
     * @throws Exception
     */
    @Before
    public void before() throws Exception {
        if (!isDataSetLoaded) {
            isDataSetLoaded = true;
            IDatabaseConnection conn = new DatabaseConnection(dataSource.getConnection());
            IDataSet dataSet = DbUnitHelper.getDataSetFromFile(conn, TEST_DATA_FILENAME);
            InsertIdentityOperation.CLEAN_INSERT.execute(conn, dataSet);
        }
    }

    @Test
    public void somethingToTest() {
        // do stuff...
    }
}
The class DbunitLoadOnceTestExecutionListener is no longer required and has been removed (including from the @TestExecutionListeners list above). It just goes to show that reading up on all the fancy techniques can sometimes cloud your own judgement :o)
Not a specialist, but couldn't you call an instance method of your test object in prepareTestInstance(), after having verified that it implements the appropriate interface, and call this method only the first time prepareTestInstance() is invoked with a test instance of this class? You would just have to keep a set of already seen classes:
private final Set<Class<?>> alreadySeenClasses = new HashSet<Class<?>>();

@Override
public void prepareTestInstance(TestContext testContext) throws Exception {
    MyDbUnitTest instance = (MyDbUnitTest) testContext.getTestInstance();
    if (!this.alreadySeenClasses.contains(instance.getClass())) {
        instance.beforeClassWithApplicationContext(testContext.getApplicationContext());
        this.alreadySeenClasses.add(instance.getClass());
    }
}

ClassCastException when using embedded GlassFish for unit tests

I'm running some unit tests on some EJBs via Maven and an embedded GlassFish container. One of my tests works, but all subsequent attempts to test a different EJB result in the same error:
java.lang.ClassCastException: $Proxy81 cannot be cast to
followed by whatever bean I'm attempting to test. I'm confident my setup is good since, as I say, one of my beans can be tested properly.
Example of working code:
@Stateful
public class LayoutManagerBean implements LayoutManager {

    private final Log LOG = LogFactory.getLog(LayoutManagerBean.class);

    public List<Menu> getMenus(User currentUser) {
        ...
    }
}

@Local
public interface LayoutManager {
    public List<Menu> getMenus(User user);
}
And the test:
public class LayoutManagerTest {

    private static EJBContainer ejbContainer;
    private static Context ctx;

    @BeforeClass
    public static void setUp() {
        ejbContainer = EJBContainer.createEJBContainer();
        ctx = ejbContainer.getContext();
    }

    @AfterClass
    public static void tearDown() {
        ejbContainer.close();
    }

    @Test
    public void getMenus() {
        LayoutManager manager = null;
        try {
            manager = (LayoutManager) ctx.lookup("java:global/classes/LayoutManagerBean!uk.co.monkeypower.openchurch.core.layout.beans.LayoutManager");
        } catch (NamingException e) {
            System.out.println("Failed to lookup the gosh darned bean!");
        }
        assertNotNull(manager);
        //Menu[] menus = manager.getMenus();
        //assertTrue(menus.length > 1);
    }
}
And an example of a failure:
@Singleton
public class OpenChurchPortalContext implements PortalContext {

    // note: these need to be mutable sets (Collections.emptySet() returns an
    // immutable set that would throw UnsupportedOperationException on add)
    private Set<PortletMode> portletModes = new HashSet<PortletMode>();
    private Set<WindowState> windowStates = new HashSet<WindowState>();
    private Properties portalProperties = new Properties();

    public OpenChurchPortalContext() {
        portletModes.add(PortletMode.VIEW);
        portletModes.add(PortletMode.HELP);
        portletModes.add(PortletMode.EDIT);
        portletModes.add(new PortletMode("ABOUT"));
        windowStates.add(WindowState.MAXIMIZED);
        windowStates.add(WindowState.MINIMIZED);
        windowStates.add(WindowState.NORMAL);
    }
    ...
}
And the test:
public class OpenChurchPortalContextTest {

    private static EJBContainer ejbContainer;
    private static Context ctx;

    @BeforeClass
    public static void setUp() {
        ejbContainer = EJBContainer.createEJBContainer();
        ctx = ejbContainer.getContext();
    }

    @AfterClass
    public static void tearDown() {
        ejbContainer.close();
    }

    @Test
    public void test() {
        OpenChurchPortalContext context = null;
        try {
            context = (OpenChurchPortalContext) ctx.lookup("java:global/classes/OpenChurchPortalContext");
        } catch (NamingException e) {
            System.out.println("Failed to find the bean in the emebedded jobber");
        }
        assertNotNull(context);
        Set<PortletMode> modes = (Set<PortletMode>) context.getSupportedPortletModes();
        assertTrue(modes.size() > 1);
        Set<WindowState> states = (Set<WindowState>) context.getSupportedWindowStates();
        assertTrue(states.size() > 1);
    }
}
Any ideas as to why this may not be working?
You often get this problem if you are proxying a class, not an interface. Assuming that it's this line which is failing:
context = (OpenChurchPortalContext) ctx.lookup("java:global/classes/OpenChurchPortalContext");
OpenChurchPortalContext is a class, but it is being wrapped by a proxy class to implement the EJB-specific functionality. This proxy class isn't a subclass of OpenChurchPortalContext, so you're getting a ClassCastException.
You aren't getting this with the first example because LayoutManager is an interface:
LayoutManager manager = null; // INTERFACE, so it works
try {
    manager = (LayoutManager) ctx.lookup("java:global/classes/LayoutManagerBean!uk.co.monkeypower.openchurch.core.layout.beans.LayoutManager");
} catch (NamingException e) {
    System.out.println("Failed to lookup the gosh darned bean!");
}
First, you can test whether this really is your problem: change context to be a PortalContext rather than an OpenChurchPortalContext:
PortalContext context = null;
try {
    context = (PortalContext) ctx.lookup("java:global/classes/OpenChurchPortalContext");
} catch (NamingException e) {
    System.out.println("Failed to find the bean in the emebedded jobber");
}
If your problem really is the proxy, then the above code should work. If so, you have two potential solutions:
1. When you do the ctx.lookup, always use an interface. This can be a bit of a pain, because you need to define an interface specifically for each EJB.
2. You may be able to configure your EJB container to proxy classes instead of just interfaces, similar to proxyTargetClass for Spring AOP. You'll need to check your container's documentation for that.
Your singleton EJB has a default local business interface by means of implementing the PortalContext interface. The test client should know it only by its business interface, and the actual bean class (OpenChurchPortalContext) should not be referenced directly by the client. So the fix is to look it up by its business interface, PortalContext, as sketched below.
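To make that explicit, the bean can also declare its business interface directly; a sketch (the @Local annotation is redundant when the bean implements exactly one interface, but it documents the intent):

@Singleton
@Local(PortalContext.class) // explicit local business interface
public class OpenChurchPortalContext implements PortalContext {
    // ...
}

The client then casts the lookup result to PortalContext only, as in the snippet shown earlier in this answer.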
