I have a Spring-configured CXF-powered JAX-RS service with two service beans:
<jaxrs:server id="wsServices">
<jaxrs:serviceBeans>
<ref bean="a"/>
<ref bean="b"/>
</jaxrs:serviceBeans>
<jaxrs:schemaLocations>
<jaxrs:schemaLocation>
classpath:/schema/webservices.xsd
</jaxrs:schemaLocation>
</jaxrs:schemaLocations>
</jaxrs:server>
<bean id="a" class="AServiceImpl"/>
<bean id="b" class="BServiceImpl" />
Unfortunately, only one of the two service beans is covered by the XSD file, so the other one fails schema validation.
I know how to turn off schema validation altogether, and then both services work fine (but I'd rather continue to have validation where it can be used).
How can I change the configuration to only use schema validation for bean a, but not for b?
Schema validation is set at the JAX-RS service level, so you can't do what you want directly (without adding to the schema), but you can have multiple <jaxrs:server> instances in the same webapp with different paths. That should let you set up what you want without too much trouble. (This is where the more sophisticated configuration approach of CXF comes into its own.)
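For example, something like the following split (the address values are illustrative; pick whatever paths suit your URL scheme):
<jaxrs:server id="validatedServices" address="/a">
    <jaxrs:serviceBeans>
        <ref bean="a"/>
    </jaxrs:serviceBeans>
    <jaxrs:schemaLocations>
        <jaxrs:schemaLocation>classpath:/schema/webservices.xsd</jaxrs:schemaLocation>
    </jaxrs:schemaLocations>
</jaxrs:server>
<jaxrs:server id="unvalidatedServices" address="/b">
    <jaxrs:serviceBeans>
        <ref bean="b"/>
    </jaxrs:serviceBeans>
</jaxrs:server>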
I have multiple web applications deployed in Tomcat and a service jar shared in TOMCAT_HOME/lib/ext. All of the applications use Spring, and in the service jar I have beans annotated with Spring 3.1 caching annotations. I am using the Ehcache provider, and I want one single CacheManager used by all the web applications. If I define the Spring cache configuration at the web application level, caching works, but a separate CacheManager is created for every app/context. A "shared" CacheManager causes problems, because if one of those applications gets undeployed, the shared CacheManager is shut down. So I want a single CacheManager, configured in my service jar and used for all calls to methods on beans from the web apps. My current attempt is to define the following configuration in service.jar's applicationContext.xml:
<context:annotation-config/>
<context:component-scan base-package="com.mycompany.app" />
<context:component-scan base-package="com.mycompany.portal.service" />
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cacheManager-ref="ehCacheManager"/>
<bean id="ehCacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:configLocation="ehcache.xml" ></bean>
<cache:annotation-driven cache-manager="cacheManager"/>
I have defined the parent application context via beanRefContext.xml:
<bean id="service.parent.context" class="org.springframework.context.support.ClassPathXmlApplicationContext">
<constructor-arg>
<list>
<value>applicationContext.xml</value>
</list>
</constructor-arg>
</bean>
And I am using this context as the parent context for all of my web apps with the following context-param in each web app's web.xml:
<context-param>
<param-name>parentContextKey</param-name>
<param-value>service.parent.context</param-value>
</context-param>
The result is that this parent context is loaded, but caching doesn't work at all.
How can I solve this? Am I on the right track with defining the parent context in the service jar?
I don't think so. It looks like you are trying to have a single cache for multiple applications by "hacking" the root classloader.
If you need to share your cache across several applications, use a cache manager that supports that use case (i.e. one that provides a service you can reach from each application).
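For example, with Ehcache one such option is to back the caches with a standalone Terracotta server, so each web app keeps its own local CacheManager while the cached data lives in the shared server process. A rough sketch of the ehcache.xml (the server URL and cache settings are placeholders):
<ehcache>
    <!-- Points at a standalone Terracotta server; the URL is a placeholder. -->
    <terracottaConfig url="localhost:9510"/>
    <cache name="myCache"
           maxElementsInMemory="1000"
           eternal="false"
           timeToLiveSeconds="600">
        <!-- Marks this cache as clustered rather than purely local. -->
        <terracotta/>
    </cache>
</ehcache>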
I'm using Spring Integration to create multiple services (each running in its own JVM) with JMS endpoints.
Once retry, exception handling, etc. are added, the configuration is no longer trivial. I have moved the Spring Integration configuration into its own context file and import it in all services to have a consistent setup,
e.g.:
<import resource="classpath:/spring/jmsEndpoint.xml"/>
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties">
<props>
<prop key="queueName">myServiceQueue</prop>
</props>
</property>
</bean>
<alias name="myBusinessLogic" alias="abstractJmsEndpoint"/>
<bean id="myBusinessLogic" class="..."/>
This configuration allows me to keep each individual service configuration simple, only requiring an override of an abstract bean and setting a few properties.
The problem is I now want multiple JMS endpoints in the same service (JVM). As I can't import jmsEndpoint.xml multiple times, what is the best way to reuse the configuration?
See the dynamic-ftp sample - it uses a technique of creating instances of a parameterized application context, passing different properties into each. Its README also has links to forum discussions about how to make these contexts children of the main context, in cases where the child needs access to shared resources.
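A minimal sketch of that technique (assuming jmsEndpoint.xml resolves its placeholders through the Environment, e.g. via <context:property-placeholder/> on Spring 3.1+; the class and property source names here are illustrative):
import java.util.Properties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.core.env.PropertiesPropertySource;

public class EndpointContextFactory {

    // Creates one child context per JMS endpoint, each with its own queue name.
    // Passing the main context as parent gives the child access to shared resources.
    public static ConfigurableApplicationContext createEndpoint(
            ConfigurableApplicationContext parent, String queueName) {
        ClassPathXmlApplicationContext child = new ClassPathXmlApplicationContext(
                new String[] { "classpath:/spring/jmsEndpoint.xml" }, false, parent);
        Properties props = new Properties();
        props.setProperty("queueName", queueName);
        child.getEnvironment().getPropertySources()
                .addFirst(new PropertiesPropertySource("endpointProps", props));
        child.refresh();
        return child;
    }
}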
In Spring MVC we can configure a handler mapping as a bean, but how does Spring examine which handler mapping we mentioned in the XML?
Simply:
<bean id="simplehandler" class="" />
Do we need to specify the "simplehandler" bean id somewhere for Spring to identify the handler mapping bean?
The first thing that must be clear is that Spring has several handler mappings.
DefaultAnnotationHandlerMapping is activated by default (see DispatcherServlet.properties in the Spring distribution, or just google for it; all the default handlers are listed there), so if you declare nothing, Spring will use DefaultAnnotationHandlerMapping.
If you want Spring to use another handler mapping strategy, you have to tell it explicitly,
e.g.:
<bean class="org.blablabla......ControllerClassNameHandlerMapping" />
Note that this cancels the use of the default handler mapping strategy.
You can also tell Spring to use several handler mapping strategies and prioritize them by using the order property in the mapping declarations,
something like:
<bean class="org.blabla....DefaulAnnotationHandlerMapping" >
<property name="order" value="0"/>
</bean>
<bean class="org.blablabla......ControllerClassNameHandlerMapping">
<property name="order" value="1"/>
</bean>
Hope this helps. And sorry if the syntax of my bean declaration is not 100% correct. I had to write quickly ;-)
I'm using the simple object/graph mapping in Spring Data Neo4j 2.0, where I perform persistence operations using the Spring Data repository framework. I'm working with the repositories rather than working with the Neo4jTemplate. I inject the repositories into my Spring Web MVC controllers, and the controllers call the repos directly. (No intermediate service layer--my operations are generally CRUDs and finder queries.)
When I do read operations, there are no issues. But when I do write operations, I get "NotInTransactionException". My understanding is that read ops in Neo4j don't require transactions, but write ops do.
What's the best way to get transactions into the picture here, assuming I want to stick with the simple OGM? I want to use @Transactional, but putting that on the various repository interfaces doesn't work. If I introduce an intermediate service tier between the controllers and the repositories and annotate the service beans with @Transactional, it works, but I'm wondering whether there's a simpler way. Without Spring Data, I'd typically have access to the DAO (repository) implementations, so I'd be able to annotate the concrete DAOs with @Transactional if I wanted to avoid a pass-through service tier. With Spring Data the repos are dynamically generated, so that doesn't appear to be an option.
First, note that having transactional DAOs is not generally a good practice. But if you don't have a service layer, then let it be on the DAOs.
Then, you can enable declarative transactions. Here's how I did it:
First, define an annotation called @GraphTransactional:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import org.springframework.transaction.annotation.Transactional;

@Retention(RetentionPolicy.RUNTIME)
@Transactional("neo4jTransactionManager")
public @interface GraphTransactional {
}
Update: Spring Data Neo4j has since added such an annotation, so you can reuse it instead of creating a new one: @Neo4jTransactional
Then, in applicationContext.xml, have the following (where neo4jdb is your EmbeddedGraphDatabase):
<bean id="neo4jTransactionManagerService"
class="org.neo4j.kernel.impl.transaction.SpringTransactionManager">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jUserTransactionService" class="org.neo4j.kernel.impl.transaction.UserTransactionImpl">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jTransactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="neo4jTransactionManagerService" />
<property name="userTransaction" ref="neo4jUserTransactionService" />
</bean>
<tx:annotation-driven transaction-manager="neo4jTransactionManager" />
Bear in mind that if you use another transaction manager as well, you'd have to specify order="2" for this annotation-driven definition, and also that you won't get two-phase commit if a single method is declared to be both SQL- and Neo4j-transactional.
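With that in place, you can annotate the write operations directly on the controllers. A hypothetical usage sketch (PersonController, Person, and PersonRepository are illustrative names, not from the question):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class PersonController {

    // Person and PersonRepository are illustrative domain types.
    @Autowired
    private PersonRepository personRepository;

    // The save now runs inside a Neo4j transaction, with no service tier needed.
    @GraphTransactional
    @RequestMapping(value = "/person", method = RequestMethod.POST)
    public String create(Person person) {
        personRepository.save(person);
        return "redirect:/person";
    }
}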
I have been learning the Spring Batch framework, trying to put it into practice at work, through the online documentation as well as the Pro Spring Batch book by Apress. I have a quick question.
Scenario
I want to do a simple test where I read from the database, do some processing, and then write to another database.
Question
I understand that there is a configuration file called launch-context.xml that configures the job repository database, which maintains the state of the jobs and of each of their steps.
Say that I have a Source Database (A) where I do a read from and a Target Database (B) where I write to.
Maybe I have overlooked it but...
1. Where do I put the data source information for A and B?
2. I guess it depends on the answer to #1, but if I put it under src/main/resources, say for example source-datasource.xml and target-datasource.xml, how is Spring going to pick them up and wire them appropriately? In Spring web app development I usually put those types of files under the context-param tag.
You can define these datasources in any Spring file of your choosing, so yes:
src/main/resources/db/source-datasource.xml
src/main/resources/db/target-datasource.xml
will do.
Let's say you named your datasource beans sourceDataSource and targetDataSource. The way you tell Spring Batch (or in this case just Spring) to use them is through import and dependency injection.
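For instance, source-datasource.xml might contain a sketch like this (the pool implementation, driver, URL, and credentials are placeholders; target-datasource.xml would define targetDataSource analogously):
<!-- Illustrative only: swap in your real driver, URL, and credentials. -->
<bean id="sourceDataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.h2.Driver"/>
    <property name="url" value="jdbc:h2:mem:sourceDb"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>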
Importing
You can organize your Spring configs the way you see fit, but since you already have launch-context.xml, in order for the above datasources to be visible, you need to import them into launch-context.xml:
<import resource="classpath:db/source-datasource.xml"/>
<import resource="classpath:db/target-datasource.xml"/>
Injecting / Using
<bean id="sourceReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="sourceDataSource" />
<!-- other properties here -->
</beans:bean>
<bean id="targetWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="targetDataSource" />
<!-- other properties here -->
</beans:bean>
where sourceReader and targetWriter are the beans you would inject into your step(s).
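A hypothetical step wiring that does the injection (assuming the Spring Batch namespace is declared with the batch prefix; the job and step ids are illustrative):
<batch:job id="copyJob">
    <batch:step id="copyStep">
        <batch:tasklet>
            <!-- Reads from database A via sourceReader, writes to database B via targetWriter. -->
            <batch:chunk reader="sourceReader" writer="targetWriter" commit-interval="100"/>
        </batch:tasklet>
    </batch:step>
</batch:job>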