Spring Cloud - SQS - The specified queue does not exist for this wsdl version

I am attempting to get Spring Cloud to work with SQS messaging using auto-configuration.
My properties file contains:
cloud.aws.credentials.accessKey=xxxxxxxxxx
cloud.aws.credentials.secretKey=xxxxxxxxxx
cloud.aws.region.static=us-west-2
My Configuration class is as follows:
@EnableSqs
@ComponentScan
@EnableAutoConfiguration
public class Application {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }
}
My Listener class:
@RestController
public class OrderListener {

    @MessageMapping("orderQueue")
    public void orderListener(Order order) {
        System.out.println("Order Name " + order.getName());
        System.out.println("Order Url " + order.getUrl());
    }
}
However, when I run this, I get the following error:
org.springframework.context.ApplicationContextException: Failed to start bean 'simpleMessageListenerContainer'; nested exception is org.springframework.messaging.core.DestinationResolutionException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: cc8cb199-be88-5993-bd58-fca3c9f17110); nested exception is com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: cc8cb199-be88-5993-bd58-fca3c9f17110)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:770)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:140)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:691)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:321)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:961)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:950)
at com.releasebot.processor.Application.main(Application.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: org.springframework.messaging.core.DestinationResolutionException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: cc8cb199-be88-5993-bd58-fca3c9f17110); nested exception is com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: cc8cb199-be88-5993-bd58-fca3c9f17110)
at org.springframework.cloud.aws.messaging.support.destination.DynamicQueueUrlDestinationResolver.resolveDestination(DynamicQueueUrlDestinationResolver.java:81)
at org.springframework.cloud.aws.messaging.support.destination.DynamicQueueUrlDestinationResolver.resolveDestination(DynamicQueueUrlDestinationResolver.java:37)
at org.springframework.messaging.core.CachingDestinationResolverProxy.resolveDestination(CachingDestinationResolverProxy.java:88)
at org.springframework.cloud.aws.messaging.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:300)
at org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer.start(SimpleMessageListenerContainer.java:38)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
... 18 common frames omitted
Has anyone else run across this? Any help would be greatly appreciated.

This error means that the specified queue orderQueue does not exist in region us-west-2. Just create it and it should work.
By the way, there's no need to add @EnableSqs when using @EnableAutoConfiguration.
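One way to create the queue is via the AWS CLI (queue name taken from the listener above, region from the properties file):
$ aws sqs create-queue --queue-name orderQueue --region us-west-2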

Alain's answer is correct. This error indicates that the queue doesn't exist in the region us-west-2. One of the reasons could be that the AWS Java SDK uses us-east-1 as the default region. From the AWS documentation (http://docs.aws.amazon.com/java-sdk/latest/developer-guide/java-dg-region-selection.html):
The AWS SDK for Java uses us-east-1 as the default region if you do not specify a region in your code. However, the AWS Management Console uses us-west-2 as its default. Therefore, when using the AWS Management Console in conjunction with your development, be sure to specify the same region in both your code and the console.
You can set the region or endpoint explicitly on the client using the setRegion() or setEndpoint() methods of the AmazonSQSClient object. See http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQS.html#setEndpoint-java.lang.String-
For a list of regions and endpoints, see http://docs.aws.amazon.com/general/latest/gr/rande.html#sqs_region
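As a sketch, assuming the v1 AWS SDK for Java, pinning the client to the queue's region looks like this:
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;

AmazonSQS sqs = new AmazonSQSClient(); // picks up credentials from the default chain
sqs.setRegion(Region.getRegion(Regions.US_WEST_2));
// or, equivalently, pin the endpoint directly:
sqs.setEndpoint("https://sqs.us-west-2.amazonaws.com");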

Try using the URL of the queue instead of the name.
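For example (the account ID below is a placeholder; DynamicQueueUrlDestinationResolver accepts a full queue URL as-is):
// Placeholder account ID - copy the real queue URL from the SQS console
@MessageMapping("https://sqs.us-west-2.amazonaws.com/123456789012/orderQueue")
public void orderListener(Order order) { ... }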

I hit the same issue while trying to call get-queue-url from the command line. Here is what I got:
A client error (AWS.SimpleQueueService.NonExistentQueue) occurred when calling the GetQueueUrl operation: The specified queue does not exist for this wsdl version.
I had to run this:
$ aws configure
and, under the 'Default region name [...]:' prompt, enter the region that my queue belongs to. Then the error disappeared.
So double-check your configs ;)
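You can also verify the fix without touching the saved config by passing the region explicitly:
$ aws sqs get-queue-url --queue-name orderQueue --region us-west-2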

Make sure you are using the correct AWS profile. This was the problem in my case. If you have multiple profiles, you can list them all:
cat ~/.aws/credentials
and then you can set a different profile:
export AWS_PROFILE=<profile_name>
If you don't specify this, the profile named default is used, so make sure you take that into account.
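To confirm which profile, key, and region the CLI will actually use, a quick check is:
$ aws configure list            # shows the active profile, access key, and region
$ aws sts get-caller-identity   # shows which account/identity the credentials resolve to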

I also had this error after re-creating a queue of the same name. Creating a new queue was the immediate solution, since I wasn't sure how long it would take for the new ARN to attach.

If the queue name already exists and you still get this error, make sure to run aws configure
and set the region to us-east-1.
In my case the queue name existed but it was still throwing that error, because the region was set to us-east-2. I am not sure why, but changing the default region to us-east-1 fixed it for me.
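For reference, the default region can also be changed non-interactively with the AWS CLI:
$ aws configure set region us-east-1
$ aws configure get region   # verify the change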

Related

Error on multi-tenant Hybris backoffice start-up

I have set up a slave tenant on a Hybris 1811 installation, but I cannot get the backoffice to work for the slave tenant (foo). The error that I get in the browser is: Server Error.
I have followed the instructions from here: How to access Backoffice in Junit Tenant, but I cannot get it to work.
tenant_foo.properties
db.tableprefix=foo_
cronjob.timertask.loadonstartup=false
tenant.restart.on.connection.error=false
db.factory=de.hybris.platform.jdbcwrapper.JUnitDataSourceFactory
db.url=jdbc:oracle:thin:@localhost:1521:foo
db.driver=oracle.jdbc.OracleDriver
db.username=foo
db.password=bar
hac.webroot=/hac_foo
local_tenant_foo.properties
backoffice.webroot=/backoffice_foo
I have checked the Hybris logs and found this error:
ERROR [localhost-startStop-3] (foo) [ContextLoader] Context initialization failed
org.springframework.beans.factory.support.BeanDefinitionValidationException: java.io.IOException: Unable to remove a module library: E:\hybris-1811\data\backoffice\widgetlib\deployed\voucherbackoffice.jar; nested exception is com.hybris.cockpitng.core.CockpitApplicationException: java.io.IOException: Unable to remove a module library: E:\hybris-1811\data\backoffice\widgetlib\deployed\voucherbackoffice.jar
at com.hybris.backoffice.BackofficeApplicationContext.prepareRefresh(BackofficeApplicationContext.java:106) ~[classes/:?]
HAC works fine for both tenants (master and foo), but the backoffice only works for the master tenant. Also, if I navigate to HAC -> Tenants -> foo -> view -> configured extensions, I can see that for the extensions acceleratorservices and admincockpit, the WebContext column displays "Missing configuration for this context in current tenant".
Try adding a backoffice library home for each tenant:
backoffice.library.home=${data.home}/foo
(foo is the tenant id). There is also some documentation about it in the help here.
I hope it helps!

File upload stuck on large files with Spring Web on Windows

I have a JHipster application providing a REST endpoint for uploading files.
This code works without any problem in a Docker container on a Linux machine but seems to get stuck in the same Docker container running on Windows.
@PostMapping("/datafiles")
@Timed
public ResponseEntity<DataFileDTO> createDataFile(@Valid @RequestParam("file") MultipartFile data,
        @Valid @RequestHeader("workspaceId") Long workspaceId, RedirectAttributes redirectAttributes)
        throws URISyntaxException, IOException {
    log.debug("REST request to save a data content");
    DataFileDTO result = dataFileService.createFromData(data, workspaceId);
    return ResponseEntity.created(new URI("/api/datafiles/" + result.getId()))
            .headers(HeaderUtil.createEntityCreationAlert(ENTITY_NAME, result.getId().toString())).body(result);
}
This code works for a 12 MB video file, but when I try to upload a 70 MB file, no debug log appears and the server blocks until I get a timeout from Undertow:
Jan 29 15:26:46 StorageService StorageService-dockerstorage_storageservice-app_1.network:8081: ERROR - io.undertow.request : UT005023: Exception handling request to /api/datafiles
java.io.IOException: UT000128: Remote peer closed connection before all data could be read
at io.undertow.conduits.FixedLengthStreamSourceConduit.exitRead(FixedLengthStreamSourceConduit.java:338)
at io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:255)
at org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127)
at io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209)
at io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2343)
at org.xnio.channels.Channels.readBlocking(Channels.java:294)
at io.undertow.servlet.spec.ServletInputStreamImpl.readIntoBuffer(ServletInputStreamImpl.java:192)
at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:168)
at io.undertow.server.handlers.form.MultiPartParserDefinition$MultiPartUploadHandler.parseBlocking(MultiPartParserDefinition.java:223)
at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:792)
... 43 common frames omitted
Changing the timeout just leaves the server waiting, but nothing ever reaches my code.
Is there something specific to Windows in this case?
I finally found that the problem is not on the JHipster side but is related to the configuration of the Nginx Docker container running the frontend.
It seems that the default configuration is different in this case.
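For reference, the usual suspect in an Nginx front end is the client_max_body_size limit; a minimal sketch follows (whether and where your JHipster image sets this directive is an assumption, so adapt the location and upstream name):
server {
    location /api/ {
        # Nginx rejects or cuts off request bodies larger than this (default 1m)
        client_max_body_size 100m;
        proxy_pass http://storageservice-app:8081; # hypothetical upstream
    }
}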

Unable to create state store in Kafka streams

I am getting the error Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1 while creating a state store in my Kafka Streams application. Here is the complete stack trace of the application:
[2016-08-30 12:43:09,408] ERROR [StreamThread-1] User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group string-monitor failed on partition assignment (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
... 1 more
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
And I create a state store as below
StateStoreSupplier avgStore = Stores.create("avgStore")
.withKeys(Serdes.String())
.withValues(Serdes.String())
.persistent()
.build();
Any idea of how to fix this?
Did you configure multiple threads within your application instance? If yes, it may be due to a known issue in older versions of Kafka: the underlying consumer used by Kafka Streams (in the app instance) can take too long to rebalance, causing it to be kicked out of the consumer group (and hence triggering another consumer group rebalance) while the first rebalance is still in progress.
The following error messages in your stack trace indicate that you are actually running into the problem described above:
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
The problem is summarized in this Apache Kafka ticket:
https://issues.apache.org/jira/browse/KAFKA-3758
A recent change to the underlying Kafka consumer client fixes this issue:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
However, it is not yet included in an official Kafka release and, as of today, is available only on Apache Kafka trunk. If you are able to run your app against Kafka trunk, you can verify whether this problem has gone away.
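Until then, one way to check whether the thread count is the trigger is to run a single stream thread; a minimal sketch (app id and broker address are placeholders):
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "string-monitor");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
// One stream thread rules out the slow-rebalance issue described above
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);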
I also saw this problem when the user didn't have permission to write to the default state.dir.
When I changed the following property to a directory with good permissions, everything was fine:
property.put(StreamsConfig.STATE_DIR_CONFIG, "{goodDir}")
This was observed in 0.10.2
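Put together, something like this (the path is a placeholder; any directory the application user can write to works):
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// Placeholder path: point the state directory at a location the app user owns
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");
KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(props));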
Just a note for those who hit the same Failed to lock the state directory exception and use Spring for Apache Kafka (spring-kafka) (example from the Spring docs):
It took me a while to figure out from the source code that Spring starts your built streams automatically, so there is no need to manually do something like:
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
I did it anyway and faced the aforementioned exception because, not surprisingly, there were two application instances running: one created by Spring and another by me.
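For illustration, a minimal sketch of letting Spring manage the streams lifecycle, assuming spring-kafka 2.2+ where the config bean is a KafkaStreamsConfiguration (app id and broker are placeholders):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.KafkaStreamsDefaultConfiguration;

@Configuration
@EnableKafkaStreams
public class StreamsAppConfig {

    // Spring builds and starts the KafkaStreams instance itself;
    // do not also call new KafkaStreams(...).start() by hand.
    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public KafkaStreamsConfiguration kStreamsConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "string-monitor"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        return new KafkaStreamsConfiguration(props);
    }
}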

Why does wsdl2java-generated code use CXF dependencies at will?

I use Apache CXF 3.0.4 wsdl2java to generate code from this wsdl with following command:
./wsdl2java -client -exsh true -d weather -p weather -verbose url
As far as I know, wsdl2java generates pure Java code using only JAX-WS. The generated code works fine without any additional library/dependency. I've grepped it for CXF classes, but it seems to be free of CXF. The problem shows up when I add a specific CXF dependency to my pom.xml. This dependency is:
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-frontend-jaxws</artifactId>
<version>3.1.0</version>
</dependency>
After adding this, the application seems to use a different implementation of JAX-WS (is that even possible?).
The generated code contains, among other things, the WeatherSoap interface. One of its methods is getWeatherInformation():
@WebService(targetNamespace = "http://ws.cdyne.com/WeatherWS/", name = "WeatherSoap")
@XmlSeeAlso({ObjectFactory.class})
public interface WeatherSoap {

    /**
     * Gets Information for each WeatherID
     */
    @WebResult(name = "GetWeatherInformationResult", targetNamespace = "http://ws.cdyne.com/WeatherWS/")
    @RequestWrapper(localName = "GetWeatherInformation", targetNamespace = "http://ws.cdyne.com/WeatherWS/", className = "weather.GetWeatherInformation")
    @WebMethod(operationName = "GetWeatherInformation", action = "http://ws.cdyne.com/WeatherWS/GetWeatherInformation")
    @ResponseWrapper(localName = "GetWeatherInformationResponse", targetNamespace = "http://ws.cdyne.com/WeatherWS/", className = "weather.GetWeatherInformationResponse")
    public weather.ArrayOfWeatherDescription getWeatherInformation();
    ...
}
Without the cxf-rt-frontend-jaxws dependency:
Stepping into getWeatherInformation() while debugging leads to GetWeatherInformation (another generated class).
Stepping into GetWeatherInformation leads to com.oracle.webservices.internal.api.message.BasePropertySet.
...
The result from the SOAP web service is received.
With the cxf-rt-frontend-jaxws dependency:
Stepping into getWeatherInformation while debugging leads to org.apache.cxf.jaxws.JaxWsClientProxy (why?)
...
An exception is thrown:
Exception in thread "main" javax.xml.ws.soap.SOAPFaultException: Could not find conduit initiator for address: http://wsf.cdyne.com/WeatherWS/Weather.asmx and transport: http://schemas.xmlsoap.org/soap/http
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:161)
at com.sun.proxy.$Proxy33.getWeatherInformation(Unknown Source)
at weather.WeatherSoap_WeatherSoap_Client.main(WeatherSoap_WeatherSoap_Client.java:49)
Caused by: java.lang.RuntimeException: Could not find conduit initiator for address: http://wsf.cdyne.com/WeatherWS/Weather.asmx and transport: http://schemas.xmlsoap.org/soap/http
at org.apache.cxf.binding.soap.SoapTransportFactory.getConduit(SoapTransportFactory.java:224)
at org.apache.cxf.binding.soap.SoapTransportFactory.getConduit(SoapTransportFactory.java:229)
at org.apache.cxf.endpoint.AbstractConduitSelector.createConduit(AbstractConduitSelector.java:145)
at org.apache.cxf.endpoint.AbstractConduitSelector.getSelectedConduit(AbstractConduitSelector.java:107)
at org.apache.cxf.endpoint.UpfrontConduitSelector.prepare(UpfrontConduitSelector.java:63)
at org.apache.cxf.endpoint.ClientImpl.prepareConduitSelector(ClientImpl.java:853)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:511)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:425)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:326)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:279)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:139)
... 2 more
I know that there is a workaround for this exception, mentioned here, and it works. But not for my work application, which ends up throwing an NPE:
Exception in thread "main" javax.xml.ws.soap.SOAPFaultException: java.lang.NullPointerException
...
Caused by: org.apache.cxf.binding.soap.SoapFault: java.lang.NullPointerException
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.unmarshalFault(Soap11FaultInInterceptor.java:86)
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:52)
at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:41)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver.onMessage(AbstractFaultChainInitiatorObserver.java:113)
at org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:69)
at org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:34)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:802)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1642)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1336)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:652)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:516)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:425)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:326)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:279)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:139)
... 7 more
Question time
Why is Apache CXF used even when the application works fine without it?
Is it possible to force the application to stop using Apache CXF in this case?
I don't need these dependencies for the client (since it works fine without them). I just want to produce SOAP as well, and that's why I need them. The only solution I see is to split the application into separate consumer and producer parts.
application seems to use different implementation of JAX-WS (is it even possible?)
Yes. CXF has its own JAX-WS provider, and its jar contains a service file declaring it. (When the JVM needs a JAX-WS provider, it looks on the classpath to see whether a non-default provider is declared, and if so uses it.)
Why?
That's the Java spec: http://docs.oracle.com/javaee/5/api/javax/xml/ws/spi/Provider.html#provider()
Is it possible to force application to stop using Apache-CXF in this case?
Yes. Create the appropriate resource file on your classpath to redirect to the default implementation.
In detail: the file must be at META-INF/services/javax.xml.ws.spi.Provider (if you use Maven, simply put it at /src/main/resources/META-INF/services/javax.xml.ws.spi.Provider).
And it must contain one single line:
javax.xml.ws.spi.Provider

Need help deploying a WAR using wsadmin install with JNDI

I am trying to deploy a web application using the wsadmin tool, but it is throwing an error.
The JACL script that I am using is:
$AdminApp install /opt/www/temp/SampleApp.war {-nopreCompileJSPs -nodeployejb -server delivery -cell delivery_cell -node delivery_node -appname SampleApp -contextroot SampleApp -MapWebModToVH {{"SampleApp" SampleApp.war,WEB-INF/web.xml default_host}}}
Error I am getting is:
com.ibm.ws.scripting.ScriptingException: WASX7109E: Insufficient data for install task "MapResRefToEJB
ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/app_DB in module SampleApp with EJB name.
From the error above I understand that I need to configure the JNDI mapping with -MapResRefToEJB. I tried to understand this option but I'm getting confused.
Can anyone help me to resolve this issue?
These errors appear to be caused by the MapResRefToEJB option in the wsadmin command not being set correctly, or by the resource it points to not existing in the web.xml file.
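A hypothetical sketch of what the mapping could look like; the MapResRefToEJB row format assumed here (module, EJB name, URI, resource-reference name, resource type, JNDI name) and the target JNDI value should be verified with $AdminApp options against your WAR:
$AdminApp install /opt/www/temp/SampleApp.war {-nopreCompileJSPs -nodeployejb -server delivery -appname SampleApp -contextroot SampleApp -MapResRefToEJB {{"SampleApp" "" SampleApp.war,WEB-INF/web.xml jdbc/app_DB javax.sql.DataSource jdbc/app_DB}}}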
Additional information on MapResRefToEJB
Options for the AdminApp object install, installInteractive, edit,
editInteractive, update, and updateInteractive commands
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rxml_taskoptions.html
Thank you
Note: Opinions are my own.
