Custom Python module cannot re-execute after an exception is raised in the module - spring-xd

I wrote a custom processor Python module, with RabbitMQ as the source. The Python module cannot re-execute after an exception is raised inside it.
The module connects to Elasticsearch; when Elasticsearch is down, it raises an exception. XD keeps pulling messages from RabbitMQ and keeps throwing exceptions even after the Elasticsearch service is available again:
2015-04-23 19:27:52,336 1.1.1.RELEASE ERROR SimpleAsyncTaskExecutor-1 process.ShellCommandProcessor -
java.io.IOException:
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:315)
By contrast, when the same logic runs in a Java processor module, it throws an exception while Elasticsearch is down but works correctly again once the Elasticsearch service is available.
I would prefer that one service not depend on another being up at start time. How can I make the Python module recover correctly?
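One way to keep the Python module alive through an Elasticsearch outage is to wrap the ES call in a retry loop instead of letting the exception escape and break the module's pipe to XD. A minimal sketch, assuming an elasticsearch-py style client; the `es.index(...)` call in the comment is hypothetical, and only the generic retry helper is shown:

```python
import time


def retry_forever(func, delay_seconds=5, max_attempts=None, sleep=time.sleep):
    """Keep calling func until it succeeds, sleeping between failures.

    max_attempts=None retries indefinitely (useful while Elasticsearch
    is down); a finite value bounds the loop for testing.
    """
    attempt = 0
    while True:
        attempt += 1
        try:
            return func()
        except Exception:
            if max_attempts is not None and attempt >= max_attempts:
                raise
            sleep(delay_seconds)


# In the processor's message loop, wrap the Elasticsearch call so one
# outage does not kill the process, e.g. (hypothetical client call):
#     result = retry_forever(lambda: es.index(index="logs", body=doc))
```

This keeps the module process running, so XD does not have to restart it; messages are simply delayed until Elasticsearch comes back.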

I have created an issue to investigate this. https://jira.spring.io/browse/XD-2982

Related

ClassNotFoundException in Nifi flow

Hey StackOverflow community,
I have a problem with my NiFi flow. I built one to move my data from my Azure blob store into my HDFS cluster (also in Azure).
My configuration of the PutHDFS processor in NiFi is:
(screenshot: PutHDFS configuration)
But when I fill in the "Hadoop resources" field, I get the following error:
PutHDFS[id=89381b69-015d-1000-deb7-50b6cf485d28] org.apache.hadoop.fs.adl.HdiAdlFileSystem: java.lang.ClassNotFoundException: org.apache.hadoop.fs.adl.HdiAdlFileSystem
PutHDFS[id=89381b69-015d-1000-deb7-50b6cf485d28] PutHDFS[id=89381b69-015d-1000-deb7-50b6cf485d28] failed to invoke #OnScheduled method due to java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.; processor will not be scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.
How can I resolve this and get my data into my cluster?
Thanks for your answers.
Apache NiFi does not bundle any of the Azure-related libraries; it only bundles the standard Apache Hadoop client, currently 2.7.3 in recent NiFi releases.
You can specify the location of the additional Azure JARs through the PutHDFS processor property called "Additional Classpath Resources".
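As a sketch of that setup (the directory, JAR names, and versions are illustrative; the exact ADLS client JARs depend on your Hadoop/HDInsight version):

```
# Place the Azure Data Lake client JARs in one directory on the NiFi host
/opt/nifi/azure-libs/hadoop-azure-datalake-<version>.jar
/opt/nifi/azure-libs/azure-data-lake-store-sdk-<version>.jar

# Then, in the PutHDFS processor properties:
Additional Classpath Resources = /opt/nifi/azure-libs
```

After restarting or re-scheduling the processor, the classes referenced from core-site.xml (such as org.apache.hadoop.fs.adl.HdiAdlFileSystem) should be found on that extra classpath.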

JSF 2.X on IBM Domino 9 (Servlet 2.5) - is it feasible at all?

Just curious whether it is possible to run JSF 2.3 on IBM Domino.
I have tried to deploy JSF versions 2.0, 2.1, 2.2, and 2.3 via an UpdateSite plugin install, following Sven's post HowTo: Vaadin on Domino, but was not really successful, as I got the following exceptions (listed for v2.3, but the other versions are similar):
Unable to obtain InjectionProvider from init time FacesContext. Does this container implement the Mojarra Injection SPI?
Application was not properly initialized at startup, could not find Factory: javax.faces.context.FacesContextFactory. Attempting to find backup.
Uncaught init() exception thrown by servlet {0}: {2}
CWPWC0005E: Error occurred while initializing servlet wrapper. javax.servlet.ServletException: Uncaught initialization exception thrown by servlet Thread[Thread-6,5,main]
CWPWC0005E: Error occurred while initializing servlet wrapper. javax.servlet.ServletException: Uncaught initialization exception thrown by servlet
Any suggestions on what to adjust? I understand that the Servlet version may not match the JSF spec, but is it feasible at all?
Thanks!
The short answer: don't bother
The long answer:
Domino modified quite a number of OSGi elements in order to run, and Domino's JSF has been extended to include SSJS, so you are fighting a lot of moving parts.
What you want to do instead:
put nginx in front of your Domino on ports 80/443
run Domino on a different port, accepting connections only from 127.0.0.1
run your PrimeFaces app on WebSphere Liberty or GlassFish on yet another port
have nginx route requests to the old or new server based on the URL
Users will see one server, HTTPS can be handled by nginx, and you get HTTP/2 and offloading of static resources.
While you are at it, give Vert.x a shot. Way more fun than JEE.
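The reverse-proxy setup above could look roughly like this in nginx; all hostnames, ports, and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;                               # placeholder hostname

    ssl_certificate     /etc/nginx/certs/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # New JSF app (e.g. on WebSphere Liberty) served under its own path
    location /app/ {
        proxy_pass http://127.0.0.1:8081/;
    }

    # Everything else still goes to Domino, which listens on localhost only
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this split, the Domino HTTP stack never needs to understand the new application, and either backend can be moved or upgraded independently.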

Spring XD 1.3.1 stream with Cassandra 3.0

I am using DSE 5.0.5, which comes with Cassandra 3.0.11.
I am trying to use Spring XD 1.3.1 to connect to Cassandra.
I have a processor module that processes the data and a sink that actually ingests the data.
I am trying to create the stream as below:
stream create --name ingestion-stream --definition "http --port=9020 | ingestion-transformer | cassandra-3 --contactPoints='1.2.3.4' --keyspace='mykeyspace' --ingestQuery='insert into table1(column1,column2,column3) values (?,?,?)'" --deploy
Here ingestion-stream is the stream name and ingestion-transformer is the module that transforms the data; I am fairly sure there is no problem with that part.
But with the "cassandra-3" sink module I am facing a problem.
The stream creation fails, giving the error below in the log:
2017-02-17T12:45:21+0530 1.3.1.RELEASE ERROR
DeploymentsPathChildrenCache-0 boot.SpringApplication - Application
startup failed
(a long stack trace follows)
Caused by: org.springframework.beans.BeanInstantiationException:
Failed to instantiate
[org.springframework.data.cassandra.mapping.CassandraMappingContext]:
Circular reference involving containing bean 'cassandraConfiguration'
- consider declaring the factory method as static for independence from its containing instance. Factory method 'cassandraMapping' threw
exception; nested exception is java.lang.NoClassDefFoundError: Could
not initialize class
org.springframework.data.cassandra.mapping.CassandraSimpleTypeHolder
My Sink module is using
cassandra-driver-core-3.0.0.jar and cassandra-driver-dse-3.0.0-alpha5.jar
I have also placed these two in xd/lib/
It was working fine with Cassandra 2.2.5 and Spring XD 1.3.0
Spring Data for Apache Cassandra 1.4.x and earlier does not work with cassandra-driver-core 3.x and later; 1.4.x supports only driver version 2.1.
Spring Data for Apache Cassandra 1.5.x supports cassandra-driver-core 3.x and later.
The driver upgrade from 2.1 to 3.x comes with a series of breaking changes; that's why you get exceptions on application start.
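Assuming the sink module is built with Maven, aligning the versions might look like the fragment below; the version numbers are illustrative, so check the Spring Data Cassandra compatibility matrix for the exact pairing your release supports:

```xml
<!-- Illustrative versions: pair Spring Data Cassandra 1.5.x with
     cassandra-driver-core 3.x; 1.4.x only supports driver 2.1 -->
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-cassandra</artifactId>
    <version>1.5.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.1.4</version>
</dependency>
```

Whichever versions you settle on, the JARs placed in xd/lib/ must match the Spring Data Cassandra version the sink module was compiled against, or you will keep hitting NoClassDefFoundError at startup.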

Hybris Ant initialize error: Shutting down hybris platform since the system cannot be used without working Spring context

I am new to Hybris.
I started my hybris server and when I try to initialize, I get the following error:
Error creating Spring application context. Shutting down hybris platform since the system cannot be used without working Spring context...
ERROR [AfterSaveEventPublisher-master] [DeploymentMigrationUtil] Error while migrating deployments of extension core
java.lang.IllegalStateException: The queryCacheRegion Cache is not alive (STATUS_SHUTDOWN)
If I don't run ant initialize and start my Hybris server directly,
I get this error:
Solr Core Initialization failure
Solr starts but fails to index my store's data.
Can anyone help me with this?
Thanks

Deploying ojdbc14.jar in a FileNet code module

I'm trying to deploy a couple of JAR files in a code module for an event action in FileNet P8 4.0 (the FileNet server runs on WebSphere 6.1). One of these JARs is my custom code; the other is the Oracle thin driver, ojdbc14.jar (I also tried ojdbc15.jar). The custom code uses the Oracle JAR to look up a data source by its JNDI name and obtain a connection.
When the event action is executed (after a subscription is invoked), the code in my custom module is called OK; the problem occurs when my code needs to load classes from the Oracle JAR. I get this cause:
ERROR - Mon Sep 21 16:42:17 UTC 2009 - com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object. [Root exception is java.lang.reflect.InvocationTargetException]
at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookupExt(Helpers.java:1000)
at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookup(Helpers.java:705)
at com.ibm.ws.naming.jndicos.CNContextImpl.processResolveResults(CNContextImpl.java:2093)
...
Caused by: java.lang.NoClassDefFoundError: oracle.jdbc.driver.OracleLog
at com.ibm.ws.rsadapter.dbutils.impl.OracleUtilityImpl.setLogVolume(OracleUtilityImpl.java:85)
at com.ibm.ws.rsadapter.spi.InternalOracleDataStoreHelper.setProperties(InternalOracleDataStoreHelper.java:142)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.(WSRdbDataSource.java:846)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.setDataSourceProperties(WSManagedConnectionFactoryImpl.java:1947)
... 43 more
...
Caused by: java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleLog
at java.net.URLClassLoader.findClass(URLClassLoader.java:496)
at com.ibm.ws.bootstrap.ExtClassLoader.findClass(ExtClassLoader.java:132)
at java.lang.ClassLoader.loadClass(ClassLoader.java:631)
at com.ibm.ws.bootstrap.ExtClassLoader.loadClass(ExtClassLoader.java:87)
at java.lang.ClassLoader.loadClass(ClassLoader.java:597)
... 48 more
Since I'm deploying the Oracle JAR with the code module, shouldn't FileNet be able to find the class? Do you think I need to configure something else?
Thanks in advance.
Is it possible for your application to use WebSphere's own JDBC connection pools? When you set up a pool for a particular database, all the vendor-specific drivers are installed there.
Generally, all manner of classpath and classloader confusions ensue when you try to place infrastructure code in your own applications. I don't know for certain that this is the case in your situation, but I do find that staying on the known path in WebSphere tends to give the smoothest results.
I found the problem... somehow the ojdbc14.jar file had become corrupted, so even though the classpath was correct, the problem persisted no matter what I tried.
Thanks for the comments!
