Elasticsearch no node available (after power outage)

Yesterday we had a power outage and were able to get all of our machines back online with the exception of one box.
When firing up our application we see the following in the log:
Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.levelsbeyond.search.elasticsearch.ElasticSearchTransportClientProvider]: Constructor threw exception; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: No node available (org.mule.api.lifecycle.InitialisationException)
at org.mule.config.builders.AbstractConfigurationBuilder.configure(AbstractConfigurationBuilder.java:52)
at org.mule.config.builders.AbstractResourceConfigurationBuilder.configure(AbstractResourceConfigurationBuilder.java:78)
at org.mule.context.DefaultMuleContextFactory.createMuleContext(DefaultMuleContextFactory.java:97)
at org.mule.config.builders.MuleXmlBuilderContextListener.createMuleContext(MuleXmlBuilderContextListener.java:169)
at org.mule.config.builders.MuleXmlBuilderContextListener.initialize(MuleXmlBuilderContextListener.java:98)
at org.mule.config.builders.MuleXmlBuilderContextListener.contextInitialized(MuleXmlBuilderContextListener.java:74)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:983)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1660)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
and then, consistently repeated, we see:
2014-09-19 11:35:19,200 WARN [org.elasticsearch.transport.netty] (elasticsearch[Dominic Fortune][transport_client_worker][T#5]{New I/O worker #5}) - <[Dominic Fortune] Message not fully read (response) for [12] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1@625badaa), error [true], resetting>
Everything was working perfectly fine until the power outage. This is a single-node cluster, and it is running on the same machine as the Java application (CentOS 6.5), so I know this isn't the same issue you keep finding on SO and Google, where the cause is a mismatched version of Elasticsearch and/or Java.
Does anyone know how to recover from this and get back up and running?
Thanks.

Turns out that when the power came back, the restart triggered an auto-update of Elasticsearch, and the upgraded version no longer supported the transport client in use.
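In case someone hits the same thing: the elasticsearch jar on the application classpath reports its own version, and the transport client generally needs to match the server version. A minimal sketch of a client-side check (the class name is a placeholder):
import org.elasticsearch.Version;

public class EsClientVersion {
    public static void main(String[] args) {
        // Version of the elasticsearch jar on the application classpath;
        // compare it with the server version reported by GET / on port 9200.
        System.out.println("Client jar version: " + Version.CURRENT);
    }
}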

Related

SpringBoot throws NoClassDefFoundError at runtime

We use Spring Boot 1.4.7 (web) to deploy an ActiveMQ 5.13.5 broker. The build artifact is a jar file which is launched by the java command. Most of the time it runs smoothly without any problem. However, without the app being restarted, it randomly throws the following exception once or twice a day:
java.io.IOException: Unexpected error occurred: java.lang.NoClassDefFoundError: org/apache/activemq/command/ActiveMQTextMessage
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:222)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/apache/activemq/command/ActiveMQTextMessage
at org.apache.activemq.openwire.v11.ActiveMQTextMessageMarshaller.createObject(ActiveMQTextMessageMarshaller.java:55)
at org.apache.activemq.openwire.OpenWireFormat.doUnmarshal(OpenWireFormat.java:360)
at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:277)
at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:240)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:232)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
... 1 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.apache.activemq.command.ActiveMQTextMessage
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:94)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 common frames omitted
I verified that the class ActiveMQTextMessage had been loaded before the exception occurred by enabling class-loading debug output. I also verified that the system has plenty of free capacity for CPU, memory (heap and metaspace), disk, file handles, and sockets.
I have a few questions here:
Has anyone seen a similar problem and does anyone know the cause and fix?
Does Spring Boot unload classes at runtime and reload them later?
Any hint on how to further debug this type of problem is highly appreciated.
Edit:
The dependency tree for compile and runtime is posted here: https://gist.github.com/liuzy163/8b8f8c7119624b5bdbe30e133d5f5910
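For reference, class load and unload events can be traced with standard HotSpot flags (a sketch for a JDK 8 HotSpot JVM; the jar name is a placeholder):
java -verbose:class -XX:+TraceClassUnloading -jar app.jar
If classes really are being unloaded, the second flag logs each unload event, which would confirm or rule out the unload/reload theory.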

Schema exception on Kafka jdbc sink connect

I am trying to insert rows into my Oracle table using the Kafka JDBC sink connector. I have messages in my Kafka topic (JSON) like the one below:
[{"f1":"qws","f2":"zcz","f3":"SDFF","f4":"f33bfed577bcd7c4625479bd3cd13323--1132061303","f5":null,"f6":null,"f7":"ghSDAgh/akdjytfd/jhsgd","f8":"hsfgd/sdfjghsfjd/jsg","f9":null,"f10":"ASD","f11":"sdfg/vbnm","f12":"S","startTime":"2018-01-30T05:24:41.162","_startTime":"DATE","f13":219,"f14":"http://192.168.0.1:1234/asd/fgh/jkl/zxc/vbn/qwe/rty","f15":"fe80:0:0:0:7501:14d9:b44b:2a95%eth5","f16":1234,"f17":"ABCD-1234","f18":"192.168.0.1","f19":"sdfgd","dfgVO":{"fa1":null,"fa2":"formats","fa3":""qwe.rty.uiop.asd.fgh.jkl.zxc.vbn.asdf#61e97f29"","fa4":7,"fa5":79,"fa6":null,"fa7":"{}","fa8":1517289881381},"f20":null,"f21":"http-drte-1234-uik-7","f22":false,"f23":false,"f24":false}]
My connector configuration is as follows:
name=jdbc-sink-2
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic_1
connection.url=jdbc:oracle:thin:@192.168.0.1:1521:user01
connection.user=USER1
connection.password=PASSWD1
auto.create=true
table.name.format=MY_TABLE_2
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
producer.retries=1
When I start the connector, I get the error below:
[2018-01-30 11:16:55,417] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:308)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:406)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:250)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:16:55,422] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
Then I added the configurations below to my existing connector configuration:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
Now I am getting another error:
[2018-01-30 11:36:58,118] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: MY_TABLE_2
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:190)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:58)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:65)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:62)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,123] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-01-30 11:36:58,124] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,125] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
This suggests that I need to reshape my Kafka messages into the schema/payload format. I cannot modify my Kafka message format, since it is published by someone else. How can I fix this error?
Thank you.
Per the docs, if you want to use the JDBC sink, you need to provide a schema. You can do this either by using Avro plus the Schema Registry, or by using JSON with an embedded schema. You can see a sample of the expected JSON structure here.
Where is your data coming from? If it's a Kafka Connect source, you can just use Avro or JSON with schemas enabled. If it's elsewhere, you'll need to amend it so the data includes a schema; the Avro serialiser provided with the Schema Registry can do just this for you.
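For illustration, a minimal sketch of the schema/payload envelope that JsonConverter expects when schemas.enable=true (only two of the sample fields are shown; the struct name and the field types are assumptions based on the sample message):
{
  "schema": {
    "type": "struct",
    "name": "my_record",
    "optional": false,
    "fields": [
      {"field": "f1", "type": "string", "optional": true},
      {"field": "f13", "type": "int32", "optional": true}
    ]
  },
  "payload": {"f1": "qws", "f13": 219}
}
With messages in this shape, schemas.enable can stay at its default of true and the sink connector can derive the table columns from the schema.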

How can I use spring-boot/spring-data-cassandra to connect to Cassandra 3.3.0? [duplicate]

I am trying to configure Spring Data with Cassandra, but I am getting the error below when my app is deployed in Tomcat.
When I check the connection, the given port (127.0.0.1:9042) is reachable. I have included the stack trace and Spring configuration below.
Does anyone have an idea about this error?
Full stack trace:
2015-12-06 17:46:25 ERROR web.context.ContextLoader:331 - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession': Invocation of init method failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1572)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:736)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:759)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:434)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4994)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5492)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1230)
at com.datastax.driver.core.Cluster.init(Cluster.java:157)
at com.datastax.driver.core.Cluster.connect(Cluster.java:245)
at com.datastax.driver.core.Cluster.connect(Cluster.java:278)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.afterPropertiesSet(CassandraCqlSessionFactoryBean.java:82)
at org.springframework.data.cassandra.config.CassandraSessionFactoryBean.afterPropertiesSet(CassandraSessionFactoryBean.java:43)
===================================================================
Spring configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans ...>
    <cassandra:cluster id="cassandraCluster" contact-points="127.0.0.1" port="9042" />
    <cassandra:converter />
    <cassandra:session id="cassandraSession" cluster-ref="cassandraCluster" keyspace-name="blood" />
    <cassandra:template id="cqlTemplate" />
    <cassandra:repositories base-package="com.blood.dao.nosql" />
    <cassandra:mapping entity-base-packages="com.blood.domain.nosql" />
</beans:beans>
The problem is that Spring Data Cassandra (as of December 2015 when I write this) does not provide support for Cassandra 3.x. Here's an excerpt from a conversation with one of the developers in the #spring channel on freenode:
[13:49] <_amicable> Hi all, does anybody know if spring data cassandra supports cassandra 3.x? All dependencies & datastax drivers seem to be 2.x
[13:49] <#_ollie> amicable: Not in the near future.
[13:49] <_amicable> _ollie: thanks.
[13:50] <_amicable> I'll go and look at the relative merits of 2.x vs 3.x then ;)
[13:51] <#_ollie> SD Cassandra is a community project (so far) and its progress highly depends on how much time the developers can actually spend on it.
[13:51] <#_ollie> We will have someone joining the team in February 2016 to get the project more closely aligned to the core Spring Data projects.
It looks like you are using an older version of the driver with Cassandra 3.0. Cassandra 3.0 changed its internal schema metadata representation, and only the latest drivers can parse this metadata.
Use the Java Cassandra driver 3.0.0-alpha5 to connect to Cassandra 3.0.
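For reference, a minimal sketch of the corresponding dependency (assuming a Maven build; these are the DataStax Java driver coordinates):
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.0.0-alpha5</version>
</dependency>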

FAIL - Application at context path /icescrum could not be started

I am trying to deploy iceScrum using Tomcat. I couldn't start the app in the manager; it shows the FAIL message above. In icescrum.log:
2014-06-24 08:42:26,983 [http-bio-8080-exec-2] ERROR org.springframework.web.context.ContextLoader - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException: Cannot invoke method getAt() on null object
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Cannot invoke method getAt() on null object
... 3 more
I have no idea what went wrong. I followed all the steps mentioned in https://www.kagilum.com/documentation/install-guide/#application-server. Has anyone come across this kind of issue?
P.S. I am an iOS dev.
iceScrum doesn't support Java 8 yet. From the install guide:
iceScrum requires Java 6 or 7. Please note that early versions of Java 6 may not work properly and that Java 8 support is planned but not available yet.
Changing the Java version to 7 fixed the issue, and the app runs without any problems.
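For reference, on CentOS the active JDK can be inspected and switched roughly like this (a sketch; exact package names and command availability vary by distribution):
java -version
sudo alternatives --config java
The second command lists the installed JDKs and lets you pick a Java 7 installation as the system default; Tomcat's JAVA_HOME may also need updating to point at it.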

Spring Oauth2 Provider in Grails - dependency

I want to enable an OAuth2 provider in my web application, which is based on Grails 2.2.2. However, I'm struggling with the spring-security-oauth2-provider plugin.
The spring-security-oauth2-provider plugin uses the spring-security-oauth2 library. I'm trying to run plugin version 1.0.4-SNAPSHOT from Git, which uses the spring-security-oauth2-1.0.4.RELEASE library.
After plugin installation, my application will not start, saying that it could not initialize the "oauth2ProviderFilter" bean, with this exception and stack trace:
| Error 2013-06-15 23:51:51,434 [localhost-startStop-1] ERROR context.GrailsContextLoader - Error initializing the application: Error creating bean with name 'oauth2ProviderFilter': Initialization of bean failed; nested exception is java.lang.reflect.MalformedParameterizedTypeException
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'oauth2ProviderFilter': Initialization of bean failed; nested exception is java.lang.reflect.MalformedParameterizedTypeException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
at org.codehaus.groovy.grails.commons.spring.ReloadAwareAutowireCapableBeanFactory.doCreateBean(ReloadAwareAutowireCapableBeanFactory.java:122)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:607)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:925)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:472)
at org.codehaus.groovy.grails.commons.spring.DefaultRuntimeSpringConfiguration.getApplicationContext(DefaultRuntimeSpringConfiguration.java:153)
at org.codehaus.groovy.grails.commons.spring.GrailsRuntimeConfigurator.configure(GrailsRuntimeConfigurator.java:170)
at org.codehaus.groovy.grails.commons.spring.GrailsRuntimeConfigurator.configure(GrailsRuntimeConfigurator.java:127)
at org.codehaus.groovy.grails.web.context.GrailsConfigUtils.configureWebApplicationContext(GrailsConfigUtils.java:121)
at org.codehaus.groovy.grails.web.context.GrailsContextLoader.initWebApplicationContext(GrailsContextLoader.java:107)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4887)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5381)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.reflect.MalformedParameterizedTypeException
at sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.validateConstructorArguments(ParameterizedTypeImpl.java:60)
at sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.<init>(ParameterizedTypeImpl.java:53)
at sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.make(ParameterizedTypeImpl.java:95)
at sun.reflect.generics.factory.CoreReflectionFactory.makeParameterizedType(CoreReflectionFactory.java:105)
at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:140)
at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
at sun.reflect.generics.repository.ConstructorRepository.getParameterTypes(ConstructorRepository.java:94)
at java.lang.reflect.Method.getGenericParameterTypes(Method.java:291)
at java.beans.FeatureDescriptor.getParameterTypes(FeatureDescriptor.java:387)
at java.beans.MethodDescriptor.setMethod(MethodDescriptor.java:114)
at java.beans.MethodDescriptor.<init>(MethodDescriptor.java:72)
at java.beans.MethodDescriptor.<init>(MethodDescriptor.java:56)
at java.beans.Introspector.getTargetMethodInfo(Introspector.java:1130)
at java.beans.Introspector.getBeanInfo(Introspector.java:414)
at java.beans.Introspector.getBeanInfo(Introspector.java:161)
at org.springframework.beans.CachedIntrospectionResults.<init>(CachedIntrospectionResults.java:217)
at org.springframework.beans.CachedIntrospectionResults.forClass(CachedIntrospectionResults.java:142)
at org.springframework.beans.BeanWrapperImpl.getCachedIntrospectionResults(BeanWrapperImpl.java:324)
at org.springframework.beans.BeanWrapperImpl.getPropertyDescriptors(BeanWrapperImpl.java:331)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.filterPropertyDescriptorsForDependencyCheck(AbstractAutowireCapableBeanFactory.java:1242)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1101)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
After investigating the oauth2 library documentation (here), I found out that oauth2ProviderFilter is an instance of OAuth2AuthenticationProcessingFilter. After inspecting the source of this class (on GitHub), it looks like there are a few properties (authenticationEntryPoint, authenticationManager, authenticationDetailsSource) that need to be instantiated. I thought there might be a problem with dependency injection, so I tried to define the oauth2ProviderFilter bean in resources.groovy with references to instances defined by the Spring Security Core plugin.
I placed this in my resources.groovy:
beans = {
    oauth2ProviderFilter(OAuth2AuthenticationProcessingFilter) {
        authenticationEntryPoint = ref('basicAuthenticationEntryPoint')
        authenticationManager = ref('authenticationManager')
        authenticationDetailsSource = ref('authenticationDetailsSource')
    }
}
That did not solve the problem; there is still an error saying the filter can't be instantiated.
Since I'm no Spring expert: do you think this error might be connected with dependency injection during bean creation? Where might the problem be?
Is it possible that the spring-security-oauth2 library is designed for a different version of the Spring Framework than the one Grails uses, and that this mismatch causes the problem?
What next steps could I take to find the cause of the problem and eventually fix it?
What happens when you run the release version of the plugin and not the SNAPSHOT? I have found that the latest versions of Grails often break existing plugins. So what I would do is:
1. Try to use the release version of the plugin with your Grails 2.2.2.
2. If that does not work, step back your Grails version and try an older one with the released plugin.
3. If an older version does not work either, I might be missing something in my setup, so I would try to figure out what it is.
4. If things work in older versions and the setup looks correct, ask on the Grails Nabble forum what is up and/or open a JIRA ticket.
Another thing to consider is how young or old the plugin is and how many bugs it has had. We have sometimes decided to debug a plugin because it was a worthwhile effort, and other times decided not to use one because it had too many issues and would have held us back in the future.
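As a sketch of step 1 above, the released plugin can be pinned in grails-app/conf/BuildConfig.groovy (the version number shown is hypothetical; check the plugin portal for the actual latest release):
plugins {
    // hypothetical released version instead of 1.0.4-SNAPSHOT
    compile ":spring-security-oauth2-provider:1.0.4"
}
After changing the version, run grails clean and grails refresh-dependencies so the old snapshot is not picked up from the local cache.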
