I am trying to insert rows into my Oracle table using the Kafka JDBC sink connector. I have messages in my Kafka topic (JSON) like the one below:
[{"f1":"qws","f2":"zcz","f3":"SDFF","f4":"f33bfed577bcd7c4625479bd3cd13323--1132061303","f5":null,"f6":null,"f7":"ghSDAgh/akdjytfd/jhsgd","f8":"hsfgd/sdfjghsfjd/jsg","f9":null,"f10":"ASD","f11":"sdfg/vbnm","f12":"S","startTime":"2018-01-30T05:24:41.162","_startTime":"DATE","f13":219,"f14":"http://192.168.0.1:1234/asd/fgh/jkl/zxc/vbn/qwe/rty","f15":"fe80:0:0:0:7501:14d9:b44b:2a95%eth5","f16":1234,"f17":"ABCD-1234","f18":"192.168.0.1","f19":"sdfgd","dfgVO":{"fa1":null,"fa2":"formats","fa3":""qwe.rty.uiop.asd.fgh.jkl.zxc.vbn.asdf#61e97f29"","fa4":7,"fa5":79,"fa6":null,"fa7":"{}","fa8":1517289881381},"f20":null,"f21":"http-drte-1234-uik-7","f22":false,"f23":false,"f24":false}]
My connector configuration is as follows:
name=jdbc-sink-2
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic_1
connection.url=jdbc:oracle:thin:@192.168.0.1:1521:user01
connection.user=USER1
connection.password=PASSWD1
auto.create=true
table.name.format=MY_TABLE_2
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
producer.retries=1
When I start the connector, I get the error below:
[2018-01-30 11:16:55,417] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:308)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:406)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:250)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:16:55,422] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
Then I added the configurations below to my existing connector configuration:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
Now I am getting another error:
[2018-01-30 11:36:58,118] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: MY_TABLE_2
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:190)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:58)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:65)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:62)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,123] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-01-30 11:36:58,124] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,125] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
This suggests that I need to reshape my Kafka messages into the key/value schema format. I cannot modify the Kafka message format since it is published by someone else. How can I fix this error?
Thank you.
Per the docs, if you want to use the JDBC sink, you need to provide a schema. You can do this either by using Avro with the Schema Registry, or by using JSON with an embedded schema. You can see a sample of the expected JSON structure here.
Where is your data coming from? If it's a Kafka Connect source, you can just use Avro or JSON with schemas enabled. If it's coming from elsewhere, you'll need to amend it so that the data includes a schema - the Avro serialiser provided with the Schema Registry can do just this for you.
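For reference, this is roughly the envelope the JsonConverter expects when schemas.enable=true - a minimal sketch, with the field list trimmed to two fields from the sample message above purely for illustration:

{
  "schema": {
    "type": "struct",
    "name": "my_topic_1.record",
    "optional": false,
    "fields": [
      {"field": "f1", "type": "string", "optional": true},
      {"field": "f13", "type": "int64", "optional": true}
    ]
  },
  "payload": {
    "f1": "qws",
    "f13": 219
  }
}

Every record on the topic has to carry that envelope, which is why re-publishing through the Avro serialiser is usually the less invasive route when you don't control the producer.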
Related
I am trying to send a data object from one Spring service to another using Kafka.
The problem is that Kafka is not able to resolve the class name, and hence is unable to map the consumer's class to the producer's class.
Following is the error message:
Caused by: org.springframework.messaging.converter.MessageConversionException: failed to resolve class name. Caused by: java.lang.ClassNotFoundException
I have tried to map the classes using the following properties:
For the producer: spring.kafka.producer.properties.spring.json.type.mapping=event:com.ankit.orderservice.event.OrderPlacedEvent
For the consumer: spring.kafka.consumer.properties.spring.json.type.mapping: event:com.ankit.OrderPlacedEvent
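For context, the token before the colon (event) must be identical on both sides, while the fully-qualified class after it can differ per service - that is the point of the mapping. The deserializer also has to trust the package of the target class. A sketch using the package names from the question (treat them as placeholders for your actual classes):

# producer side: token 'event' stands in for the producer's event class
spring.kafka.producer.properties.spring.json.type.mapping=event:com.ankit.orderservice.event.OrderPlacedEvent
# consumer side: the same token maps to the consumer's local copy of the class
spring.kafka.consumer.properties.spring.json.type.mapping=event:com.ankit.OrderPlacedEvent
# the JsonDeserializer must trust the package of the target class
spring.kafka.consumer.properties.spring.json.trusted.packages=com.ankit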
I'm trying to set up a simple test in Apache NiFi to connect to an existing PostgreSQL instance. I'm able to connect outside of NiFi using other tools like DBeaver, and am fairly sure my connection string is correct. I have tried putting the PostgreSQL JDBC driver in all sorts of places, but still keep seeing the "No suitable driver" error. I'll include some screenshots of my DBCPConnectionPool controller as well as my stack traces.
I have seen other posts like this, but none of them seem to lead to any solutions for me. Any help is appreciated.
Stack Trace
2019-11-05 23:50:09,933 ERROR [Timer-Driven Process Thread-2] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=3d68fb42-016e-1000-0ea4-abcc7dcc2e48] Unable to execute SQL select query select * from records; due to org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create JDBC driver of class 'org.postgresql.Driver' for connect URL 'jdbc:postgres://salt.db.elephantsql.com:5432/oickotoy'. No FlowFile to route to failure: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create JDBC driver of class 'org.postgresql.Driver' for connect URL 'jdbc:postgres://salt.db.elephantsql.com:5432/oickotoy'
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create JDBC driver of class 'org.postgresql.Driver' for connect URL 'jdbc:postgres://salt.db.elephantsql.com:5432/oickotoy'
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:442)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor609.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:87)
at com.sun.proxy.$Proxy91.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:223)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot create JDBC driver of class 'org.postgresql.Driver' for connect URL 'jdbc:postgres://salt.db.elephantsql.com:5432/oickotoy'
at org.apache.commons.dbcp2.DriverFactory.createDriver(DriverFactory.java:75)
at org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:472)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:538)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:753)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:438)
... 19 common frames omitted
Caused by: java.sql.SQLException: No suitable driver
at org.apache.commons.dbcp2.DriverFactory.createDriver(DriverFactory.java:68)
... 23 common frames omitted
What solved this issue for me, oddly enough, was deleting the Database Connection URL (jdbc:postgresql://.....), applying the empty connection string to the controller service, then re-typing the connection string and applying it to the controller service again.
It seems like some kind of special character caused this hiccup.
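Worth checking as well: in the stack trace the URL reads jdbc:postgres://, but the PostgreSQL driver only registers itself for URLs starting with jdbc:postgresql://, which by itself produces "No suitable driver". A minimal DBCPConnectionPool setup for this database would look something like the sketch below (the driver jar path is illustrative):

Database Connection URL: jdbc:postgresql://salt.db.elephantsql.com:5432/oickotoy
Database Driver Class Name: org.postgresql.Driver
Database Driver Location(s): /opt/nifi/drivers/postgresql-42.2.8.jar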
What ended up solving this for me was a version problem. I was apparently working on a dev release of NiFi in which this was broken. After I upgraded to the latest stable release, my problem went away.
I am trying to configure WSO2 API Manager 2.0.0 with EnterpriseDB Advanced Server (Postgres) 9.5.
I have configured the Postgres JDBC driver (postgresql-9.4.1212.jre7.jar) with it and set up all the data sources required for WSO2 AM.
I am getting the following error when I start the WSO2 AM server; please advise what is wrong here.
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
Caused by: java.sql.SQLException: Uncaught underlying exception.
Caused by: java.lang.NullPointerException: tuples must be non-null
at org.postgresql.jdbc.PgResultSet.<init>(PgResultSet.java:147)
at org.postgresql.jdbc.PgStatement.createResultSet(PgStatement.java:161)
at org.postgresql.jdbc.PgStatement$StatementResultHandler.handleResultRows(PgStatement.java:213)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2037)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:291)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:432)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:358)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:305)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:269)
at org.postgresql.jdbc.PgConnection.execSQLUpdate(PgConnection.java:480)
at org.postgresql.jdbc.PgConnection.getTransactionIsolation(PgConnection.java:850)
Please note: when I try to configure the EDB JDBC driver (edb-jdbc17.jar) instead, it gives a different error: Caused by: java.lang.Exception: Unsupported database: EnterpriseDB. Database will not be created automatically by the WSO2 Registry. Please create the database using appropriate database scripts for the database.
Creating the DB with scripts did not help.
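For reference, a minimal sketch of one of the required entries in repository/conf/datasources/master-datasources.xml, assuming the stock PostgreSQL driver is pointed at the EDB server (the datasource name follows WSO2 conventions, but the host, port, database name, and credentials here are illustrative):

<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- EDB Advanced Server speaks the Postgres wire protocol, so the stock
                 driver can usually be used; 5444 is EDB's default port -->
            <url>jdbc:postgresql://localhost:5444/apimgtdb</url>
            <username>apimuser</username>
            <password>apimpass</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <validationQuery>SELECT 1</validationQuery>
            <testOnBorrow>true</testOnBorrow>
        </configuration>
    </definition>
</datasource>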
Yesterday we had a power outage and were able to get all of our machines back online with the exception of one box.
When firing up our application, we see the following in the log:
Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.levelsbeyond.search.elasticsearch.ElasticSearchTransportClientProvider]: Constructor threw exception; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: No node available (org.mule.api.lifecycle.InitialisationException)
at org.mule.config.builders.AbstractConfigurationBuilder.configure(AbstractConfigurationBuilder.java:52)
at org.mule.config.builders.AbstractResourceConfigurationBuilder.configure(AbstractResourceConfigurationBuilder.java:78)
at org.mule.context.DefaultMuleContextFactory.createMuleContext(DefaultMuleContextFactory.java:97)
at org.mule.config.builders.MuleXmlBuilderContextListener.createMuleContext(MuleXmlBuilderContextListener.java:169)
at org.mule.config.builders.MuleXmlBuilderContextListener.initialize(MuleXmlBuilderContextListener.java:98)
at org.mule.config.builders.MuleXmlBuilderContextListener.contextInitialized(MuleXmlBuilderContextListener.java:74)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:983)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1660)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
and then, consistently repeated, we see
2014-09-19 11:35:19,200 WARN [org.elasticsearch.transport.netty] (elasticsearch[Dominic Fortune][transport_client_worker][T#5]{New I/O worker #5}) - <[Dominic Fortune] Message not fully read (response) for [12] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1#625badaa), error [true], resetting>
Everything was working perfectly fine until the power outage. This is a single-node cluster running on the same machine as the Java application (CentOS 6.5), so I know this isn't the same issue you keep finding on SO and on Google, where the cause is a mismatch between versions of Elasticsearch and/or Java.
Does anyone know how to recover from this and get back up and running?
Thanks.
It turns out that when the power went out, the restart triggered an auto-update of Elasticsearch, and the upgraded version didn't support the transport drivers in use.
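One way to keep this from recurring, assuming the package is managed by yum as is usual on CentOS 6.5, is to pin it so an unattended update can't change the cluster version out from under the transport client:

# /etc/yum.conf - stop yum from upgrading the elasticsearch package
exclude=elasticsearch*

The transport client is only compatible with a cluster running the same version of Elasticsearch, so the client jar in the application and the server package need to move in lockstep.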
I have two instances of WSO2 ESB 4.6 running at ports 9443 (esb1) and 9446 (esb2), and I am also using Message Broker 2.0.1 at 9444. I am following this URL to perform my task: http://wso2.org/library/articles/2013/03/configuring-wso2-esb-wso2-message-broker. I have done the queue-to-queue send/receive example from that link, and everything is working fine. But the problem is that when I post any message to esb1, it gets reflected to esb2, since esb2 is working as my subscriber. I want the message store to store the messages passed on from esb1 and, based on some event, provide esb2 with the messages sent by esb1.
Thanks in advance.
When I make my message processor active, I keep getting this error continuously:
[2013-04-08 17:58:56,658] ERROR - JobRunShell Job synapse.message.processor.quartz.Processor2-forward job threw an unhandled Exception:
java.lang.NullPointerException
at org.wso2.carbon.message.store.persistence.jms.util.JMSUtil.createConnection(JMSUtil.java:46)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.createConnection(JMSMessageStore.java:577)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.getReadConnection(JMSMessageStore.java:517)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.peek(JMSMessageStore.java:239)
at org.apache.synapse.message.processors.forward.ForwardingJob.execute(ForwardingJob.java:88)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[2013-04-08 17:58:56,669] ERROR - ErrorLogger Job (synapse.message.processor.quartz.Processor2-forward job threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.NullPointerException]
at org.quartz.core.JobRunShell.run(JobRunShell.java:224)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
Caused by: java.lang.NullPointerException
at org.wso2.carbon.message.store.persistence.jms.util.JMSUtil.createConnection(JMSUtil.java:46)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.createConnection(JMSMessageStore.java:577)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.getReadConnection(JMSMessageStore.java:517)
at org.wso2.carbon.message.store.persistence.jms.JMSMessageStore.peek(JMSMessageStore.java:239)
at org.apache.synapse.message.processors.forward.ForwardingJob.execute(ForwardingJob.java:88)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
... 1 more
It looks like there's an issue in your jndi.properties configuration. Make sure the connection factory configuration is valid, as that appears to be what caused the reported error. In the message store implementation, the value of the "connection factory" parameter defaults to "QueueConnectionFactory". If you're specifying a connection factory with a different name, having removed the default one (QueueConnectionFactory) from the configuration, make sure you use that connection factory name in the corresponding element of the message store configuration.
Hope this helps!
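For reference, a minimal jndi.properties sketch for a WSO2 MB-backed message store, assuming the default admin credentials and MB's AMQP listener on port 5673 as in the article linked above (adjust names, credentials, and ports to your setup):

# connection factory the message store looks up; the name on the left of the '='
# must match the "connection factory" parameter in the message store configuration
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
# the destination queue the store writes to (JNDI name = physical queue name)
queue.JMSMS = JMSMS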