Mule - Database Connector - Insert - Parameter Type as Expression or Bean Reference ( Oracle Data Type RAW(16) | GUID | UUID ) - oracle

How can I correctly use the Parameter Types field as an Expression or Bean reference, for example with the Insert operation of the MuleSoft Database Connector?
Configure Database Connector Data Types Examples - Mule 4 | Configure the Parameter Types Field in Studio | MuleSoft Documentation (docs.mulesoft.com)
Database Connector Reference 1.13 - Mule 4 | Parameter Type Definition | MuleSoft Documentation
We need to dynamically pass a reference for a non-default database type. For example, Oracle's RAW(16) type fails with the following error when it is set only through Input Parameters:
Message : Invalid column type: 1111.
Error type : DB:QUERY_EXECUTION
So we need Parameter Types, as explained in the documentation below:
Configure Database Connector Data Types Examples - Mule 4 | MuleSoft Documentation (docs.mulesoft.com)
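For reference, that page configures the Parameter Types field statically with child elements. A minimal sketch of that form (the table, column, and SQL here are illustrative only; the key and type mirror my attempt below) would be:
<db:insert doc:name="Insert" config-ref="Database_Config_Oracle">
    <db:sql><![CDATA[INSERT INTO MY_TABLE (ID) VALUES (:ID)]]></db:sql>
    <!-- static form shown in the Data Types Examples page; key and type are illustrative -->
    <db:parameter-types>
        <db:parameter-type key="ID" type="LONGNVARCHAR" />
    </db:parameter-types>
    <db:input-parameters><![CDATA[#[vars.db.inputParameters]]]></db:input-parameters>
</db:insert>
What I need is the dynamic equivalent of this, since the SQL itself is already passed in at runtime.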
One way that I tried is: [{'key': "ID", 'type': "LONGNVARCHAR"}]
In the XML:
<db:insert doc:name="Insert" doc:id="4ee50969-884b-4eb7-93e0-adca25229683"
config-ref="Database_Config_Oracle"
queryTimeoutUnit="DAYS"
autoGenerateKeys="true"
parameterTypes="#[[{'key': "ID", 'type': "LONGNVARCHAR"}]]">
<db:sql><![CDATA[#[ vars.db.query ]]]></db:sql>
<db:input-parameters><![CDATA[#[vars.db.inputParameters]]]></db:input-parameters>
<db:auto-generated-keys-column-names />
</db:insert>
But configured this way, it fails with the error:
""java.lang.IllegalStateException - No read or write handler for type
java.lang.IllegalStateException: No read or write handler for type
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition._type$lzycompute(PropertyDefinition.scala:44)
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition._type(PropertyDefinition.scala:35)
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition.classType(PropertyDefinition.scala:70)
at org.mule.weave.v2.module.pojo.writer.entry.BeanPropertyEntry.entryType(BeanPropertyEntry.scala:24)
at org.mule.weave.v2.module.pojo.writer.WriterEntry.putValue(WriterEntry.scala:18)
at org.mule.weave.v2.module.pojo.writer.WriterEntry.putValue$(WriterEntry.scala:11)
at org.mule.weave.v2.module.pojo.writer.entry.BeanPropertyEntry.putValue(BeanPropertyEntry.scala:19)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.write(JavaWriter.scala:62)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeSimpleJavaValue(JavaWriter.scala:419)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:268)
at org.mule.weave.v2.module.writer.WriterWithAttributes.internalWriteValue(WriterWithAttributes.scala:35)
at org.mule.weave.v2.module.writer.WriterWithAttributes.internalWriteValue$(WriterWithAttributes.scala:34)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.internalWriteValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.writer.WriterWithAttributes.writeAttributesAndValue(WriterWithAttributes.scala:30)
at org.mule.weave.v2.module.writer.WriterWithAttributes.writeAttributesAndValue$(WriterWithAttributes.scala:15)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeAttributesAndValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:241)
at org.mule.weave.v2.module.writer.Writer.writeValue(Writer.scala:65)
at org.mule.weave.v2.module.writer.Writer.writeValue$(Writer.scala:46)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:216)
at org.mule.weave.v2.module.writer.Writer.writeValue(Writer.scala:65)
at org.mule.weave.v2.module.writer.Writer.writeValue$(Writer.scala:46)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.java.JavaInvocationHelper$.transformToJavaCollection(JavaInvokeFunction.scala:105)
at org.mule.weave.v2.el.utils.DataTypeHelper$.transformToJava(DataTypeHelper.scala:166)
at org.mule.weave.v2.el.utils.DataTypeHelper$.transformToJavaDataType(DataTypeHelper.scala:153)
at org.mule.weave.v2.el.utils.DataTypeHelper$.toJavaValue(DataTypeHelper.scala:104)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.evaluate(WeaveExpressionLanguageSession.scala:253)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.$anonfun$evaluate$4(WeaveExpressionLanguageSession.scala:135)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.doEvaluate(WeaveExpressionLanguageSession.scala:268)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.evaluate(WeaveExpressionLanguageSession.scala:134)
at org.mule.runtime.core.internal.el.dataweave.DataWeaveExpressionLanguageAdaptor$1.evaluate(DataWeaveExpressionLanguageAdaptor.java:321)
at org.mule.runtime.core.internal.el.DefaultExpressionManagerSession.evaluate(DefaultExpressionManagerSession.java:117)
at org.mule.runtime.core.privileged.util.attribute.ExpressionAttributeEvaluatorDelegate.resolveExpressionWithSession(ExpressionAttributeEvaluatorDelegate.java:68)
at org.mule.runtime.core.privileged.util.attribute.ExpressionAttributeEvaluatorDelegate.resolve(ExpressionAttributeEvaluatorDelegate.java:56)
at org.mule.runtime.core.privileged.util.AttributeEvaluator.resolveTypedValue(AttributeEvaluator.java:107)
at org.mule.runtime.module.extension.internal.runtime.resolver.ExpressionValueResolver.resolveTypedValue(ExpressionValueResolver.java:115)
at org.mule.runtime.module.extension.internal.runtime.resolver.ExpressionValueResolver.resolve(ExpressionValueResolver.java:99)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeValueResolverWrapper.lambda$initialise$0(TypeSafeValueResolverWrapper.java:69)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeValueResolverWrapper.resolve(TypeSafeValueResolverWrapper.java:52)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeExpressionValueResolver.resolve(TypeSafeExpressionValueResolver.java:73)
at org.mule.runtime.module.extension.internal.runtime.resolver.ResolverUtils.resolveRecursively(ResolverUtils.java:92)
at org.mule.runtime.module.extension.internal.runtime.resolver.ResolverSet.resolve(ResolverSet.java:113)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.getResolutionResult(ComponentMessageProcessor.java:1258)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.addContextToEvent(ComponentMessageProcessor.java:762)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.lambda$null$5(ComponentMessageProcessor.java:354)
at reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:273)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:496)
at org.mule.runtime.core.privileged.processor.chain.AbstractMessageProcessorChain$2.onNext(AbstractMessageProcessorChain.java:490)
at org.mule.runtime.core.privileged.processor.chain.AbstractMessageProcessorChain$2.onNext(AbstractMessageProcessorChain.java:485)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:127)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:204)
at reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber.onNext(FluxOnAssembly.java:351)
at reactor.core.publisher.FluxSubscribeOnValue$ScheduledScalar.run(FluxSubscribeOnValue.java:178)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:50)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:27)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:151)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748), while writing Java at
1| [{'key': "ID", 'type': "LONGNVARCHAR"}]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^.
1| [{'key': "ID", 'type': "LONGNVARCHAR"}]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trace:
at anonymous::main (line: 1, column: 2)" evaluating expression: "[{'key': "ID", 'type': "LONGNVARCHAR"}]"."
Reference: Database Connector Data Types Examples - Parameter Types | How to write as Expression or Bean reference · Issue #2050 · mulesoft/docs-connectors (github.com)

Related

ElasticSearch hive SerializationError handler

Using Elasticsearch version 6.8.0
hive> select * from provider1;
OK
{"id","k11",}
{"id","k12",}
{"id","k13",}
{"id","k14",}
{"id":"K1","name":"Ravi","salary":500}
{"id":"K2","name":"Ravi","salary":500}
{"id":"K3","name":"Ravi","salary":500}
{"id":"K4","name":"Ravi","salary":500}
{"id":"K5","name":"Ravi","salary":500}
{"id":"K6","name":"Ravi","salary":"sdfgg"}
{"id":"K7","name":"Ravi","salary":"sdf"}
{"id":"k8"}
{"id":"K9","name":"r1","salary":522}
{"id":"k10","name":"r2","salary":53}
Time taken: 0.179 seconds, Fetched: 14 row(s)
ADD JAR /home/smrafi/elasticsearch-hadoop-6.8.0/dist/elasticsearch-hadoop-6.8.0.jar;
CREATE external TABLE hive_es_with_handler( data STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES(
'es.resource' = 'test_eshadoop/healthCareProvider',
'es.nodes' = 'vpc-pid-pre-prod-es-cluster-b7thvqfj3tp45arxl34gge3yyi.us-east-2.es.amazonaws.com',
'es.input.json' = 'yes',
'es.index.auto.create' = 'true',
'es.write.operation'='upsert',
'es.nodes.wan.only' = 'true',
'es.port' = '443',
'es.net.ssl'='true',
'es.batch.size.entries'='1',
'es.mapping.id' ='id',
'es.batch.write.retry.count'='-1',
'es.batch.write.retry.wait'='60s',
'es.write.rest.error.handlers' = 'es, ignoreBadRecords',
'es.write.data.error.handlers' = 'customLog',
'es.write.data.error.handler.customLog' = 'com.verisys.elshandler.CustomLogOnError',
'es.write.rest.error.handler.es.client.resource'="error_es_index/error",
'es.write.rest.error.handler.es.return.default'='HANDLED',
'es.write.rest.error.handler.log.logger.name' = 'BulkErrors',
'es.write.data.error.handler.log.logger.name' = 'SerializationErrors',
'es.write.rest.error.handler.ignoreBadRecords' = 'com.verisys.elshandler.IgnoreBadRecordHandler',
'es.write.rest.error.handler.es.return.error'='HANDLED');
insert into hive_es_with_handler10 select * from provider1;
Below is the exception trace; it failed complaining that the error-handler index is not present:
Caused by: org.elasticsearch.hadoop.serialization.EsHadoopSerializationException: org.codehaus.jackson.JsonParseException: Unexpected character (',' (code 44)): was expecting a colon to separate field name and value at [Source: [B#1e3f0aea; line: 1, column: 7]
at org.elasticsearch.hadoop.serialization.json.JacksonJsonParser.nextToken(JacksonJsonParser.java:95)
at org.elasticsearch.hadoop.serialization.ParsingUtils.doFind(ParsingUtils.java:168)
at org.elasticsearch.hadoop.serialization.ParsingUtils.values(ParsingUtils.java:151)
at org.elasticsearch.hadoop.serialization.field.JsonFieldExtractors.process(JsonFieldExtractors.java:213)
at org.elasticsearch.hadoop.serialization.bulk.JsonTemplatedBulk.preProcess(JsonTemplatedBulk.java:64)
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.write(TemplatedBulk.java:54)
at org.elasticsearch.hadoop.hive.EsSerDe.serialize(EsSerDe.java:171)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:725)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:148)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
... 9 more
Caused by: org.codehaus.jackson.JsonParseException: Unexpected character (',' (code 44)): was expecting a colon to separate field name and value at [Source: [B#1e3f0aea; line: 1, column: 7]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1433)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:521)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:442)
at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:500)
at org.elasticsearch.hadoop.serialization.json.JacksonJsonParser.nextToken(JacksonJsonParser.java:93) ... 22 more
I tried to use a custom SerializationErrorHandler, but it is of no use: the handler never comes into context. The job stops completely instead of continuing with the good records, even with the default return value (the HANDLED constant).
It seems you have invalid JSON: rows like {"id","k11",} use a comma where a colon is expected, which is exactly what the Jackson parse error reports.
Also, as mentioned in the docs, serialization error handlers are not supported for Hive:
Serialization Error Handlers are not yet available for Hive. Elasticsearch for Apache Hadoop uses Hive’s SerDe constructs to convert data into bulk entries before being sent to the output format. SerDe objects do not have a cleanup method that is called when the object ends its lifecycle. Because of this, we do not support serialization error handlers in Hive as they cannot be closed at the end of the job execution

Error using Polybase to load Parquet file: class java.lang.Integer cannot be cast to class parquet.io.api.Binary

I have a snappy.parquet file with a schema like this:
{
"type": "struct",
"fields": [{
"name": "MyTinyInt",
"type": "byte",
"nullable": true,
"metadata": {}
}
...
]
}
Update: parquet-tools reveals this:
############ Column(MyTinyInt) ############
name: MyTinyInt
path: MyTinyInt
max_definition_level: 1
max_repetition_level: 0
physical_type: INT32
logical_type: Int(bitWidth=8, isSigned=true)
converted_type (legacy): INT_8
When I try to run a stored procedure in Azure Data Studio to load this into an external staging table with PolyBase, I get the error:
11:16:21 Started executing query at Line 113
Msg 106000, Level 16, State 1, Line 1
HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: ClassCastException: class java.lang.Integer cannot be cast to class parquet.io.api.Binary (java.lang.Integer is in module java.base of loader 'bootstrap'; parquet.io.api.Binary is in unnamed module of loader 'app')
The load into the external table works fine with varchar columns only:
CREATE EXTERNAL TABLE [domain].[TempTable]
(
...
MyTinyInt tinyint NULL,
...
)
WITH
(
LOCATION = ''' + #Location + ''',
DATA_SOURCE = datalake,
FILE_FORMAT = parquet_snappy
)
The data will eventually be merged into a Data Warehouse Synapse table. In that table the column will have to be of type tinyint.
I had the same issue and a good support plan in Azure, so I got an answer from Microsoft:
There is a known bug in ADF for this particular scenario: the date type in Parquet should be mapped to data type date in SQL Server; however, ADF incorrectly converts this type to Datetime2, which causes a conflict in PolyBase. I have confirmation from the core engineering team that this will be rectified with a fix by the end of November and will be published directly into the ADF product.
In the meantime, as a workaround:
Create the target table with data type DATE as opposed to DATETIME2
Configure the Copy Activity Sink settings to use Copy Command as opposed to PolyBase
But even the Copy command didn't work for me, so the only remaining workaround was to use Bulk Insert. Bulk Insert is extremely slow, though, and becomes a problem on big datasets.

Validation of FHIR Resources against different aspects listed at https://www.hl7.org/fhir/validation.html using HAPI Library

Getting the below exception when running the code:
FhirContext ctx = FhirContext.forR4();
// Create a FhirInstanceValidator and register it to a validator
FhirValidator validator = ctx.newValidator();
FhirInstanceValidator instanceValidator = new FhirInstanceValidator();
validator.registerValidatorModule(instanceValidator);
/*
* If you want, you can configure settings on the validator to adjust
* its behaviour during validation
*/
instanceValidator.setAnyExtensionsAllowed(true);
// input is Patient resource in String https://www.hl7.org/fhir/patient-example.json.html
ValidationResult result = validator.validateWithResult(input);
I am using the HAPI library to validate a resource (if I am not wrong, this is a Patient resource: https://www.hl7.org/fhir/patient-example.json.html). I have stored this Patient JSON in a string and am trying to validate its:
1: Structure -> I think this can be achieved with parser validation, and that is what I did.
2: Cardinality -> I created two "active": true JSON key-value pairs expecting a cardinality error, but none of the Schema/Schematron validators, parser validation, or the InstanceValidator catches it.
...
How do I validate a resource against the aspects listed at https://www.hl7.org/fhir/validation.html (structure, cardinality, value domains, ...)? Do I have to use all three mechanisms,
that is, the parser, FhirInstanceValidator, and SchemaBaseValidator / SchematronBaseValidator?
Please help, as I am new to FHIR; excuse the basic question.
15:58| INFO | VersionUtil.java 72 | HAPI FHIR version 4.1.0 - Rev 03163c2cf5
15:58| INFO | FhirContext.java 174 | Creating new FHIR context for FHIR version [R4]
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-resources.xml
15:58| INFO | DependencyLogImpl.java 75 | FHIR XML procesing will use StAX implementation 'Woodstox' version '5.1.0'
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-types.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-others.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/extension/extension-definitions.xml
15:58| ERROR | FhirInstanceValidator.java 222 | Failure during validation
java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Exception in thread "main" ca.uhn.fhir.rest.server.exceptions.InternalErrorException: Unexpected failure while validating resource
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:223)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Caused by: java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
Cardinality -> I created two "active:true" Json key-value pair thinking that it will throw cardinality error but neither of SchemxxxValidator / ParseValidator / InstanceValidator working. ...
That's an issue in HAPI - it validates the objects it loads from the JSON, and the JSON parser silently drops the duplicate property key. If you use the validator directly, this won't happen. I believe that this is going to be addressed at some stage
generateSnapshot failed
that's a real issue - I'm not sure why that's not set up, but the validator can't work if snapshots are not being generated
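For what it's worth, the way the three mechanisms are usually wired together in HAPI FHIR 4.x looks roughly like the sketch below (class names as of 4.1.0; SchematronBaseValidator needs the ph-schematron dependency on the classpath, and the class/variable names here are just for illustration):
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.parser.StrictErrorHandler;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.SchemaBaseValidator;
import ca.uhn.fhir.validation.SingleValidationMessage;
import ca.uhn.fhir.validation.ValidationResult;
import ca.uhn.fhir.validation.schematron.SchematronBaseValidator;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator;

public class PatientValidationSketch {
    public static void main(String[] args) {
        // input would hold the Patient JSON from https://www.hl7.org/fhir/patient-example.json.html
        String input = "...";
        FhirContext ctx = FhirContext.forR4();

        // 1. Structure/syntax: a strict handler makes parsers created from this context
        //    fail on malformed or unexpected content instead of silently ignoring it
        ctx.setParserErrorHandler(new StrictErrorHandler());
        IBaseResource patient = ctx.newJsonParser().parseResource(input);

        // 2 + 3. Schema, Schematron and profile (instance) validation on one FhirValidator
        FhirValidator validator = ctx.newValidator();
        validator.registerValidatorModule(new SchemaBaseValidator(ctx));     // XSD-based validation
        validator.registerValidatorModule(new SchematronBaseValidator(ctx)); // Schematron rules
        validator.registerValidatorModule(new FhirInstanceValidator());      // profiles, cardinality, value domains

        ValidationResult result = validator.validateWithResult(patient);
        System.out.println("Valid: " + result.isSuccessful());
        for (SingleValidationMessage msg : result.getMessages()) {
            System.out.println(msg.getSeverity() + " at " + msg.getLocationString() + ": " + msg.getMessage());
        }
    }
}
Note that this still will not catch the duplicated "active" key, since (as mentioned above) the JSON parser drops the duplicate before validation runs, and it does not by itself fix the generateSnapshot exception.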

Issues using Kafka KSQL AVRO table as a source for a Kafka Connect JDBC Sink

I've been struggling with this for about a week now, trying to get a simple (3-field) AVRO-formatted KSQL table working as the source for a JDBC connector sink (MySQL).
I am getting the following error (after the INFO line):
[2018-12-11 18:58:50,678] INFO Setting metadata for table "DSB_ERROR_TABLE_WINDOWED" to Table{name='"DSB_ERROR_TABLE_WINDOWED"', columns=[Column{'MOD_CLASS', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'METHOD', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'COUNT', isPrimaryKey=false, allowsNull=true, sqlType=BIGINT}]} (io.confluent.connect.jdbc.util.TableDefinitions)
[2018-12-11 18:58:50,679] ERROR WorkerSinkTask{id=dev-dsb-errors-mysql-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: DSB_ERROR_TABLE_WINDOWED
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:127)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:79)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:124)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:63)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:75)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I can tell that the sink is doing something properly as the schema is pulled (see just before the error above) and the table is created successfully in the database with the proper schema:
MariaDB [dsb_errors_ksql]> describe DSB_ERROR_TABLE_WINDOWED;
+-----------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| MOD_CLASS | varchar(256) | YES | | NULL | |
| METHOD | varchar(256) | YES | | NULL | |
| COUNT | bigint(20) | YES | | NULL | |
+-----------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
And here is the KTABLE definition:
ksql> describe extended DSB_ERROR_TABLE_windowed;
Name : DSB_ERROR_TABLE_WINDOWED
Type : TABLE
Key field : KSQL_INTERNAL_COL_0|+|KSQL_INTERNAL_COL_1
Key format : STRING
Timestamp field : Not set - using <ROWTIME>
Value format : AVRO
Kafka topic : DSB_ERROR_TABLE_WINDOWED (partitions: 4, replication: 1)
Field | Type
---------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
MOD_CLASS | VARCHAR(STRING)
METHOD | VARCHAR(STRING)
COUNT | BIGINT
---------------------------------------
Queries that write into this TABLE
-----------------------------------
CTAS_DSB_ERROR_TABLE_WINDOWED_37 : create table DSB_ERROR_TABLE_windowed with (value_format='avro') as select mod_class, method, count(*) as count from DSB_ERROR_STREAM window session ( 60 seconds) group by mod_class, method having count(*) > 0;
There is an auto-generated entry in the Schema Registry for this table's value (but no key entry):
{
"subject": "DSB_ERROR_TABLE_WINDOWED-value",
"version": 7,
"id": 143,
"schema": "{\"type\":\"record\",\"name\":\"KsqlDataSourceSchema\",\"namespace\":\"io.confluent.ksql.avro_schemas\",\"fields\":[{\"name\":\"MOD_CLASS\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"METHOD\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"COUNT\",\"type\":[\"null\",\"long\"],\"default\":null}]}"
}
And here is the Kafka Connect sink connector definition:
{ "name": "dev-dsb-errors-mysql-sink",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max": "1",
"topics": "DSB_ERROR_TABLE_WINDOWED",
"connection.url": "jdbc:mysql://os-compute-d01.maeagle.corp:32692/dsb_errors_ksql?user=xxxxxx&password=xxxxxx",
"auto.create": "true",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://kafka-d01.maeagle.corp:8081",
"key.converter": "org.apache.kafka.connect.storage.StringConverter"
}
}
My understanding (which could be wrong) is that KSQL should be creating the appropriate AVRO schemas in the Schema Registry and that Kafka Connect should be able to read those back properly. As I noted above, something is working, since the appropriate table is being generated in MySQL, although I am surprised that there is no key field created...
Most of the posts and examples are using JSON as opposed to AVRO so they haven't been particularly useful.
It seems to fail in the deserialization portion of reading and inserting the topic record...
I am at a loss at this point and could use some guidance.
I have also opened a similiar ticket via github:
https://github.com/confluentinc/ksql/issues/2250
Regards,
--John
As John says above, the key in the topic's record is not a plain string, but a string suffixed with a single Java-serialized 64-bit integer representing the window start time.
Connect does not come with an SMT that can handle this windowed key format. However, it would be possible to write one that strips off the integer and just returns the natural key (see the sketch below). You could then include it on the classpath and update your Connect config.
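A bare-bones SMT along those lines might look like the following sketch. It assumes the sink's key.converter is switched to org.apache.kafka.connect.converters.ByteArrayConverter so the transform sees the raw bytes (natural key encoded as UTF-8 followed by the 8-byte window start); the class name and the assumed key layout are mine, not something shipped with Connect.
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.transforms.Transformation;

// Hypothetical SMT: assumes the key arrives as a byte[] laid out as
// <UTF-8 key bytes><8-byte window start> and rewrites it as a plain STRING key.
public class StripWindowedKey<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final int WINDOW_SUFFIX_BYTES = 8; // 64-bit window start time

    @Override
    public R apply(R record) {
        if (!(record.key() instanceof byte[])) {
            return record; // not the layout we expect; pass through untouched
        }
        byte[] raw = (byte[]) record.key();
        int naturalKeyLength = Math.max(0, raw.length - WINDOW_SUFFIX_BYTES);
        String naturalKey = new String(raw, 0, naturalKeyLength, StandardCharsets.UTF_8);
        return record.newRecord(
                record.topic(), record.kafkaPartition(),
                Schema.STRING_SCHEMA, naturalKey,
                record.valueSchema(), record.value(),
                record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // no configuration options in this sketch
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // nothing to configure
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
You would then reference it from the sink config with something like "transforms": "stripWindowedKey" and "transforms.stripWindowedKey.type": "StripWindowedKey" (names here are hypothetical) alongside the ByteArrayConverter for the key.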
If you require the window start time in the database, you can instead update your ksqlDB query to include the window start time as a field in the value.
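If your ksqlDB version supports selecting the WINDOWSTART pseudo column in a windowed aggregation (recent releases do; check your version), that change to the CTAS above could look roughly like:
create table DSB_ERROR_TABLE_windowed with (value_format='avro') as
  select mod_class, method, windowstart as window_start, count(*) as count
  from DSB_ERROR_STREAM window session (60 seconds)
  group by mod_class, method
  having count(*) > 0;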

Method call: Method toObject(java.util.LinkedHashMap) cannot be found on org.springframework.integration.x.gemfire.JsonStringToObjectTransformer

I am facing the following issue in Spring XD and GemFire:
nested exception is org.springframework.expression.spel.SpelEvaluationException: EL1004E:(pos 8): Method call: Method toObject(java.util.LinkedHashMap) cannot be found on org.springframework.integration.x.gemfire.JsonStringToObjectTransformer type.
Any idea how to fix this?
Following is how we can replicate this issue:
stream create json_test --definition "trigger --fixedDelay=1 |
transform --expression='''{node1 : {node2 : {data1:hello, data2: world}}}''' |
splitter --expression=#jsonPath(payload,'$.node1.node2') |
log" --deploy
We are expecting {data1:hello, data2:world} but are getting {data1=hello, data2=world}, which is causing the issue.
What is the solution for this issue?
The gemfire-json-server sink can only handle incoming String JSON payloads; it looks like you are supplying a LinkedHashMap somehow.
It probably means you have run it through some JSON to object transformation or conversion.
EDIT
The splitter produces a LinkedHashMap - specify an outputType to convert it to JSON...
xd:>stream create json_test --definition "trigger --fixedDelay=1 |
transform --expression='''{node1 : {node2 : {data1:hello, data2: world}}}''' |
splitter --expression=#jsonPath(payload,'$.node1.node2') --outputType=application/json |
log" --deploy
Result...
2017-06-14T09:52:10-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}
2017-06-14T09:52:11-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}
2017-06-14T09:52:12-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}
