Method call: Method toObject(java.util.LinkedHashMap) cannot be found on org.springframework.integration.x.gemfire.JsonStringToObjectTransformer - spring-xd

I am facing the following issue in Spring XD and GemFire:
nested exception is org.springframework.expression.spel.SpelEvaluationException: EL1004E:(pos 8): Method call: Method toObject(java.util.LinkedHashMap) cannot be found on org.springframework.integration.x.gemfire.JsonStringToObjectTransformer type.
Any idea how to fix this?
Following is how we can replicate this issue:
stream create json_test --definition "trigger --fixedDelay=1 |
transform --expression='''{node1 : {node2 : {data1:hello, data2: world}}}''' |
splitter --expression=#jsonPath(payload,'$.node1.node2') |
log" --deploy`
What we are expecting is {data1:hello, data2:world}, but we are getting {data1=hello, data2=world}, which is causing the issue.
What is the solution for this issue?

The gemfire-json-server sink can only handle incoming String JSON payloads; it looks like you are supplying a LinkedHashMap somehow.
That probably means you have run it through some JSON-to-object transformation or conversion.
EDIT
The splitter produces a LinkedHashMap - specify an outputType to convert it to JSON...
xd:>stream create json_test --definition "trigger --fixedDelay=1 |
transform --expression='''{node1 : {node2 : {data1:hello, data2: world}}}''' |
splitter --expression=#jsonPath(payload,'$.node1.node2') --outputType=application/json |
log" --deploy
Result...
2017-06-14T09:52:10-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}
2017-06-14T09:52:11-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}
2017-06-14T09:52:12-0400 1.3.1.RELEASE INFO task-scheduler-2 sink.json_test - {"data1":"hello","data2":"world"}

Related

Mule - Database Connector - Insert - Parameter Type as Expression or Bean Reference (Oracle Data Type RAW(16) | GUID | UUID)

How can I correctly use the Parameter Types as an Expression or Bean reference, for example when using the Insert operation of the MuleSoft Database Connector?
Configure Database Connector Data Types Examples - Mule 4 | Configure the Parameter Types Field in Studio | MuleSoft Documentation (docs.mulesoft.com)
Database Connector Reference 1.13 - Mule 4 | Parameter Type Definition | MuleSoft Documentation
We need to dynamically pass a reference for a non-default database type. For example, in Oracle we have RAW(16), which, when supplied through the Input Parameters, gets the error:
Message : Invalid column type: 1111.
Error type : DB:QUERY_EXECUTION
So, we need Parameter Types as explained in the documentation below:
Configure Database Connector Data Types Examples - Mule 4 | MuleSoft Documentation (docs.mulesoft.com)
One way that I tried is: [{'key': "ID", 'type': "LONGNVARCHAR"}]
In the XML:
<db:insert doc:name="Insert" doc:id="4ee50969-884b-4eb7-93e0-adca25229683"
config-ref="Database_Config_Oracle"
queryTimeoutUnit="DAYS"
autoGenerateKeys="true"
parameterTypes="#[[{'key': "ID", 'type': "LONGNVARCHAR"}]]">
<db:sql><![CDATA[#[ vars.db.query ]]]></db:sql>
<db:input-parameters><![CDATA[#[vars.db.inputParameters]]]></db:input-parameters>
<db:auto-generated-keys-column-names />
</db:insert>
But this way it gets the error:
""java.lang.IllegalStateException - No read or write handler for type
java.lang.IllegalStateException: No read or write handler for type
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition._type$lzycompute(PropertyDefinition.scala:44)
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition._type(PropertyDefinition.scala:35)
at org.mule.weave.v2.module.pojo.reader.PropertyDefinition.classType(PropertyDefinition.scala:70)
at org.mule.weave.v2.module.pojo.writer.entry.BeanPropertyEntry.entryType(BeanPropertyEntry.scala:24)
at org.mule.weave.v2.module.pojo.writer.WriterEntry.putValue(WriterEntry.scala:18)
at org.mule.weave.v2.module.pojo.writer.WriterEntry.putValue$(WriterEntry.scala:11)
at org.mule.weave.v2.module.pojo.writer.entry.BeanPropertyEntry.putValue(BeanPropertyEntry.scala:19)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.write(JavaWriter.scala:62)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeSimpleJavaValue(JavaWriter.scala:419)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:268)
at org.mule.weave.v2.module.writer.WriterWithAttributes.internalWriteValue(WriterWithAttributes.scala:35)
at org.mule.weave.v2.module.writer.WriterWithAttributes.internalWriteValue$(WriterWithAttributes.scala:34)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.internalWriteValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.writer.WriterWithAttributes.writeAttributesAndValue(WriterWithAttributes.scala:30)
at org.mule.weave.v2.module.writer.WriterWithAttributes.writeAttributesAndValue$(WriterWithAttributes.scala:15)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeAttributesAndValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:241)
at org.mule.weave.v2.module.writer.Writer.writeValue(Writer.scala:65)
at org.mule.weave.v2.module.writer.Writer.writeValue$(Writer.scala:46)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.doWriteValue(JavaWriter.scala:216)
at org.mule.weave.v2.module.writer.Writer.writeValue(Writer.scala:65)
at org.mule.weave.v2.module.writer.Writer.writeValue$(Writer.scala:46)
at org.mule.weave.v2.module.pojo.writer.JavaWriter.writeValue(JavaWriter.scala:44)
at org.mule.weave.v2.module.java.JavaInvocationHelper$.transformToJavaCollection(JavaInvokeFunction.scala:105)
at org.mule.weave.v2.el.utils.DataTypeHelper$.transformToJava(DataTypeHelper.scala:166)
at org.mule.weave.v2.el.utils.DataTypeHelper$.transformToJavaDataType(DataTypeHelper.scala:153)
at org.mule.weave.v2.el.utils.DataTypeHelper$.toJavaValue(DataTypeHelper.scala:104)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.evaluate(WeaveExpressionLanguageSession.scala:253)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.$anonfun$evaluate$4(WeaveExpressionLanguageSession.scala:135)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.doEvaluate(WeaveExpressionLanguageSession.scala:268)
at org.mule.weave.v2.el.WeaveExpressionLanguageSession.evaluate(WeaveExpressionLanguageSession.scala:134)
at org.mule.runtime.core.internal.el.dataweave.DataWeaveExpressionLanguageAdaptor$1.evaluate(DataWeaveExpressionLanguageAdaptor.java:321)
at org.mule.runtime.core.internal.el.DefaultExpressionManagerSession.evaluate(DefaultExpressionManagerSession.java:117)
at org.mule.runtime.core.privileged.util.attribute.ExpressionAttributeEvaluatorDelegate.resolveExpressionWithSession(ExpressionAttributeEvaluatorDelegate.java:68)
at org.mule.runtime.core.privileged.util.attribute.ExpressionAttributeEvaluatorDelegate.resolve(ExpressionAttributeEvaluatorDelegate.java:56)
at org.mule.runtime.core.privileged.util.AttributeEvaluator.resolveTypedValue(AttributeEvaluator.java:107)
at org.mule.runtime.module.extension.internal.runtime.resolver.ExpressionValueResolver.resolveTypedValue(ExpressionValueResolver.java:115)
at org.mule.runtime.module.extension.internal.runtime.resolver.ExpressionValueResolver.resolve(ExpressionValueResolver.java:99)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeValueResolverWrapper.lambda$initialise$0(TypeSafeValueResolverWrapper.java:69)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeValueResolverWrapper.resolve(TypeSafeValueResolverWrapper.java:52)
at org.mule.runtime.module.extension.internal.runtime.resolver.TypeSafeExpressionValueResolver.resolve(TypeSafeExpressionValueResolver.java:73)
at org.mule.runtime.module.extension.internal.runtime.resolver.ResolverUtils.resolveRecursively(ResolverUtils.java:92)
at org.mule.runtime.module.extension.internal.runtime.resolver.ResolverSet.resolve(ResolverSet.java:113)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.getResolutionResult(ComponentMessageProcessor.java:1258)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.addContextToEvent(ComponentMessageProcessor.java:762)
at org.mule.runtime.module.extension.internal.runtime.operation.ComponentMessageProcessor.lambda$null$5(ComponentMessageProcessor.java:354)
at reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:273)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:496)
at org.mule.runtime.core.privileged.processor.chain.AbstractMessageProcessorChain$2.onNext(AbstractMessageProcessorChain.java:490)
at org.mule.runtime.core.privileged.processor.chain.AbstractMessageProcessorChain$2.onNext(AbstractMessageProcessorChain.java:485)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:127)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:204)
at reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber.onNext(FluxOnAssembly.java:351)
at reactor.core.publisher.FluxSubscribeOnValue$ScheduledScalar.run(FluxSubscribeOnValue.java:178)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:50)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:27)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:151)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748), while writing Java at
1| [{'key': "ID", 'type': "LONGNVARCHAR"}]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^.
1| [{'key': "ID", 'type': "LONGNVARCHAR"}]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trace:
at anonymous::main (line: 1, column: 2)" evaluating expression: "[{'key': "ID", 'type': "LONGNVARCHAR"}]"."
Reference: Database Connector Data Types Examples - Parameter Types | How to write as Expression or Bean reference · Issue #2050 · mulesoft/docs-connectors (github.com)
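For comparison, the statically declared form described on the docs pages above would look roughly like the sketch below. This is only an illustration: the table, column, and variable names are made up, and the db:parameter-types / db:parameter-type element names should be verified against the Database Connector reference for your connector version.
<db:insert doc:name="Insert" config-ref="Database_Config_Oracle" autoGenerateKeys="true">
    <!-- Hypothetical SQL and variables, for illustration only -->
    <db:sql><![CDATA[INSERT INTO MY_TABLE (ID, NAME) VALUES (:id, :name)]]></db:sql>
    <!-- Declare the JDBC type per parameter statically instead of passing a DataWeave array of maps -->
    <db:parameter-types>
        <db:parameter-type key="id" type="LONGNVARCHAR" />
    </db:parameter-types>
    <db:input-parameters><![CDATA[#[{'id': vars.recordId, 'name': vars.recordName}]]]></db:input-parameters>
</db:insert>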

How to use READ - FTPs in Mule

I am trying to process a .csv file that came to me via FTPS; it arrives in a binary format (I think), and I want to convert it to some format where I can edit the columns and rows.
I use Anypoint Studio version: 7.12.1, FTPS connector 1.7.1, Runtime version 4.4.0 EE
read XML:
<ftps:read doc:name="Read" doc:id="34e4139b-606e-4cd0-8650-4e3b2bab01d7" config-ref="FTPS_Config_OneHub" path="test_Dec.csv"/>
I'm doing it with a transform message:
<ee:transform doc:name="Transform Message" doc:id="165835a6-da27-442a-82bd-a3a28552611f" >
<ee:message >
<ee:set-payload ><![CDATA[%dw 2.0
output application/csv
---
read(payload, "application/string")]]></ee:set-payload>
</ee:message>
</ee:transform>
But I get the following error:
""You called the function 'AnonymousFunction' with these arguments:
1: Array ([{"PK\u0003\u0004\u0014\u0000\b\b\b\u0000": "q",column_1: "^",column_2: ""}, ...)
2: String ("application/string")
But it expects arguments of these types:
1: String | Binary
2: String
3: Object
4| read(payload, "application/string")
^^^^
Trace:
at anonymous::main (line: 4, column: 1)" evaluating expression: "%dw 2.0
output application/csv
---
read(payload, "application/string")"."
The payload with the file looks like this:
PK (\!T [Content_Types].xmlµSËnÂ0ü•È×*6ôPUEƒMCÇ©ô\{“Xø%¯¡ð÷]8”R‰
ƒM†õ9Ç!PEƒMEƒMõà$òEƒMÁEƒEƒMMÒ†äd¦cêD”j);·£ÑPÁgð¹ÎEƒMEƒM'OÐÊ•ÍÕEƒMãî¾H7LÆh’™R‰µ×G¢EƒMEƒMõ^'EƒMEƒM°
Thank you very much!
The input file is not a CSV. It is a zip file. You should first decompress it and then see if it has a CSV file inside. You can use the Compression Module in Mule to decompress it.
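A rough sketch of that approach, assuming the Mule 4 Compression Module's extract operation with a zip extractor (the element names and the shape of the extract result should be checked against the Compression Module reference), and assuming the archive really contains a CSV entry; the entry name "data.csv" is a guess:
<!-- Read the (binary) file from the FTPS server -->
<ftps:read doc:name="Read" config-ref="FTPS_Config_OneHub" path="test_Dec.csv"/>

<!-- Extract the zip archive; the result should be a map of entry name -> entry content -->
<compression:extract doc:name="Extract">
    <compression:extractor>
        <compression:zip-extractor/>
    </compression:extractor>
</compression:extract>

<!-- Read the CSV entry (inspect the extracted map first to find the real entry name) -->
<ee:transform doc:name="Transform Message">
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/csv
---
read(payload["data.csv"], "application/csv")]]></ee:set-payload>
    </ee:message>
</ee:transform>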

Mulesoft EC2 *describeInstances* with *filter* option

I'm having problems using the EC2 connector with filters for DescribeInstances. Specifically, I'm trying to find all instances that have the tag "classId" set.
I've also tried to find all instances that have the classId tag with specific string, e.g. "123".
Below is the XML of the describe-instances operation for both scenarios.
tag-key ------
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag-key" values="#[['classId']]">
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
tag:classId:----
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag:classId">
<ec2:values >
<ec2:value value="#['123']" />
</ec2:values>
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
Each time I receive an error like the following (for tag:classId):
ERROR 2021-03-29 08:32:49,693 [[MuleRuntime].uber.04: [ec2-play].ec2-playFlow.BLOCKING #1092a5bc] [processor: ; event: df5e2df0-908a-11eb-94b5-38f9d38da5c3] org.mule.runtime.core.internal.exception.OnErrorPropagateHandler: 
********************************************************************************
Message        : The filter 'null' is invalid (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 33e3bbfb-99ea-4382-932f-647662810c92; Proxy: null)
Element        : ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances)
Element DSL      : <ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag:classId">
<ec2:values>
<ec2:value value="#['123']"></ec2:value>
</ec2:values>
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
Error type      : EC2:INVALID_PARAMETER_VALUE
FlowStack       : at ec2-playFlow(ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances))
 (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
NOTE: The code works without a filter, returning all instances. But, that isn't what I want or need. The more filtering I can do the faster the response.
Does anyone have samples of the filter option working? Can you tell me what I'm doing wrong?
Thanks!
This surely is a bug. I tried the same and it was not working for me either. I enabled debug logging and found that the connector is not sending Filter.1.Name=tag:classId as a query parameter in the request. Here is the debug log that I found (notice there is no Filter.1.Name=tag:classId in the query string):
DEBUG 2021-04-02 21:55:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Value.1=123"
However, I tried the Expression or Bean Reference option and set the expression directly to [{name: 'tag:classId', values: ['123']}], and it worked correctly. Here is the same debug log after this change:
DEBUG 2021-04-02 21:59:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Name=tag%3AclassId&Filter.1.Value.1=123"
Also, I want to point out some very weird behaviour: this does not work if you format [{name: 'tag:classId', values: ['123']}] across multiple lines in the expression; it will give an error during deployment.
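For reference, switching the Filters field to Expression mode corresponds roughly to the following XML. This is a sketch: Studio usually renders expression-mode parameters as an attribute named after the field (filters here is an assumption), so verify against the XML Studio generates for you.
<ec2:describe-instances doc:name="Describe instances" config-ref="Amazon_EC2_Configuration"
    filters="#[[{name: 'tag:classId', values: ['123']}]]"/>
Per the caveat above, the whole DataWeave array is kept on a single line.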

Validation of FHIR Resources against different aspects listed at https://www.hl7.org/fhir/validation.html using HAPI Library

Getting the below exception when running the code:
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.ValidationResult;
import org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator;

FhirContext ctx = FhirContext.forR4();
// Create a FhirInstanceValidator and register it to a validator
FhirValidator validator = ctx.newValidator();
FhirInstanceValidator instanceValidator = new FhirInstanceValidator();
validator.registerValidatorModule(instanceValidator);
/*
* If you want, you can configure settings on the validator to adjust
* its behaviour during validation
*/
instanceValidator.setAnyExtensionsAllowed(true);
// input is Patient resource in String https://www.hl7.org/fhir/patient-example.json.html
ValidationResult result = validator.validateWithResult(input);
I am using the HAPI library to validate a resource (if I am not wrong, this is a Patient resource: https://www.hl7.org/fhir/patient-example.json.html). I have stored this Patient JSON in a string
and am trying to validate its:
1: Structure -> I think this can be achieved using parser validation, and I did the same.
2: Cardinality -> I created two "active:true" JSON key-value pairs thinking that it would throw a cardinality error, but none of SchemaBaseValidator / parser validation / InstanceValidator catches it.
...
How do I validate a resource against the properties listed at https://www.hl7.org/fhir/validation.html (structure, cardinality, value domains, ...)? Do I have to use all three ways,
that is the parser, FhirInstanceValidator, and SchemaBaseValidator / SchematronBaseValidator?
Please help, as I am new to FHIR, and excuse the lame question.
15:58| INFO | VersionUtil.java 72 | HAPI FHIR version 4.1.0 - Rev 03163c2cf5
15:58| INFO | FhirContext.java 174 | Creating new FHIR context for FHIR version [R4]
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-resources.xml
15:58| INFO | DependencyLogImpl.java 75 | FHIR XML procesing will use StAX implementation 'Woodstox' version '5.1.0'
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-types.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-others.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/extension/extension-definitions.xml
15:58| ERROR | FhirInstanceValidator.java 222 | Failure during validation
java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Exception in thread "main" ca.uhn.fhir.rest.server.exceptions.InternalErrorException: Unexpected failure while validating resource
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:223)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Caused by: java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
Cardinality -> I created two "active:true" Json key-value pair thinking that it will throw cardinality error but neither of SchemxxxValidator / ParseValidator / InstanceValidator working. ...
That's an issue in HAPI - it validates the objects it loads from the JSON, and the JSON parser silently drops the duplicate property key. If you use the validator directly, this won't happen. I believe this is going to be addressed at some stage.
generateSnapshot failed
That's a real issue - I'm not sure why that's not set up, but the validator can't work if snapshots are not being generated.
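If it helps, here is a minimal wiring sketch for the HAPI 4.x line that supplies the instance validator with the stock profile support (the core StructureDefinitions, which ship with snapshots) plus a PrePopulatedValidationSupport for any custom profiles you add. The class and package names below are assumptions based on the 4.x releases and moved in later versions, so verify them against your HAPI version:
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.ValidationResult;
import org.hl7.fhir.r4.hapi.ctx.DefaultProfileValidationSupport;
import org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator;
import org.hl7.fhir.r4.hapi.validation.PrePopulatedValidationSupport;
import org.hl7.fhir.r4.hapi.validation.ValidationSupportChain;

FhirContext ctx = FhirContext.forR4();
FhirValidator validator = ctx.newValidator();

// Chain the built-in profile support (core definitions, with snapshots) with a
// container for any custom StructureDefinitions you want to validate against.
PrePopulatedValidationSupport customProfiles = new PrePopulatedValidationSupport();
ValidationSupportChain supportChain = new ValidationSupportChain(
        new DefaultProfileValidationSupport(), customProfiles);

FhirInstanceValidator instanceValidator = new FhirInstanceValidator(supportChain);
validator.registerValidatorModule(instanceValidator);

// input is the Patient JSON string from the question
ValidationResult result = validator.validateWithResult(input);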

Issues using Kafka KSQL AVRO table as a source for a Kafka Connect JDBC Sink

I've been struggling with this for about a week now, trying to get a simple (3-field) AVRO-formatted KSQL table to work as the source for a JDBC connector sink (MySQL).
I am getting the following errors (after the INFO line):
[2018-12-11 18:58:50,678] INFO Setting metadata for table "DSB_ERROR_TABLE_WINDOWED" to Table{name='"DSB_ERROR_TABLE_WINDOWED"', columns=[Column{'MOD_CLASS', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'METHOD', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'COUNT', isPrimaryKey=false, allowsNull=true, sqlType=BIGINT}]} (io.confluent.connect.jdbc.util.TableDefinitions)
[2018-12-11 18:58:50,679] ERROR WorkerSinkTask{id=dev-dsb-errors-mysql-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: DSB_ERROR_TABLE_WINDOWED
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:127)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:79)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:124)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:63)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:75)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I can tell that the sink is doing something properly as the schema is pulled (see just before the error above) and the table is created successfully in the database with the proper schema:
MariaDB [dsb_errors_ksql]> describe DSB_ERROR_TABLE_WINDOWED;
+-----------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| MOD_CLASS | varchar(256) | YES | | NULL | |
| METHOD | varchar(256) | YES | | NULL | |
| COUNT | bigint(20) | YES | | NULL | |
+-----------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
And here is the KTABLE definition:
ksql> describe extended DSB_ERROR_TABLE_windowed;
Name : DSB_ERROR_TABLE_WINDOWED
Type : TABLE
Key field : KSQL_INTERNAL_COL_0|+|KSQL_INTERNAL_COL_1
Key format : STRING
Timestamp field : Not set - using <ROWTIME>
Value format : AVRO
Kafka topic : DSB_ERROR_TABLE_WINDOWED (partitions: 4, replication: 1)
Field | Type
---------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
MOD_CLASS | VARCHAR(STRING)
METHOD | VARCHAR(STRING)
COUNT | BIGINT
---------------------------------------
Queries that write into this TABLE
-----------------------------------
CTAS_DSB_ERROR_TABLE_WINDOWED_37 : create table DSB_ERROR_TABLE_windowed with (value_format='avro') as select mod_class, method, count(*) as count from DSB_ERROR_STREAM window session ( 60 seconds) group by mod_class, method having count(*) > 0;
There is an auto-generated entry in the schema registry for this table (but no key entry):
{
"subject": "DSB_ERROR_TABLE_WINDOWED-value",
"version": 7,
"id": 143,
"schema": "{\"type\":\"record\",\"name\":\"KsqlDataSourceSchema\",\"namespace\":\"io.confluent.ksql.avro_schemas\",\"fields\":[{\"name\":\"MOD_CLASS\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"METHOD\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"COUNT\",\"type\":[\"null\",\"long\"],\"default\":null}]}"
}
and here is the Connect Worker definition:
{ "name": "dev-dsb-errors-mysql-sink",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max": "1",
"topics": "DSB_ERROR_TABLE_WINDOWED",
"connection.url": "jdbc:mysql://os-compute-d01.maeagle.corp:32692/dsb_errors_ksql?user=xxxxxx&password=xxxxxx",
"auto.create": "true",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://kafka-d01.maeagle.corp:8081",
"key.converter": "org.apache.kafka.connect.storage.StringConverter"
}
}
My understanding (which could be wrong) is that KSQL should be creating the appropriate AVRO schemas in the Schema Registry and Kafka Connect should be able to read those back properly. As I noted above, something is working, as the appropriate table is being generated in MySQL, although I am surprised that there is no key field created...
Most of the posts and examples are using JSON as opposed to AVRO so they haven't been particularly useful.
It seems to fail at the deserialization portion of reading and inserting the topic record...
I am at a loss at this point and could use some guidance.
I have also opened a similiar ticket via github:
https://github.com/confluentinc/ksql/issues/2250
Regards,
--John
As John says above, the key in the topic's record is not a plain string, but a string suffixed with a single Java-serialized 64-bit integer representing the window start time.
Connect does not come with an SMT that can handle the windowed key format. However, it would be possible to write one to strip off the integer and just return the natural key. You could then include this on the classpath and update your Connect config.
If you require the window start time in the database, then you can update your ksqlDB query to include the window start time as a field in the value, as sketched below.
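For that second option, a sketch of the reworked aggregate, assuming a ksqlDB / recent KSQL version that exposes the WINDOWSTART pseudo-column in windowed aggregations (older 5.x KSQL releases spell it as the WindowStart() function instead):
CREATE TABLE DSB_ERROR_TABLE_WINDOWED WITH (VALUE_FORMAT='AVRO') AS
  SELECT mod_class,
         method,
         WINDOWSTART AS window_start,  -- carry the window start time in the value
         COUNT(*) AS count
  FROM DSB_ERROR_STREAM
  WINDOW SESSION (60 SECONDS)
  GROUP BY mod_class, method
  HAVING COUNT(*) > 0;
With the window start carried in the value, the JDBC sink would pick up window_start as a regular column, so you no longer depend on the windowed key.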
