I'm having trouble with Mule executing a JDBC query that works fine when run outside of Mule. It fails as soon as I put parentheses in it if it also has replacement parameters. I'm wondering if MEL has some issue with parentheses that I don't understand.
The following (simplified) example executes correctly in my flow:
<jdbc:query
    key="QueryStrategy"
    value="SELECT product_strategy FROM licensing.product WHERE product_sku = #[flowVars['productSku']] OR product_sku IS NULL"/>
If I replace the reference to a flow variable with a constant, I can add parentheses and it still works:
<jdbc:query
    key="QueryStrategy"
    value="SELECT product_strategy FROM licensing.product WHERE (product_sku = 'Trial' OR product_sku IS NULL)"/>
However, the minute I add parentheses while the replacement parameter is present, I get an exception:
<jdbc:query
    key="QueryStrategy"
    value="SELECT product_strategy FROM licensing.product WHERE (product_sku = #[flowVars['productSku']] OR product_sku IS NULL)"/>
The exception is shown below. Is this a problem with how Mule formats what it sends to SQL Server, or a bug in the Microsoft JDBC driver?
EDIT
Here's the amended log with DEBUG turned on for org.mule.transport.jdbc.
com.ca.eai.esb.interceptor.EaiLoggingInterceptor FlowBefore | Flow Name: Main | MULE_REMOTE_CLIENT_ADDRESS: /127.0.0.1:59952 | consumerTransId: E0acd42cb-100d-11e3-a264-852205c1b19e | consumerDateTimeSent: null | consumerApp: null | consumerUsername: null | mule.muleId: null | esbTransCode: null | messageSourceName: endpoint.https.localhost.8088.license.requests.r.v1 | currentTimestamp: 2013-08-28 14:09:55.309
org.mule.transport.jdbc.JdbcConnector Borrowing a dispatcher for endpoint: jdbc://QueryStrategy
org.mule.transport.jdbc.JdbcMessageDispatcher Connecting: JdbcMessageDispatcher{this=5764c302, endpoint=jdbc://QueryStrategy, disposed=false}
org.mule.transport.jdbc.JdbcMessageDispatcher Connected: endpoint.outbound.jdbc://QueryStrategy
org.mule.transport.jdbc.JdbcConnector Borrowed a dispatcher for endpoint: jdbc://QueryStrategy = JdbcMessageDispatcher{this=5764c302, endpoint=jdbc://QueryStrategy, disposed=false}
org.mule.transport.jdbc.JdbcConnector Borrowed dispatcher: JdbcMessageDispatcher{this=5764c302, endpoint=jdbc://QueryStrategy, disposed=false}
org.mule.transport.jdbc.sqlstrategy.SelectSqlStatementStrategy Trying to receive a message with a timeout of 10000
org.mule.transport.jdbc.sqlstrategy.SelectSqlStatementStrategy SQL QUERY: SELECT product_strategy, product_id FROM licensing.product WHERE product_name = ? AND (product_sku is null OR product_sku = ? ) ORDER BY product_sku DESC, params = {My Product,My Product}
org.mule.transport.jdbc.JdbcConnector Returning dispatcher for endpoint: jdbc://QueryStrategy = JdbcMessageDispatcher{this=5764c302, endpoint=jdbc://QueryStrategy, disposed=false}
com.ca.eai.esb.interceptor.EaiLoggingInterceptor FlowLast | Flow Name: Main | MULE_REMOTE_CLIENT_ADDRESS: /127.0.0.1:59952 | consumerTransId: E0acd42cb-100d-11e3-a264-852205c1b19e | consumerDateTimeSent: null | consumerApp: null | consumerUsername: null | mule.muleId: null | esbTransCode: null | messageSourceName: endpoint.https.localhost.8088.license.requests.r.v1 | currentTimestamp: 2013-08-28 14:09:55.791 | elapsedTime: 484ms
org.mule.exception.DefaultMessagingExceptionStrategy
********************************************************************************
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://QueryStrategy, connector=JdbcConnector
{
name=Database
lifecycle=start
this=333d612e
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.QueryStrategy', mep=REQUEST_RESPONSE, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'FROM'.(SQL Code: 0, SQL State: + null) (com.microsoft.sqlserver.jdbc.SQLServerException)
com.microsoft.sqlserver.jdbc.SQLServerException:190 (null)
2. com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'FROM'. Query: SELECT product_strategy, product_id FROM licensing.product WHERE product_name = ? AND (product_sku is null OR product_sku = ? ) ORDER BY product_sku DESC Parameters: [My Product, My Product](SQL Code: 0, SQL State: + null) (java.sql.SQLException)
org.apache.commons.dbutils.QueryRunner:540 (null)
3. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://QueryStrategy, connector=JdbcConnector
{
name=Database
lifecycle=start
this=333d612e
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.QueryStrategy', mep=REQUEST_RESPONSE, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String (org.mule.api.transport.DispatchException)
org.mule.transport.AbstractMessageDispatcher:109 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
com.microsoft.sqlserver.jdbc.SQLServerException: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'FROM'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:190)
at com.microsoft.sqlserver.jdbc.SQLServerParameterMetaData.<init>(SQLServerParameterMetaData.java:426)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.getParameterMetaData(SQLServerPreparedStatement.java:1532)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
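EDIT 2
One workaround to try: since SQL gives AND higher precedence than OR, the parenthesized predicate can be rewritten without parentheses, which keeps the construct away from the driver's parameter-metadata parser. A sketch against the logged query (the flowVars names here are illustrative, not taken from my real flow):
<jdbc:query
    key="QueryStrategy"
    value="SELECT product_strategy, product_id FROM licensing.product WHERE product_name = #[flowVars['productName']] AND product_sku IS NULL OR product_name = #[flowVars['productName']] AND product_sku = #[flowVars['productSku']] ORDER BY product_sku DESC"/>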
Related
I am getting an error on JoinEnrichment and don't know why.
My query in JoinEnrichment:
SELECT original.*, enrichment.*
FROM original
FULL JOIN enrichment
ON original.name = enrichment.name
Error
JoinEnrichment[id=a17b1f06-f80c-3641-945f-c2ff331f8028] Failed to join 'original' FlowFile FlowFile[filename=cmdbci-mtaas] and 'enrichment' FlowFile FlowFile[filename=cmdbci-mtaas]; routing to failure: java.sql.SQLException: Error while preparing statement [SELECT original.*, enrichment.*
FROM original
FULL JOIN enrichment
ON original.name = enrichment.name]
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 4, column 4 to line 4, column 34: Cannot apply '=' to arguments of type '<JAVATYPE(CLASS JAVA.LANG.OBJECT)> = <JAVATYPE(CLASS JAVA.LANG.STRING)>'. Supported form(s): '<COMPARABLE_TYPE> = <COMPARABLE_TYPE>'
Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Cannot apply '=' to arguments of type '<JAVATYPE(CLASS JAVA.LANG.OBJECT)> = <JAVATYPE(CLASS JAVA.LANG.STRING)>'. Supported form(s): '<COMPARABLE_TYPE> = <COMPARABLE_TYPE>'
Sample input from original
Location,Environment,ip_address,category,name,dv_sys_updated_on
"",,,Hardware,Ndiggan,2022-12-17 22:37:28
"",,,Hardware,class,2022-12-31 22:37:38
"",,,,Vlan2,2022-12-27 02:17:13
Sample input from enrichment
Location,Environment,ip_address,category,name,dv_sys_updated_on
"",,,Hardware,vpna,2022-12-17 22:36:02
"",,,Hardware,dlcccno,2022-12-17 22:37:04
"",,,Hardware,Ndiggan,2022-12-17 22:37:28
For a newly allocated IP, it would be something like this in C# (using .Ref):
var arNewPublic = new ARecord(this, $"ARecord_Public_NewAlloc", new ARecordProps
{
    ...
    Target = RecordTarget.FromValues(newElasticIPs.Select(eip => eip.Ref).ToArray())
    ...
});
But how do I create the ARecord target for a CfnEIPAssociation?
Doing it like this:
Target = RecordTarget.FromValues(elasticIPAssociations.Select(eipa => eipa.Eip).ToArray()),
throws:
Unhandled exception. Amazon.JSII.Runtime.JsiiException:
Got 'undefined' for non-optional instance of {"name":"values","type":{"primitive":"string"},"variadic":true}
or crashes CF deployment:
Target = RecordTarget.FromValues(elasticIPAssociations.Select(eipa => eipa.Ref).ToArray()),
9/12 | 1:58:11 PM | CREATE_FAILED | AWS::Route53::RecordSet | ARecord-Public-PreAlloc (ARecordPublicPreAllocCB385358) [Invalid Resource Record: FATAL problem: ARRDATAIllegalIPv4Address (Value is not a valid IPv4 address) encountered with 'eipassoc-0f9299c9975d21e6e']
new RecordSet (...#aws-cdk\aws-route53\lib\record-set.js:134:27)
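The error hints at the cause: an A record must contain IPv4 addresses, and CfnEIPAssociation.Ref resolves to the association ID (eipassoc-...), not an address. A sketch of the direction that should work (preAllocatedEips is a hypothetical variable holding the CfnEIP resources behind the associations; an EIP's Ref resolves to the public IPv4 address):
// Hypothetical: preAllocatedEips holds the CfnEIP resources the associations bind.
// An EIP's Ref is its public IPv4 address, which is what the A record expects.
Target = RecordTarget.FromValues(preAllocatedEips.Select(eip => eip.Ref).ToArray()),
For EIPs that exist outside the stack and are known only by allocation ID, the literal public IPs would have to be supplied as plain strings instead.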
I am trying to find the max date from JSON data, but I am getting the error below.
Message : "You cannot compare a value of type :Null
Trace:
at reduce (Unknown)
at dw::Core::maxBy (line: 5535, column: 3)
at main (line: 1, column: 248)" evaluating expression:
"%dw 2.0 output application/json --- { Value2: ( if (vars.data.Value1 as String != "")
(payload maxBy((item) -> item.startDate)).startDate default vars.data.Value1
else "" ), RCount: sizeOf(payload) }".
Error type : MULE:EXPRESSION
Element : executeInterface/processors/8 # gg:gg.xml:121 (interface)
Element XML : <set-variable value="#[%dw 2.0 output application/json ---
{ Value2: ( if (vars.data.Value1 as String != "")
(payload maxBy((item) -> item.startDate)).startDate default vars.data.Value1
else "" ), RCount: sizeOf(payload) }]" doc:name="interface" doc:id="2c140185-2fc5-4e11-9b78-96fc2ddcfa2f" variableName="interface"></set-variable>
(set debug level logging or '-Dmule.verbose.exceptions=true' for everything).
The logic in the Set Variable component is:
%dw 2.0
output application/json
---
{
Value2:
( if (vars.data.Value1 as String != "")
(payload maxBy((item) -> item.startDate)).startDate default vars.data.Value1
else ""
),
RCount: sizeOf(payload)
}
While running in debug mode, I found that the problem is in the maxBy statement.
Please suggest how to fix this issue.
The error indicates that something is null in the comparison. Given that you mentioned the error occurs at maxBy(), it is probable that the attribute startDate is null or not present in one of the elements. You should also share the input values of the transformation so we can confirm it.
UPDATE: I can reproduce the error if the startDate attribute is not present or if it is misspelled. For example, if I misspell it as item.StartDate (wrong uppercase 'S'), it produces the error. Make sure the payload and the script match exactly.
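A defensive variant of your script drops the items with a null or missing startDate before calling maxBy, so the comparison never sees a null (a sketch keeping your field names; maxBy over an empty array yields null, so the default still applies):
%dw 2.0
output application/json
---
{
    Value2:
        ( if (vars.data.Value1 as String != "")
            // keep only items that actually carry a startDate before comparing
            ((payload filter ((item) -> item.startDate != null))
                maxBy ((item) -> item.startDate)).startDate default vars.data.Value1
          else ""
        ),
    RCount: sizeOf(payload)
}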
I've been struggling with this for about a week now, trying to get a simple (3-field) AVRO-formatted KSQL table to feed a JDBC sink connector (MySQL).
I am getting the following errors (after INFO line):
[2018-12-11 18:58:50,678] INFO Setting metadata for table "DSB_ERROR_TABLE_WINDOWED" to Table{name='"DSB_ERROR_TABLE_WINDOWED"', columns=[Column{'MOD_CLASS', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'METHOD', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR}, Column{'COUNT', isPrimaryKey=false, allowsNull=true, sqlType=BIGINT}]} (io.confluent.connect.jdbc.util.TableDefinitions)
[2018-12-11 18:58:50,679] ERROR WorkerSinkTask{id=dev-dsb-errors-mysql-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: DSB_ERROR_TABLE_WINDOWED
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:127)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:79)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:124)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:63)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:75)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I can tell that the sink is doing something properly as the schema is pulled (see just before the error above) and the table is created successfully in the database with the proper schema:
MariaDB [dsb_errors_ksql]> describe DSB_ERROR_TABLE_WINDOWED;
+-----------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| MOD_CLASS | varchar(256) | YES | | NULL | |
| METHOD | varchar(256) | YES | | NULL | |
| COUNT | bigint(20) | YES | | NULL | |
+-----------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
And here is the KTABLE definition:
ksql> describe extended DSB_ERROR_TABLE_windowed;
Name : DSB_ERROR_TABLE_WINDOWED
Type : TABLE
Key field : KSQL_INTERNAL_COL_0|+|KSQL_INTERNAL_COL_1
Key format : STRING
Timestamp field : Not set - using <ROWTIME>
Value format : AVRO
Kafka topic : DSB_ERROR_TABLE_WINDOWED (partitions: 4, replication: 1)
Field | Type
---------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
MOD_CLASS | VARCHAR(STRING)
METHOD | VARCHAR(STRING)
COUNT | BIGINT
---------------------------------------
Queries that write into this TABLE
-----------------------------------
CTAS_DSB_ERROR_TABLE_WINDOWED_37 : create table DSB_ERROR_TABLE_windowed with (value_format='avro') as select mod_class, method, count(*) as count from DSB_ERROR_STREAM window session ( 60 seconds) group by mod_class, method having count(*) > 0;
There is an auto-generated entry in the Schema Registry for this table (but no key entry):
{
  "subject": "DSB_ERROR_TABLE_WINDOWED-value",
  "version": 7,
  "id": 143,
  "schema": "{\"type\":\"record\",\"name\":\"KsqlDataSourceSchema\",\"namespace\":\"io.confluent.ksql.avro_schemas\",\"fields\":[{\"name\":\"MOD_CLASS\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"METHOD\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"COUNT\",\"type\":[\"null\",\"long\"],\"default\":null}]}"
}
And here is the sink connector definition:
{ "name": "dev-dsb-errors-mysql-sink",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max": "1",
"topics": "DSB_ERROR_TABLE_WINDOWED",
"connection.url": "jdbc:mysql://os-compute-d01.maeagle.corp:32692/dsb_errors_ksql?user=xxxxxx&password=xxxxxx",
"auto.create": "true",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://kafka-d01.maeagle.corp:8081",
"key.converter": "org.apache.kafka.connect.storage.StringConverter"
}
}
My understanding (which could be wrong) is that KSQL should create the appropriate AVRO schemas in the Schema Registry and that Kafka Connect should be able to read them back properly. As I noted above, something is partially working, since the expected table is generated in MySQL, although I am surprised that no key field is created...
Most of the posts and examples out there use JSON rather than AVRO, so they haven't been particularly useful.
It seems to fail at the deserialization step, when reading the topic record back for insertion...
I am at a loss at this point and could use some guidance.
I have also opened a similar ticket on GitHub:
https://github.com/confluentinc/ksql/issues/2250
Regards,
--John
As John says above, the key in the topic's record is not a plain string, but a string suffixed with a Java-serialized 64-bit integer representing the window start time.
Connect does not come with an SMT that can handle the windowed key format. However, it would be possible to write one that strips off the integer and just returns the natural key. You could then include it on the classpath and update your Connect config.
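A rough sketch of such an SMT (untested; it assumes key.converter is set to org.apache.kafka.connect.converters.ByteArrayConverter so the key arrives as raw bytes laid out as <utf8-key><8-byte window start>):
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.transforms.Transformation;

public class StripWindowStart<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public R apply(R record) {
        byte[] key = (byte[]) record.key();
        if (key == null || key.length <= 8) {
            return record;  // nothing to strip
        }
        // Drop the trailing 8-byte window start; the rest is the natural string key.
        String naturalKey = new String(key, 0, key.length - 8, StandardCharsets.UTF_8);
        return record.newRecord(record.topic(), record.kafkaPartition(),
                Schema.STRING_SCHEMA, naturalKey,
                record.valueSchema(), record.value(), record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public void close() {
    }
}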
If you require the window start time in the database, then you can update your ksqlDB query to include the window start time as a field in the value.
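For example, something like this (a sketch; the WINDOWSTART column is only available in windowed aggregations on recent ksqlDB versions, so check yours):
create table DSB_ERROR_TABLE_WINDOWED with (value_format='avro') as
  select mod_class, method, windowstart as window_start, count(*) as count
  from DSB_ERROR_STREAM window session (60 seconds)
  group by mod_class, method
  having count(*) > 0;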
I am using Grails 2.3.5 and I have a controller which contains an ajaxDelete method. This method receives the name (a String) of the entity I want to delete. I delete the entity through a service which I call from within my controller.
The first time the service's deleteSvnUser method is called, the SVN user is deleted and my div on the page is updated with an alert stating that the user has been deleted. The second time I delete a user, I get the following error:
row-was-updated-or-deleted-by-another-transaction-or-unsaved-value-mapping-was...
I have tried a couple of things to get around this:
Adding the @Transactional annotation to my service (which I shouldn't have to do, because it should be transactional by default).
Flushing my deletes (flush: true).
Refreshing the entity before I delete it (.refresh()).
Locking the entity when I retrieve it (entity.lock(id)).
None of the above has worked, and I'm not sure what else to try.
Can anyone help? My code is below:
Controller
class SvnUserController {

    def svnUserService

    def ajaxDelete() {
        if (!params.selectedSvnUser) {
            request.error = "The selected Svn User does not exist"
            render(template: "svnUserList", model: [svnUserInstanceList: SvnUser.list(params).sort { it.name }, svnUserInstanceTotal: SvnUser.count()])
            return
        }
        String outcome = svnUserService.deleteSvnUser(params.selectedSvnUser)
        switch (outcome) {
            case "":
                request.message = "Svn User ${params.selectedSvnUser} deleted"
                break
            default:
                request.error = outcome
        }
        def svnUsers = svnUserService.getAllSvnUsers(params)
        render(template: "svnUserList", model: [svnUserInstanceList: svnUsers, svnUserInstanceTotal: svnUsers.size()])
    }
}
Service
class SvnUserService {

    public String deleteSvnUser(String userName) {
        String outcome = ""
        try {
            SvnUser svnUserInstance = SvnUser.findByName(userName)

            // Detach the user from every group that references it.
            def groups = SvnGroup.findAllBySvnUserName(userName)
            groups.each { SvnGroup group ->
                group.removeFromSvnUsers(svnUserInstance)
                group.save()
            }

            // Delete the user's permissions and unhook them from their directories.
            def userPermissionsList = UserPermission.findAllBySvnUser(svnUserInstance)
            userPermissionsList.each {
                def repoDirectory = it.repositoryDirectory
                repoDirectory.removeFromUserPermissions(it)
                it.delete()
            }

            svnUserInstance.merge()
            svnUserInstance.delete()
        }
        catch (DataIntegrityViolationException e) {
            outcome = "An error occurred while attempting to delete the Svn User from file."
        }
        return outcome
    }
}
The stacktrace is as follows:
Error |
2014-06-10 15:10:07,394 [http-bio-8080-exec-6] ERROR errors.GrailsExceptionResolver - StaleObjectStateException occurred when processing request: [POST] /subzero/svnUser/ajaxDelete - parameters:
selectedSvnUser: ruby2
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.uk.nmi.subzero.SvnGroup#2]. Stacktrace follows:
Message: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.uk.nmi.subzero.SvnGroup#2]
Line | Method
->> 72 | deleteSvnUser in com.uk.nmi.subzero.SvnUserService
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 87 | ajaxDelete in com.uk.nmi.subzero.SvnUserController
| 200 | doFilter . . in grails.plugin.cache.web.filter.PageFragmentCachingFilter
| 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter
| 1145 | runWorker . . in java.util.concurrent.ThreadPoolExecutor
| 615 | run in java.util.concurrent.ThreadPoolExecutor$Worker
^ 744 | run . . . . . in java.lang.Thread
I can't see any problem with your code. Grails should handle the transaction fine.
If your object mapping is set up correctly, the svnUserInstance.delete() call should delete the child objects too, so I don't think you even need the lines between svnUserInstance = SvnUser.findByName(userName) and the delete.
How are you calling the ajaxDelete? Is there any chance it is being called incorrectly the second time?
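For reference, "object mapping set up correctly" here means the children cascade from SvnUser, along these lines (a sketch; the association names are assumed, not taken from your code):
class SvnUser {
    String name

    // With all-delete-orphan, svnUserInstance.delete() also removes the permissions.
    static hasMany = [userPermissions: UserPermission]

    static mapping = {
        userPermissions cascade: 'all-delete-orphan'
    }
}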