I am working on my local Windows IIB/MQ server. What I am trying to do is place a message on a queue through the JMSOutput node.
For that, I have created a JMS Administered Object by creating an initial context factory, and within it I have created a Destination Queue and a Connection Factory using the file system option. I got a .bindings file created in the PROVIDER_URL path specified below.
In the JMS Output node properties, I have set the JMS provider name to
WebSphere MQ
and the initial context factory to
com.sun.jndi.fscontext.RefFSContextFactory
All the other options are left blank.
Please note that the JMSAdmin.config file has the following uncommented properties:
PROVIDER_URL=file:/C:/JNDI
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
Now when I try to put a message through the JMS Output node, I get the following exception:
ExceptionList
RecoverableException
File:CHARACTER:F:\build\S1000_slot1\S1000_P\src\DataFlowEngine\MessageServices\ImbDataFlowNode.cpp
Line:INTEGER:1251
Function:CHARACTER:ImbDataFlowNode::createExceptionList
Type:CHARACTER:ComIbmJMSClientOutputNode
Name:CHARACTER:test#FCMComposite_1_4
Label:CHARACTER:test.JMS Output
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:2230
Text:CHARACTER:Node throwing exception
Insert
Type:INTEGER:14
Text:CHARACTER:test.JMS Output
RecoverableException
File:CHARACTER:JMSClientErrors.java
Line:INTEGER:771
Function:CHARACTER:JMSClientErrors:handleJNDIException()
Type:CHARACTER:
Name:CHARACTER:
Label:CHARACTER:
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:4640
Text:CHARACTER:Failure to obtain JNDI administered objects
Insert
Type:INTEGER:5
Text:CHARACTER:Broker 'LOCALBK10'; Execution Group 'Test'; Message Flow 'test'; Node 'ComIbmJMSClientOutputNode'
Insert
Type:INTEGER:5
Text:CHARACTER:com.sun.jndi.fscontext.RefFSContextFactory
Insert
Type:INTEGER:5
Text:CHARACTER:
Insert
Type:INTEGER:5
Text:CHARACTER:
Insert
Type:INTEGER:5
Text:CHARACTER:Hello
Insert
Type:INTEGER:5
Text:CHARACTER: Cause:java.net.MalformedURLException: no protocol:
Insert
Type:INTEGER:5
Text:CHARACTER: , Failure to obtain JNDI administered objects
Any help would be highly appreciated.
Towards the end of the above stack trace you can see this:
Cause:java.net.MalformedURLException: no protocol
This is because you did not set a value for the Location JNDI bindings property on the node. It has to have the same value as configured in JMSAdmin.config, i.e. file:/C:/JNDI.
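For reference, this is roughly the lookup the node performs internally, as a minimal sketch assuming the MQ JMS and fscontext jars are on the classpath; the JNDI names myConnectionFactory and myQueue are placeholders for whatever you defined with JMSAdmin:

import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiLookupSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        // This is the value that was missing: without a provider URL the
        // file-system context has no URL to parse, which surfaces as
        // java.net.MalformedURLException: no protocol
        env.put(Context.PROVIDER_URL, "file:/C:/JNDI");

        Context ctx = new InitialContext(env);
        // Placeholder JNDI names; use the names you defined with JMSAdmin
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("myConnectionFactory");
        Destination dest = (Destination) ctx.lookup("myQueue");
        System.out.println("Looked up " + cf + " and " + dest);
        ctx.close();
    }
}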
Recently, when I set up the Kafka Sink Connector to ingest data into the database, I noticed that certain values caused errors for one specific column, because that column only allows certain values to be ingested. Every time this column doesn't allow a value, it throws this error:
java.sql.SQLException: Exception Chain
java.sql.BatchUpdateException: ORA-00001: unique constraint (db.table) violated
java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (db.table) violated
Error : 1, Position : 0, Sql = INSERT INTO TABLE(table_id, system_timestamp)
I know that the database side could set the value to null (e.g. with a trigger), but the database admin just wants to leave the situation alone. However, this error is getting out of control: the logs keep generating this message every few seconds. I read that the Kafka Sink Connector can filter on a field name and only let certain values through to the database, but when I tried it out, Kafka rejected the filter expression with an error. Am I writing it correctly? The table_id only allows 19212 and 19213 as values; other integers or strings are not allowed. Is there a way for Kafka to accept only those two values, and otherwise emit a warning and a null result? Here is my config:
"transforms": filterTableId",
"transforms.filterTableId.type": "io.cofluent.connect.transforms.Filter$Value",
"transforms.filterTableId.filter.condition": "$.payload.after[?(#.nestedKey == "32512" || #.nestedKey == "32513")]",
"transforms.filterTableId.filter.type": "include",
"transforms.filterTableId.missing.or.null.behavior": "fail"
from https://docs.confluent.io/platform/current/connect/transforms/filter-confluent.html#properties
Any suggestions on what I did wrong? Or is it that my Kafka Sink Connector does not support this Confluent transform? I also tried org.apache.kafka.connect as the type, and that was not supported either.
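For reference, my best reading of the linked docs is that the transform should look roughly like this, with the inner quotes single-quoted (or escaped) so the JSON stays valid and @ as the JsonPath item variable; the path and values are kept from my attempt above, and I am not sure this is right:

"transforms": "filterTableId",
"transforms.filterTableId.type": "io.confluent.connect.transforms.Filter$Value",
"transforms.filterTableId.filter.condition": "$.payload.after[?(@.nestedKey == '32512' || @.nestedKey == '32513')]",
"transforms.filterTableId.filter.type": "include",
"transforms.filterTableId.missing.or.null.behavior": "fail"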
I am using JDBC source connector with the JDBC Driver for collecting data from Google Cloud Spanner to Kafka.
I am using "timestamp+incrementing" mode on a table. The primary key of the table includes 2 columns (order_item_id and order_id).
I used order_item_id as the incrementing column, and a column named "updated_time" as the timestamp column.
When I start the connector, I sometimes get the following errors, but in the end I still get the data.
ERROR Failed to run query for table TimestampIncrementingTableQuerier{table="order_item", query='null',
topicPrefix='test_', incrementingColumn='order_item_id', timestampColumns=[updated_time]}: {}
(io.confluent.connect.jdbc.source.JdbcSourceTask:404)
com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcAbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Caused by: com.google.cloud.spanner.AbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException:
Execution failed for statement:
SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC
...
Caused by: com.google.cloud.spanner.AbortedException: ABORTED: io.grpc.StatusRuntimeException:
ABORTED: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range [[5587892845991837697,5587892845991837702], [5587892845991837697,5587892845991837702]), column adjust in table order_item.
retry_delay {
nanos: 12974238
}
- Statement: 'SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC'
...
I am wondering how this error happens in my case. By the way, even with the error, the connector still collects the data in the end. Can anyone help with this? Thank you so much!
I'm not sure exactly how your entire pipeline is set up, but the error indicates that you are executing the query in a read/write transaction. Any read/write transaction on Cloud Spanner can be aborted by Cloud Spanner, and may result in the error that you are seeing.
If your pipeline is only reading from Cloud Spanner, the best thing to do is to set your JDBC connection in read-only and autocommit mode. You can do this directly in your JDBC connection URL by adding the readonly=true and autocommit=true properties to the URL.
Example:
jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-database;readonly=true;autocommit=true
It could also be that the framework(s) you are using change the JDBC connection after it has been opened. In that case you should check whether you can change that in the framework(s). But changing the JDBC URL as in the above example may very well be enough in this case.
Background information:
If the JDBC connection is opened with autocommit turned off and the connection is in read/write mode, then a read/write transaction will be started automatically when a query is executed. All subsequent queries will also use the same read/write transaction, until commit() is called on the connection. This is the least efficient way to read large amounts of data on Cloud Spanner, and should therefore be avoided whenever possible. It will also cause aborted transactions, as the read operations will take locks on the data that it is reading.
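If you want to verify this outside the connector, a minimal sketch of such a read-only, autocommit connection from plain Java could look like this (the project, instance and database names are placeholders, and it assumes the Cloud Spanner JDBC driver is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerReadOnlySketch {
    public static void main(String[] args) throws Exception {
        // readonly=true and autocommit=true make each query run in a
        // single-use read-only transaction, avoiding the lock conflicts
        // that can abort read/write transactions.
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance"
                + "/databases/my-database;readonly=true;autocommit=true";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM order_item LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getLong("order_item_id"));
            }
        }
    }
}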
I am trying to insert data from Kafka to Teradata. The payload has some null values, and the JDBC sink is throwing the following error.
[Teradata JDBC Driver] [TeraJDBC 16.20.00.10] [Error 1063] [SQLState HY000] null is not supported as a data value with this variant of the setObject method; use the setNull method or the setObject method with a targetSqlType parameter
My connector config:
name=teradata-sink-K_C_OSUSR_DGL_DFORM_I1-V2
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
connection.url=connectionString
topics=POPS-P-OSUSR_DGL_DFORM_I1-J-V2-CAL-OUT
topic.prefix=
table.name.format=K_C_OSUSR_DGL_DFORM_I1_V2
batch.size=50000
errors.tolerance=all
errors.deadletterqueue.topic.name=POPS-P-OSUSR_DGL_DFORM_I1-V2-CAL-DEAD
errors.deadletterqueue.topic.replication.factor=1
Is there a way to handle this? I do not know whether I have to change some code in the sink or just change the connector config.
You are getting the error from a line that most likely looks like this:
ps.setObject(1, val);
This call will throw an exception if the val you try to insert is null.
The error is telling you that you must specify the data type of the incoming null value. You could do this:
ps.setObject(1, val, Types.VARCHAR);
This way you are casting NULL to a VARCHAR, one of the supported targetSqlTypes.
Another option for the same purpose:
ps.setNull(1, Types.VARCHAR);
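Put together, a small self-contained sketch of the two options might look like this (the connection URL, table and column names are made up purely for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

public class NullInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:teradata://dbhost/DATABASE=mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO my_table (my_column) VALUES (?)")) {
            String val = null; // value coming from the Kafka record
            if (val == null) {
                // Tell the driver the target SQL type of the NULL explicitly
                ps.setNull(1, Types.VARCHAR);
                // ...or equivalently: ps.setObject(1, null, Types.VARCHAR);
            } else {
                ps.setObject(1, val);
            }
            ps.executeUpdate();
        }
    }
}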
The problem we have is we are using the standard Kafka Connect to create a sink (we are not coding any custom connector).
We have configured both the worker and connector .properties files to create a link between the topic and the Teradata table, and we run it using
.../confluent/bin/connect-standalone <worker.cfg> <connector.cfg>
When we create a message with "null" values and send it into the topic, the Sink Connector is unable to insert the record into the TD table.
I am using MyBatis with Spring Boot to connect to an Oracle database. In that process I need to update multiple fields in bulk.
I have created a procedure to implement that functionality on the database side. When calling that procedure from the DAO layer, the following exception occurs:
org.springframework.jdbc.UncategorizedSQLException:
Error getting generated key or setting result parameter object.
Cause
java.sql.SQLException: operation not allowed.
<insert id="update" parameter="java.util.map" statement="callable">
{ call test.bulk_update_procedure(#{customerid, mode=IN , jdbcType= varchar},#{firstname, mode= IN, jdbcType= varchar},#{age,mode=IN, jdbcType=varchar})
</insert>
I am seeing an internal server error in my app, developed with Spring MVC, using WildFly as the web server and PostgreSQL as the database. What does this exception mean?
ERROR: cannot execute nextval() in a read-only transaction
It was working fine before. I tried to look for a solution here on Stack Overflow but didn't find anything that could fix this issue.
The function nextval() is used to increment a sequence, which modifies the state of the database.
You get that error because you are in a read-only transaction. This can happen because
You explicitly started a read-only transaction with START TRANSACTION READ ONLY or similar.
The configuration parameter default_transaction_read_only is set to on.
You are connected to a streaming replication standby server.
If default_transaction_read_only is set to on, you can either start a read-write transaction with
START TRANSACTION READ WRITE;
or change the setting by editing postgresql.conf or with the superuser command
ALTER SYSTEM SET default_transaction_read_only = off;
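To see which of those causes applies in your case, you can check the relevant state from the failing session, for example:

SHOW transaction_read_only;           -- what the current transaction is using
SHOW default_transaction_read_only;   -- the session/role/server default
SELECT pg_is_in_recovery();           -- true if connected to a standby server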
As others have pointed out, nextval() actually updates the database to get a new sequence value, so it can't be used in a transaction that is marked as read-only.
As you're using Spring, I suspect this means that you're using Spring transaction support. If you're using annotation-based transaction support, then you'd get a read-only transaction if you have
@Transactional(readOnly=true)
That means that when Spring starts the transaction it will put it into read-only mode.
Remove the readOnly=true part and a regular writable transaction will be created instead.
Spring transaction control is documented at http://docs.spring.io/spring-framework/docs/4.2.x/spring-framework-reference/html/transaction.html
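As a concrete sketch (the service class, sequence name and JdbcTemplate wiring below are invented for illustration):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderIdService {

    private final JdbcTemplate jdbcTemplate;

    public OrderIdService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // With readOnly = true here, PostgreSQL would reject nextval() with the error above.
    @Transactional // writable transaction, so nextval() is allowed
    public long nextOrderId() {
        return jdbcTemplate.queryForObject("SELECT nextval('order_id_seq')", Long.class);
    }
}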
It means exactly what it says; here is an example:
t=# create sequence so49;
CREATE SEQUENCE
t=# begin;
BEGIN
t=# select nextval('so49');
nextval
---------
1
(1 row)
t=# set transaction_read_only TO on;
SET
t=# select nextval('so49');
ERROR: cannot execute nextval() in a read-only transaction
t=# end;
ROLLBACK
I presume you connected as a so-called "ro" (read-only) user, which is a user with "transaction_read_only" set to on, e.g.:
t=# select rolconfig from pg_roles where rolname ='ro';
rolconfig
----------------------------
{transaction_read_only=on}
(1 row)
You can switch that off for your user of course, but that is out of scope here, I believe.
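For completeness, lifting that restriction for the ro role from the example above would look something like this (run as a superuser; it takes effect for new sessions opened by that role):

ALTER ROLE ro SET transaction_read_only = off;
-- or: ALTER ROLE ro RESET transaction_read_only;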
https://www.postgresql.org/docs/current/static/sql-set-transaction.html