See headers of dead letter queue - jdbc

I'm using the JDBC sink connector to load data from a Kafka topic into Postgres.
Here is my configuration:
curl --location --request PUT 'http://localhost:8083/connectors/customer_sink_1/config' \
--header 'Content-Type: application/json' \
--data-raw '{
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"connection.url":"jdbc:postgresql://localhost:5432/postgres",
"connection.user":"user",
"connection.password":"passwd",
"tasks.max" : "1",
"topics":"table_name_same_as_topic_name",
"insert.mode":"insert",
"key.converter":"org.apache.kafka.connect.converters.ByteArrayConverter",
"value.converter":"org.apache.kafka.connect.json.JsonConverter",
"quote.sql.identifiers":"never",
"errors.tolerance":"all",
"errors.deadletterqueue.topic.name":"failed_records",
"errors.deadletterqueue.topic.replication.factor":"1",
"errors.log.enable":"true",
"errors.deadletterqueue.context.headers.enable":"true",
"reporter.bootstrap.servers":"localhost:9092",
"reporter.result.topic.name":"success-responses",
"reporter.result.topic.replication.factor":"1",
"reporter.error.topic.name":"error-responses",
"reporter.error.topic.replication.factor":"1"
}'
I downloaded Kafka from the Apache Kafka site on Windows and am using the .bat scripts to run the services.
I was able to send the failed records to the dead letter topic, but when I consumed it with the console consumer from the command line I could not see the headers, only the data of the failed records.
As per the documentation (Kafka Connect Concepts):
You can then use the **kcat** (formerly kafkacat) Utility to view the record header and determine why the record failed. Errors are also sent to **Connect Reporter**.
So I tried Connect Reporter, but the success-responses and error-responses topics were not created.
How can I see the headers of failed records without kcat? Is it possible?

Depending on your version of Kafka, you can use the console consumer:
kafka-console-consumer ... --property print.headers=true
Or you can write your own consumer if you cannot use kcat.
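For example, a complete command against the dead letter topic from the question (a sketch assuming a local broker and a Kafka version new enough to support print.headers, i.e. 2.7+; on Windows use the .bat script):
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic failed_records --from-beginning --property print.headers=true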

Related

Gmail Postmaster Tools API

I need to use the Gmail Postmaster Tools API, but I don't find the Google documentation very helpful. I am having trouble figuring out which API key to use and what credentials to create (a service account or OAuth 2). I want to test it locally. Here is the curl command I am using:
curl 'https://gmailpostmastertools.googleapis.com/v1beta1/domains/www.example.com/trafficStats/20200705?access_token=[ACCESS_TOKEN]' --header 'Accept: application/json' --compressed
Response:
{
  "error": {
    "code": 403,
    "message": "The caller does not have permission",
    "status": "PERMISSION_DENIED"
  }
}
I can't try this, but if you're using oauth2l:
CREDENTIALS=[[Path to your service account]]
ENDPOINT="https://gmailpostmastertools.googleapis.com/v1beta1"
DOMAIN="www.example.com"
STATS="20200705"
URL="${ENDPOINT}/domains/${DOMAIN}/trafficStats/${STATS}"
oauth2l curl \
--scope=postmaster.readonly \
--credentials=${CREDENTIALS} \
--url=${URL}
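If you'd rather call curl directly, oauth2l fetch can mint the access token for an Authorization header instead of passing it as a query parameter; a sketch under the same assumptions as above:
TOKEN=$(oauth2l fetch --credentials=${CREDENTIALS} --scope=postmaster.readonly)
curl --header "Authorization: Bearer ${TOKEN}" \
--header 'Accept: application/json' \
--compressed "${URL}"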
It turns out that my domain was not verified with my Google account. Once I added the DNS verification record, it started working.

kafka.common.KafkaException: Failed to parse the broker info from zookeeper from EC2 to elastic search

I have AWS MSK set up and I am trying to sink records from MSK to Elasticsearch.
I am able to push data into MSK in JSON format.
I want to sink it to Elasticsearch.
I have done all the setup correctly, as far as I can tell.
This is what I have done on the EC2 instance:
wget http://packages.confluent.io/archive/3.1/confluent-oss-3.1.2-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-oss-3.1.2-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-3.1.2 /usr/local/confluent
/usr/local/confluent/etc/kafka-connect-elasticsearch
After that I modified the connector properties under kafka-connect-elasticsearch and set my Elasticsearch URL:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=AWSKafkaTutorialTopic
key.ignore=true
connection.url=https://search-abcdefg-risdfgdfgk-es-ex675zav7k6mmmqodfgdxxipg5cfsi.us-east-1.es.amazonaws.com
type.name=kafka-connect
The producer sends messages in the format below:
{
  "data": {
    "RequestID": 517082653,
    "ContentTypeID": 9,
    "OrgID": 16145,
    "UserID": 4,
    "PromotionStartDateTime": "2019-12-14T16:06:21Z",
    "PromotionEndDateTime": "2019-12-14T16:16:04Z",
    "SystemStartDatetime": "2019-12-14T16:17:45.507000000Z"
  },
  "metadata": {
    "timestamp": "2019-12-29T10:37:31.502042Z",
    "record-type": "data",
    "operation": "insert",
    "partition-key-type": "schema-table",
    "schema-name": "dbo",
    "table-name": "TRFSDIQueue"
  }
}
I am a little confused about how Kafka Connect will start here. How can I start it?
I also started the Schema Registry as below, which gave me an error:
/usr/local/confluent/bin/schema-registry-start /usr/local/confluent/etc/schema-registry/schema-registry.properties
When I do that, I get the error below:
[2019-12-29 13:49:17,861] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"CLIENT":"PLAINTEXT","CLIENT_SECURE":"SSL","REPLICATION":"PLAINTEXT","REPLICATION_SECURE":"SSL"},"endpoints":["CLIENT:/
Please help.
As suggested in the answer, I upgraded the Kafka Connect version, but then I started getting the error below:
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:63)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:168)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:161)
... 7 more
First, Confluent Platform 3.1.2 is fairly old. I suggest you get the version that aligns with your Kafka version.
You start Kafka Connect using the appropriate connect-* scripts and properties files located under the bin and etc/kafka folders.
For example,
/usr/local/confluent/bin/connect-standalone \
/usr/local/confluent/etc/kafka/kafka-connect-standalone.properties \
/usr/local/confluent/etc/kafka-connect-elasticsearch/quickstart.properties
If that works, you can move on to using the connect-distributed command instead.
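For the standalone worker properties, the essentials are pointing bootstrap.servers at your MSK brokers and choosing converters that match the schemaless JSON payload shown above; a minimal sketch (the broker address is a placeholder for your MSK bootstrap string):
# connect-standalone worker properties (sketch; broker address is a placeholder)
bootstrap.servers=b-1.your-msk-cluster.kafka.us-east-1.amazonaws.com:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# the sample message has no schema/payload envelope, so disable schemas
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets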
Regarding Schema Registry, you can search its GitHub issues for multiple people trying to get MSK to work, but the root issue is that MSK does not expose a PLAINTEXT listener and Schema Registry did not support named listeners. (This may have changed since version 5.x.)
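If you upgrade, newer Schema Registry releases can also talk to the brokers directly instead of parsing broker info out of ZooKeeper, which may sidestep the failure above; a sketch of schema-registry.properties under that assumption (broker address again a placeholder):
kafkastore.bootstrap.servers=PLAINTEXT://b-1.your-msk-cluster.kafka.us-east-1.amazonaws.com:9092
kafkastore.topic=_schemas
listeners=http://0.0.0.0:8081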
You could also try running Connect and Schema Registry containers in ECS / EKS rather than extracting tarballs onto an EC2 machine.

Authorize.net Webhooks - need detailed error message

We are trying to get up to speed using Authorize.net Webhooks to connect to our Salesforce external site. Our Salesforce endpoint seems to be working OK, because I can call it using curl from the command line and pass a JSON structure that gets successfully saved to our database. This much works fine.
However when I enter the URL of our Webhook into the "Endpoint URL:" field in Authorize.net and click the "Test Webhook" button, I get the error message:
"Error: Error occured in connecting to the endpoint: (prints my endpoint URL)
I am sure that I am calling the correct URL because I have copied and pasted it from the command line into the Endpoint URL field. But I need to know why I'm getting this error. Is there a debug log for Webhooks? Or how can I get a more detailed error message from Authorize.net?
Just to be clear - this is testing a webhook call from Authorize.net -> Salesforce.
Edit: the curl call that works from the command line is:
curl -X POST -H "Content-Type: application/json" -d '{"test":"this"}' \
https://my.server.com/mysite/services/apexrest/PYMT_AuthnetHook
Thanks and Best Regards!

Can not kill a YARN application through REST api

I'm using Hadoop 2.5.0 (CDH 5.3.5).
Following this document, I tried to kill a running YARN application (whose application ID is application_1438849897472_0011) through the following REST API:
curl -i -XPUT http://{rm-rest-host}:{rm-rest-port}/ws/v1/cluster/apps/application_1438849897472_0011/killed
But I got a status code of 404 and an exception message complaining about
org.apache.hadoop.yarn.webapp.WebAppException: /v1/cluster/apps/application_1438849897472_0011/killed: controller for v1 not found
So what is going wrong?
The correct URI ends with /state, not /killed, and you are missing the request body.
Try this:
curl -v -X PUT -H "Content-Type: application/json" -d '{"state": "KILLED"}' 'http://{rm-rest-host}:{rm-rest-port}/ws/v1/cluster/apps/{app-id}/state'
Try the following:
curl -v -X PUT -H "Content-Type: application/json" -d '{"state": "KILLED"}' 'http://{rm-rest-host}:{rm-rest-port}/ws/v1/cluster/apps/application_1438849897472_0011/state'
The documentation you linked states:
With the application state API, you can query the state of a submitted app as well as kill a running app by modifying the state of a running app using a PUT request with the state set to "KILLED". To perform the PUT operation, authentication has to be set up for the RM web services. In addition, you must be authorized to kill the app. Currently you can only change the state to "KILLED"; an attempt to change the state to any other results in a 400 error response. Examples of the unauthorized and bad request errors are below. When you carry out a successful PUT, the initial response may be a 202. You can confirm that the app is killed by repeating the PUT request until you get a 200, querying the state using the GET method or querying for app information and checking the state. [...]
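The GET check mentioned at the end of that quote looks like this (same placeholders as above):
curl 'http://{rm-rest-host}:{rm-rest-port}/ws/v1/cluster/apps/application_1438849897472_0011/state'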

play2-elastic does not work when ElasticSearch is installed on EC2 server

When I try to connect to Elasticsearch (elasticsearch-0.90.3) installed on EC2 from a non-local machine using the play2-elastic plugin, it throws the following exception (the plugin works fine when connecting locally):
[error] application - ElasticSearch : No ElasticSearch node is available. Please check that your configuration is correct, that you ES server is up and reachable from the network. Index has not been created and prepared.
org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:205) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.client.transport.support.InternalTransportIndicesAdminClient.execute(InternalTransportIndicesAdminClient.java:85) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.client.support.AbstractIndicesAdminClient.exists(AbstractIndicesAdminClient.java:147) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequestBuilder.doExecute(IndicesExistsRequestBuilder.java:43) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59) ~[elasticsearch-0.90.3.jar:na]
I have used different methods to verify that the Elasticsearch server is up and running, for example:
curl -XGET '184.72.55.204:9300/_analyze?analyzer=standard' -d 'this is a test'
curl: (52) Empty reply from server
telnet 184.72.55.204 9300
Trying 184.72.55.204...
Connected to ec2-184-72-55-204.us-west-1.compute.amazonaws.com.
Escape character is '^]'.
In some Google Groups threads I saw other people with a similar problem who seemed to fix it by turning sniffing off, so I have this in my application.conf:
elasticsearch.client="184.72.55.204:9300"
elasticsearch.sniff=false # I ADDED THIS BUT DID NOT HELP
elasticsearch.index.name="phonotags"
elasticsearch.index.settings="{ analysis: { analyzer: { my_analyzer: { type: \"custom\", tokenizer: \"standard\" } } } }"
elasticsearch.index.clazzs="indexing.*"
elasticsearch.index.show_request=true
My build.scala file contains these:
"com.clever-age" % "play2-elasticsearch" % "0.7-SNAPSHOT"
resolvers += Resolver.url("play-plugin-releases", new URL("http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/"))(Resolver.ivyStylePatterns),
resolvers += Resolver.url("play-plugin-snapshots", new URL("http://repo.scala-sbt.org/scalasbt/sbt-plugin-snapshots/"))(Resolver.ivyStylePatterns)
I appreciate your help.
Thanks
It seems your node is not available.
curl -XPUT '184.72.55.204:9200/twitter/tweet/1' -d '{ "user": "kimchy", "post_date" : "2011-08-18T16:20:00", "message" : "trying out Elastic Search" }'
Can you check this?
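Worth noting: 9200 is Elasticsearch's HTTP/REST port, while 9300 is the binary transport port the TransportClient uses, so curl against 9300 will always return an empty reply. A quick HTTP reachability check against the same host:
curl -XGET 'http://184.72.55.204:9200/_cluster/health?pretty'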
