I am deploying a Kafka Connect JDBC Source. It is connecting properly to the database, but the result I am getting is this:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "bytes",
"optional": false,
"name": "org.apache.kafka.connect.data.Decimal",
"version": 1,
"parameters": {
"scale": "0"
},
"field": "ID"
},
{
"type": "bytes",
"optional": false,
"name": "org.apache.kafka.connect.data.Decimal",
"version": 1,
"parameters": {
"scale": "0"
},
"field": "TENANT_ID"
},
{
"type": "bytes",
"optional": false,
"name": "org.apache.kafka.connect.data.Decimal",
"version": 1,
"parameters": {
"scale": "0"
},
"field": "IS_ACTIVE"
},
{
"type": "int64",
"optional": false,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "CREATION_DATE"
},
{
"type": "int64",
"optional": true,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "LAST_LOGIN"
},
{
"type": "string",
"optional": true,
"field": "NAME"
},
{
"type": "string",
"optional": true,
"field": "MOBILEPHONE"
},
{
"type": "string",
"optional": true,
"field": "EMAIL"
},
{
"type": "string",
"optional": true,
"field": "USERNAME"
},
{
"type": "string",
"optional": true,
"field": "PASSWORD"
},
{
"type": "string",
"optional": true,
"field": "EXTERNAL_ID"
}
],
"optional": false
},
"payload": {
"ID": "fdo=",
"TENANT_ID": "Uw==",
"IS_ACTIVE": "AQ==",
"CREATION_DATE": 1548987456000,
"LAST_LOGIN": 1557401837030,
"NAME": " ",
"MOBILEPHONE": " ",
"EMAIL": " ",
"USERNAME": "ES00613751",
"PASSWORD": " ",
"EXTERNAL_ID": " "
}
}
As you can see, the numeric and timestamp values are not shown properly.
The config:
name=jdbc-teradata-source-connector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=...
numeric.maping=best_fit
topic.prefix=test-2
mode=timestamp+incrementing
timestamp.column.name=LAST_LOGIN
incrementing.column.name=ID
topic=test-jdbc-oracle-source
The numeric mapping does not work, since this is Confluent 3.2.2.
I have also tried to cast the numbers to numeric but it does not work either.
Add numeric.mapping to your connector config:
"numeric.mapping": "best_fit"
You can see the full explanation here.
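For reference, what you are seeing for the numeric columns is not corruption: the JSON converter serializes Connect's Decimal logical type as base64-encoded bytes holding the big-endian unscaled value, with the scale carried in the schema parameters (for example, TENANT_ID's "Uw==" decodes to 0x53, i.e. 83 at scale 0), and the int64 timestamps are simply epoch milliseconds. To get plain numbers instead, the property has to be spelled numeric.mapping (the config above has numeric.maping) and the connector version has to support it, which Confluent 3.2.2 does not. A minimal sketch of the corrected source config:
name=jdbc-teradata-source-connector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=...
mode=timestamp+incrementing
timestamp.column.name=LAST_LOGIN
incrementing.column.name=ID
topic.prefix=test-2
# map NUMERIC/DECIMAL columns to integer or floating-point types based on precision and scale
numeric.mapping=best_fit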
I'm using the Confluent InfluxDB Connector to send data to InfluxDB from Kafka. The configuration looks like this:
connector.class=io.confluent.influxdb.InfluxDBSinkConnector
influxdb.url=myurl
topics=mytopic
tasks.max=1
The schema looks like this:
{
"type": "record",
"name": "myrecord",
"fields": [
{
"name": "sn",
"type": "string"
},
{
"name": "value",
"type": "float"
},
{
"name": "tagnum",
"type": "string"
}
]
}
When sending the data from Kafka to InfluxDB, every data item is treated as a FIELD.
How can I set some of the data items as a TAG (for example, "tagnum") when sending data to InfluxDB from Kafka with the InfluxDB Connector?
Your schema would look like this. The important thing is that your tags are in a map field.
{
"type": "struct",
"fields": [
{ "type": "map", "keys": { "type": "string", "optional": false }, "values": { "type": "string", "optional": false }, "optional": false, "field": "tags" },
{ "field": "sn", "optional": false, "type": "string" },
{ "field": "value", "optional": false, "type": "float" }
],
"optional": false,
"version": 1
}
Here's an example sending a JSON payload:
kafkacat -b localhost:9092 -P -t testdata-json4 <<EOF
{ "schema": { "type": "struct", "fields": [ { "type": "map", "keys": { "type": "string", "optional": false }, "values": { "type": "string", "optional": false }, "optional": false, "field": "tags" }, { "field": "sn", "optional": false, "type": "string" }, { "field": "value", "optional": false, "type": "float" } ], "optional": false, "version": 1 }, "payload": { "tags": { "tagnum": "5" }, "sn": "FOO", "value": 500.0 } }
EOF
Then create the sink connector:
curl -i -X PUT -H "Accept:application/json" \
-H "Content-Type:application/json" http://localhost:8083/connectors/SINK_INFLUX_01/config \
-d '{
"connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true",
"key.converter" : "org.apache.kafka.connect.storage.StringConverter",
"topics" : "testdata-json4",
"influxdb.url" : "http://influxdb:8086",
"influxdb.db" : "my_db",
"measurement.name.format" : "${topic}"
}'
The result in InfluxDB:
$ influx -execute 'SELECT * FROM "testdata-json4" GROUP BY tagnum;' -database "my_db"
name: testdata-json4
tags: tagnum=5
time sn value
---- -- -----
1579713749737000000 FOO 500
Ref: https://rmoff.net/2020/01/23/notes-on-getting-data-into-influxdb-from-kafka-with-kafka-connect/
How can I add actions to containers?
According to the documentation, the Container type has an "actions" object, but when testing the card in the Adaptive Cards Visualizer or in the Bot Framework Emulator, no button is displayed.
Attached is an example of the kind of card I'm trying to generate.
Thanks for your help.
{
"type": "AdaptiveCard",
"body": [
{
"style":"normal",
"type": "Container",
"separation" : "strong",
"actions": [
{
"type": "Action.OpenUrl",
"url": "http://foo.bar.com",
"title": "adaptivecards1"
}
],
"items": [
{
"type": "ColumnSet",
"separation": "strong",
"columns": [
{
"type": "Column",
"size":1,
"items": [
{
"type": "TextBlock",
"text": "Title",
"size": "large",
"isSubtle": true
},
{
"type": "TextBlock",
"text": "Model: ABC",
"size": "small"
}
]
},
{
"type": "Column",
"size": "1",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "https://path/to/image.jpg",
"size": "large",
"horizontalAlignment" :"right"
}
]
}
]
}
]
},
{
"style":"normal",
"type": "Container",
"separation" : "strong",
"actions": [
{
"type": "Action. OpenUrl",
"url": "http://foo.bar.com",
"title": "adaptivecards2"
}
],
"items": [
{
"type": "ColumnSet",
"separation": "strong",
"columns": [
{
"type": "Column",
"size":1,
"items": [
{
"type": "TextBlock",
"text": "Another Title",
"size": "large",
"isSubtle": true
},
{
"type": "TextBlock",
"text": "Model: XYZ",
"size": "small"
} ]
},
{
"type": "Column",
"size": "1",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "https://path/to/other/image.jpg",
"size": "large",
"horizontalAlignment" :"right"
}
]
}
]
}
]
}
]}
Per this GitHub issue, it seems that there is an error in the documentation and that the actions property doesn't exist on Container.
Instead, you should add an item of type ActionSet to your items array, with the list of actions.
Following your sample, it should look like:
{
"type": "AdaptiveCard",
"body": [
{
"style": "normal",
"type": "Container",
"separation": "strong",
"items": [
{
"type": "ActionSet",
"actions": [
{
"type": "Action.OpenUrl",
"url": "http://foo.bar.com",
"title": "adaptivecards1"
}
]
},
{
"type": "ColumnSet",
"separation": "strong",
"columns": [
{
"type": "Column",
"size": 1,
"items": [
{
"type": "TextBlock",
"text": "Title",
"size": "large",
"isSubtle": true
},
{
"type": "TextBlock",
"text": "Model: ABC",
"size": "small"
}
]
},
{
"type": "Column",
"size": "1",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "https://path/to/image.jpg",
"size": "large",
"horizontalAlignment": "right"
}
]
}
]
}
]
},
{
"style": "normal",
"type": "Container",
"separation": "strong",
"items": [
{
"type": "ActionSet",
"actions": [
{
"type": "Action.OpenUrl",
"url": "http://foo.bar.com",
"title": "adaptivecards2"
}
]
},
{
"type": "ColumnSet",
"separation": "strong",
"columns": [
{
"type": "Column",
"size": 1,
"items": [
{
"type": "TextBlock",
"text": "Another Title",
"size": "large",
"isSubtle": true
},
{
"type": "TextBlock",
"text": "Model: XYZ",
"size": "small"
}
]
},
{
"type": "Column",
"size": "1",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "https://path/to/other/image.jpg",
"size": "large",
"horizontalAlignment": "right"
}
]
}
]
}
]
}
]
}
This is also discussed here.
Similar to hive querying records for a specific uniontype
I have data on S3 in Avro format, and the following is the Avro schema:
{
"type": "record",
"name": "Event",
"namespace": "com.company.avro.event",
"fields": [
{
"name": "content",
"type": [
{
"type": "record",
"name": "Follow",
"fields": [
{
"name": "content",
"type": [
{
"type": "record",
"name": "UserFollowBrand",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "actor",
"type": "com.company.avro.entity.User"
},
{
"name": "verb",
"type": "string",
"default": "UserFollowBrand"
},
{
"name": "direct_object",
"type": "com.company.avro.entity.Brand"
},
{
"name": "on",
"type": [
"com.company.avro.type.IoSScreen",
"com.company.avro.type.AndroidScreen",
"null"
]
},
{
"name": "using",
"type": "com.company.avro.entity.App"
},
{
"name": "from",
"type": "string"
},
{
"name": "at",
"type": "long"
}
]
},
{
"type": "record",
"name": "UserFollowUser",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "actor",
"type": "com.company.avro.entity.User"
},
{
"name": "verb",
"type": "string",
"default": "UserFollowUser"
},
{
"name": "direct_object",
"type": "com.company.avro.entity.User"
},
{
"name": "on",
"type": [
"com.company.avro.type.IoSScreen",
"com.company.avro.type.AndroidScreen",
"null"
]
},
{
"name": "using",
"type": "com.company.avro.entity.App"
},
{
"name": "from",
"type": "string"
},
{
"name": "at",
"type": "long"
}
]
}
]
}
]
},
{
"type": "record",
"name": "Like",
"fields": [
{
"name": "content",
"type": [
{
"type": "record",
"name": "UserLikeListing",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "actor",
"type": "com.company.avro.entity.User"
},
{
"name": "verb",
"type": "string",
"default": "UserLikeListing"
},
{
"name": "direct_object",
"type": "com.company.avro.entity.Listing"
},
{
"name": "on",
"type": [
"com.company.avro.type.IoSScreen",
"com.company.avro.type.AndroidScreen",
"com.company.avro.type.WebScreen",
"null"
]
},
{
"name": "using",
"type": "com.company.avro.entity.App"
},
{
"name": "from",
"type": "string"
},
{
"name": "at",
"type": "long"
}
]
}
]
}
]
}
]
}
]
}
I am not sure how I can query for a specific field within the uniontype.
For example: select * from events where content.verb = "a" and content.actor.id = 34
Earlier, Hive did not support union types, but now it seems it does: https://issues.apache.org/jira/browse/HIVE-2390
I am unable to figure out how to use the create_union function to query this.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-UnionTypes
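For reference, one way this is often handled in newer Hive releases is the extract_union UDF (create_union builds union values rather than reading them). extract_union converts a UNIONTYPE column into a struct whose members are named tag_0, tag_1, ... in declaration order, so the Follow branch above would become tag_0 and the Like branch tag_1. A rough sketch, assuming a table named events over this schema and that extract_union exists in the Hive version in use:
-- Sketch only: the events table name and the availability of extract_union are assumptions.
-- tag_0 = Follow branch of the outer union; its inner union's tag_0 = UserFollowBrand.
SELECT *
FROM (
  SELECT extract_union(content) AS c
  FROM events
) t
WHERE t.c.tag_0 IS NOT NULL
  AND t.c.tag_0.content.tag_0.verb = 'UserFollowBrand'
  AND t.c.tag_0.content.tag_0.actor.id = 34;
-- If nested unions are not converted in a single pass in your version,
-- apply extract_union to the inner content field as well.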
I have a grid which may have many date-typed columns. All of these grids are generated by a generic function, so I cannot know in advance whether a column's type is date or not. There is one rule that holds across all columns: every date column's name ends with the "Date" suffix, for instance createDate, editDate, visitedDate, etc.
Because of that, I can tell that a column may be a date, and I parse it as you can see in the dataSource's parse function.
I run into trouble when I update the cell: the value is not reflected back to the model, and the date column's proto function returns an "Invalid Date" error. I do not understand what is happening.
var dataSource = new kendo.data.DataSource({
"data": [
{
"hidden_gridColumns": "",
"id": "21632",
"projectId": "146",
"customerTypeId": "4",
"district": "0",
"fieldSize": "12",
"fieldType": "0",
"floorCoveringType": "12",
"lastChangeDate": null,
"estimatedModificationDate": null,
"latestCompany": ""
}
],
"schema": {
"model": {
"id": "id",
"fields": {
"gridColumns": {
"type": "string"
},
"id": {
"type": "string"
},
"projectId": {
"type": "string"
},
"customerTypeId": {
"type": "string"
},
"district": {
"type": "string"
},
"fieldSize": {
"type": "string"
},
"fieldType": {
"type": "string"
},
"floorCoveringType": {
"type": "string"
},
"lastChangeDate": {
"type": "date"
},
"estimatedModificationDate": {
"type": "string"
},
"latestCompany": {
"type": "string"
}
}
},
parse: function(data){
$.each(data,
function(rowNo,
row){
$.each(row,
function(colName,
column){
if(colName.indexOf("Date")>=0){
console.log(colName + " is being scanned");
row[colName] = kendo.parseDate(row[colName], "dd-MM-yyyy");
}
});
});
return data;
}
},
"batch": true
})
And this is my columns structure:
var columns = [
{
"title": "gridColumns",
"field": "gridColumns",
"hidden": true
},
{
"title": "id",
"field": "id",
"hidden": true
},
{
"title": "projectId",
"field": "projectId",
"hidden": true
},
{
"title": "Müşteri Tipi",
"field": "customerTypeId",
"hidden": false,
"width": "91",
"values": [
{
"value": "1",
"text": "Yurtiçi "
},
{
"value": "2",
"text": "Yurtdışı"
},
{
"value": "3",
"text": "Spor Kulübü"
},
{
"value": "4",
"text": "Diğer"
},
{
"value": "5",
"text": "Üniversite"
},
{
"value": "6",
"text": ""
}
]
},
{
"title": "district",
"field": "district",
"hidden": true,
"width": "132"
},
{
"title": "Saha Ölçüsü",
"field": "fieldSize",
"hidden": false,
"width": "85"
},
{
"title": "Saha Türü",
"field": "fieldType",
"hidden": false,
"values": [
{
"value": "1",
"text": "Açık"
},
{
"value": "0",
"text": "Kapalı"
}
]
},
{
"title": "Halı Cinsi",
"field": "floorCoveringType",
"hidden": false,
"width": "76"
},
{
"title": "Son Halı değişim Tarih",
"field": "lastChangeDate",
"hidden": false,
"format": "{0:dd-MM-yyyy}"
},
{
"title": "Tahmini Yenileme Tarihih",
"field": "estimatedModificationDate",
"hidden": false
},
{
"title": "Son Çalıştığı Halı Firması",
"field": "latestCompany",
"hidden": false
}
]
I solved this problem with the code below:
parse: function(response) {
    var tmpData = [];
    for (var i = 0; i < response.length; i++) {
        var tmpRow = response[i];
        // Any column whose name contains "Date" is converted to a real Date
        // object so the grid model's date fields bind correctly.
        $.each(tmpRow, function(colNo, colValue){
            if (colNo.indexOf("Date") > -1) {
                tmpRow[colNo] = kendo.parseDate(new Date(tmpRow[colNo]));
            }
        });
        tmpData.push(tmpRow);
    }
    return tmpData;
}
I'm in the process of trying to set up a Kibana dashboard. This dashboard is hitting an Elasticsearch index. My index has the following mappings:
"myindex": {
"mappings": {
"animals": {
"properties": {
"#timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"#version": {
"type": "string"
},
"Class": {
"type": "string"
},
"Order": {
"type": "string"
},
"Family": {
"type": "string"
},
"Genus": {
"type": "string"
},
"Species": {
"type": "string"
}
}
},
"elements" : {
"properties": {
"#timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"#version": {
"type": "string"
},
"Symbol": {
"type": "string"
},
"Name": {
"type": "string"
},
"Group": {
"type": "string"
},
"Period": {
"type": "string"
}
}
}
}
}
As the mappings show, my index has two different types of information. My challenge is that I don't know how to set up my Kibana dashboard to list the information for each type separately. I've confirmed that the data in my Elasticsearch instance is correct.
In my dashboard, I'm trying to show two tables. One table will show all of the documents associated with "animals". The other table will show all of the documents associated with "elements". Unfortunately, I can't figure out how to narrow the results of a table down to a specific type. I'm basically trying to figure out how to set up either a query or a filter (I'm not sure of the difference between the two in the Kibana world) for a specific panel. Currently, my dashboard looks like this:
{
"title": "Research",
"services": {
"query": {
"list": {
"0": {
"query": "*",
"alias": "",
"color": "#7EB26D",
"id": 0,
"pin": false,
"type": "lucene"
}
},
"ids": [
0
]
},
"filter": {
"list": {
"0": {
"type": "time",
"field": "#timestamp",
"from": "now-{{ARGS.from || '24h'}}",
"to": "now",
"mandate": "must",
"active": true,
"alias": "",
"id": 0
}
},
"ids": [
0
]
}
},
"rows": [
{
"title": "Animals",
"height": "350px",
"editable": true,
"collapse": false,
"collapsable": true,
"panels": [
{
"title": "Animals",
"error": false,
"span": 12,
"editable": true,
"group": [
"default"
],
"type": "table",
"size": 100,
"pages": 5,
"offset": 0,
"sort": [
"#timestamp",
"desc"
],
"style": {
"font-size": "9pt"
},
"overflow": "min-height",
"fields": [
"Class",
"Order",
"Family",
"Genus",
"Species"
],
"localTime": true,
"timeField": "#timestamp",
"highlight": [],
"sortable": true,
"header": true,
"paging": true,
"spyable": true,
"queries": {
"mode": "all",
"ids": [
0
]
},
"field_list": true,
"status": "Stable",
"trimFactor": 300,
"normTimes": true
}
],
"notice": false
},
{
"title": "",
"height": "350px",
"editable": true,
"collapse": false,
"collapsable": true,
"panels": [
{
"title": "Elements",
"error": false,
"span": 12,
"editable": true,
"group": [
"default"
],
"type": "table",
"size": 100,
"pages": 5,
"offset": 0,
"sort": [
"#timestamp",
"desc"
],
"style": {
"font-size": "9pt"
},
"overflow": "min-height",
"fields": [
"Symbol",
"Name",
"Group",
"Period"
],
"localTime": true,
"timeField": "#timestamp",
"highlight": [],
"sortable": true,
"header": true,
"paging": true,
"spyable": true,
"queries": {
"mode": "all",
"ids": [
0
]
},
"field_list": true,
"trimFactor": 300,
"normTimes": true
}
],
"notice": false
}
],
"editable": true,
"failover": false,
"index": {
"interval": "none",
"default": "myindex"
},
"style": "dark",
"panel_hints": true,
"pulldowns": [
{
"type": "query",
"collapse": false,
"notice": false,
"query": "*",
"pinned": true,
"history": [],
"remember": 10
},
{
"type": "filtering",
"collapse": true,
"notice": false
}
],
"loader": {
"save_gist": false,
"save_elasticsearch": true,
"save_local": true,
"save_default": true,
"save_temp": true,
"save_temp_ttl_enable": true,
"save_temp_ttl": "30d",
"load_gist": true,
"load_elasticsearch": true,
"load_elasticsearch_size": 20,
"load_local": true,
"hide": false
},
"refresh": "30s"
}
Can someone tell me how to show two different types of documents in Kibana? I see a queries object on the table panel, but I have no idea how to use it.
Thank you so much
You can use the _type field to narrow the results to a specific Elasticsearch type (e.g. animals).
So when you define the query (or filter) for your table, just make sure to specify the relevant _type (i.e. _type: animals).
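For example, here is a sketch of how the dashboard above could be wired up, staying with the Kibana 3 dashboard JSON from the question (the aliases and colors are arbitrary): define one query per type in the query service, then point each table panel at the matching query id.
"query": {
  "list": {
    "0": { "query": "_type:animals", "alias": "Animals", "color": "#7EB26D", "id": 0, "pin": false, "type": "lucene" },
    "1": { "query": "_type:elements", "alias": "Elements", "color": "#EAB839", "id": 1, "pin": false, "type": "lucene" }
  },
  "ids": [ 0, 1 ]
}
The "Animals" table panel would then use "queries": { "mode": "selected", "ids": [ 0 ] } and the "Elements" panel "queries": { "mode": "selected", "ids": [ 1 ] }, instead of the "mode": "all" that both panels currently share (switching the mode to "selected" is the assumption here; it restricts a panel to the listed query ids).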
You can use scripted fields to expose the type value as a separate field, or you can add the _type field to the search fields, where it will be available.
In the case of scripted fields, add it as doc['_type'].value and give it any name you want.
https://github.com/elastic/kibana/issues/5684