Kafka Connect JDBC sink - write Avro field into PG JSONB

I'm trying to build a pipeline where Avro data is written into a Postgres DB. Everything works fine with simple schemas and the AvroConverter for the values. However, I would like to have a nested field written into a JSONB column. There are a couple of problems with this. First, it seems that the Connect plugin does not support STRUCT data. Second, the plugin cannot write directly into the JSONB column.
The second problem can be worked around by adding a cast in PG, as described in this issue. The first problem is proving more difficult. I have tried different transformations but have not been able to get the Connect plugin to interpret one complex field as a string. The schema in question looks something like this (in practice there would be more fields on the first level besides the timestamp):
{
"namespace": "test.schema",
"name": "nested_message",
"type": "record",
"fields": [
{
"name": "timestamp",
"type": "long"
},
{
"name": "nested_field",
"type": {
"name": "nested_field_record",
"type": "record",
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "prop",
"type": "float",
"doc": "Some property"
}
]
}
}
]
}
The message is written to Kafka as:
{"timestamp":1599493668741396400,"nested_field":{"name":"myname","prop":377.93887}}
In order to write the contents of nested_field into a single DB column, I would like to interpret this entire field as a string. Is this possible? I have tried the Cast transformation, but this only supports primitive types. Something along the lines of HoistField could work, but I don't see a way to limit this to a single field. Any ideas or advice would be greatly appreciated.
A completely different approach would be to use two connect plugins and UPSERT into the table. One plugin would use the AvroConverter for all fields save the nested one, while the second plugin uses the StringConverter for the nested field. This feels wrong in all kinds of ways though.
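For what it's worth, a minimal sketch of what the upsert half of that two-connector idea might look like. The topic name, connection URL, Schema Registry URL, and the choice of timestamp as the upsert key are all assumptions for illustration, not part of the original setup:
{
  "name": "pg-sink-avro-upsert",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "nested_messages",
    "connection.url": "jdbc:postgresql://localhost:5432/testdb",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "timestamp",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081"
  }
}
The insert.mode, pk.mode and pk.fields keys are standard JDBC sink options; this sketch only covers the upsert mechanics and does not by itself solve the struct-to-string problem described above.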

Related

Keeping [Europe/Berlin] (or other timezones in this format) while indexing in Elasticsearch

I'm trying to familiarize myself with Elasticsearch, specifically defining the mapping within a JSON file and creating a new index with it (with the help of the new Java API Client and Spring Boot).
This is what my json file looks like:
{
"mappings": {
"properties": {
"Id": {
"type": "text"
},
"timestamp": {
"type": "date",
"format": "date_optional_time"
},
"metadata":{
"type": "nested"
},
"attributes": {
"type": "nested"
}
}
}
}
This indexes my documents just fine, but I realized that if I use ZonedDateTime.now() for the data in my timestamp field, it fails to index due to the [Europe/Berlin] at the end. It works if I change it to
ZonedDateTime now = ZonedDateTime.now();
String date = now.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
which gives me the time but without [Europe/Berlin]! As far as I understand from my various googling and "stackoverflow-ing", ES does not accept [Timezone] in its date types, only the offset format such as +02:00. But is it possible to keep it? (Maybe through an ingest pipeline?)
There are various documents that I would like to reindex that have [Timezone] hanging at the end, but these older documents saved it as text. I would like to be able to do date math with the timestamp field in the future, which is why I decided to try and create a new/better mapping with proper fields. Any pointers appreciated!
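On the ingest-pipeline idea, here is a hedged sketch of what it could look like. It assumes the field is called timestamp; the pipeline name strip-zone-id and the timestamp_raw field (which keeps the original string including the zone ID) are made up for illustration. The gsub processor strips the trailing [Europe/Berlin] before date parsing:
PUT _ingest/pipeline/strip-zone-id
{
  "description": "Keep the raw value, then strip a trailing [ZoneId] from the timestamp field",
  "processors": [
    {
      "set": {
        "field": "timestamp_raw",
        "value": "{{{timestamp}}}"
      }
    },
    {
      "gsub": {
        "field": "timestamp",
        "pattern": "\\[.*\\]$",
        "replacement": ""
      }
    }
  ]
}
The pipeline can then be applied per request with ?pipeline=strip-zone-id or set as index.default_pipeline on the index; timestamp_raw would need its own (e.g. keyword) entry in the mapping if you want to keep the zone ID around.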

How to convert JSON to a collection in Power Apps

I have a Power App that uses a flow from Power Automate.
My flow does an HTTP GET and responds with JSON to Power Apps, as below.
Here is the JSON as text:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
But when I try to convert this JSON to a collection, it doesn't behave like a list.
It just seems like text. Here is how I try to bind the list.
How can I create a collection from JSON to bind to the gallery view?
I found the solution. I finally created a collection from the response of the flow.
The flow's name is GetVendor.
The response of the flow is like this:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
The code below creates a list from this response:
ClearCollect(_vendorData, MatchAll(GetVendors.Run(_token.value).value, "\{""dataAreaId"":""(?<dataAreaId>[^""]*)"",""AccountNum"":""(?<AccountNum>[^""]*)"",""Name"":""(?<Name>[^""]*)""\}"));
And I could bind the AccountNum and Name from the _vendorData collection to the gallery view.
In my case I had the same issue, but I couldn't manage to get data into the _vendorData collection, because the MatchAll regex part was not working correctly even though I had exactly the same scenario.
My solution was to modify the flow itself, where I returned a Response action instead of Respond to a PowerApp or flow, so basically I could return the full HTTP response.
This also caused me some issues, because when I generated the schema from a sample I could not register the flow in the Power App, getting the error Failed during http send request.
The solution was to manually review the response schema and change all column types to one of the following three, because others are not supported: string, integer or boolean. Object and array can be set only on top-level items, never on children, so if you have anything other than the three I mentioned, replace it with string. And no property can be left with an undefined type.
Basically I like this solution even more, because in Power Apps itself you do not need to do any conversion or anything: simply use the data as is, because an array is already recognized as a collection and you have all the properties already named for you.
A Response step schema example is below.
{
"type": "object",
"properties": {
"PropertyOne": {
"type": "string"
},
"PropertyTwo": {
"type": "integer"
},
"PropertyThree": {
"type": "boolean"
},
"PropertyFour": {
"type": "array",
"items": {
"type": "object",
"properties": {
"PropertyArray1": {
"type": "string"
},
"PropertyArray2": {
"type": "integer"
},
"PropertyArray3": {
"type": "boolean"
}
}
}
}
}
}
It is easy now.
Power Apps introduced the ParseJSON function, which helps convert a string to a collection easily.
Table(ParseJSON(JSONString));
In the gallery, map columns like ThisItem.Value.ColumnName.

Elastic Beats - Changing the Field Type of Default Fields in Beats Documents?

I'm still fairly new to the Elastic Stack, and I'm not seeing the entire picture from what I'm reading on this topic.
Let's say I'm using the latest versions of Filebeat or Metricbeat, for example, and pushing that data to a Logstash output (which is then configured to push to ES). I want an "out of the box" field from one of these Beats to have its field type changed (example: change beat.hostname from its current default "text" type to "keyword"). What is the best place/practice for configuring this? This kind of change is something I would want consistent across multiple hosts running the same Beat.
I wouldn't change any existing fields, since Kibana builds a lot of visualizations, dashboards, SIEM, ... on the expected fields and data types.
Instead, extend (add, don't change) the default mapping if needed. On top of the default index template, you can add your own and they will be merged. Adding more fields will require some more disk space (and probably memory when loading), but it should be manageable and avoids a lot of drawbacks of other approaches.
Agreed with @xeraa. It is not advised to change the default template, since that field might be used in any default visualizations.
Create a new template; you can have multiple templates for the same index pattern. All the mappings will be merged. The order of merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them.
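A minimal sketch of such an additional (legacy-style) template. The template name, the filebeat-* index pattern, and my_custom_field are assumptions for illustration; the higher order means it is merged on top of the Beat's default template:
PUT _template/my-extra-fields
{
  "index_patterns": ["filebeat-*"],
  "order": 10,
  "mappings": {
    "properties": {
      "my_custom_field": {
        "type": "keyword"
      }
    }
  }
}
Newer Elasticsearch versions replace this with composable index templates (PUT _index_template/... with priority instead of order), but the merging idea is the same.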
For your case, probably create a multi-field for any field that needs to be changed. E.g., as shown here, create a new keyword multi-field; then you can refer to the new field as fieldname.raw:
"properties": {
"city": {
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
}
}
The other answers are correct, but I did the below in the Dev console to update the message field from text to text and keyword:
PUT /index_name/_mapping
{
"properties": {
"message": {
"type": "match_only_text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 10000
}
}
}
}
}

Script fields in nested objects, specifically geo_shapes

Part of my document mapping consists of the mapping below:
"locations": {
"type": "nested",
"properties": {
"point": {
"type": "geo_shape",
"tree": "quadtree",
"precision": "100m"
}
}
}
When I attempt to issue a script_field as part of a query, Elasticsearch returns an error:
failed to run inline script [doc['locations.point'].distanceInMiles(53.4791,-2.2441)] using lang [groovy]
With a reason of:
failed to find field data builder for field locations.point, and type geo_shape
I'm assuming this is because the field is nested (it has a few geo points inside it and the search matches on any one of them). However, as it's nested, the context of the path locations.point is obviously wrong; it needs to be something like locations.point[10] (for the 11th one perhaps; this is dependent on the context of the matched item in the query).
So, does anyone know a way to perform this properly? Is there a special operator I can tell the script so that it knows it needs to look at the matched point from the field?
Thanks in advance.
Turns out it's actually not possible to do this with geo_shapes.

Generating GraphQL GUI from Schema and Schema from GUI

While using GraphiQL works well, my boss has asked me to implement a user interface where users can select the elements presented to them via UI controls like checkboxes, map relationships, and get the data; doing this should generate the GraphQL query for them, call the API, and return the result to the user.
So basically this involves two generation steps: generating a user interface from a GraphQL schema, and generating a GraphQL query from the user's selections.
I searched and was not able to find any tools which already do this. My server is in Node and I am using Express GraphQL. I converted my Express schema to the GraphQL schema language using https://github.com/graphql-cli/graphql-cli and introspected it using the introspection function at https://github.com/sheerun/graphqlviz/blob/master/cli.js
The object which I got was something like this (only partial schema output given below)
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": {
"name": "Mutation"
},
"subscriptionType": null,
"types": [{
"kind": "OBJECT",
"name": "Query",
"description": null,
"fields": [{
"name": "employee",
"description": null,
"args": [{
"name": "ecode",
"description": null,
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"defaultValue": null
}],
I am looping through the elements trying to generate the UI, but I am quite stuck.
What is the best way to do this? Thanks in advance.
Well, for the part about generating the UI from the introspection query, I think the response contains enough data for a sufficient UI (the description of each field can be used as a placeholder for that field's input box). If you're asking how you can generate a dynamic form from the introspection response, you can look at other projects that created transformers from JSON to HTML forms for inspiration/usage (take a look at https://github.com/brutusin/json-forms/blob/master/README.md#cdn). For complex fields (non-primitive types) you may need to do some more work.
For the query generation from the UI, you can use any query builder tool and build a query according to the user inputs. Each combo box will be mapped to a specific SCHEMANAME.FIELDNAME and the value will be the value of the input box.
I hope it helped a bit. BTW, it sounds like an interesting tool so let us know if you succeed!
