I ran into the following problem:
I declared the relationships in my data structures, but some of the related data comes back empty.
For example: when I fetch a product, its price and its stocks are loaded, but the store that each stock belongs to is not.
The structures that I use:
type Product struct {
    Id          uint    `json:"id"`
    Code        int     `json:"code" gorm:"index"`
    ProductName string  `json:"name"`
    Brand       string  `json:"brand"`
    Price       Price   `json:"price" gorm:"foreignKey:ProductCode;references:Code"`
    Stocks      []Stock `json:"stock" gorm:"foreignKey:ProductCode;references:Code"`
}

type Stock struct {
    ProductCode int   `json:"product_code"`
    StoreCode   uint  `json:"store_code" gorm:"index"`
    Quantity    int   `json:"quantity"`
    Store       Store `json:"store" gorm:"foreignKey:StoreId;references:StoreCode;constraint:OnUpdate:CASCADE,OnDelete:CASCADE"`
    Timestamp   int64 `json:"timestamp"`
}

type Price struct {
    ProductCode    int   `json:"product_code"`
    OldPrice       int   `json:"old_price"`
    CurrentPrice   int   `json:"current_price"`
    WholesalePrice int   `json:"wholesale_price"`
    Timestamp      int64 `json:"timestamp"`
}

type Store struct {
    Id           uint   `json:"id" gorm:"primaryKey"`
    StoreKaspiId string `json:"store_kaspi_id"`
    StoreId      uint   `json:"store_id"`
    CityId       uint   `json:"city_id"`
    City         City   `json:"city" gorm:"foreignKey:CityId"`
    StoreAddress string `json:"store_address"`
}

type City struct {
    Id   uint   `json:"id"`
    Name string `json:"name"`
}
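One thing I am not sure about: for a belongs-to relation, I read that foreignKey names the field on the declaring struct and references names the field on the related struct, so maybe my Store tag is inverted. The variant I am going to try (hypothetical):

    Store Store `json:"store" gorm:"foreignKey:StoreCode;references:StoreId"`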
Query result:
{
  "id": 79,
  "code": 687,
  "name": "Электромясорубка Аксион M21.03",
  "brand": "Аксион",
  "price": {
    "product_code": 687,
    "old_price": 30990,
    "current_price": 23910,
    "wholesale_price": 19900,
    "timestamp": 1628824966063
  },
  "stock": [
    {
      "product_code": 687,
      "store_code": 15,
      "quantity": 37,
      "store": {
        "id": 0,
        "store_kaspi_id": "",
        "store_id": 0,
        "city_id": 0,
        "city": {
          "id": 0,
          "name": ""
        },
        "store_address": ""
      },
      "timestamp": 1628824966219
    },
    {
      "product_code": 687,
      "store_code": 20,
      "quantity": 39,
      "store": {
        "id": 0,
        "store_kaspi_id": "",
        "store_id": 0,
        "city_id": 0,
        "city": {
          "id": 0,
          "name": ""
        },
        "store_address": ""
      },
      "timestamp": 1628824966219
    },
    {
      "product_code": 687,
      "store_code": 42,
      "quantity": 39,
      "store": {
        "id": 0,
        "store_kaspi_id": "",
        "store_id": 0,
        "city_id": 0,
        "city": {
          "id": 0,
          "name": ""
        },
        "store_address": ""
      },
      "timestamp": 1628824966219
    },
    {
      "product_code": 687,
      "store_code": 45,
      "quantity": 47,
      "store": {
        "id": 0,
        "store_kaspi_id": "",
        "store_id": 0,
        "city_id": 0,
        "city": {
          "id": 0,
          "name": ""
        },
        "store_address": ""
      },
      "timestamp": 1628824966219
    }
  ]
}
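From the GORM docs I understand that nested associations have to be preloaded explicitly, so the query should presumably look something like this (a minimal sketch, assuming GORM v2, where db is my *gorm.DB and the names follow the structs above):

var product Product
err := db.
    Preload("Price").
    Preload("Stocks").
    Preload("Stocks.Store").      // nested preload for the store on each stock
    Preload("Stocks.Store.City"). // and the city on each store
    First(&product, "code = ?", 687).Error
if err != nil {
    // handle the error
}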
Please help me understand!
Thank you in advance!
Hi, I would like to fetch via GraphQL only those records whose date falls within a given month, in this case August.
If I want a different month, it should be enough to swap it in the query. At the moment my query returns all months instead of only the ones matching the filter.
schema.json
{
  "kind": "collectionType",
  "collectionName": "product_popularities",
  "info": {
    "singularName": "product-popularity",
    "pluralName": "product-popularities",
    "displayName": "Popularity",
    "description": ""
  },
  "options": {
    "draftAndPublish": true
  },
  "pluginOptions": {},
  "attributes": {
    "podcast": {
      "type": "relation",
      "relation": "manyToOne",
      "target": "api::product.products",
      "inversedBy": "products"
    },
    "value": {
      "type": "integer"
    },
    "date": {
      "type": "date"
    }
  }
}
My query
query {
  Popularities(filters: { date: { contains: [2022-08] } }) {
    data {
      attributes {
        date
        value
      }
    }
  }
}
Response
{
  "data": {
    "Popularities": {
      "data": [
        {
          "attributes": {
            "date": "2022-08-03",
            "value": 50
          }
        },
        {
          "attributes": {
            "date": "2022-08-04",
            "value": 1
          }
        },
        {
          "attributes": {
            "date": "2022-08-10",
            "value": 100
          }
        },
        {
          "attributes": {
            "date": "2022-07-06",
            "value": 20
          }
        }
      ]
    }
  }
}
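What I am actually trying to express is a date range, so I suspect contains is the wrong operator. A sketch of the shape I have in mind, assuming Strapi v4's between filter operator for date fields (the query and field names follow my query above):

query {
  Popularities(filters: { date: { between: ["2022-08-01", "2022-08-31"] } }) {
    data {
      attributes {
        date
        value
      }
    }
  }
}

Switching the month would then only mean changing the two date bounds in the query.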
I am using Confluent's kafka-connect-jdbc to read data from different RDBMSs into Kafka.
Here is my test table:
CREATE TABLE DFOCUSVW.T4 (
  COL1 VARCHAR(100) NOT NULL,
  COL2 DECIMAL(6, 3) NOT NULL,
  COL3 NUMERIC(6, 3) NOT NULL,
  COL4 DECIMAL(12, 9) NOT NULL,
  COL5 NUMERIC(12, 9) NOT NULL,
  COL6 DECIMAL(18, 15) NOT NULL,
  COL7 NUMERIC(18, 15) NOT NULL,
  COL8 INTEGER NOT NULL,
  Td_Update_Ts TIMESTAMP NOT NULL,
  PRIMARY KEY (COL1)
);
In my view, numeric.mapping=best_fit should convert COL2...COL5 to FLOAT64 (15 digits of precision), while COL6 and COL7 should be serialized as bytes without any conversion, because they do not fit into FLOAT64.
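For reference, the relevant part of the source connector config I am toggling (a trimmed sketch; the connector name and connection URL are placeholders):

{
  "name": "jdbc-source-t4",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:...",
    "table.whitelist": "T4",
    "mode": "timestamp",
    "timestamp.column.name": "Td_Update_Ts",
    "numeric.mapping": "best_fit"
  }
}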
Here is the auto-generated Avro schema, which is identical for numeric.mapping=best_fit and numeric.mapping=none:
{
  "connect.name": "T4",
  "fields": [
    {
      "name": "COL1",
      "type": "string"
    },
    {
      "name": "COL2",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "3" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 3,
        "type": "bytes"
      }
    },
    {
      "name": "COL3",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "3" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 3,
        "type": "bytes"
      }
    },
    {
      "name": "COL4",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "9" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 9,
        "type": "bytes"
      }
    },
    {
      "name": "COL5",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "9" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 9,
        "type": "bytes"
      }
    },
    {
      "name": "COL6",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "15" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 15,
        "type": "bytes"
      }
    },
    {
      "name": "COL7",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "connect.parameters": { "scale": "15" },
        "connect.version": 1,
        "logicalType": "decimal",
        "precision": 64,
        "scale": 15,
        "type": "bytes"
      }
    },
    {
      "name": "COL8",
      "type": "int"
    },
    {
      "name": "Td_Update_Ts",
      "type": {
        "connect.name": "org.apache.kafka.connect.data.Timestamp",
        "connect.version": 1,
        "logicalType": "timestamp-millis",
        "type": "long"
      }
    }
  ],
  "name": "T4",
  "type": "record"
}
This schema shows that even with best_fit, the Connect framework did not convert the logical DECIMAL type into Avro's primitive double for COL2...COL5 before passing the rows to the Avro serializer.
The schema also reports the precision as always 64, which does not conform to the Avro spec:
From Avro spec:
scale, a JSON integer representing the scale (optional). If not specified the scale is 0.
precision, a JSON integer representing the (maximum) precision of decimals stored in this type (required).
So the precision for these types should be 6, 12, and 18, not 64!
All that being said, the Avro deserializer should still have enough information to deserialize accurately, but when reading the topic with the Avro console consumer I get:
{"COL1":"x2","COL2":"\u0003g“","COL3":"\u0003g“","COL4":"3ó1Ã\u0015","COL5":"3ó1Ã\u0015","COL6":"\u0003\u0018±š\u000E÷_y","COL7":"\u0003\u0018±š\u000E÷_y","COL8":2,"Td_Update_Ts":1583366400000}
{"COL1":"x3","COL2":"\u0004î3","COL3":"\u0004î3","COL4":"K;¨«\u0015","COL5":"K;¨«\u0015","COL6":"\u0004{÷\u0012l_y","COL7":"\u0004{÷\u0012l_y","COL8":3,"Td_Update_Ts":1583366400000}
{"COL1":"x1","COL2":"\u0001àó","COL3":"\u0001àó","COL4":"\u001CªºÛ\u0015","COL5":"\u001CªºÛ\u0015","COL6":"\u0001µl!±m_y","COL7":"\u0001µl!±m_y","COL8":1,"Td_Update_Ts":1583366400000}
For this data:
INSERT INTO t4 VALUES('x1', 123.123, 123.123, 123.123456789, 123.123456789, 123.123456789012345, 123.123456789012345, 1, '2020-03-05 00:00:00.000000 +00:00');
INSERT INTO t4 VALUES('x2', 223.123, 223.123, 223.123456789, 223.123456789, 223.123456789012345, 223.123456789012345, 2, '2020-03-05 00:00:00.000000 +00:00');
INSERT INTO t4 VALUES('x3', 323.123, 323.123, 323.123456789, 323.123456789, 323.123456789012345, 323.123456789012345, 3, '2020-03-05 00:00:00.000000 +00:00');
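Decoding one value by hand suggests the encoded bytes themselves are fine. For row x1, COL2 prints as "\u0001àó", i.e. the bytes 0x01 0xE0 0xF3, which is the unscaled integer 123123; with the declared scale of 3 that is exactly 123.123, matching the inserted value. A small Go sketch of that check (illustration only, not part of my pipeline):

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // The bytes behind the "\u0001àó" the console consumer printed for COL2.
    raw := []byte{0x01, 0xE0, 0xF3}

    // Connect's Decimal encodes a big-endian two's-complement unscaled integer;
    // the value is positive here, so no sign handling is needed.
    unscaled := new(big.Int).SetBytes(raw) // 123123

    // Apply the schema's scale of 3: 123123 / 10^3 = 123.123.
    value := new(big.Rat).SetFrac(unscaled, big.NewInt(1000))
    fmt.Println(value.FloatString(3)) // prints 123.123
}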
I have tried kafka-avro-console-consumer both with --property value.schema (passing the above schema manually) and with --property schema.registry.url=http://localhost:8081.
So the deserializer clearly failed to use the Avro schema to deserialize properly.
I was wondering why?
Given the following mapping:
"item": {
"properties": {
"name": {
"type": "string",
"index": "standard"
},
"state": {
"type": "string",
"index": "not_analyzed"
},
"important_dates": {
"properties": {
"city_id": {
"type": "integer"
},
"important_date": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
}
}
And given the following items in an index:
{
  "_id": 1,
  "name": "test data 1",
  "state": "california",
  "important_dates": [
    {
      "city_id": 100,
      "important_date": "2016-01-01T00:00:00"
    },
    {
      "city_id": 200,
      "important_date": "2016-05-15T00:00:00"
    }
  ]
},
{
  "_id": 2,
  "name": "test data 2",
  "state": "wisconsin",
  "important_dates": [
    {
      "city_id": 300,
      "important_date": "2016-04-10T00:00:00"
    },
    {
      "city_id": 400,
      "important_date": "2016-05-20T00:00:00"
    }
  ]
}
Is it possible to do a range filter on important_dates, but filter using only the min date in the important_dates array? Could this also be expanded to use only the date for a specific city when a city_id is given as a parameter?
Example queries:
If I have a range filter of 4/9/2016 to 5/17/2016 on important_dates, I only want to get back item 2, since the min date in item 1 doesn't fall within the given range.
If I have a range filter of 4/9/2016 to 5/17/2016 on important_dates and pass in city_id 400, I should get no results.
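For the city-specific variant, the shape I have in mind is roughly the following (a sketch only: it assumes important_dates is remapped as "type": "nested", which the mapping above does not do; with the plain object mapping the city_id/important_date values are flattened and lose their pairing). It covers the per-city date check, but not the min-date condition:

{
  "query": {
    "nested": {
      "path": "important_dates",
      "query": {
        "bool": {
          "must": [
            { "term": { "important_dates.city_id": 400 } },
            {
              "range": {
                "important_dates.important_date": {
                  "gte": "2016-04-09",
                  "lte": "2016-05-17"
                }
              }
            }
          ]
        }
      }
    }
  }
}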
I try to refrain from asking questions with simple answers but I can't seem to figure out what the issue is here... (Issue in title)
Relevant code:
match := new(Match)
if _, msgB, err = ws.ReadMessage(); err != nil {
    panic(err)
} else {
    println(string(msgB))
    err = json.Unmarshal(msgB, match)
    if err != nil {
        panic(err)
    }
}

type Match struct {
    Teams  [][]Char
    Map    [][]Tile
    ID     string // uuid
    Socket *websocket.Conn `json:"-"`
}

type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  statList
    X      int
    Y      int
    ACList Actions
}

type statList struct {
    Str int
    Vit int
    Int int
    Wis int
    Dex int
    Spd int
}

type Actions struct {
    Actions []Action
    TICKCT  int
}
String to unmarshal (Formatted for visibility):
{
  "Teams": [
    [
      {
        "ID": 1,
        "HP": 10,
        "CT": 0,
        "Stats": [1, 1, 1, 1, 1, 1],
        "X": 0,
        "Y": 0,
        "ACList": { "Actions": [], "TICKCT": 0 }
      }
    ],
    [
      {
        "ID": 2,
        "HP": 10,
        "CT": 0,
        "Stats": [1, 1, 1, 1, 1, 1],
        "X": 2,
        "Y": 2,
        "ACList": { "Actions": [], "TICKCT": 0 }
      }
    ]
  ],
  "Map": [
    [
      { "Depth": 1, "Type": 1, "Unit": 1 },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null }
    ],
    [
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null }
    ],
    [
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": 2 }
    ]
  ],
  "ID": "0b055e19-9b96-e492-b816-43297f12cc39"
}
Error:
2014/03/28 12:11:41 http: panic serving 127.0.0.1:56436: json: cannot
unmarshal number into Go value of type main.Char
I made a fixed version of the code (playground). This seemed to be the main mistake:
type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  []int // This was statList, which won't work
    X      int
    Y      int
    ACList Actions
}
Also note my definition of Tile, which allows the numbers to be nil.
type Tile struct {
    Depth int
    Type  int
    Unit  *int
}
You didn't provide all the structs, so I made some up (probably wrong!). All together that is:
package main

import (
    "encoding/json"
    "fmt"
)

type Match struct {
    Teams [][]Char
    Map   [][]Tile
    ID    string // uuid
    // Socket *websocket.Conn `json:"-"`
}

type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  []int // This was statList, which won't work
    X      int
    Y      int
    ACList Actions
}

type statList struct {
    Str int
    Vit int
    Int int
    Wis int
    Dex int
    Spd int
}

type Action string

type Actions struct {
    Actions []Action
    TICKCT  int
}

type Tile struct {
    Depth int
    Type  int
    Unit  *int
}
var data = `{
  "Teams": [
    [
      {
        "ID": 1,
        "HP": 10,
        "CT": 0,
        "Stats": [1, 1, 1, 1, 1, 1],
        "X": 0,
        "Y": 0,
        "ACList": { "Actions": [], "TICKCT": 0 }
      }
    ],
    [
      {
        "ID": 2,
        "HP": 10,
        "CT": 0,
        "Stats": [1, 1, 1, 1, 1, 1],
        "X": 2,
        "Y": 2,
        "ACList": { "Actions": [], "TICKCT": 0 }
      }
    ]
  ],
  "Map": [
    [
      { "Depth": 1, "Type": 1, "Unit": 1 },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null }
    ],
    [
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null }
    ],
    [
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": null },
      { "Depth": 1, "Type": 1, "Unit": 2 }
    ]
  ],
  "ID": "0b055e19-9b96-e492-b816-43297f12cc39"
}`
func main() {
    match := new(Match)
    err := json.Unmarshal([]byte(data), match)
    if err != nil {
        panic(err)
    }
    fmt.Printf("match = %#v\n", match)
}
My sample JSON object is shown below:
{
  "o": [
    {
      "level": 0,
      "outlineItemId": 8,
      "parentItemId": null,
      "parentItem": null,
      "order": 0,
      "text": "section 1",
      "isLeaf": "false",
      "expanded": "true"
    },
    {
      "level": 1,
      "outlineItemId": 9,
      "parentItemId": 8,
      "parentItem": {
        "level": 0,
        "outlineItemId": 8,
        "parentItemId": null,
        "parentItem": null,
        "order": 0,
        "text": "section 1",
        "isLeaf": "false",
        "expanded": "true"
      },
      "order": 0,
      "text": "sub 1",
      "isLeaf": "false",
      "expanded": "true"
    },
    {
      "level": 2,
      "outlineItemId": 10,
      "parentItemId": 9,
      "parentItem": {
        "level": 1,
        "outlineItemId": 9,
        "parentItemId": 8,
        "parentItem": {
          "level": 0,
          "outlineItemId": 8,
          "parentItemId": null,
          "parentItem": null,
          "order": 0,
          "text": "section 1",
          "isLeaf": "false",
          "expanded": "true"
        },
        "order": 0,
        "text": "sub 1",
        "isLeaf": "false",
        "negateDevice": null,
        "expanded": "true"
      },
      "order": 0,
      "text": "sub sub 1",
      "isLeaf": "true",
      "expanded": "true"
    }
  ]
}
Earlier when the tree was configured as:
treeReader: {
    level_field: "level",
    parent_id_field: "parentItemId",
    leaf_field: "isLeaf",
    expanded_field: "expanded"
},
The tree rendered with the correct indentation and image icons, but the nodes were not expanded even though the JSON object always had "expanded": "true", so I tried the code below.
treeReader: {
    level_field: "o.level",
    parent_id_field: "o.parentItemId",
    leaf_field: "o.isLeaf",
    expanded_field: "o.expanded"
},
Now I am not getting the image icons, and the tree that was expanded earlier is now flat.
My JSON reader, just in case I goofed up:
jsonReader: {
    root: 'o',
    id: 'o.outlineItemId',
    parentItemId: 'o.parentItem.outlineItemId',
    text: 'o.text',
    repeatitems: false,
    page: function (obj) { return 1; },
    total: function (obj) { return 1; },
    records: function (obj) { return obj.o.length; }
},
Any help will be appreciated.
Shah
Got it!
For the reader I had to include cell: '' and remove the o. prefixes, and also set "loaded": true in the JSON objects.
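Roughly, the working setup (a sketch reconstructed from the fix above; option values follow my original config):

jsonReader: {
    root: 'o',
    cell: '',                            // the missing piece
    id: 'outlineItemId',                 // "o." prefixes removed
    repeatitems: false,
    page: function (obj) { return 1; },
    total: function (obj) { return 1; },
    records: function (obj) { return obj.o.length; }
},
treeReader: {
    level_field: 'level',
    parent_id_field: 'parentItemId',
    leaf_field: 'isLeaf',
    expanded_field: 'expanded'
},

and each row in the JSON gains an extra "loaded": true field alongside "expanded": "true".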