I am trying to create a new event (my content type name) through the Strapi REST API.
My payload:
{
  title: "Event1",
  description: "Somedescription",
  date: "SomeDate",
  store: 3 // this is the relation id
}
According to the documentation this should be correct. The event is created, but the relation is not being filled; it shows up as empty.
My event schema:
....
"store": {
"type": "relation",
"relation": "manyToOne",
"target": "api::store.store",
"inversedBy": "events"
}
.....
I took a look at the Strapi log, and it does not seem to make any attempt to record the relation, because the SQL looks like this:
{
method: 'insert',
options: {},
timeout: false,
cancelOnTimeout: false,
bindings: [
2022-05-21T15:14:47.404Z,
2022-05-30T16:00:00.000Z,
'h',
2022-05-21T15:14:47.403Z,
'h',
2022-05-21T15:14:47.404Z
],
__knexQueryUid: 'ZimE1XysRQZdZzz07fOQt',
sql: 'insert into "events" ("created_at", "date", "description", "published_at", "title", "updated_at") values (?, ?, ?, ?, ?, ?) returning "id"',
returning: 'id'
}
Any idea what I am doing wrong?
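For reference, a minimal sketch of what the create call can look like, assuming Strapi v4 (which the api::store.store schema syntax suggests): in v4 the REST API expects the attributes wrapped in a data object, and a relation can be set by the related entry's id. The endpoint URL below is an assumption based on a default setup.

// Hedged sketch, assuming Strapi v4: attributes go inside a "data"
// wrapper; the relation is set by the related entry's id.
const res = await fetch("http://localhost:1337/api/events", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    data: {
      title: "Event1",
      description: "Somedescription",
      date: "SomeDate",
      store: 3, // newer v4 releases also accept { connect: [3] }
    },
  }),
});
console.log(await res.json());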
I have an app that makes a lot of requests to different URLs.
Each of them has a different request and response structure.
The configurations for the request types are stored in a MySQL database.
Each configuration contains the following data:
URL
Method
Query params
Body params
Headers
Response structure
While this was easy to solve back when I was using Node.js, with Go the only way I see is to use the reflect package, and reflect hurts the app's performance.
Is there a simple way to generate code for this?
Example of request config:
{
  "id": 1,
  "name": "Req1",
  "url": "http://example.com",
  "query": [
    {"name": "user_id", "source": "user", "value": "id"},
    {"name": "user_ip", "source": "user", "value": "ip"},
    {"name": "token", "source": "const", "value": "xxx"}
  ],
  "headers": [
    {"name": "Accept-Language", "source": "user", "value": "language"}
  ],
  "body": [
    {"name": "user.ua", "source": "user", "value": "ua"}
  ]
}
User example:
{
  "ip": "127.0.0.1",
  "id": "123",
  "ua": "User Agent...",
  "language": "en"
}
The output should be the following request:
URL: http://example.com?user_id=123&user_ip=127.0.0.1&token=xxx
Headers:
{
Accept-Language: en
}
Body:
{
user: {
ua: "User Agent..."
}
}
In the body, a param name may be a dotted path, which produces a nested object (as user.ua does above).
Is there a tool for automatically generating this type of code?
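A hedged sketch of one reflect-free approach: since the configs are plain data, a request can be assembled with maps and the standard library alone, with no code generation. The type and function names below (Param, resolve, BuildRequest) are assumptions for illustration.

package reqbuild

import (
	"bytes"
	"encoding/json"
	"net/http"
	"net/url"
	"strings"
)

// Param mirrors one entry of the stored config (field names assumed).
type Param struct {
	Name   string `json:"name"`
	Source string `json:"source"` // "user" or "const"
	Value  string `json:"value"`
}

// resolve picks a value from the user map, or returns the constant.
func resolve(p Param, user map[string]string) string {
	if p.Source == "user" {
		return user[p.Value]
	}
	return p.Value
}

// BuildRequest assembles an *http.Request from config data using only
// maps and the standard library: no reflect, no code generation.
func BuildRequest(method, rawURL string, query, headers, body []Param, user map[string]string) (*http.Request, error) {
	q := url.Values{}
	for _, p := range query {
		q.Set(p.Name, resolve(p, user))
	}
	// Dotted body names like "user.ua" become nested JSON objects.
	payload := map[string]interface{}{}
	for _, p := range body {
		parts := strings.Split(p.Name, ".")
		node := payload
		for _, k := range parts[:len(parts)-1] {
			child, ok := node[k].(map[string]interface{})
			if !ok {
				child = map[string]interface{}{}
				node[k] = child
			}
			node = child
		}
		node[parts[len(parts)-1]] = resolve(p, user)
	}
	buf, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(method, rawURL+"?"+q.Encode(), bytes.NewReader(buf))
	if err != nil {
		return nil, err
	}
	for _, p := range headers {
		req.Header.Set(p.Name, resolve(p, user))
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

With the example config and user above, this would produce a request to http://example.com?token=xxx&user_id=123&user_ip=127.0.0.1 with the Accept-Language: en header and the nested {"user":{"ua":"User Agent..."}} body.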
I write in Ruby on Jets and use Dynomite to work with DynamoDB, and I have a problem with a GSI.
I have a table with three fields: display, value, and title_hilight. I need to search across all three fields, so I decided to use a global secondary index. For testing purposes, I added a GSI for the "display" field.
I created a migration:
class SomeTableMigration < Dynomite::Migration
  def up
    create_table 'table-name' do |t|
      t.partition_key "id: string: hash" # required
      t.gsi do |i|
        i.partition_key "display: string"
      end
    end
  end
end
Then I created a model:
require "active_model"
class ModelName<ApplicationItem
self.set_table_name 'some-model-name'
column :id, :display,:val, :title_hilight
end
Now I'm trying to find a record by the value of the "display" field:
ModelName.where({display: 'asd'})
and I get this error:
Aws::DynamoDB::Errors::ValidationException (Query condition missed key schema element)
Here is the output of aws dynamodb describe-table --table-name table-name --endpoint-url http://localhost:8000
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "id",
"AttributeType": "S"
},
{
"AttributeName": "display",
"AttributeType": "S"
}
],
"TableName": "some-table-name",
"KeySchema": [
{
"AttributeName": "id",
"KeyType": "HASH"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2020-10-26T14:52:59.589000+03:00",
"ProvisionedThroughput": {
"LastIncreaseDateTime": "1970-01-01T03:00:00+03:00",
"LastDecreaseDateTime": "1970-01-01T03:00:00+03:00",
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 112,
"ItemCount": 1,
"TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/some-table-name",
"GlobalSecondaryIndexes": [
{
"IndexName": "display-index",
"KeySchema": [
{
"AttributeName": "display",
"KeyType": "HASH"
}
],
"Projection": {
"ProjectionType": "ALL"
},
"IndexStatus": "ACTIVE",
"ProvisionedThroughput": {
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"IndexSizeBytes": 112,
"ItemCount": 1,
"IndexArn": "arn:aws:dynamodb:ddblocal:000000000000:table/some-table-name/index/display-index"
}
]
}
}
I changed the name of the real table to SomeTableName (sometimes just table-name); the rest of the code is unchanged. Thanks for the help.
As mentioned here:
In DynamoDB, you can optionally create one or more secondary indexes
on a table and query those indexes in the same way that you query a
table.
You need to specify the GSI name explicitly in your query.
@jny's answer is correct: he told me to use a different index. I don't know how to use a different model (see the comments on his answer), but the idea of specifying the index is exactly right. This is how everything works for me now:
ModelName.query(
index_name: 'display-index',
expression_attribute_names: { "#display_name" => "display" },
expression_attribute_values: { ":display_value" => "das" },
key_condition_expression: "#display_name = :display_value",
)
My readQuery is returning a field not found error, even though it seems like the field is present in the cache.
QUERY
const GETIMSFROMCACHE_QUERY = gql`
query getIMsFromCache($fromID: String!){
instant_message(fromID:$fromID){
id,
fromID,
toID,
msgText
}
} `;
CACHE RESOLVER
client.cache.cacheResolvers = {
Query: {
instant_message: (_, args) => toIdValue(client.dataIdFromObject({
__typename: 'instant_message',
id: args.id
})),
},
};
READQUERY CALL, IN UPDATE
let instant_message = cache.readQuery({ query: GETIMSFROMCACHE_QUERY, variables: {"fromID": fromID} });
ERROR
Error: Can't find field instant_message({"fromID":"ayqNLcA7c6r8vip3i"}) on object (ROOT_QUERY) {
"getMyUserData({\"id\":\"ayqNLcA7c6r8vip3i\"})": [
{
"type": "id",
"generated": false,
"id": "MyUserData:ayqNLcA7c6r8vip3i",
"typename": "MyUserData"
}
],
"getMyUserData({\"id\":\"9W95z8A7Y6i34buk7\"})": [
{
"type": "id",
"generated": false,
"id": "MyUserData:9W95z8A7Y6i34buk7",
"typename": "MyUserData"
}
],
"Appts({\"originatingUserID\":\"ayqNLcA7c6r8vip3i\"})": [],
"instant_message({\"fromID\":\"ayqNLcA7c6r8vip3i\",\"toID\":\"9W95z8A7Y6i34buk7\"})": [
{
"type": "id",
"generated": false,
"id": "instant_message:126",
"typename": "instant_message"
},
{
"type": "id",
"generated": false,
"id": "instant_message:127",
"typename": "instant_message"
},
{
"type": "id",
"generated": false,
"id": "instant_message:128",
"typename": "instant_message"
}
]
}.
Looking at the error message, there does seem to be an instant_message entry present on the ROOT_QUERY object for the target user id, yet I'm still getting this error.
How can I correct this?
Thanks in advance to all for any info.
Solved! This was tricky because the regular resolver for the original query brings back any IM that is to or from either of the two users: it returns any instant_messages that are from fromID and to toID, or vice versa.
So I thought I needed some sort of cache resolver to repeat this when querying the cache.
Eventually I realized that the cache doesn't care what happened in the resolver -- it's going to store the instant_message objects as being from fromID and to toID, regardless of what happened in the resolver.
Once I realized that, I dropped the special cache resolver and just used the original query that retrieved the instant_messages in the first place, with the same variables, and it worked. :)
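For concreteness, a sketch of the working readQuery call under this approach (the query name and exact field shape are assumptions; the point is that it matches the original query and passes the same fromID/toID pair that populated the cache):

import gql from 'graphql-tag';

// Read the cache with the *original* query and the same variable pair
// that populated it: the cache keys the field by the full argument set
// (fromID AND toID), not by fromID alone.
const GETIMS_QUERY = gql`
  query getIMs($fromID: String!, $toID: String!) {
    instant_message(fromID: $fromID, toID: $toID) {
      id
      fromID
      toID
      msgText
    }
  }
`;

const data = cache.readQuery({
  query: GETIMS_QUERY,
  variables: { fromID, toID },
});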
JSONata offers conditional expressions and predicates which can be used to select values out of JSON trees.
However, I have not been able to find a way to test the datatype of a JSON value.
For example, given the array:
[null, true, false, 1, 2.3, "a", ["x"], {}, {"y": "z"}]
I only want to pull out the numeric values.
[1, 2.3]
Q: In a JSONata query, how does one test the JSON datatype (null, boolean, number, string, array, object) of a value?
Currently there is no way to do this in JSONata. Worthy of an enhancement request though.
Wow, today I discovered this cool JSONata. Here is my try:
http://try.jsonata.org/
[null, true, false, 1, 2.3, "a", ["x"], {}, {"y" : "z"}]
*[$ ~> /^[0-9\.]{1,}$/m]
JSONata offers the $type function to check the datatype of a JSON value. For example, the following snippet returns the datatypes of the values in the Invoice example data at https://try.jsonata.org:
Account.Order.Product.{
"priceType": $type(Price),
"productNameType": $type($.'Product Name'),
"descriptionType": $type(Description)
}
The result is:
[
{
"priceType": "number",
"productNameType": "string",
"descriptionType": "object"
},
{
"priceType": "number",
"productNameType": "string",
"descriptionType": "object"
},
{
"priceType": "number",
"productNameType": "string",
"descriptionType": "object"
},
{
"priceType": "number",
"productNameType": "string",
"descriptionType": "object"
}
]
By changing the values of Price and Product Name to null in the example JSON, the result for that particular object changes to:
{
"priceType": "null",
"productNameType": "null",
"descriptionType": "object"
},
One can check for 'null', 'number', 'string', 'array', 'object', and 'boolean'.
In my case, I used it to check for null values when converting dates to milliseconds:
'enddate': $type($v.enddate) = 'null' ? null : $toMillis($v.enddate),
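Applied to the array from the original question, a one-liner (assuming a JSONata version that ships both $type and $filter) that keeps only the numeric values:

$filter([null, true, false, 1, 2.3, "a", ["x"], {}, {"y": "z"}], function($v) { $type($v) = "number" })

which evaluates to [1, 2.3].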
You can also check whether a value is a number by computing value - value = 0. When the value is a number, the expression is always 0, so the result is true; if the value is a string, it raises an error instead.
I'm using the Kendo UI data grid with Firebase (a REST JSON response). The structure can contain multiple objects; however, these objects are not in a standard array format. See my JSON file below:
{
"users": {
"contactdetails": {
"email": "johnlittle#email.com"
},
"firstname": "John",
"id": 1,
"surname": "Little"
}
}
I am able to read firstname and surname into the grid's columns, but I cannot get to the email object.
This is my schema definition:
schema: {
model: {
fields: {
id: {type: "number"},
firstname: {type: "string"},
surname: {type: "string"},
email: {type: "string"}
}
}
}
As far as I know, you cannot specify a nested object in the schema model definition. One way is to use a column template for the email column:
columns: [
{ field: "firstname", title: "FirstName" },
{ field: "surname", title: "Surename" },
{ title: "Email", template: "#= data.contactdetails.email #" },
],
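For completeness, a hedged sketch of the full grid setup with that template column (the element id and data wiring are assumptions):

$("#grid").kendoGrid({
  dataSource: {
    data: [ /* items shaped like the JSON above */ ],
    schema: {
      model: {
        fields: {
          id: { type: "number" },
          firstname: { type: "string" },
          surname: { type: "string" }
        }
      }
    }
  },
  columns: [
    { field: "firstname", title: "FirstName" },
    { field: "surname", title: "Surname" },
    // the template reaches into the nested object directly
    { title: "Email", template: "#= data.contactdetails.email #" }
  ]
});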