I'm new to DynamoDB. I am trying to create a DynamoDB table in a SAM project. I know I can only use "S", "N", and "B" for AttributeType, but I want to use something like the below.
{
"TableName": "xyz",
"KeySchema": [
{
"AttributeName": "uid",
"KeyType": "HASH"
}
],
"AttributeDefinitions": [
{
"AttributeName": "uid",
"AttributeType": "S"
},
{
"AttributeName": "email",
"AttributeType": "S"
},
{
"AttributeName": "postal_code",
"AttributeType": "S"
},
{
"AttributeName": "bookmark",
"AttributeType": "L" ← (I want to use List)
},
{
"AttributeName": "children",
"AttributeType": "M" ← (I want to use Map)
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": 2,
"WriteCapacityUnits": 2
}
This is my table.json, and I want to create the table with this aws command:
aws dynamodb --profile local --endpoint-url http://localhost:8000 create-table --cli-input-json file://./testdata/table.json
How do you store list data and map data in DynamoDB?
It is best to do this when you add items to the table. DynamoDB has a flexible schema and therefore does not enforce a schema beyond the primary key: item A might have attribute1, while item B might omit it. In fact, AttributeDefinitions may only describe attributes that are used in a key schema, so defining non-key attributes such as email or postal_code there will make create-table fail with a ValidationException.
When you create your table, just define the primary key (either a partition key, or a partition key plus sort key) and that's it. Then add your items with whatever data-typed attributes each item needs.
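For example, here is a sketch of that flow against your local endpoint: define only the key attribute at create time, then store list and map data per item (the item values below are made up):
aws dynamodb create-table --profile local --endpoint-url http://localhost:8000 \
    --table-name xyz \
    --key-schema AttributeName=uid,KeyType=HASH \
    --attribute-definitions AttributeName=uid,AttributeType=S \
    --provisioned-throughput ReadCapacityUnits=2,WriteCapacityUnits=2

# "bookmark" is written as a list ("L") and "children" as a map ("M"),
# even though neither appears in AttributeDefinitions.
aws dynamodb put-item --profile local --endpoint-url http://localhost:8000 \
    --table-name xyz \
    --item '{
        "uid":         {"S": "user-1"},
        "email":       {"S": "a@example.com"},
        "postal_code": {"S": "12345"},
        "bookmark":    {"L": [{"S": "page-1"}, {"S": "page-2"}]},
        "children":    {"M": {"name": {"S": "alice"}, "age": {"N": "7"}}}
    }'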
I think SS can be used for a list, where SS stands for String Set, e.g.:
...
{
"AttributeName": "bookmark",
"AttributeType": "SS"
},
...
Ref: DynamoDB Supported Data Types
I write in Ruby on Jets and use Dynomite to work with DynamoDB, and I have a problem with a GSI.
I have a table with 3 fields: display, value, title_hilight. I need to search across all three fields. For this, I decided to use a global secondary index. For testing purposes, I started by adding a GSI for the "display" field.
I created a migration:
class SomeTableMigration < Dynomite::Migration
  def up
    create_table 'table-name' do |t|
      t.partition_key "id: string: hash" # required
      t.gsi do |i|
        i.partition_key "display: string"
      end
    end
  end
end
Then I created a model:
require "active_model"
class ModelName<ApplicationItem
self.set_table_name 'some-model-name'
column :id, :display,:val, :title_hilight
end
Now I'm trying to find a record by a value from the "display" field:
ModelName.where(display: 'asd') and I'm getting this error:
Aws::DynamoDB::Errors::ValidationException (Query condition missed key schema element)
Here is the output of aws dynamodb describe-table --table-name table-name --endpoint-url http://localhost:8000
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "id",
"AttributeType": "S"
},
{
"AttributeName": "display",
"AttributeType": "S"
}
],
"TableName": "some-table-name",
"KeySchema": [
{
"AttributeName": "id",
"KeyType": "HASH"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2020-10-26T14:52:59.589000+03:00",
"ProvisionedThroughput": {
"LastIncreaseDateTime": "1970-01-01T03:00:00+03:00",
"LastDecreaseDateTime": "1970-01-01T03:00:00+03:00",
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 112,
"ItemCount": 1,
"TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/some-table-name",
"GlobalSecondaryIndexes": [
{
"IndexName": "display-index",
"KeySchema": [
{
"AttributeName": "display",
"KeyType": "HASH"
}
],
"Projection": {
"ProjectionType": "ALL"
},
"IndexStatus": "ACTIVE",
"ProvisionedThroughput": {
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"IndexSizeBytes": 112,
"ItemCount": 1,
"IndexArn": "arn:aws:dynamodb:ddblocal:000000000000:table/some-table-name/index/display-index"
}
]
}
}
I changed the name of the real table to SomeTableName (sometimes just table-name); the rest of the code is unchanged. Thanks for the help.
As mentioned here:
In DynamoDB, you can optionally create one or more secondary indexes on a table and query those indexes in the same way that you query a table.
You need to specify the GSI name explicitly in your query.
@jny's answer is correct. He told me to use a different index. I don't know how to use a different model (see the comments on his answer), but the idea of querying the index is exactly right. This is how everything works for me now:
ModelName.query(
index_name: 'display-index',
expression_attribute_names: { "#display_name" => "display" },
expression_attribute_values: { ":display_value" => "das" },
key_condition_expression: "#display_name = :display_value",
)
Knowing the schema (fetched via getIntrospectionQuery), how could I get the type of a particular field?
For example, say I run this query:
query {
User {
name
lastUpdated
friends {
name
}
}
}
and get this result:
{
"data": {
"User": [
{
"name": "alice",
"lastUpdated": "2018-02-03T17:22:49+00:00",
"friends": []
},
{
"name": "bob",
"lastUpdated": "2017-09-01T17:08:49+00:00",
"friends": [
{
"name": "eve"
}
]
}
]
}
}
I'd like to know the types of the fields and construct something like this:
{
"name": "String",
"lastUpdated": "timestamptz",
"friends": "[Friend]"
}
How could I do that without extra requests to the server?
After retrieving the schema, you can build it into a JSON object (if your GraphQL framework does not already do this for you).
Using a JSON parser, you can then retrieve the type of each field.
I won't go into detail, as it depends on the technology you are using.
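For instance, with graphql-js (an assumption; your stack may differ), you can rebuild a client schema from the introspection result you already have and read the field types off it, with no further requests to the server. A minimal sketch:
const { buildClientSchema } = require('graphql');

// `introspection` is assumed to hold the `data` payload you already
// fetched with getIntrospectionQuery(); no extra round trip is needed.
const schema = buildClientSchema(introspection);

// Map each field of the User type to its printable type name,
// e.g. { name: "String", lastUpdated: "timestamptz", friends: "[Friend]" }.
const fieldTypes = {};
for (const [name, field] of Object.entries(schema.getType('User').getFields())) {
  fieldTypes[name] = String(field.type);
}
console.log(fieldTypes);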
{
"id": 1,
"subdocuments": [
{
"id": "A",
"name": 1
},
{
"id": "B",
"name": 2
},
{
"id": "C",
"name": 3
}
]
}
How do I update subdocument "A"'s "name" to a value of 2 in RethinkDB, in either JavaScript or Python?
If you can rely on the position of your "A" element, you can update it like this:
r.db("DB").table("TABLE").get(1)
.update({subdocuments:
r.row("subdocuments").changeAt(0, r.row("subdocuments").nth(0).merge({"name":2}))})
If you cannot rely on the position, you have to find it yourself:
r.db("DB").table("TABLE").get(1).do(function(doc){
return doc("subdocuments").offsetsOf(function(sub){return sub("id").match("A")}).nth(0)
.do(function(index){
return r.db("DB").table("TABLE").update({"subdocuments":
doc("subdocuments").changeAt(index, doc("subdocuments").nth(index).merge({"name":2})) })})
})
As an alternative, you can use the map function to iterate over the array elements and update the one that matches your condition:
r.db("DB").table("TABLE").get(1)
.update({
subdocuments: r.row("subdocuments").map(function(sub){
return r.branch(sub("id").eq("A"), sub.merge({name: 2}), sub)
})
})
tl;dr
Can Parse Cloud/MongoDB filter by Pointer<class>.field? By Pointer<class>.Pointer<class>? By the existence of data in that field?
Long question:
Round is an object which is played automatically when its time comes.
Payment is an object which indicates that a user made a payment. When a payment is spent, we set its round field.
Player links an online User with a Payment.
I need to query Player under a few conditions:
Player
- online
- has a valid payment (no round, and valid equal to 'valid')
Player
- user equal to a specific user
- has no payment
Player
- user equal to a specific user
- has a valid payment (no round, and valid equal to 'valid')
I made everything work except validating the Payment inside the Player query.
Here is condition 1 from the list:
var query = new Parse.Query(keys.Player);
query.skip(0);
query.limit(oneRoundMaxPlayers);
query.greaterThanOrEqualTo(keys.last_online_date, lastAllowedOnline);
// looks like no filter applied here
query.doesNotExist("payment.round");
query.exists(keys.payment);
// This line will make query return 0 elements
// query.equalTo("payment.valid", "valid");
query.include(keys.user);
query.include(keys.payment);
Here is condition 2 OR condition 3:
var queryPaymentExists = new Parse.Query(keys.Player);
queryPaymentExists.skip(0);
queryPaymentExists.limit(1);
queryPaymentExists.exists(keys.payment);
//This line not filtering
queryPaymentExists.doesNotExist(keys.payment + "." + keys.round);
queryPaymentExists.equalTo(keys.user, user);
// This line makes query always return 0 elements
// queryPaymentExists.equalTo(keys.payment + "." + keys.valid, keys.payment_valid);
var queryPaymentDoesNotExist = new Parse.Query(keys.Player);
queryPaymentDoesNotExist.skip(0);
queryPaymentDoesNotExist.limit(1);
queryPaymentDoesNotExist.doesNotExist(keys.payment);
queryPaymentDoesNotExist.equalTo(keys.user, user);
var compoundQuery = Parse.Query.or(queryPaymentExists, queryPaymentDoesNotExist);
compoundQuery.include(keys.user);
compoundQuery.include(keys.payment);
compoundQuery.include(keys.payment + "." + keys.round);
I've checked the logs from Mongo, and they look like the following:
verbose: REQUEST for [GET] /classes/Player: {
"include": "user,payment,payment.round",
"where": {
"$or": [
{
"payment": {
"$exists": true
},
"payment.round": {
"$exists": false
},
"user": {
"__type": "Pointer",
"className": "_User",
"objectId": "ASPKs6UVwb"
}
},
{
"payment": {
"$exists": false
},
"user": {
"__type": "Pointer",
"className": "_User",
"objectId": "ASPKs6UVwb"
}
}
]
}
}
Here is the response:
verbose: RESPONSE from [GET] /classes/Player: {
"response": {
"results": [
{
"objectId": "VHU9uwmLA7",
"last_online_date": {
"__type": "Date",
"iso": "2017-10-28T15:15:23.547Z"
},
"user": {
"objectId": "ASPKs6UVwb",
"username": "cn92Ekv5WPJcuHjkmTajmZMDW",
"createdAt": "2017-10-22T11:43:16.804Z",
"updatedAt": "2017-10-25T09:23:20.035Z",
"ACL": {
"*": {
"read": true
},
"ASPKs6UVwb": {
"read": true,
"write": true
}
},
"__type": "Object",
"className": "_User"
},
"createdAt": "2017-10-27T21:03:35.442Z",
"updatedAt": "2017-10-28T15:15:23.556Z",
"payment": {
"objectId": "nr7ln7U3eJ",
"payment_date": {
"__type": "Date",
"iso": "2017-10-27T23:42:50.614Z"
},
"user": {
"__type": "Pointer",
"className": "_User",
"objectId": "ASPKs6UVwb"
},
"createdAt": "2017-10-27T23:42:50.624Z",
"updatedAt": "2017-10-28T15:12:30.131Z",
"valid": "valid",
"round": {
"objectId": "jF9gqG4ndh",
"round_date": {
"__type": "Date",
"iso": "2017-10-28T15:12:00.027Z"
},
"createdAt": "2017-10-28T15:11:00.036Z",
"updatedAt": "2017-10-28T15:12:30.108Z",
"ACL": {
"*": {
"read": true
}
},
"__type": "Object",
"className": "Round"
},
"ACL": {
"ASPKs6UVwb": {
"read": true
}
},
"__type": "Object",
"className": "Payment"
},
"ACL": {
"ASPKs6UVwb": {
"read": true
}
}
}
]
}
}
You can see that the response contains payment.round.
My questions are the following:
Can Parse Cloud/MongoDB filter by Pointer<class>.field? By Pointer<class>.Pointer<class>? By the existence of data in that field?
How can I work around the situation where I need to check field presence, given that a User can have many Players and a User can have many Payments?
UPD
As far as I have found, Mongo should support filtering by "dot notation":
mongodb query by sub-field
So what am I doing wrong?
Short answer:
No
Simplify your data structure
Long answer:
Dot notation can be used to:
- include the documents behind pointers, as you already do in your code, e.g. include(keys.user)
- filter on properties of fields, e.g. {propertyA: 1, propertyB: 2}, where all the data is in the field itself, not in another document in another collection referenced by a Parse pointer
Dot notation cannot be used as a filter parameter for referenced pointers in a Parse query. MongoDB does not support such filtering either; the concept of a pointer belongs to Parse, not to MongoDB. In a NoSQL environment like MongoDB there are no relations between tables to be used in the query language, as it is not a "relational database" like an SQL database. However, Parse provides some of the comfort of SQL for simple queries with its concepts of pointers, compound queries, and matchesKeyInQuery.
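For example, the payment conditions from the question can be expressed as an inner query on the Payment class and matched against the pointer field with matchesQuery. A sketch, assuming keys.Payment holds the Payment class name (the other keys are taken from the question's code):
// Inner query: the conditions the referenced Payment must satisfy.
var paymentQuery = new Parse.Query(keys.Payment); // keys.Payment is assumed
paymentQuery.doesNotExist(keys.round);
paymentQuery.equalTo(keys.valid, keys.payment_valid);

// Outer query: players of this user whose payment pointer matches the inner query.
var playerQuery = new Parse.Query(keys.Player);
playerQuery.equalTo(keys.user, user);
playerQuery.matchesQuery(keys.payment, paymentQuery);
playerQuery.include(keys.payment);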
If that is not sufficient in your case, simply add the fields to the collection itself, at the expense of possibly having the same fields and data in multiple collections, but with the advantage of faster query execution. One possible shape of that denormalization is sketched below.
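A hypothetical sketch (the payment_valid flag on Player is an invented field, not part of the asker's schema): keep a flat flag on Player in sync from a Cloud Code hook, then filter on it directly.
// Hypothetical: mirror the payment's validity onto the Player row on every
// Payment save, so Player queries can filter without crossing the pointer.
Parse.Cloud.afterSave("Payment", function(request) {
  var payment = request.object;
  var query = new Parse.Query("Player");
  query.equalTo("payment", payment);
  query.first({ useMasterKey: true }).then(function(player) {
    if (player) {
      player.set("payment_valid",
        payment.get("valid") === "valid" && !payment.get("round"));
      return player.save(null, { useMasterKey: true });
    }
  });
});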
Finding the right data structure is one of the big topics for NoSQL, as there is no generally right structure. Collection and document structures are basically designed as a trade-off between:
execution performance
query necessity/frequency
security (access level)
and data storage size
They are also fluid and can change over time: as your app and its queries evolve, you would change the data structure whenever the long-term gain is greater than the one-time effort.
I am trying to import multiple collections from MongoDB into Elasticsearch and join them. If a join is not possible, I at least want to get specific fields from some Mongo collections into Elasticsearch using a single river meta.
I tried the meta below; it doesn't work.
PUT _river/mongodbicslicense/_meta
{
"type": "mongodb",
"mongodb": {
"servers": [
{
"host": "abc",
"port": "27017"
}
],
"options": {
"skip_initial_import": false
"include_collection": [
"abc",
"xyz"
],
"include_fields": [
"A",
"B",
"X",
"Z"
]
},
"db": "datadb",
"gridfs": false,
"credentials": [
{
"db": "datadb",
"user": "me",
"password": "mypass"
}
]
},
"index": {
"name": "frommongoindex",
"type": "abcd"
}
}
I'm exploring Mongo and need help.
It is not possible to import multiple Mongo collections using a single river.
elasticsearch-river-mongodb creates a new river for each MongoDB collection that should be indexed by Elasticsearch.
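So, given that constraint, the workaround is to define one river per collection. A sketch with made-up river and index names (credentials omitted for brevity; it assumes the plugin's per-river collection setting and options.include_fields behave as documented):
PUT _river/mongodb_abc/_meta
{
  "type": "mongodb",
  "mongodb": {
    "servers": [{ "host": "abc", "port": "27017" }],
    "db": "datadb",
    "collection": "abc",
    "options": { "include_fields": ["A", "B"] }
  },
  "index": { "name": "abc_index", "type": "abc" }
}

PUT _river/mongodb_xyz/_meta
{
  "type": "mongodb",
  "mongodb": {
    "servers": [{ "host": "abc", "port": "27017" }],
    "db": "datadb",
    "collection": "xyz",
    "options": { "include_fields": ["X", "Z"] }
  },
  "index": { "name": "xyz_index", "type": "xyz" }
}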