Currently I am using DynamoDB to store the connection IDs of AWS WebSocket connections, and I am using them as the primary index for my documents. Overall my schema looks like this:
{
ID: connectionId,
userId: userId,
domainName,
stage,
}
Everything is okay with this schema except for one problem: I have an SNS topic that dispatches a user ID to this API endpoint, and I need to delete every connection with that userId. I was looking into batchWrite, but it requires me to use userId as the primary index rather than connectionId. I chose this schema because it is flexible: I can easily find a document by connection ID when a user disconnects and delete it with one command, and adding one is just as easy. Is there an option for me to batchWrite without the primary key? The second option is to transform the schema into this:
{
ID: userId,
connections: [
{
connectionId: connectionId,
stage,
domainName
}
],
}
which I am not so keen on. Is this the only other option?
You need to change the DB schema as follows:
For the primary index
connectionId: partition key
Create global secondary index:
userId: partition key
First scenario:
When you need to delete all connections belonging to a userId, query the GSI by userId and then run a batchWrite command to delete all of the matching rows.
Query using the GSI:
// Assuming the AWS SDK v2 DocumentClient: the call returns a request object,
// so await .promise() and read the rows from .Items
const { Items: items } = await ddb.query({
  TableName: "connections",
  IndexName: "globalSecondaryIndexNameHere",
  KeyConditionExpression: "userId = :userId",
  ExpressionAttributeValues: {
    ":userId": "abc"
  }
}).promise()
Then loop through the items and make a batchWrite request to delete them:
await ddb.batchWrite({
  RequestItems: {
    "connections": [
      {
        DeleteRequest: {
          Key: {
            "connectionId": "connectionId1"
          }
        }
      },
      {
        DeleteRequest: {
          Key: {
            "connectionId": "connectionId2"
          }
        }
      },
      // ...
    ]
  }
}).promise()
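Note that batchWrite accepts at most 25 requests per call, so if a user can have more than 25 open connections you would need to chunk the queried items. A minimal sketch, using the same table and attribute names as above:
// BatchWriteItem is limited to 25 requests per call, so delete in chunks of 25.
const deleteRequests = items.map((item) => ({
  DeleteRequest: { Key: { connectionId: item.connectionId } },
}));

for (let i = 0; i < deleteRequests.length; i += 25) {
  await ddb.batchWrite({
    RequestItems: {
      connections: deleteRequests.slice(i, i + 25),
    },
  }).promise();
  // A production version would also retry anything returned in UnprocessedItems.
}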
Second scenario:
When you need to delete one row by connectionId
Delete:
// With the DocumentClient this is delete() rather than the low-level deleteItem()
await ddb.delete({
  TableName: "connections",
  Key: {
    "connectionId": "connectionId1"
  }
}).promise()
NOTE: I recommend using AWS AppSync instead of API Gateway, since AppSync manages your connection IDs for you instead of you saving them in DynamoDB, plus many other reasons stated HERE.
So I am working on an app with AWS Amplify. I am using a single-table design, and I am trying to run a mutation that only updates the profile field of the UserV0 in the single table. I am trying to update only the profile (an S3 key), but when I run my mutation it deletes the rest of the contents of UserV0.
GraphQL schema
type SingleTable @model {
pk: String! @primaryKey(sortKeyFields: ["sk"])
sk: String!
user: UserV0
post: PostV0
}
type UserV0 {
name: String
username: String
email: String
profile: String
}
type PostV0 {
...
}
query getUserInfo {
getSingleTable(pk: "TEST", sk: "TEST") {
user {
username
name
profile
email
}
}
}
mutation createTable {
createSingleTable(input: {pk: "TEST", sk: "TEST", user: {email: "email@email.com", name: "testname", profile: "testPath", username: "testusername"}}) {
updatedAt
}
}
mutation updateTable {
updateSingleTable(input: {pk: "TEST", sk: "TEST", user: {profile: "TESTING", username: "TESTING123"}}) {
createdAt
}
}
If I run the update mutation above, the entire user is reset, and when I check it in DynamoDB the name and email fields are lost. How can I make it so that when I run the mutation, only the profile field is updated and the other fields are left untouched? Thanks in advance.
Edit: I put in all of the queries and mutations that I am running in AppSync. I run createTable and then getUserInfo and it returns this as it should.
{
"data": {
"getSingleTable": {
"user": {
"username": "testusername",
"name": "testname",
"profile": "testPath",
"email": "email#email.com"
}
}
}
}
But after I run the updateTable and then getUserInfo it returns this.
{
"data": {
"getSingleTable": {
"user": {
"username": "TESTING123",
"name": null,
"profile": "TESTING",
"email": null
}
}
}
}
As you can see, the name and email fields are reset: set to null and removed from the DynamoDB table. I am pretty sure it is because it sees the user object as a new input. But how do I get it to recognize that I only want to update certain fields in UserV0 and not the entire thing?
Make sure your function updateSingleTable uses the UpdateItem operation:
"operation" : "UpdateItem"
If you're using PutItem, which I assume you are, it performs an overwrite and thus removes existing data.
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html
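To illustrate the difference outside of a resolver template, here is a minimal sketch using the plain DynamoDB DocumentClient (the table name "SingleTable" is just a placeholder; Amplify generates its own table name):
// ddb is an AWS.DynamoDB.DocumentClient instance (SDK v2 style)

// put() replaces the whole item, so any attribute not included here is lost:
await ddb.put({
  TableName: "SingleTable", // placeholder name
  Item: { pk: "TEST", sk: "TEST", user: { profile: "TESTING", username: "TESTING123" } }
}).promise();

// update() only touches the attributes named in the expression,
// leaving user.name and user.email as they were:
await ddb.update({
  TableName: "SingleTable", // placeholder name
  Key: { pk: "TEST", sk: "TEST" },
  UpdateExpression: "SET #user.#profile = :profile, #user.#username = :username",
  ExpressionAttributeNames: { "#user": "user", "#profile": "profile", "#username": "username" },
  ExpressionAttributeValues: { ":profile": "TESTING", ":username": "TESTING123" }
}).promise();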
Under the hood I believe the DynamoDB client is DynamoDB Mapper. As a result, it will delete values that are set to null. To avoid this, make sure you do not set values to null; instead, omit any values not being used in the request.
I have an application that uses a GraphQL API, provided by the AWS AppSync service. It uses GraphQL subscriptions to send messages between different clients in near real time. There is a mutation pushItems that's configured with a resolver, which has a 'none' data source and forwards the request data unmodified to the subscription onItemChange.
The app hasn't been used for a couple of years, and now when I attempt to trigger a subscription event, I get an error on the subscribed client. Previously this worked without issues.
{
"data": {
"onItemChange": null
},
"errors": [
{
"message": "Cannot return null for non-nullable type: 'ID' within parent 'Item' (/onItemChange/id)",
"path": [
"onItemChange",
"id"
]
}
]
}
The error message suggests that the property id within the Item object is null, however when I send a mutation from the AWS AppSync web console with a hard-coded string for the item ID, I still get the same issue, even though the mutation response contains the correct data that should be forwarded to the subscribed client.
I've created a minimal configuration in AWS AppSync to reproduce the issue, which is detailed below. Is it possible that the AppSync service has changed the way it handles subscription data in the last few years?
GraphQL schema
schema {
query: Query
mutation: Mutation
subscription: Subscription
}
type Query {
getItem(id: ID!): Item
}
type Mutation {
pushItems(
items: [ItemInput]
): [Item]
}
type Subscription {
onItemChange: Item
@aws_subscribe(
mutations: [
"pushItems"
]
)
}
type Item {
id: ID!
}
input ItemInput {
id: ID!
}
Resolver mappings for mutation pushItems
Request:
{
"version" : "2017-02-28",
"payload": $util.toJson($context.arguments.items)
}
Response:
$util.toJson($context.result)
Example queries
The following queries can be used to reproduce the issue in the AWS AppSync web console. First, subscribe to onItemChange:
subscription MySubscription {
onItemChange {
id
}
}
Then (in a different browser tab) send some data to pushItems:
mutation MyMutation {
pushItems(items: [{id: "foo"}]) {
id
}
}
You are returning a list of items from the mutation, so you should update the subscription return type to [Item]:
type Subscription {
onItemChange: [Item] # <--- HERE
@aws_subscribe(mutations: ["pushItems"])
}
I need to distinguish between two queries with the same __typename (Union) to get Apollo Client typePolicies cache to work properly.
So RelatedNodes is a Union and I don't get a unique identifier field from the server.
The nodes are differentiated by a field called type. See the query:
query GetNodesTypeOne($limit: Int, $offset: Int,) {
getNodesTypeOne(limit: $limit, offset: $offset) {
__typename
nodes {
uuid
type
title
}
}
}
I want to use that field nodes.type to create a unique identifier, which I can use in the keyFields property (like keyFields: ['type']).
The Apollo Client typePolicies are configured like this:
typePolicies: {
RelatedNodes: {
keyFields: [],
fields: {
nodes: offsetLimitPagination(),
},
},
},
What I am trying:
Adding a local only field to my query:
query GetNodesTypeOne($limit: Int, $offset: Int,) {
getNodesTypeOne(limit: $limit, offset: $offset) {
type @client # the field I want to use in the typePolicies
nodes {
uuid
type
title
}
}
}
Then, with an Apollo Client read function, I want to create a type field which gets its value from nodes.type.
Is that possible?
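One possible approach, for what it's worth: Apollo Client 3 also accepts a function for keyFields, so a rough sketch (assuming every RelatedNodes result only ever contains nodes of a single type) could derive the cache id from the first nested node instead of from a local-only field:
typePolicies: {
  RelatedNodes: {
    // object is the raw RelatedNodes result; return a cache id string,
    // or false to skip normalization while the nodes list is still empty.
    keyFields: (object) =>
      object.nodes && object.nodes.length
        ? `RelatedNodes:${object.nodes[0].type}`
        : false,
    fields: {
      nodes: offsetLimitPagination(),
    },
  },
},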
Maybe the title is not accurate but I really don't know how to describe it anymore. I went through multiple documentations and descriptions but still couldn't figure it out.
I want to implement a basic social-media-style followers/following query on my User type. I am using MySQL, and for that I made a separate table called Follow, as it's a many-to-many connection.
Here is a pseudo-ish representation of my tables in the database without the unnecessary columns:
Table - User
user_id primary key Int
Table - Follow
follow_er foreign_key -> User(user_id) Int
follow_ed foreign_key -> User(user_id) Int
A user could "act" as a follow_er so I can get the followed people
And a user could be follow_ed, so I can get the followers.
My Prisma schema looks like this:
model User {
user_id Int @id @default(autoincrement())
following Follow[] @relation("follower")
followed Follow[] @relation("followed")
}
model Follow {
follow_er Int
follower User @relation("follower", fields: [follow_er], references: [user_id])
follow_ed Int
followed User @relation("followed", fields: [follow_ed], references: [user_id])
@@id([follow_er, follow_ed])
@@map("follow")
}
By implementing this I can get the followers and following objects attached to the root query of the user:
const resolvers = {
Query: {
user: async (parent, arg, ctx) => {
const data = await ctx.user.findUnique({
where: {
user_id: arg.id
},
include: {
following: true,
followed:true
}
})
return data
}....
Here is my GraphQL schema I tried to make:
type Query{
user(id: Int!): User
}
type User{
id: ID
following: [User]
followed: [User]
}
So I can get something like:
query {
user(id: $id) {
id
following {
id
}
followed {
id
}
}
}
But I couldn't make it work. Even if I get the array of follow-connection objects:
[
{
follow_er:1,
follow_ed:2
},
{
follow_er:1,
follow_ed:3
},
{
follow_er:3,
follow_ed:1
},
]
I can't iterate through the array. As far as I know, I have to pass either the follow_er or the follow_ed, which is a user_id, to get a User object.
What am I missing? Maybe I am trying to solve this from the wrong direction. If anybody could help me with this, or just tell me some keywords or concepts to look for, it would be cool. Thanks!
I would suggest creating self-relations for this structure in the following format:
model User {
id Int @id @default(autoincrement())
name String?
followedBy User[] @relation("UserFollows", references: [id])
following User[] @relation("UserFollows", references: [id])
}
And then querying as follows:
await prisma.user.findUnique({
where: { id: 1 },
include: { followedBy: true, following: true },
})
So you will get a response like this:
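(Illustrative shape only, assuming user 1 follows user 2 and is followed by user 3.)
{
  id: 1,
  name: "Alice",
  followedBy: [ { id: 3, name: "Carol" } ],
  following: [ { id: 2, name: "Bob" } ]
}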
I have a CHAT_MESSAGE_FRAGMENT that returns all the message data from my Hasura GraphQL API.
However, the Gifted Chat React Native component requires the data in a specific structure, so I'm attempting to convert it with the query below.
I'm able to alias all the top-level data but can't figure out how to add a nested level of data.
I'm guessing it isn't possible but I thought I'd ask in case I'm missing something.
const GIFTED_CHAT_GROUP_MESSAGES_QUERY = gql`
query chatGroupMessages($chatGroupId: Int!) {
chat_message(
where: { to: { id: { _eq: $chatGroupId } } }
) {
_id: id,
# user: {
# _id: from.id, <== How do I add
# name: from.name, <== this secondary level?
# },
text: message,
image: image_url,
createdAt: created_at,
system: message_type,
}
}
${CHAT_MESSAGE_FRAGMENT}
`;
Assuming you already have the chat_message.user_id -> users.id foreign key constraint set up, you'll also need to alias the from object in addition to aliasing any of its nested fields:
const GIFTED_CHAT_GROUP_MESSAGES_QUERY = gql`
query chatGroupMessages($chatGroupId: Int!) {
chat_message(
where: { to: { id: { _eq: $chatGroupId } } }
) {
_id: id,
user: from {
_id: id,
name
},
text: message,
image: image_url,
createdAt: created_at,
system: message_type,
}
}
${CHAT_MESSAGE_FRAGMENT}
`;
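With those aliases in place, each chat_message row should come back in roughly the shape Gifted Chat expects (illustrative values only):
{
  "_id": 17,
  "user": { "_id": 4, "name": "Jane" },
  "text": "Hello!",
  "image": null,
  "createdAt": "2021-05-01T12:00:00Z",
  "system": "text"
}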
The secondary level of data is basically nested object queries in Hasura. You can nest any number of queries as long as a relationship has been created.
In this case, assuming the chat_message table has a user_id field, you can establish a foreign key constraint for chat_message.user_id -> users.id, where users is a table with id as primary key.
Once the foreign key constraint is created, the Hasura Console automatically suggests relationships. Here, user would be an object relationship on the chat_message table.
Here's the official docs link for Creating a relationship