Change primary key from "id" to "any_other_key" - RethinkDB

How can I change the primary key of RethinkDB after my RethinkDB Feathers service is created?
I tried the code below but it has no effect.
app.use('/messages', service({
  id: 'user_id',
  Model: db,
  name: 'messages'
}));
Is there anything I missed?

When using a different primary key, the RethinkDB database needs to be initialized with the same primaryKey option. You can either do that manually or use service.init as documented here:
// Initialize the `messages` table with `user_id` as the primary key
app.service('messages').init({
  primaryKey: 'user_id'
}).then(() => {
  // Use service here
});
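If you'd rather initialize the table manually instead, RethinkDB's tableCreate takes the same option. A minimal sketch using the official rethinkdb driver, assuming a database named test on localhost:

const r = require('rethinkdb');

// Create the `messages` table with `user_id` as its primary key
r.connect({ host: 'localhost', port: 28015 }).then(conn =>
  r.db('test')
    .tableCreate('messages', { primaryKey: 'user_id' })
    .run(conn)
    .then(() => conn.close())
);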


KeystoneJS: form a multi-column unique constraint

How to form a unique constraint with multiple fields in keystonejs?
const Redemption = list({
  access: allowAll,
  fields: {
    program: relationship({ ref: 'Program', many: false }),
    type: text({ label: 'Type', validation: { isRequired: true }, isIndexed: 'unique' }),
    name: text({ label: 'name', validation: { isRequired: true }, isIndexed: 'unique' }),
  },
  // TODO: validation to check that program, type, name form a unique constraint
})
The best way I can think to do this currently is to add another field to the list and concatenate your other values into it using a hook. This lets you enforce uniqueness across the combination of these three values at the DB level.
The list config (and hook) might look like this:
const Redemption = list({
  access: allowAll,
  fields: {
    program: relationship({ ref: 'Program', many: false }),
    type: text({ validation: { isRequired: true } }),
    name: text({ validation: { isRequired: true } }),
    compoundKey: text({
      isIndexed: 'unique',
      ui: {
        createView: { fieldMode: 'hidden' },
        itemView: { fieldMode: 'read' },
        listView: { fieldMode: 'hidden' },
      },
      graphql: { omit: ['create', 'update'] },
    }),
  },
  hooks: {
    resolveInput: async ({ item, resolvedData }) => {
      // Fall back to existing item values for any fields not being updated
      const program = resolvedData.program?.connect?.id || (item ? item.programId : 'none');
      const type = resolvedData.type || item?.type;
      const name = resolvedData.name || item?.name;
      resolvedData.compoundKey = `${program}-${type}-${name}`;
      return resolvedData;
    },
  },
});
A few things to note here:
I've removed the isIndexed: 'unique' config for the main three fields. If I understand the problem you're trying to solve correctly, you actually don't want these values (on their own) to be distinct.
I've also removed the label config from your example. The label defaults to the field key so, in your example, that config is redundant.
As you can see, I've added the compoundKey field to store our composite values:
- The ui settings make the field appear as uneditable in the UI
- The graphql settings block updates via the API too (you could do the same thing with access control but I think just omitting the field is a bit cleaner)
- And of course the unique index, which will be enforced by the DB
I've used a resolveInput hook as it lets you modify data before it's saved. To account for both create and update operations we need to consult both the resolvedData and item arguments: resolvedData gives us new/updated values (but undefined for any fields not being updated) and item gives us the existing values in the DB. By combining values from both we can build the correct compound key each time and add it to the returned object.
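As a quick illustration with hypothetical values, on an update that only changes name the hook would compute:

// Hypothetical existing item and incoming update (only `name` changes)
const item = { programId: 'p1', type: 'voucher', name: 'Silver' };
const resolvedData = { name: 'Gold' }; // program and type are undefined here

const program = resolvedData.program?.connect?.id || (item ? item.programId : 'none');
const type = resolvedData.type || item?.type;
const name = resolvedData.name || item?.name;

console.log(`${program}-${type}-${name}`); // => 'p1-voucher-Gold'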
And it works. When creating a redemption we're prompted for the 3 main fields (the compound key is hidden), the compound key is correctly set from the values entered, and editing any of the values updates the compound key too. Note that the compound key field is read-only for clarity.
And if we check the resultant DB structure, we can see our unique constraint being enforced:
CREATE TABLE "Redemption" (
id text PRIMARY KEY,
program text REFERENCES "Program"(id) ON DELETE SET NULL ON UPDATE CASCADE,
type text NOT NULL DEFAULT ''::text,
name text NOT NULL DEFAULT ''::text,
"compoundKey" text NOT NULL DEFAULT ''::text
);
CREATE UNIQUE INDEX "Redemption_pkey" ON "Redemption"(id text_ops);
CREATE INDEX "Redemption_program_idx" ON "Redemption"(program text_ops);
CREATE UNIQUE INDEX "Redemption_compoundKey_key" ON "Redemption"("compoundKey" text_ops);
Attempting to violate the constraint will produce a unique constraint error.
If you wanted to customise this behaviour you could implement a validateInput hook and return a custom ValidationFailureError message.
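A minimal sketch of that, slotted into the list config above; the pre-check query and error message are illustrative, and the DB unique index remains the real guarantee:

hooks: {
  // ...resolveInput from above...
  validateInput: async ({ resolvedData, context, addValidationError }) => {
    // Hypothetical duplicate check against the compoundKey scheme above
    const existing = await context.query.Redemption.findMany({
      where: { compoundKey: { equals: resolvedData.compoundKey } },
    });
    if (existing.length > 0) {
      addValidationError('A redemption with this program, type and name already exists');
    }
  },
}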

Removing data without primary index in DynamoDB

Currently I am using DynamoDB to store the connection IDs of AWS WebSocket clients, and I am using them as the primary index for my documents. Overall my schema looks like this:
{
  ID: connectionId,
  userId: userId,
  domainName,
  stage,
}
Everything is okay with this schema except for one problem: I have an SNS topic that dispatches a user ID to this API endpoint, and I need to delete every connection with that userId. I was looking into batchWrite, but it requires me to use userId as the primary index rather than connectionId. I chose this schema because it is flexible: I can easily find a document by connection ID when a user disconnects and delete it with one command, and add one just as easily. Is there an option for me to batchWrite without the primary key? The second option is to transform the schema to this:
{
  ID: userId,
  connections: [
    {
      connectionId: connectionId,
      stage,
      domainName
    }
  ],
}
which I am not so keen on. Is this the only other option?
You need to change the DB schema as follows:
For the primary index:
- connectionId: partition key
Create a global secondary index:
- userId: partition key
First scenario:
When you need to delete all connections belonging to a userId, query by userId on the GSI and then run a batchWrite command to delete the returned rows.
Query using GSI:
// Assuming the AWS SDK v2 DocumentClient
const { Items } = await ddb.query({
  TableName: "connections",
  IndexName: "globalSecondaryIndexNameHere",
  KeyConditionExpression: "userId = :userId",
  ExpressionAttributeValues: {
    ":userId": "abc"
  }
}).promise();
Then loop through the items and make a batchWrite request to delete them:
await ddb.batchWrite({
  RequestItems: {
    "connections": [
      {
        DeleteRequest: {
          Key: {
            "connectionId": "connectionId1"
          }
        }
      },
      {
        DeleteRequest: {
          Key: {
            "connectionId": "connectionId2"
          }
        }
      },
      // ...
    ]
  }
}).promise();
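Note that batchWrite accepts at most 25 requests per call, so for larger result sets you need to chunk the deletions. A combined sketch, assuming the AWS SDK v2 DocumentClient and the GSI name used above:

const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

// Delete every connection belonging to a user, 25 at a time
async function deleteUserConnections(userId) {
  const { Items } = await ddb.query({
    TableName: 'connections',
    IndexName: 'globalSecondaryIndexNameHere', // your GSI name here
    KeyConditionExpression: 'userId = :userId',
    ExpressionAttributeValues: { ':userId': userId },
  }).promise();

  for (let i = 0; i < Items.length; i += 25) {
    const batch = Items.slice(i, i + 25);
    await ddb.batchWrite({
      RequestItems: {
        connections: batch.map(({ connectionId }) => ({
          DeleteRequest: { Key: { connectionId } },
        })),
      },
    }).promise();
  }
}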
Second scenario:
When you need to delete a single row by connectionId:
// With the DocumentClient the method is `delete` (deleteItem is the
// low-level client, which would need typed attribute values)
await ddb.delete({
  TableName: "connections",
  Key: {
    "connectionId": "connectionId1"
  }
}).promise();
NOTE: I recommend using AWS AppSync instead of API Gateway, since AppSync manages your connection IDs for you instead of you saving them in DynamoDB, plus many other reasons stated here.

Updating Apollo Cache for external query after entity mutation

I'd like to display a list of users, based on a filtered Apollo query:
// pseudo query
if (user.name === 'John') return true
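For reference, the actual list query might look something like this (hypothetical; the filter syntax depends on your backend):

import { gql } from '@apollo/client';

// Hypothetical `myUsers` query behind the list; the `name` filter
// is whatever your backend exposes
const MY_USERS = gql`
  query myUsers($name: String!) {
    users(name: $name) {
      id
      name
    }
  }
`;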
User names can be edited. Unfortunately, if I change a user's name to James, the user is still displayed in my list (the query is set to fetch from cache first).
I tried to update this by using cache.modify:
cache.modify({
  id: cache.identify({
    __typename: 'User',
    id: userId,
  }),
  fields: {
    name: () => {
      return newName; // newName is the new input value
    },
  },
});
But I'm not quite sure this is the correct way to do so.
Of course, if I use refetchQueries: ['myUsers'], I get the correct result, but it's obviously a bit of overkill to refetch the whole list every time a name is updated.
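For context, the refetch approach looks roughly like this (assuming @apollo/client's useMutation and a hypothetical UPDATE_USER mutation):

import { useMutation } from '@apollo/client';

// Works, but refetches the whole `myUsers` list after every rename
const [updateUser] = useMutation(UPDATE_USER, {
  refetchQueries: ['myUsers'],
});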
Did I miss something?

How to reshape a GraphQL (via Hasura) query response?

I have a CHAT_MESSAGE_FRAGMENT that returns all the message data from my Hasura GraphQL API.
However, the Gifted Chat React Native component requires the data in a specific structure, so I'm attempting to convert it with the query below.
I'm able to alias all the top level data but can't figure out how to add a nested level of data.
I'm guessing it isn't possible but I thought I'd ask in case I'm missing something.
const GIFTED_CHAT_GROUP_MESSAGES_QUERY = gql`
  query chatGroupMessages($chatGroupId: Int!) {
    chat_message(
      where: { to: { id: { _eq: $chatGroupId } } }
    ) {
      _id: id
      # user: {
      #   _id: from.id,    <== How do I add
      #   name: from.name, <== this secondary level?
      # },
      text: message
      image: image_url
      createdAt: created_at
      system: message_type
    }
  }
  ${CHAT_MESSAGE_FRAGMENT}
`;
Assuming you already have a chat_message.user_id -> users.id foreign key constraint set up, you'll need to alias the user object as from in addition to aliasing its nested fields:
const GIFTED_CHAT_GROUP_MESSAGES_QUERY = gql`
  query chatGroupMessages($chatGroupId: Int!) {
    chat_message(
      where: { to: { id: { _eq: $chatGroupId } } }
    ) {
      _id: id
      from: user {
        _id: id
        name
      }
      text: message
      image: image_url
      createdAt: created_at
      system: message_type
    }
  }
  ${CHAT_MESSAGE_FRAGMENT}
`;
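With the alias in place, each message in the response carries the nested user object under from, along these lines (hypothetical values):

{
  "data": {
    "chat_message": [
      {
        "_id": 1,
        "from": { "_id": 7, "name": "Alice" },
        "text": "Hello!",
        "image": null,
        "createdAt": "2021-01-01T00:00:00Z",
        "system": "text"
      }
    ]
  }
}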
The secondary level of data is basically nested object queries in Hasura. You can nest any number of queries as long as a relationship has been created.
In this case, assuming the chat_message table has a user_id field, you can establish a foreign key constraint for chat_message.user_id -> users.id, where users is a table with id as its primary key.
Once the foreign key constraint is created, the Hasura Console automatically suggests relationships. Here, user would be an object relationship on the chat_message table.
See the official Hasura docs on creating a relationship for details.

Prisma - GraphQL queries on preloaded MySQL database returning empty

Looking for how to debug this, or a reason why it might be returning empty results.
I'm using Prisma with GraphQL and a MySQL database. I was able to preload the database with data and then set up the schema to match the database.
For example I have the schema:
# Also tried renaming this to PRIMITIVE_TYPE but no luck
type PrimitiveType {
  PRIMITIVE_TYPE_ID: Int! # unique
  PRIMITIVE_TYPE: String!
}
and in the database it was created with:
CREATE TABLE PRIMITIVE_TYPE
(
  PRIMITIVE_TYPE_ID SMALLINT NOT NULL,
  PRIMITIVE_TYPE VARCHAR(20) NOT NULL
);
ALTER TABLE PRIMITIVE_TYPE ADD CONSTRAINT CONSTRAINT_24 PRIMARY KEY
  (PRIMITIVE_TYPE_ID);
Everything starts up fine and the Playground recognizes the schema. But when I try:
{
  primitiveTypes {
    PRIMITIVE_TYPE_ID
    PRIMITIVE_TYPE
  }
}
It just returns
{
  "data": {
    "primitiveTypes": []
  }
}
I connected to the database manually and the table had data in it. I'm not really sure what else to try or how to debug it.
This was basically due to the fact that Prisma has yet to implement MySQL introspection. They are currently working on it.
