Hello fellas, I'm starting out with DynamoDB and I have some misunderstandings about how to use ExclusiveStartKey. Currently I'm working with a GSI, and here is how I have the params for the query:
{
  TableName: 'Search',
  IndexName: 'GSI1',
  ExclusiveStartKey: {
    GSI1PK: { S: '8a2bb021182ffff' },
    GSI1SK: { S: '5#182854f0-c4ea-39c7-a3f5-4b0b0d947cea' }
  },
  KeyConditionExpression: 'GSI1PK = :gsiHk AND begins_with(GSI1SK, :entityType)',
  ExpressionAttributeValues: { ':gsiHk': { S: '8a2bb021182ffff' }, ':entityType': { S: '5' } },
  Limit: 500
}
and this returns the following error:
ValidationException: The provided starting key is invalid
Is this the correct way to use it, or how can I fix it?
It's uncommon to explicitly set an ExclusiveStartKey.
From the docs:
ExclusiveStartKey: The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation.
so the normal use of Query, given its 1 MB read limit, is in a loop as shown in the following pseudo-code:
do
  results = Query(params);
  // process results
  params.ExclusiveStartKey = results.LastEvaluatedKey;
until results.LastEvaluatedKey is null;
I suppose there's no reason you couldn't explicitly set it, but I don't see that the format is documented anywhere. You'll have to examine what's returned in LastEvaluatedKey. In particular, for a Query against a GSI, LastEvaluatedKey contains the base table's primary key attributes as well as the index key attributes, so a hand-built key containing only GSI1PK and GSI1SK will be rejected.
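For reference, here is a minimal sketch of that loop using the AWS SDK for JavaScript v3 (the table, index, and key values are copied from the question; the SDK version and function name are assumptions):
const { DynamoDBClient, QueryCommand } = require('@aws-sdk/client-dynamodb');
const client = new DynamoDBClient({});
async function queryAllPages() {
  const items = [];
  let lastEvaluatedKey; // undefined on the first iteration
  do {
    const result = await client.send(new QueryCommand({
      TableName: 'Search',
      IndexName: 'GSI1',
      KeyConditionExpression: 'GSI1PK = :gsiHk AND begins_with(GSI1SK, :entityType)',
      ExpressionAttributeValues: { ':gsiHk': { S: '8a2bb021182ffff' }, ':entityType': { S: '5' } },
      Limit: 500,
      ExclusiveStartKey: lastEvaluatedKey
    }));
    items.push(...(result.Items ?? []));
    // Feed the returned key straight back in; don't build it by hand
    lastEvaluatedKey = result.LastEvaluatedKey;
  } while (lastEvaluatedKey);
  return items;
}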
I've inherited a project that's setting up an InMemoryCache with the following keyFields syntax. None of the examples showcase this particular signature (that I can find, at least). All the examples I see use multiple fields placed in the keyFields attribute. Is this looking for any nested "myField" attributes? How is this expected to appear in the GraphQL data? (Apollo Client 3.2)
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      /// query info
    },
    UserData: {
      fields: {
        fieldA: {
          merge(existing = [], incoming = []) {
            return incoming;
          },
        },
        fieldB: {
          merge(existing = [], incoming = []) {
            return incoming;
          },
        },
      },
      keyFields: [["myField"]], // <-- What is this looking for?
    },
  },
});
This leads to an invariant violation error:
Uncaught Invariant Violation: Missing field 'myField' while extracting keyFields from {"id":"462a349...... (does not contain myField)
Your code seems fine when it comes to the fields map. keyFields, on the other hand, is a slightly different question. You could totally skip setting it.
The purpose of keyFields is to uniquely identify your record so the cache knows how to update it, just like in relational databases, where you have a primary key consisting of one or more columns that make your record unique.
I believe this is well documented in Apollo's documentation; see this:
https://www.apollographql.com/docs/react/caching/cache-configuration/#customizing-cache-ids
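For illustration, here's a typical keyFields setup. Whatever form the entries take, every named key field has to be present in each incoming object of that type, which is why the invariant above fires when myField is missing from the response (the email field below is made up, not taken from your schema):
const cache = new InMemoryCache({
  typePolicies: {
    UserData: {
      // Cache IDs for UserData then look like 'UserData:{"email":"a@b.c"}'
      keyFields: ["email"],
    },
  },
});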
I am fairly new to GraphQL. I have the following schema:
type Query {
  allJobs(
    limit: Int
    cursorId: String
  ): JobSearchResults!
}
type JobSearchResults {
  jobs: [Job!]
  hasMoreJobs: Boolean!
}
So there is the allJobs query, and the result is an object with a jobs array and a simple hasMoreJobs boolean to signal the end of the jobs.
On the client side I am able to query this and get results, but I am totally confused about how to cache these results. On ApolloClient I have the following:
cache: new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // cache the previous results and concat the new results to original data
        allJobs: concatPagination(),
      },
    },
  },
}),
I know that this would work if I was just returning an array of jobs, like this:
type Query {
  allJobs(
    limit: Int
    cursorId: String
  ): [Job]!
}
My question is whether there is a way to use concatPagination to cache only the jobs: [Job!] part of the original query.
Or is there a better way to deal with this problem? Maybe I need to rethink and restructure the original schema?
I think you'll find the solution to your problem here:
https://www.apollographql.com/blog/pagination-and-infinite-scrolling-in-apollo-client-59ff064aac61/
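In short, concatPagination only works when the field itself resolves to an array, so for a wrapper type like JobSearchResults you can write your own field policy with a merge function instead. A rough sketch (the keyArgs choice and how pages are combined are assumptions about the behaviour you want):
cache: new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        allJobs: {
          // Treat every allJobs call as one list, regardless of cursor/limit args
          keyArgs: false,
          merge(existing, incoming) {
            return {
              // keep hasMoreJobs from the newest page, concat the jobs arrays
              ...incoming,
              jobs: [...(existing?.jobs ?? []), ...(incoming?.jobs ?? [])],
            };
          },
        },
      },
    },
  },
}),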
I have a query that works when manually typed:
queryName(where: { ids: ["1234567890123456789", "1234567890123456790"] }, offset: 0, max: 10) {
but when the same values are passed in a variable:
const idArr = ["1234567890123456789", "1234567890123456790"];
...
queryName(where: { ids: ${idArr} }, offset: 0, max: 10) {
I get the error:
Uncaught GraphQLError: Syntax Error: Expected Name, found Int "1234567890123456789"
Can anyone explain this?
Using string interpolation like that will insert the array's default string conversion into your query, i.e.:
1234567890123456789,1234567890123456790
with no brackets and no quotes around the individual IDs. That is not valid GraphQL syntax, so it results in a syntax error. Instead of using string interpolation, you should use GraphQL variables to provide dynamic values along with your query:
query ($idArr: [ID!]!) {
  queryName(where: { ids: $idArr }, offset: 0, max: 10) {
    ...
  }
}
Note that the type of the variable will depend on the argument where it's being used, which depends on whatever schema you're actually querying.
How you include the variables along with your request depends on the client you're using to make that request, which is not clear from your post. If you're using fetch or some other simple HTTP client, you just include the variables alongside the query as another property in the payload you send to the server:
{
  "query": "...",
  "variables": {
    ...
  }
}
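For example, with fetch it could look roughly like this (the endpoint URL and the selection set inside queryName are placeholders, not taken from your post):
fetch('https://example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: `
      query ($idArr: [ID!]!) {
        queryName(where: { ids: $idArr }, offset: 0, max: 10) {
          id
        }
      }
    `,
    variables: { idArr: ["1234567890123456789", "1234567890123456790"] },
  }),
})
  .then((res) => res.json())
  .then(({ data, errors }) => console.log(data, errors));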
I want to create a new GraphQL API and I have an issue that I am struggling to fix.
The code is open source and can be found at: https://github.com/glitr-io/glitr-api
I want to create a mutation that creates a record with relations. The record seems to be created correctly with all the expected relations (when checking directly in the database), but the value returned by the create<YourTableName> method is missing all the relations.
... so I get an error on the API because "Cannot return null for non-nullable field Meme.author.". I am unable to figure out what could be wrong in my code.
The resolver looks like the following:
...
const newMeme = await ctx.prisma.createMeme({
  author: {
    connect: { id: userId },
  },
  memeItems: {
    create: memeItems.map(({
      type,
      meta,
      value,
      style,
      tags = []
    }) => ({
      type,
      meta,
      value,
      style,
      tags: {
        create: tags.map(({ name = '' }) => (
          {
            name
          }
        ))
      }
    }))
  },
  tags: {
    create: tags.map(({ name = '' }) => (
      {
        name
      }
    ))
  }
});
console.log('newMeme', newMeme);
...
The value of newMeme in the console.log here (which is what is returned from this resolver) is:
newMeme {
  id: 'ck351j0f9pqa90919f52fx67w',
  createdAt: '2019-11-18T23:08:46.437Z',
  updatedAt: '2019-11-18T23:08:46.437Z',
}
Only the auto-generated fields are returned, so I get an error for the following mutation because I tried to query the author:
mutation {
  meme(
    memeItems: [{
      type: TEXT
      meta: "test1-meta"
      value: "test1-value"
      style: "test1-style"
    }, {
      type: TEXT
      meta: "test2-meta"
      value: "test2-value"
      style: "test2-style"
    }]
  ) {
    id,
    author {
      displayName
    }
  }
}
Can anyone see what issue could be causing this?
(As previously mentioned, the record is created successfully with all relationships as expected when checking directly in the database.)
As described in the Prisma docs, the promise returned by the Prisma client's write functions, e.g. createMeme, only resolves with the scalar fields of the object:
When creating new records in the database, the create-method takes one input object which wraps all the scalar fields of the record to be created. It also provides a way to create relational data for the model, this can be supplied using nested object writes.
Each method call returns a Promise for an object that contains all the scalar fields of the model that was just created.
See: https://www.prisma.io/docs/prisma-client/basic-data-access/writing-data-JAVASCRIPT-rsc6/#creating-records
To also return the relations of the object, you need to read the object again using an info fragment or the fluent API, see: https://www.prisma.io/docs/prisma-client/basic-data-access/reading-data-JAVASCRIPT-rsc2/#relations
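A minimal sketch of the fluent-API route, assuming the Prisma 1 client that createMeme comes from: either re-read the relation right after the create, or let a field-level resolver on Meme fetch it lazily.
// Re-read the author right after the create ...
const author = await ctx.prisma.meme({ id: newMeme.id }).author();
// ... or resolve relations lazily on the Meme type so any query can ask for them
const resolvers = {
  Meme: {
    author: (parent, args, ctx) => ctx.prisma.meme({ id: parent.id }).author(),
    memeItems: (parent, args, ctx) => ctx.prisma.meme({ id: parent.id }).memeItems(),
  },
};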
We are in a situation where the response of our GraphQL query has to return some dynamic properties of an object. In our case we are not able to predefine all possible properties, so it has to be dynamic.
As we see it, there are two options to solve this.
const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: {
      type: GraphQLString,
    },
    elements: {
      /*
      THIS is our special field which needs to return a dynamic object
      */
    },
    // ...
  },
});
As you can see in the example code, elements is the field that has to return an object. A resolved response could look like this:
{
  name: 'some name',
  elements: {
    an_unknown_key: {
      some_nested_field: {
        some_other: true,
      },
    },
    another_unknown_prop: 'foo',
  },
}
1) Return a "Any-Object"
We could just return any object - so GraphQL do not need to know which fields the Object has. When we tell GraphQL that the field is the type GraphQlObjectType it needs to define fields. Because of this it seems not to be possible to tell GraphQL that someone is just an Object.
Fo this we have changed it like this:
elements: {
  type: new GraphQLObjectType({ name: 'elements' }),
},
2) Define the field properties dynamically via a fields function
When we define fields as a function, we could build our object dynamically. But the fields function would need some information (in our case, the information that would be passed to elements), and we would need to access it to build the field object.
Example:
const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: {
      type: GraphQLString,
    },
    elements: {
      type: new GraphQLObjectType({
        name: 'elements',
        fields: (argsFromElements) => {
          // here we can now access keys from "args"
          const fields = {};
          argsFromElements.keys.forEach((key) => {
            // some logic here ..
            fields[someGeneratedProperty] = someGeneratedGraphQLType;
          });
          return fields;
        },
      }),
      args: {
        keys: {
          type: new GraphQLList(GraphQLString),
        },
      },
    },
    // ...
  },
});
This could work, but the question would be whether there is a way to pass the args and/or the resolved object to the fields function.
Question
So our question is now: which way would be recommended in our case in GraphQL, and is solution 1 or 2 possible? Maybe there is another solution?
Edit
Solution 1 would work when using a scalar type. Example:
type: new GraphQLScalarType({
  name: 'elements',
  serialize(value) {
    return value;
  },
}),
I am not sure if this is a recommended way to solve our situation.
Neither option is really viable:
GraphQL is strongly typed. GraphQL.js doesn't support some kind of any field, and all object types defined in your schema must have their fields defined. If you look in the docs, fields is required -- if you try to leave it out, you'll hit an error.
Args are used to resolve queries on a per-request basis. There's no way to pass them back to your schema. Your schema is supposed to be static.
As you suggest, it's possible to accomplish what you're trying to do by rolling your own custom scalar. I think a simpler solution would be to just use JSON -- you can import a custom scalar for it, like this one. Then just have your elements field resolve to a JSON object or array containing the dynamic fields. You could also manipulate the JSON object inside the resolver based on arguments if necessary (if you wanted to limit the fields returned to a subset defined in the args, for example).
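As a sketch of the JSON route, assuming the graphql-type-json package (the original answer links to a scalar like it, but the package name here is my assumption):
const { GraphQLObjectType, GraphQLString } = require('graphql');
const { GraphQLJSON } = require('graphql-type-json');
const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: { type: GraphQLString },
    // elements can now resolve to any JSON value holding the dynamic keys
    elements: { type: GraphQLJSON },
  },
});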
Word of warning: The issue with utilizing JSON, or any custom scalar that includes nested data, is that you're limiting the client's flexibility in requesting what it actually needs. It also results in less helpful errors on the client side -- I'd much rather be told that the field I requested doesn't exist or returned null when I make the request than to find out later down the line the JSON blob I got didn't include a field I expected it to.
One more possible solution could be to declare any such dynamic object as a string: pass a stringified version of the object as that field's value from your resolver functions, and then parse the string back to JSON on the client side to turn it into an object again.
I'm not sure if it's the recommended way or not, but I tried this approach and it worked smoothly, so I'm sharing it here.
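A minimal sketch of that approach (the type and field names are placeholders): declare elements as a plain String in the schema, stringify in the resolver, and parse on the client.
// Server side: elements is declared as a String (GraphQLString) in the schema
const resolvers = {
  SomeType: {
    elements: (parent) => JSON.stringify(parent.elements),
  },
};
// Client side: turn the string back into an object
const elements = JSON.parse(result.data.someType.elements);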