How to use Pascal casing instead of camel casing in graphql-dotnet

Here I have a mutation. As you can see, the parameter name is Product (capital P) and the field name is CreateProduct (capital C). When I execute this mutation from GraphiQL I have to write the field name and the parameter name in camel casing. Is there a way to configure graphql-dotnet to respect the names as they are written in the code?
const string STR_Product = "Product";

public Mutations(IProductService ProductService)
{
    Name = "Mutation";

    Field<ProductType>(
        "CreateProduct",
        arguments: new QueryArguments(
            new QueryArgument<NonNullGraphType<ProductInputType>> { Name = STR_Product }),
        resolve: context =>
        {
            var ProductInput = context.GetArgument<ProductInput>(STR_Product);
            return ProductService.CreateAsync(ProductInput.Code, ProductInput.Name, ProductInput.Description);
            //return new ProductInputType();
        }
    );
}
}

You can pass an IFieldNameConverter to the ExecutionOptions. It defaults to the CamelCaseFieldNameConverter.
Some special consideration is needed for the introspection types, as those are required to be camel case according to the GraphQL specification.
GraphQL.NET provides CamelCaseFieldNameConverter, PascalCaseFieldNameConverter, and DefaultFieldNameConverter. You could also write your own (see the converters in the GraphQL.NET source).
using System;
using GraphQL;
using GraphQL.Types;

public class Program
{
    public static void Main(string[] args)
    {
        var schema = Schema.For(@"
            type Query {
                Hello: String
            }
        ");

        var json = schema.Execute(_ =>
        {
            _.Query = "{ Hello }";
            _.Root = new { Hello = "Hello World!" };
            _.FieldNameConverter = new DefaultFieldNameConverter();
        });

        Console.WriteLine(json);
    }
}
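If none of the built-in converters fit, writing your own is a small class. The sketch below assumes the IFieldNameConverter interface from the GraphQL.NET 2.x source (a single NameFor(string field, Type parentType) method); the class name and the simplified introspection check are illustrative only:
// A minimal sketch of a custom converter, assuming the GraphQL.NET 2.x
// IFieldNameConverter interface with a NameFor(string field, Type parentType) method.
public class MyPascalCaseFieldNameConverter : IFieldNameConverter
{
    public string NameFor(string field, Type parentType)
    {
        // Simplified introspection check: leave fields such as __typename in
        // camel case, as the specification requires. The built-in converters
        // inspect parentType instead.
        if (string.IsNullOrEmpty(field) || field.StartsWith("__"))
            return field;

        // Upper-case the first character and keep the rest as written in code.
        return char.ToUpperInvariant(field[0]) + field.Substring(1);
    }
}
It would then be assigned in the same way as above, e.g. _.FieldNameConverter = new MyPascalCaseFieldNameConverter();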

See the above answer by Joe McBride, but instead of "FieldNameConverter" you should now use "NameConverter". Example:
var json = schema.Execute(_ =>
{
    _.NameConverter = new DefaultNameConverter(); // or PascalNameConverter, etc.
});
(posted as an answer because I am unable to comment)

Related

HotChocolate (GraphQL) schema first approach on complex type

I'm a novice with HotChocolate and I'm trying to PoC some simple usage.
I've created a very simple .graphql file:
#camera.graphql
type Camera {
  id: ID!
  name: String!
}

type Query {
  getCamera: Camera!
}
And a very simple piece of .NET code to wrap a camera:
public class QlCamera
{
    public static QlCamera New()
    {
        return new QlCamera
        {
            Id = Guid.NewGuid().ToString(),
            Name = Guid.NewGuid().ToString()
        };
    }

    public string Id { get; set; }
    public string Name { get; set; }
}
as well as this code for schema creation:
public void CreateSchema()
{
    string path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    var smBuilder = SchemaBuilder.New();
    smBuilder.AddDocumentFromFile(path + "/GraphQL/camera.graphql");
    smBuilder.AddResolver("Query", "getCamera", () => QlCamera.New());
    var schema = smBuilder.Create();
}
On the last line, however, I get an exception:
HotChocolate.SchemaException: 'Multiple schema errors occured:
The field Camera.id has no resolver. - Type: Camera
The field Camera.name has no resolver. - Type: Camera
'
I've tried to create:
public class QlCameraType : ObjectType<QlCamera>
{
    protected override void Configure(IObjectTypeDescriptor<QlCamera> descriptor)
    {
        descriptor.Name("Camera");
        descriptor.Field(t => t.Id).Type<NonNullType<StringType>>();
        descriptor.Field(t => t.Name).Type<StringType>();
    }
}
and to replace
smBuilder.AddResolver("Query", "getCamera", () => QlCamera.New());
with
smBuilder.AddResolver("Query", "getCamera", () => new QlCameraType());
But I continue to get the same exception.
Obviously I am missing something here, but I cannot understand what exactly.
Could someone explain what I am missing?
(I've gone through the documentation a few times, but I cannot find relevant help there.)
As the exception clearly states, there are no resolvers bound for the particular fields ("id" and "name") of the "Camera" type/object.
So they just have to be added:
smBuilder.AddResolver("Camera", "id", rc => rc.Parent<QlCamera>().Id);
smBuilder.AddResolver("Camera", "name", rc => rc.Parent<QlCamera>().Name);
And that is it.
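Putting it together, the CreateSchema method from the question then becomes (the same SchemaBuilder calls as above, with only the two field resolvers added):
public void CreateSchema()
{
    string path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);

    var smBuilder = SchemaBuilder.New();
    smBuilder.AddDocumentFromFile(path + "/GraphQL/camera.graphql");

    // Root resolver for Query.getCamera
    smBuilder.AddResolver("Query", "getCamera", () => QlCamera.New());

    // Field resolvers for Camera.id and Camera.name, reading from the parent QlCamera
    smBuilder.AddResolver("Camera", "id", rc => rc.Parent<QlCamera>().Id);
    smBuilder.AddResolver("Camera", "name", rc => rc.Parent<QlCamera>().Name);

    var schema = smBuilder.Create();
}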

Abstract object not mapped correctly in Elasticsearch using Nest 7.0.0-alpha1

I am using NEST (.NET 4.8) to import my data, and I have a problem getting the mapping to work in NEST 7.0.0-alpha1.
I have the following class structure:
class LinkActor
{
    public Actor Actor { get; set; }
}

abstract class Actor
{
    public string Description { get; set; }
}

class Person : Actor
{
    public string Name { get; set; }
}
I connect to Elasticsearch this way:
var connectionSettings = new ConnectionSettings(new Uri(connection));
connectionSettings.DefaultIndex(indexName);
var client = new ElasticClient(connectionSettings);
The actual data looks like this:
var personActor = new Person
{
    Description = "Description",
    Name = "Name"
};

var linkActor = new LinkActor
{
    Actor = personActor
};
And the data is indexed like this:
result = client.IndexDocument(linkActor);
Using NEST 6.6 I am getting the following data in Elasticsearch 6.5.2:
"actor": {
"name": "Name",
"description": "Description"
}
However when using NEST 7.0.0-alpha1 I get the following data in Elasticsearch 7.0.0:
"actor": {
"description": "Description"
}
So the data from the concrete class is missing. I am obviously missing or not understanding some new mapping feature, but my attempts with AutoMap have failed:
client.Map<(attempt with each of the above classes)>(m => m.AutoMap());
Is it still possible to map the data from the concrete class in NEST 7.0.0-alpha1?
I found a workaround using NEST.JsonNetSerializer (remember to install this package), which allows me to pass a JObject directly.
Connect to Elasticsearch using a connection pool so you can add JsonNetSerializer.Default:
var pool = new SingleNodeConnectionPool(new Uri(connection));
var connectionSettings = new ConnectionSettings(pool, JsonNetSerializer.Default);
connectionSettings.DefaultIndex(indexName);
var client = new ElasticClient(connectionSettings);
Convert the linkActor object from above to a JObject (JsonSerializerSettings omitted for clarity; add them to get camel casing):
var linkActorSerialized = JsonConvert.SerializeObject(linkActor);
var linkActorJObject = JObject.Parse(linkActorSerialized);
result = client.IndexDocument(linkActorJObject);
This gives the desired result:
"actor": {
"name": "Name",
"description": "Description"
}
It is a workaround; hopefully someone will be able to explain the mapping behaviour asked about in the question.
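For reference, the settings omitted above could be supplied roughly like this; this is just a sketch using Newtonsoft.Json's CamelCasePropertyNamesContractResolver, which is one common way to get camel-cased property names:
// Sketch: serialize with camelCase property names before parsing into a JObject
var settings = new JsonSerializerSettings
{
    ContractResolver = new CamelCasePropertyNamesContractResolver()
};

var linkActorSerialized = JsonConvert.SerializeObject(linkActor, settings);
var linkActorJObject = JObject.Parse(linkActorSerialized);
result = client.IndexDocument(linkActorJObject);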

In GraphQL, how to "aggregate" properties

Excuse the vague code, I can't really copy/paste. :)
I have a type in GraphQL like this:
type Thing {
  toBe: Boolean
  orNot: Boolean
}
I'm trying to create a new property on this type that is an... aggregate of those two. Basically, it returns a new value based upon those values. The code would be something like:
if (this.toBe && !this.orNot) { return "To be!"; }
if (!this.toBe && !this.orNot) { return "OrNot!"; }
Does this make sense? So it would return something like:
Thing1 {
  toBe: true;
  orNot: false;
  newProp: "To be!"
}
Yes, you can easily create aggregated fields in your GraphQL object types by handling the required logic in the aggregated field's resolver. While creating object types you have the instance of the underlying object, so you can build aggregated fields that are not present in your domain models from the object's data; this is one of the beauties of GraphQL. Note that the details differ between GraphQL library implementations. Below are examples for this use case in JavaScript and Scala.
Example in GraphQL.js:
var FooType = new GraphQLObjectType({
  name: 'Foo',
  fields: {
    toBe: { type: GraphQLBoolean },
    orNot: { type: GraphQLBoolean },
    newProp: {
      type: GraphQLString,
      resolve(obj) {
        if (obj.toBe && !obj.orNot) { return "To be!"; }
        else { return "OrNot!"; }
      }
    }
  }
});
Example in Sangria-graphql:
ObjectType(
  "Foo",
  "graphql object type for foo",
  fields[Unit, Foo](
    Field("toBe", BooleanType, resolve = _.value.toBe),
    Field("orNot", BooleanType, resolve = _.value.orNot),
    Field("newProp", StringType, resolve = c => {
      if (c.value.toBe && !c.value.orNot) "To be!" else "OrNot!"
    })
  )
)
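Since the main question on this page uses GraphQL.NET, the same idea there would look roughly like the sketch below; it assumes a plain Thing class with boolean ToBe and OrNot properties, which is not part of the question:
// Sketch: aggregated field in GraphQL.NET, assuming a Thing POCO with bool ToBe/OrNot
public class ThingType : ObjectGraphType<Thing>
{
    public ThingType()
    {
        Field(t => t.ToBe);
        Field(t => t.OrNot);

        // newProp does not exist on the Thing model; it is computed from the source object
        Field<StringGraphType>(
            "newProp",
            resolve: context =>
            {
                var thing = context.Source;
                if (thing.ToBe && !thing.OrNot) return "To be!";
                return "OrNot!";
            });
    }
}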
The various GraphQL server library implementations all have ways to provide resolver functions that supply the value for a field. You'd have to include the field in your schema and write the code for it, but this is a reasonable thing to do, and the code you quote is a good starting point.
In Apollo in particular, you provide a map of resolvers that is passed as the resolvers: option to the ApolloServer constructor. If a field doesn't have a resolver, it defaults to returning the relevant field from the native JavaScript object. So you can write:
const resolvers = {
  Thing: {
    newProp: (parent) => {
      if (parent.toBe && !parent.orNot) { return "To be!"; }
      if (!parent.toBe && !parent.orNot) { return "OrNot!"; }
      return "That is the question";
    }
  }
};

Emit deprecation warnings with Apollo client

Background
We are working on a fairly large Apollo project. A very simplified version of our API looks like this:
type Operation {
  foo: String
  activity: Activity
}

type Activity {
  bar: String
  # Lots of fields here ...
}
We've realised that splitting Operation and Activity provides no benefit and adds complexity. We'd like to merge them, but there are a lot of queries that assume this structure in the code base. In order to make the transition gradual, we add @deprecated directives:
type Operation {
  foo: String
  bar: String
  activity: Activity @deprecated
}

type Activity {
  bar: String @deprecated(reason: "Use Operation.bar instead")
  # Lots of fields here ...
}
Actual question
Is there some way to highlight those deprecations going forward? Preferably by printing a warning in the browser console when (in the test environment) running a query that uses a deprecated field?
Coming back to GraphQL two years later, I just found out that schema directives can be customized (nowadays?). So here's a solution:
import { SchemaDirectiveVisitor } from "graphql-tools"
import { defaultFieldResolver } from "graphql"
import { ApolloServer } from "apollo-server"

class DeprecatedDirective extends SchemaDirectiveVisitor {
  public visitFieldDefinition(field) {
    field.isDeprecated = true
    field.deprecationReason = this.args.reason
    const { resolve = defaultFieldResolver, } = field
    field.resolve = async function (...args) {
      const [_, __, ___, info,] = args
      const { operation, } = info
      const queryName = operation.name.value
      // eslint-disable-next-line no-console
      console.warn(
        `Deprecation Warning:
        Query [${queryName}] used field [${field.name}]
        Deprecation reason: [${field.deprecationReason}]`)
      return resolve.apply(this, args)
    }
  }

  public visitEnumValue(value) {
    value.isDeprecated = true
    value.deprecationReason = this.args.reason
  }
}

new ApolloServer({
  typeDefs,
  resolvers,
  schemaDirectives: {
    deprecated: DeprecatedDirective,
  },
}).listen().then(({ url, }) => {
  console.log(`🚀 Server ready at ${url}`)
})
This works on the server instead of the client. It should still print all the info needed to track down the offending query on the client, and having it in the server logs seems preferable from a maintenance perspective.

Issue with RANGE_ADD in Relay Mutations

I was going through the Relay docs and came across the following code for RANGE_ADD.
class IntroduceShipMutation extends Relay.Mutation {
  // This mutation declares a dependency on the faction
  // into which this ship is to be introduced.
  static fragments = {
    faction: () => Relay.QL`fragment on Faction { id }`,
  };
  // Introducing a ship will add it to a faction's fleet, so we
  // specify the faction's ships connection as part of the fat query.
  getFatQuery() {
    return Relay.QL`
      fragment on IntroduceShipPayload {
        faction { ships },
        newShipEdge,
      }
    `;
  }
  getConfigs() {
    return [{
      type: 'RANGE_ADD',
      parentName: 'faction',
      parentID: this.props.faction.id,
      connectionName: 'ships',
      edgeName: 'newShipEdge',
      rangeBehaviors: {
        // When the ships connection is not under the influence
        // of any call, append the ship to the end of the connection
        '': 'append',
        // Prepend the ship, wherever the connection is sorted by age
        'orderby(newest)': 'prepend',
      },
    }];
  }
  /* ... */
}
Here it is mentioned that edgeName is required for adding the new node to the connection. Looks well and fine.
I then moved further down the documentation and reached the GraphQL implementation of this mutation:
mutation AddBWingQuery($input: IntroduceShipInput!) {
  introduceShip(input: $input) {
    ship {
      id
      name
    }
    faction {
      name
    }
    clientMutationId
  }
}
According to the docs, this mutation gives me the following output:
{
  "introduceShip": {
    "ship": {
      "id": "U2hpcDo5",
      "name": "B-Wing"
    },
    "faction": {
      "name": "Alliance to Restore the Republic"
    },
    "clientMutationId": "abcde"
  }
}
I cannot see edgeName present here.
I am using graphene for my project, and there too I saw something similar:
class IntroduceShip(relay.ClientIDMutation):
    class Input:
        ship_name = graphene.String(required=True)
        faction_id = graphene.String(required=True)

    ship = graphene.Field(Ship)
    faction = graphene.Field(Faction)

    @classmethod
    def mutate_and_get_payload(cls, input, context, info):
        ship_name = input.get('ship_name')
        faction_id = input.get('faction_id')
        ship = create_ship(ship_name, faction_id)
        faction = get_faction(faction_id)
        return IntroduceShip(ship=ship, faction=faction)
Here, too, I cannot see edgeName anywhere.
Any help please? I am working on mutations for the first time, so I wanted to confirm: am I missing something, or is something wrong here?
This example might be either simplified or a bit obsolete, because in practice you do need to return the edge, and that is exactly what Relay fetches (the other fields in RANGE_ADD are more a kind of declaration and are not necessarily fetched).
Here is how you can do it in graphene:
# Imports valid as of graphene==0.10.2 and graphql-relay==0.4.4
from graphql_relay.connection.arrayconnection import offset_to_cursor

class IntroduceShip(relay.ClientIDMutation):
    class Input:
        ship_name = graphene.String(required=True)
        faction_id = graphene.String(required=True)

    ship = graphene.Field(Ship)
    faction = graphene.Field(Faction)
    new_ship_edge = graphene.Field(Ship.get_edge_type().for_node(Ship))

    @classmethod
    def mutate_and_get_payload(cls, input, context, info):
        ship_name = input.get('ship_name')
        faction_id = input.get('faction_id')
        ship = create_ship(ship_name, faction_id)
        faction = get_faction(faction_id)
        ship_edge_type = Ship.get_edge_type().for_node(Ship)
        new_ship_edge = ship_edge_type(
            # Assuming get_ships_number is supplied elsewhere
            cursor=offset_to_cursor(get_ships_number()),
            node=ship
        )
        return cls(ship=ship, faction=faction, new_ship_edge=new_ship_edge)
