How to check if a function parameter is a Normalizr schema class?

Is there any way to check, at runtime, whether a function parameter is a Normalizr schema class or not? It could be any type: entity, array, object, etc.
For example:
function processTMDBResponse(response, schema) {
  // if the 'schema' param is not a normalizr schema, throw!
  // some code
}

You can and can't do what you're looking for.
If you write yourself a lint rule that only allows creating schemas from the normalizr classes, like new schema.Array(), and prohibits the [] shorthand, then you can check using instanceof:
if (
  mySchema instanceof schema.Array ||
  mySchema instanceof schema.Entity ||
  mySchema instanceof schema.Object ||
  mySchema instanceof schema.Union ||
  mySchema instanceof schema.Values
) {
  // your code
} else {
  throw new Error('mySchema is not a schema');
}
However, if you use the shorthand, any array [] or plain object {} is also a valid schema (for schema.Array and schema.Object, respectively). This is much more difficult to validate, because almost everything in JavaScript reports typeof 'object' (even null does).
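To illustrate the pitfall, here is a small sketch in plain JavaScript (the looksLikeSchema name is illustrative) that combines the instanceof checks with a loose fallback for the shorthand forms; note how weak the typeof test really is:
const { schema } = require('normalizr');

// typeof is too coarse to distinguish schemas from ordinary values
console.log(typeof []);   // 'object'
console.log(typeof {});   // 'object'
console.log(typeof null); // 'object'

// A best-effort check: exact normalizr classes, plus the shorthand forms
function looksLikeSchema(candidate) {
  if (
    candidate instanceof schema.Array ||
    candidate instanceof schema.Entity ||
    candidate instanceof schema.Object ||
    candidate instanceof schema.Union ||
    candidate instanceof schema.Values
  ) {
    return true;
  }
  // Shorthand: an array of schemas, or a plain object mapping keys to schemas.
  // This also accepts many non-schema values, which is exactly the problem.
  return Array.isArray(candidate) ||
    (candidate !== null && typeof candidate === 'object');
}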

Related

How to trigger visitInputObject method on custom directive?

I'm building a custom directive in which I'm hoping to validate entire input objects. I'm using the INPUT_OBJECT type with the visitInputObject method on a class extending SchemaDirectiveVisitor.
Every time I run a mutation using the input type, visitInputObject does not run.
I've used the other types/methods like visitObject and visitFieldDefinition and they work perfectly. But when trying to use input types and methods they will not trigger.
I've read all the available documentation I can find. Is this just not supported yet?
Some context code (not the actual code):
directive @validateThis on INPUT_OBJECT

input MyInputType @validateThis {
  id: ID
  someField: String
}

type Mutation {
  someMutation(myInput: MyInputType!): SomeType
}

class ValidateThisDirective extends SchemaDirectiveVisitor {
  visitInputObject(type) {
    console.log('Not triggering');
  }
}
All the visit methods of a SchemaDirectiveVisitor are run at the same time, when the schema is built. That includes visitInputObject and visitFieldDefinition. The difference is that when we use visitFieldDefinition, we often do it to modify the resolve function for the visited field, and it's this function that's called during execution.
You use each visit method to modify the respective schema element. You can use visitInputObject to modify an input object, for example to add or remove fields from it. You cannot use it to modify the resolution logic of an output object's field; you should use visitFieldDefinition for that.
// Note: defaultFieldResolver is imported from the 'graphql' package
visitFieldDefinition(field, details) {
  const { resolve = defaultFieldResolver } = field
  field.resolve = async function (parent, args, context, info) {
    Object.keys(args).forEach(argName => {
      const argDefinition = field.args.find(a => a.name === argName)
      // Note: you may have to "unwrap" the type if it's a list or non-null
      const argType = argDefinition.type
      if (argType.name === 'InputTypeToValidate') {
        const argValue = args[argName]
        // validate argValue here
      }
    })
    return resolve.apply(this, [parent, args, context, info]);
  }
}
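For reference, here is a minimal sketch of how a directive class like this is typically registered so that its visit methods run when the schema is built (assuming graphql-tools v4-style APIs, with typeDefs and resolvers defined elsewhere):
const { makeExecutableSchema } = require('graphql-tools')

// typeDefs and resolvers are assumed to exist elsewhere in the project
const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
  schemaDirectives: {
    // the key must match the directive name used in the SDL (@validateThis)
    validateThis: ValidateThisDirective,
  },
})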

Check for missing resolvers

How would you scan a schema for missing resolvers for queries and non-scalar fields?
I'm trying to work with a dynamic schema, so I need to be able to test this programmatically. I've been browsing graphql-tools for a few hours to find a way to do this, but I'm getting nowhere...
checkForResolveTypeResolver - this only applies to interface and union resolveType resolvers
I can't find a way to know when a defaultFieldResolver is applied
I tried working with custom directives to add @requiredResolver, to help identify those fields, but custom directives are far from being fully supported:
introspection & directives
no graphql-js directives handler (you can work around this with graphql-tools, though)
Any help is appreciated!
Given an instance of GraphQLSchema (i.e. what's returned by makeExecutableSchema) and your resolvers object, you can just check it yourself. Something like this should work:
const { isObjectType, isWrappingType, isLeafType } = require('graphql')

function assertAllResolversDefined (schema, resolvers) {
  // Loop through all the types in the schema
  const typeMap = schema.getTypeMap()
  for (const typeName in typeMap) {
    const type = schema.getType(typeName)
    // We only care about ObjectTypes
    // Note: this will include Query, Mutation and Subscription
    if (isObjectType(type) && !typeName.startsWith('__')) {
      // Now loop through all the fields in the object
      const fieldMap = type.getFields()
      for (const fieldName in fieldMap) {
        const field = fieldMap[fieldName]
        let fieldType = field.type
        // "Unwrap" the type in case it's a list or non-null
        while (isWrappingType(fieldType)) {
          fieldType = fieldType.ofType
        }
        // Only check fields that don't return scalars or enums
        // If you want to check *only* non-scalars, use isScalarType
        if (!isLeafType(fieldType)) {
          if (!resolvers[typeName]) {
            throw new Error(
              `Type ${typeName} in schema but not in resolvers map.`
            )
          }
          if (!resolvers[typeName][fieldName]) {
            throw new Error(
              `Field ${fieldName} of type ${typeName} in schema but not in resolvers map.`
            )
          }
        }
      }
    }
  }
}
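As a quick usage sketch (assuming the usual makeExecutableSchema setup, with typeDefs and resolvers defined elsewhere), the check can be run right after the schema is built:
const { makeExecutableSchema } = require('graphql-tools')

// typeDefs and resolvers are assumed to exist elsewhere
const schema = makeExecutableSchema({ typeDefs, resolvers })

// Throws if any object-type field returning a non-leaf type
// has no corresponding entry in the resolvers map
assertAllResolversDefined(schema, resolvers)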

How to extend LINQ select method in my own way

The following statement works fine if the source is not null:
Filters.Selection
    .Select(o => new GetInputItem() { ItemID = o.ItemId });
It bombs if "Filters.Selection" is null (obviously). Is there any way to write my own extension method that returns null if the source is null, and otherwise executes the Select func?
Say, something like the following:
var s = Filters.Selection
    .MyOwnSelect(o => new GetInputItem() { ItemID = o.ItemId });
"s" would be null if "Filters.Selection" is null, or else, "s" would contain the evaluated "func" using LINQ Select.
This is only to learn more about LINQ extensions/customizations.
Thanks.
You could do this:
public static IEnumerable<U> SelectOrNull<T, U>(this IEnumerable<T> seq, Func<T, U> map)
{
    if (seq == null)
        return Enumerable.Empty<U>(); // Or return null, though an empty sequence plays more nicely with other operators
    return seq.Select(map);
}
Yes, have a look at the Enumerable and Queryable classes in the framework; they implement the standard query operators.
You would need to implement a similar class with Select extension methods matching the same signatures; if the source is null, exit early and return an empty sequence.
Assuming you're talking about LINQ to Objects, absolutely:
public static class NullSafeLinq
{
    public static IEnumerable<TResult> NullSafeSelect<TSource, TResult>
        (this IEnumerable<TSource> source, Func<TSource, TResult> selector)
    {
        // We don't intend to be safe against null projections...
        if (selector == null)
        {
            throw new ArgumentNullException("selector");
        }
        return source == null ? null : source.Select(selector);
    }
}
You may also want to read my Edulinq blog post series to learn more about how LINQ to Objects works.

Cast between Oracle table field and Entity Framework fails

Within a WCF web service I run a query against an Oracle 11g database and use Entity Framework as the model. The target field is of type Numeric, while in the Entity Framework model it is Int64.
When I try to update the field I get the following exception: The specified cast from a materialized 'System.Decimal' type to the 'System.Int64' type is not valid.
The method generating the error is below; in particular, the error occurs on the line within the else statement: result = _context.ExecuteStoreQuery<long>(query).FirstOrDefault();
public string GetDatabaseTimestamp(Type timestampFieldType, string query)
{
    object result;
    if (timestampFieldType == typeof(string))
    {
        result = _context.ExecuteStoreQuery<string>(query).FirstOrDefault();
    }
    else
    {
        result = _context.ExecuteStoreQuery<long>(query).FirstOrDefault();
    }
    return result.ToString();
}
Would it be possible to define a converter or something similar at the EF level? The option of changing the database is not feasible, so I have to change the code.
In EF I had already changed the type to decimal. The problem was that the query was executed directly against the Oracle database, and this was generating the exception.
I solved the issue by using attributes for the target entity and changed the method to:
public string GetDatabaseTimestamp(Type timestampFieldType, string query)
{
    object result;
    if (timestampFieldType == typeof(string))
    {
        result = _context.ExecuteStoreQuery<string>(query).FirstOrDefault();
    }
    else if (timestampFieldType == typeof(decimal))
    {
        result = _context.ExecuteStoreQuery<decimal>(query).FirstOrDefault();
    }
    else
    {
        result = _context.ExecuteStoreQuery<long>(query).FirstOrDefault();
    }
    return result.ToString();
}
In this way, when the passed timestampFieldType is decimal, the proper cast is selected. The issue is due to different data types being used for similar fields (for example, update fields) in the legacy Oracle database.

Query a list of dynamic objects

I am using Massive to get the config table from the database. I would like to cache the config since the app reads values from it all the time.
Once it is cached, is there an easy way to find the object where name = 'something'?
Here is where the whole table is cached:
protected override dynamic Get()
{
    var ret = HttpRuntime.Cache["Config"];
    if (ret == null)
    {
        ret = _table.All();
        HttpRuntime.Cache.Add("Config", ret, null, DateTime.Now.AddMinutes(2), Cache.NoSlidingExpiration, CacheItemPriority.Low, null);
    }
    return ret;
}
Here is where I would like to pull one record using that method:
protected override dynamic Get(string name)
{
    return this.Get().Where(x => x.Name == name).SingleOrDefault();
}
I know LINQ or lambda statements are not allowed on dynamic objects, but what is the next best way to pull that one object out of the list?
You cannot write the lambda expression directly as the Where argument, but you can assign it to a Func variable.
Also, I believe extension methods do not work on dynamic objects, so you have to call the extension method directly as a static method.
I think you could use the code below:
Func<dynamic, bool> check = x => x.Name == name;
var matches = System.Linq.Enumerable.Where<dynamic>(this.Get(), check);
