Graphene mutation with list as input - graphql

I have a graphene mutation like this:
from graphene import List, Mutation, ObjectType, String

class User(ObjectType):
    username = String()

class ImportUsers(Mutation):
    class Arguments:
        users = List(User)

    Output = List(User)

    @staticmethod
    def mutate(root, info, users):
        ...
But graphene gives me the following error: AssertionError: Mutations.importUsers(users:) argument type must be Input Type but got: [User].
How can I have a mutation in graphene which accepts a list of objects?

I was trying roughly the same thing as you did.
Figured out that custom input types should inherit from graphene.InputObjectType instead of graphene.ObjectType.
class User(graphene.InputObjectType):  # <-- changed to InputObjectType
    username = graphene.String()
So defining your User like this should solve your case.
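For completeness, a minimal sketch of how the whole mutation might then look, keeping a separate ObjectType for the output (the UserInput name is just illustrative):
import graphene
from graphene import InputObjectType, Mutation, ObjectType

class UserInput(InputObjectType):      # used for the mutation's arguments
    username = graphene.String()

class User(ObjectType):                # used for the mutation's output
    username = graphene.String()

class ImportUsers(Mutation):
    class Arguments:
        users = graphene.List(UserInput)

    Output = graphene.List(User)

    @staticmethod
    def mutate(root, info, users):
        # echo the incoming usernames back as output User objects
        return [User(username=u.username) for u in users]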

Yeah so, basically, you need to have this:
class User(graphene.ObjectType):
    username = graphene.String()

class ImportUsers(Mutation):
    class Arguments:
        users = graphene.List(User)

    Output = graphene.List(User)

    @staticmethod
    def mutate(root, info, users):
        ...
Graphene has a List type. Also, I don't know if it's just my setup, but I think you need the graphene-qualified names (graphene.List, graphene.String) rather than the bare types. I am working on something very similar right now, so hopefully you find or found your solution, and if you do, let me know how it went! Hopefully this helps; I'm fairly new to all of this myself.

Related

Why isn't Python NewType compatible with isinstance and type?

This doesn't seem to work:
from typing import NewType
MyStr = NewType("MyStr", str)
x = MyStr("Hello World")
isinstance(x, MyStr)
I don't even get False, but TypeError: isinstance() arg 2 must be a type or tuple of types, because MyStr is a function and isinstance wants one or more types.
Even assert type(x) == MyStr or type(x) is MyStr fails.
What am I doing wrong?
Cross-reference: inheritance from str or int
A more detailed answer in the same question: https://stackoverflow.com/a/2673802/1091677
If you would like to subclass Python's str, you need to do it the following way:
class MyStr(str):
    # Class instance construction in Python follows a two-step call:
    # first __new__, which allocates the immutable structure,
    # then __init__, which sets up the mutable part.
    # Since str in Python is immutable, it has no meaningful __init__;
    # all data for a str must be set at __new__ time, so instead
    # of overriding __init__ we override __new__:
    def __new__(cls, *args, **kwargs):
        return str.__new__(cls, *args, **kwargs)
Then:
x = MyStr("Hello World")
isinstance(x, MyStr)
returns True as expected.
As of Python 3.10...
I'd speculate that the answer to
"Why isn't Python NewType compatible with isinstance and type?"
... is "It is a limitation of NewType".
I'd speculate that the answer to
"What am I doing wrong?"
... is "nothing". You are assuming NewType makes a new runtime type, it appears that it doesn't.
And for what it's worth, I wish it did work.
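For what it's worth, you can see at runtime that MyStr("...") just hands back the underlying str, which is why only checks against str itself succeed; a minimal demonstration:
from typing import NewType

MyStr = NewType("MyStr", str)
x = MyStr("Hello World")

print(type(x))             # <class 'str'> -- MyStr(...) returns the plain str unchanged
print(isinstance(x, str))  # True -- so you can only check against the underlying type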
Maybe you want a type that has methods like str does but is not a str?
A simple way to get that effect is just:
class MyStr:
    value: str
    def __init__(self, value: str):
        self.value = value
... but that means using all the string methods is "manual", e.g.
x = MyStr('fred')
x.value.startswith('fr')
... you could use @dataclass to add comparison etc.
This is not a one-size-fits-all answer, but it might suit your application.
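For the @dataclass variant, a minimal sketch (frozen and ordered so instances compare by their wrapped value):
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class MyStr:
    value: str

x = MyStr('fred')
y = MyStr('george')
print(x == MyStr('fred'))  # True -- equality generated by the dataclass
print(x < y)               # True -- ordering compares the value fields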
You can then make that simple wrapper class
class MyStr:
    value: str
    def __init__(self, value: str):
        self.value = value
... generic, like the (incomplete) Str in https://github.com/urnest/urnest/blob/master/xju/newtype.py
... and you can write:
class MyStrType: pass
class MyOtherStrType: pass

class MyStr(Str[MyStrType]):
    pass

class MyOtherStr(Str[MyOtherStrType]):
    pass

x = MyStr('fred')
y = MyOtherStr('fred')
x < y  # ! mypy error: distinct types MyStr and MyOtherStr
That's what I was after, which might be what you were after? I have to provide Int and Str separately, but in my experience distinct int, str, float, bool and bytes types give a lot of readability and error-rejection leverage. I will add Float, Bool and Bytes to xju.newtype soon and give them all the full set of methods.
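A rough idea of what such a tag-parameterised Str could look like (my own sketch of the pattern, not the actual contents of xju.newtype):
from typing import Generic, TypeVar

Tag = TypeVar("Tag")

class Str(Generic[Tag]):
    def __init__(self, value: str) -> None:
        self.value = value

    def __lt__(self, other: "Str[Tag]") -> bool:
        # only comparable against a Str carrying the same tag
        return self.value < other.value

    def startswith(self, prefix: str) -> bool:
        # forward selected str methods to the wrapped value
        return self.value.startswith(prefix)

class MyStrType: pass
class MyOtherStrType: pass

class MyStr(Str[MyStrType]): pass
class MyOtherStr(Str[MyOtherStrType]): pass

x = MyStr('fred')
y = MyOtherStr('fred')
# x < y   # mypy flags this: MyOtherStr is not a Str[MyStrType]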
It looks like this might have been "fixed" in Python 3.10:
https://docs.python.org/3/library/typing.html?highlight=typing#newtype
says:
Changed in version 3.10: NewType is now a class rather than a function. There is some additional runtime cost when calling NewType over a regular function. However, this cost will be reduced in 3.11.0.
I don't have 3.10 handy as I write this to try your example.

Validate at least 1 of a set of args is present in Kotlin class constructor

Scenario
I need to create a Kotlin class that can receive up to 4 arguments for its constructor, but only requires at least 1 out of a set of 3 (the fourth being entirely optional). To illustrate:
class Pie {
    // Completely optional; the constructor should use it if present, otherwise it may be null.
    var topping: String?

    // Of these three [fillingA, fillingB, fillingC], 1 or more must be present.
    var fillingA: String?
    var fillingB: String?
    var fillingC: String?
}
Thoughts
I've attempted to use Kotlin init{} blocks for validation, or telescoping constructors, but it gets ugly fast and I've yet to solve the issue. I have not found anything more elegant in the kotlinlang.org docs on primary/secondary constructors, though. My preference would be to find something similar to the @Size or @NotNull annotations, but I have failed to locate anything close.
It is important to note that I am using this class as a model for an API response.
Question
What is the most concise way to validate that a Kotlin class has at least 1 of a set of arguments passed to its constructor?
Are these fillings interchangeable? If so, you could assume that fillingA is always required and the other ones are optional, something like this:
class Pie constructor(
    val fillingA: String,
    val fillingB: String? = null,
    val fillingC: String? = null,
    val topping: String? = null
) {...}

It is possible to customize the response to nested object attributes?

I'm trying to figure out if there's a way to return one attribute of a nested object when the attribute is addressed using the 'dot' notation, but to return different attributes of that object when subsequent attributes are requested.
ex)
from datetime import datetime

class MyAttributeClass:
    def __init__(self, value):
        self.value = value
        self.timestamp = datetime.now()

class MyOuterClass:
    def __init__(self, value):
        self._value = MyAttributeClass(value)

test = MyOuterClass(5)
test.value            # should return test._value.value
test.value.timestamp  # should return test._value.timestamp
Is there any way to accomplish this? I imagine, if there is one, it involves defining the __getattr__ method of MyOuterClass, but I've been searching around and I haven't found any way to do this. Is it just impossible? It's not a big deal if it can't be done, but I've wanted to do this many times and I'd just like to know if there's a way.
It seems obvious now, but inheritance was the answer: define each attribute as an instance of a custom class that inherits from the datatype I want for ordinary attribute access (i.e. object.attr), and give that subclass the desired attributes for subsequent attribute requests (i.e. object.attr.subattr).
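A minimal sketch of that approach, assuming the wrapped value is an int (the TimestampedInt name is just illustrative):
from datetime import datetime

class TimestampedInt(int):
    # subclass the datatype wanted for plain attribute access...
    def __new__(cls, value):
        obj = super().__new__(cls, value)
        # ...and attach the extra sub-attribute to the instance
        obj.timestamp = datetime.now()
        return obj

class MyOuterClass:
    def __init__(self, value):
        self.value = TimestampedInt(value)

test = MyOuterClass(5)
print(test.value)            # behaves like the int 5
print(test.value.timestamp)  # the creation time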

ipython parallel push custom object

I am unable to send object to direct view workers.
Here is what I want to do:
class Test:
    def __init__(self):
        self.id = 'ddfdf'

from IPython.parallel import Client
rc = Client()
dv = rc[:]
t = Test()
dv['t'] = t
print dv['t']
NameError: name 't' is not defined
This would work if I pushed a pandas object or any of the built-in objects.
What is the way to do it with custom object?
I tried:
dv['Test'] = Test
dv['t'] = t
print dv['t']
UnpicklingError: NEWOBJ class argument isn't a type object
For interactively defined classes (in __main__), you do need to push the class definition, or use dill. But even this doesn't appear to work for old-style classes, which is a bug in IPython's handling of old-style classes[1]. This code works fine if you use a new-style class:
class Test(object):
    ...
instead of an old-style class. Note that old-style classes are not available in Python 3.
It's generally a good idea to always use new-style classes anyway.
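Putting that together, a sketch against the same (Python 2-era) IPython.parallel API used in the question:
from IPython.parallel import Client

class Test(object):          # new-style class
    def __init__(self):
        self.id = 'ddfdf'

rc = Client()
dv = rc[:]

dv['Test'] = Test            # push the interactively defined class first
dv['t'] = Test()             # then its instances can be reconstructed on the engines
print dv['t']
Alternatively, dv.use_dill() (where your IPython version provides it) lets the engines handle a wider range of interactively defined objects.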

Lift Record: empty value for required field but no validation errors

I've been trying to figure out how to do this without manually defining a validation but without any success so far.
I have a StringField
class Foo private() extends MongoRecord[Foo] with ObjectIdKey[Foo] {
  ...
  object externalId extends StringField(this, 255) {
    // none of these seem to have any effect on validation whatsoever:
    override def optional_? = false
    override def required_? = true
    override def defaultValueBox = Empty
  }
  ...
}
Now when I call .validate on a Foo, it returns no errors:
val foo = Foo.createRecord
foo.validate match {
  case Nil => foo.save
  ...
}
...and the document is saved into the (mongo) DB with no externalId.
So the question is: is there any way at all to have Lift automatically validate missing fields without me having to manually add stuff to validations?
EDIT: am I thinking too much in terms of the type of productivity that frameworks like Django and Rails provide out of the box? i.e. things like basic and very frequent validation without having to write anything but a few declarative attributes/flags. If yes, why has Lift opted to not provide this sort of stuff out of the box? Why would anybody not want .validate to automatically take into consideration all the def required_? = true/def optional_? = false fields?
As far as I'm aware, there isn't a way for you to validate a field without explicitly defining validations. The reason that optional_? and required_? don't provide validation is that it isn't always clear what logic to use, especially for non String fields. The required_? value itself is used by Crudify to determine whether to mark the field as required in the produced UI, but it's up to you to provide the proper logic to determine that the requirement is satisfied.
Validating the field can be as easy as
override def validations = valMinLen(1, "Required!") _ :: super.validations
Or see the answer to your other question here for how to create a generic Required trait.
