How can I make Graphene resolvers run concurrently - graphene-python

I have a Graphene query similar to the one below, with more than one resolver function. According to the Django Debug Toolbar, these queries run sequentially, even though they do not depend on each other. I've been digging through the Graphene repo and Stack Overflow for how to run plain query resolvers asynchronously, to no avail. There seem to be ways to tackle subscriptions (i.e. websockets), but my only concern right now is queries. I'm not sure which libraries I need or how to configure this.
class MyQuery(ObjectType):
    test_one = graphene.Field(TestNode)
    test_two = graphene.Field(TestNode)

    def resolve_test_one(self, info, **args):
        return MyTestModel.objects.get(id="id-1")

    def resolve_test_two(self, info, **args):
        return MyTestModel.objects.get(id="id-2")
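
For reference, a minimal sketch of one possible approach (not confirmed by the question): with Graphene 3.x the resolvers can be declared as coroutines and the schema executed with execute_async(), so graphql-core can await independent fields concurrently. The asgiref dependency is an assumption; TestNode and MyTestModel are reused from the question.

import graphene
from asgiref.sync import sync_to_async


class MyQuery(graphene.ObjectType):
    test_one = graphene.Field(TestNode)
    test_two = graphene.Field(TestNode)

    async def resolve_test_one(self, info, **args):
        # thread_sensitive=False lets each blocking ORM call run in its own
        # thread (and its own DB connection) instead of sharing a single one.
        return await sync_to_async(MyTestModel.objects.get, thread_sensitive=False)(id="id-1")

    async def resolve_test_two(self, info, **args):
        return await sync_to_async(MyTestModel.objects.get, thread_sensitive=False)(id="id-2")


schema = graphene.Schema(query=MyQuery)

# In an async context, sibling fields are gathered rather than resolved in sequence:
# result = await schema.execute_async("{ testOne { id } testTwo { id } }")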

Related

WrappedAPIView.__name__ = func.__name__ . AttributeError: 'dict' object has no attribute '__name__'

I am in a little bit of a pickle.
I have multiple decorators wrapping my view functions. I want to test a view function using pytest, which means the decorators will also be executed. In some of those decorators I am making API calls to an external service, and I do not want to make those calls while running my tests, so instead I mock the responses from those decorators. When I run the test I get AttributeError: 'dict' object has no attribute '__name__', and pytest points to the decorators.py file in the djangorestframework package as the source of the error. Any idea what I am doing wrong?
Views.py file
@api_view(['POST'])
@DecoratorClass.decorator_one
@DecoratorClass.decorator_two
@DecoratorClass.decorator_three
@DecoratorClass.decorator_four
@DecoratorClass.decorator_five
@DecoratorClass.decorator_six
@DecoratorClass.decorator_seven
def my_view_fun(request):
    my_data = TenantService.create_tenant(request)
    return ResponseManager.handle_response(message="successful", data=my_data.data, status=201)
This works perfectly with manual testing, I only get this problem when I am running the test with pytest.
I am making the external API calls in decorators three, four and five.
TL;DR:
How can I handle the decorators wrapped around a view function when testing that view function, given that some of those decorators make external API calls which should ideally be mocked in a test?
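
Not an authoritative answer, but a sketch of the usual pattern: the decorators run their wrapper code at request time, so you can patch the external call they make rather than the decorators themselves. Mocking a decorator so that it returns a dict instead of a callable is the likely cause of the 'dict' object has no attribute '__name__' error, since api_view tries to read func.__name__ from whatever the inner decorators returned. The module path myapp.decorators, the use of requests, and the /my-endpoint/ URL are assumptions, not taken from the question.

from unittest import mock

import pytest
from rest_framework.test import APIClient


@pytest.mark.django_db
def test_my_view_fun():
    # Hypothetical patch target: wherever the decorators call the external service.
    with mock.patch("myapp.decorators.requests.post") as mock_post:
        mock_post.return_value.status_code = 200
        mock_post.return_value.json.return_value = {"result": "ok"}

        response = APIClient().post("/my-endpoint/", data={}, format="json")

    assert response.status_code == 201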

Using `createMockClient` for testing non-React code?

I have a mixed application that uses Apollo for both React and non-React code.
However, I can't find documentation or code examples for testing non-React code with the Apollo Client without using MockedProvider. I did, however, notice that Apollo exports a mock client from its testing directory.
import { createMockClient } from '@apollo/client/testing';
I haven’t found any documentation about this API and am wondering if it’s intended to be used publicly and, if not, what the supported approach is for this.
The reason I need this is simple: when using Next.js' SSR and/or SSG features, data fetching and the actual data rendering are split into separate functions.
So the fetching code is not using React, but Node.js to fetch data.
Therefore I use apolloClient.query to fetch the data I need.
When I wrap a React component around that fetching code in a test and wrap MockedProvider around that, the apolloClient's query method always returns undefined for mocked queries, so it seems this only works for the useQuery hook?
Do you have any idea how to mock the client in non-react code?
Thank you for your support in advance. If you need any further information from me feel free to ask.
Regards,
Horstcredible
I was in a similar position where I wanted to use a MockedProvider and mock the client class, rather than use useQuery as documented here: https://www.apollographql.com/docs/react/development-testing/testing/
Though it doesn't seem to be documented, createMockClient from '@apollo/client/testing' can be passed the same mocks as MockedProvider to mock without useQuery. These examples assume you have the mocks you would otherwise pass to a MockedProvider:
export const mockGetAssetById = async (id: number): Promise<any> => {
  const client = createMockClient(mocks, GetAsset)
  const data = await client.query({
    query: GetAsset,
    variables: { id },
  })
  return data
}
Accomplishes the same as:
const { data } = useQuery(GetAsset, { variables: { id } })

Django REST framework question: using TestCase, I'm not understanding how to use fixtures

I'm confused about factories.
@pytest.fixture
def a_api_request_factory():
    return APIRequestFactory()


class TestUserProfileDetailView(TestCase):
    def test_create_userprofile(self, up=a_user_profile, rf=a_api_request_factory):
        """Creates an APIRequest and uses an instance of UserProfile from a_user_profile to test the view userprofile_detail_view."""
        request = rf().get('/api/userprofile/')  # the problem line
        request.user = up.user
        response = userprofile_detail_view(request)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data['user'], up.user.username)
If I take out the parens from rf().get... then I get
"function doesn't have a get attribute".
If I call the fixture directly, it gives me:
"Fixture "a_api_request_factory" called directly. Fixtures are not
meant to be called directly, but are created automatically when test
functions request them as parameters. See
https://docs.pytest.org/en/stable/fixture.html for more information
about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly
about how to update your code."
I do believe I've hit every combination of with or without parens in all relevant locations. Where do the parens go for fixtures?
Or better yet is there a pattern to avoid this type of confusion completely?
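
For what it's worth, a sketch of the pattern that avoids the parentheses problem altogether (assuming pytest-django; a_user_profile and userprofile_detail_view come from the question and are not defined here): drop unittest's TestCase so pytest can inject fixtures as plain parameters, and never call the fixture functions yourself.

import pytest
from rest_framework.test import APIRequestFactory


@pytest.fixture
def api_request_factory():
    return APIRequestFactory()


@pytest.mark.django_db
def test_create_userprofile(api_request_factory, a_user_profile):
    # pytest calls each fixture and passes in its return value, so the
    # parameters are already an APIRequestFactory instance and a UserProfile:
    # no parentheses are needed anywhere in the test body.
    request = api_request_factory.get('/api/userprofile/')
    request.user = a_user_profile.user

    response = userprofile_detail_view(request)

    assert response.status_code == 200
    assert response.data['user'] == a_user_profile.user.username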

Ruby neo4j-core mass processing data

Has anyone used Ruby neo4j-core to mass process data? Specifically, I am looking at taking in about 500k lines from a relational database and inserting them via something like:
Neo4j::Session.current.transaction.query
  .merge(m: { Person: { token: person_token } })
  .merge(i: { IpAddress: { address: ip, country: country,
                           city: city, state: state } })
  .merge(a: { UserToken: { token: token } })
  .merge(r: { Referrer: { url: referrer } })
  .merge(c: { Country: { name: country } })
  .break # This will make sure the query is not reordered
  .create_unique("m-[:ACCESSED_FROM]->i")
  .create_unique("m-[:ACCESSED_FROM]->a")
  .create_unique("m-[:ACCESSED_FROM]->r")
  .create_unique("a-[:ACCESSED_FROM]->i")
  .create_unique("a-[:ACCESSED_FROM]->r")
  .create_unique("i-[:IN]->c")
  .exec
However, doing this locally takes hours on hundreds of thousands of events. So far, I have attempted the following:
* Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvement here.
* Calling tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It issues the exact same requests, with the same frequency, but gets a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted via the non-Neo4j::Transaction technique - quite possibly I am doing something incorrectly.)
Some other ideas I had:
* Post process into a CSV, SCP up and then use the neo4j-import command line utility (although, that seems kinda hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time. With transactions all you're really getting is the ability to rollback. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
You can submit your requests via raw HTTP JSON to the APIs. If you still want to use the Query API you can use the to_cypher and merge_params methods to get the cypher and params for that (merge_params is a private method currently, so you'd need to send(:merge_params))
You can load via CSV as you said. You can either:
* use the neo4j-import command, which allows you to import very fast but requires you to put your CSV in a specific format, requires that you be creating a DB from scratch, and requires that you create indexes/constraints after the fact, or
* use the LOAD CSV command, which isn't as fast but is still pretty fast.
You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use via neo4apis-twitter and neo4apis-github
If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
If you're implementing a solution which is going to make queries like the one above, where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the Eager operator (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE output).
Also worth a look: Max De Marzi's post on Scaling Cypher Writes

Convert multiple querysets to json in django

I asked a related question earlier today
In this instance, I have 4 queryset results:
action_count = Action.objects.filter(complete=False, onhold=False).annotate(action_count=Count('name'))
hold_count = Action.objects.filter(onhold=True, hold_criteria__isnull=False).annotate(action_count=Count('name'))
visible_tags = Tag.objects.filter(visible=True).order_by('name').filter(action__complete=False).annotate(action_count=Count('action'))
hidden_tags = Tag.objects.filter(visible=False).order_by('name').filter(action__complete=False).annotate(action_count=Count('action'))
I'd like to return them to an AJAX function. I need to convert them to JSON, but I don't know how to include multiple querysets in the same JSON string.
I know this thread is old, but using simplejson to convert Django models doesn't work in many cases, such as decimals (as noted by rebus above).
As stated in the Django documentation, the serialization framework looks like the better choice.
Django’s serialization framework provides a mechanism for
“translating” Django models into other formats. Usually these other
formats will be text-based and used for sending Django data over a
wire, but it’s possible for a serializer to handle any format
(text-based or not).
Django Serialization Docs
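
As an illustration (untested, a sketch rather than the exact solution), the four querysets from the question can also be serialized in one response with DjangoJSONEncoder, which handles values such as decimals and datetimes that the plain JSON encoder rejects. The view name dashboard_counts is made up for the example.

import json

from django.core.serializers.json import DjangoJSONEncoder
from django.http import HttpResponse


def dashboard_counts(request):
    # action_count, hold_count, visible_tags and hidden_tags are the
    # querysets defined in the question.
    payload = {
        'action_count': list(action_count.values()),
        'hold_count': list(hold_count.values()),
        'visible_tags': list(visible_tags.values()),
        'hidden_tags': list(hidden_tags.values()),
    }
    return HttpResponse(json.dumps(payload, cls=DjangoJSONEncoder),
                        content_type='application/json')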
You can use Django's simplejson module. This code is untested though!
from django.utils import simplejson

dict = {
    'action_count': list(Action.objects.filter(complete=False, onhold=False).annotate(action_count=Count('name')).values()),
    'hold_count': list(Action.objects.filter(onhold=True, hold_criteria__isnull=False).annotate(action_count=Count('name')).values()),
    ...
}
return HttpResponse(simplejson.dumps(dict))
I'll test and rewrite the code as necessary when I have the time to, but this should get you started.
