graphene schema for query input is list of dictionary / object - graphene-python

My query input is something like:
[{name: "joe", age: 20}, {name: "bob", age: 30}]
Then I use a resolver to return something similar like:
[{name: "joe", age: 20}, {name: "bob", age: 30}, {name: "jane", age: 21}]
Suppose I don't change the dataset, so there is no need to use a mutation.
I am using flask_graphql and graphene
What's the best way to implement this (I mean, how to build the schema)? Thanks!

Not sure of the use case, but GenericScalar might help:
data = graphene.GenericScalar()
Used in a mutation, this won't have any specific schema, though; the input can accept any structure.

GraphQL input parameter

There is a JSON object (an array of different structures) like:
"cells": [
  {
    "x": 3,
    "y": 6
  },
  {
    "type": "shape",
    "direction": "right"
  },
  ....
  {
    "a": 4,
    "b": 5,
    "c": 6
  }
]
I need to save this object using a GraphQL mutation. What's the way to do it?
type A {
  x: Int!
  y: Int!
}
type B {
  type: String!
  direction: String!
}
type C {
  a: Int!
  b: Int!
  c: Int!
}
union Request = A | B | C
input InputR {
  cells: [Request!]!
}
type Mutation {
  save(cells: InputR!): InputR
}
I tried the above, but it doesn't work. Any ideas?
Thanks!
So there are a few answers, depending on what you're looking for.
You can't. GraphQL doesn't have union input types (today). They were originally considered an antipattern, though there are proposals for them making their way through the process as of today.
You could just create a single input type that has all of those fields, all nullable, though that leads you to the problem of "anemic mutations".
Are you sure you want to? Those structures are so incredibly different that it may be you're thinking about the problem from an implementation standpoint, rather than a data-architecture one.
If #1 and #2 aren't good enough and #3 isn't true, the "best way for today" at that point is probably just to use a JSON scalar type. It requires that you pass all of that in as a single JSON-compatible string. Here is an example of one.

GraphQL, extract all values of a specific type into an array

Let's assume I have a schema like:
type StaticData {
  id: Int!
  language: String!
  name: String!
  description: String!
  someA: SpecialType!
  someB: SpecialType!
}
SpecialType is a scalar. SpecialType can be used in further nested structures as well. Let's now assume I query for a list of StaticData. Is it somehow possible to receive a list of all SpecialType values, without me extracting it manually from the returned object?
Example:
My return object looks like this:
[
  {
    id: 1,
    language: 'en',
    name: 'Something',
    description: 'SomeDescription',
    someA: 1234,
    someB: 2345
  },
  {
    id: 2,
    language: 'en',
    name: 'SomethingElse',
    description: 'SomeDescription',
    someA: 4564,
    someB: 1234
  }
]
Since I want all SpecialType values, I do want to extract [1234, 2345, 4564, 1234]. Maybe it is possible to do the extraction on the server using resolvers and receive it with the full result object?
If you want the GraphQL response to be just a list of integers, then you'll need to provide a field of that shape in your schema, e.g. someA: [SpecialType!].
You're on the right track - it's on you to then build and return a list in the resolver that satisfies this structure. Hope this helps!
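If you go the resolver route, the extraction itself is simple. A plain-Python sketch; the helper name and the flat record shape are assumptions based on the example above:

```python
def collect_special_values(records, keys=("someA", "someB")):
    """Gather every SpecialType field from a list of StaticData records,
    in record order, keeping duplicates."""
    return [record[key] for record in records for key in keys if key in record]

data = [
    {"id": 1, "language": "en", "someA": 1234, "someB": 2345},
    {"id": 2, "language": "en", "someA": 4564, "someB": 1234},
]
# collect_special_values(data) -> [1234, 2345, 4564, 1234]
```

A resolver for the list-shaped field would just return this list for the queried StaticData records.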

Merging Rest datasets in Federation with resolvers?

I'm pretty new to GraphQL and Apollo Federation.
I have a question: is it possible to populate one dataset with another, such as:
# in Shop Service
type CarId {
  id: Int
}
type Shop @key(fields: "id") {
  id: ID!
  name: String
  carIds: [CarId]
}
# in Car Service
type Car {
  id: ID!
  name: String
}
extend type Shop @key(fields: "id") {
  id: ID! @external
  cars: [Car]
}
Car Resolver
Query: {...},
Shop: {
  async cars(shop, _, { dataSources }) {
    // Issue: `shop` here is just the reference object, holding only the
    // shop's `id`; I need the `carIds` key as well, to pass to my CarsAPI
    return await dataSources.CarsAPI.getCarsByIds(shop.carIds);
  }
}
From the Shop rest api the response would look like:
[{id: 1, name: "Brians Shop", cars: [1, 2, 3]}, {id: 2, name: "Ada's shop", cars: [4,5,6]}]
From the Car rest api the response would look like:
[{id: 1, name: "Mustang"}, {id: 2, name: "Viper"}, {id: 3, name: "Boaty"}]
So what I want to achieve is to query my GraphQL server for:
Shop(id: 1) {
  id
  name
  cars {
    name
  }
}
And then expect:
{
  id: 1,
  name: "Brian's shop",
  cars: [
    {name: "Mustang"},
    {name: "Viper"},
    {name: "Boaty"}
  ]
}
Is this possible? It's what I had in mind when I chose federation :)
So, if I understand correctly after your comments, what you want is to have the carIds from the Shop service available in your Car service, inside the cars resolver.
You can make use of the @requires directive, which instructs Apollo Server that it needs a field (or several) before it starts executing the cars resolver. That is:
Car Service
extend type Shop @key(fields: "id") {
  id: ID! @external
  carIds: [Int] @external
  cars: [Car] @requires(fields: "carIds")
}
Now, inside the cars resolver, you should be able to access shop.carIds on your first parameter.
See: https://www.apollographql.com/docs/apollo-server/federation/advanced-features/#computed-fields

Updating in MongoDB without first having to pull a Mongoid object?

When performing queries I can go through Mongoid:
product_obj = Product.where(
  _id: "58f876f683c336eec88e9db5"
).first
# => #<Product _id: 58f876f683c336eec88e9db5, created_at: nil, updated_at: nil, sku: "123", name: "Some text", ...)
or I can circumvent it:
product_hsh = Product.collection.find(
  { _id: BSON::ObjectId.from_string("58f876f683c336eec88e9db5") },
  { projection: { _id: 1, name: 1 } }
).first
# => {"_id"=>BSON::ObjectId('58f876f683c336eec88e9db5'), "name"=>"Some text"}
I prefer to circumvent it because it performs better: I can limit which fields come back in the response.
My problem, however, is how to work further with the returned product. The response is a hash, not an object, so if I have to update it I need to pull it through Mongoid anyway, and thereby the performance gains are gone:
Product.find(product_hsh["_id"]).update_attribute(:name, "Some other text")
My question is: how do I update without first having to pull a Mongoid object?
You don't need to pull/fetch at all. You can just send the $set commands directly:
Product.where(id: product_hsh["_id"]).update_all(name: "Some other text")

Can someone advise on an HBase schema click stream data

I would like to create a click-stream application using HBase. In SQL this would be a pretty simple task, but in HBase I don't have the first clue. Can someone advise me on a schema design and row keys to use in HBase?
I have provided a rough data model and several questions that I would like to ask of the data.
Questions I would like to ask of the data:
What events led to a conversion?
What was the last page viewed? How many pages were viewed?
On which pages do customers drop off?
What products does a male customer between 20 and 30 like to buy?
Is a customer who has bought product x also likely to buy product y?
What is the conversion amount from the first page?
{
  PageViews: [
    {
      date: "19700101 00:00",
      domain: "http://foobar.com",
      path: "pageOne.html",
      timeOnPage: "10",
      pageViewNumber: 1,
      events: [
        { name: "slideClicked", value: 0, time: "00:00" },
        { name: "conversion", value: 100, time: "00:05" }
      ],
      pageData: {
        category: "home",
        pageTitle: "Home Page"
      }
    },
    {
      date: "19700101 00:01",
      domain: "http://foobar.com",
      path: "pageTwo.html",
      timeOnPage: "20",
      pageViewNumber: 2,
      events: [
        { name: "addToCart", value: 50.00, time: "00:02" }
      ],
      pageData: {
        category: "product",
        pageTitle: "Mans Shirt",
        itemValue: 50.00
      }
    },
    {
      date: "19700101 00:03",
      domain: "http://foobar.com",
      path: "pageThree.html",
      timeOnPage: "30",
      pageViewNumber: 3,
      events: [],
      pageData: {
        category: "basket",
        pageTitle: "Checkout"
      }
    }
  ],
  Customer: {
    IPAddress: "127.0.0.1",
    Browser: "Chrome",
    FirstName: "John",
    LastName: "Doe",
    Email: "john.doe@email.com",
    isMobile: 1,
    returning: 1,
    age: 25,
    sex: "Male"
  }
}
Well, your data is mainly in a one-to-many relationship: one customer and an array of page-view entities. And since all your queries are customer-centric, it makes sense to store each customer as a row in HBase and to have the customer id (maybe the email in your case) as part of the row key.
If you decide to store one row per customer, each page view's details would be stored nested. The video link regarding HBase design will help you understand that. So for your example above, you get one row and three nested entities.
Another approach would be a denormalized form, so that HBase can perform good lookups. Here each row would be a page view, and the customer data gets appended to every row. So for your example above, you end up with three rows, and the data is duplicated. Again, the video gives info about that too (compression and such).
You have more nesting levels inside each page view: events and pageData. So it will only get worse with respect to denormalization. As everything in HBase is a key-value pair, it is difficult to query and match these nested levels. Hope this helps you kick off.
Good video link here
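For the customer-centric layout, the row key itself does most of the work: combining the customer id with a reversed timestamp makes a customer's page views cluster together with the newest first under HBase's lexicographic ordering. A hypothetical sketch of such a key builder (the `customer_id#timestamp` layout is an assumption, not from the question):

```python
import sys

def row_key(customer_id: str, epoch_millis: int) -> bytes:
    """Build an HBase row key of the form <customer_id>#<reversed timestamp>.

    Reversing the timestamp (max value minus the event time) makes newer
    page views sort *before* older ones within a customer's key range,
    so a scan with a customer-id prefix returns the most recent activity
    first.
    """
    reversed_ts = sys.maxsize - epoch_millis
    # Zero-pad so keys compare numerically under byte-wise ordering.
    return f"{customer_id}#{reversed_ts:020d}".encode("utf-8")

# Same customer, two events: the newer event sorts first.
k_old = row_key("john.doe@email.com", 1_000)
k_new = row_key("john.doe@email.com", 2_000)
```

A prefix scan on "john.doe@email.com#" then answers the "last page viewed" style questions directly, while the demographic questions (age, sex) would still need either full scans or secondary index tables.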
