I have this JSON header/detail structure that I send to Hasura GraphQL.
I would like to use variables as objects and arrays of objects to organize the code.
mutation insertData(
  $presidente: bigint,
  $vocal1: bigint,
  $vocal2: bigint,
  $ano: Int,
  $libro: Int,
  $folio: Int,
  $ffolio: Int,
  $fecha: date,
  $c_alumno_p_f: Int,
  $institucion: bigint,
  $id_carrera: bigint,
  $id_mesa: bigint,
  $id_materia: bigint,
  $detalle: [detalle_acta_regulares_insert_input!]!
) {
  insert_actas_regulares(
    objects: [
      {
        presidente: $presidente,
        vocal1: $vocal1,
        vocal2: $vocal2,
        ano: $ano,
        libro: $libro,
        folio: $folio,
        ffolio: $ffolio,
        fecha: $fecha,
        c_alumno_p_f: $c_alumno_p_f,
        institucion: $institucion,
        id_carrera: $id_carrera,
        id_mesa: $id_mesa,
        id_materia: $id_materia,
        detalle_acta_regulares: {
          data: $detalle
        }
      }
    ]
  ) {
    affected_rows
  }
}
The variables I use are these:
{
  "presidente": 107,
  "vocal1": 196,
  "vocal2": 208,
  "ano": 2022,
  "libro": 2,
  "folio": 1,
  "ffolio": 2,
  "fecha": "2022-11-07",
  "c_alumno_p_f": 3,
  "institucion": 17,
  "id_carrera": 5,
  "id_mesa": 40863,
  "id_materia": 11347,
  "detalle": [
    {
      "id_alumno": 2186,
      "escrito": 4,
      "oral": 0,
      "definitivo": 4
    },
    {
      "id_alumno": 9869,
      "escrito": 8,
      "oral": 0,
      "definitivo": 8
    }
  ]
}
How should I structure the query to send the header as an object too?
I read the documentation, but I don't understand how to build the structures.
It is usually much easier to simplify your mutation and use a single variable.
There may be typos, but you get the idea:
mutation insertData($objects: [actas_regulares_insert_input!]!) {
  insert_actas_regulares(objects: $objects) {
    affected_rows
  }
}
And this is your variable payload:
{
  "objects": [{
    "presidente": 107,
    "vocal1": 196,
    "vocal2": 208,
    "ano": 2022,
    "libro": 2,
    "folio": 1,
    "ffolio": 2,
    "fecha": "2022-11-07",
    "c_alumno_p_f": 3,
    "institucion": 17,
    "id_carrera": 5,
    "id_mesa": 40863,
    "id_materia": 11347,
    "detalle_acta_regulares": {
      "data": [
        {
          "id_alumno": 2186,
          "escrito": 4,
          "oral": 0,
          "definitivo": 4
        },
        {
          "id_alumno": 9869,
          "escrito": 8,
          "oral": 0,
          "definitivo": 8
        }
      ]
    }
  }]
}
I hope that helps!
I have a question regarding GraphQL, because I do not know whether what I want is possible.
I have a simple schema like this:
enum Range {
  D,
  D_1,
  D_7
}

type Data {
  id: Int!
  levels(range: [Range!]): [LevelEntry]
}

type LevelEntry {
  range: Range!
  levelData: LevelData
}

type LevelData {
  range: Range!
  users: Int
  name: String
  stairs: Int
  money: Float
}
Basically, I want to write a query that retrieves different attributes for the different entries in the levelData property of the levels array, filtered by some range of levels.
For instance:
data {
  "id": 1,
  "levels": [
    {
      "range": D,
      "levelData": {
        "range": D,
        "users": 1
      }
    },
    {
      "range": D_1,
      "levelData": {
        "range": D_1,
        "users": 1,
        "name": "somename"
      }
    }
  ]
}
This means that for D I want the range and users properties, and for D_1 the range, users, and name properties.
I have drafted an example query, but I do not know if this is possible:
query data(range: [D, D_1]) {
  id,
  levels {
    range
    ... on D {
      range,
      users
    }
    ... on D_1 {
      range,
      users,
      name
    }
  }
}
Is this possible? If so, how can I do it?
{
  "rules": [
    {
      "rank": 1,
      "grades": [
        {
          "id": 100,
          "hierarchyCode": 32
        },
        {
          "id": 200,
          "hierarchyCode": 33
        }
      ]
    },
    {
      "rank": 2,
      "grades": []
    }
  ]
}
I have JSON like the above, and I'm using streams to return the "hierarchyCode" based on some condition. For example, if I pass "200", my result should print 33. So far I have something like this:
request.getRules().stream()
    .flatMap(ruleDTO -> ruleDTO.getGrades().stream())
    .map(gradeDTO -> gradeDTO.getHierarchyCode())
    .forEach(hierarchyCode -> {
        // I'm doing some business logic here
        Optional<SomePojo> dsf = someList.stream()
            .filter(pojo -> hierarchyCode.equals(pojo.getId())) // let's say pojo.getId() returns 200
            .findFirst();
        System.out.println(dsf.get().getCode());
    });
In the first iteration it returns the expected 33, but in the second iteration it fails with a NullPointerException instead of just skipping, since the "grades" array is empty that time. How do I handle the null pointer exception here?
You can use the following Java 8 snippet:
int result;
int valueToFilter = 200;
List<Grade> gradeList = data.getRules().stream()
        .map(Rule::getGrades)
        .filter(x -> x != null && !x.isEmpty())
        .flatMap(Collection::stream)
        .collect(Collectors.toList());
Optional<Grade> optional = gradeList.stream()
        .filter(x -> x.getId() == valueToFilter)
        .findFirst();
if (optional.isPresent()) {
    result = optional.get().getHierarchyCode();
    System.out.println(result);
}
I created POJOs to match this code; you can try the approach with your own code structure.
If you need the POJOs for this code, I can share them as well.
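In case the POJOs help, here is a self-contained sketch (the Rule and Grade classes are hypothetical minimal versions, named to match the getters used in the snippets above). It keeps the lookup inside a single Optional-returning pipeline, so empty grades arrays are skipped and no exception is thrown when there is no match:

```java
import java.util.Collection;
import java.util.List;
import java.util.Optional;

// Hypothetical minimal POJOs matching the getter names used above.
class Grade {
    private final int id;
    private final int hierarchyCode;
    Grade(int id, int hierarchyCode) { this.id = id; this.hierarchyCode = hierarchyCode; }
    int getId() { return id; }
    int getHierarchyCode() { return hierarchyCode; }
}

class Rule {
    private final int rank;
    private final List<Grade> grades;
    Rule(int rank, List<Grade> grades) { this.rank = rank; this.grades = grades; }
    List<Grade> getGrades() { return grades; }
}

public class HierarchyCodeLookup {
    // Returns the hierarchyCode for the grade with the given id, if any.
    static Optional<Integer> findHierarchyCode(List<Rule> rules, int gradeId) {
        return rules.stream()
                .map(Rule::getGrades)
                .filter(g -> g != null && !g.isEmpty()) // skip null/empty grades arrays
                .flatMap(Collection::stream)
                .filter(grade -> grade.getId() == gradeId)
                .map(Grade::getHierarchyCode)
                .findFirst();
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule(1, List.of(new Grade(100, 32), new Grade(200, 33))),
                new Rule(2, List.of())); // empty grades, safely skipped

        findHierarchyCode(rules, 200).ifPresent(System.out::println); // prints 33
        System.out.println(findHierarchyCode(rules, 999).isPresent()); // false: no match, no exception
    }
}
```

Because the result stays an Optional until the end, the caller decides what "not found" means instead of hitting `dsf.get()` on an empty Optional.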
I want to keep all of the types in a single collection named "BenchmarkDatasets". Do I need to declare the subtypes (LatData, AggregateData, MetaData) differently, or do I just need to accept that I'll have a collection for every type?
Any help is greatly appreciated.
Here's the Schema I generated:
type LatData {
  LatResults: [[Int]]
  LatResultSize: [Int]
}

type AggregateData {
  EVRCounter: Int
  EVRLatencyTotal: Int
  EVRLatencyAverage: Float
  LatTestCount: Int
  LatencyTotal: Int
  LatencyAverage: Float
}

type MetaData {
  StartTimeUTC: String
  EndTimeUTC: String
  StartTimeLocal: String
  EndTimeLocal: String
}

type BenchmarkDataset {
  LatData: LatData
  AggregateData: AggregateData
  MetaData: MetaData
}

type Query {
  allBenchmarkDatasets: [BenchmarkDataset!]
}
And here's the data I want to shove into "BenchmarkDatasets":
{
  "MetaData": {
    "StartTimeUTC": "Sun Oct 18 21:41:38 2020\n",
    "EndTimeUTC": "Sun Oct 18 21:45:38 2020\n",
    "StartTimeLocal": "Sun Oct 18 16:41:38 2020\n",
    "EndTimeLocal": "Sun Oct 18 16:45:38 2020\n"
  },
  "AggregateData": {
    "EVRCounter": 3,
    "EVRLatencyTotal": 70,
    "EVRLatencyAverage": 23.333333333333332,
    "LatTestCount": 159,
    "LatencyTotal": 11871,
    "LatencyAverage": 74.660377358490564
  },
  "LatData": {
    "LatResultSize": [4, 4, 4],
    "LatResults": [
      [0, 2, "zoom", "latencymonitor"],
      [1, 1, "zoom", "latencymonitor"],
      [2, 1, "zoom", "latencymonitor"],
      [3, 1, "dota2", "dota2"]
    ]
  }
}
Also, I know that my data is not well formatted (specifically the 2D "LatResults" array inside "LatData", which mixes two ints and two strings), and any data format tips are also appreciated.
Figured out the issue! I specifically needed to use the @embedded directive to make my schema look something like this:
type LatData @embedded {
  LatResults: [[Int]]
  LatResultSize: [Int]
}

type AggregateData @embedded {
  EVRCounter: Int
  EVRLatencyTotal: Int
  EVRLatencyAverage: Float
  LatTestCount: Int
  LatencyTotal: Int
  LatencyAverage: Float
}

type MetaData @embedded {
  StartTimeUTC: String
  EndTimeUTC: String
  StartTimeLocal: String
  EndTimeLocal: String
}

type BenchmarkDataset {
  LatData: LatData
  AggregateData: AggregateData
  MetaData: MetaData
}

type Query {
  allBenchmarkDatasets: [BenchmarkDataset!]
}
Here's the way I pass variables now:
mutation Test($a: Int, $b: Int, $c: Int, $d: Int) {
  test(a: $a, b: $b, c: $c, d: $d) {
    id
  }
}

// Example
{
  a: 1,
  b: 2,
  c: 3,
  d: 4
}

I want to put these variables into an input:

// Example
{
  first: {
    a: 1,
    b: 2
  },
  second: {
    c: 3,
    d: 4
  }
}

// GraphQL
input TestInput {
  first: First
  second: Second
}

input First {
  a: Int
  b: Int
}

input Second {
  c: Int
  d: Int
}

// The code below doesn't work
mutation Test($input: TestInput) {
  test(a: $input.first.a, b: $input.first.b, c: $input.second.c, d: $input.second.d) {
    id
  }
}
This approach will save me from creating mapping functions and VM/Models in my code.
Is it even possible?
GraphQL does not currently support accessing individual fields on variable values that are objects. To make this work, you need to change your test field to accept a single argument of the type TestInput:
type Mutation {
  test(input: TestInput): SomeType
}

then you can do:

mutation Test($input: TestInput) {
  test(input: $input) {
    id
  }
}
Given the format at the end of the question, what's the best way to get the top-level name for a given item?
Top-level names are the ones with parentId = 1.
def getTopLevel(name: String): String = {
// Environment(150) -> Environment(150) - since its parentId is 1
// Assassination -> Security - since Assassination(12) -> Terrorism(10) -> Security(2)
}
Here's my current approach but is there something better?
unmapped = categories.size
Loop through the list while there are still unmapped items:
- build a Map(Int, String) for top levels.
- build a Map(Int, Int) that maps an id to its top-level id.
- keep track of unmapped items.
Once the loop exits, I can use both maps to get the job done.
[
{
"name": "Destination Overview",
"id": 1,
"parentId": null
},
{
"name": "Environment",
"id": 150,
"parentId": 1
},
{
"name": "Security",
"id": 2,
"parentId": 1
},
{
"name": "Armed Conflict",
"id": 10223,
"parentId": 2
},
{
"name": "Civil Unrest",
"id": 21,
"parentId": 2
},
{
"name": "Terrorism",
"id": 10,
"parentId": 2
},
{
"name": "Assassination",
"id": 12,
"parentId": 10
}
]
This is actually two questions:
1. Parsing JSON into a Scala collection, and
2. Using that collection to trace items back to the top parent.
For the first question, you can use play-json. The second part can be handled with a tail-recursive function. Here is the full program that solves both problems:
import play.api.libs.json.{Json, Reads}

case class Node(name: String, id: Int, parentId: Option[Int])

object JsonParentFinder {

  def main(args: Array[String]): Unit = {
    val s =
      """
        |[
        |  {
        |    "name": "Destination Overview",
        |    "id": 1,
        |    "parentId": null
        |  },
        |  {
        |    "name": "Environment",
        |    "id": 150,
        |    "parentId": 1
        |  },
        // rest of the json
        |]
        |""".stripMargin

    implicit val NodeReads: Reads[Node] = Json.reads[Node]
    val r = Json.parse(s).as[Seq[Node]]
      .map(x => x.id -> x).toMap

    println(getTopLevelNode(150, r))
    println(getTopLevelNode(12, r))
  }

  def getTopLevelNode(itemId: Int, nodes: Map[Int, Node], path: List[Node] = List.empty[Node]): List[Node] = {
    if (nodes(itemId).id == 1)
      nodes(itemId) +: path
    else
      getTopLevelNode(nodes(nodes(itemId).parentId.get).id, nodes, nodes(itemId) +: path)
  }
}
Output will be:
List(Node(Destination Overview,1,None), Node(Environment,150,Some(1)))
List(Node(Destination Overview,1,None), Node(Security,2,Some(1)), Node(Terrorism,10,Some(2)), Node(Assassination,12,Some(10)))
A few notes:
I have not implemented comprehensive error-handling logic. The implicit assumption is that the only item with parentId == None is the root node; otherwise nodes(itemId).parentId.get can fail.
Also, in creating the map, the assumption is that all items have unique ids.
Another assumption is that every node eventually has a path to the root node. If that is not the case, this will fail, but it should be straightforward to handle such cases by adding more stop conditions.
I am prepending items to the accumulator list (named path here) because the prepend operation on Scala's List takes constant time. You can reverse the resulting list at the end, or use another data structure such as Vector, to build the path efficiently.