GraphQL test schema and resolvers - graphql

WHAT I TRIED
I am using Jest to test my resolvers and schema, but I am having trouble coming up with a folder structure. Currently I import the resolver functions, call them, and compare the results or check whether fields are defined, but this does not always cover complex scenarios.
WHAT I AM LOOKING FOR
What are the best practices for testing a GraphQL schema and resolver functions, and which testing tool is recommended or most widely used?

You can also try the easygraphql-tester npm package, which will test your schema, queries, and mutations; its documentation includes an example using Mocha and Chai.
What you need to do is import your schema, pass it to easygraphql-tester, and then you can create your unit tests.
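As a rough illustration only, here is a minimal sketch of that idea with Jest, assuming the type definitions live in a schema.graphql file next to the test and using the package's test(shouldPass, query) API:

import fs from 'fs';
import path from 'path';
import EasyGraphQLTester from 'easygraphql-tester';

// Assumed location of the SDL type definitions
const typeDefs = fs.readFileSync(path.join(__dirname, 'schema.graphql'), 'utf8');
const tester = new EasyGraphQLTester(typeDefs);

describe('schema', () => {
  it('accepts a valid query', () => {
    // test(true, query) asserts that the query is valid against the schema
    tester.test(true, '{ hello }');
  });

  it('rejects an invalid query', () => {
    // test(false, query) asserts that the query fails validation
    tester.test(false, '{ fieldThatDoesNotExist }');
  });
});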

There are multiple frameworks out there for integration-testing your API, for example by specifying requests and responses in YAML files. A simpler approach is to use Jest snapshots and simply execute test queries using the graphql function from graphql-js. It returns a promise with the result, which you can await and expect to match the snapshot.
import { graphql } from 'graphql';
import schema from './schema';
import createContext from './createContext';
import database from './database'; // assumed test-database helper providing setUp/tearDown

describe('GraphQL Schema', () => {
  let context;

  beforeAll(() => {
    context = createContext();
    database.setUp();
  });

  afterAll(() => {
    database.tearDown();
  });

  it('should resolve simple query', async () => {
    const query = '{ hello }';
    // graphql(schema, source, rootValue, contextValue) resolves with { data, errors }
    const result = await graphql(schema, query, null, context);
    expect(result).toMatchSnapshot();
  });
});
Tip: You can also create dynamic tests, for example by reading queries from files in a directory and iterating over them, creating a new test for each file; a rough sketch follows. An example of that (not GraphQL though) can be found on my GitHub.
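For illustration (not the author's GitHub example), a minimal sketch of that pattern with Jest, assuming the queries are stored as .graphql files in a ./queries directory next to the test:

import fs from 'fs';
import path from 'path';
import { graphql } from 'graphql';
import schema from './schema';

// Assumed location of the query files
const queriesDir = path.join(__dirname, 'queries');

describe('query snapshots', () => {
  const files = fs.readdirSync(queriesDir).filter((name) => name.endsWith('.graphql'));

  files.forEach((name) => {
    it(`resolves ${name}`, async () => {
      const source = fs.readFileSync(path.join(queriesDir, name), 'utf8');
      // Same positional call as above: graphql(schema, source)
      const result = await graphql(schema, source);
      expect(result).toMatchSnapshot();
    });
  });
});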

There is no single recommended way to do it, especially for the documents and folder structure.
In my case, I am working on this repo, and this is my folder structure at the first level:
src/
├── App.js
├── configs
├── helpers
├── index.js
├── middlewares
├── models
├── resolvers
├── routes
├── schema
├── seeds
├── templates
├── tests
└── utils
In the root I have the tests folder, mainly to check the basic App behavior and some utility functions. On the other hand, inside resolvers I have the main tests for the GraphQL queries and mutations.
src/resolvers/
├── camping
│   ├── camping.mutations.js
│   ├── camping.query.js
│   ├── camping.query.test.js
│   └── camping.resolver.js
├── clientes.resolver.js
├── guest
│   ├── guest.mutation.js
│   ├── guest.mutation.test.js
│   ├── guest.query.js
│   ├── guest.query.test.js
│   └── guest.resolver.js
├── index.js
├── turno
│   ├── turno.mutations.js
│   ├── turno.query.js
│   ├── turno.query.test.js
│   └── turno.resolver.js
└── users
    ├── user.mutations.js
    ├── user.mutations.test.js
    ├── user.queries.js
    ├── user.query.test.js
    └── user.resolver.js
Every single resolver has its own test; you can check there whether the basic endpoints are working as expected.
I am planning to add some workflow tests; they will go in the root tests folder later.

Related

AWS CDK CLI: Passing environment variables when running cdk deploy command

I have written some infrastructure code with AWS CDK (Go). My code is structured like so:
.
├── api
│   └── aj
│       ├── lambda
│       │   └── main.go
│       ├── email.go
│       ├── emailService.go
│       ├── handler.go
│       └── handler_test.go
├── cdk
│   ├── cdk.go
│   ├── cdk.json
│   ├── cdk_test.go
│   └── README.md
├── 00-test-image.jpg
├── go.mod
└── go.sum
5 directories, 12 files
My CDK code simply creates an AWS Lambda HTTP Endpoint that will handle a form submission and send an email.
I am using the algnhsa Go adapter package to make things easier to deploy.
At the top of my emailService.go I have the following:
var (
	host       = os.Getenv("EMAIL_HOST")
	username   = os.Getenv("EMAIL_USERNAME")
	password   = os.Getenv("EMAIL_PASSWORD")
	portNumber = os.Getenv("EMAIL_PORT")
)
My question is, when I run cdk deploy, how do I set those environment variables so that they're available and set within the code?
You configure the Lambda Function's environment variables with its Environment prop as a *map[string]*string.
You'd typically pass the values down as stack props from its parent stack to the Lambda construct. Or simply hardcode the values in the Function construct code. The CDK best practice is to have such input values fixed at synth-time, so that deploys are deterministic and under source control.
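The question uses Go, but the idea is the same in any CDK language. A hedged TypeScript sketch of the same pattern (the props, construct names, and asset path below are illustrative, not from the original code):

import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

// Hypothetical props carrying the email settings down from the parent stack
interface EmailApiStackProps extends StackProps {
  emailHost: string;
  emailUsername: string;
  emailPassword: string;
  emailPort: string;
}

export class EmailApiStack extends Stack {
  constructor(scope: Construct, id: string, props: EmailApiStackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'FormHandler', {
      runtime: lambda.Runtime.PROVIDED_AL2,
      handler: 'bootstrap',
      code: lambda.Code.fromAsset('path/to/built/lambda'), // assumed path to the built binary
      // These become os.Getenv("EMAIL_HOST") etc. inside the function at runtime
      environment: {
        EMAIL_HOST: props.emailHost,
        EMAIL_USERNAME: props.emailUsername,
        EMAIL_PASSWORD: props.emailPassword,
        EMAIL_PORT: props.emailPort,
      },
    });
  }
}

The point, as the answer says, is that these values are resolved at synth time and end up in the function's environment configuration.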

How can I enable deployment of individual Lambda functions using Terraform?

I'm working on automating deployment of Lambda functions using Terraform. The goal is to be able to deploy either a single Lambda function or all of the functions in the repo. I'm able to deploy a number of functions from a structure that looks like:
├── README.md
├── js
│   ├── README.md
│   ├── jsFunction1/
│   │   └── main.tf
│   └── jsFunction2/
│       └── main.tf
├── py
│   ├── README.md
│   ├── pyFunction1/
│   │   └── main.tf
│   └── pyFunction2/
│       └── main.tf
└── terraform
    ├── README.md
    ├── common/
    ├── global/
    ├── main.tf
    ├── modules/
    ├── prod/
    └── stage/
The goal is to be able to deploy js/jsFunction1 independently (without building packages for any other functions) while also having terraform/main.tf able to deploy all the lambda functions in the repository (after they've been built). This is so a developer can update the stage deployment with changes to an individual function without concern that they might have an improper version of a function that the developer isn't working on.
What I was hoping to do is create a back-end for each Lambda function so that the developer can use terraform apply from within the source folder. I don't see how to import the state of the Lambda functions that were deployed from the terraform/ module. Can you tell me if importing the state of the resources is a reasonable approach or recommend a better way to achieve the ability to deploy one of many Lambda functions?
Here is the main.tf from js/jsFunction1
module "jsFunction1" {
  source               = "../../terraform/modules/lambda"
  source_path          = "${path.module}/dist"
  lambda_function_name = "jsFunction1"
  lambda_handler       = "lambdaAdapter.handler"
}
There is a similar main.tf in each of the folders under js and py. This is main.tf from the terraform folder
module "jsFunction1" {
  source = "./../js/jsFunction1"
}

module "jsFunction2" {
  source = "./../js/jsFunction2"
}

module "pyFunction1" {
  source = "./../py/pyFunction1"
}

module "pyFunction2" {
  source = "./../py/pyFunction2"
}
You can make use of a Lambda package module to zip separate packages for specific Lambdas and deploy them.
The package module can build a package only when something has changed in the respective folder.
So you can have a single main.tf and reference the output values from the package module.

Correct directory structure for Puppet RSpec testing

I'm having some issues creating unit tests for my Puppet control repository.
I mostly work with roles and profiles with the following directory structure:
[root@puppet]# tree site
site
├── profile
│   ├── files
│   │   └── demo-website
│   │       └── index.html
│   └── manifests
│       ├── base.pp
│       ├── ci_runner.pp
│       ├── docker.pp
│       ├── gitlab.pp
│       ├── logrotate.pp
│       └── website.pp
├── role
│   └── manifests
│       ├── gitlab_server.pp
│       └── nginx_webserver.pp
Where do I need to place my spec files and what are the correct filenames?
I tried placing them here:
[root@puppet]# cat spec/classes/profile_ci_runner_spec.rb
require 'spec_helper'
describe 'profile::ci_runner' do
...
But I get an error:
Could not find class ::profile::ci_runner
The conventional place for a module's spec tests is in the module, with the spec/ directory in the module root. So site/profile/spec/classes/ci_runner_spec.rb, for example.
You could consider installing PDK, which can help you set up the structure and run tests, among other things.

Altering snakemake workflow to anticipate and accommodate different data-structures

I have an existing Snakemake RNAseq workflow that works fine with a directory tree as below. I need to alter the workflow so that it can accommodate another layer of directories. Currently, I use a Python script that os.walks the parent directory and creates a JSON file for the sample wildcards (the JSON file for the sample wildcards is also included below). I am not very familiar with Python, and it seems to me that adapting the code for an extra layer of directories shouldn't be too difficult, so I was hoping someone would be kind enough to point me in the right direction.
RNAseqTutorial/
├── Sample_70160
│ ├── 70160_ATTACTCG-TATAGCCT_S1_L001_R1_001.fastq.gz
│ └── 70160_ATTACTCG-TATAGCCT_S1_L001_R2_001.fastq.gz
├── Sample_70161
│ ├── 70161_TCCGGAGA-ATAGAGGC_S2_L001_R1_001.fastq.gz
│ └── 70161_TCCGGAGA-ATAGAGGC_S2_L001_R2_001.fastq.gz
├── Sample_70162
│ ├── 70162_CGCTCATT-ATAGAGGC_S3_L001_R1_001.fastq.gz
│ └── 70162_CGCTCATT-ATAGAGGC_S3_L001_R2_001.fastq.gz
├── Sample_70166
│ ├── 70166_CTGAAGCT-ATAGAGGC_S7_L001_R1_001.fastq.gz
│ └── 70166_CTGAAGCT-ATAGAGGC_S7_L001_R2_001.fastq.gz
├── scripts
├── groups.txt
└── Snakefile
{
    "Sample_70162": {
        "R1": [
            "/gpfs/accounts/SlurmMiKTMC/Sample_70162/Sample_70162.R1.fq.gz"
        ],
        "R2": [
            "/gpfs/accounts/SlurmMiKTMC/Sample_70162/Sample_70162.R2.fq.gz"
        ]
    }
}
The structure I need to accommodate is below
RNAseqTutorial/
├── part1
│   ├── 030-150-G
│   │   ├── 030-150-GR1_clipped.fastq.gz
│   │   └── 030-150-GR2_clipped.fastq.gz
│   ├── 030-151-G
│   │   ├── 030-151-GR1_clipped.fastq.gz
│   │   └── 030-151-GR2_clipped.fastq.gz
│   ├── 100T
│   │   ├── 100TR1_clipped.fastq.gz
│   │   └── 100TR2_clipped.fastq.gz
├── part2
│   ├── 030-025G
│   │   ├── 030-025GR1_clipped.fastq.gz
│   │   └── 030-025GR2_clipped.fastq.gz
│   ├── 030-131G
│   │   ├── 030-131GR1_clipped.fastq.gz
│   │   └── 030-131GR2_clipped.fastq.gz
│   ├── 030-138G
│   │   ├── 030-138R1_clipped.fastq.gz
│   │   └── 030-138R2_clipped.fastq.gz
├── part3
│   ├── 030-103G
│   │   ├── 030-103GR1_clipped.fastq.gz
│   │   └── 030-103GR2_clipped.fastq.gz
│   ├── 114T
│   │   ├── 114TR1_clipped.fastq.gz
│   │   └── 114TR2_clipped.fastq.gz
├── scripts
├── groups.txt
└── Snakefile
The main script that generates the json file for the sample wildcards is below
for root, dirs, files in os.walk(args):
    for file in files:
        if file.endswith("fq.gz"):
            full_path = join(root, file)
            # R1 will be forward reads, R2 will be reverse reads
            m = re.search(r"(.+).(R[12]).fq.gz", file)
            if m:
                sample = m.group(1)
                reads = m.group(2)
                FILES[sample][reads].append(full_path)
I just can't seem to wrap my head around a way to accommodate that extra layer. Is there another module or function other than os.walk? Could I somehow force os.walk to skip a directory and merge the part and sample prefixes? Any suggestions would be helpful!
Edited to add:
I wasn't clear in describing my problem and noticed that the second example wasn't representative of it, so I fixed the examples accordingly (the second tree was taken from a directory processed by someone else). The data I get comes in two forms. The first is samples of only one tissue, where the directory consists of the working directory, sample folders, and fastq files, and the fastq files have the same prefix as the sample folders they reside in. The second example is of samples from two tissues. These tissues must be processed separately from each other, but tissues of both types can be found in separate "parts", and tissues of the same type from different "parts" must be processed together. If I could get os.walk to return four tuples, or even use
root, dirs, *files = os.walk('Somedirectory')
where the * would append the rest of the directory string to the files variable. Unfortunately, this method does not go down to the file level for the third child directory ('root/part/sample/fastq'). In an ideal world, the same Snakemake pipeline would be able to handle both scenarios with minimal input from the user. I understand that this may not be possible, but I figured I'd ask and see whether there is a module that could return all portions of each sample directory string.
It seems to me that your problem doesn't have much to do with how to accommodate the second layer. Instead, the question is about the specification of the directory trees and file names you expect.
In the first case, it seems you can extract the sample name from the first part of the file name. In the second case, file names are all the same and the sample name comes from the parent directory. So, either you implement some logic that tells which naming scheme you are parsing (and this depends on who/what provides the files), or you always extract the sample name from the parent directory, since this should also work for the first case (but again, assuming you can rely on such a naming scheme).
If you want to go for the second option, something like this should do:
import os

FILES = {}
for root, dirs, files in os.walk('RNAseqTutorial'):
    for file in files:
        if file.endswith("fastq.gz"):
            # the sample name is the name of the directory containing the file
            sample = os.path.basename(root)
            full_path = os.path.join(root, file)
            if sample not in FILES:
                FILES[sample] = {}
            if 'R1' in file:
                reads = 'R1'
            elif 'R2' in file:
                reads = 'R2'
            else:
                raise Exception('Unexpected file name')
            if reads not in FILES[sample]:
                FILES[sample][reads] = []
            FILES[sample][reads].append(full_path)
Not sure if I understand correctly, but here you go:
for root, dirs, files in os.walk(args):
    for file in files:
        if file.endswith("fq.gz"):
            full_path = join(root, file)
            reads = 'R1' if 'R1' in file else 'R2'
            sample = root.split('/')[-1]
            FILES[sample][reads].append(full_path)

Mimicking Multiproject Execution In Composite Included Build with Gradle Kotlin DSL?

I am simplifying the setup I have to illustrate my issue, but have included structural complexities.
Using Gradle's Kotlin DSL I have a composite build where the root project is empty and the two included builds are both side-by-side multiproject builds with varying structures that make use of "container" projects (aka, empty directories with no build.gradle.kts files) for organization purposes.
.
├── app
│   ├── common
│   │   └── build.gradle.kts
│   ├── js
│   │   └── build.gradle.kts
│   ├── jvm
│   │   └── build.gradle.kts
│   ├── build.gradle.kts
│   └── settings.gradle.kts
├── library
│   ├── core
│   │   ├── common
│   │   │   └── build.gradle.kts
│   │   ├── js
│   │   │   └── build.gradle.kts
│   │   └── jvm
│   │       └── build.gradle.kts
│   ├── other-component
│   │   ├── common
│   │   │   └── build.gradle.kts
│   │   ├── js
│   │   │   └── build.gradle.kts
│   │   └── jvm
│   │       └── build.gradle.kts
│   ├── util
│   │   ├── util1
│   │   │   ├── common
│   │   │   │   └── build.gradle.kts
│   │   │   ├── js
│   │   │   │   └── build.gradle.kts
│   │   │   └── jvm
│   │   │       └── build.gradle.kts
│   │   └── util2
│   │       ├── common
│   │       │   └── build.gradle.kts
│   │       ├── js
│   │       │   └── build.gradle.kts
│   │       └── jvm
│   │           └── build.gradle.kts
│   ├── build.gradle.kts
│   └── settings.gradle.kts
├── build.gradle.kts
└── settings.gradle.kts
My desire is to be able to run build on the root composite project within the IDE (IntelliJ) and have it mimic the behavior of a multi-project execution, where everything underneath that project executes the task in turn.
In Groovy, one can just use the spread operator on includedBuilds*.tasks* in the composite project to wire it up, but in the Kotlin DSL we only have access to task, which is a single TaskReference, and there is no way to get a collection of tasks (TaskCollection or a Collection of Tasks) or a collection of TaskReferences.
So in the rootProject of the composite build.gradle.kts, I have:
tasks {
    val clean by getting {
        gradle.includedBuilds.forEach { this.dependsOn(it.task(":cleanAll")) }
    }
    val build by getting {
        gradle.includedBuilds.forEach { this.dependsOn(it.task(":buildAll")) }
    }
}
Then in one of the included builds build.gradle.kts files, I have tried wiring them two different ways (well many but these are the two approaches):
// Variation 1
tasks {
    val buildAll: GradleBuild by creating {
        this.dependsOn(tasks.getByPath(":build"))
    }
    val cleanAll: Delete by creating {
        this.dependsOn(tasks.getByPath(":clean"))
    }
}

// Variation 2
tasks {
    val buildAll: GradleBuild by creating {
        subprojects.forEach {
            this.dependsOn(it.tasks.getByPath(":build"))
        }
    }
    val cleanAll: Delete by creating {
        subprojects.forEach {
            this.dependsOn(it.tasks.getByPath(":clean"))
        }
    }
}

// Variation 2.b
tasks {
    val buildAll: GradleBuild by creating {
        this.dependsOn(subprojects.mapNotNull { it.tasks.getByPath(":build") })
    }
    val cleanAll: Delete by creating {
        this.dependsOn(subprojects.mapNotNull { it.tasks.getByPath(":clean") })
    }
}

// I also tried different ways to get the tasks, such as it.tasks["root:library:build"], it.tasks[":library:build"], and it.tasks["library:build"], since I know that the included builds are executed in an isolated fashion. None of these worked.
// When I used absolute paths, Gradle reported that these tasks didn't exist (I assume because they are lifecycle tasks).
Basically, trying the variations above only ever built and cleaned the rootProjects of the included builds and never the subprojects. Is this a bug?
I do not want to have to resort to needing knowledge of the underlying structure of the included builds to wire this up. That would be unsustainable. What am I doing wrong?
I'm using the following code to achieve this. First, create a settings.gradle.kts in the root that programmatically searches for builds to include:
rootDir.walk().filter {
    it != rootDir && !it.name.startsWith(".") && it.resolve("build.gradle.kts").isFile
}.forEach(::includeBuild)
Then create a build.gradle.kts file in the root that "forwards" all root task invocations of the form all<TASK> to <INCLUDED_BUILD>:<TASK>:
tasks.addRule("Pattern: all<TASK>") {
    val taskName = this
    val taskPrefix = "all"
    if (startsWith(taskPrefix)) {
        task(taskName) {
            gradle.includedBuilds.forEach { build ->
                val buildTaskName = taskName.removePrefix(taskPrefix).decapitalize()
                dependsOn(build.task(":$buildTaskName"))
            }
        }
    }
}
This way, running ./gradlew allAssemble on the root project will effectively execute the assemble task on all included builds.
Okay, not sure what was going on and why those other methods didn't work, but I found a method that works and does not require me to manufacture synthetic tasks that depend on the lifecycle tasks.
The composite root project's build.gradle.kts tasks remain the same as stated in the original question:
tasks {
    val clean by getting {
        gradle.includedBuilds.forEach { this.dependsOn(it.task(":cleanAll")) }
    }
    val build by getting {
        gradle.includedBuilds.forEach { this.dependsOn(it.task(":buildAll")) }
    }
}
The task declarations in the included builds' root build.gradle.kts files need to collect and depend on tasks in the following way:
tasks {
    val buildAll: GradleBuild by creating {
        dependsOn(getTasksByName("build", true))
    }
    val cleanAll: Delete by creating {
        dependsOn(getTasksByName("clean", true))
    }
}
This will recursively gather the tasks. Although my previous technique of iterating over all of the subprojects should also have done the trick, since subprojects contains all subprojects, for some reason it wasn't working. This does, though! Hopefully this helps other people.
