I have written some infrastructure code with AWS CDK (Go). My code is structured like so:
.
├── api
│   └── aj
│       ├── lambda
│       │   └── main.go
│       ├── email.go
│       ├── emailService.go
│       ├── handler.go
│       └── handler_test.go
├── cdk
│   ├── cdk.go
│   ├── cdk.json
│   ├── cdk_test.go
│   └── README.md
├── 00-test-image.jpg
├── go.mod
└── go.sum

5 directories, 12 files
My CDK code simply creates an AWS Lambda HTTP Endpoint that will handle a form submission and send an email.
I am using the algnhsa Go adapter package to make things easier to deploy.
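For context, the Lambda entrypoint in api/aj/lambda/main.go looks roughly like this (a sketch, not my exact code; the route and inline handler are placeholders):

package main

import (
	"net/http"

	"github.com/akrylysov/algnhsa"
)

func main() {
	mux := http.NewServeMux()
	// Placeholder route; the real form handler lives in the aj package.
	mux.HandleFunc("/submit", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// algnhsa adapts a standard http.Handler to the Lambda runtime.
	algnhsa.ListenAndServe(mux, nil)
}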
At the top of my emailService.go I have the following:
var (
	host       = os.Getenv("EMAIL_HOST")
	username   = os.Getenv("EMAIL_USERNAME")
	password   = os.Getenv("EMAIL_PASSWORD")
	portNumber = os.Getenv("EMAIL_PORT")
)
My question is, when I run cdk deploy, how do I set those environment variables so that they're available and set within the code?
You configure the Lambda Function's environment variables with its Environment prop as a *map[string]*string.
You'd typically pass the values down as stack props from the parent stack to the Lambda construct, or simply hardcode the values in the Function construct code. The CDK best practice is to have such input values fixed at synth time, so that deploys are deterministic and under source control.
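For example, a minimal sketch (CDK v2 for Go; the construct IDs, asset path, runtime, and values are placeholders, not your actual project):

package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)

func NewApiStack(scope constructs.Construct, id string) awscdk.Stack {
	stack := awscdk.NewStack(scope, jsii.String(id), nil)

	awslambda.NewFunction(stack, jsii.String("FormHandler"), &awslambda.FunctionProps{
		Runtime: awslambda.Runtime_GO_1_X(),
		Handler: jsii.String("main"),
		// Placeholder path to the built Lambda asset.
		Code: awslambda.Code_FromAsset(jsii.String("../api/aj/lambda"), nil),
		// These become the function's environment variables at runtime.
		Environment: &map[string]*string{
			"EMAIL_HOST":     jsii.String("smtp.example.com"),
			"EMAIL_USERNAME": jsii.String("mailer"),
			"EMAIL_PORT":     jsii.String("587"),
			// Don't hardcode real secrets; reference Secrets Manager or SSM instead.
			"EMAIL_PASSWORD": jsii.String("placeholder"),
		},
	})

	return stack
}

func main() {
	defer jsii.Close()
	app := awscdk.NewApp(nil)
	NewApiStack(app, "ApiStack")
	app.Synth(nil)
}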
I'm working on automating deployment of Lambda functions using Terraform. The goal is to be able to deploy either a single Lambda function or all of the functions in the repo. I'm able to deploy a number of functions from a structure that looks like:
├── README.md
├── js
│   ├── README.md
│   ├── jsFunction1/
│   │   └── main.tf
│   └── jsFunction2/
│       └── main.tf
├── py
│   ├── README.md
│   ├── pyFunction1/
│   │   └── main.tf
│   └── pyFunction2/
│       └── main.tf
└── terraform
    ├── README.md
    ├── common/
    ├── global/
    ├── main.tf
    ├── modules/
    ├── prod/
    └── stage/
The goal is to be able to deploy js/jsFunction1 independently (without building packages for any other functions) while also having terraform/main.tf able to deploy all the lambda functions in the repository (after they've been built). This is so a developer can update the stage deployment with changes to an individual function without concern that they might have an improper version of a function that the developer isn't working on.
What I was hoping to do is create a back-end for each Lambda function so that the developer can use terraform apply from within the source folder. I don't see how to import the state of the Lambda functions that were deployed from the terraform/ module. Can you tell me if importing the state of the resources is a reasonable approach or recommend a better way to achieve the ability to deploy one of many Lambda functions?
Here is the main.tf from js/jsFunction1
module "jsFunction1" {
source = "../../terraform/modules/lambda"
source_path = "${path.module}/dist"
lambda_function_name = "jsFunction1"
lambda_handler = "lambdaAdapter.handler"
}
There is a similar main.tf in each of the folders under js and py. This is main.tf from the terraform folder
module "jsFunction1" {
source = "./../js/jsFunction1"
}
module "jsFunction2" {
source = "./../js/jsFunction2"
}
module "pyFunction1" {
source = "./../py/pyFunction1"
}
module "pyFunction2" {
source = "./../py/pyFunction2"
}
You can use the Lambda package module to zip a separate package for each specific Lambda function and deploy it.
The package module can rebuild a package only when something has changed in the corresponding source folder.
So you can have a single main.tf and reference the module's output values from there.
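A rough sketch of that idea, assuming the terraform-aws-modules/lambda/aws registry module (the function name, runtime, and paths below are placeholders):

module "js_function_1" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> 4.0"

  function_name = "jsFunction1"
  handler       = "lambdaAdapter.handler"
  runtime       = "nodejs18.x"

  # The module hashes this path and only rebuilds the zip when its contents change.
  source_path = "${path.module}/../js/jsFunction1/dist"
}

Other resources can then reference outputs such as module.js_function_1.lambda_function_arn, and a developer can still deploy a single function with terraform apply -target=module.js_function_1.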
I'm having some issues creating unit tests for my Puppet control repository.
I mostly work with roles and profiles with the following directory structure:
[root@puppet]# tree site
site
├── profile
│   ├── files
│   │   └── demo-website
│   │       └── index.html
│   └── manifests
│       ├── base.pp
│       ├── ci_runner.pp
│       ├── docker.pp
│       ├── gitlab.pp
│       ├── logrotate.pp
│       └── website.pp
├── role
│   └── manifests
│       ├── gitlab_server.pp
│       └── nginx_webserver.pp
Where do I need to place my spec files and what are the correct filenames?
I tried placing them here:
[root@puppet]# cat spec/classes/profile_ci_runner_spec.rb
require 'spec_helper'
describe 'profile::ci_runner' do
...
But I get an error:
Could not find class ::profile::ci_runner
The conventional place for a module's spec tests is in the module, with the spec/ directory in the module root. So site/profile/spec/classes/ci_runner_spec.rb, for example.
You could consider installing PDK, which can help you set up the structure and run tests, among other things.
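A minimal sketch of what that spec file could look like, assuming rspec-puppet is set up (for example via PDK or puppetlabs_spec_helper):

# site/profile/spec/classes/ci_runner_spec.rb
require 'spec_helper'

describe 'profile::ci_runner' do
  it { is_expected.to compile.with_all_deps }
end

With the spec/ directory inside site/profile, rspec-puppet can resolve profile::ci_runner from that module's own manifests.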
I am trying to generate client code using k8s.io/code-generator.
These are the instructions that I am following: https://itnext.io/how-to-generate-client-codes-for-kubernetes-custom-resource-definitions-crd-b4b9907769ba
My question is, does my go module need to be present on a repository or can I simply run the generate-groups.sh script on a go module that is ONLY present on my local system and not on any repository?
I have already tried running it and from what I understand, there needs to be a repository having ALL the contents of my local go module. Is my understanding correct?
You CAN run kubernetes/code-generator's generate-groups.sh on a go module that is only present on your local system. Neither code-generator nor your module needs to be in your GOPATH.
Verification
Cloned kubernetes/code-generator into a new directory.
$HOME/somedir
├── code-generator
Created a project called myrepo and mocked it with content to resemble sample-controller. Did this in the same directory to keep it simple.
somedir
├── code-generator
└── myorg.com
    └── myrepo          # mock of sample-controller
        ├── go.mod
        ├── go.sum
        └── pkg
            └── apis
                └── myorg
                    ├── register.go
                    └── v1alpha1
                        ├── doc.go
                        ├── register.go
                        └── types.go
My go.mod looked like
module myorg.com/myrepo
go 1.14
require k8s.io/apimachinery v0.17.4
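For reference, the mocked v1alpha1/types.go was a stripped-down copy of sample-controller's, roughly like this (the Foo type and its fields are placeholders):

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Foo is a specification for a Foo resource.
type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec FooSpec `json:"spec"`
}

// FooSpec is the spec for a Foo resource.
type FooSpec struct {
	Replicas *int32 `json:"replicas"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// FooList is a list of Foo resources.
type FooList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata"`

	Items []Foo `json:"items"`
}

As in sample-controller, doc.go carries the package-level tags (+k8s:deepcopy-gen=package and +groupName), and register.go registers the types with the scheme.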
Ran generate-groups.sh. The -h flag specifies which header file to use. The -o flag specifies the output base, which is necessary here because we're not in GOPATH.
$HOME/somedir/code-generator/generate-groups.sh all myorg.com/myrepo/pkg/client myorg.com/myrepo/pkg/apis "myorg:v1alpha1" \
-h $HOME/somedir/code-generator/hack/boilerplate.go.txt \
-o $HOME/somedir
Confirmed code generated in correct locations
myrepo
├── go.mod
├── go.sum
└── pkg
    ├── apis
    │   └── myorg
    │       ├── register.go
    │       └── v1alpha1
    │           ├── doc.go
    │           ├── register.go
    │           ├── types.go
    │           └── zz_generated.deepcopy.go
    └── client
        ├── clientset
        │   └── versioned
        │       ├── clientset.go
        │       ├── doc.go
        │       ├── fake
        │       ├── scheme
        │       └── typed
        ├── informers
        │   └── externalversions
        │       ├── factory.go
        │       ├── generic.go
        │       ├── internalinterfaces
        │       └── myorg
        └── listers
            └── myorg
                └── v1alpha1
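As a quick check that the generated code is usable from the purely local module, the versioned clientset can be imported like any other package. A sketch, assuming k8s.io/client-go is added to go.mod; the kubeconfig path is a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"

	clientset "myorg.com/myrepo/pkg/client/clientset/versioned"
)

func main() {
	// Load a kubeconfig (placeholder path).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Construct the generated clientset from the local module.
	cs, err := clientset.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	fmt.Printf("created clientset: %T\n", cs)
}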
Sources
Go modules support https://github.com/kubernetes/code-generator/issues/57
Documentation or support for Go modules https://github.com/kubernetes/sample-controller/issues/47
WHAT I TRIED
I am using Jest for testing resolvers and the schema, but I am having trouble coming up with a folder structure. Currently I import resolver functions, call them, and compare the result or check that fields are defined, but that does not always cover more complex scenarios.
WHAT I AM LOOKING FOR
What are the best practices for testing a GraphQL schema and resolver functions, and which testing tool is recommended or most commonly used?
Also, you can try this npm package, which will test your schema, queries, and mutations; there is an example of it using mocha & chai, here is the link.
What you need to do is import the schema, pass it to easygraphql-tester, and then you can create unit tests.
There are multiple frameworks out there for integration testing your API using, for example, YAML files in which you specify the request and the expected response. A simpler approach can be to use Jest snapshots and simply execute test queries using the graphql function from graphql-js. It returns a promise with the result, which you can await and expect to match the snapshot.
import { graphql } from 'graphql';
import schema from './schema';
import createContext from './createContext';
import database from './database'; // assumed helper for the setUp/tearDown calls below

describe('GraphQL Schema', () => {
  let context;

  beforeAll(() => {
    context = createContext();
    database.setUp();
  });

  afterAll(() => {
    database.tearDown();
  });

  it('should resolve simple query', async () => {
    const query = '{ hello }';
    const result = await graphql(schema, query, null, context);
    expect(result).toMatchSnapshot();
  });
});
Tip: You can also create dynamic tests, for example by reading queries from files in a directory and then iterating over them, creating a new test for each file. An example of that (not GraphQL, though) can be found on my GitHub.
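A rough sketch of that dynamic approach (the queries/ directory of query files is an assumption):

import { readdirSync, readFileSync } from 'fs';
import path from 'path';
import { graphql } from 'graphql';
import schema from './schema';

const queryDir = path.join(__dirname, 'queries');

describe('example queries', () => {
  // One snapshot test per query file in the directory.
  readdirSync(queryDir).forEach(file => {
    it(`resolves ${file}`, async () => {
      const query = readFileSync(path.join(queryDir, file), 'utf8');
      const result = await graphql(schema, query);
      expect(result).toMatchSnapshot();
    });
  });
});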
There is no single recommended way to do this, especially for files and folder structure.
In my case, I am working on this repo, and this is my folder structure at the first level:
src/
├── App.js
├── configs
├── helpers
├── index.js
├── middlewares
├── models
├── resolvers
├── routes
├── schema
├── seeds
├── templates
├── tests
└── utils
In the root I have the tests folder, mainly to check the app's basic behavior and some util functions. On the other hand, inside resolvers I have the main tests for the GraphQL queries and mutations.
src/resolvers/
├── camping
│   ├── camping.mutations.js
│   ├── camping.query.js
│   ├── camping.query.test.js
│   └── camping.resolver.js
├── clientes.resolver.js
├── guest
│   ├── guest.mutation.js
│   ├── guest.mutation.test.js
│   ├── guest.query.js
│   ├── guest.query.test.js
│   └── guest.resolver.js
├── index.js
├── turno
│   ├── turno.mutations.js
│   ├── turno.query.js
│   ├── turno.query.test.js
│   └── turno.resolver.js
└── users
    ├── user.mutations.js
    ├── user.mutations.test.js
    ├── user.queries.js
    ├── user.query.test.js
    └── user.resolver.js
Every resolver has its own tests, where you can check that the basic endpoints work as expected.
I am planning to add some workflow tests; they will go in the root tests folder later.
So I have:
buildSrc/
├── build.gradle
└── src
    ├── main
    │   ├── groovy
    │   │   └── build
    │   │       ├── ExamplePlugin.groovy
    │   │       └── ExampleTask.groovy
    │   └── resources
    │       └── META-INF
    │           └── gradle-plugins
    │               └── build.ExamplePlugin.properties
    └── test
        └── groovy
            └── build
                ├── ExamplePluginTest.groovy
                └── ExampleTaskTest.groovy
Question:
It seems like build.ExamplePlugin.properties maps directly to the build.ExamplePlugin.groovy. Is this the case? Seems terribly inefficient to have only one property in the file. Does it have to be fully qualified, i.e. does the name have to exactly match the full qualification of the class?
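For context, that properties file contains just a single line naming the plugin's implementation class:

# src/main/resources/META-INF/gradle-plugins/build.ExamplePlugin.properties
implementation-class=build.ExamplePlugin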
Now in the example, I see:
project.pluginManager.apply 'build.ExamplePlugin'
...however, if I have that in my test, I get an error to the effect that the simple task the plugin defines is already defined.
Why bother with test examples that require 'apply' when that is inappropriate for packaging?