Package files into specific folder of application bundle when deploying to AWS Lambda via Serverless Framework

Context
I am using the aws-node-typescript example of the Serverless Framework. My goal is to integrate Prisma into it.
So far, I have:
- Created the project locally using serverless create
- Set up a PostgreSQL database on Railway
- Installed prisma, ran prisma init, created a basic User model and ran prisma migrate dev successfully
- Created a second users function by copying the existing hello function
- Deployed the function using serverless deploy
Now in my function, when I instantiate PrismaClient, I get an internal server error and the function logs this error: "ENOENT: no such file or directory, open '/var/task/src/functions/users/schema.prisma'"
My project structure looks as follows:
.
├── README.md
├── package-lock.json
├── package.json
├── prisma
│   ├── migrations
│   │   ├── 20221006113352_init
│   │   │   └── migration.sql
│   │   └── migration_lock.toml
│   └── schema.prisma
├── serverless.ts
├── src
│   ├── functions
│   │   ├── hello
│   │   │   ├── handler.ts
│   │   │   ├── index.ts
│   │   │   ├── mock.json
│   │   │   └── schema.ts
│   │   ├── index.ts
│   │   └── users
│   │       ├── handler.ts
│   │       └── index.ts
│   └── libs
│       ├── api-gateway.ts
│       ├── handler-resolver.ts
│       └── lambda.ts
├── tsconfig.json
└── tsconfig.paths.json
Also, here's the handler for the users function:
import { formatJSONResponse } from '@libs/api-gateway';
import { middyfy } from '@libs/lambda';
import { PrismaClient } from '@prisma/client';

const users = async (event) => {
  console.log(`Instantiating PrismaClient inside handler ...`);
  const prisma = new PrismaClient();
  return formatJSONResponse({
    message: `Hello, ${event.queryStringParameters.name || 'there'} welcome to the exciting Serverless world!`,
    event,
  });
};

export const main = middyfy(users);
The problem arises because in order to instantiate PrismaClient, the schema.prisma file needs to be part of the application bundle. Specifically, it needs to be in /var/task/src/functions/users/ as indicated by the error message.
I already adjusted the package.patterns option in my serverless.ts file to look as follows:
package: { individually: true, patterns: ["**/*.prisma"] },
Question
This way, the bundle that's uploaded to AWS Lambda includes the prisma directory in its root. Here's the .serverless folder after I ran sls package (I've unzipped users.zip so that you can see its contents):
.
├── cloudformation-template-create-stack.json
├── cloudformation-template-update-stack.json
├── hello.zip
├── serverless-state.json
├── users
│   ├── prisma
│   │   └── schema.prisma
│   └── src
│       └── functions
│           └── users
│               ├── handler.js
│               └── handler.js.map
└── users.zip
I can also confirm that the deployed version of my AWS Lambda has the same folder structure.
How can I move the users/prisma/schema.prisma file into users/src/functions/users using the patterns in my serverless.ts file?

I found a (pretty ugly) solution. If anyone can think of a more elegant one, I'm still very open to it and happy to give you the points for a correct answer.
Solving the "ENOENT: no such file or directory, open '/var/task/src/functions/users/schema.prisma'" error
To solve this, I just took a very naive approach and manually copied the schema.prisma file from the prisma directory into src/functions/users. Here's the file structure I now had:
.
├── README.md
├── package-lock.json
├── package.json
├── prisma
│   ├── migrations
│   │   ├── 20221006113352_init
│   │   │   └── migration.sql
│   │   └── migration_lock.toml
│   └── schema.prisma
├── serverless.ts
├── src
│   ├── functions
│   │   ├── hello
│   │   │   ├── handler.ts
│   │   │   ├── index.ts
│   │   │   ├── mock.json
│   │   │   └── schema.ts
│   │   ├── index.ts
│   │   └── users
│   │       ├── schema.prisma
│   │       ├── handler.ts
│   │       └── index.ts
│   └── libs
│       ├── api-gateway.ts
│       ├── handler-resolver.ts
│       └── lambda.ts
├── tsconfig.json
└── tsconfig.paths.json
This is obviously a horrible way to solve this, because I now have two Prisma schema files in different locations and have to make sure I always update the one in src/functions/users/schema.prisma after changing the original one in prisma/schema.prisma to keep them in sync.
Once I copied this file and redeployed, the schema.prisma file was in the right location inside the AWS Lambda, the error went away, and PrismaClient could be instantiated.
I then added a simple Prisma Client query into the handler:
const users = async (event) => {
  console.log(`Instantiating PrismaClient inside handler ...`);
  const prisma = new PrismaClient();

  const userCount = await prisma.user.count();
  console.log(`There are ${userCount} users in the database`);

  return formatJSONResponse({
    message: `Hello, ${event.queryStringParameters.name || 'there'} welcome to the exciting Serverless world!`,
    event,
  });
};

export const main = middyfy(users);
... and encountered a new error, this time about the query engine:
Invalid `prisma.user.count()` invocation:

Query engine library for current platform "rhel-openssl-1.0.x" could not be found.
You incorrectly pinned it to rhel-openssl-1.0.x

This probably happens, because you built Prisma Client on a different platform.
(Prisma Client looked in "/var/task/src/functions/users/libquery_engine-rhel-openssl-1.0.x.so.node")

Searched Locations:

  /var/task/.prisma/client
  /Users/nikolasburk/prisma/talks/2022/serverless-conf-berlin/aws-node-typescript/node_modules/@prisma/client
  /var/task/src/functions
  /var/task/src/functions/users
  /var/task/prisma
  /tmp/prisma-engines
  /var/task/src/functions/users

To solve this problem, add the platform "rhel-openssl-1.0.x" to the "binaryTargets" attribute in the "generator" block in the "schema.prisma" file:

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native"]
}

Then run "prisma generate" for your changes to take effect.
Read more about deploying Prisma Client: https://pris.ly/d/client-generator
Solving the "Query engine library for current platform rhel-openssl-1.0.x could not be found" error
I'm familiar enough with Prisma to know that Prisma Client depends on a query engine binary that has to be built specifically for the platform Prisma Client will be running on. This can be configured via the binaryTargets field on the generator block in my Prisma schema. The target for AWS Lambda is rhel-openssl-1.0.x.
So I adjusted the schema.prisma file (in both locations) accordingly:
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "rhel-openssl-1.0.x"]
}
After that, I ran npx prisma generate to update the generated Prisma Client in node_modules.
However, this didn't resolve the error yet; the problem was still that Prisma Client couldn't find the query engine binary.
So I followed the same approach as for the schema.prisma file when it was missing:
I manually copied it into src/functions/users (this time from its location at node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node)
I added the new path to the package.patterns property in my serverless.ts:
package: {
  individually: true,
  patterns: ["**/*.prisma", "**/libquery_engine-rhel-openssl-1.0.x.so.node"],
},
After I redeployed and tested the function, another error occurred:
Invalid `prisma.user.count()` invocation:

error: Environment variable not found: DATABASE_URL.
  -->  schema.prisma:11
   |
10 |   provider = "postgresql"
11 |   url      = env("DATABASE_URL")
   |

Validation Error Count: 1
Solving the "Environment variable not found: DATABASE_URL" error
This time, it was pretty straightforward: I went into the AWS Console at https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/functions/aws-node-typescript-dev-users?tab=configure and added a DATABASE_URL env var via the Console UI, pointing to my Postgres instance on Railway.
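For reproducibility it may be preferable to set the variable in the Serverless config instead of clicking through the Console, so it survives redeploys. A minimal sketch for serverless.ts, assuming DATABASE_URL is available in the deploying shell (or a .env file) at deploy time:

provider: {
  // ...existing provider settings from the template...
  environment: {
    // resolved at deploy time from the deploying shell or a .env file
    DATABASE_URL: '${env:DATABASE_URL}',
  },
},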

I usually lurk, but the answer above led me to a reasonably elegant solution I felt I should share for the next poor sap who comes along and tries to integrate Serverless and Prisma in TypeScript (though I bet this solution and process would work in other build systems).
I was using the example aws-nodejs-typescript template, which is plagued by a bug that required me to apply the fix here by patching the serverless package in my local node_modules.
I then had to essentially walk through @nburk's answer to get myself up and running, which is, as stated, inelegant.
In my travels trying to understand Prisma's behavior, its requirement of a platform-specific binary, and how to fix it, I figured that if I could manually side-load the binary into the build folder post-compile, I could get the serverless bundler to zip it up.
I came across the 'serverless-plugin-scripts' plugin, which allows us to do exactly this via serverless lifecycle hooks.
I put this in my serverless.ts:
plugins: ['serverless-esbuild', 'serverless-plugin-scripts'],
I put the following in package.json:
"scripts": {
"test": "echo 'Error: no test specified' && exit 1",
"postbuild": "yarn fix-scrape-scheduler",
"fix-scrape-scheduler": "cp ../../node_modules/.prisma/client/schema.prisma .esbuild/.build/src/functions/schedule-scrapes/. && cp ../../node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node .esbuild/.build/src/functions/schedule-scrapes/."
},
and this also in my serverless.ts:
scripts: {
  hooks: {
    'before:package:createDeploymentArtifacts': 'yarn run postbuild',
  },
},
This causes the 'serverless-plugin-scripts' plugin to call my post-build yarn script and fix up the .build folder that esbuild creates. I imagine that if your build system (such as webpack or something) creates the build dir under a different name (such as lib), this process could be modified accordingly.
I will have to create a yarn script like this for each function that is packaged individually; however, this approach is dynamic, removes the need to keep multiple copies of schema.prisma in source, and copies the files from the dynamically generated .prisma folder in node_modules (see the sketch below for a generalized version).
Note, I am using yarn workspaces here, so the location of your node_modules folder will vary based on your repo setup.
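To avoid hard-coding one yarn script per function, the copy step can be generalized with a small shell loop. A sketch, assuming esbuild's .esbuild/.build output layout shown above and one directory per function under src/functions (the node_modules path follows my workspace layout and will vary):

#!/usr/bin/env bash
# Side-load the Prisma schema and query engine into every function's build folder.
set -euo pipefail

PRISMA_DIR="../../node_modules/.prisma/client"
BUILD_ROOT=".esbuild/.build/src/functions"

for fn_dir in "$BUILD_ROOT"/*/; do
  cp "$PRISMA_DIR/schema.prisma" "$fn_dir"
  cp "$PRISMA_DIR/libquery_engine-rhel-openssl-1.0.x.so.node" "$fn_dir"
done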
Also, I did run into the error "Please make sure your database server is running at …", which was remedied by making sure the proper security groups were whitelisted outbound for the lambda function, and inbound for RDS. Also make sure to check your subnet ACLs and route-tables.

We ran into the same issue recently. But our context is slightly different: the path to the schema.prisma file was /var/task/node_modules/.prisma/client/schema.prisma.
We solved this issue by using Serverless Package Configuration.
serverless.yml
service: 'your-service-name'

plugins:
  - serverless-esbuild

provider:
  # ...

package:
  include:
    - 'node_modules/.prisma/client/schema.prisma' # <-------- this line
    - 'node_modules/.prisma/client/libquery_engine-rhel-*'
This way only the src folder containing the lambda functions and the node_modules folder containing these two Prisma files were packaged and uploaded to AWS.
Although the use of serverless.package.include and serverless.package.exclude is deprecated in favor of serverless.package.patterns, this was the only way to get it to work.
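For reference, the patterns-based equivalent that the deprecation notice points to would look like the following sketch; it did not work for us at the time, but may with newer Framework versions:

package:
  patterns:
    - 'node_modules/.prisma/client/schema.prisma'
    - 'node_modules/.prisma/client/libquery_engine-rhel-*'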

An option is to use webpack with the copy-webpack-plugin and change the structure of your application, putting all handlers inside a handlers folder.
Folder structure:
.
├── handlers/
│   ├── hello.ts
│   └── ...
└── services/
    ├── hello.ts
    └── ...
webpack.config.js:
/* eslint-disable @typescript-eslint/no-var-requires */
const path = require("path");
// const nodeExternals = require("webpack-node-externals");
const CopyPlugin = require("copy-webpack-plugin");
const slsw = require("serverless-webpack");

const { isLocal } = slsw.lib.webpack;

module.exports = {
  target: "node",
  stats: "normal",
  entry: slsw.lib.entries,
  // externals: [nodeExternals()],
  mode: isLocal ? "development" : "production",
  optimization: { concatenateModules: false },
  resolve: { extensions: [".js", ".ts"] },
  output: {
    libraryTarget: "commonjs",
    filename: "[name].js",
    path: path.resolve(__dirname, ".webpack"),
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: "ts-loader",
        exclude: /node_modules/,
      },
    ],
  },
  plugins: [
    new CopyPlugin({
      patterns: [
        {
          from: "./prisma/schema.prisma",
          to: "handlers/schema.prisma",
        },
        {
          from: "./node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node",
          to: "handlers/libquery_engine-rhel-openssl-1.0.x.so.node",
        },
      ],
    }),
  ],
};
If you need to run npx prisma generate before assembling the package, you can use the serverless-scriptable-plugin (placed before webpack in the plugins list):
plugins:
  - serverless-scriptable-plugin
  - serverless-webpack

custom:
  scriptable:
    hooks:
      before:package:createDeploymentArtifacts: npx prisma generate
  webpack:
    includeModules: false
Dependencies:
npm install -D webpack serverless-webpack webpack-node-externals copy-webpack-plugin serverless-scriptable-plugin

Related

Import protobuf file from GitHub repository

I currently have two protobuf repos: api and timestamp:
timestamp Repo:
- README.md
- timestamp.proto
- timestamp.pb.go
- go.mod
- go.sum
api Repo:
- README.md
- protos/
- dto1.proto
- dto2.proto
Currently, timestamp contains a reference to a timestamp object that I want to use in api but I'm not sure how the import should work or how I should modify the compilation process to handle this. Complicating this process is the fact that the api repo is compiled to a separate, downstream repo for Go called api-go.
For example, consider dto1.proto:
syntax = "proto3";
package api.data;
import "<WHAT GOES HERE?>";
option go_package = "github.com/my-user/api/data"; // golang
message DTO1 {
string id = 1;
Timestamp timestamp = 2;
}
And my compilation command is this:
find $GEN_PROTO_DIR -type f -name "*.proto" -exec protoc \
  --go_out=$GEN_OUT_DIR --go_opt=module=github.com/my-user/api-go \
  --go-grpc_out=$GEN_OUT_DIR --go-grpc_opt=module=github.com/my-user/api-go \
  --grpc-gateway_out=$GEN_OUT_DIR --grpc-gateway_opt logtostderr=true \
  --grpc-gateway_opt paths=source_relative \
  --grpc-gateway_opt generate_unbound_methods=true {} \;
Assuming I have a definition in timestamp for each of the programming languages I want to compile api into, how would I import this into the .proto file and what should I do to ensure that the import doesn't break in my downstream repo?
There is no native notion of remote import paths with protobuf. So the import path has to be relative to some indicated local filesystem base path (specified via -I / --proto_path).
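For example, if the two repos were cloned side by side, both roots could be passed as proto paths so the import resolves (a sketch; the directory layout and module flag follow the question's setup):

# run from the api repo root, with ../timestamp being the sibling clone
protoc \
  -I protos \
  -I ../timestamp \
  --go_out=$GEN_OUT_DIR --go_opt=module=github.com/my-user/api-go \
  protos/dto1.proto

# dto1.proto could then use: import "timestamp.proto";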
Option 1
Generally it is easiest to just have a single repository with protobuf definitions for your organisation - e.g. a repository named acme-contract
.
└── protos
    └── acme
        ├── api
        │   └── data
        │       ├── dto1.proto
        │       └── dto2.proto
        └── timestamp
            └── timestamp.proto
Your dto1.proto will look something like:
syntax = "proto3";
package acme.api.data;
import "acme/timestamp/timestamp.proto";
message DTO1 {
string id = 1;
acme.timestamp.Timestamp timestamp = 2;
}
As long as you generate code relative to the protos/ dir of this repository, there shouldn't be an issue.
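Concretely, that means invoking protoc from the acme-contract repo root with protos/ as the sole proto path (a sketch; it assumes each file carries a suitable go_package option):

protoc \
  -I protos \
  --go_out=gen --go_opt=paths=source_relative \
  protos/acme/api/data/dto1.proto protos/acme/timestamp/timestamp.proto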
Option 2
There are various alternatives whereby you continue to have definitions split over various repositories, but you can't really escape the fact that imports are filesystem relative.
Historically that could be handled by manually cloning the various repositories and arranging directories such that the path are relative, or by using -I to point to various locations that might intentionally or incidentally contain the proto files (e.g. in $GOPATH). Those strategies tend to end up being fairly messy and difficult to maintain.
buf makes things somewhat easier now. If you were to have your timestamp repo:
.
├── buf.gen.yaml
├── buf.work.yaml
├── gen
│   └── acme
│       └── timestamp
│           └── timestamp.pb.go
├── go.mod
├── go.sum
└── protos
    ├── acme
    │   └── timestamp
    │       └── timestamp.proto
    ├── buf.lock
    └── buf.yaml
timestamp.proto looking like:
syntax = "proto3";
package acme.timestamp;
option go_package = "github.com/my-user/timestamp/gen/acme/timestamp";
message Timestamp {
int64 unix = 1;
}
buf.gen.yaml looking like:
version: v1
plugins:
  - name: go
    out: gen
    opt: paths=source_relative
  - name: go-grpc
    out: gen
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - name: grpc-gateway
    out: gen
    opt:
      - paths=source_relative
      - generate_unbound_methods=true
... and everything under gen/ has been generated via buf generate.
Then in your api repository:
.
├── buf.gen.yaml
├── buf.work.yaml
├── gen
│   └── acme
│       └── api
│           └── data
│               ├── dto1.pb.go
│               └── dto2.pb.go
└── protos
    ├── acme
    │   └── api
    │       └── data
    │           ├── dto1.proto
    │           └── dto2.proto
    ├── buf.lock
    └── buf.yaml
With buf.yaml looking like:
version: v1
name: buf.build/your-user/api
deps:
  - buf.build/your-user/timestamp
breaking:
  use:
    - FILE
lint:
  use:
    - DEFAULT
dto1.proto looking like:
syntax = "proto3";
package acme.api.data;
import "acme/timestamp/timestamp.proto";
option go_package = "github.com/your-user/api/gen/acme/api/data";
message DTO1 {
string id = 1;
acme.timestamp.Timestamp timestamp = 2;
}
and buf.gen.yaml the same as in the timestamp repo.
The code generated via buf generate will depend on the timestamp repository via Go modules:
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.28.1
// 	protoc        (unknown)
// source: acme/api/data/dto1.proto

package data

import (
	timestamp "github.com/your-user/timestamp/gen/acme/timestamp"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

// <snip>
Note that if changes are made to dependencies you'll need to ensure that both buf and Go modules are kept relatively in sync.
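Concretely, "keeping them in sync" would look something like this after the timestamp definitions change (a sketch; the version selector depends on your tagging):

buf mod update                                  # refresh buf.lock from the BSR
go get github.com/your-user/timestamp@latest    # pull the matching generated Go code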
Option 3
If you prefer not to leverage Go modules for importing generated pb code, you could also look to have a similar setup to Option 2, but instead generate all code into a separate repository (similar to what you're doing now, by the sounds of it). This is most easily achieved by using buf managed mode, which essentially makes it not require, and ignore, any go_package directives.
In api-go:
.
├── buf.gen.yaml
├── go.mod
└── go.sum
With buf.gen.yaml containing:
version: v1
managed:
  enabled: true
  go_package_prefix:
    default: github.com/your-user/api-go/gen
plugins:
  - name: go
    out: gen
    opt: paths=source_relative
  - name: go-grpc
    out: gen
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - name: grpc-gateway
    out: gen
    opt:
      - paths=source_relative
      - generate_unbound_methods=true
You'd then need to generate code for each respective repo (pushed to the BSR):
$ buf generate buf.build/your-user/api
$ buf generate buf.build/your-user/timestamp
After which you should have some generated code for both:
.
├── buf.gen.yaml
├── gen
│   └── acme
│       ├── api
│       │   └── data
│       │       ├── dto1.pb.go
│       │       └── dto2.pb.go
│       └── timestamp
│           └── timestamp.pb.go
├── go.mod
└── go.sum
And the imports will be relative to the current module:
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.28.1
// 	protoc        (unknown)
// source: acme/api/data/dto1.proto

package data

import (
	timestamp "github.com/your-user/api-go/gen/acme/timestamp"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

// <snip>
All in all, I'd recommend Option 1 - consolidating your protobuf definitions into a single repository (including vendoring 3rd party definitions) - unless there is a particularly strong reason not to.

Dockerize a multi maven project (not multi-module)

In my Maven application I have multiple projects:
Core
Application 1
Application 2
Application 1 and Application 2 are two projects that use the core (for example, these applications are built for two different customers).
In order to Dockerize all of them, the simplest way would be to create a multi-module project, but the downside is that I would have everything inside a single project (core + Application 1 + Application 2).
I would like to keep the core separated from them.
The main problem with this configuration is that the core project needs to be built before the other two, and App 1 and App 2 use it as a Maven dependency:
App 1
<dependency>
  <groupId>it.myorg</groupId>
  <artifactId>core-project</artifactId>
  <version>1.12.0-SNAPSHOT</version>
</dependency>
If I try to dockerize App 1, it fails when I package it, because inside the Docker container core-project 1.12.0-SNAPSHOT does not exist.
I was thinking of setting up a local Maven repo, pushing the core there, and having App 1 pull the jar from the repo rather than from the .m2 folder, but I don't like this solution.
I can provide more information; sorry that I don't have examples, but my page is blank right now :(
Folder structure
+- Core
--- pom.xml
--- src
+- Application1
--- pom.xml
--- src
The solution I'm trying now is to create a Dockerfile for the core project (FROM maven:latest), build the image with a tag, and use this image in the Dockerfile of App 1 (so, a multi-stage build, but in two separate steps).
The best would be
FROM maven:latest as core-builder
## build the core
FROM maven:latest
## Copy jar from builder
Because the projects are in separate folders, I can't build the core this way. I would need to build the core BEFORE (running docker build -t) and copy from it later.
UPDATE
After the correct answer from @mihai, I'm asking whether a structure like this is possible:
-- myapp-docker
   - Dockerfile
   - docker-compose.yml
-- core-app
-- application_1
Having the Dockerfile at the same level as core-app and application_1 is totally fine and 100% working. The only "problem" is that I would have to put all the projects in the same repo.
This is the proposed solution with multi-stage builds.
To replicate your setup I created this structure:
.
├── Dockerfile-app1
├── application1
│   ├── pom.xml
│   └── src
│       └── main
│           ├── resources
│           └── webapp
│               ├── WEB-INF
│               │   └── web.xml
│               └── index.jsp
├── core
│   ├── pom.xml
│   └── src
│       ├── main
│       │   └── java
│       │       └── com
│       │           └── test
│       │               └── App.java
│       └── test
│           └── java
│               └── com
│                   └── test
│                       └── AppTest.java
In the pom.xml file from Application 1 I added the dependency to core:
<dependency>
  <groupId>com.test</groupId>
  <artifactId>core</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
I named the Dockerfile Dockerfile-app1, this way you can have more than 1 of them.
This is the Dockerfile-app1:
FROM maven:3.6.0-jdk-8 as build
WORKDIR /apps
COPY ./core .
RUN mvn clean install

FROM maven:3.6.0-jdk-8
# If you comment this out then the build fails because it cannot find the dependency to 'core'
COPY --from=build /root/.m2 /root/.m2
COPY ./application1 ./
RUN mvn clean install
You should probably add an entrypoint at the end to run your project, or even better add a third stage that only copies the generated artefacts and runs your project (this way the final image will not have your sources in it); a sketch of such a stage follows below.
The first stage only builds the core submodule.
The second stage uses the results of the first stage, copies only the source for application1 and builds it.
You can easily replicate this for application2 by creating a similar file Dockerfile-app2.
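As mentioned above, a third stage would keep sources and build tools out of the final image. A sketch, assuming the second stage is given a name (as app-build) and that the packaged jar ends up at /target/application1-1.0-SNAPSHOT.jar (the exact path and jar name depend on your pom.xml):

# stage 3: runtime-only image; requires "FROM maven:3.6.0-jdk-8 as app-build" on the second stage
FROM openjdk:8-jre-alpine
WORKDIR /app
# copy only the packaged artefact, not sources or the .m2 cache
COPY --from=app-build /target/application1-1.0-SNAPSHOT.jar ./app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]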
Since you're using Maven, try dockerfile-maven to build the image. You don't want any of your build information inside your image (like what the dependencies are); you should just add the jar at the end. I usually use it together with spring-boot-maven-plugin and repackage, to get a fully self-contained jar.
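A minimal sketch of wiring dockerfile-maven into a pom.xml (repository name, plugin version, and tag are placeholders to adapt):

<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.13</version>
  <executions>
    <execution>
      <id>build-image</id>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <repository>my-org/application1</repository>
    <tag>${project.version}</tag>
  </configuration>
</plugin>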

Using Makefile with Terraform and split project layout

I have a Terraform project layout that's similar to
stage
  └ Makefile
  └ terraform.tfvars
  └ vpc
  └ services
      └ frontend-app
      └ backend-app
          └ vars.tf
          └ outputs.tf
          └ main.tf
  └ data-storage
      └ mysql
      └ redis
Where the contents of Makefile are similar to
.PHONY: all plan apply destroy

all: plan

plan:
	terraform plan -var-file terraform.tfvars -out terraform.tfplan

apply:
	terraform apply -var-file terraform.tfvars

destroy:
	terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
	terraform apply terraform.tfplan
As far as I understand it, Terraform will only run on templates in the current directory. So I would need to cd stage/services/backend-app and run terraform apply there.
However I would like to be able to manage the whole stack from the Makefile. I have not seen a good clean way to pass arguments to make.
My goal is to have targets such as
make s3 plan # verify syntax
make s3 apply # apply plan
Unless there's a better way to run terraform from a parent directory? Is there something similar to:
make all plan # create stage plan
make all apply # apply stage plan
Another solution could be to create a tmp folder on each run and use terraform init ... and terraform get..., like this (the example also shows the remote state management using partial configuration):
readonly orig_path=$(pwd) && \
mkdir tmp && \
cd tmp && \
terraform init -backend=true -backend-config="$tf_backend_config" -backend-config="key=${account}/${envir}/${project}.json" $project_path && \
terraform get $project_path && \
terraform apply && \
cd $orig_path && \
rm -fR tmp
Or maybe wrap the above into a shell script, and call it from make file under "apply" etc.
-- adding this section to address a comment/question from Sam Hammamy --
In general, with the way current versions of terraform process projects, we do want to think ahead about how to structure our projects and how to break them down into manageable, still-functional pieces. That is why we usually break them into "foundational" projects like VPC, VPN, SecurityGroups, IAM-Policies, Bastions etc. vs. "functional" ones like "db", "web-cluster" etc. We usually run/deploy/modify the "foundational" pieces once or occasionally, while the "functional" pieces we might re-deploy several times a day.
This means that by fragmenting our IaC code like that, we also end up fragmenting our remote state accordingly, and the execution of our project deployments as well.
For a project structure that reflects that "philosophy", we usually end up with something similar to this (common modules are not shown):
├── projects
│   └── application-name
│       ├── dev
│       │   ├── bastion
│       │   ├── db
│       │   ├── vpc
│       │   └── web-cluster
│       ├── prod
│       │   ├── bastion
│       │   ├── db
│       │   ├── vpc
│       │   └── web-cluster
│       └── backend.config
└── run-tf.sh
Where each project is a subfolder, and for each application_name/env/component folder (e.g. dev/vpc) we added a placeholder backend configuration file, backend.tf:
terraform {
  backend "s3" {
  }
}
Where the folder content for each component will contain files similar to:
│   ├── prod
│   │   ├── vpc
│   │   │   ├── backend.tf
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
At "application_name/" or "application_name/env" level we added a backend.config file, with a content:
bucket = "BUCKET_NAME"
region = "region_name"
lock = true
lock_table = "lock_table_name"
encrypt = true
Our wrapper shell script expects parameters application-name, environment, component, and the actual terraform cmd to run.
The content of run-tf.sh script (simplified):
#!/bin/bash
application=$1
envir=$2
component=$3
cmd=$4
tf_backend_config="root_path/$application/$envir/$component/backend.config"
terraform init -backend=true -backend-config="$tf_backend_config" -backend-config="key=tfstate/${application}/${envir}/${component}.json"
terraform get
terraform $cmd
Here is what a typical run-tf.sh invocation looks like (to be executed from the Makefile):
$ run-tf.sh application_name dev vpc plan
$ run-tf.sh application_name prod bastion apply
We use shell scripts to handle this exact use case, since they handle cd-ing around more cleanly.
However, you can set Make variables either via environment variables or directly on the command line after the target, like this:
make target FOO=bar
So in your case you might want something like:
ifndef LOCATION
$(error LOCATION is not set)
endif

.PHONY: all plan apply destroy

all: plan

plan:
	cd $(LOCATION) && \
	terraform plan -var-file terraform.tfvars -out terraform.tfplan

apply:
	cd $(LOCATION) && \
	terraform apply -var-file terraform.tfvars

destroy:
	cd $(LOCATION) && \
	terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan && \
	terraform apply terraform.tfplan
I'd probably be inclined to have a target that runs terraform get and also configures the remote state, but that should be trivial to add now.
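Such a target could follow the same cd pattern; a sketch, with BACKEND_CONFIG as a placeholder for whatever partial backend configuration you pass:

init:
	cd $(LOCATION) && \
	terraform init -backend=true -backend-config="$(BACKEND_CONFIG)" && \
	terraform get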

Building Play Framework 2.4 application with Gradle 2.6 FAILED

Hello everyone!
The main problem is that I fail to build my Play Framework 2.4.0 application with Gradle 2.6.
The following is my build.gradle file (nothing special, everything here is from the official docs on using gradle with play framework https://docs.gradle.org/current/userguide/play_plugin.html):
plugins {
    id 'play'
}

repositories {
    jcenter()
    maven {
        name "typesafe-maven-release"
        url "https://repo.typesafe.com/typesafe/maven-releases"
    }
    ivy {
        name "typesafe-ivy-release"
        url "https://repo.typesafe.com/typesafe/ivy-releases"
        layout "ivy"
    }
    mavenCentral()
}

model {
    components {
        play {
            platform play: '2.4.0'
        }
    }
}
I used playBinary, runPlayBinary and the composite tasks one by one (such as compilePlayBinaryRoutes, compilePlayBinaryTwirlTemplates and compilePlayBinaryScala), however the result is essentially the same every time:
~/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI$ gradle playBinary
:compilePlayBinaryRoutes UP-TO-DATE
:compilePlayBinaryTwirlTemplates UP-TO-DATE
:compilePlayBinaryScala
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryRoutes/router/Routes.scala:56: value index is not a member of object controllers.Application
controllers.Application.index(),
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryRoutes/router/Routes.scala:73: value updateSettings is not a member of object controllers.Application
controllers.Application.updateSettings(),
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryRoutes/router/Routes.scala:107: value getResource is not a member of object controllers.Application
controllers.Application.getResource(fakeValue[String]),
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:75: value get is not a member of List[String]
"""),format.raw/*57.25*/("""<tr class=""""),_display_(/*57.37*/abbreviations/*57.50*/.get(i)),format.raw/*57.57*/("""_"""),_display_(/*57.59*/i),format.raw/*57.60*/("""" title=""""),_display_(/*57.70*/keysToParse/*57.81*/.get(i + 1)),format.raw/*57.92*/(""""><td>"""),_display_(/*57.99*/abbreviations/*57.112*/.get(i)),format.raw/*57.119*/(""" """),format.raw/*57.120*/("""sum : </td><td>"""),_display_(/*57.136*/aggrResults/*57.147*/.get(i)),format.raw/*57.154*/("""</td></tr>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:75: value get is not a member of List[String]
"""),format.raw/*57.25*/("""<tr class=""""),_display_(/*57.37*/abbreviations/*57.50*/.get(i)),format.raw/*57.57*/("""_"""),_display_(/*57.59*/i),format.raw/*57.60*/("""" title=""""),_display_(/*57.70*/keysToParse/*57.81*/.get(i + 1)),format.raw/*57.92*/(""""><td>"""),_display_(/*57.99*/abbreviations/*57.112*/.get(i)),format.raw/*57.119*/(""" """),format.raw/*57.120*/("""sum : </td><td>"""),_display_(/*57.136*/aggrResults/*57.147*/.get(i)),format.raw/*57.154*/("""</td></tr>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:75: value get is not a member of List[String]
"""),format.raw/*57.25*/("""<tr class=""""),_display_(/*57.37*/abbreviations/*57.50*/.get(i)),format.raw/*57.57*/("""_"""),_display_(/*57.59*/i),format.raw/*57.60*/("""" title=""""),_display_(/*57.70*/keysToParse/*57.81*/.get(i + 1)),format.raw/*57.92*/(""""><td>"""),_display_(/*57.99*/abbreviations/*57.112*/.get(i)),format.raw/*57.119*/(""" """),format.raw/*57.120*/("""sum : </td><td>"""),_display_(/*57.136*/aggrResults/*57.147*/.get(i)),format.raw/*57.154*/("""</td></tr>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:75: value get is not a member of List[Long]
"""),format.raw/*57.25*/("""<tr class=""""),_display_(/*57.37*/abbreviations/*57.50*/.get(i)),format.raw/*57.57*/("""_"""),_display_(/*57.59*/i),format.raw/*57.60*/("""" title=""""),_display_(/*57.70*/keysToParse/*57.81*/.get(i + 1)),format.raw/*57.92*/(""""><td>"""),_display_(/*57.99*/abbreviations/*57.112*/.get(i)),format.raw/*57.119*/(""" """),format.raw/*57.120*/("""sum : </td><td>"""),_display_(/*57.136*/aggrResults/*57.147*/.get(i)),format.raw/*57.154*/("""</td></tr>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:158: value get is not a member of List[String]
document.getElementById('timeLength').value = '"""),_display_(/*140.73*/timeLengths/*140.84*/.get(1)),format.raw/*140.91*/("""'""")))}/*140.94*/else/*140.99*/{_display_(Seq[Any](format.raw/*140.100*/("""
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:182: value get is not a member of List[String]
"""),format.raw/*164.41*/("""<span title=""""),_display_(/*164.55*/keysToParse/*164.66*/.get(i + 1)),format.raw/*164.77*/("""" class=""""),_display_(/*164.87*/abbreviations/*164.100*/.get(i)),format.raw/*164.107*/("""_"""),_display_(/*164.109*/i),format.raw/*164.110*/(""""><input type="checkbox" id=""""),_display_(/*164.140*/i),format.raw/*164.141*/("""" checked onclick="change(this)">"""),_display_(/*164.175*/abbreviations/*164.188*/.get(i)),format.raw/*164.195*/("""</span>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:182: value get is not a member of List[String]
"""),format.raw/*164.41*/("""<span title=""""),_display_(/*164.55*/keysToParse/*164.66*/.get(i + 1)),format.raw/*164.77*/("""" class=""""),_display_(/*164.87*/abbreviations/*164.100*/.get(i)),format.raw/*164.107*/("""_"""),_display_(/*164.109*/i),format.raw/*164.110*/(""""><input type="checkbox" id=""""),_display_(/*164.140*/i),format.raw/*164.141*/("""" checked onclick="change(this)">"""),_display_(/*164.175*/abbreviations/*164.188*/.get(i)),format.raw/*164.195*/("""</span>
^
/home/qb-user/projects/QuickBlox-ChatStatsUIApp/ChatStatsUI/build/playBinary/src/compilePlayBinaryTwirlTemplates/views/html/index.template.scala:182: value get is not a member of List[String]
"""),format.raw/*164.41*/("""<span title=""""),_display_(/*164.55*/keysToParse/*164.66*/.get(i + 1)),format.raw/*164.77*/("""" class=""""),_display_(/*164.87*/abbreviations/*164.100*/.get(i)),format.raw/*164.107*/("""_"""),_display_(/*164.109*/i),format.raw/*164.110*/(""""><input type="checkbox" id=""""),_display_(/*164.140*/i),format.raw/*164.141*/("""" checked onclick="change(this)">"""),_display_(/*164.175*/abbreviations/*164.188*/.get(i)),format.raw/*164.195*/("""</span>
^
11 errors found
:compilePlayBinaryScala FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compilePlayBinaryScala'.
> Compilation failed
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 8.445 secs
And here's the structure of the build directory, after build failure:
build
├── playBinary
│   ├── classes
│   └── src
│       ├── compilePlayBinaryRoutes
│       │   ├── controllers
│       │   │   ├── javascript
│       │   │   │   └── JavaScriptReverseRoutes.scala
│       │   │   ├── ReverseRoutes.scala
│       │   │   └── routes.java
│       │   └── router
│       │       ├── RoutesPrefix.scala
│       │       └── Routes.scala
│       └── compilePlayBinaryTwirlTemplates
│           └── views
│               └── html
│                   └── index.template.scala
└── tmp
    └── compilePlayBinaryScala
My guess is that it might have something to do with the fact, that Gradle 2.6 doesn't support reverse routing for now. I tried creating a new Play application (2.4.2 this time) and built it straight away, however it also failed on the same part:
controllers.Application.index(),
^
So let's go one by one:
All the errors after the first 3 were my own mistakes, but I somehow assumed that they were caused by the first 3. All of those were basically the result of improper handling of the data structures in Scala (my bad, I'm a noob there).
The first 3, however, were caused by the fact that I had non-static methods in my main Controller. Somehow, I must have overlooked the fact that you have to prefix calls to non-static methods in the routes file with an '@' sign. So the solution is to either place the prefix in the routes file, or make the methods static.
The only reference to this (static/non-static behavior) that I found is a scarce mention here (https://www.playframework.com/documentation/2.4.0/JavaRouting#Dependency-Injection [last line of this paragraph]).
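For illustration, a minimal conf/routes sketch of the two variants (paths and method names are hypothetical):

# non-static controller method: prefix the call with '@'
GET     /         @controllers.Application.index()

# static method: no prefix needed
GET     /ping     controllers.Application.ping()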

Qmake configuration using Buildroot

I’ve tried to add a package to Buildroot that uses Qt and Boost. The package uses qmake to generate a Makefile, this part seems to be working, however I get an error when I build saying:
Could not find qmake configuration file qws/linux-arm-g++.
Error processing project file: MsgDisplay.pro
The contents of my package are laid out like this:
DummyPgm
├── main.cpp
├── MsgDisplay.pri
├── MsgDisplay.pro
├── MsgDisplay.pro.user
├── MsgHandler.cpp
├── MsgHandler.h
├── MsgServer.cpp
├── MsgServer.h
├── Tcp
│   ├── TcpAddrPort.cpp
│   ├── TcpAddrPort.h
│   ├── TcpServer.cpp
│   ├── TcpServer.h
│   ├── TcpSocket.cpp
│   └── TcpSocket.h
└── Tools
    ├── Banner.cpp
    ├── Banner.h
    ├── IoExt.h
    ├── SeparateArgumentList.cpp
    ├── SeparateArgumentList.h
    └── SysTypes.h
2 directories, 20 files
I have added a package directory, dummypgm, which contains Config.in and dummypgm.mk files. The contents of the files are:
Config.in:
config BR2_PACKAGE_DUMMYPGM
	bool "dummypgm"
	help
	  Foo Software.

	  http://www.foo.com
dummypgm.mk:
DUMMYPGM_VERSION = 0.1.0
DUMMYPGM_SOURCE = DummyPgm-$(DUMMYPGM_VERSION).tar.gz

define DUMMYPGM_CONFIGURE_CMDS
	(cd $(@D); $(QT_QMAKE) MsgDisplay.pro)
endef

define DUMMYPGM_BUILD_CMDS
	$(MAKE) -C $(@D)
endef

$(eval $(generic-package))
Since the package is hosted locally, I’ve simply put the DummyPgm-0.1.0.tar.gz in the dl directory.
I’ve also added the following to package/Config.in:
source "package/dummypgm/Config.in"
I'm a little lost as to why this doesn't work; if anyone could help me I would be very grateful. Also, is there any way to call $(eval $(qmake-package)) or something?
Are you using Qt4 or Qt5? Your package/dummypgm/Config.in should have a 'depends on' for one of them, and your dummypgm.mk should have DUMMYPGM_DEPENDENCIES = qt or DUMMYPGM_DEPENDENCIES = qt5base.
My intuition is that you are using Qt5. In this case, you shouldn't call $(QT_QMAKE), but $(QT5_QMAKE).
Have a look at http://git.buildroot.net/buildroot/tree/package/qextserialport/qextserialport.mk for an example. Note that this example supports both Qt4 and Qt5, probably in your case you only need one of the two.
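Putting that together for the Qt5 case, a sketch of the adjusted dummypgm.mk (the dependency name assumes qt5base as mentioned above):

DUMMYPGM_VERSION = 0.1.0
DUMMYPGM_SOURCE = DummyPgm-$(DUMMYPGM_VERSION).tar.gz
DUMMYPGM_DEPENDENCIES = qt5base

define DUMMYPGM_CONFIGURE_CMDS
	(cd $(@D); $(QT5_QMAKE) MsgDisplay.pro)
endef

define DUMMYPGM_BUILD_CMDS
	$(MAKE) -C $(@D)
endef

$(eval $(generic-package))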
Also, you should really subscribe to the Buildroot mailing list, you would get a lot more answers than here.
