Failure: Cannot find module handler AWS Lambda - aws-lambda

Hello, I am trying to set up a new serverless GraphQL project, but the serverless.yml file doesn't find my handler, which is in src/graphql.ts.
It throws an error like this:
Failure: Cannot find module '/Users/VIU/Projects/am/src/graphql'
The src directory is in the project root and the path is correct, so I don't understand what is going on.
The serverless.yml looks like this:
graphql:
  handler: src/graphql.graphqlHandler
  events:
    - http:
        path: graphql
        method: post
        cors: true
    - http:
        path: graphql
        method: get
        cors: true
And the graphql handler file looks like this:
import { ApolloServer, gql } from "apollo-server-lambda";

// Construct a schema, using GraphQL schema language
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

// Provide resolver functions for your schema fields
const resolvers = {
  Query: {
    hello: () => 'Hello world!',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

exports.graphqlHandler = server.createHandler();
I've also tried
module.exports.graphqlHandler = server.createHandler();
export const graphqlHandler = server.createHandler();
But none of that seems to work either.
Does anyone have any idea what I am doing wrong? Thank you!

In order to run an AWS Lambda function on a Node.js runtime, you need to provide a .js file as its handler. Specifically, when using TypeScript with the Serverless framework, this means the handler field must refer to the compiled file name, i.e., one ending with a .js extension.
One option for you to resolve this is to simply change the handler field to point to the compiled version of your file. For example, given the following structure:
├── am
│   ├── built
│   │   └── graphql.js
│   ├── package-lock.json
│   ├── package.json
│   └── src
│       └── graphql.ts
└── serverless.yaml
The correct handler field is:
graphql:
  handler: built/graphql.graphqlHandler
However, another option, which I believe is the preferred one (and possibly what you were originally aiming for), is to use the serverless-plugin-typescript plugin for the Serverless framework. That should reduce your effort and let you use TypeScript almost seamlessly. There is actually an example provided by Serverless that is very similar to your use case and that you may find useful.
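As a sketch of that second option, a serverless.yml using the plugin would look roughly like this, with the handler still pointing at the .ts source (the service name and runtime version here are assumptions; check them against the plugin's documentation):

```yaml
service: my-graphql-service   # hypothetical name

provider:
  name: aws
  runtime: nodejs14.x

plugins:
  - serverless-plugin-typescript

functions:
  graphql:
    handler: src/graphql.graphqlHandler  # the .ts source; the plugin compiles it on deploy
    events:
      - http:
          path: graphql
          method: post
          cors: true
```

The plugin compiles your TypeScript during `serverless deploy` and rewrites the handler paths to the build output, so you never reference the compiled files yourself.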

Related

Best way to override single method in Illuminate\Foundation\Application through service provider

I just made changes to the application structure of my Laravel application. It works well when running tests (for the Http controllers). The problem is when I try to run artisan commands (which need to access the getNamespace() method): it won't resolve the namespaces.
Here is the relevant part of composer.json:
"autoload": {
"psr-4": {
"App\\": "app/",
"Database\\Factories\\": "database/factories/",
"Database\\Seeders\\": "database/seeders/",
"Modules\\": "modules/"
},
"files": [
"app/Helpers/app.php",
"app/Helpers/form.php",
"app/Helpers/view.php"
]
},
"autoload-dev": {
"psr-4": {
"Tests\\": "tests/"
}
},
I am aware that I could add Modules\ModuleA, Modules\ModuleB, etc. to composer.json, but that would be a lot of work. So I decided to override the getNamespace() method instead. What is the best way to override a single method of the Illuminate\Foundation\Application class through a service provider?
Folder tree:
laravel-project/
├── app/
│   ├── Exception
│   ├── Providers
│   └── ...
├── modules/
│   ├── ModuleA/
│   │   ├── Services
│   │   ├── Http/
│   │   │   ├── Controllers
│   │   │   └── Requests
│   │   └── Models
│   └── ModuleB/
│       └── ...
├── tests
└── ...
If you want to override a single method in Illuminate\Foundation\Application through a service provider in Laravel, you can use the following steps:
Create a new service provider by running the command php artisan make:provider YourServiceProvider in your terminal.
In your YourServiceProvider class, extend the Illuminate\Support\ServiceProvider class.
Override the register() method in your YourServiceProvider class. In this method, you can bind your custom implementation of the method you want to override to the container. For example, if you want to override the loadEnvironmentFrom() method, you can do so as follows:
use Illuminate\Foundation\Application;
use Illuminate\Support\ServiceProvider;

class YourServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->bind(Application::class, function ($app) {
            // Application's constructor expects the base path, not the app instance
            return new class($app->basePath()) extends Application {
                public function loadEnvironmentFrom($file)
                {
                    // Your custom implementation here
                }
            };
        });
    }
}
Then in your config/app.php file, add the service provider to the list of providers:
'providers' => [
    // Other service providers
    App\Providers\YourServiceProvider::class,
],
This way, the method you've overridden will use your custom implementation instead of the default implementation in Illuminate\Foundation\Application.
Hope this helps

Terraform: how to not duplicate security groups when creating it using modules?

Update
I don't get it, but I reran terraform apply and it did not try to duplicate the resources (no errors). Now it checks the resources correctly. An unexpected turn of events.
I'm learning Terraform, and I created a module to allow creating some basic security groups. It runs fine the first time and creates the resources as expected. But if I run terraform apply a second time, it tries to create the same groups again, and then I get a duplicate error, because those security groups already exist.
If I create the security groups directly, without a module, Terraform recognizes them and does not try to recreate the existing resources.
I am probably doing something wrong here.
Here is my module and how I try to use it:
My project structure looks like this
├── main.tf
├── modules
│   ├── security_group_ec2
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── security_group_rds
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── scripts
│   └── update-odoo-cfg.py
├── security_groups.tf
├── terraform.tfstate
├── terraform.tfstate.backup
├── variables.tf
└── vpc.tf
Now my security_group_ec2 content:
main.tf:
resource "aws_security_group" "sg" {
name = "${var.name}"
description = "${var.description}"
vpc_id = "${var.vpc_id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
variables.tf:
variable "name" {
description = "Name of security group"
}
variable "description" {
description = "Description of security group"
}
variable "vpc_id" {
description = "Virtual Private Cloud ID to assign"
}
outputs.tf:
output "sg_id" {
value = "${aws_security_group.sg.id}"
}
And this is the file where I call the module to create two security groups.
security_groups.tf:
# EC2
module "security_group_staging_ec2" {
source = "modules/security_group_ec2"
name = "ec2_staging_sg"
description = "EC2 Staging Security Group"
vpc_id = "${aws_default_vpc.default.id}"
}
module "security_group_prod_ec2" {
source = "modules/security_group_ec2"
name = "ec2_prod_sg"
description = "EC2 Production Security Group"
vpc_id = "${aws_default_vpc.default.id}"
}
This is the error output when running terraform apply:
module.security_group_staging.aws_security_group.sg: Destruction complete after 1s
module.security_group_prod.aws_security_group.sg: Destruction complete after 1s
Error: Error applying plan:
2 error(s) occurred:
* module.security_group_staging_ec2.aws_security_group.sg: 1 error(s) occurred:
* aws_security_group.sg: Error creating Security Group: InvalidGroup.Duplicate: The security group 'ec2_staging_sg' already exists for VPC 'vpc-2a84a741'
status code: 400, request id: 835004f0-d8a1-4ed5-8e21-17f01eb18a23
* module.security_group_prod_ec2.aws_security_group.sg: 1 error(s) occurred:
* aws_security_group.sg: Error creating Security Group: InvalidGroup.Duplicate: The security group 'ec2_prod_sg' already exists for VPC 'vpc-2a84a741'
status code: 400, request id: 953b23e8-20cb-4ccb-940a-6a9ddab54d53
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
P.S. Do I perhaps need to somehow indicate that the resource is already created when calling the module?
This looks like a race condition. Terraform tries to parallelise the creation of resources which do not depend on each other, and in this case it looks like it tried to destroy the security groups from module.security_group_staging while simultaneously trying to create them in module.security_group_staging_ec2 with the same names. Did you rename security_group_staging into security_group_staging_ec2?
The destruction succeeded, but the creation failed, because it ran in parallel with destruction.
The second time you ran it there was no race condition, because module.security_group_staging was already destroyed.
As a side note, it is usually a good idea to not keep separate environments in the same state file.
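If the renaming guess above is right, the destroy-and-recreate race could also have been avoided by telling Terraform about the rename up front. A sketch, assuming the old module address was module.security_group_staging and the new one module.security_group_staging_ec2 (names taken from the error output above):

```shell
# Move the existing state entry to the new module address, so Terraform
# treats this as a rename rather than a destroy + create:
terraform state mv \
  module.security_group_staging.aws_security_group.sg \
  module.security_group_staging_ec2.aws_security_group.sg

# Then verify the plan shows no pending changes:
terraform plan
```

Because the state entry already points at the existing security group, the next apply has nothing to destroy or create, so the InvalidGroup.Duplicate error cannot occur.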

Separate page graphQL queries in separate files (.graphql)

Is it possible to put a page's GraphQL query in a separate .graphql file and then import it into the page file? I assume I need to add some kind of .graphql loader?
If you want to write your queries and mutations in files with a .graphql or .gql extension, you need graphql-tag/loader. Here is the relevant rule from my webpack.config.js:
{
  test: /\.(graphql|gql)$/,
  exclude: /node_modules/,
  use: 'graphql-tag/loader'
}
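With that loader rule in place, importing a .graphql file works like a normal module import (this runs inside a webpack build, not standalone; the file and query names below are hypothetical):

```javascript
// page.js -- webpack resolves the import through graphql-tag/loader,
// which returns the file's contents parsed into a GraphQL AST document
import pageQuery from './pageQuery.graphql';

// The parsed document can then be handed to a client, e.g. Apollo:
// client.query({ query: pageQuery }).then(result => console.log(result.data));
```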
You can use graphql-import to do that.
Example:
Assume the following directory structure:
.
├── schema.graphql
├── posts.graphql
└── comments.graphql
schema.graphql
# import Query.*, Mutation.* from "posts.graphql"
# import Query.*, Mutation.* from "comments.graphql"
posts.graphql
# import Comment from 'comments.graphql'
type Post {
  comments: [Comment]
  id: ID!
  text: String!
  tags: [String]
}
comments.graphql
type Comment {
  id: ID!
  text: String!
}
You can see the full documentation here.
Check 'apollo-universal-starter-kit'; it should be adaptable to Gatsby.
To my knowledge that's not possible out of the box. The only way to split up queries is with Fragments:
https://www.gatsbyjs.org/docs/querying-with-graphql/#fragments
However if you want to get a discussion started I'd recommend opening an issue on GitHub.
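To illustrate the fragment approach mentioned above, here is a minimal sketch (the fragment and field names are hypothetical; in Gatsby, exported named fragments are discovered automatically and can be spread into page queries):

```graphql
# In a component file, export a named fragment:
fragment SiteInformation on Site {
  siteMetadata {
    title
  }
}

# In the page query, spread it:
query {
  site {
    ...SiteInformation
  }
}
```

This doesn't move the whole query into its own .graphql file, but it does let you share and co-locate the reusable parts of queries across pages.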

Getting the puppet module pcfens/filebeat to work

I'm new to this site and to Puppet. I'm trying to set up a Puppet module for Filebeat. I want Linux nodes to send logs to Logstash using this module.
I want a configuration that looks something like this:
class { 'filebeat':
  outputs => {
    'logstash' => {
      'hosts' => [
        '<FQDN>:5044',
      ],
      'enabled' => true,
    },
  },
}

filebeat::prospector { 'syslogs':
  paths => [
    '/var/log/*.log',
    '/var/log/messages',
  ],
  doc_type => 'syslog-beat',
}
Does anyone have experience with this module, or with Puppet in general, who can tell me how to set it up with the configuration above? I feel clueless right now and can't seem to find much documentation about this module. I would really appreciate a push in the right direction.
You probably want to ask about how to start your Control Repo. But before you do that, make sure you read up on the Roles and Profiles design pattern.
To get you started, you will start with something like this:
$ tree
.
└── modules
    ├── profile
    │   └── manifests
    │       ├── base
    │       │   └── filebeat.pp
    │       └── base.pp
    └── role
        └── manifests
            ├── base
            └── myrole.pp

7 directories, 3 files
(Obviously, as you can see from the example I linked above, it is going to have a lot more in it eventually.)
Then your base class:
$ cat modules/profile/manifests/base.pp
class profile::base {
  include profile::base::filebeat
}
Which includes (the code you wrote above):
$ cat modules/profile/manifests/base/filebeat.pp
class profile::base::filebeat {
  class { 'filebeat':
    outputs => {
      'logstash' => {
        'hosts' => [
          '<FQDN>:5044',
        ],
        'enabled' => true,
      },
    },
  }
  filebeat::prospector { 'syslogs':
    paths => [
      '/var/log/*.log',
      '/var/log/messages',
    ],
    doc_type => 'syslog-beat',
  }
}
Your role:
$ cat modules/role/manifests/myrole.pp
class role::myrole {
  include profile::base
}
Now, you can test the code on the local host just by ensuring that your modules directory gets copied one way or another into Puppet's modulepath.
If so, try:
# puppet module install pcfens/filebeat
# puppet apply -e 'include role::myrole'
Provided you installed Puppet correctly, and your code above is correct, that should get you started.

Can't get gulp-ruby-sass or gulp-sass to work at all

I'm trying to use gulp-ruby-sass and/or gulp-sass, but neither is working for me, and I think I've got it all set up correctly. I've looked at a bunch of other SO posts but nothing has worked for me yet.
I've got another gulp task which is recursively copying an assets directory and index.html from src to dist and this works every time.
To test that the Sass setup is correct, I run a vanilla sass compile and then run gulp; the Sass changes work and render via the recursive copy. Here are the commands for that Sass test:
$ sass ./sass/main.scss ./src/assets/css/main.css
$ gulp
Forgetting the vanilla sass test and getting back to the gulp-sass issue: in my gulpfile I'm running the gulp Sass task before the recursive copy task, so if it worked, the Sass changes should be applied and copied. At least that's what I thought.
Here's my dir structure showing relevant files:
├── src
│   ├── index.html
│   └── assets
│       ├── css
│       │   └── main.css
│       ├── js
│       │   └── app.js
│       └── img
│           └── etc.jpg
│
├── dist
│   ├── index.html (from ./src via recursive copy)
│   └── assets
│       └── (same as ./src/assets via recursive copy)
│
├── sass
│   ├── main.scss
│   ├── _partial1.scss
│   ├── _partial2.scss
│   └── etc ...
│
├── gulpfile.js
│
├── node_modules
│   └── etc ...
│
└── bower_components
    └── etc ...
In gulpfile.js there are a couple of file-mapping objects which work fine for the recursive copy of src/assets/. But for the sake of testing the gulp-ruby-sass task, I'm hard-coding the sass/css paths to rule out the file mapping as a source of error.
For the record, I'm running OS X Mavericks 10.9.5 and think I have the correct environment set up:
$ ruby -v
ruby 2.0.0p481 (2014-05-08 revision 45883) [universal.x86_64-darwin13]
$ sass -v
Sass 3.4.9 (Selective Steve)
Here's my gulpfile.js showing the approaches I've tried so far, with the gulp-sass related task commented out:
var gulp = require('gulp');
var watch = require('gulp-watch');
var gsass = require('gulp-ruby-sass');
// var gsass = require('gulp-sass');
var gutil = require('gulp-util');

// Base paths:
var basePaths = {
  srcRoot: './src/',
  distRoot: './dist/',
  bowerRoot: './bower_components/'
};

// File paths:
var filePaths = {
  sassRoot: basePaths.srcRoot + 'sass/',
  assetsBuildRoot: basePaths.srcRoot + 'assets/',
  jqMin: basePaths.bowerRoot + 'jquery/dist/jquery.min.js',
  html: basePaths.srcRoot + 'index.html'
};

// With gulp-ruby-sass
gulp.task('compile-sass', function() {
  gulp.src('./sass/main.scss')
    .pipe(gsass({sourcemap: true, sourcemapPath: './sass/'}))
    .on('error', function (err) { console.log(err.message); })
    .pipe(gulp.dest('./src/assets/css'));
});

// With gulp-sass
// gulp.task('gsass', function () {
//   gulp.src('./sass/*.scss')
//     .pipe(gsass())
//     .pipe(gulp.dest('./src/assets/css'));
// });

// Assets directory copied recursively from /src to /dist:
gulp.src(filePaths.assetsBuildRoot + '**/*.*', {base : basePaths.srcRoot})
  .pipe(gulp.dest(basePaths.distRoot));

// Copy index.html from /src to /dist:
gulp.src(filePaths.html)
  .pipe(gulp.dest(basePaths.distRoot));

gulp.task('default', function() {
  // With gulp-ruby-sass
  // return gulp.src('./sass/main.scss')
  //   .pipe(gsass({sourcemap: true, sourcemapPath: './sass/'}))
  //   .on('error', function (err) { console.log(err.message); })
  //   .pipe(gulp.dest('./src/assets/css'));
  // gulp.watch('compile-sass');
  console.log('You reached the finishing line');
});
I have tried all sorts of things to debug this, e.g.:
Removing all of the .css files produced by the vanilla sass compile and running the gulp compile, but no .css is produced.
Also tried removing all of the *.map files generated by the vanilla sass compile, then running gulp, but no dice.
Can anyone see anything glaringly and obviously wrong?
Thanks in advance.
If you are using Sass >= 3.4, you will need to install gulp-ruby-sass version 1.0.0-alpha:
npm install --save-dev gulp-ruby-sass@1.0.0-alpha
In this new version, gulp-ruby-sass is a gulp source adapter and the syntax has changed slightly. Instead of:
gulp.task('compile-sass', function() {
  gulp.src('./sass/main.scss')
    // task code here
});
The new syntax is:
gulp.task('compile-sass', function() {
  return sass('./sass/main.scss')
    // task code here
});
You can find more info in the new version documentation including the new syntax for sourcemaps. https://github.com/sindresorhus/gulp-ruby-sass/tree/rw/1.0
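Since the original gulpfile passed sourcemap options to the old API, here is a sketch of what the task might look like under the 1.0.0-alpha syntax, where sourcemaps are handled via gulp-sourcemaps (treat the exact option names as assumptions to verify against the linked documentation):

```javascript
var gulp = require('gulp');
var sass = require('gulp-ruby-sass');
var sourcemaps = require('gulp-sourcemaps');

gulp.task('compile-sass', function () {
  // In 1.0, gulp-ruby-sass is a source adapter: it replaces gulp.src()
  return sass('./sass/main.scss', { sourcemap: true })
    .on('error', function (err) { console.error(err.message); })
    .pipe(sourcemaps.write())           // inline the sourcemap
    .pipe(gulp.dest('./src/assets/css'));
});
```

Note that returning the stream from the task (rather than just calling it) is what lets gulp know when the task has finished.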
