I'm using Serverless Stack (SST) and am now attempting to add a Lambda custom authorizer that validates authorization tokens with Auth0 and adds custom data to my request context when authentication passes.
Everything mostly works at this point, except when I cache the authorizer response for the same token.
I'm using a 5-second cache for development. The first request with a valid token goes through as it should. The next requests in the 5-second window fail with a mysterious 500 error without ever reaching my code.
Authorizer configuration
// MyStack.ts
const authorizer = new sst.Function(this, "AuthorizerFunction", {
  handler: "src/services/Auth/handler.handler",
});

const api = new sst.Api(this, "MarketplaceApi", {
  defaultAuthorizationType: sst.ApiAuthorizationType.CUSTOM,
  defaultAuthorizer: new HttpLambdaAuthorizer("Authorizer", authorizer, {
    authorizerName: "LambdaAuthorizer",
    resultsCacheTtl: Duration.seconds(5), // <-- this is the cache config
  }),
  routes: {
    "ANY /{proxy+}": "APIGateway.handler",
  },
});
Authorizer handler
const handler = async (event: APIGatewayAuthorizerEvent): Promise<APIGatewayAuthorizerResult> => {
  // Authenticates with Auth0 and serializes the context data I want to
  // forward to the underlying service
  const authentication = await authenticate(event);
  const context = packAuthorizerContext(authentication.value);

  const result: APIGatewayAuthorizerResult = {
    principalId: authentication.value?.id || "unknown",
    policyDocument: buildPolicy(authentication.isSuccess ? "Allow" : "Deny", event.methodArn),
    context, // context has the following shape:
    // {
    //   info: {
    //     id: string,
    //     marketplaceId: string,
    //     roles: string,
    //     permissions: string
    //   }
    // }
  };

  return result;
};
CloudWatch logs
Every uncached request succeeds, with status code 200, an integration ID and everything, as it's supposed to. Every other request during the 5-second cache window fails with a 500 error code and no integration ID, meaning it never reaches my code.
Any tips?
Update
I just found this in the api-gateway.d.ts @types file (pay attention to the comments, please):
// Poorly documented, but API Gateway will just fail internally if
// the context type does not match this.
// Note that although non-string types will be accepted, they will be
// coerced to strings on the other side.
export interface APIGatewayAuthorizerResultContext {
  [name: string]: string | number | boolean | null | undefined;
}
And I did have this problem before I could get the Authorizer to work in the first place. I had my roles and permissions properties as string arrays, and I had to transform them to plain strings. Then it worked.
Lo and behold, I just ran a test right now, removing the context information I was returning for successfully validated tokens and now the cache is working 😔 every request succeeds, but I do need my context information...
Maybe there's a max length for the context object? Please let me know of any restrictions on the context object. As the @types file states, that thing is poorly documented. These are the only docs I know about.
The issue is that none of the context object values may contain "special" characters.
Your context object must be something like:
"context": {
  "someString": "value",
  "someNumber": 1,
  "someBool": true
},
You cannot set a JSON object or array as a valid value of any key in the context map. The only valid value types are string, number and boolean.
In my case, though, I needed to send a string array.
I tried to get around the type restriction by JSON-serializing the array, which produced "[\"valueA\",\"valueB\"]" and, for some reason, AWS didn't like it.
TL;DR
What solved my problem was using myArray.join(",") instead of JSON.stringify(myArray)
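For illustration, here's a minimal sketch of the idea (the flat context shape and the unpack helper are illustrative, not an AWS API): arrays get flattened to comma-separated strings before going into the authorizer context, and are split back apart in the downstream service.

// Sketch only: every context value must end up as a plain string/number/boolean.
type AuthInfo = { id: string; marketplaceId: string; roles: string[]; permissions: string[] };

function packAuthorizerContext(info: AuthInfo) {
  return {
    id: info.id,
    marketplaceId: info.marketplaceId,
    roles: info.roles.join(","),             // not JSON.stringify(info.roles)
    permissions: info.permissions.join(","),  // not JSON.stringify(info.permissions)
  };
}

// In the service behind the authorizer (illustrative helper):
function unpackAuthorizerContext(ctx: Record<string, string>) {
  return {
    id: ctx.id,
    marketplaceId: ctx.marketplaceId,
    roles: (ctx.roles ?? "").split(",").filter(Boolean),
    permissions: (ctx.permissions ?? "").split(",").filter(Boolean),
  };
}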
Related
I'm trying to do a performance test on a SPA with a React frontend, deployed with Netlify.
As a backend we're using Hasura Cloud GraphQL (standard version) https://hasura.io/, where everything from the client goes directly through Hasura to the DB.
The DB is Postgres, hosted on Heroku (Standard 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm lost about how to do it, or whether I'm doing it correctly, since most of our operations are subscriptions/mutations that I had to transform into queries. I tried doing these tests with k6 and JMeter, but I'm not sure if I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 commonly used subscriptions. Then I tried to create a performance test with k6 (https://k6.io/docs/using-k6/http-requests/), but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:
import http from 'k6/http';
import { sleep } from 'k6';

// prod, listaQueries, keys and params are defined elsewhere in my script

export const options = {
  stages: [
    { duration: '30s', target: 75 },
    { duration: '120s', target: 75 },
    { duration: '60s', target: 50 },
    { duration: '30s', target: 30 },
    { duration: '10s', target: 0 }
  ]
};

export default function () {
  const res = http.post(prod,
    JSON.stringify({
      query: listaQueries.GetDesafiosCursosByKey(
        keys.desafioCursoKey
      )}), params);
  sleep(1);
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production. (The only difference, afaik, is that we're using Hasura Cloud for production.)
I tried to implement WebSockets, but I couldn't get them to work or configure them to run a stress/load test.
K6 result
JMeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog that JMeter doesn't support it:
https://qainsights.com/deep-dive-into-graphql-in-jmeter/), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than with k6.
JMeter query config 1
JMeter query config 2
JMeter thread config
Questions
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is the right approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test? There were some tests that caused a GraphQL error on the website, and others produced
"WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF""
in the k6 console.
Or should I just give up on k6/JMeter and look for another tool to perform these tests?
Thank you in advance, and sorry for my English and explanation, but I'm a complete newbie at this.
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is the right approach. (At least I know that those queries return the data correctly.)
Ideally you would be using WebSocket as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the Httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
  baseURL: `http://54.227.75.222:8080`
});

const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
  session.addHeader('Content-Type', 'application/json');

  getBikeBrandsByIdSub(1);
}
// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// WebSocket endpoint used by the subscription below
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
  const query = `
  subscription getBikeBrandsByIdSub {
    bikes_brands(where: {id: {_eq: ${id}}}) {
      id
      brand
      notes
      updated_at
      created_at
    }
  }
  `;

  const subscribePayload = {
    id: "1",
    payload: {
      extensions: {},
      operationName: "getBikeBrandsByIdSub",
      query: query,
      variables: {},
    },
    type: "start",
  };

  const initPayload = {
    payload: {
      headers: {
        "content-type": "application/json",
      },
      lazy: true,
    },
    type: "connection_init",
  };

  console.debug(JSON.stringify(subscribePayload));

  // start a WS connection
  const res = ws.connect(wsUri, initPayload, function (socket) {
    socket.on('open', function () {
      console.debug('WS connection established!');

      // send the connection_init:
      socket.send(JSON.stringify(initPayload));

      // send the subscription:
      socket.send(JSON.stringify(subscribePayload));
    });

    socket.on('message', function (message) {
      let messageObj;
      try {
        messageObj = JSON.parse(message);
      }
      catch (err) {
        console.warn('Unable to parse WS message as JSON: ' + message);
        return;
      }

      if (messageObj.type === 'data') {
        console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
      }

      console.log(`WS message received by VU ${__VU}:\n` + message);
    });
  });
}
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? These are basically the server sending back incomplete responses/closing connections early which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser instead of trying to come up with individual GraphQL queries.
With regards to the differences between JMeter and k6: it might be the case that k6 produces higher throughput on the same hardware when running requests at maximum speed, as suggested by the benchmark in the Open Source Load Testing Tools 2021 article. However, given that you're trying to simulate real users using real browsers accessing your application, and real users don't hammer the application non-stop (they need some time to "think" between operations), you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want to conduct, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.
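For illustration, a minimal k6 sketch of the "think time" idea, with a randomized pause between iterations (the endpoint and query here are placeholders):

import http from 'k6/http';
import { sleep } from 'k6';

// Placeholder endpoint and query; substitute your own.
const GRAPHQL_URL = 'https://example.com/v1/graphql';

export const options = {
  stages: [
    { duration: '60s', target: 50 },
    { duration: '120s', target: 50 },
    { duration: '30s', target: 0 },
  ],
};

export default function () {
  http.post(
    GRAPHQL_URL,
    JSON.stringify({ query: '{ __typename }' }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  // Real users pause between actions, so each VU "thinks" for 2-6 seconds
  // before the next iteration instead of hammering the server non-stop.
  sleep(2 + Math.random() * 4);
}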
I am following Google's tutorial on setting up a Google Cloud Endpoint (not AWS API Gateway) in front of my Cloud Function. My Cloud Function triggers an AWS Lambda function, and I am trying to pass a path parameter from my endpoint as defined by the OpenAPI spec.
Path parameters are variable parts of a URL path. They are typically used to point to a specific resource within a collection, such as a user identified by ID. A URL can have several path parameters, each denoted with curly braces { }.
paths:
  /users/{id}:
    get:
      parameters:
        - in: path
          name: id   # Note the name is the same as in the path
          required: true
          schema:
            type: integer
GET /users/{id}
My openapi.yaml
swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
  version: 1.0.0
host: HOST
x-google-endpoints:
  - name: "HOST"
    allowCors: "true"
schemes:
  - https
produces:
  - application/json
paths:
  /function1/{pathParameters}:
    get:
      operationId: function1
      parameters:
        - in: path
          name: pathParameters
          required: true
          type: string
      x-google-backend:
        address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1
      responses:
        '200':
          description: A successful response
          schema:
            type: string
The error I get when I use Endpoint URL https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1/conversations is a TypeError from my AWS lambda function
StatusCode:200, FunctionError: "Unhandled", ExecutedVersion: "$LATEST". Payload: "errorType":"TypeError", errorMessage:"Cannot read property 'startsWith' of undefined..."
It is saying that on line
var path = event.pathParameters;
...
...
if (path.startsWith('conversations/')) {...}
my path var is undefined.
I initially thought my Google Function was not correctly passing pathParameters, but when I tested my Google Function using the triggering event {"pathParameters":"conversations"}, my Lambda returned the payload successfully.
My Google Cloud Function:
let AWS = require('aws-sdk');

AWS.config.update({
  accessKeyId: 'key',
  secretAccessKey: 'secret',
  region: 'region'
})

let lambda = new AWS.Lambda();

exports.helloWorld = async (req, res) => {
  let params = {
    FunctionName: 'lambdafunction',
    InvocationType: "RequestResponse",
    Payload: JSON.stringify(req.body)
  };

  res.status(200).send(await lambda.invoke(params, function (err, data) {
    if (err) { throw err }
    else {
      return data.Payload
    }
  }).promise());
}
EDIT 1:
After seeing this Google Group post, I tried adding path_translation: APPEND_PATH_TO_ADDRESS to my openapi.yaml file, yet I'm still having no luck.
...
paths:
  /{pathParameters}:
    get:
      ...
      x-google-backend:
        address: https://tomy.cloudfunctions.net/function-Name
        path_translation: APPEND_PATH_TO_ADDRESS
@Arunmainthan Kamalanathan mentioned in the comments that testing in AWS and Google Cloud directly with the trigger event {"pathParameters":"conversations"} is not equivalent to passing req.body from my Google function to the AWS Lambda. I think this is where my error is occurring: I'm not correctly passing my path parameter in the payload.
EDIT 2:
There is this Stack Overflow post about passing route parameters to Cloud Functions using req.path. When I console.log(req.path) I get /, and console.log(req.params) gives {'0': ''}, so for some reason my path parameter is not getting passed correctly from the Cloud Endpoints URL to my Google function.
I was running into the same issue when specifying multiple paths/routes within my openapi.yaml. It all depends on where you place the x-google-backend (top-level versus operation-level). This has implications for the behaviour of path_translation. You can also override path_translation yourself, as the documentation describes:
path_translation: [ APPEND_PATH_TO_ADDRESS | CONSTANT_ADDRESS ]
Optional. Sets the path translation strategy used by ESP when making target backend requests.
Note: When x-google-backend is used at the top level of the OpenAPI specification, path_translation defaults to APPEND_PATH_TO_ADDRESS, and when x-google-backend is used at the operation level of the OpenAPI specification, path_translation defaults to CONSTANT_ADDRESS. For more details on path translation, please see the Understanding path translation section.
This means that if you want the path to be appended as a path parameter instead of a query parameter in your backend, you should adhere to the following scenarios:
Scenario 1: Do you have one cloud function in the x-google-backend.address that handles all of your paths in the openapi specification? Put x-google-backend at the top-level.
Scenario 2: Do you have multiple cloud functions corresponding to different paths? Put x-google-backend at the operation-level and set x-google-backend.path_translation to APPEND_PATH_TO_ADDRESS.
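For example, a rough sketch of scenario 1, with x-google-backend at the top level (host and address are placeholders, following the same layout as the openapi.yaml above):

swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  version: 1.0.0
host: HOST
schemes:
  - https
x-google-backend:
  # top level: path_translation defaults to APPEND_PATH_TO_ADDRESS,
  # so the request path (e.g. /function1/conversations) is appended to this address
  address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net
paths:
  /function1/{pathParameters}:
    get:
      operationId: function1
      parameters:
        - in: path
          name: pathParameters
          required: true
          type: string
      responses:
        '200':
          description: A successful response
          schema:
            type: string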
When your invocation type is RequestResponse, you can access the payload directly from the event parameter of the Lambda.
Look at the Payload parameter of the Google function:
let params = {
  FunctionName: 'lambdafunction',
  InvocationType: "RequestResponse",
  Payload: JSON.stringify({ name: 'Arun' })
};

res.status(200).send(await lambda.invoke(params)...)
Also the Lambda part:
exports.handler = function (event, context) {
  context.succeed('Hello ' + event.name);
};
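Applied to the original question: once the path actually reaches the Cloud Function (see the path_translation answer above), a rough sketch of forwarding it to the Lambda as pathParameters could look like this (function and field names are taken from the question; Cloud Functions HTTP handlers receive Express request/response objects, hence the Express types; error handling omitted):

// Sketch only: forward the request path to the Lambda as event.pathParameters.
import * as AWS from 'aws-sdk';
import type { Request, Response } from 'express';

const lambda = new AWS.Lambda({ region: 'region' }); // credentials configured as before

export const helloWorld = async (req: Request, res: Response) => {
  const params = {
    FunctionName: 'lambdafunction',
    InvocationType: 'RequestResponse',
    // strip the leading "/" so the Lambda sees e.g. "conversations"
    Payload: JSON.stringify({ pathParameters: req.path.replace(/^\//, '') }),
  };

  const data = await lambda.invoke(params).promise();
  res.status(200).send(data.Payload);
};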
I hope this helps.
I am developing APIs & microservices in NestJS.
This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
  return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, DTO validation works fine, but when I call it via the microservice, validation does not work and the request passes through to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
  @IsNotEmpty()
  action: string;

  // @IsString()
  readonly rec_id: string;

  @IsNotEmpty()
  readonly data: Object;

  extras: Object;
  // readonly extras2 : Object;
}
When I call the API without the action parameter, it shows an error that action is required, but when I call this from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = {id: '5d1de5d787db5151903c80b9', extras:{'asdf':'dsf'}};
return this.client.send<number>(pattern, data)
it does not throw an error and goes through to the service.
I have also added a global validation pipe:
app.useGlobalPipes(new ValidationPipe({
  disableErrorMessages: false, // set true to hide detailed error messages
  whitelist: false, // set true to strip params which are not in the DTO
  transform: false // set true if you want the DTO to convert params to the DTO class; by default it's false
}));
How can I make it work for both the API & the microservice? I need everything in one place with the same functionality, so that it can be called by either kind of client.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException.
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    return new RpcException(exception.getResponse());
  }
}
@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
  @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDto,
) {
  // payload validates to SomeDto
  . . .
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and microservices. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule, or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
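For reference, binding the pipe globally from the AppModule uses the standard APP_PIPE provider token; a minimal sketch:

// app.module.ts -- bind ValidationPipe application-wide via the APP_PIPE token
import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
  providers: [
    {
      provide: APP_PIPE,
      useClass: ValidationPipe,
    },
  ],
})
export class AppModule {}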
As I understand it, useGlobalPipes works fine for the API but not for the microservice.
The reason is that a Nest application set up this way is a hybrid application, and hybrid applications have some restrictions. Please refer to the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to the official Nest documentation.
So, you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
  {
    transport: Transport.TCP,
  },
  { inheritAppConfig: true },
);
It worked for me!
I'm trying the new request verification process for Slack API on AWS Lambda but I can't produce a valid signature from a request.
The example shown in https://api.slack.com/docs/verifying-requests-from-slack is for a slash command, but I'm using it for an event subscription, specifically a subscription to a bot event (app_mention). Does the new process support event subscriptions as well?
If so, am I missing something?
Here is my mapping template for the integration request in API Gateway. I can't get the raw request as the Slack documentation suggests, but I did my best like this:
{
  "body" : $input.body,
  "headers": {
    #foreach($param in $input.params().header.keySet())
    "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
    #end
  }
}
My function for verification:
def is_valid_request(headers, body):
    logger.info(f"DECODED_SECRET: {DECODED_SECRET}")
    logger.info(f"DECRYPTED_SECRET: {DECRYPTED_SECRET}")

    timestamp = headers.get(REQ_KEYS['timestamp'])
    logger.info(f"timestamp: {timestamp}")

    encoded_body = urlencode(body)
    logger.info(f"encoded_body: {encoded_body}")

    base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}"
    logger.info(f"base_str: {base_str}")

    base_b = bytes(base_str, 'utf-8')
    dgst_str = hmac.new(DECRYPTED_SECRET, base_b, digestmod=sha256).hexdigest()
    sig_str = f"{SLACK_API_VER}={dgst_str}"
    logger.info(f"signature: {sig_str}")

    req_sig = headers.get(REQ_KEYS['sig'])
    logger.info(f"req_sig: {req_sig}")
    logger.info(f"comparing: {hmac.compare_digest(sig_str, req_sig)}")

    return hmac.compare_digest(sig_str, req_sig)
Lambda Log in CloudWatch. I can't show the values for security reasons but it seems like each variable/constant has a reasonable value:
DECODED_SECRET: ...
DECRYPTED_SECRET: ...
timestamp: 1532011621
encoded_body: ...
base_str: v0:1532011621:token= ... &team_id= ... &api_app_id= ...
signature: v0=3 ...
req_sig: v0=1 ...
comparing: False
The signature should match req_sig, but it doesn't. I guess there is something wrong with base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}" (I mean the concatenation or the urlencoding of the request body), but I'm not sure.
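For reference, my understanding of the computation described in the Slack docs is that the base string must use the raw request body exactly as received, not a re-encoded version. A minimal sketch in Node, with placeholder names, just to illustrate the shape of it:

// Sketch only: v0:<timestamp>:<raw body exactly as Slack sent it>
import { createHmac, timingSafeEqual } from 'crypto';

function isValidSlackRequest(signingSecret: string, timestamp: string, rawBody: string, receivedSig: string): boolean {
  const baseString = `v0:${timestamp}:${rawBody}`;
  const computedSig = 'v0=' + createHmac('sha256', signingSecret).update(baseString).digest('hex');
  return (
    computedSig.length === receivedSig.length &&
    timingSafeEqual(Buffer.from(computedSig), Buffer.from(receivedSig))
  );
}

Thank you in advance!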
My Parse app has a GiftCode collection which disallows the find operation at the class-level.
I am writing a beforeSave cloud function that prevents duplicate codes from being entered by our team from Parse's dashboard:
Parse.Cloud.beforeSave('GiftCode', function (req, res) {
  Parse.Cloud.useMasterKey();

  const code = req.object.get('code');

  if (!code) {
    res.success();
  } else {
    const finalCode = code.toUpperCase().trim();
    req.object.set('code', finalCode);

    (new Parse.Query('GiftCode'))
      .equalTo('code', finalCode)
      .first()
      .then((gift) => {
        if (!gift) {
          res.success();
        } else {
          res.error(`GiftCode with code=${finalCode} already exists (objectId=${gift.id})`);
        }
      }, (err) => {
        console.error(err);
        res.error(err);
      });
  }
});
As you can see, I am calling Parse.Cloud.useMasterKey() (and this is running in the Parse cloud), but I am still getting the following error:
This user is not allowed to perform the find operation on GiftCode.
I use useMasterKey() in other normal cloud functions and am able to perform find operations as needed.
Is useMasterKey() not applicable to beforeSave functions?
I've never tried to use the master key in a beforeSave function, but I wouldn't be surprised if there are some extra safeguards in place to prevent it. From a security standpoint, it seems like it could make all write-based CLPs and ACLs worthless for that class.
Try selectively using the master key by passing it as an option to the query, like so:
(new Parse.Query('GiftCode'))
  .equalTo('code', finalCode)
  .first({ useMasterKey: true })
  .then((gift) => {
    ...
Parse.Cloud.useMasterKey(); has been deprecated since Parse Server version 2.3.0 (Dec 7, 2016). From that version on, it is a no-op (it does nothing). You should now pass the { useMasterKey: true } optional parameter to each of the methods that needs to override ACLs or CLPs in your code.
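For illustration, a minimal sketch of what "per call" looks like with the current Parse JS SDK (the class and variable names come from the question; this would sit inside an async cloud function):

// Each privileged operation takes its own { useMasterKey: true } option.
const query = new Parse.Query('GiftCode');
query.equalTo('code', finalCode);

const existing = await query.first({ useMasterKey: true }); // bypasses the find CLP

// The same option works on other methods, e.g.:
// await someObject.save(null, { useMasterKey: true });
// await someObject.destroy({ useMasterKey: true });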