I am attempting to use a Node-based Lambda function to return JPEG images from S3 through API Gateway.
My Lambda function reads as follows:
// Runs on the Lambda Node.js runtime, which bundles the AWS SDK v2
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  // params ({ Bucket, Key }) is built from the request earlier in the handler
  s3.getObject(params).promise().then((result) => {
    const resp = {
      statusCode: 200,
      headers: {
        'Content-Type': 'image/jpeg'
      },
      // result.Body is a Buffer, so this yields a base64 string
      body: result.Body.toString('base64'),
      isBase64Encoded: true
    };
    callback(null, resp);
  });
};
I have also set the content handling of the integration response in API Gateway to "Convert to binary (if needed)". When I test this function, I receive the error "Execution failed due to configuration error: Unable to base64 decode the body.".
Is there a step I am missing that would allow me to retrieve base64-encoded files?
I'm not sure about this, but have you tried using the following instead of calling toString directly on your object?
Buffer.from(result.Body).toString('base64')
It sounds like you're using the AWS integration type in API Gateway instead of the LAMBDA integration, and in that case API Gateway expects the entire message to be base64 encoded, not just the body. For your use case you should probably use the LAMBDA integration and return JSON with statusCode, headers (including Content-Type), and body, as you currently do.
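If you do switch to the LAMBDA (proxy) integration, the content type also has to be registered as a binary media type on the REST API before isBase64Encoded takes effect. A minimal sketch with the Node AWS SDK (the restApiId is a placeholder):
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

apigateway.updateRestApi({
  restApiId: 'abc123', // placeholder: your REST API id
  patchOperations: [
    // '/' inside the media type is escaped as '~1' in the patch path
    { op: 'add', path: '/binaryMediaTypes/image~1jpeg' },
  ],
}).promise().then(() => console.log('binary media type registered'));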
I have code that calls a vendor API to do a form-data upload of a file via axios from inside an AWS Lambda. The call returns a 400 error. If I run the code locally with the same Node version (v14), it works. I want to capture both raw requests and compare them for differences. How do I capture both raw requests? I've tried ngrok and Pipedream, but they don't show the raw request; they decode the request and the file.
const fs = require("fs");
const axios = require("axios");
const FormData = require("form-data");

// doc, url, token and APIBASE are defined elsewhere
let response = null;
try {
  const newFile = fs.createReadStream(doc);
  const formData = new FormData();
  formData.append("file", newFile);
  formData.append("url", url);
  const headers = {
    Authorization: "Bearer " + token,
    ...formData.getHeaders(),
  };
  console.log("Headers: ", headers);
  response = await axios.post(`${APIBASE}/file/FileUpload`, formData, {
    headers,
  });
  console.log("file upload response", response);
} catch (err) {
  console.log("fileupload error at API", err);
}
You might be able to just use a custom request interceptor and interrogate the requests that way.
https://axios-http.com/docs/interceptors
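For instance, a minimal logging interceptor (it only inspects the config and passes it through unchanged):
// Log every outgoing request before axios sends it
axios.interceptors.request.use((config) => {
  console.log("Request:", config.method, config.url);
  console.log("Headers:", config.headers);
  return config; // the config must be returned for the request to proceed
});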
You're not able to capture the request on the network level, as this is totally controlled by AWS. Maybe there's a way to do this when running in a VPC, but I don't think so.
You could simply use a tool such as axios debug logger to print out all of the request and response contents (including headers etc) before the request is made/after the response has arrived. This might provide some more information as to where things are going wrong.
As to the cause of the problem, it is difficult to help you there since you haven't shared the error message nor do we know anything about the API you're trying to call.
There are multiple ways to debug:
1. axios debug logger (a setup sketch follows below).
2. AWS CloudWatch, where you can see all the logs; you can capture the request and response there.
3. Use Postman to call the prod Lambda endpoint and verify the response.
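A sketch of option 1, assuming the axios-debug-log package's documented setup: it has to be required before axios so it can attach its logging, and output is switched on through the DEBUG environment variable.
// Run with: DEBUG=axios node app.js
require("axios-debug-log"); // must come before axios is loaded
const axios = require("axios");

// From here on, axios requests and responses are logged in full.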
I built a REST service generated from a proto file with rpc.
I succeeded in receiving a single event as follows:
rpc PostEvent(Event) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "*"
  };
}
and it works - it converts a JSON object to an Event{} struct.
My question is: how do I do the same thing when I want to receive an array of Event{}s?
This could work:
message EventsWrapper {
  repeated Event events = 1;
}

rpc PostEvents(EventsWrapper) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "*"
  };
}
But then it will expect JSON like:
{"events":[{},..,{}]}
While I receive only:
[{},..,{}]
I don't control the way I receive the call. Any ideas how I can tweak my code to handle such an array call?
If you need to accept a JSON array, and the gRPC transcoding implementation you use supports it, you can use the body attribute to specify a repeated field that is mapped to the request body, instead of using *.
Quoting from the Google HttpRule API documentation:
If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.
body — The name of the request field whose value is mapped to the HTTP request body, or * for mapping all request fields not captured by the path pattern to the HTTP body, or omitted for not having any HTTP request body.
In other words, you would define your service as:
message EventsWrapper {
  repeated Event events = 1;
}

// ...

rpc PostEvents(EventsWrapper) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "events"
  };
}
I believe this is supported by at least gRPC-Gateway (since PR #712) and likely also Envoy Proxy's gRPC-JSON transcoder. The latter also supports translating between JSON arrays and streaming gRPC methods, so if you define an rpc PostEvent(stream Event) returns (google.protobuf.Empty) method, Envoy will expect an array of Event objects in the request (and possibly even stream the translated messages as they arrive).
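For illustration, a hypothetical streaming variant under Envoy's transcoder could look like this (the method name is a placeholder):
// Client-streaming method: Envoy's gRPC-JSON transcoder maps a JSON
// array in the request body to the stream of Event messages
rpc PostEventStream(stream Event) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "*"
  };
}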
I am trying to get an access token from MYOB. The POST call I make returns a "400 Bad Request" error.
I'm using axios to make the POST call.
I already got the access code, which I use in the data I'm sending in the POST call.
Here is my code:
const config = { headers: { 'Content-Type': "application/x-www-form-urlencoded" } }
const data = {
  client_id: "xxxxxxxxxxxxxxxxxxxxxxx",
  client_secret: "xxxxxxxxxxxxxxxxxxxxx",
  scope: "CompanyFile",
  code: code,
  redirect_uri: "http%3A%2F%2Flocalhost%3A30002Fcallback",
  grant_type: "authorization_code"
}
axios.post("https://secure.myob.com/oauth2/v1/authorize", data, config)
  .then((res) => {
    console.log("response ...............", res)
  })
  .catch((error) => {
    console.error("Error here is ........", error)
  })
Axios will, by default, serialize your data fields as JSON, which is not correct here.
Instead, you want to url encode them and post the url-encoded string in the HTTP body. See the 'example call' in the docs.
There's a good example of how to url encode w/ axios here.
I also note that your redirect_uri field is already url-encoded, so encoding it a second time produces something like http%253A%252F%252Flocalhost, which is not correct. Double-check your URL encoding against the example call to make sure you're not accidentally encoding certain fields twice. From memory, the access code is already encoded appropriately, so you might need to decode it before re-encoding to get it working.
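Putting both points together, a minimal sketch using Node's built-in URLSearchParams, which percent-encodes each field exactly once (the credentials are placeholders and redirect_uri is written in plain, un-encoded form):
const config = { headers: { "Content-Type": "application/x-www-form-urlencoded" } };

const params = new URLSearchParams({
  client_id: "xxxxxxxxxxxxxxxxxxxxxxx",
  client_secret: "xxxxxxxxxxxxxxxxxxxxx",
  scope: "CompanyFile",
  code: code, // assumes the access code is not already percent-encoded
  redirect_uri: "http://localhost:3000/callback", // plain URI, encoded once on serialization
  grant_type: "authorization_code",
});

axios.post("https://secure.myob.com/oauth2/v1/authorize", params.toString(), config)
  .then((res) => console.log("response ...............", res.data))
  .catch((error) => console.error("Error here is ........", error));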
I would like to download a PDF file using a Node.js Lambda function deployed in AWS. Please let me know the configuration to be provided in the serverless settings .yaml file.
I am able to download the PDF by making the following configuration changes from the console:
1) Add Content-Type as application/pdf
2) Map the response model for application/pdf => Empty
3) Change the content handling in the integration response from passthrough (default) to Convert to Binary
I am looking for options where the content handling and response model can be set in the serverless configuration file.
Below is the snippet from serverless.yml
events:
  - http:
      path: /test
      method: get
      integration: lambda
      response:
        statusCodes:
          200:
            pattern: '' # Default response method
            headers:
              Content-Type: "'application/pdf'"
In your Lambda function, you have to return a JSON object like this:
{
  statusCode: 200,
  headers: { 'Content-Type': 'application/pdf' },
  body: YOUR_PDF_base64_encoded_string,
  isBase64Encoded: true, // important
};
Then you can use the serverless-apigw-binary plugin to configure API Gateway binary support, or you can do it manually (change the API Gateway setting), using application/pdf instead of my image MIME types.
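A sketch of the plugin route in serverless.yml, assuming serverless-apigw-binary's documented configuration keys:
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types: # media types API Gateway should treat as binary
      - 'application/pdf'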
After setting up Amazon API Gateway CORS as instructed, I still get the following error when sending an Ajax POST request:
XMLHttpRequest cannot load https://-------.execute-api.us-west-2.amazonaws.com/--------. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://------.s3-website-us-west-2.amazonaws.com' is therefore not allowed access. The response had HTTP status code 400.
I'm using Amazon S3 to host the website, which does not support server-side scripts, so I can't use Python or PHP to fix this.
I'd really appreciate any help.
Could it be that you're using the Lambda-proxy integration and your Lambda is not returning those headers? If that's the case, you have to add the headers yourself.
This is how I create the response that I return using callback(null, response):
function createResponse(statusCode, body) {
  const headers = {
    'Access-Control-Allow-Origin': '*',
  }
  return {
    headers,
    statusCode,
    body: body ? JSON.stringify(body) : undefined,
  }
}
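For example, a handler built on that helper (the response body here is just an illustration):
exports.handler = (event, context, callback) => {
  // every response now includes the CORS header
  callback(null, createResponse(200, { message: 'ok' }));
};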