Parsing FormData from React using Serverless and API Gateway - aws-lambda

I'm trying to upload a file and send data from a React frontend to an S3 bucket through an API Gateway/Lambda setup built with the Serverless Framework, and I've been struggling with it for the last couple of days.
From the frontend I am using axios, building a FormData object to send a POST request to the API like the following:
let formData = new FormData();
formData.append('imageFile', selectedImage);
formData.append('itemId', clubIdRef.current.value);
formData.append('itemDescription', itemDescRef.current.value);

axios.post(
    baseURL + "/item/create", formData,
    { headers: {
        'Content-Type': 'multipart/form-data'
    }}
).then((response) => {
    console.log("response", response)
    console.log("response.data", response.data)
})
Appending string attributes to the FormData feels off, but this was the only way I could find to send data and an image at the same time.
Then, to receive this data in the backend, I've been using lambda-multipart-parser like the following:
const multipartParser = require("lambda-multipart-parser");

const createItem = async (event) => {
    const result = await multipartParser.parse(event);
    const imageFile = result.imageFile;
    const itemDescription = result.itemDescription;
where the result console logs as:
{
files: [],
imageFile: '[object File]',
itemId: '12',
itemDescription: "Description"
}
I can then store the imageFile successfully in S3 and generate the URL. Next, I create an Item object with the S3 URL, the id, and the description to store in DynamoDB. Everything works fine, but when I open the S3 URL the file is corrupted and just opens as a grey box instead of the actual image I uploaded.
This is how I am uploading the file using the S3 SDK:
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

const params = {
    Bucket: BUCKET_NAME,
    Key: `images/${directoryPath}/${id}.png`,
    Body: imageFile,
    ContentType: "image/png",
    ACL: "public-read"
}
const uploadResult = await s3.putObject(params).promise();
These are the things I've tried, but I still don't have any success uploading the correct image to my S3 bucket:
Looking into changing the binary media types of the API Gateway, but I can't find the settings under the API... (see the serverless.yml sketch after this list)
Tried using aws-lambda-multipart-parser, but I still wasn't able to add the multipart/form-data binary media type and parse the full form data correctly
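For reference, the Serverless Framework can declare the binary media types in serverless.yml rather than in the console. A minimal sketch, assuming a recent framework version where the apiGateway block sits under provider:

provider:
  name: aws
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

With this in place, API Gateway passes the multipart body to Lambda base64-encoded (isBase64Encoded: true), which lambda-multipart-parser knows how to decode.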
I know that I could first send a request directly from React to S3, using the aws-sdk in React to get a presigned URL, upload the image with it, and then make a POST request to my API Gateway and simply parse event.body without a multipart form parser. But I want to avoid sending multiple requests if possible and handle everything in the backend.
Any suggestions would be highly appreciated!

It is quite hard to understand where the problem is with the given context.
We have no idea which image format you are uploading, and no idea how you store this image to S3.
My answer will try to cover this missing information, as it concerns a common mistake in S3 uploads.
S3 files are stored and returned with the given ContentType.
You can check your S3 file's ContentType in the AWS console:
Console > S3 > Select object (image) > Metadata > ContentType
I will assume that the image format is PNG and that the image data is correct and can be posted to S3 as is (from result).
S3Service.ts
import AWS, {S3} from "aws-sdk";
import {PutObjectRequest, PutObjectOutput} from "aws-sdk/clients/s3";

AWS.config.update({region: 'eu-west-3'});
const s3: S3 = new AWS.S3();

export class S3Service {
    public static async putImage(key: string, data: string, contentType: string): Promise<PutObjectOutput> {
        const s3Params: PutObjectRequest = {
            Bucket: process.env.S3_BUCKET,
            Key: key,
            Body: data,
            ContentType: contentType // <== I draw your attention here
        }
        return await s3.putObject(s3Params).promise()
    }
}
index.ts
import { S3Service } from "./S3Service";
await S3Service.putImage(result.itemId + ".png", result.imageFile, "image/png");
A common mistake, which I assume might be the cause of your problem, is forgetting the content type, resulting in a file that downloads in an incorrect format.
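For completeness, once binary support is enabled on the API Gateway, lambda-multipart-parser should surface the upload in result.files as a Buffer instead of the stringified '[object File]' from the question. A sketch reusing the question's own names (s3, BUCKET_NAME):

const multipartParser = require("lambda-multipart-parser");

const createItem = async (event) => {
    const result = await multipartParser.parse(event);
    // each entry looks like { filename, content, contentType, encoding, fieldname }
    const file = result.files[0];
    await s3.putObject({
        Bucket: BUCKET_NAME,
        Key: `images/${result.itemId}.png`,
        Body: file.content,            // Buffer with the raw image bytes
        ContentType: file.contentType, // keep the real type instead of assuming PNG
        ACL: "public-read"
    }).promise();
};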

Related

How to parse image form data from FilePond

I'm attempting to upload image files to my Next.js app, where I'll eventually store them in GCS, but I'm having some trouble with the image form data. I'm using FilePond on the client to handle uploading the file and sending a request to a simple API that I have on the server.
// Component
import { useState } from "react";
import { FilePond, registerPlugin } from "react-filepond";
import FilePondPluginImageExifOrientation from 'filepond-plugin-image-exif-orientation';
import FilePondPluginImagePreview from "filepond-plugin-image-preview";

registerPlugin(FilePondPluginImageExifOrientation, FilePondPluginImagePreview);

const Page = () => {
    const [productImages, setProductImages] = useState<File[]>([]);
    return (
        <FilePond
            allowMultiple={true}
            maxFiles={2}
            files={productImages}
            onupdatefiles={setProductImages}
            server={{
                process: {
                    url: "/api/upload",
                    method: "POST",
                    headers: {
                        "Content-Type": "multipart/form-data"
                    },
                    ondata: formData => {
                        formData.append('image', "test-image");
                        return formData;
                    }
                }
            }}
        />
    );
};
export default Page;
// ./pages/api/upload
import { NextApiRequest, NextApiResponse } from "next";

const Index = (_req: NextApiRequest, res: NextApiResponse) => {
    const reqBody = _req.body ?? null;
    console.log(_req);
    if (!reqBody) return res.status(200).json({ message: "No request body found" });
    res.status(200).json({ data: "OK" });
};
export default Index;
The issue I'm seeing is that the files are being sent as a giant blob string, and I've seen other people be able to access the files property from the incoming request (shown here). This is my first time building a file upload feature into any of my projects, so I'm not entirely sure what best practice is for handling files from incoming requests and parsing them to be stored in a file storage service like GCS or S3.
You might need to chunk the image file: set the chunkUploads configuration option to true.
Then your backend should process the chunked file, along the lines of the sketch below.
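FilePond's chunked uploads work in two phases: an initial POST to the process endpoint that returns a transfer id, then one PATCH per chunk to ?patch=<id> carrying Upload-Offset, Upload-Length and Upload-Name headers with the raw bytes as the body. A rough sketch of such a handler; the /tmp/uploads staging directory and the naive append are assumptions for illustration:

// ./pages/api/upload
import { NextApiRequest, NextApiResponse } from "next";
import { randomUUID } from "crypto";
import fs from "fs";
import path from "path";

// FilePond sends raw chunk bytes, so Next.js' default body parsing must be off
export const config = { api: { bodyParser: false } };

const TMP_DIR = "/tmp/uploads"; // hypothetical staging directory

const Index = async (req: NextApiRequest, res: NextApiResponse) => {
    if (req.method === "POST") {
        // phase 1: hand back a unique transfer id as plain text
        const id = randomUUID();
        fs.mkdirSync(TMP_DIR, { recursive: true });
        fs.writeFileSync(path.join(TMP_DIR, id), Buffer.alloc(0));
        return res.status(200).send(id);
    }
    if (req.method === "PATCH") {
        // phase 2: append each chunk; a robust version would write at Upload-Offset
        const id = req.query.patch as string;
        const total = Number(req.headers["upload-length"]);
        const chunks: Buffer[] = [];
        for await (const chunk of req) chunks.push(chunk as Buffer);
        fs.appendFileSync(path.join(TMP_DIR, id), Buffer.concat(chunks));
        if (fs.statSync(path.join(TMP_DIR, id)).size >= total) {
            // upload complete: hand the assembled file to GCS/S3 here
        }
        return res.status(204).end();
    }
    res.status(405).end();
};
export default Index;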

how to get the file link after successfully uploading in minio

I am using minio to manage the files
const getMinioClient = () => {
const minioClient = new Minio.Client({
endPoint: '127.0.0.1',
port: 9000,
useSSL: false,
accessKey: 'minioadmin',
secretKey: 'minioadmin'
});
return minioClient;
};
uploadFile(bucketName, newFileName, localFileLocation, metadata = {}) {
    return new Promise((resolve, reject) => {
        const minioClient = getMinioClient();
        // metadata can carry e.g. { 'Content-Type': 'application/octet-stream' }
        minioClient.fPutObject(bucketName, newFileName, localFileLocation, metadata, (err, etag) => {
            if (err) return reject(err);
            return resolve(etag);
        });
    });
}
With the following code I can upload the file. After a successful upload it only returns an etag, but I want to get the download link. How would I get it directly, without searching for the filename again?
You won't be able to get a public URL/link for accessing images unless you manually generate a time-limited download URL, using something like:
https://min.io/docs/minio/linux/reference/minio-mc/mc-share-download.html#generate-a-url-to-download-object-s
One workaround is to let nginx directly access the location you are uploading your files to:
https://gist.github.com/harshavardhana/f05b60fe6f96803743f38bea4b565bbf
After you have successfully written your file with your code above, you can use the presignedUrl method to generate the link to your image.
An example for JavaScript is in the MinIO docs: https://min.io/docs/minio/linux/developers/javascript/API.html#presignedUrl
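Decoded, the example from that docs page looks like this:

// presigned url for 'getObject' method, expires in a day
minioClient.presignedUrl('GET', 'mybucket', 'hello.txt', 24*60*60, function(err, presignedUrl) {
    if (err) return console.log(err)
    console.log(presignedUrl)
})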
In any case you have to set an expiration time: either set a very long one that suits your app, or, if you have a backend, request the images from the frontend through the backend with the getObject method: getObject(bucketName, objectName, getOpts[, callback]).
https://min.io/docs/minio/linux/developers/javascript/API.html
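A minimal sketch of that backend route, assuming an Express-style handler where res is the HTTP response:

// stream the object through the backend instead of exposing a public URL
minioClient.getObject('mybucket', 'photo.png', (err, objectStream) => {
    if (err) return console.log(err);
    objectStream.pipe(res);
});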
If you have only a small number of static images to show in your app (which are not uploaded by your app), you can also create the links manually with the minio client or from the MinIO UI.

How can I add a header to an AWS S3 presigned URL in Go?

I'm uploading a file to an AWS S3 bucket using a presigned URL. This works fine, but if I try to add an x-amz-tagging header I get the error "There were headers present in the request which were not signed".
The backend generating the presigned URL is written in Go:
// Upload generates a new URL where a file can be uploaded
func (s *S3) Upload(key string, c Config) (string, error) {
	req, _ := s.client.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(s.bucketName),
		Key:    aws.String(key),
	})
	return req.Presign(c.ExpiresIn)
}
The answer to S3 presigned upload url error suggests that we need to declare the header as part of the presigned URL. How can I add a header declaration to this? The examples given on Creating Pre-Signed URLs for Amazon S3 Buckets don't cover this.
If I remember correctly, you can add metadata like this:
'Bucket': 'bucket',
'Key': 'signed.json',
'Metadata': {
'x-amz-tagging': 'whatever'
},
So it'll look something like this:
// Upload generates a new URL where a file can be uploaded
func (s *S3) Upload(key string, c Config) (string, error) {
	req, _ := s.client.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(s.bucketName),
		Key:    aws.String(key),
		Metadata: map[string]*string{
			"x-amz-tagging": aws.String("whatever"),
		},
	})
	return req.Presign(c.ExpiresIn)
}
I wrote this on the fly, so it might not work and may need a bit of tweaking. Refer to the docs, but it should look something like this. Test it out and let me know.
Read more here: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#PutObjectInput
The presigned URL request just needs the tags under the Tagging field. When you use this URL to make a request, that's when you pass the header, with x-amz-tagging as the key and the tags as the value (e.g., temp=true&public=yes).
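On the Go side that means setting the Tagging field when presigning; a sketch against the question's function, assuming AWS SDK for Go v1:

// Upload generates a presigned URL that expects the matching x-amz-tagging header
func (s *S3) Upload(key string, c Config) (string, error) {
	req, _ := s.client.PutObjectRequest(&s3.PutObjectInput{
		Bucket:  aws.String(s.bucketName),
		Key:     aws.String(key),
		Tagging: aws.String("temp=true&public=yes"), // must match the header sent at upload time
	})
	return req.Presign(c.ExpiresIn)
}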
Below is an example of uploading an image with tags (through the headers) using a presigned URL that was also requested with tags (with Tagging).
const uploadImage = async (filepath, presignedURL) => {
    const headers = new Headers();
    headers.append('x-amz-tagging', 'temp=true&public=yes');
    const response = await fetch(filepath);
    const blob = await response.blob();
    return await fetch(presignedURL, {
        method: 'PUT',
        body: blob,
        headers,
    });
};

Uploading image data from CKEditor 5 to Firebase Storage creates a malformed image

I'm uploading form data containing an image using an XMLHttpRequest with CKEditor 5. I'm receiving the Buffer correctly, and I've retrieved the content type successfully:
const data = new FormData()
data.append('upload', fileObject)
myXhrHttpRequest.send(data)
I retrieve the image data by accessing the body of the HTTP POST request (a Buffer), and then I upload it to Firebase Storage:
app.post('/save-image', async ({ query: { imageId, contentType }, body: data }, res) => {
    storage
        .file(`images/${imageId}`)
        .save(data, {
            public: true,
            metadata: {
                contentType,
                metadata: {
                    firebaseStorageDownloadTokens: token // token generated elsewhere
                }
            }
        })
    // send back results, etc...
})
Unfortunately, the image is corrupt. Any ideas about what I might be doing wrong? This is an example of one of the uploaded images:
https://firebasestorage.googleapis.com/v0/b/memorize-ai.appspot.com/o/deck-assets%2Fsample_deck_id%2FHq7vZ8oEgFqlLdSfWYBl?alt=media&token=cfa99560-5618-48e1-8772-4ffd9d45f789

Google Cloud Platform: Unable to upload a new file version in Storage via API

I wrote a script that uploads a file to a bucket in Google Cloud Storage:
Ref: https://cloud.google.com/storage/docs/json_api/v1/objects/insert
function submitForm(bucket, accessToken) {
    console.log("Fetching the file...");
    var input = document.getElementsByTagName('input')[0];
    var name = input.files[0].name;
    var uploadUrl = 'https://www.googleapis.com/upload/storage/v1/b/' +
        bucket + '/o?uploadType=media&access_token=' + accessToken + '&name=' + name;
    event.preventDefault();
    fetch(uploadUrl, {
        method: 'POST',
        body: input.files[0]
    }).then(function(res) {
        console.log(res);
        location.reload();
    }).catch(function(err) {
        console.error('Got error:', err);
    });
}
It works perfectly fine when uploading a new file.
However, I get a 403 status code in the API response body when trying to replace an existing file with a new version.
Please note that:
The OAuth 2.0 scope for Google Cloud Storage is: https://www.googleapis.com/auth/devstorage.read_write
I did enable versioning for the destination bucket
Could someone help me by pointing out what I did wrong?
Update I:
As suggested, I am trying to invoke the rewrite function as follows:
const input = document.getElementsByName('uploadFile')[0];
const name = input.files[0].name;
const overwriteObjectUrl = 'https://www.googleapis.com/storage/v1/' +
    'b/' + bucket +
    '/o/' + name +
    '/rewriteTo/b/' + bucket +
    '/o/' + name;

fetch(overwriteObjectUrl, {
    method: 'POST',
    body: input.files[0]
})
However, I am getting a 400 (bad request) error:
{"error":{"errors":[{"domain":"global","reason":"parseError","message":"Parse Error"}],"code":400,"message":"Parse Error"}}
Could you explain what I am doing wrong?
Update II:
By replacing body: input.files[0] with body: input.files[0].data I got it working... theoretically!
I get a positive response body:
{
"kind":"storage#rewriteResponse",
"totalBytesRewritten":"43",
"objectSize":"43",
"done":true,
"resource":{
"kind":"storage#object",
"id":"mybuck/README.txt/1520085847067373",
"selfLink":"https://www.googleapis.com/storage/v1/b/mybuck/o/README.txt",
"name":"README.txt",
"bucket":"mybuck",
"generation":"1520085847067373",
"metageneration":"1",
"contentType":"text/plain",
"timeCreated":"2018-03-03T14:04:07.066Z",
"updated":"2018-03-03T14:04:07.066Z",
"storageClass":"MULTI_REGIONAL",
"timeStorageClassUpdated":"2018-03-03T14:04:07.066Z",
"size":"43",
"md5Hash":"UCQnjcpiPBEzdl/iWO2e1w==",
"mediaLink":"https://www.googleapis.com/download/storage/v1/b/mybuck/o/README.txt?generation=1520085847067373&alt=media",
"crc32c":"y4PZOw==",
"etag":"CO2VxYep0NkCEAE="
}
}
With a new generation number as well (versioning enabled).
However, the file content has not been updated: I appended new strings but they do not show up in the file. Do you have any idea?
Thanks a lot in advance.
Based on the information available it's difficult to diagnose this issue with certainty. However, I would check the roles assigned to the user or service account you are using for this operation.
As you have been able to upload a file but not overwrite one, it sounds like the user or service account attempting this task may have been assigned the 'Storage Object Creator' role.
Users/service accounts with the Storage Object Creator role can create new objects in buckets but not overwrite existing ones (you can see this mentioned here).
If this is the case, you could try assigning the user/service account the role of 'Storage Object Admin' which allows users full control over bucket objects.
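For example, the role can be granted from the command line with gsutil; the account and bucket names below are placeholders:

gsutil iam ch serviceAccount:uploader@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin gs://my-bucket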
"insert" is only to be used to create new objects per the Methods section of the API's documentation, so you'll need to use "rewrite" to rewrite an existing object.
