I am using MinIO to manage the files:
const getMinioClient = () => {
  const minioClient = new Minio.Client({
    endPoint: '127.0.0.1',
    port: 9000,
    useSSL: false,
    accessKey: 'minioadmin',
    secretKey: 'minioadmin'
  });
  return minioClient;
};
uploadFile(bucketName, newFileName, localFileLocation, metadata = {}) {
  return new Promise((resolve, reject) => {
    const minioClient = getMinioClient();
    // 'application/octet-stream'
    minioClient.fPutObject(bucketName, newFileName, localFileLocation, metadata, (err, etag) => {
      if (err) return reject(err);
      return resolve(etag);
    });
  });
}
With the code above I can upload the file. After a successful upload it returns only the etag, but I want to get the download link. How can I get it directly, without having to search for the file name again?
You won't be able to get something like a public URL/link for accessing the images unless you manually generate a time-limited download URL, using something like:
https://min.io/docs/minio/linux/reference/minio-mc/mc-share-download.html#generate-a-url-to-download-object-s
One workaround is to let nginx directly access the location you are uploading your files to:
https://gist.github.com/harshavardhana/f05b60fe6f96803743f38bea4b565bbf
After you have successfully written your file with the code above, you can use the presignedUrl method to generate the link to your image.
An example for Javascript is here: https://min.io/docs/minio/linux/developers/javascript/API.html#presignedUrl:~:text=//%20presigned%20url%20for%20%27getObject%27%20method.%0A//%20expires%20in%20a%20day.%0AminioClient.presignedUrl(%27GET%27%2C%20%27mybucket%27%2C%20%27hello.txt%27%2C%2024*60*60%2C%20function(err%2C%20presignedUrl)%20%7B%0A%20%20if%20(err)%20return%20console.log(err)%0A%20%20console.log(presignedUrl)%0A%7D)
In any case you have to set an expiration time. Either set a very long expiry that suits your app, or, if you have a backend, request the images from the frontend through the backend with the getObject method: getObject(bucketName, objectName, getOpts[, callback]).
https://min.io/docs/minio/linux/developers/javascript/API.html#presignedUrl:~:text=getObject(bucketName%2C%20objectName%2C%20getOpts%5B%2C%20callback%5D)
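If you go through a backend instead, here is a minimal sketch of streaming the object to the client with getObject (the Express route, port, and bucket name are placeholders):
const express = require('express');
const app = express();

// Sketch: the frontend asks the backend for /files/<name>; the backend streams the object,
// so no public or presigned URL is ever exposed.
app.get('/files/:name', (req, res) => {
  const minioClient = getMinioClient();
  minioClient.getObject('mybucket', req.params.name, (err, stream) => {
    if (err) return res.status(404).end();
    stream.pipe(res);
  });
});

app.listen(3000);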
If you only have a few static images to show in your app (images that are not uploaded by the app itself), you can also create the links manually with the MinIO client or from the MinIO UI.
I'm trying to upload a file and send data from a React frontend to an S3 bucket through an API Gateway/Lambda setup built with the Serverless framework, and I've been struggling with it for the last couple of days.
From the frontend I am using axios and creating a FormData object to send a POST request to the API, like the following:
let formData = new FormData();
formData.append('imageFile', selectedImage);
formData.append('itemId', clubIdRef.current.value);
formData.append('itemDescription', itemDescRef.current.value);

axios.post(
  baseURL + "/item/create", formData,
  { headers: {
    'Content-Type': 'multipart/form-data'
  }}
).then((response) => {
  console.log("response" + response)
  console.log("response.data" + response.data)
})
Appending string attributes to the FormData feels off, but the only way I could find to send data and an image at the same time was like the above.
Then to receive this data in the backend I've been using lambda-multipart-parser like the following:
const createItem = async (event) => {
  const result = await multipartParser.parse(event);
  const imageFile = result.imageFile;
  const itemDescription = result.itemDescription;
where the result console logs as:
{
  files: [],
  imageFile: '[object File]',
  itemId: '12',
  itemDescription: "Description"
}
I can then store the imageFile successfully in S3 and generate the URL. Next, I create an Item object with the S3 URL, id, and description to store in DynamoDB. Everything works fine, but when I open the S3 URL the file is corrupted and just opens as a grey box instead of the actual image I uploaded.
This is how I am uploading the file using the S3 SDK:
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

const params = {
  Bucket: BUCKET_NAME,
  Key: `images/${directoryPath}/${id}.png`,
  Body: imageFile,
  ContentType: "image/png",
  ACL: "public-read"
}

uploadResult = await s3.putObject(params).promise();
These are the things I've tried, but I still haven't had any success uploading the correct image to my S3 bucket:
- Looking at and changing the BinaryMediaType of the API Gateway, but I can't find the settings under the API...
- Using aws-lambda-multipart-parser, but I still wasn't able to add the multipart/form-data binary media type and parse the full form data correctly
I know that I could first send a request directly from React to S3 to upload the image (using the aws-sdk in React to get a presigned URL), then attach that URL to a POST request to my API Gateway and simply parse event.body without a multipart form parser, but I want to avoid sending multiple requests if possible and handle everything in the backend.
Any suggestions would be highly appreciated!
It is quite hard to tell where the problem is with the given context.
We have no idea which image format you are uploading, and no idea how you store this image in S3.
My answer will try to cover this missing information, as it concerns a common mistake in S3 uploads.
S3 files are stored and returned with the given ContentType.
You might check your S3 file's ContentType on AWS console.
Console > S3 > Select object (image) > Metadata > ContentType
I will suppose that the image format is PNG and that the image data is correct and can be posted to S3 as-is (from result).
S3Service.ts
import AWS, { S3 } from "aws-sdk";
import { PutObjectRequest, PutObjectOutput } from "aws-sdk/clients/s3";

AWS.config.update({ region: 'eu-west-3' });
const s3: S3 = new AWS.S3();

export class S3Service {
  public static async putImage(key: string, data: string, contentType: string): Promise<PutObjectOutput> {
    const s3Params: PutObjectRequest = {
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: data,
      ContentType: contentType // <== I draw your attention here
    }
    return await s3.putObject(s3Params).promise()
  }
}
index.ts
import { S3Service } from "service/aws/s3-service";
await S3Service.putImage(result.itemId + ".png", result.imageFile, "image/png");
A common mistake, which I assume might be the cause of your problem, is forgetting the content type, which results in an incorrect download format.
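If you would rather not hardcode "image/png", the parser can usually report the uploaded file's content type as well. A sketch, assuming the upload actually lands in result.files the way lambda-multipart-parser documents it:
// Inside createItem: take the ContentType from the parsed upload instead of hardcoding it.
const result = await multipartParser.parse(event);
const upload = result.files[0]; // { filename, content, contentType, ... } per the parser's README
await S3Service.putImage(result.itemId + ".png", upload.content, upload.contentType);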
There must be something obvious I am missing. I have an external record like https://udspace.udel.edu/rest/handle/19716/28599 that is not set up for CORS, but I would like to get access to it in my application. In the past, I have set up a server to simply GET the resource and redeliver it for my application. However, I read about Node http-proxy-middleware and thought I'd try out a more direct proxy.
First, I could not use the target in the createProxyMiddleware() constructor because I wanted to send in the hostname and path of the desired resource like, perhaps, http://example.com/https/udspace.udel.edu/rest/handle/19716/28599 to get the resource above.
Relevant Code (index.js)
const express = require('express')
const { createProxyMiddleware } = require('http-proxy-middleware')

const app = express()

app.get('/info', (req, res, next) => {
  res.send('Proxy is up.');
})

// Proxy endpoint
app.use('/https', createProxyMiddleware({
  target: "incoming",
  router: req => {
    const protocol = 'https'
    const hostname = req.path.split("/https/")[1].split("/")[0]
    const path = req.path.split(hostname)[1]
    console.log(`returning: ${protocol}, ${hostname}, ${path}`)
    return `${protocol}://${hostname}/${path}`
  },
  changeOrigin: true
}))

app.listen(PORT, HOST, () => {
  console.log(`Starting Proxy at ${HOST}:${PORT}`);
})
I was getting a 404 from the DSpace server without any other information, so I knew that the request was going through but was being transformed incorrectly. I tried again with an item in an S3 bucket, since AWS gives better errors, and saw that my path was being duplicated:
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: api/cookbook/recipe/0001-mvm-image/manifest.json/https/iiif.io/api/cookbook/recipe/0001-mvm-image/manifest.json
What dumb thing am I doing wrong? Is this not what this proxy is for and I need to do something else?
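Not part of the original post, but one common gotcha worth noting here: http-proxy-middleware appends the incoming request path to whatever the router returns, so returning a target that already contains the path produces exactly this kind of duplicated key. A sketch of one possible fix, returning only the origin from the router and stripping the /https/<hostname> prefix with pathRewrite (the regex and placeholder target are illustrative only):
app.use('/https', createProxyMiddleware({
  target: 'https://example.com', // placeholder; replaced per request by the router below
  router: req => {
    const hostname = req.path.split('/https/')[1].split('/')[0]
    return `https://${hostname}` // origin only; the proxy appends the (rewritten) path
  },
  // Strip the "/https/<hostname>" prefix so only the remaining path is forwarded.
  pathRewrite: path => path.replace(/^\/https\/[^/]+/, ''),
  changeOrigin: true
}))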
I'm trying to deploy an S3 static website and API gateway/lambda in a single stack.
The javascript in the S3 static site calls the lambda to populate an HTML list but it needs to know the API Gateway URL for the lambda integration.
Currently, I generate a RestApi like so...
const handler = new lambda.Function(this, "TestHandler", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.asset("build/test-service"),
  handler: "index.handler",
  environment: {
  }
});

this.api = new apigateway.RestApi(this, "test-api", {
  restApiName: "Test Service"
});

const getIntegration = new apigateway.LambdaIntegration(handler, {
  requestTemplates: { "application/json": '{ "statusCode": "200" }' }
});
const apiUrl = this.api.url;
But on cdk deploy, apiUrl =
"https://${Token[TOKEN.39]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.45]}/"
So the URL is not resolved/generated until after the static site requires the value.
How can I calculate/find/fetch the API Gateway URL and update the JavaScript on cdk deploy?
Or is there a better way to do this? I.e., is there a graceful way for the static JavaScript to retrieve a Lambda API Gateway URL?
Thanks.
You are creating a LambdaIntegration, but it isn't connected to your API.
To add it to the root of the API, call this.api.root.addMethod(...) and use it to connect your LambdaIntegration and API.
This should give you an endpoint with a URL.
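For example, a minimal sketch using the objects from the question (the HTTP method is just an example):
// Attach the Lambda integration to the API root so a method, and therefore a URL, is created.
this.api.root.addMethod("GET", getIntegration);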
If you are using the s3-deployment module to deploy your website as well, I was able to hack together a solution using what is available currently (pending a better solution at https://github.com/aws/aws-cdk/issues/12903). The pieces below together let you deploy a config.js to your bucket (containing attributes from your stack that are only populated at deploy time) that you can then depend on elsewhere in your code at runtime.
In inline-source.ts:
// imports removed for brevity
export function inlineSource(path: string, content: string, options?: AssetOptions): ISource {
  return {
    bind: (scope: Construct, context?: DeploymentSourceContext): SourceConfig => {
      if (!context) {
        throw new Error('To use a inlineSource, context must be provided');
      }

      // Find available ID
      let id = 1;
      while (scope.node.tryFindChild(`InlineSource${id}`)) {
        id++;
      }

      const bucket = new Bucket(scope, `InlineSource${id}StagingBucket`, {
        removalPolicy: RemovalPolicy.DESTROY
      });
      const fn = new Function(scope, `InlineSource${id}Lambda`, {
        runtime: Runtime.NODEJS_12_X,
        handler: 'index.handler',
        code: Code.fromAsset('./inline-lambda')
      });
      bucket.grantReadWrite(fn);

      const myProvider = new Provider(scope, `InlineSource${id}Provider`, {
        onEventHandler: fn,
        logRetention: RetentionDays.ONE_DAY // default is INFINITE
      });
      const resource = new CustomResource(scope, `InlineSource${id}CustomResource`, { serviceToken: myProvider.serviceToken, properties: { bucket: bucket.bucketName, path, content } });

      context.handlerRole.node.addDependency(resource); // Sets the s3 deployment to depend on the deployed file
      bucket.grantRead(context.handlerRole);

      return {
        bucket: bucket,
        zipObjectKey: 'index.zip'
      };
    },
  };
}
In inline-lambda/index.js (also requires archiver installed into inline-lambda/node_modules):
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });
const fs = require('fs');
var archive = require('archiver')('zip');

exports.handler = async function(event, ctx) {
  await new Promise(resolve => fs.unlink('/tmp/index.zip', resolve));

  const output = fs.createWriteStream('/tmp/index.zip');
  const closed = new Promise((resolve, reject) => {
    output.on('close', resolve);
    output.on('error', reject);
  });

  archive.pipe(output);
  archive.append(event.ResourceProperties.content, { name: event.ResourceProperties.path });
  archive.finalize();
  await closed;

  await s3.upload({ Bucket: event.ResourceProperties.bucket, Key: 'index.zip', Body: fs.createReadStream('/tmp/index.zip') }).promise();
  return;
}
In your construct, use inlineSource:
export class TestConstruct extends Construct {
  constructor(scope: Construct, id: string, props: any) {
    super(scope, id);
    // set up other resources
    const source = inlineSource('config.js', `exports.config = { apiEndpoint: '${ api.attrApiEndpoint }' }`);
    // use in BucketDeployment
  }
}
You can move inline-lambda elsewhere but it needs to be able to be bundled as an asset for the lambda.
This works by creating a custom resource that depends on your other resources in the stack (thereby allowing for the attributes to be resolved) that writes your file into a zip that is then stored into a bucket, which is then picked up and unzipped into your deployment/destination bucket. Pretty complicated but gets the job done with what is currently available.
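For completeness, a rough sketch of how the custom source might be consumed in a BucketDeployment (the websiteBucket, api variable, and asset path are placeholders from the surrounding stack, not part of the original answer):
import { BucketDeployment, Source } from '@aws-cdk/aws-s3-deployment';

// Sketch: deploy the built site plus the generated config.js in one BucketDeployment.
const config = inlineSource('config.js', `exports.config = { apiEndpoint: '${ api.attrApiEndpoint }' }`);

new BucketDeployment(this, 'DeployWebsite', {
  sources: [Source.asset('./website-dist'), config],
  destinationBucket: websiteBucket
});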
The pattern I've used successfully is to put a CloudFront distribution or an API Gateway in front of the S3 bucket.
So requests to https://[api-gw]/**/* are proxied to https://[s3-bucket]/**/*.
Then I create a new proxy path in the same API Gateway for a route called /config, which is a standard Lambda-backed API endpoint. There I can return all sorts of things, like branding information or API keys, to the frontend whenever it calls GET /config.
Also, this avoids issues like CORS, because both origins are the same (the API Gateway domain).
With a CloudFront distribution instead of an API Gateway it's pretty much the same, except you use the CloudFront distribution's "origin" configuration instead of paths and methods.
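A rough sketch of what such a /config Lambda might return (the endpoint name and fields are assumptions, not a fixed contract):
// Lambda behind GET /config: hands deploy-time values to the static frontend at runtime.
exports.handler = async () => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    apiEndpoint: process.env.API_ENDPOINT, // injected into the Lambda's environment at deploy time
    branding: { title: "My App" }
  })
});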
I am using Puppeteer to take screenshots of a web page for my company. I need to test multiple people's accounts so that means visiting the page multiple times (150 times in this case). This results in our firewall kicking me out for making too many requests.
My solution is to just fetch the contents of the page and save them locally. Then I use puppeteer on that local file, overriding the function used to get data from our servers to instead just use data already loaded into Node from a CSV.
All of this works, but it looks like it's still making requests to our servers.
I tried giving it a userDataDir so it could cache any resources. In theory, if it's loading the page from file://, caching the resources, and making no Ajax requests, it shouldn't be making any further requests, right?
I also tried installing a debugging proxy, but since the traffic is HTTPS I can't see what it's trying to request.
This is how I start it:
puppeteer.launch({
    userDataDir: "temp/"
  })
  .then(browser => {
    next(browser, links);
  })
  .catch(error => {
    cb(error, null);
  });
next will iterate through any links it needs to visit.
This part saves the page locally:
if (this._linkCache[baseLink] === undefined) {
  fetch(baseLink)
    .then(resp => resp.text())
    .then(contents => {
      fs.writeFile(fullFileName, contents, 'utf8', err => {
        if (err) {
          cb(err, null);
        } else {
          this._linkCache[baseLink] = fileUrl;
          gotoPage(fileUrl);
        }
      });
    })
    .catch(error => {
      cb(error, null);
    });
}
// Go to the cached version
else {
  gotoPage(this._linkCache[baseLink] + queryParams);
}
And this gets the screenshots:
const gotoPage = async (url) => {
  try {
    const page = await browser.newPage();

    // Override 'fetchAccountData' function
    await page.evaluateOnNewDocument(testData => {
      window["fetchAccountData"] = (cb: (err: any, data: any) => void) => {
        cb(null, testData);
      };
    }, data);

    // Go to page and get screenshot
    await page.goto(url);
    const screenie = `${outputPath}${uuid()}.png`;
    await page.screenshot({ fullPage: true, path: screenie, type: "png" });

    pageHtml.push(`<img src="file://${screenie}" />`);
    next(browser, rest);
  } catch (e) {
    cb(e, null);
  }
};
I was hoping this would only need to make a few requests at the beginning, while it saves the HTML locally and caches all the resources, but it seems to make a request for every link.
How can I stop it?
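Not from the original post, but one hedged way to confirm and stop the stray traffic is Puppeteer's request interception, which logs every outgoing request and lets you abort anything that isn't a local file:
const page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', request => {
  // Log what the page is really fetching, then block anything that is not served from disk.
  console.log(request.url());
  if (request.url().startsWith('file://')) {
    request.continue();
  } else {
    request.abort();
  }
});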
How can I delete an image's file from the server using Parse Cloud Code? I am using back4app.com.
After deleting an Image row, I get the image URLs and then call a function to delete each image file using its URL:
Parse.Cloud.afterDelete("Image", function(request) {
  // get urls
  var imageUrl = request.object.get("image").url();
  var thumbUrl = request.object.get("thumb").url();

  if (imageUrl != null) {
    // delete
    deleteFile(imageUrl);
  }

  if (thumbUrl != null) {
    // delete
    deleteFile(thumbUrl);
  }
});
Delete the image file from the server
function deleteFile(url) {
  Parse.Cloud.httpRequest({
    url: url.substring(url.lastIndexOf("/") + 1),
    method: 'DELETE',
    headers: {
      'X-Parse-Application-Id': 'xxx',
      'X-Parse-Master-Key': 'xxx'
    }
  }).then(function(httpResponse) {
    console.log(httpResponse.text);
  }, function(httpResponse) {
    console.error('Request failed with response code ' + httpResponse.status);
  });
}
For security reasons, it is not possible to delete the image directly from Back4App using DELETE from the SDK or REST API. I believe that you can follow the guide below:
https://help.back4app.com/hc/en-us/articles/360002327652-How-to-delete-files-completely-
After struggling with this for a while, it seems to be possible through a cloud function, as mentioned here. One needs to use the master key in the cloud code:
Parse.Cloud.define('deleteGalleryPicture', async (request) => {
  const { image_id } = request.params;
  const Gallery = Parse.Object.extend('Gallery');
  const query = new Parse.Query(Gallery);

  try {
    const Image = await query.get(image_id);
    const picture = Image.get('picture');

    await picture.destroy({ useMasterKey: true });
    await Image.destroy();
    return 'Image removed.';
  } catch (error) {
    console.log(error);
    throw new Error('Error deleting image');
  }
});
For me it was confusing at first, since I could still open the link to that file even after I deleted the referencing object in the dashboard, but then I found out that the dashboard is not calling the Parse.Cloud.beforeDelete() trigger for some reason.
Trying to download the data from the URL after deleting the file through the cloud code function returns 0 kB of data, which confirms that the files were deleted.
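If you would rather keep the afterDelete trigger from the original question instead of a callable cloud function, the same idea could look roughly like this (a sketch; it assumes the file columns are named "image" and "thumb" and that your Parse Server version supports Parse.File.destroy):
Parse.Cloud.afterDelete("Image", async (request) => {
  const image = request.object.get("image");
  const thumb = request.object.get("thumb");

  // Destroy the underlying files with the master key instead of issuing raw DELETE requests.
  if (image) await image.destroy({ useMasterKey: true });
  if (thumb) await thumb.destroy({ useMasterKey: true });
});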