NestJS validate total size of uploaded files - validation

I'm trying to upload a bunch of attachments in my NestJS project. It's an array of multiple files, uploaded like so:
@Post('/text')
async addText(@UploadedFiles() files) {
  console.log("The files", files);
}
How do I ensure that the total size of all the attachments does not exceed, say, 5MB? Is there a way to validate all the files together?

The documentation mentions this kind of validation, or you can use the Multer config to validate the size in your module like this:
imports: [
  MulterModule.registerAsync({
    useFactory: () => ({
      // other config
      limits: {
        fileSize: parseInt(process.env.MAX_SIZE_PER_FILE_UPLOAD),
        files: parseInt(process.env.MAX_NUMBER_FILE_UPLOAD),
      },
    }),
  }),
  // other code
]
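Note that Multer's limits.fileSize applies to each file individually, so it won't cap the combined size on its own. If you specifically need the 5MB limit across all attachments, one option is a small custom pipe that sums the file sizes. A minimal sketch, assuming the usual @nestjs/common imports (TotalFilesSizePipe and the hard-coded limit are illustrative, not part of Nest itself):

import { Injectable, PayloadTooLargeException, PipeTransform } from '@nestjs/common';

// Illustrative pipe: rejects the request when the combined size of all
// uploaded files exceeds 5MB.
@Injectable()
export class TotalFilesSizePipe implements PipeTransform {
  private readonly maxTotalSize = 5 * 1024 * 1024; // 5MB

  transform(files: Array<{ size: number }>) {
    const totalSize = (files ?? []).reduce((sum, file) => sum + file.size, 0);
    if (totalSize > this.maxTotalSize) {
      throw new PayloadTooLargeException('Total size of attachments exceeds 5MB');
    }
    return files;
  }
}

Recent Nest versions accept pipes directly in the decorator, so the handler can use @UploadedFiles(new TotalFilesSizePipe()) files.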

Related

Failure to upload images from Strapi to a cloudinary folder

I'm working with Strapi v4.1.7 and I'm trying to upload my images to Cloudinary in a specific folder (portfolio), but they just get added to the root folder of Cloudinary.
I'm also using the "@strapi/provider-upload-cloudinary": "^4.1.9" package.
My plugins.js is as follows:
module.exports = ({ env }) => ({
  // ...
  upload: {
    config: {
      provider: "cloudinary",
      providerOptions: {
        cloud_name: env("CLOUDINARY_NAME"),
        api_key: env("CLOUDINARY_KEY"),
        api_secret: env("CLOUDINARY_SECRET"),
      },
      actionOptions: {
        upload: {
          folder: env("CLOUDINARY_FOLDER", "portfolio"),
        },
        delete: {},
      },
    },
  },
  // ...
});
Also in my .env file, I have the folder set as follows:
....
CLOUDINARY_FOLDER=portfolio
Also, I was wondering: is it possible to create dynamic folders in Cloudinary, like '/portfolio/Project1/all-project1-assets', from Strapi for all projects?
I need help to achieve this. Thanks!
Just change upload to uploadStream as highlighted below:
actionOptions: {
  uploadStream: {
    folder: env("CLOUDINARY_FOLDER", "portfolio"),
  },
  delete: {},
},
You might have to create an upload preset in Cloudinary: go to Cloudinary Settings > Upload > Upload presets > Add upload preset, then create an upload preset with a folder and in Signed mode.

404 on i18n json files

I'm trying to enable i18n JSON files with SSR from the assets folder, following these docs:
https://sap.github.io/spartacus-docs/i18n/
But when it's enabled, all files in the PT folder return a 404 error.
Here's my provideConfig in my spartacus-configuration.module.ts file:
and my assets folder:
Thanks for your time, have a nice day!
Looks like it's trying to load a bunch of json files that aren't in your directories.
What I did on mine was provide the original Spartacus translations and then add mine below that:
provideConfig(<I18nConfig>{
  i18n: {
    resources: translations,
    chunks: translationChunksConfig,
    fallbackLang: 'en'
  },
}),
provideConfig(<I18nConfig>{
  i18n: {
    backend: {
      loadPath: 'assets/i18n-assets/{{lng}}/{{ns}}.json',
      chunks: {
        footer: ['footer']
      }
    }
  },
})
Otherwise, you can try to add the files it's complaining about (orderApproval.json, savedCart.json, etc.) to your 'pt' folder (not sure what language that is, but perhaps Spartacus doesn't come with translations for it).

How to add hash code in js bundle files for caching

I'm new to webpack configuration. Please let me know how to add a hash to the generated JS bundle files so that I can cache my static assets. Thanks in advance.
To add a content hash to your generated bundles, add these lines to your webpack.config.js file:
output: {
  filename: '[name].[contenthash].js',
  path: path.resolve(__dirname, 'dist'),
}
For long-term caching
You need to split your main chunk into a runtime chunk and a vendor chunk. To do this, add the following code to the optimization section of your webpack.config.js file:
optimization: {
  runtimeChunk: 'single',
  moduleIds: 'hashed',
  splitChunks: {
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
        name: 'vendors',
        chunks: 'all',
      },
    },
  },
}
Now, every time you change your code, the hashes of the other chunks (vendor, runtime) don't change, so the browser doesn't re-fetch the unchanged chunks; it loads them from its cache.
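Putting the two snippets together, a complete webpack.config.js could look roughly like this (the entry path is a placeholder for your own project; on webpack 5, 'deterministic' replaces the older 'hashed' value for moduleIds):

const path = require('path');

module.exports = {
  entry: './src/index.js', // placeholder entry point
  output: {
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    runtimeChunk: 'single',
    moduleIds: 'hashed', // use 'deterministic' on webpack 5
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};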
Reference Link
https://webpack.js.org/guides/caching/

Can I get FilePond to show previews of loaded local images?

I use FilePond to show previously uploaded images via the load functionality. The files are visible; however, I don't get a preview (which I do get when uploading a file).
Should it be possible to show previews for files loaded through load?
files: [{
  source: " . $profile->profileImage->id . ",
  options: {
    type: 'local',
  }
}],
First you have to install and register the File Poster and Image Preview plugins; here is an example of how to register them in your code:
import * as FilePond from 'filepond';
import FilePondPluginImagePreview from 'filepond-plugin-image-preview';
import FilePondPluginFilePoster from 'filepond-plugin-file-poster';
FilePond.registerPlugin(
  FilePondPluginImagePreview,
  FilePondPluginFilePoster,
);
Then you have to set the server.load property to your server endpoint and add a metadata property to your file objects containing the link to your image on the server:
const pond = FilePond.create(document.querySelector('input[type="file"]'));
pond.server = {
  url: '127.0.0.1:3000/',
  process: 'upload-file',
  revert: null,
  // this is the property you should set in order to render your file using the Poster plugin
  load: 'get-file/',
  restore: null,
  fetch: null
};
pond.files = [
  {
    source: iconId,
    options: {
      type: 'local',
      metadata: {
        poster: '127.0.0.1:3000/images/test.jpeg'
      }
    }
  }
];
The source property is the value you want to send to your endpoint, which in my case was /get-file/{imageDbId}.
In this case it does not matter much what your endpoint in the load property returns, but my guess is that it has to return a file object.
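For what it's worth, on the server side the load endpoint only has to respond with the raw file contents. A minimal sketch, assuming an Express server and a hypothetical lookUpFileName helper that maps the id FilePond sends to a file on disk:

const express = require('express');
const path = require('path');

const app = express();

// Hypothetical mapping from the database id FilePond sends to a file name on disk
const lookUpFileName = (id) => id + '.jpeg';

// FilePond calls this with the value from `source`, e.g. GET /get-file/123
app.get('/get-file/:id', (req, res) => {
  // sendFile sets the Content-Type from the extension; FilePond restores the
  // file from the raw response body
  res.sendFile(path.join(__dirname, 'images', lookUpFileName(req.params.id)));
});

app.listen(3000);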

Not seeing any improvement in speed by using Concurrent Chunk Upload

We are using Fine Uploader 5 to upload large videos to S3, and we are using PHP.
As per the documentation from Fine Uploader, the upload speed should increase if we enable concurrent chunk upload, but we didn't see any improvement. Below is the configuration we are using; please suggest if we are missing something.
Our Statistics:
152 MB file Upload to S3 without Concurrent Chunk Enabled: 7:40 sec
152 MB file Upload to S3 with Concurrent Chunk Enabled: 7:29 sec
Below is the Code we are using to enable Concurrent Chunk Upload:
$(document).ready(function () {
  var idUpload = "fineuploader-s3";
  $('#'+idUpload).fineUploaderS3({
    debug: true,
    request: {
      // REQUIRED: We are using a custom domain
      // for our S3 bucket, in this case. You can
      // use any valid URL that points to your bucket.
      //endpoint: "http://testvibloo.s3.amazonaws.com",
      endpoint: "testvibloo.s3.amazonaws.com",
      // REQUIRED: The AWS public key for the client-side user
      // we provisioned.
      accessKey: "AWS Access Key",
      forceMultipart: false,
    },
    objectProperties: {
      key: function(fileId) {
        var keyRetrieval = new qq.Promise(),
            filename = $("#"+idUpload).fineUploader("getName", fileId);
        keyRetrieval.success('testing/'+new Date().getTime()+'_'+filename);
        return keyRetrieval;
      }
    },
    template: "simple-previews-template",
    // REQUIRED: Path to our local server where requests
    // can be signed.
    signature: {
      endpoint: "http://hostname/testing/html/templates/s3demo.php"
    },
    // USUALLY REQUIRED: Blank file on the same domain
    // as this page, for IE9 and older support.
    iframeSupport: {
      localBlankPagePath: "success.html"
    },
    // optional feature
    chunking: {
      enabled: true,
      concurrent: {
        enabled: true,
      },
    },
    //maxConnections: 5,
    // optional feature
    resume: {
      enabled: true
    },
    // optional feature
    validation: {
      sizeLimit: 1024 * 1024 * 1024
    },
  })
  // Enable the "view" link in the UI that allows the file to be downloaded/viewed
  .on('complete', function(event, id, name, response) {
    var $fileEl = $(this).fineUploaderS3("getItemByFileId", id),
        $viewBtn = $fileEl.find(".view-btn");
    if (response.success) {
      $viewBtn.show();
      $viewBtn.attr("href", response.tempLink);
    }
  });
});
As per the documentation from Fine Uploader, the upload speed should increase if we enable concurrent chunk upload
The documentation does not exactly say this. Here's what it does say:
There is a clear benefit in terms of upload speed when sending multiple chunks at once. The concurrent chunks feature is primarily in place to maximize bandwidth usage for single large file uploads.
Note the last sentence: "for single large file uploads". If you are uploading multiple files at a time, you are likely already maxing out your connection to S3. The concurrent chunking feature is aimed at ensuring all available connections are used for single-file uploads. Without this feature, only one connection would be used at a time.
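To illustrate that last point, the relevant knob is maxConnections (Fine Uploader's default is 3); concurrent chunking only spreads a single file's chunks across those connections. Trimmed to the essentials of the configuration above, roughly:

$('#fineuploader-s3').fineUploaderS3({
  request: {
    endpoint: "testvibloo.s3.amazonaws.com",
    accessKey: "AWS Access Key"
  },
  signature: {
    endpoint: "http://hostname/testing/html/templates/s3demo.php"
  },
  chunking: {
    enabled: true,
    // With concurrent chunking enabled, a single large file can occupy all
    // available connections instead of uploading one chunk at a time.
    concurrent: {
      enabled: true
    }
  },
  // Up to this many requests run in parallel; several files uploading at once
  // will already saturate these connections on their own.
  maxConnections: 3
});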
