Displaying / rendering metadata in filepond--file with fileposter - filepond

I am using the fileposter plugin to successfully load and display locally-stored / previously uploaded files. An AJAX request gets the associated image files and returns an array of JSON objects representing the image data:
var imageFiles = [];
$(response.files).each(function(index, element){
  let file = {
    source: element.id,
    options: {
      type: 'local',
      file: {
        name: element.filename,
        size: element.size,
        type: element.extension
      },
      metadata: {
        poster: element.web_path,
        date: element.date_uploaded
      },
    }
  };
  imageFiles.push(file);
});
pond.setOptions({files: imageFiles});
There are some items of metadata that I have added (using the metadata plugin) that I would also like rendered with the image preview, such as the date uploaded, the name of the person who uploaded the file, etc. Is there a way of adding this? There seems to be no HTML/markup template in the library.
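As the question notes, there is no markup template to extend, so one workaround is to read the metadata back in the addfile event and inject a small element into the rendered item yourself. A rough sketch of that idea follows; the id-based element lookup and the .filepond--file-info target are assumptions about FilePond's internal markup, so inspect the rendered DOM and adjust the selectors for your version:

// Rough sketch: render extra metadata (e.g. the upload date) inside each item.
// Assumes the item's root element uses the file id as its DOM id and that the
// default .filepond--file-info wrapper exists; verify against your markup.
pond.on('addfile', (error, file) => {
  if (error) return;

  const date = file.getMetadata('date'); // set earlier via options.metadata
  if (!date) return;

  const itemEl = document.getElementById(file.id);
  const infoEl = itemEl && itemEl.querySelector('.filepond--file-info');
  if (!infoEl) return;

  const dateEl = document.createElement('span');
  dateEl.className = 'file-date'; // hypothetical class, style it yourself
  dateEl.textContent = 'Uploaded: ' + date;
  infoEl.appendChild(dateEl);
});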

Related

NestJS validate total size of uploaded files

I'm trying to upload a bunch of attachments in my NestJS project. It's an array of multiple files, uploaded like so:
@Post('/text')
async addText(@UploadedFiles() files){
  console.log("The files", files)
}
How do I ensure that the total size of all the attachments does not exceed, say, 5 MB? Is there a way to validate all the files?
This validation is mentioned in the documentation; alternatively, you can use the Multer config to validate sizes in your module, like this:
imports: [
  MulterModule.registerAsync({
    useFactory: () => ({
      // other config
      limits: {
        // maximum size of each individual file, in bytes
        fileSize: parseInt(process.env.MAX_SIZE_PER_FILE_UPLOAD),
        // maximum number of files per request
        files: parseInt(process.env.MAX_NUMBER_FILE_UPLOAD),
      },
    }),
  }),
  // other code
]
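Note that these Multer limits cap each file and the file count, not the combined size of all attachments. If the 5 MB limit from the question is meant to apply to the total, one option is to check it yourself once the files arrive; a minimal sketch, assuming FilesInterceptor from @nestjs/platform-express and @types/multer for the file type (the route and field names simply mirror the question):

import {
  Controller,
  Post,
  UploadedFiles,
  UseInterceptors,
  PayloadTooLargeException,
} from '@nestjs/common';
import { FilesInterceptor } from '@nestjs/platform-express';

const MAX_TOTAL_SIZE = 5 * 1024 * 1024; // 5 MB across all attachments

@Controller()
export class TextController {
  @Post('/text')
  @UseInterceptors(FilesInterceptor('files')) // the 'files' field name is an assumption
  async addText(@UploadedFiles() files: Express.Multer.File[]) {
    // Sum the sizes Multer reports for each received file
    const totalSize = files.reduce((sum, file) => sum + file.size, 0);
    if (totalSize > MAX_TOTAL_SIZE) {
      throw new PayloadTooLargeException('Combined attachment size exceeds 5 MB');
    }
    console.log('The files', files);
  }
}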

Azure Data Factory Blob Event Trigger not working

We see the error message below for an ADF Blob Event Trigger, and there was no code change to the blob trigger container or folder path. We see this error for a Web Activity when it is included in the pipeline.
ErrorCode=InvalidTemplate, ErrorMessage=Unable to parse expression '*sanitized*'
I faced the same problem and fixed it. Here is the solution.
The problem is that I parameterised some inputs for the linkedService and datasets. For example, here is one of my blob storage dataset Bicep files:
resource stagingBlobDataset 'Microsoft.DataFactory/factories/datasets@2018-06-01' = {
  // ... Create a JSON file dataset in a blob storage linkedService
  properties: {
    // ...
    parameters: {
      tableName: {
        type: 'string'
      }
    }
    typeProperties: {
      location: {
        type: 'AzureBlobStorageLocation'
        // fileName: '@concat(dataset().tableName,\'.json\')' // WRONG LINE
        // new line
        fileName: {
          value: '@concat(dataset().tableName,\'.json\')'
          type: 'Expression'
        }
      }
    }
  }
}
I wish Microsoft provided more information in the error message. Anyway, that was the issue in my Data Factory code.

Failure to upload images from Strapi to a cloudinary folder

I'm working with Strapi v4.1.7 and I'm trying to upload my images to Cloudinary in a specific folder (portfolio), but they just get added to the root folder of Cloudinary.
I'm also using the "@strapi/provider-upload-cloudinary": "^4.1.9" package.
My plugins.js is as follows:
module.exports = ({ env }) => ({
  // ...
  upload: {
    config: {
      provider: "cloudinary",
      providerOptions: {
        cloud_name: env("CLOUDINARY_NAME"),
        api_key: env("CLOUDINARY_KEY"),
        api_secret: env("CLOUDINARY_SECRET"),
      },
      actionOptions: {
        upload: {
          folder: env("CLOUDINARY_FOLDER", "portfolio"),
        },
        delete: {},
      },
    },
  },
  // ...
});
Also in my .env file, I have the folder set as follows:
....
CLOUDINARY_FOLDER=portfolio
Also, I was wondering whether it is possible to create dynamic folders in Cloudinary, like '/portfolio/Project1/all-project1-assets', from Strapi for all projects.
I need help to achieve this. Thanks !!!
Just change upload to uploadStream, as shown below:
actionOptions: {
  uploadStream: {
    folder: env("CLOUDINARY_FOLDER", "portfolio"),
  },
  delete: {},
},
You might also have to create an upload preset in Cloudinary: go to Cloudinary Settings > Upload > Upload presets > Add upload preset, then create an upload preset with a folder and in Signed mode.
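Putting the question's config and this fix together, plugins.js might look roughly like the sketch below. Which action Strapi uses depends on how the file is pushed to the provider, so the folder is set for both upload and uploadStream here as a precaution; the env variable names come from the question:

module.exports = ({ env }) => ({
  upload: {
    config: {
      provider: "cloudinary",
      providerOptions: {
        cloud_name: env("CLOUDINARY_NAME"),
        api_key: env("CLOUDINARY_KEY"),
        api_secret: env("CLOUDINARY_SECRET"),
      },
      actionOptions: {
        // Folder applied whether the provider uploads from a buffer or a stream
        upload: {
          folder: env("CLOUDINARY_FOLDER", "portfolio"),
        },
        uploadStream: {
          folder: env("CLOUDINARY_FOLDER", "portfolio"),
        },
        delete: {},
      },
    },
  },
});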

Can I get FilePond to show previews of loaded local images?

I use FilePond to show previously uploaded images with the load functionality. The files are visible; however, I don't get a preview (which I do get when uploading a file).
Should it be possible to show previews for files through load?
files: [{
  source: " . $profile->profileImage->id . ",
  options: {
    type: 'local',
  }
}],
First, you have to install and register the File Poster and Image Preview plugins. Here is an example of how to register them in your code:
import * as FilePond from 'filepond';
import FilePondPluginImagePreview from 'filepond-plugin-image-preview';
import FilePondPluginFilePoster from 'filepond-plugin-file-poster';
FilePond.registerPlugin(
  FilePondPluginImagePreview,
  FilePondPluginFilePoster,
);
Then you have to set the server.load property to your server endpoint and add a metadata property with a poster key to your file objects, pointing at the image on the server:
const pond = FilePond.create(document.querySelector('input[type="file"]')); // select your file input
pond.server = {
  url: '127.0.0.1:3000/',
  process: 'upload-file',
  revert: null,
  // this is the property you should set in order to render your file using the Poster plugin
  load: 'get-file/',
  restore: null,
  fetch: null
};
pond.files = [
  {
    source: iconId,
    options: {
      type: 'local',
      metadata: {
        poster: '127.0.0.1:3000/images/test.jpeg'
      }
    }
  }
];
The source property is the value that gets sent to your load endpoint, which in my case means it is appended to /get-file/{imageDbId}.
For rendering the poster it does not matter much what the load endpoint returns, but my guess is that it has to return a file object.
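For completeness, a minimal sketch of what such a load endpoint could look like, assuming an Express server and a local uploads directory; the route mirrors the /get-file/ example above, everything else is an assumption. As far as I know, FilePond expects the raw file back and reads the file name from a Content-Disposition header:

import express from "express";
import path from "path";

const app = express();
const UPLOAD_DIR = path.join(__dirname, "uploads"); // assumption: stored files live here

// FilePond requests GET /get-file/{source} for each local file
app.get("/get-file/:id", (req, res) => {
  // Assumption: the id maps directly to a stored file name.
  // In a real app, look the file up in your database and validate the id.
  const filePath = path.join(UPLOAD_DIR, req.params.id);

  // FilePond reads the original file name from this header
  res.setHeader("Content-Disposition", `inline; filename="${req.params.id}"`);
  res.sendFile(filePath, (err) => {
    if (err) res.status(404).end();
  });
});

app.listen(3000);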

Not seeing any improvement in speed by using Concurrent Chunk Upload

We are using Fine Uploader 5 to upload large videos to S3, and we are using PHP.
As per the Fine Uploader documentation, the upload speed should increase if we enable concurrent chunk uploads, but we didn't see any improvement. Below is the configuration we are using; please suggest if we are missing something.
Our statistics:
152 MB file upload to S3 without concurrent chunking enabled: 7:40 (min:sec)
152 MB file upload to S3 with concurrent chunking enabled: 7:29 (min:sec)
Below is the code we are using to enable concurrent chunk uploads:
$(document).ready(function () {
  var idUpload = "fineuploader-s3";
  $('#' + idUpload).fineUploaderS3({
    debug: true,
    request: {
      // REQUIRED: We are using a custom domain
      // for our S3 bucket, in this case. You can
      // use any valid URL that points to your bucket.
      //endpoint: "http://testvibloo.s3.amazonaws.com",
      endpoint: "testvibloo.s3.amazonaws.com",
      // REQUIRED: The AWS public key for the client-side user
      // we provisioned.
      accessKey: "AWS Access Key",
      forceMultipart: false,
    },
    objectProperties: {
      key: function(fileId) {
        var keyRetrieval = new qq.Promise(),
            filename = $("#" + idUpload).fineUploader("getName", fileId);
        keyRetrieval.success('testing/' + new Date().getTime() + '_' + filename);
        return keyRetrieval;
      }
    },
    template: "simple-previews-template",
    // REQUIRED: Path to our local server where requests
    // can be signed.
    signature: {
      endpoint: "http://hostname/testing/html/templates/s3demo.php"
    },
    // USUALLY REQUIRED: Blank file on the same domain
    // as this page, for IE9 and older support.
    iframeSupport: {
      localBlankPagePath: "success.html"
    },
    // optional feature
    chunking: {
      enabled: true,
      concurrent: {
        enabled: true,
      },
    },
    //maxConnections: 5,
    // optional feature
    resume: {
      enabled: true
    },
    // optional feature
    validation: {
      sizeLimit: 1024 * 1024 * 1024
    },
  })
  // Enable the "view" link in the UI that allows the file to be downloaded/viewed
  .on('complete', function(event, id, name, response) {
    var $fileEl = $(this).fineUploaderS3("getItemByFileId", id),
        $viewBtn = $fileEl.find(".view-btn");
    if (response.success) {
      $viewBtn.show();
      $viewBtn.attr("href", response.tempLink);
    }
  });
});
"As per the Fine Uploader documentation, the upload speed should increase if we enable concurrent chunk uploads"
The documentation does not exactly say this. Here's what it does say:
"There is no clear benefit in terms of upload speed when sending multiple chunks at once. The concurrent chunks feature is primarily in place to maximize bandwidth usage for single large file uploads."
Note that last sentence: "for single large file uploads". If you are uploading multiple files at a time, you are already likely maxing out your connection to S3. The concurrent chunking feature is aimed at ensuring all available connections are used for single file uploads. Without this feature, only one connection would be used at a time.
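In other words, the place where concurrent chunking should show a measurable difference is a single large file with spare connections available. If you want to re-run the 152 MB test, the relevant knobs are the maxConnections option (already present, commented out, in the configuration above) together with the chunking settings; a minimal sketch, with assumed values, to merge into the existing config:

// Only the options relevant to concurrent chunking are shown; merge these with
// the rest of the configuration from the question. The values here are assumptions.
$("#fineuploader-s3").fineUploaderS3({
  // Upper bound on simultaneous requests; concurrent chunking can only use
  // as many parallel chunk uploads as there are free connections.
  maxConnections: 5,
  chunking: {
    enabled: true,
    concurrent: {
      enabled: true
    }
  }
});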
