How do I send the order of multiple drag-and-drop or browsed files in FilePond?

I have set up FilePond and it is working well, but my next task is to preserve the order in which files are added to FilePond.
I'm allowing multiple files to be added and have auto upload enabled, but due to file size, transfer time, and FilePond's asynchronous uploads, it isn't possible to assume on the server side that the first file to finish transferring was the first in the list.
I can see from the documentation that it's possible to get/remove files via their index, so is it possible to use the file metadata plugin to send that index with each file uploaded?

Following on from Rik's comment, I have created the following code snippet, which works well.
let filecount = 0;
pond.on('addfile', (error, file) => { // FilePond passes (error, file) to event callbacks
    if (error) {
        return;
    }
    filecount += 1;
    // Record the order in which the file was added so the server can rebuild the list.
    file.setMetadata('filecount', filecount);
    // ... other metadata removed for brevity ...
});

Related

Spring Integration SFTP - issue with filters and number of messages emitted

I started using Spring Integration SFTP and I have some questions.
Filters are not working. I have this example configuration:
Sftp.inboundAdapter(ftpFileSessionFactory())
        .preserveTimestamp(true)
        .deleteRemoteFiles(false)
        .remoteDirectory(integrationProperties.getRemoteDirectory())
        .filter(sftpFileListFilter())   // doesn't work
        .patternFilter("*.xlsx")        // doesn't work
And my ChainFileListFilter:
private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
    chainFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
    chainFileListFilter.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
    return chainFileListFilter;
}
If I understand correctly, only XLSX files should be saved in the local directory. If so, it doesn't work with this configuration. Am I doing something wrong, or have I misunderstood?
How can I configure SFTP so that each downloaded file emits a message? I see two parameters in the docs, max-messages-per-poll and max-fetch-size, but I don't know how to set them so that every file emits a message. I would like to sync files once every 24 hours and produce a batch job queue. Maybe there is a workaround?
Is there a built-in filter which allows me to fetch only files whose content has changed? The best solution would be to check the checksums of the files.
I will be grateful for your help and explanations.
You cannot combine filter() and patternFilter(). Only one of them can be used: the last one overrides whatever you used before. In other words: either filter() or patternFilter(), not both. By default the logic is like this:
public SftpInboundChannelAdapterSpec patternFilter(String pattern) {
    return filter(composeFilters(new SftpSimplePatternFileListFilter(pattern)));
}

private CompositeFileListFilter<ChannelSftp.LsEntry> composeFilters(FileListFilter<ChannelSftp.LsEntry> fileListFilter) {
    CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
    compositeFileListFilter.addFilters(fileListFilter,
            new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "sftpMessageSource"));
    return compositeFileListFilter;
}
So, technically you don't need your custom one if you don't use an external persistent MetadataStore. But if you do, consider swapping the order of SftpSimplePatternFileListFilter and SftpPersistentAcceptOnceFileListFilter, since it is better to check the pattern before storing a file name in the MetadataStore.
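A minimal sketch of that reordered chain (the same bean as in the question, with only the filter order flipped):

private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chain = new ChainFileListFilter<>();
    // Apply the cheap name-pattern check first ...
    chain.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
    // ... so only matching files are ever recorded in the MetadataStore.
    chain.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
    return chain;
}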
It is a fact that every synced remote file that passes those filters is stored in the local directory, and the message for that local file is emitted as soon as the poller makes a request.
The maxFetchSize plays its role when we load remote files into the local directory. The maxMessagesPerPoll is used by the poller, but those messages are built from the already-fetched local files. A message is emitted per local file, not as one batch for all of them; batching is not what messaging is designed for.
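As a hedged sketch (the directories and bean name below are placeholders, not from the question), a flow that syncs once every 24 hours and still emits one message per local file could look like this with the Java DSL:

// Spring Integration 5.x Java DSL; imports from org.springframework.integration.dsl
@Bean
public IntegrationFlow sftpSyncFlow() {
    return IntegrationFlows
            .from(Sftp.inboundAdapter(ftpFileSessionFactory())
                            .remoteDirectory("/remote")             // placeholder
                            .localDirectory(new File("/tmp/local")) // placeholder
                            .filter(sftpFileListFilter()),
                    e -> e.poller(Pollers.fixedDelay(Duration.ofHours(24))
                            .maxMessagesPerPoll(-1)))               // -1: drain all synced files each poll
            .handle(m -> System.out.println("local file: " + m.getPayload()))
            .get();
}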
Please share more info about what does not work with the files. The SftpPersistentAcceptOnceFileListFilter checks not only the file name but also the file's mtime, so it is not about any checksum, but rather the last-modified timestamp of the file.

Upload multiple test cases at one time in ALM?

I'm trying to upload multiple test cases in one go. How do I upload multiple test cases at one time in ALM?
All flow files that you want to upload should be updated with a name attribute.
Make sure the src folder has a properties file named multipleFlows.properties, or create it.
Update the multipleFlows.properties file with all the flow IDs and flow XML paths that you would like to upload through ALMSync, as mentioned below.
For example, the multipleFlows.properties file should contain entries in the following format:
flow1_id=flow1_xml_path
flow2_id=flow2_xml_path
flow3_id=flow3_xml_path
flow4_id=flow4_xml_path
Open the Run Configuration ALMSync >> Arguments tab and update the arguments as:
createTestCase flow_map multipleFlows

Fine Uploader S3 and resume options

I'm using Fine Uploader S3 and I've just discovered an unexpected (from my point of view) behaviour.
I give our customers the ability to upload files for their work orders.
Every work order has a code, for example 12345_1298681 and 12345_84792782.
Let's assume the customer started to upload files for both orders (starting the upload marks the work as interrupted, so at the next uploader init they are set as _isResumable = true).
The user uploads file a.mov to work 12345_1298681 (the S3 key is 12345_1298681/a.mov) and b.mov to 12345_84792782 (the S3 key is 12345_84792782/b.mov).
If for some reason the user tries to upload a.mov to work 12345_84792782, Fine Uploader recognizes the file as resumable, and in the console I can see:
Identified file with ID 0 and name of a.mov as resumable.
Server requested UUID change from '60aecf65-67ca-4811-aa3c-6425620cc3f1' to 'a2bfc111-0c82-4e48-8512-a08c3e24cbd8'
But a.mov is resumable if added to work 12345_1298681, not if added to work 12345_84792782.
I've seen in the docs that if I return false from the onResume callback the file is restarted from the beginning, but the problem with this approach is that the resume data is deleted for the file and not for the S3 key, so won't I lose the ability to resume the upload in 12345_1298681?
To be clearer (hopefully), I have:
order 12345_123456 => s3 key => 12345_123456/a.mov
order 12345_987654 => s3 key => 12345_987654/a.mov
If I start to upload the file for order 12345_123456 and then stop it, and then start the upload for order 12345_987654 using the same file from my filesystem, Fine Uploader recognizes that the file is the same (which is correct, but it differs in the final key) and uploads it to 12345_123456/a.mov instead of 12345_987654/a.mov.
This can lead to a problem: I think I'm uploading a file for one order when in fact I'm not.
Digging into the source of Fine Uploader, I can see that the cookie ID is generated by _getLocalStorageId from qq.XhrUploadHandler rather than _getLocalStorageId from qq.s3.XhrUploadHandler, but in neither of those methods is there anything that considers the key to differentiate one file from another.

Purpose of /var/resource_config.json

I'm trying to figure out the purpose of the file /var/resource_config.json in Magento. It appears to be a cache of some configuration, but I can't see where in the source code it is created and/or updated.
I'm in the process of setting up local/dev/staging/prod environments for an EE 1.12 build and want to figure out whether I can safely exclude it from my repo or whether I need to script some updates to it for deploys.
Maybe the Flash image uploader in the admin creates it?
Any ideas or directions to look?
This is a configuration cache file for the "alternative media store" system. This is a system where requests for media files are routed through get.php, and it allows you to store media in the database instead of the file system. (That may be a gross oversimplification, as I've never used the feature myself.)
You can safely (and should) exclude this file from deployments/source control, as it's a cache file and will be auto-generated as needed. See the following code block in the root-level get.php for more information.
if (!$mediaDirectory) {
    $config = Mage_Core_Model_File_Storage::getScriptConfig();
    $mediaDirectory = str_replace($bp . $ds, '', $config['media_directory']);
    $allowedResources = array_merge($allowedResources, $config['allowed_resources']);
    $relativeFilename = str_replace($mediaDirectory . '/', '', $pathInfo);

    $fp = fopen($configCacheFile, 'w');
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        ftruncate($fp, 0);
        fwrite($fp, json_encode($config));
    }
    flock($fp, LOCK_UN);
    fclose($fp);

    checkResource($relativeFilename, $allowedResources);
}
Speaking in general terms, Magento's var folder serves the same purpose as the *nix /var folder:
Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files. Sometimes a separate partition.
and it should be isolated to particular systems (i.e. not part of deployments).

Issue with WatchService in Java 7

I'm using JDK 7's WatchService API to monitor a folder on the file system. I'm sending a new file via email to that folder; when the file arrives in the folder, I trigger on the ENTRY_CREATE event. It's working fine.
But the issue is that it generates two ENTRY_CREATE events instead of the single event I'm expecting.
Below is the code:
import java.nio.file.*;
import java.util.List;

Path dir = Paths.get("/var/mail");
WatchService watcher = dir.getFileSystem().newWatchService();
dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
System.out.println("waiting for new file");
WatchKey watchKey = watcher.take(); // blocks until at least one event is queued
List<WatchEvent<?>> events = watchKey.pollEvents();
System.out.println(events.size());
for (WatchEvent<?> event : events) {
    if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
        String fileCreated = event.context().toString().trim();
        System.out.println("created: " + fileCreated);
    }
}
In the above code I'm getting an events size of 2.
Can anyone please help me figure out why I'm getting two events?
I am guessing that there might be some temporary files being created in the folder at the same time. Just check the names/paths of the files being created, for example with the diagnostic loop below.
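A minimal diagnostic sketch (assuming the same /var/mail directory as in the question): loop on the watcher and print every event's kind and file name, remembering to reset() the key so it keeps receiving events.

import java.nio.file.*;

public class WatchDiagnostics {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/var/mail");
        WatchService watcher = dir.getFileSystem().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until events are queued
            for (WatchEvent<?> event : key.pollEvents()) {
                // Print everything, including any temporary files that mail
                // delivery may create alongside the real one.
                System.out.println(event.kind() + " -> " + event.context());
            }
            if (!key.reset()) { // the key must be reset, or no further events arrive
                break;
            }
        }
    }
}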
