I'm using Fine Uploader S3 and I've just discovered an unexpected (from my point of view) behaviour.
I give our customers the ability to upload files for their works. Every work has a code, for example 12345_1298681 and 12345_84792782.
Let's assume the customer has started to upload files for both orders (starting the upload marks the work as interrupted, so at the next uploader init the files are set as _isResumable = true).
The user uploads file a.mov to work 12345_1298681 (the s3Key is 12345_1298681/a.mov) and b.mov to 12345_84792782 (the s3Key is 12345_84792782/b.mov).
If for some reason the user then tries to upload a.mov to work 12345_84792782, Fine Uploader recognizes the file as resumable and in the console I can see:
Identified file with ID 0 and name of a.mov as resumable.
Server requested UUID change from '60aecf65-67ca-4811-aa3c-6425620cc3f1' to 'a2bfc111-0c82-4e48-8512-a08c3e24cbd8'
But a.mov should be resumable only if added to work 12345_1298681, not if added to work 12345_84792782.
I've seen in the docs that if I return false from the onResume callback the file is restarted from the beginning, but the problem with this approach is that the resume data is deleted for the file, not for the s3Key, so wouldn't I lose the ability to resume the upload for 12345_1298681?
To be clearer (hopefully), I have:
order 12345_123456 => s3 key => 12345_123456/a.mov
order 12345_987654 => s3 key => 12345_987654/a.mov
If I start to upload the file for order 12345_123456 and then stop it, and afterwards start the upload for order 12345_987654 using the same file from my filesystem, Fine Uploader recognizes that the file is the same (which is correct, but it differs in the final key) and uploads it to 12345_123456/a.mov instead of 12345_987654/a.mov.
This can lead to a problem: I think I'm uploading the file for one order when in fact I'm not.
Digging into the Fine Uploader source I can see that the resume-record id is generated by the _getLocalStorageId function from qq.XhrUploadHandler rather than the _getLocalStorageId from qq.s3.XhrUploadHandler, and neither of those methods takes the key into account to differentiate one file from another.
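The only application-level workaround I can think of is to track myself which order a file's partial upload belongs to, and reject the resume when it does not match the order currently being uploaded to. A rough sketch (assuming the documented onSubmit/onResume callbacks and the getFile(id) API; the orderFor_* localStorage entries are my own bookkeeping, not something Fine Uploader provides):

var currentOrder = '12345_84792782'; // the order the user is uploading for right now

var uploader = new qq.s3.FineUploader({
    // ... bucket, request and signature options omitted ...
    callbacks: {
        onSubmit: function (id, name) {
            var recordKey = 'orderFor_' + name + '_' + uploader.getFile(id).size;
            // Record the order the first time we see this file; an existing record
            // means there may already be partial chunks belonging to another order
            if (localStorage.getItem(recordKey) === null) {
                localStorage.setItem(recordKey, currentOrder);
            }
        },
        onResume: function (id, name, chunkData) {
            var recordKey = 'orderFor_' + name + '_' + uploader.getFile(id).size;
            var startedFor = localStorage.getItem(recordKey);
            if (startedFor && startedFor !== currentOrder) {
                // The recorded partial upload belongs to a different order:
                // take over the record and restart from the beginning
                localStorage.setItem(recordKey, currentOrder);
                return false;
            }
        }
    }
});

But this still has the drawback described above: returning false from onResume throws the resume data away, so the half-finished upload for the other order is lost.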
Related
I'm trying to upload multiple test cases in one go. How can I upload multiple test cases at one time in ALM?
All flow files that you want to upload should be updated with the name attribute.
Make sure the src folder has a properties file named “multipleFlows.properties”, or create it if it is missing.
Update the multipleFlows.properties file with all the flow ids and flow XML paths that you would like to upload through ALMSync, as shown below.
For example, the multipleFlows.properties file should follow the format below:
flow1_id=flow1_xml_path
flow2_id=flow2_xml_path
flow3_id=flow3_xml_path
flow4_id=flow4_xml_path
Open the Run Configuration ALMSync >> Arguments tab and update the arguments as follows:
createTestCase flow_map multipleFlows
I have set up FilePond and it is working well, but my next task is to preserve the order in which files are added to FilePond.
I'm allowing multiple files to be added and have auto upload enabled, but due to file size, transfer time and FilePond's asynchronous uploads, it isn't possible to assume on the server side that the first file to finish transferring was the first in the list.
I can see from the documentation that it's possible to get/remove files via their index, so is it possible to use the file metadata plugin to send that index with each file uploaded?
Following on from #Rik's comment, I have created the following code snippet, which works well.
let filecount = 0;
pond.on('addfile', (file) => {
    filecount += 1;
    file.setMetadata("filecount", filecount);
    // ... other metadata removed for brevity ...
});
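If you need explicit control over how that filecount reaches the server, a custom server.process function can append it to the request by hand. A rough sketch, assuming FilePond's documented custom process signature; the /upload endpoint and the filecount field name are placeholders for your own backend:

pond.setOptions({
    server: {
        process: (fieldName, file, metadata, load, error, progress, abort) => {
            // Send the file together with its filecount as an ordinary form field
            const formData = new FormData();
            formData.append(fieldName, file, file.name);
            formData.append('filecount', metadata.filecount);

            const request = new XMLHttpRequest();
            request.open('POST', '/upload'); // placeholder endpoint
            request.upload.onprogress = (e) =>
                progress(e.lengthComputable, e.loaded, e.total);
            request.onload = () => {
                if (request.status >= 200 && request.status < 300) {
                    load(request.responseText); // tell FilePond the upload succeeded
                } else {
                    error('Upload failed');
                }
            };
            request.send(formData);

            // Let FilePond cancel the request if the file is removed
            return {
                abort: () => {
                    request.abort();
                    abort();
                }
            };
        }
    }
});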
In my case, I wanted to read all the XML files from my S3 bucket, parse them, and then move all the parsed files to another S3 bucket/prefix.
The parsing logic is working fine for me, but I am not able to move all the files.
This is the example I am trying to use:
import boto3

s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')

for obj in src_bucket.objects.all():
    filename = obj.key.split('/')[-1]
    dest_bucket.put_object(Key='sample/' + filename, Body=obj.get()["Body"].read())
The above code is not working for me at all (I have given the S3 folder full access, and for testing I have given public full access as well).
Thanks
Check out this answer. You could use Python's endswith() function and pass ".xml" to it, get a list of those files, copy them to the destination bucket, and then delete them from the source bucket.
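A rough sketch of that suggestion, reusing the bucket names from the question (adjust the filter and the destination prefix to your layout):

import boto3

s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')

for obj in src_bucket.objects.all():
    if not obj.key.endswith('.xml'):
        continue  # only move the XML files
    filename = obj.key.split('/')[-1]
    # copy() performs a server-side copy, so the object is not downloaded locally
    dest_bucket.Object('sample/' + filename).copy({'Bucket': src_bucket.name, 'Key': obj.key})
    obj.delete()  # remove the original so the file is effectively moved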
I have a models.ImageField which I sometimes populate with the corresponding forms.ImageField. Sometimes, instead of using a form, I want to update the image field with an AJAX POST. I am passing both the image filename and the image content (base64 encoded), so that in my API view I have everything I need. But I do not really know how to do this manually, since I have always relied on form processing, which automatically populates the models.ImageField.
How can I manually populate the models.ImageField having the filename and the file contents?
EDIT
I have reached the following state:
instance.image.save(file_name, File(StringIO(data)))
instance.save()
This updates the file reference, using the right value configured in upload_to on the ImageField.
But it is not saving the image. I would have imagined that the first .save call would:
Generate a file name in the configured storage
Save the file contents to the selected file, including handling of any kind of storage configured for this ImageField (local FS, Amazon S3, or whatever)
Update the reference to the file in the ImageField
And the second .save would actually save the updated instance to the database.
What am I doing wrong? How can I make sure that the new image content is actually written to disk, in the automatically generated file name?
EDIT2
I have a very unsatisfactory workaround, which is working but is very limited. This illustrates the problems that using the ImageField directly would solve:
# TODO: workaround because I do not yet know how to correctly populate the ImageField
# This is very limited because:
# - it only uses the local filesystem (no AWS S3, ...)
# - it does not provide the path handling provided by upload_to
local_file = os.path.join(settings.MEDIA_ROOT, file_name)
with open(local_file, 'wb') as f:
    f.write(data)
instance.image = file_name
instance.save()
EDIT3
So, after some more playing around I have discovered that my first implementation was doing the right thing, but silently failing if the passed data has the wrong format (I was mistakenly passing the base64 string instead of the decoded data). I'll post this as a solution.
Just save the file and the instance:
instance.image.save(file_name, File(StringIO(data)))
instance.save()
No idea where the docs for this use case are.
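For completeness, a fuller sketch of the same approach including the base64 decoding that tripped me up (assuming the encoded string arrives as encoded_data; on Python 3 use io.BytesIO rather than StringIO, since the decoded content is binary):

import base64
from io import BytesIO

from django.core.files import File

raw = base64.b64decode(encoded_data)  # decode first; passing the base64 string fails silently
instance.image.save(file_name, File(BytesIO(raw)))  # goes through the configured storage and upload_to
instance.save()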
You can use InMemoryUploadedFile directly to save data:
import sys
import base64
import cStringIO

from django.core.files.uploadedfile import InMemoryUploadedFile

file = cStringIO.StringIO(base64.b64decode(request.POST['file']))
image = InMemoryUploadedFile(file,
                             field_name='file',
                             name=request.POST['name'],
                             content_type="image/jpeg",
                             size=sys.getsizeof(file),
                             charset=None)
instance.image = image
instance.save()
I'm having this issue on Magento 1.5.1:
The resource role tree is empty (web services and permissions).
To track down the error I have:
disabled all extensions (moved the XML files away from /etc/modules/), but this did not fix it;
made a diff against the original core files (the files are identical).
So the problem should be at some DB level.
I have found this old discussion, but it didn't help me:
http://www.magentocommerce.com/boards/viewthread/21449/
Update:
I found out that the empty tree is caused by these code lines:
file: /app/code/core/Mage/Adminhtml/Block/Permissions/Tab/Rolesedit.php
$rootArray = $this->_getNodeJson($resources->admin, 1);
$json = Mage::helper('core')->jsonEncode(isset($rootArray['children']) ? $rootArray['children'] : array());
$json is empty while $rootArray looks correctly populated (it contains a ['children'] node).
So the problem starts in the jsonEncode() method.
Disable all extensions.
Cross-check the core files (/app/code/core, /js, /lib, /app/design/adminhtml) against a default Magento install,
e.g. diff -qrbB magento_original/js/ YOUR_MAGE_PROJECT/js/
Revert any changes.
Clear the cache (even if you have disabled it, the backend will continue to be cached).
Check whether it is fixed.
Isolate the problem and fix it.
-> In this particular situation the problem was related to some modifications in the file /js/ext-tree-checkbox.js.