Fine Uploader S3, Large files, ERR_NAME_NOT_RESOLVED - fine-uploader

I'm using Fine Uploader to upload directly to S3. It works for small files, but anything large, like a 6GB file, causes the following error in the JavaScript console:
[Screenshot: JavaScript console showing ERR_NAME_NOT_RESOLVED]
Here I'm uploading a file named "6gb".
This is the server log (Ruby on Rails) from the first signature request, when uploading part #1:
Started POST "/signature" for 192.168.188.129 at 2016-01-31 00:51:12 +0000
Processing by UploadsController#signature as JSON
Parameters: {"headers"=>"PUT\n\n\n\nx-amz-date:Sun, 31 Jan 2016 00:51:14 GMT\n/bv-deliverables/uploadsFolder/7c54325c-1db6-4d66-a31f-ceecb3c4ee2a.6gb?partNumber=1&uploadId=AD7mVbSzuhyC9JhgmZVu7dGEr5mJuEh8EpZTmx4Tl5M6r3ki6YDJvBqLfkDzFLQJGBpKGpzbwC8uPo7h3JGwQA--", "upload"=>{}}
Completed 200 OK in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms)
This POST to /signature is repeated for each part up to part #133, at which point retry.gif is loaded and the error appears in the JavaScript console.
Started POST "/signature" for 192.168.188.129 at 2016-01-31 01:34:36 +0000
Processing by UploadsController#signature as JSON
Parameters: {"headers"=>"PUT\n\n\n\nx-amz-date:Sun, 31 Jan 2016 01:34:37 GMT\n/bv-deliverables/uploadsFolder/7c54325c-1db6-4d66-a31f-ceecb3c4ee2a.6gb?partNumber=133&uploadId=AD7mVbSzuhyC9JhgmZVu7dGEr5mJuEh8EpZTmx4Tl5M6r3ki6YDJvBqLfkDzFLQJGBpKGpzbwC8uPo7h3JGwQA--", "upload"=>{}}
Completed 200 OK in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms)
Started GET "/assets/retry.gif" for 192.168.188.129 at 2016-01-31 01:35:16 +0000
I believe I'm correctly signing the headers, since the upload seems to work up until that part.
My upload speed is around 2 megabits per second.
Here is my AWS CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
And here is my Fine Uploader JavaScript:
$('#fine-uploader-s3').fineUploaderS3({
    template: 'qq-template-s3',
    request: {
        endpoint: "https://s3.amazonaws.com/bv-deliverables",
        accessKey: "AKIAILA4KWEGIGGBAL5Q"
    },
    signature: {
        endpoint: "signature",
        customHeaders: {"X-CSRF-Token": "<%= form_authenticity_token %>"}
    },
    uploadSuccess: {
        endpoint: "create",
        customHeaders: {"X-CSRF-Token": "<%= form_authenticity_token %>"},
        params: {
            isBrowserPreviewCapable: qq.supportedFeatures.imagePreviews
        }
    },
    iframeSupport: {
        localBlankPagePath: "/server/success.html"
    },
    cors: {
        expected: true
    },
    chunking: {
        enabled: true
    },
    resume: {
        enabled: true
    },
    deleteFile: {
        enabled: false
    },
    validation: {
        itemLimit: 5,
        sizeLimit: 107374182400
    },
    objectProperties: {
        reducedRedundancy: true,
        key: function (fileId) {
            var filename = $('#fine-uploader-s3').fineUploader('getName', fileId);
            var uuid = $('#fine-uploader-s3').fineUploader('getUuid', fileId);
            console.log(fileId);
            var ext = filename.substr(filename.lastIndexOf('.') + 1);
            return 'uploadsFolder/' + uuid + '.' + ext;
        }
    },
    callbacks: {
        onComplete: function(id, name, response) {
            var previewLink = qq(this.getItemByFileId(id)).getByClass('preview-link')[0];
            location.reload();
            if (response.success) {
                previewLink.setAttribute("href", response.tempLink)
            }
        }
    }
});
Re-adding the failed file seems to resume the upload from where it failed.
Again, it works perfectly with small files (like 15KB) and with smaller multipart uploads (like 10MB), but the error occurs when I try to upload a large 6GB file.

This is pretty clearly a network issue, outside of the control of Fine Uploader. The auto-retry feature, if enabled, should attempt to resume automatically. Failing that, you can manually retry the upload using the retry API method, or via the retry button that can be enabled in Fine Uploader UI.
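For reference, both behaviors are opt-in. A minimal sketch of the relevant retry options (the attempt count and delay below are illustrative values, not recommendations):

$('#fine-uploader-s3').fineUploaderS3({
    // ... request, signature, chunking, and resume options as above ...
    retry: {
        enableAuto: true,     // automatically retry failed requests
        maxAutoAttempts: 5,   // give up after this many automatic attempts
        autoAttemptDelay: 10, // seconds to wait between automatic attempts
        showButton: true      // also show a manual retry button (Fine Uploader UI)
    }
});

Combined with resume: { enabled: true }, which is already set above, a dropped connection mid-part should recover without re-uploading the parts that already succeeded.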

Related

How to manipulate request Accept header for AWS API Gateway Lambda LAMBDA_PROXY integration

I've written a small Lambda function and deployed it to AWS using the Serverless framework. It provides a single function that returns a PNG file.
When the resource is opened in a browser, it correctly loads the PNG.
When requested with curl ("curl https://******.execute-api.us-east-1.amazonaws.com/dev/image.png") it produces a base64-encoded version of the image.
When requested on the command line with an Accept header (curl -H "Accept: image/png" "https://******.execute-api.us-east-1.amazonaws.com/dev/image.png") it produces a binary image/png version of the image.
How do I manipulate the request to the API gateway so that all requests have "Accept: image/png" set on them regardless of origin? Or is there another way to ensure that the response will always be binary rather than base64?
Source Code
The handler code loads a PNG image from disk and returns a response object with a base64-encoded body.
// handler.js
'use strict';

const fs = require('fs');
const image = fs.readFileSync('./1200x600.png');

module.exports = {
    image: async (event) => {
        return {
            statusCode: 200,
            headers: {
                "Content-Type": "image/png",
            },
            isBase64Encoded: true,
            body: image.toString('base64'),
        };
    },
};
The serverless configuration sets up the function and uses the "serverless-apigw-binary" and "serverless-apigwy-binary" plugins to set content handling and binary mime types for the response.
# serverless.yml
service: serverless-png-facebook-test

provider:
  name: aws
  runtime: nodejs8.10

functions:
  image:
    handler: handler.image
    memorySize: 128
    events:
      - http:
          path: image.png
          method: get
          contentHandling: CONVERT_TO_BINARY

plugins:
  - serverless-apigw-binary
  - serverless-apigwy-binary

custom:
  apigwBinary:
    types:
      - 'image/*'
package.json
{
    "name": "serverless-png-facebook-test",
    "version": "1.0.0",
    "main": "handler.js",
    "license": "MIT",
    "dependencies": {
        "serverless-apigw-binary": "^0.4.4",
        "serverless-apigwy-binary": "^1.0.0"
    }
}
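One thing worth trying (my assumption, not something confirmed in this thread): with Lambda proxy integration, API Gateway only converts a base64 response body to binary when the request's Accept header matches one of the API's configured binary media types. Registering */* as a binary type should therefore make every response binary regardless of what the client sends. With the serverless-apigw-binary plugin that would be:

# serverless.yml (fragment) - hypothetical tweak, untested here
custom:
  apigwBinary:
    types:
      - '*/*'  # match every Accept value, so base64 bodies are always decoded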

Fineuploader: possible to force a success if the server will not return the expected JSON data?

I'm not sure if it's possible to force Fine Uploader to report a successful upload. I'm submitting a form to the URL "http://119.29.222.368:8991/upload" (sample IP for confidentiality), which returns only Status: "OK" without Success: true.
Below is my code. I'm fairly sure the form is submitted successfully, but my UI shows an error because the API does not return the success value.
var uploader = new qq.FineUploader({
    element: document.getElementById("uploader"),
    cors: {
        allowXdr: 'true',
        expected: 'true'
    },
    request: {
        method: 'POST',
        // endpoint: '/upload',
        endpoint: 'http://119.29.222.368:8991/upload',
        forceMultipart: 'true',
        inputName: 'filename',
        params: {
            'token': <%- JSON.stringify(token) %>,
            'path': "/images/feed/"
        }
    }
})
This was originally requested in github.com/FineUploader/fine-uploader/issues/1325. Follow the linked pull request at the end of that issue for updates. – Ray Nicholus 20 hours ago

Mandrill with Parse Server not working after Heroku migration

I have migrated an app from parse.com to Heroku with mLab, and everything works fine except cloud code.
I am using Mandrill to send email from Parse cloud code, and it is not working on Heroku.
Here is what I have done so far:
1. Installed mandrill ~0.1.0 into parse-server-example and pushed the code to the Heroku app
2. Put the cloud code into '/cloud/main.js'
3. Called the function from the iOS app, which responds with the error:
[Error]: Invalid function. (Code: 141, Version: 1.13.0).
Here is my cloud code:
Parse.Cloud.define("sendMail", function(request, response) {
var Mandrill = require('mandrill');
Mandrill.initialize('xxxxxx-xxxxx');
Mandrill.sendEmail({
message: {
text: "ffff",
subject: "hello",
from_email: "xxxxx#gmail.com",
from_name: "pqr",
to: [
{
email: "xxxxxxxxxx#gmail.com",
name: "trump"
}
]
},
async: true
},{
success: function(httpResponse) {
console.log(httpResponse);
response.success("Email sent!");
},
error: function(httpResponse) {
console.error(httpResponse);
response.error("Uh oh, something went wrong");
}
});
});
But after calling the 'sendMail' function I am getting this error:
[Error]: Invalid function. (Code: 141, Version: 1.13.0).
================================== MailGun ==========================
Parse.Cloud.define('hello', function(req, res) {
    var api_key = 'key-xxxxxxxxxxxxxx';
    var domain = 'smtp.mailgun.org';
    var mailgun = require('mailgun-js')({apiKey: api_key, domain: domain});
    var data = {
        from: 'xxxxxxxald@gmail.com',
        to: 'xxxxx8@gmail.com',
        subject: 'Hello',
        text: 'Testing some Mailgun awesomness!'
    };
    mailgun.messages().send(data, function (error, body) {
        console.log(body);
    });
    //res.success(req.params.name);
});
I had a similar problem with SendGrid, but I finally found a way around it.
I think these steps may help you:
1. Are you missing brackets or some code separator? (Try rewriting the entire code in main.js.)
2. Is the app actually running? (When you type "heroku open" in the terminal, do you get the default message?) If not, check step 1.
3. If the previous steps don't work, roll back to a safe build, add the add-ons in the Heroku dashboard instead of installing them yourself, then pull the git repo, make your changes, and push.
Below I have pasted working main.js cloud code that uses Mandrill on a Heroku Parse application to send a password-recovery e-mail.
In cloud code main.js:
var mandrill_key = process.env.MANDRILL_KEY;
var Mandrill = require('mandrill-api/mandrill');
var mandrill_client = new Mandrill.Mandrill(mandrill_key);

// This object literal is the callbacks argument passed to a Parse
// operation (e.g. a save or query) elsewhere in main.js:
{
    success: function(gameScore) {
        //alert('New object created with objectId: ' + gameScore.id);
        mandrill_client.messages.send(
            {
                message: {
                    html: "<p>Hello " + firstUser.get('fullname') + ",</p><p>We received your request to reset your password.</p><p>Your user name is <strong>" + firstUser.get('username') + "</strong>. Please click here to create a new password. This link will expire one hour after this mail was sent.</p><p>If you need additional help, just let us know.</p><p>SampleCompany Support<br>customerservice@example.com</p><p>Copyright Sample Company, Inc. 2014-2017</p>",
                    subject: "Sample Company Name account recovery",
                    from_email: "customerservice@example.com",
                    from_name: "Sample Company Name",
                    to: [
                        {
                            email: firstUser.get('email'),
                            name: firstUser.get('fullname')
                        }
                    ]
                },
                async: true
            },
            //Success
            function(httpResponse) {
                console.log(httpResponse);
                //alert("Email sent!");
            },
            //Failure
            function(httpResponse) {
                console.error(httpResponse);
                //alert("Uh oh, something went wrong");
            });
    },
    error: function(gameScore, error) {
        console.error(error.message);
        //alert('Failed to create new object, with error code: ' + error.message);
    },
    useMasterKey: true
})
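For context, that trailing object literal is the callbacks argument to a Parse call whose opening line isn't in the paste. A minimal sketch of how the Mandrill send might sit inside a complete cloud function (the "resetPassword" name and the user query are my assumptions, not from the original):

// Hypothetical wrapper - assumes mandrill_client from the snippet above.
Parse.Cloud.define("resetPassword", function(request, response) {
    var query = new Parse.Query(Parse.User);
    query.equalTo("email", request.params.email);
    query.first({ useMasterKey: true }).then(function(firstUser) {
        mandrill_client.messages.send(
            {
                message: {
                    text: "We received your request to reset your password.",
                    subject: "Account recovery",
                    from_email: "customerservice@example.com",
                    from_name: "Sample Company Name",
                    to: [{ email: firstUser.get('email'), name: firstUser.get('fullname') }]
                },
                async: true
            },
            function(result) { response.success("Email sent!"); },
            function(err) { response.error("Mandrill error: " + err.message); });
    }, function(error) {
        response.error(error.message);
    });
});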

Is chunking.success.endpoint supposed to be called even when there is only a single chunk?

If I have chunking enabled and a success endpoint defined:
chunking: {
    enabled: true,
    concurrent: {
        enabled: true
    },
    success: {
        endpoint: "/FileUploadComplete"
    }
},
The endpoint "/FileUploadComplete" is not hit unless the file is larger than the chunk size. Documentation (http://docs.fineuploader.com/features/concurrent-chunking.html) states that "Fine Uploader will send a POST after all chunks have been successfully uploaded for each file."
That is correct. If the file is not chunked, none of the chunking-related logic/options/endpoints apply.
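If you need the chunking success endpoint to fire for every file, one possible workaround (assuming your Fine Uploader version supports the documented chunking.mandatory flag) is to force even small files to be uploaded as a single chunk:

chunking: {
    enabled: true,
    mandatory: true, // chunk every file, even ones that fit in a single part
    concurrent: {
        enabled: true
    },
    success: {
        endpoint: "/FileUploadComplete"
    }
},

Otherwise, handle the single-chunk case in your regular upload-complete logic instead.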

S3 Upload Issue

I am trying to upload a file to S3. I have verified everything a couple of times and it all looks correct to me, but whenever I try to upload a file, I get an error message saying:
responseText =
<Error>
    <Code>InvalidArgument</Code>
    <Message>POST requires exactly one file upload per request.</Message>
    <ArgumentValue>0</ArgumentValue>
    <ArgumentName>file</ArgumentName>
    <RequestId>3670E4EE52B3BCD5</RequestId>
    <HostId>b3rOF/9WJHymo1ZENIOlrct/ZusAJ50AnSIP0df3K3+DdEcAFolJDx8qU6DH2N1l</HostId>
</Error>
Can someone please help me find out what I am doing wrong here?
<div id="s3-fileuploader" class="dropArea"></div>
<script type="text/javascript">
j$ = jQuery.noConflict();
//block and unblock UIbased on endpoint url
function setUI(){
j$('div.dropArea').unblock();
}
$(document).ready(function () {
$('#s3-fileuploader').fineUploader({
request: {
endpoint: "https://{!bucketname}.s3.amazonaws.com",
accessKey: "{!key}"
},
signature: {
//always included
"expiration": "{!expireStr}",
signature : "{!signedPolicy}",
policy: "{!policy}",
"conditions":
[
//always included
{"acl": "public-read"},
//always included
{"bucket": "{!bucketname}"},
//not included in IE9 and older or Android 2.3.x and older
{"Content-Type": "{!ContentType}"},
//always included
{"key": "{!key}"},
//always included
{"x-amz-meta-qqfilename": "{!URLENCODE('test.jpg')}"},
]
},
cors: {
expected: true, //all requests are expected to be cross-domain requests
sendCredentials: false, //if you want cookies to be sent along with the request
allowXdr: true
},
autoUpload: true,
multiple:false,
debug: true,
text: {
uploadButton: '<i class="icon-plus icon-white">Select Files</i> '
},
uploadSuccess: {
endpoint: "{!redirectURL}"
}
}).on('submit',function(event,id,name){
//set endpoint
console.log('https://{!bucketname}.s3.amazonaws.com');
$(this).fineUploader('setEndpoint','https://{!bucketname}.s3.amazonaws.com');
});
setUI();
});
</script>
</body>
The signature option is supposed to contain information about your signature server. Instead, you are apparently attaching an (invalid) policy document to this option. Furthermore, you do not create the policy document yourself; Fine Uploader creates it for you and passes it to your signature server for signing.
Also, you should be using the fineUploaderS3 jQuery plugin for this.
It looks like you have not read the documentation that describes how Fine Uploader S3 works. I suggest starting with the guides on the home page of http://docs.fineuploader.com.
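For reference, with the fineUploaderS3 plugin the signature option only needs to point at the server that signs the policy document Fine Uploader generates, along the lines of the configuration in the first question above (the /s3/signature path is a placeholder for your own signing endpoint):

$('#s3-fileuploader').fineUploaderS3({
    request: {
        endpoint: "https://{!bucketname}.s3.amazonaws.com",
        accessKey: "{!key}"
    },
    signature: {
        // Your server-side endpoint; Fine Uploader builds the policy
        // document itself and POSTs it here for signing.
        endpoint: "/s3/signature"
    }
});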
