Fine Uploader delete file from S3 not working but no error - fine-uploader

I have been using S3 uploads for a while, and today I thought I would also try the delete file option, so I inserted
deleteFile: {
    enabled: true,
    endpoint: '/s3handler'
}
When testing after an upload, the "delete" button indeed appears; I click it, it shows its deleting message, and then the button goes away.
The thing is that the file(s) are not deleted from the S3 bucket.
I have checked that the user has been granted delete permission, and I also noted that CORS on the bucket allows DELETE.
I would have thought that if it did not work there would be some error, but no error is shown.

Fine Uploader sends the delete request to your server, and your server is then expected to make the call to S3 to actually delete the file. Your server must be acknowledging the delete request with a 200 response code, so as far as Fine Uploader is concerned the file has been deleted.
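As a rough illustration only (not Fine Uploader's official server code), a delete handler along these lines would actually remove the object from the bucket before acknowledging. It assumes a Node/Express server behind the /s3handler endpoint above, the AWS SDK for JavaScript, and that the delete request carries the object key and bucket name as parameters; inspect the request your server really receives and adjust the names.
var express = require('express');
var AWS = require('aws-sdk');

var app = express();
var s3 = new AWS.S3();

// DELETE /s3handler/<uuid> -- hypothetical route shape; match it to what
// Fine Uploader actually sends in your setup.
app.delete('/s3handler/:uuid', function (req, res) {
    var params = {
        Bucket: req.query.bucket, // assumed parameter names; verify against the real request
        Key: req.query.key
    };
    s3.deleteObject(params, function (err) {
        if (err) {
            console.error('S3 delete failed:', err);
            return res.status(500).end(); // let Fine Uploader report the failure
        }
        res.status(200).end(); // only acknowledge once S3 has confirmed the delete
    });
});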

Related

How to retrieve files from S3 in Laravel Vapor

I'm having a problem loading images in my HTML dynamically after storing them successfully with Laravel Vapor.
I have followed the documentation provided by Laravel Vapor to store files, and it works like a charm. I copy my uploaded files from the tmp directory into the root of my S3 bucket and then store the path of that file in my database's images table, so that later I can return the file path to my front end and display the image in my browser.
Unfortunately, requesting the stored image always returns a 403 status code from AWS S3.
I could fix this by making my generated S3 bucket public, but that would raise a security issue. I believe this should work out of the box; I'm not sure where I could have gone wrong... any ideas?
I am returning the uploaded image url using the Storage facade.
use Illuminate\Support\Facades\Storage;
return Storage::url($image->path);
Where $image->path is the file path in my S3 bucket.
I'm sure that the storage facade is working correctly because it is returning the correct url with the file's path.
I got the solution to this problem. I contacted Laravel Vapor support and was told to set the visibility property of my file to public when I copy it to the permanent location, as stated in Laravel's official documentation here.
So after you upload your file using the JS vapor.store method, you should copy it to a permanent directory, then set its visibility to public.
Storage::copy($request->path, str_replace('tmp/', '', $request->path));
Storage::setVisibility(str_replace('tmp/', '', $request->path), 'public');
I also noticed that you can set the visibility of the file directly in the vapor.store method by passing a visibility attribute with the respective value.
vapor.store(file, { visibility: 'public-read' });
As a side note: just 'public' will return a 400 Bad Request; it must be set to 'public-read'.
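For context, here is a sketch of what the full client-side flow described above might look like; axios, the /api/images endpoint, and the shape of the resolved response are assumptions for illustration, not part of Vapor itself.
// Upload to the temporary directory with vapor.store, asking for a
// public-read object, then tell the backend which tmp/ key to copy
// to its permanent location (the PHP snippet above does the copy).
vapor.store(file, { visibility: 'public-read' }).then(function (response) {
    return axios.post('/api/images', {
        path: response.key // assumed to be the "tmp/..." path reported by vapor.store
    });
});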

Cache internal routes with sw-precache

I'm creating a SPA using vanilla JavaScript and currently setting up sw-precache to handle the caching of resources. The service worker is generated as part of a gulp build and installed successfully. When I navigate to the root url (http://127.0.0.1:8080/) whilst offline the app shell displays, illustrating that resources are indeed cached.
I'm now attempting to get the SW to handle internal routing without failing. When navigating to http://127.0.0.1:8080/dashboard_index whilst offline I get the message 'Site can't be reached'.
The app handles this routing on the client side via a series of event listeners on the user's actions or, in the case of the back button, the URL. When accessing one of these URLs, no calls to the server should be made. As such, the service worker should allow these links to 'fall through' to the client side code.
I've tried a few things and expected this Q/A to solve the problem. I've included the current state of the generate-service-worker gulp task, and with this setup I'd expect to be able to access /dashboard_index offline. Once this is working I can adapt the solution to cover other routes.
Any help much appreciated.
gulp.task('generate-service-worker', function(callback) {
    var rootDir = './public';
    swPrecache.write(path.join(rootDir, 'sw.js'), {
        staticFileGlobs: [rootDir + '/*/*.{js,html,png,jpg,gif,svg}',
                          rootDir + '/*.{js,html,png,jpg,gif,json}'],
        stripPrefix: rootDir,
        navigateFallback: '/',
        navigateFallbackWhitelist: [/\/dashboard_index/],
        runtimeCaching: [{
            urlPattern: /^http:\/\/127\.0\.0\.1:8080\/getAllData/, // Req returns all data the app needs
            handler: 'networkFirst'
        }],
        verbose: true
    }, callback);
});
Update
The code for the application can be found here.
Removing the navigateFallbackWhitelist option does not change the result.
Navigating to /dashboard_index whilst offline prints the following to the console.
GET http://127.0.0.1:8080/dashboard_index net::ERR_CONNECTION_REFUSED
sw.js:1 An unknown error occurred when fetching the script.
http://127.0.0.1:8080/sw.js Failed to load resource: net::ERR_CONNECTION_REFUSED
The same "An unknown error occurred when fetching the script." message is also duplicated in the 'Application > Service Workers' tab of the Chrome debug tools.
It's also worth noting that the runtimeCaching option is not caching the JSON response returned from that route.
For the record, in case anyone else runs into this, I believe this answer from the comments should address the issue:
Can you switch from navigateFallback: '/' to navigateFallback: '/index.html'? You don't have an entry for '/' in your list of precached resources, but you do have an entry for '/index.html'. There's some logic in place to automatically treat '/' and '/index.html' as being equivalent, but that doesn't apply to what navigateFallback is doing...
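Applied to the gulp task from the question, the change amounts to a single option; everything else stays as it was (sketch only):
swPrecache.write(path.join(rootDir, 'sw.js'), {
    staticFileGlobs: [ /* same globs as above */ ],
    stripPrefix: rootDir,
    navigateFallback: '/index.html', // '/index.html' is in the precache list; '/' is not
    navigateFallbackWhitelist: [/\/dashboard_index/],
    runtimeCaching: [ /* unchanged */ ],
    verbose: true
}, callback);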

Localhost returns 404.3 when fetching json through ajax (Windows 8.1)

So I have been getting the infamous 404.3 error when trying to use AJAX to access a .json file, launching the site (or more of a test app, hehe) through WebMatrix on localhost.
Yes, I am aware of the IIS configuration. I am on Windows 8.1 (x64), so I even had to turn on the MIME types functionality separately. I configured a MIME type for .json with application/javascript. Then I added a handler for *.json and pointed it to C:\WINDOWS\system32\inetsrv\asp.dll. I set the verbs to GET and POST (those are what I use in my ajax function). I also tried unchecking "Invoke the handler only if request is mapped to..." to no avail.
I am using one function to send data to a PHP file, which writes it to the JSON file, and another to fetch data from the JSON file directly. Writing through PHP works. Fetching doesn't. I am completely at a loss; does anyone have any ideas? The code I am using to fetch the data is your bog-standard ajax:
function getDate(path, callback) {
    var httpRequest = new XMLHttpRequest();
    httpRequest.onreadystatechange = function() {
        if (httpRequest.readyState === 4) {
            if (httpRequest.status === 200) {
                var data = JSON.parse(httpRequest.responseText);
                if (callback) callback(data);
            }
        }
    };
    httpRequest.open('GET', path);
    httpRequest.send();
}
When I host this on my server space, it works totally fine. But I want to get it to work locally for testing purposes as well.
If writing to the file works but fetching doesn't, then you should check the link to the file.
A 404, as the name suggests, is a file-not-found error: the AJAX request itself is working fine, but the URL it asks for can't be found. So the only thing you can do is make sure that, while fetching the data, you use the correct link.
Here is something that can help: when you send the request through AJAX, there is a Network tab in your browser's console. Open it and look for the request; it will be shown in red, denoting an error. Click it and you'll see that the link you're providing isn't valid.
Look for the errors in the file link and update it.
When you request a JSON file, or any file for that matter, you have to specify in your request what data type you need; IIS will not make any assumptions. So
xhr.setRequestHeader('Content-Type', 'application/json');
is something one must not forget. I also set the X-Requested-With header. Note that to reproduce this issue I used the IIS that is installed on Windows 10 Pro, so not exactly the same system (3 years later - holy crap!).
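Applied to the getDate function from the question, that amounts to setting the headers between open() and send(); the header names below simply follow the answer, so adjust them if your IIS handler expects something else.
function getDate(path, callback) {
    var httpRequest = new XMLHttpRequest();
    httpRequest.onreadystatechange = function() {
        if (httpRequest.readyState === 4 && httpRequest.status === 200) {
            var data = JSON.parse(httpRequest.responseText);
            if (callback) callback(data);
        }
    };
    httpRequest.open('GET', path);
    // Headers suggested in the answer above; assumed, not verified against IIS
    httpRequest.setRequestHeader('Content-Type', 'application/json');
    httpRequest.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    httpRequest.send();
}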

Fine Uploader Response Error OnComplete In IE 10

The manual uploader is working fine with all browsers except IE 10. I am not getting the correct response from the server; onComplete shows "No valid message received from loaded iframe for iframe name 1_97604 cec......".
The files are uploading to the CloudBees server, but I am not getting the correct response from the server.
With other browsers I get response.success = true, but with IE 10 it is undefined. How can I handle this error? Please help me out with this.
Regards
Yogesh
You aren't using IE10 if that is the message you are seeing. Most likely, you are using IE9 or older. The message you are seeing is logged by the form uploader, which is never used if you are uploading via IE10. Perhaps you are running IE10 in IE9 or IE8 mode. Either way, the message indicates that you are working in a cross-origin environment (you have set the cors.expected option to true) but are not returning the proper response from your server.
Note that older browsers, such as IE9 and older, upload files via a form submit targeting an iframe. In order to access the contents of that cross-origin iframe, the iframe needs to post a message containing the server response to Fine Uploader's window. This is all very easy to do: all you need to do is return a text/html response from your server that looks something like this:
"{\"success\": true, \"uuid\": \"9da17ad5-ad6a-40cd-81b5-226e837db45b\"}<script src=\"http://<YOUR_SERVER_DOMAIN>/iframe.xss.response-<VERSION>.js</script>.js\"></script>"
The JavaScript file mentioned in the script tag is provided in the Fine Uploader release zip file. It does all of the work for you. You must return a JSON response before the script tag, as illustrated above, and the response must include the UUID of the associated file.
You should read about cross-origin support in the associated blog post.
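As a sketch only (not Fine Uploader's official server code), the response the answer describes could be assembled like this on a Node/Express-style server; the domain and version placeholders are left exactly as above, and the uuid must be the one associated with the uploaded file.
// Build the text/html body: the JSON comes first, then the script tag that
// posts it back to Fine Uploader's window from inside the iframe.
function buildIframeResponse(uuid) {
    var json = JSON.stringify({ success: true, uuid: uuid });
    return json +
        '<script src="http://<YOUR_SERVER_DOMAIN>/iframe.xss.response-<VERSION>.js"></script>';
}

// In an Express-style upload handler (assumed), after the file has been stored:
// res.set('Content-Type', 'text/html').send(buildIframeResponse(uploadedFileUuid));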

Why are my S3 images not valid for the Facebook JavaScript SDK?

I'm running into an error with the Facebook SDK which appears to be related to the permissions on my S3 bucket. I'm using Ruby on Rails with the Paperclip gem and Amazon S3 for storage.
Right now I have the dialog setup like so:
FB.ui({
    method: 'feed',
    name: "Check out this project on WorkHands",
    picture: "https://workhands_images.s3.amazonaws.com/images/avatars/1100/original/2013-08-05_04_13_28__0000.jpeg?1376351034",
    link: link.attr('href'),
    caption: 'Work by',
    description: "hello",
    display: 'popup',
    redirect_ui: window.location.origin
});
The reason I think it has something to do with S3 is that I can pass in an image URL from another source not on S3 (even from Google Images) and the dialog works perfectly fine.
My understanding is that Paperclip sets the ACL of each object to public_read by default. https://github.com/thoughtbot/paperclip/blob/master/lib/paperclip/storage/s3.rb
I have tried setting a bucket policy similar to the example here: http://ariejan.net/2010/12/24/public-readable-amazon-s3-bucket-policy/
But that didn't seem to fix anything.
For the image above, when I call s3object.acl.grants.inspect, I get XML like this:
[<Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID>0e77d1de2a82b95d7b735e0071296ef5f903fa17ba0b98ecfe5ab2d36a8f17d0</ID><DisplayName>cush4437</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant>,
<Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"Group\"><URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>READ</Permission></Grant>]
I think it's the numbers after the '?' in your url. Facebook is (probably?) being strict about formatting URL queries in the "k=v" format, and since there is no '=' it is unhappy.
Drop the 's' from 'https'. Facebook won't always reliably fetch them.
It turns out that Facebook throws this error because the source URL has two subdomains. See https://stackoverflow.com/a/7320178/1296645
mybucket.s3.amazonaws.com - throws an error
s3.amazonaws.com/mybucket - works fine
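As a small illustration of the accepted fix, the picture URL from the question can be rebuilt in the path style Facebook accepts; the bucket and key below are just the ones from the example above.
// Path-style URL: s3.amazonaws.com/<bucket>/<key> instead of <bucket>.s3.amazonaws.com/<key>
var bucket = 'workhands_images';
var key = 'images/avatars/1100/original/2013-08-05_04_13_28__0000.jpeg';
var picture = 'https://s3.amazonaws.com/' + bucket + '/' + key;

FB.ui({
    method: 'feed',
    picture: picture
    // ...the rest of the options as in the question
});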
