I'm using the aws-sdk, and I'm trying to delete an object with the #delete_object method, for example:
s3.delete_object(bucket: ENV["AWS_BUCKET"], key: "images/123/myimage.png")
How can I delete the whole path (that's "images/123") instead of only the .png file? I don't want empty "folders". I've tried passing only the first part of the path as the key (s3.delete_object(bucket: ENV["AWS_BUCKET"], key: "images/")), but it doesn't work. Thanks!
S3 doesn't have real folders; the console just displays common key prefixes as folders. So the folder will auto-delete when it's empty.
If you delete your file and refresh the root folder, you will see that it's gone! AWS magic =)
This snippet from the Laravel docs, https://laravel.com/docs/5.5/responses#file-responses, seems to be just what I need:
return response()->file($pathToFile);
The problem is that my file is stored on an S3 disk, and I can't seem to reference it properly.
I tried using Storage::disk('s3')->getDriver()->getAdapter()->applyPathPrefix($myPath); to get the fully qualified file name. It just returned the value of $myPath though.
I then tried getting the URL using Storage::disk('s3')->url($myPath);. The URL looks fine, but Symfony says the file does not exist. Yet when I check with Storage::disk('s3')->exists($myPath);, it returns true.
How do I go about displaying a file from cloud storage directly in the user's browser?
EDIT:
More details below:
To save the item in the first instance I am using $map->storeAs('/my/path/maps/filename.pdf', ['disk' => 's3']);
The output of url() is "https://s3.ap-southeast-2.amazonaws.com/my.domain.com/my/path/maps/filename.pdf"
When I cut-and-paste the URL into a browser address bar, it loads no problem
It seems to me that the response()->file() method does not accept a URL parameter. Is that the case?
Question - does the file need to be publicly available? (It currently is, but I would prefer to make it private).
The following should be enough:
$path = Storage::disk('s3')->url($filename);
I don't have a working example set up, but from memory $filename should be the same as what you originally passed to the ->put() method. So when you say $myPath in your question, hopefully it isn't already prefixed with something like your S3 instance URL.
If that isn't the case, can you edit the question to include the result of ->url(), an example of your put() call, and $path?
Looking at the edit, I think I understand what you are trying to do, which has been solved here:
https://laracasts.com/discuss/channels/laravel/file-response-from-s3
I am sorry if this has been answered before but all my searching is not coming up with a result.
I would like to place files directly into the target path, without it generating the UUID folder and placing the file in there. I know identical filenames could collide; that is why I change the filename in the onChange event before uploading.
I have tried to modify the handler.php but either I am not editing the correct lines or something else is going on.
After long and tiring hours of trying to figure this out, I have found a workaround.
If you send a blank UUID to the script, it will not create the folder and will just place the file in the folder you told the endpoint to put items in. Not sure if this is how the script is supposed to work, but it works for me.
I don't have to worry about duplicate file names, as I have the script also change each file name before it gets uploaded by prepending a unique string.
callbacks: {
    onSubmit: function(id, name) {
        this.setUuid(id, "");
        console.log("onSubmit called");
    }
}
How to rename an object in Google Storage bucket through the API?
See also An error attempting to rename a Google Bucket object (Google bug?)
Objects can't be renamed. The best you can do is copy to a new object and delete the original. If the new and old objects are in the same location (which will be true if they're in the same bucket, for example), it will be a metadata-only (no byte copying) operation, and hence fast. However, since it's two operations, it won't be atomic.
Not sure if you want to do this programmatically or manually, but the gsutil tool has a mv option which can be used for renaming objects.
gsutil mv gs://my_bucket/oldprefix gs://my_bucket/newprefix
As other posters noted, behind the scenes, this does a copy and delete.
First, use the "rewrite" method to produce a copy of the original object. Then, delete the original object.
Documentation on rewrite: https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite
I want to return the file names in a directory called public/uploads. I used Storage::allFiles and Storage::files, but they only return an empty array.
The Storage facade works only with the storage directory. If you want to use it for public files, you'll need to create a symbolic link.
Use File facade instead:
File::files(public_path('uploads'));
I'm using the aws-sdk gem to delete an object (or objects) from a bucket. The problem is that keys that don't exist still get counted as successfully deleted; shouldn't the SDK raise an error that the key doesn't exist?
The other problem is that an object whose key does exist isn't being removed, yet it's reported as successfully deleted.
EDIT:
The second problem only seems to happen when the object to be deleted is inside a folder; at the root it gets deleted fine.
The DELETE object operation for Amazon S3 intentionally returns a success response (204 No Content) even when the target object did not exist, because the operation is idempotent by design. For this reason, the aws-sdk gem returns a successful response in the same situation.
A quick clarification on the forward-slash. You can have any number of '/' characters at the beginning of your key, but an object with a preceding '/' is different from the object without. For example:
# public urls for two different objects
http://bucket-name.s3.amazonaws.com/key
http://bucket-name.s3.amazonaws.com//key
Just be consistent on whether you choose to use a slash or not.
Turns out you can't have a '/' at the beginning of the key, which I didn't realise. Not sure why it was there, but it was screwing up the key.
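The leading-slash pitfall above is easy to guard against: since "key" and "/key" name two different objects, strip any leading slashes before using a path as a key. A minimal pure-Ruby sketch (the helper name is my own):

```ruby
# "key" and "/key" are distinct S3 objects, so normalize paths
# by removing any leading slashes before using them as keys.
def normalize_key(key)
  key.sub(%r{\A/+}, "")
end

normalize_key("/images/123/myimage.png") # => "images/123/myimage.png"
normalize_key("images/123/myimage.png")  # => "images/123/myimage.png"
```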