I'm having a problem dynamically loading images in my HTML after storing them successfully with Laravel Vapor.
I have followed the documentation provided by Laravel Vapor for storing files, and it works like a charm. I copy my uploaded files from the tmp directory into the root of my S3 bucket and then store the path of that file in my database's images table, so that later I can return the file path to my front end and display the image in the browser.
Unfortunately, this always returns a 403 status code from AWS S3.
I could fix this by making my generated S3 bucket public, but that would raise a security issue. I believe this should work out of the box; I'm not sure where I could have gone wrong... any ideas?
I am returning the uploaded image URL using the Storage facade:
use Illuminate\Support\Facades\Storage;
return Storage::url($image->path);
Here, $image->path is the file path in my S3 bucket.
I'm sure the Storage facade is working correctly because it returns the correct URL with the file's path.
I got the solution to this problem. I contacted Laravel Vapor support and was told to set the visibility property of the file to public when copying it to its permanent location, as stated in Laravel's official documentation here.
So after you upload your file using the JS vapor.store method, you should copy it to a permanent directory and then set its visibility to public:
Storage::copy($request->path, str_replace('tmp/', '', $request->path));
Storage::setVisibility(str_replace('tmp/', '', $request->path), 'public');
I also noticed that you can set the visibility of the file directly in the vapor.store method by passing a visibility attribute with the respective value:
vapor.store(file, { visibility: 'public-read' });
As a side note: just 'public' will return a 400 Bad Request; it must be set to 'public-read'.
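Putting it together, here is a minimal sketch of a controller method for the whole flow; the App\Models\Image model and the request's path field are illustrative assumptions, not part of the original post:

use App\Models\Image; // hypothetical model
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

public function store(Request $request)
{
    // Move the file out of Vapor's tmp directory to its permanent key.
    $permanentPath = str_replace('tmp/', '', $request->path);
    Storage::copy($request->path, $permanentPath);

    // Without this step, S3 answers the generated URL with a 403.
    Storage::setVisibility($permanentPath, 'public');

    // Persist the permanent path so the front end can display the image later.
    $image = Image::create(['path' => $permanentPath]);

    return Storage::url($image->path);
}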
I am trying to import an Excel sheet in Laravel, hosted on AWS Lambda, and I am getting this error:
touch(): Unable to create file /var/task/storage/framework/laravel-excel/laravel-excel-ToQHNqV18ybdHCmqQFJKidLr5dSsWSUe.xlsx because Read-only file system
My code to import is:
Excel::toArray(new ClientCompanyImport, $request->file('sales_accounts_sheet'));
Then I tried passing the disk name as the third parameter:
Excel::toArray(new ClientCompanyImport, 'mysheet.xlsx', 's3');
and uploaded the 'mysheet.xlsx' file to the S3 bucket at the path 'storage/frameworks/laravel-excel/mysheet.xlsx'.
I am still getting the same error. If I understand correctly, after this change the system reads the file from the S3 location, but it still tries to keep a temporary copy in the default location, which is read-only on Lambda.
Laravel Version: 8
Laravel-Excel: 3.1
After reading a lot of articles and forums, I found the solution.
The main reason for the above error is AWS Lambda's read-only filesystem. By default, Laravel Excel tries to write its temporary files under the application root, which is read-only on Lambda, so it fails.
The only writable path on Lambda is /tmp.
The solution is to specify the temporary path in config/excel.php:
return [
    'temporary_files' => [
        'local_path' => sys_get_temp_dir(),
    ],
    // ...
];
That is correct: by default, the root path where your Laravel application runs is read-only; Lambda provides a temporary directory (/tmp) where write operations are possible.
In your case the solution is adequate, but keep in mind that you can run into the same problem with other services, such as sessions and cache, when they use the "file" driver.
You can add this little block of code to your AppServiceProvider.php to change the "storage_path", which otherwise points to the read-only folder where your application lives:
public function register()
{
    // Valid only in the production environment.
    if ($this->app->environment('production')) {
        $this->app->instance('path.storage', '/tmp/laravel/storage');
    }
}
I have a Laravel app on shared hosting. For the setup, I had to create a new folder in the hosting's main directory and copy the contents of my public folder to public_html. I made changes in index.php and everything works fine. However, when a user uploads a file that needs to be public, the file is saved under the myproject/storage/app/public path but is not reflected in public_html/storage, so I can't access it.
Reading the documentation, I know it is a problem with the symbolic link.
How can I fix it?
Note: I can't access the command line because it is shared hosting without shell access. It is Windows hosting.
Make this route in routes/web.php, then hit it in your browser:
use Illuminate\Support\Facades\Artisan;

Route::get('/artisan/storage', function () {
    $command = 'storage:link';
    $result = Artisan::call($command);
    return Artisan::output();
});
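If Artisan::call('storage:link') fails on your host (shared Windows hosting sometimes restricts symlink creation or command execution), a fallback sketch is to create the link yourself with PHP's symlink() function; the public_html location below is an assumption based on the layout described in the question:

Route::get('/fallback/storage', function () {
    // storage:link normally creates public_html/storage -> myproject/storage/app/public.
    $target = storage_path('app/public');
    $link = base_path('../public_html/storage'); // assumed path to public_html

    if (! file_exists($link)) {
        symlink($target, $link);
    }

    return 'Storage link created';
});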
I am developing a Flask app running on Heroku that allows users to upload images. The app has a page displaying the user's images in a table.
For development purposes, I am saving the uploaded files to Heroku's ephemeral filesystem, and everything works fine: the images are correctly loaded and displayed (I am using the last method shown here, which relies on send_from_directory()). Now I have moved the storage to S3 and I am trying to adapt the code. I use boto3 to upload the files to the bucket, and it works fine. My doubts are about the download needed to populate the users' pages with their images.
As explained here, I could set the file to "public-read" and use the URL (I think this is what Flask-S3 does), but I'd rather not leave the files freely accessible. So my attempted solution is to download the file to Heroku's filesystem and serve the image, again using send_from_directory(), as follows:
app.py
import os

import boto3
from flask import current_app, send_from_directory

@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    # Download from S3 to the dyno's local (ephemeral) filesystem.
    s3.download_file(current_app.config['S3_BUCKET_NAME'],
                     resource,
                     os.path.join('tmp', resource))
    # Serve the local copy back to the browser.
    return send_from_directory('tmp', resource, as_attachment=False)
Then, in the template I generate the URL for the image as follows:
...
<img src="{{ url_for('app.download_image',
resource=resource) }}" height="120" width="120">
...
It works, but I don't think this is the proper way, for several reasons: among them, I would have to manage Heroku's filesystem to avoid using up all the space between dyno restarts (i.e., delete the images from the filesystem).
What is the best/preferred way, also considering performance?
Thanks a lot
The preferred way is to simply create a pre-signed URL for the image and return a redirect to that URL. This keeps the files private in S3 but generates a temporary, time-limited URL that can be used to download the file directly from S3. That will greatly reduce the amount of work happening on your server, as well as the amount of data transfer being consumed by your server. Something like this:
import boto3
from flask import current_app, redirect

@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    # Generate a time-limited URL pointing straight at the S3 object.
    url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': current_app.config['S3_BUCKET_NAME'],
                                            'Key': resource},
                                    ExpiresIn=100)
    return redirect(url, code=302)
If you don't like that solution, you should at least look into streaming the file contents from S3 instead of writing it to the file system.
I have a host for my Laravel website and another (non-Laravel) host for stored files. Direct access to my files is blocked completely by default, and I want to control access to them by creating temporary links on my Laravel site. I know how to code; I just want to know the idea of how to do it (not the details).
From the Laravel docs:
Temporary URLs
For files stored using the s3 or rackspace driver, you may create a temporary URL to a given file using the temporaryUrl method. This method accepts a path and a DateTime instance specifying when the URL should expire:
$url = Storage::temporaryUrl(
    'file.jpg', now()->addMinutes(5)
);
You could also roll your own solution by routing all image requests through your own server and making sure the files' visibility is set to private.
Here is an example of how a controller could return an image from your storage:
public function get($path)
{
    $file = Storage::disk('s3')->get($path);

    // Do your temp link solution here

    return response($file, 200)->header('Content-Type', 'image/png');
}
What I am using right now is Flysystem, as provided in Laravel. Laravel's Flysystem integration offers simple drivers for working with local filesystems, Amazon S3, and some other storage providers, so it doesn't matter whether the file server is a Laravel server or not.
Even better, it's very simple to switch between servers by just changing the disk configuration.
As far as I know, you can also create temporary URLs for s3 and rackspace this way by calling the temporaryUrl method. Caching is built in as well.
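As a rough illustration (a sketch, not from the original answer), switching backends comes down to the disks defined in config/filesystems.php; exact key names can vary slightly between Laravel versions:

return [
    'default' => env('FILESYSTEM_DISK', 'local'),

    'disks' => [
        'local' => [
            'driver' => 'local',
            'root' => storage_path('app'),
        ],

        's3' => [
            'driver' => 's3',
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'region' => env('AWS_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET'),
        ],
    ],
];

Code that reads and writes through the Storage facade stays the same no matter which disk is active.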
Here's the thing: if your files are uploaded to an AWS S3 server, then:
use Carbon\Carbon;
use Illuminate\Support\Facades\Storage;

$file_path = "4/1563454594.mp4";

if (Storage::disk('s3')->exists($file_path)) {
    // Link expiration time.
    $urlExpires = Carbon::now()->addMinutes(1);

    try {
        $tempUrl = Storage::disk('s3')->temporaryUrl($file_path, $urlExpires);
    } catch (\Exception $e) {
        // temporaryUrl may throw if the disk's driver doesn't support it.
        return response($e->getMessage());
    }
}
Your temporary URL will be generated; after the given expiration time (1 minute) it will expire.
I am trying to save the image link in the DB like this:
'photo' => asset('uploads/'.$fileName2)
and in the DB it is being saved as:
http://localhost:8000/uploads/8730.jpeg
irrespective of the URL in the config file:
APP_URL=http://example.com
The asset() helper function generates
"a URL for an asset using the current scheme of the request"
https://laravel.com/docs/5.3/helpers#method-asset
So when you access your Laravel app via http://localhost:8000/, asset('uploads/'.$fileName2) will return http://localhost:8000/uploads/....
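If you want the stored link to honor APP_URL regardless of which host the request came in on, one option (a sketch, not part of the original answer; $data is just an illustrative array) is to build the URL from the config value yourself, or better, store only the relative path and build the absolute URL at display time:

// Option 1: build the link from APP_URL instead of the request host.
$data['photo'] = rtrim(config('app.url'), '/').'/uploads/'.$fileName2;

// Option 2 (more flexible): store only the relative path in the DB...
$data['photo'] = 'uploads/'.$fileName2;

// ...and build the absolute URL at display time:
$url = rtrim(config('app.url'), '/').'/'.$data['photo'];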