I have a host for my Laravel website and another (non-Laravel) host for stored files. Direct access to my files is blocked completely by default, and I want to control access to them by creating temporary links from my Laravel site. I know how to code; I just want the general idea of how to do it (not the details).
From the Laravel docs
Temporary URLs
For files stored using the s3 or rackspace driver, you may create a temporary URL to a given file using the temporaryUrl method. This method accepts a path and a DateTime instance specifying when the URL should expire:
$url = Storage::temporaryUrl(
'file.jpg', now()->addMinutes(5)
);
You could also build your own solution by directing all image requests through your own server and making sure the file visibility is set to private.
Here is an example of how a controller could return an image from your storage:
public function get($path)
{
$file = Storage::disk('s3')->get($path);
// Do your temp-link validation here (see the sketch below)
return response($file, 200)->header('Content-Type', 'image/png');
}
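One way to fill in that temp-link part is Laravel's signed routes, which bake an expiring signature into the URL. A minimal sketch, assuming a hypothetical files.get route and FileController:
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Facades\URL;

// routes/web.php: the signed middleware rejects expired or tampered links with a 403.
// FileController is a hypothetical name for the controller shown above.
Route::get('/files/{path}', [FileController::class, 'get'])
    ->where('path', '.*') // allow slashes in the stored path
    ->name('files.get')
    ->middleware('signed');

// Generate a link that stops working after 5 minutes.
$url = URL::temporarySignedRoute(
    'files.get', now()->addMinutes(5), ['path' => 'file.jpg']
);
If you'd rather validate inside the controller instead of via middleware, $request->hasValidSignature() performs the same check.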
What I am using right now is the Flysystem integration provided in Laravel. Laravel's Flysystem integration offers simple drivers for working with local filesystems, Amazon S3, and some other providers, so it doesn't matter whether the file server is a Laravel server or not.
Even better, it's very simple to switch between servers by just changing the disk configuration, as in the sketch below.
As far as I know, you can also create temporary URLs for s3 and rackspace with this by calling the temporaryUrl method. Caching is already built in.
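For reference, switching servers amounts to pointing the disk at different credentials in config/filesystems.php. A sketch using the standard env keys:
// config/filesystems.php
'disks' => [
    's3' => [
        'driver' => 's3',
        'key'    => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
    ],
],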
That's the thing: if your files are uploaded to an AWS S3 server, then:
use Storage;
use Carbon\Carbon;

$file_path = "4/1563454594.mp4";

if (Storage::disk('s3')->exists($file_path)) {
    // Link expiration time
    $urlExpires = Carbon::now()->addMinutes(1);

    try {
        $tempUrl = Storage::disk('s3')->temporaryUrl($file_path, $urlExpires);
    } catch (\Exception $e) {
        // temporaryUrl throws if the disk's driver doesn't support it.
        return response($e->getMessage());
    }
}
Your temporary URL will be generated; after the given expiration time (1 minute), it will expire.
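A typical way to hand that URL to the client, assuming the snippet above runs inside a controller action, is a redirect:
// Send the client straight to the signed S3 URL.
return redirect()->away($tempUrl);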
Related
I'm looking for a hacky way to create temporary URLs with MinIO.
I see on the Laravel docs it says: Generating temporary storage URLs via the temporaryUrl method is not supported when using MinIO.
However from some digging I noticed that I can upload images successfully using:
AWS_ENDPOINT=http://minio:9000
However, I can't view them, because the temporary URL points at http://minio:9000/xxx.
If I change the AWS endpoint to
AWS_ENDPOINT=http://localhost:9000
The temporary URL is then on http://localhost:9000/xxx, the signature is validated, and the file can be viewed.
The issue lies in this call that builds the command. The $command needs to have its host changed, but I don't know if I can do that by just passing in an option.
$command = $this->client->getCommand('GetObject', array_merge([
    'Bucket' => $this->config['bucket'],
    'Key' => $this->prefixer->prefixPath($path),
], $options));
There is also the option to just change the base URL by providing a temporary_url in the filesystem config. However, because the URL has changed, the signature is invalid.
Is there a way I can update the S3Client to use a different host either by passing an option to the getCommand function or by passing a new S3Client to the AWS adapter to use the correct host?
A very hacky solution I've found is to re-create the AwsS3Adapter:
use Illuminate\Filesystem\FilesystemManager;

if (is_development()) { // app-specific helper
    $manager = app()->make(FilesystemManager::class);

    // Rebuild the S3 driver with the endpoint the browser can actually reach.
    $adapter = $manager->createS3Driver([
        ...config("filesystems.disks.s3_private"),
        "endpoint" => "http://localhost:9000",
    ]);

    return $adapter->temporaryUrl(
        $this->getPathRelativeToRoot(),
        now()->addMinutes(30)
    );
}
I am trying to import an Excel sheet in Laravel, hosted on AWS Lambda, and I am getting the error
touch(): Unable to create file /var/task/storage/framework/laravel-excel/laravel-excel-ToQHNqV18ybdHCmqQFJKidLr5dSsWSUe.xlsx because Read-only file system
My code to import is:
Excel::toArray(new ClientCompanyImport, $request->file('sales_accounts_sheet'));
Then I tried passing the disk name as the third parameter:
Excel::toArray(new ClientCompanyImport, 'mysheet.xlsx', 's3');
and uploaded the 'mysheet.xlsx' file to the S3 bucket path 'storage/frameworks/laravel-excel/mysheet.xlsx'.
I am still getting the same error. If I understand correctly, after this change the system reads the file from the S3 location but still tries to keep a temporary copy in the default location, which is read-only on Lambda.
Laravel Version: 8
Laravel-Excel: 3.1
After reading a lot of articles and forums, I found the solution.
The main reason for the above error is the read-only filesystem of AWS Lambda. By default, Laravel-Excel tries to write its temporary file under the application root, which is read-only on Lambda, so it fails.
The only writable path Lambda provides is /tmp.
The solution is to specify the path in config/excel.php:
return [
    'temporary_files' => [
        'local_path' => sys_get_temp_dir(),
    ],
    // ...
];
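As an aside, the same temporary_files block in Laravel-Excel 3.1 also accepts a remote disk, which I believe is meant for multi-server setups; a sketch, assuming the remote_disk key from the package's config:
'temporary_files' => [
    'local_path'  => sys_get_temp_dir(),
    'remote_disk' => 's3', // optional: keep temporary files on a shared disk
],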
That is correct: by default the root path where your Laravel application runs is read-only, and Lambda provides a temporary directory where it is possible to perform write operations.
In your case the solution is adequate, but keep in mind that you can have the same problem with other services such as session and cache using the "file" driver.
You can write this little block of code inside your AppServiceProvider.php to change the "storage_path" that points to the root folder where your application lives.
public function register()
{
    // Valid only in the production environment.
    if ($this->app->environment('production')) {
        $this->app->instance('path.storage', '/tmp/laravel/storage');
    }
}
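One caveat: /tmp starts empty on a cold start, so the directory may not exist yet. A variation of the snippet above that creates it first (a sketch):
public function register()
{
    if ($this->app->environment('production')) {
        $storagePath = '/tmp/laravel/storage';

        // /tmp is wiped between cold starts, so recreate the directory if needed.
        if (! is_dir($storagePath)) {
            mkdir($storagePath, 0755, true);
        }

        $this->app->instance('path.storage', $storagePath);
    }
}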
I have a Laravel-based web and mobile application that stores images on AWS S3, and I want to add cache support because even a small number of app users produces hundreds and sometimes thousands of GET requests on AWS S3.
To get an image from the mobile app I use a GET request that is handled by code like this:
public function showImage(....) {
...
return Storage::disk('s3')->response("images/".$image->filename);
}
The response headers I receive show Cache-Control: no-cache, so I assume the mobile app won't cache this image.
How can I add cache support for this request? Should I do it?
I know that the Laravel documentation suggests caching for file storage. Should I implement it for S3? Can it help decrease the number of GET requests reading files from AWS S3? Where can I find more info about it?
I would suggest using a temporary URL as described here: https://laravel.com/docs/7.x/filesystem#file-urls
Then use the cache to store it until it expires:
$value = Cache::remember('my-cache-key', 3600 * $hours, function () use ($hours, $image) {
    // Return the URL so Cache::remember actually stores it; make it outlive
    // the cache entry slightly so a cached URL is never already expired.
    return Storage::disk('s3')->temporaryUrl(
        "images/".$image->filename, now()->addMinutes(60 * $hours + 1)
    );
});
Whenever you update the object in S3, do this to delete the cached URL:
Cache::forget('my-cache-key');
... and you will get a new URL for the new object.
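Putting the two together, an update flow might look like this (a sketch; the path and cache key mirror the snippets above):
// Replace the object, then drop the cached URL so the next read regenerates it.
Storage::disk('s3')->put("images/".$image->filename, $newContents);
Cache::forget('my-cache-key');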
You could use a CDN service like Cloudflare and set a Cache-Control header to let Cloudflare keep the cached copy for a certain amount of time.
// Example with a standalone S3 client (not the Laravel Storage facade):
$s3->putObject(file_get_contents($path), $bucket, $url, S3::ACL_PUBLIC_READ, array(), array('Cache-Control' => 'max-age=31536000, public'));
This way, files would be fetched once by Cloudflare, stored on their servers, and served to users without hitting S3 for every single request.
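If you upload through Laravel's Storage facade instead, the S3 adapter should accept the same header as a write option; a sketch, assuming the CacheControl option supported by the Flysystem AWS adapter:
// $path and $localPath are placeholders for your object key and source file.
Storage::disk('s3')->put($path, file_get_contents($localPath), [
    'visibility'   => 'public',
    'CacheControl' => 'max-age=31536000, public',
]);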
See also:
How can I reduce my data transfer cost? Amazon S3 --> Cloudflare --> Visitor
How to set the Expires and Cache-Control headers for all objects in an AWS S3 bucket with a PHP script
I'm having a problem loading images dynamically in my HTML after storing them successfully with Laravel Vapor.
I have followed the documentation provided by Laravel Vapor to store files, and it works like a charm. I copy my uploaded files from the tmp directory into the root of my S3 bucket and then store the path of that file in my database's images table, so that later I can return the file path to my front end and display the image in my browser.
Unfortunately this is always returning a 403 status code from AWS S3.
I could fix this by making my generated S3 bucket public, but that would raise a security issue. I believe this should work out of the box; I'm not sure where I could have gone wrong... any ideas?
I am returning the uploaded image URL using the Storage facade.
use Illuminate\Support\Facades\Storage;
return Storage::url($image->path);
Where $image->path is the file path in my S3 bucket.
I'm sure that the Storage facade is working correctly, because it is returning the correct URL with the file's path.
I got the solution to this problem. I contacted Laravel Vapor support and was told to set the visibility property of my file to public when copying it to the permanent location, as stated in Laravel's official documentation here.
So after you upload your file using the JS vapor.store method, you should copy it to a permanent directory, then set its visibility to public.
Storage::copy($request->path, str_replace('tmp/', '', $request->path));
Storage::setVisibility(str_replace('tmp/', '', $request->path), 'public');
I also noticed that you can set the visibility of the file directly in the vapor.store method by passing a visibility attribute with the respective value.
vapor.store(file, { visibility: 'public-read' });
As a side note: just 'public' will return a 400 Bad Request; it must be set to 'public-read'.
I'm writing an app at the moment which makes use of WebSockets and therefore needs to keep track of its users somehow.
I don't really want my users to register. Before using the app they should choose a name and will get a JWT token for it. I don't want to save anything in a database. As these names can be non-unique, I will probably add an ID.
I'm trying to use tymon/jwt-auth ^1.0.0-rc.3.
public function login()
{
    $token = auth()->tokenById(1234);

    return $this->respondWithToken($token);
}
For some reason the tokenById function does not seem to be available.
Postman says: BadMethodCallException: Method Illuminate\Auth\SessionGuard::tokenById does not exist.
In my case I had to clear the cache; then it worked fine.
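For reference, clearing the cached config means running php artisan config:clear (and possibly php artisan cache:clear). If that alone doesn't help, the error message suggests auth() is still resolving to SessionGuard, so check that your default guard uses the jwt driver; a sketch of the guard setup from the tymon/jwt-auth docs:
// config/auth.php
'defaults' => [
    'guard' => 'api',
    'passwords' => 'users',
],

'guards' => [
    'api' => [
        'driver' => 'jwt', // the JWT guard is what provides tokenById()
        'provider' => 'users',
    ],
],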