Laravel: use S3/CloudFront resources in Blade

I'm a bit confused about the best approach for accessing an image that is uploaded to S3 and delivered via CloudFront, and showing it in a view (using the CloudFront URL). I use Laravel 5.5 and I've already added the CDN URL to my S3 configuration:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => 'https://xxxxxx.cloudfront.net/',
],
The following approaches work:
Copy and paste the CloudFront link into the template (not the Laravel way, I guess).
Using <img src="{{ Storage::url('assets/img/image.png') }}" />. This one works, but is it the right approach? The problem here is that if I change FILESYSTEM_DRIVER back to local, I can't reference the resources in my DOCROOT/public/img folder like I did earlier with {{ asset('img/icons/time.png') }}, so I'm losing flexibility. Maybe I need to copy the assets to DOCROOT/storage/app/public/, which is used by the local driver?
I'm integrating CloudFront into a Laravel app for the first time, so could someone who has done this before tell me what the right approach is? Thank you very much.

This is a good approach. But when using the local filesystem driver, you would use the public/storage/assets/img directory, not the public/img directory, to make it equivalent.
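Concretely, the same Blade call then resolves differently per driver, which is the flexibility the question asks about (paths illustrative):
<img src="{{ Storage::url('assets/img/image.png') }}" />
{{-- local "public" disk: resolves to /storage/assets/img/image.png --}}
{{-- s3 disk with the CloudFront 'url' set: resolves to https://xxxxxx.cloudfront.net/assets/img/image.png --}}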
https://laravel.com/docs/5.6/filesystem#the-public-disk
The Public Disk
The public disk is intended for files that are going to be publicly accessible. By default, the public disk uses the local driver and stores these files in storage/app/public. To make them accessible from the web, you should create a symbolic link from public/storage to storage/app/public. This convention will keep your publicly accessible files in one directory that can be easily shared across deployments when using zero down-time deployment systems like Envoyer.
To create the symbolic link, you may use the storage:link Artisan command:
php artisan storage:link
File URLs
You may use the url method to get the URL for the given file. If you are using the local driver, this will typically just prepend /storage to the given path and return a relative URL to the file. If you are using the s3 or rackspace driver, the fully qualified remote URL will be returned:
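The snippet the docs show alongside that passage (file name is the docs' own example):
use Illuminate\Support\Facades\Storage;

$url = Storage::url('file1.jpg');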

Related

How to configure filesystems to send images from one Laravel application to another?

My application consists of two different Laravel applications.
One (APP-A) is the front end for users.
The other (APP-B) is the back office, used only by content managers.
Both consume the same database.
The problem to be solved has to do with the storage of images and other files.
During development, I want to store the images in APP-B storage.
For this, I need to send the images from APP-A to APP-B and perform the other applicable CRUD operations.
How should I configure filesystems.php for this purpose? Do I have to configure it in both APP-A's and APP-B's filesystems.php? And in the .env file?
EDITED 17/03
APP-B (backoffice)
Folders:
storage/uploadFiles/images
filesystems.php
'uploadFiles' => [
    'driver' => 'local',
    'root' => storage_path('app/uploadFiles'),
],
'links' => [
    public_path('uploadFiles') => storage_path('app/uploadFiles'),
],
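With that disk defined, APP-B can also write through the Storage facade; a minimal sketch (file name and $contents hypothetical):
// Write a file onto the custom 'uploadFiles' disk
Storage::disk('uploadFiles')->put('images/photo.jpg', $contents);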
In APP-B controllers, to read images from APP-B storage:
$url = asset('images/');
On APP-A (Front End for users)
.env file
ASSET_URL=http://my_app.dv/uploadFiles/
Note: php artisan config:clear is required.
To read images stored in APP-B from an APP-A controller, just:
$url = asset('images');
It works.
Problem to solve: store an image in APP-B storage from an APP-A controller:
$file = $request->file('file');
$path = $file->store('images');
This will store the file in APP-A instead of in APP-B as desired.
How to solve this (for development purposes only)?
You can set up AWS S3 for both applications and use the same key in both to access the same bucket. Result: both applications use the same "storage directory".
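A minimal sketch of that setup, assuming the same bucket and credentials in both .env files (all env values are placeholders):
// config/filesystems.php, identical in APP-A and APP-B
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'), // same bucket name in both applications
],

// An APP-A controller can then store straight into the shared bucket:
$path = $request->file('file')->store('images', 's3');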

Should I use CloudFront together with TemporaryUrl for sensitive files in S3?

I have a project where I was storing files on the server itself. Storage use is increasing, so I need to use a bucket; I thought S3 was the way to go.
The issue is that the PDF files are sensitive and I don't want to expose them to the public. I read about a service called CloudFront, but then about the new Laravel TemporaryUrl feature as well.
So, as far as I understand, I shouldn't just use S3; I should use TemporaryUrl too. Do I need to use CloudFront as well? So S3 -> CloudFront -> TemporaryUrl? Or was TemporaryUrl's purpose to eliminate CloudFront in between?
So is the TemporaryUrl approach alone enough?
// For saving the file:
Storage::put('file.jpg', $contents, 'private');

// For retrieving:
if ($user->has_permission) {
    $url = Storage::disk('s3')->temporaryUrl(
        'file1.jpg', Carbon::now()->addMinutes(5)
    );
}
I am pretty confused and couldn't really find any walkthroughs on this topic. So how should I store and serve sensitive data with Laravel 5.6? Any clarification would be appreciated.
You can use CloudFront and Laravel's TemporaryUrl together. For that, you just need to tell Laravel's S3 driver to use the CloudFront URL as the endpoint in config/filesystems.php, like this:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
],
Now define your CloudFront URL in your .env file, like this:
AWS_ENDPOINT="https://mycloud.cloudfront.net"
Now when you use Laravel's TemporaryUrl, it will give you a CloudFront URL.
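For example (file name and lifetime hypothetical), the host of the signed URL should then be the CloudFront domain rather than the raw S3 one:
$url = Storage::disk('s3')->temporaryUrl(
    'file.pdf', Carbon::now()->addMinutes(5)
);
// e.g. https://mycloud.cloudfront.net/file.pdf?X-Amz-Signature=...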
EDIT: (After comment)
Do I need to use CloudFront for sensitive data?
CloudFront is a content delivery network (CDN), so it has nothing to do with security; it uses an S3 bucket as its origin and serves files from there based on how it is configured.
Is S3 enough for security?
S3 has a sufficient file permission system to protect your files; just configure it properly. You can store your files privately on S3 and then use Laravel's TemporaryUrl. What it does internally is just create an AWS signed URL with an expiry time. So yes, you can use it. If some day you need to speed up your file delivery, create a CloudFront distribution and use it as the endpoint.

crudbooster: How to change the storage path in Laravel to public?

I use the crudbooster admin generator in Laravel 5.4. If I upload a file from a form, it is stored in its own storage directory, and I then have to create a symlink to that storage path in the public path.
I can't create symlinks on my system because of a limitation. How can I upload files directly to public/uploads instead of storage/app/uploads?
I honestly have no clue. The best I can do is point you in the right direction. Crudbooster depends on a package called UniShard/laravel-filemanager. I was looking through the config file for this package, and it is similar to the lfm.php in your config folder except that there are more options. Perhaps you could use one of these additional options, such as base_directory. I'm not sure how this will work out, and it could just be poorly coded.
Original Answer:
You need to create a new disk.
In config/filesystems.php, add this:
'disks' => [
    'uploads' => [
        'driver' => 'local',
        'root' => public_path() . '/uploads',
    ],
],
Now, when you want to use the Storage facade, you can just do this:
Storage::disk('uploads');
An example usage:
Storage::disk('uploads')->put('filename', $file_content);
Hope this helps.
We can change it in vendor/crocodicstudio/crudbooster/src/controllers/cbcontroller.php, line 1010.

Sub-domain routing in Laravel on shared hosting

I'm using Laravel framework version 5.1 for my web application, which will run on shared hosting. Since I have very limited access to this host (e.g. I don't have SSH access, can't create virtual hosts on the machine, etc.), I had to use some tricks to run it smoothly. Here is what I've done so far:
Moved the files inside the public/ directory to my root directory.
Changed the file paths of the autoload registrar in public/index.php to the new directory.
Everything works as intended except the sub-domain routing. I defined api.myapp.com as a wildcard in routes.php, but when I try to visit this subdomain, Chrome gives me a DNS_PROBE_FINISHED_NXDOMAIN error. Here is the code from routes.php:
Route::group([
    'domain' => 'api.myapp.com',
    'prefix' => 'v1',
    'middleware' => 'cors'
], function () {
    // some RESTful resource controllers
});
How can I get this subdomain working? What am I missing?
Thanks in advance!

Session not persisting on shared hosting - Laravel 4.2.17

I have a problem with the sessions on the shared hosting.
I developed the app on a local server (XAMPP) and it works great (sessions, auth, etc.). The problems appeared when I moved the app to shared hosting.
I realized that sessions do not persist from one page to another, or from AJAX calls to another page, and authentication does not work either.
The only session value that persists is the _token, which has a different value after every page refresh.
I have the following configuration in the session.php file:
'driver' => 'database',
'lifetime' => 120,
'expire_on_close' => false,
'lottery' => array(2, 100),
'path' => '/',
'domain' => null
First I used the file driver and had the same problem, so now I use the database driver.
Both file and database drivers work on the local server, but on the shared hosting they do not.
I have tried all the solutions found on the forum, but I still have the same problem.
I think the problem is with the session domain setting, because when I change the value from null to another string on my local server, I get the same problem that I encounter online.
Can you help me, please?
Thanks, Mirel
I fixed the problem. In my case, the error was because I had added a PHP closing tag ?> at the end of the included files. Removing this tag brought the application back to normal behavior.
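For illustration, an included file should end like this, with no closing tag (file name and contents hypothetical):
<?php

// helpers.php: an included file
function formatPrice($value)
{
    return number_format($value, 2);
}

// No trailing "?>" here: any whitespace after a closing tag is sent as
// output before the headers, which breaks the session cookie.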
