Amazon S3 has different storage classes, with different price brackets.
I was wondering if there's a way I can choose a storage class in Laravel's Filesystem / Cloud Storage solution?
It would be good to be able to choose a class on a per-upload basis throughout my application, not just once in a configuration file.
To pass additional options to Flysystem you have to use getDriver():
Storage::disk('s3')->getDriver()->put(
    'sample.txt',
    'This is a demo',
    [
        'StorageClass' => 'REDUCED_REDUNDANCY'
    ]
);
This can be used in Laravel 7:
Storage::disk('s3')->put(
    'file path',
    $request->file('file'),
    [
        'StorageClass' => 'STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|GLACIER|DEEP_ARCHIVE',
    ]
);
You can use the putFileAs() method as well, like below:
Storage::disk('s3')->putFileAs(
    'file path',
    $request->file('file'),
    'file name',
    [
        'StorageClass' => 'STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|GLACIER|DEEP_ARCHIVE',
    ]
);
I can't really find this answer on the internet. Hope it helps someone else.
If you want to set the StorageClass at the disk level (once for every upload), you can change it in config/filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'throw' => false,
    'options' => [
        'StorageClass' => 'INTELLIGENT_TIERING'
    ]
],
Other possible options...
'ACL',
'CacheControl',
'ContentDisposition',
'ContentEncoding',
'ContentLength',
'ContentType',
'Expires',
'GrantFullControl',
'GrantRead',
'GrantReadACP',
'GrantWriteACP',
'Metadata',
'RequestPayer',
'SSECustomerAlgorithm',
'SSECustomerKey',
'SSECustomerKeyMD5',
'SSEKMSKeyId',
'ServerSideEncryption',
'StorageClass',
'Tagging',
'WebsiteRedirectLocation',
Ref: thephpleague/flysystem-aws-s3-v3
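For example, several of these can be combined with the StorageClass in a single upload (a minimal sketch; the path, contents, and header values below are placeholders, not from the original answer):
Storage::disk('s3')->put(
    'reports/summary.pdf',    // placeholder path
    $pdfContents,             // placeholder contents
    [
        'StorageClass' => 'STANDARD_IA',
        'ContentType' => 'application/pdf',
        'CacheControl' => 'max-age=86400',
    ]
);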
I am trying to configure Laravel to upload to my AWS S3 bucket. It works fine until I change the visibility to public. Then it seems to work, or at least it does not show any error, but nothing gets uploaded to AWS.
Here is the part in my register controller where I am uploading a profile picture:
if ($request->hasFile('avatar')) {
    $file = $request->file('avatar');
    $filename = $file->getClientOriginalName();
    $file->storeAs('avatars/' . $user->id, $filename, 's3');

    $user->update([
        'avatar' => $filename,
    ]);
}
And here is the configuration for s3 in filesystems.php
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'throw' => false,
    'visibility' => 'public',
],
Without the 'visibility' => 'public' it works fine, but as soon as I add it, nothing gets uploaded anymore.
I had this before; you need to edit Object Ownership to enable ACLs. By default it is disabled, which means all objects in the bucket are owned by you and private, but ACLs let you choose whether you want an object to be public or private.
If you are creating a new S3 bucket, enable ACLs under Object Ownership.
Or, if you have already created a bucket, go to Permissions and edit Object Ownership to enable ACLs.
What worked for me was enabling public access to my bucket and adding a bucket policy that makes the files in question publicly readable. You can read the official docs here.
Also, make sure to remove the 'visibility' => 'public' line from the Laravel config, as this prevents the upload from taking place.
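For instance, once the bucket policy makes the objects readable, a sketch of the upload without any visibility option could look like this (the 'avatars' path is taken from the question above; relying on Storage::url() assumes the disk's 'url' option is configured):
// Upload with no visibility option and let the bucket policy handle
// public readability; build the URL from the stored path.
$path = $request->file('avatar')->storeAs('avatars/' . $user->id, $filename, 's3');
$url = Storage::disk('s3')->url($path);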
I am using the Spatie Media Library in Laravel with the code below to upload images to an S3 bucket:
$file = $this->fileUploadModel
    ->addMediaFromDisk($file->path(), 's3')
    ->toMediaCollection();
The image is saved to the S3 bucket in the format:
my_bucket_name/1/image_name.png
my_bucket_name/2/image_name.png
etc.
However, I want a way to store the images inside an images folder, i.e.:
my_bucket_name/images/1/image_name.png
By using only Laravel you can do that with a simple:
$file->store('images','s3');
How can I do that?
I implemented the following solution.
In the file config/filesystems.php I defined the following disk:
's3-media' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'root' => 'images/media',
],
The root key of the s3-media disk indicates the base path under which the images will be stored.
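To actually use this disk from the Media Library, something like the following should work, assuming the standard spatie/laravel-medialibrary API in which toMediaCollection() accepts the target disk name as its second argument (a sketch, not tested against your setup):
$file = $this->fileUploadModel
    ->addMediaFromDisk($file->path(), 's3')
    ->toMediaCollection('default', 's3-media'); // stores under images/media/... in the bucket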
I'm using Laravel's Storage with SFTP (league/flysystem-sftp) to upload some files to an external server. Everything goes fine, with one small issue: the files are uploaded with 0644 (-rw-r--r--) permissions. I've tried to use the 'public' option on the put method as in the example from the docs, like:
Storage::disk('remote-sftp')->put($filename, $contents, 'public');
but it fails, returning FALSE, and doesn't upload the file.
If I remove the 'public' parameter, everything goes well but with the wrong permissions for the file.
Is there any way to set the uploaded file permissions to something like 0666?
Finally the solution was a combination of Alpy's answer and configuration.
Calling setVisibility() went through without failure, but kept the permissions at 0644. Digging into the FTP/SFTP driver I found that the 'public' permission has a value that can be assigned in config using the 'permPublic' key, so writing the desired octal permission in config/filesystems.php made it work as expected.
    'disks' => [

        'local' => [
            'driver' => 'local',
            'root' => storage_path('app'),
        ],

        'public' => [
            'driver' => 'local',
            'root' => storage_path('app/public'),
            'url' => env('APP_URL').'/storage',
            'visibility' => 'public',
        ],

        'remote-sftp' => [
            'driver' => 'sftp',
            'host' => '222.222.222.222',
            'username' => 'myuser',
            'password' => 'mypassword',
            'visibility' => 'public',
            'permPublic' => 0766, // <- this one did the trick
            // 'port' => 22,
            'root' => '/home',
            // 'timeout' => 30,
        ],

    ],

];
File permissions are based on two factors: visibility and permissions. You can set these two options in the driver config like so:
'remote' => [
    'driver' => 'sftp',
    'host' => 'hostname',
    'root' => '/',
    'username' => 'user',
    'password' => env('SYSTEM_PASS'),
    'visibility' => 'public', // defaults to 'private'
    'permPublic' => 0775,
],
The permissions are set based on the visibility. So if you set 'permPublic' but don't set 'visibility', nothing will change, as the setVisibility() function uses 'visibility' to determine the permissions.
vendor/league/flysystem-sftp/src/SftpAdapter.php
public function setVisibility($path, $visibility)
{
    $visibility = ucfirst($visibility);

    // We're looking for either permPublic or permPrivate
    if (! isset($this->{'perm'.$visibility})) {
        throw new InvalidArgumentException('Unknown visibility: '.$visibility);
    }

    $connection = $this->getConnection();

    return $connection->chmod($this->{'perm'.$visibility}, $path);
}
The public default is 0755.
The private default is 0700.
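As a quick illustration (assuming the 'remote' disk configured above; the file path is a placeholder), triggering the permission change from Laravel looks like this:
// With 'visibility' => 'public' and 'permPublic' => 0775 on the disk,
// this chmods an already-uploaded file to 0775.
Storage::disk('remote')->setVisibility('path/to/file.txt', 'public');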
umask
If 'visibility' is not set, I believe the permissions are set based on the remote system user's umask. You are able to modify this on the remote system if you so choose (see: set umask for user).
Directories
One thing to note while working with permissions is that this will only affect created files. To set the permissions on created directories, use the 'directoryPerm' attribute in your config.
This defaults to 0744.
Here is a more global and efficient solution. I needed to control permissions on both files and directories when saving a file under recursively created directories.
The League SftpAdapter creates the directories recursively if they don't exist yet. But the main problem is that it won't apply permPublic => 0755 to directories, only to files, so the www-data user ends up with no access to the file if it's inside a newly created directory. The solution is to dive into the code to see what's happening:
'disks' => [
    'remote-sftp' => [
        'driver' => 'sftp',
        'host' => '222.222.222.222',
        'port' => 22,
        'username' => 'user',
        'password' => 'password',
        'visibility' => 'public', // set to public to use permPublic, or private to use permPrivate
        'permPublic' => 0755, // whatever you want the public permission to be; avoid 0777
        'root' => '/path/to/web/directory',
        'timeout' => 30,
        'directoryPerm' => 0755, // whatever you want
    ],
],
In League\Flysystem\Sftp\SftpAdapter, there are two important attributes to look at:
/**
 * @var array
 */
protected $configurable = ['host', 'hostFingerprint', 'port', 'username', 'password', 'useAgent', 'agent', 'timeout', 'root', 'privateKey', 'passphrase', 'permPrivate', 'permPublic', 'directoryPerm', 'NetSftpConnection'];

/**
 * @var int
 */
protected $directoryPerm = 0744;
$configurable lists all the possible keys for configuring the filesystem SFTP driver above. You can change directoryPerm from 0744 to 0755 in the config file:
'directoryPerm' => 0755,
HOWEVER, because there is something like a bug in SftpAdapter (https://github.com/thephpleague/flysystem-sftp/issues/81) where createDir() won't use the $config parameter, you can set the directory permission on the adapter directly:
$filesystem = Storage::disk('remote-sftp');
$filesystem->getDriver()->getAdapter()->setDirectoryPerm(0755);
$filesystem->put('dir1/dir2/'.$filename, $contents);
Or explicitly pass the 'public' visibility on the put call:
$filesystem->put('dir1/dir2/'.$filename, $contents, 'public');
I found this question while looking for a solution, and I think I've found what works in Laravel 9 after digging through the Flysystem code.
Adding the following settings to my disk config looks to have done the trick:
'visibility' => 'public',
'permissions' => [
    'file' => [
        'public' => 0664,
        'private' => 0664,
    ],
    'dir' => [
        'public' => 0775,
        'private' => 0775,
    ],
],
Please try this:
Storage::disk('remote-sftp')->put($filename, $contents);
Storage::disk('remote-sftp')->setVisibility($filename, 'public');
assuming the filename also includes the path.
I'm really just looking for an explanation about memcached and Laravel. I understand what it does, but can I use my memcached installation with Laravel? More specifically:
'memcached' => [
    'driver' => 'memcached',
    'persistent_id' => env('MEMCACHED_PERSISTENT_ID'),
    'sasl' => [
        env('MEMCACHED_USERNAME'),
        env('MEMCACHED_PASSWORD'),
    ],
    'options' => [
        // Memcached::OPT_CONNECT_TIMEOUT => 2000,
    ],
    'servers' => [
        [
            'host' => env('MEMCACHED_HOST', '127.0.0.1'),
            'port' => env('MEMCACHED_PORT', 11211),
            'weight' => 100,
        ],
    ],
],
I know how to (and will) set up the server aspect, and I get what the options do... but persistent_id, and a memcached username and password... what are they? What are their uses? Typically Laravel is extremely well documented, but on memcached it says very little (and the little it does say seems dated and not based on Laravel 5.0).
Here is the explanation from php.net:
By default the Memcached instances are destroyed at the end of the request. To create an instance that persists between requests, use persistent_id to specify a unique ID for the instance. All instances created with the same persistent_id will share the same connection.
http://php.net/manual/en/memcached.construct.php
So for your project just define a unique name for it.
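As a rough plain-PHP sketch of that behaviour outside Laravel (the 'my_app_pool' ID, host, and port are placeholder values):
// Both instances constructed with the same persistent_id share the same
// underlying connection, so servers only need to be added once.
$m1 = new Memcached('my_app_pool');
if (empty($m1->getServerList())) {
    $m1->addServer('127.0.0.1', 11211);
}
$m2 = new Memcached('my_app_pool'); // reuses the connection created above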
Hope it helps.
I am developing a Laravel 5 project and storing image files using Imagine. I want to store my image files in a folder outside the project's root folder, and I am stuck at the moment. I also want to make the external folder where the image files are stored accessible via a sub-domain, something like http://cdn.example.com. Looking forward to your solutions.
The Laravel documentation could give you a helping hand.
Otherwise, you could go to config/filesystems.php and add your own custom storage path for both local and production:
return [

    'default' => 'custom',

    'cloud' => 's3',

    'disks' => [

        'local' => [
            'driver' => 'local',
            'root' => storage_path().'/app',
        ],

        'custom' => [
            'driver' => 'local', // the local driver with a custom root path
            'root' => '../path/to/your/new/storage/folder',
        ],

        's3' => [
            'driver' => 's3',
            'key' => 'your-key',
            'secret' => 'your-secret',
            'region' => 'your-region',
            'bucket' => 'your-bucket',
        ],

        'rackspace' => [
            'driver' => 'rackspace',
            'username' => 'your-username',
            'key' => 'your-key',
            'container' => 'your-container',
            'endpoint' => 'https://identity.api.rackspacecloud.com/v2.0/',
            'region' => 'IAD',
        ],

    ],
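With a disk defined like that, writing to the external folder is just a matter of addressing it by name (a minimal sketch; the file path and contents are placeholders):
Storage::disk('custom')->put('avatars/photo.jpg', $imageContents);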
Get your path from the base_path() function, then build your desired folder location from that string.
Suppose your
base_path() = '/home/user/user-folder/your-laravel-project-folder/'
Then your desired path should look like this:
$path = '/home/user/user-folder/your-target-folder/'.$imageName;
Make sure you have read and write permissions.
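As a rough sketch of that idea (the 'image' input name is a placeholder; 'your-target-folder' follows the example path above):
// Move an uploaded file to a folder outside the project root.
$imageName = $request->file('image')->getClientOriginalName();
$targetDir = dirname(base_path()) . '/your-target-folder';
$request->file('image')->move($targetDir, $imageName);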
You can move all or part of the storage folder to any folder on your server. You must put a symbolic link from the old folder to the new one:
ln -s new_folder_path old_folder_path
You can then set up a new virtual host to serve the new folder path.