I am developing an API using CodeIgniter and I want to let users upload images to my Amazon S3 account. I am using Phil's REST Server and Donovan Schönknecht's S3 library (for CI).
Uploading a local file to Amazon works perfectly, but how can I get the image file that is sent via a normal external form?
Using the built-in CI upload library works fine, but then I have to store the files locally on my own server, and I want them on S3. Can the two be combined?
I guess what I am asking is: how can I "get" the image file that is sent to the controller, resize it, and then upload it to S3?
Do I perhaps need to temporarily save it on the local server, upload it to S3, and then remove it from the local server?
This is my "upload" model:
// Load the S3 library
$this->load->library('S3');

// Make the upload
if ($this->s3->putObjectFile($args['local'], "siticdev", $args['remote'], S3::ACL_PUBLIC_READ)) {
    // Handle success
    return TRUE;
} else {
    // Handle failure
    return FALSE;
}
Thankful for all input!
If I understand you correctly, you want a user to upload an image via a form, resize that image, and then transfer it to Amazon S3.
You'll have to store the file locally (at least for a few seconds) to resize it with CI. After you resize it, you can transfer it to Amazon S3, and once the transfer succeeds you can delete the image from your server.
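Roughly, that flow could look like the sketch below, combining the CI upload library, the image manipulation library, and the S3 library from your model. The form field name ("userfile"), the temp path, and the resize dimensions are assumptions; the bucket name is the one from your code.

public function upload()
{
    // 1. Receive the file from the form and store it temporarily
    $config['upload_path']   = '/tmp/';
    $config['allowed_types'] = 'gif|jpg|png';
    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload('userfile'))
    {
        return FALSE; // handle the upload error
    }
    $upload = $this->upload->data();
    $local  = $upload['full_path'];

    // 2. Resize the temporary file in place
    $resize['image_library']  = 'gd2';
    $resize['source_image']   = $local;
    $resize['maintain_ratio'] = TRUE;
    $resize['width']          = 800;
    $resize['height']         = 600;
    $this->load->library('image_lib', $resize);
    $this->image_lib->resize();

    // 3. Push the resized file to S3, then remove the local copy
    $this->load->library('S3');
    $ok = $this->s3->putObjectFile($local, 'siticdev', $upload['file_name'], S3::ACL_PUBLIC_READ);
    unlink($local);

    return $ok;
}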
You should definitely check out the CI S3 library. The "spark" is available here - http://getsparks.org/packages/amazon-s3/versions/HEAD/show
Related
I'm using Laravel Vapor to host a site. Up until now, I've not had a problem with the lack of a filesystem, but now I've hit a brick wall.
I'm trying to optimize .png and .jpeg files, and the libraries I found require a filesystem to write the compressed files:
Image Optimizer (https://github.com/spatie/image-optimizer)
PHP Image Cache (https://nielse63.github.io/php-image-cache/)
I'm guessing that I can set up an external service that runs on an additional traditional server... But I'd prefer to make it work with Vapor.
Any ideas?
Have you tried using GD Library or Imagick directly?
Using Imagick with files on s3 you can do something like this:
$s3 = \Storage::disk('s3');
$file = $s3->get('tmp/'.$uuid); // assuming Vapor uploaded to S3 here

$imagick = new \Imagick(); // the Imagick extension can be added to Vapor
$imagick->readImageBlob($file);
$imagick->thumbnailImage(200, 200); // whatever size you are looking for

$s3->put(
    'path/on/s3/for/your/optimized/file',
    $imagick->getImageBlob(),
    ['CacheControl' => 'max-age=10000000, public', 'ACL' => 'public-read'] // whatever options you need
);
Note that both the read and the write go directly to/from S3; there is no need to write to the local disk.
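Since the goal is shrinking .png/.jpeg files rather than producing thumbnails, the same in-memory approach can recompress instead. A minimal sketch, with the output format, quality value, and paths as assumptions:

$s3 = \Storage::disk('s3');
$file = $s3->get('tmp/'.$uuid);

$imagick = new \Imagick();
$imagick->readImageBlob($file);
$imagick->setImageFormat('jpeg');
$imagick->setImageCompressionQuality(82); // trade a little quality for size
$imagick->stripImage();                   // drop EXIF/metadata to save a few more KB

$s3->put('path/on/s3/optimized.jpg', $imagick->getImageBlob());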
Can anyone help me upload a file to an AWS S3 bucket using PHP Laravel? The file should be uploaded directly to S3 using a pre-signed URL.
I will try to answer this question. So, there are two ways to do this:
You send the pre-signed URL to Frontend Client and let them upload the file to S3 directly, and once uploaded they notify your server of the same.
You receive the file directly on the server and upload it to S3, in this case, you won't need any pre-signed URL, as you would have already configured the AWS access inside the project.
Since solution 1 is self-explanatory, I will try to explain solution 2.
Laravel provides the Storage facade for handling filesystem operations. It follows a multiple-driver philosophy: local disk, public, Amazon S3, FTP, plus the option of adding custom drivers.
Step 1: Configure your .env file with AWS keys. You will need the following values to start using Amazon S3 as the driver (the stock disk entry that reads them is sketched after the list):
AWS Key
AWS Secret
AWS Bucket Name
AWS Bucket Region
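For reference, this is roughly the default s3 disk entry in config/filesystems.php that maps those .env values; exact key names can vary slightly between Laravel versions:

's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url'    => env('AWS_URL'),
],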
Step 2: Assuming that you already have the file uploaded to your server, we will now push it to S3.
If you have set s3 as the default disk, the following snippet will do the upload for you:
Storage::put('avatars/1', $fileContents);
If you are using multiple disks, you can upload the file by:
Storage::disk('s3')->put('avatars/1', $fileContents);
We are done! Your file is now uploaded to S3; double-check it inside your bucket.
If you wish to learn more, see the Laravel Storage documentation.
use Storage;
use Config;

$client = Storage::disk('s3')->getDriver()->getAdapter()->getClient();
$bucket = Config::get('filesystems.disks.s3.bucket');

$command = $client->getCommand('PutObject', [
    'Bucket' => $bucket,
    'Key'    => '344772707_360.mp4', // object key in the S3 bucket which you want to access
]);

$request = $client->createPresignedRequest($command, '+20 minutes');

// Get the actual pre-signed URL
return (string) $request->getUri();
We can use 'PutObject' to generate a signed URL for uploading files to S3.
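For completeness, the client (or another server) then uploads with a plain HTTP PUT to that URL. A minimal sketch using Guzzle; the local file path is a placeholder and $presignedUrl is the string returned above:

use GuzzleHttp\Client;

$http = new Client();
$response = $http->put($presignedUrl, [
    'body' => fopen('/path/to/local/344772707_360.mp4', 'r'),
]);

// A 200 status means S3 accepted the upload
echo $response->getStatusCode();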
Make sure this package is installed:
composer require league/flysystem-aws-s3-v3 "^1.0"
Create access credentials on AWS and set these variables in your .env file:
AWS_ACCESS_KEY_ID=ORJATNRFO7SDSMJESWMW
AWS_SECRET_ACCESS_KEY=xnzuPuatfZu09103/BXorsO4H/xxxxxxxxxx
AWS_DEFAULT_REGION=ap-south-1
AWS_BUCKET=xxxxxxx
AWS_URL=http://xxxxx.s3.ap-south-1.amazonaws.com/
public function uploadToS3(Request $request)
{
    $file = $request->file('file');

    \Storage::disk('s3')->put(
        'path/in/s3/filename.jpg',
        file_get_contents($file->getRealPath())
    );
}
You can create the access credentials in the AWS IAM console.
I have a Laravel-based web and mobile application that stores images on AWS S3, and I want to add cache support because even a small number of app users produces hundreds and sometimes thousands of GET requests to AWS S3.
To fetch an image from the mobile app I use a GET request that is handled by code like this:
public function showImage(....) {
    ...
    return Storage::disk('s3')->response("images/".$image->filename);
}
The response headers I receive show Cache-Control: no-cache, so I assume the mobile app won't cache this image.
How can I add cache support for this request? Should I do it?
I know that the Laravel documentation suggests caching for file storage. Should I implement it for S3? Can it help decrease the number of GET requests reading files from AWS S3? Where can I find more information about it?
I would suggest using a temporary URL, as described here: https://laravel.com/docs/7.x/filesystem#file-urls
Then use the Cache to store it until it expires:
$value = Cache::remember('my-cache-key', 3600 * $hours, function () use ($hours, $image) {
    // Return the URL so it is actually stored in the cache
    return Storage::disk('s3')->temporaryUrl(
        "images/".$image->filename, now()->addMinutes(60 * $hours + 1)
    );
});
Whenever you update the object in S3, do this to delete the cached URL:
Cache::forget('my-cache-key');
... and you will get a new URL for the new object.
You could use a CDN service like CloudFlare and set a cache header to let CloudFlare keep the cache for a certain amount of time.
$s3->putObject(file_get_contents($path), $bucket, $url, S3::ACL_PUBLIC_READ, array(), array('Cache-Control' => 'max-age=31536000, public'));
This way, files would be fetched once by CloudFlare, stored at their servers, and served to users without requesting images from S3 for every single request.
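If you are using Laravel's Storage facade rather than the standalone S3 class, roughly the same header can be passed through the put options; a sketch, with the path and contents as placeholders:

Storage::disk('s3')->put(
    'images/example.jpg',
    $fileContents,
    [
        'ACL'          => 'public-read',
        'CacheControl' => 'max-age=31536000, public',
    ]
);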
See also:
How can I reduce my data transfer cost? Amazon S3 --> Cloudflare --> Visitor
How to set the Expires and Cache-Control headers for all objects in an AWS S3 bucket with a PHP script
I am developing a Flask app running on Heroku that allows users to upload images. The app has a page displaying the user's images in a table.
For development purposes, I am saving the uploaded files to Heroku's ephemeral filesystem, and everything works fine: the images are correctly loaded and displayed (I am using the last method shown here, which relies on send_from_directory()). Now I have moved the storage to S3 and I am trying to adapt the code. I use boto3 to upload the files to the bucket, and it works fine. My doubts concern the download needed to populate the users' pages with their images.
As explained here, I could set the file as "public-read" and use the URL (I think this is what Flask-S3 does), but I'd prefer not to leave the files freely accessible. So my attempted solution is to download the file to Heroku's filesystem and serve the image again using send_from_directory(), as follows:
app.py
@app.route('/download/<resource>')
def download_image(resource):
    """ resource: name of the file to download """
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    s3.download_file(current_app.config['S3_BUCKET_NAME'],
                     resource,
                     os.path.join('tmp', resource))
    return send_from_directory('tmp',  # Heroku's filesystem
                               resource,
                               as_attachment=False)
Then, in the template I generate the URL for the image as follows:
...
<img src="{{ url_for('app.download_image',
resource=resource) }}" height="120" width="120">
...
It works, but I don't think this is the proper way, for several reasons: among them, I would have to manage Heroku's filesystem to avoid using up all the space between dyno restarts (i.e. delete the images from the filesystem after serving them).
What is the best/preferred way, also considering performance?
Thanks a lot
The preferred way is to simply create a pre-signed URL for the image and return a redirect to that URL. This keeps the files private in S3 but generates a temporary, time-limited URL that can be used to download the file directly from S3. That greatly reduces the amount of work happening on your server, as well as the amount of data transfer consumed by it. Something like this:
@app.route('/download/<resource>')
def download_image(resource):
    """ resource: name of the file to download """
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': current_app.config['S3_BUCKET_NAME'],
                                            'Key': resource},
                                    ExpiresIn=100)
    return redirect(url, code=302)
If you don't like that solution, you should at least look into streaming the file contents from S3 instead of writing it to the file system.
I have one host for my Laravel website and another (non-Laravel) host for stored files. Direct access to my files is blocked completely by default, and I want to control access to them by creating temporary links from my Laravel site. I know how to code; I just want to know the idea of how to do it (not the details).
From the Laravel docs:

Temporary URLs
For files stored using the s3 or rackspace driver, you may create a temporary URL to a given file using the temporaryUrl method. This method accepts a path and a DateTime instance specifying when the URL should expire:
$url = Storage::temporaryUrl(
'file.jpg', now()->addMinutes(5)
);
You could also build your own solution by directing all image requests through your own server and making sure the file visibility is set to private.
Here is an example of how a controller could return an image from your storage:
public function get($path)
{
    $file = Storage::disk('s3')->get($path);

    // Do your temp link solution here

    return response($file, 200)->header('Content-Type', 'image/png');
}
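One way to fill in the "temp link solution" placeholder is Laravel's built-in signed URLs (available since 5.6). A minimal sketch; the route name, URI, and controller names are assumptions:

use Illuminate\Support\Facades\URL;

// Generate a link that is only valid for five minutes
$url = URL::temporarySignedRoute('image.get', now()->addMinutes(5), ['path' => $path]);

// routes/web.php - the 'signed' middleware rejects missing, altered or expired signatures
Route::get('/image/{path}', 'ImageController@get')
    ->name('image.get')
    ->middleware('signed')
    ->where('path', '.*'); // allow slashes in the stored path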
What I am using right now is the Flysystem integration provided in Laravel. Laravel's Flysystem integration offers simple drivers for working with local filesystems, Amazon S3, and some other storage providers, so it does not matter whether the file server is a Laravel server or not.
Even better, it is very simple to switch between servers by just changing the configuration.
As far as I know, we can also create temporary URLs for s3 and rackspace with it by calling the temporaryUrl method, and caching is built in.
Here is how to do it if your files are uploaded to an AWS S3 bucket:
use Storage;
use Carbon\Carbon;

$file_path = "4/1563454594.mp4";

if (Storage::disk('s3')->exists($file_path)) {
    // link expiration time
    $urlExpires = Carbon::now()->addMinutes(1);

    try {
        $tempUrl = Storage::disk('s3')->temporaryUrl($file_path, $urlExpires);
    } catch (\Exception $e) {
        // temporaryUrl() throws if the disk's driver does not support it
        return response($e->getMessage());
    }
}
Your temporary URL will be generated, and after the given expiration time (1 minute) it will expire.