20 images get uploaded instead of 30 - Laravel

I'm using Laravel with the league/flysystem-aws-s3-v3 package to store files in AWS S3.
I'm having an issue where:
I have an API call with a method in a controller that receives an array of files.
The method reads all the files and uploads them to S3.
For some reason, if I send more than 20 files, only 20 files get uploaded to AWS S3.
Since the AWS S3 package uses Guzzle under the hood, I was thinking it could be related to a timeout or a maximum number of calls allowed within a certain period.
Any ideas of what might be causing this?

Looks like a limitation in your php.ini file.
When you install PHP, this is the default configuration:
; Maximum number of files that can be uploaded via a single request
max_file_uploads = 20
Try changing this limit and then restarting your server (Apache, Nginx, etc.).
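For example, to allow up to 50 files per request (the exact value is up to you; 50 is only illustrative), the line would become:
; Maximum number of files that can be uploaded via a single request
max_file_uploads = 50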

Please verify the following two values in your php.ini file.
Try increasing upload_max_filesize:
; Maximum allowed size for uploaded files.
upload_max_filesize = 2M
Also check that max_file_uploads is greater than 20 (it needs to be at least as large as the number of files you send per request):
; Maximum number of files that can be uploaded via a single request
max_file_uploads = 20
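As a quick sanity check, you can compare how many files PHP actually handed to Laravel against the configured limit. This is only a sketch; the "images" field name and the logging call are assumptions, not part of the original code:
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
// Sketch: warn when PHP's max_file_uploads limit may have silently dropped files.
// The "images" request field name is an assumption; adjust it to your own form field.
function warnIfUploadTruncated(Request $request): void
{
    $received = count($request->file('images', []));
    $limit = (int) ini_get('max_file_uploads');
    if ($received >= $limit) {
        Log::warning("Received {$received} files with max_file_uploads = {$limit}; extra files may have been dropped by PHP.");
    }
}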

Related

Laravel Lumen directly Download and Extract ZIP file to Google Cloud Storage

My goal is to download a large zip file (15 GB) and extract it to Google Cloud using Laravel Storage (https://laravel.com/docs/8.x/filesystem) and https://github.com/spatie/laravel-google-cloud-storage.
My "wish" is to sort of stream the file to Cloud Storage, so I do not need to store the file locally on my server (because it is running in multiple instances, and I want to have the disk size as small as possible).
Currently, there does not seem to be a way to do this without having to save the zip file on the server. Which is not ideal in my situation.
Another idea is to use a Google Cloud Function (e.g. with Python) to download, extract and store the file. However, it seems like Google Cloud Functions are limited to a maximum timeout of 9 minutes (540 seconds). I don't think that will be enough time to download and extract 15 GB...
Any ideas on how to approach this?
You should be able to use streams for uploading big files. Here’s the example code to achieve it:
use Illuminate\Support\Facades\Storage;
$disk = Storage::disk('gcs');
// Passing a read stream lets Laravel write the file in chunks instead of loading it all into memory
$disk->put($destFile, fopen($sourceZipFile, 'r'));
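If the zip lives at a remote URL, a similar approach can pipe it straight through without saving it locally. This is a rough sketch, assuming Guzzle is available; $remoteUrl and $destFile are placeholders:
use GuzzleHttp\Client;
use Illuminate\Support\Facades\Storage;
// Sketch: stream a remote zip directly into the GCS disk without writing it to local disk first.
$response = (new Client())->get($remoteUrl, ['stream' => true]);
// detach() exposes the underlying PHP stream resource, which Storage::put()
// consumes chunk by chunk rather than loading the whole file into memory.
Storage::disk('gcs')->put($destFile, $response->getBody()->detach());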

Heroku - Redis memory > 'maxmemory' with a 20MB file in hobby:dev where it should be 25MB

So I am trying to upload a file with Celery, which uses Redis, on my Heroku website. I am trying to upload a .exe file with a size of 20 MB. Heroku says that on their hobby:dev tier the maximum memory is 25 MB. But when I try to upload the file through Celery (turning it from bytes to base64, decoding it and sending it to the function), I get the error kombu.exceptions.OperationalError: OOM command not allowed when used memory > 'maxmemory'. Keep in mind that when I try to upload e.g. a 5 MB file it works fine, but 20 MB doesn't. I am using Python with the Flask framework.
There are two ways to store files with a DB (Redis is just an in-memory DB). You can either store a blob in the DB itself (for small files, say a few KBs), or you can store the file somewhere else (e.g. on disk) and keep only a pointer to it in the DB.
So for your case, store the file on disk and place only the file path in the DB.
The catch here is that Heroku has an ephemeral file system that gets erased every 24 hours, or whenever you deploy a new version of the app.
So you'll have to do something like this:
Write a small function to store the file on the local disk (this is temporary storage) and return the path to the file.
Add a task to Celery with the file path, i.e. the parameter to the Celery task is the file path, not a serialized blob of 20 MB of data.
The Celery worker process picks up the task you just enqueued when it gets free and executes it.
If you need to access the file later, and since the local Heroku disk is only temporary, you'll have to place the file in some permanent storage like AWS S3.
(The reason we go through all these hoops instead of placing the file directly in S3 is that access to the local disk is fast, while the S3 servers might sit in some other server farm at some other location, and it takes time to save the file there. Your web process might appear slow or stuck if you try to write the file to S3 in your main process.)

Issue with GSM FTP file upload

We are trying to upload an image file to an FTP server using GSM. We are able to upload small .txt files (less than 1 KB).
But we are not able to upload larger files (an image file of about 10 KB) to the server; when we try to connect to the server we get the following response:
"
AT+FTPPUT=1
OK
+FTPPUT:1,300
"
From our understanding, this means we can upload a file of at most 1300 bytes.
How can we upload larger files (around 10 KB) to the server?
Do we need to split the files? (We tried that, but recombining may cause errors.)
I request your support in this regard. Thanks in advance.

Heroku rack:cache vs Amazon S3 + Amazon CloudFront

Using this reference about Heroku Cedar,
https://devcenter.heroku.com/articles/rack-cache-memcached-rails31#rackcache-storage
They recommend using a combination of rack:cache (as the entity store) and memcached (as the meta store). The actual files are stored in the entity store, I believe.
In the guide above they set it to "file:tmp/cache/rack/body".
Say I want to cache static HTML files and have them expire in 7 days. Am I better off using the above rack:cache + memcached combo, or would I be better off just storing all my HTML files in Amazon S3 + the CloudFront CDN and running a cron job that deletes all HTML files from my S3 bucket to ensure fresh pages every seven days?
The logic would be as follows:
If a user requests a particular HTML file from S3 and it does not exist, my app generates a new HTML file in S3, and it stays there until the next S3 mass-delete operation via cron.
Files are only generated if users request them. I have about 5 million static files that need to be at most a week old. I do not want my S3 bucket to fill up with 5 million HTML files unless my visitors are actually requesting all 5 million every week; I estimate they will request only about 10k unique files a week or so.
So my question is: which would be more efficient and faster? Storing all my HTML files in the rack:cache entity store with memcached acting as the meta store, or going with Amazon S3 + CloudFront?
I'm looking for 2 angles here:
Which is better for reducing total time to get to user?
Which is better for reducing the load on my webserver?
The solution might address both issues.

Upload 6 MB image in Magento

I want to upload a 6 MB image for products in my Magento store. Please help me: where do I have to change the maximum limit? These settings did not work in my php.ini file:
upload_max_filesize = 10M
post_max_size = 10M
Any suggestion would be appreciated.
Typically the server process (apache, httpd or php-cgi) needs to be restarted after making changes to php.ini. This might be why you are not seeing any difference.
Another way is to put your upload_max_filesize and post_max_size settings in a .htaccess file in the root of your Magento directory. Apache reads .htaccess on every request, so no restart is needed.
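For example (assuming Apache with mod_php; the values below are only illustrative and match the ones from the question):
# .htaccess in the Magento root
php_value upload_max_filesize 10M
php_value post_max_size 10M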
Are you getting an error when you upload the file, or does it just time out? It might be that the dimensions of the image (say 4,000 x 5,000) are too big for scaling/cropping.
Place a file in your web root with <?php phpinfo(); ?> in it and call it phpinfo.php. Now go to http://www.yoursite.com/phpinfo.php and see what the maximum upload size is.
If you are on shared hosting it may not be possible to increase your php settings beyond what your hosting provider allows. This could be the reason why your settings are not taking hold. Run phpinfo.php and take things from there.
