Magento 2 catalog:images:resize. Stop and go?

Running Magento 2.3.6.
I launched the php bin/magento catalog:images:resize command.
Since I've got more than 600,000 entries, it will take a while.
Problem is: no matter how much memory I give it (-dmemory_limit=10G), it stops after a few hours due to a memory failure.
Is there a way to stop and resume?
I also wonder why memory accumulates like this.

For your memory issue, you can use this module:
https://dolphinwebsolution.com/shop/catalog-image-resizer-for-magento.html
I hope this helps you.
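If a paid module is not an option, a minimal workaround sketch (not a true resume, just a way to let the run survive longer and survive a dropped SSH session) is to lift the CLI memory limit entirely and run the command in the background; whether memory still grows to the point of failure depends on the underlying leak:

nohup php -dmemory_limit=-1 bin/magento catalog:images:resize > resize.log 2>&1 &
tail -f resize.log   # watch progress; as far as I know there is no built-in resume flag in 2.3.6, so a re-run starts over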

Related

Redis intermittent "crash" on Laravel Horizon. Redis stops working every few weeks/months

I have an issue with Redis that affects the running of my Laravel Horizon queue, and I am unsure how to debug it at this stage, so am looking for some advice.
Issue
Approx. every 3 - 6 weeks my queues stop running. Every time this happens, the first set of exceptions I see are:
Redis Exception: socket error on read socket
Redis Exception: read error on connection to 127.0.0.1:6379
Both of these are caused by Horizon running the command:
artisan horizon:work redis
Theory
We push around 50k - 100k jobs through the queue each day and I am guessing that Redis is running out of resources over the 3-6 week period. Maybe general memory, maybe something else?
I am unsure if this is due to a leak within my system or something else.
Current Fix
At the moment, I simply run the command redis-cli FLUSHALL to completely clear the database and we are back working again for another 3 - 6 weeks. This is obviously not a great fix!
Other Details
Currently Redis runs within the webserver (not a dedicated Redis server). I am open to changing that but it is not fixing the root cause of the issue.
Help!
At this stage, I am really unsure where to start in terms of debugging and identifying the issue. I feel that is probably a good first step!
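As a starting point, a few read-only diagnostics worth running before the next failure, assuming the default local instance at 127.0.0.1:6379 is the one Horizon uses:

redis-cli INFO memory        # used_memory, maxmemory and the eviction policy in effect
redis-cli DBSIZE             # total number of keys; watch whether it only ever grows
redis-cli --bigkeys          # sampling scan that reports the largest keys by type

If used_memory and DBSIZE climb steadily over the weeks, that points to keys accumulating (e.g. Horizon job metadata or stale payloads) rather than a one-off spike.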

Memurai hangs while processing

I am using Memurai 2.0.2 for cache in my distributed application. It runs different services on different machines, and all services have the Memurai connection details.
The problem is that sometimes the Memurai process just hangs. The Memurai process keeps running but no queries are served, and I am not able to create a connection to it. Its log file contains this error:
Error trying to rename the existing AOF to old tempfile: Broken pipe
This generally occurs when I restart the Memurai service, although I am not sure what the reason is. Memurai works fine if I restart its service once.
What can be the issue here? What steps can I take to avoid/minimize its occurrence?
Memurai 2.0.2 is fairly outdated now. Perhaps get the latest version (3.1.4 at the time of this response) at https://www.memurai.com/get-memurai
For anyone looking for an answer: this happened because another service restarted the Memurai service while a background rewrite of the AOF was in progress. Because of this, some zombie processes were created, and when Memurai started again, this error came up.
Our solution was to check whether any background rewriting is happening, using the aof_rewrite_scheduled and aof_rewrite_in_progress flags from the Persistence info. If either of these flags is true, don't stop the service.
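A minimal sketch of that check, assuming the Redis-compatible CLI that ships with Memurai (memurai-cli), or redis-cli pointed at the same host/port, is available on the machine:

memurai-cli INFO persistence
# In the output, confirm both of these lines before stopping the service:
#   aof_rewrite_in_progress:0
#   aof_rewrite_scheduled:0
# If either value is 1, wait and re-check before restarting Memurai.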

Laravel 8 - Queue jobs timeout, Fixed by clearing cache & restarting horizon

My queue jobs all run fairly seamlessly on our production server, but about every 2-3 months I start getting a lot of timeout exceeded/too many attempts exceptions.
Our app is running with event sourcing and many events are queued, so needless to say we have a lot of jobs passing through the system (100-200k per day generally).
I have not found the root cause of the issues yet, but a simple re-deploy through Laravel Envoyer fixes the issue. This is most likely due to the cache:clear command being run.
Currently, the cache is handled by Redis and is on the same server as the app. I was considering moving the cache to its own server/instance but this still does not help me with the root cause.
Does anyone have any ideas what might be going on here and how I can diagnose/fix it? I am guessing the cache is just getting overloaded/running out of space/leaking etc. over time but not really sure where to go from here.
Check:
The version of your Redis, and update the predis package
The version of your Laravel
Your server
I hope I gave you some solutions.
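To narrow down the root cause rather than redeploying, a rough sketch of reproducing what the redeploy does while watching Redis, assuming Horizon runs under a process manager such as Supervisor and Redis is local:

redis-cli INFO memory          # note used_memory before and after the fix to spot steady growth
php artisan cache:clear        # the step the Envoyer deploy is most likely relying on
php artisan horizon:terminate  # Horizon exits gracefully and the process manager restarts it
php artisan queue:restart      # tells any plain queue workers to restart after their current job

If clearing the cache alone recovers the workers while used_memory keeps climbing between incidents, that points at cache/queue keys accumulating in Redis rather than a problem with the workers themselves.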

How does memcached work? (details inside)

As I understand it, memcached caches data in virtual/physical memory. I have a WordPress installation with W3TC installed, and my server can use either Disk or Memcached, so I went for Memcached.
When I check the memory usage in cPanel, it's on 0 (physical's on ~100Mb). When I try to load the site, memory usage jumps to 100-300Mb (different values for both types of memory, but they are both at ~300Mb). CPU usage jumps to 100%. It stays like that for a few minutes.
So how does memcached work then? It doesn't make sense to me. Would I be better off using Disk cache instead? The site's utterly slow too, unless I'm reloading it or revisiting pages I've already visited - then it's lightning-fast. Disk cache however, seems slow-ish too in general...
What am I supposed to do? Is there a way of "fixing" this, if there is something to fix in the first place of course?
Any info or insight is appreciated,
Thanks!
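One way to see whether memcached is actually being used (a diagnostic sketch, assuming the default daemon on 127.0.0.1:11211 and that netcat is installed) is to query its stats directly:

printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211 | grep -E "curr_items|bytes|get_hits|get_misses"
# curr_items/bytes show whether W3TC is storing anything at all;
# get_hits vs get_misses shows whether repeat visits are served from the cache.

If curr_items stays at 0 and get_hits never increases while you browse the site, W3TC is not actually writing to or reading from memcached.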

Magento + APC + PHP-FPM crashes

I've set up Magento 1.8.1 with PHP 5.4 and APC 3.1.15 (all running on EC2 - AWS Linux).
Magento crashes intermittently (and frequently) with APC active. I can't reproduce the issue with any specific area of the site; but going to various pages in the admin, I can get it to crash within five minutes.
According to the logs...
PHP Fatal error: Cannot override final method Mage_Core_Block_Abstract::toHtml() in /var/www/includes/src/Mage_Adminhtml_Block_Widget.php on line 35
With APC off, the problem goes away (and performance sucks). I've read everything I could find from Google searches, but nothing specific to this issue. All the issues seem to be about memory consumption (adjusting shared memory, etc). It's not a segfault or out of memory error.
Anyone have any insight into this error?
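Not a confirmed fix, but the /var/www/includes/src path in the fatal error means the Magento 1 compiler is enabled, and APC combined with the compiled class files is a combination often blamed for this kind of duplicate/overridden class error. A sketch of checking and disabling the compiler to see whether the crashes stop:

php -f shell/compiler.php -- state     # shows whether compilation is enabled
php -f shell/compiler.php -- disable   # falls back to the normal app/code include path
# Restart PHP-FPM afterwards so APC drops the opcode cache built from includes/src.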
