Redis memory filled up to 2 GB with Laravel Horizon

redis info memory:
used_memory_human:1.95G
used_memory_rss_human:1.93G
used_memory_peak_human:2.04G
used_memory_peak_perc:95.63%
I use Laravel Horizon and it is the only thing that uses Redis.
Now it has reached the 2 GB limit and stays like this.
First question:
Why is there a 2 GB limit and how do I increase it? maxmemory is set to 0.
Second question:
I don't think there are that many jobs pending in Laravel Horizon to fill up 2 GB; it looks like the trimmer or something isn't working. The jobs are small and don't store much information. There are about 1-2k jobs per hour and maybe around 3-4k pending.
My trim settings from horizon.php:
'trim' => [
    'recent' => 60,           // minutes (1 hour)
    'pending' => 43200,       // minutes (30 days)
    'completed' => 60,        // minutes (1 hour)
    'recent_failed' => 10080, // minutes (7 days)
    'failed' => 10080,        // minutes (7 days)
    'monitored' => 10080,     // minutes (7 days)
],
Where should I look next? Everything is working so far, but I don't like the situation. Once, with a big batch of jobs, we hit a memory allocation issue (Allowed memory size of 536870912 bytes exhausted).
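One quick way to see whether the trimmer is keeping up is to count what Horizon has actually left in Redis. A minimal sketch to run in php artisan tinker, assuming Horizon's default setup; the 'horizon' connection name and the pending_jobs/recent_jobs key names are assumptions that may differ by Horizon version and config:

use Illuminate\Support\Facades\Redis;

// Total number of keys in the Redis database Horizon writes to. If this keeps
// growing far beyond the jobs inside the trim windows above, trimming is not
// keeping up (it only runs while the php artisan horizon process is alive).
$total = Redis::connection()->command('dbsize');

// Rough size of the job indexes (key names assume Horizon defaults).
$pending = Redis::connection('horizon')->zcard('pending_jobs');
$recent  = Redis::connection('horizon')->zcard('recent_jobs');

Alternatively, redis-cli --bigkeys will sample the largest keys directly on the server.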

Related

Laravel job stays in "Processing" state although finished

I have a problem with the Laravel queue system. For performance purposes we use the Laravel queuing system with Amazon's SQS for heavier calculations. This works fine, at least for most of our jobs. Some of them, where the raw calculation time is about 25 seconds, keep blocking the queue in the "Processing" state for 6 minutes.
We logged the complete handle function of the job and the output was correct at all times. As a matter of fact, the last log statement (at the end of the function) was printed 20 seconds after entering the function. The data was calculated as expected and the database was up to date, but the job was still "Processing".
After we intentionally crashed the job at the end of the handle function, the calculated data was stored perfectly, but obviously the queue crashed as well. So I guess it has to be something happening after the handle function. Maybe something with allocated memory?
The config of the queue is the default sqs driver configuration:
'sqs' => array(
    'driver' => 'sqs',
    'key'    => env('AWS_KEY', 'secret'),
    'secret' => env('AWS_SECRET', 'secret'),
    'queue'  => env('AWS_SQS_QUEUE', 'secret'),
    'region' => env('AWS_SQS_REGION', 'secret'),
),
Edit:
I found out it is not only the queue; when I execute the job as a command, the same behavior appears:
I print "Done." as the last statement in the command, and after it gets printed the console halts for a few seconds before returning to the console input.
When I comment out the part with the most queries, the issue is gone; the more queries I use, the longer I have to wait for the console.
I hope any of you know what causes this behavior and how we can fix it.
Thanks in advance.
OK, I found the issue.
The problem was that Telescope was enabled, so after the code was executed Telescope was busy logging all requests and cache hits.
After disabling Telescope there was no delay anymore.
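For reference, Telescope does not have to be removed to avoid this; the default published config ships with an environment toggle. A minimal sketch, assuming the stock config/telescope.php:

// config/telescope.php (as published by Telescope)
'enabled' => env('TELESCOPE_ENABLED', true),

// .env on the environment where the queue workers / commands run
// TELESCOPE_ENABLED=false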

What is the number of processes in Laravel Horizon I should use?

I have installed Laravel Horizon to manage my queues, and inside the published config there are these settings:
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default'],
        'balance' => 'simple',
        'processes' => 3,
        'tries' => 3,
    ],
],
Regarding the processes setting:
How can I determine the number of processes I should use in a real-world app?
Is there a limit?
This is really a guess until your application is in the real world. You need to balance acceptable wait times for jobs to kick off against projections of how often jobs will be scheduled and how long they will take to run. A reasonable upper limit will depend heavily on the hardware you're running on and the overall server workload.
It's probably not worth spending too much time on this ahead of time versus monitoring it and dialing it in as you get up and running.
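If you would rather not commit to a fixed number up front, Horizon can also scale the worker count for you. A minimal sketch of the auto-balancing options (supported in recent Horizon versions; the numbers are only illustrative):

'supervisor-1' => [
    'connection'   => 'redis',
    'queue'        => ['default'],
    'balance'      => 'auto', // let Horizon shift processes based on queue load
    'minProcesses' => 1,      // floor
    'maxProcesses' => 10,     // ceiling; size it to your CPU and memory budget
    'tries'        => 3,
],

Start low, watch wait times in the Horizon dashboard, and raise maxProcesses only if jobs start queueing up.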

Many Logstash instances reading from Redis

I have one Logstash process running on one node consuming from a Redis list, but I'm afraid that just one process cannot handle the data throughput without a great delay.
I was wondering whether running one more Logstash process on this same machine would perform a little better, but I'm not certain about that. I know that my ES index is not a bottleneck.
Would Logstash duplicate my data if I consume the same list? Does this approach seem like the right thing to do?
Thanks!
Here is my input configuration:
input {
  redis {
    data_type => "list"
    batch_count => 300
    key => "flight_pricing_stats"
    host => "my-redis-host"
  }
}
If you are going to run another Logstash process on the same machine, you could instead try adjusting the Logstash input threads. The default is 1.
input {
  redis {
    data_type => "list"
    batch_count => 300
    key => "flight_pricing_stats"
    host => "my-redis-host"
    threads => 2
  }
}
You could run more than one Logstash against the same Redis; events should not get duplicated. But I'm not sure that would help.
If you're not certain what's going on, I recommend the Logstash monitoring API. It can help you narrow down your real bottleneck.
There is also an interesting post from Elastic on the subject: Logstash Lines: Introducing a benchmarking tool for Logstash.

WIP process interface API takes too much time in an R12.2.5 instance

I have a performance issue with one private API of the process interface of Work In Process, wip_movProc_priv.processIntf. It takes around 2.5 minutes for all transactions, whereas when I run this API on an R12.1.3 instance it does not take this much time.
wip_movProc_priv.processIntf(p_group_id     => p_group_id,
                             p_proc_phase   => WIP_CONSTANTS.MOVE_VAL,
                             p_time_out     => 0,
                             p_move_mode    => 3, --WIP_CONSTANTS.ONLINE,
                             p_bf_mode      => WIP_CONSTANTS.ONLINE,
                             p_mtl_mode     => WIP_CONSTANTS.ONLINE,
                             p_endDebug     => 'T',
                             p_initMsgList  => 'T',
                             p_insertAssy   => 'T',
                             p_do_backflush => 'F',
                             x_returnStatus => x_returnstatus);
Please help me.
Thanks,
Yasin Musani
This question actually has far too little detail to be answered.
Typically, the majority of time spent in Oracle EBS code execution is due to a few badly performing SQLs.
You can identify the offending SQLs by looking at the AWR or SGA, e.g. using the Blitz Report DBA AWR SQL Performance Summary or DBA SGA SQL Performance Summary, and you would then need to analyze further why these are not executing properly.

Why does my default RabbitMQ config have such low throughput?

Background
I'm setting up an Elasticsearch ELK stack for real-time log analysis using RabbitMQ as the broker. I am shipping my log files to RabbitMQ using a Python project called Beaver. Things to take note of:
Using exchange_type: "direct"
Using queue_durable: 1 (not sure if this is worth mentioning)
Using exchange_durable: 1 (not sure if this is worth mentioning)
RabbitMQ
I downloaded the newest RabbitMQ from their website and ran it using the rabbitmq.config file. The only things I included in the config file were:
tcp_listeners -> {"0.0.0.0", 5672}
loopback_users -> [] (allow the guest user to connect remotely)
frame_max -> 2155000 (explained below)
I am parsing NetScreen log files and each log entry is roughly 431 bytes. I multiplied this by 5000 and hence arrived at the frame_max value.
On the consumer side of things (logstash rabbitmq input plugin) I have the following setup:
input {
  rabbitmq {
    host => "rabbitmq server ip here"
    queue => "indexer-queue"
    exchange => "logstash-exchange"
    key => "logstash-routing-key"
    exclusive => false
    durable => true
    auto_delete => false
    type => "logstash-indexer-input"
    prefetch_count => 2000
    threads => 5
  }
}
This setup was recommended here.
Problem
When I fired everything up, I realized that I was only able to achieve a maximum of 300 or so messages per second. Both the produce and consume rates are 300 messages/second, so I am assuming whatever is produced is consumed immediately. Looking at the queue, it is mostly empty.
Compared with Redis, RabbitMQ is pathetic in terms of throughput. I had heard great reviews about RabbitMQ, in particular about its speed.
Can someone please tell me what's wrong with my setup? Why am I only achieving 300+ messages/second with RabbitMQ rather than the 3500 messages/second I get with Redis?
I'm using RabbitMQ too, and I easily reach 5-7k msg/second.
My setup is:
2 RabbitMQ 3.3.1 servers for HA (replicated), clustered (disk mode)
logstash conf:
rabbitmq {
  exclusive => false
  host => '...'
  password => '...'
  user => '...'
  vhost => 'logstash'
  # No ack will boost your perf
  ack => false
  # Too high prefetch will slow down
  prefetch_count => 50
  auto_delete => false
  durable => true
  exchange => "logstash"
  key => 'logstash.logs'
  queue => "logstash.logs"
  threads => 8
}
The main differences are ack => false and prefetch_count => 50.
With your prefetch_count of 2000, if you have 4000 messages in the queue and 5 threads, only two threads will process logs:
thread 1: 2000 msg
thread 2: 2000 msg
threads 3-5: 0 msg
Moreover, I suggest checking that it's not your input stream that is slow:
deactivate the indexer (only ship logs)
-> the queue will fill up with millions of messages
then deactivate the shipper and reactivate the indexer
-> monitor how fast it consumes the messages
