java + MQ chunk keeps increasing - ibm-mq

I have a situation where one batch from a branch is sitting in the pre-processing queue.
The number of records is only 700, but the chunk keeps increasing.
What could be the problem?
I would appreciate it if you could advise what I should check.

Related

How to optimally use the NiFi Wait processor

I am currently creating a flow where I will be merging the results of 10K HTTP responses. I have a couple of questions (please refer to the image below; I am numbering my questions as per the image).
1) As the queue is becoming too long, is it OK to set "Concurrent Tasks" to 10 for InvokeHTTP? What should drive this? The number of cores on the server?
2) Wait is showing quite a big number. Is this just the number of bytes it is writing, or is it using that much memory? If it is just a write, then I might be OK... but if it is some internal queue, then I may soon run out of memory.
Does it make sense to reduce this number by increasing "Run Schedule" from 0 to, say, 20 seconds?
3) What exactly is "Back Pressure Data Size Threshold"? The value is set at 1 GB. Does that mean that if the size of the flow files in the queue exceeds it, NiFi will start dropping them? Or will it somehow stop the upstream processor from processing?
1) Yes, increasing Concurrent Tasks on InvokeHTTP would probably make sense. I wouldn't jump right to 10, but would test increasing from 1 to 2, 2 to 3, etc. until it seems to be working better. Concurrent Tasks is the number of threads that can concurrently execute the processor. The total number of threads for your NiFi instance is defined in the controller settings (top-right menu) under Timer Driven threads; you should set the timer-driven thread count based on the number of CPUs/cores you have.
2) The stats on the processor are totals for the last 5 minutes, so "In" is the total size of all the flow files that have come into the processor in the last 5 minutes. You can see that "Out" is almost the same number, which means almost all the flow files that came in have also been transferred out.
3) Back-pressure stops the upstream processor from executing until the back-pressure threshold is no longer exceeded. The data size threshold is saying "when the total size of all flow files in the queue exceeds 1 GB, stop executing the upstream processor so that no more data enters the queue while the downstream processor works it off". In the case of a self-loop connection, I think back-pressure won't stop the processor from executing; otherwise it would end up in a deadlock where it can't produce more data but also can't work off the queue. In any case, data is never dropped unless you set a flow file expiration on the queue.

Dozens of small messages lead to 1 GB chronicle queue file

I write UTF-8 strings to a Chronicle Queue with daily rolling. The default queue file size is 81920 KB. After I write dozens of messages (1 KB each), the file quickly grows to over 1 GB. How can I control the file size?
Chronicle Queue records every message. It is designed on the assumption that disk space is cheap. You can now buy a 1,000 GB enterprise SSD for a reasonable price. Some users retain over 100 TB in queues.
You can increase the roll rate to hourly and delete files you don't need. There is a store file listener, so you can determine when files roll over.
The file shouldn't be much larger than the data you are storing. If it is, can you provide a test case which reproduces the problem?
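For reference, a minimal sketch of hourly rolling with a store file listener might look like the following (the queue directory is a placeholder, and the exact builder and listener signatures can vary between Chronicle Queue versions):

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.StoreFileListener;

public class HourlyRollSketch {
    public static void main(String[] args) {
        // Called when a cycle file is no longer needed by the queue; a good place
        // to archive or delete old files.
        StoreFileListener onRelease = (cycle, file) ->
                System.out.println("Released " + file + " for cycle " + cycle);

        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-dir") // placeholder path
                .rollCycle(RollCycles.HOURLY)        // roll every hour instead of daily
                .storeFileListener(onRelease)
                .build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("a small UTF-8 message"); // ~1 KB messages, as in the question
        }
    }
}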

How much does the message size affect RabbitMQ Producer sending to Queue?

So I've been learning as I go with RabbitMQ, and I have servers and a producer set up; I'm not worrying about the consumer until I get this fixed. I'm collecting my String data over 30 seconds, concatenating it, and sending it as one big string; the average size is 1,180,000 bytes. What I'm experiencing is that the messages take a minimum of 10 minutes to appear in the queue, and up to 30 minutes!
My question is: does the size of the message impact the time it takes to get into the queue that much? Or does that not matter, and I must be doing something else wrong?
Any help is appreciated.
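There is no accepted answer here, but publishing ~1 MB would typically be expected to complete in well under a second, so it can help to time the publish itself. Below is a minimal sketch of the producer described above using the RabbitMQ Java client with publisher confirms; the host, queue name, and collection helper are placeholders, not details from the question:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.List;

public class BatchedProducerSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                                   // placeholder broker host

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare("batched", true, false, false, null);  // placeholder queue name
            channel.confirmSelect();                                    // enable publisher confirms

            // Concatenate the strings collected over ~30 seconds into one payload,
            // as described in the question (~1,180,000 bytes on average).
            StringBuilder batch = new StringBuilder();
            for (String s : collectForThirtySeconds()) {
                batch.append(s);
            }
            byte[] body = batch.toString().getBytes(StandardCharsets.UTF_8);

            long start = System.nanoTime();
            channel.basicPublish("", "batched", null, body);
            channel.waitForConfirms(30_000);                            // block until the broker accepts it
            long millis = (System.nanoTime() - start) / 1_000_000;

            // If this prints a small number but the message still takes minutes to
            // appear, the delay is happening before the publish call, not inside it.
            System.out.println("publish + confirm took " + millis + " ms");
        }
    }

    private static List<String> collectForThirtySeconds() {
        return List.of("sample-1", "sample-2");                         // placeholder for the real collection
    }
}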

WebSphere MQ Transactional Log file system full

The transactional log file system (/var/mqm/log) has become full, and I am getting MQRC 2102 (resource problem) from the queue manager when attempting a client connection to it. What course of action can we take to resolve this?
LogPrimaryFiles=2
LogSecondaryFiles=8
LogFilePages=16384
LogType=CIRCULAR
LogBufferPages=0
LogPath=/var/mqm/log/QMGRA/
LogWriteIntegrity=TripleWrite
Is adding additional disk space to /var/mqm/log the only solution?
I have a few queues that were full, but the queue storage file system was only 60% used.
Please give me some ideas on this.
Log file pages are 4096 bytes each, so a setting of LogFilePages=16384 results in log file extents of 64 MB each. With LogPrimaryFiles=2 and LogSecondaryFiles=8 there can be up to 10 log files, for a total of 640 MB. If the file system that the circular logs reside on is smaller than this, it may fill up.
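To make that arithmetic explicit, here is a small sketch that reproduces the calculation from the Log stanza values above (the 4 KB page size is the standard MQ log page size):

public class MqLogSpace {
    public static void main(String[] args) {
        // Values from the qm.ini Log stanza in the question
        long logFilePages = 16_384;
        long primaryFiles = 2;
        long secondaryFiles = 8;

        long pageSizeBytes = 4_096;                                      // MQ log pages are 4 KB each
        long extentBytes = logFilePages * pageSizeBytes;                 // size of one log extent
        long totalBytes = extentBytes * (primaryFiles + secondaryFiles); // up to 10 extents

        System.out.println("extent size:   " + extentBytes / (1024 * 1024) + " MB");  // 64 MB
        System.out.println("max log space: " + totalBytes / (1024 * 1024) + " MB");   // 640 MB
        // /var/mqm/log needs at least this much free space, plus some headroom.
    }
}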
The optimum solution here is to increase the size of the log file disk allocation to something a little larger than the log file extents require. If that is not possible or you need a temporary fix then it is necessary to change the size of the log file requirement by reducing the number of extents and restarting the QMgr. Note that you can adjust the number of log extents but not the size of the extents. If it becomes necessary to change the LogFilePages=16384 parameter then it is necessary to rebuild the QMgr.
The number and size of extents represents the total amount of data that can be under syncpoint at once, and 640 MB is generous in most cases. In terms of time, it also limits the longest possible duration of a unit of work on an active QMgr, because an outstanding transaction will be rolled back if the head pointer in the log ever overtakes the tail pointer. For example, suppose a channel goes into retry. This holds a batch of messages under syncpoint and keeps that log extent active. As applications and other channels perform their normal operations, additional transactions drive the head pointer forward. Eventually all extents will be used, and although there may be very few outstanding transactions, the oldest one will be rolled back to free up its extent and advance the tail pointer. If the error log shows many transactions being rolled back to free log space, then you really would need to allocate more space to the log file partition and bump up the number of extents.

Oracle Database performance related

I am currently working on a 9.2.0.8 Oracle database. I have some questions related to database performance, specifically redo log latches and contention. Answers from real practice will be highly appreciated; please help.
My database currently has 25 redo log files with 2 members each. Each member is 100 MB in size.
Is it worth keeping 25 redo log files, each with 2 members (100 MB each)?
My database runs 24x7 with a minimum of 275 users and a maximum of 650. It does mostly SELECTs with very few INSERTs/UPDATEs/DELETEs.
For the last month I have been observing that my database is generating archives averaging 17 GB min to 28 GB at max.
But the log switch is taking place on average every 5-10 minutes, sometimes more frequently, and sometimes even 3 times in a minute.
Yet my SPFILE says log_checkpoint_timeout=1800 (30 minutes).
Regarding redo log latches and contention, when I issue:
SELECT name, value
FROM v$sysstat
WHERE name = 'redo log space requests';
Output:
NAME                                                                      VALUE
--------------------------------------------------------------------  ----------
redo log space requests                                                    20422
(This value is increasing day by day.)
Oracle recommends keeping redo log space requests close to zero.
So I want to know why my database is switching logs so frequently. Is this because of the data, or because of something else?
My thought was that increasing the redo log buffer might resolve the problem, so I increased it from 8 MB to 11 MB, but I didn't notice much difference.
If I increase the size of the redo log files from 100 MB to 200 MB, will it help? Will it reduce the log switching frequency and bring the value of redo log space requests close to zero?
Something about the information you supplied doesn't add up: if you were really generating around 20 GB/min of archive logs, then you would be switching your 100 MB log files at least 200 times per minute, not the 3 times per minute worst case that you mentioned. This also isn't consistent with your description of "mostly SELECTs".
In the real world, I wouldn't worry about log switches every 5-10 minutes on average. With this much redo, none of the init parameters are coming into play for switching - it is happening because of the online redo logs filling up. In this case, the only way to control the switching rate is to resize the logs, e.g. doubling the log size will reduce the switching frequency by half.
17GB of logfiles per minute seems pretty high to me. Perhaps one of the tablespaces in your database is still in online backup mode.
It would probably help to look at which sessions are generating lots of redo, and which sessions are waiting on the redo log space the most.
select name, sid, value
from v$sesstat s, v$statname n
where name in ('redo size', 'redo log space requests')
and n.statistic# = s.statistic#
and value > 0
order by 1, 2;
