With the Fluent Bit configuration below we are getting errors from OpenSearch under heavy load.
HTTP bulk requests to OpenSearch from Fluent Bit (the 429 errors show up as a spike)
Fluentbit config:
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     400M
    storage.type      filesystem
    Skip_Long_Lines   On
    Refresh_Interval  1
    Rotate_Wait       600

[OUTPUT]
    Name                     es
    Match                    kube.*
    Host                     ${ES_HOST}
    Port                     ${PORT}
    Buffer_Size              False
    AWS_Auth                 Off
    AWS_Role_ARN             ${ES_ARN}
    AWS_External_ID          ${ES_IAMROLE}
    HTTP_User                ${ES_USER}
    HTTP_Passwd              ${ES_PASSWD}
    tls                      On
    tls.verify               Off
    Trace_Output             ${TRACE_OUTPUT}
    Trace_Error              On
    Replace_Dots             On
    Index                    fluentbit
    Type                     flb
    AWS_Region               ${AWS_REGION}
    Logstash_Format          On
    Logstash_Prefix          ${ES_LOGSTASHPREFIX}_app_log
    Logstash_DateFormat      %Y.%m.%d
    Retry_Limit              10
    storage.total_limit_size 1G
To resolve this we upgraded our OpenSearch instance type from r5.xlarge.search (4 nodes) to r5.2xlarge.search (3 nodes), but that didn't solve the issue.
We also increased the ES index refresh_interval to 60s, but that didn't help.
We read that Fluent Bit's output to ES can be controlled via buffering, so we decreased Mem_Buf_Limit to 400M, but that didn't help either.
Can someone suggest anything else we can try, or point out what we are missing?
The issue here is not with Fluent Bit but with OpenSearch/Elasticsearch.
The HTTP 429 errors (es_request_rejected_exception) occur when more requests are sent to the cluster than its thread pools can handle. OpenSearch allocates its thread pools differently for different tasks, with search operations getting a larger share. The option to manually modify thread pool allocation is not available for versions 5.1 and later.
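You can confirm that this is what is happening by checking the write thread pool's queue and rejection counts; a minimal sketch using the _cat API, reusing the host and credentials from your output configuration:
curl -u "${ES_USER}:${ES_PASSWD}" \
  "https://${ES_HOST}:${PORT}/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected"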
You can try to resolve this in a few ways:
1: Refresh rate (you already changed that and it didn't help).
2: Change the indexing speed: try to send logs at a longer interval than your current one.
3: Upscale (you did, and it didn't work either).
You can get an idea from the following formulas for the thread pool sizes:
Number of threads allocated for writes = number of virtual CPUs (your case)
Number of threads allocated for search = ((3 * number of virtual CPUs) / 2) + 1
So my guess is that your issue here is a large number of shards! You can either decrease the number of shards per index, or, if you only hit this once in a while under extra load, temporarily change the replica count to 0 and set it back to the original value once that period is over.
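For example, a minimal sketch of temporarily dropping replicas (and keeping the longer refresh interval) through the index settings API; the index pattern is an assumption based on your Logstash_Prefix and should be adjusted to match your actual indices:
curl -u "${ES_USER}:${ES_PASSWD}" -X PUT \
  "https://${ES_HOST}:${PORT}/${ES_LOGSTASHPREFIX}_app_log-*/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0, "refresh_interval": "60s"}}'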
Check these two links to find out more about optimizing your ES domain.
indexing performance
Best practices
I have configured my producer with request.timeout.ms = 70,000 ms and retries = 5. I have a doubt about how this actually works:
after request.timeout.ms = 70,000 expires, does it retry 5 times, or does it retry 5 times within the given 70,000 ms, spaced by the retry.backoff.ms value?
There are 3 important configs to be aware of:
"request.timeout.ms" - the time allowed for a single request (each retry gets its own timeout)
"delivery.timeout.ms" - the time to complete the entire send operation
"retries" - how many times to retry when the broker responds with retriable errors
The Apache Kafka recommendation is to set "delivery.timeout.ms" and leave the other two configurations at their default values. The idea is that the main thing you, as a user, should worry about is how long you want to wait for Kafka to figure things out before giving up on it. It doesn't really matter what is taking Kafka so long - the connection, getting metadata, long queues, etc.; the only thing that matters is how long you are willing to wait.
Now to your question - request.timeout.ms applies to each retry. So the producer will send the record batch to Kafka, and if there's no response after 70,000 ms it will consider this a failure and retry. Note that most errors (say, NotLeaderForPartition) will return from the broker much faster (which is why retry backoffs are needed).
Reasoning about delivery times with retries + request.timeout.ms turned out to be near impossible even for those who wrote the producer. Hence the introduction of delivery.timeout.ms, with a very clear contract.
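For reference, a sketch of that recommendation as producer properties; the commented values are the client defaults in recent Kafka versions, shown only for orientation:
# Bound the whole send with the time you are willing to wait (5 minutes here, purely illustrative).
delivery.timeout.ms=300000
# Left at their defaults, per the recommendation above:
# request.timeout.ms=30000    (default; applies to each individual request/retry)
# retries=2147483647          (default; effectively bounded by delivery.timeout.ms)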
We have been facing an issue where the message rate on an XMITQ is very slow compared with what its normal performance should be.
We have many other queue managers with bigger MQ flows that don't have the same issue.
Our HUB queue manager connects to a business line's HUB queue manager in the same company, and even with the destination queues on their side being empty, the flow is really slow.
At the OS and network level they say nothing can be done. At the MQ level we have changed the buffer sizes so they match the OS level and use the system TCP windows.
Now at the MQ level we have the SDR channel set up with BATCHSZ 100, but it seems the receiver is configured with 30.
We noticed that because we see messages flow in batches of 30 messages. Also, not sure if it is related, but we see the XMITQ always has 30 uncommitted messages.
Our questions, for advice:
Would increasing the BATCHSZ parameter on the SDR/RCVR help the performance?
Is there any other parameter at the MQ level that could help?
DIS CHS(NAME) ALL
AMQ8417: Display Channel Status details.
CHANNEL(QMGRA.QMGRB.T7) CHLTYPE(SDR)
BATCHES(234) BATCHSZ(30)
BUFSRCVD(235) BUFSSENT(6391)
BYTSRCVD(6996) BYTSSENT(14396692)
CHSTADA(2020-04-16) CHSTATI(14.38.17)
COMPHDR(NONE,NONE) COMPMSG(NONE,NONE)
COMPRATE(0,0) COMPTIME(0,0)
CONNAME(159.50.69.38(48702)) CURLUWID(398F3E5EEA43381C)
CURMSGS(30) CURRENT
CURSEQNO(43488865) EXITTIME(0,0)
HBINT(300) INDOUBT(YES)
JOBNAME(000051FC00000001) LOCLADDR(10.185.8.122(54908))
LONGRTS(999999999) LSTLUWID(398F3E5EE943381C)
LSTMSGDA(2020-04-16) LSTMSGTI(14.49.46)
LSTSEQNO(43488835) MCASTAT(RUNNING)
MONCHL(HIGH) MSGS(6386)
NETTIME(2789746,3087573) NPMSPEED(NORMAL)
RQMNAME(QMGRB) SHORTRTS(10)
SSLCERTI(*******************)
SSLKEYDA( ) SSLKEYTI( )
SSLPEER(*******************)
SSLRKEYS(0) STATUS(RUNNING)
STOPREQ(NO) SUBSTATE(RECEIVE)
XBATCHSZ(23,7) XMITQ(QMGRB.X7)
XQTIME(215757414,214033427) RVERSION(08000008)
RPRODUCT(MQMM)
qm.ini:
Log:
LogPrimaryFiles=10
LogSecondaryFiles=10
LogFilePages=16384
LogType=LINEAR
LogBufferPages=4096
LogPath=/apps/wmq/QMGR/log/QMGR/
LogWriteIntegrity=SingleWrite
Service:
Name=AuthorizationService
EntryPoints=13
TCP:
SvrSndBuffSize=0
SvrRcvBuffSize=0
ServiceComponent:
Service=AuthorizationService
Name=MQSeries.UNIX.auth.service
Module=/opt/mqm75/lib64/amqzfu
ComponentDataSize=0
Channels:
MaxChannels=500
UPDATED: 15:41 GMT
Just to update the information: both sides are now at BATCHSZ 100 and it seems slightly better.
AMQ8417: Display Channel Status details.
CHANNEL(QMGRA.QMGRB.T7) CHLTYPE(SDR)
BATCHES(403) BATCHSZ(100)
BUFSRCVD(405) BUFSSENT(23525)
BYTSRCVD(11756) BYTSSENT(53751066)
CHSTADA(2020-04-17) CHSTATI(15.13.51)
COMPHDR(NONE,NONE) COMPMSG(NONE,NONE)
COMPRATE(0,0) COMPTIME(0,0)
CONNAME(159.50.69.38(48702)) CURLUWID(6D66985E94343410)
CURMSGS(0) CURRENT
CURSEQNO(44115897) EXITTIME(0,0)
HBINT(300) INDOUBT(NO)
JOBNAME(0000172A00000001) LOCLADDR(10.185.8.122(2223))
LONGRTS(999999999) LSTLUWID(6D66985E93343410)
LSTMSGDA(2020-04-17) LSTMSGTI(15.30.06)
LSTSEQNO(44115897) MCASTAT(RUNNING)
MONCHL(HIGH) MSGS(23505)
NETTIME(101563,480206) NPMSPEED(NORMAL)
RQMNAME(QMGRB) SHORTRTS(10)
SSLCERTI(*************************************)
SSLKEYDA( ) SSLKEYTI( )
SSLPEER(****************************)
SSLRKEYS(0) STATUS(RUNNING)
STOPREQ(NO) SUBSTATE(MQGET)
XBATCHSZ(1,1) XMITQ(QMGRB.X7)
XQTIME(191225,794134) RVERSION(08000008)
RPRODUCT(MQMM)
AMQ8450: Display queue status details.
QUEUE(QMGRB.X7) TYPE(QUEUE)
CURDEPTH(0) IPPROCS(1)
LGETDATE(2020-04-17) LGETTIME(15.30.06)
LPUTDATE(2020-04-17) LPUTTIME(15.30.06)
MEDIALOG(S2488154.LOG) MONQ(LOW)
MSGAGE(0) OPPROCS(9)
QTIME(794134, 191225) UNCOM(NO)
I'll put a few observations in this answer, but based on any further feedback I may add more.
You are running a very old version of the software on the sender side; MQ 7.5 went out of support almost two years ago (April 30th, 2018). IBM will, for a cost, provide extended support for an additional three years, so maybe you fall in that group. The 7.5.0.2 maintenance release itself came out on July 11th, 2013, so it is almost seven years old at this point. I would strongly suggest you move to a newer version.
Note that MQ v8.0 goes out of support on April 30th, 2020, and IBM just announced a few days ago that MQ v9.0 goes out of support on September 30th, 2021. When you do migrate you should go with either 9.1, which has no announced end of support (they give five years minimum, so it could be 2023), or go with the next version of MQ that should be out some time this year.
You mention setting the following:
TCP:
SvrSndBuffSize=0
SvrRcvBuffSize=0
The above settings apply to the SVRCONN end of a client connection. You can see this in the MQ v7.5 Knowledge Center page WebSphere MQ>Configuring>Changing configuration information>Changing queue manager configuration information>TCP, LU62, NETBIOS, and SPX:
SvrSndBuffSize=32768|number
The size in bytes of the TCP/IP send buffer used by the server end of a client-connection
server-connection channel.
SvrRcvBuffSize=32768|number
The size in bytes of the TCP/IP receive buffer used by the server end of a client-connection
server-connection channel.
At IBM MQ v7.5.0.2, APAR IV58073 introduced the concept of setting the various buffer settings to a value of 0, which means the operating system defaults will be used. Unfortunately, like many things in the Knowledge Center, it does not look like IBM documented this correctly for 7.5.
You can however review the IBM MQ v8.0 Knowledge Center to get the full picture regarding these settings at the page Configuring>Changing configuration information>Changing queue manager configuration information>TCP, LU62, and NETBIOS. Specifically, you would want to set these two settings to have any impact on your sender channel:
SndBuffSize=number| 0
The size in bytes of the TCP/IP send buffer used by the sending end of
channels. This stanza value can be overridden by a stanza more
specific to the channel type, for example RcvSndBuffSize. If the
value is set as zero, the operating system defaults are used. If no
value is set, then the IBM MQ default, 32768, is used.
RcvSndBuffSize=number| 0
The size in bytes of the TCP/IP send buffer used by the sender end of
a receiver channel. If the value is set as zero, the operating system
defaults are used. If no value is set, then the IBM MQ default, 32768,
is used.
Starting at IBM MQ v8.0 any newly created queue manager will have all of the following in the qm.ini:
TCP:
SndBuffSize=0
RcvBuffSize=0
RcvSndBuffSize=0
RcvRcvBuffSize=0
ClntSndBuffSize=0
ClntRcvBuffSize=0
SvrSndBuffSize=0
SvrRcvBuffSize=0
However, any queue manager that is upgraded will not get those settings by default: if they are not present they will not be added, and if they are present they will remain the same. If a setting is not present then, as it says above, "the IBM MQ default, 32768, is used."
I had extensive discussions with IBM support on this topic and came to the conclusion that they did not see any reason not to set it to 0; they only saw benefit in doing so, but out of an abundance of caution they do not change it to 0 for you.
I would recommend you add all of those to your qm.ini, but at minimum add the two I highlighted above.
These are good settings to implement, but they may not solve your problem if nothing changed recently on either end. If however something did change, for example a network difference, or MQ being upgraded to 8.0.0.8 on the remote side, then these settings just might solve your problem.
In the channel status output two values are interesting:
NETTIME(2789746,3087573)
XQTIME(215757414,214033427)
NETTIME means that based on recent activity it took 2.7 seconds to receive a response from the RCVR channel, and over a longer period of time it took 3.1 seconds to receive a response from the RCVR channel. Can you compare this to a TCP ping from the sender channel server to the receiver channel server? 2.7 seconds for a response over the network seems excessive. In the presentation Keeping MQ Channels Up and Running, given at Capitalware's MQ Technical Conference v2.0.1.4, Paul Clarke, who used to work for IBM, states "NETTIME only measures network time, and does not include the MQCMIT for example".
XQTIME means that based on both recent activity and a longer period of time it took ~215 seconds for a message on the XMITQ to be picked up by the SDR channel to be sent.
See below for how IBM documents these:
NETTIME
Amount of time, displayed in microseconds, to send a request to the remote end of the channel and receive a response. This time only measures the network time for such an operation. Two values are displayed:
A value based on recent activity over a short period.
A value based on activity over a longer period.
XQTIME
The time, in microseconds, that messages remained on the transmission queue before being retrieved. The time is measured from when the message is put onto the transmission queue until it is retrieved to be sent on the channel and, therefore, includes any interval caused by a delay in the putting application.
Two values are displayed:
A value based on recent activity over a short period.
A value based on activity over a longer period.
Information on the BATCHSZ channel parameter can be found in the IBM MQ v8.0 Knowledge Center page Reference>Configuration reference>Channel attributes>Channel attributes in alphabetical order>Batch size (BATCHSZ). I have quoted it and highlighted a few areas in bold.
This attribute is the maximum number of messages to be sent before a sync point is taken.
The batch size does not affect the way the channel transfers messages; messages are always transferred individually, but are committed or backed out as a batch.
To improve performance, you can set a batch size to define the maximum number of messages to be transferred between two sync points. The batch size to be used is negotiated when a channel starts, and the lower of the two channel definitions is taken. On some implementations, the batch size is calculated from the lowest of the two channel definitions and the two queue manager MAXUMSGS values. The actual size of a batch can be less; for example, a batch completes when there are no messages left on the transmission queue or the batch interval expires.
A large value for the batch size increases throughput, but recovery times are increased because there are more messages to back out and send again. The default BATCHSZ is 50, and you are advised to try that value first. You might choose a lower value for BATCHSZ if your communications are unreliable, making the need to recover more likely.
This attribute is valid for channel types of:
Sender
Server
Receiver
Requester
Cluster sender
Cluster receiver
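For reference, a sketch of raising BATCHSZ on both ends in runmqsc; the channel name is taken from your status output, and the new value only takes effect once the channel is restarted (the lower of the two definitions is what gets negotiated):
* On QMGRA (sending side)
ALTER CHANNEL(QMGRA.QMGRB.T7) CHLTYPE(SDR) BATCHSZ(100)
STOP CHANNEL(QMGRA.QMGRB.T7)
START CHANNEL(QMGRA.QMGRB.T7)
* On QMGRB (receiving side)
ALTER CHANNEL(QMGRA.QMGRB.T7) CHLTYPE(RCVR) BATCHSZ(100)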
Follow up questions:
Are the messages that are PUT to this XMITQ persistent?
Answer: Yes, in our PROD env all messages are persistent.
Have you had a recent increase in volume going to this XMITQ?
Answer: No. We use a monitoring tool, and we extracted a report that shows a very similar message rate during the period - the same rate over the last 2 weeks.
Do the putting applications set MQPMO_SYNCPOINT and then commit after 1 or more messages are PUT to the queue?
Answer: I will check with the application team.
A couple of things:
You have XBATCHSZ(1,1), so your recent batch size is 1 message per batch.
Total messages 23505, batches 403, so an average of 58 messages per batch. If your recent batch size is 1, then you must have had some larger (100?) batch sizes.
XQTIME 191225 is the number of microseconds messages were on the xmit queue before being sent. This is about 0.2 seconds!
NETTIME 101563 microseconds. This is a long time (0.1 seconds); 10,000 would be a good value. Compare this with a "TCP ping".
BUFSSENT 23525 is similar to the number of messages, so the message size is typically under 32K. Bytes sent divided by messages gives about 2,286, so these are small messages.
Things to check (a runmqsc sketch follows this list):
The queue at the remote end - has it filled up? This would cause messages to back up on the sender's transmission queue.
The NETTIME seems very long. Compare this with a TCP ping. NETTIME can include slow I/O at the remote end - or a full queue at the remote end.
XQTIME is high. This could be caused by sending applications not committing, or by slow disk I/O.
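A sketch of those checks in runmqsc; the transmission queue name is taken from your output, while the remote destination queue name is hypothetical and that command would be run on QMGRB:
* On QMGRA: depth, uncommitted count and oldest message age on the transmission queue
DIS QSTATUS(QMGRB.X7) TYPE(QUEUE) CURDEPTH UNCOM MSGAGE
* On QMGRB: is the destination queue (hypothetical name) filling up?
DIS QSTATUS(APP.TARGET.QUEUE) TYPE(QUEUE) CURDEPTH MSGAGE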
I wrote "Why is my xmit queue filling up" in this blog
*Search for the title
have a read.
Capture these metrics over a day and see if they are typical
regards
Colin Paice
I have a strange problem with the kafka -> elasticsearch connector. The first time I started it, everything was great: I received new data in Elasticsearch and checked it through the Kibana dashboard. But when I produced new data into Kafka using the same producer application and tried to start the connector one more time, I didn't get any new data in Elasticsearch.
Now I'm getting errors like this:
[2018-02-04 21:38:04,987] ERROR WorkerSinkTask{id=log-platform-elastic-0} Commit of offsets threw an unexpected exception for sequence number 14: null (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
org.apache.kafka.connect.errors.ConnectException: Flush timeout expired with unflushed records: 15805
I'm using the following command to run the connector:
/usr/bin/connect-standalone /etc/schema-registry/connect-avro-standalone.properties log-platform-elastic.properties
connect-avro-standalone.properties:
bootstrap.servers=kafka-0.kafka-hs:9093,kafka-1.kafka-hs:9093,kafka-2.kafka-hs:9093
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
# producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
# consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
#rest.host.name=
rest.port=8084
#rest.advertised.host.name=
#rest.advertised.port=
plugin.path=/usr/share/java
and log-platform-elastic.properties:
name=log-platform-elastic
key.converter=org.apache.kafka.connect.storage.StringConverter
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=member_sync_log, order_history_sync_log # ... and many others
key.ignore=true
connection.url=http://elasticsearch:9200
type.name=log
I checked the connection to the Kafka brokers, Elasticsearch and the schema registry (the schema registry and the connector are on the same host at the moment) and all is fine. The Kafka brokers are running on port 9093 and I'm able to read data from the topics using kafka-avro-console-consumer.
I'll be grateful for any help on this!
Just update flush.timeout.ms to something bigger than 10000 (10 seconds, which is the default).
According to the documentation:
flush.timeout.ms
The timeout in milliseconds to use for periodic
flushing, and when waiting for buffer space to be made available by
completed requests as records are added. If this timeout is exceeded
the task will fail.
Type: long Default: 10000 Importance: low
See documentation
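For example, a minimal addition to log-platform-elastic.properties; 60000 is just an illustrative value, size it to how long flushing your backlog realistically takes:
# Allow up to 60 seconds for the sink to flush outstanding records before failing the task.
flush.timeout.ms=60000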
You can optimize the Elasticsearch sink connector configuration to solve the issue. Please refer to the link below for the configuration options:
https://docs.confluent.io/current/connect/kafka-connect-elasticsearch/configuration_options.html
Below are the key parameters that control the message flow rate and can eventually help solve the issue:
flush.timeout.ms: increasing it might give the flush more breathing room
The timeout in milliseconds to use for periodic flushing, and when
waiting for buffer space to be made available by completed requests as
records are added. If this timeout is exceeded the task will fail.
max.buffered.records: try reducing the buffered-record limit
The maximum number of records each task will buffer before blocking
acceptance of more records. This config can be used to limit the
memory usage for each task
batch.size: try reducing the batch size
The number of records to process as a batch when writing to
Elasticsearch
tasks.max: the number of parallel threads (consumer instances); reduce or increase it. If Elasticsearch cannot handle the bandwidth, reducing the number of tasks may help.
Tuning the above parameters fixed the issue for me.
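Putting that together, a sketch of what the tuned sink properties might look like; the values are illustrative starting points rather than recommendations:
# Illustrative tuning for log-platform-elastic.properties (adjust to your load).
# Give the sink more time to flush before the task is failed:
flush.timeout.ms=60000
# Cap how many records a task buffers (default 20000):
max.buffered.records=10000
# Send smaller bulk requests (default 2000):
batch.size=500
# Scale the number of tasks to what Elasticsearch can absorb:
tasks.max=1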
What's the topic of connection threshold events? How do I listen to connection count threshold events over the message bus, and how do I figure out what is the current connection count?
Connection threshold events can be published over the message bus to the following topics:
#LOG/WARNING/VPN/<router-name>/VPN_VPN_CONNECTIONS_HIGH/<vpn-name> when the connection count exceeds the high threshold.
#LOG/INFO/VPN/<router-name>/VPN_VPN_CONNECTIONS_HIGH_CLEAR/<vpn-name> when the connection count goes below the clear threshold.
If desired, you can apply wildcards to the topics. For example, #LOG/*/VPN/<router-name>/VPN_VPN_CONNECTIONS*/<vpn-name>.
Note that you will need to fill in <router-name> and <vpn-name> with appropriate values.
In order to have the connection count threshold events published over the message bus, you will need to do the following:
a. Configure the VPN to "Publish Message VPN Event Messages".
b. Your application needs to subscribe to the topic for connection threshold events.
In order to figure out the current connection count, you will need to send a SEMP over message bus query.
a. Enable SEMP over Message Bus Show Commands on the VPN.
b. Send a SEMP over Message Bus query. There's an SempGetOverMB sample in the API with detailed instructions to do this. You can also refer to the documentation for details.
<rpc semp-version="soltr/7_2">
<show>
<message-vpn>
<vpn-name>default</vpn-name>
</message-vpn>
</show>
</rpc>
c. Parse the XML based response.
<rpc-reply semp-version="soltr/7_2">
<rpc>
<show>
<message-vpn>
<vpn>
<name>default</name>
<connections-service-smf>3</connections-service-smf>
<connections-service-web>0</connections-service-web>
<connections-service-rest-incoming>0</connections-service-rest-incoming>
<connections-service-mqtt>0</connections-service-mqtt>
<connections-service-rest-outgoing>0</connections-service-rest-outgoing>
<max-connections>10</max-connections>
<max-connections-service-smf>9000</max-connections-service-smf>
<max-connections-service-web>9000</max-connections-service-web>
<max-connections-service-rest-incoming>9000</max-connections-service-rest-incoming>
<max-connections-service-mqtt>9000</max-connections-service-mqtt>
<max-connections-service-rest-outgoing>6000</max-connections-service-rest-outgoing>
... Removed non-relevant portions for clarity ...
</vpn>
</message-vpn>
</show>
</rpc>
<execute-result code="ok"/>
</rpc-reply>
Note that there is a system limit of 10 SEMP poll requests per second, and some topics should not be polled. Refer to the documentation for details.