How to read throughput-sampler sink values in spring-xd

Stream definition:
"http --port=9400 | throughput-sampler"
I've sent the START and END/STOP payloads, but I'm not sure how to read out the throughput values.
They don't show up in the logs, so I'm wondering how to access them. There seems to be no documentation detailing this either.
Appreciate any help!

This looks like a bug; add
log4j.logger.org.springframework.xd.integration.throughput=INFO
to the
xd/config/xd-singlenode-logger.properties
or
xd/config/xd-container-logger.properties
depending on which topology you are using, and you'll see...
09:56:16,358 1.1.0.SNAP INFO pool-10-thread-10 throughput.ThroughputSamplerMessageHandler - Throughput sampled for 2 items: 0/s in 5584ms elapsed time.
The default endMessage is END.
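Assuming the stream above is deployed on the local machine, a sketch of driving the sampler from Python; the URL, payload names, and message count here are illustrative (START/END match the sampler's defaults):

```python
import urllib.request

URL = "http://localhost:9400"  # the http source's port from the stream definition

def make_request(payload: str, url: str = URL) -> urllib.request.Request:
    # Build a plain-text POST carrying one payload.
    return urllib.request.Request(
        url, data=payload.encode("utf-8"),
        headers={"Content-Type": "text/plain"})

def send_window(n: int = 100) -> None:
    # START opens the sampling window, END closes it; with the logger set to
    # INFO the sampler then logs "Throughput sampled for n items: ...".
    for payload in ["START"] + ["item-%d" % i for i in range(n)] + ["END"]:
        urllib.request.urlopen(make_request(payload)).close()
```

Call send_window() while the stream is deployed, then check the container log for the throughput line.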

Related

messages lost due to rate-limiting

We are testing the capacity of a Mail relay based on RHEL 7.6.
We are observing issues when sending an important number of msgs (e.g.: ~1000 msgs in 60 seconds).
While we have sent all the msgs and the recipient has received all the msgs, logs are missing in the /var/log/maillog_rfc5424.
We have the following message in the /var/log/messages:
rsyslogd: imjournal: XYZ messages lost due to rate-limiting
We adapted the /etc/rsyslog.conf with the following settings but without effect:
$SystemLogRateLimitInterval 0 # turn off rate limit
$SystemLogRateLimitBurst 0 # turn rate limit off
Any ideas ?
The error is from imjournal, but your configuration settings are for imuxsock.
According to the rsyslog configuration page you need to set
$imjournalRatelimitInterval 0
$imjournalRatelimitBurst 0
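Putting that together, the relevant /etc/rsyslog.conf fragment would look something like this (directive names per the rsyslog imjournal documentation; placement near the top of the file, before any rules):

```
# imjournal is doing the rate limiting here, so its directives must be used;
# the $SystemLogRateLimit* directives affect only imuxsock.
$imjournalRatelimitInterval 0
$imjournalRatelimitBurst 0
```

Restart rsyslogd afterwards (systemctl restart rsyslog) for the change to take effect.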
Note that for very high message rates you might want to change to imuxsock; as the docs say:
this module may be notably slower than when using imuxsock. The journal provides imuxsock with a copy of all “classical” syslog messages, however, it does not provide structured data. Only if that structured data is needed, imjournal must be used. Otherwise, imjournal may simply be replaced by imuxsock, and we highly suggest doing so.

Traces not getting sampled in jaeger tracing

I'm new to using Jaeger tracing system and have been trying to implement it for a flask based microservices architecture. Below is my jaeger client config implemented in python:
from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name=service,
)
I read somewhere that a sampling strategy is used to decide which traces to sample, especially traces that don't carry any metadata. So, per this config, am I sampling every trace or just a few traces at random? Mysteriously, when I pass random inputs to create spans for my microservices, the spans only show up after 4 to 5 minutes. I'd like to understand this configuration better but haven't been able to.
So, per this config, am I sampling every trace or just a few traces at random?
Using the sampler type as const with 1 as the value means that you are sampling everything.
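As a toy illustration of the difference between the two strategies (this is not the real jaeger-client internals): a const sampler with param=1 keeps every trace, while a probabilistic sampler with, say, param=0.1 would keep roughly 10% of them.

```python
import random

def const_sampler(param):
    # const: the same decision for every trace (param=1 -> sample all, 0 -> none)
    return lambda trace_id: bool(param)

def probabilistic_sampler(param):
    # probabilistic: sample each trace independently with probability `param`
    return lambda trace_id: random.random() < param

sample_all = const_sampler(1)
assert all(sample_all(i) for i in range(1000))  # every trace is kept
```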
Mysteriously, when I pass random inputs to create spans for my microservices, the spans only show up after 4 to 5 minutes. I'd like to understand this configuration better but haven't been able to.
There are several things that might be happening. You might not be closing spans, for instance. I recommend reading the following two blog posts to try to understand what might be happening:
Help! Something is wrong with my Jaeger installation!
The life of a span
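To make the "not closing spans" point concrete, here is a toy model of the reporting pipeline (deliberately not the real jaeger_client API): a span only reaches the reporter once it is finished, and only becomes visible once the tracer is flushed/closed.

```python
import contextlib

class ToyTracer:
    # Mimics the report pipeline: a span is handed to the reporter only
    # when it is finished, and is visible only after the tracer is flushed.
    def __init__(self):
        self._pending, self.reported = [], []

    @contextlib.contextmanager
    def start_span(self, name):
        span = {"name": name, "finished": False}
        try:
            yield span
        finally:
            span["finished"] = True
            self._pending.append(span)  # finish() queues it for reporting

    def close(self):
        # Flush: like tracer.close() in jaeger-client, pushes queued spans out.
        self.reported.extend(self._pending)
        self._pending.clear()

tracer = ToyTracer()
with tracer.start_span("handle-request"):
    pass                        # work happens here; exiting finishes the span
assert tracer.reported == []    # finished but not yet flushed
tracer.close()
assert tracer.reported[0]["name"] == "handle-request"
```

If your code never exits the span (or the process ends before the tracer flushes), the span never shows up in the UI.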

How to get tuple lifecycle time in JStorm

My question: I want to monitor the duration of each tuple in Storm. How should I do this? Are there any demos? I can only count the duration of each bolt's processing.
In both apache-storm and jstorm, you can get the monitoring data and export it to wherever you want.
In storm, use the Metrics Consumer (see http://storm.apache.org/releases/1.0.6/Metrics.html).
In jstorm, use the metrics uploader (see http://jstorm.io/Maintenance/JStormMetrics.html).
There is a jstorm metricsUploader example here: https://github.com/lcy362/StormTrooper/blob/master/src/main/java/com/trooper/storm/monitor/MetricUploaderTest.java
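If you only need end-to-end tuple duration rather than the full metrics pipeline, a common framework-agnostic pattern is to stamp each tuple with its emit time at the spout and compute its age in the final bolt. A minimal sketch of that idea (plain Python; the field layout is an assumption, not a Storm/JStorm API):

```python
import time

def stamp(tuple_values):
    # Spout side: emit the tuple with its emit time appended as an extra field.
    return list(tuple_values) + [time.time()]

def latency_ms(stamped):
    # Final bolt: the tuple's end-to-end duration is "now" minus the stamp.
    return (time.time() - stamped[-1]) * 1000.0

t = stamp(["word", 1])
time.sleep(0.01)
assert latency_ms(t) >= 9.0  # at least ~10 ms elapsed since emit
```

The same stamped field can then be fed into the Metrics Consumer / metrics uploader as a custom metric.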

Timeout of JMS Point-to-point requests in JMeter does not result in an error

We are using Apache JMeter 2.12 in order to measure the response time of our JMS queue. However, we would like to see how many of those requests take less than a certain time. This, according to the official site of JMeter (http://jmeter.apache.org/usermanual/component_reference.html), should be set by the Timeout property. You can see in the photo below what our configuration looks like:
However, setting the timeout does not result in an error after sending 100 requests. We can see that some of them take apparently more than that amount of time:
Is there some other setting I am missing or is there a way to achieve my goal?
Thanks!
The JMeter documentation for JMS Point-to-Point describes the timeout as
The timeout in milliseconds for the reply-messages. If a reply has not been received within the specified time, the specific testcase fails and the specific reply message received after the timeout is discarded. Default value is 2000 ms.
This times not the sending of the message but the receipt of a response.
The JMeter Point-to-Point source determines whether you have a 'Receive Queue' configured. If you do, it goes through the executor path and uses the timeout value; otherwise it does not use the timeout value.
if (useTemporyQueue()) {
    executor = new TemporaryQueueExecutor(session, sendQueue);
} else {
    producer = session.createSender(sendQueue);
    executor = new FixedQueueExecutor(producer, getTimeoutAsInt(), isUseReqMsgIdAsCorrelId());
}
In your screenshot the JNDI name Receive Queue is not defined, so it uses a temporary queue and does not use the timeout. Whether the timeout should be supported in this case is best discussed on the JMeter forum.
Alternatively, if you want to see request times in percentiles/buckets, please read this Stack Overflow Q/A:
I want to find out the percentage of HTTPS requests that take less than a second in JMeter
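Once you export the sampler results (e.g. from a JTL file), computing "what share of requests finished under a second" is a one-liner. A small sketch, with made-up sample times in milliseconds:

```python
def fraction_under(samples_ms, threshold_ms=1000.0):
    # Share of response times strictly below the threshold.
    return sum(1 for s in samples_ms if s < threshold_ms) / len(samples_ms)

times = [250, 480, 990, 1200, 3100, 640, 880, 1010]
assert fraction_under(times) == 5 / 8  # 62.5% of requests were under 1 s
```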

How to Fix Read timed out in Elasticsearch

I used Elasticsearch-1.1.0 to index tweets.
The indexing process is okay.
Then I upgraded the version. Now I use Elasticsearch-1.3.2, and I get this message randomly:
Exception happened: Error raised when there was an exception while talking to ES.
ConnectionError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)).
Snapshot of the randomness:
Happened --33s-- Happened --27s-- Happened --22s-- Happened --10s-- Happened --39s-- Happened --25s-- Happened --36s-- Happened --38s-- Happened --19s-- Happened --09s-- Happened --33s-- Happened --16s-- Happened
--XXs-- = after XX seconds
Can someone point out on how to fix the Read timed out problem?
Thank you very much.
It's hard to give a direct answer since the error you're seeing might be associated with the client you are using. However, a solution might be one of the following:
1. Increase the default timeout globally when you create the ES client by passing the timeout parameter. Example in Python:
es = Elasticsearch(timeout=30)
2. Set the timeout per request made by the client. Taken from the Elasticsearch Python docs:
# only wait for 1 second, regardless of the client's default
es.cluster.health(wait_for_status='yellow', request_timeout=1)
This lets you override the client's default timeout for an individual request.
Try this:
es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
It might not fully avoid ReadTimeoutError, but it minimizes them.
Read timeouts can also happen when query size is large. For example, in my case of a pretty large ES index size (> 3M documents), doing a search for a query with 30 words took around 2 seconds, while doing a search for a query with 400 words took over 18 seconds. So for a sufficiently large query even timeout=30 won't save you. An easy solution is to crop the query to the size that can be answered below the timeout.
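Cropping the query is straightforward; a minimal sketch (the 30-word limit is just the number from the timing observation above, not a recommendation):

```python
def crop_query(query: str, max_words: int = 30) -> str:
    # Keep only the first max_words words so the search stays under the timeout.
    return " ".join(query.split()[:max_words])

q = "one two three four five"
assert crop_query(q, max_words=3) == "one two three"
```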
For what it's worth, I found that this seems to be related to a broken index state.
It's very difficult to reliably recreate this issue, but I've seen it several times; operations run as normal except certain ones which periodically seem to hang ES (specifically refreshing an index it seems).
Deleting an index (curl -XDELETE http://localhost:9200/foo) and reindexing from scratch fixed this for me.
I recommend periodically clearing and reindexing if you see this behaviour.
Increasing various timeout options may immediately resolve issues, but does not address the root cause.
Provided the Elasticsearch service is available and the indexes are healthy, try increasing the Java minimum and maximum heap sizes: see https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html .
TL;DR: edit /etc/elasticsearch/jvm.options and set -Xms1g and -Xmx1g
You should also check that Elasticsearch itself is healthy. Some shards can be unavailable; here is a good doc about possible reasons for unassigned shards: https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/
