Quoting from the MODBUS over Serial Line
Specification and Implementation Guide
V1.02:
Intervals of up to one second may elapse between characters within the
message. Unless the user has configured a longer timeout, an interval
greater than 1 second means an error has occurred. Some
Wide-Area-Network application may require a timeout in the 4 to 5
second range
I would like to know how to set a longer timeout and how this timer configuration is handled in Modbus ASCII. Please share any links available for more information.
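The spec only defines what the timeout means; where you configure it depends entirely on the Modbus ASCII master/slave stack or serial library you use, because the inter-character timer is implemented by the application, not by the wire protocol. Purely as an illustration (the jSerialComm library, the port name, and the 5-second value are my assumptions, nothing here is mandated by the spec), an inter-character timeout around an ASCII frame could look like this:

    import com.fazecast.jSerialComm.SerialPort;

    public class AsciiFrameReader {
        // Inter-character timeout; the spec's default is 1 s, and the quote
        // above suggests 4-5 s for Wide Area Network links.
        private static final int INTER_CHAR_TIMEOUT_MS = 5000;

        public static void main(String[] args) {
            SerialPort port = SerialPort.getCommPort("/dev/ttyUSB0"); // assumed device
            port.setBaudRate(9600);
            // Block each one-byte read for at most the inter-character timeout.
            port.setComPortTimeouts(SerialPort.TIMEOUT_READ_BLOCKING, INTER_CHAR_TIMEOUT_MS, 0);
            if (!port.openPort()) {
                System.err.println("could not open port");
                return;
            }
            StringBuilder frame = new StringBuilder();
            byte[] buf = new byte[1];
            while (true) {
                int n = port.readBytes(buf, 1);
                if (n <= 0) {                     // no character within the timeout
                    if (frame.length() > 0) {
                        System.err.println("inter-character timeout, discarding partial frame");
                        frame.setLength(0);
                    }
                    continue;
                }
                char c = (char) buf[0];
                if (c == ':') frame.setLength(0); // ':' starts a new ASCII frame
                frame.append(c);
                if (c == '\n') {                  // CR LF terminates the frame
                    System.out.println("frame: " + frame.toString().trim());
                    frame.setLength(0);
                }
            }
        }
    }

The specification documents themselves are available from https://www.modbus.org/specs.php.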
I'm working with NiFi and PutDatabaseRecord to insert records into tables. I am simulating the case where the database is down in order to handle the error (to send an email indicating a connection timeout, for example). The problem is that when I disconnect the network cable to simulate the error and turn on PutDatabaseRecord, the flow files do not pass to the failure relationship or to the retry relationship, and the processor keeps sending bulletin error messages continually; it never stops sending messages.
I put 10 seconds in the Max Wait Time property in the hope that after that time the processor would stop throwing errors and send the flow files to the failure relationship, but it does not work.
I think the option is not working the way you expected. See HERE.
Max Wait Time: The maximum amount of time allowed for a running SQL statement, zero means there is no limit. Max time less than 1 second will be equal to zero.
Supports Expression Language: true (will be evaluated using variable registry only)
Since you are using the PutDatabaseRecord processor, it assumes the database connection has already been established. Errors from this processor should be related to the SQL itself, not to connection problems, so a database connection failure is not routed to the failure relationship, I guess.
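To illustrate the distinction, here is a minimal plain-JDBC sketch (the driver, URL, and credentials are placeholders): Max Wait Time, per the quoted description, bounds a running SQL statement, which in JDBC terms is a statement query timeout; a dead network instead fails while establishing the connection, which is a different timeout entirely:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TimeoutKinds {
        public static void main(String[] args) {
            // Connection-level timeout: how long to wait while establishing the
            // connection. Unplugging the network cable exercises this path.
            DriverManager.setLoginTimeout(10);
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db-host:5432/mydb", "user", "secret")) {
                try (Statement stmt = conn.createStatement()) {
                    // Statement-level timeout: how long one SQL statement may run
                    // on an already-established connection. This is the kind of
                    // limit Max Wait Time describes.
                    stmt.setQueryTimeout(10);
                    stmt.execute("SELECT 1");
                }
            } catch (SQLException e) {
                // A dead network surfaces here as a connect failure, not as a
                // query timeout.
                System.err.println("database error: " + e.getMessage());
            }
        }
    }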
I have a performance test in JMeter.
I have some SSH listeners that should retrieve CPU and RAM usage. I want a clear explanation of the delay JMeter uses to gather the listener values while the test is running.
Is it possible for the user to set that delay value? If yes, what is the minimum value that JMeter supports?
The current data gathering per listener seems a bit random, which isn't good at all. Currently I don't get a similar number of entries in the results, although each listener runs the same number of commands.
I tried setting the value of jmeter.sshmon.interval in jmeter.properties to 100 and to 3000 ms, but that didn't help.
The measurements I took gave the following:
Remark 1:
* CPU CSV usage file has 1211 entries
* RAM CSV usage file has 1201 entries
* Number of used threads CSV file has 1276 entries
Although in my test plan the three listeners have exactly the same number of SSH commands (15) and they are set at the same level in the test plan.
Remark 2:
The duration to execute each set of SSH commands to retrieve CPU usage values is variable. I used timestamp differences to measure it, and the durations differ remarkably from run to run.
Remark 3:
When I compare the duration to execute the set of SSH commands for CPU usage with that for RAM usage, I see a big difference in duration.
I found this link from the plugin owner: https://github.com/tilln/jmeter-sshmon, but it didn't resolve my issue.
Thanks
As per the link you provided:
Samples are collected by a single thread, so if a command takes more than an insignificant amount of time to run, the frequency of sample collection will be limited. Even more so if more than one command is sampled. In this case, use a separate monitor for each sample command.
So basically, on each sampling pass JMeter has to execute 45 SSH commands, and according to the above explanation some results might be discarded.
So I would suggest the following workarounds:
1. Use a separate Thread Group with a single sampler which does nothing and has a fixed execution time, i.e. the Dummy Sampler. In this case you can control the interval by adding a Constant Timer and defining the desired monitoring poll interval (see the sketch after this list).
2. Go for the JMeter PerfMon Plugin, which doesn't require establishing a connection and executing a command; only plain metrics (numbers) are passed via TCP or UDP channels. The approach from point 1 is still highly recommended.
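To see why a single collector thread caps the sampling rate, here is a small illustrative Java sketch (the 1 s interval and the 2.5 s command duration are made-up numbers): because the task outruns the requested period, samples are effectively taken every 2.5 s, so fewer entries accumulate than you would expect, which matches the mismatched entry counts you observed:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SingleThreadMonitor {
        public static void main(String[] args) {
            // One collector thread, as jmeter-sshmon uses.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Request one sample per second...
            scheduler.scheduleAtFixedRate(() -> {
                long start = System.currentTimeMillis();
                runSshCommand();
                System.out.println("sample took " + (System.currentTimeMillis() - start) + " ms");
            }, 0, 1, TimeUnit.SECONDS);
        }

        // Stand-in for one remote SSH command.
        private static void runSshCommand() {
            try {
                // ...but each command takes 2.5 s, so the next sample cannot
                // start on time; the effective interval becomes ~2.5 s.
                Thread.sleep(2_500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }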
I have a Tomcat application running on an 8-core system. I observed that when I changed the maxThreads count from 16 to 2, there was a dramatic performance improvement at a throughput of 13 req/sec.
So I started printing the active thread count. It seems that when Tomcat's maxThreads was set to 2, the active threads averaged 8, so basically 8 threads operating on 8 cores, the best possible outcome.
However, when I increased the throughput to 30-40 req/sec, I saw requests queueing up. So what happened here is that, with maxThreads at only 2, requests started piling up.
And when I then set maxThreads to a very high value like 10k, I saw the JVM taking long again due to context switching.
My question is: is there any property in Tomcat where I can specify how many requests are picked up and processed in parallel by the JVM?
The acceptCount property won't help, because it only defines the threshold for queued-up requests.
There is another property called acceptorThreadCount, which is defined as the number of threads used to accept connections. Is this the property I need to tune, or is there any other property, or is there anything I am missing here?
According to the Connector documentation for maxThreads (I'm assuming that this is where you changed your maxThreads configuration):
The maximum number of request processing threads to be created by this
Connector, which therefore determines the maximum number of
simultaneous requests that can be handled. If not specified, this
attribute is set to 200. If an executor is associated with this
connector, this attribute is ignored as the connector will execute
tasks using the executor rather than an internal thread pool. Note
that if an executor is configured any value set for this attribute
will be recorded correctly but it will be reported (e.g. via JMX) as
-1 to make clear that it is not used.
There's no problem (quite the opposite) with setting the thread count higher than the number of available cores, as not every core is always busy (quite often threads are waiting for external input, e.g. data from a database).
In case I've missed the point and you changed a different maxThreads configuration, please clarify. On the other hand, your question is about the configuration that specifies how many requests are handled in parallel: if you did mean a different maxThreads, then Tomcat's default is 200, and it can be changed in the Connector's configuration (or, as the documentation says, with an Executor).
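For completeness, here is a minimal embedded-Tomcat sketch showing where these knobs live (the port and values are arbitrary examples; in a standalone Tomcat you would set the same attributes on the Connector element in server.xml):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedTomcatExample {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);
            Connector connector = tomcat.getConnector();
            // Request processing threads: the number of requests handled in parallel.
            connector.setProperty("maxThreads", "16");
            // Queue length for connections that arrive while all threads are busy.
            connector.setProperty("acceptCount", "100");
            tomcat.start();
            tomcat.getServer().await();
        }
    }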
What is the behavior of read and write timeouts in OkHttp?
Is the timeout exception triggered when the whole request exceeds the timeout duration, or when the socket doesn't receive (read) or send (write) any packet for that duration?
I think it is the second behavior, but could someone clarify this?
Thanks in advance.
The timeouts are triggered when you block for too long. On read that occurs if the server doesn't send you response data. On write it occurs if the server doesn't read the request you sent. Or if the network makes it seem like that's what's happening!
Timeouts are continuous: if the timeout is 3 seconds and the response is 5 bytes, an extreme case might succeed in 15 seconds, just as long as the server sends something every 3 seconds. In other words, the timeout is reset after every successful I/O.
Okio’s Timeout class also offers a deadline abstraction that is concerned with the total time spent.
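As a concrete illustration (OkHttp 3.x style; the URL and the values are arbitrary), the per-operation timeouts are configured on the client builder:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class OkHttpTimeouts {
        public static void main(String[] args) throws IOException {
            OkHttpClient client = new OkHttpClient.Builder()
                    .connectTimeout(10, TimeUnit.SECONDS)
                    // Read/write timeouts bound each individual blocking I/O
                    // operation, not the call as a whole; any successful read
                    // or write resets them.
                    .readTimeout(3, TimeUnit.SECONDS)
                    .writeTimeout(3, TimeUnit.SECONDS)
                    .build();

            Request request = new Request.Builder()
                    .url("https://example.com/") // placeholder URL
                    .build();

            try (Response response = client.newCall(request).execute()) {
                System.out.println(response.code());
            }
        }
    }

Newer OkHttp versions also expose a whole-call timeout (callTimeout on the builder) that behaves like the deadline described above.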
I have a nodejs -> Kafka -> Storm -> Mongo pipeline deployed on Ubuntu Linux. Everything was normal originally. Then I changed a method in the Storm worker which makes the worker process messages very slowly, around 1 minute per message, and I noticed the same message being sent again and again from Storm. When I reverted to the original method, everything was fine (the original method's processing time is 90 ms per message).
I guess this is Storm's reliability mechanism coming into play: when a message is not acknowledged, or times out, Storm sends the message again.
If my guess is right, how do I configure this timeout?
If my guess is wrong, why is the same message sent twice or three times?
You can set the timeout via configuration parameter Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS. See https://storm.apache.org/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_MESSAGE_TIMEOUT_SECS
The default value is 30 seconds, see defaults.yaml here: https://github.com/apache/storm/blob/master/conf/defaults.yaml
# maximum amount of time a message has to complete before it's considered failed
topology.message.timeout.secs: 30
When a tuple fails, it should show up in the Storm UI and should be logged, too (maybe you need to adjust the log level). So you can double-check whether a tuple times out or not.
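If you would rather set it in code than in the yaml, the same value is exposed on the Config object at submission time (the topology name and the 120-second value below are just examples; pick something comfortably larger than your worst-case per-message processing time):

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class TimeoutConfigExample {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // ... set up your spouts and bolts here ...

            Config conf = new Config();
            // Equivalent to topology.message.timeout.secs: a tuple must be
            // fully acked within this window or it is replayed.
            conf.setMessageTimeoutSecs(120);

            StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
        }
    }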