In the queue manager object there is a parameter under the Log section that defines the log write integrity. What is the difference between SingleWrite, DoubleWrite and TripleWrite in IBM MQ log write integrity? Please explain in detail.
LogWriteIntegrity is all about how the queue manager logger writes partial 4KB pages. Unless you are absolutely certain that your file system provides atomically written pages under all circumstances, you should leave it at the default setting of TripleWrite. The option to set anything other than TripleWrite only exists as a possible performance enhancement. However, since partial pages should be rare on a queue manager with a good amount of concurrent work going on, it is not a big area for performance improvement; a better way to improve the performance of your queue manager is to increase concurrency rather than take on the risks associated with changing this setting.
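For reference, on distributed platforms this setting lives in the Log stanza of the queue manager's qm.ini file; a minimal sketch of what that stanza looks like with the default value (other attributes of the stanza omitted):

    Log:
       LogWriteIntegrity=TripleWrite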
There is a very useful blog post from MQ Development that you should read. You can find it here: LogWriteIntegrity.... should I pick SingleWrite or TripleWrite?
Related
Mirth Connect is software designed to handle message flows, with built-in support for HL7 messages in particular, which is why it is widely used for interfacing in healthcare applications. Over the years I have seen Mirth experience performance issues, primarily due to messages building up over time and in scenarios where it receives a heavy message load in quick succession.
Mirth has a channel-based architecture, so it would be ideal if there were some way to performance test a Mirth channel and get JMeter statistics for it. That way we could gather the information needed to optimize the channel transformers and also to set the purge routines accordingly.
However, there was little to no information on the Internet about this area, that is, how one can use JMeter to test a Mirth channel. A team in Sri Lanka did some research in this area back in 2013, and I found their findings and achievements below:
http://pragmatictestlabs.com/2016/10/09/performance-testing-healthcare-application-hl7-jmeter/
However, this is very specific: the output in their case was a JSON object which they extracted, whereas in Mirth the output can take various forms, so there needs to be a more general way to do this. An important takeaway is that the input side is general: we can use JMeter to generate HL7 messages and pass them to Mirth, which is great. The open question is how to capture the response in a general way. It would be ideal if there were a way to read the Mirth Dashboard through JMeter; all the output statistics are there, so it is just a matter of reading them.
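One possibility might be to read the same numbers the Dashboard shows via Mirth Connect's REST Client API and poll them from a JMeter JSR223 or HTTP sampler. The following Java sketch is only an assumption of how that could look: the /api/channels/{channelId}/statistics path and basic authentication are taken from Mirth Connect 3.x and may differ in other versions; the host, port, channel ID and credentials are placeholders.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    // Hedged sketch: poll a Mirth Connect 3.x channel's statistics over the REST Client API.
    // The endpoint path, port and credentials are assumptions and may need adjusting per installation;
    // Mirth's default self-signed certificate may also require trust-store configuration for HTTPS.
    public class MirthStatsProbe {
        public static void main(String[] args) throws Exception {
            String host = "https://mirth.example.local:8443";          // hypothetical server
            String channelId = "00000000-0000-0000-0000-000000000000"; // hypothetical channel ID
            URL url = new URL(host + "/api/channels/" + channelId + "/statistics");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Accept", "application/xml"); // response carries received/sent/errored counts

            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                // A JMeter JSR223 sampler could parse the counts out of this response
                // and record them as sample variables for throughput calculations.
                System.out.println(body);
            }
        }
    }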
I have an application where Mirth reads HL7 messages, both ADT and RDE, creates a text file with the appropriate content, and drops it to a shared location. The application then reads the files and shows the information to the user.
I wish to do two performance tests here:
1. Measure how much time the complete system takes, from the arrival of a message to its information being available to the user, and how that varies with load.
2. Measure how much time the channel itself takes and how that changes as the load increases.
I can do the first one because I can generate HL7 messages using JMeter and I can get JMeter to read the output in the application or the database. The problem is with the second: can I do this in a general way?
You asked for suggestions, so I'm going to share my general strategy for performance testing Mirth channels. I suspect that this won't be a complete answer to your question, and I might not be telling you anything you don't already know, but I'm hoping this will help you find an answer that you are comfortable with.
For several reasons, try not to spend too much time "testing the complete system":
Firstly, testing the entire system necessarily includes testing low-level configuration like the number of CPU cores, the NICs being used in the box, and kernel level software like the TCP/IP stack. You don't usually have any control over these things, so you can't optimize them in any way.
Secondly, the performance of the entire system is going to be heavily dependent on whatever ancillary code is running on the box. If a sysadmin decides to 'nice' my Mirth process down, or to use that box to also host a SQL server, that will have an impact on the system that I (again) have no control over.
Thirdly and most frankly, I find that the "performance of an entire system" is something that management asks about during system setup so they can get a cost estimate; but they know that they're only getting an estimate. You do your best to use test metrics to give a good guess for the initial hardware provisioning, but everyone knows that it's really the production performance metrics that will drive later provisioning costs.
Make sure that you build your channels for testability. I find that it's much easier to test a channel when the source and destination can be changed to "Channel Reader" and "Channel Writer" without changing message handling. One way to look at this is that you're not going to overhaul Mirth's MLLP stack or Java's TCP stack, so just eliminate these things from your testing.
I keep a source of useful test messages. I have a couple of files on a network drive that have around a hundred messages that test for nasty edge cases that I've run into over the years on my HL7 interfaces. I wrote a small Mirth channel that reads these in from a file and spews out copies as fast as it can. By turning on "Queueing" on the destination side of that channel, I can queue up a bajillion test messages that are ready to send to the channel I want to test. In the past I took the time to build a test interface that acted like a fake EMR to spew out randomly constructed messages, but there didn't seem to be any advantage over just spewing copies of the same messages from my test files.
Finally, and most importantly, it's critical that you measure the performance of your test instance using the same metrics that you'll use to measure the performance of your production instance. If the sole production metric you care about is 'messages per second', then that's what you need to measure on your test box. If memory footprint is a concern in production, then you need to measure memory usage in your test environment as well. When you make a change to your test instance that worsens an important metric by 10%, you'll need to make sure your management is aware before you push that change to production.
Note that getting some of these metrics can be tricky, since Mirth doesn't include good tools to monitor its own performance. The Mirth dashboard is a good place to keep an eye on errors or crashes, but it's not a great place to find performance data. During my testing I make sure that I use whatever resource monitoring tool the sysadmins will be using to monitor the production instance. Beyond that, I use a manual process to test performance: if I want to count messages per second, I send through a batch of messages and look at the timestamps of the first and last messages. If I want to get an idea of the CPU load of a Mirth channel, I use the Windows Performance Monitor or the POSIX 'top' command.
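The "timestamps of the first and last messages" arithmetic is simple enough to script once you pull the values out of the channel's message log; a small illustrative Java helper with placeholder numbers:

    import java.time.Duration;
    import java.time.LocalDateTime;

    // Illustrative only: compute messages/second from the timestamps of the first
    // and last messages in a test batch, as described above. Values are placeholders.
    public class ThroughputCalc {
        public static void main(String[] args) {
            int batchSize = 10000;                                            // messages sent in the batch
            LocalDateTime first = LocalDateTime.parse("2019-03-01T10:00:00"); // timestamp of first message
            LocalDateTime last  = LocalDateTime.parse("2019-03-01T10:00:40"); // timestamp of last message

            long elapsedSeconds = Duration.between(first, last).getSeconds();
            double messagesPerSecond = (double) batchSize / elapsedSeconds;
            System.out.printf("%d messages in %d s = %.1f msg/s%n",
                    batchSize, elapsedSeconds, messagesPerSecond);
        }
    }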
I found that it's possible to automatically extend Liferay's session so that it doesn't expire until you close your browser. Are there any limitations or disadvantages to such an approach? Any performance degradation or load issues?
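For reference, the mechanism I found appears to be driven by portal properties; a sketch of what I believe the relevant portal-ext.properties entries are (assuming the session.timeout.auto.extend property from Liferay 6.x - names may differ in other versions):

    # session length in minutes (default)
    session.timeout=30
    # ping the server from the browser so the session is kept alive until the browser is closed
    session.timeout.auto.extend=true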
As with any abstract question about hypothetical performance impact (or premature optimization), this question is basically unanswerable - but here are some criteria:
Naturally, pinging the server in order to extend a session will incur some extra load - if that results in a performance decrease, you'll most likely have a highly congested installation in the first place. If your server is bored all day, the extra ping won't bring it down.
You may or may not have custom applications running in your installation that store data in the user's session. If those are a few bytes (like Liferay does, e.g. the currently logged in user's information): There's probably no degradation. If you store 1MB of information per session (in your own custom apps - Liferay doesn't do this), things might differ: Just multiply your session storage size by the number of concurrent users that you expect. In case this use of memory indicates a problem: Make your custom apps use the session less - it's bad style anyway.
Will your particular installation suffer from any degradation? Measure. There's no way around this.
From a system maintenance point of view: If you're running a cluster and want to take individual machines out of the load balancer: Artificially extending sessions might indicate that a machine still has sessions open, even though they're mostly on unattended browsers - you'll get inflated numbers and it takes longer to bring machines down when you need to wait for the session count to come close to zero.
Does anyone know how well tested LevelDB is and what is its status for use in production? It's a relatively new library and when I checked the source code it didn't appear to be handling errors too well. Does anyone use LevelDB in production and can comment on my question?
We use LevelDB on our website, but wrapped in SSDB (https://github.com/ideawu/ssdb), a LevelDB network server with hash/zset data type support. Our SSDB instance serves 100 million queries per day.
LevelDB has a lot of high-visibility problems (https://github.com/bitcoin/bitcoin/issues/2770), and the code is so poorly written that a bounty was needed to find a fix (https://bitcointalk.org/index.php?topic=337294.0;all). The leveldb discussion group is predominantly bug reports about very fundamental database functionality that fails to work as advertised (https://groups.google.com/forum/#!forum/leveldb) - for example, "snapshots" aren't actually snapshots and can be tainted by subsequent writes (https://groups.google.com/forum/#!topic/leveldb/IAKJaL2zqZM), etc.
On the date that this question was asked, LevelDB was certainly NOT production ready, and anyone who thought so was delusional. The code quality is abysmal, as confirmed by independent developers (https://twitter.com/rescrv/status/406106256890286080).
One place it is used in a production environment is the Bitcoin project. Within Bitcoin, its usage is critical for the security of the platform. See the release notes for Bitcoin-Qt 0.8.0.
How do you qualify "relatively new" as it was out in 2011?
Can you please give more detail on "not handling errors too well"?
LevelDB is used as a backend in Riak and HyperDex, which have both customised it to improve throughput under huge loads. There was a great video from Ricon East 2013 explaining the Riak changes made by Basho (taken down at some point prior to 2019-03).
Note that RocksDB is another major fork, by Facebook, which is recommended for server-side use. The history of its fork from LevelDB is on Wikipedia. You can read about how RocksDB handles errors on this page:
Currently in RocksDB, any error during a write operation (write to WAL, Memtable Flush, background compaction etc) causes the database instance to go into read-only mode by default and further user writes are not accepted....
Call DB::Resume() to manually resume the DB and put it in read-write mode. This function will clear the error, purge any obsolete files, and restart background flush and compaction operations. At present, it only supports resuming from background errors that happen during compaction. In the future, we will add more cases.
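For completeness, here is a hedged sketch of what a failed write looks like from Java via RocksJava (org.rocksdb). The behaviour described above comes from RocksDB's own documentation for the C++ API; whether your RocksJava version exposes an equivalent of DB::Resume() needs checking against its javadoc, so this sketch only reports the failure.

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    // Hedged sketch of handling a write error through RocksJava. After a background error
    // the instance refuses further writes, as quoted above; recovery is via the C++ DB::Resume()
    // (or a reopen) - the Java binding may or may not expose a resume call in your version.
    public class RocksWriteErrorDemo {
        public static void main(String[] args) {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/rocksdb-demo")) {

                db.put("key".getBytes(), "value".getBytes());

            } catch (RocksDBException e) {
                // The Status carries the reason (e.g. an IOError during WAL write or compaction).
                if (e.getStatus() != null) {
                    System.err.println("Write failed: " + e.getStatus().getCode() + " / " + e.getMessage());
                } else {
                    System.err.println("Write failed: " + e.getMessage());
                }
                // At this point, per the quoted documentation, the DB is read-only until
                // it is resumed or reopened.
            }
        }
    }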
I am implementing a monitoring and administrative MQ API using the WebSphere MQ Java PCF (Programmable Command Format) library. What I would like to know is whether the PCFAgent and/or PCFMessageAgent classes are thread safe. The documentation does not make it clear [to me].
If not, then I have 2 choices:
1. Create a pool of agents
2. Create (and disconnect) agents on demand.
Any insight into this issue is appreciated.
Cheers.
The important information you seek is probably on this page:
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.csqzaw.doc%2Fja11160_.htm
The main issue you will see is that the MQQueueManager object (that you either pass in, or is created for you) cannot really do 2 things at once on a single connection.
So if you have one agent sitting on a get-with-wait, waiting for a response to a big query (say, getting full details for thousands of queues), nothing else can be done using that connection until the reply comes back.
Connect/disconnect is the biggest overhead when talking to MQ, so if you need multi-threaded access I would go with option 1; otherwise you'll pay a big performance penalty waiting for a connect each time.
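Since the question mentioned pooling, here is a rough sketch of option 1 under stated assumptions: it uses the PCFMessageAgent constructor that creates its own client connection (the host, port and channel are placeholders), and the imports follow the newer client jars (com.ibm.mq.headers.pcf and com.ibm.mq.constants; older releases used com.ibm.mq.pcf and the older constants classes).

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.constants.CMQCFC;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    // Rough sketch of "option 1": a fixed pool of PCFMessageAgent instances so that each
    // thread borrows its own agent (and therefore its own queue manager connection) instead
    // of sharing one agent across threads. Host, port and channel are placeholders.
    public class PcfAgentPool {
        private final BlockingQueue<PCFMessageAgent> pool;

        public PcfAgentPool(String host, int port, String channel, int size) throws Exception {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                pool.add(new PCFMessageAgent(host, port, channel)); // one client connection per agent
            }
        }

        /** Borrow an agent, run one PCF request, and always return the agent to the pool. */
        public PCFMessage[] send(PCFMessage request) throws Exception {
            PCFMessageAgent agent = pool.take();          // blocks if all agents are busy
            try {
                return agent.send(request);
            } finally {
                pool.put(agent);
            }
        }

        /** Disconnect all agents (assumes none are currently borrowed). */
        public void shutdown() throws Exception {
            for (PCFMessageAgent agent : pool) {
                agent.disconnect();
            }
        }

        // Example use: inquire all local queues through one of the pooled agents.
        public static void main(String[] args) throws Exception {
            PcfAgentPool agents = new PcfAgentPool("mqhost.example", 1414, "SYSTEM.DEF.SVRCONN", 4);
            try {
                PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
                request.addParameter(CMQC.MQCA_Q_NAME, "*");
                request.addParameter(CMQC.MQIA_Q_TYPE, CMQC.MQQT_LOCAL);
                PCFMessage[] responses = agents.send(request);
                System.out.println("Got " + responses.length + " responses");
            } finally {
                agents.shutdown();
            }
        }
    }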
I have an issue where I need to monitor how long it takes for WebSphere to process a request. Specifically, I need to know how much time is spent in the "application world", that is, the time spent processing code in the EAR file.
I can't just compute response_time - request_time, because that includes time spent in the container, or what I call the "WebSphere world". I need to know the time spent only in the EAR file.
Is there some performance setting I can toggle in WebSphere so this information gets logged to the server system log file? The application does not use log4j.
I am using WebSphere 6.1.
Take a look at the PMI interface under the WAS admin console. It provides some performance metrics -- not the prettiest or easiest interface, but it might provide what you're looking for.
A monitoring plugin is often used to do this. My company uses Introscope via a WAS JBM plugin, and it provides a better interface than PMI for viewing the performance data. Of course, it isn't free, but there may be free or cheap alternatives that are better than PMI.
WebSphere has something called "request metrics"
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.express.doc/info/exp/ae/tprf_rqenable.html
It gives you the ability to log broken-down request execution times at different levels. As you might expect with such monitoring, it's easy to collect too much data, so there is an option to filter events based on additional criteria like Java package namespace, EJB name, URI, etc.
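If request metrics or PMI turn out to be too coarse for the "time inside the EAR" question, one low-tech alternative (not WebSphere-specific, sketched here against the Servlet 2.x API that WebSphere 6.1 supports) is a filter packaged inside the application that times only the application's own processing, i.e. from the point the container hands the request to the application until the application returns it:

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Sketch of a timing filter packaged with the application: it measures the time between
    // the container handing the request to the application and the application returning it.
    // Declared in web.xml with a <filter> / <filter-mapping url-pattern="/*"> entry.
    public class AppTimingFilter implements Filter {

        public void init(FilterConfig config) throws ServletException {
            // no configuration needed for this sketch
        }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            long start = System.currentTimeMillis();
            try {
                chain.doFilter(request, response); // time spent in servlets/JSPs and later filters
            } finally {
                long elapsed = System.currentTimeMillis() - start;
                String uri = (request instanceof HttpServletRequest)
                        ? ((HttpServletRequest) request).getRequestURI() : "?";
                // WAS 6.1 routes System.out to SystemOut.log; a logging framework would also work.
                System.out.println("app-time " + uri + " " + elapsed + "ms");
            }
        }

        public void destroy() {
        }
    }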