How can I define a JSR 352 chunk step to checkpoint every 10 seconds (and use built-in checkpointing in general)? - websphere-liberty

Is there a simple way using <chunk time-limit="10"...>?
Can I combine a time-limit with my custom CheckpointAlgorithm?

Short Answer:
Probably the simplest way is something like
<chunk time-limit="10" item-count="999999">
where 999,999 is just some big number that will never be hit.
Background
To explain why, here's a more general answer to "how can I use the JSR 352 built-in checkpoints?"
There are two ways to configure checkpoints in JSR 352, corresponding to the two checkpoint policies.
In JSL (XML)
This is the default, "built-in" behavior.
In Java
Controlled by the application's CheckpointAlgorithm implementation, and enabled via checkpoint-policy="custom" in JSL.
Built-in policy is based on item OR time, whichever "hits" first
The fundamental point to understand about the built-in checkpoint policy is that it is based on item count OR time limit, whichever comes first.
Examples
<chunk>
After 10 items (default)
<chunk item-count="25">
After 25 items
<chunk time-limit="10">
After 10 seconds or 10 items (again, the item-count default), whichever comes first.
<chunk time-limit="10" item-count="25">
After 25 items, or 10 seconds, whichever comes first.
<chunk time-limit="10" item-count="999999">
After 999,999 items, or 10 seconds, whichever comes first (so in all but the simplest processing this effectively means after 10 seconds; you could make the number even bigger if necessary).
<chunk checkpoint-policy="custom">
Implemented in your own application code via a CheckpointAlgorithm, using the method isReadyToCheckpoint() (and optionally the timeout methods as well), and referenced like:
<chunk checkpoint-policy="custom">
  <checkpoint-algorithm ref="myCustomCheckpointAlgorithm"/>
Discussion
So the time-limit defaults to '0', which is defined to mean "unlimited time", i.e. "don't checkpoint based on time". The item-count, on the other hand, defaults to '10', and no analogous behavior is defined for an item-count of '0'.
So the best way to checkpoint based on some number of seconds is simply to set the item-count high enough that it never matters, which is typically not hard in real-world applications.
This is example #5 above.
You cannot combine the built-in controls with your "custom" algorithm!
<chunk checkpoint-policy="custom" time-limit="5"> The time-limit is IGNORED!
You can't combine a custom algorithm with either of the built-in checkpoint attributes (these attributes are simply ignored). There is no way to say "checkpoint based on my custom algorithm OR at 5 seconds, whichever comes first."
<chunk checkpoint-policy="custom" item-count="500"> The item-count is IGNORED!
Same as the previous example.
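As a workaround, if you do want "item count OR time limit" semantics together with custom logic, you can fold both checks into the custom algorithm itself. A minimal sketch of that logic follows; in a real application this class would implement javax.batch.api.chunk.CheckpointAlgorithm (the interface wiring is omitted so the example stands alone, and all names are illustrative):

```java
// Sketch: a checkpoint policy that fires on whichever comes first,
// an item-count limit or a time limit, mirroring the built-in rule.
class TimeOrCountPolicy {
    private final int itemLimit;
    private final long timeLimitMillis;
    private int itemsSinceCheckpoint;
    private long chunkStartMillis;

    TimeOrCountPolicy(int itemLimit, long timeLimitSeconds) {
        this.itemLimit = itemLimit;
        this.timeLimitMillis = timeLimitSeconds * 1000L;
    }

    // Corresponds to beginCheckpoint() in the real API: reset the counters
    // at the start of each chunk.
    void beginCheckpoint() {
        itemsSinceCheckpoint = 0;
        chunkStartMillis = System.currentTimeMillis();
    }

    // Corresponds to isReadyToCheckpoint(): called after each item read;
    // returns true as soon as either limit is hit.
    boolean isReadyToCheckpoint() {
        itemsSinceCheckpoint++;
        boolean countHit = itemsSinceCheckpoint >= itemLimit;
        boolean timeHit =
                System.currentTimeMillis() - chunkStartMillis >= timeLimitMillis;
        return countHit || timeHit;
    }
}
```

Wired up via checkpoint-policy="custom" and a <checkpoint-algorithm> element, this gives the "whichever comes first" behavior that the built-in attributes cannot provide alongside a custom policy.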
Caveat
There are some older, obsolete examples floating around (e.g. this article) that incorporate a "commit-interval" attribute and a "time" checkpoint policy. These were NOT incorporated into the final 1.0 specification.
Closing Thoughts
The above solution has a less-than-elegant, "hack" quality to it.
Perhaps the specification should define item-count="0" to mean "never checkpoint based on item count", which would allow a simpler solution. Perhaps this should be considered for a possible 1.1 update.

Related

JFR events start time

I have a custom JFR event. I found that RecordedEvent.getStartTime() is actually a couple of seconds later than the time when the event was really created and committed. So what time does getStartTime() show?
In my case I added the current time to my event and read it while parsing the JFR file. But how can I get it in built-in events, like jdk.ExecutionSample?
There's a field in built-in events, getLong("startTime"), but it gives strange numbers that don't look like the current time in millis. What is it?
By default JFR uses the invariant TSC for taking timestamps (which is not used by System.currentTimeMillis() or System.nanoTime()).
The invariant TSC allows JFR to have very low overhead, but on some CPUs or in some scenarios the clock may drift. You can use the command-line flag:
-XX:-UseFastUnorderedTimeStamps
to get a more accurate clock, at a higher overhead.
The value you get from event.getLong("startTime") is the raw ticks, typically only useful if you want to compare with some other system that uses the same timing mechanism.
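The workaround the asker mentions (carrying an explicit wall-clock field in the custom event) looks roughly like the sketch below; jdk.jfr ships with the JDK, and the event name and field here are illustrative:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// A custom event that carries its own wall-clock field, so a consumer can
// read getLong("wallClockMillis") from the recording instead of relying on
// the TSC-derived start time.
@Name("demo.TimedEvent")
@Label("Timed Event")
class TimedEvent extends Event {
    @Label("Wall Clock Millis")
    long wallClockMillis;
}

class TimedEventDemo {
    // Emit one event stamped with the current wall-clock time.
    static TimedEvent emit() {
        TimedEvent e = new TimedEvent();
        e.begin();
        e.wallClockMillis = System.currentTimeMillis();
        e.end();
        e.commit(); // effectively a no-op unless a JFR recording is active
        return e;
    }
}
```

For the built-in events like jdk.ExecutionSample there is no such field, which is why you are left with getStartTime() (or the raw ticks) as described above.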

Lost Duration while Debugging Apex CPU time limit exceeded

I'm open to posting the code in this section to work through the optimization, but it's a bit lengthy and complex, so instead I'm hoping somebody can assist me with a few debugging questions. My goal is to find out what is causing my Apex CPU Time Limit Exceeded issue.
When using the Debug Log in its basic or normal layout I receive the message
Maximum CPU Time: 15062 out of 10,000 ** Close to Limit
I've optimized and re-wrote various loops and queries several times now, and in each case the number ends up around there, which leads me to believe it is lying to me and that my actual usage far exceeds it. So on my journey I switched the Log Panels of the Developer Console to Analysis, in hopes of isolating exactly which loop, method, or area of the code is giving me a headache.
This leads me to my main question and problem.
Execution Tree, Performance Tree & Executed Units
All show that my durations are UNDER the 10,000 ms allowance. My largest consumption is 3,556.19 ms, used by a wrapper class I created; it is consumed in the constructor method, where a fair amount of logic constructs a fairly complicated wrapper spanning 5-7 custom objects. Even with those 3,000 ms, the remainder of the process shows negligible times, bringing my total to around 4,000 ms. Again, my question is: why am I unable to see or find what is consuming all my time?
Incorrect Iteration Data
In addition, on the Performance tree there is a column showing the number of iterations for each method. I know that my Production Org has 81 objects that would essentially call the constructor for my custom wrapper object, i.e. my constructor SHOULD be called 81 times, but instead it is called 32 times. So my other question is: can I rely on the iteration data in that column? Or does it stop counting at a certain point because it was iterating so many times? It's possible that one of my objects is corrupted or is causing an infinite loop somehow, but I don't want to dig through all the data in search of that conclusion if it's a known issue that the iteration data is inaccurate anyway.
System.Debug in the Production org
The last question is why my System.debug() lines are not displaying in the Developer Console on the production org. I've added several breadcrumbs throughout the code that would help me isolate which objects are making it through and which are not; however, in no layout can I view System.debug messages outside of my Sandbox.
Sorry for the wealth of questions, but I did want to give an honest effort to better understand the debugging process in Salesforce. If this is a lost cause I'm happy to start sharing some code as well, but hopefully some debugging tips can get me to the solution.
It's likely your debug log got truncated; see "Each debug log must be 20 MB or smaller. If it exceeds this amount, you won't see everything you need." in https://trailhead.salesforce.com/en/content/learn/modules/apex_basics_dotnet/debugging_diagnostics
Download the log and search for text similar to "skipped 123456 bytes of detailed log" to confirm; some System.debug statements will simply not show up.
You might have to fine-tune the log levels (don't log validation rules and workflows? don't log every single variable assignment at "FINE" level, etc.). You might have to set all flags to NONE and then track only the one class/trigger you suspect (see https://help.salesforce.com/articleView?id=code_debug_log_classes.htm&type=5 and https://salesforce.stackexchange.com/questions/214380/how-are-we-supposed-to-use-debug-logs-for-a-specific-apex-class-only).
If the log is truncated, it's possible the analysis tools give up (I've had mixed luck with the console, to be honest; sometimes https://apextimeline.herokuapp.com/ is great for an overview, but it will also fail to parse a 20 MB log).
When all else fails, you can load the log into Notepad++ (or any editor of your choice), find the lines related to method entry/exit (you might need a regular-expression search), take these filtered lines to Excel, play with "text to columns", and just look at the timing manually to see if there's a record that causes the spike. It could be record #10 that's the problem; the fact that limits are exhausted on #32 of 81 doesn't mean much. A search like METHOD_ENTRY|METHOD_EXIT together with MyTriggerHandler.onBeforeUpdate could be a good start. But the first thing is to make sure the log is not truncated.
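That manual filtering step can also be scripted. A small sketch (in Java; the sample log lines in the test are invented for illustration) that keeps only the method entry/exit lines:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Keep only METHOD_ENTRY / METHOD_EXIT lines from a debug log, so the
// timings can be eyeballed or pasted into a spreadsheet.
public class LogFilter {
    private static final Pattern METHOD_LINE =
            Pattern.compile("METHOD_ENTRY|METHOD_EXIT");

    public static List<String> methodLines(List<String> logLines) {
        return logLines.stream()
                .filter(line -> METHOD_LINE.matcher(line).find())
                .collect(Collectors.toList());
    }
}
```

From there, splitting each line on '|' gives you the timestamp and method name columns for the "text to columns" step.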

How to schedule individual MedicationRequest Administrations?

I am looking for the best/recommended way to implement the MedicationRequest/MedicationAdministration workflow. Possibilities I have explored are:
Using the MedicationRequest by itself and, at runtime, determining when the dosages should occur and whether they fall within the boundaries of the current shift, or
Using Tasks to create a limited number of upcoming dosage Tasks, or
Using MedicationRequest resources based on the original MedicationRequest to indicate each separate dosage
Pros of option 1:
Smallest storage footprint
Cons of option 1:
Requires most run-time work (have to evaluate timing to determine if dosage is required this shift, more work determining missed dosages)
Pros of option 2:
Common use of Tasks could be used against other Orders (ProcedureRequests, etc.) for a common workflow (e.g. show all Tasks this shift)
Cons of option 2:
The default FHIR SearchParameters do not allow searching on Task.restriction.period (which I believe is how you define the period in which the Task is to be performed).
The only place to link a MedicationAdministration to a Task is supportingInformation, but the field definition ("Additional information (for example, patient height and weight)") doesn't seem appropriate for putting the Task there. One could possibly use Provenance, and use that to link the Task to the MedicationAdministration in eventHistory, but this seems like a stretch.
Pros of option 3:
MedicationRequest.intent has order and instance-order as values. The documentation seems to indicate that this would be a good fit (overall request has intent=order, individual specific dosages would have instance-order)
MedicationRequest has a FHIR-defined search parameter on timing.event that could be used to find events for a specific period.
Cons of option 3:
http://hl7.org/fhir/us/meds/guidance.html#fetching-active-medications-orders states "A MedicationRequest resource query SHALL be all that is required to access the "all active medication orders"." The query example given is GET /MedicationRequest?patient=[id]&status=active{&_include=MedicationRequest:medication}. This hints to me that they expect searches to be done on status rather than on time period. Not really a strong "con" against this approach, but definitely not a "pro" either.
Any advice about the methods used by other implementations would be greatly appreciated.
The general design expectation is that you would create "instance" orders for each administration.
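For option 3, such a child "instance" order might look roughly like the following FHIR JSON sketch (all identifiers and values are illustrative; basedOn points back at the hypothetical parent order, and dosageInstruction.timing.event carries the concrete administration time that the timing.event search parameter can match):

```json
{
  "resourceType": "MedicationRequest",
  "status": "active",
  "intent": "instance-order",
  "basedOn": [{ "reference": "MedicationRequest/parent-order" }],
  "subject": { "reference": "Patient/example" },
  "medicationCodeableConcept": { "text": "example medication" },
  "dosageInstruction": [{
    "timing": { "event": ["2020-01-01T08:00:00Z"] }
  }]
}
```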

GetHBase processor from 1 table in Apache NiFi

gethbase >> execute_script
Hello, I have a problem with the back pressure object threshold when processing data from HBase into a script executed with Jython. If just one processor is running, my queue is always full, because the first processor is faster than the second. I increased the concurrent tasks on the second processor from 1 to 3 or 4, but that produced a new error message:
[screenshot of the error message not included]
Anyone here has a solution?
This might actually increase your work a bit, but I would highly recommend writing Groovy for your custom implementation as opposed to Python/Jython/JRuby.
A couple of reasons for that:
Groovy was built "for the JVM" and leverages/integrates with Java more cleanly.
Jython is an implementation of Python for the JVM. There is a lot of back and forth between Python and the JVM, which can substantially increase the overhead.
If you still prefer to go with Jython, there are still a couple of things that you can do!
Use InvokeScriptedProcessor (ISP) instead of ExecuteScript. ISP is faster because it only loads the script once, then invokes methods on it, rather than ExecuteScript which evaluates the script each time.
Use ExecuteStreamCommand with command-line Python instead. You won't have the flexibility of accessing attributes, processor state, etc. but if you're just transforming content you should find ExecuteStreamCommand with Python faster.
No matter which language you choose, you can often improve performance if you use session.get(int) instead of session.get(). That way if there are a lot of flow files in the queue, you could call session.get(1000) or something, and process up to 1000 flow files per execution. If your script has a lot of overhead, you may find handling multiple flow files per execution can significantly improve performance.
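To illustrate the session.get(int) tip, an ExecuteScript body in Groovy (the language recommended above) might look like this sketch; session and REL_SUCCESS are variables bound by NiFi's scripting processors, and the per-flow-file work is left as a placeholder:

```groovy
// Pull up to 1000 flow files per trigger instead of one at a time.
def flowFiles = session.get(1000)
if (!flowFiles.isEmpty()) {
    flowFiles.each { flowFile ->
        // ... per-flow-file work (transform content, set attributes) ...
        session.transfer(flowFile, REL_SUCCESS)
    }
}
```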

PLC Ladder Logic - memory and processing management

I am beginning with ladder programming, and English is not my first language. A professor of mine once said that I could not put more than one output on the same ladder rung; is that correct? And if so, is it preferable to put the outputs on other rungs or on the same one, to save memory space and processing time?
This completely depends on the vendor providing the ladder logic implementation.
Rockwell RLL (and I'm sure some other vendors') allows OTEs and other actions anywhere in a rung. The output is controlled by the logic condition feeding it; it also (at least for OTEs) passes that value further along the rung, unchanged, to be processed by the rest of the rung.
It is nice style to have only one output per rung. It is more efficient code-wise (and time-wise) to have more than one output per rung, because the outputs can share the rung condition.
I have yet to see a PLC that can't handle multiple outputs on same rung.
Like #franji1 said, he might have said "do not" rather than "you can't". I would generally not recommend multiple outputs on a rung, but sometimes it can be necessary.
He could also have told you not to have the same output in multiple rungs, since the PLC will always honor the last rung.
So let's say you activate O:1.0 in rung 1, but in rung 20 O:1.0 is not active; then this output will never turn on, since the PLC updates outputs after each full scan.
AlwaysON O:0.0
---[]-------------()
AlwaysOFF O:0.0
---[]-------------()
Like here, the last rung would never be true, so O:0.0 will always be false even though it is active in the earlier rung.
If you then swap the rungs around, so the AlwaysON rung with O:0.0 is last, the output would always be active and the AlwaysOFF rung would be redundant.
Hope this helps you out.
If you are using ladder (LAD), then just like T.Nesset said:
AlwaysON O:0.0
---[]-------------()
AlwaysOFF O:0.0
---[]-------------()
O:0.0 will turn "OFF", because the PLC scans the program from top to bottom. If you change it a little, like this:
AlwaysOFF O:0.0
---[]-------------()
AlwaysON O:0.0
---[]-------------()
the result of O:0.0 will be "ON".
In Mitsubishi programming software such as GX Works2, if you create a ladder program in this style you will get a warning after compiling. In Japanese this style is called a "double coil".
If you would like to use this style anyway, insert a jump instruction to separate these networks (or separate them into different blocks) and make sure only one block is running at a time.
Sorry for my poor English.
I have very often seen two parallel outputs in one rung. This is logical when two outputs share all but one input condition, with the differing input condition leading to the two different outputs:
I:0.00 I:0.01 I:0.02 O:1.00
---[/]-----[ ]-----[ ]-----O---
|
| I:0.03 O:1.01
---[ ]-----O---
If two output conditions have completely different logic branches, though, it would not make sense to put them in the same rung. In fact, some vendors (Omron CX-One, for example) will not allow disconnected branches to be placed in the same rung.
Perhaps, as #franji1 mentioned in his comment, your professor was referring to putting the same output in more than one rung.
