I got my endpoints/connectors working. Now, as part of my application, I need to look up some configuration data from the database...
So as soon as I start my application in Mule I want to load data from the database and store it in memory, and I want it to refresh the data every few minutes...
Then my main business logic can look up the data in cache memory rather than hit the DB all the time...
You can make use of the Mule registry to store this information. Say you want to refresh the data every 10 minutes. Below are the steps to implement; steps 1 and 2 are part of the same flow.
1. Create a database component inside a poll component with a frequency of 10 minutes.
<poll doc:name="Poll">
<fixed-frequency-scheduler frequency="10" timeUnit="MINUTES"/>
<!--Place db component here-->
</poll>
2. Once step 1 is complete, store the value retrieved in step 1 in the Mule registry. Below is example code to set a key/value pair using Groovy.
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[muleContext.getRegistry().registerObject('Key1', new String('Value1'))]]></scripting:script>
</scripting:component>
3. Access the value stored in the Mule registry anywhere within the application where the MuleContext is available, using MEL.
<set-variable variableName="keyVar" value="#[app.registry.get('Key1')]" mimeType="text/plain" doc:name="Variable"/>
Now the value from the database is stored in the flow variable keyVar. The business logic always reads the value from keyVar, which is updated periodically.
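If you need the same value from Java code rather than MEL, here is a minimal sketch of a Mule 3.x Java component doing the lookup (the class name ConfigLookup is just an illustration; 'Key1' matches the key registered in the Groovy step above):
import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;

// Sketch only: reads the registry entry that the poll flow refreshes every 10 minutes
public class ConfigLookup implements Callable {

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        // lookupObject returns whatever was last registered under 'Key1'
        String cachedValue = eventContext.getMuleContext()
                .getRegistry()
                .lookupObject("Key1");
        return cachedValue; // becomes the message payload
    }
}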
I have a reactive Quarkus app with hibernate-panache-reactive. The problem is that it behaves differently when I run it as a Java app versus a native app.
The app:
1. loads a lot of data from a MySQL DB via hibernate-panache-reactive
2. builds a graph based on the loaded data
3. runs a time-consuming algorithm on the graph
4. loads some more data from the DB based on the results returned from 3)
So initially the code looked something like this:
GraphProcessor graphProcessor = createInitialProcessor();
return Uni.createFrom().item(graphProcessor)
// 1) loading of initial data
.onItem().transformToUni(this::loadDataViaPanaceReactive1)
.onItem().transformToUni(this::loadDataViaPanaceReactive2)
.onItem().transformToUni(this::loadDataViaPanaceReactive3)
// 2) building of graph
.onItem().transform(graphProcessor::processLoadedData)
.onItem().invoke(graphProcessor::loadingComplete) //sync
// 3) running time consuming algorithm on graph
.onItem().transformToMulti(this::runTimeConsumingTask)
.onItem().invoke(this::prepareDBQueries)
// 4) load more data from DB
.onItem().transformToUniAndConcatenate(this::loadMoreData1)
.onItem().transformToUniAndConcatenate(this::loadMoreData2)
.onItem().transformToUniAndConcatenate(this::transformToPublicForm)
.onFailure().invoke(log::error);
That worked fine when run as a Java app, but when I tried to run it as a native app it first complained that the computations in 2) and 3) were taking too long and were blocking the calling thread.
I fixed that by using
.emitOn(Infrastructure.getDefaultWorkerPool())
between 1) and 2).
This time I got another error:
java.lang.IllegalStateException: HR000069: Detected use of the
reactive Session from a different Thread than the one which was used
to open the reactive Session - this suggests an invalid integration;
original thread: 'vert.x-eventloop-thread-0' current Thread:
'vert.x-eventloop-thread-1'
I've fixed that by inserting
.emitOn(Infrastructure.getDefaultExecutor())
between 3) and 4):
GraphProcessor graphProcessor = createInitialProcessor();
return Uni.createFrom().item(graphProcessor)
// 1) loading of initial data
.onItem().transformToUni(this::loadDataViaPanaceReactive1)
.onItem().transformToUni(this::loadDataViaPanaceReactive2)
.onItem().transformToUni(this::loadDataViaPanaceReactive3)
// 2) building of graph
.emitOn(Infrastructure.getDefaultWorkerPool()) // Required for native mode
.onItem().transform(graphProcessor::processLoadedData)
.onItem().invoke(graphProcessor::loadingComplete)
// 3) running time consuming algorithm on graph
.onItem().transformToMulti(this::runTimeConsumingTask)
.onItem().invoke(this::prepareDBQueries)
.emitOn(Infrastructure.getDefaultExecutor()) // Required for native mode
// 4) load more data from DB
.onItem().transformToUniAndConcatenate(this::loadMoreData1)
.onItem().transformToUniAndConcatenate(this::loadMoreData2)
.onItem().transformToUniAndConcatenate(this::transformToPublicForm)
.onFailure().invoke(log::error);
That worked when run in native mode, but now when I run it in Java I get the same exception (Detected use of the reactive Session from a different Thread than the one which was used to open the reactive Session).
The emitOn(Infrastructure.getDefaultExecutor()) should have switched back to the original thread.
The odd thing is that this exception is not thrown every time I hit the app.
So what am I doing wrong here? What is the best way to handle time-consuming tasks and then having to do some more DB queries afterwards?
You could use .runSubscriptionOn(Executor), but then I would need to switch back to the original thread for part 4) again.
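For illustration, this is roughly what that .runSubscriptionOn variant would look like (same methods as above; it does not by itself solve the thread-affinity problem for part 4):
GraphProcessor graphProcessor = createInitialProcessor();
return Uni.createFrom().item(graphProcessor)
    // 1) - 3) as in the original pipeline
    .onItem().transformToUni(this::loadDataViaPanaceReactive1)
    .onItem().transform(graphProcessor::processLoadedData)
    .onItem().transformToMulti(this::runTimeConsumingTask)
    // subscribe to the upstream on the worker pool instead of the event loop
    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
    // 4) would still need to get back to the thread that opened the reactive Session
    .onItem().transformToUniAndConcatenate(this::loadMoreData1)
    .onFailure().invoke(log::error);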
Thanks for your help.
I have been writing a Spring Batch job in which I have to perform some error handling.
I know that Spring Batch has its own way of handling errors and restarts.
However, when the batch fails and restarts, I want to pass my own values, conditions and parameters (for the restart) which need to be applied before starting/executing the first step.
So, is it possible to write such a custom restart in Spring Batch?
UPDATE 1: (Providing a better explanation for the above question.)
Let's say that the input to my reader in Step 1 is in the following format:
Table with the following columns:
CompanyName1 -> VehicleId1
CN1 -> VID2
CN1 -> VID3
.
.
CN1 -> VID30
CN2 -> VID1
CN2 -> VID2
.
.
CNn -> VIDn
The reader reads this table row by row with a chunk size of 1 (so in this case the row retrieved will be CN -> VID), processes it, and writes it to a File object.
This process goes on until all the CN1-type data has been written into the File object. When the reader sends a row with a company name of type CN2, the File object that was created earlier (for company name CN1) is stored in a remote location. Then the process of File object creation continues for CN2 until we encounter CN3, in which case the CN2 File object is sent for storage to the remote location, and the process continues.
Now, once you understand this, here's the catch.
Let's say the writer is currently writing the data for company name 2 (CN2) and vehicle ID VID20 (CN2 -> VID20) into the File object. Then, for some reason, we have to stop the job, or the job fails. In that case, the instance that will be saved is CN2 -> VID20. So, the next time the job runs, it will start from CN2 -> VID20.
As you might have guessed, the 19 entries before CN2 -> VID20 that were written into the File object were deleted permanently when the File object got destroyed, and these entries were never sent through the file to the remote location.
So my question here is this:
Is there a way I can write a custom restart for the batch where I tell the job to start from CN2 -> VID1 instead of CN2 -> VID20?
If you can think of any other way to handle this scenario, such suggestions are also welcome.
Since you want to write the data of each company in a separate file, I would use the company name as a job parameter. With this approach:
Each job will read the data of a single company and write it to a file
If a job fails, you can restart it and it will resume where it left off (Spring Batch will truncate the file to the last known written offset before writing new data). So there is no need for a custom restart "policy".
There is no need to set the chunk size to 1; that is not efficient. You can use a reasonable chunk size with this approach.
If the number of companies is small enough, you can run jobs manually. Otherwise, you can always get the distinct values with something like select distinct(company_name) from yourTable and write a script/loop to launch batch jobs with different parameters (see the sketch below). Bottom line: make each job do one thing and do it well.
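A hypothetical launcher loop along those lines (exportJob, company.name and yourTable are placeholder names; the API calls are standard Spring Batch):
import java.util.List;
import javax.sql.DataSource;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch: launch one job execution per company, with the company name as an
// identifying job parameter.
public class CompanyJobLauncher {

    private final JobLauncher jobLauncher;
    private final Job exportJob;           // the per-company export job defined elsewhere
    private final JdbcTemplate jdbcTemplate;

    public CompanyJobLauncher(JobLauncher jobLauncher, Job exportJob, DataSource dataSource) {
        this.jobLauncher = jobLauncher;
        this.exportJob = exportJob;
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void launchAll() throws Exception {
        List<String> companies = jdbcTemplate.queryForList(
                "select distinct(company_name) from yourTable", String.class);
        for (String company : companies) {
            JobParameters params = new JobParametersBuilder()
                    .addString("company.name", company)
                    .toJobParameters();
            // Launching again with the same parameters after a failure restarts
            // that job instance, and the writer resumes where it left off.
            jobLauncher.run(exportJob, params);
        }
    }
}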
I have a scheduled script execution that needs to persist a value between runs. It is updated with each run. Using gs.setProperty seemed like the natural choice until I came across this:
Care should be taken when setting system properties (sys_properties)
using this method as it causes a system-wide cache flush. Each flush
can cause system degradation while the caches rebuild. If a value must
be updated often, it should not be stored as a system property. In
general, you should only place values in the sys_properties table that
do not frequently change.
Creating a separate table to store a single scalar value seems like overkill. Is there a better place to store it?
You could set a preference if you need it in the instance. Another place could be the events table: log an event with the data in parm1 or parm2 and, on the next run, query the most recent event.
I'd avoid creating a table, as that has cost implications for some clients, and I agree with the guidance about sys_properties.
var encrypter = new GlideEncrypter();
var encrypted = encrypter.encrypt('Super Secret Phrase');
gs.info('encrypted: ' + encrypted);
var decrypted = encrypter.decrypt(encrypted);
gs.info('decrypted: ' + decrypted);
/**
*** Script: encrypted: g/bXLJHa7xNRMKZEo5q/YtLMEdse36ED
*** Script: decrypted: Super Secret Phrase
*/
This way only administrators could really read this data. Also, if I recall correctly, the sysevent table is cleared after 7 days. You could have the job remove the event as soon as it has read it into memory.
How do I force Ehcache to load all the data from the DB once? After that I need to read the values from Ehcache.
I am seeing examples in which every new search goes to the DB first and only the next hit comes from the cache.
getProduct("1") - goes to db - ok
getProduct("1") - goes to cache - ok
getProduct("2") - goes to db - **instead i want this from cache**
getProduct("2") - goes to cache - ok
Please advise.
If you want up-front loading of a set of information into the cache, this is something you need your application to trigger.
The cache itself does not know the valid values to pass to getProduct and so cannot prefetch them on its own.
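As an illustration, here is a minimal warm-up sketch using the Ehcache 2.x API (ProductDao, Product and the "products" cache name are placeholders): run it once at application startup, then serve getProduct from the cache.
import java.util.List;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// Sketch: load all products in one query and push them into the cache up front.
public class ProductCacheWarmer {

    private final ProductDao productDao;   // placeholder DAO with a findAll() method
    private final Cache productCache;

    public ProductCacheWarmer(ProductDao productDao, CacheManager cacheManager) {
        this.productDao = productDao;
        this.productCache = cacheManager.getCache("products");
    }

    public void warmUp() {
        List<Product> products = productDao.findAll();   // single DB round trip
        for (Product product : products) {
            productCache.put(new Element(product.getId(), product));
        }
    }
}
After the warm-up, getProduct("2") can be answered from the cache without hitting the DB, provided the cache is sized (and its time-to-live configured) to hold the full data set.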
We are using Kaltura to notify our CMS about changes in the videos. In the KMC, under Settings -> Integrations Settings, we have checked all the checkboxes under "Sent by Server".
Sometimes these checkmarks disappear. It happens maybe once a week or once a month. How can we find the reason for these boxes being deactivated?
Those notifications are stored on the partner object in the partner table. The actual data is kept in the custom_data field, which holds a large amount of PHP-serialized data.
I suspect there are cases where, due to updates of other fields in the custom_data object, the notifications section gets erased.
Your best shot would be to first check the value of that field when the config gets erased. If it was actually erased in the database, try to find log messages like the following in api_v3.log (they can lead you to the actual API request that modified the field):
[2124167851][propel] */ UPDATE partner SET
`UPDATED_AT`='2017-10-04 14:11:36',
`NOTIFY`='1',
`CUSTOM_DATA`='a:79:{s:9:"firstName";s:5:"Roman";s:12:"isFirstLogin";b:0;
... tons of PHP serialized data ...
i:1;s:19:"notificationsConfig";s:42:"*=0;1=1;2=1;3=1;4=0;21=0;6=0;7=0;26=0;5=0;";
... tons of PHP serialized data ...
}' WHERE partner.ID='101' AND MD5(cast(partner.CUSTOM_DATA as char character set latin1)) = '7eb7781cc04c7f98077efc2e3c1e9426'
The key that stores the notifications config is notificationsConfig (each number represents the notification type, followed by 0 / 1 for off / on).
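If you want to inspect that value programmatically, a throwaway sketch of parsing it (assuming the format shown in the log above) could look like this:
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: split "*=0;1=1;2=1;..." into a map of notification type -> enabled flag.
public class NotificationsConfigParser {

    public static Map<String, Boolean> parse(String config) {
        Map<String, Boolean> flags = new LinkedHashMap<>();
        for (String entry : config.split(";")) {
            if (entry.isEmpty()) {
                continue;                       // skip any empty segments
            }
            String[] parts = entry.split("=");
            flags.put(parts[0], "1".equals(parts[1]));
        }
        return flags;
    }

    public static void main(String[] args) {
        System.out.println(parse("*=0;1=1;2=1;3=1;4=0;21=0;6=0;7=0;26=0;5=0;"));
    }
}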
As a side note, which CE version are you using? There might be a more reliable way to integrate with your CMS.