SPRING BATCH : dynamic commit-interval

I need to know how to set a commit-interval programmatically (in a Java class, not XML) in my batch. My program is as follows:
// loop over the lines of a flat file
// process each line
// commit
Is there a method in a library which allows doing the commit in a Java class?
Thank you for your help

You would need to define your own custom CompletionPolicy, then set it as the chunk-completion-policy in your chunked step.
This old forum thread has an example implementation.
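As a minimal sketch (the commit.interval system property and the default of 10 are illustrative assumptions), you could extend Spring Batch's SimpleCompletionPolicy and re-evaluate the chunk size at the start of every chunk:

import org.springframework.batch.repeat.RepeatContext;
import org.springframework.batch.repeat.policy.SimpleCompletionPolicy;

public class DynamicCommitIntervalPolicy extends SimpleCompletionPolicy {

    @Override
    public RepeatContext start(RepeatContext parent) {
        // re-read the desired commit interval before each chunk begins
        setChunkSize(Integer.getInteger("commit.interval", 10));
        return super.start(parent);
    }
}

With Java config you would then pass this policy to the step builder (e.g. stepBuilder.<Input, Output>chunk(new DynamicCommitIntervalPolicy())) instead of a fixed interval.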

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a 'not a server' error. When this is fixed, I receive:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists and, if not, calls the method. In both cases it also tries to create the clientRegion. Is this the correct approach?
Region<?,?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a 'not a server' error. When this is fixed, I receive:
Once the Function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
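A minimal sketch of that approach, assuming CreateRegionFunction exposes a public ID constant and was already registered on the servers (e.g. via FunctionService.registerFunction at startup):

Execution execution = FunctionService.onServers(instance);
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
// executing by ID means no Function instance has to cross the wire
execution.setArguments(argList).execute(CreateRegionFunction.ID).getResult();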
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Geode's internal distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
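A hedged sketch of what that might look like with Geode's ClusterStartupRule test API (the package pattern is an assumption based on the class named in the error above, and the exact startServerVM overload may differ between Geode versions):

ClusterStartupRule cluster = new ClusterStartupRule();
MemberVM locator = cluster.startLocatorVM(0);
Properties props = new Properties();
// whitelist the test's own classes for (de)serialization on the server VM
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");
MemberVM server = cluster.startServerVM(1, props, locator.getPort());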
Another question: currently the code checks whether the region exists and, if not, calls the method. In both cases it also tries to create the clientRegion. Is this the correct approach?
If the dynamically created region is used by the client application then yes, you should create it; otherwise you won't be able to use it.
As a side note, there's a lot of internal logic executed by Geode when creating a region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
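For reference, a minimal gfsh invocation could look like this (region name and type are illustrative):

gfsh> create region --name=exampleRegion --type=PARTITION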

Running unix shell TalendJob

I have an issue. I built myTalendJob and I run myShell successfully by adding a context variable. The command I use is:
./mainJob_run.sh --context_param myVar="/myDirectory/file.txt"
Is it possible to simply run ./mainJob_run.sh and pass --context_param myVar="/myDirectory/file.txt" dynamically, avoiding rewriting it every time?
Thank you in advance!
I am not sure I understand your question, but this is my attempt to answer.
Either:
When exporting your job, override the context "myVar" with the given value
Write a caller script that calls mainJob_run.sh, appending this additional parameter (see the sketch below). I prefer this one, as it gives more flexibility
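A minimal sketch of such a caller script (the wrapper's default path is illustrative):

#!/bin/sh
# take the file from the first argument, falling back to a default
MYVAR="${1:-/myDirectory/file.txt}"
./mainJob_run.sh --context_param myVar="$MYVAR"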
Implicit context load
You can read your context params from a file.
With this, you don't need to pass the context params through the shell command; instead, the job reads them from a file while executing. Ideally, you should put this in your tPreJob.
After reading the values, you can also pass the context params through a tJavaRow for further processing; this way you can format your context params, or generate new ones based on the input values, as in the sketch below.
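As a hedged sketch, if the file is read as key/value rows, the tJavaRow code might look like this (the columns key and value are illustrative; myVar is the context variable from the question):

// copy the incoming value onto the context variable, trimming whitespace
if ("myVar".equals(input_row.key)) {
    context.myVar = input_row.value.trim();
}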
TalendByExample has provided a great guide on how to build a reusable context loading job which you can call from any of your jobs.
https://www.talendbyexample.com/talend-reusable-context-load-job.html

Invoke Method with client.soda (Statement Object Model)

I am trying to build Esper EPL statements in Java. I use the com.espertech.esper.client.soda library for this, but I can't find a tutorial to help me.
The PatternExpressions are the only part that I need as of now.
As an example, let's use the EPL:
every a=Event((a).getEventTypeCode()='E00001')
So it should trigger on every Event whose event type code is E00001; we get the code by calling the getEventTypeCode method.
How do I project this into the SOM?
With:
PatternExpr pattern = Patterns.everyFilter("Event","a");
I only get:
every a=Event
(of course)
I know there is a class called "MethodInvocationStream", but I don't know how to use it, and I cannot find examples of its use.
Thanks to user650839 I found out how to add methods via the SOM. Here is a simple EPL as an SOM object: http://imgur.com/SDrTsa7
One source of info is the javadoc.
You could simply do the reverse and compile EPL text to a model object, then inspect that. Use compileEPL on the EPAdministrator; the output is the same object you want to build via the API.
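A minimal sketch of that round trip, assuming the Esper 5.x client API and an event type named Event already registered with the engine:

import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.soda.EPStatementObjectModel;

EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider();
// compile the EPL text into a statement object model
EPStatementObjectModel model = engine.getEPAdministrator().compileEPL(
        "select * from pattern [every a=Event((a).getEventTypeCode()='E00001')]");
// render it back to EPL to inspect how the API represents the pattern
System.out.println(model.toEPL());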

HP UFT API Test - Saving Response/Checkpoint values

Is there a way to capture and store (or write to a file) the values returned in the Response? (Checkpoint values)
Using HP UFT 11.52
Thanks,
Lynn
I figured it out. In UFT API under Standard Activities, there are File function modules including "Write to File". I added the module to the test, set the path and other properties, passed the variable to the file and it worked! Couldn't be easier.
I mentioned this in my other answer; you can also write it programmatically if you have a dynamic array response. Please refer to:
https://stackoverflow.com/a/28012383/3972994
After running a test, in the test folder you can find a Snapshots/LastIteration directory.
In it you can find the return value of each step saved in a txt file.
Note that if you data-drive the step, only the last iteration will be saved to file.
However, in the test's log (Test dir/Log/vtd_user.log) you can find all the iterations persisted.
Thanks,
Yossi
You do not need to use the standard activities if you do this:
var iResponse = this.Activity.responsebody;
// the path below is a placeholder; the file is rewritten on every run
System.IO.File.WriteAllText(@"directoryPath\fileName.txt", iResponse.ToString());
The above writes the response to the file and rewrites it on every run.

Unitils and DBMaintainer - how to make them work with multiple users/schemas?

I am working on a new Oracle ADF project that is using an Oracle 10g database, and I am using Unitils and DBMaintainer in our project for:
updating the db structure
unit testing
reading in seed data
reading in test data
In our project we have 2 schemas, and 2 db users that have privileges to connect to these schemas. I have the scripts in a folder structure with incremental names, and I am using the @ convention for script naming.
001_@schemaA_name.sql
002_@schemaB_name.sql
003_@schemaA_name.sql
This works fine with ant and the DBMaintainer update task, and I supply the multiple user names by configuring extra database elements for the ant task.
<target name="create" depends="users-drop, users-create" description="This tasks ... ">
    <updateDatabase scriptLocations="${dbscript.maintainer.dir}" autoCreateDbMaintainScriptsTable="true">
        <database name="${db.user.dans}" driverClassName="${driver}" userName="${db.user.dans}"
                  password="${db.user.dans.pwd}" url="${db.url.full}" schemaNames="${db.user.dans}" />
        <database name="idp" driverClassName="${driver}" userName="${db.user.idp}"
                  password="${db.user.idp.pwd}" url="${db.url.full}" schemaNames="${db.user.idp}" />
    </updateDatabase>
</target>
However, I can't figure out how to make the DBMaintainer update task create the xsd schemas from my db schemas.
So I decided to use Unitils, since its update creates xsd schemas.
I haven't found any description or documentation for the Unitils ant tasks - can anyone give some hints?
For the time being I have figured out how to run Unitils by creating a JUnit test with the @DataSet annotation. I can make it work with one schema and one db user, but I am out of ideas how to make it work with multiple users.
Here is the unitils-local.properties setup I have:
database.url=jdbc\:oracle\:thin\:@localhost\:1521\:vipu
database.schemaNames=a,b
database.userName=a
database.password=a1
Can any of you guys give me a tip on how to make Unitils work with the second user/schema?
I will be extremely grateful for your help!
Eventually I found a way to inject any unitils.properties of your choice: by instantiating Unitils yourself!
You need a method annotated with @BeforeClass, in which you perform something like the following:
@BeforeClass
public static void initializeUnitils() {
    Properties properties;
    ...
    // load properties file/values depending on various conditions
    ...
    Unitils unitils = new Unitils();
    unitils.init(properties);
    Unitils.setInstance(unitils);
}
I choose the properties file depending on which Hibernate configuration is loaded (via @HibernateSessionFactory), but there should be other options as well.
I have figured out how to make dbmaintain and unitils work together with multi-database-user support, but the solution is a pure ant hack:
I have set up the configuration for dbmaintain, using multi-database-user support.
I have made a unitils-local.properties file with token keys for replacement.
The init target of my ant script generates a new unitils-local.properties file by replacing the username/password/schema tokens with values that are correct for the target environment, and then copies it to the user's home directory (see the ant sketch after this list).
I have sorted the tests into folders that are prefixed with the schema name.
When unitils is invoked, it picks up the unitils-local.properties file just created by the ant script, and does its magic.
It's far from pretty, but it works.
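A hedged sketch of that token-replacement step in ant (template name, token names, and property names are illustrative):

<copy file="unitils-local.properties.template" tofile="${user.home}/unitils-local.properties" overwrite="true">
    <filterset>
        <filter token="USERNAME" value="${db.user}"/>
        <filter token="PASSWORD" value="${db.pwd}"/>
        <filter token="SCHEMA" value="${db.schema}"/>
    </filterset>
</copy>

The template would then contain lines such as database.userName=@USERNAME@, since ant's filterset replaces @TOKEN@ markers.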
Check out this link: http://www.dbmaintain.org/tutorial.html#From_Java_code
Specifically you may need to do something like:
databases.names=admin,user,read
database.driverClassName=oracle.jdbc.driver.OracleDriver
database.url=jdbc:oracle:thin://mydb:1521:MYDB
database.admin.userName=admin
database.admin.password=adminpwd
database.admin.schemaNames=admin
database.user.userName=user
database.user.password=userpwd
database.user.schemaNames=user
database.read.userName=read
database.read.password=readpwd
database.read.schemaNames=read
Also this link may be helpful: http://www.dbmaintain.org/tutorial.html#Multi-database__user_support
I followed Ryan's suggestion. I noticed a couple of changes when I debugged UnitilsDB.
The following is my working unitils-local.properties:
database.names=db1,db2
database.driverClassName.db1=oracle.jdbc.driver.OracleDriver
database.url.db1=jdbc:oracle:thin:@db1d.company.com:123:db1d
database.userName.db1=user
database.password.db1=password
database.dialect.db1=oracle
database.schemaNames.db1=user_admin
database.driverClassName.db2=oracle.jdbc.driver.OracleDriver
database.url.db2=jdbc:oracle:thin:@db2s.company.com:456:db2s
database.userName.db2=user
database.password.db2=password
database.dialect.db2=oracle
Make sure to use @ConfigurationProperties(prefix = "database.db1") to connect to the particular database in your test case:
@RunWith(UnitilsJUnit4TestClassRunner.class)
@ConfigurationProperties(prefix = "database.db1")
@Transactional
@DataSet
public class MyDAOTest {
    ...
}
