Insert Elsa workflow's information into database tables while creating a workflow through the Builder API - elsa-workflows

When I create a workflow using the Elsa Dashboard, the workflow's information (name, description, data, IsPublished, persistence behaviour, version, finished at, ...) is automatically inserted into these three tables:
WorkflowDefinitions
WorkflowInstances
WorkflowExecutionLogRecords
But when I create a workflow using the Builder API, the workflow's information is not inserted; nothing is written to any table.
I want to store the workflow's info in those three tables.
Is there a way to insert the workflow's info into these tables, or do I have to manually write code in the Builder API to insert data into those three tables?

As the names of these tables suggest, WorkflowInstances holds the instantiated workflows and WorkflowExecutionLogRecords is the log of a workflow's execution. What you define with the Builder API is the workflow blueprint, or as it's called, the workflow definition, so WorkflowDefinitions is the only table that could be affected when saving a blueprint.
But you usually don't need to persist a workflow defined via the Builder API in the database, because if you have it in your code it is already persisted, in a sense, as part of your code.
Getting to know workflow providers should point you in the right direction; you can find out about them in this article.

Related

How to add a "Viewed By" audit in Javers using an explicit field on the entity being audited

In our application, we have a requirement to audit "viewed by" events. Currently, we implement this functionality using an Audit table, manually logging to it during "GET" calls. I am trying to understand how to accomplish this in Javers.
In our current application, to find changes we use a Hibernate interceptor and manually add the changes to the audit table.
I thought the easiest way to accomplish the "viewed" audit functionality in Javers would be to add a "viewedBy" field to the entity being audited and manually update it in "GET" calls. But I am concerned about this approach, as each time there is a view we change the version of the object (by physically updating it) and the state is saved to the jv_snapshot table.
I expect the "viewed by" audits to be part of the javers.findChanges() method, so that the changes are tracked in chronological order and can possibly be paginated.

Dynamics CRM Plugin can't retrieve records created earlier in the pipeline

I have a chain of synchronous events that take place:
1. A custom control calls an action.
2. The action creates a couple of records.
3. The action then triggers a plugin, which tries to retrieve the records that were created in step 2, but the query returns nothing.
I suspect this is happening because all the events are in the same transaction, and therefore the records they create are not yet committed to the database. Is this correct?
Is there an easy way to retrieve records that were created earlier in the pipeline, or am I stuck having to stuff an OutputParameter object into SharedVariables?

How to do table operations in Google BigQuery?

I wanted some advice on how to deal with table operations (renaming a column) in Google BigQuery.
Currently, I have a wrapper to do this. My tables are partitioned by date, e.g. if I have a table named fact, I will have several tables named:
fact_20160301
fact_20160302
fact_20160303... etc
My rename-column wrapper generates aliased queries, i.e. if I want to change my table schema from
['address', 'name', 'city'] -> ['location', 'firstname', 'town']
I do a batch query operation:
select address as location, name as firstname, city as town
and do a WRITE_TRUNCATE on the parent tables.
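For illustration, a minimal sketch of one such rename-by-rewrite job using the google-cloud-bigquery Python client (the project and dataset names are assumed placeholders):

from google.cloud import bigquery

client = bigquery.Client()

# Rewrite one date partition in place: select with aliases, truncate the source.
table_id = "myproject.mydataset.fact_20160301"  # placeholder project/dataset names
job_config = bigquery.QueryJobConfig(
    destination=table_id,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
sql = f"SELECT address AS location, name AS firstname, city AS town FROM `{table_id}`"
job = client.query(sql, job_config=job_config)
job.result()  # blocks until the rewrite finishes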
My main issue lies with the fact that BigQuery only supports 50 concurrent jobs. This means that when I submit my batch request, I can only do around 30 partitions at a time, since I'd like to reserve 20 slots for ETL jobs that are running.
Also, I haven't found a way to do a poll_job on a batch operation to see whether or not all jobs in a batch have completed; one workaround is sketched below.
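There is no batch-level poll, but one workaround is to submit the jobs in chunks and keep the job handles, waiting on each one. A sketch with the same Python client (the table list and chunk size are assumptions):

from google.cloud import bigquery

client = bigquery.Client()
tables = [f"myproject.mydataset.fact_201603{d:02d}" for d in range(1, 32)]  # assumed list

CHUNK = 30  # leave ~20 of the 50 concurrent slots free for running ETL jobs
for start in range(0, len(tables), CHUNK):
    jobs = []
    for table_id in tables[start:start + CHUNK]:
        config = bigquery.QueryJobConfig(
            destination=table_id,
            write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        )
        sql = f"SELECT address AS location, name AS firstname, city AS town FROM `{table_id}`"
        jobs.append(client.query(sql, job_config=config))
    for job in jobs:
        job.result()  # polls until this job is done; job errors raise here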
If anyone has some tips or tricks, I'd love to hear them.
I can propose two options.
Using a view
Creating a view is very simple to script out and execute; it is fast and free compared with the cost of scanning the whole table in a select-into approach.
You can create a view using the Tables: insert API with the type property set appropriately.
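A hedged sketch with the google-cloud-bigquery Python client, where create_table with a view_query issues the Tables: insert call with type=VIEW (the view name is assumed; the rest follows the example above):

from google.cloud import bigquery

client = bigquery.Client()

# A view exposing the renamed columns over the original partition.
view = bigquery.Table("myproject.mydataset.fact_renamed_20160301")  # assumed name
view.view_query = (
    "SELECT address AS location, name AS firstname, city AS town "
    "FROM `myproject.mydataset.fact_20160301`"
)
client.create_table(view)  # Tables: insert with the type property set to VIEW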
Using Jobs: insert with EXTRACT and then LOAD
Here you can extract the table to GCS and then load it back into BigQuery with an adjusted schema.
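A sketch of that flow in Python; CSV keeps columns positional, which is what allows the rename on reload (the bucket name and field types are assumptions):

from google.cloud import bigquery

client = bigquery.Client()
table_id = "myproject.mydataset.fact_20160301"   # example table
gcs_uri = "gs://my-bucket/fact_20160301-*.csv"   # assumed GCS bucket

# 1) EXTRACT the table to GCS (the default format is CSV with a header row).
client.extract_table(table_id, gcs_uri).result()

# 2) LOAD it back with the adjusted schema; the names here are the new ones.
load_config = bigquery.LoadJobConfig(
    schema=[
        bigquery.SchemaField("location", "STRING"),   # was: address
        bigquery.SchemaField("firstname", "STRING"),  # was: name
        bigquery.SchemaField("town", "STRING"),       # was: city
    ],
    skip_leading_rows=1,  # skip the CSV header
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.load_table_from_uri(gcs_uri, table_id, job_config=load_config).result()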
The above approach will a) eliminate the cost of querying (scanning) the tables and b) can help with the job limitations, but whether it pays off depends on the actual volume of the tables and the other requirements you might have.
The best way to manipulate a schema is through the Google BigQuery API.
1. Use the tables.get API to retrieve the existing schema for your table: https://cloud.google.com/bigquery/docs/reference/v2/tables/get
2. Manipulate your schema file, renaming columns etc.
3. Again using the API, perform an update on the schema, setting it to your newly modified version. This should all occur in one job: https://cloud.google.com/bigquery/docs/reference/v2/tables/update
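A hedged sketch of that get/modify/update flow with the Python client; note that BigQuery's schema-update rules restrict what can change in place, so a rename may be rejected (the names are the example's):

from google.cloud import bigquery

client = bigquery.Client()

# tables.get: fetch the table, including its current schema.
table = client.get_table("myproject.mydataset.fact_20160301")

# Rename columns per the example mapping, keeping types and modes.
renames = {"address": "location", "name": "firstname", "city": "town"}
table.schema = [
    bigquery.SchemaField(renames.get(f.name, f.name), f.field_type, mode=f.mode)
    for f in table.schema
]

# tables.update: write the modified schema back in one call.
client.update_table(table, ["schema"])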

Count inserts, deletes and updates in a PowerCenter session

Is there a way in PowerCenter 9.1 to get the number of inserts, deletes and updates after the execution of a session? I can see the data in the log, but I would like to see it in a more ordered fashion, in a table.
The only way I know requires building the mapping appropriately. You need to have three separate instances of the target and use a Router transformation to redirect the rows to either TARGET_insert, TARGET_update or TARGET_delete. The Workflow Monitor will then show a separate row for the inserted, updated and deleted rows.
There are a few ways:
1. You can use $tgtsuccessrows / $TgtFailedRows and assign them to workflow variables.
2. An Expression transformation can be used with a variable port to keep track of inserts/updates/deletes.
3. You can even query OPB_SESSLOG in a second stream to get the row count inside the same session.
I'm not sure if PowerCenter 9.1 offers a built-in solution to this problem.
You can design your mapping to populate an Audit table to track the number of inserts/updates/deletes.
You can download a sample implementation from the Informatica Marketplace block titled "PC Mapping : Custom Audit Table":
https://community.informatica.com/solutions/mapping_custom_audit_table
There are multiple ways. You can create an Assignment task and attach it just after your session; once the session completes its run, the Assignment task will pass the session stats from the session to workflow variables defined at the workflow level (stats like $session.status, $session.rowcount, etc.). Then create a worklet containing a mapping, pass the session stats captured at the workflow level to the worklet and from the worklet to the mapping. Once the stats are available at the mapping level, read them (using a SQL or Expression transformation) and write them to the AUDIT table. Attach the combination of Assignment task and worklet after each session, and it will capture the stats of each session after it completes its run.

managing/implementing auto-increment primary key in oracle without triggers

We have many tables in our database with auto-increment primary key ids set up the way they are in MySQL, since we are in the process of migrating from MySQL to Oracle.
Now, in Oracle, I recently learned that implementing this requires creating a sequence and a trigger on the id field for each such table. We have around 30-40 tables in our schema, and we want to avoid using database triggers in our product, since management of the database is out of scope for our software appliance.
What are my options for implementing the auto-increment id feature in Oracle, apart from manually specifying the id in the code and managing it there, which would change a lot of existing insert statements?
... I wonder if there is a way to do this from Grails code itself? (By the way, specifying the id as increment in the domain class mapping doesn't work; that only works for MySQL.)
Some info about our application environment: Grails/Groovy, Hibernate, Oracle, MySQL support.
This answer has Grails/Hibernate handle the sequence generation by itself. It creates a sequence per table for primary key generation and won't cache any numbers, so you won't lose any identifiers if and when the cache times out. Grails/Hibernate calls the sequence directly, so it doesn't make use of any triggers either.
If you are using Grails, Hibernate will handle this for you automatically.
You can specify which sequence to use by putting the following in your domain object:
static mapping = {
    // MY_SEQ must already exist as a sequence in the Oracle schema
    id generator: 'sequence', params: [sequence: 'MY_SEQ']
}
