How to create and run bulk delete sequentially - dynamics-crm

I'm using Power Apps with Dynamics 365. I have a few conditions for deleting records across multiple Dataverse tables. I know I can achieve this with the bulk delete option in Dynamics 365, but the important point is that the bulk deletes must run sequentially: if I have two tables, A and B, I need to delete from table B first and only then from table A.
How can I achieve this? Please suggest ways to do this.
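One way to approach this (not from the original thread, just a sketch): submit a bulk delete job for table B through the Dataverse Web API BulkDelete action, poll the async operation it creates until it completes, and only then submit the job for table A. The org URL, token handling, and the QuerySet payload below are illustrative placeholders; check the BulkDelete action documentation for the exact shapes your environment expects.

import time
import requests

BASE = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>",
           "Content-Type": "application/json"}

def submit_bulk_delete(entity_name, job_name):
    # Simplified QuerySet: one QueryExpression per table; add Criteria as needed.
    body = {
        "QuerySet": [{"EntityName": entity_name}],
        "JobName": job_name,
        "SendEmailNotification": False,
        "ToRecipients": [],
        "CCRecipients": [],
        "RecurrencePattern": "",
        "StartDateTime": "2024-01-01T00:00:00Z",
    }
    resp = requests.post(f"{BASE}/BulkDelete", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["JobId"]  # id of the async operation that runs the job

def wait_for_job(job_id):
    # Poll the system job until it reaches the Completed state (statecode 3).
    while True:
        op = requests.get(
            f"{BASE}/asyncoperations({job_id})?$select=statecode,statuscode",
            headers=HEADERS).json()
        if op["statecode"] == 3:
            return op
        time.sleep(30)

# Table B first, then table A, each submitted only after the previous job finished.
wait_for_job(submit_bulk_delete("tableb", "Purge table B"))
wait_for_job(submit_bulk_delete("tablea", "Purge table A"))

If you prefer a no-code route, a Power Automate flow with two sequential steps that call the same action, the second configured to run only after the first succeeds, could give you the same ordering.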

Related

Send an email according to a query result after each insert in the table - Talend

I'm building a job in Talend Open Studio that loads data from a SQL Server table into several other tables. After each load I want to run some queries on the newly loaded tables and send an e-mail based on the query result.
Is this possible with Talend Open Studio, and if so, can you guide me through it?
You can achieve this with the tDBInput -> tFlowToIterate -> tSendMail components. tDBInput reads the data from your DB table, and an email is sent for every row. In tSendMail you can grab the data fetched by tDBInput using (String) globalMap.get("row1.FieldNeeded").
Hope this helps.
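Outside Talend, the equivalent per-row logic looks roughly like the sketch below (the DSN, query, and SMTP settings are placeholders), which may help clarify what each component contributes:

import smtplib
from email.message import EmailMessage
import pyodbc

# tDBInput equivalent: read the rows produced by the load.
conn = pyodbc.connect("DSN=mydb")
cursor = conn.cursor()
cursor.execute("SELECT field_needed, recipient FROM loaded_table")

with smtplib.SMTP("smtp.example.com") as smtp:
    # tFlowToIterate equivalent: one iteration per row.
    for field_needed, recipient in cursor:
        # tSendMail equivalent: build and send one mail per row.
        msg = EmailMessage()
        msg["From"] = "etl@example.com"
        msg["To"] = recipient
        msg["Subject"] = "Load result"
        msg.set_content(f"Value loaded: {field_needed}")
        smtp.send_message(msg)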

How to retrieve more than 5000 records from CRM using KingswaySoft for SSIS packages?

I am trying to migrate data between two CRM (Dynamics 365) databases, but KingswaySoft has a limit of 5000 records per batch. Can anyone suggest an approach that lets me send an arbitrary number of records?
We page through all records in the source entity. The Batch Size setting on the CRM Source component only specifies how many records to retrieve per service call, not the total number you will get from the source entity. Hope this clarifies things a bit more.
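To illustrate what that paging means in practice, here is roughly what happens under the hood against the Dataverse Web API (a sketch only; the CRM Source component handles this for you, and the URL and entity name are placeholders):

import requests

HEADERS = {"Authorization": "Bearer <access-token>",
           "Prefer": "odata.maxpagesize=5000"}  # records per service call
url = "https://yourorg.crm.dynamics.com/api/data/v9.2/accounts?$select=name"

rows = []
while url:
    page = requests.get(url, headers=HEADERS).json()
    rows.extend(page["value"])
    url = page.get("@odata.nextLink")  # present until the last page is reached
print(len(rows), "records retrieved across all pages")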

ZOHO CRM - How to find and merge all the duplicate records in bulk?

I am working in Zoho CRM. I have many duplicate records in Accounts. Is it possible to find and merge all the duplicate records at once?
Thanks.
As far as I know, there is currently no way to merge all duplicate records in bulk in Zoho CRM.
The only option is to merge them one by one via Find and Merge, not in bulk.
You have to do it manually. All the best.

How to do table operations in Google BigQuery?

I'd like some advice on how to handle table operations (renaming a column) in Google BigQuery.
Currently, I have a wrapper to do this. My tables are partitioned by date, e.g. if I have a table named fact, I will have several tables named:
fact_20160301
fact_20160302
fact_20160303... etc
My rename-column wrapper generates aliased queries, i.e. if I want to change my table schema from
['address', 'name', 'city'] -> ['location', 'firstname', 'town']
I run a batch query operation:
select address as location, name as firstname, city as town
and do a WRITE_TRUNCATE on the parent tables.
My main issue is that BigQuery only supports 50 concurrent jobs. This means that when I submit my batch request, I can only do around 30 partitions at a time, since I'd like to reserve 20 slots for ETL jobs that are running.
Also, I haven't found a way to poll_job on a batch operation to see whether or not all jobs in a batch have completed.
If anyone has some tips or tricks, I'd love to hear them.
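For reference, the rename-by-query approach described above looks roughly like this with the google-cloud-bigquery client, including a simple wait over all submitted jobs (project, dataset, and date suffixes are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
jobs = []
for suffix in ("20160301", "20160302", "20160303"):
    table = f"my-project.my_dataset.fact_{suffix}"
    config = bigquery.QueryJobConfig(
        destination=table,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    sql = (f"SELECT address AS location, name AS firstname, city AS town "
           f"FROM `{table}`")
    jobs.append(client.query(sql, job_config=config))

for job in jobs:
    job.result()  # blocks until this job finishes; raises if it failed
print("all rename jobs completed")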
I can propose two options.
Using a view
Creating views is very simple to script and execute; it is fast and free compared with the cost of scanning the whole table with the select-into approach.
You can create a view using the Tables: insert API with the type property set appropriately.
Using Jobs: insert with EXTRACT and then LOAD
Here you extract the table to GCS and then load it back into BigQuery with the adjusted schema.
The approach above a) eliminates the cost of querying (scanning) the tables and b) can help with the job limits, but whether it fits depends on the actual volume of your tables and any other requirements you might have.
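A minimal sketch of the first option with the Python client (dataset and table names are placeholders): the view exposes the new column names without rescanning the base table, and creating it corresponds to a Tables: insert with the view property set.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
view = bigquery.Table("my-project.my_dataset.fact_20160301_renamed")
view.view_query = """
    SELECT address AS location, name AS firstname, city AS town
    FROM `my-project.my_dataset.fact_20160301`
"""
client.create_table(view)  # issues a Tables: insert with type VIEW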
The best way to manipulate a schema is through the Google BigQuery API.
Use the tables get API to retrieve the existing schema for your table: https://cloud.google.com/bigquery/docs/reference/v2/tables/get
Manipulate your schema file, renaming columns etc.
Again using the API, perform an update on the schema, setting it to your newly modified version. This should all occur in one job: https://cloud.google.com/bigquery/docs/reference/v2/tables/update
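A sketch of that get -> edit -> update flow with the Python client (field and table names are placeholders). It assumes the API accepts a schema whose field names changed; depending on BigQuery's schema-update rules a rename may be rejected, so treat it as illustrative.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
table = client.get_table("my-project.my_dataset.fact_20160301")  # tables.get

renames = {"address": "location", "name": "firstname", "city": "town"}
new_schema = [
    bigquery.SchemaField(renames.get(f.name, f.name), f.field_type, mode=f.mode)
    for f in table.schema
]
table.schema = new_schema
client.update_table(table, ["schema"])  # tables.update restricted to the schema field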

SSIS duplicate data Insert Update approach

I'm new to SSIS.
I have data from a set of flat files. The same person can appear in the same flat file or in different flat files with different information, and I process them all at the same time.
I need to know whether to insert the record or update it (if the same person already exists). I'm using a Lookup to determine whether the person exists in the table. I have already set the OLE DB Destination FastLoadMaxInsertCommitSize to 1 and I'm using an OLE DB Command for updates.
But it still can't detect an update when the same person is encountered.
I also tried Merge in the control flow, but that failed.
What could be the solution for this?
After the insert/update, look for duplicate data and delete the rows whose IDs (if you're using identity keys) are lower than the MAX(ID) for that person.
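One way to script that cleanup after the load (a sketch only; the table, person key, and ID column names are placeholders): keep the highest ID per person key and delete the rest.

import pyodbc

conn = pyodbc.connect("DSN=targetdb")
conn.execute("""
    DELETE p
    FROM Person AS p
    JOIN (SELECT PersonKey, MAX(ID) AS MaxID
          FROM Person
          GROUP BY PersonKey) AS latest
      ON p.PersonKey = latest.PersonKey
     AND p.ID < latest.MaxID
""")
conn.commit()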
