Limit Power Automate Flow Trigger to Specific Excel Table

Is there a way to control a Power Automate Flow based on which table is updated in an Excel file?
I have one Flow that reads table data when the Excel file has been modified. I also have another Flow that updates a row in this same file, but in a different table, when a SharePoint list is modified. Because that update modifies the Excel file, it kicks off the first Flow, which I don't need or want.
Is there any way to determine which table is being updated, so I can allow or prevent the Flow from continuing to run? Is there something available in MS Graph, maybe?
This is not blocking any work; it's more of an annoyance, and the unnecessary runs count against daily limits.
In Power Automate, I don't see any way to identify which table has been updated to allow/prevent the Flow from continuing.

From my experience, no, it's not possible.
To overcome this, you could keep that table in an isolated file and then sync it with the main file whenever it's updated.
Annoying as hell but it will work.

Related

Storing user input (Visual Basic)

I'm creating an application that will take a number of user inputs, store the data for a while, and eventually (at the end of the day) export it to an Excel file.
An example might be that a user would input what they did throughout the day. Breakfast/At Home/for 10 minutes. Then later on they would input Coding/At Work/8 hours. Then later on Commuting/Subway/15 minutes. Etc.
I can handle the user interface, and the exporting to excel.
I'm just wondering what might be the best way to store that data and display it back to the user while the program is running. I'm used to working with macros in Excel itself, where I could simply store each row of data in another row on the spreadsheet.
I would still like a spreadsheet-like display, so that the user can go in to each data point and correct any mistakes. But I am making this as a standalone application using visual basic. Fortunately, I think the ListView or DataGridView tools will let me do this.
At the moment the method I'm thinking of using is simply to store all the user inputs in an array. But I would have to ReDim the array and increase its size each time the user created a new entry.
I can already see a problem with this, however: the array would have to live entirely in active memory, so if the user's computer were to crash, all the data would be lost for good.
I'm really a rookie here, so I could use some guidance on how to store a bunch of user inputs like this.
You can use a database file. A local SQL Server Compact Edition database (a single file) will store your data. You can use Entity Framework to interact with this database.
If you want to use Code First (generate your database from your code) use this:
https://www.codeproject.com/Articles/680116/Code-First-with-SQL-CE
If you want to use Database First (generate your entities from your database) use this:
http://erikej.blogspot.com/2013/11/entity-framework-6-sql-server-compact-4_25.html
You can also use SQLite or another kind of database file, but I like SQL Server CE.
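A minimal sketch of the kind of table that could back this (the table and column names are just assumptions for this example; you could create it by hand or let Entity Framework Code First generate it):

-- One row per activity entry the user types in.
CREATE TABLE ActivityEntry (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Activity NVARCHAR(100) NOT NULL,        -- e.g. 'Breakfast', 'Coding'
    Location NVARCHAR(100) NOT NULL,        -- e.g. 'At Home', 'At Work'
    DurationMinutes INT NOT NULL,           -- e.g. 10, 480
    EnteredAt DATETIME NOT NULL DEFAULT GETDATE()
);

Because each entry is written to the database file as soon as it's saved, a crash loses at most the row being typed, and a DataGridView bound to this table gives the user the spreadsheet-like view for corrections and the end-of-day export.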

How to order ETL tasks in Sql Server Data Tools (Integration Services)?

I'm a newbie in ETL processing. I am trying to populate a data mart through ETL and have hit a bump. I have 4 ETL tasks (each filling a particular table in the mart), and the problem is that I need to perform them in a particular order to avoid constraint violations such as foreign key constraints. How can I achieve this? Any help is really appreciated.
This is a snap of my current ETL:
Create a separate Data Flow Task for each table you're populating in the Control Flow, and then simply connect them together in the order you need them to run in. You should be able to just copy/paste the components from your current Data Flow to the new ones you create.
The connections between Tasks in the Control Flow are called Precedence Constraints, and if you double-click on one you'll see that they give you a number of options on how to control the flow of your ETL package. For now, though, you'll probably be fine leaving it on the defaults - this will mean that each Data Flow Task will wait for the previous one to finish successfully. If one fails, the next one won't start and the package will fail.
If you want some tables to load in parallel, but then have some later tables wait for all of those to be finished, I would suggest adding a Sequence Container and putting the ones that need to load in parallel into it. Then connect from the Sequence Container to your next Data Flow Task(s) - or even from one Sequence Container to another. For instance, you might want one Sequence Container holding all of your Dimension loading processes, followed by another Sequence Container holding all of your Fact loading processes.
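To see why the order matters in the first place, consider a hypothetical dimension/fact pair (the table names below are made up for illustration): the foreign key means the dimension rows must exist before the fact load runs, or the fact inserts fail.

CREATE TABLE DimCustomer (
    CustomerKey  INT PRIMARY KEY,
    CustomerName NVARCHAR(100)
);

CREATE TABLE FactSales (
    SalesKey    INT PRIMARY KEY,
    CustomerKey INT NOT NULL
        REFERENCES DimCustomer (CustomerKey),   -- forces DimCustomer to be loaded first
    Amount      DECIMAL(18, 2)
);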
A common pattern goes a step further than using separate Data Flow Tasks. If you create a separate package for every table you're populating, you can then create a parent package, and use the Execute Package Task to call each of the child packages in the correct order. This is fantastic for reusability, and makes it easy for you to manually populate a single table when needed. It's also really nice when you're testing, as you don't need to keep disabling some Tasks or re-running the entire load when you want to test a single table. I'd suggest adopting this pattern early on so you don't have a lot of re-work to do later.

How to implement an ETL Process

I would like to implement synchronization between a source SQL database and a target triplestore.
However, for the sake of simplicity, let's just say two databases. I wonder what approaches to use to have every change in the source database replicated in the target database. More specifically, I would like that each time a row changes in the source database, a process can see the change, read it, and populate the target database accordingly while applying some transformation in the middle.
I have seen suggestions around notification mechanisms that may be available in the database, around building tables so that changes can be tracked (meaning doing it manually) and having a process poll them at intervals, and around the usage of logs (change data capture, etc.).
I'm seriously puzzled about all of this. I wonder if anyone could give some guidance and explanation about the different approaches with respect to my objective, meaning: the names of the methods and where to look.
My organization mostly uses Postgres and Oracle databases.
I have to take relational data, transform it into RDF so as to store it in a triplestore, and keep that triplestore constantly synchronized with the data in the SQL store.
Please,
Many thanks
PS:
A clarification of the difference between ETL and replication techniques such as Change Data Capture, with respect to my overall objective, would be appreciated.
Again, I need to make sense of the subject and know what the methods are, so I can start digging further for myself. So far I have understood that CDC is the new way to go.
Assuming you can't use replication and you need some kind of ETL process to actually extract, transform, and load all changes to the destination database, you could use insert, update, and delete triggers to fill a (manually created) audit table. Give it columns such as GeneratedId, TableName, RowId, Action (insert, update, delete), and a boolean flag indicating whether your ETL process has already handled the change. Use that table to find all the changed rows in your database and transport them to the destination database, then delete the processed rows from the audit table so that it doesn't grow too big. How often you have to run the ETL process depends on the amount of change occurring in the source database.
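A minimal sketch of that audit table and one of the triggers, using T-SQL-style syntax and hypothetical names (a source table Person with key PersonId); the same idea translates to Postgres or Oracle trigger syntax:

-- One row per changed source row, plus a flag for the ETL process.
CREATE TABLE ChangeAudit (
    GeneratedId BIGINT IDENTITY(1,1) PRIMARY KEY,
    TableName   VARCHAR(128) NOT NULL,
    RowId       BIGINT       NOT NULL,
    Action      CHAR(1)      NOT NULL,     -- 'I', 'U' or 'D'
    Processed   BIT          NOT NULL DEFAULT 0
);
GO

-- One trigger per source table and action; this one records inserts on Person.
CREATE TRIGGER trg_Person_Insert ON Person
AFTER INSERT
AS
BEGIN
    INSERT INTO ChangeAudit (TableName, RowId, Action)
    SELECT 'Person', i.PersonId, 'I'
    FROM inserted AS i;
END;

The ETL job then selects the rows where Processed = 0, transforms them to RDF, loads the triplestore, and finally deletes (or flags) the rows it handled.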

update app database regularly without needing an app update

I am working on a WP7 app that contains
CategoryGroups
Categories
Products
The rows for each of these entities are populated on first run of the application.
The issue is that once the app is published, the rows in each of these entities will change (added, deleted, modified). I would like some suggestions on how I should handle this. Any pointers to existing code samples would be great.
I am using an object-oriented database to store my entities. The app also allows the user to add their own entities (which get added to the database as personalized, flagged entities). One solution I was considering was to read an XML file from the server and then loop through the database entries and make the necessary modifications. So, on the first run, all the entities would just get inserted. On subsequent runs, if the version number attribute in the XML is different, the system-populated data is reloaded from the XML but the user data is preserved.
Also, maybe only check for the new XML file on the server when an internet connection is available, and only periodically (say, every 2 weeks).
Any other suggestions are welcome. If there is a simpler, cleaner way - please share.
Pratik
I think it's fair to say that this question has nothing to do with WP7 and everything to do with finding an efficient way to compute and deliver update deltas.
Timestamp your items. When requesting an update, specify the time of the last update. Your server can trivially query for items newer than this and return a delta. At the client (i.e. on the phone) it is not necessary to store a last-update time, because you can simply add one second to the most recent timestamp among the items present on the phone.
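As a sketch of the server-side query, assuming each entity table carries a LastModified column (a hypothetical name here) that is set on every insert and update:

-- @since = the most recent LastModified value already on the phone, plus one second.
SELECT *
FROM Products
WHERE LastModified >= @since
ORDER BY LastModified;
-- Repeat for Categories and CategoryGroups, or combine the three deltas into one response.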

VS2010 Database Project Deployment, to fail if data loss may occur or not?

I have a database project for a web app, and currently I have it configured to fail if data loss may occur during deployment. I feel safer this way. However I've run into a problem. I actually need to deploy changes on some things where I'm okay with the possible data loss, i.e. shortening column lengths where nothing would actually get deleted, but the system thinks it would.
I have 2 questions.
The first is this: other than enabling or disabling the catch-all go/no-go check, is there any way to have more granular control over this process, i.e. to specify columns it's okay to drop or shorten?
The second is, how do you guys handle these situations? Initially I had hoped that adding a pre-deployment script to drop the columns would be sufficient, however it seems to catch drops etc. in those files as well.
No, there isn't any way to control it at a more granular level, unfortunately.
I disable it when I know I'll be deploying something that will cause data loss but is what I want. Then I re-enable it after. Also, I would always check the change script that comes out when deploying to production.
Just update the column in a pre-deployment script to the truncated length?
E.g., to truncate mycol to 20 characters:
UPDATE mytable
SET mycol = LEFT(mycol, 20)
WHERE mycol != LEFT(mycol, 20)   -- only touch rows that are actually longer than 20 characters
The Microsoft guidance is to move the data out into a temporary table in pre-deployment, let the deployment engine run its check to see whether the table contains rows (this will pass because it is now empty) and upgrade the schema, and then move the data back in a post-deployment script.
For more information, see Barclay Hill's posts on the subject:
Managing data motion during your deployments (Part 1)
Managing data motion during your deployments (Part 2)
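A minimal sketch of that data-motion pattern, with made-up object names; the first part goes in the project's pre-deployment script and the last part in its post-deployment script:

-- Pre-deployment: park the rows so the data-loss check sees an empty table.
SELECT * INTO dbo.MyTable_DataMotion FROM dbo.MyTable;
DELETE FROM dbo.MyTable;

-- (the deployment engine now alters dbo.MyTable, e.g. shortens MyCol to 20 characters)

-- Post-deployment: move the rows back, truncating to fit the new column length.
INSERT INTO dbo.MyTable (Id, MyCol)
SELECT Id, LEFT(MyCol, 20)
FROM dbo.MyTable_DataMotion;

DROP TABLE dbo.MyTable_DataMotion;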
