I have two SharePoint sites (SiteA and SiteB). In SiteA I have an Excel file called LocationA; when this file is edited (rows added, edited, or deleted) I want to reflect these changes in another Excel file called LocationB, which is stored in SiteB. (I have not added the delete operation yet, but suggestions on how I might do so are welcome.)
The issue is that the flow is adding rows instead of updating the existing rows in LocationB.
Please find my flow below (it runs without errors, but the output is the problem).
Note:
The expression in the Filter array action is string(items('Apply_to_each')?['ID']), which converts the ID field to a string.
The expression in Condition 2 is empty(body('Filter_array')); this condition checks whether the list item already exists in the Excel file.
Because you are using the 'Add a row into a table' action in Power Automate, the flow inserts a new row every time. In the branch where the row already exists, change the action to, for example, 'Update a row'. That will work.
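Roughly, the corrected loop would look like this (a sketch of the structure, not exact action names; the Filter array compares the IDs as strings on both sides):

    Apply to each        (rows from LocationA)
        Filter array     (rows from LocationB, where string(item()?['ID'])
                          equals string(items('Apply_to_each')?['ID']))
        Condition 2      empty(body('Filter_array'))
            If yes -> Add a row into a table   (no match: insert into LocationB)
            If no  -> Update a row             (match found: update instead of adding)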
I’m making a Power Query (M Code) that combines all Sheets in Workbooks stored in a folder. The logic is the following:
Read folder content
Form a list of Workbooks(Sheets) within
Invoke a Function to “Format” content of each Sheet to append the records in a consolidated table
The invoked Function on every Sheet should:
Identify where the “Titles” Row is located
Remove the “n” records above the “Titles” Row
Remap the “Titles” to standard Names from the HeaderMap table
Reorder the columns according to the HeaderMap table
Promote the “Titles” Row to Column Headers
Change column types according to the HeaderMap table
Remove “Blank” records
The caveat is that I may encounter Sheets that have no useful information. I need to peek inside the Sheet to verify whether it has valid content and only then execute the function to format it. How can I skip such a Sheet when consolidating all tables? Something like:
Identify indexed row where the “Titles” are located
If (no valid “Titles” found) then Skip Sheet
else (continue with remaining steps)
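In M, a hedged sketch of that guard (TitleKeyword, FormatSheet, and SheetTables stand in for my real logic):

    let
        // returns null when no valid "Titles" row is found, so the Sheet can be skipped
        SkipOrFormat = (sheet as table) as nullable table =>
            let
                FirstColumn = Table.Column(sheet, Table.ColumnNames(sheet){0}),
                TitleRow = List.PositionOf(FirstColumn, "TitleKeyword") // -1 when absent
            in
                if TitleRow = -1 then null else FormatSheet(sheet, TitleRow),

        // consolidate, dropping the skipped (null) Sheets before combining
        Consolidated = Table.Combine(List.RemoveNulls(List.Transform(SheetTables, SkipOrFormat)))
    in
        Consolidated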
Thank you in advance
I am attempting to create a flow which will be used to update the members of various SharePoint Permission Groups. I ran into an issue with one of the actions not executing because it said the value could not be found. After much trial and error I still could not figure out why it was failing, so I started to remove actions and steps from the flow. I've taken it all the way back to my trigger and 1 action, and I still can't figure out what is causing my issue. Here is the setup.
I have a list with the following fields:
Employee Name - Person or Group
Folder - Choice Column
Action - Choice Column
Flow is triggered when an item is created or modified and has the trigger condition of @not(equals(triggerBody()?['Action'],'Updated'))
1st action is just a Get items
When I add an entry to the list and select a person, a Folder, and an Action, the flow runs. But when it does, it deletes the selected choice in the Folder column, leaving it blank. Why would it do that? In the 2 steps I'm not even specifically referencing that field, and if it is because it is a choice field, why isn't the Action column value also removed? It is not my intent to delete or remove field values.
I need the value in that field to remain, as I intend to use it later in a concat string, but I can't call what isn't there.
What is going on?
Update #1: I deleted the original flow and rebuilt it with just the 2 steps but without the trigger condition. I re-ran the flow, and immediately the option selected in the "Folder" column was removed from the list. None of the list columns are set as "required", and the choice fields are not multi-select.
Update #2: Looking at the trigger action settings, the Split On statement is @triggerOutputs()?['body/value']. The sample I was using to build my flow shows the statement as @triggerBody()?['value']. There doesn't seem to be any way for me to change the statement; could this have anything to do with why my field value is being removed from the list?
Power Query sourcing multiple Excel files from a folder.
Files are monthly transactions; the month and year are part of the file names. When the next month comes, new files (in the same format, of course, but with new file names) replace the previous ones in the source folder. The new file names cause the query to fail on refresh in the following way.
When the files are combined and displayed to begin the transformations, the file names constitute a column of data (named Source). One of my steps in transforming the data is to “use first row as headers”; at this point the first file name in that Source column becomes its column header name.
The problem is that when files having new names replace the previous ones, that column name is no longer found, since the row promoted to be the column header is the name of a new file. PQ is looking for a column header having the original file name and doesn’t find it, so subsequent transformations using that column cause errors.
The error message is: “[Expression.Error] The column ‘[OriginalFileName]’ of the table wasn’t found.”
Basically, that original file name takes on a permanent role as a column name that is part of the query.
I successfully managed to get around the problem by manually renaming all the columns instead of promoting the first data row to be the column headers. Now files with new names are processed without complaint. But this solution is clunky and I would like to keep the step of promoting the first row to be the header.
Does anyone know how to overcome this problem?
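One way to keep the promote step is to stop referencing the promoted file-name header by its literal name and instead rename the first column by position immediately after promotion. A minimal M sketch, assuming the promote step is named #"Promoted Headers" and the Source column sits first:

    // rename whatever the first column is called (this month's file name)
    // to a stable name, so later steps never depend on the literal file name
    RenamedSource = Table.RenameColumns(
        #"Promoted Headers",
        {{Table.ColumnNames(#"Promoted Headers"){0}, "SourceFile"}})

Later steps can then reference "SourceFile" regardless of which month's files are in the folder.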
I am getting a "Violation of UNIQUE KEY constraint 'AK_User'. Cannot insert duplicate key in object 'dbo.tblUsers'." error when trying to copy data from an Excel file to a SQL database using SSIS.
Is there any way of ignoring this error and letting the package continue to the next record without stopping?
What I need is: if it inserts three records and the first record is a duplicate, instead of failing it should continue with the other records and insert them.
There is a system variable called Propagate which can be used to continue or stop the execution of the package.
1. Create an OnError event handler for the task that is failing. Generally it is created for the entire Data Flow Task.
2. Press F4 to get the list of all variables and click the icon at the top to show system variables. By default the Propagate variable will be True; you need to change it to False, which basically means that SSIS won't propagate the error to other components and will let the execution continue.
Update 1:
To skip the bad rows there are basically two ways:
1. Use Lookup
Try to match the primary key column values in the source and destination, and then wire the Lookup No Match Output to your destination. If the value doesn't match the destination, the row is inserted; if it matches, the row is skipped (or redirected to some table or flat file using the Lookup Match Output).
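If it is easier to enforce this on the database side, the same "insert only unmatched rows" idea can be sketched in T-SQL against a staging table (staging.Users, UserKey, and UserName are illustrative names; only dbo.tblUsers and AK_User come from the question):

    -- hedged sketch: load the Excel data into a staging table first, then
    -- insert only the rows whose key is not already in the destination
    INSERT INTO dbo.tblUsers (UserKey, UserName)
    SELECT s.UserKey, s.UserName
    FROM staging.Users AS s
    WHERE NOT EXISTS (
        SELECT 1
        FROM dbo.tblUsers AS t
        WHERE t.UserKey = s.UserKey   -- the column behind the AK_User constraint
    );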
For more details on Lookup, refer to this article.
2. Or you can redirect the error rows to a flat file or a table. Every SSIS Data Flow component has an Error Output.
For example, for the Derived Column component, the error output dialog box is where you configure this.
But this option may not be helpful in your case, as redirecting error rows on the destination doesn't work properly: if an error occurs, it redirects the entire batch without inserting any rows into the destination. I think this happens because the OLE DB destination does a bulk insert, or inserts data using transactions. So try to use Lookup to achieve your functionality.
I have a C#.Net add-in to Excel 2003. I am hoping there is a hook (event?) to which I can attach, to detect when the user has deleted a row or rows from the active worksheet, as some caches will need to be recomputed or discarded when this happens.
Is there any such hook or event? If not, is there a way of achieving what I want?
Unfortunately, there doesn't appear to be a direct way to detect when a row is deleted. According to the Worksheet event list, you could use the Change event to figure out that something has changed, then loop through all rows in the Worksheet to figure out what changed and update your cache accordingly. This may help you think through other ways of using the Change event as well.
You can try the following:
Create a named range that points to the bottom-right cell of the sheet (cell XFD1048576 in Excel 2007 and later; IV65536 in Excel 2003, which the question targets).
If a sheet change event is raised, test whether this named range still refers to the same cell. If so, no row or column has been inserted or deleted, and the event indicates some other change (like a cell value change).
If the named range refers to a different address, then a row or column has been deleted.
If it returns a #REF! error, then a row or column has been added and the named range's address exceeded the maximum. In either case (changed address or #REF!), delete the named range and recreate it.
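A minimal C# interop sketch of that approach (SentinelName and InvalidateCaches are placeholders; error handling trimmed):

    // assumes: using Excel = Microsoft.Office.Interop.Excel;
    // wire up once at startup: application.SheetChange += OnSheetChange;
    const string SentinelName = "RowColSentinel";   // illustrative name

    void InstallSentinel(Excel.Worksheet ws)
    {
        // point a named range at the bottom-right cell of the grid
        // (IV65536 in Excel 2003, XFD1048576 in later versions)
        string refersTo = string.Format("='{0}'!R{1}C{2}",
            ws.Name, ws.Rows.Count, ws.Columns.Count);
        ws.Names.Add(Name: SentinelName, RefersToR1C1: refersTo);
    }

    void OnSheetChange(object sheet, Excel.Range target)
    {
        var ws = (Excel.Worksheet)sheet;
        Excel.Name sentinel;
        try { sentinel = ws.Names.Item(SentinelName); }
        catch (System.Runtime.InteropServices.COMException)
        { InstallSentinel(ws); return; }   // first run: sentinel not created yet

        try
        {
            Excel.Range r = sentinel.RefersToRange;
            if (r.Row != ws.Rows.Count || r.Column != ws.Columns.Count)
            {
                // sentinel shifted up/left: a row or column was deleted
                InvalidateCaches(ws);
                sentinel.Delete();
                InstallSentinel(ws);
            }
        }
        catch (System.Runtime.InteropServices.COMException)
        {
            // RefersToRange threw (#REF!): a row or column was inserted,
            // pushing the sentinel off the grid
            InvalidateCaches(ws);
            sentinel.Delete();
            InstallSentinel(ws);
        }
    }

    void InvalidateCaches(Excel.Worksheet ws) { /* recompute or discard your caches */ }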