"Queuing" for Powerautomate flow reduction? - power-automate

I am a low-credentialled creator at my institution (i.e. not a sysadmin with access to Azure VM execution; I'm on a per-owner licensed, non-premium account) and am developing a simple but intricate SharePoint list-based app to allow group members to book appointments in predetermined blocks. As a nod to the local office team, who want to keep using their existing dashboard system, I am also trying to "synchronize" their Excel workbook (with a table of 750 rows x 15 columns) with my list structure (smaller but comparable) using Power Automate, so that support staff can continue to input, modify and report in Excel while users are booking through the app.
I'm aware synchronization is a frustrating task, especially given that I essentially have to loop through the entire Excel table to evaluate whether anything needs updating (if anyone has a good shortcut for this, please let me know!), but I think I've found a scheme which gives them basically what they want by reducing how much of the workbook needs to be updated live. I can use a partial "quick sync" as a flow triggered when SharePoint detects that the file has been modified, and set up a scheduled overnight "full sync" to bring all the data up to speed. By taking the lookup work out of the core table-row loop and using some parallel execution, I've gotten the runtime of this "quick sync" flow down to a manageable 2-3 minutes.
However, it seems disruptive to run the sync loop every time the workbook file is modified. If folks are editing workbooks directly in Teams or Excel 365, it seems reasonable that there may be multiple edits within that window of time, and it might cause problems both server- and client-side to continuously loop sync jobs while the file is being edited. I could use some message passing to skip the edits prompted by my own list flows changing the workbook, but that wouldn't address the "live editing" use case.
So I had a couple of thoughts about how I might reduce the amount of work being done by this flow, using a hidden SharePoint list. Option 1 is to run a scheduled daemon job, which checks every half hour and executes once if it sees a flag set by the SharePoint file-modified trigger. Option 2 is to set up a kind of buffer queue structure as a SharePoint list, where we record jobs as they come in, use Delay actions to wait some time, and then only run if we're the last job in the queue (sketched below).
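In code terms, Option 2's trailing-edge check would behave roughly like this. It's only a sketch of the logic: in the real flow these would be SharePoint "create item" / "get items" actions plus a Delay action against the hidden list, stubbed in memory here.

```typescript
// Trailing-edge debounce sketch. SharePoint steps are stubbed in memory;
// all names are hypothetical stand-ins for flow actions.
const queue: number[] = [];
const delay = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function onFileModified(): Promise<void> {
  const myJob = Date.now();
  queue.push(myJob);                    // "create item": record this trigger
  await delay(5 * 60 * 1000);           // Delay action: wait out the editing burst
  if (Math.max(...queue) === myJob) {   // "get items": am I still the newest job?
    await runQuickSync();               // only the last trigger in a burst syncs
  }
}

async function runQuickSync(): Promise<void> {
  // the existing 2-3 minute quick-sync would run here
}
```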
I think I can implement either of these options, though they're definitely hacky solutions to the problem, so can I ask if I'm reinventing the wheel? Are there other ways within the Power Automate platform or the SharePoint folder-modified connector to reduce redundant flow triggers, or is there a more effective form of Excel listener that might simplify the problem?
I'd be grateful for any tips you might have!

Related

Delays in updating content controls when multiple context.sync() calls are used in Word for Mac

We update a content control for every character typed in the task pane's input field, so that the user can see live updates in the Word document.
Recently we added functionality for locking content controls. Each keystroke now happens as below:
User types a character in an input field
We search for the content control matching that input field (involves context.sync)
Unlock the content control (involves context.sync)
Update the value in the content control (involves context.sync)
Lock the content control again (involves context.sync)
All this works fine in Word for Windows without problems.
But it is extremely (visibly) slow in Word for Mac (Apple machines).
How should I overcome the delays happening on Mac?
As Juan mentioned in the comment, there are some important details that the team would need to investigate. Sample code would be good too.
That being said, just looking at what you describe, I think you can dramatically cut down on the context.sync() statements. Unlocking the content control, updating its value, and locking it should all be possible to do in one sync.
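For instance, something along these lines batches all three operations into a single round-trip. This is an untested sketch, and it assumes each input field's control can be located directly by a tag, rather than by a search that needs its own load/sync:

```typescript
// Sketch: unlock, update, and re-lock a content control in ONE sync.
// Assumes the control is tagged with the input field's name (hypothetical).
async function updateControl(tag: string, newValue: string): Promise<void> {
  await Word.run(async (context) => {
    const control = context.document.contentControls.getByTag(tag).getFirst();
    control.cannotEdit = false;                                 // unlock (queued)
    control.insertText(newValue, Word.InsertLocation.replace);  // update (queued)
    control.cannotEdit = true;                                  // re-lock (queued)
    await context.sync(); // one round-trip executes all three in order
  });
}
```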
I have a bunch of details about minimizing sync-s in my book, "Building Office Add-ins using Office.js". Quoting one of the sections from it:
As an add-in author, your job is to minimize the number of context.sync() calls. Each sync is an extra round-trip to the host application; and when that application is Office Online, the cost of each of those round-trips adds up quickly.
If you set out to write your add-in with this principle in mind, you will
find that you need a surprisingly small number of sync calls. In fact, when
writing this chapter, I found that I really needed to rack my brain to come up with a scenario that did need more than two sync calls. The trick for
minimizing sync calls is to arrange the application logic in such a way that
you're initially scraping the document for whatever information you need
(and queuing it all up for loading), and then following up with a bunch
of operations that modify the document (based on the previously-loaded
data). You've seen several examples of this already: one in the intro chapter,
when describing why Office.js is async; and more recently in the "canonical
sample" section at the beginning of this chapter. For the latter, note that the
scenario itself was reasonably complex: reading document data, processing
it to determine which city has experienced the highest growth, and then
creating a formatted table and chart out of that data. However, given the
"time-travel" superpowers of proxy objects, you can still accomplish this task
as one group of read operations, followed by a group of write operations.
Still, there are some scenarios where multiple loads may be required. And in
fact, there may be legitimate scenarios where even doing an extra sync is the
right thing to do – if it saves on loading a bunch of unneeded data. You will
see an example of this later in the chapter.
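To make that shape concrete, here is a skeletal sketch of the read-then-write pattern with exactly two syncs. It uses the Excel API and hypothetical ranges; it illustrates the pattern, not the chapter's sample:

```typescript
// Two-sync shape: queue all reads, sync once, process in memory,
// queue all writes, sync once. Ranges are hypothetical.
async function readThenWrite(): Promise<void> {
  await Excel.run(async (context) => {
    const sheet = context.workbook.worksheets.getActiveWorksheet();
    const source = sheet.getRange("A1:B10");
    source.load("values");                      // queued read
    await context.sync();                       // round-trip #1: all reads

    // All processing happens locally, with no further round-trips.
    const doubled = source.values.map((row: any[]) => row.map(v => Number(v) * 2));

    sheet.getRange("D1:E10").values = doubled;  // queued write
    await context.sync();                       // round-trip #2: all writes
  });
}
```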

Making an application in Visual Basic to handle Dialogue in Morrowind?

I want to make a program for a very specific, catered purpose: to aid me in making a large set of quest mods for the video game The Elder Scrolls III: Morrowind. I'm attempting to do this through either Excel or Visual Basic, and here I've provided a little summary of how dialogue works in the game's normal creation program, and then what I want to create outside of it and improve on.
How Morrowind dialogue works:
For those of you who may be familiar with the game, you'll remember that talking to NPCs brings up a set of text, and this text is their dialogue. There are different "topics"; if an NPC has dialogue set for a topic, the player can see it and click on it, bringing up a new wall of text. This is generally how dialogue works in the entire game on the player's end.
In creating a Morrowind mod, the way dialogue really works in the "Construction Set" (the program used to create and edit the game) is that a database contains every entry of text, and these entries have conditions set on them which limit which NPCs can say a given entry of dialogue. So for instance, a topic like "latest rumors" will have lots of entries in it, with lots of different NPCs having something to say about it. The topic itself is a condition of sorts, with potentially dozens of entries attached to it, and conditions can also be set on specific entries. Conditions can include checking whether the NPC is in a given city, whether the in-game time is night or day, whether the player is at a certain numbered stage/index of a given quest line, and much, much more. This system is what makes all quests possible and the game dynamic.
What I want to create:
I am beginning a rather large mod project that includes many entries of dialogue, many new and old topics, and many quests and quest stages. I could list all the reasons here, but essentially my problem is that the Construction Set has many limitations in terms of organization that make it difficult to write a large mod's dialogue in it. I would be better off designing, setting the topics for, and editing all of my dialogue entries outside the Construction Set and implementing them when I'm confident that the writing and quests are finished.
Essentially, if this is too complicated, I could just write all the quests and dialogue in Microsoft Word, but optimistically I'd like to do something more dynamic and helpful to me as a writer, and be able to use real variables to store and set journal/quest indexes, filter dialogue by quest or by NPC, and easily edit dialogue and quests without getting lost in the normal game's thousands of lines of other dialogue.
*I can't post more than two links here, but I posted on reddit and there I have a gallery showing how the Construction Set works and what I have made in Visual Studio so far:
https://www.reddit.com/r/learnprogramming/comments/4oap6w/making_an_application_in_visual_basic_to_handle/
So, my intention is to make a program in Visual Studio, using Visual Basic or Python, that lets me write, organize, and set the text for dialogue, and filter it based on conditions.
This likely requires creating a database file for the program and being able to create variables at runtime, because I want the user of the program to be able to add new dialogue topics and new journals/quests, and all of these things will have conditions with values associated with them.
Any help, advice, and direction is appreciated. I am relearning Visual Studio (I took two courses on it), and I am unfortunately very new to Excel and databases in general.
You are correct in that a database of some kind would be needed. However, you could approach this several different ways depending upon your comfort level, money, portability requirements, etc...
One way to do it would be to use XML to store your data. It has the advantage of being extremely portable and transformable. Since this is likely a program where only one person would be directly accessing the data at any given time, it might be your best bet.
Another option is to use MS Access if you have Office. This gives you a workable, albeit fairly basic, relational database. This would probably be a better choice if you have 2 or 3 people that could possibly be working in it.
A third option would be a full DBMS. MySQL is free, and you could install it on your local machine or on a remote server. Installing it on a remote server would give you the option of allowing many people to connect to it and modify data transactionally. However, this would be overkill if it is only a one- or two-person system.
Circling back around to XML... That will most likely be your best bet. It is simple and integrates perfectly with .Net applications. It can be imported/transformed to any data-store later once you are finished (or multiple times as you progress). Interfacing with XML via .Net allows you to work with it like a database within your code, so if you design your data layer properly up front, you could even migrate to a full database later if the project expands drastically. The biggest downside to XML would be that it isn't relational in the way that a regular DBMS is, and it is not inherently transactional. You do not have atomic updates, so if you have several people modifying things at once you could lose data if it is overwritten.
You could get around that to an extent by writing a more advanced data layer to interface with the XML files, but if only one person is making changes locally, and then the data file is, say, uploaded to a remote datastore later, the only thing to keep in mind would be coordinating when and who can modify that file. Mostly logistics stuff at that point.
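As a concrete starting point for whichever store you choose, the entities you describe might be modelled something like this. It's sketched in TypeScript purely to show the shape; all names are illustrative rather than Construction Set terminology, and the same model translates directly to VB.NET classes, XML elements, or database tables:

```typescript
// Illustrative model: a dialogue entry belongs to a topic, optionally to a
// speaker, and carries conditions that must all hold for it to be shown.
interface DialogueCondition {
  variable: string;                                  // e.g. "JournalIndex:MyQuest" or "City"
  comparison: "==" | "!=" | "<" | "<=" | ">" | ">=";
  value: string | number;
}

interface DialogueEntry {
  id: string;
  topic: string;                                     // e.g. "latest rumors"
  speakerId?: string;                                // optionally restrict to one NPC
  text: string;
  conditions: DialogueCondition[];
}

type GameState = Record<string, string | number>;

function holds(actual: string | number | undefined,
               op: DialogueCondition["comparison"],
               expected: string | number): boolean {
  if (actual === undefined) return false;
  switch (op) {
    case "==": return actual === expected;
    case "!=": return actual !== expected;
    case "<":  return Number(actual) <  Number(expected);
    case "<=": return Number(actual) <= Number(expected);
    case ">":  return Number(actual) >  Number(expected);
    case ">=": return Number(actual) >= Number(expected);
  }
}

// Filter a topic's entries down to those whose conditions all pass,
// which is also how you'd filter by quest or by NPC while editing.
function entriesForTopic(entries: DialogueEntry[], topic: string,
                         state: GameState): DialogueEntry[] {
  return entries.filter(e =>
    e.topic === topic &&
    e.conditions.every(c => holds(state[c.variable], c.comparison, c.value)));
}
```

Once the shape is settled, persisting it as XML (or anything else) becomes a serialization detail behind your data layer.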

Excel List-Object VBA Performance Bug?

I have an issue with performance in an Excel application which uses List Objects (AKA Excel Tables). I suspect it may be a bug, but despite my Googling I could not find any reference to it. I've already developed a workaround for my application, but what I'm interested in is whether anyone can give any insight into why this happens.
Note: I’m using Excel 2007 on Windows Vista. The setup is as follows: I have a spreadsheet which holds data in a List Object, with VBA code which can be kicked off via a command button; this code may make several edits to any number of cells on the worksheet, so Excel’s Calculation mode is set to Manual prior to any edits.
The problem I've encountered is that if the currently active cell is within the List Object, then setting the calculation mode to Manual seems to have no effect whatsoever. So if a user happens to have a calculation-heavy workbook open in the same instance, the VBA code runs very slowly. I practically had to pull my application apart to discover that this was caused by the active cell, and I created a new workbook with a simple version of this scenario to confirm that there wasn't some sort of corruption in my application.
I’ve been doing a number of test cases with this, and below are the results from what I’ve found:
Although it seems generally related to the calculation, there is still a time difference when the calculation mode is switched between Manual and Automatic...
Manual = 7.64 secs
Automatic = 9.39 secs
Manual mode is just under 20% faster than Automatic. But my expectation was that they'd be more or less the same, considering the issue seems to be the calculation kicking off even in Manual mode.
Compare that to when the active cell is not on a List Object, and the results are vastly different...
Manual = 0.14 secs
Automatic = 3.23 secs
Now the Manual run is 50 times faster, and the Automatic run shows that the calculation shouldn't have taken any more than 3.2 secs! So now the first test looks like it might have run the calculation twice while in Manual mode, and nearly three times while in Automatic mode.
Repeating this test again, this time in an instance with no calculation formulas in any cells, it suddenly doesn't seem as bad:
Active cell is List Object & Calc is Manual = 0.17 secs
Active cell is List Object & Calc is Automatic = 0.20 secs
Active cell is Empty & Calc is Manual = 0.14 secs
Active cell is Empty & Calc is Automatic = 0.18 secs
It’s still slower, but now it’s only by 10-20%, making it unnoticeable. But this does show that the issue must be related to the Calculation in some way, as otherwise it should have taken just as long as the first test.
If anyone wants to create these tests to see for themselves, the setup is as follows:
New Workbook with a List Object added (doesn’t have to be linked to any data)
Add some formulas that will take Excel a while to calculate (I just did '=1*1' repeated 30,000 times)
Write some quick VBA code which will (i) loop through a simple edit of a cell several hundred times, and (ii) record the time it took
Then just run the code while changing the active cell between the List Object and an empty cell
I'd be very interested to hear if anyone can explain why Excel behaves in this way, and whether it is a bug or some feature to do with List Objects which actually has some genuine use.
Thanks,
Stuart
This is not related to the "bug" you found, which is quite interesting and intriguing.
I just want to share that there is a great way to avoid calculation delays. I had fantastic results with this and now I use it all the time.
Simply put, Excel takes a long time copying data back and forth between the "VBA world" and the "spreadsheet world".
If you do all the "reads" at once, process, and then do all the "writes" at once, you get amazing performance. This is done using variant arrays as documented here:
http://msdn.microsoft.com/en-us/library/ff726673.aspx#xlFasterVBA
in the section labeled: Read and Write Large Blocks of Data in a Single Operation
I was able to refactor some code I had that took 5 minutes to run and bring it down to 1.5 minutes. The refactoring took me 10 minutes, which is amazing because it was quite complex code.
Regarding Table performance (and performance, in general):
I know this is an old question, but I want to get this documented.
One thing that changed between older versions of Excel and the post-2007 versions is that Excel now activates the target sheet of any PasteSpecial operation. You cannot override it by turning off ScreenUpdating and making calculations manual. Such Activation WILL make the sheet visible, and cause uncontrollable flicker.
My original VBA code ran very fast on an old, single-processor XP box running Excel 2000. The change to Excel 2013 on a modern machine was stunning: code execution became terribly slow. The three areas that kill performance are PasteSpecial from one sheet to another; any other code that requires activating sheets (zoom level, Advanced Filter, sheet-level range names, etc.); and automating sheet protection/unprotection.
This is too bad, because PasteSpecial helped "cleanse" data you copy (Direct use of .Copy to a target will throw the occasional error).
So you need to review your code and make sure you are using direct assignment to the right property for the data type you need (from among Value, Value2, Text, and Formula, for example), instead of PasteSpecial.
e.g. .Range("MYRANGE").Value = .Cells(5, 7).Value2
You also need to be scrupulous in resisting use of Select and Activate throughout your code.
As referenced above, many comments you'll find in Excel fora about that last point claim that you "never" need to use activation, which is clearly untrue, since several things in Excel only apply to or require active sheets. Understanding the cases where activation is forced automatically by a particular method or use of an object will help in coding as well. Unfortunately, you won't see much in the way of documentation of this.
Update:
Regarding Conditional Formatting, you'll find many complaints in various fora about the slowness of Excel when encountering a large number of Conditionally-formatted cells. I suspected this would impact Excel Tables since they have many table format options. To test this, I took a large workbook we use that is currently formatted as several worksheets with the same style of Excel Table on them.
After converting the tables to a conventional range, I noticed no difference in speed of code execution. This would seem to indicate that use of Excel Table formats is far superior to conditionally-formatting your own arrays of cells.

Is it possible to assign a work item in TFS to different people?

TFS (2008) has the great feature of work item tracking, where I can easily see what people are doing all day long. Now I was wondering if I could assign a work item to different people, or if they could log time on an item in a trackable way.
For example: We have two developers, Mr. A and Ms. B. A did 4 hours of work and 50% of the work item "Create customer screen" until he got ill. Then B has to finish the other 50%, but I do not want to lose the progress of A, because otherwise it would seem that A worked 4 hours less and B 4 hours too much.
Unfortunately, it seems that I can enter only one name in "Assigned to" when I am using TFS 2008, and I cannot save the item if I try to separate the names with a comma or semicolon. Do you know if such a feature is included in TFS 2010?
Thank you for your help.
No. This is one of the few aspects that haven't changed from 2008 to 2010.
Thomas
I'm not sure about assigning one item to multiple people, but you could set up groups to which multiple people belong. I'm not sure of your other requirements, but this should solve the issue here. In essence, Mr A and Ms B would both belong to a group called, say, 'Developers', to which the work item is assigned. Thus the full 8 hours is logged against a single entity.
Here is an (old) article on how to do this elegantly. You may want to split up your groups to as specific a category name as possible (e.g. 'Core Developers', 'Javascript Developers')
Found this link that implies that they are aware of the need but have not implemented a resolution yet
In TFS, if you assign a work item to someone else, it will maintain that in the work item history, which is available for reports. TFS 2010, however, only tracks 3 fields: completed work (usually in hours), remaining work, and original estimate. If A and B both update completed work, you should be able to separate that work out in Reporting Services.
As @DarrellNorton said, all the information is recorded in the history of the work item, so you can retrieve the completed-work values for each historical entry and correlate them to the assignee at that point in time. So the information you need is already in your database, if you can work out how to extract it. (The danger is that if someone leaves the completed-work field unchanged, you might record the first dev's hours against the second dev as well - you'd ideally need to add a state transition rule in your work item templates that clears the field back to 0 whenever the item is assigned to a new developer.)
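A sketch of that extraction logic, with illustrative names rather than the actual TFS object model:

```typescript
// Attribute each increase in the cumulative Completed Work field to whoever
// was assigned at the time of that revision. Names are illustrative.
interface Revision {
  assignedTo: string;
  completedWork: number; // cumulative hours recorded at this revision
}

function hoursByAssignee(history: Revision[]): Map<string, number> {
  const totals = new Map<string, number>();
  let previous = 0;
  for (const rev of history) {
    const delta = rev.completedWork - previous;
    if (delta > 0) {
      totals.set(rev.assignedTo, (totals.get(rev.assignedTo) ?? 0) + delta);
    }
    previous = rev.completedWork;
  }
  return totals;
}
```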
Another approach is to add your own fields to your TFS work items. It would be very easy to add (for example) fields "HoursDoneByMrA" and "HoursDoneByMrB" and expose these onto the work item form so that each developer could have independent statistics by which you could track the information you require. As long as your team size isn't huge, this would be quick/easy to achieve, and would also give you an instant summary on the work item itself of all developers who had touched the work item and their contributed hours, so you wouldn't even need to go as far as building a specialised report. (TFS PowerTools provides editors for the work item types that make adding and displaying this information much easier than hand-editing the XML templates. This approach would work in TFS 2005, 2008, 2010 - once you know how to use the power tools to do it, it would only take about a minute per developer to put this in place).

Designing a complex workflow diagram

We've got a surprisingly complex workflow that needs to be monitored by quasi-technical employees via an in-house webapp. There are about 30 steps, some of which are manual (editing), some are semi-automated stop points (like "the files have been received" or customer approval of certain templates), and some are completely automated (file conversion, search indexing, etc.). The flowchart for all of these steps is large and complicated, and three people might be working on three completely different steps at any one time.
How would you present this vast amount of information as usefully as possible to your users? Just showing the whole diagram seems like the brute force solution. But it's big, and it'll likely get bigger as we do more things. Not to mention the complexity necessary to encode this entire diagram in HTML.
I assume you don't want to show these just for entertainment or mockery, but to help the users along the way, automate as much as possible, document the process, etc. It would probably help if you clearly defined the goals or purpose of your app.
I don't see a point in showing the entire workflow, except for "debugging the business rules", or maybe if the clients want to see it.
If your goal is to help users do their job, I would present the state the "project" (or whatever term fits better) is at, and the possible transitions to other states.
The state might be multiple, mostly independent variables: one might describe the progress of content, e.g. "incomplete" / "complete" / "reviewed by 2nd staffer" / "signed off by 2nd staffer"; others might contain a schedule that is developed in parallel, e.g. "test print date = not scheduled", "print date = not scheduled", "final delivery = tomorrow, preferably yesterday".
A transition might be "Sent to customer for review", "mark as content-complete", "content modified", etc.
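As a minimal sketch of that "state plus available transitions" idea, for the content variable alone (state names abbreviated; everything here is illustrative):

```typescript
// One independent state variable and the transitions legal from each state.
type ContentState = "incomplete" | "complete" | "reviewed" | "signed off";

const transitions: Record<ContentState, ContentState[]> = {
  "incomplete": ["complete"],
  "complete":   ["reviewed", "incomplete"],   // "content modified" drops it back
  "reviewed":   ["signed off", "incomplete"],
  "signed off": ["incomplete"],
};

// The UI would show the current state and only these next moves.
function availableTransitions(state: ContentState): ContentState[] {
  return transitions[state];
}
```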
Is this what you have in mind?
I propose to divide your workflow into modules and represent the active state for each module.
A module is a subset of your main workflow. For example, it could be divided by tasks, person, roles, department, etc. This will greatly simplify the representation of the workflow. Let's say someone is responsible for data entry at many critical moments. We can group all his tasks in one module (or sub-workflow) containing the same activities, inputs, outputs and conditions. Modules can be interdependent and related.
A state is where we are located in a module. In simple workflows there is only one active task. In real life we are multi-threaded! So maybe in one module many states could be active at the same time. The state also includes active inputs, outputs and memory bits.
An input is something required to perform an activity or to evaluate a boolean condition. It could be a document, a piece of data, a signal...
An output is something resulting from a task: a piece of information, a document, a signal...
Enough definitions?
Then simply convert your workflow into LADDER LOGIC and you have your states!
See Ladder Logic definition on Wikipedia
You display only active states:
Active task(s) for the module
Inputs required / inputs confirmed
Output required / output realized
Conditions to continue
Seems abstract?
Here is a small example...
Janet enters data in the system. She manages the green tasks of the diagram. We focus only on her work, not other tasks. She knows how to do 16 tasks in the workflow. We are waiting on the following actions from her to continue, and her intranet dashboard says:
Priority 1: You must send a PO to order enough pencils for the next month based on the sales report.
Task: Send a purchase order
Inputs: Forecast report from the marketing department
Outputs: PO, vendor, item, quantity
Condition for completion: PO sent and order confirmation received from supplier
Priority 2: You must enter into the financial system the number of erasers rejected by production
Task: Data entry
Inputs: Reject count from production
Outputs: Number of rejects
Condition for completion: data entered and confirmed
We do a lot of troubleshooting on automated production systems having hundreds of thousands of ladder steps (the workflow is too complex to be represented as a whole). When the system is blocked, we look at each module and determine what inputs are missing for task activation or completion.
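In code, that "what is this module waiting on?" check might look like the following sketch (names illustrative):

```typescript
// A module groups one person's tasks; when blocked, list the inputs its
// unfinished active tasks still need.
interface ModuleTask {
  name: string;
  requiredInputs: string[];
  done: boolean;
}

interface WorkflowModule {
  owner: string; // e.g. "Janet"
  activeTasks: ModuleTask[];
}

function missingInputs(module: WorkflowModule, confirmed: Set<string>): string[] {
  return module.activeTasks
    .filter(task => !task.done)
    .flatMap(task => task.requiredInputs.filter(input => !confirmed.has(input)));
}
```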
Good luck!
This sounds like the sort of application for which BPEL is suited.
Of course you don't want to re-architect your system right now. But there are a number of BPEL implementations out there, some of which include graphical editing tools. One of these might help you in your current situation, because they are good at handling scope and hiding detail. So I think you might derive benefit from drawing your workflow as a BPEL diagram even if you don't do anything else with the language.
The Wikipedia page lists several of the available implementations. In addition, Oracle's JDeveloper IDE includes a BPEL Diagrammer as part of its SOA suite; unfortunately it is no longer part of the standard install but it is still available. Find out more.
Try doing it in layers. You have the most detailed layer done, now add additional docs with the details hidden, grouped into higher-level business processes. Users should be able to safely ignore some of those details, but it's good for them to have visibility of how their part fits in to the whole.
You may need more than one higher-level document.
You can use Prezi to present this information to users in a lucid manner.
Split and present the workflow into phases such that the end user is easily able to identify the phase he is currently in.
Display as many phases as there are inputs. The workflow starts with 6 different inputs, so display six different buttons on screen, enabling the user to select the input that he wants.
On selecting a button, zoom into the workflow depicting the next steps. This would also help the user to verify the actions that he has done so far to reach the current state. But this way of presenting could become cumbersome as the number of completed steps goes up. Say the user has almost reached the end of the workflow; to check the next step, he would have to go through all the steps, which might frustrate him.
To avoid this, you can split the complete workflow chronologically into 3-5 phases. The phases should be split logically. The ultimate aim is not to overwhelm the users with the full workflow. Personally, I would try to avoid tasks involving this workflow if it were presented the way you have shown. No offense - I bet you feel the same.
I could give you a better picture if you re-posted the image with the state names replaced by numbers.
I'd recommend having the whole flow documented somewhere, but in terms of what is distributed to users, how about focusing on task-oriented flows? No one user will be responsible for the entire process I would imagine.
For example, let's say I have 2 roles, A and B, and 6 tasks, 1 through 6, executed in order. Each task may have multiple steps but is self-contained (e.g. download the file, review, run process, review again, upload). A does the even tasks and B does the odd tasks.
A would need to know about those detailed steps that comprise tasks 2, 4, and 6 but not about what goes on in 1, 3, and 5. So hand A a detailed set of flows for the tasks he is responsible for, along with a diagram that treats each task as a black box.
If the flow can't be made modular in this way, you may want to review the process itself to see why it's so complex.
How about showing an example of a workflow scenario, that is, showing the transitions in one possible pass through the workflow? You could cater this to a specific user profile and highlight the pertinent states, dimming the others. This allows them to get a clear idea of the transitions by seeing a real-life example.
