User-Exit EXIT_SAPMM07M_003 is not triggered - include

When I create batches manually or from a goods movement, the program does not seem to pass through the code I added in the user-exit include. When I create a batch and save it, execution never stops at the breakpoint inside the user-exit include, which is why I concluded that the exit is not being triggered at all.
When I debug the code I wrote for the user exit separately, all the values are in place and the data retrieval works fine, so it is a mystery why the value is not reflected in table MCHA.
Do I need to configure or activate something?

A customer exit only runs once it has been assigned to an enhancement project and that project has been activated. You can activate or deactivate the project (and with it the user exit) in transaction CMOD.

Flow Triggering Itself (Possibly), Each run hits past IDs that were edited

I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then runs some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the total of the array. I then have a conditional that assigns a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a number of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into each item, which saves after each column change (and which I assume triggers a run of the flow).
The flow seemingly works, but based on the run history it seemed to get bogged down at a certain point. I let it sit overnight and then tested again, and now it shows runs for multiple IDs at a time even though I only edited one specific item.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals causing a loop, but all my conditionals resolve. Pictures included. I am just not sure what caveats I might be missing.
I am currently letting the flow sit to see if it finishes getting caught up. I have read about the concurrent run option as well as conditions on the trigger itself. I am curious why it seems to run on two records (or more) all at once without me or anyone else editing each one.
You might be able to ignore updates made by the service account (the account used in the flow's connections) by adding the following trigger condition expression under the trigger's Settings > Trigger Conditions:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe@contoso.onmicrosoft.com'))

Maximo: Use script to update work order when a related table is updated

I have an automation script in Maximo 7.6.1.1 that updates custom fields in the WORKORDER table.
I want to execute the automation script when the LatitudeY and LongitudeX fields (in the WOSERVICEADDRESS table) are edited by users.
What kind of launch point do I need to do this?
Edit:
For anyone who's learning automation scripting in Maximo, I strongly recommend Bruno Portaluri's Automation Scripts Quick Reference PDF. It doesn't have information about launch points, but it's still an incredibly valuable resource.
I wish I'd known about it when I was learning automation scripting...it would have made my life so much easier.
You can create an attribute action launch point on the latitudeY field and another on the longitudeX field. These will trigger whenever the corresponding field is modified, so a script will fire once when latitudeY is changed, again when longitudeX is changed, again if longitudeX is changed a second time, and so on. This all happens before the data is saved, so the user may still choose to cancel their changes even though the scripts have already fired.
You could also make an "on save" object launch point for WOSERVICEADDRESS (if that is what is actually being updated via the map); a sketch of that variant appears after the script below. An object launch point runs any time data in the object is saved, so you would have to add extra checks to see whether either of those fields has changed before doing your logic, but it would run only once and only if the user commits their changes.
Related:
Populates WORKORDER.WOSAX and WORKORDER.WOSAY (custom fields) from the values in WOSERVICEADDRESS.LONGITUDEX and WOSERVICEADDRESS.LATITUDEY.
woMbo = mbo.getOwner()                      # the owning WORKORDER mbo
longitudex = mbo.getString('longitudex')
latitudey = mbo.getString('latitudey')
if woMbo is not None:
    wosax = woMbo.getString('WOSAX')
    wosay = woMbo.getString('WOSAY')
    # Only write back when a value actually differs, so the work order
    # is not flagged as changed unnecessarily.
    if longitudex != wosax:
        woMbo.setValue('WOSAX', longitudex)
    if latitudey != wosay:
        woMbo.setValue('WOSAY', latitudey)
The launch points are Attribute Launch Points, not Object Launch Points.
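For the "on save" object launch point variant mentioned above, a minimal sketch could look like the following. This assumes an object launch point on WOSERVICEADDRESS with the Save event, that the owner of the service address MBO is the work order (as in the script above), and that isModified() is an acceptable way to check whether a field was changed before the save:
# Sketch only: object launch point on WOSERVICEADDRESS, Save event.
# Run the update only when one of the coordinate fields actually changed.
if mbo.isModified('longitudex') or mbo.isModified('latitudey'):
    woMbo = mbo.getOwner()
    if woMbo is not None:
        woMbo.setValue('WOSAX', mbo.getString('longitudex'))
        woMbo.setValue('WOSAY', mbo.getString('latitudey'))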

PL/SQL triggers won't run

I have an Oracle Forms 11g application running on a WebLogic server. The default form/login page has a few PL/SQL triggers that simply will not fire. The rest of the configuration seems successful.
Can anyone give me pointers as to where to start looking? Thanks in advance.
As their name suggests, triggers fire when something triggers them. For example,
a WHEN-BUTTON-PRESSED trigger is triggered by pushing a button
a POST-QUERY trigger fires after executing a query in a data block
a WHEN-NEW-FORM-INSTANCE trigger fires when the form is being run
and so on.
Therefore, make sure that triggers really are triggered. The fact that you have them doesn't mean that they'll run, just because.
In order to find that out, you have two options:
run a form in debug mode:
in one of the triggers (for example, in WHEN-NEW-FORM-INSTANCE) set a breakpoint by right-clicking its left margin (you'll see what to do next)
then run the form, but not with the green toolbar icon you normally use; use the one next to it, the one with something reddish on it, which runs the form in debug mode
as soon as execution reaches the breakpoint, it will stop, you'll be transferred to Forms Builder, and a debug console will open and let you step through the rest of the code
do that, and you'll know what's going on, i.e. whether those triggers are called and what they do
as for your suspicion: did you, by any chance, write exception handlers that use WHEN OTHERS THEN NULL or something similar? If so, get rid of them. Even if an exception is raised (such as NO_DATA_FOUND or TOO_MANY_ROWS, to mention two of the most popular and frequent ones), THEN NULL will silently mask it
another option is to put MESSAGE calls into the triggers, such as:
begin
  message('running WBP trigger: step 1');
  -- ... the rest of your code goes here
end;
Doing so, message after message will raise an "alert" on the screen (you'll have to click OK to confirm you saw what it said), so you'll quickly see which triggers fired and which did not. Then investigate further; the debugging described previously will help.
If none of that helps, you'll have to describe what's going on, but this time providing some more info. What you wrote isn't very descriptive. Anyway, best of luck.

How to force ActiveRecord to ALWAYS insert, update, delete and retrieve from the database?

We are using ActiveRecord stand-alone (not as part of a Rails application) to do unit testing with RSpec. We are testing that a database trigger inserts rows into an audit table.
The classes are:
Folder has many File
Folder has many FileAudit
The sequence of events is like this:
Create Folder
START TEST ONE
Create File
Do some stuff to File
Get Folder.file_audits
Check associated FileAudit records
Destroy File
Destroy FileAudits
END TEST ONE
START TEST TWO
Create File
Do some other stuff to File
Get Folder.file_audits
Check associated FileAudit records
Destroy File
Destroy FileAudits
END TEST TWO
Destroy Folder
The FileAudits from test one are getting destroyed, but not from test two. ActiveRecord seems to think that there is nothing new in that table to delete at the end of the second test.
I can do Folder.file_audits(true) to refresh the cache, but I would rather just disable any and all kinds of caching and have ActiveRecord just do what I tell it instead of it doing what it thinks is best.
I also need to set a flag on File to the same value and verify that the trigger did not create an audit record. When I set the flag to a different value, I can see the update statement in the log, but when I set it to the same value and save, there is no update in the log.
I am sure that the caching and etc. is fine for a web site, but we are not doing that. We need it to always get all records from the database and always update and delete no matter what. How can we do that?
We are using ActiveRecord 3.1.3.
Thanks
My initial guess would be that you are doing something with transactions in one of the tests. If you are, you are effectively eliminating the outer transaction that wraps the unit test itself, which leaves the unit test cleanup with nothing to roll back.
I don't know if this applies to you or not, but I've had problems in the past with calling model.save instead of model.save!. Sometimes I would get validation errors on save, but without the bang the validation failures don't raise an actual exception, so I never knew that the save wasn't successful.

How to handle multiple asynch downloads

I recently moved my background sync downloads to a view controller and need some advice on how best to handle them asynchronously. I have written all the code to show a progress view as the download occurs, but as you might have guessed it's not that simple. Here's how it works.
The user sees a table view with two entries, one for each database. They can press a button to download the database, and when the download starts that fires off the asynchronous URL connection, etc. This works to a certain extent, but it's not that simple.
Here's what I want it to do:
Download the main update URL (works OK).
Then download a secondary URL.
Then apply the first URL's content to the SQLite store (code written for that).
Then apply the second URL's content to the SQLite store (code written for that).
(All the while showing progress to the user.)
When the downloads were synchronous this was easy, as I just waited for each one to finish before firing off the next activity, but with the asynchronous method I'm struggling with how to get them to wait. Step 3 depends on step 1 finishing, step 4 depends on step 2 finishing, and overall success relies on all of them finishing. Step 4 also needs to wait for step 3 to finish, otherwise the database locks will cause a clash.
The second complication is that if the user presses the second button while the first download is running, then steps 3 and 4 will clash if they execute at the same time as the first row is accessing the database.
Has anyone done anything similar, and if so, what strategy did you use to manage the flow of events?
Also, I wanted to wrap this all up in a background task with an expiration handler so it would survive the user pressing the home button... but the delegate methods don't get called when I do that.
OK, here is what I did to fix the problem:
Created an NSOperationQueue.
Added the URL download operations as NSInvocationOperations.
Waited until the URL operations were complete (waitUntilAllOperationsAreFinished).
Then set the maximum concurrent operation count to 1, which forced the subsequent database operations to execute in series, one after the other, and thus prevented SQLite from locking itself out.
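The same pattern can be sketched outside of Cocoa: run the downloads concurrently, wait for all of them, then apply the results to the database strictly one at a time. Purely as an illustration of that flow (a rough Python sketch with placeholder URLs and a throwaway table, not the original Objective-C code):

import sqlite3
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder URLs standing in for the two database update feeds.
URLS = ["https://example.com/update-main", "https://example.com/update-secondary"]

def download(url):
    # Fetch one update payload; both downloads run concurrently.
    with urlopen(url) as resp:
        return resp.read()

def apply_to_store(conn, payload):
    # Apply one payload to the SQLite store; imports must never overlap,
    # or the two writers will fight over the database lock.
    conn.execute("INSERT INTO updates (payload) VALUES (?)", (payload,))
    conn.commit()

# Step 1: download concurrently and wait for everything to finish
# (the analogue of waitUntilAllOperationsAreFinished).
with ThreadPoolExecutor(max_workers=2) as pool:
    payloads = list(pool.map(download, URLS))

# Step 2: apply the results strictly one after the other
# (the analogue of maxConcurrentOperationCount = 1).
conn = sqlite3.connect("store.db")
conn.execute("CREATE TABLE IF NOT EXISTS updates (payload BLOB)")
for payload in payloads:
    apply_to_store(conn, payload)
conn.close()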
