Oracle CRM On-Demand - Use Task Status to update Service Request Status

We're using CRM On-Demand for our Service Group and I'm running into an application limitation and am wondering if anyone has a workaround or just some general ideas on how to accomplish our goal.
In the application, our major focus is the Service Request, and we drive users to create Tasks for all Activities related to working towards closure. For example, a customer calls in and we need a technical resource to make a return call to diagnose the issue in detail, so a Task is assigned to that resource. Once that Task has been marked as completed, I'd like the Service Request's Status to be updated. I tried creating a workflow using JoinFieldValue(), which didn't work. I then tried a more basic approach and simply had a field on the Service Request populated with the Status of the Task, but that did not work either.
Upon further investigation in the Help File, there is a relationship from the Activity object to the Service Request object, but not one in the other direction.
So, has anyone else run into this limitation and found some other method to have a Status change on the Task update the Status of a Service Request?
(Also, I'd like to avoid writing a custom web service for this purpose, which is why I'm trying to use the tools in the app.)
Thanks in advance for any ideas!

If I understood correctly, your issue relates to cross-object workflows.
OCOD doesn't support this type of workflow, so you need a workaround.
To cover a cross-object workflow you have several options:
Web services, as you said. You could also write JavaScript code that calls the web services and is hosted directly in OCOD (in R19 you can host it in a Client Side Extension). That could be a good solution; see the sketch below.
Another option is to use a Report with custom look-up functionality that uses the "Callback" function.
I would prefer the first solution.
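For illustration, here is a minimal sketch of the web-service route in C#. The types below stand in for classes that would normally be generated from the On Demand Service Request WSDL; every name in it is illustrative, not the literal On Demand API:

// Stand-ins for WSDL-generated proxy classes; all names are illustrative.
public class ServiceRequestData
{
    public string Id;
    public string Status;
}

public class ServiceRequestUpdate_Input
{
    public ServiceRequestData[] ListOfServiceRequest;
}

public class ServiceRequestClient
{
    public string Url;
    public void ServiceRequestUpdate(ServiceRequestUpdate_Input input)
    {
        // A real generated proxy would serialize this into a SOAP call here.
    }
}

public static class TaskCompletionHandler
{
    // When a Task is marked Completed, push a new Status to its parent SR.
    public static void UpdateParentServiceRequest(string sessionUrl, string serviceRequestId)
    {
        var proxy = new ServiceRequestClient { Url = sessionUrl };
        proxy.ServiceRequestUpdate(new ServiceRequestUpdate_Input
        {
            ListOfServiceRequest = new[]
            {
                new ServiceRequestData { Id = serviceRequestId, Status = "In Progress" }
            }
        });
    }
}

The same call could be issued from JavaScript hosted in a Client Side Extension; only the transport changes, not the idea of updating the parent Service Request when the Task closes.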

Related

Change of Status in Dynamics CRM 2016 8.1

I have written code that is supposed to help us automate some specific cases. It creates an addresstag for the customer and changes the status of the case to "Address Tag Sent".
All this works as intended, but for some reason the status of the case is changed back to "New".
As you can see in the audit history, there is an event called "Activate" that changes the status.
I haven't found what this event is or why it occurs. I have gone through all the workflows we have, all processes, and all code (as well as I can), and spent a good amount of time trying to Google it, but I still come up empty-handed.
Is there someone who might know what this event is? Or does anyone have an idea how to access/modify it?
'Activate' will essentially re-activate any record and put the statuscode back to the default statuscode/status reason. I am guessing your default is set to 'New'.
I would investigate in these directions:
Since "Changed By" shows "CRM migration account", this may be an ETL job (such as SSIS or Scribe) syncing data changes from an outside integration.
Maybe the same service account is used by a plugin that resets the StateCode and StatusCode as part of some business logic.
Are there Business Process Flow stages on your form? I see a "Service stage" attribute in the audit just before that event; there may be logic coupled with it.
Verify the dependencies of the statecode attribute in customizations to see whether any SDK steps or workflows reference it. Check your code repositories, and ask any long-timers on your project about business logic implemented in the past.
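For reference, the status change described in the question is normally written against the SDK like this. A minimal sketch, assuming the standard incident entity; the statuscode value for "Address Tag Sent" is a placeholder for whatever your customization actually uses:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class CaseStatusHelper
{
    // Sets a case to the Active state with a custom status reason.
    public static void MarkAddressTagSent(IOrganizationService service, Guid caseId)
    {
        var request = new SetStateRequest
        {
            EntityMoniker = new EntityReference("incident", caseId),
            State = new OptionSetValue(0),         // statecode: Active
            Status = new OptionSetValue(100000001) // statuscode: "Address Tag Sent" (placeholder)
        };
        service.Execute(request);
    }
}

If the audit shows the statuscode reverting right after a call like this, whatever raised the "Activate" event ran after your code, which is exactly what the checklist above is meant to uncover.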

Handling Action errors/output inside out-of-the-box Workflows

I am working on designing a lengthy approval system in CRM using a combination of OOB workflows (designed with the CRM UI Workflow Designer) and custom Actions (written in .NET code). The idea is to keep the entire branching/simpler logic in OOB workflows and call custom Actions wherever necessary. However, I have a few questions about this approach:
How can I handle run-time errors generated in the action code?
For example, one of my Actions contains code that pushes data to an external system via a web service. In case this web service call fails, I need to perform some steps in the parent workflow.
How can I handle 'if conditions' that can't be handled by a 'Check Condition' step? For example, suppose that before performing a certain workflow step I need to check some data that can't be queried within CRM. I could create an Action that returns true/false based on the custom logic, which could then be checked in the parent workflow.
An alternative approach would be to use plugins, but I am inclined towards using OOB functionality as much as possible. Any input would be helpful.
First of all, let's clear up the semantics, because I'm not sure you realize what you are referring to. There are Actions (you can refer to them as custom Actions, but then you should refer to every workflow you create as custom too; I gathered from your post that you describe your workflows as OOB, which is also semantically wrong, because every workflow you create is a custom workflow, even if it only uses OOB steps, but that's a different story), and there are Custom Workflow Activities. I'm assuming that you want Custom Workflow Activities, because they are better suited to what you are trying to achieve here. Also, you tagged your question as CRM 2011 and CRM 2013; I'm not sure what you meant by that, because Actions were not available in CRM 2011.
So, basically, Custom Workflow Activities can have Input and Output parameters. Output parameters are the answer to both of your questions, because you can use them to return the error message from your custom processing, or use them in conditional statements later in your workflow. Output parameters can be defined like this:
[Output("Error message")]
public OutArgument<string> ErrorMessage { get; set; }
You can find more examples here:
https://technet.microsoft.com/en-us/library/gg327842.aspx
You can of course set these properties simply by calling:
ErrorMessage.Set(executionContext, messageText);
So now, when you define your workflow, wherever you need something that is not configurable with the OOB blocks, you can put in your custom block. After it runs, simply check its output for an error (this is just an example; you can extend it with additional output parameters to make it more generic). If the output is empty, do one thing; if not, do something else, for example send an email with the error message. It all depends on what you are trying to achieve. A sketch of such an activity follows.
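Putting it together, a minimal sketch; the class name and the external call are illustrative:

using System;
using System.Activities;
using Microsoft.Xrm.Sdk.Workflow;

public class PushToExternalSystem : CodeActivity
{
    [Output("Error message")]
    public OutArgument<string> ErrorMessage { get; set; }

    protected override void Execute(CodeActivityContext executionContext)
    {
        try
        {
            // Call the external web service here (omitted in this sketch).
            ErrorMessage.Set(executionContext, string.Empty);
        }
        catch (Exception ex)
        {
            // Hand the failure back to the parent workflow instead of throwing,
            // so a Check Condition step can branch on the output.
            ErrorMessage.Set(executionContext, ex.Message);
        }
    }
}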
Actions serve different purposes: they are useful for creating logic that can easily be called from a plugin or from JavaScript (Web API), and they also let you register a plugin on them, with everything running within one transaction. Maybe one will be useful somewhere in your workflow, but as far as I remember, in CRM 2013 Actions could not be called from a workflow...
UPDATE:
OK, so if we are dealing with CRM 2016, we can call an Action from a workflow. What is best in this situation really depends on the scenario and what we are trying to achieve, but to make the decision easier, let me highlight the main differences:
1) Activities are simply blocks of code that can be put inside your workflow. Actions by themselves are not code; they are custom Messages that you can call. Of course, you can register a plugin on such a custom Message and run any custom logic you want there, but that is another step to take.
2) Actions can run in a transaction; Activities cannot (but you can run Activities inside Actions, in which case they do run in the transaction).
3) Actions can be called directly from JavaScript, plugins, and workflows. That's a great thing, but if you create, say, 10 custom Actions that you use ONLY inside your one workflow, they will all be visible when you register plugins (and any JS developer will be able to call them from JavaScript).
So basically, Actions are a big, fat feature that can serve many purposes (including running Activities on their own!), while Activities are much simpler; in your case, though, they will also do the job. So you should ask yourself two questions:
Do I need my logic to run inside a transaction?
And
Do I need to call this logic anywhere other than my workflow?
If the answer to either is "Yes", go for Actions; if not, go for Activities, because otherwise you will be overcomplicating things without any good reason. The sketch below shows how an Action is invoked as a custom Message.
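For completeness, the Action name "new_CheckExternalData" and its parameters here are made up for the example:

using System;
using Microsoft.Xrm.Sdk;

public static class ActionCaller
{
    public static bool CheckExternalData(IOrganizationService service, Guid recordId)
    {
        // Custom Actions are executed as custom Messages via OrganizationRequest.
        var request = new OrganizationRequest("new_CheckExternalData");
        request["Target"] = new EntityReference("incident", recordId);

        OrganizationResponse response = service.Execute(request);
        return (bool)response["IsValid"]; // illustrative output parameter
    }
}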

How to organize my code?

I'm still in a learning phase with PHP and Laravel 5, and since I upgraded to L5, I struggle with where my code belongs. There are so many files and folders that seem to have the same purpose, or at least are very similar. There are Commands, Controllers, Events, Services, Requests, etc. I'll give an example of my workflow and where I would place the code, and I hope you guys can comment on it and correct/help me.
Situation
I want to register a new user in my application and send a welcome e-mail once he has registered successfully.
Workflow
Controller (UserController): Returns the requested view (register).
Request (RegisterRequest): The "RegisterRequest" validates the entered data.
Controller (UserController): Passes the validated data to the "UserRegistrar" (service) in 'App/Services'.
Service (UserRegistrar): Creates a new user and saves it to the database.
Controller (UserController): Fires the "UserWasRegistered" Event.
Event (UserWasRegistered): This Event calls the "SendWelcomeEmail" Command.
Command (SendWelcomeEmail): This Command sends/queues the welcome e-mail.
Controller (UserController): Redirects the user to a view with the information that he has been registered successfully and that a message has been sent to him.
Logic
Okay, let's discuss some logic:
Controller:
Doesn't hold much code.
Mainly there to return views (with requested data).
Handles workflow and "connects" modules (Services, Requests, Events).
Request: Validates the data for a specified request
Service: A service "does" something. For example, it makes requests to the database.
Event: An Event is a central place to call one or more tasks when it is fired (SendConfirmationMail, SendWelcomeMail).
Command: Mainly there to handle the logic for ONE specific task, for example sending a confirmation mail. Another Command holds the logic for sending the welcome mail. Both Commands are called by the Event described above.
Repositories: What is that?!
So this is what I understand. Please help me and feed me with information.
Thanks,
LuMa
Your question is a little vague and will likely attract downvotes as being "too broad". That said, here's my take on this...
The biggest issue I see is that your application structure is so different from the recommended L5 structure, or even the standard MVC structure, that it's no wonder you're getting confused.
Let's talk about your "Logic" section:
controller - you're on the right track here. The controller is the glue between your models and your views. It can do some processing, but most should be offloaded to classes that handle specific tasks.
request - what is this? L5 includes a Request class with methods for examining the HTTP request received from the client. Are you talking about subclassing that? Why? If your idea of a "request" class is primarily concerned with examining input, you can do that either in your model (i.e. validating data before sticking it in the database) or in your controller (see the L5 docs on controller validation).
service - again, what is this? You talk about "doing requests to the database", but L5 provides a class for that (DB). At a higher level, database access should primarily be done through models, which abstract away most of the low-level database access. As for other services, what I usually do is create libraries to perform specific processing. For example, my application accesses a particular third-party project management application via an API. I have a library for that, with methods such as getProject or createProject.
event - An event is a way of ensuring that some code is called when the event happens, without a whole lot of messing about. It sounds like you have the right idea about events.
command - again, it sounds like you have the basic idea about commands.
repositories - these are a way of abstracting the connection between a resource (primarily the database, but other resources too) and the code that uses it. This makes it easier to switch the underlying resource if you (for example) decide to change database servers in the future. They are optional.
You also haven't mentioned anything about models. L5 provides an excellent way to deal with your data in understandable chunks via Eloquent models - this will make your life much easier.
My suggestion is this: start small. Build a simple MVC application with L5 - A model (to save some data), a view (to display the data), and a controller (to put the model & view together by handling the client request). Once you have that, start extending it.
There are tutorials out there that will give you this basic structure for Laravel - most are for Laravel 4, but see if you can follow the basic ideas and build something similar for Laravel 5.

How to get an entity's edit URL from within a plug-in in MS Dynamics CRM 4.0

I would like to have a workflow create a task, then email the assigned user that they have a new task, with a link to the newly created task in the body of the email. I have client-side code that correctly creates the edit URL, using the entity's GUID, and stores it in a custom attribute. However, when the task is created from within a workflow, the client script isn't run.
So I think a plug-in should work, but I can't figure out how to determine the URL of the CRM installation. I'm authoring this in a test environment and definitely don't want to have to change things when I move to production. I'm sure I could use a config file, but it seems like the plug-in should be able to figure this out at runtime.
Does anyone have any ideas on how to access the URL of the CRM service from within a plug-in? Any other ideas?
There is no simple way to do this. However, there is a way.
MSCRM_Config is the deployment database that holds physical deployment properties, such as the URL from which users access the CRM deployment. The URL you probably want is the one stored under "ADWebApplicationRootDomain" in the MSCRM_CONFIG.dbo.DeploymentProperties table. You may need extra permissions to access this database.
Note that this doesn't work in an Internet Facing Deployment.
Another way could be to query the discovery service to retrieve the same information (in case you are on the Online edition of MSCRM 4).
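A minimal sketch of reading that property from code (the connection string is a placeholder, and I am assuming string values live in the NVarCharColumn column of DeploymentProperties, as they do in the on-premise deployments I have seen):

using System.Data.SqlClient;

public static class DeploymentUrlReader
{
    // Reads the web application root URL from the MSCRM_CONFIG database.
    public static string GetWebApplicationRootDomain()
    {
        const string connectionString =
            "Data Source=yoursqlserver;Initial Catalog=MSCRM_CONFIG;Integrated Security=SSPI";
        const string query =
            "SELECT NVarCharColumn FROM dbo.DeploymentProperties " +
            "WHERE ColumnName = 'ADWebApplicationRootDomain'";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            return command.ExecuteScalar() as string;
        }
    }
}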
What do you mean by "change things"?
If you create a custom workflow assembly, you can give it a server URL input parameter. Once you register it with CRM, you can simply type in the server URL when you configure the workflow. You'll have to update the URL for any workflows that use the custom workflow assembly once you move to production, but you'll only have to do that once.
My apologies if this is what you meant you wanted to avoid.
Edit: It sounds like you may be able to use the CustomConfiguration attribute when you register the plugin. Here's some more info:
http://blogs.msdn.com/crm/archive/2008/10/24/storing-configuration-data-for-microsoft-dynamics-crm-plug-ins.aspx
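For example, a sketch against the CRM 4.0 plug-in model; the class name and the idea of passing the URL as the configuration string are illustrative:

using System;
using Microsoft.Crm.Sdk;

public class TaskUrlPlugin : IPlugin
{
    private readonly string _serverUrl;

    // CRM passes the configuration string entered at plug-in registration
    // time into this constructor, so each environment carries its own URL
    // without any code changes.
    public TaskUrlPlugin(string config, string secureConfig)
    {
        _serverUrl = config; // e.g. "https://crm.example.com" (placeholder)
    }

    public void Execute(IPluginExecutionContext context)
    {
        // Build the task edit URL from _serverUrl and the task GUID here,
        // then write it into the custom attribute.
    }
}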
using Microsoft.Win32;

// Read the CRM server URL from the registry (works on the CRM server itself).
string url = ((string)Registry.LocalMachine
    .OpenSubKey(@"Software\Microsoft\MSCRM")
    .GetValue("ServerUrl")).Replace("MSCRMServices", "");

How do you measure the progress of a web service call?

I have an ASP.NET web service which does some heavy lifting, like, say, some file operations, or generating Excel sheets from a bunch of Crystal Reports. I don't want to be blocked by calling this web service, so I want to make the web service call asynchronous. Also, I want to call this web service from a web page, and I want some mechanism that will allow me to keep polling the server so that I can show some indicator of progress on the screen, like, say, the number of files that have been processed. Please note that I do not want a notification on completion of the web method call; rather, I want a live progress status. How do I go about it?
Write a separate method on the server that you can query by passing the ID of the job that has been scheduled, and which returns an approximate value between 0 and 100 (or 0.0 and 1.0, or whatever) indicating how far along it is.
E.g. in REST-style, you could make a GET request to http://yourserver.com/app/jobstatus/4133/ which would return a simple '52' as text/plain. Then you just have to query that every (second? two seconds? ten seconds?) to see how far along it is.
How you actually accomplish the monitoring on the backend hugely depends on what your process is and how it works.
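On the client side, the polling itself can be as simple as this sketch (the URL follows the /app/jobstatus/4133/ example above; the two-second interval is arbitrary):

using System;
using System.Net;
using System.Threading;

public static class JobStatusPoller
{
    public static void WaitForCompletion()
    {
        using (var client = new WebClient())
        {
            int percent = 0;
            while (percent < 100)
            {
                // The endpoint returns a bare number such as '52' as text/plain.
                percent = int.Parse(client.DownloadString(
                    "http://yourserver.com/app/jobstatus/4133/"));
                Console.WriteLine("Job is {0}% done", percent);
                Thread.Sleep(2000); // poll every two seconds
            }
        }
    }
}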
I think XML web services are slow, so creating multiple methods and polling for progress will be extremely slow and will generate a huge load on the server. I wouldn't do it in a production environment. I see the same (but smaller) problems with database polling.
Try SOAP extensions instead; they implement an event-driven model. See Adding a Progress Bar to Your Web Service Client Application on MSDN.
You can also use SoapExtensions to notify your client of the download/processing progress. The server can then send events to the client, and nothing in the client has to change if you don't use them.
This allows for something like the following in your client:
//...
private localhost.MyWebServiceService _myWebService = new localhost.MyWebServiceService();

// Subscribe to the progress events surfaced by the SOAP extension,
// then make the long-running call as usual.
_myWebService.processDelegate += ProgressUpdate;
_myWebService.CallHeavyMethod();
//...

private void ProgressUpdate(object sender, ProgressEventArgs e)
{
    double progress = ((double)e.ProcessedSize / (double)e.TotalSize) * 100.0;
    // Show progress...
}
Have the initial "start report generation" web service call create a task in some task pool, and return to the caller the ID of that task.
Then, provide another method that returns the "percent done" for a given taskId.
Provide a third method that returns the actual result for a completed task.
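A sketch of that trio as an ASMX-style service; the in-memory task pool and all names are made up for illustration (see the web-farm caveat in a later answer):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Web.Services;

public class ReportJob
{
    public readonly Guid Id = Guid.NewGuid();
    public volatile int PercentDone;
    public byte[] Result;

    public void RunInBackground()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ...generate the report here, updating PercentDone as it goes...
            PercentDone = 100;
        });
    }
}

public class ReportService : WebService
{
    // One shared pool per process; a web farm would need a shared store instead.
    private static readonly ConcurrentDictionary<Guid, ReportJob> Jobs =
        new ConcurrentDictionary<Guid, ReportJob>();

    [WebMethod]
    public Guid StartReportGeneration()
    {
        var job = new ReportJob();
        Jobs[job.Id] = job;
        job.RunInBackground();
        return job.Id;
    }

    [WebMethod]
    public int GetPercentDone(Guid taskId)
    {
        ReportJob job;
        return Jobs.TryGetValue(taskId, out job) ? job.PercentDone : 0;
    }

    [WebMethod]
    public byte[] GetResult(Guid taskId)
    {
        ReportJob job;
        return Jobs.TryGetValue(taskId, out job) && job.PercentDone == 100
            ? job.Result
            : null;
    }
}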
The easiest way would be to have the web service update a field in a database with the progress of the call, and then create a web service method that queries that field and returns the value.
Make the web service return some sort of task ID or session ID. Then make another web method that can be queried with that ID and returns the information needed (% completion, list of files, whatever). Poll this method at some interval from the client.
Use a database to store the progress information; if you keep it in the web service's memory, it will not scale well in a web farm environment, as the task may run on a different server than the one you are polling.
EDIT: I just saw another similar answer and a comment on it. The commenter is right: you can use an in-memory table to avoid disk operations while still using a separate DB server.
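A sketch of the database-backed variant; the JobProgress table and its columns are assumptions for the example:

using System;
using System.Data.SqlClient;

public static class JobProgressStore
{
    // The worker calls this as it progresses; the status web method reads
    // the same row back, so polling works no matter which server runs the task.
    public static void ReportProgress(string connectionString, Guid jobId, int percentDone)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE JobProgress SET PercentDone = @percent WHERE JobId = @id", connection))
        {
            command.Parameters.AddWithValue("@percent", percentDone);
            command.Parameters.AddWithValue("@id", jobId);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}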
