We're using MSCRM Dynamics, and we're trying to get all the children of a particular user. (A user has a manager, the manager has 'children'.) The following works, but throws an exception if the user has no children. This seems logical at first, maybe, but why not just return an empty set? And of all things, it throws a SoapException with a cryptic message of "Invalid Argument" (which is wrong) and the .Detail.InnerText says "0x80040203 The value passed for ConditionOperator.In is empty Platform". If you look at the corresponding Response class, it has a collection -- why not just leave it empty?
// Create the request object.
RetrieveAllChildUsersSystemUserRequest retrieve =
new RetrieveAllChildUsersSystemUserRequest();
// Create the column set object that indicates the fields to be retrieved.
ColumnSet cols = new ColumnSet();
cols.EntityName = "systemuserid";
// Set the column set.
retrieve.ColumnSet = cols;
// Set the ID of the parent user.
retrieve.EntityId = context.UserId;
RetrieveAllChildUsersSystemUserResponse retrieved =
new RetrieveAllChildUsersSystemUserResponse();
// Execute the request.
// Catches if user does not have children
// (Check to see if user is manager)
try
{
retrieved =
(RetrieveAllChildUsersSystemUserResponse)crmService.Execute(retrieve);
}
catch (System.Web.Services.Protocols.SoapException e)
{
throw new Exception(string.Format("{0}", e.Detail.InnerText));
}
I agree, it should probably just return an empty result. My guess is that, under the hood, some step of executing the request or preparing the response is translated into a QueryExpression, and QueryExpressions blow up if you use a ConditionExpression with ConditionOperator.In and pass it an empty list. So it may be that the platform first gets the list of child systemuser GUIDs, which in some cases is empty, and then tries to retrieve the attributes of the system users in that list using another QueryExpression; that second query is what throws the exception.
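For illustration, the failing condition described there would look roughly like this (a sketch against the CRM 4.0 SDK types; the attribute name is just an example):
// Hypothetical illustration of the condition the platform may be building internally.
// An In condition executed with an empty Values array fails with
// 0x80040203 "The value passed for ConditionOperator.In is empty".
ConditionExpression condition = new ConditionExpression();
condition.AttributeName = "systemuserid";
condition.Operator = ConditionOperator.In;
condition.Values = new object[0]; // empty list -> exception at Execute time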
You can probably design a QueryExpression or FetchXML of your own that will net you the same results without the side effect of it throwing an exception when the list is empty, or just catch the exception and check for that particular error code and swallow it.
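If you go the catch-and-swallow route, a minimal sketch might look like this (assuming the 0x80040203 code only ever means "no child users" for this particular request; anything else is rethrown):
RetrieveAllChildUsersSystemUserResponse retrieved = null;
try
{
    retrieved =
        (RetrieveAllChildUsersSystemUserResponse)crmService.Execute(retrieve);
}
catch (System.Web.Services.Protocols.SoapException e)
{
    // Treat "The value passed for ConditionOperator.In is empty" as "no children".
    if (e.Detail.InnerText.Contains("0x80040203"))
    {
        retrieved = null; // or whatever "empty" means to your caller
    }
    else
    {
        throw; // preserve the original fault for anything unexpected
    }
}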
I have code similar to the one below. Every time a DB lock appears, I want Dynatrace to raise an alert that creates a problem, so that I can see it on the dashboard and possibly get an email notification as well. The DB lock would appear if the update count is greater than 1.
private int removeDBLock(DataSource dataSource) {
int updateCount = 0;
final Timestamp lastAllowedDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
final String query = format(RELEASE_DB_CHANGELOCK, lastAllowedDBLockTime.toString());
try (Statement stmt = dataSource.getConnection().createStatement()) {
updateCount = stmt.executeUpdate(query);
if(updateCount>0){
log.error("Stale DB Lock found. Locks Removed Count is {} .",updateCount);
}
} catch (SQLException e) {
log.error("Error while trying to find and remove Db Change Lock. ",e);
}
return updateCount;
}
I tried using the events API mentioned here to trigger an event on my host, and was successful in raising a problem alert on my dashboard:
https://www.dynatrace.com/support/help/dynatrace-api/environment-api/events/post-event/?request-parameters%3C-%3Ejson-model=json-model
But this would mean injecting an API call into my code just for monitoring, and it may lead to more external dependencies and hence more chance of failure.
I also tried creating a custom service detection by adding the class containing this method, and the method itself, to a custom service. But I do not know how I can link this to an alert or an event that creates a problem on the dashboard.
Are there any best practices or solutions for how I can do this in Dynatrace? Any leads would be helpful.
I would take a look at Custom Services for Java, which cause invocations of the method to be monitored in more detail.
Maybe you can extract a method which actually throws the exception, with the outer method handling it. Then it should be possible to alert on the exception, for example as sketched below.
There are also some more ways to configure the service via settings, e.g. raising an error based on a return value directly.
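A minimal sketch of that extraction (reusing the question's logger, format and RELEASE_DB_CHANGELOCK, and introducing a hypothetical StaleDbLockException so that the exception, rather than the log line, becomes the signal the custom service can alert on):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;
import javax.sql.DataSource;

// Hypothetical checked exception carrying the number of removed locks.
class StaleDbLockException extends Exception {
    private final int removedCount;
    StaleDbLockException(int removedCount) {
        super("Stale DB change lock removed, count=" + removedCount);
        this.removedCount = removedCount;
    }
    int getRemovedCount() { return removedCount; }
}

// Outer method, in the same class as before: keeps the original logging behaviour.
private int removeDBLock(DataSource dataSource) {
    try {
        return releaseStaleLocks(dataSource);
    } catch (StaleDbLockException e) {
        log.error("Stale DB Lock found. Locks Removed Count is {} .", e.getRemovedCount());
        return e.getRemovedCount();
    } catch (SQLException e) {
        log.error("Error while trying to find and remove Db Change Lock. ", e);
        return 0;
    }
}

// Extracted method (make this the custom service entry point): the exception
// escapes it, so Dynatrace can alert on it, while the outer method still handles it.
private int releaseStaleLocks(DataSource dataSource) throws SQLException, StaleDbLockException {
    final Timestamp lastAllowedDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format(RELEASE_DB_CHANGELOCK, lastAllowedDBLockTime.toString());
    try (Connection connection = dataSource.getConnection();
         Statement stmt = connection.createStatement()) {
        int updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            throw new StaleDbLockException(updateCount);
        }
        return updateCount;
    }
}
Alternatively, the "raise an error based on a return value" setting mentioned above could presumably key off the non-zero return value of removeDBLock itself, without any refactoring.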
See also documentation:
https://www.dynatrace.com/support/help/how-to-use-dynatrace/transactions-and-services/configuration/define-custom-services/
https://www.dynatrace.com/support/help/technology-support/application-software/java/configuration-and-analysis/define-custom-java-services/
Similar to Spring Reactor: How to throw an exception when publisher emit a value?
I have a finder method findSomePojo in my Java DAO which returns a SomePojo. The finder calls the Amazon DynamoDB APIs, and the software.amazon.awssdk.services.dynamodb.model.GetItemResponse holds the output of the call.
So I am trying this hasElement() check in my service layer's create_SomePojo method. (Not sure if I am using it correctly; I was trying and debugging.)
Basically:
I want to check whether the element already exists; in that case it is illegal to save, so I would not call the DAO's save and need to throw an exception instead.
Assuming that there is already a record of SomePojo in the DB, I invoke the service's create_SomePojo. But I see in the logs that the filter is not working, and I get an NPE when Reactor invokes createModel_SomePojo, which makes me believe that it somehow throws the NPE even after the filter check.
// Service SomePojoService: it has create_SomePojo, find_SomePojo, etc.
Mono<Void> create_SomePojo(reqPojo){
// Before calling the DAO's save, I call the service's find (which basically calls the DAO's find, shown below after this method)
Mono<Boolean> monoPresent = find_SomePojo(accountId, contentIdExtn)
.filter(i -> i.getId() != null)
.hasElement();
System.out.println("monoPresent="+monoPresent.toString());
if(monoPresent.toString().equals("MonoHasElement")){
//*************it comes here i see that***********//
System.out.println("hrereee monoPresent="+monoPresent);
// Mono<Error> monoCheck=
return monoPresent.handle((next, sink) -> sink.error(new SomeException(ITEM_ALREADY_EXISTS))).then();
} else {
return SomePojoRepo.save(reqPojo).then();
}
}
Mono<SomePojo> find_SomePojo(String id){
return SomePojoRepo.find(id);
}
==============================================================
// DAO: SomePojoRepo.java: it has save, find, delete
Mono<SomePojo> find( String id) {
Mono<SomePojo> fallback = Mono.empty();
Mono<GetItemResponse> monoFilteredResponse = monoFuture
.filter(getItemResponse -> getItemResponse != null && getItemResponse.item().size() > 0);
Mono<SomePojo> result = monoFilteredResponse
.map(getItemResponse -> createModel_SomePojo(getItemResponse.item()));
Mono<SomePojo> deferedResult = Mono.defer(() -> result.switchIfEmpty(fallback));
return deferedResult;
}
I see there is a hasElement() method on Mono, but I am not sure how to use it correctly.
I can get an exception if I call the DAO's save directly in my service create_SomePojo(reqPojo) without doing all this finder check, because the primary key constraint will throw, and I can rethrow and then catch that in the service. But what if I want to do the check in the service and throw an exception with error codes? The idea is not to pass a response error object down to the DAO layer.
Try the Hooks.onOperatorDebug() hook to get a better debugging experience.
Correct way to use hasElement (assuming that find_SomePojo never returns null)
Mono<Boolean> monoPresent = find_SomePojo(accountId, contentIdExtn)
.filter(i -> i.getId() != null)
.hasElement();
return monoPresent.flatMap(isPresent -> {
    if (isPresent) {
        return Mono.error(new SomeException(ITEM_ALREADY_EXISTS));
    } else {
        return SomePojoRepo.save(reqPojo);
    }
}).then();
Sidenote
There is a common misconception about what Mono actually is. It does not hold any data; it's just a fragment of a pipeline which transmits signals and the data flowing through it. Therefore, the line System.out.println("monoPresent="+monoPresent.toString()); makes no sense, because it just prints the hasElement() decorator around the existing pipeline. The internal name of this decorator is MonoHasElement, and no matter what is contained in it (true/false), MonoHasElement would be printed anyway.
Correct ways to print signal (and data transmitted along with them) are:
Mono.log(), Mono.doOnEach(...)/doOnNext(System.out::println), or System.out.println("monoPresent=" + monoPresent.block());. Beware of the third one: it blocks the whole thread until the data is emitted, so use it only if you know what you are doing.
Example with Monos printing to play with:
Mono<String> abc = Mono.just("abc").delayElement(Duration.ofSeconds(99999999));
System.out.println(abc); //this will print MonoDelayElement instantly
System.out.println(abc.block()); //this will print 'abc', if you are patient enough ;^)
abc.subscribe(System.out::println); //this will also print 'abc' after 99999999 seconds, but without blocking current thread
I've configured a FileNet workflow subscription on Add, Update and Delete events. The workflow calls a Java component to send a notification message (to a third party).
We would like to see "before" and "after" property values in the notification message for "Update" events.
The "Event" object that triggers the subscription has a "Modified Properties" member, so I was hoping I could just create a corresponding "ModifiedProperties" string array in the workflow, and have the subscription map "Update.ModifiedProperties = ModifiedProperties". Unfortunately, the Event's "ModifiedProperties" only gives the NEW value, not the "before" value.
So I don't see any way to get "before/after" values directly from the subscription...
It looks like the "UpdateEvent" object also has an "OriginalObject" member ... and I might be able to use the Java API to get the "before" value from the OriginalObject.
Q: Does this sound like a plausible method for getting the before/after document property values?
Q: Any ideas how to pass the "OriginalObject" object from the subscription to the workflow, so the Java component can use it?
The target platform is P8 5.2.1; I'm developing on P8 5.5.
You are right, the only way to get the original values is through the OriginalObject object. And the quickest way to get data to a workflow is by using a subscribable object.
Therefore, a solution to your problem is to define a custom object containing the properties describing the new and the old property values. You create this custom object in a custom event handler triggered on an update event from the document. Here you can populate the properties of the custom object using the original object:
Document document = (Document) event.get_OriginalObject();
Iterator<?> iterator = event.get_ModifiedProperties().iterator();
while (iterator.hasNext()) {
String modifiedProperty = (String) iterator.next();
// TODO: Fetch the values from the original object
// and set them on the custom object. The details depend
// on the data structure you choose.
}
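As a rough sketch of that population step (FileNet CE Java API, imports omitted; the class name "PropertyChangeLog", the property "OldValues" and the flat name=value encoding are hypothetical placeholders for whatever structure you choose):
Document document = (Document) event.get_OriginalObject();
ObjectStore objectStore = document.getObjectStore(); // or however you reach the target object store
CustomObject changeLog = Factory.CustomObject.createInstance(objectStore, "PropertyChangeLog");

StringBuilder oldValues = new StringBuilder();
Iterator<?> modified = event.get_ModifiedProperties().iterator();
while (modified.hasNext()) {
    String name = (String) modified.next();
    Object oldValue = document.getProperties().getObjectValue(name);
    oldValues.append(name).append('=')
             .append(oldValue == null ? "" : oldValue.toString()).append(';');
}

changeLog.getProperties().putValue("OldValues", oldValues.toString());
changeLog.save(RefreshMode.NO_REFRESH);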
Next you create a workflow subscription triggered by the creation of the custom object. You can map the properties of your custom object to the data fields of your workflow. In the workflow that is started you can define an attachment and specify that the custom object is the initiating attachment. Using the CE_Operation queue methods you can then delete the custom object when your processing is finished.
if (objEvent instanceof UpdateEvent) {
    try {
        String strModifiedProperties = "";
        UpdateEvent updateEvent = (UpdateEvent) objEvent;
        StringList propertyNames = updateEvent.get_ModifiedProperties();
        Iterator iterModifiedProps = propertyNames.iterator();
        while (iterModifiedProps.hasNext()) {
            String modifiedProperty = (String) iterModifiedProps.next();
            strModifiedProperties = strModifiedProperties + modifiedProperty + ",";
        }
        strModifiedProperties = strModifiedProperties.substring(0, strModifiedProperties.lastIndexOf(","));
    } catch (Exception e) {
        System.out.println("onEvent : Exception while executing UpdateEvent: " + e.getMessage());
    }
}
I'm prototyping some simple audit logging functionality. I have a mid sized entity model (~50 entities) and I'd like to implement audit logging on about 5 or 6. Ultimately I'd like to get this working on Inserts & Deletes as well, but for now I'm just focusing on the updates.
The problem is, when I do session.Save (or SaveOrUpdate) to my auditLog table from within the EventListener, the original object is persisted (updated) correctly, but my AuditLog object never gets inserted.
I think it's a problem with both the Pre and Post event listeners being called too late in the NHibernate save life cycle for the session to still be usable.
//in my ISessionFactory Build method
nHibernateConfiguration.EventListeners.PreUpdateEventListeners =
new IPreUpdateEventListener[]{new AuditLogListener()};
//in my AuditLogListener
public class AuditLogListener : IPreUpdateEventListener
{
public bool OnPreUpdate(PreUpdateEvent @event)
{
string message = //code to look at @event.Entity & build message - this works
if (!string.IsNullOrEmpty(message))
AuditLogHelper.Log(message, @event.Session); //Session is an IEventSource
return false; //Don't veto the change
}
}
//In my helper
public static void Log(string message, IEventSource session)
{
var user = session.QueryOver<User>()
.Where(x => x.Name == "John")
.SingleOrDefault();
//have confirmed a valid user is found
var logItem = new AdministrationAuditLog
{
LogDate = DateTime.Now,
Message = message,
User = user
};
(session as ISession).SaveOrUpdate(logItem);
}
When it hits the session.SaveOrUpdate() in the last method, no errors occur and no exceptions are thrown; it seems to succeed and moves on. But nothing happens: the audit log entry never appears in the database.
The only way I've been able to get this to work is to create a completely new session and transaction inside this method, but this isn't really ideal: the code proceeds back out of the listener method and hits the session.Transaction.Commit() in my main app, and if that transaction fails, I've got an orphaned log message in my audit table for something that never happened.
Any pointers where I might be going wrong ?
EDIT
I've also tried to SaveOrUpdate the LogItem using a child session obtained from the event, based on some comments in this thread: http://ayende.com/blog/3987/nhibernate-ipreupdateeventlistener-ipreinserteventlistener
var childSession = session.GetSession(EntityMode.Poco);
var logItem = new AdministrationAuditLog
{
LogDate = DateTime.Now,
Message = message,
User = databaseLogin.User
};
childSession.SaveOrUpdate(logItem);
Still nothing appears in my Log table in the db. No errors or exceptions.
You need to create a child session, currentSession.GetSession(EntityMode.Poco), in your OnPreUpdate method and use this in your log method. Depending on your flushmode setting, you might need to flush the child session as well.
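For example, a minimal sketch of the Log helper along those lines (same AdministrationAuditLog and user lookup as in the question; the explicit Flush() is the part that matters if your FlushMode doesn't flush for you):
public static void Log(string message, IEventSource session)
{
    // The child session shares the parent session's connection and transaction.
    var childSession = session.GetSession(EntityMode.Poco);

    var user = childSession.QueryOver<User>()
        .Where(x => x.Name == "John")
        .SingleOrDefault();

    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message,
        User = user
    };

    childSession.Save(logItem);
    childSession.Flush(); // push the INSERT now, still inside the parent transaction
}
Because the child session participates in the same transaction, the audit row is rolled back together with the original change if the outer Commit() fails, which avoids the orphaned-log problem described above.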
Also, any particular reason you want to roll your own solution? FYI, NHibernate Envers is now a pretty mature library.
I am trying to perform a straightforward update using LINQ to SQL, and just cannot get it to work.
Here's the data model: there is a timesheet_entry and customer. One customer has many timesheet_entries. It's a simple foreign key relationship.
If I'm editing an existing timesheet_entry, I get an exception if I change the customer_id.
Here's my attempt at the code. Can someone help me out here?
internal void CommitChangesToEntry(Timesheet_Entry entry)
{
Timesheet_Entry latest = (from e
in m_dataContext.Timesheet_Entries
where e.Timesheet_Entry_ID == entry.Timesheet_Entry_ID
select e).First();
latest.Entry_Start_DateTime = entry.Entry_Start_DateTime;
latest.Entry_End_DateTime = entry.Entry_End_DateTime;
latest.Task_Description = entry.Task_Description;
// Comment out this line of code and it
// doesn't throw an exception
latest.Customer_ID = entry.Customer_ID;
m_dataContext.SubmitChanges(); // This throws a NotSupportedException
}
The error is: "An attempt has been made to attach or add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported".
Do you have any reason for not using the Attach method? Like the following:
m_dataContext.Timesheet_Entries.Attach(entry);
m_dataContext.SubmitChanges();