I have code similar to the snippet below. Every time a DB lock appears, I want Dynatrace to raise an alert that creates a problem, so that I can see it on the dashboard and possibly get an email notification as well. The DB lock would appear if the update count is greater than 1.
private int removeDBLock(DataSource dataSource) {
    int updateCount = 0;
    final Timestamp lastAllowedDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format(RELEASE_DB_CHANGELOCK, lastAllowedDBLockTime.toString());
    try (Connection connection = dataSource.getConnection();
         Statement stmt = connection.createStatement()) {
        updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            log.error("Stale DB Lock found. Locks Removed Count is {} .", updateCount);
        }
    } catch (SQLException e) {
        log.error("Error while trying to find and remove Db Change Lock. ", e);
    }
    return updateCount;
}
I tried using the event API mentioned here to trigger an event on my host, and was successful in raising a problem alert on my dashboard:
https://www.dynatrace.com/support/help/dynatrace-api/environment-api/events/post-event/?request-parameters%3C-%3Ejson-model=json-model
but this would mean injecting an API call into my code just for monitoring, and may lead to more external dependencies and hence more chance of failure.
I also tried creating a custom service detection by adding the class containing this method, and the method itself, to the custom service. But I do not know how I can link this to an alert or an event that creates a problem on the dashboard.
Are there any best practices or solutions for how I can do this in Dynatrace? Any leads would be helpful.
I would take a look at Custom Services for Java, which will cause invocations of the method to be monitored in more detail.
Maybe you can extract a method which actually throws the exception, with the outer method handling it. Then it should be possible to alert on the exception.
There are also some more ways to configure the service via settings, e.g. raising an error based on a return value directly.
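For illustration, a minimal sketch of that split, reusing the names from the question; the StaleDbLockException type and the releaseStaleLocks method are hypothetical, and the idea is to register releaseStaleLocks as the custom service method so the thrown exception can drive an alert:
// Sketch only: StaleDbLockException and releaseStaleLocks are made-up names.
// releaseStaleLocks throws when stale locks were actually removed, so a custom
// service defined on it can surface the exception; removeDBLock keeps the
// original behaviour of logging and returning the count.
class StaleDbLockException extends RuntimeException {
    final int removedCount;
    StaleDbLockException(int removedCount) {
        super("Stale DB lock found. Locks Removed Count is " + removedCount);
        this.removedCount = removedCount;
    }
}

private int removeDBLock(DataSource dataSource) {
    try {
        return releaseStaleLocks(dataSource);
    } catch (StaleDbLockException e) {
        log.error(e.getMessage(), e);
        return e.removedCount;
    } catch (SQLException e) {
        log.error("Error while trying to find and remove Db Change Lock. ", e);
        return 0;
    }
}

private int releaseStaleLocks(DataSource dataSource) throws SQLException {
    final Timestamp lastAllowedDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format(RELEASE_DB_CHANGELOCK, lastAllowedDBLockTime.toString());
    try (Connection connection = dataSource.getConnection();
         Statement stmt = connection.createStatement()) {
        int updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            // Visible to the custom service, which can raise an error/problem.
            throw new StaleDbLockException(updateCount);
        }
        return updateCount;
    }
}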
See also documentation:
https://www.dynatrace.com/support/help/how-to-use-dynatrace/transactions-and-services/configuration/define-custom-services/
https://www.dynatrace.com/support/help/technology-support/application-software/java/configuration-and-analysis/define-custom-java-services/
I have a business application with the following versions:
spring-boot (2.2.0.RELEASE)
spring-kafka (2.3.1-RELEASE)
spring-cloud-stream-binder-kafka (2.2.1-RELEASE)
spring-cloud-stream-binder-kafka-core (3.0.3-RELEASE)
spring-cloud-stream-binder-kafka-streams (3.0.3-RELEASE)
We have around 20 batches. Each batch uses 6-7 topics to handle the business. Each service has its own state store to maintain the status of the batch, i.e. whether it is running or idle.
We use the code below to query the store:
@Autowired
private InteractiveQueryService interactiveQueryService;

public ReadOnlyKeyValueStore<String, String> fetchKeyValueStoreBy(String storeName) {
    while (true) {
        try {
            log.info("Waiting for state store");
            return new ReadOnlyKeyValueStoreWrapper<>(interactiveQueryService.getQueryableStore(storeName,
                    QueryableStoreTypes.<String, String> keyValueStore()));
        } catch (final IllegalStateException e) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
        }
    }
}
When deploying the application on one instance (a Linux machine), everything works fine. When deploying the application on two instances, we make the following observations:
The state store is available on one instance, and the other doesn't have it.
When the request is processed by the instance which has the state store, everything is fine.
If the request lands on the instance which does not have the state store, the application waits indefinitely in the while loop (code snippet above).
While the instance without the store is waiting indefinitely, if we kill the other instance, the code above returns the store and processing works perfectly.
We have no clue what we are missing.
When you have multiple Kafka Streams processors running with interactive queries, the code that you showed above will not work the way you expect. It only returns results if the keys that you are querying are on the same server. In order to fix this, you need to add the property spring.cloud.stream.kafka.streams.binder.configuration.application.server: <server>:<port> on each instance. Make sure to change the server and port to the correct ones on each server. Then you have to write code similar to the following:
org.apache.kafka.streams.state.HostInfo hostInfo = interactiveQueryService.getHostInfo("store-name",
        key, keySerializer);

if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
    // query from the store that is locally available
}
else {
    // query from the remote host
}
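To make the two branches concrete, here is a rough sketch. It assumes each instance exposes a hypothetical REST endpoint (/state/{store}/{key}) on its application.server host and port that simply returns the value from its local store; that endpoint, the findValue wrapper, and the RestTemplate call are illustrative assumptions, not part of the binder:
// Sketch only: /state/{store}/{key} is a hypothetical endpoint each instance
// would have to expose itself; it just performs the local store lookup.
public String findValue(String storeName, String key) {
    HostInfo hostInfo = interactiveQueryService.getHostInfo(storeName, key, new StringSerializer());

    if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
        // The key is hosted by this instance: query the local store directly.
        ReadOnlyKeyValueStore<String, String> store = interactiveQueryService.getQueryableStore(
                storeName, QueryableStoreTypes.<String, String> keyValueStore());
        return store.get(key);
    } else {
        // The key is hosted by the other instance: forward the lookup over HTTP.
        String url = String.format("http://%s:%d/state/%s/%s",
                hostInfo.host(), hostInfo.port(), storeName, key);
        return new RestTemplate().getForObject(url, String.class);
    }
}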
Please see the reference docs for more information.
Here is sample code that demonstrates this.
In a microservice environment I see two main benefits from tracing requests through all microservice instances over an entire business process:
Finding latency gaps between or in service instances
Finding roots of failures, whether technical or regarding the business case
Zipkin is a tool which addresses the first issue. But how can tracing be used to unveil failures in a microservice landscape? I definitely want to trace all error-afflicted spans, but not every request where nothing went wrong.
As mentioned here, a custom Sampler could be used:
Alternatively, you may register your own Sampler bean definition and programmatically make the decision which requests should be sampled. You can make more intelligent choices about which things to trace, for example, by ignoring successful requests, perhaps checking whether some component is in an error state, or really anything else.
So I tried to implement that, but either it doesn't work or I used it wrong.
As the blog post suggested, I registered my own Sampler:
@Bean
Sampler customSampler() {
    return new Sampler() {
        @Override
        public boolean isSampled(Span span) {
            boolean isErrorSpan = false;
            for (String tagKey : span.tags().keySet()) {
                if (tagKey.startsWith("error_")) {
                    isErrorSpan = true;
                }
            }
            return isErrorSpan;
        }
    };
}
And in my controller I create a new Span, which is tagged as an error if an exception is raised:
private final Tracer tracer;

@Autowired
public DemoController(Tracer tracer) {
    this.tracer = tracer;
}

@RequestMapping(value = "/calc/{i}")
public String calc(@PathVariable String i) {
    Span span = null;
    try {
        span = this.tracer.createSpan("my_business_logic");
        return "1 / " + i + " = " + new Float(1.0 / Integer.parseInt(i)).toString();
    } catch (Exception ex) {
        log.error(ex.getMessage(), ex);
        span.logEvent("ERROR: " + ex.getMessage());
        this.tracer.addTag("error_" + ex.hashCode(), ex.getMessage());
        throw ex;
    } finally {
        this.tracer.close(span);
    }
}
Now, this doesn't work. If I request /calc/a, the method Sampler.isSampled(Span) is called before the controller method throws a NumberFormatException. This means that when isSampled() checks the Span, it has no tags yet, and the Sampler method is not called again later in the process. Only if I open up the Sampler and allow every span to be sampled do I see my tagged error span later in Zipkin. In that case Sampler.isSampled(Span) was called only once, but HttpZipkinSpanReporter.report(Span) was executed 3 times.
So what would the use case look like to transmit only traces which have error spans? Is this even a correct way to tag a span with an arbitrary "error_" tag?
The sampling decision is taken for a trace. That means that when the first request comes in and the span is created, you have to take the decision. You don't have any tags / baggage at that point, so you must not depend on the contents of tags to take this decision. That's a wrong approach.
You are taking a very custom approach. If you want to go that way (which is not recommended), you can create a custom implementation of a SpanReporter - https://github.com/spring-cloud/spring-cloud-sleuth/blob/master/spring-cloud-sleuth-core/src/main/java/org/springframework/cloud/sleuth/SpanReporter.java#L30 . SpanReporter is the component that sends spans to Zipkin. You can create an implementation that wraps an existing SpanReporter implementation and delegates execution to it only when some values of tags match. But from my perspective it doesn't sound right.
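Purely to illustrate that wrapping idea (not as a recommendation), a minimal sketch assuming the Sleuth 1.x SpanReporter interface from the link above; the class name is hypothetical, the "error_" tag check is taken from the question's convention, and the delegate would be the existing reporter (e.g. the Zipkin one):
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.SpanReporter;

// Sketch only: forwards a span to the wrapped reporter only when it carries
// at least one "error_" tag; all other spans are silently dropped.
public class ErrorOnlySpanReporter implements SpanReporter {

    private final SpanReporter delegate;

    public ErrorOnlySpanReporter(SpanReporter delegate) {
        this.delegate = delegate;
    }

    @Override
    public void report(Span span) {
        boolean hasErrorTag = span.tags().keySet().stream()
                .anyMatch(key -> key.startsWith("error_"));
        if (hasErrorTag) {
            delegate.report(span);
        }
    }
}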
I have a CRM plugin registered on Create (synchronous, post-operation) of a custom entity that performs some actions, and I want the Create operation to succeed in spite of errors in the plugin. For performance reasons, I also want the plugin to fire immediately when a record is created, so making the plugin asynchronous is undesirable. I've implemented this by doing something like the following:
public class FooPlugin : IPlugin
{
    public FooPlugin(string unsecureInfo, string secureInfo) { }

    public void Execute(IServiceProvider serviceProvider)
    {
        // Boilerplate (kept outside the try so "context" is in scope in the catch below)
        var context = (IPluginExecutionContext) serviceProvider.GetService(typeof (IPluginExecutionContext));

        try
        {
            var serviceFactory = (IOrganizationServiceFactory) serviceProvider.GetService(typeof (IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            // Additional validation omitted
            var targetEntity = (Entity) context.InputParameters["Target"];

            UpdateFrobber(service, (EntityReference) targetEntity["new_frobberid"]);
            CreateFollowUpFlibber(service, targetEntity);
            CloseTheEntity(service, targetEntity);
        }
        catch (Exception ex)
        {
            // Send an email but do not re-throw the exception
            // because we don't want a failure to roll back the transaction.
            try
            {
                SendEmailForException(ex, context);
            }
            catch { }
        }
    }
}
However, when an error occurs (e.g. in UpdateFrobber(...)), the service client receives this exception:
System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]:
There is no active transaction. This error is usually caused by custom plug-ins
that ignore errors from service calls and continue processing.
Server stack trace:
at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ref ProxyRpc rpc)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref MessageData msgData, Int32 type)
at Microsoft.Xrm.Sdk.IOrganizationService.Create(Entity entity)
at Microsoft.Xrm.Sdk.Client.OrganizationServiceProxy.CreateCore(Entity entity)
at Microsoft.Xrm.Sdk.Client.OrganizationServiceProxy.Create(Entity entity)
at Microsoft.Xrm.Client.Services.OrganizationService.<>c__DisplayClassd.<Create>b__c(IOrganizationService s)
at Microsoft.Xrm.Client.Services.OrganizationService.InnerOrganizationService.UsingService(Func`2 action)
at Microsoft.Xrm.Client.Services.OrganizationService.Create(Entity entity)
at MyClientCode() in MyClientCode.cs: line 100
My guess is that this happens because UpdateFrobber(...) uses the IOrganizationService instance derived from the plugin, so any CRM service calls that it makes participate in the same transaction as the plugin, and if those "child" operations fail, it causes the entire transaction to rollback. Is this correct? Is there a "safe" way to ignore an error from a "child" operation in a synchronous plugin? Perhaps a way of instantiating an IOrganizationService instance that doesn't re-use the plugin's context?
In case it's relevant, we're running CRM 2013, on-premises.
You cannot ignore unhandled exceptions from child plugins when your plugin is participating in a database transaction.
However, when your plugin is operating on-premise in partial-trust mode, you can actually create an OrganizationServiceProxy instance of your own and use that to access CRM. Be sure you reference the server your plugin is executing on to avoid "double hop" problems.
If really needed, I would create an ExecuteMultipleRequest with ContinueOnError = true; for your email you could just check the ExecuteMultipleResponse...
But it looks a bit like overkill.
You can catch exceptions if running in async mode. Be sure to verify your mode when catching the exception.
Sample Code:
bool errored = false;

try
{
    ExecuteTransactionResponse response =
        (ExecuteTransactionResponse) service.Execute(exMultReq);
}
catch (Exception ex)
{
    errored = true;
    var innerMessage = ex.InnerException?.Message; // assumes the detail sits on the inner exception
    if (context.Mode == 0) // 0 = sync, 1 = async
        throw new InvalidPluginExecutionException(
            $"Execute Multiple Transaction Failed.\n{ex.Message}\n{innerMessage}", ex);
}

if (errored == true)
{
    // Do more stuff to handle it, such as logging the failure.
}
It is not possible to do so for a synchronous plugin.
A more detailed summary, explaining the execution mode and use case can be found on my blog: https://helpfulbit.com/handling-exceptions-in-plugins/
Cheers.
I am trying to fetch Customer data and parse it into customer objects to display in a TableView. The following code sometimes works and sometimes not. Whenever it crashes, it shows that the Customer data is empty in the foreach loop, even though I run the same code every time. I have no clue what could be wrong in these circumstances. I am quite new to this platform. If I am missing anything or you need extra information, please let me know.
namespace TableViewExample
{
    public partial class MyDataServices : ContentPage
    {
        private ODataClient mODataClient;
        private IEnumerable<IDictionary<string, object>> Customers;

        public MyDataServices ()
        {
            InitializeComponent ();
            InitializeDataService ();
            GetDataFromOdataService ();

            TableView tableView = new TableView { };
            var section = new TableSection ("Customer");
            foreach (var customers in Customers) {
                //System.Diagnostics.Debug.WriteLine ((string)customers ["ContactName"]);
                var name = (string)customers ["ContactName"];
                var cell = new TextCell { Text = name };
                section.Add (cell);
            }
            tableView.Root.Add (section);

            Padding = new Thickness (10, 20, 10, 10);
            Content = new StackLayout () {
                Children = { tableView }
            };
        }

        private void InitializeDataService ()
        {
            try {
                mODataClient = new ODataClient ("myURL is here");
            }
            catch {
                System.Diagnostics.Debug.WriteLine ("ERROR!");
            }
        }

        private void GetDataFromOdataService ()
        {
            try {
                Customers = mODataClient.For ("Customers").FindEntries ();
            }
            catch {
                System.Diagnostics.Debug.WriteLine ("ERROR!");
            }
        }
    }
}
It's hard to help out here; however, here are some things to consider:
It sounds like the data service could be uncontactable / offline, or too busy, or it could even be throwing an exception itself and returning a response that you are not expecting to receive, which then triggers an exception and a crash in your application, since you are always expecting an exact response without catering for abnormal responses / events.
If you are contacting an external service over the internet, it may also simply be that your internet connection is slow / faulty and not returning the information fast enough.
In your code you are assuming that you always get a response from the server, and that this response will always be of the anticipated structure that you are expecting to decode, without factoring in the possibility of abnormal responses returned by the data service. I have not used ODataClient personally, so I am not sure how it behaves in the event of no data received, a timeout, or a bad request.
I am assuming an exception would get thrown, and you do get your debug line executed indicating a failure.
You may also want to adjust this statement so that you write out the exception as well, i.e.:
private void GetDataFromOdataService ()
{
    try
    {
        Customers = mODataClient.For ("Customers").FindEntries ();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine ("ERROR!" + ex.ToString());
    }
}
If there was a bad response, then the line at Customers = ..... would throw the exception, as there may be no Customers returned or some other information packaged in the response from the data service.
The Customers variable would also be null at this point, I am assuming, due to this failing.
So when you get back to your code at foreach (var customers in Customers) { it will then throw a null reference exception, as Customers is in fact null.
As all your current code executes in the constructor without any try/catch block around it, it will also crash your application at this point.
Also, you are doing all of this work in the constructor. Try separating this out. I haven't investigated exactly where the constructor gets called in the iOS page life-cycle; however, if it is in viewDidLoad, then you have something like 10 seconds for everything to complete, otherwise it will exit automatically. I imagine in your case this isn't applicable, however.
Going forward, keep your layout controls in the constructor, and move your data task to maybe the OnAppearing override instead.
Using async would definitely be advisable as well, but remember you need to inspect the response from your data service, as the error could be embedded within the response, and you will need to detect when it is OK to process the data.
I'm using LLBLGen and I have some code like so:
if (onlyRecentMessages)
{
    messageBucket.PredicateExpression.Add(MessageFields.DateEffective >= DateTime.Today.AddDays(-30));
}

var messageEntities = new EntityCollection<MessageEntity>();
using (var myAdapter = PersistenceLayer.GetDataAccessAdapter())
{
    myAdapter.FetchEntityCollection(messageEntities, messageBucket);
}
I'm currently getting a SqlException on the FetchEntityCollection line. The error is:
System.Data.SqlClient.SqlException: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.
but that's a side note. What I actually want to be able to do is include the generated SQL in a custom exception in my code. So for instance something like this:
using (var myAdapter = PersistenceLayer.GetDataAccessAdapter())
{
    try
    {
        myAdapter.FetchEntityCollection(messageEntities, messageBucket);
    }
    catch (SqlException ex)
    {
        throw new CustomSqlException(ex, myAdapter.GeneratedSqlFromLastOperation);
    }
}
Of course, there is no such property as GeneratedSqlFromLastOperation. I'm aware that I can configure logging, but I would prefer to have the information directly in my stack trace / exception so that my existing exception logging infrastructure can provide me with more information when these kinds of errors occur.
Thanks!
Steve
You should get an ORMQueryExecutionException, which contains the full query in the description. The query's execute method wraps all exceptions in an ORMQueryExecutionException and stores the query in the description.
PS: please, if possible, ask LLBLGen Pro related questions on our forums, as we don't monitor Stack Overflow frequently. Thanks. :)