Laravel 4 beanstalkd exception catching issue in job processing

I am using beanstalkd for the job queue that processes email validations.
The job processor implementation looks like this:
public function fire($job, $data)
{
    // processing
    try {
        // some processing, including SMTP simulation checks
    } catch (Exception $e) {
        // save some status
    }
    // further processing
    $job->delete();
}
As in the example above, at some point an exception is thrown, which is intended to happen as part of the process. The exception is caught properly and some actions run in the catch block. The problem is that, after the exception is caught, the job is released back onto the queue.
Is there any way to continue processing the current attempt of the job even after the exception is caught?

Related

How to call the KafkaConsumer API from a partition assignor's implementation

I have implemented my own partition assignment strategy by extending RangeAssignor in my Spring Boot application.
I have overridden its subscriptionUserData() method to add some user data. Whenever this data changes, I want to trigger a partition rebalance by invoking the KafkaConsumer API enforceRebalance().
I am not sure how I can get hold of the KafkaConsumer object to invoke this API.
Please suggest.
You can call the consumer.wakeup() method:
consumer.wakeup() is the only consumer method that is safe to call from a different thread. Calling wakeup will cause poll() to exit with WakeupException, or, if consumer.wakeup() was called while the thread was not waiting on poll, the exception will be thrown on the next iteration when poll() is called. The WakeupException doesn't need to be handled, but before exiting the thread, you must call consumer.close(). Closing the consumer will commit offsets if needed and will send the group coordinator a message that the consumer is leaving the group. The consumer coordinator will trigger rebalancing immediately.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("Starting exit...");
        consumer.wakeup(); // 1
        try {
            mainThread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});
...
Duration timeout = Duration.ofMillis(100);
try {
    // looping until ctrl-c; the shutdown hook will clean up on exit
    while (true) {
        ConsumerRecords<String, String> records = movingAvg.consumer.poll(timeout);
        System.out.println(System.currentTimeMillis() + " -- waiting for data...");
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s\n",
                record.offset(), record.key(), record.value());
        }
        for (TopicPartition tp : consumer.assignment())
            System.out.println("Committing offset at position: " + consumer.position(tp));
        movingAvg.consumer.commitSync();
    }
} catch (WakeupException e) {
    // ignore for shutdown // 2
} finally {
    consumer.close(); // 3
    System.out.println("Closed consumer and we are done");
}
1. The shutdown hook runs in a separate thread, so the only safe action we can take is to call wakeup to break out of the poll loop.
2. Another thread calling wakeup will cause poll to throw a WakeupException. You'll want to catch the exception to make sure your application doesn't exit unexpectedly, but there is no need to do anything with it.
3. Before exiting the consumer, make sure you close it cleanly.
full example at:
https://github.com/gwenshap/kafka-examples/blob/master/SimpleMovingAvg/src/main/java/com/shapira/examples/newconsumer/simplemovingavg/SimpleMovingAvgNewConsumer.java
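The pattern quoted above, where one thread blocks in poll() while another calls wakeup() to break it out, can be mimicked with plain JDK classes, which makes it easier to experiment with. In this hypothetical sketch (no Kafka dependency; all names are mine), Thread.interrupt() stands in for consumer.wakeup() and InterruptedException for WakeupException:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class WakeupDemo {

    // Analog of the consumer loop: drain records until another thread
    // "wakes" us. Thread.interrupt() plays the role of consumer.wakeup(),
    // and InterruptedException plays the role of WakeupException.
    static List<String> pollUntilWoken(BlockingQueue<String> source) {
        List<String> processed = new ArrayList<>();
        try {
            while (true) {
                // Blocks like consumer.poll(timeout).
                String record = source.poll(100, TimeUnit.MILLISECONDS);
                if (record != null) {
                    processed.add(record);
                }
            }
        } catch (InterruptedException e) {
            // ignore for shutdown, like the WakeupException above
        } finally {
            // close resources here, like consumer.close()
        }
        return processed;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("a");
        queue.add("b");
        AtomicReference<List<String>> result = new AtomicReference<>();
        Thread worker = new Thread(() -> result.set(pollUntilWoken(queue)));
        worker.start();
        Thread.sleep(400);    // let the worker drain the queue
        worker.interrupt();   // the "wakeup" call from the shutdown hook
        worker.join();
        System.out.println(result.get());  // [a, b]
    }
}
```

The key property, as with wakeup(), is that the interruption is the one signal that is safe to deliver from another thread while the loop is blocked.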

2 Minute Plugin Time Out Unhandled Exception

I'm trying to simulate a 2-minute timeout in my plugin by using Thread.Sleep(120000), but when that exception arises my try/catch can't seem to catch it; even the finally block is skipped.
How would I properly catch it so that I can create a case record on this error?
I've tried different catch clauses to no avail. I've even tried profiling the plugin, but it won't profile due to the error.
protected override void ExecuteCrmPlugin(LocalPluginContext localContext) {
    IPluginExecutionContext context = localContext.PluginExecutionContext;
    IOrganizationService service = localContext.OrganizationService;
    ITracingService tracer = localContext.TracingService;
    try {
        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity) {
            Entity entity = (Entity)context.InputParameters["Target"];
            if (context.MessageName.ToLower() != "create")
                return;
            if (entity.LogicalName.ToLower() == Contact.EntityLogicalName) {
                tracer.Trace("Sleep for 2 min");
                Thread.Sleep(120000);
            }
        }
    }
    catch (System.ServiceModel.FaultException<System.TimeoutException> ex) {
        throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
    }
    catch (System.ServiceModel.FaultException<Microsoft.Xrm.Sdk.OrganizationServiceFault> ex) {
        throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
    }
    catch (TimeoutException e) {
        throw new InvalidPluginExecutionException("A timeout has occurred during the execution of the plugin.", e);
    }
    catch (FaultException ex) {
        throw new InvalidPluginExecutionException("Err occurred.", ex);
    }
    catch (Exception ex) {
        tracer.Trace(ex.ToString());
        throw;
    }
    finally {
        tracer.Trace("Finally");
    }
}
In the Base Class I also have the same catch blocks.
The error is:
Unhandled exception:
Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]
Message: An unexpected error occurred from ISV code. (ErrorType = ClientError) Unexpected exception from plug-in (Execute): MyPlugin.Plugins.PreCreateContact: System.TimeoutException: Couldn’t complete execution of the MyPlugin.Plugins.PreCreateContact plug-in within the 2-minute limit.Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ActivityId>397e0f4c-2e16-43ea-9368-ea76607820a5</ActivityId>
<ErrorCode>-2147220956</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
<HelpLink i:nil="true" />
<Message>An unexpected error occurred from ISV code. (ErrorType = ClientError) Unexpected exception from plug-in (Execute): MyPlugin.Plugins.PreCreateContact: System.TimeoutException: Couldn’t complete execution of the MyPlugin.Plugins.PreCreateContact plug-in within the 2-minute limit.</Message>
<Timestamp>2019-07-17T00:49:48.360749Z</Timestamp>
<ExceptionRetriable>false</ExceptionRetriable>
<ExceptionSource>PluginExecution</ExceptionSource>
<InnerFault i:nil="true" />
<OriginalException>System.TimeoutException: Couldn’t complete execution of the MyPlugin.Plugins.PreCreateContact plug-in within the 2-minute limit.
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.Crm.Sandbox.SandboxAppDomainHelper.Execute(IOrganizationServiceFactory organizationServiceFactory, Dictionary`2 sandboxServices, String pluginTypeName, String pluginConfiguration, String pluginSecureConfig, IPluginExecutionContext requestContext, Boolean enablePluginStackTrace, Boolean chaosFailAppDomain, String crashOnPluginExceptionMessage)
at Microsoft.Crm.Sandbox.SandboxAppDomainHelper.Execute(IOrganizationServiceFactory organizationServiceFactory, Dictionary`2 sandboxServices, String pluginTypeName, String pluginConfiguration, String pluginSecureConfig, IPluginExecutionContext requestContext, Boolean enablePluginStackTrace, Boolean chaosFailAppDomain, String crashOnPluginExceptionMessage)
at Microsoft.Crm.Sandbox.SandboxWorker.<>c__DisplayClass3_0.<Execute>b__0()</OriginalException>
<TraceText>
Entered MyPlugin.Plugins.PreCreateContact.Execute(), Correlation Id: 0c2b0dd3-d27c-46ea-a7e2-90c0729b326e, Initiating User: 61e01dfa-668a-e811-8107-123456
</TraceText>
</OrganizationServiceFault>
I guess once the plugin hits the 2-minute timeout, our ability to process anything is over. And that makes sense - if the system allowed us to run a catch block after the 2-minute timeout that catch block could run for another minute, or five.
One approach I've taken to allow a plugin or custom workflow process something that might take longer than the 2-minute timeout is to use a cancellation token to gracefully shut down before the timeout. Rather than catch the platform's timeout exception, you can throw your own at 115 seconds and cancel gracefully.
I've even gone so far as to preserve the state of the execution as XML stored in an annotation and retrigger the plugin up to the infinite loop limit of 7 times. And beyond that I've created four processes that each recur every hour, staggered by 15 minutes - i.e. 15, 30, 45, and 60 minutes after the main process. (A process can recur hourly without triggering an infinite loop exception). These processes catch any jobs that have preserved their state after 7 runs and retrigger them for another 7. Using this method I have had workflows complete calculations that took hours.
Mind you, this could be considered abuse of the asynchronous processing system. I'll admit it's a stretch, but the use case needed to handle big calculations entirely inside of Dynamics. Of course the standard approach to "escape" the sandbox timeout is to move processing to an external service (e.g. Azure API, Azure Functions, Flow, Logic Apps, etc.)
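The soft-deadline idea above is language-agnostic; a Dynamics plugin would express it in C# (for example with a Stopwatch or a CancellationTokenSource), but the shape is the same everywhere: do the work in resumable units, check the clock before each unit, and throw your own catchable exception well before the platform's hard kill. A minimal, hypothetical Java sketch (all names are mine):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class DeadlineGuard {

    // Thrown by our own code before the platform's hard timeout hits.
    // Unlike the platform timeout, this one is catchable, and it carries
    // the completed state so it can be persisted and resumed later.
    static class SelfImposedTimeout extends RuntimeException {
        final List<String> done;
        SelfImposedTimeout(List<String> done) { this.done = done; }
    }

    // Process items until the work runs out or the soft deadline
    // (e.g. 115 seconds inside a 2-minute sandbox) is reached.
    static List<String> processWithDeadline(Iterator<String> work, long deadlineMillis) {
        long start = System.currentTimeMillis();
        List<String> done = new ArrayList<>();
        while (work.hasNext()) {
            if (System.currentTimeMillis() - start >= deadlineMillis) {
                throw new SelfImposedTimeout(done); // cancel gracefully
            }
            done.add(work.next().toUpperCase()); // stand-in for real work
        }
        return done;
    }

    public static void main(String[] args) {
        // A deadline of 0 forces an immediate graceful timeout.
        try {
            processWithDeadline(List.of("a", "b").iterator(), 0);
        } catch (SelfImposedTimeout t) {
            System.out.println("saved state: " + t.done); // saved state: []
        }
    }
}
```

Because the timeout is self-imposed, the catch block runs with plenty of budget left, which is exactly what the platform's own 2-minute kill does not allow.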

Laravel 5.4 - unable to catch all exceptions

On Laravel 5.4, I'm using a console command of my own. This command is a scheduled task (every minute). This task instantiates a library. In this library, the very first thing I'm doing is:
try {
    $this->doSomething1();
    $this->doSomething2();
    ...
}
catch (Exception $e) {
    ...
}
catch (\Exception $e) {
    ...
}
Most of the time, the exceptions are caught and my code keeps running smoothly.
But sometimes an exception isn't caught. For example, I had one yesterday because I was trying to insert a longtext value into a text column, and MySQL complained about it. The exception was not caught and my command aborted.
What am I missing? How can I catch all exceptions at once?
Thanks a lot,
Axel

How to prevent a Hadoop job from failing on a corrupted input file

I'm running a Hadoop job over many input files.
But if one of the files is corrupted, the whole job fails.
How can I make the job ignore the corrupted file? Ideally it would write some counter/error log for me, but not fail the whole job.
It depends on where your job is failing - if a line is corrupt, and somewhere in your map method an Exception is thrown, then you should just be able to wrap the body of your map method with a try / catch and log the error:
protected void map(LongWritable key, Text value, Context context) {
    try {
        // parse value to an int
        int val = Integer.parseInt(value.toString());
        // do something with key and val..
    } catch (NumberFormatException nfe) {
        // log error and continue
    }
}
But if the error is thrown by your InputFormat's RecordReader then you'll need to amend the mapper's run(..) method - whose default implementation is as follows:
public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}
So you could amend this to try and catch the exception on the context.nextKeyValue() call, but you have to be careful about just ignoring any errors thrown by the reader - an IOException, for example, may not be 'skippable' by just ignoring the error.
If you have written your own InputFormat / RecordReader, and you have a specific exception which denotes record failure but will allow you to skip over and continue parsing, then something like this will probably work:
public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (true) {
        try {
            if (!context.nextKeyValue()) {
                break;
            } else {
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } catch (SkippableRecordException sre) {
            // log error
        }
    }
    cleanup(context);
}
But just to reiterate - your RecordReader must be able to recover on error, otherwise the above code could send you into an infinite loop.
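Stripped of the Hadoop types, the skip-and-continue loop above can be exercised as plain Java. Everything here (FakeReader, SkippableRecordException) is a hypothetical stand-in for your own RecordReader and record-failure exception:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SkipLoopDemo {

    // Stand-in for a record-level failure the reader can recover from.
    static class SkippableRecordException extends RuntimeException {}

    // Stand-in for a RecordReader: throws on records equal to "BAD", but
    // crucially still advances past them, so the caller can keep reading.
    static class FakeReader {
        private final Iterator<String> it;
        private String current;
        FakeReader(List<String> records) { this.it = records.iterator(); }
        boolean nextKeyValue() {
            if (!it.hasNext()) return false;
            current = it.next();
            if ("BAD".equals(current)) throw new SkippableRecordException();
            return true;
        }
        String getCurrentValue() { return current; }
    }

    // The amended run(): catch the skippable exception and carry on.
    static List<String> run(FakeReader reader) {
        List<String> mapped = new ArrayList<>();
        while (true) {
            try {
                if (!reader.nextKeyValue()) break;
                mapped.add(reader.getCurrentValue()); // the map() call
            } catch (SkippableRecordException sre) {
                // log and skip this record
            }
        }
        return mapped;
    }

    public static void main(String[] args) {
        System.out.println(run(new FakeReader(List.of("a", "BAD", "b"))));
        // [a, b]
    }
}
```

Note that FakeReader consumes the bad record before throwing; if it did not advance, the loop would retry the same record forever, which is exactly the infinite-loop caveat for RecordReaders that cannot recover.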
For your specific case - if you just want to ignore a file upon the first failure then you can update the run method to something much simpler:
public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    try {
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);
    } catch (Exception e) {
        // log error
    }
}
Some final words of warning:
You need to make sure that it isn't your mapper code which is causing the exception to be thrown, otherwise you'll be ignoring files for the wrong reason
Files that claim to be GZip compressed but are not will actually fail in the initialization of the record reader - so the above will not catch this type of error (you'll need to write your own record reader implementation). This is true for any file error that is thrown during record reader creation.
This is what Failure Traps are used for in Cascading:
Whenever an operation fails and throws an exception, if there is an associated trap, the offending Tuple is saved to the resource specified by the trap Tap. This allows the job to continue processing without any data loss.
This will essentially let your job continue and let you check your corrupt files later
If you are somewhat familiar with Cascading, in your flow definition statement:
new FlowDef().addTrap( String branchName, Tap trap );
Failure Traps
There is also another possible way. You could use the mapred.max.map.failures.percent configuration option. Of course, this way of solving the problem could also hide some other problems occurring during the map phase.

About Spring Transaction Manager

Currently I am using Spring's declarative transaction manager in my application. During DB operations, if any constraint is violated, I want to check the error code against the database; that is, I want to run one select query after the exception has happened. So I am catching the DataIntegrityViolationException inside my catch block and then trying to execute one more error-code query, but that query is not executed. I am assuming that since I am using the transaction manager, once an exception happens the next query does not get executed. Is that right? I want to execute that error-code query before returning the results to the client. Is there any way to do this?
@Override
@Transactional
public LineOfBusinessResponse create(CreateLineOfBusiness createLineOfBusiness)
        throws GenericUpcException {
    logger.info("Start of createLineOfBusinessEntity()");
    LineOfBusinessEntity lineOfBusinessEntity =
        setLineOfBusinessEntityProperties(createLineOfBusiness);
    try {
        lineOfBusinessDao.create(lineOfBusinessEntity);
        return setUpcLineOfBusinessResponseProperties(lineOfBusinessEntity);
    }
    // Some DB constraint failed
    catch (DataIntegrityViolationException dav) {
        String errorMessage =
            errorCodesBd.findErrorCodeByErrorMessage(dav.getMessage());
        throw new GenericUpcException(errorMessage);
    }
    // General exception handling
    catch (Exception exc) {
        logger.debug("<<<<Coming inside General >>>>");
        System.out.print("<<<<Coming inside General >>>>");
        throw new GenericUpcException(exc.getMessage());
    }
}
public String findErrorCodeByErrorMessage(String errorMessage) throws GenericUpcException {
    try {
        int first = errorMessage.indexOf("[", errorMessage.indexOf("constraint"));
        int last = errorMessage.indexOf("]", first);
        String errorCode = errorMessage.substring(first + 1, last);
        //return errorCodesDao.find(errorCode);
        return errorCode;
    }
    catch (Exception e) {
        throw new GenericUpcException(e.getMessage());
    }
}
Please help me.
I don't think the problem you're describing has anything to do with transaction management. If a DataIntegrityViolationException happens within your try block, the code within catch() should execute. Perhaps an exception other than DataIntegrityViolationException is being thrown, or your findErrorCodeByErrorMessage() is itself throwing another exception. In general, transaction logic is applied only once you return from your method call; until then you can do whatever you like using normal Java language constructs. I suggest you put a breakpoint in your error handler, or add some debug statements, to see what's actually happening.
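One concrete failure mode worth ruling out in findErrorCodeByErrorMessage(): if the message lacks the bracketed constraint fragment, the indexOf() calls return -1 and substring(first + 1, last) throws a StringIndexOutOfBoundsException, which the method then wraps in GenericUpcException. A defensive version of that parsing (plain string handling; the class name here is hypothetical) could look like:

```java
public class ErrorCodeParser {

    // Extract the code between "constraint [" and "]"; return null
    // instead of throwing when the fragment is absent.
    static String findErrorCode(String message) {
        if (message == null) return null;
        int constraint = message.indexOf("constraint");
        if (constraint < 0) return null;
        int first = message.indexOf('[', constraint);
        if (first < 0) return null;
        int last = message.indexOf(']', first);
        if (last < 0) return null;
        return message.substring(first + 1, last);
    }

    public static void main(String[] args) {
        System.out.println(findErrorCode(
            "could not execute statement; constraint [UK_NAME]")); // UK_NAME
        System.out.println(findErrorCode("some unrelated failure")); // null
    }
}
```

Returning null (or an Optional) for unparseable messages keeps the catch block from masking the original DataIntegrityViolationException with a parsing error.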
