Ruby time in milliseconds - ruby

I have a Ruby method:
def generate_CurrentDateTime()
  puts "Generating current date and time"
  dateTimeObj = Time.now()
  dateTimeObj.year.to_s + dateTimeObj.month.to_s + dateTimeObj.day.to_s +
    dateTimeObj.hour.to_s + dateTimeObj.min.to_s + dateTimeObj.sec.to_s
end
I would like to add milliseconds to this. I tried millis.to_s and ms.to_s, but both are incorrect. Please help.
I already found an alternative solution:
def generate_CurrentDateTime()
  puts "Generating current date and time"
  Time.now.strftime('%Y%m%d%H%M%S%L')
end
But I want to know whether any direct method is available.

The default shutdown strategy is to gracefully complete all in-flight tasks, which means that for batch consumers (like the file consumer) it will finish processing the remaining steps in the route until the end, but it will not process more items in the batch, because that could take forever. Only the message currently in process is run to completion.
You can override this behavior, if you know the batch will eventually finish, with the ShutdownRunningTask.CompleteAllTasks option:
public void configure() throws Exception {
    from(url).routeId("foo").noAutoStartup()
        // let it complete all tasks during shutdown
        .shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
        .process(new MyProcessor())
        .to("mock:bar");
}
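If the remaining batch work can take a while, you may also want to raise the graceful shutdown timeout so the shutdown strategy does not force-stop the route mid-task. A minimal sketch, assuming a CamelContext is available; the class name and the 300-second value are just for illustration:

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class ShutdownTimeoutExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // give in-flight and remaining batch tasks up to 300 seconds to finish
        // before the shutdown strategy forces the routes to stop
        context.getShutdownStrategy().setTimeout(300);
        // ... add routes and start the context as usual ...
    }
}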

Related

JMeter - Pause (and resume) execution on demand

I'm executing a JMeter test plan for a few hours against a server.
I want to be able to pause execution for a few seconds/minutes and resume when the server has finished restarting.
Is there a way to signal JMeter to pause and resume its execution?
I saw a similar question, but it doesn't fit my issue.
As of the current JMeter version (5.3) there is no way to accomplish this with built-in JMeter components.
The easiest solution I can think of is this: given that you're restarting your server, it will be unavailable for some time, and when it becomes available again it should respond with an HTML page containing some known text.
So you can "wait" for the server to be up and running as follows:
Add a JSR223 Sampler at the appropriate place in the Test Plan where you need to "wait" for the server to be up and running
Put the following code into the "Script" area:
import org.apache.http.client.config.RequestConfig
import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.HttpClientBuilder
import org.apache.http.util.EntityUtils

// don't record this "wait" sampler's own result in the test results
SampleResult.setIgnore()

def retry = true
def requestConfig = RequestConfig.custom().setConnectTimeout(1000).setSocketTimeout(1000).build()
def httpClient = HttpClientBuilder.create().setDefaultRequestConfig(requestConfig).build()

while (retry) {
    def httpGet = new HttpGet('http://jmeter.apache.org')
    try {
        def entity = httpClient.execute(httpGet).getEntity()
        if (EntityUtils.toString(entity).contains('Apache JMeter')) {
            log.info('Application is up, proceeding')
            retry = false
        } else {
            log.info('Application is still down, waiting for 5 seconds before retry')
            sleep(5000)
        }
    }
    catch (Throwable ex) {
        // server not reachable yet, wait and retry
        sleep(5000)
        ex.printStackTrace()
    }
}
That's it. The code will try to open the web page and look for some text in it; if the page doesn't open and/or the text is not present, it will wait for 5 seconds and retry.
More information:
HttpClient Quick Start
Apache Groovy - Why and How You Should Use It

dynamics crm 365 plugin delay between execution pipeline stages

Our plugin is running slowly on the "Retrieve" message, so I placed a few timestamps in the code to determine where the bottleneck is. I realized there is a 7-second delay which happens intermittently between the end of the pre-operation stage and the start of the post-operation stage.
END PRE - 3/22/2018 11:57:55 AM
POST STAGE START - 3/22/2018 11:58:02 AM
protected virtual void RetrievePreOperation()
{
    var message = $"END PRE - {DateTime.Now}";
    PluginExecutionContext.SharedVariables.Add("message", message);
}

protected virtual void RetrievePostOperation()
{
    // Stop recursive calls
    if (PluginExecutionContext.Depth > 1) return;

    if (PluginExecutionContext.MessageName.ToLower() != Retrieve ||
        !PluginExecutionContext.InputParameters.Contains("Target") ||
        PluginExecutionContext.Stage != (int)PipelineStages.PostOperation)
        return;

    var entity = (Entity)PluginExecutionContext.OutputParameters["BusinessEntity"];
    string message = PluginExecutionContext.SharedVariables["message"].ToString();
    message += $"POST STAGE START - {DateTime.Now}";
}
Any ideas on how to minimize this delay would be appreciated. Thanks
If your plugin step is registered in Asynchronous execution mode, the delay depends entirely on the Async service load and the pipeline of waiting calls/jobs. You can switch it to Synchronous.
If it is registered in Synchronous mode but the delay is still there intermittently, it can depend on many things, such as which entity is involved, the query, and any complex logic in the plugin.

Rate Exceeding in workflow_execution polling

I am currently trying to modify a plugin for posting metrics to New Relic via AWS. I have successfully managed to make the plugin post metrics from SWF to New Relic (not originally in the plugin), but I have encountered a problem when the program runs for too long.
When the program runs for about 10 minutes I get the following error:
Error occurred in poll cycle: Rate exceeded
I believe this is coming from my polling SWF for the workflow executions:
domain.workflow_executions.each do |execution|
  starttime = execution.started_at
  endtime = execution.closed_at
  isOpen = execution.open?
  status = execution.status
  if endtime != nil
    running_workflow_runtime_total += (endtime - starttime)
    number_of_completed_executions += 1
  end
  if status.to_s == "open"
    openCount = openCount + 1
  elsif status.to_s == "completed"
    completedCount = completedCount + 1
  elsif status.to_s == "failed"
    failedCount = failedCount + 1
  elsif status.to_s == "timed_out"
    timed_outCount = timed_outCount + 1
  end
end
This is called in a polling cycle every 60 seconds.
Is there a way to set the polling rate? Or another way to get the workflow executions?
Thanks; here's a link to the Ruby SDK for SWF => link
The issue is likely that you are creating a large number of workflow executions, and each iteration through the loop over workflow_executions causes a lookup, which eventually exceeds your rate limit.
This could also be getting a bit expensive, so be careful.
It's not clear what you're really trying to do, so I can't tell you how to fix it unless you post all your code (or the parts around calls to SWF).
You can see here:
https://github.com/aws/aws-sdk-ruby/blob/05d15cd1b6037e98f2db45f8c2597014ee376a59/lib/aws/simple_workflow/workflow_execution_collection.rb
that a call is made to SWF for each workflow in the collection.

Using Spring @Scheduled and @Async together

Here is my use case.
A legacy system updates a database queue table QUEUE.
I want a scheduled recurring job that
- checks the contents of QUEUE
- if there are rows in the table it locks the row and does some work
- deletes the row in QUEUE
If the previous job is still running, then a new thread will be created to do the work. I want to configure the maximum number of concurrent threads.
I am using Spring 3, and my current solution is the following (using a fixedRate of 1 millisecond to get the threads to run basically continuously):
@Scheduled(fixedRate = 1)
@Async
public void doSchedule() throws InterruptedException {
    log.debug("Start schedule");
    publishWorker.start();
    log.debug("End schedule");
}
<task:executor id="workerExecutor" pool-size="4" />
This created 4 threads straight off, and the threads correctly shared the workload from the queue. However, I seem to be getting a memory leak when the threads take a long time to complete.
java.util.concurrent.ThreadPoolExecutor # 0xe097b8f0 | 80 | 373,410,496 | 89.74%
|- java.util.concurrent.LinkedBlockingQueue # 0xe097b940 | 48 | 373,410,136 | 89.74%
| |- java.util.concurrent.LinkedBlockingQueue$Node # 0xe25c9d68
So:
1: Should I be using @Async and @Scheduled together?
2: If not, then how else can I use Spring to achieve my requirements?
3: How can I create new threads only when the other threads are busy?
Thanks all!
EDIT: I think the queue of jobs was getting infinitely long... Now using
<task:executor id="workerExecutor"
               pool-size="1-4"
               queue-capacity="10"
               rejection-policy="DISCARD" />
Will report back with results
You can try the following:
- Run a scheduler with a one-second delay, which will lock and fetch all QUEUE records that weren't locked so far.
- For each record, call an @Async method, which will process that record and delete it.
- The executor's rejection policy should be ABORT, so that the scheduler can unlock the QUEUE rows that weren't handed out for processing yet. That way the scheduler can try processing those rows again in the next run.
Of course, you'll have to handle the scenario where the scheduler has locked a QUEUE row but the handler didn't finish processing it for whatever reason.
Pseudo code:
public class QueueScheduler {

    @Autowired
    private QueueHandler queueHandler;

    @Scheduled(fixedDelay = 1000)
    public void doSchedule() throws InterruptedException {
        log.debug("Start schedule");
        List<Long> queueIds = lockAndFetchAllUnlockedQueues();
        for (long id : queueIds)
            queueHandler.process(id);
        log.debug("End schedule");
    }
}

public class QueueHandler {

    @Async
    public void process(long queueId) {
        // process the QUEUE record & delete it from the DB
    }
}

<task:executor id="workerExecutor" pool-size="1-4" queue-capacity="10"
               rejection-policy="ABORT"/>
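For readers using Java config rather than XML, here is a hedged sketch of an equivalent executor definition; it assumes Spring's ThreadPoolTaskExecutor (which backs <task:executor>), and the class and bean names are illustrative only:

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableScheduling
@EnableAsync
public class WorkerExecutorConfig {

    // roughly equivalent to:
    // <task:executor id="workerExecutor" pool-size="1-4" queue-capacity="10" rejection-policy="ABORT"/>
    @Bean(name = "workerExecutor")
    public ThreadPoolTaskExecutor workerExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(1);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(10);
        // ABORT rejects the task instead of queueing it indefinitely, so the
        // scheduler can unlock the row and retry it on the next run
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
        return executor;
    }
}

Depending on your Spring version, you may also need to point @Async at this particular executor (for example via a qualifier or an AsyncConfigurer) rather than relying on the default.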
// using a fixedRate of 1 millisecond to get the threads to run basically continuously
@Scheduled(fixedRate = 1)
When you use @Scheduled, a new thread will be created to invoke the method doSchedule at the specified fixedRate of 1 millisecond. When you run your app you can already see 4 threads competing for the QUEUE table and possibly a deadlock.
Investigate whether there is a deadlock by taking a thread dump:
http://helpx.adobe.com/cq/kb/TakeThreadDump.html
The @Async annotation will not be of any use here.
A better way to implement this is to make your class a task by implementing Runnable and passing it to a TaskExecutor configured with the required number of threads.
Using Spring threading and TaskExecutor, how do I know when a thread is finished?
Also check your design; it doesn't seem to be handling the synchronization properly. If a previous job is running and holding a lock on a row, the next job you create will still see that row and will wait to acquire the lock on that particular row.
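A minimal sketch of that Runnable/TaskExecutor suggestion, assuming the workerExecutor from the question is injected; QueuePoller and lockAndFetchUnlockedRows are illustrative names, not existing code:

import java.util.Collections;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class QueuePoller {

    @Autowired
    @Qualifier("workerExecutor")
    private TaskExecutor taskExecutor;

    // poll once per second instead of every millisecond
    @Scheduled(fixedDelay = 1000)
    public void poll() {
        for (final long rowId : lockAndFetchUnlockedRows()) {
            taskExecutor.execute(new Runnable() {
                @Override
                public void run() {
                    // process the locked QUEUE row identified by rowId, then delete it
                }
            });
        }
    }

    private List<Long> lockAndFetchUnlockedRows() {
        // placeholder: select and lock unprocessed rows from QUEUE
        return Collections.emptyList();
    }
}

With a bounded pool behind the TaskExecutor, new threads are only created when existing ones are busy, which matches question 3 above.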

Issue or confusion with JMS/spring/AMQ not processing messages asynchronously

We have a situation where we set up a component to run batch jobs using Spring Batch remotely. We send a JMS message with the job XML path, name, parameters, etc., and we wait on the calling batch client for a response from the server.
The server reads the queue and calls the appropriate method to run the job and return the result, which our messaging framework does by:
this.jmsTemplate.send(queueName, messageCreator);
this.LOGGER.debug("Message sent to '" + queueName + "'");
try {
    final Destination replyTo = messageCreator.getReplyTo();
    final String correlationId = messageCreator.getMessageId();
    this.LOGGER.debug("Waiting for the response '" + correlationId + "' back on '" + replyTo + "' ...");
    final BytesMessage message = (BytesMessage) this.jmsTemplate.receiveSelected(replyTo,
        "JMSCorrelationID='" + correlationId + "'");
    this.LOGGER.debug("Response received");
Ideally, we want to be able to call our runJobSync method twice and have two jobs operate simultaneously. We have a unit test that does something similar, without jobs. I realize this code isn't very great, but here it is:
final List result = Collections.synchronizedList(new ArrayList());
Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(1000);
        result.add(Thread.currentThread().getName());
    }
}, "thread1");
Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(500);
        result.add(Thread.currentThread().getName());
    }
}, "thread2");
thread1.start();
Thread.sleep(250);
thread2.start();
thread1.join();
thread2.join();
Assert.assertEquals("both thread finished", 2, result.size());
Assert.assertEquals("thread2 finished first", "thread2", result.get(0));
Assert.assertEquals("thread1 finished second", "thread1", result.get(1));
When we run that test, thread 2 completes first since it just has a 500 millisecond wait, while thread 1 does a 1 second wait:
Thread.sleep(delayInMs);
return result;
That works great.
When we run two remote jobs in the wild, one which takes about 50 seconds to complete and one which is designed to fail immediately and return, this does not happen.
We start the 50-second job, then immediately start the instant-fail job. The client prints that it sent a message requesting that the job run, and the server prints that it received the 50-second request, but it waits until that 50-second job is completed before handling the second message at all, even though we use a ThreadPoolExecutor.
We are running transactional with Auto acknowledge.
Doing some remote debugging, the Consumer from AbstractPollingMessageListenerContainer shows no unhandled messages (so consumer.receive() obviously just returns null over and over). The web GUI for the AMQ broker shows 2 enqueued, 1 dequeued, 1 dispatched, and 1 in the dispatched queue. This suggests to me that something is preventing AMQ from letting the consumer "have" the second message (prefetch is 1000, by the way).
This shows as the only consumer for the particular queue.
Myself and a few other developers have poked around for the last few days and are pretty much getting nowhere. Any suggestions on either what we have misconfigured (if this is expected behavior) or what might be broken here would be welcome.
Does the method that is being remotely called matter at all? Currently the job handler method uses an executor to run the job in a different thread and does a future.get() (the extra thread is for reasons related to logging).
Any help is greatly appreciated
Not sure I follow completely, but off the top, you should try the following:
- set concurrentConsumers/maxConcurrentConsumers greater than the default (1) on the MessageListenerContainer
- set the prefetch to 0 to better promote balancing of messages between consumers, etc.
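A hedged sketch of both suggestions in plain Java, assuming Spring's DefaultMessageListenerContainer and the ActiveMQ client library; the broker URL, queue name, and class names are illustrative only:

import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class JobListenerConfig {

    public DefaultMessageListenerContainer jobListenerContainer(MessageListener jobHandler) {
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // illustrative broker URL
        // prefetch 0: the broker no longer pushes a batch of messages to a single
        // consumer, so a slow job does not hold on to the next message
        connectionFactory.getPrefetchPolicy().setQueuePrefetch(0);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("batch.job.requests"); // illustrative queue name
        container.setMessageListener(jobHandler);
        // more than one concurrent consumer so a long-running job
        // does not block handling of the next message
        container.setConcurrentConsumers(2);
        container.setMaxConcurrentConsumers(5);
        return container;
    }
}

The idea is that with several concurrent consumers and no prefetch buffering, the broker dispatches each pending message to whichever consumer is free, so the instant-fail job should no longer wait behind the 50-second job.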
