Expired job executions in Spring Batch

Problem
Users can submit data to generate a report, which triggers a Spring Batch job. If the same data is submitted (by the same user or another user), the same report should be served and Spring Batch should not start a new job, on the premise that the report has already been generated.
To make matters a little more complicated, generated reports expire after 90 days. The idea behind this is that the data gleaned from various web services used to build the report is likely out of date. Therefore, after 90 days the report should be re-generated using new data from those web services.
Questions
When a job has already run, how can I discover the job execution id for that job? This id is used in the URL to uniquely identify a report. JobExplorer is severely limited in querying Spring Batch data.
How can I trigger another instance of the job only after 90 days? The issue is that, given duplicate job parameters, a JobInstanceAlreadyCompleteException will be thrown. Must I encode the 90 days as an extra identifying parameter, or is there an easier way?
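For illustration, encoding the expiry window as an extra identifying parameter could look like the sketch below. The job name, parameter names, and the ReportJobParameters helper are assumptions, not part of the question; the JobRepository lookup also shows one way to recover the execution id of an already-run instance.

    // Hedged sketch: bucket the launch date into a 90-day window and pass it as an
    // extra identifying job parameter. Identical data submitted within the same
    // window reuses the existing JobInstance; once the window rolls over, a new
    // instance (and a fresh report) is allowed. Names here are assumptions.
    import java.time.LocalDate;
    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.core.JobParameters;
    import org.springframework.batch.core.JobParametersBuilder;
    import org.springframework.batch.core.repository.JobRepository;

    public class ReportJobParameters {

        public static JobParameters forReport(String reportDataHash) {
            long window = LocalDate.now().toEpochDay() / 90;  // changes every 90 days
            return new JobParametersBuilder()
                    .addString("reportDataHash", reportDataHash) // identifies the data
                    .addLong("expiryWindow", window)
                    .toJobParameters();
        }

        // Question 1: the execution id of an already-run instance can be looked up
        // through JobRepository rather than JobExplorer.
        public static Long findExistingExecutionId(JobRepository jobRepository,
                                                   String reportDataHash) {
            JobExecution last = jobRepository.getLastJobExecution(
                    "reportJob", forReport(reportDataHash));
            return last != null ? last.getId() : null;
        }
    }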

Cleanup of old jobs, as well as of expired reports, must be done using business methods.
With that premise, you can try a different path to solve your problem (a sketch of the second decider follows at the end of this answer):
1. Every user launches a different job, with the same report properties but an extra job parameter to make every job instance unique.
2. First step: check, using a business method, whether there is a running job for that report; if so, notify the user that they have to wait or retry later (use a decider).
3. Second step: check, using a business method, whether there is a completed, non-expired report; if so, retrieve it and show it to the user (use a decider, as in the previous step).
4. Generate the report (deleting the old one first, if necessary).
5. Show the report to the user.
Of course, the generated report's metadata tables are separate from the Spring Batch tables and should be accessed through DAOs belonging to the domain context (the report, in your case).
Could this be a valid alternative?
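As promised, a minimal sketch of the second decider: skip generation when a completed, non-expired report already exists. ReportDao and findValidReport are hypothetical stand-ins for the business method mentioned above.

    // Hedged sketch of the "completed, non-expired report" check as a Spring
    // Batch decider. The DAO and its method are assumptions.
    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.core.StepExecution;
    import org.springframework.batch.core.job.flow.FlowExecutionStatus;
    import org.springframework.batch.core.job.flow.JobExecutionDecider;

    public class ValidReportDecider implements JobExecutionDecider {

        interface ReportDao {                       // hypothetical domain DAO
            Object findValidReport(String reportKey, int maxAgeDays);
        }

        private final ReportDao reportDao;

        public ValidReportDecider(ReportDao reportDao) {
            this.reportDao = reportDao;
        }

        @Override
        public FlowExecutionStatus decide(JobExecution jobExecution,
                                          StepExecution stepExecution) {
            String reportKey = jobExecution.getJobParameters().getString("reportKey");
            if (reportDao.findValidReport(reportKey, 90) != null) {
                return new FlowExecutionStatus("REPORT_EXISTS"); // fetch and show it
            }
            return new FlowExecutionStatus("GENERATE");          // build a new one
        }
    }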

Related

Can I create a report that will list all previously run reports?

I'm using Dynamics CRM 2015 and want to create a report that will show all reports run in the last 12 months.
I've been using the Report Wizard and can't seem to find the entity that is created when a report is run. I can find when a report was created but not every time it has been run.
Example of expected results:
Report X
  4/3/2019 Admin 1
  4/2/2019 Admin 3
Report Y
  4/3/2019 Admin 2
  4/2/2019 Admin 1
I'm not worried about the formats, I will most likely tinker with it after. I simply want to find a way to display every instance any report has been run.
Since you're on CRM 2015, it would follow that your system is on-prem.
That means you're not able to use the relatively new Activity Logging a.k.a. Read Auditing that's available in D365 Online, which seems to have what you're looking for.
The out-of-box audit in CRM 2015 would give you some kind of "user access" auditing (i.e. when people login), but not show you specific report runs. It's really designed to capture changes to the data for audited entities.
As far as I know there is no entity record created when a user runs a report. Provided you were willing to hook into and/or replace all the report triggers throughout the system (i.e. in all ribbons), you could hypothetically build something to track report runs. But it seems like that would be cost prohibitive.
According to this article, you should be able to pull this info out of the ReportServer DB. I'd quote the relevant parts here, but it seems very involved (creating temp tables, etc.).
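For reference, report runs are logged in the ReportServer database's ExecutionLog3 view (SSRS 2008 R2 and later). A minimal sketch of reading it over JDBC follows; the connection string, security mode, and 12-month filter are assumptions, and the article referenced above describes a more involved approach.

    // Hedged sketch: list report runs from the last 12 months, grouped by report.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReportRunHistory {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://sqlhost;databaseName=ReportServer;"
                       + "integratedSecurity=true";
            String sql = "SELECT ItemPath, UserName, TimeStart FROM ExecutionLog3 "
                       + "WHERE TimeStart >= DATEADD(month, -12, GETDATE()) "
                       + "ORDER BY ItemPath, TimeStart DESC";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s  %s  %s%n",
                            rs.getString("ItemPath"),
                            rs.getString("UserName"),
                            rs.getTimestamp("TimeStart"));
                }
            }
        }
    }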

Step Failure not reported by Composed Task Runner or reflected in Spring Cloud Dataflow Tables

Currently we are using Spring Cloud Dataflow to run a sequence of apps we have created, based on a definition. Each of the apps we have made is a Spring Batch job with individual steps. The issue we are having is that when one of the steps inside an app's batch job fails, the failure is reflected as expected in the step_execution, job_execution, and task_execution tables in the SCDF database. However, we are not able to rerun a failed job from the top SCDF level, because the row in the step_execution table for SCDF's step representing the overall app never propagates to FAILED in the status column; it is always COMPLETED, no matter what happens. Below I have included a picture which gets across what I am saying: test-simple8-test-app is the app we have created, while check-step, sleep-step, and should-error-step are steps inside the job for that app. You can see that should-error-step has FAILED for both ExitCode and Status, while the entry for the app itself has COMPLETED for Status and FAILED for ExitCode.
[Image: relevant table]
We have tried altering what we report in the task_execution table, since we saw CTR is looking for certain fields there, but it still seems it does not affect the Status column in step_execution. If we manually change the entry in the DB to FAILED for that value, it proceeds as we would expect and as is normal for Spring Batch, in that it resumes the job from that app and re-executes it.
Is there a good way to resolve this problem, or is it a problem with the way we are approaching it?
Edit: Added Flow Diagram for better clarity
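A hedged sketch of one possible workaround, assuming the app's job can register a standard Spring Batch JobExecutionListener: promote any step failure to the job-level BatchStatus, so the status column matches the FAILED exit code. Whether this changes CTR's behaviour is an assumption, not something confirmed above.

    // Hedged sketch: propagate a failed step's status to the job-level
    // BatchStatus, so Status agrees with the FAILED ExitCode.
    import org.springframework.batch.core.BatchStatus;
    import org.springframework.batch.core.ExitStatus;
    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.core.JobExecutionListener;
    import org.springframework.batch.core.StepExecution;

    public class PropagateStepFailureListener implements JobExecutionListener {

        @Override
        public void beforeJob(JobExecution jobExecution) { /* nothing to do */ }

        @Override
        public void afterJob(JobExecution jobExecution) {
            for (StepExecution step : jobExecution.getStepExecutions()) {
                if (step.getStatus() == BatchStatus.FAILED) {
                    jobExecution.setStatus(BatchStatus.FAILED);
                    jobExecution.setExitStatus(ExitStatus.FAILED);
                    break;
                }
            }
        }
    }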

How to locate web elements having dynamic IDs in a cluster of servers using JMeter?

I am using JMeter to test the performance of the following server infrastructure. The code base uses the ICEfaces framework and hence generates dynamic IDs each time there is a new build.
I record the scripts and run them for different variants of load (10 users, 20 users, 30 users, and so on). Whenever a new code base is deployed, the IDs change, so I have to re-record the scripts before I perform test runs again.
As of now I am able to satisfactorily get my job done.
I wish to take this to a whole new level by trying to test performance on the following server infrastructure.
My issues are the following:
Because there are two different nodes (Node1 and Node2), each node has a unique set of dynamic IDs associated with it. When I record a script during a particular login session, I cannot be sure which node my session is pinned to; as a result, the recorded script is tailor-made for a single node, not for the cluster.
When the load balancer gets into action, I cannot be sure which node JMeter hits during a performance run, and for obvious reasons the run fails to generate results.
I want a clever way to record a script which can successfully run on a multiple-server configuration.
How can I performance-test this configuration?
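For illustration, the usual remedy is correlation: extract the generated ids from each response at run time (e.g. with JMeter's Regular Expression Extractor) instead of replaying recorded values, so the script works whichever node serves the page. The sketch below shows the equivalent extraction in plain Java; the sample markup and pattern are assumptions about ICEfaces output.

    // Hedged sketch of what a Regular Expression Extractor does: match on a
    // stable suffix of the generated id rather than the full recorded id.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class DynamicIdExtractor {
        public static void main(String[] args) {
            String responseBody =
                    "<input id=\"j_idt42:loginForm:username\" type=\"text\"/>";
            Pattern p = Pattern.compile("id=\"([^\"]*loginForm:username)\"");
            Matcher m = p.matcher(responseBody);
            if (m.find()) {
                // In JMeter this value would be stored in a variable such as
                // ${usernameId} and referenced by later samplers.
                System.out.println("Extracted id: " + m.group(1));
            }
        }
    }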

Storing SSRS reports in a file that can be called immediately

Hi Fellow SSRS Developers,
I have a scenario that I'm trying to tend to but need to know if what I want to do is even possible.
I have 4 reports that I would like to have run and then store the actual rendered report in a file on a server. The reason for this is that the response time on the reports is a bit long, and I've done everything in SQL to speed it up.
What I want to have happen is that when a user clicks on the report name, instead of rendering the report on their screen, I simply want to serve the report that is already in a file so that it loads lightning-quick.
Has anyone ever done this with SSRS and is it even possible?
Thanks,
Other than running reports on demand, there are two specific options: running from a cached report and running from a snapshot.
You can see details on all of this in Setting Report Processing Properties.
Caching
From Books Online:
To enhance performance, you can specify a report (and data) to be cached temporarily when a user runs the report. The cached copy is subsequently available to other users who access the same report. With this approach, if ten users open the report, only the first request results in report processing. The report is subsequently cached, and the remaining nine users view the cached report.
So here you can see that it is a specific user action that causes a stored report to be created.
See Report Caching in Reporting Services.
Snapshots
From Books Online:
A report snapshot is a report that contains layout information and data that is retrieved at a specific point in time. You can run a report as a report snapshot to prevent the report from being run at arbitrary times (for example, during a scheduled backup). A report snapshot is usually created and subsequently refreshed on a schedule, allowing you to time exactly when report and data processing will occur. If a report is based on queries that take a long time to run, or on queries that use data from a data source that you prefer no one access during certain hours, you should run the report as a snapshot.
Here you can see that these are generally set up on a regular schedule, i.e. independent of user activity.
See Creating, Modifying, and Deleting Snapshots in Report History.
In this case it seems like snapshots might be your best option, since they give you more control over when the stored report is created. The main issue with snapshots is that they need either stored credentials or an unattended execution account, so they might not be possible in all cases.
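If you specifically need a literal file on disk, rather than the server-side cache or snapshot store, a scheduled job could pre-render the report through SSRS URL access and save the output. A minimal sketch follows, where the server URL, report path, format, and output location are all assumptions; a standard subscription with file share delivery achieves much the same without code.

    // Hedged sketch: render a report to PDF via SSRS URL access and store it on
    // disk, so users fetch the file instead of re-running the report.
    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class PreRenderReport {
        public static void main(String[] args) throws Exception {
            String url = "http://reportserver/ReportServer?/Sales/Report1"
                       + "&rs:Command=Render&rs:Format=PDF";
            try (InputStream in = new URL(url).openStream()) {
                Files.copy(in, Paths.get("/reports/Report1.pdf"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }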

JMeter with ZK framework

I need to do a load test on an application which is built on the ZK framework.
I recorded a script which performs the actions below:
a. User Login
b. Select Role
c. Open and Create Record
d. Log out.
When I run the script with multiple users, say 10 users, the script creates 10 records in the application.
But after some random duration, say 4-5 hours later, the same script does not create any records, even though all requests are shown as passed. The script also records COMET requests (Ajax Push).
I am not able to figure out the reason.
Read these, which explain how ZK ids work:
http://books.zkoss.org/index.php?title=Small_Talks/2012/January/Execute_a_Loading_or_Performance_Test_on_ZK_using_JMeter
http://blog.zkoss.org/index.php/2013/08/06/zk-jmeter-plugin/
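For illustration, ZK ties every AU (Ajax/COMET) request to a desktop id generated per page load. If a recorded script replays a stale desktop id, the server can answer 200 OK while discarding the update, which matches the symptom above of passing requests that create no records. A sketch of the kind of correlation involved follows; the bootstrap snippet and pattern are assumptions about ZK's markup, and the linked articles cover the real details.

    // Hedged sketch: extract ZK's per-page desktop id from the response so later
    // AU requests reuse the live value instead of the recorded one.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ZkDesktopIdExtractor {
        public static String extractDesktopId(String html) {
            Matcher m = Pattern.compile("dt:'([^']+)'").matcher(html);
            return m.find() ? m.group(1) : null;  // e.g. stored as ${dtid} in JMeter
        }

        public static void main(String[] args) {
            String page = "zkmx(... {dt:'z_abc123',cu:'/zkau'} ...)";
            System.out.println(extractDesktopId(page));  // prints z_abc123
        }
    }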
