I want to change the execution time of a load plan in Oracle ODI Scheduling. I change the starting date and time manually, but it is still executed with the previous date and time settings. How can I get the load plans to execute at the required date and time? Thanks
EDIT: We have also tried switching the schedule to "Active for the period", which didn't work either. Is this related to the Java version?
Changing the schedule of a Load Plan or a Scenario updates the schedule information in the repository, but does not update that information in the agent.
There is therefore an extra step to perform: in the Topology navigator, right-click the physical agent and choose Update Schedule. That refreshes the agent's in-memory schedule from the one stored in the repository.
I am trying to create an alert for a DBMS Scheduler job if it runs for longer than expected. For example, if a job that usually takes 2 hours to run is now running for more than 2.5 hours, I want to be notified.
What would be the best way to do this? Can I use Oracle Enterprise Manager for this?
I achieved this by setting the max_run_duration attribute on the DBMS Scheduler job.
An event is raised if the job's run time exceeds the value set in that attribute.
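For reference, here is a minimal sketch of setting that attribute from Java via JDBC. The job name NIGHTLY_LOAD, the connection details, and the 2.5-hour threshold are assumptions for illustration; the underlying call is the documented DBMS_SCHEDULER.SET_ATTRIBUTE procedure.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class SetMaxRunDuration {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "scott", "tiger");
             CallableStatement call = conn.prepareCall(
                     "BEGIN "
                   + "  DBMS_SCHEDULER.SET_ATTRIBUTE( "
                   + "    name      => 'NIGHTLY_LOAD', "         // hypothetical job name
                   + "    attribute => 'max_run_duration', "
                   + "    value     => INTERVAL '150' MINUTE); " // 2.5-hour threshold
                   + "END;")) {
            call.execute();
        }
    }
}
```

Note that exceeding max_run_duration raises a JOB_OVER_MAX_DUR scheduler event but does not stop the job, so it acts as a notification hook: you can consume the event from the scheduler event queue or, on recent Oracle versions, have it e-mailed via DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION.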
I've started using JMeter to run daily performance tests, and have also just figured out how to produce an HTML dashboard.
What I need to do now is find a way to run JMeter every day, producing an HTML dashboard of the results, but with comparisons against the results of the last few days. This would mean adding to the data of existing files instead of creating a new HTML dashboard every day.
Can anyone help me with this?
The easiest solution is putting your JMeter test under Jenkins control.
Jenkins provides:
A flexible mechanism for scheduling jobs
A Performance Plugin which automatically analyses current and previous builds and displays a performance trend chart on the JMeter dashboard
Alternatively, you can schedule JMeter runs using e.g. the Windows Task Scheduler and compare the current run with the previous one using the Merge Results plugin.
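Whichever scheduler you use, the daily run itself is just JMeter in non-GUI mode with dashboard generation enabled (-n, -t, -l, -e and -o are standard JMeter CLI options). Below is a minimal Java launcher that keeps one dashboard and one raw results file per day; the paths and test-plan name are assumptions.

```java
import java.io.IOException;
import java.time.LocalDate;

public class DailyJMeterRun {
    public static void main(String[] args) throws IOException, InterruptedException {
        String day = LocalDate.now().toString();  // e.g. 2024-05-01
        Process p = new ProcessBuilder(
                "jmeter",                          // assumes jmeter is on the PATH
                "-n",                              // non-GUI mode
                "-t", "daily-test.jmx",            // test plan (assumed name)
                "-l", "results/" + day + ".jtl",   // raw results, kept for later comparison
                "-e",                              // generate the HTML report after the run
                "-o", "dashboards/" + day)         // dashboard output folder (must not yet exist)
                .inheritIO()
                .start();
        System.exit(p.waitFor());
    }
}
```

Keeping the dated .jtl files is what makes day-to-day comparison possible later, whether through the Performance Plugin's trend charts or by feeding several of them to the Merge Results plugin.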
I'm currently testing HP LoadRunner 12.53 with Oracle EBS R12.2.5.
I created a simple script using both the Oracle Apps and the Oracle NCA + HTTP protocols (log in, bring up a form, and close/log out) and replayed it, but ran into the error below (the same error for both scripts).
nca_connect_server: cannot communicate with host
icx_ticket is correlated and works OK, as it is picked up and substituted into the parameter.
There is no need to correlate JSessionIDForms, as EBS is running in socket mode.
It is just a simple script with a single correlation, but I can't find any clue about the error.
What could be the root cause of the error?
Where should I look for a clue? How can I make the error log more verbose and detailed?
Thanks in advance
Record it twice. If the value shifts, then correlate it.
Please ensure that you have properly set up the environment before recording. The steps below need to be taken to set it up:
1. Set the "record = names" flag for the specific user profile in the Oracle EBS application via an administrator login (search Google for how to achieve this, or simply ask your application team to do it for you).
2. Run-Time Settings changes
Run-Time Settings
Keep the values below at a high limit to avoid replay timeout errors:
Run-Time Settings > Internet Protocol > Preferences > Options:
Step Download Limit
HTTP-request connect timeout:
HTTP-receive receive timeout:
Keep-Alive timeout:
Run-Time Settings > Browser > Browser Emulation:
Simulate a new user on each iteration – checked
3. default.cfg file inside script directory
The "RelativeURL={NCAJServSessionId}" statement in the default.cfg file rolls back each time we run the script, so we need to check that it is:
/forms/lservlet;JsessionIDForms={NCAJServSessionId} -- R12 version, or
/forms/formservlet?JServSessionIdforms={NCAJServSessionId} -- EBS 11i version
4. Correlation - last but not least
Ensure correct correlation of each and every parameter. The best way to achieve this is to record the script twice and compare the two recordings with a suitable diff tool; correlate every value that changes between recordings (see the sketch below).
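To illustrate what correlation actually does, here is a language-neutral sketch in plain Java (this is not LoadRunner code; the host, paths, and regex are assumptions): capture the dynamic token from one response and substitute it into the next request, which is what a recorded script does once a value such as icx_ticket is correlated.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. First request: the login response embeds a per-session token.
        HttpResponse<String> login = client.send(
                HttpRequest.newBuilder(URI.create("http://ebs.example.com/OA_HTML/AppsLogin"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // 2. "Correlate": extract the fresh value instead of replaying the
        //    stale one that was captured at recording time.
        Matcher m = Pattern.compile("icx_ticket=([^&\"]+)").matcher(login.body());
        if (!m.find()) {
            throw new IllegalStateException("token not found - correlation candidate changed");
        }
        String icxTicket = m.group(1);

        // 3. Second request: substitute the extracted token, just as a
        //    {icx_ticket} parameter would be substituted in a script.
        HttpResponse<String> form = client.send(
                HttpRequest.newBuilder(URI.create(
                        "http://ebs.example.com/forms/frmservlet?icx_ticket=" + icxTicket))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("Second request status: " + form.statusCode());
    }
}
```

If a value like this is replayed uncorrelated, the server rejects the session, which in Forms scripts typically surfaces as connection errors such as nca_connect_server.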
Note: Oracle EBS is not fully supported by LoadRunner. Please download the LoadRunner compatibility matrix and check whether your version is supported.
Problem
Users can submit data to generate a report, which triggers a spring-batch job. If the same data is submitted (by the same user or another user), the same report should be served and Spring Batch should not start a new job, on the premise that the report has already been generated.
To make matters a little more complicated, generated reports expire after 90 days. The idea behind this is that the data gleaned from various web services used to build the report is likely out of date. Therefore, after 90 days the report should be re-generated using new data from those web services.
Questions
When a job has already run, how can I discover the job execution id for that job? This id is used in the URL to uniquely identify a report. JobExplorer is severely limited in querying Spring Batch data.
How can I trigger another instance of the job only after 90 days? The issue is that, given duplicate job parameters, a JobInstanceAlreadyCompleteException will be thrown. Must I encode the 90 days as an extra identifying parameter, or is there an easier way?
Clean-up of old jobs, as well as of expired reports, must be done using business methods.
With this premise, you can try a different path to solve your problem:
Every user launches a different job, with the same report properties but an extra job parameter to make each job instance unique
The first step is to check, using a business method, whether there is a running job for that report; if so, notify the user that they have to wait or retry later (use a decider)
The second step is to check, using a business method, whether there is a completed and not-yet-expired report; if so, retrieve it and show it to the user (again with a decider)
Generate the report (deleting the old one, if necessary)
Show the report to the user.
Of course, the generated-report metadata tables are separate from the Spring Batch tables and should be accessed through DAOs related to your domain context (the report, in your case).
Can this be a valid alternative?
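As a minimal sketch of the extra-identifying-parameter idea (the class, method, and parameter names such as dataHash and validityWindow are illustrative assumptions, not a prescribed API): deriving a coarse "window" parameter from the submission date means that identical data resubmitted within the same 90-day window maps to the same JobInstance, so Spring Batch refuses to rerun it and you can look up the existing execution, while a submission in a later window creates a new instance naturally.

```java
import java.time.LocalDate;
import java.util.List;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.explore.JobExplorer;

public class ReportJobService {

    private final JobExplorer jobExplorer;

    public ReportJobService(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    /** Identifying parameters: same data + same 90-day window = same JobInstance. */
    public JobParameters reportParameters(String dataHash) {
        // Coarse bucket: data submitted late in a window expires sooner than
        // 90 days; store an explicit expiry date in your own tables if that matters.
        long window = LocalDate.now().toEpochDay() / 90;
        return new JobParametersBuilder()
                .addString("dataHash", dataHash)   // hash of the submitted report data (assumed)
                .addLong("validityWindow", window)
                .toJobParameters();
    }

    /** Find the execution id of an already-run instance, e.g. to build the report URL. */
    public Long findExistingExecutionId(String jobName, JobParameters params) {
        // Linear scan: JobExplorer offers no query-by-parameters, which is the
        // limitation the question mentions.
        for (JobInstance instance : jobExplorer.getJobInstances(jobName, 0, Integer.MAX_VALUE)) {
            List<JobExecution> executions = jobExplorer.getJobExecutions(instance);
            if (!executions.isEmpty()
                    && executions.get(0).getJobParameters().equals(params)) {
                return executions.get(0).getId();
            }
        }
        return null; // no matching instance: safe to launch a new job
    }
}
```

In practice, the linear scan is exactly why a small DAO over your own report-metadata table, as suggested above, is the cleaner way to map submitted data to an execution id.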
Hi Fellow SSRS Developers,
I have a scenario that I'm trying to tend to but need to know if what I want to do is even possible.
I have 4 reports that I would like to have run and then have the actual report stored in a file on a server. The reason for this is that the response time of the reports is a bit long, and I've already done everything I can in SQL to speed them up.
What I want to happen is that when a user clicks on the report name, instead of rendering the report on their screen, we simply serve the report that is already in the file, so that it loads lightning fast.
Has anyone ever done this with SSRS and is it even possible?
Thanks,
Other than running reports on demand, there are two specific options: running from a cached report and running from a snapshot.
You can see details on all of this in Setting Report Processing Properties.
Caching
From Books Online:
To enhance performance, you can specify a report (and data) to be cached temporarily when a user runs the report. The cached copy is subsequently available to other users who access the same report. With this approach, if ten users open the report, only the first request results in report processing. The report is subsequently cached, and the remaining nine users view the cached report.
So here you can see that it is a specific user action that causes a stored report to be created.
See Report Caching in Reporting Services.
Snapshots
From Books Online:
A report snapshot is a report that contains layout information and data that is retrieved at a specific point in time. You can run a report as a report snapshot to prevent the report from being run at arbitrary times (for example, during a scheduled backup). A report snapshot is usually created and subsequently refreshed on a schedule, allowing you to time exactly when report and data processing will occur. If a report is based on queries that take a long time to run, or on queries that use data from a data source that you prefer no one access during certain hours, you should run the report as a snapshot.
Here you can see that these are generally set up on a regular schedule, i.e. independent of user activity.
See Creating, Modifying, and Deleting Snapshots in Report History.
In this case it seems like snapshots might be your best option, since you have more control over when the stored report is created. The main issue with snapshots is that they need either stored credentials or an unattended execution account, so they might not be possible in all cases.