SSRS report to run based on an event

Here is the scenario the user wants...
The ETL usually completes at 5 AM, and a row is inserted into a table upon successful load.
Starting at 5 AM, the SSRS report should poll for that row, say every 30 seconds.
When it sees the row has been inserted, the SSRS report should run, which in turn sends an email to the users specified in the subscription.
Is there any way to achieve this without SQL-RD?
Please give me a direction, as I am new to MSBI.
Regards,
Chakrapani M

It's not an ideal solution, but if you're running Enterprise edition, you could set up a data-driven subscription to check for the inserted row and, if it's present, return the list of email addresses - use this returned value to populate the "To:" field of the data-driven subscription. If no data is present, no email addresses are returned, and the subscription simply "fails".
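A minimal sketch of such a recipient query, assuming hypothetical table names etl_load_log and report_recipients (substitute your own schema). When today's ETL row is absent, the EXISTS test is false, no addresses come back, and nothing is sent:

-- Returns one row per recipient, but only if today's load succeeded.
SELECT r.EmailAddress
FROM dbo.report_recipients AS r
WHERE EXISTS (
      SELECT 1
      FROM dbo.etl_load_log AS l
      WHERE l.LoadDate = CAST(GETDATE() AS date)   -- today's row
        AND l.Status = 'Success'                   -- inserted on successful load
);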
Alternatively, if you have access to do so, just find the job that the subscription creates on the SSRS SQL host and alter it to add the necessary conditional logic, so the subscription only runs when appropriate.

Use an SSIS package with 3 tasks:
SQL Task - single-row result set - check whether data exists based on the last-run-date scenario, and then update the last run date (see the sketch after this answer)
SQL Task - on success of task 1, run the "Windows File Share" SSRS report (say, exported in PDF or Excel format)
Email Task - on success of task 2, add the file as an attachment and send it
Deploy the package and add it to a SQL Server Agent job scheduled to run every 30 seconds.
Alternatively, put the delay (schedule) inside the package itself: use a For Loop Container (with an optional loop end count) and put the 3 tasks into it, together with an additional delay task that runs on failure of task 1, and exit the loop early on success of task 3. Remember to set the For Loop's MaximumErrorCount to 0, and set FailPackageOnError to True on each task in the loop except the first, so that the intentional error used to trigger the delay (when the ETL has produced no result yet) doesn't fail the package.
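For task 1, a minimal T-SQL sketch, assuming hypothetical etl_load_log and report_last_run tables (the table names and the 'DailySalesReport' key are placeholders):

DECLARE @last_run datetime;

SELECT @last_run = last_run_at
FROM dbo.report_last_run
WHERE report_name = 'DailySalesReport';

IF EXISTS (SELECT 1
           FROM dbo.etl_load_log
           WHERE status = 'Success'
             AND loaded_at > @last_run)
BEGIN
    -- Advance the marker so the next poll doesn't fire again for the same load.
    UPDATE dbo.report_last_run
    SET last_run_at = GETDATE()
    WHERE report_name = 'DailySalesReport';

    SELECT 1 AS run_report;   -- the single-row result set the SQL Task expects
END;
-- When no new load exists, no row is returned, the single-row result set
-- binding fails, and that intentional failure routes to the delay task.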

Related

JMeter interactions with UI and database inserts simultaneously

When we create a JMeter script through BlazeMeter or a third-party script recorder, and the recorded UI actions involve some insert/update/delete operations on records: I just want to know, when we run the same JMeter script with 100 users, do those new records get inserted/updated/deleted in the database as well? If yes, then what happens to the remaining 99 users' data if there are uniqueness validations on the UI?
When you record the user action, it results in hard-coded values, so if you add a foo line in the UI, it gets added to the database.
When you replay the test with 1 user, depending on your application's implementation:
either another foo line will get added to the database
or you will get an error saying this entry is already present
When you run the same test with 100 users the result will be the same, to wit:
either you will have 100 identical new/updated entries
or you will have 100 errors
So I would suggest doing some parameterization of your tests so that each thread (virtual user) operates on its own unique data, like:
have a CSV file with credentials for 100 users, which can be read using the CSV Data Set Config
when you add an entry, you can also consider adding a unique prefix or postfix to it, like:
current virtual user number via __threadNum() function
current iteration via ${__jm__Thread Group__idx} pre-defined variable
current timestamp via __time() function
unique GUID-like structure via __UUID() function
etc.
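For example, a hypothetical entry name combining the functions above, entry_${__threadNum}_${__time(,)}, would yield something like entry_42_1700000000000, unique per virtual user and per run.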

Dynamics CRM workflow - prevent concurrent wait conditions on the same record

We have a wait condition of 5 days on a background workflow. If the workflow is triggered again on the same record before the 5-day period expires, we would like to stop the first wait condition and only consider the new one. Is this possible, and how?
I solved it by adding a "workflow execution counter" field to the entity and modifying the workflow to behave differently depending on the value of this field.

How should I construct an Oracle function, or other code, to measure time for business processes between two records

I need suggestions for designing tables and records in Oracle to handle business processes, status, and report times between statuses.
We have a transaction table that records a serially numbered record ID, a document ID, a date and time, and a status. Statuses reflect where a document is in the approval process, i.e. a task that needs to be done on a document. There are up to 40 statuses, showing both who needs to approve and what task is being done. So there is a document header (parent) record and multiple status (child) records.
The challenge is to analyze where the bottlenecks are, which tasks are taking the longest, etc.
From a business point of view, a task receives a document, and we have the date and time this happens. We do not have a release or finish date and time for the current task. All we have is the next task's start date and time. Note that a document can only have one status at a time.
For reasons I won't go into, we cannot use ETL to create an end date and time for a status, although I think that is the solution.
Part of the challenge is that statuses are not strictly consecutive and have no fixed order. Some statuses can start, stop, and later in the process start again.
What I would like to report is the time, on a weekly or monthly basis, that each status record takes: end date/time minus start date/time. Can anyone suggest a function or another way to accomplish this?
I don't need specific code. I could use an example in pseudocode, or just an outline of how to solve this. Then I could figure out the code.
You can use an AFTER INSERT and AFTER UPDATE trigger on the transaction table to record every change to a LOG_TABLE: transaction ID, last status, new status, who approved, change date/time (maybe using the TIMESTAMP data type if fractional seconds matter), terminal, session ID, username.
For inserts you need to define an "insert status", different from the other 40 statuses. For example, if statuses are numeric, the "insert status" can be -1 (minus one), so the last status is -1 and the new status is the status of the record inserted into the transaction table.
With this LOG_TABLE you can develop a package with functions to calculate the time between status changes, display all changes, display the last change, etc.
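A minimal sketch of the trigger (logging a subset of the columns above) and of a duration query, assuming hypothetical names: a doc_transaction table with trans_id, doc_id, status, and status_ts columns, and a log_table as described. The duration query uses the LEAD analytic function to pair each status row with the next one for the same document:

CREATE OR REPLACE TRIGGER trg_doc_transaction_log
AFTER INSERT OR UPDATE ON doc_transaction
FOR EACH ROW
BEGIN
    INSERT INTO log_table (trans_id, last_status, new_status, changed_by, changed_at)
    VALUES (:NEW.trans_id,
            CASE WHEN INSERTING THEN -1 ELSE :OLD.status END,  -- -1 = "insert status"
            :NEW.status,
            USER,
            SYSTIMESTAMP);                                     -- TIMESTAMP keeps fractional seconds
END;
/

-- Time spent in each status: the next status's start minus this status's start.
SELECT doc_id,
       status,
       status_ts                                                     AS status_start,
       LEAD(status_ts) OVER (PARTITION BY doc_id ORDER BY status_ts) AS status_end,
       LEAD(status_ts) OVER (PARTITION BY doc_id ORDER BY status_ts)
           - status_ts                                               AS time_in_status
FROM doc_transaction;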

Multiple group by for dataset query

I am currently working on report generation using the BIRT tool. Consider the table below:
TaskId   Status        SLAMiss
------------------------------
1        Completed     Yes
2        In Progress   No
3        Completed     No
I need to create a table which shows the counts of Completed and In Progress tasks, along with the counts of SLA-adhering and SLA-missed tasks, like below:
Tasks Completed   Tasks InProgress   SLA Adherence   SLA Miss
-------------------------------------------------------------
2                 1                  2               1
Now I need to create the dataset using a SQL query. For the first two columns I have to group by Status, and for the last two columns I have to group by SLAMiss. So:
1. Is it possible to achieve this using a single dataset?
2. If yes, what would the SQL query for the dataset be?
3. If not, I can create 4 datasets, one for each column, and apply them to the table.
Would that be a good idea?
Thanks in advance.
The easiest way to do this is to use computed columns. You would use some JavaScript like the following in a new column named "CompletedCount", typed as Integer. Then, when you build your report, you sum the values with an "Aggregation" item from the palette.
if (row["Status"] == "Completed" )
{
"1"
}else{
"0"}

Ignore error in SSIS

I am getting a "Violation of UNIQUE KEY constraint 'AK_User'. Cannot insert duplicate key in object 'dbo.tblUsers'" error when trying to copy data from an Excel file to a SQL database using SSIS.
Is there any way of ignoring this error and letting the package continue to the next record without stopping?
What I need is: if it inserts three records and the first record is a duplicate, instead of failing it should continue with the other records and insert them.
There is a system variable called Propagate which can be used to continue or stop the execution of the package.
1. Create an OnError event handler for the task which is failing. Generally it is created for the entire Data Flow Task.
2. Press F4 to get the list of all variables and click on the icon at the top to show system variables. By default the Propagate variable will be True; you need to change it to False, which basically means that SSIS won't propagate the error to other components and will let the execution continue.
Update 1:
To skip the bad rows, there are basically two ways:
1. Use Lookup.
Try to match the primary-key column values in the source and destination, and then route the Lookup No Match Output to your destination. If a value doesn't match the destination, insert the row; otherwise skip it, or redirect it to some table or flat file using the Lookup Match Output. (A set-based sketch of this pattern follows at the end of this answer.)
2. Or you can redirect the error rows to a flat file or a table. Every SSIS Data Flow component has an Error Output; the Derived Column component, for example, exposes this in its error output dialog box.
But this option may not be helpful in your case, as redirecting error rows at the destination doesn't work properly: if an error occurs, it redirects the entire batch without inserting any rows into the destination. I think this happens because the OLE DB destination does a bulk insert, or inserts data using transactions. So try to use Lookup to achieve your functionality.
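For reference, if you can stage the Excel rows into a table first, the set-based equivalent of the Lookup pattern is an insert guarded by NOT EXISTS. A sketch with a hypothetical staging table and key column (the real key is whichever column AK_User covers):

INSERT INTO dbo.tblUsers (UserName, Email)
SELECT s.UserName, s.Email
FROM dbo.stgUsers AS s                -- hypothetical staging table loaded from Excel
WHERE NOT EXISTS (
      SELECT 1
      FROM dbo.tblUsers AS u
      WHERE u.UserName = s.UserName   -- assumed column behind the AK_User unique key
);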
